
Undergraduate Texts in Mathematics

Jerry Shurman

Calculus and Analysis in Euclidean Space

Undergraduate Texts in Mathematics

Series Editors:

Sheldon Axler
San Francisco State University, San Francisco, CA, USA

Kenneth Ribet
University of California, Berkeley, CA, USA

Advisory Board:

Colin Adams, Williams College


David A. Cox, Amherst College
Pamela Gorkin, Bucknell University
Roger E. Howe, Yale University
Michael Orrison, Harvey Mudd College
Lisette G. de Pillis, Harvey Mudd College
Jill Pipher, Brown University
Fadil Santosa, University of Minnesota

Undergraduate Texts in Mathematics are generally aimed at third- and fourth-year
undergraduate mathematics students at North American universities. These
texts strive to provide students and teachers with new perspectives and novel ap-
proaches. The books include motivation that guides the reader to an appreciation
of interrelations among different aspects of the subject. They feature examples
that illustrate key concepts as well as exercises that strengthen understanding.

More information about this series at http://www.springer.com/series/666


Jerry Shurman

Calculus and Analysis in Euclidean Space

Jerry Shurman
Department of Mathematics
Reed College
Portland, OR
USA

ISSN 0172-6056 ISSN 2197-5604 (electronic)


Undergraduate Texts in Mathematics
ISBN 978-3-319-49312-1 ISBN 978-3-319-49314-5 (eBook)
DOI 10.1007/978-3-319-49314-5
Library of Congress Control Number: 2016958974

© Springer International Publishing AG 2016, corrected publication 2019


This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar
methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt from
the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained herein or
for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer imprint is published by Springer Nature


The registered company is Springer International Publishing AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
Contents

Preface

1 Results from One-Variable Calculus
   1.1 The Real Number System
   1.2 Foundational and Basic Theorems
   1.3 Taylor's Theorem

Part I Multivariable Differential Calculus

2 Euclidean Space
   2.1 Algebra: Vectors
   2.2 Geometry: Length and Angle
   2.3 Analysis: Continuous Mappings
   2.4 Topology: Compact Sets and Continuity

3 Linear Mappings and Their Matrices
   3.1 Linear Mappings
   3.2 Operations on Matrices
   3.3 The Inverse of a Linear Mapping
   3.4 Inhomogeneous Linear Equations
   3.5 The Determinant: Characterizing Properties and Their Consequences
   3.6 The Determinant: Uniqueness and Existence
   3.7 An Explicit Formula for the Inverse
   3.8 Geometry of the Determinant: Volume
   3.9 Geometry of the Determinant: Orientation
   3.10 The Cross Product, Lines, and Planes in R3

4 The Derivative
   4.1 Trying to Extend the Symbol-Pattern: Immediate, Irreparable Catastrophe
   4.2 New Environment: The Bachmann–Landau Notation
   4.3 One-Variable Revisionism: The Derivative Redefined
   4.4 Basic Results and the Chain Rule
   4.5 Calculating the Derivative
   4.6 Higher-Order Derivatives
   4.7 Extreme Values
   4.8 Directional Derivatives and the Gradient

5 Inverse and Implicit Functions
   5.1 Preliminaries
   5.2 The Inverse Function Theorem
   5.3 The Implicit Function Theorem
   5.4 Lagrange Multipliers: Geometric Motivation and Specific Examples
   5.5 Lagrange Multipliers: Analytic Proof and General Examples

Part II Multivariable Integral Calculus

6 Integration
   6.1 Machinery: Boxes, Partitions, and Sums
   6.2 Definition of the Integral
   6.3 Continuity and Integrability
   6.4 Integration of Functions of One Variable
   6.5 Integration over Nonboxes
   6.6 Fubini's Theorem
   6.7 Change of Variable
   6.8 Topological Preliminaries for the Change of Variable Theorem
   6.9 Proof of the Change of Variable Theorem

7 Approximation by Smooth Functions
   7.1 Spaces of Functions
   7.2 Pulse Functions
   7.3 Convolution
   7.4 Test Approximate Identity and Convolution
   7.5 Known-Integrable Functions

8 Parametrized Curves
   8.1 Euclidean Constructions and Two Curves
   8.2 Parametrized Curves
   8.3 Parametrization by Arc Length
   8.4 Plane Curves: Curvature
   8.5 Space Curves: Curvature and Torsion
   8.6 General Frenet Frames and Curvatures

9 Integration of Differential Forms
   9.1 Integration of Functions over Surfaces
   9.2 Flow and Flux Integrals
   9.3 Differential Forms Syntactically and Operationally
   9.4 Examples: 1-Forms
   9.5 Examples: 2-Forms on R3
   9.6 Algebra of Forms: Basic Properties
   9.7 Algebra of Forms: Multiplication
   9.8 Algebra of Forms: Differentiation
   9.9 Algebra of Forms: The Pullback
   9.10 Change of Variable for Differential Forms
   9.11 Closed Forms, Exact Forms, and Homotopy
   9.12 Cubes and Chains
   9.13 Geometry of Chains: The Boundary Operator
   9.14 The General Fundamental Theorem of Integral Calculus
   9.15 Classical Change of Variable Revisited
   9.16 The Classical Theorems
   9.17 Divergence and Curl in Polar Coordinates

Correction to: Calculus and Analysis in Euclidean Space

Index
Preface

This book came into being as lecture notes for a course at Reed College on
multivariable calculus and analysis. The setting is n-dimensional Euclidean
space, with the material on differentiation culminating in the inverse function
theorem and its consequences, and the material on integration culminating
in the general fundamental theorem of integral calculus (often called Stokes’s
theorem) and some of its consequences in turn. The prerequisite is a proof-
based course in one-variable calculus and analysis. Some familiarity with the
complex number system and complex mappings is occasionally assumed as
well, but the reader can get by without it.
The book’s aim is to use multivariable calculus to teach mathematics as
a blend of reasoning, computing, and problem-solving, doing justice to the
structure, the details, and the scope of the ideas. To this end, I have tried to
write in an informal style that communicates intent early in the discussion of
each topic rather than proceeding coyly from opaque definitions. Also, I have
tried occasionally to speak to the pedagogy of mathematics and its effect on
the process of learning the subject. Most importantly, I have tried to spread
the weight of exposition among figures, formulas, and words. The premise is
that the reader is eager to do mathematics resourcefully by marshaling the
skills of
• geometric intuition (the visual cortex being quickly instinctive)
• algebraic manipulation (symbol-patterns being precise and robust)
• and incisive use of natural language (slogans that encapsulate central ideas
enabling a large-scale grasp of the subject).
Thinking in these ways renders mathematics coherent, inevitable, and fluid.
In my own student days I learned this material from books by Apostol,
Buck, Rudin, and Spivak, books that thrilled me. My debt to those sources
pervades these pages, and there are many other fine books on the subject as
well. Indeed, nothing in these notes is claimed as new. Whatever effective-
ness this exposition has acquired over time is due to innumerable ideas from
my students, and from discussion with colleagues, especially Joe Buhler, Paul
Garrett, Ray Mayer, and Tom Wieting. After many years of tuning my presen-
tation of this subject matter to serve the needs in my classroom, I hope that
now this book can serve other teachers and their students too. I welcome sug-
gestions for improving it, especially because some of its parts are more tested
than others. Comments and corrections should be sent to [email protected].
By way of a warmup, Chapter 1 reviews some ideas from one-variable
calculus, and then covers the one-variable Taylor’s theorem in detail.
Chapters 2 and 3 cover what might be called multivariable precalculus, in-
troducing the requisite algebra, geometry, analysis, and topology of Euclidean
space, and the requisite linear algebra, for the calculus to follow. A pedagogical
theme of these chapters is that mathematical objects can be better understood
from their characterizations than from their constructions. Vector geometry
follows from the intrinsic (coordinate-free) algebraic properties of the vector
inner product, with no reference to the inner product formula. The fact that
passing a closed and bounded subset of Euclidean space through a continuous
mapping gives another such set is clear once such sets are characterized in
terms of sequences. The multiplicativity of the determinant and the fact that
the determinant indicates whether a linear mapping is invertible are conse-
quences of the determinant’s characterizing properties. The geometry of the
cross product follows from its intrinsic algebraic characterization. Further-
more, the only possible formula for the (suitably normalized) inner product,
or for the determinant, or for the cross product, is dictated by the relevant
properties. As far as the theory is concerned, the only role of the formula is
to show that an object with the desired properties exists at all. The intent
here is that the student who is introduced to mathematical objects via their
characterizations will see quickly how the objects work, and that how they
work makes their constructions inevitable.
In the same vein, Chapter 4 characterizes the multivariable derivative as a
well-approximating linear mapping. The chapter then solves some multivari-
able problems that have one-variable counterparts. Specifically, the multivari-
able chain rule helps with change of variable in partial differential equations,
a multivariable analogue of the max/min test helps with optimization, and
the multivariable derivative of a scalar-valued function helps to find tangent
planes and trajectories.
Chapter 5 uses the results of the three chapters preceding it to prove the
inverse function theorem, then the implicit function theorem as a corollary,
and finally the Lagrange multiplier criterion as a consequence of the implicit
function theorem. Lagrange multipliers help with a type of multivariable op-
timization problem that has no one-variable analogue, optimization with con-
straints. For example, given two curves in space, what pair of points—one
on each curve—are closest to each other? Not only does this problem have
six variables (the three coordinates of each point), but furthermore, they are
not fully independent: the first three variables must specify a point on the
first curve, and similarly for the second three. In this problem, x1 through x6
vary through a subset of six-dimensional space, conceptually a two-dimensional
subset (one degree of freedom for each curve) that is bending around in the
ambient six dimensions, and we seek points of this subset where a certain
function of x1 through x6 is optimized. That is, optimization with constraints
can be viewed as a beginning example of calculus on curved spaces.
For another example, let n be a positive integer, and let e1 , . . . , en be
positive numbers with e1 + ⋯ + en = 1. Maximize the function

f (x1 , . . . , xn ) = x1^{e1} ⋯ xn^{en} , xi ≥ 0 for all i,

subject to the constraint that

e1 x1 + ⋯ + en xn = 1.

As in the previous paragraph, since this problem involves one condition on
the variables x1 through xn , it can be viewed as optimizing over an (n − 1)-
dimensional space inside n dimensions. The problem may appear unmotivated,
but its solution leads quickly to a generalization of the arithmetic–geometric
mean inequality √(ab) ≤ (a + b)/2 for all nonnegative a and b,

a1^{e1} ⋯ an^{en} ≤ e1 a1 + ⋯ + en an for all nonnegative a1 , . . . , an .

Moving to integral calculus, Chapter 6 introduces the integral of a scalar-
valued function of many variables, taken over a domain of its inputs. When the
domain is a box, the definitions and the basic results are essentially the same as
for one variable. However, in multivariable calculus we want to integrate over
regions other than boxes, and ensuring that we can do so takes a little work.
After this is done, the chapter proceeds to two main tools for multivariable
integration: Fubini’s theorem and the change of variable theorem. Fubini’s
theorem reduces one n-dimensional integral to n one-dimensional integrals,
and the change of variable theorem replaces one n-dimensional integral with
another that may be easier to evaluate. Using these techniques, one can show,
for example, that the ball of radius r in n dimensions has volume

vol(Bn (r)) = (π^{n/2}/(n/2)!) r^n , n = 1, 2, 3, 4, . . . .

The meaning of the (n/2)! in the display when n is odd is explained by a
function called the gamma function. The sequence begins

2r, πr^2 , (4/3)πr^3 , (1/2)π^2 r^4 , . . . .
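By way of a preview of such machine computation, here is a minimal Python sketch (ours, not part of the book) that evaluates the volume formula, reading (n/2)! as Γ(n/2 + 1) via the standard-library function math.gamma:

```python
import math

def ball_volume(n, r=1.0):
    # vol(Bn(r)) = pi^(n/2) / (n/2)! * r^n, with (n/2)! read as Gamma(n/2 + 1).
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * r ** n

# For r = 1 this reproduces the sequence 2r, pi r^2, (4/3) pi r^3, (1/2) pi^2 r^4, ...
for n in range(1, 5):
    print(n, ball_volume(n))  # 2.0, 3.14159..., 4.18879..., 4.93480...
```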
Chapter 7 discusses the fact that continuous functions, or differentiable
functions, or twice-differentiable functions, are well approximated by smooth
functions, meaning functions that can be differentiated endlessly. The approx-
imation technology is an integral called the convolution. One point here is that
the integral is useful in ways far beyond computing volumes. The second point
is that with approximation by convolution in hand, we feel free to assume in
the sequel that functions are smooth. The reader who is willing to grant this
assumption in any case can skip Chapter 7.
Chapter 8 introduces parametrized curves as a warmup for Chapter 9
to follow. The subject of Chapter 9 is integration over k-dimensional parame-
trized surfaces in n-dimensional space, and parametrized curves are the special
case k = 1. Aside from being one-dimensional surfaces, parametrized curves
are interesting in their own right. Chapter 8 focuses on the local description
of a curve in an intrinsic coordinate system that continually adjusts itself as
it moves along the curve, the Frenet frame.
Chapter 9 presents the integration of differential forms. This subject poses
the pedagogical dilemma that fully describing its structure requires an in-
vestment in machinery untenable for students who are seeing it for the first
time, whereas describing it purely operationally is unmotivated. The approach
here begins with the integration of functions over k-dimensional surfaces in
n-dimensional space, a natural tool to want, with a natural definition suggest-
ing itself. For certain such integrals, called flow and flux integrals, the inte-
grand takes a particularly workable form consisting of sums of determinants
of derivatives. It is easy to see what other integrands—including integrands
suitable for n-dimensional integration in the sense of Chapter 6, and includ-
ing functions in the usual sense—have similar features. These integrands can
be uniformly described in algebraic terms as objects called differential forms.
That is, differential forms assemble the smallest coherent algebraic structure
encompassing the various integrands of interest to us. The fact that differen-
tial forms are algebraic makes them easy to study without thinking directly
about the analysis of integration. The algebra leads to a general version of
the fundamental theorem of integral calculus that is rich in geometry. The
theorem subsumes the three classical vector integration theorems: Green’s
theorem, Stokes’s theorem, and Gauss’s theorem, also called the divergence
theorem.
The following two exercises invite the reader to start engaging with some
of the ideas in this book immediately.

Exercises
0.0.1. (a) Consider two surfaces in space, each surface having at each of its
points a tangent plane and therefore a normal line, and consider pairs of
points, one on each surface. Conjecture a geometric condition, phrased in
terms of tangent planes and/or normal lines, about the closest pair of points.
(b) Consider a surface in space and a curve in space, the curve having at
each of its points a tangent line and therefore a normal plane, and consider
pairs of points, one on the surface and one on the curve. Make a conjecture
about the closest pair of points.
(c) Make a conjecture about the closest pair of points on two curves.
0.0.2. (a) Assume that the factorial of a half-integer makes sense, and grant
the general formula for the volume of a ball in n dimensions. Explain why
it follows that (1/2)! = √π/2. Further assume that the half-integral factorial
function satisfies the relation

x! = x ⋅ (x − 1)! for x = 3/2, 5/2, 7/2, . . . .

Subject to these assumptions, verify that the volume of the ball of radius r
in three dimensions is (4/3)πr^3 as claimed. What is the volume of the ball of
radius r in five dimensions?
(b) The ball of radius r in n dimensions sits inside a circumscribing box
with sides of length 2r. Draw pictures of this configuration for n = 1, 2, 3.
Determine what portion of the box is filled by the ball in the limit as the
dimension n gets large. That is, find

lim_{n→∞} vol(Bn (r)) / (2r)^n .

The original version of the book was revised: The Author’s later corrections have been
incorporated. The corrected book is available at https://doi.org/10.1007/978-3-319-
49314-5_10
1
Results from One-Variable Calculus

We begin with a quick review of some ideas from one-variable calculus. The
material of Sections 1.1 and 1.2 is assumed to be familiar. Section 1.3 discusses
Taylor’s theorem at greater length, not assuming that the reader has already
seen it.

1.1 The Real Number System


We assume that there is a real number system, a set R that contains two
distinct elements 0 and 1 and is endowed with the algebraic operations of
addition,
+ ∶ R × R Ð→ R,
and multiplication,
⋅ ∶ R × R Ð→ R.
The sum +(a, b) is written a + b, and the product ⋅(a, b) is written a ⋅ b or
simply ab.
Theorem 1.1.1 (Field axioms for (R, +, ⋅)). The real number system, with
its distinct 0 and 1 and with its addition and multiplication, is assumed to
satisfy the following set of axioms.
(a1) Addition is associative: (x + y) + z = x + (y + z) for all x, y, z ∈ R.
(a2) 0 is an additive identity: 0 + x = x for all x ∈ R.
(a3) Existence of additive inverses: for each x ∈ R there exists y ∈ R such that
y + x = 0.
(a4) Addition is commutative: x + y = y + x for all x, y ∈ R.
(m1) Multiplication is associative: (xy)z = x(yz) for all x, y, z ∈ R.
(m2) 1 is a multiplicative identity: 1x = x for all x ∈ R.
(m3) Existence of multiplicative inverses: for each nonzero x ∈ R there exists
y ∈ R such that yx = 1.
(m4) Multiplication is commutative: xy = yx for all x, y ∈ R.


(d1) Multiplication distributes over addition: (x + y)z = xz + yz for all x, y, z ∈ R.

All of basic algebra follows from the field axioms. Additive and multi-
plicative inverses are unique, the cancellation law holds, 0 ⋅ x = 0 for all real
numbers x, and so on.
Subtracting a real number from another is defined as adding the additive
inverse. In symbols,

− ∶ R × R Ð→ R, x − y = x + (−y) for all x, y ∈ R.

We also assume that R is an ordered field. That is, we assume that there
is a subset R+ of R (the positive elements) such that the following axioms
hold.

Theorem 1.1.2 (Order axioms).


(o1) Trichotomy axiom: for every real number x, exactly one of the following
conditions holds:

x ∈ R+ , −x ∈ R+ , x = 0.

(o2) Closure of positive numbers under addition: for all real numbers x and y,
if x ∈ R+ and y ∈ R+ then also x + y ∈ R+ .
(o3) Closure of positive numbers under multiplication: for all real numbers x
and y, if x ∈ R+ and y ∈ R+ then also xy ∈ R+ .

For all real numbers x and y, define

x<y

to mean
y − x ∈ R+ .
The usual rules for inequalities then follow from the axioms.
Finally, we assume that the real number system is complete. Complete-
ness can be phrased in various ways, all logically equivalent. One version of
completeness is phrased in terms of set-bounds.

Theorem 1.1.3 (Completeness as a set-bound criterion). Every nonempty
subset of R that is bounded above has a least upper bound.

This statement of completeness is an existence statement.


A subset S of R is inductive if
(i1) 0 ∈ S,
(i2) For all x ∈ R, if x ∈ S then x + 1 ∈ S.

Every intersection of inductive subsets of R is again inductive. The set of
natural numbers, denoted N, is the intersection of all inductive subsets
of R, i.e., N is the smallest inductive subset of R. There is no natural number
between 0 and 1 (because if there were then deleting it from N would leave a
smaller inductive subset of R), and so

N = {0, 1, 2, . . . }.

A proposition is a statement P that is either true or false. A proposition
form defined over N is an expression P (n), with n a formal symbol, that
becomes a proposition when any particular natural number is substituted
for n. For instance, the proposition form P (n) = “n is even” becomes the true
proposition “0 is even” when 0 is substituted for n, and it becomes the false
proposition “1 is even” when 1 is substituted for n.

Theorem 1.1.4 (Induction theorem). Let P (n) be a proposition form
defined over N. Suppose that
• P (0) is true.
• For all n ∈ N, if P (n) is true then so is P (n + 1).
Then P (n) is true for all natural numbers n.

Indeed, the hypotheses of the theorem say that P (n) is true for a subset
of N that is inductive, and so the theorem follows from the definition of N as
the smallest inductive subset of R.
The Archimedean property of the real number system states that the
subset N of R is not bounded above. Equivalently, the sequence {1, 1/2, 1/3, . . . }
converges to 0: there are no infinitesimal real numbers greater than 0 but
less than every reciprocal positive integer. The Archimedean property follows
from the assumption that R satisfies the set-bound criterion for completeness.
A second version of completeness is phrased in terms of monotonic se-
quences. Again it is an existence statement.

Theorem 1.1.5 (Completeness as a monotonic sequence criterion).
Every bounded monotonic sequence in R converges to a unique limit.

This version of completeness follows from the first one. However, it does
not imply the first one unless we also assume the Archimedean property.
The set of integers, denoted Z, is the union of the natural numbers and
their additive inverses,
Z = {0, ±1, ±2, . . . }.

Exercises

1.1.1. Referring only to the field axioms, show that 0x = 0 for all x ∈ R.

1.1.2. Prove that in every ordered field, 1 is positive. Prove that the complex
number field C cannot be made an ordered field.

1.1.3. Use a completeness property of the real number system to show that 2
has a positive square root.

1.1.4. (a) Prove by induction that

∑_{i=1}^{n} i^2 = n(n + 1)(2n + 1)/6 for all n ∈ Z+ .

(b) (Bernoulli’s inequality) For every real number r ≥ −1, prove that

(1 + r)^n ≥ 1 + rn for all n ∈ N.

(c) For what positive integers n is 2^n > n^3 ?

1.1.5. (a) Use the induction theorem to show that for every natural num-
ber m, the sum m + n and the product mn are again natural for every natural
number n. Thus N is closed under addition and multiplication, and conse-
quently so is Z.
(b) Which of the field axioms continue to hold for the natural numbers?
(c) Which of the field axioms continue to hold for the integers?

1.1.6. For every positive integer n, let Z/nZ denote the set {0, 1, . . . , n − 1}
with the usual operations of addition and multiplication carried out taking
remainders on division by n. That is, add and multiply in the usual fashion
but subject to the additional condition that n = 0. For example, in Z/5Z we
have 2 + 4 = 1 and 2 ⋅ 4 = 3. For what values of n does Z/nZ form a field?

1.2 Foundational and Basic Theorems

This section reviews the foundational theorems of one-variable calculus. The
first two theorems are not theorems of calculus at all, but rather are theorems
about continuous functions and the real number system. The first theorem
says that under suitable conditions, an optimization problem is guaranteed to
have a solution.

Theorem 1.2.1 (Extreme value theorem). Let I be a nonempty closed
and bounded interval in R, and let f ∶ I Ð→ R be a continuous function. Then
f takes a minimum value and a maximum value on I.

The second theorem says that under suitable conditions, every value
trapped between two output values of a function must itself be an output
value.

Theorem 1.2.2 (Intermediate value theorem). Let I be a nonempty
interval in R, and let f ∶ I Ð→ R be a continuous function. Let y be a real
number, and suppose that

f (x) < y for some x ∈ I

and
f (x′ ) > y for some x′ ∈ I.
Then
f (c) = y for some c ∈ I.

The mean value theorem relates the derivative of a function to values of
the function itself with no reference to the fact that the derivative is a limit,
but at the cost of introducing an unknown point.

Theorem 1.2.3 (Mean value theorem). Let a and b be real numbers with
a < b. Suppose that the function f ∶ [a, b] Ð→ R is continuous and that f is
differentiable on the open subinterval (a, b). Then

(f (b) − f (a))/(b − a) = f ′ (c) for some c ∈ (a, b).
The fundamental theorem of integral calculus quantifies the idea that inte-
gration and differentiation are inverse operations. In fact, two different results
are both called the fundamental theorem, one a result about the derivative
of the integral and the other a result about the integral of the derivative.
“Fundamental theorem of calculus,” unmodified, usually refers to the second
of the next two results.

Theorem 1.2.4 (Fundamental theorem of integral calculus I). Let I
be a nonempty interval in R, let a be a point of I, and let f ∶ I Ð→ R be a
continuous function. Define a second function,

F ∶ I Ð→ R, F (x) = ∫_a^x f (t) dt.

Then F is differentiable on I with derivative F ′ (x) = f (x) for all x ∈ I.
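As a quick symbolic illustration of the theorem (our sketch, using the third-party sympy library and the sample integrand cos), differentiating the integral recovers the integrand:

```python
import sympy as sp

t, x = sp.symbols('t x')
f = sp.cos(t)
F = sp.integrate(f, (t, 0, x))  # F(x) = sin(x), taking a = 0
print(sp.diff(F, x))            # cos(x), i.e., F'(x) = f(x)
```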

Theorem 1.2.5 (Fundamental theorem of integral calculus II). Let I
be a nonempty interval in R, and let f ∶ I Ð→ R be a continuous function.
Suppose that the function F ∶ I Ð→ R has derivative f . Then for every closed
and bounded subinterval [a, b] of I,

∫_a^b f (x) dx = F (b) − F (a).

Exercises

1.2.1. Use the intermediate value theorem to show that 2 has a positive square
root.

1.2.2. Let f ∶ [0, 1] Ð→ [0, 1] be continuous. Use the intermediate value theo-
rem to show that f (x) = x for some x ∈ [0, 1].

1.2.3. Let a and b be real numbers with a < b. Suppose that f ∶ [a, b] Ð→ R
is continuous and that f is differentiable on the open subinterval (a, b). Use
the mean value theorem to show that if f ′ > 0 on (a, b) then f is strictly
increasing on [a, b]. (Note: The quantities called a and b in the mean value
theorem when you cite it to solve this exercise will not be the a and b given
here. It may help to review the definition of “strictly increasing.”)

1.2.4. For the extreme value theorem, the intermediate value theorem, and
the mean value theorem, give examples to show that weakening the hypotheses
of the theorem gives rise to examples for which the conclusion of the theorem
fails.

1.3 Taylor’s Theorem


Let I ⊂ R be a nonempty open interval, and let a ∈ I be any point. Let n be a
nonnegative integer. Suppose that the function f ∶ I Ð→ R has n continuous
derivatives,
f, f ′ , f ′′ , . . . , f (n) ∶ I Ð→ R.
Suppose further that we know the values of f and its derivatives at a, the
n + 1 numbers

f (a), f ′ (a), f ′′ (a), ..., f (n) (a).

(For instance, if f ∶ R Ð→ R is the cosine function, and a = 0 and n is even,
then the numbers are 1, 0, −1, 0, . . . , (−1)^{n/2} .)
Question 1 (Existence and uniqueness): Is there a polynomial p of
degree n that mimics the behavior of f at a in the sense that

p(a) = f (a), p′ (a) = f ′ (a), p′′ (a) = f ′′ (a), ..., p(n) (a) = f (n) (a)?

Is there only one such polynomial?


Question 2 (Accuracy of approximation, granting existence and
uniqueness): How well does p(x) approximate f (x) for x ≠ a?
Question 1 is easy to answer. Consider a polynomial of degree n expanded
about x = a,

p(x) = a0 + a1 (x − a) + a2 (x − a)^2 + a3 (x − a)^3 + ⋯ + an (x − a)^n .



The goal is to choose the coefficients a0 , . . . , an to make p behave like the
original function f at a. Note that p(a) = a0 . We want p(a) to equal f (a), so
set
a0 = f (a).
Differentiate p to obtain
p′ (x) = a1 + 2a2 (x − a) + 3a3 (x − a)^2 + ⋯ + nan (x − a)^{n−1} ,
so that p′ (a) = a1 . We want p′ (a) to equal f ′ (a), so set
a1 = f ′ (a).
Differentiate again to obtain
p′′ (x) = 2a2 + 3 ⋅ 2a3 (x − a) + ⋯ + n(n − 1)an (x − a)^{n−2} ,
so that p′′ (a) = 2a2 . We want p′′ (a) to equal f ′′ (a), so set
a2 = f ′′ (a)/2.
Differentiate again to obtain
p′′′ (x) = 3 ⋅ 2a3 + ⋯ + n(n − 1)(n − 2)an (x − a)^{n−3} ,
so that p′′′ (a) = 3 ⋅ 2a3 . We want p′′′ (a) to equal f ′′′ (a), so set
a3 = f ′′′ (a)/(3 ⋅ 2).
Continue in this fashion to obtain a4 = f (4) (a)/4! and so on up to an =
f (n) (a)/n!. That is, the desired coefficients are

ak = f (k) (a)/k! for k = 0, . . . , n.
Thus the answer to the existence part of Question 1 is yes. Furthermore, since
the calculation offered us no choices en route, these are the only coefficients
that can work, and so the approximating polynomial is unique. It deserves a
name.
Definition 1.3.1 (nth-degree Taylor polynomial). Let I ⊂ R be a nonempty
open interval, and let a be a point of I. Let n be a nonnegative integer. Sup-
pose that the function f ∶ I Ð→ R has n continuous derivatives. Then the
nth-degree Taylor polynomial of f at a is
Tn (x) = f (a) + f ′ (a)(x − a) + (f ′′ (a)/2)(x − a)^2 + ⋯ + (f (n) (a)/n!)(x − a)^n .
In more concise notation,

Tn (x) = ∑_{k=0}^{n} (f (k) (a)/k!) (x − a)^k .

For example, if f (x) = e^x and a = 0 then it is easy to generate the following table:

    k    f (k) (x)    f (k) (0)/k!
    0    e^x          1
    1    e^x          1
    2    e^x          1/2
    3    e^x          1/3!
    ⋮    ⋮            ⋮
    n    e^x          1/n!
From the table we can read off the nth-degree Taylor polynomial of f at 0,

Tn (x) = 1 + x + x^2/2 + x^3/3! + ⋯ + x^n/n! = ∑_{k=0}^{n} x^k/k! .
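If symbolic software is at hand, the table and the polynomial can be checked mechanically. Here is a small sympy sketch (ours, not part of the text); it builds Tn directly from the coefficients f (k) (0)/k!:

```python
import sympy as sp

x = sp.symbols('x')
n = 5
# T_n(x) = sum of f^(k)(0)/k! * x^k with f = exp, so every coefficient is 1/k!.
Tn = sum(sp.diff(sp.exp(x), x, k).subs(x, 0) / sp.factorial(k) * x**k
         for k in range(n + 1))
print(sp.expand(Tn))  # x**5/120 + x**4/24 + x**3/6 + x**2/2 + x + 1
```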

Recall that the second question is how well the polynomial Tn (x) approx-
imates f (x) for x ≠ a. Thus it is a question about the difference f (x) − Tn (x).
Giving this quantity its own name is useful.

Definition 1.3.2 (nth-degree Taylor remainder). Let I ⊂ R be a nonempty
open interval, and let a be a point of I. Let n be a nonnegative integer.
Suppose that the function f ∶ I Ð→ R has n continuous derivatives. Then the
nth-degree Taylor remainder of f at a is

Rn (x) = f (x) − Tn (x).

So the second question is to estimate the remainder Rn (x) for points x ∈ I.
The method to be presented here for doing so proceeds very naturally but it
is perhaps a little surprising, because although the Taylor polynomial Tn (x)
is expressed in terms of derivatives, as is the expression to be obtained for the
remainder Rn (x), we obtain the expression by using the fundamental theorem
of integral calculus repeatedly.
The method requires a calculation, and so, guided by hindsight, we first
carry it out so that then the ideas of the method itself will be uncluttered.
For every positive integer k and every x ∈ R define a k-fold nested integral,

Ik (x) = ∫_{x1=a}^{x} ∫_{x2=a}^{x1} ⋯ ∫_{xk=a}^{xk−1} dxk ⋯ dx2 dx1 .

This nested integral is a function only of x because a is a constant and x1
through xk are dummy variables of integration. That is, Ik depends only on
the upper limit of integration of the outermost integral. Although Ik may

appear daunting, it unwinds readily if we start from the simplest case. We
interpret the empty nested integral I0 (x) to be identically 1. Next,

I1 (x) = ∫_{x1=a}^{x} dx1 = x1 ∣_{x1=a}^{x} = x − a.

Move one layer out and use this result to get

I2 (x) = ∫_{x1=a}^{x} ∫_{x2=a}^{x1} dx2 dx1 = ∫_{x1=a}^{x} I1 (x1 ) dx1
       = ∫_{x1=a}^{x} (x1 − a) dx1 = (1/2)(x1 − a)^2 ∣_{x1=a}^{x} = (1/2)(x − a)^2 .

Again move out and quote the previous calculation,

I3 (x) = ∫_{x1=a}^{x} ∫_{x2=a}^{x1} ∫_{x3=a}^{x2} dx3 dx2 dx1 = ∫_{x1=a}^{x} I2 (x1 ) dx1
       = ∫_{x1=a}^{x} (1/2)(x1 − a)^2 dx1 = (1/3!)(x1 − a)^3 ∣_{x1=a}^{x} = (1/3!)(x − a)^3 .

The method and pattern are clear, and the answer in general is

Ik (x) = (1/k!)(x − a)^k , k = 0, 1, 2, . . . .
Note that this is part of the kth term (f (k) (a)/k!)(x − a)^k of the Taylor
polynomial, the part that makes no reference to the function f . That is,
f (k) (a)Ik (x) is the kth term of the Taylor polynomial for k = 0, 1, 2, . . . .
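The general formula can also be confirmed symbolically for small k; the following sympy sketch (ours, not part of the text) performs the nested integration layer by layer and compares against (x − a)^k/k!:

```python
import sympy as sp

x, a = sp.symbols('x a')

def nested_integral(k):
    # Integrate the constant 1 through k nested layers, innermost first:
    # each pass replaces the running upper limit x by a fresh dummy variable.
    expr = sp.Integer(1)
    for _ in range(k):
        t = sp.Dummy('t')
        expr = sp.integrate(expr.subs(x, t), (t, a, x))
    return expr

for k in range(4):
    print(k, nested_integral(k).equals((x - a)**k / sp.factorial(k)))  # True
```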
With the formula for Ik (x) in hand, we return to using the fundamental
theorem of integral calculus to study the remainder Rn (x), the function f (x)
minus its nth-degree Taylor polynomial Tn (x). According to the fundamental
theorem,
f (x) = f (a) + ∫_a^x f ′ (x1 ) dx1 .
That is, f (x) is equal to the constant term of the Taylor polynomial plus an
integral,
f (x) = T0 (x) + ∫_a^x f ′ (x1 ) dx1 .
By the fundamental theorem again, the integral is in turn

∫_a^x f ′ (x1 ) dx1 = ∫_a^x ( f ′ (a) + ∫_a^{x1} f ′′ (x2 ) dx2 ) dx1 .

The first term of the outer integral is f ′ (a)I1 (x), giving the first-order term
of the Taylor polynomial and leaving a doubly nested integral,

∫_a^x f ′ (x1 ) dx1 = f ′ (a)(x − a) + ∫_a^x ∫_a^{x1} f ′′ (x2 ) dx2 dx1 .

In other words, the calculation so far has shown that

f (x) = f (a) + f ′ (a)(x − a) + ∫_a^x ∫_a^{x1} f ′′ (x2 ) dx2 dx1
      = T1 (x) + ∫_a^x ∫_a^{x1} f ′′ (x2 ) dx2 dx1 .

Once more by the fundamental theorem, the doubly nested integral is

∫_a^x ∫_a^{x1} f ′′ (x2 ) dx2 dx1 = ∫_a^x ∫_a^{x1} ( f ′′ (a) + ∫_a^{x2} f ′′′ (x3 ) dx3 ) dx2 dx1 ,

and the first term of the outer integral is f ′′ (a)I2 (x), giving the second-order
term of the Taylor polynomial and leaving a triply nested integral,

∫_a^x ∫_a^{x1} f ′′ (x2 ) dx2 dx1 = (f ′′ (a)/2)(x − a)^2 + ∫_a^x ∫_a^{x1} ∫_a^{x2} f ′′′ (x3 ) dx3 dx2 dx1 .

So now the calculation so far has shown that

f (x) = T2 (x) + ∫_a^x ∫_a^{x1} ∫_a^{x2} f ′′′ (x3 ) dx3 dx2 dx1 .

Continuing this process through n iterations shows that f (x) is Tn (x) plus
an (n + 1)-fold iterated integral,

f (x) = Tn (x) + ∫_a^x ∫_a^{x1} ⋯ ∫_a^{xn} f (n+1) (xn+1 ) dxn+1 ⋯ dx2 dx1 .

In other words, the remainder is the integral,

Rn (x) = ∫_a^x ∫_a^{x1} ⋯ ∫_a^{xn} f (n+1) (xn+1 ) dxn+1 ⋯ dx2 dx1 .    (1.1)

Note that we now are assuming that f has n + 1 continuous derivatives.


For simplicity, assume that x > a. Since f (n+1) is continuous on the closed
and bounded interval [a, x], the extreme value theorem says that it takes a
minimum value m and a maximum value M on the interval. That is,

m ≤ f (n+1) (xn+1 ) ≤ M, xn+1 ∈ [a, x].

Integrate these two inequalities n + 1 times to bound the remainder
integral (1.1) on both sides by multiples of the integral that we have evaluated,

mIn+1 (x) ≤ Rn (x) ≤ M In+1 (x),

and therefore by the precalculated formula for In+1 (x),

m (x − a)^{n+1}/(n + 1)! ≤ Rn (x) ≤ M (x − a)^{n+1}/(n + 1)! .    (1.2)

Recall that m and M are particular values of f (n+1) . Define an auxiliary
function that will therefore assume the sandwiching values in (1.2),

g ∶ [a, x] Ð→ R, g(t) = f (n+1) (t) (x − a)^{n+1}/(n + 1)! .

That is, since there exist values tm and tM in [a, x] such that f (n+1) (tm ) = m
and f (n+1) (tM ) = M , the result (1.2) of our calculation can be rephrased as

g(tm ) ≤ Rn (x) ≤ g(tM ).

The inequalities show that the remainder is an intermediate value of g. And g
is continuous, so by the intermediate value theorem, there exists some point
c ∈ [a, x] such that g(c) = Rn (x). In other words, g(c) is the desired remain-
der, the function minus its Taylor polynomial. We have proved the following
theorem.

Theorem 1.3.3 (Taylor's theorem). Let I ⊂ R be a nonempty open
interval, and let a ∈ I. Let n be a nonnegative integer. Suppose that the function
f ∶ I Ð→ R has n + 1 continuous derivatives. Then for each x ∈ I,

f (x) = Tn (x) + Rn (x)

where
Rn (x) = (f (n+1) (c)/(n + 1)!) (x − a)^{n+1}
for some c between a and x.

We have proved Taylor's theorem only when x > a. It is trivial for x = a.
If x < a, then rather than repeat the proof while keeping closer track of signs,
with some of the inequalities switching direction, we may define

f˜ ∶ −I Ð→ R, f˜(−x) = f (x).

Since f˜ = f ○ neg, where neg is the negation function, a small exercise with
the chain rule shows that

f˜(k) (−x) = (−1)^k f (k) (x), for k = 0, . . . , n + 1 and −x ∈ −I.

If x < a in I then −x > −a in −I, and so we know by the version of Taylor's
theorem that we have already proved that

f˜(−x) = T̃n (−x) + R̃n (−x)

where

T̃n (−x) = ∑_{k=0}^{n} (f˜(k) (−a)/k!) (−x − (−a))^k

and

R̃n (−x) = (f˜(n+1) (−c)/(n + 1)!) (−x − (−a))^{n+1} for some −c between −a and −x.

But f˜(−x) = f (x), and T̃n (−x) is precisely the desired Taylor polyno-
mial Tn (x),

T̃n (−x) = ∑_{k=0}^{n} (f˜(k) (−a)/k!) (−x − (−a))^k
        = ∑_{k=0}^{n} ((−1)^k f (k) (a)/k!) (−1)^k (x − a)^k = ∑_{k=0}^{n} (f (k) (a)/k!) (x − a)^k = Tn (x),

and similarly R̃n (−x) works out to the desired form of Rn (x),

R̃n (−x) = (f (n+1) (c)/(n + 1)!) (x − a)^{n+1} for some c between a and x.

Thus we obtain the statement of Taylor’s theorem in the case x < a as well.
Whereas our proof of Taylor’s theorem relies primarily on the fundamental
theorem of integral calculus, and a similar proof relies on repeated integration
by parts (Exercise 1.3.6), many proofs rely instead on the mean value theorem.
Our proof neatly uses three different mathematical techniques for the three
different parts of the argument:
• To find the Taylor polynomial Tn (x), we differentiated repeatedly, using
a substitution at each step to determine a coefficient.
• To get a precise (if unwieldy) expression for the remainder Rn (x) = f (x) −
Tn (x), we integrated repeatedly, using the fundamental theorem of integral
calculus at each step to produce a term of the Taylor polynomial.
• To express the remainder in a more convenient form, we used the extreme
value theorem and then the intermediate value theorem once each. These
foundational theorems are not results from calculus but (as we will discuss
in Section 2.4) from an area of mathematics called topology.
The expression for Rn (x) given in Theorem 1.3.3 is called the Lagrange
form of the remainder. Other expressions for Rn (x) exist as well. Whatever
form is used for the remainder, it should be something that we can estimate
by bounding its magnitude.
For example, we use Taylor’s theorem to estimate ln(1.1) by hand to within
1/500 000. Let f (x) = ln(1+x) on (−1, ∞), and let a = 0. Compute the following
table:

    k      f (k) (x)                        f (k) (0)/k!
    0      ln(1 + x)                        0
    1      1/(1 + x)                        1
    2      −1/(1 + x)^2                     −1/2
    3      2/(1 + x)^3                      1/3
    4      −3!/(1 + x)^4                    −1/4
    ⋮      ⋮                                ⋮
    n      (−1)^{n−1} (n − 1)!/(1 + x)^n    (−1)^{n−1}/n
    n+1    (−1)^n n!/(1 + x)^{n+1}
Next, read off from the table that for n ≥ 1, the nth-degree Taylor polynomial
is

Tn (x) = x − x^2/2 + x^3/3 − ⋯ + (−1)^{n−1} x^n/n = ∑_{k=1}^{n} (−1)^{k−1} x^k/k ,

and the remainder is

Rn (x) = (−1)^n x^{n+1} / ((1 + c)^{n+1} (n + 1)) for some c between 0 and x.

This expression for the remainder may be a bit much to take in, because
it involves three variables: the point x at which we are approximating the
logarithm, the degree n of the Taylor polynomial that is providing the ap-
proximation, and the unknown value c in the error term. But we are in-
terested in x = 0.1 in particular (since we are approximating ln(1.1) using
f (x) = ln(1 + x)), so that the Taylor polynomial specializes to

Tn (0.1) = 0.1 − (0.1)^2/2 + (0.1)^3/3 − ⋯ + (−1)^{n−1} (0.1)^n/n ,
and we want to bound the remainder in absolute value, so we write

∣Rn (0.1)∣ = (0.1)^{n+1} / ((1 + c)^{n+1} (n + 1)) for some c between 0 and 0.1.

Now the symbol x is gone. Next, note that although we don’t know the value
of c, the smallest possible value of the quantity (1 + c)^{n+1} in the denominator
of the absolute remainder is 1, because c ≥ 0. And since this value occurs in

the denominator, it lets us write the greatest possible value of the absolute
remainder with no reference to c. That is,
∣Rn (0.1)∣ ≤ (0.1)^{n+1}/(n + 1) ,

and the symbol c is gone as well. The only remaining variable is n, and the
goal is to approximate ln(1.1) to within 1/500 000. Set n = 4 in the previous
display to get
∣R4 (0.1)∣ ≤ 1/500 000 .
That is, the fourth-degree Taylor polynomial

T4 (0.1) = 1/10 − 1/200 + 1/3000 − 1/40000 ,
which numerically is

T4 (0.1) = 0.10000000 . . .
−0.00500000 . . .
+0.00033333 . . .
−0.00002500 . . .
= 0.09530833 . . . ,

agrees with ln(1.1) to within 0.00000200 . . . , so that

0.09530633 ⋅ ⋅ ⋅ ≤ ln(1.1) ≤ 0.09531033 . . . .

Any computer should confirm this. The point here is not that we have ob-
tained impressively many digits of ln(1.1), or that we would want to continue
carrying out such calculations by hand, but that we see how Taylor’s theo-
rem guarantees correct computation to a specified accuracy using only basic
arithmetic.
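Indeed, a few lines of Python (our confirmation, not part of the text) reproduce the arithmetic and check the error bound:

```python
import math

# Fourth-degree Taylor polynomial of ln(1 + x) at 0, evaluated at x = 0.1.
T4 = 0.1 - 0.1**2/2 + 0.1**3/3 - 0.1**4/4
bound = 0.1**5 / 5                        # |R4(0.1)| <= (0.1)^5/5 = 1/500 000
print(T4)                                 # 0.09530833...
print(abs(math.log(1.1) - T4) <= bound)   # True
```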
Continuing to work with the function f (x) = ln(1 + x) for x > −1, set x = 1
instead to get that for n ≥ 1,

Tn (1) = 1 − 1/2 + 1/3 − ⋯ + (−1)^{n−1}/n ,

and

∣Rn (1)∣ = 1/((1 + c)^{n+1} (n + 1)) for some c between 0 and 1.

Thus ∣Rn (1)∣ ≤ 1/(n + 1), and this goes to 0 as n → ∞. Therefore ln(2) is
expressible as an infinite series,
ln(2) = 1 − 1/2 + 1/3 − 1/4 + ⋯ .
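The convergence here is quite slow, as a quick computation (ours, not the book's) illustrates; after a thousand terms the error is still on the order of the bound 1/(n + 1):

```python
import math

partial = sum((-1)**(k - 1) / k for k in range(1, 1001))
print(partial, math.log(2))  # 0.69264... vs 0.69314...; error < 1/1001
```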
This example illustrates an important general principle:

To check whether the Taylor polynomial Tn (x) converges to f (x) as
n grows, i.e., to check whether the infinite Taylor series

T (x) = lim_{n→∞} Tn (x) = ∑_{k=0}^{∞} (f (k) (a)/k!) (x − a)^k
reproduces f (x), check whether the remainder Rn (x) converges to 0.
To show that the remainder Rn (x) converges to 0, estimate ∣Rn (x)∣ in
a way that gets rid of the unknown c and then show that the estimate
goes to 0.
To repeat a formula from before, the nth-degree Taylor polynomial of the
function ln(1 + x) is

Tn (x) = x − x^2/2 + x^3/3 − ⋯ + (−1)^{n−1} x^n/n = ∑_{k=1}^{n} (−1)^{k−1} x^k/k .
The graphs of the natural logarithm ln(x) and the first five Taylor polynomials
Tn (x − 1) are plotted from 0 to 2 in Figure 1.1. (The switch from ln(1 + x)
to ln(x) places the logarithm graph in its familiar position, and then the switch
from Tn (x) to Tn (x − 1) is forced in consequence to fit the Taylor polynomials
through the repositioned function.) A good check of your understanding is to
see whether you can determine which graph is which in the figure.

Figure 1.1. The natural logarithm and its Taylor polynomials

For another example, return to the exponential function f (x) = e^x and
let a = 0. For every x, the difference between f (x) and the nth-degree Taylor
16 1 Results from One-Variable Calculus

polynomial Tn (x) satisfies

∣Rn (x)∣ = ∣ e^c x^{n+1}/(n + 1)! ∣ for some c between 0 and x.

If x ≥ 0 then e^c could be as large as e^x , while if x < 0 then e^c could be as large
as e^0 . The worst possible case is therefore

∣Rn (x)∣ ≤ max{1, e^x } ∣x∣^{n+1}/(n + 1)! .

As n → ∞ (while x remains fixed, albeit arbitrary) the right side goes to 0,
because the factorial growth of (n + 1)! dominates the exponential growth
of ∣x∣^{n+1} , and so we have in the limit that e^x is expressible as a power series,

e^x = 1 + x + x^2/2! + x^3/3! + ⋯ + x^n/n! + ⋯ = ∑_{k=0}^{∞} x^k/k! .

The power series here can be used to define ex , but then obtaining the prop-
erties of ex depends on the technical fact that a power series can be differenti-
ated term by term in its open interval (or disk if we are working with complex
numbers) of convergence.
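A small numerical experiment (ours, not part of the text) shows the factorial eventually winning even for moderately large ∣x∣, where the early partial sums are wildly off:

```python
import math

def exp_partial(x, n):
    # Partial sum of the exponential series through degree n.
    return sum(x**k / math.factorial(k) for k in range(n + 1))

for n in (5, 10, 20, 40):
    print(n, abs(exp_partial(-8.0, n) - math.exp(-8.0)))
# The error is enormous at first, then collapses toward roundoff
# once (n + 1)! dominates 8^(n + 1).
```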
The power series in the previous display also allows a small illustration of
the utility of quantifiers. Since it is valid for every real number x, it is valid
with x^2 in place of x,

e^{x^2} = 1 + x^2 + x^4/2! + x^6/3! + ⋯ + x^{2n}/n! + ⋯ = ∑_{k=0}^{∞} x^{2k}/k! for every x ∈ R.

There is no need here to introduce the function g(x) = e^{x^2} , then work out its
Taylor polynomial and remainder, then analyze the remainder.


We end this chapter by sketching two cautionary examples. First, work
from earlier in the section shows that the Taylor series for the function ln(1+x)
at a = 0 is

T (x) = x − x^2/2 + x^3/3 − ⋯ + (−1)^{n−1} x^n/n + ⋯ = ∑_{k=1}^{∞} (−1)^{k−1} x^k/k .

The ratio test shows that this series converges absolutely when ∣x∣ < 1, and
the nth-term test shows that the series diverges when x > 1. The series also
converges at x = 1, as observed earlier. Thus, while the domain of the func-
tion ln(1 + x) is (−1, ∞), the Taylor series has no chance to match the func-
tion outside of (−1, 1]. As for whether the Taylor series matches the function
on (−1, 1], recall the Lagrange form of the remainder,

Rn (x) = (−1)^n x^{n+1} / ((1 + c)^{n+1} (n + 1)) for some c between 0 and x.

Consequently, the absolute value of the Lagrange form of the remainder is

∣Rn (x)∣ = (1/(n + 1)) (∣x∣/(1 + c))^{n+1} for some c between 0 and x.
From the previous display, noting that ∣x∣ is the distance from 0 to x while
1 + c is the distance from −1 to c, we see that:
• If 0 ≤ x ≤ 1 then ∣x∣ ≤ 1 ≤ 1 + c, and so Rn (x) goes to 0 as n gets large.
• If −1/2 ≤ x < 0 then ∣x∣ ≤ 1/2 ≤ 1 + c, and so again Rn (x) goes to 0 as n
gets large.
• But if −1 < x < −1/2 then possibly 1 + c < ∣x∣, and so possibly Rn (x) does
not go to 0 as n gets large.
That is, we have shown that
ln(1 + x) = T (x) for x ∈ [−1/2, 1],
but the Lagrange form does not readily show that the equality in the previous
display also holds for x ∈ (−1, −1/2). Figure 1.1 suggests why: the Taylor
polynomials are converging more slowly to the original function the farther
left we go on the graph. However, a different form of the remainder, given in
Exercise 1.3.6, proves that indeed the equality holds for all x ∈ (−1, 1]. Also,
the geometric series relation
1/(1 + x) = 1 − x + x^2 − x^3 + ⋯ , −1 < x < 1
gives the relation ln(1 + x) = T (x) for x ∈ (−1, 1) upon integrating termwise
and then setting x = 0 to see that the resulting constant term is 0; but this
argument’s invocation of the theorem that a power series can be integrated
termwise within its interval (or disk) of convergence is nontrivial.
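For what it is worth, the termwise integration can be watched in sympy (our sketch, truncating the geometric series at a finite degree):

```python
import sympy as sp

x = sp.symbols('x')
geometric = sum((-1)**k * x**k for k in range(8))  # 1 - x + x^2 - ... - x^7
print(sp.integrate(geometric, x))
# x - x**2/2 + x**3/3 - ... - x**8/8: the beginning of the series T(x),
# with constant term 0 as in the text.
```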
For the last example, define f ∶ R Ð→ R by


f (x) = { e^{−1/x^2}   if x ≠ 0,
          0            if x = 0.
It is possible to show that f is infinitely differentiable and that every derivative
of f at 0 is 0. That is, f (k) (0) = 0 for k = 0, 1, 2, . . . . Consequently, the Taylor
series for f at 0 is
T (x) = 0 + 0x + 0x^2 + ⋯ + 0x^n + ⋯.
That is, the Taylor series is the zero function, which certainly converges for all
x ∈ R. But the only value of x for which it converges to the original function f
is x = 0. In other words, although this Taylor series converges everywhere,
it fails catastrophically to equal the function it is attempting to match. The
problem is that the function f decays exponentially, and since exponential be-
havior dominates polynomial behavior, any attempt to discern f using poly-
nomials will fail to see it. Figures 1.2 and 1.3 plot f to display its rapid decay.
The first plot is for x ∈ [−25, 25] and the second is for x ∈ [−1/2, 1/2].

Figure 1.2. Rapidly decaying function, wide view

Figure 1.3. Rapidly decaying function, zoom view
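A few sample values (our computation, not part of the text) make the rapid decay vivid:

```python
import math

def f(x):
    return math.exp(-1.0 / x**2) if x != 0 else 0.0

for x in (1.0, 0.5, 0.2, 0.1):
    print(x, f(x))
# f(0.1) = e^(-100), about 3.7e-44: far smaller near 0 than any power of x,
# which is why no Taylor polynomial at 0 can detect the function.
```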

Exercises

1.3.1. (a) Let n ∈ N. What is the (2n+1)st-degree Taylor polynomial T2n+1 (x)
for the function f (x) = sin x at 0? (The reason for the strange indexing here
is that every second term of the Taylor polynomial is 0.) Prove that sin x is
equal to the limit of T2n+1 (x) as n → ∞, similarly to the argument in the text
for ex . Also find T2n (x) for f (x) = cos x at 0, and explain why the argument
for sin shows that cos x is the limit of its even-degree Taylor polynomials as
well.
(b) Many years ago, the author’s high-school physics textbook asserted,
bafflingly, that the approximation sin x ≈ x is good for x up to 8°. Deconstruct.

1.3.2. What is the nth-degree Taylor polynomial Tn (x) for the following func-
tions at 0?

(a) f (x) = arctan x. (This exercise is not just a matter of routine mechan-
ics. One way to proceed involves the geometric series, and another makes use
of the factorization 1 + x^2 = (1 − ix)(1 + ix).)
(b) f (x) = (1 + x)α where α ∈ R. (Although the answer can be written
in a uniform way for all α, it behaves differently when α ∈ N. Introduce the
generalized binomial coefficient symbol

\binom{α}{k} = α(α − 1)(α − 2)⋯(α − k + 1) / k! , k ∈ N
to help produce a tidy answer.)

1.3.3. (a) Further tighten the numerical estimate of ln(1.1) from this section
by reasoning as follows. As n increases, the Taylor polynomials Tn (0.1) add
terms of decreasing magnitude and alternating sign. Therefore T4 (0.1) un-
derestimates ln(1.1). Now that we know this, it is useful to find the smallest
possible value of the remainder (by setting c = 0.1 rather than c = 0 in the for-
mula). Then ln(1.1) lies between T4 (0.1) plus this smallest possible remainder
value and T4 (0.1) plus the largest possible remainder value, obtained in the
section. Supply the numbers, and verify by machine that the tighter estimate
of ln(1.1) is correct.
(b) In Figure 1.1, identify the graphs of T1 through T5 and the graph of ln
near x = 0 and near x = 2.

1.3.4. Working by hand, use the third-degree Taylor polynomial for sin(x)
at 0 to approximate a decimal representation of sin(0.1). Also compute the
decimal representation of an upper bound for the error of the approximation.
Bound sin(0.1) between two decimal representations.

1.3.5. Use a second-degree Taylor polynomial to approximate √4.2. Use Taylor's theorem to find a guaranteed accuracy of the approximation and thus to
find upper and lower bounds for √4.2.

1.3.6. (a) Another proof of Taylor’s Theorem uses the fundamental theorem
of integral calculus once and then integrates by parts repeatedly. Begin with
the hypotheses of Theorem 1.3.3, and let x ∈ I. By the fundamental theorem,

f (x) = f (a) + ∫_a^x f ′ (t) dt.

Let u = f ′ (t) and v = t − x, so that the integral is ∫_a^x u dv, and integrating by
parts gives

f (x) = f (a) + f ′ (a)(x − a) − ∫_a^x f ′′ (t)(t − x) dt.
Let u = f ′′ (t) and v = (1/2)(t − x)^2 , so that again the integral is ∫_a^x u dv, and
integrating by parts gives



f (x) = f (a) + f ′ (a)(x − a) + f ′′ (a)(x − a)^2/2 + ∫_a^x f ′′′ (t) (t − x)^2/2 dt.
Show that after n steps, the result is

f (x) = Tn (x) + (−1)^n ∫_a^x f (n+1) (t) (t − x)^n/n! dt.
Whereas the expression for f (x) − Tn (x) in Theorem 1.3.3 is called the La-
grange form of the remainder, this exercise has derived the integral form
of the remainder. Use the extreme value theorem and the intermediate value
theorem to derive the Lagrange form of the remainder from the integral form.
(b) Use the integral form of the remainder to show that

ln(1 + x) = limn→∞ Tn (x) for x ∈ (−1, 1].


Part I

Multivariable Differential Calculus


2
Euclidean Space

Euclidean space is a mathematical construct that encompasses the line, the


plane, and three-dimensional space as special cases. Its elements are called
vectors. Vectors can be understood in various ways: as arrows, as quantities
with magnitude and direction, as displacements, or as points. However, along
with a sense of what vectors are, we also need to emphasize how they interact.
The axioms in Section 2.1 capture the idea that vectors can be added together
and can be multiplied by scalars, with both of these operations obeying fa-
miliar laws of algebra. Section 2.2 expresses the geometric ideas of length
and angle in Euclidean space in terms of vector algebra. Section 2.3 discusses
continuity for functions (also called mappings) whose inputs and outputs are
vectors rather than scalars. Section 2.4 introduces a special class of sets in
Euclidean space, the compact sets, and shows that compact sets are preserved
under continuous mappings.

2.1 Algebra: Vectors


Let n be a positive integer. The set of all ordered n-tuples of real numbers,

Rn = {(x1 , . . . , xn ) ∶ xi ∈ R for i = 1, . . . , n} ,

constitutes n-dimensional Euclidean space. When n = 1, the parentheses


and subscript in the notation (x1 ) are superfluous, so we simply view the
elements of R1 as real numbers x and write R for R1 . Elements of R2 and
of R3 are written (x, y) and (x, y, z) to avoid needless subscripts. These first
few Euclidean spaces, R, R2 , and R3 , are conveniently visualized as the line,
the plane, and space itself. (See Figure 2.1.)
Elements of R are called scalars, of Rn , vectors. The origin of Rn ,
denoted 0, is defined to be
0 = (0, . . . , 0).


Figure 2.1. The first few Euclidean spaces

Sometimes the origin of Rn will be denoted 0n to distinguish it from other


origins that we will encounter later.
In the first few Euclidean spaces R, R2 , R3 , one can visualize a vector as
a point x or as an arrow. The arrow can have its tail at the origin and its
head at the point x, or its tail at any point p and its head correspondingly
translated to p + x. (See Figure 2.2. Most illustrations will depict R or R2 .)

Figure 2.2. Various ways to envision a vector

To a mathematician, the word space doesn’t connote volume but instead


refers to a set endowed with some structure. Indeed, Euclidean space Rn comes
with two algebraic operations. The first is vector addition,

+ ∶ Rn × Rn Ð→ Rn ,

defined by adding the scalars at each component of the vectors,

(x1 , . . . , xn ) + (y1 , . . . , yn ) = (x1 + y1 , . . . , xn + yn ).

For example, (1, 2, 3) + (4, 5, 6) = (5, 7, 9). Note that the meaning of the “+”
sign is now overloaded: on the left of the displayed equality, it denotes the
new operation of vector addition, whereas on the right side it denotes the old
addition of real numbers. The multiple meanings of the plus sign shouldn’t
cause problems, because the meaning of “+” is clear from context, i.e., the

meaning of “+” is clear from whether it sits between vectors or scalars. (An
expression such as “(1, 2, 3) + 4,” with the plus sign between a vector and a
scalar, makes no sense according to our grammar.)
The interpretation of vectors as arrows gives a geometric description of
vector addition, at least in R2 . To add the vectors x and y, draw them as
arrows starting at 0 and then complete the parallelogram P that has x and y
as two of its sides. The diagonal of P starting at 0 is then the arrow depicting
the vector x + y. (See Figure 2.3.) The proof of this is a small argument with
similar triangles, left to the reader as Exercise 2.1.2.

Figure 2.3. The parallelogram law of vector addition

The second operation on Euclidean space is scalar multiplication,

⋅ ∶ R × Rn Ð→ Rn ,

defined by
a ⋅ (x1 , . . . , xn ) = (ax1 , . . . , axn ).
For example, 2⋅(3, 4, 5) = (6, 8, 10). We will almost always omit the symbol “⋅”
and write ax for a⋅x. With this convention, juxtaposition is overloaded as “+”
was overloaded above, but again this shouldn’t cause problems.
Scalar multiplication of the vector x (viewed as an arrow) by a also has a
geometric interpretation: it simply stretches (i.e., scales) x by a factor of a.
When a is negative, ax turns x around and stretches it in the other direction
by ∣a∣. (See Figure 2.4.)

Figure 2.4. Scalar multiplication as stretching
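
Because both operations act componentwise, they are immediate to express in code. Here is a minimal sketch in Python (our illustration; the function names are not from the text):

    def vec_add(x, y):
        # componentwise sum of two n-tuples
        return tuple(xi + yi for xi, yi in zip(x, y))

    def scal_mul(a, x):
        # multiply each component by the scalar a
        return tuple(a * xi for xi in x)

    print(vec_add((1, 2, 3), (4, 5, 6)))   # (5, 7, 9), as in the text
    print(scal_mul(2, (3, 4, 5)))          # (6, 8, 10), as in the text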



With these two operations and distinguished element 0, Euclidean space


satisfies the following algebraic laws.
Theorem 2.1.1 (Vector space axioms).
(A1) Addition is associative: (x + y) + z = x + (y + z) for all x, y, z ∈ Rn .
(A2) 0 is an additive identity: 0 + x = x for all x ∈ Rn .
(A3) Existence of additive inverses: for each x ∈ Rn there exists y ∈ Rn such
that y + x = 0.
(A4) Addition is commutative: x + y = y + x for all x, y ∈ Rn .
(M1) Scalar multiplication is associative: a(bx) = (ab)x for all a, b ∈ R, x ∈ Rn .
(M2) 1 is a multiplicative identity: 1x = x for all x ∈ Rn .
(D1) Scalar multiplication distributes over scalar addition: (a + b)x = ax + bx
for all a, b ∈ R, x ∈ Rn .
(D2) Scalar multiplication distributes over vector addition: a(x + y) = ax + ay
for all a ∈ R, x, y ∈ Rn .
All of these are consequences of how “+” and “⋅” and 0 are defined for Rn
in conjunction with the fact that the real numbers, in turn endowed with “+”
and “⋅” and containing 0 and 1, satisfy the field axioms (see Section 1.1). For
example, to prove that Rn satisfies (M1), take any scalars a, b ∈ R and any
vector x = (x1 , . . . , xn ) ∈ Rn . Then

a(bx) = a(b(x1 , . . . , xn )) by definition of x


= a(bx1 , . . . , bxn ) by definition of scalar multiplication
= (a(bx1 ), . . . , a(bxn )) by definition of scalar multiplication
= ((ab)x1 , . . . , (ab)xn ) by n applications of (m1) in R
= (ab)(x1 , . . . , xn ) by definition of scalar multiplication
= (ab)x by definition of x.

The other vector space axioms for Rn can be shown similarly, by unwinding
vectors to their coordinates, quoting field axioms coordinatewise, and then
bundling the results back up into vectors (see Exercise 2.1.3). Nonetheless,
the vector space axioms do not perfectly parallel the field axioms, and you
are encouraged to spend a little time comparing the two axiom sets to get a
feel for where they are similar and where they are different (see Exercise 2.1.4).
Note in particular that
For n > 1, Rn is not endowed with vector-by-vector multiplication.
Although one can define vector multiplication on Rn componentwise, this mul-
tiplication does not combine with vector addition to satisfy the field axioms
except when n = 1. The multiplication of complex numbers makes R2 a field,
and in Section 3.10 we will see an interesting noncommutative multiplication
of vectors for R3 , but these are special cases.
One benefit of the vector space axioms for Rn is that they are phrased
intrinsically, meaning that they make no reference to the scalar coordinates

of the vectors involved. Thus, once you use coordinates to establish the vector
space axioms, your vector algebra can be intrinsic thereafter, making it lighter
and more conceptual. Also, in addition to being intrinsic, the vector space
axioms are general. While Rn is the prototypical set satisfying the vector space
axioms, it is by no means the only one. In coming sections we will encounter
other sets V (whose elements may be, for example, functions) endowed with
their own addition, multiplication by elements of a field F , and distinguished
element 0. If the vector space axioms are satisfied with V and F replacing Rn
and R then we say that V is a vector space over F .
The pedagogical point here is that although the similarity between vector
algebra and scalar algebra may initially make vector algebra seem uninspiring,
in fact the similarity is exciting. It makes mathematics easier, because familiar
algebraic manipulations apply in a wide range of contexts. The same symbol-
patterns have more meaning. For example, we use intrinsic vector algebra to
prove a result from Euclidean geometry, that the three medians of a triangle
intersect. (A median is a segment from a vertex to the midpoint of the opposite
edge.) Consider a triangle with vertices x, y, and z, and form the average of
the three vertices,
p = (x + y + z)/3.
This algebraic average will be the geometric center of the triangle, where
the medians meet. (See Figure 2.5.) Indeed, rewrite p as
p = x + (2/3) ((y + z)/2 − x) .
The displayed expression for p shows that it is two-thirds of the way from x
along the line segment from x to the average of y and z, i.e., that p lies on
the triangle median from vertex x to side yz. (Again see the figure. The idea
is that (y + z)/2 is being interpreted as the midpoint of y and z, each of these
viewed as a point, while on the other hand, the little mnemonic

head minus tail

helps us to remember quickly that (y + z)/2 − x can be viewed as the arrow-


vector from x to (y + z)/2.) Since p is defined symmetrically in x, y, and z,
and it lies on one median, it therefore lies on the other two medians as well.
In fact, the vector algebra has shown that it lies two-thirds of the way along
each median. (As for how a person might find this proof, it is a matter of
hoping that the geometric center (x + y + z)/3 lies on the median by taking
the form x + c((y + z)/2 − x) for some c and then seeing that indeed c = 2/3
works.)
The standard basis of Rn is the set of vectors

{e1 , e2 , . . . , en }

where

Figure 2.5. Three medians of a triangle

e1 = (1, 0, . . . , 0), e2 = (0, 1, . . . , 0), ..., en = (0, 0, . . . , 1).

(Thus each ei is itself a vector, not the ith scalar entry of a vector.) Every
vector x = (x1 , x2 , . . . , xn ) (where the xi are scalar entries) decomposes as

x = (x1 , x2 , . . . , xn )
= (x1 , 0, . . . , 0) + (0, x2 , . . . , 0) + ⋯ + (0, 0, . . . , xn )
= x1 (1, 0, . . . , 0) + x2 (0, 1, . . . , 0) + ⋯ + xn (0, 0, . . . , 1)
= x1 e1 + x2 e2 + ⋯ + xn en ,

or more succinctly,
x = ∑ᵢ₌₁ⁿ xi ei .    (2.1)

Note that in equation (2.1), x and the ei are vectors, while the xi are scalars.
The equation shows that every x ∈ Rn is expressible as a linear combination
(sum of scalar multiples) of the standard basis vectors. The expression is
unique, for if also x = ∑ᵢ₌₁ⁿ x′i ei for some scalars x′1 , . . . , x′n then the equality
says that x = (x′1 , x′2 , . . . , x′n ), so that x′i = xi for i = 1, . . . , n.
(The reason that the geometric-sounding word linear is used here and
elsewhere in this chapter to describe properties having to do with the alge-
braic operations of addition and scalar multiplication will be explained in
Chapter 3.)
The standard basis is handy in that it is a finite set of vectors from which
each of the infinitely many vectors of Rn can be obtained in exactly one way
as a linear combination. But it is not the only such set, nor is it always the
optimal one.

Definition 2.1.2 (Basis). A set of vectors {fi } is a basis of Rn if every


x ∈ Rn is uniquely expressible as a linear combination of the fi .

For example, the set {f1 , f2 } = {(1, 1), (1, −1)} is a basis of R2 . To see this,
consider an arbitrary vector (x, y) ∈ R2 . This vector is expressible as a linear
combination of f1 and f2 if and only if there are scalars a and b such that

(x, y) = af1 + bf2 .

Since f1 = (1, 1) and f2 = (1, −1), this vector equation is equivalent to a pair
of scalar equations,

x = a + b,
y = a − b.

Add these equations and divide by 2 to get a = (x + y)/2, and similarly b =


(x − y)/2. In other words, we have found that
(x, y) = ((x + y)/2) (1, 1) + ((x − y)/2) (1, −1),
and the coefficients a = (x + y)/2 and b = (x − y)/2 on the right side of the
equation are the only possible coefficients a and b for the equation to hold.
That is, scalars a and b exist to express the vector (x, y) as a linear combina-
tion of {f1 , f2 }, and the scalars are uniquely determined by the vector. Thus
{f1 , f2 } is a basis of R2 , as claimed.
The set {g1 , g2 } = {(1, 3), (2, 6)} is not a basis of R2 , because every linear
combination ag1 + bg2 takes the form (a + 2b, 3a + 6b), with the second en-
try equal to three times the first. The vector (1, 0) is therefore not a linear
combination of g1 and g2 .
Nor is the set {h1 , h2 , h3 } = {(1, 0), (1, 1), (1, −1)} a basis of R2 , because
h3 = 2h1 − h2 , so that h3 is a nonunique linear combination of the hj .
See Exercises 2.1.9 and 2.1.10 for practice with bases.
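
The coordinate computation for {f1 , f2 } is easily mechanized; the following Python sketch (ours, hard-coding this particular basis) recovers a and b from (x, y) and reassembles the vector:

    # Solve (x, y) = a*(1, 1) + b*(1, -1), i.e., x = a + b and y = a - b.
    def coordinates(x, y):
        return (x + y) / 2, (x - y) / 2

    a, b = coordinates(3.0, 1.0)
    print(a, b)          # 2.0 1.0
    print(a + b, a - b)  # reassembles (3.0, 1.0)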

Exercises

2.1.1. Write down any three specific nonzero vectors u, v, w from R3 and any
two specific nonzero scalars a, b from R. Compute u + v, aw, b(v + w), (a + b)u,
u + v + w, abw, and the additive inverse to u.

2.1.2. Working in R2 , give a geometric proof that if we view the vectors x


and y as arrows from 0 and form the parallelogram P with these arrows as
two of its sides, then the diagonal z starting at 0 is the vector sum x+y viewed
as an arrow.

2.1.3. Verify that Rn satisfies vector space axioms (A2), (A3), (D1).

2.1.4. Are all the field axioms used in verifying that Euclidean space satisfies
the vector space axioms?

2.1.5. Show that 0 is the unique additive identity in Rn . Show that each vector
x ∈ Rn has a unique additive inverse, which can therefore be denoted −x. (And
it follows that vector subtraction can now be defined,

− ∶ Rn × Rn Ð→ Rn , x − y = x + (−y) for all x, y ∈ Rn .)

Show that 0x = 0 for all x ∈ Rn .

2.1.6. Repeat the previous exercise, but with Rn replaced by an arbitrary


vector space V over a field F . (Work with the axioms.)

2.1.7. Show the uniqueness of the additive identity and the additive inverse
using only (A1), (A2), (A3). (This is tricky; the opening pages of some books
on group theory will help.)

2.1.8. Let x and y be noncollinear vectors in R3 . Give a geometric description


of the set of all linear combinations of x and y.

2.1.9. Which of the following sets are bases of R3 ?

S1 = {(1, 0, 0), (1, 1, 0), (1, 1, 1)},


S2 = {(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)},
S3 = {(1, 1, 0), (0, 1, 1)},
S4 = {(1, 1, 0), (0, 1, 1), (1, 0, −1)}.

How many elements do you think a basis for Rn must have? Give (without
proof) geometric descriptions of all bases of R2 , of R3 .

2.1.10. Recall the field C of complex numbers. Define complex n-space Cn


analogously to Rn :

Cn = {(z1 , . . . , zn ) ∶ zi ∈ C for i = 1, . . . , n} ,

and endow it with addition and scalar multiplication defined by the same
formulas as for Rn . You may take for granted that under these definitions, Cn
satisfies the vector space axioms with scalar multiplication by scalars from R,
and also Cn satisfies the vector space axioms with scalar multiplication by
scalars from C. That is, using language that was introduced briefly in this
section, Cn can be viewed as a vector space over R and also, separately, as a
vector space over C. Give a basis for each of these vector spaces.

Brief Pedagogical Interlude

Before continuing, a few comments about how to work with these notes may
be helpful.

The subject-matter of Chapters 2 through 5 is largely cumulative, with


the main theorem of Chapter 5 being proved with main results of Chapters 2,
3, and 4. Each chapter is largely cumulative internally as well. To acquire
detailed command of so much material and also a large-scale view of how it
fits together, the trick is to focus on each section’s techniques while studying
that section and working its exercises, but thereafter to use the section’s
ideas freely by reference. Specifically, after the scrutiny of vector algebra in
the previous section, one’s vector manipulations should be fluent from now
on, freeing one to concentrate on vector geometry in the next section, after
which the geometry should also be light while one is concentrating on the
analytical ideas of the following section, and so forth.
Admittedly, the model that one has internalized all the prior material
before moving on is idealized. For that matter, so is the model that a body of
interplaying ideas is linearly cumulative. In practice, focusing entirely on the
details of whichever topics are currently active while using previous ideas by
reference isn’t always optimal. One might engage with the details of previous
ideas because one is coming to understand them better, or because the current
ideas showcase the older ones in a new way. Still, the paradigm of technical
emphasis on the current ideas and fluent use of the earlier material does help
a person who is navigating a large body of mathematics to conserve energy
and synthesize a larger picture.

2.2 Geometry: Length and Angle


The geometric notions of length and angle in Rn are readily described in terms
of the algebraic notion of inner product.

Definition 2.2.1 (Inner product). The inner product is a function from


pairs of vectors to scalars,

⟨ , ⟩ ∶ Rn × Rn Ð→ R,

defined by the formula


⟨(x1 , . . . , xn ), (y1 , . . . , yn )⟩ = ∑ᵢ₌₁ⁿ xi yi .

For example,

⟨(1, 1, . . . , 1), (1, 2, . . . , n)⟩ = n(n + 1)/2,
⟨x, ej ⟩ = xj where x = (x1 , . . . , xn ) and j ∈ {1, . . . , n},
⟨ei , ej ⟩ = δij (this means 1 if i = j, 0 otherwise).

Proposition 2.2.2 (Inner product properties).



(IP1) The inner product is positive definite: ⟨x, x⟩ ≥ 0 for all x ∈ Rn , with
equality if and only if x = 0.
(IP2) The inner product is symmetric: ⟨x, y⟩ = ⟨y, x⟩ for all x, y ∈ Rn .
(IP3) The inner product is bilinear:

⟨x + x′ , y⟩ = ⟨x, y⟩ + ⟨x′ , y⟩, ⟨ax, y⟩ = a⟨x, y⟩,


⟨x, y + y ′ ⟩ = ⟨x, y⟩ + ⟨x, y ′ ⟩, ⟨x, by⟩ = b⟨x, y⟩

for all a, b ∈ R, x, x′ , y, y ′ ∈ Rn .

Proof. Exercise 2.2.4. □


The reader should be aware that:


In general, ⟨x + x′ , y + y ′ ⟩ does not equal ⟨x, y⟩ + ⟨x′ , y ′ ⟩.
Indeed, expanding ⟨x + x′ , y + y ′ ⟩ carefully with the inner product properties
shows that the cross-terms ⟨x, y ′ ⟩ and ⟨x′ , y⟩ are present in addition to ⟨x, y⟩
and ⟨x′ , y ′ ⟩.
Like the vector space axioms, the inner product properties are phrased
intrinsically, although they need to be proved using coordinates. As mentioned
in the previous section, intrinsic methods are neater and more conceptual than
using coordinates. More importantly:
The rest of the results of this section are proved by reference to the
inner product properties, with no further reference to the inner product
formula.
The notion of an inner product generalizes beyond Euclidean space—this will
be demonstrated in Exercise 2.3.4, for example—and thanks to the displayed
sentence, once the properties (IP1) through (IP3) are established for any inner
product, all of the pending results in the section will follow automatically with
no further work. (But here a slight disclaimer is necessary. In the displayed
sentence, the word results does not refer to the pending graphic figures. The
fact that the length and angle to be defined in this section will agree with prior
notions of length and angle in the plane, or in three-dimensional space, does
depend on the specific inner product formula. In Euclidean space, the inner
product properties do not determine the inner product formula uniquely. This
point will be addressed in Exercise 3.5.1.)

Definition 2.2.3 (Modulus). The modulus (or absolute value) of a vec-


tor x ∈ Rn is defined as
∣x∣ = √⟨x, x⟩.

Thus the modulus is defined in terms of the inner product, rather than by
its own formula. The inner product formula shows that the modulus formula
is
∣(x1 , . . . , xn )∣ = √(x1² + ⋯ + xn²),

so that some particular examples are



∣(1, 2, . . . , n)∣ = √(n(n + 1)(2n + 1)/6),
∣ei ∣ = 1.
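
Both the inner product and the modulus are one-liners in code. The following Python sketch (ours) checks the examples ⟨(1, . . . , 1), (1, 2, . . . , n)⟩ = n(n + 1)/2 and ∣(1, 2, . . . , n)∣ = √(n(n + 1)(2n + 1)/6) for n = 5:

    import math

    def inner(x, y):
        # <x, y> is the sum of the componentwise products
        return sum(xi * yi for xi, yi in zip(x, y))

    def modulus(x):
        # |x| = sqrt(<x, x>)
        return math.sqrt(inner(x, x))

    n = 5
    ones = (1,) * n
    ramp = tuple(range(1, n + 1))
    print(inner(ones, ramp), n * (n + 1) // 2)                      # both 15
    print(modulus(ramp), math.sqrt(n * (n + 1) * (2 * n + 1) / 6))  # both 7.4161...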

However, the definition of the modulus in terms of inner product combines


with the inner product properties to show, with no reference to the inner prod-
uct formula or the modulus formula, that the modulus satisfies the following
properties (Exercise 2.2.5).

Proposition 2.2.4 (Modulus properties).


(Mod1) The modulus is positive: ∣x∣ ≥ 0 for all x ∈ Rn , with equality if and
only if x = 0.
(Mod2) The modulus is absolute-homogeneous: ∣ax∣ = ∣a∣∣x∣ for all a ∈ R and
x ∈ Rn .

Like other symbols, the absolute value signs are now overloaded, but their
meaning can be inferred from context, as in property (Mod2). When n is 1,
2, or 3, the modulus ∣x∣ gives the distance from 0 to the point x, or the length
of x viewed as an arrow. (See Figure 2.6.)

Figure 2.6. Modulus as length

The following relation between inner product and modulus will help to
show that distance in Rn behaves as it should, and that angle in Rn makes
sense. Since the relation is not obvious, its proof is a little subtle.

Theorem 2.2.5 (Cauchy–Schwarz inequality). For all x, y ∈ Rn ,

∣⟨x, y⟩∣ ≤ ∣x∣ ∣y∣,

with equality if and only if one of x, y is a scalar multiple of the other.



Note that the absolute value signs mean different things on each side of
the Cauchy–Schwarz inequality. On the left side, the quantities x and y are
vectors, their inner product ⟨x, y⟩ is a scalar, and ∣⟨x, y⟩∣ is its scalar absolute
value, while on the right side, ∣x∣ and ∣y∣ are the scalar absolute values of
vectors, and ∣x∣ ∣y∣ is their product. That is, the Cauchy–Schwarz inequality
says:
The size of the product is at most the product of the sizes.
The Cauchy–Schwarz inequality can be written out in coordinates if we
temporarily abandon the principle that we should avoid reference to formulas,

(x1 y1 + ⋯ + xn yn )² ≤ (x1² + ⋯ + xn²)(y1² + ⋯ + yn²).

And this inequality can be proved unconceptually as follows (the reader is


encouraged only to skim the following computation). Rewrite the desired in-
equality as
( ∑i xi yi )² ≤ ∑i xi² ⋅ ∑j yj²,

where the indices of summation run from 1 to n. Expand the square to get

∑i xi²yi² + ∑i≠j xi yi xj yj ≤ ∑i,j xi²yj²,

and canceling the terms common to both sides reduces it to

∑i≠j xi yi xj yj ≤ ∑i≠j xi²yj²,

or

∑i≠j (xi²yj² − xi yi xj yj ) ≥ 0.

Rather than sum over all pairs (i, j) with i ≠ j, sum over the pairs with
i < j, collecting the (i, j)-term and the (j, i)-term for each such pair, and the
previous inequality becomes

∑i<j (xi²yj² + xj²yi² − 2xi yj xj yi ) ≥ 0.

Thus the desired inequality has reduced to a true inequality,

∑i<j (xi yj − xj yi )² ≥ 0.

So the main proof is done, although there is still the question of when equality
holds.
But surely the previous paragraph is not the graceful way to argue. The
computation draws on the minutiae of the formulas for the inner product and
the modulus, rather than using their properties. It is uninformative, making

the Cauchy–Schwarz inequality look like a low-level accident. It suggests that


larger-scale mathematics is just a matter of bigger and bigger formulas. To
prove the inequality in a way that is enlightening and general, we should
work intrinsically, keeping the scalars ⟨x, y⟩ and ∣x∣ and ∣y∣ notated in their
concise forms, and we should use properties, not formulas. The idea is that the
calculation in coordinates reduces to the fact that squares are nonnegative.
That is, the Cauchy–Schwarz inequality is somehow quadratically hard, and its
verification amounted to completing many squares. The argument to be given
here is guided by this insight to prove the inequality by citing facts about
quadratic polynomials, facts established by completing one square back in
high-school algebra at the moment that doing so was called for. Thus we
eliminate redundancy and clutter. So the argument to follow will involve an
auxiliary object, a judiciously chosen quadratic polynomial, but in return it
will become coherent.
Proof. The result is clear when x = 0, so assume x ≠ 0. For every a ∈ R,

0 ≤ ⟨ax − y, ax − y⟩ by positive definiteness


= a⟨x, ax − y⟩ − ⟨y, ax − y⟩ by linearity in the first variable
= a2 ⟨x, x⟩ − a⟨x, y⟩ − a⟨y, x⟩ + ⟨y, y⟩ by linearity in the second variable
= a2 ∣x∣2 − 2a⟨x, y⟩ + ∣y∣2 by symmetry, definition of modulus.

View the right side as a quadratic polynomial in the scalar variable a, where
the scalar coefficients of the polynomial depend on the generic but fixed vec-
tors x and y,
f (a) = ∣x∣2 a2 − 2⟨x, y⟩a + ∣y∣2 .
We have shown that f (a) is always nonnegative, so f has at most one root.
Thus by the quadratic formula its discriminant is nonpositive,

4⟨x, y⟩2 − 4∣x∣2 ∣y∣2 ≤ 0,

and the Cauchy–Schwarz inequality ∣⟨x, y⟩∣ ≤ ∣x∣ ∣y∣ follows. Equality holds
exactly when the quadratic polynomial f (a) = ∣ax − y∣2 has a root a, i.e.,
exactly when y = ax for some a ∈ R. □

Geometrically, the condition for equality in Cauchy–Schwarz is that the
vectors x and y, viewed as arrows at the origin, are parallel, though perhaps
pointing in opposite directions. A geometrically conceived proof of Cauchy–
Schwarz is given in Exercise 2.2.15 to complement the algebraic argument
that has been given here.
The Cauchy–Schwarz inequality shows that the modulus function satisfies
the triangle inequality.
Theorem 2.2.6 (Triangle inequality). For all x, y ∈ Rn ,

∣x + y∣ ≤ ∣x∣ + ∣y∣,

with equality if and only if one of x, y is a nonnegative scalar multiple of the


other.

Proof. To show this, compute

∣x + y∣² = ⟨x + y, x + y⟩
= ∣x∣² + 2⟨x, y⟩ + ∣y∣²    by bilinearity
≤ ∣x∣² + 2∣x∣∣y∣ + ∣y∣²    by Cauchy–Schwarz
= (∣x∣ + ∣y∣)²,

proving the inequality. Equality holds exactly when ⟨x, y⟩ = ∣x∣∣y∣, or equiva-
lently when ∣⟨x, y⟩∣ = ∣x∣∣y∣ and ⟨x, y⟩ ≥ 0. These hold when one of x, y is a
scalar multiple of the other and the scalar is nonnegative. □

While the Cauchy–Schwarz inequality says that the size of the product is
at most the product of the sizes, the triangle inequality says:

The size of the sum is at most the sum of the sizes.

The triangle inequality’s name is explained by its geometric interpretation


in R2 . View x as an arrow at the origin, y as an arrow with tail at the head
of x, and x + y as an arrow at the origin. These three arrows form a triangle,
and the assertion is that the lengths of two sides sum to at least the length of
the third. (See Figure 2.7.)

Figure 2.7. Sides of a triangle

The full triangle inequality says that for all x, y ∈ Rn ,

∣ ∣x∣ − ∣y∣ ∣ ≤ ∣x ± y∣ ≤ ∣x∣ + ∣y∣.

The proof is Exercise 2.2.7.


A small argument, which can be formalized as induction if one is painstak-
ing, shows that the basic triangle inequality extends from two vectors to any
finite number of vectors. For example,

∣x + y + z∣ ≤ ∣x + y∣ + ∣z∣ ≤ ∣x∣ + ∣y∣ + ∣z∣.



The only obstacle to generalizing the basic triangle inequality in this fashion
is notation. The argument can’t use the symbol n to denote the number of
vectors, because n already denotes the dimension of the Euclidean space where
we are working; and furthermore, the vectors can’t be denoted with subscripts
since a subscript denotes a component of an individual vector. Thus, for now
we are stuck writing something like

∣x(1) + ⋯ + x(k) ∣ ≤ ∣x(1) ∣ + ⋯ + ∣x(k) ∣ for all x(1) , . . . , x(k) ∈ Rn ,

or
∣ ∑ᵢ₌₁ᵏ x(i) ∣ ≤ ∑ᵢ₌₁ᵏ ∣x(i) ∣,    x(1) , . . . , x(k) ∈ Rn .

As our work with vectors becomes more intrinsic, vector entries will demand
less of our attention, and we will be able to denote vectors by subscripts. The
notation-change will be implemented in the next section.
For every vector x = (x1 , . . . , xn ) ∈ Rn , useful bounds on the modulus ∣x∣
in terms of the scalar absolute values ∣xi ∣ are as follows.

Proposition 2.2.7 (Size bounds). For every j ∈ {1, . . . , n},


∣xj ∣ ≤ ∣x∣ ≤ ∑ᵢ₌₁ⁿ ∣xi ∣.

The proof (by quick applications of the Cauchy–Schwarz inequality and


the triangle inequality) is Exercise 2.2.8.
The modulus gives rise to a distance function on Rn that behaves as dis-
tance should. Define
d ∶ Rn × Rn Ð→ R
by
d(x, y) = ∣y − x∣.

For example, d(ei , ej ) = √2 (1 − δij ).

Theorem 2.2.8 (Distance properties).


(D1) Distance is positive: d(x, y) ≥ 0 for all x, y ∈ Rn , and d(x, y) = 0 if and
only if x = y.
(D2) Distance is symmetric: d(x, y) = d(y, x) for all x, y ∈ Rn .
(D3) Triangle inequality: d(x, z) ≤ d(x, y) + d(y, z) for all x, y, z ∈ Rn .

(D1) and (D2) are clearly desirable as properties of a distance function.


Property (D3) says that you can’t shorten your trip from x to z by making a
stop at y.

Proof. Exercise 2.2.9. □




The Cauchy–Schwarz inequality also lets us define the angle between two
nonzero vectors in terms of the inner product. If x and y are nonzero vectors
in Rn , define their angle θx,y by the condition

cos θx,y = ⟨x, y⟩/(∣x∣∣y∣),    0 ≤ θx,y ≤ π.    (2.2)

The condition is sensible because −1 ≤ ⟨x, y⟩/(∣x∣∣y∣) ≤ 1 by the Cauchy–Schwarz
inequality. For example, cos θ(1,0),(1,1) = 1/√2, and so θ(1,0),(1,1) = π/4. In partic-
ular, two nonzero vectors x and y are orthogonal when ⟨x, y⟩ = 0. Naturally,
we would like θx,y to correspond to the usual notion of angle, at least in R2 ,
and indeed it does—see Exercise 2.2.10. For convenience, define any two vec-
tors x and y to be orthogonal if ⟨x, y⟩ = 0, thus making 0 orthogonal to all
vectors.
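
In code, the angle formula (2.2) is a direct transcription. This Python sketch (ours) reproduces the example θ(1,0),(1,1) = π/4:

    import math

    def inner(x, y):
        return sum(xi * yi for xi, yi in zip(x, y))

    def angle(x, y):
        # theta_{x,y} = arccos( <x,y> / (|x||y|) ), for nonzero x and y
        return math.acos(inner(x, y) / math.sqrt(inner(x, x) * inner(y, y)))

    print(angle((1, 0), (1, 1)), math.pi / 4)  # both 0.7853981...
    print(inner((1, 0), (0, 1)))               # 0, so e1 and e2 are orthogonal
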
Rephrasing geometry in terms of intrinsic vector algebra not only extends
the geometric notions of length and angle uniformly to any dimension, it also
makes some low-dimensional geometry easier. For example, vectors show in a
natural way that the three altitudes of every triangle must meet. Let x and y
denote two sides of the triangle, making the third side x −y by the head minus
tail mnemonic. Let q be the point where the altitudes to x and y meet. (See
Figure 2.8, which also shows the third altitude.) Thus

q − y ⊥ x and q − x ⊥ y.

We want to show that q also lies on the third altitude, i.e., that

q ⊥ x − y.

To rephrase matters in terms of inner products, we want to show that

{ ⟨q − y, x⟩ = 0 and ⟨q − x, y⟩ = 0 }   ⇒   ⟨q, x − y⟩ = 0.

Since the inner product is linear in each of its arguments, a further rephrasing
is that we want to show that
{ ⟨q, x⟩ = ⟨y, x⟩ and ⟨q, y⟩ = ⟨x, y⟩ }   ⇒   ⟨q, x⟩ = ⟨q, y⟩.

And this is immediate because the inner product is symmetric: ⟨q, x⟩ and ⟨q, y⟩
both equal ⟨x, y⟩, and so they equal each other as desired. The point q where
the three altitudes meet is called the orthocenter of the triangle. In general,
the orthocenter of a triangle is not the geometric center that we considered
in the previous section.

Figure 2.8. Three altitudes of a triangle

Exercises

2.2.1. Let x = (√3/2, −1/2, 0), y = (1/2, √3/2, 1), z = (1, 1, 1). Compute ⟨x, x⟩, ⟨x, y⟩,
⟨y, z⟩, ∣x∣, ∣y∣, ∣z∣, θx,y , θy,e1 , θz,e2 .

2.2.2. Show that the points x = (2, −1, 3, 1), y = (4, 2, 1, 4), z = (1, 3, 6, 1) form
the vertices of a triangle in R4 with two equal angles.

2.2.3. Explain why for all x ∈ Rn , x = ∑ⱼ₌₁ⁿ ⟨x, ej ⟩ej .

2.2.4. Prove the inner product properties.

2.2.5. Use the inner product properties and the definition of the modulus in
terms of the inner product to prove the modulus properties.

2.2.6. In the text, the modulus is defined in terms of the inner product. Prove
that this can be turned around by showing that for every x, y ∈ Rn ,

⟨x, y⟩ = (∣x + y∣² − ∣x − y∣²)/4.
2.2.7. Prove the full triangle inequality: for every x, y ∈ Rn ,

∣ ∣x∣ − ∣y∣ ∣ ≤ ∣x ± y∣ ≤ ∣x∣ + ∣y∣.

Do not do this by writing three more variants of the proof of the triangle in-
equality, but by substituting suitably into the basic triangle inequality, which
is already proved.

2.2.8. Let x = (x1 , . . . , xn ) ∈ Rn . Prove the size bounds: for every j ∈


{1, . . . , n},
∣xj ∣ ≤ ∣x∣ ≤ ∑ᵢ₌₁ⁿ ∣xi ∣.

(One approach is to start by noting that xj = ⟨x, ej ⟩ and recalling equa-


tion (2.1).) When can each “≤” be an “=”?

2.2.9. Prove the distance properties.



2.2.10. Working in R2 , depict the nonzero vectors x and y as arrows from the
origin and depict x − y as an arrow from the endpoint of y to the endpoint
of x. Let θ denote the angle (in the usual geometric sense) between x and y.
Use the law of cosines to show that
cos θ = ⟨x, y⟩/(∣x∣∣y∣),

so that our notion of angle agrees with the geometric one, at least in R2 .

2.2.11. Prove that for every nonzero x ∈ Rn , ∑ᵢ₌₁ⁿ cos² θx,ei = 1.

2.2.12. Prove that two nonzero vectors x, y are orthogonal if and only if
∣x + y∣2 = ∣x∣2 + ∣y∣2 .

2.2.13. Use vectors in R2 to show that the diagonals of a parallelogram are


perpendicular if and only if the parallelogram is a rhombus.

2.2.14. Use vectors to show that every angle inscribed in a semicircle is right.

2.2.15. Let x and y be vectors, with x nonzero. Define the parallel component
of y along x and the normal component of y to x to be

y(∥x) = (⟨x, y⟩/∣x∣²) x    and    y(⊥x) = y − y(∥x) .

(a) Show that y = y(∥x) + y(⊥x) ; show that y(∥x) is a scalar multiple of x; show
that y(⊥x) is orthogonal to x. Show that the decomposition of y as a sum of
vectors parallel and perpendicular to x is unique. Draw an illustration.
(b) Show that
∣y∣2 = ∣y(∥x) ∣2 + ∣y(⊥x) ∣2 .
What theorem from classical geometry does this encompass?
(c) Explain why it follows from (b) that

∣y(∥x) ∣ ≤ ∣y∣,

with equality if and only if y is a scalar multiple of x. Use this inequality to


give another proof of the Cauchy–Schwarz inequality. This argument gives the
geometric content of Cauchy–Schwarz: the parallel component of one vector
along another is at most as long as the original vector.
(d) The proof of the Cauchy–Schwarz inequality in part (c) refers to parts
(a) and (b), part (a) refers to orthogonality, orthogonality refers to an angle,
and as explained in the text, the fact that angles make sense depends on the
Cauchy–Schwarz inequality. And so the proof in part (c) apparently relies on
circular logic. Explain why the logic is in fact not circular.

2.2.16. Given nonzero vectors x1 , x2 , . . . , xn in Rn , the Gram–Schmidt pro-


cess is to set

x′1 = x1
x′2 = x2 − (x2 )(∥x′1 )
x′3 = x3 − (x3 )(∥x′2 ) − (x3 )(∥x′1 )
⋮
x′n = xn − (xn )(∥x′n−1 ) − ⋯ − (xn )(∥x′1 ) .

(a) What is the result of applying the Gram–Schmidt process to the vectors
x1 = (1, 0, 0), x2 = (1, 1, 0), and x3 = (1, 1, 1)?
(b) Returning to the general case, show that x′1 , . . . , x′n are pairwise or-
thogonal and that each x′j has the form

x′j = aj1 x1 + aj2 x2 + ⋯ + aj,j−1 xj−1 + xj .

Thus every linear combination of the new {x′j } is also a linear combination
of the original {xj }. The converse is also true and will be shown in Exer-
cise 3.3.13.
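
For intuition, the Gram–Schmidt process is short to implement. A Python sketch (ours; parallel_component is the y(∥x) of Exercise 2.2.15), run on the vectors of part (a):

    def inner(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    def parallel_component(y, x):
        # y_(||x) = (<x, y> / |x|^2) x
        c = inner(x, y) / inner(x, x)
        return tuple(c * xi for xi in x)

    def gram_schmidt(vectors):
        out = []
        for x in vectors:
            # subtract the parallel components along the vectors produced so far
            for u in out:
                p = parallel_component(x, u)
                x = tuple(xi - pi for xi, pi in zip(x, p))
            out.append(x)
        return out

    print(gram_schmidt([(1, 0, 0), (1, 1, 0), (1, 1, 1)]))
    # [(1, 0, 0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)] -- the standard basis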

2.3 Analysis: Continuous Mappings


A mapping from Rn to Rm is some rule that assigns to each point x in Rn a
point in Rm . Generally, mappings will be denoted by letters such as f , g, h.
When m = 1, we usually say function instead of mapping.
For example, the mapping

f ∶ R2 Ð→ R2

defined by
f (x, y) = (x2 − y 2 , 2xy)
takes the real and imaginary parts of a complex number z = x + iy and returns
the real and imaginary parts of z 2 . By the nature of multiplication of complex
numbers, this means that each output point has modulus equal to the square
of the modulus of the input point and has angle equal to twice the angle of
the input point. Make sure that you see how this is shown in Figure 2.9.
Mappings expressed by formulas may be undefined at certain points (e.g.,
f (x) = 1/∣x∣ is undefined at 0), so we need to restrict their domains. For a given
dimension n, a given set A ⊂ Rn , and a second dimension m, let M(A, Rm )
denote the set of all mappings f ∶ A Ð→ Rm . This set forms a vector space
over R (whose points are functions) under the operations

+ ∶ M(A, Rm ) × M(A, Rm ) Ð→ M(A, Rm ),



Figure 2.9. The complex square as a mapping from R2 to R2

defined by
(f + g)(x) = f (x) + g(x) for all x ∈ A,
and
⋅ ∶ R × M(A, Rm ) Ð→ M(A, Rm ),
defined by
(a ⋅ f )(x) = a ⋅ f (x) for all x ∈ A.
As usual, “+” and “⋅” are overloaded: on the left they denote operations
on M(A, Rm ), while on the right they denote the operations on Rm de-
fined in Section 2.1. Also as usual, the “⋅” is generally omitted. The origin
in M(A, Rm ) is the zero mapping, 0 ∶ A Ð→ Rm , defined by

0(x) = 0m for all x ∈ A.

For example, to verify that M(A, Rm ) satisfies (A1), consider any mappings
f, g, h ∈ M(A, Rm ). For every x ∈ A,

((f + g) + h)(x) = (f + g)(x) + h(x) by definition of “+” in M(A, Rm )


= (f (x) + g(x)) + h(x) by definition of “+” in M(A, Rm )
= f (x) + (g(x) + h(x)) by associativity of “+” in Rm
= f (x) + (g + h)(x) by definition of “+” in M(A, Rm )
= (f + (g + h))(x) by definition of “+” in M(A, Rm ).

Since x is arbitrary, (f + g) + h = f + (g + h).


Let A be a subset of Rn . A sequence in A is an infinite list of vectors
{x1 , x2 , x3 , . . . } in A, often written {xν }. (The symbol n is already in use,
so its Greek counterpart ν—pronounced nu—is used as the index-counter.)
Since a vector has n entries, each vector xν in the sequence takes the form
(x1,ν , . . . , xn,ν ).

Definition 2.3.1 (Null Sequence). The sequence {xν } in Rn is null if for


every ε > 0 there exists some ν0 such that

if ν > ν0 then ∣xν ∣ < ε.

That is, a sequence is null if for every ε > 0, all but finitely many terms of the
sequence lie within distance ε of 0n .

Quickly from the definition, if {xν } is a null sequence in Rn and {yν } is a


sequence in Rn such that ∣yν ∣ ≤ ∣xν ∣ for all ν then also {yν } is null.
Let {xν } and {yν } be null sequences in Rn , and let c be a scalar. Then
the sequence {xν + yν } is null because ∣xν + yν ∣ ≤ ∣xν ∣ + ∣yν ∣ for each ν, and
the sequence {cxν } is null because ∣cxν ∣ = ∣c∣∣xν ∣ for each ν. These two results
show that the set of null sequences in Rn forms a vector space.
For every vector x ∈ Rn the absolute value ∣x∣ is a nonnegative scalar, and
so no further effect is produced by taking the scalar absolute value in turn,

∣ ∣x∣ ∣ = ∣x∣, x ∈ Rn ,

and so a vector sequence {xν } is null if and only if the scalar sequence {∣xν ∣}
is null.

Lemma 2.3.2 (Componentwise nature of nullness). The vector sequence


{(x1,ν , . . . , xn,ν )} is null if and only if each of its component scalar sequences
{xj,ν } (j ∈ {1, . . . , n}) is null.

Proof. By the observation just before the lemma, it suffices to show that
{∣(x1,ν , . . . , xn,ν )∣} is null if and only if each {∣xj,ν ∣} is null. The size bounds
give for every j ∈ {1, . . . , n} and every ν,
∣xj,ν ∣ ≤ ∣(x1,ν , . . . , xn,ν )∣ ≤ ∑ᵢ₌₁ⁿ ∣xi,ν ∣.

If {∣(x1,ν , . . . , xn,ν )∣} is null then by the first inequality, so is each {∣xj,ν ∣}. On
the other hand, if each {∣xj,ν ∣} is null then so is {∑ᵢ₌₁ⁿ ∣xi,ν ∣}, and thus by the
second inequality, {∣(x1,ν , . . . , xn,ν )∣} is null as well. □

We define the convergence of vector sequences in terms of null sequences.

Definition 2.3.3 (Sequence convergence, sequence limit). Let A be a


subset of Rn . Consider a sequence {xν } in A and a point p ∈ Rn . The sequence
{xν } converges to p (or has limit p), written {xν } → p, if the sequence
{xν −p} is null. When the limit p is a point of A, the sequence {xν } converges
in A.

If a sequence {xν } converges to p and also converges to p′ then the constant


sequence {p′ − p} is the difference of the null sequences {xν − p} and {xν − p′ },
hence null, forcing p′ = p. Thus a sequence cannot converge to two distinct
values.
Many texts define convergence directly rather than by reference to nullness,
the key part of the definition being

if ν > ν0 then ∣xν − p∣ < ε.

In particular, a null sequence is a sequence that converges to 0n . However, in


contrast to the situation for null sequences, for p ≠ 0n it is emphatically false
that if {∣xν ∣} converges to ∣p∣ then necessarily {xν } converges to p or even
converges at all. Also, for every nonzero p, the sequences that converge to p
do not form a vector space.
Vector versions of the sum rule and the constant multiple rule for con-
vergent sequences follow immediately from the vector space properties of null
sequences:

Proposition 2.3.4 (Linearity of convergence). Let {xν } be a sequence


in Rn converging to p, let {yν } be a sequence in Rn converging to q, and let c
be a scalar. Then the sequence {xν + yν } converges to p + q, and the sequence
{cxν } converges to cp.

Similarly, since a sequence {xν } converges to p if and only if {xν − p} is


null, we have the following corollary in consequence of the componentwise
nature of nullness (Exercise 2.3.5):

Proposition 2.3.5 (Componentwise nature of convergence). The vec-


tor sequence {(x1,ν , . . . , xn,ν )} converges to the vector (p1 , . . . , pn ) if and
only if each component scalar sequence {xj,ν } (j = 1, . . . , n) converges to the
scalar pj .

Continuity, like convergence, is typographically indistinguishable in R


and Rn .

Definition 2.3.6 (Continuity). Let A be a subset of Rn , let f ∶ A Ð→ Rm be


a mapping, and let p be a point of A. Then f is continuous at p if for every
sequence {xν } in A converging to p, the sequence {f (xν )} converges to f (p).
The mapping f is continuous on A (or just continuous when A is clearly
established) if it is continuous at each point p ∈ A.

For example, the modulus function

∣ ∣ ∶ Rn Ð→ R

is continuous on Rn . To see this, consider any point p ∈ Rn and consider any


sequence {xν } in Rn that converges to p. We need to show that the sequence
{∣xν ∣} in R converges to ∣p∣. But by the full triangle inequality,

∣ ∣xν ∣ − ∣p∣ ∣ ≤ ∣xν − p∣.

Since the right side is the νth term of a null sequence, so is the left, giving
the result.
For another example, let a ∈ Rn be any fixed vector and consider the
function defined by taking the inner product of this vector with other vectors,

T ∶ Rn Ð→ R, T (x) = ⟨a, x⟩.

This function is also continuous on Rn . To see this, again consider any p ∈ Rn


and any sequence {xν } in Rn converging to p. Then the definition of T , the
bilinearity of the inner product, and the Cauchy–Schwarz inequality combine
to show that

∣T (xν ) − T (p)∣ = ∣⟨a, xν ⟩ − ⟨a, p⟩∣ = ∣⟨a, xν − p⟩∣ ≤ ∣a∣ ∣xν − p∣.

Since ∣a∣ is a constant, the right side is the νth term of a null sequence,
whence so is the left, and the proof is complete. We will refer to this example
in Section 3.1. Also, note that as a special case of this example we may take
any j ∈ {1, . . . , n} and set the fixed vector a to ej , showing that the jth
coordinate function map,

πj ∶ Rn Ð→ R, πj (x1 , . . . , xn ) = xj ,

is continuous.
Proposition 2.3.7 (Vector space properties of continuity). Let A be a
subset of Rn , let f, g ∶ A Ð→ Rm be continuous mappings, and let c ∈ R. Then
the sum and the scalar multiple mappings

f + g, cf ∶ A Ð→ Rm

are continuous. Thus the set of continuous mappings from A to Rm forms a


vector subspace of M(A, Rm ).
The vector space properties of continuity follow immediately from the
linearity of convergence and from the definition of continuity. Another conse-
quence of the definition of continuity is as follows.
Proposition 2.3.8 (Persistence of continuity under composition). Let
A be a subset of Rn , and let f ∶ A Ð→ Rm be a continuous mapping. Let B
be a superset of f (A) in Rm , and let g ∶ B Ð→ Rℓ be a continuous mapping.
Then the composition mapping

g ○ f ∶ A Ð→ Rℓ

is continuous.
The proof is Exercise 2.3.7.
Let A be a subset of Rn . Every mapping f ∶ A Ð→ Rm decomposes as m
functions f1 , . . . , fm , with each fi ∶ A Ð→ R, by the formula

f (x) = (f1 (x), . . . , fm (x)).

For example, if f (x, y) = (x2 − y 2 , 2xy) then f1 (x, y) = x2 − y 2 and f2 (x, y) =


2xy. The decomposition of f can also be written
f (x) = ∑ᵢ₌₁ᵐ fi (x)ei ,

or equivalently, the functions fi are defined by the condition

fi (x) = f (x)i for i = 1, . . . , m.

Conversely, given m functions f1 , . . . , fm from A to R, each of the preceding


three displayed formulas assembles a mapping f ∶ A Ð→ Rm . Thus, each map-
ping f determines and is determined by its component functions f1 , . . . , fm .
Conveniently, to check continuity of the vector-valued mapping f we only need
to check its scalar-valued component functions.

Theorem 2.3.9 (Componentwise nature of continuity). Let A ⊂ Rn , let


f ∶ A Ð→ Rm have component functions f1 , . . . , fm , and let p be a point in A.
Then
f is continuous at p ⇐⇒ each fi is continuous at p.

The componentwise nature of continuity follows from the componentwise


nature of convergence and is left as Exercise 2.3.6.
Let A be a subset of Rn , let f and g be continuous functions from A to R,
and let c ∈ R. Then the familiar sum rule, constant multiple rule, product
rule, and quotient rule for continuous functions hold. That is, the sum f + g,
the constant multiple cf , the product f g, and the quotient f /g (at points
p ∈ A such that g(p) ≠ 0) are again continuous. The first two of these facts
are special cases of the vector space properties of continuity. The proofs of
the other two are typographically identical to their one-variable counterparts.
With the various continuity results obtained thus far in hand, it is clear that
a function such as

f ∶ R3 Ð→ R,    f (x, y, z) = sin(√(x² + y² + z²)) / eˣʸ⁺ᶻ
is continuous. The continuity of such functions, and of mappings with such
functions as their components, will go without comment from now on.
However, the continuity of functions of n variables also has new, subtle
features when n > 1. In R, a sequence {xν } can approach the point p in only
two essential ways: from the left and from the right. But in Rn for n ≥ 2, {xν }
can approach p along a line from infinitely many directions, or not approach
along a line at all, and so the convergence of {f (xν )} can be trickier. For
example, consider the function f ∶ R2 Ð→ R defined by


f (x, y) = ⎧ 2xy/(x² + y²)   if (x, y) ≠ 0,
           ⎩ b               if (x, y) = 0.

Can the constant b be specified to make f continuous at 0?



It can’t. Take a sequence {(xν , yν )} approaching 0 along the line y = mx


of slope m. For every point (xν , yν ) of this sequence,
f (xν , yν ) = f (xν , mxν ) = 2xν (mxν )/(xν² + m²xν²) = 2mxν²/((1 + m²)xν²) = 2m/(1 + m²).

Thus, as the sequence of inputs {(xν , yν )} approaches 0 along the line of


slope m, the corresponding sequence of outputs {f (xν , yν )} holds steady
at 2m/(1 + m2 ), and so f (0) needs to take this value for continuity. Taking
input sequences {(xν , yν )} that approach 0 along lines of different slope shows
that f (0) needs to take different values for continuity, and hence f cannot be
made continuous at 0. The graph of f away from 0 is a sort of spiral staircase,
and no height over 0 is compatible with all the stairs. (See Figure 2.10. The
figure displays only the portion of the graph for slopes between 0 and 1 in
the input plane.) The reader who wants to work a virtually identical example
can replace the formula 2xy/(x2 + y 2 ) in f by (x2 − y 2 )/(x2 + y 2 ) and run the
same procedure as in this paragraph.

Figure 2.10. A spiral staircase

The previous example was actually fairly simple in that we only needed
to study f (x, y) as (x, y) approached 0 along straight lines. Consider the
function g ∶ R2 Ð→ R defined by



g(x, y) = ⎧ x²y/(x⁴ + y²)   if (x, y) ≠ 0,
          ⎩ b               if (x, y) = 0.

For a nonzero slope m, take a sequence {(xν , yν )} approaching 0 along the


line y = mx. Compute that for each point of this sequence,
g(xν , yν ) = g(xν , mxν ) = mxν³/(xν⁴ + m²xν²) = mxν /(xν² + m²).
This quantity tends to 0 as xν goes to 0. That is, as the sequence of inputs
{(xν , yν )} approaches 0 along the line of slope m, the corresponding sequence

of outputs {g(xν , yν )} approaches 0, and so g(0) needs to take the value 0


for continuity. Since g is 0 at the nonzero points of either axis in the (x, y)-
plane, this requirement extends to the cases that {(xν , yν )} approaches 0 along
a horizontal or vertical line. However, next consider a sequence {(xν , yν )}
approaching 0 along the parabola y = x2 . For each point of this sequence,

g(xν , yν ) = g(xν , xν²) = xν⁴/(xν⁴ + xν⁴) = 1/2.

Thus, as the sequence of inputs {(xν , yν )} approaches 0 along the parabola,


the corresponding sequence of outputs {g(xν , yν )} holds steady at 1/2, and so
g(0) needs to be 1/2 for continuity as well. Thus g cannot be made continuous
at 0, even though approaching 0 only along lines suggests that it can. The
reader who wants to work a virtually identical example can replace the formula
x2 y/(x4 + y 2 ) in g by x3 y/(x6 + y 2 ) and run the same procedure as in this
paragraph but using the curve y = x3 .
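
These diagnostics are easy to run numerically. The following Python sketch (ours) samples f deep along lines of several slopes, and samples g along lines and along the parabola:

    def f(x, y):
        return 2 * x * y / (x**2 + y**2)

    def g(x, y):
        return x**2 * y / (x**4 + y**2)

    x = 1e-8                                    # a point very near 0
    for m in (0.5, 1.0, 2.0):
        print(f(x, m * x), 2 * m / (1 + m**2))  # line test: the value depends on m
        print(g(x, m * x))                      # essentially 0 along every line
    print(g(x, x**2))                           # 0.5 along the parabola y = x^2
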
Thus, given a function f ∶ R2 Ð→ R, letting {(xν , yν )} approach 0 along
lines can disprove continuity at 0, but it can only suggest continuity at 0, not
prove it. To prove continuity, the size bounds may be helpful. For example,

let


h(x, y) = ⎧ x³/(x² + y²)   if (x, y) ≠ 0,
          ⎩ b              if (x, y) = 0.
Can b be specified to make h continuous at 0? The estimate ∣x∣ ≤ ∣(x, y)∣ gives
for every (x, y) ≠ 0,

0 ≤ ∣h(x, y)∣ = ∣x³∣/(x² + y²) = ∣x∣³/∣(x, y)∣² ≤ ∣(x, y)∣³/∣(x, y)∣² = ∣(x, y)∣,

so as a sequence {(xν , yν )} of nonzero input vectors converges to 0, the cor-


responding sequence of outputs {h(xν , yν )} is squeezed to 0 in absolute value
and hence converges to 0. Setting b = 0 makes h continuous at 0. The reader
who wants to work a virtually identical example can replace the formula
x3 /(x2 + y 2 ) in h by x2 y 2 /(x4 + y 2 ) and run the same procedure as in this
paragraph but applying the size bounds to vectors (x2ν , yν ).
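
The squeeze can also be probed by sampling. The following Python sketch (ours) confirms the bound ∣h(x, y)∣ ≤ ∣(x, y)∣ at randomly chosen inputs (which are nonzero with probability 1):

    import math, random

    def h(x, y):
        return x**3 / (x**2 + y**2)

    for _ in range(5):
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        print(abs(h(x, y)) <= math.hypot(x, y))   # True every time
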
Returning to the spiral staircase example,


f (x, y) = ⎧ 2xy/(x² + y²)   if (x, y) ≠ 0,
           ⎩ b               if (x, y) = 0,

the size bounds show that for every (x, y) ≠ 0,

0 ≤ ∣f (x, y)∣ = 2∣x∣∣y∣/∣(x, y)∣² ≤ 2∣(x, y)∣²/∣(x, y)∣² = 2.

The display tells us only that as a sequence of inputs {(xν , yν )} approaches 0,


the sequence of outputs {f (xν , yν )} might converge to some limit between −2
and 2. The outputs needn’t converge to 0 (or converge at all), but according
to this diagnostic they possibly could. Thus the size bounds tell us only that
f could be discontinuous at (0, 0), but they give no conclusive information.
In sum, these examples illustrate three ideas.
• The straight line test can prove that a limit does not exist, or it can
determine the only candidate for the value of the limit, but it cannot
prove that the candidate value is the limit.
• When the straight line test determines a candidate value of the limit,
approaching along a curve can further support the candidate, or it can
prove that the limit does not exist by determining a different candidate as
well.
• The size bounds can prove that a limit does exist, but they can only suggest
that a limit does not exist.

The next proposition is a handy encoding of an intuitively plausible prop-


erty of continuous mappings. The result is so natural that it often is tacitly
taken for granted, but it is worth stating and proving carefully.

Proposition 2.3.10 (Persistence of inequality). Let A be a subset of Rn


and let f ∶ A Ð→ Rm be a continuous mapping. Let p be a point of A, let b be
a point of Rm , and suppose that f (p) ≠ b. Then there exists some ε > 0 such
that
for all x ∈ A such that ∣x − p∣ < ε, f (x) ≠ b.

Proof. Assume that the displayed statement in the proposition fails for ev-
ery ε > 0. Then in particular, it fails for ε = 1/ν for ν = 1, 2, 3, . . . . So there is
a sequence {xν } in A such that

∣xν − p∣ < 1/ν and f (xν ) = b, ν = 1, 2, 3, . . . .

Since f is continuous at p, this condition shows that f (p) = b. But in fact


f (p) ≠ b, and so our assumption that the displayed statement in the propo-
sition fails for every ε > 0 leads to a contradiction. Therefore the statement
holds for some ε > 0, as desired. □

Exercises

2.3.1. For A ⊂ Rn , partially verify that M(A, Rm ) is a vector space over R


by showing that it satisfies vector space axioms (A4) and (D1).

2.3.2. Define multiplication ∗ ∶ M(A, R)×M(A, R) Ð→ M(A, R). Is M(A, R)


a field with “+” from the section and this multiplication? Does it have a
subspace that is a field?

2.3.3. For A ⊂ Rn and m ∈ Z+ define a subspace of the space of mappings


from A to Rm ,

C(A, Rm ) = {f ∈ M(A, Rm ) ∶ f is continuous on A}.

Briefly explain how this section has shown that C(A, Rm ) is a vector space.

2.3.4. Define an inner product and a modulus on C([0, 1], R) by

⟨f, g⟩ = ∫₀¹ f (t)g(t) dt,    ∣f ∣ = √⟨f, f ⟩.
Do the inner product properties (IP1), (IP2), and (IP3) (see Proposition 2.2.2)
hold for this inner product on C([0, 1], R)? How much of the material from Sec-
tion 2.2 on the inner product and modulus in Rn carries over to C([0, 1], R)?
Express the Cauchy–Schwarz inequality as a relation between integrals.

2.3.5. Use the definition of convergence and the componentwise nature of


nullness to prove the componentwise nature of convergence. (The argument is
short.)

2.3.6. Use the definition of continuity and the componentwise nature of con-
vergence to prove the componentwise nature of continuity.

2.3.7. Prove the persistence of continuity under composition.

2.3.8. Define f ∶ Q Ð→ R by the rule




f (x) = ⎧ 1   if x² < 2,
        ⎩ 0   if x² > 2.

Is f continuous?

2.3.9. Which of the following functions on R2 can be defined continuously


at 0?

f (x, y) = ⎧ (x⁴ − y⁴)/(x² + y²)²   if (x, y) ≠ 0,
           ⎩ b                      if (x, y) = 0,

g(x, y) = ⎧ (x² − y³)/(x² + y²)   if (x, y) ≠ 0,
          ⎩ b                     if (x, y) = 0,

h(x, y) = ⎧ (x³ − y³)/(x² + y²)   if (x, y) ≠ 0,
          ⎩ b                     if (x, y) = 0,

k(x, y) = ⎧ xy²/(x² + y⁶)   if (x, y) ≠ 0,
          ⎩ b               if (x, y) = 0.

2.3.10. Let f (x, y) = g(xy), where g ∶ R Ð→ R is continuous. Is f continuous?

2.3.11. Let f, g ∈ M(Rn , R) be such that f + g and f g are continuous. Are f


and g necessarily continuous?

2.4 Topology: Compact Sets and Continuity


The extreme value theorem from one-variable calculus states:
Let I be a nonempty closed and bounded interval in R, and let f ∶
I Ð→ R be a continuous function. Then f takes a minimum value and
a maximum value on I.
This section generalizes the theorem from scalars to vectors. That is, we want
a result that if A is a set in Rn with certain properties, and if f ∶ A Ð→ Rm
is a continuous mapping, then the output set f (A) will also have certain
properties. The questions are, for what sorts of properties do such statements
hold, and when they hold, how do we prove them?
The one-variable theorem hypothesizes two data, the nonempty closed and
bounded interval I and the continuous function f . Each of these is described
in its own terms—I takes the readily recognizable but static form [a, b] where
a ≤ b, while the continuity of f is a dynamic assertion about convergence
of sequences. Because the two data have differently phrased descriptions, a
proof of the extreme value theorem doesn’t suggest itself immediately: no
ideas at hand bear obviously on all the given information. Thus the work of
this section is not only to define the sets to appear in the pending theorem, but
also to describe them in terms of sequences, compatibly with the sequential
description of continuous mappings. The theorem itself will then be easy to
prove. Accordingly, most of the section will be spent describing sets in two
ways—in terms that are easy to recognize, and in sequential language that
dovetails with continuity.
We begin with a little machinery to quantify the intuitive notion of near-
ness.
Definition 2.4.1 (ε-ball). For every point p ∈ Rn and every positive real
number ε > 0, the ε-ball centered at p is the set
B(p, ε) = {x ∈ Rn ∶ ∣x − p∣ < ε} .
(See Figure 2.11.)

Figure 2.11. Balls in various dimensions

With ε-balls it is easy to describe the points that are approached by a


set A.

Definition 2.4.2 (Limit point). Let A be a subset of Rn , and let p be a


point of Rn . The point p is a limit point of A if every ε-ball centered at p
contains some point x ∈ A such that x ≠ p.
A limit point of A need not belong to A (Exercise 2.4.2). On the other
hand, a point in A need not be a limit point of A (Exercise 2.4.2 again); such
a point is called an isolated point of A. Equivalently, p is an isolated point
of A if p ∈ A and there exists some ε > 0 such that B(p, ε) ∩ A = {p}. The next
lemma justifies the nomenclature of the previous definition: limit points of A
are precisely the (nontrivial) limits of sequences in A.
Lemma 2.4.3 (Sequential characterization of limit points). Let A be
a subset of Rn , and let p be a point of Rn . Then p is the limit of a sequence
{xν } in A with each xν ≠ p if and only if p is a limit point of A.
Proof. ( Ô⇒ ) If p is the limit of a sequence {xν } in A with each xν ≠ p then
every ε-ball about p contains an xν (in fact, infinitely many), so p is a limit
point of A.
( ⇐Ô ) Conversely, if p is a limit point of A then B(p, 1/2) contains some
x1 ∈ A, x1 ≠ p. Let ε2 = ∣x1 − p∣/2. The ball B(p, ε2 ) contains some x2 ∈ A,
x2 ≠ p. Let ε3 = ∣x2 − p∣/2 and continue defining a sequence {xν } in this
fashion with ∣xν − p∣ < 1/2^ν for all ν. This sequence converges to p, and xν ≠ p
for each xν . ⊔

The lemma shows that Definition 2.4.2 is more powerful than it appears—
every ε-ball centered at a limit point of A contains not only one but infinitely
many points of A.
Definition 2.4.4 (Closed set). A subset A of Rn is closed if it contains
all of its limit points.
For example, the x1 -axis is closed as a subset of Rn , because every point
off the axis is surrounded by a ball that misses the axis—that is, every point
off the axis is not a limit point of the axis, i.e., the axis is not missing any
of its limit points, i.e., the axis contains all of its limit points. The interval
(0, 1) is not closed because it does not contain the limit points at its ends.
These examples illustrate the fact that with a little practice it becomes easy
to recognize quickly whether a set is closed. Loosely speaking, a set is closed
when it contains all the points that it seems to want to contain.
Proposition 2.4.5 (Sequential characterization of closed sets). Let A
be a subset of Rn . Then A is closed if and only if every sequence in A that
converges in Rn in fact converges in A.
Proof. ( Ô⇒ ) Suppose that A is closed, and let {xν } be a sequence in A
converging in Rn to p. If xν = p for some ν then p ∈ A because xν ∈ A; and if
xν ≠ p for all ν then p is a limit point of A by “ Ô⇒ ” of Lemma 2.4.3, and
so p ∈ A because A is closed.

( ⇐Ô ) Conversely, suppose that every convergent sequence in A has its


limit in A. Then all limit points of A are in A by “ ⇐Ô ” of Lemma 2.4.3,
and so A is closed. ⊔

The proposition equates an easily recognizable condition that we can un-
derstand intuitively (a set being closed) with a sequential characterization
that we can use in further arguments. Note that the sequential characteriza-
tion of a closed set A refers not only to A but also to the ambient space Rn
in which A lies. We will return to this point later in this section.
Closed sets do not necessarily have good properties under continuous map-
pings. So next we describe another class of sets, the bounded sets. Bounded-
ness is again an easily recognizable condition that also has a characterization
in terms of sequences. The sequential characterization will turn out to be
complementary to the sequential characterization of closed sets, foreshadow-
ing that the properties of being closed and bounded will work well together.
Definition 2.4.6 (Bounded set). A set A in Rn is bounded if A ⊂ B(0, R)
for some R > 0.
Thus a bounded set is enclosed in some finite corral centered at the origin,
possibly a very big one. For example, every ball B(p, ε), not necessarily cen-
tered at the origin, is bounded, by a nice application of the triangle inequality
(Exercise 2.4.5). On the other hand, the Archimedean property of the real
number system says that Z is an unbounded subset of R. The size bounds
show that a subset of Rn is bounded if and only if the jth coordinates of
its points form a bounded subset of R for each j ∈ {1, . . . , n}. The geometric
content of this statement is that a set sits inside a ball centered at the origin
if and only if it sits inside a box centered at the origin.
Blurring the distinction between a sequence and the set of its elements
allows the definition of boundedness to apply to sequences. That is, a sequence
{xν } is bounded if there is some R > 0 such that ∣xν ∣ < R for all ν ∈ Z+ . The
proof of the next fact in Rn is symbol-for-symbol the same as in R (or in C),
so it is only sketched.
Proposition 2.4.7 (Convergence implies boundedness). If the sequence
{xν } converges in Rn then it is bounded.
Proof. Let {xν } converge to p. Then there exists a starting index ν0 such that
xν ∈ B(p, 1) for all ν > ν0 . Consider any real number R such that
R > max{∣x1 ∣, . . . , ∣xν0 ∣, ∣p∣ + 1}.
Then clearly xν ∈ B(0, R) for ν = 1, . . . , ν0 , and the triangle inequality shows
that also xν ∈ B(0, R) for all ν > ν0 . Thus {xν } ⊂ B(0, R) as a set. ⊔

Definition 2.4.8 (Subsequence). A subsequence of the sequence {xν } is
a sequence consisting of some (possibly all) of the original terms, in ascending
order of indices.

Since a subsequence of {xν } consists of terms xν only for some values of ν,


it is often written {xνk }, where now k is the index variable. For example, given
the sequence
{x1 , x2 , x3 , x4 , x5 , . . .} ,
a subsequence is
{x2 , x3 , x5 , x7 , x11 , . . . },
with ν1 = 2, ν2 = 3, ν3 = 5, and generally νk = the kth prime.

Lemma 2.4.9 (Persistence of convergence). Let {xν } converge to p.
Then every subsequence {xνk } also converges to p.

Proof. The hypothesis that {xν } converges to p means that for every given
ε > 0, only finitely many sequence-terms xν lie outside the ball B(p, ε). Con-
sequently, only finitely many subsequence-terms xνk lie outside B(p, ε), which
is to say that {xνk } converges to p. ⊔

The sequence property that characterizes bounded sets is called the


Bolzano–Weierstrass property. Once it is proved in R, the result follows
in Rn by arguing one component at a time.

Theorem 2.4.10 (Bolzano–Weierstrass property in R). Let A be a
bounded subset of R. Then every sequence in A has a convergent subsequence.

Proof. Let {xν } be a sequence in A. Call a term xν of the sequence a max-point
if it is at least as big as all later terms, i.e., xν ≥ xµ for all µ > ν.
(For visual intuition, draw a graph plotting xν as a function of ν, with line
segments connecting consecutive points. A max-point is a peak of the graph at
least as high as all points to its right.) If there are infinitely many max-points
in {xν } then these form a decreasing sequence. If there are only finitely many
max-points then {xν } has an increasing sequence starting after the last max-
point—this follows almost immediately from the definition of max-point. In
either case, {xν } has a monotonic subsequence that, being bounded, converges
because the real number system is complete. ⊔
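
The max-point device of this proof is essentially an algorithm, and it can be
watched in action. The sketch below is a finite-horizon illustration only (a
genuine max-point condition quantifies over all later terms, which no finite
computation checks), with the bounded sequence sin 1, sin 2, . . . chosen
arbitrarily:

import math

N = 200
x = [math.sin(nu) for nu in range(1, N + 1)]  # a bounded sequence in [-1, 1]

# Finite-horizon stand-in for max-points: at least as big as every later term in view.
max_points = [i for i in range(N) if all(x[i] >= x[j] for j in range(i + 1, N))]

if len(max_points) >= 2:
    # Max-points dominate all later terms, so they form a nonincreasing subsequence.
    sub = [x[i] for i in max_points]
else:
    # Otherwise extract a nondecreasing subsequence greedily.
    sub = [x[0]]
    for j in range(1, N):
        if x[j] >= sub[-1]:
            sub.append(x[j])

assert all(a >= b for a, b in zip(sub, sub[1:])) or \
       all(a <= b for a, b in zip(sub, sub[1:]))
print(len(sub), "monotonic terms extracted")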

Theorem 2.4.11 (Bolzano–Weierstrass property in Rn : sequential
characterization of bounded sets). Let A be a subset of Rn . Then A
is bounded if and only if every sequence in A has a subsequence that converges
in Rn .

Proof. ( Ô⇒ ) Suppose that A is bounded. Consider any sequence {xν }


in A, written as {(x1,ν , . . . , xn,ν )}. The real sequence {x1,ν } takes values in
a bounded subset of R and thus has a convergent subsequence, {x1,νk }. The
subscripts are getting out of hand, so keep only the νk th terms of the orig-
inal sequence and relabel it. In other words, we may as well assume that
the sequence of first components, {x1,ν }, converges. The real sequence of
second components, {x2,ν }, in turn has a convergent subsequence, and by

Lemma 2.4.9 the corresponding subsequence of first components, {x1,ν }, converges
too. Relabeling again, we may assume that {x1,ν } and {x2,ν } both
converge. Continuing in this fashion n − 2 more times exhibits a subsequence
of {xν } that converges at each component.
( ⇐Ô ) Conversely, suppose that A is not bounded. Then there is a se-
quence {xν } in A with ∣xν ∣ > ν for all ν. This sequence has no bounded subse-
quence, and hence it has no convergent subsequence by Proposition 2.4.7. ⊓ ⊔

Note how the sequential characterizations in Proposition 2.4.5 and in the


Bolzano–Weierstrass property complement each other. The proposition char-
acterizes every closed set in Rn by the fact that if a sequence converges in the
ambient space then it converges in the set. The Bolzano–Weierstrass property
characterizes every bounded set in Rn by the fact that every sequence in the
set has a subsequence that converges in the ambient space but not necessarily
in the set. Both the sequential characterization of a closed set and the sequen-
tial characterization of a bounded set refer to the ambient space Rn in which
the set lies. We will return to this point once more in this section.

Definition 2.4.12 (Compact set). A subset K of Rn is compact if it is
closed and bounded.

Since the static notions of closed and bounded are reasonably intuitive, we
can usually recognize compact sets on sight. But it is not obvious from how
compact sets look that they are related to continuity. So our program now
has two steps: first, combine Proposition 2.4.5 and the Bolzano–Weierstrass
property to characterize compact sets in terms of sequences, and second, use
the characterization to prove that compactness is preserved by continuous
mappings.

Theorem 2.4.13 (Sequential characterization of compact sets). Let
K be a subset of Rn . Then K is compact if and only if every sequence in K
has a subsequence that converges in K.

Proof. ( Ô⇒ ) We show that the sequential characterizations of closed and


bounded sets together imply the claimed sequential characterization of com-
pact sets. Suppose that K is compact and {xν } is a sequence in K. Then K is
bounded, so by “ Ô⇒ ” of the Bolzano–Weierstrass property, {xν } has a con-
vergent subsequence. But K is also closed, so by “ Ô⇒ ” of Proposition 2.4.5,
this subsequence converges in K.
( ⇐Ô ) Conversely, we show that the claimed sequential characterization of
compact sets subsumes the sequential characterizations of closed and bounded
sets. Thus, suppose that every sequence in K has a subsequence that converges
in K. Then in particular, every sequence in K that converges in Rn has a sub-
sequence that converges in K. By Lemma 2.4.9 the limit of the sequence is
the limit of the subsequence, so the sequence converges in K. That is, every
sequence in K that converges in Rn converges in K, and hence K is closed

by “ ⇐Ô ” of Proposition 2.4.5. Also in consequence of the claimed sequential
property of compact sets, every sequence in K has a subsequence that
converges in Rn . Thus K is bounded by “ ⇐Ô ” of the Bolzano–Weierstrass
Property. ⊔

By contrast to the sequential characterizations of a closed set and of a


bounded set, the sequential characterization of a compact set K makes no
reference to the ambient space Rn in which K lies. A set’s property of being
compact is innate in a way that a set’s property of being closed or of being
bounded is not.
The next theorem is the main result of this section. Now that all of the
objects involved are described in the common language of sequences, its proof
is natural.

Theorem 2.4.14 (The continuous image of a compact set is compact).
Let K be a compact subset of Rn and let f ∶ K Ð→ Rm be continuous.
Then f (K), the image set of K under f , is a compact subset of Rm .

Proof. Let {yν } be any sequence in f (K); by “ ⇐Ô ” of Theorem 2.4.13, it


suffices to exhibit a subsequence converging in f (K). Each yν has the form
f (xν ), and this defines a sequence {xν } in K. By “ Ô⇒ ” of Theorem 2.4.13,
since K is compact, {xν } necessarily has a subsequence {xνk } converging in K,
say to p. By the continuity of f at p, the sequence {f (xνk )} converges in f (K)
to f (p). Since {f (xνk )} is a subsequence of {yν }, the proof is complete. ⊔

Again, the sets in Theorem 2.4.14 are defined with no direct reference to
sequences, but the theorem is proved entirely using sequences. The point is
that with the theorem proved, we can easily see that it applies in particular
contexts without having to think any longer about the sequences that were
used to prove it.
A corollary of Theorem 2.4.14 generalizes the theorem that was quoted to
begin the section:

Theorem 2.4.15 (Extreme value theorem). Let K be a nonempty compact
subset of Rn and let the function f ∶ K Ð→ R be continuous. Then f
takes a minimum and a maximum value on K.

Proof. By Theorem 2.4.14, f (K) is a compact subset of R. As a nonempty


bounded subset of R, f (K) has a greatest lower bound and a least upper
bound by the completeness of the real number system. Each of these bounds
is an isolated point or a limit point of f (K), since otherwise some ε-ball
about it would be disjoint from f (K), giving rise to greater lower bounds or
lesser upper bounds of f (K). Because f (K) is also closed, it contains its limit
points, so in particular it contains its greatest lower bound and its least upper
bound. This means precisely that f takes a minimum and a maximum value
on K. ⊔


Even when n = 1, Theorem 2.4.15 generalizes the extreme value theorem


from the beginning of the section. In the theorem here, K can be a finite union
of closed and bounded intervals in R rather than only one interval, or K can
be a more complicated set, provided only that it is compact.
A topological property of sets is a property that is preserved under continu-
ity. Theorem 2.4.14 says that compactness is a topological property. Neither
the property of being closed nor the property of being bounded is in itself
topological. That is, the continuous image of a closed set need not be closed,
and the continuous image of a bounded set need not be bounded; for that
matter, the continuous image of a closed set need not be bounded, and the
continuous image of a bounded set need not be closed (Exercise 2.4.8).
The nomenclature continuous image in the slogan-title of Theorem 2.4.14
and in the previous paragraph is, strictly speaking, inaccurate: the image of
a mapping is a set, and the notion of a set being continuous doesn’t even
make sense according to our grammar. As stated correctly in the body of the
theorem, continuous image is short for image under a continuous mapping.
The property that students often have in mind when they call a set continu-
ous is in fact called connectedness. Loosely, a set is connected if it has only one
piece, so that a better approximating word from everyday language is contigu-
ous. To define connectedness accurately, we would have to use methodology
exactly opposite that of this section: rather than relate sets to continuous
mappings by characterizing the sets in terms of sequences, the idea is to turn
the whole business around and characterize continuous mappings in terms of
sets, specifically in terms of open balls. However, the process of doing so, and
then characterizing compact sets in terms of open balls as well, is trickier
than characterizing sets in terms of sequences; and so we omit it because we
do not need connectedness. Indeed, the remark after Theorem 2.4.15 points
out that connectedness is unnecessary even for the one-variable extreme value
theorem.
However, it deserves passing mention that connectedness is also a topologi-
cal property: again using language loosely, the continuous image of a connected
set is connected. This statement generalizes another theorem that underlies
one-variable calculus, the intermediate value theorem. For a notion related
to connectedness that is easily shown to be a topological property, see Exer-
cise 2.4.10.
The ideas of this section readily extend to broader environments. The first
generalization of Euclidean space is a metric space, a set with a well-behaved
distance function. Even more general is a topological space, a set with some
of its subsets designated as closed. Continuous functions, compact sets, and
connected sets can be defined meaningfully in these environments, and the
theorems remain the same: the continuous image of a compact set is compact,
and the continuous image of a connected set is connected.

Exercises
2.4.1. Are the following subsets of Rn closed, bounded, compact?
(a) B(0, 1),
(b) {(x, y) ∈ R2 ∶ y − x² = 0},
(c) {(x, y, z) ∈ R3 ∶ x² + y² + z² − 1 = 0},
(d) {x ∶ f (x) = 0m }, where f ∈ M(Rn , Rm ) is continuous (this generalizes
(b) and (c)),
(e) Qn where Q denotes the rational numbers,
(f) {(x1 , . . . , xn ) ∶ x1 + ⋯ + xn > 0}.
2.4.2. Give a set A ⊂ Rn and a limit point b of A such that b ∉ A. Give a set
A ⊂ Rn and a point a ∈ A such that a is not a limit point of A.
2.4.3. Let A be a closed subset of Rn and let f ∈ M(A, Rm ). Define the
graph of f to be
G(f ) = {(a, f (a)) ∶ a ∈ A},
a subset of Rn+m . Show that if f is continuous then its graph is closed.
2.4.4. Prove the closed set properties: (1) the empty set ∅ and the full space
Rn are closed subsets of Rn ; (2) every intersection of closed sets is closed; (3)
every finite union of closed sets is closed.
2.4.5. Prove that every ball B(p, ε) is bounded in Rn .
2.4.6. Show that A is a bounded subset of Rn if and only if for each j ∈
{1, . . . , n}, the jth coordinates of its points form a bounded subset of R.
2.4.7. Show by example that a closed set need not satisfy the sequential char-
acterization of bounded sets, and that a bounded set need not satisfy the
sequential characterization of closed sets.
2.4.8. Show by example that the continuous image of a closed set need not
be closed, that the continuous image of a closed set need not be bounded,
that the continuous image of a bounded set need not be closed, and that the
continuous image of a bounded set need not be bounded.
2.4.9. A subset A of Rn is called discrete if each of its points is isolated.
(Recall that the term isolated was defined in this section.) Show or take for
granted the (perhaps surprising at first) fact that every mapping whose do-
main is discrete must be continuous. Is discreteness a topological property?
That is, need the continuous image of a discrete set be discrete?
2.4.10. A subset A of Rn is called path-connected if for every two points
x, y ∈ A, there is a continuous mapping
γ ∶ [0, 1] Ð→ A
such that γ(0) = x and γ(1) = y. (This γ is the path that connects x and y.)
Draw a picture to illustrate the definition of a path-connected set. Prove that
path-connectedness is a topological property.
3 Linear Mappings and Their Matrices

The basic idea of differential calculus is to approximate smooth-but-curved
objects in the small by straight ones. To prepare for doing so, this chapter
studies the multivariable analogues of lines. With one variable, lines are easily
manipulated by explicit formulas (e.g., the point–slope form is y = mx + b),
but with many variables we want to use the language of mappings. Section 3.1
gives an algebraic description of “straight” mappings, the linear mappings,
proceeding from an intrinsic definition to a description in coordinates. Each
linear mapping is described by a box of numbers called a matrix, so Section 3.2
derives mechanical matrix manipulations corresponding to the natural ideas of
adding, scaling, and composing linear mappings. Section 3.3 discusses in ma-
trix terms the question whether a linear mapping has an inverse, i.e., whether
there is a second linear mapping such that each undoes the other’s effect. Sec-
tion 3.5 discusses the determinant, an elaborate matrix-to-scalar function that
extracts from a linear mapping a single number with remarkable properties:
• (Linear invertibility theorem) The mapping is invertible if and only if the
determinant is nonzero.
• An explicit formula for the inverse of an invertible linear mapping can be
written using the determinant (Section 3.7).
• The factor by which the mapping magnifies volume is the absolute value
of the determinant (Section 3.8).
• The mapping preserves or reverses orientation according to the sign of the
determinant (Section 3.9). Here orientation is an algebraic generalization of
clockwise versus counterclockwise in the plane and of right-handed versus
left-handed in space.
Finally, Section 3.10 defines the cross product (a vector-by-vector multiplica-
tion special to three dimensions) and uses it to derive formulas for lines and
planes in space.


3.1 Linear Mappings


The simplest interesting mappings from Rn to Rm are those whose output is
proportional to their input, the linear mappings. Proportionality means that
a linear mapping should take a sum of inputs to the corresponding sum of
outputs,
T (x + y) = T (x) + T (y) for all x, y ∈ Rn , (3.1)
and a linear mapping should take a scaled input to the correspondingly scaled
output,
T (αx) = αT (x) for all α ∈ R, x ∈ Rn . (3.2)
(Here we use the symbol α because a will be used heavily in other ways during
this chapter.) More formally, the definition of a linear mapping is as follows.

Definition 3.1.1 (Linear mapping). The mapping T ∶ Rn Ð→ Rm is
linear if
    T ( ∑_{i=1}^{k} αi xi ) = ∑_{i=1}^{k} αi T (xi )

for all positive integers k, all real numbers α1 through αk , and all vectors x1
through xk .

The reader may find this definition discomfiting. It does not say what form
a linear mapping takes, and this raises some immediate questions. How are we
to recognize linear mappings when we encounter them? Or are we supposed to
think about them without knowing what they look like? For that matter, are
there even any linear mappings to encounter? Another troublesome aspect of
Definition 3.1.1 is semantic: despite the geometric sound of the word linear,
the definition is in fact algebraic, describing how T behaves with respect to
the algebraic operations of vector addition and scalar multiplication. (Note
that on the left of the equality in the definition, the operations are set in Rn ,
while on the right they are in Rm .) So what is the connection between the
definition and actual lines? Finally, how exactly do conditions (3.1) and (3.2)
relate to the condition in the definition?
On the other hand, Definition 3.1.1 has the virtue of illustrating the prin-
ciple that to do mathematics effectively we should characterize our objects
rather than construct them. The characterizations are admittedly guided by
hindsight, but there is nothing wrong with that. Definition 3.1.1 says how
a linear mapping behaves. It says that whatever form linear mappings will
turn out to take, our reflex should be to think of them as mappings through
which we can pass sums and constants. (This idea explains why one of the
inner product properties is called bilinearity: the inner product is linear as a
function of either of its two vector variables when the other variable is held
fixed.) The definition of linearity tells us how to use linear mappings once we
know what they are, or even before we know what they are. Another virtue
of Definition 3.1.1 is that it is intrinsic, making no reference to coordinates.
3.1 Linear Mappings 61

Some of the questions raised by Definition 3.1.1 have quick answers. The
connection between the definition and actual lines will quickly emerge from our
pending investigations. Also, an induction argument shows that (3.1) and (3.2)
are equivalent to the characterization in the definition, despite appearing
weaker (Exercise 3.1.1). Thus, to verify that a mapping is linear, we only
need to show that it satisfies the easier-to-check conditions (3.1) and (3.2);
but to derive properties of mappings that are known to be linear, we may want
to use the more powerful condition in the definition. As for finding linear map-
pings, the definition suggests a two-step strategy: first, derive the form that
a linear mapping necessarily takes in consequence of satisfying the definition;
and second, verify that the mappings of that form are indeed linear, i.e., show
that the necessary form of a linear mapping is also sufficient for a mapping
to be linear. We now turn to this.
The easiest case to study is linear mappings from R to R. Following the
strategy, first we assume that we have such a mapping and determine its form,
obtaining the mappings that are candidates to be linear. Second, we show
that all the candidates are indeed linear mappings. Thus suppose that some
mapping T ∶ R Ð→ R is linear. The mapping determines a scalar, a = T (1).
And then for every x ∈ R,

T (x) = T (x ⋅ 1) since x ⋅ 1 = x
= xT (1) by (3.2)
= xa by definition of a
= ax since multiplication in R commutes.

Thus, T is simply multiplication by a, where a = T (1). But to reiterate, this


calculation does not show that any mapping is linear. Rather, it tells us what
form a mapping must necessarily have if it is assumed or known to be linear,
and therefore it gives us all candidate linear mappings. But we don’t yet know
that any linear mappings exist at all.
So the next thing to do is show that conversely, every mapping of the
derived form is indeed linear—the necessary condition is also sufficient. Fix
a real number a and define a mapping T ∶ R Ð→ R by T (x) = ax. Then the
claim is that T is linear and T (1) = a. Let’s partially show this by verifying
that T satisfies (3.2). For every α ∈ R and every x ∈ R,

T (αx) = aαx by definition of T


= αax since multiplication in R commutes
= αT (x) by definition of T ,

as needed. You can check (3.1) similarly, and the calculation that T (1) = a is
immediate. These last two paragraphs combine to prove the following result.

Proposition 3.1.2 (Description of linear mappings from scalars to
scalars). The linear mappings T ∶ R Ð→ R are precisely the mappings
62 3 Linear Mappings and Their Matrices

T (x) = ax
where a ∈ R. That is, each linear mapping T ∶ R Ð→ R is multiplication by a
unique a ∈ R and conversely.
The slogan encapsulating the formula T (x) = ax (read “T of x equals a
times x”) in the proposition is:
For scalar input and scalar output, linear OF is scalar TIMES.
That is, given x ∈ R, the effect of a linear mapping T ∶ R Ð→ R on x is
simply to multiply x by a scalar a ∈ R associated with T . This may seem
trivial, but the issue is that at times our methodology will be to study a linear
mapping by its defining properties, i.e., the rules T (x + y) = T (x) + T (y) and
T (αx) = αT (x), while at other times we will profit from studying a linear
mapping computationally, i.e., as a mapping that simply multiplies its inputs
by something—by a scalar here, but by a vector or by a matrix later in this
section. The slogan displayed just above, as well as its two variants to follow
below, gives the connection between the two ways to think about a linear
mapping.
Also, the proposition explains the term linear: the graphs of linear map-
pings from R to R are lines through the origin. (Mappings f (x) = ax + b with
b ≠ 0 are not linear according to our definition even though their graphs are
also lines. However, see Exercises 3.1.15 and 3.2.6.) For example, a typical
linear mapping from R to R is T (x) = (1/2)x. Figure 3.1 shows two ways of
visualizing this mapping. The left half of the figure plots the domain axis and
the codomain axis orthogonally to each other in one plane, the familiar way
to graph a function. The right half of the figure plots the axes separately,
using the spacing of the dots to describe the mapping instead. The uniform
spacing along the rightmost axis depicts the fact that T (x) = xT (1) for all
x ∈ Z, and the spacing is half as big because the multiplying factor is 1/2.
Figures of this second sort can generalize up to three dimensions of input and
three dimensions of output, whereas figures of the first sort can display at
most three dimensions of input and output combined.

Figure 3.1. A linear mapping from R to R

Next consider a linear mapping T ∶ Rn Ð→ R. Recall the standard basis
vectors of Rn ,

e1 = (1, 0, . . . , 0), ..., en = (0, 0, . . . , 1).

Take the n real numbers

a1 = T (e1 ), ..., an = T (en ),

and define the vector a = (a1 , . . . , an ) ∈ Rn . Every x ∈ Rn can be written

    x = (x1 , . . . , xn ) = ∑_{i=1}^{n} xi ei ,    each xi ∈ R.

(So here each xi is a scalar entry of the vector x, whereas in Definition 3.1.1,
each xi was itself a vector. The author does not know any graceful way to
avoid this notation collision, the systematic use of boldface or arrows to adorn
vector names being heavyhanded, and the systematic use of the Greek letter
ξ rather than its Roman counterpart x to denote scalars being alien. Since
mathematics involves finitely many symbols and infinitely many ideas, the
reader will in any case eventually need the skill of discerning meaning from
context, a skill that may as well start receiving practice now.) Returning to
the main discussion, since x = ∑_{i=1}^{n} xi ei and T is linear, Definition 3.1.1 shows
that
    T (x) = T ( ∑_{i=1}^{n} xi ei ) = ∑_{i=1}^{n} xi T (ei ) = ∑_{i=1}^{n} xi ai = ⟨x, a⟩ = ⟨a, x⟩.

Again, the only possibility for the linear mapping is multiplication by an


element a, where now a = (T (e1 ), . . . , T (en )) is a vector and the multiplication
is an inner product, but we don’t yet know that such a mapping is linear.
However, fix a vector a = (a1 , . . . , an ) and define the corresponding mapping
T ∶ Rn Ð→ R by T (x) = ⟨a, x⟩. Then it is straightforward to show that indeed
T is linear and T (ej ) = aj for j = 1, . . . , n (Exercise 3.1.3). Thus we have the
following proposition.

Proposition 3.1.3 (Description of linear mappings from vectors to
scalars). The linear mappings T ∶ Rn Ð→ R are precisely the mappings

T (x) = ⟨a, x⟩

where a ∈ Rn . That is, each linear mapping T ∶ Rn Ð→ R is multiplication by


a unique a ∈ Rn and conversely.

The slogan encapsulating the formula T (x) = ⟨a, x⟩ of the proposition is:

For vector input and scalar output, linear OF is vector TIMES.

In light of the proposition, you should be able to recognize linear mappings
from Rn to R on sight. For example, the mapping T ∶ R3 Ð→ R given by
T (x, y, z) = πx + ey + √2 z is linear, being multiplication by the vector (π, e, √2).
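
A quick computational sanity check of this example (an illustrative sketch,
not from the text; the sample inputs are arbitrary):

import math

a = (math.pi, math.e, math.sqrt(2))  # the vector representing T

def T(v):
    # T(x, y, z) = pi*x + e*y + sqrt(2)*z, i.e., the inner product <a, v>
    return sum(ai * vi for ai, vi in zip(a, v))

x, y, alpha = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0), 7.0
assert math.isclose(T(tuple(xi + yi for xi, yi in zip(x, y))), T(x) + T(y))  # (3.1)
assert math.isclose(T(tuple(alpha * xi for xi in x)), alpha * T(x))          # (3.2)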

In the previous chapter, the second example after Definition 2.3.6 showed
that every linear mapping T ∶ Rn Ð→ R is continuous. You are encouraged to
reread that example now before continuing.
A depiction of a linear mapping from R2 to R can again plot the domain
plane and the codomain axis orthogonally to each other or separately. See
Figures 3.2 and 3.3 for examples of each type of plot. The first figure suggests
that the graph forms a plane in R3 and that a line of inputs is taken to
the output value 0. The second figure shows more clearly how the mapping
compresses the plane into the line. As in the right half of Figure 3.1, the idea
is that T (x, y) = xT (1, 0) + yT (0, 1) for all x, y ∈ Z. The compression is that
although (1, 0) and (0, 1) lie on separate input axes, T (1, 0) and T (0, 1) lie
on the same output axis.

Figure 3.2. The graph of a linear mapping from R2 to R

Figure 3.3. Second depiction of a linear mapping from R2 to R

The most general mapping is T ∶ Rn Ð→ Rm . Such a mapping decomposes


as T = (T1 , . . . , Tm ) where each Ti ∶ Rn Ð→ R is the ith component function
of T . The next proposition reduces the linearity of such T to the linearity of
its components Ti , which we already understand.

Proposition 3.1.4 (Componentwise nature of linearity). The vector-valued
mapping T = (T1 , . . . , Tm ) ∶ Rn Ð→ Rm is linear if and only if each
scalar-valued component function Ti ∶ Rn Ð→ R is linear.

Proof. For every x, y ∈ Rn ,

T (x + y) = (T1 (x + y), ..., Tm (x + y))

and

T (x) + T (y) = (T1 (x), . . . , Tm (x)) + (T1 (y), . . . , Tm (y))


= (T1 (x) + T1 (y), ..., Tm (x) + Tm (y)).

But T satisfies (3.1) exactly when the left sides are equal, the left sides are
equal exactly when the right sides are equal, and the right sides are equal
exactly when each Ti satisfies (3.1). A similar argument with (3.2), left as
Exercise 3.1.5, completes the proof. ⊔

The componentwise nature of linearity combines with the fact that scalar-
valued linear mappings are continuous (as observed after Proposition 3.1.3)
and with the componentwise nature of continuity to show that all linear map-
pings are continuous. Despite being so easy to prove, this fact deserves a
prominent statement.

Theorem 3.1.5 (Linear mappings are continuous). Let the mapping
T ∶ Rn Ð→ Rm be linear. Then T is continuous.

By the previous proposition, a mapping T ∶ Rn Ð→ Rm is linear if and only
if each component function Ti is linear, in which case each Ti determines
n real numbers ai1 , . . . , ain as just discussed. Putting
all mn numbers aij into a box with m rows and n columns gives a matrix
        ⎡ a11 a12 ⋯ a1n ⎤
        ⎢ a21 a22 ⋯ a2n ⎥
    A = ⎢  ⋮    ⋮      ⋮  ⎥                                    (3.3)
        ⎣ am1 am2 ⋯ amn ⎦
whose ith row is the vector determined by Ti , and whose (i, j)th entry (this
means ith row, jth column) is thus given by

aij = Ti (ej ). (3.4)

Sometimes one saves writing by abbreviating the right side of (3.3) to [aij ]m×n ,
or even just [aij ] when m and n are firmly established.
The set of all m × n matrices (those with m rows and n columns) of real
numbers is denoted Mm,n (R). The n × n square matrices are denoted Mn (R).
Euclidean space Rn is often identified with Mn,1 (R) and vectors written as
columns,
                         ⎡ x1 ⎤
    (x1 , . . . , xn ) = ⎢  ⋮ ⎥ .
                         ⎣ xn ⎦
This typographical convention may look odd, but it is useful. The idea is that
a vector in parentheses is merely an ordered list of entries, not inherently a
row or a column; but when a vector—or, more generally, a matrix—is enclosed
by square brackets, the distinction between rows and columns is significant.
To make the linear mapping T ∶ Rn Ð→ Rm be multiplication by its matrix
A ∈ Mm,n (R), we need to define multiplication of an m × n matrix A by an
n × 1 vector x appropriately. That is, the only sensible definition is as follows.
Definition 3.1.6 (Matrix-by-vector multiplication). Let A ∈ Mm,n (R)
and let x ∈ Rn . The product Ax ∈ Rm is defined to be the vector whose ith
entry is the inner product of A’s ith row and x,
         ⎡ a11 a12 ⋯ ⋯ a1n ⎤ ⎡ x1 ⎤   ⎡ a11 x1 + ⋯ + a1n xn ⎤
         ⎢ a21 a22 ⋯ ⋯ a2n ⎥ ⎢ x2 ⎥   ⎢ a21 x1 + ⋯ + a2n xn ⎥
    Ax = ⎢  ⋮    ⋮         ⋮ ⎥ ⎢  ⋮ ⎥ = ⎢          ⋮          ⎥ .
         ⎣ am1 am2 ⋯ ⋯ amn ⎦ ⎣ xn ⎦   ⎣ am1 x1 + ⋯ + amn xn ⎦
For example,

    ⎡ 1 2 3 ⎤ ⎡ 7 ⎤   ⎡ 1 ⋅ 7 + 2 ⋅ 8 + 3 ⋅ 9 ⎤   ⎡  50 ⎤
    ⎣ 4 5 6 ⎦ ⎢ 8 ⎥ = ⎣ 4 ⋅ 7 + 5 ⋅ 8 + 6 ⋅ 9 ⎦ = ⎣ 122 ⎦ .
              ⎣ 9 ⎦
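
Definition 3.1.6 translates directly into code. Here is a minimal list-based
sketch (an illustration, not part of the text) that computes Ax as the vector
of inner products of the rows of A with x, reproducing the example just
computed:

def mat_vec(A, x):
    # The ith entry of Ax is the inner product of A's ith row with x.
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
print(mat_vec(A, [7, 8, 9]))  # [50, 122], matching the example above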
Definition 3.1.6 is designed to give the following theorem, which encompasses
Propositions 3.1.2 and 3.1.3 as special cases.
Theorem 3.1.7 (Description of linear mappings from vectors to vec-
tors). The linear mappings T ∶ Rn Ð→ Rm are precisely the mappings

T (x) = Ax

where A ∈ Mm,n (R). That is, each linear mapping T ∶ Rn Ð→ Rm is multipli-


cation by a unique A ∈ Mm,n (R) and conversely.
The slogan encapsulating the formula T (x) = Ax of the proposition is:

For vector input and vector output, linear OF is matrix TIMES.

Recall the meaning of the rows of a matrix A that describes a corresponding
linear mapping T :

The ith row of A describes Ti , the ith component function of T .

The columns of A also have a description in terms of T . Indeed, the jth column
is
    ⎡ a1j ⎤   ⎡ T1 (ej ) ⎤
    ⎢  ⋮  ⎥ = ⎢    ⋮     ⎥ = T (ej ).
    ⎣ amj ⎦   ⎣ Tm (ej ) ⎦
That is:

The jth column of A is T (ej ), i.e., is T of the jth standard basis vector.

For an example using this last principle, let r ∶ R2 Ð→ R2 be the mapping
that rotates the plane counterclockwise through the angle π/6. It is geomet-
rically evident that r is linear: rotating the parallelogram P with sides x1
and x2 (and thus with diagonal x1 + x2 ) by π/6 yields the parallelogram r(P )
with sides r(x1 ) and r(x2 ), so the diagonal of r(P ) is equal to both r(x1 +x2 )
and r(x1 ) + r(x2 ). Thus r satisfies (3.1). The geometric verification of (3.2)
is similar. (See Figure 3.4.)


Figure 3.4. The rotation mapping is linear

To find the matrix A of r, simply compute that its columns are



    r(e1 ) = r(1, 0) = ⎡ √3/2 ⎤ ,    r(e2 ) = r(0, 1) = ⎡ −1/2 ⎤ ,
                       ⎣ 1/2  ⎦                         ⎣ √3/2 ⎦

and thus

    A = ⎡ √3/2 −1/2 ⎤ .
        ⎣ 1/2  √3/2 ⎦

So now we know r, because the rows of A describe its component functions,

    r(x, y) = ⎡ √3/2 −1/2 ⎤ ⎡ x ⎤ = ⎡ (√3/2) x − (1/2) y ⎤ = ( (√3/2) x − (1/2) y , (1/2) x + (√3/2) y ).
              ⎣ 1/2  √3/2 ⎦ ⎣ y ⎦   ⎣ (1/2) x + (√3/2) y ⎦
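
A numerical check of this computation (an illustrative sketch, not from the
text; the sample point is arbitrary):

import math

theta = math.pi / 6
# The columns of A are the rotated basis vectors r(e1) and r(e2).
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

def r(x, y):
    return (A[0][0]*x + A[0][1]*y, A[1][0]*x + A[1][1]*y)

x, y = 2.0, -1.0
closed_form = ((math.sqrt(3)/2)*x - y/2, x/2 + (math.sqrt(3)/2)*y)
assert all(math.isclose(u, v) for u, v in zip(r(x, y), closed_form))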

Figures 3.5 through 3.8 show more depictions of linear mappings between
spaces of various dimensions. Note that although these mappings stretch and
torque their basic input grids, the grids still get taken to configurations of
straight lines. Contrast this to how the nonlinear mapping of Figure 2.9 bends
the basic grid lines into curves.

Figure 3.5. A linear mapping from R to R2

Figure 3.6. A linear mapping from R2 to R2

Figure 3.7. A linear mapping from R3 to R3
We end this section by returning from calculations to intrinsic methods.
The following result could have come immediately after Definition 3.1.1, but
it has been deferred to this point for the sake of presenting some of the objects
more explicitly first, to make them familiar. However, it is most easily proved
intrinsically.
Let L(Rn , Rm ) denote the set of all linear mappings from Rn to Rm . Not
only does this set sit inside the vector space M(Rn , Rm ), it is a vector space
in its own right:

Figure 3.8. A linear mapping from R3 to R2

Proposition 3.1.8 (L(Rn , Rm ) forms a vector space). Suppose that
S, T ∶ Rn Ð→ Rm are linear and that a ∈ R. Then the mappings
S + T, aS ∶ Rn Ð→ Rm
are also linear. Consequently, the set of linear mappings from Rn to Rm forms
a vector space.
Proof. The mappings S and T satisfy (3.1) and (3.2). We must show that
S + T and aS do the same. Compute for x, y ∈ Rn ,
(S + T )(x + y)
= S(x + y) + T (x + y) by definition of “+” in M(Rn , Rm )
= S(x) + S(y) + T (x) + T (y) since S and T satisfy (3.1)
= S(x) + T (x) + S(y) + T (y) since addition in Rm commutes
= (S + T )(x) + (S + T )(y) by definition of “+” in M(Rn , Rm ).
Thus S + T satisfies (3.1). The other three statements about S + T and aS
satisfying (3.1) and (3.2) are similar and are left as Exercise 3.1.12. Once those
are established, the rest of the vector space axioms in L(Rn , Rm ) are readily
seen to be inherited from M(Rn , Rm ). ⊔

Also, linearity is preserved under composition. That is, if S ∶ Rn Ð→ Rm
and T ∶ Rp Ð→ Rn are linear then so is S ○ T ∶ Rp Ð→ Rm (Exercise 3.1.13).

Exercises
3.1.1. Prove that T ∶ Rn Ð→ Rm is linear if and only if it satisfies (3.1)
and (3.2). (It may help to rewrite (3.1) with the symbols x1 and x2 in place
of x and y. Then prove one direction by showing that (3.1) and (3.2) are
implied by the defining condition for linearity, and prove the other direction
by using induction to show that (3.1) and (3.2) imply the defining condition.
Note that as pointed out in the text, one direction of this argument has a bit
more substance than the other.)

3.1.2. Suppose that T ∶ Rn Ð→ Rm is linear. Show that T (0n ) = 0m . (An
intrinsic argument is nicer.)

3.1.3. Fix a vector a ∈ Rn . Show that the mapping T ∶ Rn Ð→ R given by
T (x) = ⟨a, x⟩ is linear, and that T (ej ) = aj for j = 1, . . . , n.

3.1.4. Find the linear mapping T ∶ R3 Ð→ R such that T (0, 1, 1) = 1,
T (1, 0, 1) = 2, and T (1, 1, 0) = 3.

3.1.5. Complete the proof of the componentwise nature of linearity.

3.1.6. Carry out the matrix-by-vector multiplications


    ⎡ 1 0 0 ⎤ ⎡ 1 ⎤     ⎡ a b ⎤                      ⎡ y1 ⎤     ⎡  1 −1  0 ⎤ ⎡ 1 ⎤
    ⎢ 1 1 0 ⎥ ⎢ 2 ⎥ ,   ⎢ c d ⎥ ⎡ x ⎤ ,   [x1 ⋯ xn ] ⎢  ⋮ ⎥ ,   ⎢  0  1 −1 ⎥ ⎢ 1 ⎥ .
    ⎣ 1 1 1 ⎦ ⎣ 3 ⎦     ⎣ e f ⎦ ⎣ y ⎦                ⎣ yn ⎦     ⎣ −1  0  1 ⎦ ⎣ 1 ⎦
3.1.7. Prove that the identity mapping id ∶ Rn Ð→ Rn is linear. What is its
matrix? Explain.

3.1.8. Let θ denote a fixed but generic angle. Argue geometrically that the
mapping R ∶ R2 Ð→ R2 given by counterclockwise rotation by θ is linear, and
then find its matrix.

3.1.9. Show that the mapping Q ∶ R2 Ð→ R2 given by reflection through the
x-axis is linear. Find its matrix.

3.1.10. Show that the mapping P ∶ R2 Ð→ R2 given by orthogonal projection
onto the diagonal line x = y is linear. Find its matrix. (See Exercise 2.2.15.)

3.1.11. Draw the graph of a generic linear mapping from R2 to R3 .

3.1.12. Continue the proof of Proposition 3.1.8 by proving the other three
statements about S + T and aS satisfying (3.1) and (3.2).

3.1.13. If S ∈ L(Rn , Rm ) and T ∈ L(Rp , Rn ), show that S ○ T ∶ Rp Ð→ Rm lies
in L(Rp , Rm ).

3.1.14. (a) Let S ∈ L(Rn , Rm ). Its transpose is the mapping

S T ∶ Rm Ð→ Rn

defined by the characterizing condition

⟨x, S T (y)⟩ = ⟨S(x), y⟩ for all x ∈ Rn and y ∈ Rm .

Granting that indeed a unique such S T exists, use the characterizing condition
to show that

S T (y + y ′ ) = S T (y) + S T (y ′ ) for all y, y ′ ∈ Rm



by showing that

⟨x, S T (y + y ′ )⟩ = ⟨x, S T (y) + S T (y ′ )⟩ for all x ∈ Rn and y, y ′ ∈ Rm .

A similar argument (not requested here) shows that S T (αy) = αS T (y) for
all α ∈ R and y ∈ Rm , and so the transpose of a linear mapping is linear.
(b) Keeping S from part (a), now further introduce T ∈ L(Rp , Rn ), so that
also S ○ T ∈ L(Rp , Rm ). Show that the transpose of the composition is the
composition of the transposes in reverse order,

(S ○ T )T = T T ○ S T ,

by showing that

⟨x, (S ○ T )T (z)⟩ = ⟨x, (T T ○ S T )(z)⟩ for all x ∈ Rp and z ∈ Rm .

3.1.15. A mapping f ∶ Rn Ð→ Rm is called affine if it has the form f (x) =
T (x) + b, where T ∈ L(Rn , Rm ) and b ∈ Rm . State precisely and prove: the
composition of affine mappings is affine.
3.1.16. Let T ∶ Rn Ð→ Rm be a linear mapping. Note that since T is continu-
ous and since the absolute value function on Rm is continuous, the composite
function
∣T ∣ ∶ Rn Ð→ R
is continuous.
(a) Let S = {x ∈ Rn ∶ ∣x∣ = 1}. Explain why S is a compact subset of Rn .
Explain why it follows that ∣T ∣ takes a maximum value c on S.
(b) Show that ∣T (x)∣ ≤ c∣x∣ for all x ∈ Rn . This result is the linear mag-
nification boundedness lemma. We will use it in Chapter 4.
3.1.17. Let T ∶ Rn Ð→ Rm be a linear mapping.
(a) Explain why the set D = {x ∈ Rn ∶ ∣x∣ = 1} is compact.
(b) Use part (a) of this exercise and part (b) of the preceding exercise to
explain why therefore the set {∣T (x)∣ ∶ x ∈ D} has a maximum. This maximum
is called the norm of T and is denoted ∥T ∥.
(c) Explain why ∥T ∥ is the smallest value K that satisfies the condition
from part (b) of the preceding exercise, ∣T (x)∣ ≤ K∣x∣ for all x ∈ Rn .
(d) Show that for every S, T ∈ L(Rn , Rm ) and every a ∈ R,

∥S + T ∥ ≤ ∥S∥ + ∥T ∥ and ∥aT ∥ = ∣a∣ ∥T ∥.

Define a distance function

d ∶ L(Rn , Rm ) × L(Rn , Rm ) Ð→ R, d(S, T ) = ∥T − S∥.

Show that this function satisfies the distance properties of Theorem 2.2.8.
(e) Show that for every S ∈ L(Rn , Rm ) and every T ∈ L(Rp , Rn ),

∥ST ∥ ≤ ∥S∥∥T ∥.

3.2 Operations on Matrices


Having described abstract objects, the linear mappings T ∈ L(Rn , Rm ), with
explicit ones, the matrices A ∈ Mm,n (R) with (i, j)th entry aij = Ti (ej ), we
naturally want to study linear mappings via their matrices. The first step
is to develop rules for matrix manipulation corresponding to operations on
mappings. Thus if
S, T ∶ Rn Ð→ Rm
are linear mappings having matrices

A, B ∈ Mm,n (R),

and if a is a real number, then the matrices for the linear mappings

S + T ∶ Rn Ð→ Rm and aS ∶ Rn Ð→ Rm

naturally should be denoted

A + B ∈ Mm,n (R) and aA ∈ Mm,n (R).

So “+” and “⋅” (or juxtaposition) are about to acquire new meanings yet
again,
+ ∶ Mm,n (R) × Mm,n (R) Ð→ Mm,n (R)
and
⋅ ∶ R × Mm,n (R) Ð→ Mm,n (R).
To define the sum, fix j between 1 and n. Then

the jth column of A + B = (S + T )(ej )


= S(ej ) + T (ej )
= the sum of the jth columns of A and B.

And since vector addition is simply coordinatewise scalar addition, it follows
that for every i between 1 and m and every j between 1 and n, the (i, j)th
entry of A + B is the sum of the (i, j)th entries of A and B. (One can reach
the same conclusion in a different way by thinking about rows rather than
columns.) Thus the definition for matrix addition must be as follows.
Definition 3.2.1 (Matrix addition).

If A = [aij ]m×n and B = [bij ]m×n then A + B = [aij + bij ]m×n .

For example,

    ⎡ 1 2 ⎤   ⎡ −1 0 ⎤   ⎡ 0 2 ⎤
    ⎣ 3 4 ⎦ + ⎣  2 1 ⎦ = ⎣ 5 5 ⎦ .
A similar argument shows that the appropriate definition to make for scalar
multiplication of matrices is as follows.

Definition 3.2.2 (Scalar-by-matrix multiplication).

If α ∈ R and A = [aij ]m×n then αA = [αaij ]m×n .

For example,

    2 ⎡ 1 2 ⎤ = ⎡ 2 4 ⎤ .
      ⎣ 3 4 ⎦   ⎣ 6 8 ⎦
The zero matrix 0m,n ∈ Mm,n (R), corresponding to the zero mapping in
L(Rn , Rm ), is the obvious one, with all entries 0. The operations in Mm,n (R)
precisely mirror those in L(Rn , Rm ), giving the following result.

Proposition 3.2.3 (Mm,n (R) forms a vector space). The set Mm,n (R)
of m × n matrices forms a vector space over R.

The remaining important operation on linear mappings is composition. As


shown in Exercise 3.1.13, if

S ∶ Rn Ð→ Rm and T ∶ Rp Ð→ Rn

are linear then their composition

S ○ T ∶ Rp Ð→ Rm

is linear as well. Suppose that S and T respectively have matrices

A ∈ Mm,n (R) and B ∈ Mn,p (R).

Then the composition S ○ T has a matrix in Mm,p (R) that is naturally defined
as the matrix-by-matrix product

AB ∈ Mm,p (R),

the order of multiplication being chosen for consistency with the composition.
Under this specification,

(A times B)’s jth column = (S ○ T )(ej )


= S(T (ej ))
= A times (B’s jth column).

And A times (B’s jth column) is a matrix-by-vector multiplication, which


we know how to carry out: the result is a column vector whose ith entry for
i = 1, . . . , m is the inner product of the ith row of A and the jth column of B.
In sum, the rule for matrix-by-matrix multiplication is as follows.

Definition 3.2.4 (Matrix multiplication). Given two matrices

A ∈ Mm,n (R) and B ∈ Mn,p (R)



such that A has as many columns as B has rows, their product,

AB ∈ Mm,p (R),

has for its (i, j)th entry (for every (i, j) ∈ {1, . . . , m} × {1, . . . , p}) the inner
product of the ith row of A and the jth column of B. In symbols,

(AB)ij = ⟨ith row of A, jth column of B⟩,

or, at the level of individual entries,


    If A = [aij ]m×n and B = [bij ]n×p then AB = [ ∑_{k=1}^{n} aik bkj ]_{m×p} .

Inevitably, matrix-by-matrix multiplication subsumes matrix-by-vector


multiplication, with vectors viewed as one-column matrices. Also, once we
have the definition of matrix-by-matrix multiplication, we can observe that in
complement to the already-established rule that for every j ∈ {1, . . . , n},

(A times B)’s jth column equals A times (B’s jth column),

also, for every i ∈ {1, . . . , m},

ith row of (A times B) equals (ith row of A) times B.

Indeed, both quantities in the previous display are the 1 × p vector whose jth
entry is the inner product of the ith row of A and the jth column of B.
For example, consider the matrices
                          ⎡ 1 −2 ⎤
    A = ⎡ 1 2 3 ⎤ ,   B = ⎢ 2 −3 ⎥ ,   C = ⎡ 4 5 ⎤ ,
        ⎣ 4 5 6 ⎦         ⎣ 3 −4 ⎦         ⎣ 6 7 ⎦

        ⎡ 1 1 1 ⎤                         ⎡ x ⎤
    D = ⎢ 0 1 1 ⎥ ,   E = [a b c] ,   F = ⎢ y ⎥ .
        ⎣ 0 0 1 ⎦                         ⎣ z ⎦

Some products among these (verify!) are

    AB = ⎡ 14 −20 ⎤ ,   BC = ⎡  −8  −9 ⎤ ,   AD = ⎡ 1 3  6 ⎤ ,
         ⎣ 32 −47 ⎦         ⎢ −10 −11 ⎥          ⎣ 4 9 15 ⎦
                            ⎣ −12 −13 ⎦

    DB = ⎡ 6 −9 ⎤ ,   AF = ⎡  x + 2y + 3z ⎤ ,   F E = ⎡ ax bx cx ⎤ ,
         ⎢ 5 −7 ⎥         ⎣ 4x + 5y + 6z ⎦           ⎢ ay by cy ⎥
         ⎣ 3 −4 ⎦                                     ⎣ az bz cz ⎦

    EF = ax + by + cz.
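
Definition 3.2.4 also codes up in a few lines. The sketch below (illustrative,
list-of-rows representation, not part of the text) forms each (i, j)th entry as
the inner product of the ith row of A with the jth column of B, and reproduces
two of the products above:

def mat_mul(A, B):
    m, n, p = len(A), len(B), len(B[0])
    assert all(len(row) == n for row in A)  # A needs as many columns as B has rows
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[1, -2], [2, -3], [3, -4]]
D = [[1, 1, 1], [0, 1, 1], [0, 0, 1]]
print(mat_mul(A, B))  # [[14, -20], [32, -47]]
print(mat_mul(A, D))  # [[1, 3, 6], [4, 9, 15]]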

Matrix multiplication is not commutative. Indeed, when the product AB


is defined, the product BA may not be, or it may be but have different dimen-
sions from AB; cf. EF and F E above. Even when A and B are both n × n, so
that AB and BA are likewise n×n, the products need not agree. For example,

    ⎡ 0 1 ⎤ ⎡ 0 0 ⎤   ⎡ 1 0 ⎤       ⎡ 0 0 ⎤ ⎡ 0 1 ⎤   ⎡ 0 0 ⎤
    ⎣ 0 0 ⎦ ⎣ 1 0 ⎦ = ⎣ 0 0 ⎦ ,     ⎣ 1 0 ⎦ ⎣ 0 0 ⎦ = ⎣ 0 1 ⎦ .

Of particular interest is the matrix associated with the identity mapping,

id ∶ Rn Ð→ Rn , id(x) = x.

Naturally, this matrix is called the identity matrix; it is written In . Since


idi (ej ) = δij ,

                         ⎡ 1 0 ⋯ 0 ⎤
    In = [δij ]n×n =     ⎢ 0 1 ⋯ 0 ⎥ .
                         ⎢ ⋮ ⋮    ⋮ ⎥
                         ⎣ 0 0 ⋯ 1 ⎦
Although matrix multiplication fails to commute, it does have the following
properties.

Proposition 3.2.5 (Properties of matrix multiplication). Matrix
multiplication is associative,

A(BC) = (AB)C for A ∈ Mm,n (R), B ∈ Mn,p (R), C ∈ Mp,q (R).

Matrix multiplication distributes over matrix addition,

A(B + C) = AB + AC for A ∈ Mm,n (R), B, C ∈ Mn,p (R),


(A + B)C = AC + BC for A, B ∈ Mm,n (R), C ∈ Mn,p (R).

Scalar multiplication passes through matrix multiplication,

α(AB) = (αA)B = A(αB) for α ∈ R, A ∈ Mm,n (R), B ∈ Mn,p (R).

The identity matrix is a multiplicative identity,

Im A = A = AIn for A ∈ Mm,n (R).

Proof. The right way to prove these is intrinsic, by recalling that addition,
scalar multiplication, and multiplication of matrices precisely mirror addition,
scalar multiplication, and composition of mappings. For example, if A, B, C
are the matrices of the linear mappings S ∈ L(Rn , Rm ), T ∈ L(Rp , Rn ), and
U ∈ L(Rq , Rp ), then (AB)C and A(BC) are the matrices of (S ○ T ) ○ U and
S ○ (T ○ U ). But these two mappings are the same, because the composition of
mappings (mappings in general, not only linear mappings) is associative. To

verify the associativity, we cite the definition of four different binary compo-
sitions to show that the ternary composition is independent of parentheses,
as follows. For every x ∈ Rq ,

((S ○ T ) ○ U )(x) = (S ○ T )(U (x)) by definition of R ○ U where R = S ○ T


= S(T (U (x))) by definition of S ○ T
= S((T ○ U )(x)) by definition of T ○ U
= (S ○ (T ○ U ))(x) by definition of S ○ V where V = T ○ U .

So indeed ((S ○ T ) ○ U ) = (S ○ (T ○ U )), and consequently (AB)C = A(BC).


Alternatively, one can verify the equalities elementwise by manipulating
sums. Adopting the notation Mij for the (i, j)th entry of a matrix M , we have
    (A(BC))ij = ∑_{k=1}^{n} Aik (BC)kj = ∑_{k=1}^{n} Aik ∑_{ℓ=1}^{p} Bkℓ Cℓj = ∑_{k=1}^{n} ∑_{ℓ=1}^{p} Aik Bkℓ Cℓj
              = ∑_{ℓ=1}^{p} ∑_{k=1}^{n} Aik Bkℓ Cℓj = ∑_{ℓ=1}^{p} (AB)iℓ Cℓj = ((AB)C)ij .

The steps here are not explained in detail because the author finds this method
as grim as it is gratuitous: the coordinates work because they must, but their
presence only clutters the argument. The other equalities are similar. ⊔

Composing mappings is most interesting when all the mappings in question
take a set S to the same set S, for the set of such mappings is closed
under composition. In particular, L(Rn , Rn ) is closed under composition. The
corresponding statement about matrices is that Mn (R) is closed under mul-
tiplication.

Exercises

3.2.1. Justify Definition 3.2.2 of scalar multiplication of matrices.

3.2.2. Carry out the matrix multiplications


                                             ⎡ a1 b1 ⎤     ⎡ 0 1 0 0 ⎤e
    ⎡ a b ⎤ ⎡  d −b ⎤ ,          [x1 x2 x3 ] ⎢ a2 b2 ⎥ ,   ⎢ 0 0 1 0 ⎥    (e = 2, 3, 4),
    ⎣ c d ⎦ ⎣ −c  a ⎦                        ⎣ a3 b3 ⎦     ⎢ 0 0 0 1 ⎥
                                                           ⎣ 0 0 0 0 ⎦

    ⎡ 1 1 1 ⎤ ⎡ 1 0 0 ⎤       ⎡ 1 0 0 ⎤ ⎡ 1 1 1 ⎤
    ⎢ 0 1 1 ⎥ ⎢ 1 1 0 ⎥ ,     ⎢ 1 1 0 ⎥ ⎢ 0 1 1 ⎥ .
    ⎣ 0 0 1 ⎦ ⎣ 1 1 1 ⎦       ⎣ 1 1 1 ⎦ ⎣ 0 0 1 ⎦
3.2.3. Prove more of Proposition 3.2.5, that A(B + C) = AB + AC, (αA)B =
A(αB), and Im A = A for suitable matrices A, B, C and any scalar α.

3.2.4. (If you have not yet worked Exercise 3.1.14 then do so before working
this exercise.) Let A = [aij ] ∈ Mm,n (R) be the matrix of S ∈ L(Rn , Rm ). Its
transpose AT ∈ Mn,m (R) is the matrix of the transpose mapping S T . Since
S and S T act respectively as multiplication by A and AT , the characterizing
property of S T from Exercise 3.1.14 gives

⟨x, AT y⟩ = ⟨Ax, y⟩ for all x ∈ Rn and y ∈ Rm .

Make specific choices of x and y to show that the transpose AT ∈ Mn,m (R) is
obtained by flipping A about its northwest–southeast diagonal; that is, show
that the (i, j)th entry of AT is aji . It follows that the rows of AT are the
columns of A, and the columns of AT are the rows of A.
(Similarly, let B ∈ Mn,p (R) be the matrix of T ∈ L(Rp , Rn ), so that B T
is the matrix of T T . Because matrix multiplication is compatible with linear
mapping composition, we know immediately from Exercise 3.1.14(b), with no
reference to the concrete description of the matrix transposes AT and B T in
terms of the original matrices A and B, that the transpose of the product is
the product of the transposes in reverse order,

(AB)T = B T AT for all A ∈ Mm,n (R) and B ∈ Mn,p (R).

That is, by characterizing the transpose mapping in Exercise 3.1.14, we easily
derived the construction of the transpose matrix here and obtained the
formula for the product of transpose matrices with no reference to their con-
struction.)
3.2.5. The trace of a square matrix A ∈ Mn (R) is the sum of its diagonal
elements,
    tr(A) = ∑_{i=1}^{n} aii .

Show that
tr(AB) = tr(BA), A, B ∈ Mn (R).
(This exercise may entail double subscripts.)
3.2.6. For every matrix A ∈ Mm,n (R) and column vector a ∈ Rm , define the
affine mapping (cf. Exercise 3.1.15)

AffA,a ∶ Rn Ð→ Rm

by the rule AffA,a (x) = Ax + a for all x ∈ Rn , viewing x as a column vector.


(a) Explain why every affine mapping from Rn to Rm takes this form.
(b) Given such A and a, define the matrix A′ ∈ Mm+1,n+1 (R) to be

    A′ = ⎡ A  a ⎤ .
         ⎣ 0n 1 ⎦

Show that for all x ∈ Rn ,



    A′ ⎡ x ⎤ = ⎡ AffA,a (x) ⎤ .
       ⎣ 1 ⎦   ⎣     1      ⎦

Thus, affine mappings, like linear mappings, behave as matrix-by-vector
multiplications but where the vectors are the usual input and output vectors
augmented with an extra “1” at the bottom.
(c) The affine mapping AffB,b ∶ Rp Ð→ Rn determined by B ∈ Mn,p (R) and
b ∈ Rn has matrix
    B ′ = ⎡ B  b ⎤ .
          ⎣ 0p 1 ⎦
Show that AffA,a ○ AffB,b ∶ Rp Ð→ Rm has matrix A′ B ′ . That is, matrix
multiplication is compatible with composition of affine mappings.
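
A numerical sanity check of part (b), not a substitute for the requested
verification, with arbitrary sample data (m = 3, n = 2):

def mat_vec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2], [3, 4], [5, 6]]   # sample A in M_{3,2}(R)
a = [7, 8, 9]                  # sample translation vector
x = [10, 20]

A_prime = [row + [ai] for row, ai in zip(A, a)] + [[0, 0, 1]]  # [0, 0] is 0_n for n = 2

assert mat_vec(A_prime, x + [1]) == [v + ai for v, ai in zip(mat_vec(A, x), a)] + [1]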

3.2.7. The exponential of a square matrix A is the infinite matrix sum


    e^A = I + A + (1/2!) A² + (1/3!) A³ + ⋯ .
Compute the exponentials of the following matrices:
                                                     ⎡ λ 1 0 0 ⎤
    A = [λ] ,   A = ⎡ λ 1 ⎤ ,   A = ⎡ λ 1 0 ⎤ ,   A = ⎢ 0 λ 1 0 ⎥ .
                    ⎣ 0 λ ⎦         ⎢ 0 λ 1 ⎥         ⎢ 0 0 λ 1 ⎥
                                    ⎣ 0 0 λ ⎦         ⎣ 0 0 0 λ ⎦
What is the general pattern?
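
For experimenting with this exercise, the following rough sketch (not from the
text) approximates e^A by a partial sum of the series; the truncation at 25
terms is an arbitrary choice, ample for small matrices, and the numerical
output can be compared against a conjectured closed form:

import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=25):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # start at I
    power = [row[:] for row in result]                              # A^0 = I
    for k in range(1, terms):
        power = mat_mul(power, A)                                   # now A^k
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

lam = 0.5
print(mat_exp([[lam, 1.0], [0.0, lam]]))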

3.2.8. Let a, b, d be real numbers with ad = 1. Show that

    ⎡ a b ⎤   ⎡ 1 ab ⎤ ⎡ a 0 ⎤
    ⎣ 0 d ⎦ = ⎣ 0  1 ⎦ ⎣ 0 d ⎦ .

Let a, b, c, d be real numbers with c ≠ 0 and ad − bc = 1. Show that

    ⎡ a b ⎤   ⎡ 1 ac⁻¹ ⎤ ⎡ c⁻¹ 0 ⎤ ⎡ 0 −1 ⎤ ⎡ 1 c⁻¹d ⎤
    ⎣ c d ⎦ = ⎣ 0    1 ⎦ ⎣ 0   c ⎦ ⎣ 1  0 ⎦ ⎣ 0    1 ⎦ .

Thus this exercise has shown that all matrices [ a b ; c d ] with ad − bc = 1 can be
expressed in terms of matrices [ 1 β ; 0 1 ], matrices [ α 0 ; 0 α⁻¹ ], and the matrix
[ 0 −1 ; 1 0 ].

3.3 The Inverse of a Linear Mapping


Given a linear mapping S ∶ Rn Ð→ Rm , does it have an inverse? That is, is
there a mapping T ∶ Rm Ð→ Rn such that

S ○ T = idm and T ○ S = idn ?



If so, what is T ?
The symmetry of the previous display shows that if T is an inverse of S
then S is an inverse of T in turn. Also, the inverse T , if it exists, must be
unique, for if T ′ ∶ Rm Ð→ Rn also inverts S then
T ′ = T ′ ○ idm = T ′ ○ (S ○ T ) = (T ′ ○ S) ○ T = idn ○ T = T.
Thus T can unambiguously be denoted S −1 . In fact, this argument has shown
a little bit more than claimed: if T ′ inverts S from the left and T inverts S
from the right then T ′ = T . On the other hand, the argument does not show
that if T inverts S from the left then T also inverts S from the right—this is
not true.
If the inverse T exists then it too is linear. To see this, note that the
elementwise description of S and T being inverses of one another is that every
y ∈ Rm takes the form y = S(x) for some x ∈ Rn , every x ∈ Rn takes the form
x = T (y) for some y ∈ Rm , and
for all x ∈ Rn and y ∈ Rm , y = S(x) ⇐⇒ x = T (y).
Now compute that for every y1 , y2 ∈ Rm ,
T (y1 + y2 ) = T (S(x1 ) + S(x2 )) for some x1 , x2 ∈ Rn
= T (S(x1 + x2 )) since S is linear
= x1 + x2 since T inverts S
= T (y1 ) + T (y2 ) since y1 = S(x1 ) and y2 = S(x2 ).
Thus T satisfies (3.1). The argument that T satisfies (3.2) is similar.
Since matrices are more explicit than linear mappings, we replace the
question at the beginning of this section with its matrix counterpart: given a
matrix A ∈ Mm,n (R), does it have an inverse matrix, a matrix B ∈ Mn,m (R)
such that
AB = Im and BA = In ?
As above, if the inverse exists then it is unique, and so it can be denoted A−1 .
The first observation to make is that if the equation Ax = 0m has a nonzero
solution x ∈ Rn then A has no inverse. Indeed, also A0n = 0m , so an inverse
A−1 would have to take 0m both to x and to 0n , which is impossible. And so
we are led to a subordinate question: when does the matrix equation
Ax = 0m
have nonzero solutions x ∈ Rn?

For example, let A be the 5 × 6 matrix


A = \begin{bmatrix} 5 & 1 & 17 & 26 & 1 & 55 \\ -3 & -1 & -13 & -20 & 0 & -28 \\ -2 & 1 & 3 & 5 & 0 & 3 \\ -2 & 0 & -4 & -6 & 0 & -10 \\ 5 & 0 & 10 & 15 & 1 & 42 \end{bmatrix}.

If there is a nonzero x ∈ R6 such that Ax = 05 then A is not invertible.


Left multiplication by certain special matrices will simplify the matrix A.

Definition 3.3.1 (Elementary matrices). There are three kinds of ele-


mentary matrices. For every i, j ∈ {1, . . . , m} (i ≠ j) and every a ∈ R, the
m × m (i; j, a) recombine matrix is
R_{i;j,a} = \begin{bmatrix} 1 & & & & \\ & \ddots & & & \\ & & 1 & \cdots & a \\ & & & \ddots & \vdots \\ & & & & 1 \end{bmatrix}.
(Here the a sits in the (i, j)th position, the diagonal entries are 1 and all other
entries are 0. The a is above the diagonal as shown only when i < j; otherwise
it is below.)
For every i ∈ {1, . . . , m} and every nonzero a ∈ R, the m × m (i, a) scale
matrix is

S_{i,a} = \begin{bmatrix} 1 & & & & \\ & \ddots & & & \\ & & a & & \\ & & & \ddots & \\ & & & & 1 \end{bmatrix}.
(Here the a sits in the ith diagonal position, all other diagonal entries are 1,
and all other entries are 0.)
For every i, j ∈ {1, . . . , m} (i ≠ j), the m × m (i; j) transposition matrix
is

T_{i;j} = \begin{bmatrix} 1 & & & & & & \\ & \ddots & & & & & \\ & & 0 & & 1 & & \\ & & & \ddots & & & \\ & & 1 & & 0 & & \\ & & & & & \ddots & \\ & & & & & & 1 \end{bmatrix}.
(Here the diagonal entries are 1 except the ith and jth, the (i, j)th and (j, i)th
entries are 1, and all other entries are 0.)

The plan is to study the equation Ax = 0m by using these elementary


matrices to reduce A to a nicer matrix E and then solve the equation Ex = 0m
instead. Thus we are developing an algorithm rather than a formula. The next
proposition describes the effect that the elementary matrices produce by left
multiplication.

Proposition 3.3.2 (Effects of the elementary matrices). Let M be an


m × n matrix; call its rows rk . Then:
(1) The m × n matrix Ri;j,a M has the same rows as M except that its ith row
is ri + arj .
(2) The m × n matrix Si,a M has the same rows as M except that its ith row
is ari .
(3) The m × n matrix Ti;j M has the same rows as M except that its ith row
is rj and its jth row is ri .

Proof. (1) As observed immediately after Definition 3.2.4, each row of Ri;j,a M
equals the corresponding row of Ri;j,a times M . For every row index k ≠ i,
the only nonzero entry of the row is a 1 in the kth position, so the product
of the row and M simply picks out the kth row of M . Similarly, the ith row
of Ri;j,a has a 1 in the ith position and an a in the jth, so the row times M
equals the ith row of M plus a times the jth row of M .
The proofs of statements (2) and (3) are similar, left as Exercise 3.3.2. ⊓⊔

To get a better sense of why the statements in the proposition are true, it
may be helpful to do the calculations explicitly with some moderately sized
matrices. But then, the point of the proposition is that once one believes it, left
multiplication by elementary matrices no longer requires actual calculation.
Instead, one simply carries out the appropriate row operations. For example,

R_{1;2,3} ⋅ \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix} = \begin{bmatrix} 13 & 17 & 21 \\ 4 & 5 & 6 \end{bmatrix},

because R1;2,3 adds 3 times the second row to the first. The slogan here is:

Elementary matrix TIMES is row operation ON.

Thus we use the elementary matrices to reason about this material, but for
hand calculation we simply carry out the row operations.
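For readers who like to compute, here is a minimal Python sketch of the slogan; the constructor names R, S, T are ad hoc, and indices are 0-based, so R(2, 0, 1, 3) plays the role of R1;2,3.

```python
import numpy as np

def R(m, i, j, a):
    """Recombine matrix: left multiplication adds a times row j to row i."""
    E = np.eye(m); E[i, j] = a; return E

def S(m, i, a):
    """Scale matrix: left multiplication multiplies row i by a."""
    E = np.eye(m); E[i, i] = a; return E

def T(m, i, j):
    """Transposition matrix: left multiplication swaps rows i and j."""
    E = np.eye(m); E[[i, j]] = E[[j, i]]; return E

M = np.array([[1., 2., 3.], [4., 5., 6.]])
print(R(2, 0, 1, 3) @ M)   # [[13. 17. 21.] [ 4.  5.  6.]], as in the text
```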
The next result is that performing row operations on A doesn’t change the
set of solutions x to the equation Ax = 0m .

Lemma 3.3.3 (Invertibility of products of the elementary matrices).


Products of elementary matrices are invertible. More specifically:
(1) The elementary matrices are invertible by other elementary matrices,

(Ri;j,a )−1 = Ri;j,−a , (Si,a )−1 = Si,a−1 , (Ti;j )−1 = Ti;j .



(2) If the m × m matrices M and N are invertible by M −1 and N −1 , then the


product matrix M N is invertible by N −1 M −1 . (Note the order reversal.)
(3) Every product of elementary matrices is invertible by another such product,
the product of the inverses of the original matrices but taken in reverse
order.
Proof. (1) To prove that Ri;j,−a Ri;j,a = Im , note that Ri;j,a is the identity
matrix Im with a times its jth row added to its ith row, and multiplying this
from the left by Ri;j,−a subtracts off a times the jth row from its ith row,
restoring Im . The proof that Ri;j,a Ri;j,−a = Im is either done similarly or by
citing the proof just given with a replaced by −a. The rest of (1) is similar.
(2) Compute
(M N )(N −1 M −1 ) = M (N N −1 )M −1 = M Im M −1 = M M −1 = Im ,
and similarly for (N −1 M −1 )(M N ) = Im .
(3) This is immediate from (1) and (2). ⊓⊔

Proposition 3.3.4 (Persistence of solution). Let A be an m × n matrix
and let P be a product of m × m elementary matrices. Then the equations
Ax = 0m and (P A)x = 0m
are satisfied by the same vectors x in Rn .
Proof. Suppose that the vector x ∈ Rn satisfies the left equation, Ax = 0m .
Then
(P A)x = P (Ax) = P 0m = 0m .
Conversely, suppose that x satisfies (P A)x = 0m . Lemma 3.3.3 says that P
has an inverse P −1 , so
Ax = Im Ax = (P −1 P )Ax = P −1 (P A)x = P −1 0m = 0m .


The machinery is in place to solve the equation Ax = 05 , where as before,
A = \begin{bmatrix} 5 & 1 & 17 & 26 & 1 & 55 \\ -3 & -1 & -13 & -20 & 0 & -28 \\ -2 & 1 & 3 & 5 & 0 & 3 \\ -2 & 0 & -4 & -6 & 0 & -10 \\ 5 & 0 & 10 & 15 & 1 & 42 \end{bmatrix}.
Scale A’s fourth row by −1/2 and transpose A’s first and fourth rows; call the
result B:

T_{1;4} S_{4,-1/2} A = \begin{bmatrix} 1 & 0 & 2 & 3 & 0 & 5 \\ -3 & -1 & -13 & -20 & 0 & -28 \\ -2 & 1 & 3 & 5 & 0 & 3 \\ 5 & 1 & 17 & 26 & 1 & 55 \\ 5 & 0 & 10 & 15 & 1 & 42 \end{bmatrix} = B.

Note that B has a 1 as the leftmost entry of its first row. Recombine various
multiples of the first row with the other rows to put 0’s beneath the leading 1
of the first row; call the result C:
R_{5;1,-5} R_{4;1,-5} R_{3;1,2} R_{2;1,3} B = \begin{bmatrix} 1 & 0 & 2 & 3 & 0 & 5 \\ 0 & -1 & -7 & -11 & 0 & -13 \\ 0 & 1 & 7 & 11 & 0 & 13 \\ 0 & 1 & 7 & 11 & 1 & 30 \\ 0 & 0 & 0 & 0 & 1 & 17 \end{bmatrix} = C.
Recombine various multiples of the second row with the others to put 0’s
above and below its leftmost nonzero entry; scale the second row to make its
leading nonzero entry a 1; call the result D:
S_{2,-1} R_{4;2,1} R_{3;2,1} C = \begin{bmatrix} 1 & 0 & 2 & 3 & 0 & 5 \\ 0 & 1 & 7 & 11 & 0 & 13 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 17 \\ 0 & 0 & 0 & 0 & 1 & 17 \end{bmatrix} = D.
Transpose the third and fifth rows; put 0’s above and below the leading 1 in
the third row; call the result E:
R_{4;3,-1} T_{3;5} D = \begin{bmatrix} 1 & 0 & 2 & 3 & 0 & 5 \\ 0 & 1 & 7 & 11 & 0 & 13 \\ 0 & 0 & 0 & 0 & 1 & 17 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} = E.
Matrix E is a prime example of a so-called echelon matrix. (The term will be
defined precisely in a moment.) Its virtue is that the equation Ex = 05 is now
easy to solve. This equation expands out to
Ex = \begin{bmatrix} 1 & 0 & 2 & 3 & 0 & 5 \\ 0 & 1 & 7 & 11 & 0 & 13 \\ 0 & 0 & 0 & 0 & 1 & 17 \\ 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = \begin{bmatrix} x_1 + 2x_3 + 3x_4 + 5x_6 \\ x_2 + 7x_3 + 11x_4 + 13x_6 \\ x_5 + 17x_6 \\ 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}.

Matching the components in the last equality gives


x1 = −2x3 − 3x4 − 5x6
x2 = −7x3 − 11x4 − 13x6
x5 = − 17x6 .
Thus, x3 , x4 , and x6 are free variables that can take any values we wish, but
then x1 , x2 , and x5 are determined from these equations. For example, setting
x3 = −5, x4 = 3, x6 = 2 gives the solution x = (−9, −24, −5, 3, −34, 2).
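The arithmetic is easy to confirm mechanically, for instance in Python (a sketch; by the linearity of the solution formulas, any values of the free variables work):

```python
import numpy as np

A = np.array([[ 5,  1,  17,  26, 1,  55],
              [-3, -1, -13, -20, 0, -28],
              [-2,  1,   3,   5, 0,   3],
              [-2,  0,  -4,  -6, 0, -10],
              [ 5,  0,  10,  15, 1,  42]])
print(A @ np.array([-9, -24, -5, 3, -34, 2]))   # [0 0 0 0 0]

x3, x4, x6 = 1.0, 2.0, 3.0                      # any free values
x = np.array([-2*x3 - 3*x4 - 5*x6, -7*x3 - 11*x4 - 13*x6, x3, x4, -17*x6, x6])
print(A @ x)                                    # again all zeros
```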

Definition 3.3.5 (Echelon matrix). A matrix E is called echelon if it has


the form
E = \begin{bmatrix}
0 ⋯ 0 & 1 & ∗ ⋯ ∗ & 0 & ∗ ⋯ ∗ & 0 & ∗ ⋯ \\
      &   &        & 1 & ∗ ⋯ ∗ & 0 & ∗ ⋯ \\
      &   &        &   &        & 1 & ∗ ⋯ \\
      &   &        &   &        &   & \ddots
\end{bmatrix}.
Here the ∗’s are arbitrary entries, and all entries below the stairway are 0.
Thus each row’s first nonzero entry is a 1, each row’s leading 1 is farther right
than that of the row above it, each leading 1 has a column of 0’s above it, and
any rows of 0’s are at the bottom.

Note that the identity matrix I is a special case of an echelon matrix.


The algorithm for reducing a matrix A to echelon form by row operations
should be fairly clear from the previous example. The interested reader may
want to codify it more formally, perhaps in the form of a computer program.
Although different sequences of row operations may reduce A to echelon form,
the resulting echelon matrix E will always be the same. This result can be
proved by induction on the number of columns of A, and its proof is in many
linear algebra books.

Theorem 3.3.6 (Matrices reduce to echelon form). Every matrix A row


reduces to a unique echelon matrix E.

In an echelon matrix E, the columns with leading 1’s are called new
columns, and all others are old columns. The recipe for solving the equation
Ex = 0m is then as follows.
1. Freely choose the entries in x that correspond to the old columns of E.
2. Then each nonzero row of E will determine the entry of x corresponding
to its leading 1 (which sits in a new column). This entry will be a linear
combination of the free entries to its right.
Let’s return to the problem of determining whether A ∈ Mm,n (R) is in-
vertible. The idea was to see whether the equation Ax = 0m has any nonzero
solutions x, in which case A is not invertible. Equivalently, we may check
whether Ex = 0m has nonzero solutions, where E is the echelon matrix to
which A row reduces. The recipe for solving Ex = 0m shows that there are
nonzero solutions unless all of the columns are new.
If A ∈ Mm,n (R) has more columns than rows then its echelon matrix E
must have old columns. Indeed, each new column comes from the leading 1 in
a distinct row, so

(number of new columns of E) ≤ (number of rows of E) < (number of columns of E),

showing that not all the columns are new. Thus A is not invertible when
m < n. On the other hand, if A ∈ Mm,n (R) has more rows than columns and

it has an inverse matrix A−1 ∈ Mn,m (R), then A−1 in turn has inverse A, but
this is impossible, because A−1 has more columns than rows. Thus A is also
not invertible when m > n.
The remaining case is that A is square. The only square echelon matrix
with all new columns is I, the identity matrix (Exercise 3.3.10). Thus, unless
A’s echelon matrix is I, A is not invertible. On the other hand, if A’s echelon
matrix is I, then P A = I for some product P of elementary matrices. Multiply
from the left by P −1 to get A = P −1 ; this is invertible by P , giving A−1 = P .
This discussion is summarized in the following theorem.
Theorem 3.3.7 (Invertibility and echelon form for matrices). A non-
square matrix A is never invertible. A square matrix A is invertible if and
only if its echelon form is the identity matrix.
When A is square, the discussion above gives an algorithm that simulta-
neously checks whether it is invertible and finds its inverse when it is.
Proposition 3.3.8 (Matrix inversion algorithm). Given A ∈ Mn (R), set
up the matrix
B = [A ∣ In ]
in Mn,2n (R). Carry out row operations on this matrix to reduce the left side
to echelon form. If the left side reduces to In then A is invertible and the right
side is A−1 . If the left side doesn’t reduce to In then A is not invertible.
The algorithm works because if B is left multiplied by a product P of
elementary matrices, the result is
P B = [P A ∣ P ] .
As discussed, P A = In exactly when P = A−1 .
For example, the calculation
⎡ 1 −1 0 1 0 0 ⎤ ⎡ 1 0 0 1 1 1 ⎤
⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥
R1;2,1 R2;3,1 ⎢ 0 1 −1 0 1 0 ⎥ = ⎢ 0 1 0 0 1 1 ⎥
⎢ ⎥ ⎢ ⎥
⎢0 0 1 0 0 1⎥ ⎢0 0 1 0 0 1⎥
⎣ ⎦ ⎣ ⎦
shows that
⎡ 1 −1 0 ⎤−1 ⎡ 1 1 1 ⎤
⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥
⎢ 0 1 −1 ⎥ = ⎢ 0 1 1 ⎥ ,
⎢ ⎥ ⎢ ⎥
⎢0 0 1⎥ ⎢0 0 1⎥
⎣ ⎦ ⎣ ⎦
and one readily checks that the claimed inverse really works. Since arithmetic
by hand is so error-prone a process, one always should confirm one’s answer
from the matrix inversion algorithm.
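Here is a minimal Python sketch of Proposition 3.3.8; the partial pivoting (choosing the largest available pivot) and the tolerance are numerical-robustness choices of the sketch, not part of the algorithm as stated.

```python
import numpy as np

def invert(A, tol=1e-12):
    """Gauss-Jordan on [A | I]: returns the inverse, or None if A is singular."""
    n = len(A)
    B = np.hstack([A.astype(float), np.eye(n)])
    for c in range(n):
        p = c + np.argmax(np.abs(B[c:, c]))   # pivot row
        if abs(B[p, c]) < tol:
            return None                        # the echelon form is not I
        B[[c, p]] = B[[p, c]]                 # transposition
        B[c] /= B[c, c]                       # scale
        for r in range(n):
            if r != c:
                B[r] -= B[r, c] * B[c]        # recombine
    return B[:, n:]

A = np.array([[1., -1., 0.], [0., 1., -1.], [0., 0., 1.]])
print(invert(A))   # [[1. 1. 1.] [0. 1. 1.] [0. 0. 1.]], as computed above
```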
We now have an algorithmic answer to the question at the beginning of
the section.
Theorem 3.3.9 (Echelon criterion for invertibility). The linear map-
ping S ∶ Rn Ð→ Rm is invertible only when m = n and its matrix A has
echelon matrix In , in which case its inverse S −1 is the linear mapping with
matrix A−1 .

Exercises

3.3.1. Write down the following 3 × 3 elementary matrices and their inverses:
R3;2,π , S3,3 , T3;2 , T2;3 .
3.3.2. Finish the proof of Proposition 3.3.2.

3.3.3. Let A = \begin{bmatrix} 1 & 2 \\ 3 & 4 \\ 5 & 6 \end{bmatrix}. Evaluate the following products without actually multiplying matrices: R3;2,π A, S3,3 A, T3;2 A, T2;3 A.
3.3.4. Finish the proof of Lemma 3.3.3, part (1).
3.3.5. What is the effect of right multiplying the m × n matrix M by an n × n
matrix Ri;j,a? By Si,a? By Ti;j?
3.3.6. Recall the transpose of a matrix M (cf. Exercise 3.2.4), denoted M^T. Prove: (Ri;j,a)^T = Rj;i,a; (Si,a)^T = Si,a; (Ti;j)^T = Ti;j. Use these results and the formula (AB)^T = B^T A^T to redo the previous problem.

3.3.7. Are the following matrices echelon? For each matrix M , solve the equa-
tion M x = 0.
\begin{bmatrix} 1 & 0 & 3 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{bmatrix}, \quad
\begin{bmatrix} 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \quad
\begin{bmatrix} 1 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 \end{bmatrix}, \quad
\begin{bmatrix} 0 & 0 \\ 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{bmatrix}, \quad
\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}, \quad
\begin{bmatrix} 0 & 1 & 1 \\ 1 & 0 & 3 \\ 0 & 0 & 0 \end{bmatrix}.
3.3.8. For each matrix A solve the equation Ax = 0.
⎡ −1 1 4 ⎤ ⎡ 2 −1 3 2 ⎤ ⎡ 3 −1 2 ⎤
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 1 3 8⎥, ⎢1 4 0 1⎥, ⎢2 1 1⎥.
⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 1 2 5⎥ ⎢ 2 6 −1 5 ⎥ ⎢ 1 −3 0 ⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦
3.3.9. Balance the chemical equation

Ca + H3 PO4 Ð→ Ca3 P2 O8 + H2 .

3.3.10. Prove by induction that the only square echelon matrix with all new
columns is the identity matrix.
3.3.11. Are the following matrices invertible? Find the inverse when possible,
and then check your answer.
\begin{bmatrix} 1 & -1 & 1 \\ 2 & 0 & 1 \\ 3 & 0 & 1 \end{bmatrix}, \quad
\begin{bmatrix} 2 & 5 & -1 \\ 4 & -1 & 2 \\ 6 & 4 & 1 \end{bmatrix}, \quad
\begin{bmatrix} 1 & 1/2 & 1/3 \\ 1/2 & 1/3 & 1/4 \\ 1/3 & 1/4 & 1/5 \end{bmatrix}.

3.3.12. The matrix A is called lower triangular if aij = 0 whenever i < j.


If A is a lower triangular square matrix with all diagonal entries equal to 1,
show that A is invertible and A−1 takes the same form.

3.3.13. This exercise refers back to the Gram–Schmidt exercise in Chapter 2.


That exercise expresses the relation between the vectors {x′j } and the vectors
{xj } formally as x′ = Ax, where x′ is a column vector whose entries are the
vectors x′1 , . . . , x′n , x is the corresponding column vector of xj ’s, and A is an
n × n lower triangular matrix.
Show that each xj has the form

xj = a′j1 x′1 + a′j2 x′2 + ⋯ + a′j,j−1 x′j−1 + x′j ,

and thus every linear combination of the original {xj } is also a linear combi-
nation of the new {x′j }.

3.4 Inhomogeneous Linear Equations

The question whether a linear mapping T is invertible led to solving the linear
equation Ax = 0, where A was the matrix of T . Such a linear equation, with
right side 0, is called homogeneous. An inhomogeneous linear equation
has nonzero right side,

Ax = b, A ∈ Mm,n (R), x ∈ Rn , b ∈ Rm , b ≠ 0.

The methods of the homogeneous case apply here too. If P is a product of


m×m elementary matrices such that P A is echelon (call it E), then multiplying
the inhomogeneous equation from the left by P gives

Ex = P b,

and since P b is just a vector, the solutions to this can be read off as in the
homogeneous case. There may not always be solutions, however.
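A short Python sketch of this procedure, using exact rational arithmetic on a made-up 3 × 3 system (the function name row_reduce is ad hoc):

```python
from fractions import Fraction

def row_reduce(M):
    """Reduce M to its echelon form by transpositions, scales, recombines."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]                 # transposition
        M[r] = [x / M[r][c] for x in M[r]]              # scale
        for i in range(rows):
            if i != r and M[i][c] != 0:                 # recombine
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# Reduce the augmented matrix [A | b]; a leading 1 in the last column
# would signal that the equation Ax = b has no solution.
for row in row_reduce([[2, 1, -1, 3], [1, 1, 1, 2], [3, 2, 0, 5]]):
    print(row)    # reads off x1 = 1 + 2 x3, x2 = 1 - 3 x3, x3 free
```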

Exercises

3.4.1. Solve the inhomogeneous equations


⎡ 1 −1 2 ⎤ ⎡1⎤ ⎡ 1 −2 1 2 ⎤ ⎡1⎤
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 2 0 2 ⎥ x = ⎢1⎥ , ⎢ 1 1 −1 1 ⎥ x = ⎢2⎥ .
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 1 −3 4 ⎥ ⎢2⎥ ⎢ 1 7 −5 −1 ⎥ ⎢3⎥
⎣ ⎦ ⎣ ⎦ ⎣ ⎦ ⎣ ⎦
3.4.2. For what values b1 , b2 , b3 does the equation
⎡ 3 −1 2 ⎤ ⎡b ⎤
⎢ ⎥ ⎢ 1⎥
⎢ ⎥ ⎢ ⎥
⎢ 2 1 1 ⎥ x = ⎢b2 ⎥
⎢ ⎥ ⎢ ⎥
⎢ 1 −3 0 ⎥ ⎢b3 ⎥
⎣ ⎦ ⎣ ⎦
have a solution?

3.4.3. A parent has a son and a daughter. The parent is four times as old
as the daughter, and the daughter is four years older than the son. In three
years, the parent will be five times as old as the son. How old are the parent,
daughter, and son?

3.4.4. Show that to solve an inhomogeneous linear equation, one may solve
a homogeneous system in one more variable and then restrict to solutions for
which the last variable is equal to −1.

3.5 The Determinant: Characterizing Properties and Their Consequences

In this section all matrices are square, n × n. The goal is to define a function
that takes such a matrix, with its n2 entries, and returns a single number.
The putative function is called the determinant,

det ∶ Mn (R) Ð→ R.

For every square matrix A ∈ Mn (R), the scalar det(A) should contain as
much algebraic and geometric information about the matrix as possible. Not
surprisingly, so informative a function is complicated to encode.
This context nicely demonstrates a pedagogical principle already men-
tioned in Section 3.1: characterizing a mathematical object illuminates its
construction and its use. Rather than beginning with a definition of the de-
terminant, we will stipulate a few natural behaviors for it, and then we will
eventually see that
• there is a function with these behaviors (existence),
• there is only one such function (uniqueness), and, most importantly,
• these behaviors, rather than the definition, further show how the function
works (consequences).
We could start at the first bullet (existence) and proceed from the construction
of the determinant to its properties, but when a construction is complicated
(as the determinant’s construction is), it fails to communicate intent, and
pulling it out of thin air as the starting point of a long discussion is an obstacle
to understanding. A few naturally gifted readers will see what the unexplained
idea really is, enabling them to skim the ensuing technicalities and go on to
start using the determinant effectively; some other tough-minded readers can
work through the machinery and then see its operational consequences; but
it is all too easy for the rest of us to be defeated by disorienting detail-fatigue
before the presentation gets to the consequential points and provides any
energizing clarity.
Another option would be to start at the second bullet (uniqueness), letting
the desired properties of the determinant guide our construction of it. This

process wouldn’t be as alienating as starting with existence, but deriving


the determinant’s necessary construction has only limited benefit, because
we intend to use the construction as little as possible. Working through the
derivation would still squander our energy on the internal mechanisms of the
determinant before getting to its behavior, when its behavior is what truly
lets us understand it. We first want to learn to use the determinant easily and
artfully. Doing so will make its internals feel of secondary importance, as they
should.
The upshot is that in this section we will pursue the third bullet (conse-
quences), and then the next section will proceed to the second bullet (unique-
ness) and finally the first one (existence).
Instead of viewing the determinant only as a function of a matrix A ∈
Mn (R) with n2 scalar entries, view it also as a function of A’s n rows, each of
which is an n-vector. If A has rows r1 , . . . , rn , write det(r1 , . . . , rn ) for det(A).
Thus, det is now being interpreted as a function of n vectors, i.e., the domain
of det is n copies of Rn ,

det ∶ Rn × ⋯ × Rn Ð→ R.

The advantage of this viewpoint is that now we can impose conditions on


the determinant, using language already at our disposal in a natural way.
Specifically, we make three requirements:
(1) The determinant is multilinear, meaning that it is linear as a function
of each of its vector variables when the rest are held fixed. That is, for all
vectors r1 , . . . , rk , rk′ , . . . , rn and every scalar α,

det(r1 , . . . , αrk + rk′ , . . . , rn ) = α det(r1 , . . . , rk , . . . , rn )


+ det(r1 , . . . , rk′ , . . . , rn ).

(2) The determinant is skew-symmetric as a function of its vector vari-


ables, meaning that exchanging any two inputs changes the sign of the
determinant,

det(r1 , . . . , rj , . . . , ri , . . . , rn ) = − det(r1 , . . . , ri , . . . , rj , . . . , rn ).

(Here i ≠ j.) Consequently, the determinant is also alternating, meaning


that if two inputs ri and rj are equal then det(r1 , . . . , rn ) = 0.
(3) The determinant is normalized, meaning that the standard basis has
determinant 1,
det(e1 , . . . , en ) = 1.
Condition (1) does not say that det(αA+A′ ) = α det(A)+det(A′ ) for scalars α
and square matrices A, A′ . Especially, the determinant is not additive,

det(A + B) is in general not det(A) + det(B). (3.5)



What the condition does say is that if all rows but one of a square matrix are
held fixed, then the determinant of the matrix varies linearly as a function
of the one row. By induction, an equivalent statement of multilinearity is the
more cluttered

det(r_1, \ldots, \sum_i α_i r_{k,i}, \ldots, r_n) = \sum_i α_i \det(r_1, \ldots, r_{k,i}, \ldots, r_n),

but to keep the notation manageable, we work with the simpler version.
We will prove the following theorem in the next section.

Theorem 3.5.1 (Existence and uniqueness of the determinant). One,


and only one, multilinear skew-symmetric normalized function from the n-fold
product of Rn to R exists. This function is the determinant,

det ∶ Rn × ⋯ × Rn Ð→ R.

Furthermore, all multilinear skew-symmetric functions from the n-fold product


of Rn to R are scalar multiples of the determinant. That is, every multilinear
skew-symmetric function δ ∶ Rn × ⋯ × Rn Ð→ R is

δ = c ⋅ det where c = δ(e1 , . . . , en ).

In more structural language, Theorem 3.5.1 says that the multilinear skew-
symmetric functions from the n-fold product of Rn to R form a 1-dimensional
vector space over R, and {det} is a basis.
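The three characterizing behaviors are easy to watch numerically. The following Python sketch checks them on random rows, taking numpy's determinant as a stand-in for det (an illustration, of course, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
rows = rng.normal(size=(4, 4))              # four row vectors r1, ..., r4
r_new, alpha = rng.normal(size=4), 2.5

det_rows = lambda *rs: np.linalg.det(np.vstack(rs))

# Multilinearity in the first row:
lhs = det_rows(alpha * rows[0] + r_new, rows[1], rows[2], rows[3])
rhs = alpha * det_rows(*rows) + det_rows(r_new, rows[1], rows[2], rows[3])
print(np.isclose(lhs, rhs))                                       # True

# Skew-symmetry: exchanging two rows flips the sign:
print(np.isclose(det_rows(rows[1], rows[0], rows[2], rows[3]),
                 -np.linalg.det(rows)))                           # True

# Normalization: the standard basis has determinant 1:
print(np.isclose(np.linalg.det(np.eye(4)), 1.0))                  # True
```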
The reader may object that even if the conditions of multilinearity, skew-
symmetry, and normalization are grammatically natural, they are concep-
tually opaque. Indeed, they reflect considerable hindsight, since the idea of
a determinant originally emerged from explicit calculations. But again, the
payoff is that characterizing the determinant rather than constructing it illu-
minates its many useful properties. The rest of the section can be viewed as
an amplification of this idea.
For one consequence of the determinant’s existence, with no reference to
its uniqueness, consider the standard basis of Rn taken in order,

(e1 , . . . , en ).

Suppose that some succession of m pair-exchanges of the vectors in this or-


dered n-tuple has no net effect, i.e., after the m pair-exchanges, the vectors are
back in their original order. By skew-symmetry each pair-exchange changes
the sign of the determinant, and so after all m pair-exchanges the net result
is
(−1)m det(e1 , . . . , en ) = det(e1 , . . . , en ).
Since the determinant is normalized, this says that (−1)m = 1, i.e., m is even.
That is, no odd number of pair-exchanges can leave an ordered n-tuple in

its initial order. Consequently, if two different sequences of pair-exchanges


have the same net effect then their lengths are both odd or both even—this is
because running one sequence forward and then the other backward has no net
effect and hence comes to an even number of moves. In other words, although
a net rearrangement of an n-tuple does not determine a unique succession of
pair-exchanges to bring it about, or even a unique number of such exchanges, it
does determine the parity of any such number: the net rearrangement requires
an odd number of pair-exchanges, or it requires an even number. (For reasons
related to this, an old puzzle involving fifteen squares that slide in a 4 × 4 grid
can be made unsolvable by popping two pieces out and exchanging them.)
The fact that the parity of a rearrangement is well defined may be easy to
believe, perhaps so easy that the need for a proof is hard to see, but a proof
really is required. The determinant’s skew-symmetry and normalization are so
powerful that they give the result essentially as an afterthought. Alternatively,
see Exercise 3.5.2 for an elementary proof that does not invoke the existence
of the determinant. To summarize:
The existence of a determinant with no reference to its uniqueness,
or an argument that makes no reference to the determinant at all,
shows that every rearrangement of n objects has a well-defined parity,
meaning that either all sequences of pair-exchanges that put the objects
back in order have even length or all such sequences have odd length.
In the next section we will show that there are as many candidate determinants
(multilinear skew-symmetric normalized functions) as there are ways to assign
a parity to each rearrangement of n objects, with no assumption that any
determinant exists. So there could be as many as 2^{n!} candidate determinants,
in the extreme case that each rearrangement can be put back in order by
an odd number of pair-exchanges and by an even number. And in the next
section we will use one particular assignment of a parity to each rearrangement
to show that a determinant exists. As in the previous displayed text, once a
determinant exists, only one parity-assignment function exists, and so the
determinant is unique. The logic here is subtle, and so the reader may prefer
to rely on Exercise 3.5.2 to defray any concern about arguing in a circle. If the
uniqueness of parity is established first then the ideas lay themselves out more
clearly: a unique candidate determinant presents itself, and we show that it
works.
The next result is the crucial property of the determinant, in consequence
of its characterizing properties.
Theorem 3.5.2 (The determinant is multiplicative). For all matrices
A, B ∈ Mn (R), the determinant of the matrix product is the product of the
scalar determinants,
det(AB) = det(A) det(B).
Further, if A is invertible then the determinant of the matrix inverse is the
scalar inverse of the determinant,

det(A−1 ) = (det(A))−1 .

Multilinearity says that the determinant behaves well additively and


scalar-multiplicatively as a function of each of n vectors, while (3.5) says that
the determinant does not behave well additively as a function of one matrix.
Theorem 3.5.2 says that the determinant behaves perfectly well multiplica-
tively as a function of one matrix. Also, the theorem tacitly says that if A is
invertible then det(A) is nonzero. Soon we will establish the converse as well.

Proof. Let B ∈ Mn (R) be fixed. Consider the function

δ ∶ Mn (R) Ð→ R, δ(A) = det(AB).

As a function of the rows of A, δ is the determinant of the rows of AB,

δ ∶ Rn × ⋯ × Rn Ð→ R, δ(r1 , . . . , rn ) = det(r1 B, . . . , rn B).

The function δ is multilinear and skew-symmetric. To show multilinearity,


compute (using the definition of δ in terms of det, properties of vector–matrix
algebra, the multilinearity of det, and the definition of δ again),

δ(r1 , . . . , αrk + rk′ , . . . , rn ) = det(r1 B, . . . , (αrk + rk′ )B, . . . , rn B)


= det(r1 B, . . . , αrk B + rk′ B, . . . , rn B)
= α det(r1 B, . . . , rk B, . . . , rn B)
+ det(r1 B, . . . , rk′ B, . . . , rn B)
= α δ(r1 , . . . , rk , . . . , rn )
+ δ(r1 , . . . , rk′ , . . . , rn ).

To show skew-symmetry, take two distinct indices i, j ∈ {1, . . . , n} and compute


similarly,

δ(r1 , . . . , rj , . . . , ri , . . . , rn ) = det(r1 B, . . . , rj B, . . . , ri B, . . . , rn B)
= − det(r1 B, . . . , ri B, . . . , rj B, . . . , rn B)
= −δ(r1 , . . . , ri , . . . , rj , . . . , rn ).

Also compute that

δ(e1 , . . . , en ) = det(e1 B, . . . , en B) = det(B).

It follows from Theorem 3.5.1 that δ(A) = det(B) det(A), and this is the
desired main result det(AB) = det(A) det(B) of the theorem. Finally, if A is
invertible then

det(A) det(A−1 ) = det(AA−1 ) = det(I) = 1.

That is, det(A−1 ) = (det(A))−1 . The proof is complete. ⊓⊔




One consequence of the theorem is

det(A−1 BA) = det(B), A, B ∈ Mn (R), A invertible.

And we note that the same result holds for the trace, introduced in Exer-
cise 3.2.5, in consequence of that exercise,

tr(A−1 BA) = tr(B), A, B ∈ Mn (R), A invertible.

More facts about the determinant are immediate consequences of its char-
acterizing properties.

Proposition 3.5.3 (Determinants of elementary and echelon matrices).
(1) det(Ri;j,a ) = 1 for all i, j ∈ {1, . . . , n} (i ≠ j) and a ∈ R.
(2) det(Si,a ) = a for all i ∈ {1, . . . , n} and nonzero a ∈ R.
(3) det(Ti;j ) = −1 for all i, j ∈ {1, . . . , n} (i ≠ j).
(4) If E is n × n echelon then

det(E) = \begin{cases} 1 & \text{if } E = I, \\ 0 & \text{otherwise.} \end{cases}

Proof. (1) Compute

det(Ri;j,a ) = det(e1 , . . . , ei + aej , . . . , ej , . . . , en )


= det(e1 , . . . , ei , . . . , ej , . . . , en ) + a det(e1 , . . . , ej , . . . , ej , . . . , en )
= 1 + a ⋅ 0 = 1.

The proofs of statements (2) and (3) are similar. For (4), if E = I then det(E) =
1, because the determinant is normalized. Otherwise the bottom row of E is 0,
and because a linear function takes 0 to 0 it follows that det(E) = 0. ⊓⊔

For one consequence of Theorem 3.5.2 and Proposition 3.5.3, recall that
every matrix A ∈ Mn (R) has a transpose matrix AT , obtained by flipping A
about its northwest–southeast diagonal. The next theorem (whose proof is
Exercise 3.5.4) says that all statements about the determinant as a function
of the rows of A also apply to the columns. This fact will be used without
comment from now on. In particular, det(A) is the unique multilinear skew-
symmetric normalized function of the columns of A.

Theorem 3.5.4 (Determinant and transpose). For all A ∈ Mn (R),


det(AT ) = det(A).

We also give another useful consequence of the determinant’s characteriz-


ing properties. A type of matrix that has an easily calculable determinant is a
triangular matrix, meaning a matrix all of whose subdiagonal entries are 0

or all of whose superdiagonal entries are 0. (Lower triangular matrices have


already been introduced in Exercise 3.3.12.) For example, the matrices
\begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} \qquad \text{and} \qquad \begin{bmatrix} a_{11} & 0 & 0 \\ a_{21} & a_{22} & 0 \\ a_{31} & a_{32} & a_{33} \end{bmatrix}
are triangular.

Proposition 3.5.5 (Determinant of a triangular matrix). The determi-


nant of a triangular matrix is the product of its diagonal entries.

Proof. We may consider only upper triangular matrices, because a lower tri-
angular matrix has an upper triangular matrix for its transpose. The 3×3 case
makes the general argument clear. The determinant of a 3×3 upper triangular
matrix A is
det A = \det\Big( \sum_{i_1=1}^{3} a_{1i_1} e_{i_1}, \ \sum_{i_2=2}^{3} a_{2i_2} e_{i_2}, \ \sum_{i_3=3}^{3} a_{3i_3} e_{i_3} \Big),

which, since the determinant is multilinear, is

det A = \sum_{i_1=1}^{3} \sum_{i_2=2}^{3} \sum_{i_3=3}^{3} a_{1i_1} a_{2i_2} a_{3i_3} \det(e_{i_1}, e_{i_2}, e_{i_3}).

Because the summation-index i3 takes only the value 3, this is

det A = \sum_{i_1=1}^{3} \sum_{i_2=2}^{3} a_{1i_1} a_{2i_2} a_{33} \det(e_{i_1}, e_{i_2}, e_3),

and the terms with i1 = 3 or i2 = 3 vanish because the determinant is alter-


nating, so the determinant further simplifies to

det A = \sum_{i_1=1}^{2} a_{1i_1} a_{22} a_{33} \det(e_{i_1}, e_2, e_3).

Now the term with i1 = 2 vanishes similarly, leaving

det A = a11 a22 a33 det(e1 , e2 , e3 ).

Finally, because the determinant is normalized, we have

det A = a11 a22 a33 .




A far more important consequence of Theorem 3.5.2 and Proposition 3.5.3


is one of the main results of this chapter. Recall that every matrix A row
reduces as
R1 ⋯RN A = E
where the Rk are elementary, E is echelon, and A is invertible if and only if
E = I. Because the determinant is multiplicative,
det(R1 )⋯ det(RN ) det(A) = det(E). (3.6)
But each det(Rk ) is nonzero, and det(E) is 1 if E = I and 0 otherwise, so this
gives the algebraic significance of the determinant:
Theorem 3.5.6 (Linear invertibility theorem). The matrix A ∈ Mn (R)
is invertible if and only if det(A) ≠ 0.
That is, the zeroness or nonzeroness of the determinant says whether the
matrix is invertible. Once the existence and uniqueness of the determinant
are established in the next section, we will continue to use the determinant
properties to interpret the magnitude and the sign of the determinant as well.
Not only does equation (3.6) prove the linear invertibility theorem, but
furthermore it describes an algorithm for computing the determinant of any
square matrix A: reduce A to echelon form by recombining, scaling, and trans-
position; if the echelon form is I then det(A) is the reciprocal product of the
scaling factors times −1 raised to the number of transpositions, and if the ech-
elon form is not I then det(A) = 0. We will give a more efficient determinant
algorithm in the next section.

Exercises
3.5.1. Consider a scalar-valued function of pairs of vectors,
ip ∶ Rn × Rn Ð→ R,
satisfying the following three properties.
(1) The function is bilinear,
ip(αx + α′ x′ , y) = α ip(x, y) + α′ ip(x′ , y),
ip(x, βy + β ′ y ′ ) = β ip(x, y) + β ′ ip(x, y ′ )
for all α, α′ , β, β ′ ∈ R and x, x′ , y, y ′ ∈ Rn .
(2) The function is symmetric,
ip(x, y) = ip(y, x) for all x, y ∈ Rn .
(3) The function is normalized,
ip(ei , ej ) = δij for all i, j ∈ {1, . . . , n}.
(The Kronecker delta δij was defined in Section 2.2.)

Compute that this function, if it exists at all, must be the inner product.
On the other hand, we already know that the inner product has these three
properties, so this exercise has shown that it is characterized by them.
3.5.2. Let n ≥ 2. This exercise proves, without invoking the determinant, that
every succession of pair-exchanges of the ordered set

(1, 2, . . . , n)

that has no net effect consists of an even number of exchanges.


To see this, consider a shortest-possible succession of an odd number of
pair-exchanges having in total no net effect. Certainly it must involve at least
three exchanges. We want to show that it can’t exist at all.
Let the notation
(i j) (where i ≠ j)
stand for exchanging the elements in positions i and j. Then in particular,
the first two exchanges in the succession take the form

(i j)(∗ ∗),

meaning to exchange the elements in positions i and j and then to exchange


the elements in another pair of positions. There are four cases,

(i j)(i j),
(i j)(i k), k ∉ {i, j},
(i j)(j k), k ∉ {i, j},
(i j)(k ℓ), k, ℓ ∉ {i, j}, k ≠ ℓ.

The first case gives a shorter succession of an odd number of pair-exchanges


having in total no net effect, and this is a contradiction. Show that the other
three cases can be rewritten in the form

(∗ ∗)(i ∗)

where the first exchange does not involve the ith slot. Next we may apply
the same argument to the second and third exchanges, then to the third and
fourth, and so on. Eventually, either a contradiction arises from the first of
the four cases, or only the last pair-exchange involves the ith slot. Explain
why the second possibility is untenable, completing the argument.
3.5.3. Let f ∶ Rn × ⋯ × Rn Ð→ R be a multilinear skew-symmetric function,
and let c be a real number. Show that the function cf is again multilinear and
skew-symmetric.
3.5.4. This exercise shows that det(AT ) = det(A) for every square matrix A.
(a) Show that det(RT ) = det(R) for every elementary matrix R. (That is,
R can be a recombine matrix, a scale matrix, or a transposition matrix.)

(b) If E is a square echelon matrix then either E = I or the bottom row


of E is 0. In either case, show that det(E T ) = det(E). (For the case E ≠ I, we
know that E is not invertible. What is E T en , and what does this say about
the invertibility of E T ?)
(c) Use the formula (M N )T = N T M T , Theorem 3.5.2, and Proposi-
tion 3.5.3 to show that det(AT ) = det(A) for all A ∈ Mn (R).
3.5.5. The square matrix A is orthogonal if AT A = I. Show that if A is
orthogonal then det(A) = ±1. Give an example with determinant −1.
3.5.6. The matrix A is skew-symmetric if AT = −A. Show that if A is n × n
skew-symmetric with n odd then det(A) = 0.

3.6 The Determinant: Uniqueness and Existence


Recall that Theorem 3.5.1 asserts that exactly one multilinear skew-symmetric
normalized function from the n-fold product of Rn to R exists. That is, a
unique determinant exists.
We warm up for the proof of the theorem by using the three defining
conditions of the determinant to show that only one formula is possible for
the determinant of a general 2 × 2 matrix,

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.

The first row of this matrix is

r1 = (a, b) = a(1, 0) + b(0, 1) = ae1 + be2 ,

and similarly its second row is r2 = ce1 + de2 . Thus, since we view the deter-
minant as a function of rows, its determinant must be

det(A) = det(r1 , r2 ) = det(ae1 + be2 , ce1 + de2 ).

Since the determinant is linear in its first vector variable, this expands to

det(ae1 + be2 , ce1 + de2 ) = a det(e1 , ce1 + de2 ) + b det(e2 , ce1 + de2 ),

and since the determinant is also linear in its second vector variable, this
expands further,

a det(e1 , ce1 + de2 ) + b det(e2 , ce1 + de2 )


= ac det(e1 , e1 ) + ad det(e1 , e2 )
+ bc det(e2 , e1 ) + bd det(e2 , e2 ).

But since the determinant is skew-symmetric and alternating, this expanded


expression simplifies considerably,

ac det(e1 , e1 ) + ad det(e1 , e2 ) + bc det(e2 , e1 ) + bd det(e2 , e2 )


= (ad − bc) det(e1 , e2 ).

And finally, since the determinant is normalized, we have found the only
possible formula for the 2 × 2 case,

det(A) = ad − bc.

All three characterizing properties of the determinant were required to derive


this formula. More subtly (though in this context trivially), the fact that this
is the only possible formula tacitly relies on the fact that every sequence of
exchanges of e1 and e2 that leaves them in order has even length, and every
such sequence that exchanges their order has odd length.
As a brief digression, the reader can use the matrix inversion algorithm
from Section 3.3 to verify that the 2 × 2 matrix A is invertible if and only if
ad−bc is nonzero, showing that the formula for the 2×2 determinant arises from
considerations of invertibility as well as from our three conditions. However,
the argument requires cases, e.g., a ≠ 0 and a = 0, making this approach
uninviting for larger matrices.
Returning to the main line of exposition, nothing here has yet shown that
a determinant function exists at all for 2 × 2 matrices. What it has shown is
that there is only one possibility,

det((a, b), (c, d)) = ad − bc.

But now that we have the only possible formula, checking that indeed it
satisfies the desired properties is purely mechanical. For example, to verify
linearity in the first vector variable, compute

det(α(a, b) + (a′ , b′ ), (c, d)) = det((αa + a′ , αb + b′ ), (c, d))


= (αa + a′ )d − (αb + b′ )c
= α(ad − bc) + (a′ d − b′ c)
= α det((a, b), (c, d)) + det((a′ , b′ ), (c, d)).

For skew-symmetry,

det((c, d), (a, b)) = cb − da = −(ad − bc) = − det((a, b), (c, d)).

And for normalization,

det((1, 0), (0, 1)) = 1 ⋅ 1 − 0 ⋅ 0 = 1.

We should also verify linearity in the second vector variable, but this no longer
requires the defining formula. Instead, since the formula is skew-symmetric
and is linear in the first variable,

det(r1 , αr2 + r2′ ) = − det(αr2 + r2′ , r1 )


= −(α det(r2 , r1 ) + det(r2′ , r1 ))
= −( − α det(r1 , r2 ) − det(r1 , r2′ ))
= α det(r1 , r2 ) + det(r1 , r2′ ).

This little trick illustrates the value of thinking in general terms: a slight
modification, inserting a few occurrences of “. . . ” and replacing the subscripts
1 and 2 by i and j, shows that for every n, the three required conditions for
the determinant are redundant—linearity in one vector variable combines with
skew-symmetry to ensure linearity in each vector variable.
One can similarly show that for a 1 × 1 matrix,

A = [a],

the only possible formula for its determinant is

det(A) = a,

and that indeed this works. The result is perhaps silly, but the exercise of
working through a piece of language and logic in the simplest instance can
help one to understand its more elaborate cases. As another exercise, the same
techniques show, granting that each permutation of three elements has only
one parity, that the only possible formula for a 3 × 3 determinant is
det \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix} = aek + bfg + cdh − afh − bdk − ceg.
This formula is complicated enough that we should rethink it in a more sys-
tematic way before verifying that it has the desired properties. And we may
as well generalize it to arbitrary n in the process. Here are some observations
about the 3 × 3 formula:
• It is a sum of 3-fold products of matrix entries.
• Every 3-fold product contains one element from each row of the matrix.
• Every 3-fold product also contains one element from each column of the
matrix. So every 3-fold product arises from the positions of three rooks
that don’t threaten each other on a 3 × 3 chessboard.
• Every 3-fold product comes weighted by a “+” or a “−”.
Similar observations apply to the 1×1 and 2×2 formulas. Our general formula
should encode them. Making it do so is partly a matter of notation, but also
an idea is needed to describe the appropriate distribution of plus signs and
minus signs among the terms. The following language provides all of this.

Definition 3.6.1 (Permutation). A permutation of {1, 2, . . . , n} is a vec-


tor

π = (π(1), π(2), . . . , π(n))


whose entries are {1, 2, . . . , n}, each appearing once, in any order. An inver-
sion in the permutation π is a pair of entries with the larger one to the left.
The sign of the permutation π, written (−1)π , is −1 raised to the number of
inversions in π. The set of permutations of {1, . . . , n} is denoted Sn .

Examples are the permutations π = (1, 2, 3, . . . , n), σ = (2, 1, 3, . . . , n), and


τ = (5, 4, 3, 2, 1) (here n = 5). In these examples π has no inversions, σ has
one, and τ has ten. Thus (−1)π = 1, (−1)σ = −1, and (−1)τ = 1. In general,
the sign of a permutation with an even number of inversions is 1 and the
sign of a permutation with an odd number of inversions is −1. There are n!
permutations of {1, 2, . . . , n}; that is, the set Sn contains n! elements.
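Counting inversions is directly mechanizable. A brief Python sketch (the function name sign is ad hoc) reproduces the three examples:

```python
from itertools import combinations

def sign(pi):
    """(-1) raised to the number of inversions in the permutation pi."""
    inversions = sum(1 for i, j in combinations(range(len(pi)), 2)
                     if pi[i] > pi[j])
    return -1 if inversions % 2 else 1

print(sign((1, 2, 3, 4, 5)), sign((2, 1, 3, 4, 5)), sign((5, 4, 3, 2, 1)))
# 1 -1 1
```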
As advertised, permutations and their signs provide the notation for the
only possible n × n determinant formulas. Consider any n vectors
r_1 = \sum_{i=1}^{n} a_{1i} e_i, \quad r_2 = \sum_{j=1}^{n} a_{2j} e_j, \quad \ldots, \quad r_n = \sum_{p=1}^{n} a_{np} e_p.

Every multilinear function δ (if it exists at all) must satisfy

δ(r_1, r_2, \ldots, r_n) = δ\Big( \sum_{i=1}^{n} a_{1i} e_i, \sum_{j=1}^{n} a_{2j} e_j, \ldots, \sum_{p=1}^{n} a_{np} e_p \Big)
 = \sum_{i=1}^{n} \sum_{j=1}^{n} ⋯ \sum_{p=1}^{n} a_{1i} a_{2j} ⋯ a_{np} \, δ(e_i, e_j, \ldots, e_p).

If δ is also alternating then for every i, j, . . . , p ∈ {1, . . . , n},

δ(ei , ej , . . . , ep ) = 0 if any two subscripts agree.

Thus we may sum only over permutations,

δ(r_1, r_2, \ldots, r_n) = \sum_{(i,j,\ldots,p)∈S_n} a_{1i} a_{2j} ⋯ a_{np} \, δ(e_i, e_j, \ldots, e_p).

Consider any permutation π = (i, j, . . . , p). Suppose that π contains an in-


version, i.e., two elements are out of order. Then necessarily two elements in
adjacent slots are out of order. (For example, if i > p then either i > j, giving
adjacent elements out of order as desired; or j > i > p, so that j and p are an
out of order pair in closer slots than i and p, and so on.) If a permutation
contains any inversions, then exchanging a suitable adjacent pair decreases
the number of inversions by one, changing the sign of the permutation, while
exchanging the corresponding two input vectors changes the sign of the de-
terminant. Repeating this process until the permutation has no remaining
inversions shows that

δ(ei , ej , . . . , ep ) = (−1)π δ(e1 , e2 , . . . , en ).



That is, a possible formula for a multilinear skew-symmetric function δ is

δ(r1 , r2 , . . . , rn ) = ∑ (−1)π a1i a2j ⋯anp ⋅ c


π=(i,j,...,p)

where
c = δ(e1 , . . . , en ).
Especially, a possible formula for a multilinear skew-symmetric normalized
function is
det(r_1, r_2, \ldots, r_n) = \sum_{π=(i,j,\ldots,p)} (−1)^π \, a_{1i} a_{2j} ⋯ a_{np}.

Because (−1)^π arises from a specific method of undoing any permutation—exchange out-of-order neighboring pairs until none remain—it conceivably


need not be the only parity function of permutations. Further, the argument
here has shown that for any parity function sgn of permutations, the function

det_{sgn}(r_1, r_2, \ldots, r_n) = \sum_{π=(i,j,\ldots,p)} \mathrm{sgn}(π) \, a_{1i} a_{2j} ⋯ a_{np}

is a possible formula for a multilinear skew-symmetric normalized function,


and these are the only candidates. As discussed in the previous section, either
we already know that (−1)π is the unique parity of each permutation π by
Exercise 3.5.2, or we will know it as soon as the function constructed with it
in the penultimate display is shown to be multilinear, skew-symmetric, and
normalized.
Definition 3.6.2 (Determinant). The determinant function,

det ∶ Mn (R) Ð→ R,

is defined as follows. For every A ∈ Mn (R) with entries (aij ),

det(A) = \sum_{π∈S_n} (−1)^π \, a_{1π(1)} a_{2π(2)} ⋯ a_{nπ(n)}.

The formula in the definition is indeed the formula computed a moment


ago, because for every permutation π = (i, j, . . . , p) ∈ Sn we have π(1) = i,
π(2) = j, . . . , π(n) = p.
As an exercise to clarify the formula, we use it to reproduce the 3×3 deter-
minant. Each permutation in S3 determines a rook placement, and the sign of
the permutation is the parity of the number of northeast–southwest segments
joining any two of its rooks. For example, the permutation (2, 3, 1) specifies
that the rooks in the top, middle, and bottom rows are respectively in columns
2, 3, and 1, and the sign is positive because there are two northeast–southwest
segments. (See Figure 3.9.) The following table lists each permutation in S3
followed by the corresponding term in the determinant formula. For each per-
mutation, the term is its sign times the product of the three matrix entries
where its rooks are placed.

Figure 3.9. The rook placement for (2, 3, 1), showing the two inversions

π            (−1)^π a_{1π(1)} a_{2π(2)} a_{3π(3)}
(1, 2, 3)     aek
(1, 3, 2)    −afh
(2, 1, 3)    −bdk
(2, 3, 1)     bfg
(3, 1, 2)     cdh
(3, 2, 1)    −ceg
The sum of the right column entries is the anticipated formula from before,
det \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & k \end{bmatrix} = aek + bfg + cdh − afh − bdk − ceg.
The same procedure reproduces the 2 × 2 determinant as well,

det \begin{bmatrix} a & b \\ c & d \end{bmatrix} = ad − bc,

and even the silly 1 × 1 formula det[a] = a. The 2 × 2 and 3 × 3 cases are
worth memorizing. They can be visualized as adding the products along
northwest–southeast diagonals of the matrix and then subtracting the prod-
ucts along southwest–northeast diagonals, where the word diagonal connotes
wraparound in the 3×3 case. (See Figure 3.10.) But be aware that this pattern
of the determinant as the northwest–southeast diagonals minus the southwest–
northeast diagonals is valid only for n = 2 and n = 3.
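The defining formula is short to transcribe into code. The following Python sketch (an aside; it is hopeless as an algorithm beyond small n, since the sum has n! terms) checks it against numpy's determinant on a random matrix:

```python
import numpy as np
from itertools import permutations

def det_formula(A):
    """Definition 3.6.2: the sum over all permutations of signed products."""
    n = len(A)
    total = 0.0
    for pi in permutations(range(n)):
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if pi[i] > pi[j])
        term = (-1.0) ** inversions
        for i in range(n):
            term *= A[i][pi[i]]       # one entry from each row and column
        total += term
    return total

A = np.random.default_rng(2).normal(size=(5, 5))
print(np.isclose(det_formula(A), np.linalg.det(A)))   # True
```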
We have completed the program of the second bullet at the beginning of the
previous section, finding the only possible formula (the one in Definition 3.6.2)
that could satisfy the three desired determinant properties, its uniqueness
dependent on its doing so if we haven’t already shown that each permutation
has a unique sign. That is, we have now proved the uniqueness but not yet
the existence of the determinant in Theorem 3.5.1, the uniqueness possibly
provisional on the existence.

Figure 3.10. The 3 × 3 determinant

The first bullet tells us to prove the existence by verifying that the com-
puted determinant formula indeed satisfies the three stipulated determinant
properties. Similarly to the 2 × 2 case, this is a mechanical exercise. The im-
pediments are purely notational, but the notation is admittedly cumbersome,
and so the reader is encouraged to skim the next proof.

Proposition 3.6.3 (Properties of the determinant).


(1) The determinant is linear as a function of each row of A.
(2) The determinant is skew-symmetric as a function of the rows of A.
(3) The determinant is normalized.

Proof. (1) If A has rows ri = (ai1 , . . . , ain ) except that its kth row is the linear
combination αrk + rk′ where rk = (ak1 , . . . , akn ) and rk′ = (a′k1 , . . . , a′kn ), then
its (i, j)th entry is


\begin{cases} a_{ij} & \text{if } i ≠ k, \\ αa_{kj} + a'_{kj} & \text{if } i = k. \end{cases}

Thus

det(r1 , . . . , αrk + rk′ , . . . , rn )


= ∑ (−1)π a1π(1) ⋯(αakπ(k) + a′kπ(k) )⋯anπ(n)
π∈Sn

= α ∑ (−1)π a1π(1) ⋯akπ(k) ⋯anπ(n)


π∈Sn

+ ∑ (−1)π a1π(1) ⋯a′kπ(k) ⋯anπ(n)


π∈Sn

= α det(r1 , . . . , rk , . . . , rn ) + det(r1 , . . . , rk′ , . . . , rn ),

as desired.
(2) Let A have rows r1 , . . . , rn where ri = (ai1 , . . . , ain ). Suppose that rows
k and k + 1 are exchanged. The resulting matrix has (i, j)th entry


\begin{cases} a_{ij} & \text{if } i ∉ \{k, k+1\}, \\ a_{k+1,j} & \text{if } i = k, \\ a_{kj} & \text{if } i = k+1. \end{cases}
For each permutation π ∈ Sn , define a companion permutation π ′ by exchang-
ing the kth and (k + 1)st entries,

π ′ = (π(1), . . . , π(k + 1), π(k), . . . , π(n)).

Thus π ′ (k) = π(k + 1), π ′ (k + 1) = π(k), and π ′ (i) = π(i) for all other i.
As π varies through Sn , so does π′, and for each π we have the relation (−1)^{π′} = −(−1)^{π} (Exercise 3.6.6). The defining formula of the determinant gives

det(r_1, \ldots, r_{k+1}, r_k, \ldots, r_n)
 = \sum_{π} (−1)^{π} a_{1π(1)} ⋯ a_{k+1,π(k)} a_{kπ(k+1)} ⋯ a_{nπ(n)}
 = −\sum_{π′} (−1)^{π′} a_{1π′(1)} ⋯ a_{k+1,π′(k+1)} a_{kπ′(k)} ⋯ a_{nπ′(n)}
 = −det(r_1, \ldots, r_k, r_{k+1}, \ldots, r_n).

The previous calculation establishes the result when adjacent rows of A are
exchanged. To exchange rows k and ℓ in A where ℓ > k, carry out the following
adjacent row exchanges to trickle the kth row down to the ℓth and then bubble
the ℓth row back up to the kth, bobbing each row in between them up one
position and then back down:

rows k and k + 1,  rows k + 1 and k + 2,  . . . ,  rows ℓ − 2 and ℓ − 1,  rows ℓ − 1 and ℓ,
and then
rows ℓ − 2 and ℓ − 1,  . . . ,  rows k + 1 and k + 2,  rows k and k + 1.

The display shows that the process carries out an odd number of exchanges
(all but the bottom one come in pairs), each of which changes the sign of the
determinant.
(3) This is left to the reader (Exercise 3.6.7). ⊓⊔

So a determinant function with the stipulated behavior exists, making our


(−1)π the only possible sign of each permutation π if we don’t know this
already, and thus showing that the determinant is unique. Another construc-
tion of a determinant function, with no reference to permutations at all but
proceeding instead by induction on the dimension n of the matrices, is given
in Exercise 3.6.12. And we have seen that every multilinear skew-symmetric

function must be a scalar multiple of the determinant. The last comment nec-
essary to complete the proof of Theorem 3.5.1 is that since the determinant
is multilinear and skew-symmetric, so are its scalar multiples. This fact was
shown in Exercise 3.5.3.
The reader is invited to contemplate how unpleasant it would have been
to prove the various theorems about the determinant in the previous section
using the unwieldy determinant formula, with its n! terms, each an n-fold
product. That said, the theorems really can be shown directly from the for-
mula. For example, to prove that det(AT ) = det(A), one can write
det(A^T) = \sum_{π∈S_n} (−1)^{π} a_{π(1)1} a_{π(2)2} ⋯ a_{π(n)n},

and then persuade oneself that this is also the sum over the permutations π ′
that undo the permutations π, and the undo-permutations have the same
signs as the originals,

det(A^T) = \sum_{π′∈S_n} (−1)^{π′} a_{1π′(1)} a_{2π′(2)} ⋯ a_{nπ′(n)},

and this is det(A). Here we are adumbrating basic ideas from group theory.
The previous section has already established that the determinant of a
triangular matrix is the product of the diagonal entries, but the result also
follows immediately from the determinant formula (Exercise 3.6.8). This fact
should be cited freely to save time.
An algorithm for computing det(A) for every A ∈ Mn (R) is now at hand.
Algebraically, the idea is that if
P1 AP2 = ∆,
where P1 and P2 are products of elementary matrices and ∆ is a triangular
matrix, then since the determinant is multiplicative,
det(A) = det(P1 )−1 det(∆) det(P2 )−1 .
Multiplying A by P2 on the right carries out a sequence of column operations
on A, just as multiplying A by P1 on the left carries out row operations. Recall
that the determinants of the elementary matrices are
det(Ri;j,a ) = 1,
det(Si,a ) = a,
det(Ti;j ) = −1.
Procedurally, this all plays out as follows.
Proposition 3.6.4 (Determinant algorithm). Given A ∈ Mn (R), use row
and column operations—recombines, scales, transpositions—to reduce A to a
triangular matrix ∆. Then det(A) is det(∆) times the reciprocal of each scale
factor and times −1 for each transposition.

The only role that the determinant formula (as compared to the determi-
nant properties) played in obtaining this algorithm is that it gave the deter-
minant of a triangular matrix easily.
For example, the matrix
A = \begin{bmatrix} 1/0! & 1/1! & 1/2! & 1/3! \\ 1/1! & 1/2! & 1/3! & 1/4! \\ 1/2! & 1/3! & 1/4! & 1/5! \\ 1/3! & 1/4! & 1/5! & 1/6! \end{bmatrix}
becomes, after scaling the first row by 3!, the second row by 4!, the third row
by 5!, and the fourth row by 6!,
⎡ 6 6 3 1⎤
⎢ ⎥
⎢ 24 12 4 1⎥
⎢ ⎥
B=⎢ ⎥.
⎢ 60 20 5 1⎥
⎢ ⎥
⎢120 30 6 1⎥
⎣ ⎦
Subtract the first row from each of the others to get
⎡ 6 6 3 1⎤
⎢ ⎥
⎢ 18 6 1 0⎥
⎢ ⎥
C =⎢ ⎥,
⎢ 54 14 2 0⎥
⎢ ⎥
⎢114 24 3 0⎥
⎣ ⎦
and then scale the third row by 1/2 and the fourth row by 1/3, yielding
⎡ 6 6 3 1⎤
⎢ ⎥
⎢18 6 1 0⎥
⎢ ⎥
D=⎢ ⎥.
⎢27 7 1 0⎥
⎢ ⎥
⎢38 8 1 0⎥
⎣ ⎦
Next subtract the second row from the third and fourth rows, and scale the fourth row by 1/2 to get
⎡6 6 3 1⎤
⎢ ⎥
⎢18 6 1 0⎥
⎢ ⎥
E=⎢ ⎥.
⎢9 1 0 0⎥
⎢ ⎥
⎢10 1 0 0⎥
⎣ ⎦
Subtract the third row from the fourth, transpose the first and fourth columns,
and transpose the second and third columns, leading to
∆ = \begin{bmatrix} 1 & 3 & 6 & 6 \\ 0 & 1 & 6 & 18 \\ 0 & 0 & 1 & 9 \\ 0 & 0 & 0 & 1 \end{bmatrix}.
This triangular matrix has determinant 1, and so according to the algorithm,

det(A) = \frac{2 ⋅ 3 ⋅ 2}{6! \, 5! \, 4! \, 3!} = \frac{1}{1036800}.
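The algorithm is easy to mechanize. The following Python sketch reproduces this value using exact rational arithmetic; as an implementation choice it uses only recombines and transpositions, so no scale factors need to be tracked.

```python
from fractions import Fraction
from math import factorial

A = [[Fraction(1, factorial(i + j)) for j in range(4)] for i in range(4)]

def det_by_reduction(M):
    """Reduce to triangular form by recombines and row swaps, then
    multiply the diagonal entries, flipping the sign once per swap."""
    M = [row[:] for row in M]
    n, sign = len(M), 1
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return Fraction(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            sign = -sign
        for r in range(c + 1, n):
            m = M[r][c] / M[c][c]
            M[r] = [a - m * b for a, b in zip(M[r], M[c])]
    prod = Fraction(sign)
    for i in range(n):
        prod *= M[i][i]
    return prod

print(det_by_reduction(A))   # 1/1036800
```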
In the following exercises, feel free to use the determinant properties and
the determinant formula in whatever combined way gives you the least work.

Exercises

3.6.1. For this exercise, let n and m be positive integers, not necessarily equal,
and let Rn × ⋯ × Rn denote m copies of Rn . Consider any multilinear function

f ∶ Rn × ⋯ × Rn Ð→ R.

For any m vectors in Rn ,

a1 = (a11 , . . . , a1n ),
a2 = (a21 , . . . , a2n ),
⋮
am = (am1 , . . . , amn ),

explain why
f(a_1, a_2, \ldots, a_m) = \sum_{i=1}^{n} \sum_{j=1}^{n} ⋯ \sum_{p=1}^{n} a_{1i} a_{2j} ⋯ a_{mp} \, f(e_i, e_j, \ldots, e_p).

Since each f (ei , ej , . . . , ep ) is a constant (it depends on f , but not on the


vectors a1 , . . . , am ), the multilinear function f is a polynomial in the entries of
its vector-variables. Therefore, this exercise has shown that every multilinear
function is continuous.

3.6.2. Use the three desired determinant properties to derive the formulas in
this section for 1 × 1 and 3 × 3 determinants. Verify that the 1 × 1 formula
satisfies the properties.

3.6.3. For each permutation, count the inversions and compute the sign:
(2, 3, 4, 1), (3, 4, 1, 2), (5, 1, 4, 2, 3).

3.6.4. Explain why there are n! permutations of {1, . . . , n}.

3.6.5. Define the permutation µ = (n, n − 1, n − 2, . . . , 1) ∈ Sn . Show that µ has


(n − 1)n/2 inversions and that


(−1)^µ = \begin{cases} \phantom{−}1 & \text{if } n \text{ has the form } 4k \text{ or } 4k+1 \; (k ∈ Z), \\ −1 & \text{otherwise.} \end{cases}

3.6.6. Explain why (−1)^{π′} = −(−1)^{π} in the proof of part (2) of Proposition 3.6.3.

3.6.7. Use the defining formula of the determinant to reproduce the result
that det(In ) = 1.
3.6.8. Explain why in every term (−1)^π a_{1π(1)} a_{2π(2)} ⋯ a_{nπ(n)} from the determinant formula, \sum_{i=1}^{n} π(i) = \sum_{i=1}^{n} i. Use this to reexplain why the determinant of a triangular matrix is the product of its diagonal entries.
3.6.9. Calculate the determinants of the following matrices:
⎡ 4 3 −1 2 ⎤ ⎡ 1 −1 2 3 ⎤
⎢ ⎥ ⎢ ⎥
⎢0 1 2 3⎥ ⎢2 2 0 2⎥
⎢ ⎥ ⎢ ⎥
⎢ ⎥, ⎢ ⎥.
⎢1 0 4 1⎥ ⎢ 4 1 −1 −1 ⎥
⎢ ⎥ ⎢ ⎥
⎢2 0 3 0⎥ ⎢1 2 3 0⎥
⎣ ⎦ ⎣ ⎦
3.6.10. Show that the Vandermonde matrix,
\begin{bmatrix} 1 & a & a^2 \\ 1 & b & b^2 \\ 1 & c & c^2 \end{bmatrix},
has determinant (b − a)(c − a)(c − b). For what values of a, b, c is the Vander-
monde matrix invertible? (The idea is to do the problem conceptually rather
than writing out the determinant and then factoring it, so that the same ideas
would work for larger matrices. The determinant formula shows that the de-
terminant in the problem is a polynomial in a, b, and c. What is its degree in
each variable? Why must it vanish if any two variables are equal? Once you
have argued that the determinant is as claimed, don’t forget to finish the
problem.)
3.6.11. Consider the following n × n matrix based on Pascal’s triangle:
⎡1 1  1  1  ⋯     1    ⎤
⎢1 2  3  4  ⋯     n    ⎥
⎢1 3  6 10  ⋯ n(n+1)/2 ⎥
A = ⎢1 4 10 20  ⋯     ⋅    ⎥ .
⎢⋮ ⋮  ⋮  ⋮        ⋮    ⎥
⎣1 n n(n+1)/2 ⋅ ⋯  ⋅   ⎦
Find det(A). (Hint: Row and column reduce.)
3.6.12. This exercise constructs a determinant with no reference to permu-
tations or their signs, inductively on the dimension n of the matrix. Define
det1 ([a]) = a, and then
detn (A) = ∑_{j=1}^{n} (−1)^{1+j} a1j detn−1 (A1j ),    n ≥ 2,

where A1j is the (n − 1) × (n − 1) matrix obtained by removing the first row


and jth column of A. One can start instead with det0 ([ ]) = 1 and then the
displayed formula for n ≥ 1. Show by induction on n that detn is multilinear,
alternating (hence skew-symmetric), and normalized as a function of the rows
of A.

3.7 An Explicit Formula for the Inverse


Consider an invertible linear mapping

T ∶ Rn Ð→ Rn

having matrix
A ∈ Mn (R).
In Section 3.3 we discussed a process to invert A and thereby invert T . Now,
with the determinant in hand, we can also write the inverse of A explicitly in
closed form. Because the formula giving the inverse involves many determi-
nants, it is hopelessly inefficient for computation. Nonetheless, it is of interest
to us for a theoretical reason (the pending Corollary 3.7.3) that we will need
in Chapter 5.

Definition 3.7.1 (Classical adjoint). Let n ≥ 2 be an integer, and let A ∈


Mn (R) be an n × n matrix. For every i, j ∈ {1, . . . , n}, let

Ai,j ∈ Mn−1 (R)

be the (n − 1) × (n − 1) matrix obtained by deleting the ith row and the jth
column of A. The classical adjoint of A is the n × n matrix whose (i, j)th
entry is (−1)i+j times the determinant of Aj,i ,

Aadj = [(−1)i+j det(Aj,i )] ∈ Mn (R).

The factor (−1)i+j in the formula produces an alternating checkerboard


pattern of plus and minus signs, starting with a plus sign in the upper left cor-
ner of Aadj . Note that the (i, j)th entry of Aadj involves Aj,i rather than Ai,j .
For instance, in the 2 × 2 case,

[ a b ; c d ]adj = [ d −b ; −c a ].

Already for a 3 × 3 matrix, the formula for the classical adjoint is daunting,
⎡a b c⎤adj   ⎡  det [ e f ; h k ]   − det [ b c ; h k ]    det [ b c ; e f ] ⎤
⎢d e f ⎥   = ⎢ − det [ d f ; g k ]    det [ a c ; g k ]   − det [ a c ; d f ] ⎥
⎣g h k ⎦     ⎣  det [ d e ; g h ]   − det [ a b ; g h ]    det [ a b ; d e ] ⎦

⎡ ek − fh   ch − bk   bf − ce ⎤
= ⎢ fg − dk   ak − cg   cd − af ⎥ .
⎣ dh − eg   bg − ah   ae − bd ⎦

Returning to the 2 × 2 case, where

A = [ a b ; c d ]   and   Aadj = [ d −b ; −c a ],

compute that

A Aadj = [ ad − bc  0 ; 0  ad − bc ] = (ad − bc) [ 1 0 ; 0 1 ] = det(A)I2 .

The same result holds in general:


Proposition 3.7.2 (Classical adjoint identity). Let n ≥ 2 be an integer,
let A ∈ Mn (R) be an n × n matrix, and let Aadj be its classical adjoint. Then

A Aadj = det(A)In .

Especially, if A is invertible then


A−1 = (1/ det(A)) Aadj .
The idea of the proof is that the inner product of the ith row of A and
the ith column of Aadj gives precisely the formula for det(A), while for i ≠ j
the inner product of the ith row of A and the jth column of Aadj gives the
formula for the determinant of a matrix having the ith row of A as two of its
rows. The argument is purely formal but notationally tedious, and so we omit
it.
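Although the proof is omitted, the identity is easy to test by machine. The following sketch assumes Python with NumPy (not part of this text); the double loop simply transcribes Definition 3.7.1, and the sample matrix is arbitrary.

```python
import numpy as np

def classical_adjoint(A):
    """Return A^adj: the (i, j) entry is (-1)**(i + j) * det(A_{j, i})."""
    n = A.shape[0]
    adj = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # A_{j,i}: delete row j and column i (note the transposed indices).
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
assert np.allclose(A @ classical_adjoint(A), np.linalg.det(A) * np.eye(3))
```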
In the 2 × 2 case the proposition gives us a slogan:
To invert a 2 × 2 matrix, exchange the diagonal elements, change the
signs of the off-diagonal elements, and divide by the determinant.
Again, for n > 2 the explicit formula for the inverse is rarely of calculational
use. We care about it for the following reason.
Corollary 3.7.3. Let A ∈ Mn (R) be an invertible n × n matrix. Then each
entry of the inverse matrix A−1 is a continuous function of the entries of A.
Proof. Specifically, the (i, j)th entry of A−1 is

(A−1 )i,j = (−1)i+j det(Aj,i )/ det(A),

a rational function (ratio of polynomials) of the entries of A. As such it varies


continuously in the entries of A as long as A remains invertible. ⊓⊔

Exercise

3.7.1. Verify at least one diagonal entry and at least one off-diagonal entry
in the formula A Aadj = det(A)In for n = 3.

3.8 Geometry of the Determinant: Volume


Consider a linear mapping from n-space to n-space,

T ∶ Rn Ð→ Rn .

This section discusses two ideas:


• The mapping T magnifies volume by a constant factor. (Here volume is
a pandimensional term that in particular means length when n = 1, area
when n = 2, and the usual notion of volume when n = 3.) That is, there is
some number t ≥ 0 such that if one takes a set,

E ⊂ Rn ,

and passes it through the mapping to get another set,

T E ⊂ Rn ,

then the set’s volume is multiplied by t,

vol T E = t ⋅ vol E.

The magnification factor t depends on T but is independent of the set E.


• Furthermore, if the matrix of T is A then the magnification factor associ-
ated to T is
t = ∣ det A∣.
That is, the absolute value of det A has a geometric interpretation as the
factor by which T magnifies volume.
(The geometric interpretation of the sign of det A will be discussed in the next
section.)
An obstacle to pursuing these ideas is that we don’t have a theory of
volume in Rn readily at hand. In fact, volume presents real difficulties. For
instance, no notion of volume that has sensible properties can apply to all sets;
so either volume behaves unreasonably or some sets don’t have well-defined
volumes at all. Here we have been tacitly assuming that volume does behave
well and that the sets E under consideration do have volumes. This section
will investigate volume informally by considering how it ought to behave,
stating assumptions as they arise and arriving only at a partial description.
The resulting arguments will be heuristic, and the skeptical reader will see
gaps in the reasoning. Volume will be discussed further in Chapter 6, but a
full treatment of the subject (properly called measure) is beyond the range of
this text.
The standard basis vectors e1 , . . . , en in Rn span the unit box,

B = {α1 e1 + ⋯ + αn en ∶ 0 ≤ α1 ≤ 1, . . . , 0 ≤ αn ≤ 1}.

Thus box means interval when n = 1, rectangle when n = 2, and the usual
notion of box when n = 3. Let p be a point in Rn , let a1 , . . . , an be positive
real numbers, and let B ′ denote the box spanned by the vectors a1 e1 , . . . , an en
and translated by p,

B ′ = {α1 a1 e1 + ⋯ + αn an en + p ∶ 0 ≤ α1 ≤ 1, . . . , 0 ≤ αn ≤ 1}.

(See Figure 3.11. The figures of this section are set in two dimensions, but the
ideas are general and hence so are the figure captions.) A face of a box is the
set of its points such that some particular αi is held fixed at 0 or at 1 while
the others vary. A box in Rn has 2n faces.

Figure 3.11. Scaling and translating the unit box

A natural definition is that the unit box has unit volume,

vol B = 1.

We assume that volume is unchanged by translation. Also, we assume that box


volume is finitely additive, meaning that given finitely many boxes B1 , . . . , BM
that are disjoint except possibly for shared faces or shared subsets of faces,
the volume of their union is the sum of their volumes,
vol ⋃_{i=1}^{M} Bi = ∑_{i=1}^{M} vol Bi .    (3.7)

And we assume that scaling any spanning vector of a box affects the box’s
volume continuously in the scaling factor. It follows that scaling any spanning
vector of a box by a real number a magnifies the volume by ∣a∣. To see this,
first note that scaling a spanning vector by an integer ℓ creates ∣ℓ∣ abutting
translated copies of the original box, and so the desired result follows in this
case from finite additivity. A similar argument applies to scaling a spanning
vector by a reciprocal integer 1/m (m ≠ 0), since the original box is now ∣m∣
copies of the scaled one. These two special cases show that the result holds
for scaling a spanning vector by any rational number r = ℓ/m. Finally, the
continuity assumption extends the result from the rational numbers to the
real numbers, since every real number is approached by a sequence of rational
numbers. Since the volume of the unit box is normalized to 1, since volume

Figure 3.12. Inner and outer approximation of E by boxes

is unchanged by translation, and since scaling any spanning vector of a box


by a magnifies its volume by ∣a∣, the volume of the general box is (recalling
that a1 , . . . , an are assumed to be positive)

vol B ′ = a1 ⋯an .

A subset of Rn that is well approximated by boxes plausibly has a volume.


To be more specific, a subset E of Rn is well approximated by boxes if for every
ε > 0 there exist boxes B1 , . . . , BN , BN +1 , . . . , BM , disjoint except possibly for
shared faces, such that E is contained between a partial union of the boxes
and the full union,
⋃_{i=1}^{N} Bi ⊂ E ⊂ ⋃_{i=1}^{M} Bi ,    (3.8)

and such that the boxes that complete the partial union to the full union have
a small sum of volumes,
∑_{i=N+1}^{M} vol Bi < ε.    (3.9)

(See Figure 3.12, where E is an elliptical region, the boxes B1 through BN


that it contains are dark, and the remaining boxes BN +1 through BM are
light.) To see that E should have a volume, note that the first containment
of (3.8) says that a number at most big enough to serve as vol E (a lower
bound) is L = vol ⋃N i=1 Bi , and the second containment says that a number at
least big enough (an upper bound) is U = vol ⋃M i=1 Bi . By the finite additivity
condition (3.7), the lower and upper bounds are L = ∑N i=1 vol Bi and U =
∑i=1 vol Bi . Thus they are close to each other by (3.9),
M

M
U − L = ∑ vol Bi < ε.
i=N +1

Since ε is arbitrarily small, the bounds should be squeezing down on a unique


value that is the actual volume of E, and so indeed E should have a volume.
For now this is only a plausibility argument, but it is essentially the idea of
integration, and it will be quantified in Chapter 6.

Every set of n vectors v1 , . . . , vn in Rn spans a parallelepiped

P(v1 , . . . , vn ) = {α1 v1 + ⋯ + αn vn ∶ 0 ≤ α1 ≤ 1, . . . , 0 ≤ αn ≤ 1},

abbreviated to P when the vectors are firmly fixed. Again the terminology
is pandimensional, meaning in particular interval, parallelogram, and paral-
lelepiped in the usual sense for n = 1, 2, 3. We will also consider translations
of parallelepipeds away from the origin by offset vectors p,

P ′ = P + p = {v + p ∶ v ∈ P}.

(See Figure 3.13.) A face of a parallelepiped is the set of its points such that
some particular αi is held fixed at 0 or at 1 while the others vary. A paral-
lelepiped in Rn has 2n faces. Boxes are special cases of parallelepipeds. The
methods of Chapter 6 will show that parallelepipeds are well approximated by
boxes, and so they have well-defined volumes. We assume that parallelepiped
volume is finitely additive, and we assume that every finite union of paral-
lelepipeds each having volume zero again has volume zero.


Figure 3.13. Parallelepipeds

To begin the argument that the linear mapping T ∶ Rn Ð→ Rn magnifies volume
by a constant factor, we pass the unit box B and the scaled translated box B ′
from earlier in the section through T .

Figure 3.14. Linear image of the unit box and of a scaled translated box

The image of B under T is


a parallelepiped T B spanned by T (e1 ), . . . , T (en ), and the image of B ′ is a
parallelepiped T B ′ spanned by T (a1 e1 ), . . . , T (an en ) and translated by T (p).
(See Figure 3.14.) Since T (a1 e1 ) = a1 T (e1 ), . . . , T (an en ) = an T (en ), it follows
that scaling the sides of T B by a1 , . . . , an and then translating the scaled
parallelepiped by T (p) gives T B ′ . As for boxes, scaling any spanning vector
of a parallelepiped by a real number a magnifies the volume by ∣a∣, and so we
have
vol T B ′ = vol T B ⋅ a1 ⋯an .
But also,
a1 ⋯an = vol B ′ .
That is, the volume of the T -image of any box is a constant multiple of the
volume of the box, regardless of the box’s location or side lengths, the constant
being the volume of T B, the T -image of the unit box B. Call this constant
magnification factor t. Thus,

vol T B ′ = t ⋅ vol B ′ for all boxes B ′ . (3.10)

Figure 3.15. Inner and outer approximation of T E by parallelepipeds

We need one last preliminary result about volume. Again let E be a subset
of Rn that is well approximated by boxes. Fix a linear mapping T ∶ Rn Ð→
Rn . Very similarly to the argument for E, the set T E also should have a
volume, because it is well approximated by parallelepipeds. Indeed, the set
containments (3.8) are preserved under the linear mapping T ,

T ⋃_{i=1}^{N} Bi ⊂ T E ⊂ T ⋃_{i=1}^{M} Bi .

In general, the image of a union is the union of the images, so this can be
rewritten as
⋃_{i=1}^{N} T Bi ⊂ T E ⊂ ⋃_{i=1}^{M} T Bi .

(See Figure 3.15.) As before, numbers at most big enough and at least big
enough for the volume of T E are
L = vol ⋃_{i=1}^{N} T Bi = ∑_{i=1}^{N} vol T Bi ,    U = vol ⋃_{i=1}^{M} T Bi = ∑_{i=1}^{M} vol T Bi .

The only new wrinkle is that citing the finite additivity of parallelepiped
volume here assumes that the parallelepipeds T Bi either inherit from the
original boxes Bi the property of being disjoint except possibly for shared
faces, or they all have volume zero. The assumption is valid because if T is
invertible then the inheritance holds, while if T is not invertible then we will
see later in this section that the T Bi have volume zero, as desired. With this
point established, let t be the factor by which T magnifies box-volume. The
previous display and (3.10) combine to show that the difference of the bounds
is
U − L = ∑_{i=N+1}^{M} vol T Bi = ∑_{i=N+1}^{M} t ⋅ vol Bi = t ⋅ ∑_{i=N+1}^{M} vol Bi ≤ tε.

The inequality is strict if t > 0, and it collapses to U − L = 0 if t = 0. In either


case, since ε is arbitrarily small, the argument that T E should have a volume
is the same as for E.
To complete the argument that the linear mapping T ∶ Rn Ð→ Rn magnifies
volume by a constant factor, we argue that for every subset E of Rn that is well
approximated by boxes, vol T E is t times the volume of E. Let V = vol ⋃_{i=1}^{N} Bi .
Then E is contained between a set of volume V and a set of volume less than
V + ε (again see Figure 3.12, where V is the shaded area and V + ε is the total
area), and T E is contained between a set of volume tV and a set of volume at
most t(V + ε) (again see Figure 3.15, where tV is the shaded area and t(V + ε)
is the total area). Thus the volumes vol E and vol T E satisfy the condition

tV /(V + ε) ≤ vol T E / vol E ≤ t(V + ε)/V .
Since ε can be arbitrarily small, the left and right quantities in the display
can be arbitrarily close to t, and so the only possible value for the quantity in
the middle (which is independent of ε) is t. Thus we have the desired equality
announced at the beginning of this section,

vol T E = t ⋅ vol E.

In sum, subject to various assumptions about volume, T magnifies the volumes


of all boxes and of all figures that are well approximated by boxes by the same
factor, which we have denoted t.
Now we investigate the magnification factor t associated with the linear
mapping T , with the goal of showing that it is ∣ det A∣, where A is the matrix
of T . As a first observation, if the linear mappings S, T ∶ Rn Ð→ Rn magnify

volume by s and t respectively, then their composition S ○ T magnifies volume


by st. In other words, the magnification of linear mappings is multiplicative.
Also, recall that the mapping T is simply multiplication by the matrix A. Since
every matrix is a product of elementary matrices times an echelon matrix,
we only need to study the magnification of multiplying by such matrices.
Temporarily let n = 2.
The 2 × 2 recombine matrices take the form R = [ 1 a ; 0 1 ] and R′ = [ 1 0 ; a 1 ] with
a ∈ R. The standard basis vectors e1 and e2 are taken by R to its columns, e1
and ae1 + e2 . Thus R acts geometrically as a shear by a in the e1 -direction,
magnifying volume by 1. (See Figure 3.16.) Note that 1 = ∣ det R∣ as desired.
The geometry of R′ is left as an exercise.

Figure 3.16. Shear

The scale matrices are S = [ a 0 ; 0 1 ] and S ′ = [ 1 0 ; 0 a ]. The standard basis gets


taken by S to ae1 and e2 , so S acts geometrically as a scale in the e1 -direction,
magnifying volume by ∣a∣; this is ∣ det S∣, again as desired. (See Figure 3.17.)
The situation for S ′ is similar.

Figure 3.17. Scale

The transposition matrix is T = [ 0 1 ; 1 0 ]. It exchanges e1 and e2 , acting as a


reflection through the diagonal, magnifying volume by 1. (See Figure 3.18.)
Since det T = −1, the magnification factor is the absolute value of the deter-
minant.

Figure 3.18. Reflection

Finally, the identity matrix E = I has no effect, magnifying volume by 1,


and every other echelon matrix E has bottom row (0, 0) and hence squashes
e1 and e2 to vectors whose last component is 0, magnifying volume by 0. (See
Figure 3.19.) The magnification factor is ∣ det E∣ in both cases.

Figure 3.19. Squash

The discussion for scale matrices, transposition matrices, and echelon ma-
trices generalizes effortlessly from 2 to n dimensions, but generalizing the dis-
cussion for recombine matrices Ri;j,a takes a small argument. Because trans-
position matrices have no effect on volume, we may multiply Ri;j,a from the
left and from the right by various transposition matrices to obtain R1;2,a and
study it instead. Multiplication by R1;2,a preserves all of the standard basis
vectors except e2 , which is taken to ae1 + e2 as before. The resulting paral-
lelepiped P(e1 , ae1 + e2 , e3 , . . . , en ) consists of the parallelogram shown in the
right side of Figure 3.16, extended one unit in each of the remaining orthogo-
nal n−2 directions of Rn . The n-dimensional volume of the parallelepiped is its
base (the area of the parallelogram, 1) times its height (the (n−2)-dimensional
volume of the unit box over each point of the parallelogram, again 1). That is,
the n × n recombine matrix still magnifies volume by 1, the absolute value of
its determinant, as desired. The base times height property of volume is yet
another invocation here, but it is a consequence of a theorem to be proved in
Chapter 6, Fubini’s theorem. Summarizing, we have the following result.

Theorem 3.8.1 (Geometry of linear mappings). Every linear mapping


T ∶ Rn Ð→ Rn is the composition of a possible squash followed by shears, scales,
and reflections. If the matrix of T is A then T magnifies volume by ∣ det A∣.

Proof. The matrix A of T is a product of elementary matrices and an echelon


matrix. The elementary matrices act as shears, scales, and reflections, and if
the echelon matrix is not the identity then it acts as a squash. This proves
the first statement. Each elementary or echelon matrix magnifies volume by
the absolute value of its determinant. The second statement follows since
magnification and ∣ det ∣ are both multiplicative. ⊓⊔

The work of this section has given a geometric interpretation of the mag-
nitude of det A: it is the magnification factor of multiplication by A. If the
columns of A are denoted c1 , . . . , cn then Aej = cj for j = 1, . . . , n, so that
even more explicitly ∣ det A∣ is the volume of the parallelepiped spanned by
the columns of A. For instance, to find the volume of the 3-dimensional par-
allelepiped spanned by the vectors (1, 2, 3), (2, 3, 4), and (3, 5, 8), compute
that

⎡1 2 3⎤
∣ det ⎢2 3 5⎥ ∣ = 1.
⎣3 4 8⎦
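A machine check of this example; the sketch assumes Python with NumPy (not part of this text) and places the spanning vectors as the columns of a matrix.

```python
import numpy as np

# Columns are the spanning vectors of the parallelepiped.
A = np.column_stack([(1, 2, 3), (2, 3, 4), (3, 5, 8)])
print(abs(np.linalg.det(A)))  # 1.0, up to floating-point rounding
```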

Exercises

3.8.1. (a) This section states that the image of a union is the union of the
images. More specifically, let A and B be any sets, let f ∶ A Ð→ B be any
mapping, and let A1 , . . . , AN be any subsets of A. Show that

f ( ⋃_{i=1}^{N} Ai ) = ⋃_{i=1}^{N} f (Ai ).

(This exercise is purely set-theoretic, making no reference to our working


environment of Rn .)
(b) Consider a two-point set A = {a1 , a2 } where a1 ≠ a2 , a one-point set B =
{b}, and the only possible mapping f ∶ A Ð→ B, given by f (a1 ) = f (a2 ) = b.
Let A1 = {a1 } and A2 = {a2 }, subsets of A. What is the intersection A1 ∩ A2 ?
What is the image of the intersection, f (A1 ∩ A2 )? What are the images
f (A1 ) and f (A2 )? What is the intersection of the images, f (A1 ) ∩ f (A2 )? Is
the image of an intersection in general the intersection of the images?

3.8.2. Describe the geometric effect of multiplying by the matrices R′ and S ′


in this section. Describe the effect of multiplying by R and S if a < 0.

3.8.3. Describe the geometric effect of multiplying by the 3 × 3 elementary


matrices R2;3,1 , R3;1,2 , and S2,−3 .

3.8.4. (a) Express the matrix [ 0 −1 ; 1 0 ] as a product of recombine and scale
matrices (you may not need both types).
(b) Use part (a) to describe counterclockwise rotation of the plane through
the angle π/2 as a composition of shears and scales.

3.8.5. Describe counterclockwise rotation of the plane through the angle θ


(where cos θ ≠ 0 and sin θ ≠ 0) as a composition of shears and scales.

3.8.6. In R3 , describe the linear mapping that takes e1 to e2 , e2 to e3 , and e3


to e1 as a composition of shears, scales, and transpositions.

3.8.7. Let P be the parallelogram in R2 spanned by (a, c) and (b, d). Cal-
culate directly that ∣ det [ a b ; c d ] ∣ = area P. (Hint: area = base × height =
∣(a, c)∣ ∣(b, d)∣ ∣ sin θ(a,c),(b,d) ∣. It may be cleaner to find the square of the area.)

3.8.8. This exercise shows directly that ∣ det ∣ = volume in R3 . Let P be the
parallelepiped in R3 spanned by v1 , v2 , v3 , let P ′ be spanned by the vectors
v1′ , v2′ , v3′ obtained from performing the Gram–Schmidt process on the vj ’s,
let A ∈ M3 (R) have rows v1 , v2 , v3 , and let A′ ∈ M3 (R) have rows v1′ , v2′ , v3′ .
(a) Explain why det A′ = det A.
(b) Give a plausible geometric argument that vol P ′ = vol P.
(c) Show that
A′ A′t = [ ∣v1′ ∣2 0 0 ; 0 ∣v2′ ∣2 0 ; 0 0 ∣v3′ ∣2 ].
Explain why therefore ∣ det A′ ∣ = vol P ′ . It follows from parts (a) and (b) that
∣ det A∣ = vol P.

3.9 Geometry of the Determinant: Orientation

Recall from Section 2.1 that a basis of Rn is a set of vectors {f1 , . . . , fp } such
that every vector in Rn is a unique linear combination of the {fj }. Though
strictly speaking, a basis is only a set, we adopt here the convention that the
basis vectors are given in the specified order indicated. Given such a basis,
view the vectors as columns and let F denote the matrix in Mn,p (R) with
columns f1 , . . . , fp . Thus the order of the basis vectors is now relevant. For
a standard basis vector ej of Rp , the matrix-by-vector product F ej gives the
jth column fj of F . Therefore, for every vector x = (x1 , . . . , xp ) ∈ Rp (viewed
as a column),
F x = F ⋅ ( ∑_{j=1}^{p} xj ej ) = ∑_{j=1}^{p} xj F ej = ∑_{j=1}^{p} xj fj .

Thus, multiplying all column vectors x ∈ Rp by the matrix F gives precisely


the linear combinations of f1 , . . . , fp , and so we have the equivalences

{f1 , . . . , fp } is a basis of Rn
⇐⇒ each y ∈ Rn is uniquely expressible as a linear combination of the {fj }
⇐⇒ each y ∈ Rn takes the form y = F x for a unique x ∈ Rp
⇐⇒ F is invertible
⇐⇒ F is square (i.e., p = n) and det F ≠ 0.

These considerations have proved the following result.

Theorem 3.9.1 (Characterization of bases). Every basis of Rn has n


elements. The vectors {f1 , . . . , fn } form a basis exactly when the matrix F
having them as its columns has nonzero determinant.

Let {f1 , . . . , fn } be a basis of Rn , and let F be the matrix formed by their


columns. Abuse terminology and call det F the determinant of the basis,
written det{f1 , . . . , fn }. Again, this depends on the order of the {fj }. There
are then two kinds of bases of Rn , positive and negative bases, according
to the sign of their determinants. The standard basis {e1 , . . . , en } forms the
columns of the identity matrix I and is therefore positive.
The multilinear function det F is continuous in the n2 entries of f1 , . . . , fn
(see Exercise 3.6.1). If a basis {f1 , ⋯, fn } can be smoothly deformed via other
bases to the standard basis then the corresponding determinants must change
continuously to 1 without passing through 0. Such a basis must therefore be
positive. Similarly, a negative basis cannot be smoothly deformed via other
bases to the standard basis. It is also true but less clear (and not proved here)
that every positive basis deforms smoothly to the standard basis.
The plane R2 is by convention drawn with {e1 , e2 } forming a counterclock-
wise angle of π/2. Two vectors {f1 , f2 } form a basis if they are not collinear.
Therefore the basis {f1 , f2 } can be deformed via bases to {e1 , e2 } exactly
when the angle θf1 ,f2 goes counterclockwise from f1 to f2 . (Recall from equa-
tion (2.2) that the angle between two nonzero vectors is between 0 and π.)
That is, in R2 , the basis {f1 , f2 } is positive exactly when the angle from f1
to f2 is counterclockwise. (See Figure 3.20.)


Figure 3.20. Positive and negative bases of R2



Three-space R3 is by convention drawn with {e1 , e2 , e3 } forming a right-


handed triple, meaning that when the fingers of your right hand curl from
e1 to e2 , your thumb forms an acute angle with e3 . Three vectors {f1 , f2 , f3 }
form a basis if they are not coplanar. In other words, they must form a right-
or left-handed triple. Only right-handed triples deform via other nonplanar
triples to {e1 , e2 , e3 }. Therefore in R3 , the basis {f1 , f2 , f3 } is positive exactly
when it forms a right-handed triple. (See Figure 3.21.)


Figure 3.21. Positive and negative bases of R3

The geometric generalization to Rn of a counterclockwise angle in the plane


and a right-handed triple in space is not so clear, but the algebraic notion of
positive basis is the same for all n.
Consider any invertible mapping T ∶ Rn Ð→ Rn with matrix A ∈ Mn (R),
and any basis {f1 , . . . , fn } of Rn . If F again denotes the matrix with columns
f1 , . . . , fn then AF has columns {Af1 , . . . , Afn } = {T (f1 ), . . . , T (fn )}. These
form a new basis of Rn with determinant

det{T (f1 ), . . . , T (fn )} = det AF = det A det F = det A det{f1 , . . . , fn }.

The calculation lets us interpret the sign of det A geometrically: if det A > 0
then T preserves the orientation of bases, and if det A < 0 then T reverses
orientation. For example, the mapping with matrix
⎡0 0 0 1⎤
⎢1 0 0 0⎥
⎢0 1 0 0⎥
⎣0 0 1 0⎦
reverses orientation in R4 .
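A small numerical illustration, sketched in Python with NumPy (an assumption of this sketch, not part of the text): the sign of the determinant classifies an invertible mapping as orientation-preserving or orientation-reversing.

```python
import numpy as np

# The matrix displayed above, taking e1 -> e2 -> e3 -> e4 -> e1.
A = np.array([[0., 0., 0., 1.],
              [1., 0., 0., 0.],
              [0., 1., 0., 0.],
              [0., 0., 1., 0.]])
print(np.linalg.det(A))        # -1.0, so the mapping reverses orientation

# The identity's columns form a positive basis; their images under A
# form a negative basis of R^4.
print(np.linalg.det(A @ np.eye(4)) < 0)  # True
```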
To summarize: Let A be an n × n matrix. Whether det A is nonzero says
whether A is invertible; the magnitude of det A is the factor by which A
magnifies volume; and (assuming that det A ≠ 0) the sign of det A determines
how A affects orientation. The determinant is astonishing.

Exercises

3.9.1. Every invertible mapping T ∶ Rn Ð→ Rn is a composition of scales,


shears, and transpositions. Give conditions on such a composition to make
the mapping orientation-preserving, orientation-reversing.

3.9.2. Does the linear mapping T ∶ Rn Ð→ Rn that takes e1 to e2 , e2 to e3 ,


. . . , en to e1 preserve or reverse orientation? (The answer depends on n.)
More generally, if π is a permutation in Sn , does the linear mapping taking e1
to eπ(1) , . . . , en to eπ(n) preserve or reverse orientation? (This depends on π.)

3.9.3. Argue geometrically in R2 that every basis can be smoothly deformed


via other bases to the standard basis or to {e1 , −e2 }. Do the same for R3
and {e1 , e2 , −e3 }.

3.10 The Cross Product, Lines, and Planes in R3


Generally in Rn there is no natural way to associate to a pair of vectors u
and v a third vector. In R3 , however, the plane specified by u and v has only
one orthogonal direction, i.e., dimension 3 is special because 3 − 2 = 1. In R3 a
normal vector to u and v can be specified by making suitable conventions on
its orientation vis-à-vis the other two vectors, and on its length. This will give
a vector-valued product of two vectors that is special to 3-dimensional space,
called the cross product. The first part of this section develops these ideas.
Given any two vectors u, v ∈ R3 , we want their cross product u × v ∈ R3 to
be orthogonal to u and v,

u × v ⊥ u and u × v ⊥ v. (3.11)

There is the question of which way u×v should point along the line orthogonal
to the plane spanned by u and v. The natural answer is that the direction
should be chosen to make the ordered triple of vectors {u, v, u × v} positive
unless it is degenerate,
det(u, v, u × v) ≥ 0. (3.12)
Also there is the question of how long u × v should be. With hindsight, we
assert that specifying the length to be the area of the parallelogram spanned
by u and v will work well. That is,

∣u × v∣ = area P(u, v). (3.13)

The three desired geometric properties (3.11) through (3.13) seem to describe
the cross product completely. (See Figure 3.22.)
The three geometric properties also seem disparate. However, they combine
into a uniform algebraic property, as follows. Since the determinant in (3.12) is
nonnegative, it is the volume of the parallelepiped spanned by u, v, and u × v.

Figure 3.22. The cross product of u and v

The volume is the base times the height, and because u × v is normal to u
and v, the base is the area of P(u, v) and the height is ∣u × v∣. Thus

det(u, v, u × v) = area P(u, v) ∣u × v∣.

It follows from the previous display and (3.13) that

∣u × v∣2 = det(u, v, u × v).

Since orthogonal vectors have inner product 0, since the determinant is 0 when
two rows agree, and since the square of the absolute value is the vector’s inner
product with itself, we can rewrite (3.11) and this last display (obtained from
(3.12) and (3.13)) uniformly as equalities of the form ⟨u × v, w⟩ = det(u, v, w)
for various w,
⟨u × v, u⟩ = det(u, v, u),
⟨u × v, v⟩ = det(u, v, v), (3.14)
⟨u × v, u × v⟩ = det(u, v, u × v).
Instead of saying what the cross product is, as an equality of the form u × v =
f (u, v) would, the three equalities of (3.14) say how the cross product interacts
with certain vectors—including itself—via the inner product. Again, the idea
is to characterize rather than construct.
(The reader may object to the argument just given that det(u, v, u × v) =
area P(u, v) ∣u × v∣, on the grounds that we don’t really understand the area
of a 2-dimensional parallelogram in 3-dimensional space to start with, that
in R3 we measure volume rather than area, and the parallelogram surely has
volume zero. In fact, the argument can be viewed as motivating the formula
as the definition of the area. This idea will be discussed more generally in
Section 9.1.)
Based on (3.14), we leap boldly to an intrinsic algebraic characterization
of the cross product.
Definition 3.10.1 (Cross product). Let u and v be any two vectors in R3 .
Their cross product u × v is defined by the property

⟨u × v, w⟩ = det(u, v, w) for all w ∈ R3 .

That is, u × v is the unique vector x ∈ R3 such that ⟨x, w⟩ = det(u, v, w) for
all w ∈ R3 .

As with the determinant earlier, we do not yet know that the characterizing
property determines the cross product uniquely, or even that a cross product
that satisfies the characterizing property exists at all. But also as with the
determinant, we defer those issues and first reap the consequences of the
characterizing property with no reference to an unpleasant formula for the
cross product. Of course the cross product will exist and be unique, but for
now the point is that graceful arguments with its characterizing property show
that it has all the further properties that we want it to have.

Proposition 3.10.2 (Properties of the cross product).


(CP1) The cross product is skew-symmetric: v × u = −u × v for all u, v ∈ R3 .
(CP2) The cross product is bilinear: for all scalars a, a′ , b, b′ ∈ R and all vectors
u, u′ , v, v ′ ∈ R3 ,

(au + a′ u′ ) × v = a(u × v) + a′ (u′ × v),


u × (bv + b′ v ′ ) = b(u × v) + b′ (u × v ′ ).

(CP3) The cross product u × v is orthogonal to u and v.


(CP4) u × v = 0 if and only if u and v are collinear (meaning that u = av or
v = au for some a ∈ R).
(CP5) If u and v are not collinear then the triple {u, v, u × v} is right-handed.
(CP6) The magnitude ∣u × v∣ is the area of the parallelogram spanned by u and
v.

Proof. (1) This follows from the skew-symmetry of the determinant. For every
w ∈ R3 ,

⟨v × u, w⟩ = det(v, u, w) = − det(u, v, w) = −⟨u × v, w⟩ = ⟨−u × v, w⟩.

Since w is arbitrary, v × u = −u × v.
(2) For the first variable, this follows from the linearity of the determinant
in its first row-vector variable and the linearity of the inner product in its first
vector variable. Fix a, a′ ∈ R, u, u′ , v ∈ R3 . For every w ∈ R3 ,

⟨(au + a′ u′ ) × v, w⟩ = det(au + a′ u′ , v, w)
= a det(u, v, w) + a′ det(u′ , v, w)
= a⟨u × v, w⟩ + a′ ⟨u′ × v, w⟩
= ⟨a(u × v) + a′ (u′ × v), w⟩.

Since w is arbitrary, (au + a′ u′ ) × v = a(u × v) + a′ (u′ × v). The proof for the
second variable follows from the result for the first variable and from (1).

(3) ⟨u × v, u⟩ = det(u, v, u) = 0 because the determinant of a matrix with


two equal rows vanishes. Similarly, ⟨u × v, v⟩ = 0.
(4) If u = av then for every w ∈ R3 ,

⟨u × v, w⟩ = ⟨av × v, w⟩ = det(av, v, w) = a det(v, v, w) = 0.

Since w is arbitrary, u × v = 0. And similarly if v = au.


Conversely, suppose that u and v are not collinear. Then they are linearly
independent, and so no element of R3 can be written as a linear combination
of u and v in more than one way. The set {u, v} is not a basis of R3 , because
every basis consists of three elements. Since no elements of R3 can be written
as a linear combination of u and v in more than one way, and since {u, v}
is not a basis, the only possibility is that some w ∈ R3 cannot be written as
a linear combination of u and v at all. Thus the set {u, v, w} is a linearly
independent set of three elements, making it a basis of R3 . Compute that
since {u, v, w} is a basis,

⟨u × v, w⟩ = det(u, v, w) ≠ 0.

Therefore u × v ≠ 0.
(5) By (4), u × v ≠ 0, so 0 < ⟨u × v, u × v⟩ = det(u, v, u × v). By the results
on determinants and orientation, {u, v, u × v} is right-handed.
(6) By definition, ∣u × v∣2 = ⟨u × v, u × v⟩ = det(u, v, u × v). As discussed
earlier in this section, det(u, v, u × v) = area P(u, v) ∣u × v∣. The result follows
from dividing by ∣u × v∣ if it is nonzero, and from (4) otherwise. ⊓⊔

Now we show that the characterizing property determines the cross prod-
uct uniquely. The idea is that a vector’s inner products with all other vectors
completely describe the vector itself. The observation to make is that for every
vector x ∈ Rn (n need not be 3 in this paragraph),

if ⟨x, w⟩ = 0 for all w ∈ Rn then x = 0n .

To justify this observation, specialize w to x to show that ⟨x, x⟩ = 0, giving the


result because 0n is the only vector whose inner product with itself is 0. (Here
we use the nontrivial direction of the degeneracy condition in the positive
definiteness property of the inner product.) In consequence of the observation,
for any two vectors x, x′ ∈ Rn ,

if ⟨x, w⟩ = ⟨x′ , w⟩ for all w ∈ Rn then x = x′ .

That is, the inner product values ⟨x, w⟩ for all w ∈ Rn specify x, as anticipated.
To prove that the cross product exists, it suffices to write a formula for it
that satisfies the characterizing property in Definition 3.10.1. Since we need
the cross product to have components

⟨u × v, e1 ⟩ = det(u, v, e1 ),
⟨u × v, e2 ⟩ = det(u, v, e2 ),
⟨u × v, e3 ⟩ = det(u, v, e3 ),
the only possible formula is to construct the cross product from these compo-
nents,
u × v = (det(u, v, e1 ), det(u, v, e2 ), det(u, v, e3 )).
This formula indeed satisfies the definition, because by definition of the inner
product and then by the linearity of the determinant in its third argument,
we have for every w = (w1 , w2 , w3 ) ∈ R3 ,
⟨u × v, w⟩ = det(u, v, e1 ) ⋅ w1 + det(u, v, e2 ) ⋅ w2 + det(u, v, e3 ) ⋅ w3
= det(u, v, w1 e1 + w2 e2 + w3 e3 )
= det(u, v, w).
In coordinates, the formula for the cross product is
u × v = ( det [ u1 u2 u3 ; v1 v2 v3 ; 1 0 0 ] , det [ u1 u2 u3 ; v1 v2 v3 ; 0 1 0 ] ,
          det [ u1 u2 u3 ; v1 v2 v3 ; 0 0 1 ] )
      = (u2 v3 − u3 v2 , u3 v1 − u1 v3 , u1 v2 − u2 v1 ).
A bit more conceptually, the cross product formula in coordinates is
⎡u1 u2 u3 ⎤
u × v = det ⎢ v1 v2 v3 ⎥ .
⎣ e1 e2 e3 ⎦
The previous display is only a mnemonic device: strictly speaking, it doesn’t
lie within our grammar, because the entries of the bottom row are vectors
rather than scalars. But even so, its two terms u1 v2 e3 − u2 v1 e3 do give the
third entry of the cross product, and similarly for the others. In Chapter 9,
where we will have to compromise our philosophy of working intrinsically
rather than in coordinates, this formula will be cited and generalized. In the
meantime, its details are not important except for mechanical calculations,
and we want to use it as little as possible, as with the determinant earlier.
Indeed, the display shows that the cross product is essentially a special case
of the determinant.
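The component formula can be tested mechanically. A sketch assuming Python with NumPy (np.cross is NumPy's built-in cross product; the sample vectors are chosen arbitrarily):

```python
import numpy as np

def cross(u, v):
    """u x v via the three determinants det(u, v, e_i)."""
    return np.array([np.linalg.det(np.vstack([u, v, e])) for e in np.eye(3)])

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
assert np.allclose(cross(u, v), np.cross(u, v))
print(cross(u, v))  # [-3.  6. -3.]
```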
It is worth knowing the cross products of the standard basis pairs,
e1 × e1 = 03 ,   e1 × e2 = e3 ,    e1 × e3 = −e2 ,
e2 × e1 = −e3 ,  e2 × e2 = 03 ,    e2 × e3 = e1 ,
e3 × e1 = e2 ,   e3 × e2 = −e1 ,   e3 × e3 = 03 .
Here ei × ej is 03 if i = j, and ei × ej is the remaining standard basis vector if
i ≠ j and i and j are in order in the diagram

1 Ð→ 2 Ð→ 3 Ð→ 1 (a cycle),
and ei × ej is minus the remaining standard basis vector if i ≠ j and i and j
are out of order in the diagram.

The remainder of this section describes lines and planes in R3 .


A line ℓ in R3 is determined by a point p and a direction vector d. (See
Figure 3.23.) A point q lies in the line exactly when it is a translation from p
by some multiple of d. Therefore the line ℓ is given by
ℓ(p, d) = {p + td ∶ t ∈ R}.
In coordinates, a point (x, y, z) lies in ℓ((xp , yp , zp ), (xd , yd , zd )) exactly when
x = xp + txd , y = yp + tyd , z = zp + tzd for some t ∈ R.
If the components of d are all nonzero then the relation between the coordi-
nates can be expressed without the parameter t,
(x − xp )/xd = (y − yp )/yd = (z − zp )/zd .
For example, the line through (1, 1, 1) in the direction (1, 2, 3) consists of all
points (x, y, z) satisfying x = 1+t, y = 1+2t, z = 1+3t for t ∈ R, or equivalently,
satisfying x − 1 = (y − 1)/2 = (z − 1)/3.
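A quick check of this example in plain Python (an illustration only, not part of the text): points produced by the parametric description satisfy the symmetric equations.

```python
p = (1.0, 1.0, 1.0)  # base point of the line
d = (1.0, 2.0, 3.0)  # direction vector

for t in (-2.0, 0.5, 3.0):
    x, y, z = (p[i] + t * d[i] for i in range(3))
    # Symmetric form: x - 1 = (y - 1)/2 = (z - 1)/3.
    assert abs((x - 1) - (y - 1) / 2) < 1e-12
    assert abs((y - 1) / 2 - (z - 1) / 3) < 1e-12
```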


Figure 3.23. Line in R3

A plane P in R3 is determined by a point p and a normal (orthogonal)


vector n. (See Figure 3.24.) A point q lies in the plane exactly when the vector
from p to q is orthogonal to n. Therefore the plane P is given by

P (p, n) = {q ∈ R3 ∶ ⟨q − p, n⟩ = 0}.
In coordinates, a point (x, y, z) lies in P ((xp , yp , zp ), (xn , yn , zn )) exactly when
(x − xp )xn + (y − yp )yn + (z − zp )zn = 0.

Figure 3.24. Plane in R3

Exercises
3.10.1. Evaluate (2, 0, −1) × (1, −3, 2).
3.10.2. Suppose that a vector v ∈ R3 takes the form v = u1 × e1 = u2 × e2 for
some u1 and u2 . Describe v.
3.10.3. True or false: For all u, v, w in R3 , (u × v) × w = u × (v × w).
3.10.4. Express (u + v) × (u − v) as a scalar multiple of u × v.
3.10.5. (a) Let U, V ∈ Mn (R) be skew-symmetric, meaning that U T = −U and
similarly for V , where U T is the transpose of U (Exercise 3.2.4). Show that aU
is skew-symmetric for every a ∈ R, and that U + V is skew-symmetric. Thus
the skew-symmetric matrices form a vector space. Show furthermore that the
Lie bracket [U, V ] = U V − V U is skew-symmetric. One can optionally check
that although the Lie bracket product is not in general associative, it instead
satisfies the Jacobi identity,
[U, [V, W ]] + [V, [W, U ]] + [W, [U, V ]] = 0.
(b) Encode the vectors u = (u1 , u2 , u3 ) and v = (v1 , v2 , v3 ) as 3 × 3 skew-
symmetric matrices,
U = [ 0 −u1 −u2 ; u1 0 −u3 ; u2 u3 0 ],    V = [ 0 −v1 −v2 ; v1 0 −v3 ; v2 v3 0 ].
Show that the Lie bracket product [U, V ] encodes the cross product u × v.

3.10.6. Investigate the extent to which a cancellation law holds for the cross
product, as follows: for fixed u, v in R3 with u ≠ 0, describe the vectors w
satisfying the condition u × v = u × w.
3.10.7. What is the line specified by two points p and p′ ?
3.10.8. Give conditions on the points p, p′ and the directions d, d′ so that
ℓ(p, d) = ℓ(p′ , d′ ).
3.10.9. Express the relation between the coordinates of a point on ℓ(p, d) if
the x-component of d is 0.
3.10.10. What can you conclude about the lines
(x − xp )/xd = (y − yp )/yd = (z − zp )/zd    and    (x − xp )/xD = (y − yp )/yD = (z − zp )/zD
given that xd xD + yd yD + zd zD = 0? What can you conclude if instead xd /xD =
yd /yD = zd /zD ?
3.10.11. Show that ℓ(p, d) and ℓ(p′ , d′ ) intersect if and only if the linear
equation Dt = ∆p is solvable, where D ∈ M3,2 (R) has columns d and d′ , t
is the column vector [ t1 ; t2 ], and ∆p = p′ − p. For what points p and p′ do
ℓ(p, (1, 2, 2)) and ℓ(p′ , (2, −1, 4)) intersect?
3.10.12. Use vector geometry to show that the distance from the point q to
the line ℓ(p, d) is
∣(q − p) × d∣ / ∣d∣ .

(Hint: what is the area of the parallelogram spanned by q − p and d?) Find
the distance from the point (3, 4, 5) to the line ℓ((1, 1, 1), (1, 2, 3)).
3.10.13. Show that the time of nearest approach of two particles whose po-
sitions are s(t) = p + tv, s̃(t) = p̃ + tṽ is t = −⟨∆p, ∆v⟩/∣∆v∣2 . (You may assume
that the particles are at their nearest approach when the difference of their
velocities is orthogonal to the difference of their positions.)
3.10.14. Write the equation of the plane through (1, 2, 3) with normal direc-
tion (1, 1, 1).
3.10.15. Where does the plane x/a + y/b + z/c = 1 intersect each axis?
3.10.16. Specify the plane containing the point p and spanned by directions
d and d′ . Specify the plane containing the three points p, q, and r.
3.10.17. Use vector geometry to show that the distance from the point q to
the plane P (p, n) is
∣⟨q − p, n⟩∣ / ∣n∣ .

(Hint: Resolve q − p into components parallel and normal to n.) Find the
distance from the point (3, 4, 5) to the plane P ((1, 1, 1), (1, 2, 3)).
4
The Derivative

In one-variable calculus the derivative is a limit of difference quotients, but


this idea does not generalize to many variables. The multivariable definition
of the derivative to be given in this chapter has three noteworthy features:
• The derivative is defined as a linear mapping.
• The derivative is characterized intrinsically rather than constructed in co-
ordinates.
• The derivative is characterized by the property of closely approximating
the original mapping near the point of approximation.
Section 4.1 shows that the familiar definition of the one-variable derivative
cannot scale up to many variables. Section 4.2 introduces a pandimensional
notation scheme that describes various closenesses of approximation. The no-
tation packages a range of ideas that arise in calculus, handling them uni-
formly. Section 4.3 revisits the one-variable derivative, rephrasing it in the
new scheme, and then scales it up to many variables. Handy basic properties
of the derivative follow immediately. Section 4.4 obtains some basic results
about the derivative intrinsically, notably the chain rule. Section 4.5 com-
putes with coordinates to calculate the derivative by considering one variable
at a time and using the techniques of one-variable calculus. This section also
obtains a coordinate-based version of the chain rule. Section 4.6 studies the
multivariable counterparts of higher-order derivatives from one-variable calcu-
lus. Section 4.7 discusses optimization of functions of many variables. Finally,
Section 4.8 discusses the rate of change of a function of many variables as its
input moves in any fixed direction, not necessarily parallel to a coordinate
axis.


4.1 Trying to Extend the Symbol-Pattern: Immediate,


Irreparable Catastrophe

In one-variable calculus, the derivative of a function f ∶ R Ð→ R at a point


a ∈ R is defined as a limit,

f ′ (a) = lim_{h→0} ( f (a + h) − f (a) ) / h .
But for every integer n > 1, the corresponding expression makes no sense for
a mapping f ∶ Rn Ð→ Rm and for a point a of Rn . Indeed, the expression is

lim_{h→0n} ( f (a + h) − f (a) ) / h ,
but this is not even grammatically admissible—there is no notion of division by
the vector h. That is, the standard definition of derivative does not generalize
to more than one input variable.
The breakdown here cannot be repaired by any easy patch. We must re-
think the derivative altogether in order to extend it to many variables.
Fortunately, the reconceptualization is richly rewarding.

Exercise

4.1.1. For a mapping f ∶ Rn Ð→ Rm and a point a of Rn , the repair-attempt


of defining f ′ (a) as
lim_{h→0n} ( f (a + h) − f (a) ) / ∣h∣

is grammatically sensible. Does it reproduce the usual derivative if n = m = 1?

4.2 New Environment: The Bachmann–Landau Notation

The notation to be introduced in this section, originally due to Bachmann late


in the nineteenth century, was also employed by Landau. It was significantly
repopularized in the 1960s by Knuth in his famous computer science books,
and it is now integral to mathematics, computer science, and mathematical
statistics.

Definition 4.2.1 (o(1)-mapping, O(h)-mapping, o(h)-mapping). Con-


sider a mapping from some ball about the origin in one Euclidean space to a
second Euclidean space,
ϕ ∶ B(0n , ε) Ð→ Rm
where n and m are positive integers and ε > 0 is a positive real number. The
mapping ϕ is smaller than order 1 if

for every c > 0, ∣ϕ(h)∣ ≤ c for all small enough h.

The mapping ϕ is of order h if

for some c > 0, ∣ϕ(h)∣ ≤ c∣h∣ for all small enough h.

The mapping ϕ is smaller than order h if

for every c > 0, ∣ϕ(h)∣ ≤ c∣h∣ for all small enough h.

A mapping smaller than order 1 is denoted o(1), a mapping of order h is


denoted O(h), and a mapping smaller than order h is denoted o(h). Also o(1)
can denote the collection of o(1)-mappings, and similarly for O(h) and o(h).

The definition says that in terms of magnitudes, an o(1)-mapping is


smaller than every constant as h gets small, and an O(h)-mapping is at most
some constant multiple of h as h gets small, and an o(h)-mapping is smaller
than every constant multiple of h as h gets small. That is,

∣o(1)∣ → 0,
∣O(h)∣/∣h∣ is bounded,
∣o(h)∣/∣h∣ → 0,
as h → 0,

but the definitions of O(h) and o(h) avoid the divisions in the previous dis-
play, and the definitions further stipulate that every o(1)-mapping or O(h)-
mapping or o(h)-mapping takes the value 0 at h = 0. That is, beyond avoiding
division, the definitions are strictly speaking slightly stronger than the previ-
ous display. Also, the definitions quickly give the containments

o(h) ⊂ O(h) ⊂ o(1),

meaning that every o(h)-mapping is an O(h)-mapping, and every O(h)-


mapping is an o(1)-mapping.
Visually, the idea is that:
• For every c > 0, however small, close enough to the origin the graph of an
o(1)-mapping lies between the horizontal lines at height ±c, although the
requisite closeness of h to 0 can change as c gets smaller.
• For some particular c > 0, close enough to the origin the graph of an
O(h)-mapping lies inside the bow-tie-shaped envelope determined by the
lines y = ±cx.
• For every c > 0, however small, close enough to the origin the graph of
an o(h)-mapping lies inside the y = ±cx bow-tie, although the requisite
closeness of h to 0 can change as c gets smaller.

These images are oversimplified, representing a mapping’s n-dimensional


domain-ball and m-dimensional codomain-space as axes, but still the im-
ages correctly suggest that the o(1) condition describes continuity in local
coordinates, and the O(h) condition describes at-most-linear growth in local
coordinates, and the o(h) condition describes smaller-than-linear growth in
local coordinates. (A local coordinate system has its origin placed at some
particular point of interest, allowing us to assume that the point is simply the
origin.)
The next proposition gives the important basic example to have at hand.
Proposition 4.2.2 (Basic family of Landau functions). Consider the
function
ϕe ∶ Rn Ð→ R, ϕe (x) = ∣x∣e (where e ≥ 0 is a real number).
Then
• ϕe is o(1) if e > 0,
• ϕe is O(h) if e ≥ 1,
• ϕe is o(h) if e > 1.
The proof is Exercise 4.2.2. Examples are shown in Figure 4.1.


Figure 4.1. Basic o(1), O(h), and o(h) functions
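The three growth rates can also be seen numerically. In this plain Python sketch (not part of the text), ϕ1/2 (h)/h blows up as h shrinks, ϕ1 (h)/h stays at 1, and ϕ3 (h)/h tends to 0, matching the proposition.

```python
for h in (1e-1, 1e-2, 1e-3, 1e-4):
    # phi_e(h) = |h|**e for e = 1/2, 1, 3; compare each against |h| itself.
    print(h, h ** 0.5 / h, h ** 1 / h, h ** 3 / h)
```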

Since Definition 4.2.1 stipulates growth-bounds, the following result is im-


mediate.
Proposition 4.2.3 (Dominance principle for the Landau spaces). Let
ϕ be o(1), and suppose that ∣ψ(h)∣ ≤ ∣ϕ(h)∣ for all small enough h. Then also
ψ is o(1). And similarly for O(h) and for o(h).
For example, the function


ψ ∶ R Ð→ R,    ψ(h) = ⎧ h2 sin(1/h)   if h ≠ 0,
                      ⎩ 0             if h = 0

is o(h) despite oscillating ever faster as h approaches 0, because ∣ψ∣ ≤ ∣ϕ2 ∣


where ϕ2 (h) = h2 is o(h) by Proposition 4.2.2. The reader should draw a
sketch of this situation.
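A numerical look at this function, sketched in plain Python (not part of the text): the ratio ∣ψ(h)∣/∣h∣ goes to 0 even though ψ oscillates ever faster near 0.

```python
from math import sin

def psi(h):
    return h * h * sin(1.0 / h) if h != 0 else 0.0

for h in (1e-1, 1e-3, 1e-5):
    assert abs(psi(h)) <= h * h    # the dominance |psi| <= phi_2
    print(h, abs(psi(h)) / h)      # the ratio shrinks like h
```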
Similarly, the functions ψ, φ ∶ R2 Ð→ R given by

ψ(h, k) = h, φ(h, k) = k

are O((h, k)) because the size bounds say that they are bounded absolutely by
the O(h)-mapping ϕ1 (h, k) = ∣(h, k)∣, i.e., ∣ψ(h, k)∣ = ∣h∣ ≤ ∣(h, k)∣ and similarly
for φ. For general n and for every i ∈ {1, . . . , n}, now letting h denote a vector
again as usual rather than the first component of a vector as it did a moment
ago, the ith component function

ψ ∶ Rn Ð→ R, ψ(h) = hi

is O(h) by the same argument. We will use this observation freely in the
sequel.
The o(1) and O(h) and o(h) conditions give rise to predictable closure
properties.
Proposition 4.2.4 (Vector space properties of the Landau spaces).
For every fixed domain-ball B(0n , ε) and codomain-space Rm , the o(1)-map-
pings form a vector space, and O(h) forms a subspace, of which o(h) forms
a subspace in turn. Symbolically,

o(1) + o(1) = o(1), R o(1) = o(1),


O(h) + O(h) = O(h), R O(h) = O(h),
o(h) + o(h) = o(h), R o(h) = o(h),

i.e., o(1) and O(h) and o(h) absorb addition and scalar multiplication.
The fact that o(1) forms a vector space encodes the rules that sums and
constant multiples of continuous mappings are again continuous.
Proof (Sketch). Consider any ϕ, ψ ∈ o(1). For every c > 0,

∣ϕ(h)∣ ≤ c/2 and ∣ψ(h)∣ ≤ c/2 for all small enough h,

and so by the triangle inequality,

∣(ϕ + ψ)(h)∣ ≤ c for all small enough h.

(A fully quantified version of the argument is as follows. Let c > 0 be given.


There exists δϕ > 0 such that ∣ϕ(h)∣ ≤ c/2 if ∣h∣ ≤ δϕ , and there exists δψ > 0
such that ∣ψ(h)∣ ≤ c/2 if ∣h∣ ≤ δψ . Let δ = min{δϕ , δψ }. Then ∣(ϕ + ψ)(h)∣ ≤ c
if ∣h∣ ≤ δ.) Similarly, for every nonzero α ∈ R,

∣ϕ(h)∣ ≤ c/∣α∣ for all small enough h,



so that since the modulus is absolute-homogeneous,

∣(αϕ)(h)∣ ≤ c for all small enough h.

If instead ϕ, ψ ∈ O(h) then for all small enough h,

∣ϕ(h)∣ ≤ c∣h∣ and ∣ψ(h)∣ ≤ c′ ∣h∣ for some c, c′ > 0,

so that for all small enough h,

∣(ϕ + ψ)(h)∣ ≤ (c + c′ )∣h∣.

Similarly, for every nonzero α ∈ R, for all small enough h,

∣(αϕ)(h)∣ ≤ (∣α∣c)∣h∣.

The argument for o(h) is similar to the argument for o(1) (Exercise 4.2.3). ⊓⊔

For example, the function

ϕ ∶ Rn Ð→ R, ϕ(x) = 12∣x∣1/2 − 7∣x∣ + 5∣x∣3/2

is an o(1)-function because all three of its terms are. It is not an O(h)-function


even though its second and third terms are, and it is not an o(h)-function even
though its third term is.
Another handy fact is the componentwise nature of the conditions o(1)
and O(h) and o(h). To see this, first note that every ϕ ∶ B(0n , ε) Ð→ Rm is
o(1) if and only if the corresponding absolute value ∣ϕ∣ ∶ B(0n , ε) Ð→ R is.
Now let ϕ have component functions ϕ1 , . . . , ϕm . For every h ∈ B(0n , ε) and
for each j ∈ {1, . . . , m}, the size bounds give
∣ϕj (h)∣ ≤ ∣ϕ(h)∣ ≤ ∑_{i=1}^{m} ∣ϕi (h)∣.

Using the left side of the size bounds and then the vector space properties of
o(1) and then the right side of the size bounds, we get
∣ϕ∣ is o(1) Ô⇒ each ∣ϕj ∣ is o(1) Ô⇒ ∑_{i=1}^{m} ∣ϕi ∣ is o(1) Ô⇒ ∣ϕ∣ is o(1).

Thus ∣ϕ∣ is o(1) if and only if each ∣ϕi ∣ is. As explained just above, we may
drop the absolute values, and so in fact ϕ is o(1) if and only if each ϕi is,
as desired. The arguments for the O(h) and o(h) conditions are the same
(Exercise 4.2.4). The componentwise nature of the o(1) condition encodes the
componentwise nature of continuity.
The role of linear mappings in the Landau notation scheme is straightfor-
ward, affirming the previously mentioned intuition that the O(h) condition de-
scribes at-most-linear growth and the o(h) condition describes smaller-than-
linear growth.

Proposition 4.2.5. Every linear mapping is O(h). The only o(h) linear
mapping is the zero mapping.

Proof. Let T ∶ Rn Ð→ Rm be a linear mapping. The unit sphere in Rn is


compact and T is continuous, so the image of the unit sphere under T is
again compact, hence bounded. That is, some positive c ∈ R exists such that
∣T (ho )∣ ≤ c whenever ∣ho ∣ = 1. The homogeneity of T shows that ∣T (h)∣ ≤ c∣h∣
for all nonzero h: letting ho = h/∣h∣,

∣T (h)∣ = ∣T (∣h∣ho )∣ = ∣ ∣h∣T (ho ) ∣ = ∣h∣ ∣T (ho )∣ ≤ c∣h∣.

And the inequality holds for h = 0 as well. Thus T is O(h).


Now assume that T is not the zero mapping. Thus T (ho ) is nonzero for
some nonzero ho , and we may take ∣ho ∣ = 1. Let c = ∣T (ho )∣/2, a positive real
number. For every scalar multiple h = αho of ho , however small, compute
(noting for the last step that ∣h∣ = ∣α∣)

∣T (h)∣ = ∣T (αho )∣ = ∣αT (ho )∣ = ∣α∣ ∣T (ho )∣ = 2c∣α∣ = 2c∣h∣.

That is, ∣T (h)∣ > c∣h∣ for some arbitrarily small h-values, i.e., it is not the case
that ∣T (h)∣ ≤ c∣h∣ for all small enough h. Thus T fails the o(h) definition for
the particular constant c = ∣T (ho )∣/2. ⊓⊔
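The constant c in this proof can be computed concretely: for the matrix A of T , the maximum of ∣T (ho )∣ over the unit sphere is the largest singular value of A. A sketch assuming Python with NumPy (singular values are not otherwise used in this text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 2))                 # matrix of some T : R^2 -> R^3
c = np.linalg.svd(A, compute_uv=False)[0]   # max of |T(h_o)| over |h_o| = 1

for _ in range(1000):
    h = rng.normal(size=2)
    assert np.linalg.norm(A @ h) <= c * np.linalg.norm(h) + 1e-12
```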

For scalar-valued functions, a product property is useful to have at hand.

Proposition 4.2.6 (Product property for Landau functions). Consider


two scalar-valued functions and their product function,

ϕ, ψ, ϕψ ∶ B(0n , ε) Ð→ R.

If ϕ is o(1) and ψ is O(h) then ϕψ is o(h). Especially, the product of two


linear functions is o(h).

Proof. Let c > 0 be given. For some d > 0, for all h close enough to 0n ,

∣ϕ(h)∣ ≤ c/d and ∣ψ(h)∣ ≤ d∣h∣,

and so
∣(ϕψ)(h)∣ ≤ c∣h∣.
The second statement of the proposition follows from its first statement and
the previous proposition. ⊓⊔

For two particular examples, consider the linear functions

π1 , π2 ∶ R2 Ð→ R, π1 (h, k) = h, π2 (h, k) = k.

The proposition combines with the vector space properties of o(h, k) to say
that the functions

α, β ∶ R2 Ð→ R, α(h, k) = h2 − k 2 , β(h, k) = hk

are both o(h, k).


Beyond their vector space properties, the Landau spaces carry composition
properties. If ϕ ∶ B(0n , ε) Ð→ Rm and ψ ∶ B(0m , ρ) Ð→ Rℓ are both o(1),
then after shrinking ε if necessary, the composition ψ ○ ϕ ∶ B(0n , ε) Ð→ Rℓ is
also defined. That is, composition of o(1)-mappings is defined after suitably
shrinking a domain-ball. From now on, we shrink domain-balls as necessary
without further comment.

Proposition 4.2.7 (Composition properties of the Landau spaces).


The composition of o(1)-mappings is again an o(1)-mapping. Also, the com-
position of O(h)-mappings is again an O(h)-mapping. Furthermore, the com-
position of an O(h)-mapping and an o(h)-mapping, in either order, is again
an o(h)-mapping. Symbolically,

o(o(1)) = o(1),
O(O(h)) = O(h),
o(O(h)) = o(h),
O(o(h)) = o(h).

That is, o(1) and O(h) absorb themselves, and o(h) absorbs O(h) from either
side.

The rule o(o(1)) = o(1) encodes the persistence of continuity under com-
position.

Proof. For example, to verify the third rule, suppose that ϕ ∶ B(0n , ε) Ð→ Rm
is O(h) and that ψ ∶ B(0m , ρ) Ð→ Rℓ is o(k). Then

for some c > 0, ∣ϕ(h)∣ ≤ c∣h∣ for all small enough h.

Thus if h is small then so is ϕ(h), so that

for any d > 0, ∣ψ(ϕ(h))∣ ≤ d∣ϕ(h)∣ for all small enough h.

Since c is some particular positive number and d can be any positive number,
cd again can be any positive number. That is, letting e = cd and combining
the previous two displays, we have

for every e > 0, ∣(ψ ○ ϕ)(h)∣ ≤ e∣h∣ for all small enough h.

Hence ψ ○ ϕ is o(h), as desired.


A fully quantified version of the argument is as follows. The hypotheses
are that

there exist c > 0 and δ > 0 such that ∣ϕ(h)∣ ≤ c∣h∣ if ∣h∣ ≤ δ

and that

for every d > 0 there exists εd > 0 such that ∣ψ(k)∣ ≤ d∣k∣ if ∣k∣ ≤ εd .

Now let e > 0 be given. Define d = e/c and ρe = min{δ, εd /c}. Suppose that
∣h∣ ≤ ρe . Then

∣ϕ(h)∣ ≤ c∣h∣ ≤ εd since ∣h∣ ≤ δ and ∣h∣ ≤ εd /c,

and so

∣ψ(ϕ(h))∣ ≤ d∣ϕ(h)∣ ≤ cd∣h∣ since ∣ϕ(h)∣ ≤ εd and ∣ϕ(h)∣ ≤ c∣h∣.

That is,
∣ψ(ϕ(h))∣ ≤ e∣h∣ since cd = e.
This shows that ψ ○ ϕ is o(h), since for every e > 0 there exists ρe > 0 such
that ∣(ψ ○ ϕ)(h)∣ ≤ e∣h∣ if ∣h∣ ≤ ρe .
The other rules are proved similarly (Exercise 4.2.5). ⊔

Exercises

4.2.1. By analogy to Definition 4.2.1, give the appropriate definition of an


O(1)-mapping. What is the geometric interpretation of the definition? Need
an O(1)-mapping take 0 to 0?

4.2.2. Let e be a nonnegative real number. Consider the function

ϕe ∶ Rn Ð→ R, ϕe (x) = ∣x∣e .

(a) Suppose that e > 0. Let c > 0 be given. If ∣h∣ ≤ c1/e then what do we
know about ∣ϕe (h)∣ in comparison to c? What does this tell us about ϕe ?
(b) Prove that ϕ1 is O(h).
(c) Suppose that e > 1. Combine parts (a) and (b) with the product prop-
erty for Landau functions (Proposition 4.2.6) to show that ϕe is o(h).
(d) Explain how parts (a), (b), and (c) have proved Proposition 4.2.2.

4.2.3. Complete the proof of Proposition 4.2.4.

4.2.4. Establish the componentwise nature of the O(h) condition, and estab-
lish the componentwise nature of the o(h) condition.

4.2.5. Complete the proof of Proposition 4.2.7.



4.3 One-Variable Revisionism: The Derivative Redefined


The one-variable derivative as recalled at the beginning of the chapter,

    f ′ (a) = limh→0 (f (a + h) − f (a)) / h ,
is a construction. To rethink the derivative, we should characterize it instead.
To think clearly about what it means for the graph of a function to have a
tangent of slope t at a point (a, f (a)), we should work in local coordinates and
normalize to the case of a horizontal tangent. That is, given a function f of
x-values near some point a, and given a candidate tangent-slope t at (a, f (a)),
define a related function g of h-values near 0,

g(h) = f (a + h) − f (a) − th.

Thus g takes 0 to 0, and the graph of g near the origin is like the graph of f
near (a, f (a)) but with the line of slope t subtracted. To reiterate, the idea
that f has a tangent of slope t at (a, f (a)) has been normalized to the tidier
idea that g has slope 0 at the origin:
To say that the graph of g is horizontal at the origin is to say that for
every positive real number c, however small, the region between the
lines of slope ±c contains the graph of g close enough to the origin.
That is:
The intuitive condition for the graph of g to be horizontal at the origin
is precisely that g is o(h). The horizontal nature of the graph of g
at the origin connotes that the graph of f has a tangent of slope t
at (a, f (a)).
The symbolic connection between this characterization of the derivative
and the constructive definition is immediate. As always, the definition of f
having derivative f ′ (a) at a is

    limh→0 (f (a + h) − f (a)) / h = f ′ (a),
which is to say,
    limh→0 (f (a + h) − f (a) − f ′ (a)h) / h = 0,
and indeed, this is precisely the o(h) condition on g. Figure 4.2 illustrates the
idea that when h is small, not only is the vertical distance f (a + h) − f (a) −
f ′ (a)h from the tangent line to the curve small as well, but it is small even
relative to the horizontal distance h.
We need to scale these ideas up to many dimensions. Instead of viewing
the one-variable derivative as the scalar f ′ (a), think of it as the corresponding

Figure 4.2. Vertical distance from tangent line to curve

linear mapping Ta ∶ R Ð→ R, multiplication by f ′ (a). That is, think of it as


the mapping
Ta (h) = f ′ (a)h for all h ∈ R.
Figure 4.3 incorporates this idea. The figure is similar to Figure 4.2, but it
shows the close approximation in the local coordinate system centered at the
point of tangency, and in the local coordinate system the tangent line is indeed
the graph of the linear mapping Ta . The shaded axis-portions in the figure
are h horizontally and g(h) = f (a + h) − f (a) − f ′ (a)h vertically, and the fact
that the vertical portion is so much smaller illustrates that g(h) is o(h).

Figure 4.3. Vertical distance in local coordinates



We are nearly ready to rewrite the derivative definition pandimensionally.


The small remaining preliminary matter is to take into account the local
nature of the characterizing condition: it depends on the behavior of f only
on an ε-ball about a, but on the other hand, it does require an entire ε-ball.
Thus the following definition is appropriate for our purposes.

Definition 4.3.1 (Interior point). Let A be a subset of Rn , and let a be


a point of A. Then a is an interior point of A if some ε-ball about a is a
subset of A. That is, a is an interior point of A if B(a, ε) ⊂ A for some ε > 0.

Now we can define the derivative in a way that encompasses many variables
and is suitably local.

Definition 4.3.2 (Derivative). Let A be a subset of Rn , let f ∶ A Ð→ Rm


be a mapping, and let a be an interior point of A. Then f is differentiable
at a if there exists a linear mapping Ta ∶ Rn Ð→ Rm satisfying the condition

f (a + h) − f (a) − Ta (h) is o(h). (4.1)

This Ta is called the derivative of f at a, written Dfa or (Df )a . When f


is differentiable at a, the matrix of the linear mapping Dfa is written f ′ (a)
and is called the Jacobian matrix of f at a.

Here are two points to note about Definition 4.3.2:


• Again, an assertion that a mapping is differentiable at a point has the
connotation that the point is an interior point of the mapping’s domain.
That is, if f is differentiable at a then B(a, ε) ⊂ A for some ε > 0. In the
special case n = 1, we are disallowing the derivative at an endpoint of the
domain.
• The domain of the linear mapping Ta is unrestricted even if f itself is
defined only locally about a. Indeed, the definition of linearity requires
that the linear mapping have all of Rn as its domain. Every linear mapping
is so uniform that in any case its behavior on all of Rn is determined by its
behavior on any ε-ball about 0n (Exercise 4.3.1). In geometric terms, the
graph of T , the tangent object approximating the graph of f at (a, f (a)),
extends without bound, even if the graph of f itself is restricted to points
near (a, f (a)). But the approximation of the graph by the tangent object
needs to be close only near the point of tangency.
Returning to the idea of the derivative as a linear mapping, when n = 2
and m = 1 a function f ∶ A Ð→ R is differentiable at an interior point (a, b) of A
if for small scalar values h and k, f (a + h, b + k) − f (a, b) is well approximated
by a linear function
T (h, k) = αh + βk
where α and β are scalars. Since the equation z = f (a, b) + αh + βk describes
a plane in (x, y, z)-space (where h = x − a and k = y − b), f is differentiable

at (a, b) if its graph has a well-fitting tangent plane through (a, b, f (a, b)).
(See Figure 4.4.) Here the derivative of f at (a, b) is the linear mapping
taking (h, k) to αh + βk, and the Jacobian matrix of f at a is therefore [α, β].
The tangent plane in the figure is not the graph of the derivative Df(a,b) ,
but rather a translation of the graph. Another way to say this is that the
(h, k, Df(a,b) (h, k))-coordinate system has its origin at the point (a, b, f (a, b))
in the figure.

Figure 4.4. Graph and tangent plane

When n = 1 and m = 3, a mapping f ∶ A Ð→ R3 is differentiable at an
interior point a of A if f (a + h) − f (a) is closely approximated for small real h
by a linear mapping

            ⎡α⎤
    T (h) = ⎢β ⎥ h
            ⎣γ ⎦

for some scalars α, β, and γ. As h varies through R, f (a) + T (h) traverses the
line ℓ = ℓ(f (a), (α, β, γ)) in R3 that is tangent at f (a) to the output curve
of f . (See Figure 4.5.) Here Dfa is this same mapping T , and the corresponding
Jacobian matrix is the 3-by-1 column with entries α, β, and γ. Note that the
figure does not show the domain of f , so it may help to think of f as a
time-dependent traversal of the curve rather than as the curve itself. The
figure does not have room for the (h, Dfa (h))-coordinate system (which is
4-dimensional), but the Dfa (h)-coordinate system has its origin at the point f (a).
For an example, let A = B((0, 0), 1) be the unit disk in R2 , and consider
the function

Figure 4.5. Tangent to a parametrized curve

f ∶ A Ð→ R, f (x, y) = x2 − y 2 .
We show that for every point (a, b) ∈ A, f is differentiable at (a, b), and its
derivative is the linear mapping

T(a,b) ∶ R2 Ð→ R, T(a,b) (h, k) = 2ah − 2bk.

To verify this, we need to check Definition 4.3.2. The point that is written
in the definition intrinsically as a (where a is a vector) is written here in
coordinates as (a, b) (where a and b are scalars), and similarly the vector h in
the definition is written (h, k) here, because the definition is intrinsic, whereas
here we are going to compute. To check the definition, first note that every
point (a, b) of A is an interior point; the fact that every point of A is interior
doesn’t deserve a detailed proof right now, only a quick comment. Second,
confirm the derivative’s characterizing property (4.1) by calculating that

f (a + h, b + k) − f (a, b) − T(a,b) (h, k)


= (a + h)2 − (b + k)2 − a2 + b2 − 2ah + 2bk
= h2 − k 2 .

We saw immediately after the product property for Landau functions (Propo-
sition 4.2.6) that h2 −k 2 is o(h, k). This is the desired result. Also, the calcula-
tion tacitly shows how the derivative was found for us to verify: the difference
f (a + h, b + k) − f (a, b) is 2ah − 2bk + h2 − k 2 , which as a function of h and k has
a linear part 2ah − 2bk and a quadratic part h2 − k 2 that is much smaller when
h and k are small. The linear approximation of the difference is the derivative.
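
The verification can also be spot-checked numerically. In the Python sketch
below (ours; the sample point and increments are arbitrary), the remainder
f (a + h, b + k) − f (a, b) − T(a,b) (h, k) visibly shrinks faster than ∣(h, k)∣:

import math

def f(x, y): return x*x - y*y

a, b = 0.3, 0.4                        # a sample interior point of the unit disk
for t in [0.1, 0.01, 0.001, 0.0001]:
    h, k = t, 2*t                      # shrink (h,k) toward (0,0)
    remainder = f(a+h, b+k) - f(a, b) - (2*a*h - 2*b*k)
    print(t, abs(remainder) / math.hypot(h, k))
# The printed ratio is proportional to t, i.e., the remainder is o(h,k).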
Before continuing, we need to settle a grammatical issue. Definition 4.3.2
refers to any linear mapping that satisfies condition (4.1) as the derivative of f
at a. Fortunately, the derivative, if it exists, is unique, justifying the definite
article. The uniqueness is geometrically plausible: if two straight objects (e.g.,
lines or planes) approximate the graph of f well near (a, f (a)), then they

should also approximate each other well enough that straightness forces them
to coincide. The quantitative argument amounts to recalling that the only
linear o(h)-mapping is zero.

Proposition 4.3.3 (Uniqueness of the derivative). Let f ∶ A Ð→ Rm


(where A ⊂ Rn ) be differentiable at a. Then there is only one linear mapping
satisfying the definition of Dfa .

Proof. Suppose that the linear mappings Ta , T̃a ∶ Rn Ð→ Rm are both deriva-
tives of f at a. Then the two mappings

f (a + h) − f (a) − Ta (h) and f (a + h) − f (a) − T̃a (h)

are both o(h). By the vector space properties of o(h), so is their difference
(T̃a − Ta )(h). Since the linear mappings from Rn to Rm form a vector space
as well, the difference T̃a − Ta is linear. But the only o(h) linear mapping is
the zero mapping, so T̃a = Ta as desired. ⊔

Finally, another result is immediate in our setup.

Proposition 4.3.4 (Differentiability implies continuity). If f is differ-


entiable at a then f is continuous at a.

Proof. Compute, using the differentiability of f at a and the fact that lin-
ear mappings are O(h), then the containment o(h) ⊂ O(h) and the closure
of O(h) under addition, and finally the containment O(h) ⊂ o(1), that

f (a + h) − f (a) = f (a + h) − f (a) − Ta (h) + Ta (h) = o(h) + O(h) = O(h) = o(1).

Since the o(1) condition describes continuity, the argument is complete. ⊔


We will study the derivative via two routes. On the one hand, the linear
mapping Dfa ∶ Rn Ð→ Rm is specified by mn scalar entries of its matrix f ′ (a),
and so calculating the derivative is tantamount to determining these scalars
by using coordinates. On the other hand, developing conceptual theorems
without getting lost in coefficients and indices requires the intrinsic idea of
the derivative as a well-approximating linear mapping.

Exercises

4.3.1. Let T ∶ Rn Ð→ Rm be a linear mapping. Show that for every ε > 0, the
behavior of T on B(0n , ε) determines the behavior of T everywhere.

4.3.2. Give a geometric interpretation of the derivative when n = m = 2. Give


a geometric interpretation of the derivative when n = 1 and m = 2.

4.3.3. Let f ∶ A Ð→ Rm (where A ⊂ Rn ) have component functions f1 , . . . , fm ,


and let a be an interior point of A. Let T ∶ Rn Ð→ Rm be a linear mapping
with component functions T1 , . . . , Tm . Using the componentwise nature of the
o(h) condition, established in Section 4.2, prove the componentwise nature
of differentiability: f is differentiable at a with derivative T if and only if
each component fi is differentiable at a with derivative Ti .
4.3.4. Let f (x, y) = (x2 − y 2 , 2xy). Show that Df(a,b) (h, k) = (2ah − 2bk, 2bh +
2ak) for all (a, b) ∈ R2 . (By the previous problem, you may work component-
wise.)
4.3.5. Let g(x, y) = x e^y . Show that Dg(a,b) (h, k) = h e^b + k a e^b for all (a, b) ∈
R2 . (Note that because e^0 = 1 and because the derivative of the exponential
function at 0 is 1, the one-variable characterizing property says that e^k − 1 =
k + o(k).)
4.3.6. Show that if f ∶ Rn Ð→ Rm satisfies ∣f (x)∣ ≤ ∣x∣2 for all x ∈ Rn then f
is differentiable at 0n .

4.3.7. Show that the function f (x, y) = √(∣xy∣) for all (x, y) ∈ R2 is not dif-
ferentiable at (0, 0). (First see what Df(0,0) (h, 0) and Df(0,0) (0, k) need to
be.)

4.4 Basic Results and the Chain Rule


Before constructing the derivative coordinatewise via the Jacobian matrix, we
derive some results intrinsically from its characterizing property. We begin by
computing two explicit derivatives.
Proposition 4.4.1 (Derivatives of constant and linear mappings).
(1) Let C ∶ A Ð→ Rm (where A ⊂ Rn ) be the constant mapping C(x) = c for
all x ∈ A, where c is some fixed value in Rm . Then the derivative of C at
every interior point a of A is the zero mapping.
(2) The derivative of a linear mapping T ∶ Rn Ð→ Rm at every point a ∈ Rn is
again T .
Proof. Both of these results hold essentially by grammar. In general, the
derivative of a mapping f at a is the linear mapping that well approximates
f (a + h) − f (a) for h near 0n . But C(a + h) − C(a) is the zero mapping for
all h with a + h ∈ A, so it is well approximated near 0n by the zero mapping on Rn .
Similarly, T (a + h) − T (a) is T (h) for all h ∈ Rn , and this linear mapping is
well approximated by itself near 0n .
To prove (1) more symbolically, let Z ∶ Rn Ð→ Rm denote the zero map-
ping, Z(h) = 0m for all h ∈ Rn . Then

C(a + h) − C(a) − Z(h) = c − c − 0 = 0 for all h ∈ Rn .



Being the zero mapping, C(a + h) − C(a) − Z(h) is crushingly o(h), showing
that Z meets the condition to be DCa . And (2) is similar (Exercise 4.4.1). ⊓

Of course, differentiation passes through addition and scalar multiplication


of mappings.

Proposition 4.4.2 (Linearity of differentiation). Let f ∶ A Ð→ Rm


(where A ⊂ Rn ) and g ∶ B Ð→ Rm (where B ⊂ Rn ) be mappings, and let a
be a point of A ∩ B. Suppose that f and g are differentiable at a with deriva-
tives Dfa and Dga . Then:
(1) The sum f + g ∶ A ∩ B Ð→ Rm is differentiable at a with derivative D(f +
g)a = Dfa + Dga .
(2) For every α ∈ R, the scalar multiple αf ∶ A Ð→ Rm is differentiable at a
with derivative D(αf )a = αDfa .

The proof is a matter of seeing that the vector space properties of o(h)
encode the sum rule and constant multiple rule for derivatives.

Proof. Since f and g are differentiable at a, some ball about a lies in A and
some ball about a lies in B. The smaller of these two balls lies in A ∩ B. That
is, a is an interior point of the domain of f + g. With this topological issue
settled, proving the proposition reduces to direct calculation. For (1),

(f + g)(a + h) − (f + g)(a) − (Dfa + Dga )(h)


= f (a + h) − f (a) − Dfa (h) + g(a + h) − g(a) − Dga (h)
= o(h) + o(h) = o(h).

And (2) is similar (Exercise 4.4.2). ⊔


Elaborate mappings are built by composing simpler ones. The next theo-
rem is the important result that the derivative of a composition is the composi-
tion of the derivatives. That is, the best linear approximation of a composition
is the composition of the best linear approximations.

Theorem 4.4.3 (Chain rule). Let f ∶ A Ð→ Rm (where A ⊂ Rn ) be a map-


ping, let B ⊂ Rm be a set containing f (A), and let g ∶ B Ð→ Rℓ be a mapping.
Thus the composition g ○ f ∶ A Ð→ Rℓ is defined. If f is differentiable at the
point a ∈ A, and g is differentiable at the point f (a) ∈ B, then the composition
g ○ f is differentiable at the point a, and its derivative there is

D(g ○ f )a = Dgf (a) ○ Dfa .

In terms of Jacobian matrices, since the matrix of a composition is the product


of the matrices, the chain rule is

(g ○ f )′ (a) = g ′ (f (a)) f ′ (a).



The fact that we can prove that the derivative of a composition is the
composition of the derivatives without an explicit formula for the derivative
is akin to the fact in the previous chapter that we could prove that the deter-
minant of the product is the product of the determinants without an explicit
formula for the determinant.

Proof. To showcase the true issues of the argument clearly, we reduce the
problem to a normalized situation. For simplicity, we first take a = 0n and
f (a) = 0m . So we are given that

f (h) = S(h) + o(h),


g(k) = T (k) + o(k),

and we need to show that

(g ○ f )(h) = (T ○ S)(h) + o(h).

Compute that

g(f (h)) = g(Sh + o(h)) by the first given


= T Sh + T (o(h)) + o(Sh + o(h)) by the second.

We know that T k = O(k) and Sh = O(h), so the previous display gives

(g ○ f )(h) = (T ○ S)(h) + O(o(h)) + o(O(h) + o(h)).

Since o(h) ⊂ O(h) and O(h) is closed under addition, since o(h) absorbs O(h)
from either side, and since o(h) is closed under addition, the error (the last
two terms on the right side of the previous display) is

O(o(h)) + o(O(h) + o(h)) = O(o(h)) + o(O(h)) = o(h) + o(h) = o(h).

Therefore we have shown that

(g ○ f )(h) = (T ○ S)(h) + o(h),

exactly as desired. The crux of the matter is that o(h) absorbs O(h) from
either side.
For the general case, no longer assuming that a = 0n and f (a) = 0m , we
are given that

f (a + h) = f (a) + S(h) + o(h),


g(f (a) + k) = g(f (a)) + T (k) + o(k),

and we need to show that

(g ○ f )(a + h) = (g ○ f )(a) + (T ○ S)(h) + o(h).



Compute that

g(f (a + h)) = g(f (a) + Sh + o(h)) by the first given


= g(f (a)) + T Sh + T (o(h)) + o(Sh + o(h)) by the second,

and from here the proof that the remainder term is o(h) is precisely as it is
in the normalized case. ⊓

Two quick applications of the chain rule arise naturally for scalar-valued
functions. Given two such functions, not only is their sum defined, but because
R is a field (unlike Rm for m > 1), so is their product, and so is their quotient at
points where the denominator is nonzero. With some help from the chain rule, the derivative
laws for product and quotient follow easily from elementary calculations.

Lemma 4.4.4 (Derivatives of the product and reciprocal functions).


Define the product function,

p ∶ R2 Ð→ R, p(x, y) = xy,

and define the reciprocal function

r ∶ R − {0} Ð→ R, r(x) = 1/x.

Then:
(1) The derivative of p at every point (a, b) ∈ R2 exists and is

Dp(a,b) (h, k) = bh + ak.

(2) The derivative of r at every nonzero real number a exists and is

Dra (h) = −h/a2 .

Proof. (1) Compute

p(a + h, b + k) − p(a, b) − bh − ak = (a + h)(b + k) − ab − bh − ak = hk.

By the size bounds, ∣h∣ ≤ ∣(h, k)∣ and ∣k∣ ≤ ∣(h, k)∣, so ∣hk∣ = ∣h∣ ∣k∣ ≤ ∣(h, k)∣2 .
Since ∣(h, k)∣2 is ϕ2 (h, k) (where ϕe is the example from Proposition 4.2.2), it
is o(h, k).
Statement (2) is left as Exercise 4.4.3. ⊔
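
Statement (1) admits the same kind of numeric spot-check. In this Python
sketch (ours; the sample point is arbitrary), the remainder p(a + h, b + k) −
p(a, b) − (bh + ak), which equals hk exactly, vanishes faster than ∣(h, k)∣:

import math

def p(x, y): return x * y

a, b = 1.7, -2.5                       # an arbitrary sample point
for t in [0.1, 0.01, 0.001]:
    h, k = t, 2*t
    remainder = p(a+h, b+k) - p(a, b) - (b*h + a*k)
    print(t, abs(remainder) / math.hypot(h, k))
# remainder equals h*k, so the ratio shrinks like t, as the lemma predicts.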

Proposition 4.4.5 (Multivariable product and quotient rules). Let


f ∶ A Ð→ R (where A ⊂ Rn ) and g ∶ B Ð→ R (where B ⊂ Rn ) be functions, and
let f and g be differentiable at a. Then:
(1) f g is differentiable at a with derivative

D(f g)a (h) = g(a)Dfa (h) + f (a)Dga (h).



(2) If g(a) ≠ 0 then f /g is differentiable at a with derivative

    D(f /g)a (h) = (g(a)Dfa (h) − f (a)Dga (h)) / g(a)2 .

Proof. (1) As explained in the proof of Proposition 4.4.2, a is an interior point


of the domain A ∩ B of f g, so we have only to compute. The product function
f g is the composition p ○ (f, g), where (f, g) ∶ A ∩ B Ð→ R2 is the mapping
with component functions f and g. For every h ∈ Rn , the chain rule and the
componentwise nature of differentiation (this was Exercise 4.3.3) give

D(f g)a (h) = D(p ○ (f, g))a (h) = (Dp(f,g)(a) ○ D(f, g)a )(h)
= Dp(f (a),g(a)) (Dfa (h), Dga (h)),

and by the previous lemma,

Dp(f (a),g(a)) (Dfa (h), Dga (h)) = g(a)Dfa (h) + f (a)Dga (h).

This proves (1). Statement (2) is similar (Exercise 4.4.4) but with the wrinkle
that one needs to show that since g(a) ≠ 0 and since Dga exists, it follows
that a is an interior point of the domain of f /g. Here it is relevant that g
must be continuous at a, and so by the persistence of inequality principle
(Proposition 2.3.10), g is nonzero on some ε-ball at a, as desired. ⊔

With the results accumulated so far, we can compute the derivative of


every mapping whose component functions are given by rational expressions
in its component input scalars. By the componentwise nature of differentiabil-
ity, it suffices to find the derivatives of the component functions. Since these
are compositions of sums, products, and reciprocals of constants and linear
functions, their derivatives are calculable with the existing machinery.
Suppose, for instance, that f (x, y) = (x2 − y)/(y + 1) for all (x, y) ∈ R2 such
that y ≠ −1. Note that every point of the domain of f is an interior point.
Rewrite f as

    f = (X 2 − Y )/(Y + 1)
where X is the linear function X(x, y) = x on R2 and similarly Y (x, y) = y.
Applications of the chain rule and virtually every other result on derivatives so
far show that at every point (a, b) in the domain of f , the derivative Df(a,b)
is given by (justify the steps)

Df(a,b) (h, k)
    = ((Y + 1)(a, b) D(X 2 − Y )(a,b) − (X 2 − Y )(a, b) D(Y + 1)(a,b) ) (h, k) / ((Y + 1)(a, b))2
    = ((b + 1)(D(X 2 )(a,b) − DY(a,b) ) − (a2 − b)(DY(a,b) + D1(a,b) )) (h, k) / (b + 1)2
    = ((b + 1)(2X(a, b)DX(a,b) − Y ) − (a2 − b)Y ) (h, k) / (b + 1)2
    = ((b + 1)(2aX − Y ) − (a2 − b)Y ) (h, k) / (b + 1)2
    = ((b + 1)(2ah − k) − (a2 − b)k) / (b + 1)2
    = (2a/(b + 1)) h − ((a2 + 1)/(b + 1)2 ) k.

In practice, this method is too unwieldy for any functions beyond the simplest,
and in any case, it applies only to mappings with rational component func-
tions. But on the other hand, there is no reason to expect much in the way of
computational results from our methods so far, since we have been studying
the derivative based on its intrinsic characterization. In the next section we
will construct the derivative in coordinates, enabling us to compute easily by
drawing on the results of one-variable calculus.
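
Unwieldy as the symbolic route is, its output is easy to test numerically.
This Python sketch (ours; the sample point is arbitrary) compares central
difference quotients against the closed form just derived:

def f(x, y): return (x*x - y) / (y + 1)

a, b, eps = 2.0, 3.0, 1e-6
d1 = (f(a+eps, b) - f(a-eps, b)) / (2*eps)   # numerical D1 f(a,b)
d2 = (f(a, b+eps) - f(a, b-eps)) / (2*eps)   # numerical D2 f(a,b)
print(d1, 2*a/(b+1))                         # both approximately 1.0
print(d2, -(a*a + 1)/(b+1)**2)               # both approximately -0.3125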
For another application of the chain rule, let A and B be subsets of Rn ,
and suppose that f ∶ A Ð→ B is invertible with inverse g ∶ B Ð→ A. Suppose
further that f is differentiable at a ∈ A and that g is differentiable at f (a).
The composition g ○ f is the identity mapping idA ∶ A Ð→ A, which, being the
restriction of a linear mapping, has that linear mapping id ∶ Rn Ð→ Rn as its
derivative at a. Therefore,

id = D(idA )a = D(g ○ f )a = Dgf (a) ○ Dfa .

This argument partly shows that for invertible f as described, the linear map-
ping Dfa is also invertible. A symmetric argument completes the proof by
showing that also id = Dfa ○ Dgf (a) . Because we have methods available to
check the invertibility of a linear map, we can apply this criterion once we
know how to compute derivatives.
Not too much should be made of this result, however; its hypotheses are
too strong. Even in the one-variable case, the function f (x) = x3 from R
to R is invertible and yet has the noninvertible derivative 0 at x = 0. (The
inverse, g(x) = ∛x, is not differentiable at 0, so the conditions above are not
met.) Besides, we would prefer a converse statement, that if the derivative is
invertible then so is the mapping. The converse statement is not true, but we
will see in Chapter 5 that it is locally true, i.e., it is true in the small.

Exercises
4.4.1. Prove part (2) of Proposition 4.4.1.
4.4.2. Prove part (2) of Proposition 4.4.2.
4.4.3. Prove part (2) of Lemma 4.4.4.
4.4.4. Prove the quotient rule.
4.4.5. Let f (x, y, z) = xyz. Find Df(a,b,c) for arbitrary (a, b, c) ∈ R3 . (Hint:
f is the product XY Z, where X is the linear function X(x, y, z) = x and
similarly for Y and Z.)
4.4.6. Define f (x, y) = xy 2 /(y − 1) on {(x, y) ∈ R2 ∶ y ≠ 1}. Find Df(a,b) where
(a, b) is a point in the domain of f .
4.4.7. (A generalization of the product rule.) Recall that a function
f ∶ Rn × Rn Ð→ R
is called bilinear if for all x, x′ , y, y ′ ∈ Rn and all α ∈ R,
f (x + x′ , y) = f (x, y) + f (x′ , y),
f (x, y + y ′ ) = f (x, y) + f (x, y ′ ),
f (αx, y) = αf (x, y) = f (x, αy).
(a) Show that if f is bilinear then f (h, k) is o(h, k).
(b) Show that if f is bilinear then f is differentiable with Df(a,b) (h, k) =
f (a, k) + f (h, b).
(c) What does this exercise say about the inner product?
4.4.8. (A bigger generalization of the product rule.) A function
f ∶ Rn × ⋯ × Rn Ð→ R
(there are k copies of Rn ) is called multilinear if for each j ∈ {1, . . . , k}, for
all x1 , . . . , xj , x′j , . . . , xk ∈ Rn and all α ∈ R,
f (x1 , . . . , xj + x′j , . . . , xk ) = f (x1 , . . . , xj , . . . , xk ) + f (x1 , . . . , x′j , . . . , xk )
f (x1 , . . . , αxj , . . . , xk ) = αf (x1 , . . . , xj , . . . , xk ).
(a) Show that if f is multilinear and a1 , . . . , ak , h1 , . . . , hk ∈ Rn then for
any j ∈ {2, . . . , k}, f (h1 , . . . , hj , aj+1 , . . . , ak ) is o(h1 , . . . , hk ). The same result
holds if any j inputs to f are h’s, rather than the first j inputs, because
permuting the inputs of a multilinear function creates another multilinear
function. Flesh this argument out as much as feels necessary for your under-
standing.
(b) Show that if f is multilinear then f is differentiable with
k
Df(a1 ,...,ak ) (h1 , . . . , hk ) = ∑ f (a1 , . . . , aj−1 , hj , aj+1 , . . . , ak ).
j=1

(c) When k = n, what does this exercise say about the determinant?

4.5 Calculating the Derivative


Working directly from Definition 4.3.2 of the multivariable derivative with-
out using coordinates has yielded some easy results and one harder one—the
chain rule—but no explicit description of the derivative except in the simplest
cases. We don’t even know that any multivariable derivatives exist except for
mappings with rational component functions.
Following the general principle that necessary conditions are more easily
obtained than sufficient ones, we assume that the derivative exists and de-
termine what it then must be. Geometry provides the insight. By the usual
componentwise argument, there is no loss in studying a function f with scalar
output, i.e., we may take m = 1. Setting n = 2 fits the graph of f in R3 where
we can see it. Thus take f ∶ A Ð→ R where A ⊂ R2 .
Suppose that f is differentiable at the point (a, b). Then the graph of f
has a well-fitting tangent plane P at the point (a, b, f (a, b)), as shown ear-
lier, in Figure 4.4. To determine this plane, we need two of its lines through
(a, b, f (a, b)). The natural lines to consider are those whose (x, y)-shadows
run in the x and y directions. Call them ℓx and ℓy . (See Figure 4.6.)

Figure 4.6. Cross-sectional lines

The line ℓx is tangent to a cross section of the graph of f . To see this cross
section, freeze the variable y at the value b and look at the resulting function
of one variable, ϕ(x) = f (x, b). The slope of ℓx in the vertical (x, b, z)-plane
is precisely ϕ′ (a). A small technicality here is that since (a, b) is an interior
point of A, also a is an interior point of the domain of ϕ.
Similarly, ℓy has slope ψ ′ (b) where ψ(y) = f (a, y). The linear function
approximating f (a + h, b + k) − f (a, b) for small (h, k) is now specified as
T (h, k) = ϕ′ (a)h + ψ ′ (b)k. Thus Df(a,b) has matrix [ϕ′ (a) ψ ′ (b)]. Since the
entries of this matrix are simply one-variable derivatives, this is something


that we can compute.

Definition 4.5.1 (Partial derivative). Let A be a subset of Rn , let f ∶


A Ð→ R be a function, and let a = (a1 , . . . , an ) be an interior point of A. Fix
j ∈ {1, . . . , n}. Define

ϕ(t) = f (a1 , . . . , aj−1 , t, aj+1 , . . . , an ) for t near aj .

Then the jth partial derivative of f at a is defined as

Dj f (a) = ϕ′ (aj )

if ϕ′ (aj ) exists. Here the prime signifies ordinary one-variable differentiation.


Equivalently,

    Dj f (a) = limt→0 (f (a + tej ) − f (a)) / t
if the limit exists and it is not being taken at an endpoint of the domain of
the difference quotient.

Partial derivatives are easy to compute: fix all but one of the variables,
and then take the one-variable derivative with respect to the variable that
remains. For example, if

    f (x, y, z) = e^y cos x + z

then

    D1 f (a, b, c) = (d/dx)(e^b cos x + c) ∣x=a = −e^b sin a,
    D2 f (a, b, c) = e^b cos a,
    D3 f (a, b, c) = 1.
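
Because these are ordinary one-variable derivatives, they are easy to echo
with difference quotients. A Python sketch (ours; the sample point is
arbitrary):

import math

def f(x, y, z): return math.exp(y) * math.cos(x) + z

a, b, c, eps = 1.0, 0.5, 2.0, 1e-6
d1 = (f(a+eps, b, c) - f(a-eps, b, c)) / (2*eps)  # freeze y, z; vary x
d2 = (f(a, b+eps, c) - f(a, b-eps, c)) / (2*eps)  # freeze x, z; vary y
d3 = (f(a, b, c+eps) - f(a, b, c-eps)) / (2*eps)  # freeze x, y; vary z
print(d1, -math.exp(b) * math.sin(a))             # agree to roundoff
print(d2, math.exp(b) * math.cos(a))
print(d3, 1.0)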

Theorem 4.5.2 (The derivative in coordinates: necessity). Let the


mapping f ∶ A Ð→ Rm (where A ⊂ Rn ) be differentiable at the point a ∈ A.
Then for each i ∈ {1, . . . , m} and j ∈ {1, . . . , n}, the partial derivative Dj fi (a)
exists. Furthermore, each Dj fi (a) is the (i, j)th entry of the Jacobian matrix
of f at a. Thus the Jacobian matrix is
    f ′ (a) = ⎡ D1 f1 (a)  ⋯  Dn f1 (a) ⎤
              ⎢ D1 f2 (a)  ⋯  Dn f2 (a) ⎥
              ⎢     ⋮       ⋱      ⋮    ⎥ = [Dj fi (a)] i=1,...,m; j=1,...,n .
              ⎣ D1 fm (a)  ⋯  Dn fm (a) ⎦
Proof. The idea is to read off the (i, j)th entry of f ′ (a) by studying the ith
component function of f and letting h → 0n along the jth coordinate direction
in the defining property (4.1) of the derivative. The ensuing calculation will
repeat the quick argument in Section 4.3 that the characterization of the
derivative subsumes the construction in the one-variable case.
The derivative of the component function fi at a is described by the ith
row of f ′ (a). Call the row entries di1 , di2 , . . . , din . Since “linear of is matrix
times,” it follows that
(Dfi )a (tej ) = dij t for all t ∈ R.
Let h = tej with t a variable real number, so that h → 0n as t → 0R . Since
(Dfi )a exists, we have as a particular instance of the characterizing property
that fi (a + h) − fi (a) − (Dfi )a (h) is o(h),

    0 = limt→0 ∣fi (a + tej ) − fi (a) − (Dfi )a (tej )∣ / ∣tej ∣
      = limt→0 ∣ (fi (a + tej ) − fi (a) − dij t) / t ∣
      = limt→0 ∣ (fi (a + tej ) − fi (a)) / t − dij ∣ .

That is,

    limt→0 (fi (a + tej ) − fi (a)) / t = dij .
The previous display says precisely that Dj fi (a) exists and equals dij . ⊔

So the existence of the derivative Dfa makes necessary the existence of all
partial derivatives of all component functions of f at a. The natural question
is whether their existence is also sufficient for the existence of Dfa . It is not.
The proof of Theorem 4.5.2 was akin to the straight line test from Section 2.3:
the general condition h → 0n was specialized to h = tej , i.e., to letting h
approach 0n only along the axes. The specialization let us show that the
derivative matrix entries are the partial derivatives of the component functions
of f . But the price for this specific information was loss of generality, enough
loss that the derived necessary conditions are not sufficient.
For example, the function f ∶ R2 Ð→ R defined by

    f (x, y) = 2xy/(x2 + y 2 ) for (x, y) ≠ (0, 0),    f (0, 0) = 0,

has for its first partial derivative at the origin

    D1 f (0, 0) = limt→0 (f (t, 0) − f (0, 0)) / t = limt→0 (0 − 0)/t = 0,
and similarly D2 f (0, 0) = 0; but as discussed in Chapter 2, f is not contin-
uous at the origin, much less differentiable there. However, this example is
contrived, the sort of function that one sees only in a mathematics class, and
in fact a result in the spirit of the converse to Theorem 4.5.2 does hold, though
with stronger hypotheses.

Theorem 4.5.3 (The derivative in coordinates: sufficiency). Let f ∶


A Ð→ Rm (where A ⊂ Rn ) be a mapping, and let a be an interior point of A.
Suppose that for each i ∈ {1, . . . , m} and j ∈ {1, . . . , n}, the partial derivative
Dj fi exists not only at a but at all points in some ε-ball about a, and the
partial derivative Dj fi is continuous at a. Then f is differentiable at a.
Note that if f meets the conditions of Theorem 4.5.3 (all partial derivatives
of all component functions of f exist at and about a, and they are continuous
at a) then the theorem’s conclusion (f is differentiable at a) is the condition
of Theorem 4.5.2, so that the latter theorem tells us the derivative of f (the
entries of its matrix are the partial derivatives). But the example given just
before Theorem 4.5.3 shows that the converse fails: even if all partial deriva-
tives of all component functions of f exist at a, the function f need not be
differentiable at a.
The difference between the necessary conditions in Theorem 4.5.2 and the
sufficient conditions in Theorem 4.5.3 has a geometric interpretation when
n = 2 and m = 1. The necessary conditions in Theorem 4.5.2 are:
If a graph has a well-fitting plane at some point, then at that point
we see well-fitting lines in the cross sections parallel to the coordinate
axes.
The sufficient conditions in Theorem 4.5.3 are:
If a graph has well-fitting lines in the cross sections at and near the
point, and if those lines don’t change much as we move among cross
sections at and near the point, then the graph has a well-fitting plane.
But well-fitting cross-sectional lines at the point are not enough to guaran-
tee a well-fitting plane at the point. The multivariable derivative is truly a
pandimensional construct, not just an amalgamation of cross-sectional data.
Proof. It suffices to prove the differentiability of each component function fi ,
so we may assume that m = 1, i.e., that f is scalar-valued. To thin out the
notation, the proof will be done for n = 3 (so for example, a = (a1 , a2 , a3 )),
but its generality should be clear.
Theorem 4.5.2 says that if the derivative Dfa exists then it is defined by
the matrix of partial derivatives Dj f (a). The goal therefore is to show that
the linear mapping

Ta (h1 , h2 , h3 ) = D1 f (a)h1 + D2 f (a)h2 + D3 f (a)h3

satisfies the defining property of the derivative. That is, we need to show that

f (a + h) − f (a) = D1 f (a)h1 + D2 f (a)h2 + D3 f (a)h3 + o(h).

We may take h small enough that the partial derivatives Dj f exist at all
points within distance ∣h∣ of a. Here we use the hypothesis that the partial
derivatives exist everywhere near a.

The idea is to move from a to a + h in steps, changing one coordinate at a


time,

f (a + h) − f (a) = f (a1 + h1 , a2 + h2 , a3 + h3 ) − f (a1 , a2 + h2 , a3 + h3 )


+ f (a1 , a2 + h2 , a3 + h3 ) − f (a1 , a2 , a3 + h3 )
+ f (a1 , a2 , a3 + h3 ) − f (a1 , a2 , a3 ).

Because the partial derivatives exist, we may apply the mean value theorem
in two directions and the one-variable derivative’s characterizing property in
the third,

f (a + h) − f (a) = D1 f (a1 + c1 , a2 + h2 , a3 + h3 )h1


+ D2 f (a1 , a2 + c2 , a3 + h3 )h2
+ D3 f (a1 , a2 , a3 )h3 + o(h3 ),

where ∣ci ∣ ≤ ∣hi ∣ for i = 1, 2. Since D1 f and D2 f are continuous at the point a =
(a1 , a2 , a3 ), and since the condition h → 03 squeezes each hi and ci to 0,

D1 f (a1 + c1 , a2 + h2 , a3 + h3 ) = D1 f (a) + o(1),


D2 f (a1 , a2 + c2 , a3 + h3 ) = D2 f (a) + o(1).

Also, o(1)hi = o(h) for i = 1, 2 and o(h3 ) = o(h), and so altogether we have

f (a + h) − f (a) = D1 f (a)h1 + D2 f (a)h2 + D3 f (a)h3 + o(h).

This is the desired result. ⊔


Thus, to reiterate some earlier discussion and to amplify slightly:


• The differentiability of f at a implies the existence of all the partial deriva-
tives at a, and the partial derivatives are the entries of the derivative
matrix,
• while the existence of all the partial derivatives at and about a, and their
continuity at a, combine to imply the differentiability of f at a,
• but the existence of all partial derivatives at a need not imply the differ-
entiability of f at a.
• And in fact, the previous proof shows that we need to check the scope and
continuity only of all but one of the partial derivatives. The proof used
the existence of D3 f at a but not its existence near a or its continuity
at a, and a variant argument or a reindexing shows that nothing is special
about the last variable. This observation is a bit of a relief, telling us that
in the case of one input variable, our methods do not need to assume that
the derivative exists at and about a point and is continuous at the point
in order to confirm merely that it exists at the point. We codify this bullet
as a variant sufficiency theorem:

Theorem 4.5.4 (The derivative in coordinates: sufficiency). Let f ∶


A Ð→ Rm (where A ⊂ Rn ) be a mapping, and let a be an interior point of A.
Suppose that for each i ∈ {1, . . . , m},
• for each j ∈ {1, . . . , n}, the partial derivative Dj fi (a) exists,
• and for all but at most one j ∈ {1, . . . , n}, the partial derivative Dj fi
exists in some ε-ball about a and is continuous at a.
Then f is differentiable at a.

Note how all this compares to the discussion of the determinant in the
previous chapter. There we wanted the determinant to satisfy characterizing
properties. We found the only function that could possibly satisfy them, and
then we verified that it did. Here we wanted the derivative to satisfy a char-
acterizing property, and we found the only possibility for the derivative—the
linear mapping whose matrix consists of the partial derivatives, which must
exist if the derivative does. But analysis is more subtle than algebra: this linear
mapping need not satisfy the characterizing property of the derivative unless
we add further assumptions. The derivative-existence theorem, Theorem 4.5.3
or the slightly stronger Theorem 4.5.4, is the most substantial result so far
in this chapter. We have already seen a counterexample to the converse of
Theorem 4.5.3, in which the function had partial derivatives but wasn’t differ-
entiable because it wasn’t even continuous (page 155). For a one-dimensional
counterexample to the converse of Theorem 4.5.3, in which the derivative ex-
ists but is not continuous, see Exercise 4.5.3. The example in the exercise does
not contradict the weaker converse of the stronger Theorem 4.5.4.
To demonstrate the ideas of this section so far, consider the function

    f (x, y) = x2 y/(x2 + y 2 )   if (x, y) ≠ (0, 0),
               0                  if (x, y) = (0, 0).
The top formula in the definition describes a rational function of x and y
on the punctured plane R2 − {(0, 0)}. Every rational function and all of its
partial derivatives are continuous on its domain (feel free to invoke this result),
and furthermore every point (a, b) away from (0, 0) lies in some ε-ball that
is also away from (0, 0). That is, for every point (a, b) ≠ (0, 0), the partial
derivatives of f exist at and about (a, b) and they are continuous at (a, b).
Thus the conditions for Theorem 4.5.3 are met, and so its conclusion follows:
f is differentiable at (a, b). Now Theorem 4.5.2 says that the derivative matrix
at (a, b) is the matrix of partial derivatives,

    f ′ (a, b) = [ D1 f (a, b)  D2 f (a, b) ] = [ 2ab3 /(a2 + b2 )2    a2 (a2 − b2 )/(a2 + b2 )2 ].

Consequently, the derivative of f at every nonzero (a, b) is the corresponding


linear map

    Df(a,b) (h, k) = (2ab3 /(a2 + b2 )2 ) h + (a2 (a2 − b2 )/(a2 + b2 )2 ) k.

However, this analysis breaks down at the point (a, b) = (0, 0). Here our only
recourse is to figure out whether a candidate derivative exists and then test
whether it works. The first partial derivative of f at (0, 0) is

    D1 f (0, 0) = limt→0 (f (t, 0) − f (0, 0)) / t = limt→0 (0 − 0)/t = 0,
and similarly D2 f (0, 0) = 0. So by Theorem 4.5.2, the only possibility for the
derivative of f at (0, 0) is the zero mapping. Now the question is,

is f (h, k) − f (0, 0) − 0 o(h, k)?

Because the denominator h2 + k 2 of f away from the origin is ∣(h, k)∣2 ,

    ∣f (h, k) − f (0, 0) − 0∣ = ∣f (h, k)∣ = ∣h∣2 ∣k∣ / ∣(h, k)∣2 .

Let (h, k) approach 02 along the line h = k. Because ∣h∣ = ∣(h, h)∣/√2,

    ∣f (h, h) − f (0, 0) − 0∣ = ∣h∣3 / ∣(h, h)∣2 = ∣(h, h)∣ / (2√2).
Thus along this line, the condition ∣f (h, k)−f (0, 0)−0∣ ≤ c∣(h, k)∣ fails for (say)
c = 1/4, and so f (h, k) − f (0, 0) − 0 is not o(h, k). That is, the function f is not
differentiable at (0, 0). And indeed, the graph of f near (0, 0) shows a surface
that isn’t well approximated by any plane through its center, no matter how
closely we zoom in. (See Figure 4.7. The figure shows that the cross-sectional
slopes over the axes are 0, while the cross-sectional slopes over the diagonals
are not, confirming our symbolic calculations.) Here we have used the straight
line test to get a negative answer; but recall that the straight line test alone
cannot give a positive answer, so the method here would need modification to
show that a function is differentiable.
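
The straight line test just carried out is also easy to run numerically; this
Python sketch (ours) shows the ratio refusing to shrink along the diagonal:

import math

def f(x, y):
    return x*x*y / (x*x + y*y) if (x, y) != (0.0, 0.0) else 0.0

for t in [0.1, 0.01, 0.001, 0.0001]:
    print(t, abs(f(t, t)) / math.hypot(t, t))
# The ratio stays near 1/(2*sqrt(2)), about 0.354, instead of tending to 0,
# so f(h,k) - f(0,0) - 0 is not o(h,k): f is not differentiable at (0,0).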
For another example, Exercise 4.3.4 used the characterizing property to
confirm the derivative of the function f (x, y) = (x2 − y 2 , 2xy). Now we can
use the theorems of this section to obtain the derivative and know that it
works. The function f has domain R2 , so every domain point is interior. Since
each component of f is a polynomial, so are all partial derivatives of the
components, making them continuous everywhere. Thus f is differentiable at
every point (a, b) ∈ R2 . The matrix of partial derivatives at (a, b) is

    ⎡ D1 f1 (a, b)  D2 f1 (a, b) ⎤   ⎡ 2a  −2b ⎤
    ⎣ D1 f2 (a, b)  D2 f2 (a, b) ⎦ = ⎣ 2b   2a ⎦ ,

and so the derivative of f at (a, b) is, as before,



Figure 4.7. The crimped sheet is differentiable everywhere except at the origin

Df(a,b) (h, k) = (2ah − 2bk, 2bh + 2ak).

Similarly, the function g(x, y) = x e^y from Exercise 4.3.5 has domain R2 ,
all of whose points are interior, and its partial derivatives D1 g(x, y) = e^y and
D2 g(x, y) = x e^y are continuous everywhere. Thus it is differentiable every-
where. Its matrix of partial derivatives at every point (a, b) is

    [D1 g(a, b)  D2 g(a, b)] = [e^b   a e^b ],

and so its derivative at (a, b) is

    Dg(a,b) (h, k) = e^b h + a e^b k.

The reader is encouraged to reproduce the derivative of the product func-


tion (Lemma 4.4.4, part (1)) similarly.
Returning to the discussion (at the end of the previous section) of invert-
ibility of a mapping and invertibility of its derivative, consider the mapping

f ∶ R2 − {(0, 0)} Ð→ R2 − {(0, 0)}, f (x, y) = (x2 − y 2 , 2xy).

At every (x, y) where f is defined, the partial derivatives are D1 f1 (x, y) = 2x,
D2 f1 (x, y) = −2y, D1 f2 (x, y) = 2y, and D2 f2 (x, y) = 2x. These are continuous
functions of (x, y), so for every (a, b) ≠ (0, 0), Df(a,b) exists and its matrix is

    f ′ (a, b) = ⎡ D1 f1 (a, b)  D2 f1 (a, b) ⎤ = ⎡ 2a  −2b ⎤
                 ⎣ D1 f2 (a, b)  D2 f2 (a, b) ⎦   ⎣ 2b   2a ⎦ .

The matrix has determinant 4(a2 + b2 ) > 0, and hence it is always invertible.
On the other hand, the mapping f takes the same value at points (x, y)
and −(x, y), so it is definitely not invertible.
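
Both halves of this observation can be made concrete numerically; a Python
sketch (ours; the sample point is arbitrary):

def f(x, y): return (x*x - y*y, 2*x*y)

a, b = 1.0, 2.0
det = (2*a)*(2*a) - (-2*b)*(2*b)    # determinant of [[2a, -2b], [2b, 2a]]
print(det, 4*(a*a + b*b))           # both 20.0: the derivative is invertible
print(f(a, b), f(-a, -b))           # identical outputs: f is not injective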
With the Jacobian matrix described explicitly, a more calculational version
of the chain rule is available.

Theorem 4.5.5 (Chain rule in coordinates). Let f ∶ A Ð→ Rm (where


A ⊂ Rn ) be differentiable at the point a of A, and let g ∶ f (A) Ð→ Rℓ be
differentiable at the point b = f (a). Then the composition g ○ f ∶ A Ð→ Rℓ is
differentiable at a, and its partial derivatives are
    Dj (g ○ f )i (a) = ∑k=1,...,m Dk gi (b) Dj fk (a)    for i = 1, . . . , ℓ, j = 1, . . . , n.

Proof. The composition is differentiable by the intrinsic chain rule. The Ja-
cobian matrix of g at b is

g ′ (b) = [Dk gi (b)]ℓ×m (row index i, column index k),

the Jacobian matrix of f at a is

f ′ (a) = [Dj fk (a)]m×n (row index k, column index j),

and the Jacobian matrix of g ○ f at a is

(g ○ f )′ (a) = [Dj (g ○ f )i (a)]ℓ×n (row index i, column index j).

By the intrinsic chain rule,

(g ○ f )′ (a) = g ′ (b)f ′ (a).

Equate the (i, j)th entries to obtain the result. ⊔


Notations for the partial derivative vary. A function is often described by


a formula such as w = f (x, y, z). Other notations for D1 f are

    f1 ,   fx ,   ∂f /∂x,   wx ,   ∂w/∂x.
If x, y, z are in turn functions of s and t then a classical formulation of the
chain rule would be

    ∂w/∂t = (∂w/∂x)(∂x/∂t) + (∂w/∂y)(∂y/∂t) + (∂w/∂z)(∂z/∂t).    (4.2)
The formula is easily visualized as chasing back along all dependency chains
from t to w in a diagram where an arrow means contributes to:

♣7 E x ❃
♣ ♣♣♣♣ ☛☛ ❃❃❃
♣ ☛ ❃❃
♣♣♣ ☛☛ ❃❃
♣♣♣ ☛ ❃❃
s ✸◆◆◆ ☛☛
❃❃
✸✸ ◆◆◆ ☛☛ ❃❃
✸✸ ◆☛☛◆◆◆ ❃❃
✸✸ ☛☛ ◆◆◆ ❃
✸☛✸☛ &
8 y /w
☛☛ ✸ qq ✁ @
☛ ✸qq ✸ q q ✁
☛ q ✸ ✁✁
☛ qq ✁
☛q☛qqq ✸✸✸ ✁✁✁
t ◆◆◆ ✸✸✸ ✁✁
◆◆◆
◆◆◆ ✸✸ ✁✁✁
◆◆◆ ✸✸ ✁✁✁
◆&  ✁
z
Unfortunately, for all its mnemonic advantages, the classical notation is a
veritable minefield of misinterpretation. Formula (4.2) doesn’t indicate where
the various partial derivatives are to be evaluated, for one thing. Specifying the
variable of differentiation by name rather than by position also becomes con-
fusing when different symbols are substituted for the same variable, especially
since the symbols themselves may denote specific values or other variables.
For example, one can construe many different meanings for the expression

    (∂f /∂x)(y, x, z).
Blurring the distinction between functions and the variables denoting their
outputs is even more problematic. If one has, say, z = f (x, t, u), x = g(t, u),

[Diagram: arrows run from each of t and u to x and also directly to z, and
from x to z.]
then chasing all paths from z back to t gives

    ∂z/∂t = (∂z/∂x)(∂x/∂t) + ∂z/∂t
with “∂z/∂t” meaning something different on each side of the equality. While
the classical formulas are useful and perhaps simpler to apply in elementary
situations, they are not particularly robust until one has a solid understand-
ing of the chain rule. On the other hand, the classical formulas work fine
in straightforward applications, so several exercises are phrased in the older
language to give you practice with it.
For example, let

(x, y) = f (r, θ) = (r cos θ, r sin θ),


(z, w) = g(x, y) = (x2 − y 2 , 2xy).

We compute (∂z/∂r)(2, π/3). The chain rule in coordinates gives

    ⎡ ∂z/∂r  ∂z/∂θ ⎤   ⎡ ∂z/∂x  ∂z/∂y ⎤ ⎡ ∂x/∂r  ∂x/∂θ ⎤
    ⎣ ∂w/∂r  ∂w/∂θ ⎦ = ⎣ ∂w/∂x  ∂w/∂y ⎦ ⋅ ⎣ ∂y/∂r  ∂y/∂θ ⎦ ,

and the upper left entry is

    ∂z/∂r = (∂z/∂x)(∂x/∂r) + (∂z/∂y)(∂y/∂r) = 2x cos θ − 2y sin θ.

We are given (r, θ) = (2, π/3), and it follows that (x, y) = (1, √3). So the
answer is

    (∂z/∂r)(2, π/3) = 2 ⋅ 1 ⋅ (1/2) − 2 ⋅ √3 ⋅ (√3/2) = −2.
To confirm the result without using the chain rule, note that f is the polar-
to-Cartesian change of coordinates, and g is the complex squaring function in
Cartesian coordinates, so that the composition g ○ f is the squaring function
in polar coordinates. That is, the composition is

(z, w) = (g ○ f )(r, θ) = (r2 cos 2θ, r2 sin 2θ).

Consequently ∂z/∂r = 2r cos 2θ, and substituting (r, θ) = (2, π/3) gives in
particular (∂z/∂r)(2, π/3) = 2 ⋅ 2 cos 2π/3 = 2 ⋅ 2 ⋅ (−1/2) = −2, as we know it
must.
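
The polar computation can be confirmed numerically as well; in this Python
sketch (ours) a central difference in r reproduces the chain-rule value:

import math

def z(r, th):
    x, y = r*math.cos(th), r*math.sin(th)   # (x,y) = f(r,theta)
    return x*x - y*y                        # the z-component of g(x,y)

r, th, eps = 2.0, math.pi/3, 1e-6
dz_dr = (z(r+eps, th) - z(r-eps, th)) / (2*eps)
print(dz_dr)                        # approximately -2.0, the chain-rule answer
print(2*r*math.cos(2*th))           # the closed form 2r cos 2(theta): also -2.0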
For another example, the function f (x) = x^x is usually differentiated as
follows in one-variable calculus: Consider the related function ln(f (x)) =
ln(x^x ) = x ln(x), and take derivatives of both sides to get f ′ (x)/f (x) =
1 + ln(x); thus f ′ (x) = x^x (1 + ln(x)). On the other hand, if we differenti-
ate x^x treating the first x as variable and the second x as constant then
we get x ⋅ x^(x−1) = x^x , and if we differentiate x^x treating the first x as con-
stant and the second x as variable then we get x^x ln(x); the sum of these
two sort-of-derivatives is x^x (1 + ln(x)), the derivative of x^x as computed a
moment ago. The method of treating the two x's as independent has pro-
duced the right answer, despite its illegality. This can't be a coincidence, and
it isn't. In general, if F (x1 , . . . , xn ) is a differentiable function of many vari-
ables then the derivative of the one-variable function f (x) = F (x, x, . . . , x) is
f ′ (x) = ∑i=1,...,n Di F (x, x, . . . , x). Exercise 4.5.10 is to prove this formula as an
immediate consequence of the chain rule, and then to use it to establish a
result known as Leibniz’s Rule. Exercise 4.5.11(a) is to use this formula to
differentiate the function f (x) = x^x , and more generally Exercise 4.5.11(b)
is to differentiate the iterated exponential f (x) = x^x^⋰^x .
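
The principle of treating the two x's independently is easy to watch in
action. A Python sketch (ours; the sample point is arbitrary) compares the
sum of the two partial difference quotients against the one-variable answer:

import math

def F(x1, x2): return x1 ** x2                 # the two x's made independent

x, eps = 1.5, 1e-6
d1 = (F(x+eps, x) - F(x-eps, x)) / (2*eps)     # partial in the base
d2 = (F(x, x+eps) - F(x, x-eps)) / (2*eps)     # partial in the exponent
print(d1 + d2)                                 # sum of the two partials
print(x**x * (1 + math.log(x)))                # the one-variable answer: same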

Exercises

4.5.1. Explain why in the discussion beginning this section the tangent
plane P consists of all points (a, b, f (a, b)) + (h, k, T (h, k)) where T (h, k) =
ϕ′ (a)h + ψ ′ (b)k.

4.5.2. This exercise shows that all partial derivatives of a function can exist at
and about a point without being continuous at the point. Define f ∶ R2 Ð→ R
by

    f (x, y) = 2xy/(x2 + y 2 ) for (x, y) ≠ (0, 0),    f (0, 0) = 0.
(a) Show that D1 f (0, 0) = D2 f (0, 0) = 0.
(b) Show that D1 f (a, b) and D2 f (a, b) exist and are continuous at all
other (a, b) ∈ R2 .
(c) Show that D1 f and D2 f are discontinuous at (0, 0).

4.5.3. Define f ∶ R Ð→ R by


    f (x) = x2 sin(1/x) for x ≠ 0,    f (0) = 0.

Show that f ′ (x) exists for all x but that f ′ is discontinuous at 0. Explain how
this disproves the converse of Theorem 4.5.3.

4.5.4. Discuss the derivatives of the following mappings at the following


points.
(a) f (x, y) = (x2 − y)/(y + 1) on {(x, y) ∈ R2 ∶ y ≠ −1} at generic (a, b)
with b ≠ −1. (After you are done, compare the effort of doing the problem now
to the effort of doing it as we did at the end of Section 4.4.)
(b) f (x, y) = xy 2 /(y − 1) on {(x, y) ∈ R2 ∶ y ≠ 1} at generic (a, b) with b ≠ 1.
(c) f (x, y) = xy/√(x2 + y 2 ) for (x, y) ≠ (0, 0) and f (0, 0) = 0, at generic
(a, b) ≠ (0, 0) and at (0, 0).

For the rest of these exercises, assume as much differentiability as necessary.

4.5.5. For what differentiable mappings f ∶ A Ð→ Rm is f ′ (a) a diagonal


matrix for all a ∈ A? (A diagonal matrix is a matrix whose (i, j)th entries for
all i ≠ j are 0.)

4.5.6. Show that if z = f (xy) then x, y, and z satisfy the differential equation
x ⋅ zx − y ⋅ zy = 0.

4.5.7. Let w = F (xz, yz). Show that x ⋅ wx + y ⋅ wy = z ⋅ wz .



4.5.8. If z = f (ax + by), show that bzx = azy .


4.5.9. The function f ∶ R2 Ð→ R is called homogeneous of degree k if
f (tx, ty) = tk f (x, y) for all scalars t and vectors (x, y). Letting f1 and f2
denote the first and second partial derivatives of f , show that such f satisfies
the differential equation
xf1 (x, y) + yf2 (x, y) = kf (x, y).
(Hint: First differentiate the homogeneity condition with respect to t, viewing
x and y as fixed but generic; the derivative of one side will require the chain
rule. Second, since the resulting condition holds for all scalars t, it holds for
any particular t of your choosing.)
4.5.10. (a) Consider a function f (x) = F (x, x, . . . , x) where F (x1 , x2 , . . . , xn )
is a differentiable function of n variables. Note that f = F ○ γ where γ(x) =
(x, x, . . . , x), and use this to show that f ′ (x) = ∑i=1,...,n Di F (x, x, . . . , x).
(b) (Leibniz’s Rule.) Let
f ∶ R2 Ð→ R
be a function such that for all a, b ∈ R, the integral

    F ∶ R Ð→ R,    F (x) = ∫_{y=a}^{b} f (x, y) dy

exists and is differentiable with respect to x, its derivative obtained by passing


the x-derivative through the y-integral,
    (d/dx) F (x) = (d/dx) ∫_{y=a}^{b} f (x, y) dy
        = limh→0 ( ∫_{y=a}^{b} f (x + h, y) dy − ∫_{y=a}^{b} f (x, y) dy ) / h
        = limh→0 ∫_{y=a}^{b} (f (x + h, y) − f (x, y))/h dy
        = ∫_{y=a}^{b} limh→0 (f (x + h, y) − f (x, y))/h dy    (!)
        = ∫_{y=a}^{b} (∂f /∂x)(x, y) dy.

(The “!” step requires justification, but under reasonable circumstances it


can be carried out.) Let α, β ∶ R Ð→ R be differentiable functions. Define a
function
    G ∶ R Ð→ R,    G(x) = ∫_{y=α(x)}^{β(x)} f (x, y) dy.

Thus x affects G in three ways: as a contributor to the lower and upper limits of
integration, and as a parameter for the integrand. What is dG(x)/dx? (Hint:
G(x) = H(x, x, x) where H(x1 , x2 , x3 ) = ∫_{y=α(x1 )}^{β(x2 )} f (x3 , y) dy.)

4.5.11. (a) Use the ideas at the end of the section to differentiate the function
f (x) = x^x .
(b) For x > 0, define f−1 (x) = 0 and then fn (x) = x^(fn−1 (x)) for n ≥ 0. Thus
f0 (x) = x^0 = 1, f1 (x) = x^1 = x, f2 (x) = x^x , f3 (x) = x^(x^x) , and so on. Show that

    x fn′ (x) = fn (x)(fn−1 (x) + ln x ⋅ x fn−1′ (x)),    n ≥ 0.

Use this result and induction to establish the closed form

    x fn′ (x) = fn (x) fn−1 (x) ∑_{i=0}^{n−1} (ln x)^i ∏_{j=1}^{i} fn−1−j (x),    n ≥ 0.

4.6 Higher-Order Derivatives

Partial differentiation can be carried out more than once on nice enough func-
tions. For example, if
    f (x, y) = e^(x sin y)

then

    D1 f (x, y) = sin y e^(x sin y) ,    D2 f (x, y) = x cos y e^(x sin y) .

Taking partial derivatives again yields

    D1 D1 f (x, y) = sin^2 y e^(x sin y) ,
    D1 D2 f (x, y) = cos y e^(x sin y) + x sin y cos y e^(x sin y) ,
    D2 D1 f (x, y) = cos y e^(x sin y) + x sin y cos y e^(x sin y) = D1 D2 f (x, y),
    D2 D2 f (x, y) = −x sin y e^(x sin y) + x^2 cos^2 y e^(x sin y) ,

and some partial derivatives of these in turn are

    D1 D1 D2 f (x, y) = 2 sin y cos y e^(x sin y) + x sin^2 y cos y e^(x sin y) ,
    D1 D2 D1 f (x, y) = D1 D1 D2 f (x, y),
    D2 D1 D2 f (x, y) = − sin y e^(x sin y) + 2x cos^2 y e^(x sin y) − x sin^2 y e^(x sin y)
                        + x^2 sin y cos^2 y e^(x sin y) ,
    D2 D2 D1 f (x, y) = D2 D1 D2 f (x, y),
    D1 D2 D2 f (x, y) = − sin y e^(x sin y) + 2x cos^2 y e^(x sin y) − x sin^2 y e^(x sin y)
                        + x^2 sin y cos^2 y e^(x sin y)
                      = D2 D1 D2 f (x, y),
    D2 D1 D1 f (x, y) = 2 sin y cos y e^(x sin y) + x sin^2 y cos y e^(x sin y)
                      = D1 D1 D2 f (x, y).

Suspiciously many of these match. The result of two or three partial differen-
tiations seems to depend only on how many were taken with respect to x and
how many with respect to y, not on the order in which they were taken.
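
As a numerical aside (a Python sketch of ours; the sample point and step
size are arbitrary): the double difference quotient that will reappear as
∆(h, k)/(hk) in the proof below approximates the mixed partial regardless of
the order of differentiation, and it matches the symbolic value found above:

import math

def f(x, y): return math.exp(x * math.sin(y))

x0, y0, e = 0.7, 1.1, 1e-4
# Delta(h,k)/(hk) over the box [x0-e, x0+e] x [y0-e, y0+e], with hk = (2e)^2.
mixed = (f(x0+e, y0+e) - f(x0+e, y0-e) - f(x0-e, y0+e) + f(x0-e, y0-e)) / (4*e*e)
exact = math.exp(x0*math.sin(y0)) * (math.cos(y0) + x0*math.sin(y0)*math.cos(y0))
print(mixed, exact)   # agree to several digits, as D12 f = D21 f predicts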
To analyze the situation, it suffices to consider only two differentiations.
Streamline the notation by writing D2 D1 f as D12 f . (The subscripts may look
reversed, but reading D12 from left to right as D-one-two suggests the appro-
priate order of differentiating.) The definitions for D11 f , D21 f , and D22 f are
similar. These four functions are called the second-order partial derivatives
of f , and in particular D12 f and D21 f are the second-order mixed partial
derivatives. More generally, the kth-order partial derivatives of a function f are
those that come from k partial differentiations. A C k -function is a function
for which all the kth-order partial derivatives exist and are continuous. The
theorem is that with enough continuity, the order of differentiation doesn’t
matter. That is, the mixed partial derivatives agree.

Theorem 4.6.1 (Equality of mixed partial derivatives). Suppose that


f ∶ A Ð→ R (where A ⊂ R2 ) is a C 2 -function. Then at every point (a, b) of A,

D12 f (a, b) = D21 f (a, b).

We might try to prove the theorem as follows:

    D12 f (a, b) = limk→0 (D1 f (a, b + k) − D1 f (a, b)) / k
                 = limk→0 ( limh→0 (f (a+h, b+k) − f (a, b+k))/h − limh→0 (f (a+h, b) − f (a, b))/h ) / k
                 = limk→0 limh→0 (f (a + h, b + k) − f (a, b + k) − f (a + h, b) + f (a, b)) / (hk),

and similarly

    D21 f (a, b) = limh→0 limk→0 (f (a + h, b + k) − f (a + h, b) − f (a, b + k) + f (a, b)) / (hk).
So, letting ∆(h, k) = f (a + h, b + k) − f (a, b + k) − f (a + h, b) + f (a, b), we want
to show that
    limh→0 limk→0 ∆(h, k)/(hk) = limk→0 limh→0 ∆(h, k)/(hk).
If the order of taking the limits doesn’t matter then we have the desired re-
sult. However, if f is not a C 2 -function then the order of taking the limits can
in fact matter, i.e., the two mixed partial derivatives can both exist but not
be equal (see Exercise 4.6.1 for an example). Thus a correct proof of Theo-
rem 4.6.1 requires a little care. The theorem is similar to Taylor’s theorem
from Section 1.3 in that both are stated entirely in terms of derivatives, but
they are most easily proved using integrals. The following proof uses integra-
tion to show that ∆(h, k)/(hk) is an average value of both D12 f and D21 f

near (a, b), and then letting h and k shrink to 0 forces D12 f and D21 f to
agree at (a, b), as desired. That is, the proof shows that the two quantities in
the previous display are equal by showing that each of them equals a common
third quantity.
Proof. Since f is a C^2-function on A, every point of A is interior. Take any
point (a, b) ∈ A. Then some box B = [a, a + h] × [b, b + k] lies in A. Compute
the nested integral

∫_a^{a+h} ∫_b^{b+k} dy dx = ∫_a^{a+h} k dx = hk.

Also, by the fundamental theorem of integral calculus twice,

∫_a^{a+h} ∫_b^{b+k} D12 f(x, y) dy dx = ∫_a^{a+h} (D1 f(x, b + k) − D1 f(x, b)) dx
    = f(a + h, b + k) − f(a, b + k) − f(a + h, b) + f(a, b) = ∆(h, k).

(Thus the integral has reproduced the quantity that arose in the discussion
leading into this proof.) Let m_{h,k} be the minimum value of D12 f on the box B,
and let M_{h,k} be the maximum value. These exist by Theorem 2.4.15, because
B is nonempty and compact, and D12 f : B → R is continuous. Thus

m_{h,k} ≤ D12 f(x, y) ≤ M_{h,k}  for all (x, y) ∈ B.

Integrate this inequality, using the two previous calculations, to get

m_{h,k} hk ≤ ∆(h, k) ≤ M_{h,k} hk,

or

m_{h,k} ≤ ∆(h, k)/(hk) ≤ M_{h,k}.

As (h, k) → (0^+, 0^+), the continuity of D12 f at (a, b) forces m_{h,k} and M_{h,k}
to converge to D12 f(a, b), and hence

∆(h, k)/(hk) → D12 f(a, b)  as (h, k) → (0^+, 0^+).

But also, reversing the order of the integrations and of the partial derivatives
gives the symmetric calculations

∫_b^{b+k} ∫_a^{a+h} dx dy = hk

and

∫_b^{b+k} ∫_a^{a+h} D21 f(x, y) dx dy = ∆(h, k),

and so the same argument shows that

∆(h, k)/(hk) → D21 f(a, b)  as (h, k) → (0^+, 0^+).

Because both D12 f(a, b) and D21 f(a, b) are the limit of ∆(h, k)/(hk), they
are equal. ⊔
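The averaging idea in this proof is easy to see numerically. Here is a small
sketch (again assuming Python with sympy; the particular function and base
point are arbitrary choices, not from the text) showing ∆(h, k)/(hk) settling
down to D12 f(a, b) as h and k shrink:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x * sp.sin(y))
F = sp.lambdify((x, y), f)
D12 = sp.lambdify((x, y), sp.diff(f, x, y))  # differentiate by x, then y

a, b = 1.0, 0.5
for h in (0.1, 0.01, 0.001):
    k = h
    delta = F(a + h, b + k) - F(a, b + k) - F(a + h, b) + F(a, b)
    print(delta / (h * k))  # approaches D12(a, b)
print(D12(a, b))
```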


Extending Theorem 4.6.1 to more variables and to higher derivatives is
straightforward, provided that one supplies enough continuity. The hypotheses
of the theorem can be weakened a bit, in which case a subtler proof is required,
but such technicalities are more distracting than useful.
Higher-order derivatives are written in many ways. If a function is de-
scribed by the equation w = f(x, y, z) then D233 f is also denoted

f_{233},  f_{yzz},  ∂/∂z (∂/∂z (∂f/∂y)),  ∂^3 f / (∂z^2 ∂y),
w_{yzz},  ∂/∂z (∂/∂z (∂w/∂y)),  ∂^3 w / (∂z^2 ∂y).
As with one derivative, these combine mnemonic advantages with conceptual
dangers.
A calculation using higher-order derivatives and the chain rule transforms
Laplace's equation from Cartesian to polar coordinates. The C^2
quantity u = f(x, y) depending on the Cartesian variables x and y satisfies
Laplace's equation if (blurring the distinction between u and f)

∂^2 u/∂x^2 + ∂^2 u/∂y^2 = 0.
If instead u is viewed as a function g(r, θ) of the polar variables r and θ then
how is Laplace’s equation expressed?
The Cartesian coordinates in terms of the polar coordinates are
x = r cos θ, y = r sin θ.
Thus u = f (x, y) = f (r cos θ, r sin θ) = g(r, θ), showing that u depends on r
and θ via x and y:
[Dependency diagram: arrows run from each of r and θ to each of x and y, and from x and y to u.]

The chain rule begins a hieroglyphic calculation,

u_r = u_x x_r + u_y y_r,

so that by the product rule,

u_{rr} = (u_x x_r + u_y y_r)_r
       = u_{xr} x_r + u_x x_{rr} + u_{yr} y_r + u_y y_{rr}.

Since u_x and u_y depend on r and θ via x and y just as u does, each of them can
take the place of u in the diagram above, and the chain rule gives expansions
of u_{xr} and u_{yr} as it did for u_r,

u_{rr} = u_{xr} x_r + u_x x_{rr} + u_{yr} y_r + u_y y_{rr}
       = (u_{xx} x_r + u_{xy} y_r) x_r + u_x x_{rr} + (u_{yx} x_r + u_{yy} y_r) y_r + u_y y_{rr}
       = u_{xx} x_r^2 + u_{xy} y_r x_r + u_x x_{rr} + u_{yx} x_r y_r + u_{yy} y_r^2 + u_y y_{rr}
       = u_{xx} x_r^2 + 2u_{xy} x_r y_r + u_{yy} y_r^2 + u_x x_{rr} + u_y y_{rr}.

Note the use of equality of mixed partial derivatives. The same calculation
with θ instead of r gives

u_{θθ} = u_{xx} x_θ^2 + 2u_{xy} x_θ y_θ + u_{yy} y_θ^2 + u_x x_{θθ} + u_y y_{θθ}.

Because x = r cos θ and y = r sin θ, we have the relations

x_r = x/r,  y_r = y/r,  x_θ = −y,  y_θ = x,
x_{rr} = 0,  y_{rr} = 0,  x_{θθ} = −x,  y_{θθ} = −y.

It follows that

r^2 u_{rr} = u_{xx} x^2 + 2u_{xy} xy + u_{yy} y^2,
r u_r = u_x x + u_y y,
u_{θθ} = u_{xx} y^2 − 2u_{xy} xy + u_{yy} x^2 − u_x x − u_y y,

and so

r^2 u_{rr} + r u_r + u_{θθ} = u_{xx} x^2 + u_{yy} y^2 + u_{xx} y^2 + u_{yy} x^2
                            = (u_{xx} + u_{yy})(x^2 + y^2).

Recall that the Cartesian form of Laplace's equation is u_{xx} + u_{yy} = 0. Now the
polar form follows:
r^2 u_{rr} + r u_r + u_{θθ} = 0.
That is,

r^2 ∂^2 u/∂r^2 + r ∂u/∂r + ∂^2 u/∂θ^2 = 0.
The point of this involved calculation is that having done it once, and only
once, we now can check directly whether any given function g of the polar
variables r and θ satisfies Laplace’s equation. We no longer need to transform
each u = g(r, θ) into Cartesian terms u = f (x, y) before checking.
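In the same spirit, the identity behind the calculation can be confirmed by a
computer algebra system. The sketch below (assuming Python with sympy, and
using two arbitrary test functions rather than a fully symbolic u) checks that
r^2 u_{rr} + r u_r + u_{θθ} agrees with (u_{xx} + u_{yy})(x^2 + y^2) written in polar terms:

```python
import sympy as sp

x, y, r, theta = sp.symbols('x y r theta', positive=True)
to_polar = {x: r*sp.cos(theta), y: r*sp.sin(theta)}

def polar_identity_holds(f):
    g = f.subs(to_polar, simultaneous=True)  # g(r, theta) = f(r cos t, r sin t)
    polar = r**2*sp.diff(g, r, 2) + r*sp.diff(g, r) + sp.diff(g, theta, 2)
    lap = sp.diff(f, x, 2) + sp.diff(f, y, 2)          # u_xx + u_yy
    cart = ((x**2 + y**2)*lap).subs(to_polar, simultaneous=True)
    return sp.simplify(polar - cart) == 0

print(polar_identity_holds(x**3*y + sp.log(x**2 + y**2)))  # True
print(polar_identity_holds(x**2 - y**2))                   # True
```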
An n × n matrix A is orthogonal if A^T A = I. (This concept was introduced
in Exercise 3.5.5.) Let A be orthogonal and consider its associated linear map,

T_A : R^n → R^n,  T_A(x) = Ax.

We show that prepending T_A to a twice-differentiable function on R^n commutes
with applying the Laplacian operator to the function. That is, letting
∆ denote the Laplacian operator on R^n,

∆ = D11 + D22 + ⋯ + Dnn,

and taking any twice-differentiable function on R^n,

f : R^n → R,

we show that
∆(f ○ T_A) = ∆f ○ T_A.
To see this, start by noting that for every x ∈ R^n, the chain rule and then
the fact that the derivative of every linear map is itself give two equalities of
linear mappings,

D(f ○ T_A)_x = Df_{T_A(x)} ○ D(T_A)_x = Df_{T_A(x)} ○ T_A.

In terms of matrices, the equality of the first and last quantities in the previous
display is an equality of row-vector-valued functions of x,

[D1(f ○ T_A) ⋯ Dn(f ○ T_A)](x) = ([D1 f ⋯ Dn f] ○ T_A)(x) ⋅ A.

Because we view vectors as columns, transpose the quantities in the previous
display, using the fact that A is orthogonal to write A^{−1} for A^T, and
universalize over x to get an equality of column-valued functions,

⎡ D1(f ○ T_A) ⎤                ⎡ D1 f ⎤
⎢      ⋮      ⎥ = T_{A^{−1}} ○ ⎢  ⋮   ⎥ ○ T_A.
⎣ Dn(f ○ T_A) ⎦                ⎣ Dn f ⎦
The derivative matrix of the left side has as its rows the row vector derivative
matrices of its entries, while the derivative matrix of the right side is computed
by the chain rule and the fact that the derivative of every linear map is itself,

[Dij(f ○ T_A)]_{n×n} = A^{−1} ⋅ [Dij f ○ T_A]_{n×n} ⋅ A.

The trace of a square matrix was introduced in Exercise 3.2.5 as the sum of its
diagonal entries, and the fact that tr(A^{−1}BA) = tr(B) if A is invertible was
noted just after the proof of Theorem 3.5.2. Equate the traces of the matrices
in the previous display to get the desired result,

∆(f ○ T_A) = ∆f ○ T_A.

To complement the proof just given in functional notation, here is a more
elementary second proof. Let the matrix A have entries a_{ij}. For every x ∈ R^n,
compute that for i = 1, . . . , n,
Di(f ○ T_A)(x) = ∑_{j=1}^n Dj f(Ax) Di(Ax)_j = ∑_{j=1}^n a_{ji} Dj f(Ax),

and thus

Dii(f ○ T_A)(x) = ∑_{j=1}^n a_{ji} ∑_{k=1}^n Djk f(Ax) Di(Ax)_k = ∑_{j,k=1}^n a_{ji} a_{ki} Djk f(Ax),

and thus, because A is orthogonal, so that (AA^T)_{jk} is 1 when j = k and 0
otherwise,

∆(f ○ T_A)(x) = ∑_{j,k=1}^n ∑_{i=1}^n a_{ji} a_{ki} Djk f(Ax)
              = ∑_{j,k=1}^n (AA^T)_{jk} Djk f(Ax)
              = ∑_{i=1}^n Dii f(Ax) = (∆f ○ T_A)(x),

as desired.
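For a concrete instance of this invariance, the following sketch (assuming
Python with sympy; the rotation angle and the polynomial test function are
arbitrary choices) checks ∆(f ○ T_A) = ∆f ○ T_A for a 2 × 2 rotation matrix,
which is orthogonal:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
A = sp.Matrix([[sp.cos(t), -sp.sin(t)],
               [sp.sin(t),  sp.cos(t)]])  # A^T A = I for every angle t

f = x**4 + x*y**3                         # an arbitrary C^2 test function
u, v = A * sp.Matrix([x, y])              # the components of T_A(x, y)

lap = lambda g: sp.diff(g, x, 2) + sp.diff(g, y, 2)
lhs = lap(f.subs({x: u, y: v}, simultaneous=True))   # Laplacian of f o T_A
rhs = lap(f).subs({x: u, y: v}, simultaneous=True)   # (Laplacian f) o T_A
print(sp.simplify(lhs - rhs))                        # 0
```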

Exercises

4.6.1. This exercise shows that continuity is necessary for the equality of
mixed partial derivatives. Let

f(x, y) = { xy(y^2 − x^2)/(x^2 + y^2)  if (x, y) ≠ (0, 0),
          { 0                          if (x, y) = (0, 0).
Away from (0, 0), f is rational, and so it is continuous and all its partial
derivatives of all orders exist and are continuous. Show: (a) f is continuous
at (0, 0), (b) D1 f and D2 f exist and are continuous at (0, 0), (c) D12 f (0, 0) =
1 ≠ −1 = D21 f (0, 0).

For the rest of these exercises, assume that the relevant functions are C^2.

4.6.2. Suppose u, as a function of x and y, satisfies the differential equation
u_{xx} − u_{yy} = 0. Make the change of variables x = s + t, y = s − t. What corre-
sponding differential equation does u satisfy when viewed as a function of s
and t? (That is, find a nontrivial relation involving at least one of u, u_s, u_t,
u_{ss}, u_{tt}, and u_{st}.)

4.6.3. (The wave equation) (a) Let c be a constant, tacitly understood to
denote the speed of light. Let x and t denote a space variable and a time
variable, and introduce variables

p = x + ct,  q = x − ct.

Show that a quantity w, viewed as a function of x and t, satisfies the wave
equation,
c^2 w_{xx} = w_{tt},
if and only if it satisfies the equation

w_{pq} = 0.

(b) Using part (a), show that in particular if w = F(x + ct) + G(x − ct)
(where F and G are arbitrary C^2-functions of one variable) then w satisfies
the wave equation. Here F and G are traveling waves, F traveling backward
and G forward.
(c) Now let 0 < v < c (both v and c are constant), and define new space
and time variables in terms of the original ones by a Lorentz transformation,

y = γ(x − vt),  u = γ(t − (v/c^2)x)  where γ = (1 − v^2/c^2)^{−1/2}.

Show that

y + cu = γ(1 − v/c)(x + ct),  y − cu = γ(1 + v/c)(x − ct),

so that consequently (y, u) has the same spacetime norm as (x, t),

y^2 − c^2 u^2 = x^2 − c^2 t^2.

(d) Recall the variables p = x + ct and q = x − ct from part (a). Similarly,


let r = y + cu and s = y − cu. Suppose that a quantity w, viewed as a function
of p and q, satisfies the wave equation wpq = 0. Use the results r = γ(1 − v/c)p,
s = γ(1 + v/c)q from part (c) to show that it also satisfies the wave equation
in the (r, s)-coordinate system, wrs = 0. Consequently, if w satisfies the wave
equation c2 wxx = wtt in the original space and time variables then it also
satisfies the wave equation c2 wyy = wuu in the new space and time variables.

4.6.4. Show that the substitution x = e^s, y = e^t converts the equation

x^2 u_{xx} + y^2 u_{yy} + x u_x + y u_y = 0

into Laplace's equation u_{ss} + u_{tt} = 0.

4.6.5. (a) Show that the substitution x = s^2 − t^2, y = 2st converts Laplace's
equation u_{xx} + u_{yy} = 0 back into Laplace's equation u_{ss} + u_{tt} = 0.
(b) Let k be a nonzero real number. Show that the substitution r = ρ^k,
θ = kφ converts the polar Laplace's equation r^2 u_{rr} + r u_r + u_{θθ} = 0 back into
the polar Laplace's equation ρ^2 u_{ρρ} + ρ u_ρ + u_{φφ} = 0. (When k = 2 this sub-
sumes part (a), because the substitution here encodes the complex kth-power
function in polar coordinates while the substitution in part (a) encodes the
complex squaring function in Cartesian coordinates.)

4.6.6. Let u be a function of x and y, and suppose that x and y in turn depend
linearly on s and t,

⎡ x ⎤   ⎡ a b ⎤ ⎡ s ⎤
⎣ y ⎦ = ⎣ c d ⎦ ⎣ t ⎦,   ad − bc = 1.

What is the relation between u_{ss} u_{tt} − u_{st}^2 and u_{xx} u_{yy} − u_{xy}^2?

4.6.7. (a) Let H denote the set of points (x, y) ∈ R^2 such that y > 0. Associate
to each point (x, y) ∈ H another point,

(z, w) = ( −x/(x^2 + y^2), y/(x^2 + y^2) ).

You may take for granted or verify that

z_x = z^2 − w^2,  z_y = −2zw,  z_{xx} = 2z(z^2 − 3w^2),  z_{yy} = −2z(z^2 − 3w^2)

and

w_x = 2zw,  w_y = z^2 − w^2,  w_{xx} = 2w(3z^2 − w^2),  w_{yy} = −2w(3z^2 − w^2).

Consider a quantity u = f(z, w), so that also u = f̃(x, y) for a different func-
tion f̃. As usual, we have

u_{xx} = u_{zz} z_x^2 + 2u_{zw} z_x w_x + u_{ww} w_x^2 + u_z z_{xx} + u_w w_{xx},
u_{yy} = u_{zz} z_y^2 + 2u_{zw} z_y w_y + u_{ww} w_y^2 + u_z z_{yy} + u_w w_{yy}.

Show that
y^2 (u_{xx} + u_{yy}) = w^2 (u_{zz} + u_{ww}).

The operator y^2 (∂^2/∂x^2 + ∂^2/∂y^2) on H is the hyperbolic Laplacian, de-
noted ∆_H. We have just established the invariance of ∆_H under the hyperbolic
transformation that takes (x, y) to (z, w) = (−x/(x^2 + y^2), y/(x^2 + y^2)).
(b) Show that the invariance relation y^2 (u_{xx} + u_{yy}) = w^2 (u_{zz} + u_{ww}) also
holds when (z, w) = (x + b, y) for every fixed real number b, and that the
relation also holds when (z, w) = (rx, ry) for every fixed positive real num-
ber r. It is known that every hyperbolic transformation of H takes the form
(z, w) = φ(x, y) where φ is a finite succession of transformations of the type
in part (a) or of the two types just addressed here. Note that consequently
this exercise has shown that the invariance relation holds for every hyperbolic
transformation of H. That is, for every hyperbolic transformation φ and for
every twice-differentiable function f : H → R we have, analogously to the result
at the very end of this section,

∆_H(f ○ φ) = ∆_H f ○ φ.

4.6.8. Consider three matrices,

X = ⎡ 0 1 ⎤,  Y = ⎡ 0 0 ⎤,  H = ⎡ 1  0 ⎤.
    ⎣ 0 0 ⎦      ⎣ 1 0 ⎦      ⎣ 0 −1 ⎦

Establish the relations

XY − YX = H,  HX − XH = 2X,  HY − YH = −2Y.

Now consider three operators on smooth functions from R^n to R, reusing the
names X, Y, and H, and letting ∆ = D11 + D22 + ⋯ + Dnn denote the Laplacian
operator,

(Xf)(x) = (1/2)|x|^2 f(x),
(Yf)(x) = −(1/2) ∆f(x),
(Hf)(x) = (n/2) f(x) + ∑_{i=1}^n x_i Di f(x).

Establish the same relations as a moment ago,

XY − Y X = H, HX − XH = 2X, HY − Y H = −2Y.

The three matrices generate a small instance of a Lie algebra, and this exercise
shows that the space of smooth functions on Rn can be made a representation
of the Lie algebra. Further show, partly by citing the work at the end of this
section, that the action of every orthogonal matrix A on smooth functions
commutes with the representation,

X(f ○ TA ) = (Xf ) ○ TA ,
Y (f ○ TA ) = (Y f ) ○ TA ,
H(f ○ TA ) = (Hf ) ○ TA .

4.7 Extreme Values


In one-variable calculus the derivative is used to find maximum and minimum
values (extrema) of differentiable functions. Recall the following useful facts.
• (Extreme value theorem.) If f ∶ [α, β] Ð→ R is continuous then it assumes
a maximum and a minimum on the interval [α, β].
• (Critical point theorem.) Suppose that f ∶ [α, β] Ð→ R is differentiable
on (α, β) and that f assumes a maximum or minimum at an interior
point a of [α, β]. Then f ′ (a) = 0.
• (Second derivative test.) Suppose that f ∶ [α, β] Ð→ R is C 2 on (α, β) and
that f ′ (a) = 0 at an interior point a of [α, β]. If f ′′ (a) > 0 then f (a) is a
local minimum of f , and if f ′′ (a) < 0 then f (a) is a local maximum.

Geometrically the idea is that just as the affine function

A(a + h) = f(a) + f′(a)h

specifies the tangent line to the graph of f at (a, f(a)), the quadratic function

P(a + h) = f(a) + f′(a)h + (1/2) f′′(a)h^2

determines the best-fitting parabola. When f′(a) = 0, the tangent line is
horizontal and the sign of f′′(a) specifies whether the parabola opens upward
or downward. When f′(a) = 0 and f′′(a) = 0, the parabola degenerates to the
horizontal tangent line, and the second derivative provides no information.
(See Figure 4.8.)

Figure 4.8. Approximating parabolas

This section generalizes these facts to functions f of n variables. The ex-
treme value theorem has already generalized as Theorem 2.4.15: a continuous
function f on a compact subset of R^n takes maximum and minimum values.
The critical point theorem also generalizes easily to say that each extreme
value of the function f : A → R that occurs at a point where f is differen-
tiable occurs at a critical point of f, meaning a point a where Df_a is the
zero function.

Theorem 4.7.1 (Multivariable critical point theorem). Suppose that
the function f : A → R (where A ⊂ R^n) takes an extreme value at the point a
of A, and suppose that f is differentiable at a. Then all partial derivatives
of f at a are zero.

Proof. For each j ∈ {1, . . . , n}, the value f(a) is an extreme value for the one-
variable function ϕ from Definition 4.5.1 of the partial derivative Dj f(a). By
the one-variable critical point theorem, ϕ′(a_j) = 0. That is, Dj f(a) = 0. ⊔

The generalization of the second derivative test is more elaborate. From


now on, all functions are assumed to be of type C 2 on the interiors of their
domains, meaning that all their second-order partial derivatives exist and are
continuous.

Definition 4.7.2 (Second derivative matrix). Let f : A → R (where A ⊂
R^n) be a function and let a be an interior point of A. The second derivative
matrix of f at a is the n × n matrix whose (i, j)th entry is the second-order
partial derivative Dij f(a). Thus

         ⎡ D11 f(a) ⋯ D1n f(a) ⎤
f′′(a) = ⎢    ⋮     ⋱     ⋮    ⎥.
         ⎣ Dn1 f(a) ⋯ Dnn f(a) ⎦
By the equality of mixed partial derivatives, the second derivative matrix
is symmetric, i.e., f ′′ (a)T = f ′′ (a). Beware of confusing the second derivative
matrix and the Jacobian matrix: the second derivative matrix is a square
matrix defined only for scalar-valued functions and its entries are second-order
partial derivatives, while for scalar-valued functions the Jacobian matrix is the
row vector of first partial derivatives.
The eminently plausible formula f ′′ = (f ′ )′ indeed holds, provided that we
view f ′ as a mapping to Rn , each of whose outputs is an ordered list with no
shape rather than a row vector. Thus

(f ′ )′ (a) = f ′′ (a) for interior points a of A.

As an example, if
f(x, y) = sin^2 x + x^2 y + y^2,
then for every (a, b) ∈ R^2,

f′(a, b) = [sin 2a + 2ab   a^2 + 2b]

and

f′′(a, b) = ⎡ 2 cos 2a + 2b   2a ⎤
            ⎣      2a          2 ⎦.
Every n × n matrix M determines a quadratic function

Q_M : R^n → R,  Q_M(h) = h^T M h.

Here h is viewed as a column vector. If M has entries m_{ij} and h = (h_1, . . . , h_n)
then the rules of matrix multiplication show that

                     ⎡ m_{11} ⋯ m_{1n} ⎤ ⎡ h_1 ⎤
Q_M(h) = [h_1 ⋯ h_n] ⎢   ⋮    ⋱    ⋮   ⎥ ⎢  ⋮  ⎥ = ∑_{i=1}^n ∑_{j=1}^n m_{ij} h_i h_j.
                     ⎣ m_{n1} ⋯ m_{nn} ⎦ ⎣ h_n ⎦

The function Q_M is homogeneous of degree 2, meaning that each of its terms
has degree 2 in the entries of h and therefore Q_M(th) = t^2 Q_M(h) for all t ∈ R
and h ∈ R^n.
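A two-line computation makes this concrete. The sketch below (plain Python
with numpy assumed, which is of course outside the text) evaluates Q_M directly
and exhibits the degree-2 homogeneity:

```python
import numpy as np

def Q(M, h):
    """Evaluate Q_M(h) = h^T M h for a square matrix M and a vector h."""
    h = np.asarray(h, dtype=float)
    return h @ M @ h

M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
h = np.array([1.0, -3.0])
print(Q(M, h))                     # 2*1 + 2*1*(-3) + 2*9 = 14
print(Q(M, 5 * h), 25 * Q(M, h))   # homogeneity: both are 350
```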
When M is the second derivative matrix of a function f at a point a, the
corresponding quadratic function is denoted Qf_a rather than Q_{f′′(a)}. Just as
f(a) + Df_a(h) gives the best affine approximation of f(a + h) for small h,
f(a) + Df_a(h) + (1/2) Qf_a(h) gives the best quadratic approximation.
In the example f(x, y) = sin^2 x + x^2 y + y^2, the second derivative matrix at
a point (a, b) defines the quadratic function

Qf_{(a,b)}(h, k) = [h k] ⎡ 2 cos 2a + 2b   2a ⎤ ⎡ h ⎤
                        ⎣      2a          2 ⎦ ⎣ k ⎦
                = 2((cos 2a + b) h^2 + 2a hk + k^2)  for (h, k) ∈ R^2,

and so the best quadratic approximation of f near, for instance, the point
(π/2, 1) is

f(π/2 + h, 1 + k) ≈ f(π/2, 1) + Df_{(π/2,1)}(h, k) + (1/2) Qf_{(π/2,1)}(h, k)
                 = π^2/4 + 2 + πh + (π^2/4 + 2)k + πhk + k^2.
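This expansion too is easy to double-check by machine. The following sketch
(assuming Python with sympy) builds the quadratic approximation from the
gradient and the second derivative matrix and compares it with the display:

```python
import sympy as sp

x, y, h, k = sp.symbols('x y h k')
f = sp.sin(x)**2 + x**2*y + y**2
at = {x: sp.pi/2, y: 1}

grad = sp.Matrix([f.diff(x), f.diff(y)]).subs(at)   # f'(pi/2, 1)
hess = sp.hessian(f, (x, y)).subs(at)               # f''(pi/2, 1)
v = sp.Matrix([h, k])

quad = f.subs(at) + (grad.T*v)[0] + sp.Rational(1, 2)*(v.T*hess*v)[0]
claimed = sp.pi**2/4 + 2 + sp.pi*h + (sp.pi**2/4 + 2)*k + sp.pi*h*k + k**2
print(sp.simplify(quad - claimed))   # 0
```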

Suppose that f : A → R (where A ⊂ R^2) has a critical point at (a, b), i.e.,
f′(a, b) = [0 0]. Working in local coordinates, we will approximate f by a
quadratic function on R^2 having a critical point at (0, 0). The graphs of nine
such quadratic functions are shown in Figure 4.9. If the best quadratic ap-
proximation of f at (a, b) is a bowl then f should have a minimum at (a, b).
Similarly for an inverted bowl and a maximum. If the best quadratic ap-
proximation is a saddle then there should be points (x, y) near (a, b) where
f(x, y) > f(a, b) and points (x′, y′) near (a, b) where f(x′, y′) < f(a, b). In
this case, (a, b) is called for obvious reasons a saddle point of f.
Returning to the example f(x, y) = sin^2 x + x^2 y + y^2, note that (0, 0) is
a critical point of f because f′(0, 0) = [0 0]. The second derivative matrix
f′′(0, 0) is [2 0; 0 2], and so the quadratic function (1/2) Qf_{(0,0)} is given by

(1/2) Qf_{(0,0)}(h, k) = (1/2) [h k] ⎡ 2 0 ⎤ ⎡ h ⎤ = h^2 + k^2.
                                    ⎣ 0 2 ⎦ ⎣ k ⎦

Thus the graph of f looks like a bowl near (0, 0), and f(0, 0) should be a local
minimum.
This discussion is not yet rigorous. Justifying the ideas and proving the
appropriate theorems will occupy the rest of this section. The first task is to
study quadratic approximation of C 2 -functions.
Proposition 4.7.3 (Special case of Taylor's theorem). Let I be an open
interval in R containing [0, 1]. Let ϕ : I → R be a C^2-function. Then

ϕ(1) = ϕ(0) + ϕ′(0) + (1/2) ϕ′′(c)  for some c ∈ [0, 1].

The proposition follows from the general Taylor's theorem in Section 1.3
because the first-degree Taylor polynomial of ϕ at 0 is T_1(t) = ϕ(0) + ϕ′(0)t,
so that in particular T_1(1) = ϕ(0) + ϕ′(0).

Figure 4.9. Two bowls, two saddles, four half-pipes, and a plane

Theorem 4.7.4 (Quadratic Taylor approximation). Let f : A → R
(where A ⊂ R^n) be a C^2-function on the interior points of A. Let a be an
interior point of A. Then for all small enough h ∈ R^n,

f(a + h) = f(a) + Df_a(h) + (1/2) Qf_{a+ch}(h)  for some c ∈ [0, 1],

or, in matrices, viewing h as a column vector,

f(a + h) = f(a) + f′(a)h + (1/2) h^T f′′(a + ch)h  for some c ∈ [0, 1].
Proof. Let I = (−ε, 1 + ε) be a small superinterval of [0, 1] in R. Define

γ : I → A,  γ(t) = a + th.

Thus γ(0) = a, γ(1) = a + h, and γ′(t) = h for all t ∈ I. Further define

ϕ = f ○ γ : I → R.

That is, ϕ(t) = f(a + th) is the restriction of f to the line segment from a
to a + h. By the chain rule and the fact that γ′ = h,

ϕ′(t) = (f ○ γ)′(t) = f′(γ(t))h = Df_{a+th}(h).

The previous display can be rephrased as ϕ′(t) = ⟨f′(γ(t)), h⟩, and so the
chain rule and the symmetry of f′′ give

ϕ′′(t) = ⟨f′′(γ(t))h, h⟩ = h^T f′′(a + th)h = Qf_{a+th}(h).

Because f(a + h) = ϕ(1), the special case of Taylor's theorem says that for
some c ∈ [0, 1],

f(a + h) = ϕ(0) + ϕ′(0) + (1/2) ϕ′′(c) = f(a) + Df_a(h) + (1/2) Qf_{a+ch}(h),

giving the result. ⊔

Thus, to study f near a critical point a ∈ Rn where Dfa is zero, we need to
look at the sign of Qfa+ch (h) for small vectors h. The next order of business
is therefore to discuss the values taken by a homogeneous quadratic function.
Definition 4.7.5 (Positive definite, negative definite, indefinite ma-
trix). The symmetric square n × n matrix M is called
• positive definite if QM (h) > 0 for every nonzero h ∈ Rn ,
• negative definite if QM (h) < 0 for every nonzero h ∈ Rn ,
• indefinite if QM (h) is positive for some h and negative for others.
The identity matrix I is positive definite because h^T I h = |h|^2 for all h. The
matrix [1 0; 0 −1] is indefinite. The general question whether a symmetric n × n
matrix is positive definite leads to an excursion into linear algebra too lengthy
for this course. (See Exercise 4.7.10 for the result without proof.) However,
in the special case of n = 2, basic methods give the answer. Recall that the
quadratic polynomial αh^2 + 2βh + δ takes positive and negative values if and
only if it has distinct real roots, i.e., αδ − β^2 < 0.
Proposition 4.7.6 (Two-by-two definiteness test). Consider a matrix
M = [α β; β δ] ∈ M_2(R). Then

(1) M is positive definite if and only if α > 0 and αδ − β^2 > 0.
(2) M is negative definite if and only if α < 0 and αδ − β^2 > 0.
(3) M is indefinite if and only if αδ − β^2 < 0.

Proof. Since Q_M(t(h, k)) = t^2 Q_M(h, k) for all real t, scaling the input vector
(h, k) by nonzero real numbers doesn't affect the sign of the output. The
second entry k can therefore be scaled to 0 or 1, and if k = 0 then the first
entry h can be scaled to 1. Therefore, to prove (1), reason that

M is positive definite ⇐⇒ Q_M(1, 0) > 0 and Q_M(h, 1) > 0 for all h ∈ R
                       ⇐⇒ α > 0 and αh^2 + 2βh + δ > 0 for all h ∈ R
                       ⇐⇒ α > 0 and αδ − β^2 > 0.

Statement (2) is similar. As for (3),

M is indefinite ⇐⇒ αh^2 + 2βh + δ takes positive and negative values
                ⇐⇒ αδ − β^2 < 0. ⊔



The proposition provides no information if αδ − β^2 = 0. Geometrically,
the proposition gives conditions on M to determine that the graph of Q_M is
a bowl, an inverted bowl, or a saddle. The condition αδ − β^2 = 0 indicates
a degenerate graph: a half-pipe (see Figure 4.9), an inverted half-pipe, or a
plane.
For nonzero α, the matrix calculation

⎡ α β ⎤   ⎡   1      0 ⎤ ⎡ α          0         ⎤ ⎡ 1  α^{−1}β ⎤
⎣ β δ ⎦ = ⎣ α^{−1}β   1 ⎦ ⎣ 0  α^{−1}(αδ − β^2) ⎦ ⎣ 0     1    ⎦

gives a corresponding equality of quadratic functions,

αx^2 + 2βxy + δy^2 = αx̃^2 + α^{−1}(αδ − β^2)y^2,  x̃ = x + α^{−1}βy.

That is, a change of variables eliminates the cross term, and the variant
quadratic function makes the results of the definiteness test clear.
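The test itself is mechanical enough to phrase as a tiny program. Here is a
sketch (plain Python; the function name and the degenerate-case message are
our own, not the text's):

```python
def definiteness(alpha, beta, delta):
    """Classify the symmetric matrix [[alpha, beta], [beta, delta]]."""
    det = alpha*delta - beta**2
    if det > 0:
        return "positive definite" if alpha > 0 else "negative definite"
    if det < 0:
        return "indefinite"
    return "degenerate: the test gives no information"

print(definiteness(2, 0, 2))    # positive definite, e.g. f''(0,0) above
print(definiteness(1, 0, -1))   # indefinite, the matrix [1 0; 0 -1]
print(definiteness(1, 1, 1))    # degenerate
```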
The positive definite, negative definite, or indefinite character of a matrix
is preserved if the matrix entries vary by small enough amounts. Again we re-
strict our discussion to the 2×2 case. Here the result is plausible geometrically,
since it says that if the matrix M (a, b) defines a function whose graph is (for
example) a bowl, then matrices close to M (a, b) should define functions with
similar graphs, which thus should still be bowl-shaped. The same persistence
holds for a saddle, but a half-pipe can deform immediately into either a bowl
or a saddle, and so can a plane.
Proposition 4.7.7 (Persistence of definiteness). Let A be a subset of R^2,
and let the matrix-valued mapping

M : A → M_2(R),  M(x, y) = ⎡ α(x, y)  β(x, y) ⎤
                           ⎣ β(x, y)  δ(x, y) ⎦

be continuous. Let (a, b) be an interior point of A. Suppose that the matrix
M(a, b) is positive definite. Then for all (x, y) in some ε-ball about (a, b), the
matrix M(x, y) is also positive definite. Similar statements hold for negative
definite and indefinite matrices.

Proof. By the persistence of inequality principle (Proposition 2.3.10), the cri-
teria α > 0 and αδ − β^2 > 0 remain valid if x and y vary by a small enough
amount. The other statements follow similarly. ⊔

When a function f has continuous second-order partial derivatives, the
entries of the second derivative matrix f ′′ (a) vary continuously with a. The
upshot of the last proposition is therefore that we may replace the nebulous
notion of Qfa+ch for some c with the explicit function Qfa .
Proposition 4.7.8 (Two-variable max/min test). Let f : A → R
(where A ⊂ R^2) be C^2 on its interior points. Let (a, b) be an interior point
of A, and suppose that f′(a, b) = [0 0]. Let f′′(a, b) = [α β; β δ]. Then:

(1) If α > 0 and αδ − β^2 > 0 then f(a, b) is a local minimum.
(2) If α < 0 and αδ − β^2 > 0 then f(a, b) is a local maximum.
(3) If αδ − β^2 < 0 then f(a, b) is a saddle point.

Proof. This follows from Theorem 4.7.4, Proposition 4.7.6, and Proposi-
tion 4.7.7. ⊔

Again, the test gives no information if αδ − β^2 = 0.
Returning once again to the example f(x, y) = sin^2 x + x^2 y + y^2 with its
critical point (0, 0) and second derivative matrix f′′(0, 0) = [2 0; 0 2], the max/min
test shows that f has a local minimum at (0, 0).
Another example is to find the extrema of the function
f(x, y) = xy(x + y − 3)
on the triangle
T = {(x, y) ∈ R^2 : x ≥ 0, y ≥ 0, x + y ≤ 3}.
To solve this, first note that T is compact. Therefore f is guaranteed to take
a maximum and a minimum value on T. These are assumed either at interior
points of T or along the edge. Examining the signs of x, y, and x + y − 3
shows that f is zero at all points on the edge of T and negative on the interior
of T. Thus f assumes its maximum value—zero—along the boundary of T and
must assume its minimum somewhere inside. (See Figure 4.10.) To find the
extrema of f inside T, we first find the critical points. The partial derivatives
of f (now viewed as a function only on the interior of T) are

f_x(x, y) = y(2x + y − 3),  f_y(x, y) = x(x + 2y − 3),

and since x and y are nonzero on the interior of T, these are both zero only
at the unique solution (x, y) = (1, 1) of the simultaneous equations 2x + y = 3,
x + 2y = 3. Therefore f(1, 1) = −1 must be the minimum value of f. A quick
calculation shows that f′′(1, 1) = [2 1; 1 2], and the max/min test confirms the
minimum at (1, 1).
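The critical-point search and the max/min test are both easy to automate.
A sketch with sympy (assumed available; not part of the text):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x*y*(x + y - 3)

crit = sp.solve([f.diff(x), f.diff(y)], [x, y], dict=True)
print(crit)   # (0,0), (0,3), (3,0) on the boundary of T, and (1,1) inside

H = sp.hessian(f, (x, y)).subs({x: 1, y: 1})   # [[2, 1], [1, 2]]
alpha, beta, delta = H[0, 0], H[0, 1], H[1, 1]
print(alpha > 0, alpha*delta - beta**2 > 0)    # True True: a local minimum
print(f.subs({x: 1, y: 1}))                    # -1
```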
Another example is to find the extreme values of the function

f : R^2 → R,  f(x, y) = (1/2)x^2 + xy − 2x − (1/2)y^2.

Since R^2 is not compact, there is no guarantee that f has any extrema. In fact,
for large x, f(x, 0) gets arbitrarily large, and for large y, f(0, y) gets arbitrarily
large in the negative direction. So f has no global extrema. Nonetheless, there
may be local ones. Every point of R^2 is interior, so it suffices to examine the
critical points of f. The partial derivatives are

f_x(x, y) = x + y − 2,  f_y(x, y) = x − y,

and the only point where both of them vanish is (x, y) = (1, 1). The second
derivative matrix is f′′(1, 1) = [1 1; 1 −1], so the critical point (1, 1) is a saddle
point. The function f has no extrema, local or global.

Figure 4.10. Zero on the boundary, negative on the interior

Exercises

4.7.1. Compute the best quadratic approximation of f(x, y) = e^x cos y at the
point (0, 0),

f(h, k) ≈ f(0, 0) + Df_{(0,0)}(h, k) + (1/2) Qf_{(0,0)}(h, k).
4.7.2. Compute the best quadratic approximation of f(x, y) = e^{x+2y} at the
point (0, 0).
4.7.3. Explain, making whatever reasonable assumptions seem to be help-
ful, why the n-dimensional conceptual analogue of Figure 4.9 should have 3^n
pictures. How does this relate to Figure 4.8?
4.7.4. Find the extreme values taken by f(x, y) = xy(4x^2 + y^2 − 16) on the
quarter-ellipse

E = {(x, y) ∈ R^2 : x ≥ 0, y ≥ 0, 4x^2 + y^2 ≤ 16}.

4.7.5. Find the local extrema of the function f(x, y) = x^2 + xy − 4x + (3/2)y^2 − 7y
on R^2.
4.7.6. Determine the nature of f(x, y) = (1/3)x^3 + (1/3)y^3 + (x − 3/2)^2 − (y + 4)^2 at each
of its critical points. Are there global extrema?
4.7.7. Find the critical points. Are they maxima, minima, or saddle points?
(The max/min test will not always help.)

f(x, y) = x^2 y + xy^2,  g(x, y) = e^{x+y},  h(x, y) = x^5 y + xy^5 + xy.

4.7.8. Discuss local and global extrema of f(x, y) = 1/(x − 1) − 1/(y − 1) on the open ball
B((0, 0); 1) in R^2.
4.7.9. The graph of the function m(x, y) = 6xy^2 − 2x^3 − 3y^4 is called a monkey
saddle. Find the three critical points of m and classify each as a maximum,
minimum, or saddle. (The max/min test will work on two. Study m(x, 0) and
m(0, y) to classify the third.) Explain the name monkey saddle—a computer
picture may help.

4.7.10. Linear algebra readily addresses the question whether an n × n matrix
is positive definite, negative definite, or indefinite.

Definition 4.7.9 (Characteristic polynomial). Let M be an n × n matrix.
Its characteristic polynomial is

p_M(λ) = det(M − λI).

The characteristic polynomial of M is a polynomial of degree n in the scalar
variable λ.

While the roots of a polynomial with real coefficients are in general com-
plex, the roots of the characteristic polynomial of a symmetric matrix in
Mn (R) are guaranteed to be real. The characterization we want is contained
in the following theorem.

Theorem 4.7.10 (Description of definite/indefinite matrices). Let M
be a symmetric matrix in M_n(R). Then:
(1) M is positive definite if and only if all the roots of p_M(λ) are positive.
(2) M is negative definite if and only if all the roots of p_M(λ) are negative.
(3) M is indefinite if and only if p_M(λ) has positive roots and negative roots.

With this result one can extend the methods in this section to functions
of more than two variables.
(a) Let M be the symmetric matrix [α β; β δ] ∈ M_2(R). Show that

p_M(λ) = λ^2 − (α + δ)λ + (αδ − β^2).

(b) Show that Theorem 4.7.10 is equivalent to Proposition 4.7.6 when
n = 2.
(c) Classify the 3 × 3 matrices

⎡  1 −1  0 ⎤   ⎡ 0 1 0 ⎤
⎢ −1  2  0 ⎥,  ⎢ 1 0 1 ⎥.
⎣  0  0  1 ⎦   ⎣ 0 1 0 ⎦
A generalization of Proposition 4.7.7 also holds, because the roots of a
polynomial vary continuously with the polynomial’s coefficients. The general-
ized proposition leads to the following result.

Proposition 4.7.11 (General max/min test). Let f : A → R (where A ⊂
R^n) be C^2 on its interior points. Let a be an interior point of A, and suppose
that f′(a) = 0_n. Let the second derivative matrix f′′(a) have characteristic
polynomial p(λ).
(1) If all roots of p(λ) are positive then f(a) is a local minimum.
(2) If all roots of p(λ) are negative then f(a) is a local maximum.
(3) If p(λ) has positive and negative roots then f(a) is a saddle point.

4.7.11. This exercise eliminates the cross terms from a quadratic function of
n variables, generalizing the calculation for n = 2 in this section. Throughout,
we abbreviate positive definite to positive. Let M be a positive n × n symmetric
matrix where n > 1. This exercise shows how to diagonalize M as a quadratic
function. (This is different from diagonalizing M as a transformation, as is
done in every linear algebra course.) Decompose M as

M = ⎡ a  c^T ⎤
    ⎣ c  N   ⎦,

with a > 0 and c ∈ R^{n−1} a column vector and N positive (n − 1) × (n − 1)
symmetric. Define

M_2 = N − a^{−1} c c^T,

again (n − 1) × (n − 1) symmetric, though we don't yet know whether it is
positive. Check that

M = ⎡ 1  a^{−1}c^T ⎤^T ⎡ a   0^T ⎤ ⎡ 1  a^{−1}c^T ⎤
    ⎣ 0  I_{n−1}   ⎦   ⎣ 0   M_2 ⎦ ⎣ 0  I_{n−1}   ⎦.

Show that in terms of quadratic functions, this says (letting v = (x_1, . . . , x_n)
and v_2 = (x_2, . . . , x_n) with these vectors viewed as columns, and letting x̃_1 =
x_1 + a^{−1} c^T v_2) that

v^T M v = a x̃_1^2 + v_2^T M_2 v_2.

Consequently M_2 is positive: indeed, if the last term of the previous display
is nonpositive then setting x_1 = −a^{−1} c^T v_2 makes x̃_1 zero and thus makes
the entire right side nonpositive, so that v = 0_n because M is positive, and
consequently v_2 = 0_{n−1}. Repeating the process on M_2, and so on, eventually
gives

v^T M v = a_1 x̃_1^2 + ⋯ + a_n x̃_n^2,

with all a_i > 0 and with the vector ṽ = (x̃_1, . . . , x̃_n) of modified variables the
image of the vector v of original variables by a linear transformation whose
matrix is upper triangular with 1's on the diagonal.

4.8 Directional Derivatives and the Gradient


Let f be a scalar-valued function, f ∶ A Ð→ R where A ⊂ Rn , and assume
that f is differentiable at a point a of A. While the derivative Dfa is a rather
abstract object—the linear mapping that gives the best approximation of f (a+
h) − f (a) for small h—the partial derivatives Dj f (a) are easy to understand.
The jth partial derivative of f at a,

f (a + tej ) − f (a)
Dj f (a) = lim ,
t→0 t

measures the rate of change of f at a as its input varies in the jth direction.
Visually, Dj f (a) gives the slope of the jth cross section through a of the
graph of f .
Analogous formulas measure the rate of change of f at a as its input varies
in a direction that doesn’t necessarily parallel a coordinate axis. A direction
in Rn is specified by a unit vector d, i.e., a vector d such that ∣d∣ = 1. As the
input to f moves a distance t in the d direction, f changes by f (a + td) − f (a).
Thus the following definition is natural.

Definition 4.8.1 (Directional derivative). Let f : A → R (where A ⊂ R^n)
be a function, let a be an interior point of A, and let d ∈ R^n be a unit vector.
The directional derivative of f at a in the d direction is

D_d f(a) = lim_{t→0} (f(a + td) − f(a)) / t,

if this limit exists.

The directional derivatives of f in the standard basis vector directions are
simply the partial derivatives.
When n = 2 and f is differentiable at (a, b) ∈ R^2, its graph has a well-
fitting tangent plane through (a, b, f(a, b)). The plane is determined by the
two slopes D1 f(a, b) and D2 f(a, b), and it geometrically determines the rate of
increase of f in all other directions. (See Figure 4.11.) The geometry suggests
that if f : A → R (where A ⊂ R^n) is differentiable at a then all directional
derivatives are expressible in terms of the partial derivatives. This is true and
easy to show. A special case of the differentiability property (4.1) is

f(a + td) − f(a) − Df_a(td) is o(td) = o(t),

or, since the constant t passes through the linear map Df_a,

lim_{t→0} (f(a + td) − f(a)) / t = Df_a(d),

or, since the linear map Df_a has matrix [D1 f(a), . . . , Dn f(a)],

D_d f(a) = ∑_{j=1}^n Dj f(a) d_j

as desired.
The derivative matrix f ′ (a) of a scalar-valued function f at a is often
called the gradient of f at a and written ∇f (a). That is,

∇f (a) = f ′ (a) = [D1 f (a), . . . , Dn f (a)].

The previous calculation and this definition lead to the following theorem.

Figure 4.11. General directional slope determined by axis-directional slopes

Theorem 4.8.2 (Directional derivative and gradient). Let the function
f : A → R (where A ⊂ R^n) be differentiable at a, and let d ∈ R^n be a unit
vector. Then the directional derivative of f at a in the d direction exists, and
it is equal to

D_d f(a) = ∑_{j=1}^n Dj f(a) d_j = ⟨∇f(a), d⟩ = |∇f(a)| cos θ_{∇f(a),d}.

Therefore:
• The rate of increase of f at a in the d direction varies with d, from −∣∇f (a)∣
when d points in the direction opposite to ∇f (a), to ∣∇f (a)∣ when d points
in the same direction as ∇f (a).
• In particular, the vector ∇f (a) points in the direction of greatest increase
of f at a, and its modulus ∣∇f (a)∣ is precisely this greatest rate.
• Also, the directions orthogonal to ∇f (a) are the directions in which f
neither increases nor decreases at a.

This theorem gives necessary conditions that arise in consequence of the


derivative of f existing at a point a. As in Section 4.5, the converse statement,
that these conditions are sufficient to make the derivative of f exist at a, is
false. Each directional derivative Dd f (a) can exist without the derivative Dfa
existing (Exercise 4.8.10). Furthermore, each directional derivative can exist
at a and satisfy the formula Dd f (a) = ⟨∇f (a), d⟩ in the theorem, but still
without the derivative Dfa existing (Exercise 4.8.11). The existence of the
multivariable derivative Dfa is a stronger condition than any amount of one-
variable cross-sectional derivative data at a.
For an example illustrating the theorem, if you are skiing on the quadratic
mountain f(x, y) = 9 − x^2 − 2y^2 at the point (a, f(a)) = (1, 1, 6), then your
gradient meter shows

∇f(1, 1) = (D1 f(1, 1), D2 f(1, 1)) = (−2x, −4y)|_{(x,y)=(1,1)} = (−2, −4).

Therefore the direction of steepest descent down the hillside is the (2, 4)-
direction (this could be divided by its modulus √20 to make it a unit vector),
and the slope of steepest descent is the absolute value |∇f(1, 1)| = √20. On the
other hand, cross-country skiing in the (2, −1)-direction, which is orthogonal
to ∇f(1, 1), neither gains nor loses elevation immediately. (See Figure 4.12.)
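These slopes drop straight out of the inner product formula of Theorem 4.8.2.
A small numeric sketch (plain Python with numpy assumed):

```python
import numpy as np

grad = np.array([-2.0, -4.0])                 # grad f(1, 1) on the mountain

d_down = -grad / np.linalg.norm(grad)         # unit vector of steepest descent
d_flat = np.array([2.0, -1.0]) / np.sqrt(5)   # unit vector orthogonal to grad

print(grad @ d_down)          # -sqrt(20): the steepest downhill slope
print(grad @ d_flat)          # 0: no immediate gain or loss of elevation
print(np.linalg.norm(grad))   # sqrt(20): the greatest rate of increase
```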
The cross-country skiing trail that neither climbs nor descends has a mathe-
matical name.

Figure 4.12. Gradient and its orthogonal vector for the parabolic mountain

Definition 4.8.3 (Level set). Let f ∶ A Ð→ R (where A ⊂ Rn ) be a function.


A level set of f is the set of points in A that map under f to some fixed
value b in R,
L = {x ∈ A ∶ f (x) = b}.

The curves on a topographical map are level sets of the altitude function.
The isotherms on a weather map are level sets of the temperature function,
and the isobars on a weather map are level sets of the pressure function.
Indifference curves in economics are level sets of the utility function, and iso-
quants are level sets of the production function. Surfaces of constant potential
in physics are level sets of the potential function.
For example, on the mountain

f ∶ R2 Ð→ R, f (x, y) = 9 − x2 − 2y 2 ,

the level set for b = 5 is an ellipse in the plane,

L = {(x, y) ∈ R2 ∶ x2 + 2y 2 = 4}.

And similarly, the level set is an ellipse for every real number b up to 9. As
just mentioned, plotting the level sets of a function f of two variables gives a
topographical map description of f . The geometry is different for a function

Figure 4.13. Level set and gradients for the sine function

of one variable: each level set is a subset of the line. For example, consider a
restriction of the sine function,

f ∶ (0, π) Ð→ R, f (x) = sin(x).

The level set taken by f to 1/2 consists of two points,

L = {π/6, 5π/6}.

For a function of three variables, each level set is a subset of space. For ex-
ample, if a, b, and c are positive numbers, and the function is

f : R^3 → R,  f(x, y, z) = (x/a)^2 + (y/b)^2 + (z/c)^2,

then its level sets are ellipsoids. Specifically, for every positive r, the level set
of points taken by f to r is the ellipsoid of x-radius a√r, y-radius b√r, and
z-radius c√r,

L = { (x, y, z) ∈ R^3 : (x/(a√r))^2 + (y/(b√r))^2 + (z/(c√r))^2 = 1 }.

The third bullet in Theorem 4.8.2 says that the gradient is normal to the
level set. This fact may seem surprising, since the gradient is a version of the
derivative, and we think of the derivative as describing a tangent object to a
graph. The reason that the derivative has become a normal object is that

a level set is different from a graph.

A level set of f is a subset of the domain of f , whereas the graph of f , which


simultaneously shows the domain and the range of f , is a subset of a space that
is one dimension larger. For instance, if we think of f as measuring elevation,
then the graph of f is terrain in three-dimensional space, while a level set
of f is the set of points in the plane that lie beneath the terrain at some
constant altitude; the level set is typically a curve. Figure 4.12 illustrates the
difference in the case of the mountain function. Note that in the left part of the

figure, the gradient is orthogonal to the ellipse on which it starts. Similarly,


Figure 4.13 illustrates the difference in the case of the restricted sine function
from the previous paragraph. In the figure, the x-axis shows the two-point
level set from the previous paragraph and the gradient of f at each of the two
points. The fact that one gradient points rightward indicates that to climb
the graph of f over that point, one should move to the right, and the slope
to be encountered on the graph will be the length of the gradient on the axis.
Similarly, the other gradient points leftward, because to climb the graph over
the other point, one should move to the left. Here each gradient is trivially
orthogonal to the level set, because the level set consists of isolated points.
For the three-variable function from the previous paragraph, we still can see
the level sets—they are concentric ellipsoids—but not the graph, which would
require four dimensions. Instead, we can conceive of the function as measuring
temperature in space, and of the gradient as pointing in the direction to move
for greatest rate of temperature-increase, with the length of the gradient being
that rate. Figure 4.14 shows a level set for the temperature function and
several gradients, visibly orthogonal to the level set.

Figure 4.14. Level set and gradients for the temperature function

Although Theorem 4.8.2 has already stated that the gradient is orthogonal
to the level set, we now amplify the argument. Let f ∶ A Ð→ R (where A ⊂ Rn )
be given, and assume that it is differentiable. Let a be a point of A, and
let b = f (a). Consider the level set of f containing a,
L = {x ∈ A ∶ f (x) = b} ⊂ Rn ,
and consider any smooth curve from some interval into the level set, passing
through a,
γ ∶ (−ε, ε) Ð→ L, γ(0) = a.
The composite function
f ○ γ ∶ (−ε, ε) Ð→ R
is the constant function b, so that its derivative at 0 is 0. By the chain rule
this relation is

∇f (a) ⋅ γ ′ (0) = 0.
Every tangent vector to L at a takes the form γ ′ (0) for some γ of the sort that
we are considering. Therefore, ∇f (a) is orthogonal to every tangent vector
to L at a, i.e., ∇f (a) is normal to L at a.
Before continuing to work with the gradient, we pause to remark that level
sets and graphs are related. For one thing:
The graph of a function is also the level set of a different function.
To see this, let n > 1, let A0 be a subset of Rn−1 , and let f ∶ A0 Ð→ R be any
function. Given this information, let A = A0 × R and define a second function
g ∶ A Ð→ R,
g(x1 , . . . , xn−1 , xn ) = f (x1 , . . . , xn−1 ) − xn .
Then the graph of f is a level set of g, specifically the set of inputs that g takes
to 0,

graph(f) = {x ∈ A_0 × R : x_n = f(x_1, . . . , x_{n−1})}
         = {x ∈ A : g(x) = 0}.

For example, the graph of the mountain function f (x, y) = 9 − x2 − 2y 2 is also


a level set of the function g(x, y, z) = 9 − x2 − 2y 2 − z. But in contrast to this
quick method of defining g explicitly in terms of f to show that every graph
is a level set, the converse question is much more subtle:
To what extent is some given level set also a graph?
For example, the level sets of the mountain function f are ellipses (as shown
in Figure 4.12), but an ellipse is not the graph of y as a function of x or
vice versa. This converse question will be addressed by the implicit function
theorem in the next chapter.
Returning to the gradient, the geometric fact that it is normal to the level
set makes it easy to find the tangent plane to a two-dimensional surface in R^3.
For example, consider the surface

H = {(x, y, z) ∈ R^3 : x^2 + y^2 − z^2 = 1}.

(This surface is a hyperboloid of one sheet.) The point (2√2, 3, 4) belongs
to H. Note that H is a level set of the function f(x, y, z) = x^2 + y^2 − z^2, and
compute the gradient

∇f(2√2, 3, 4) = (4√2, 6, −8).

Since this is the normal vector to H at (2√2, 3, 4), the tangent plane equation
at the end of Section 3.10 shows that the equation of the tangent plane to H
at (2√2, 3, 4) is

4√2(x − 2√2) + 6(y − 3) − 8(z − 4) = 0.
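The same computation in sympy (assumed available) takes a few lines: evaluate
the gradient at the point and dot it against (x, y, z) minus the point.

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 - z**2
p = (2*sp.sqrt(2), 3, 4)

at_p = dict(zip((x, y, z), p))
normal = [f.diff(v).subs(at_p) for v in (x, y, z)]
print(normal)   # [4*sqrt(2), 6, -8], the normal vector at p

plane = sum(n*(v - v0) for n, v, v0 in zip(normal, (x, y, z), p))
print(sp.expand(plane))   # 4*sqrt(2)*x + 6*y - 8*z - 2, which is 0 on the plane
```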

If a function f ∶ Rn Ð→ R has a continuous gradient, then from every


starting point a ∈ Rn where the gradient ∇f (a) is nonzero, there is a path of
steepest ascent of f (called an integral curve of ∇f ) starting at a. If n = 2
and the graph of f is seen as a surface in 3-space, then the integral curve
from the point (a, b) ∈ R2 is the shadow of the path followed by a particle
climbing the graph, starting at (a, b, f (a, b)). If n = 2 or n = 3 and f is viewed
as temperature, then the integral curve is the path followed by a heat-seeking
bug.
To find the integral curve, we set up an equation that describes it. The
idea is to treat the gradient vector as a divining rod and follow it starting
at a. Doing so produces a path in Rn that describes time-dependent motion,
always in the direction of the gradient, and always with speed equal to the
modulus of the gradient. Computing the path amounts to finding an interval
I ⊂ R containing 0 and a mapping

γ ∶ I Ð→ Rn

that satisfies the differential equation with initial conditions

γ ′ (t) = ∇f (γ(t)), γ(0) = a. (4.3)

Whether (and how) one can solve this for γ depends on the data f and a.
In the case of the mountain function f(x, y) = 9 − x^2 − 2y^2, with gradient
∇f(x, y) = (−2x, −4y), the path γ has two components γ_1 and γ_2, and the
differential equation and initial conditions (4.3) become

(γ_1′(t), γ_2′(t)) = (−2γ_1(t), −4γ_2(t)),  (γ_1(0), γ_2(0)) = (a, b),

to which the unique solution is

(γ_1(t), γ_2(t)) = (a e^{−2t}, b e^{−4t}).

Let x = γ_1(t) and y = γ_2(t). Then the previous display shows that

a^2 y = b x^2,

and so the integral curve lies on a parabola. The parabola is degenerate if the
starting point (a, b) lies on either axis. Every parabola that forms an integral
curve for the mountain function meets orthogonally with every ellipse that
forms a level set. (See Figure 4.15.)
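Because this system is decoupled, sympy's ODE solver (assumed available)
reproduces the solution and the parabola at once:

```python
import sympy as sp

t, a, b = sp.symbols('t a b')
g1, g2 = sp.Function('g1'), sp.Function('g2')

sol1 = sp.dsolve(sp.Eq(g1(t).diff(t), -2*g1(t)), g1(t), ics={g1(0): a})
sol2 = sp.dsolve(sp.Eq(g2(t).diff(t), -4*g2(t)), g2(t), ics={g2(0): b})
print(sol1.rhs, sol2.rhs)   # a*exp(-2*t), b*exp(-4*t)

# The curve lies on the parabola a^2 y = b x^2:
X, Y = sol1.rhs, sol2.rhs
print(sp.simplify(a**2*Y - b*X**2))   # 0
```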
For another example, let f(x, y) = x^2 − y^2. The level sets for this function
are hyperbolas having the 45 degree lines x = y and x = −y as asymptotes.
The gradient of the function is ∇f(x, y) = (2x, −2y), so to find the integral
curve starting at (a, b), we need to solve the equations

(γ_1′(t), γ_2′(t)) = (2γ_1(t), −2γ_2(t)),  (γ_1(0), γ_2(0)) = (a, b).

Figure 4.15. Level sets and integral curves for the parabolic mountain

Figure 4.16. Hyperbolic level sets and integral curves

Thus (γ_1(t), γ_2(t)) = (a e^{2t}, b e^{−2t}), so that the integral curve lies on the hy-
perbola xy = ab having the axes x = 0 and y = 0 as asymptotes. The integral
curve hyperbola is orthogonal to the level set hyperbolas. (See Figure 4.16.)
For another example, let f(x, y) = e^x − y. The level sets for this function
are the familiar exponential curve y = e^x and all of its vertical translates. The
gradient of the function is ∇f(x, y) = (e^x, −1), so to find the integral curve
starting at (0, 1), we need to solve the equations

(γ_1′(t), γ_2′(t)) = (e^{γ_1(t)}, −1),  (γ_1(0), γ_2(0)) = (0, 1).

To find γ_1, reason that

e^{−γ_1(t)} γ_1′(t) = 1  for all t ≥ 0 where the system is sensible,

and so for all t ≥ 0 where the system is sensible,

∫_{τ=0}^{t} e^{−γ_1(τ)} γ_1′(τ) dτ = t.

Integration gives

−e^{−γ_1(t)} + e^{−γ_1(0)} = t,

and so, recalling that γ_1(0) = 0,

γ_1(t) = − ln(1 − t),  0 ≤ t < 1.

Also, γ_2(t) = 1 − t. Thus the integral curve,

(γ_1(t), γ_2(t)) = (− ln(1 − t), 1 − t),  0 ≤ t < 1,

is the portion of the curve y = e^{−x} where x ≥ 0. (See Figure 4.17.) The entire
integral curve is traversed in one unit of time.

Figure 4.17. Negative exponential integral curve for exponential level sets

For another example, let f(x, y) = x^2 + xy + y^2. The level sets for this
function are tilted ellipses. The gradient of f is ∇f(x, y) = (2x + y, x + 2y), so
to find the integral curve starting at (a, b), we need to solve the equations

γ_1′(t) = 2γ_1(t) + γ_2(t),  γ_1(0) = a,
γ_2′(t) = γ_1(t) + 2γ_2(t),  γ_2(0) = b.

Here the two differential equations are coupled, meaning that the derivative
of γ_1 depends on both γ_1 and γ_2, and similarly for the derivative of γ_2. How-
ever, the system regroups conveniently,

(γ_1 + γ_2)′(t) = 3(γ_1 + γ_2)(t),  (γ_1 + γ_2)(0) = a + b,
(γ_1 − γ_2)′(t) = (γ_1 − γ_2)(t),  (γ_1 − γ_2)(0) = a − b.

Thus

(γ_1 + γ_2)(t) = (a + b)e^{3t},
(γ_1 − γ_2)(t) = (a − b)e^{t},

from which

γ_1(t) = (1/2)(a + b)e^{3t} + (1/2)(a − b)e^{t},
γ_2(t) = (1/2)(a + b)e^{3t} − (1/2)(a − b)e^{t}.

These call to be checked, and indeed,

γ_1′(t) = (3/2)(a + b)e^{3t} + (1/2)(a − b)e^{t} = 2γ_1(t) + γ_2(t),
γ_2′(t) = (3/2)(a + b)e^{3t} − (1/2)(a − b)e^{t} = γ_1(t) + 2γ_2(t).

The motion takes place along the cubic curve having equation

(x + y)/(a + b) = (x − y)^3/(a − b)^3.

(See Figure 4.18.) The integral curves in the first two examples were quadratic
only by happenstance, in consequence of the functions 9 − x^2 − 2y^2 and x^2 − y^2
having such simple coefficients. Changing the mountain function to 9 − x^2 − 3y^2
would produce cubic integral curves, and changing x^2 − y^2 to x^2 − 5y^2 in the
second example would produce integral curves x^5 y = a^5 b.

Figure 4.18. Cubic integral curve for elliptic level sets

For another example, suppose the temperature in space is given by
T(x, y, z) = 1/(x^2 + y^2 + z^2). (This function blows up at the origin, so we
don't work there.) The level sets of this function are spheres, and the integral
curves are rays going toward the origin. The level set passing through the
point (a, b, c) in space is again orthogonal to the integral curve through the
same point. In general, solving the vector differential equation (4.3) to find
the integral curves γ of a function f can be difficult.
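When a symbolic solution of (4.3) is out of reach, a crude numeric march
along the gradient still traces an integral curve approximately. A minimal
Euler-method sketch (plain Python with numpy; the step size and step count
are arbitrary choices):

```python
import numpy as np

def integral_curve(grad, start, dt=0.001, steps=1000):
    """Approximate gamma with gamma'(t) = grad(gamma(t)), gamma(0) = start."""
    p = np.array(start, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        p = p + dt * grad(p)
        path.append(p.copy())
    return np.array(path)

# The coupled example above: grad f(x, y) = (2x + y, x + 2y), (a, b) = (1, 0).
path = integral_curve(lambda p: np.array([2*p[0] + p[1], p[0] + 2*p[1]]),
                      start=(1.0, 0.0))
x, y = path[-1]
print(x + y, (x - y)**3)   # nearly equal: the exact curve has x + y = (x - y)^3
```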

Exercises
4.8.1. Let f(x, y, z) = xy^2 + yz. Find D_{(2/3, −1/3, 2/3)} f(1, 1, 2).
4.8.2. Let g(x, y, z) = xyz, and let d be the unit vector in the direction from
(1, 2, 3) to (3, 1, 5). Find Dd g(1, 2, 3).
4.8.3. Let f be differentiable at a point a, and let d = −e1 , a unit vector. Are
the directional derivative Dd f (a) and the partial derivative D1 f (a) equal?
Explain.

4.8.4. Formulate and prove a version of Rolle’s theorem for functions of n


variables.

4.8.5. Show that if f ∶ Rn Ð→ R and g ∶ Rn Ð→ R are differentiable then so is


their product f g ∶ Rn Ð→ R, and ∇(f g) = f ∇g + g∇f .

4.8.6. Find the tangent plane to the surface {(x, y, z) ∶ x2 + 2y 2 + 3zx − 10 = 0}


in R3 at the point (1, 2, 13 ).

4.8.7. (a) Consider the surface S = {(x, y, z) ∈ R3 ∶ xy = z}. Let p = (a, b, c) be


a generic point of S. Find the tangent plane Tp to S at p.
(b) Show that the intersection S ∩ Tp consists of two lines.

4.8.8. (a) Let A and α be nonzero constants. Similarly to an example in the
section, solve the one-variable differential equation

z′(t) = Aα e^{αz(t)},  z(0) = 0.

(b) The pheromone concentration in the plane is given by f(x, y) = e^{2x} +
4e^y. What path does a bug take, starting from the origin?

4.8.9. (a) Sketch some level sets and integral curves for the function f (x, y) =
x2 + y. Find the integral curves analytically if you can.
(b) Sketch some level sets and integral curves for the function f (x, y) = xy.
Find the integral curves analytically if you can.

4.8.10. Recall the function f : R^2 → R whose graph is the crimped sheet,

f(x, y) = { x^2 y/(x^2 + y^2)  if (x, y) ≠ (0, 0),
          { 0                  if (x, y) = (0, 0).
(a) Show that f is continuous at (0, 0).
(b) Find the partial derivatives D1 f (0, 0) and D2 f (0, 0).
(c) Let d be any unit vector in R2 (thus d takes the form d = (cos θ, sin θ)
for some θ ∈ R). Show that Dd f (0, 0) exists by finding it.
(d) Show that in spite of (c), f is not differentiable at (0, 0). (Use your re-
sults from parts (b) and (c) to contradict Theorem 4.8.2.) Thus, the existence
of every directional derivative at a point is not sufficient for differentiability
at the point.

4.8.11. Define f : R^2 → R by

f(x, y) = { 1  if y = x^2 and (x, y) ≠ (0, 0),
          { 0  otherwise.

(a) Show that f is discontinuous at (0, 0). It follows that f is not differ-
entiable at (0, 0).

(b) Let d be any unit vector in R2 . Show that Dd f (0, 0) = 0. Show that
consequently the formula Dd f (0, 0) = ⟨∇f (0, 0), d⟩ holds for every unit vec-
tor d. Thus, the existence of every directional derivative at a point and the
fact that each directional derivative satisfies the formula are still not sufficient
for differentiability at the point.

4.8.12. Fix two real numbers a and b satisfying 0 < a < b. Define a mapping
T = (T1 , T2 , T3 ) ∶ R2 Ð→ R3 by

T (s, t) = ((b + a cos s) cos t, (b + a cos s) sin t, a sin s).

(a) Describe the shape of the set in R3 mapped to by T . (The answer will
explain the name T .)
(b) Find the points (s, t) ∈ R2 such that ∇T1 (s, t) = 02 . The points map to
only four image points p under T . Show that one such p is a maximum of T1 ,
another is a minimum, and the remaining two are saddle points.
(c) Find the points (s, t) ∈ R^2 such that ∇T3(s, t) = 0_2. To what points q
do these (s, t) map under T? Which such q are maxima of T3? Minima? Saddle
points?
5 Inverse and Implicit Functions

The question whether a mapping f ∶ A Ð→ Rn (where A ⊂ Rn ) is globally in-


vertible is beyond the local techniques of differential calculus. However, a local
theorem is finally in reach. The idea sounds plausible: if the derivative of f is
invertible at the point a then f itself, being well approximated near a by its
derivative, should also be invertible in the small. However, it is by no means
a general principle that an approximated object must inherit the properties
of the object approximating it. On the contrary, mathematics often approx-
imates complicated objects by simpler ones. For instance, Taylor’s theorem
approximates any function that has many derivatives by a polynomial, but
this does not make the function itself a polynomial as well.
To further illustrate the issue via an example, consider an argument in
support of the one-variable critical point theorem. Let f ∶ A Ð→ R (where
A ⊂ R) be differentiable, and let f ′ (a) be positive at an interior point a of A.
We might reason as follows:
f cannot have a maximum at a, because the tangent line to the graph
of f at (a, f (a)) has a positive slope, so that as we move our input
rightward from a, we climb.
But this reasoning is vague. What do we climb, the tangent line or the graph?
The argument linearizes the question by fitting the tangent line through the
graph, and then it solves the linearized problem instead by checking whether
we climb the tangent line rather than whether we climb the graph. The calcu-
lus is light and graceful. But strictly speaking, part of the argument is tacit:
Since the tangent line closely approximates the graph near the point of
tangency, the fact that we climb the tangent line means that we climb
the graph as well for a while.
And the tacit part of the argument is not fully quantitative. How does the
climbing property of the tangent line transfer to the graph? The mean value
theorem, and a stronger hypothesis that f ′ is positive about a as well as at a,
resolve the question, since for x slightly larger than a,


f (x) − f (a) = f ′ (c)(x − a) for some c ∈ (a, x),

and the right side is the product of two positive numbers, hence positive. But
the mean value theorem is an abstract existence theorem (“for some c”) whose
proof relies on foundational properties of the real number system. Thus, mov-
ing from the linearized problem to the actual problem is far more sophisticated
technically than linearizing the problem or solving the linearized problem. In
sum, this one-variable example is meant to amplify the point of the preced-
ing paragraph, that (now returning to n dimensions) if f ∶ A Ð→ Rn has
an invertible derivative at a then the inverse function theorem—that f itself
is invertible in the small near a—is surely inevitable, but its proof will be
technical and require strengthening our hypotheses.
Already in the one-variable case, the inverse function theorem relies on
foundational theorems about the real number system, on a property of con-
tinuous functions, and on a foundational theorem of differential calculus. We
quickly review the ideas. Let f ∶ A Ð→ R (where A ⊂ R) be a function, let a be
an interior point of A, and let f be continuously differentiable on some inter-
val about a, meaning that f ′ exists and is continuous on the interval. Suppose
that f ′ (a) > 0. Since f ′ is continuous about a, the persistence of inequality
principle (Proposition 2.3.10) says that f ′ is positive on some closed interval
[a − δ, a + δ] about a. By an application of the mean value theorem as in the
previous paragraph, f is therefore strictly increasing on the interval, and so its
restriction to the interval does not take any value twice. By the intermediate
value theorem, f takes every value from f (a − δ) to f (a + δ) on the interval.
Therefore f takes every such value exactly once, making it locally invertible.
A slightly subtle point is that the inverse function f −1 is continuous at f (a),
but then a purely formal calculation with difference quotients will verify that
the derivative of f −1 exists at f (a) and is 1/f ′ (a). Note how heavily this
proof relies on the fact that R is an ordered field. A proof of the multivariable
inverse function theorem must use other methods.
Although the proof to be given in this chapter is technical, its core idea
is simple common sense. Let a mapping f be given that takes x-values to y-
values and in particular takes a to b. Then the local inverse function must take
y-values near b to x-values near a, taking each such y back to the unique x
that f took to y in the first place. We need to determine conditions on f
that make us believe that a local inverse exists. As explained above, the basic
condition is that the derivative of f at a—giving a good approximation of f
near a, but easier to understand than f itself—should be invertible, and the
derivative should be continuous as well. With these conditions in hand, an
argument similar to that in the one-variable case (though more painstaking)
shows that f is locally injective:
• Given y near b, there is at most one x near a that f takes to y.
So the remaining problem is to show that f is locally surjective:
• Given y near b, show that there is some x near a that f takes to y.

This problem decomposes into two subproblems. First:


• Given y near b, show that there is some x near a that f takes closest to y.
Then:
• Show that f takes this particular x exactly to y.
And once the appropriate environment is established, solving each subprob-
lem is just a matter of applying the main theorems from the previous three
chapters.
Not only does the inverse function theorem have a proof that uses so
much previous work from this course so nicely, it also has useful consequences.
It leads easily to the implicit function theorem, which answers a different
question: when does a set of constraining relations among a set of variables
make some of the variables dependent on the others? The implicit function
theorem in turn fully justifies (rather than linearizing) the Lagrange multiplier
method, a technique for solving optimization problems with constraints. As
discussed in the preface to these notes, optimization with constraints has no
one-variable counterpart, and it can be viewed as the beginning of calculus
on curved spaces.

5.1 Preliminaries

The basic elements of topology in Rn —ε-balls; limit points; closed, bounded, and compact sets—were introduced in Section 2.4 to provide the environment
for the extreme value theorem. A little more topology is now needed before
we proceed to the inverse function theorem. Recall that for every point a ∈ Rn
and every radius ε > 0, the ε-ball at a is the set

B(a, ε) = {x ∈ Rn ∶ ∣x − a∣ < ε}.

Recall also that a subset of Rn is called closed if it contains all of its limit
points. Not unnaturally, a subset S of Rn is called open if its complement
S c = Rn − S is closed. A set, however, is not a door: it can be neither open nor
closed, and it can be both open and closed. (Examples?)

Proposition 5.1.1 (ε-balls are open). For every a ∈ Rn and every ε > 0,
the ball B(a, ε) is open.

Proof. Let x be any point in B(a, ε), and set δ = ε − ∣x − a∣, a positive number.
The triangle inequality shows that B(x, δ) ⊂ B(a, ε) (Exercise 5.1.1), and
therefore x is not a limit point of the complement B(a, ε)c . Consequently all
limit points of B(a, ε)c are in fact elements of B(a, ε)c , which is thus closed,
making B(a, ε) itself open. ⊔


This proof shows that every point x ∈ B(a, ε) is an interior point. In fact,
an equivalent definition of open is that a subset of Rn is open if each of its
points is interior (Exercise 5.1.2).
The closed ε-ball at a, denoted B(a, ε), consists of the corresponding
open ball with its edge added in,

B(a, ε) = {x ∈ Rn ∶ ∣x − a∣ ≤ ε}.

The boundary of the closed ball B(a, ε), denoted ∂B(a, ε), is the set of
points on the edge,

∂B(a, ε) = {x ∈ Rn ∶ ∣x − a∣ = ε}.

(See Figure 5.1.) Every closed ball B and its boundary ∂B are compact sets
(Exercise 5.1.3).

Figure 5.1. Open ball, closed ball, and boundary

Let f ∶ A Ð→ Rm (where A ⊂ Rn ) be continuous, let W be an open subset of Rm , and let V be the set of all points in A that f maps into W ,

V = {x ∈ A ∶ f (x) ∈ W }.

The set V is called the inverse image of W under f ; it is often denoted f −1 (W ), but this is a little misleading because f need not actually have an
inverse mapping f −1 . For example, if f ∶ R Ð→ R is the squaring function
f (x) = x2 , then the inverse image of [4, 9] is [−3, −2] ∪ [2, 3], and this set is
denoted f −1 ([4, 9]) even though f has no inverse. (See Figure 5.2, in which
f is not the squaring function, but the inverse image f −1 (W ) also has two
components.) The inverse image concept generalizes an idea that we saw in
Section 4.8: the inverse image of a one-point set under a mapping f is a level
set of f , as in Definition 4.8.3.
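
To make the inverse image concrete computationally, here is a minimal Python sketch (not part of the text; the grid spacing is an arbitrary choice for illustration) that recovers f −1 ([4, 9]) for the squaring function by sampling:

```python
# Approximate the inverse image of W = [4, 9] under f(x) = x^2 by sampling
# a grid on [-5, 5]; the exact inverse image is [-3, -2] union [2, 3].
def f(x):
    return x * x

samples = [i / 100.0 for i in range(-500, 501)]
inverse_image = [x for x in samples if 4 <= f(x) <= 9]

print(min(inverse_image), max(inverse_image))    # -3.0 3.0
print([x for x in inverse_image if -2 < x < 2])  # []: the gap (-2, 2)
```

The empty middle check reflects the two components of f −1 ([4, 9]), even though f itself has no inverse mapping.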
Although the forward image under a continuous function of an open set
need not be open (Exercise 5.1.4), inverse images behave more nicely. The
connection between continuous mappings and open sets is provided by the
following theorem.
Theorem 5.1.2 (Inverse image characterization of continuity). Let
f ∶ A Ð→ Rm (where A is an open subset of Rn ) be continuous. Let W ⊂ Rm
be open. Then f −1 (W ), the inverse image of W under f , is open.


Figure 5.2. Inverse image with two components

Proof. Let a be a point of f −1 (W ). We want to show that it is an interior point. Let w = f (a), a point of W . Since W is open, some ball B(w, ρ) is
contained in W . Consider the function

g ∶ A Ð→ R, g(x) = ρ − ∣f (x) − w∣.

This function is continuous, and it satisfies g(a) = ρ > 0, and so by a slight variant of the persistence of inequality principle (Proposition 2.3.10) there
exists a ball B(a, ε) ⊂ A on which g remains positive. That is,

f (x) ∈ B(w, ρ) for all x ∈ B(a, ε).

Since B(w, ρ) ⊂ W , this shows that B(a, ε) ⊂ f −1 (W ), making a an interior point of f −1 (W ) as desired. ⊔

The converse to Theorem 5.1.2 is also true and is Exercise 5.1.8. We need
one last technical result for the proof of the inverse function theorem.

Lemma 5.1.3 (Difference magnification lemma). Let B be a closed ball in Rn and let g be a differentiable mapping from an open superset of B in Rn to Rn . Suppose that there is a number c such that ∣Dj gi (x)∣ ≤ c for all i, j ∈
{1, . . . , n} and all x ∈ B. Then

∣g(x̃) − g(x)∣ ≤ n2 c∣x̃ − x∣ for all x, x̃ ∈ B.

A comment about the lemma's environment might be helpful before we go into the details of the proof. We know that continuous mappings behave well
on compact sets. On the other hand, since differentiability is sensible only at
interior points, differentiable mappings behave well on open sets. And so to
work effectively with differentiability, we want a mapping on a domain that is
open, allowing differentiability everywhere, but then we restrict our attention
to a compact subset of the domain so that continuity (which follows from
differentiability) will behave well too. The closed ball and its open superset
in the lemma arise from these considerations.

Proof. Consider any two points x, x̃ ∈ B. The size bounds give

∣g(x̃) − g(x)∣ ≤ ∑_{i=1}^{n} ∣gi (x̃) − gi (x)∣,

and so to prove the lemma it suffices to prove that

∣gi (x̃) − gi (x)∣ ≤ nc∣x̃ − x∣ for i = 1, . . . , n.

Thus we have reduced the problem from vector output to scalar output. To
create an environment of scalar input as well, make the line segment from x
to x̃ the image of a function of one variable,

γ ∶ [0, 1] Ð→ Rn , γ(t) = x + t(x̃ − x).

Note that γ(0) = x, γ(1) = x̃, and γ ′ (t) = x̃ − x for all t ∈ (0, 1). Fix any
i ∈ {1, . . . , n} and consider the restriction of gi to the segment, a scalar-valued
function of scalar input,

ϕ ∶ [0, 1] Ð→ R, ϕ(t) = (gi ○ γ)(t).

Thus ϕ(0) = gi (x) and ϕ(1) = gi (x̃). By the mean value theorem,

gi (x̃) − gi (x) = ϕ(1) − ϕ(0) = ϕ′ (t) for some t ∈ (0, 1),

and so since ϕ = gi ○ γ, the chain rule gives

gi (x̃) − gi (x) = (gi ○ γ)′ (t) = gi′ (γ(t))γ ′ (t) = gi′ (γ(t))(x̃ − x).

Because gi′ (γ(t)) is a row vector and x̃−x is a column vector, the last quantity
in the previous display is their inner product. Hence the display and the
Cauchy–Schwarz inequality give

∣gi (x̃) − gi (x)∣ ≤ ∣gi′ (γ(t))∣ ∣x̃ − x∣.

For each j, the jth entry of the vector gi′ (γ(t)) is the partial derivative
Dj gi (γ(t)). And we are given that ∣Dj gi (γ(t))∣ ≤ c, so the size bounds show
that ∣gi′ (γ(t))∣ ≤ nc and therefore

∣gi (x̃) − gi (x)∣ ≤ nc∣x̃ − x∣.

As explained at the beginning of the proof, the result follows. ⊔
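
Readers who like to experiment can spot-check the lemma numerically. The following Python sketch (illustrative only; the mapping g and the ball are chosen just for the demonstration) samples pairs of points in the closed unit ball of R2 and verifies the bound with n = 2 and c = 1:

```python
# Spot-check of the difference magnification lemma for
# g(x, y) = (sin(x + y), x*y) on the closed unit ball B in R^2.
# On B every partial derivative of g is bounded by c = 1, so the lemma
# predicts |g(x~) - g(x)| <= n^2 c |x~ - x| = 4 |x~ - x|.
import math, random

def g(v):
    x, y = v
    return (math.sin(x + y), x * y)

def norm(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

def random_ball_point():
    while True:  # rejection sampling from the square into the ball
        v = (random.uniform(-1, 1), random.uniform(-1, 1))
        if norm(v) <= 1:
            return v

random.seed(0)
worst_ratio = 0.0
for _ in range(10000):
    x, xt = random_ball_point(), random_ball_point()
    gx, gxt = g(x), g(xt)
    num = norm((gxt[0] - gx[0], gxt[1] - gx[1]))
    den = norm((xt[0] - x[0], xt[1] - x[1]))
    if den > 0:
        worst_ratio = max(worst_ratio, num / den)
print(worst_ratio, worst_ratio <= 4)  # the bound holds with room to spare
```

As the proof suggests, the factor n2 c is crude; the experiment shows that the true expansion for this g is much smaller.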


Exercises

5.1.1. Let x ∈ B(a; ε) and let δ = ε − ∣x − a∣. Explain why δ > 0 and why
B(x; δ) ⊂ B(a; ε).

5.1.2. Show that a subset of Rn is open if and only if each of its points is
interior.
5.1.3. Prove that every closed ball B is indeed a closed set, as is its boundary
∂B. Show that every closed ball and its boundary are also bounded, hence
compact.
5.1.4. Find a continuous function f ∶ Rn Ð→ Rm and an open set A ⊂ Rn such
that the image f (A) ⊂ Rm of A under f is not open. Feel free to choose n
and m.
5.1.5. Define f ∶ R Ð→ R by f (x) = x3 − 3x. Compute f (−1/2). Find
f −1 ((0, 11/8)), f −1 ((0, 2)), f −1 ((−∞, −11/8) ∪ (11/8, ∞)). Does f −1 exist?
5.1.6. Show that for f ∶ Rn Ð→ Rm and B ⊂ Rm , the inverse image of the
complement is the complement of the inverse image,

f −1 (B c ) = f −1 (B)c .

Does the analogous formula hold for forward images?


5.1.7. If f ∶ Rn Ð→ Rm is continuous and B ⊂ Rm is closed, show that f −1 (B)
is closed. What does this say about the level sets of continuous functions?
5.1.8. Prove the converse to Theorem 5.1.2: if f ∶ A Ð→ Rm (where A ⊂ Rn is
open) is such that for every open W ⊂ Rm also f −1 (W ) ⊂ A is open, then f is
continuous.
5.1.9. Let a and b be real numbers with a < b. Let n > 1, and suppose that
the mapping g ∶ [a, b] Ð→ Rn is continuous and that g is differentiable on
the open interval (a, b). It is tempting to generalize the mean value theorem
(Theorem 1.2.3) to the assertion

g(b) − g(a) = g ′ (c)(b − a) for some c ∈ (a, b). (5.1)

The assertion is grammatically meaningful, since it posits an equality between two n-vectors. The assertion would lead to a slight streamlining of the proof
of Lemma 5.1.3, since there would be no need to reduce to scalar output.
However, the assertion is false.
(a) Let g ∶ [0, 2π] Ð→ R2 be g(t) = (cos t, sin t). Show that (5.1) fails for
this g. Describe the situation geometrically.
(b) Let g ∶ [0, 2π] Ð→ R3 be g(t) = (cos t, sin t, t). Show that (5.1) fails for
this g. Describe the situation geometrically.
(c) Here is an attempt to prove (5.1): Let g = (g1 , . . . , gn ). Since each gi is
scalar-valued, we have for i = 1, . . . , n by the mean value theorem,

gi (b) − gi (a) = gi′ (c)(b − a) for some c ∈ (a, b).

Assembling the scalar results gives the desired vector result.


What is the error here?

5.2 The Inverse Function Theorem


Theorem 5.2.1 (Inverse function theorem). Let A be an open subset
of Rn , and let f ∶ A Ð→ Rn have continuous partial derivatives at every point
of A. Let a be a point of A. Suppose that det f ′ (a) ≠ 0. Then there exist an
open set V ⊂ A containing a and an open set W ⊂ Rn containing f (a) such
that f ∶ V Ð→ W has a continuously differentiable inverse f −1 ∶ W Ð→ V .
For each y = f (x) ∈ W , the derivative of the inverse is the inverse of the
derivative,
D(f −1 )y = (Dfx )−1 .

Before the proof, it is worth remarking that the formula for the derivative
of the local inverse, and the fact that the derivative of the local inverse is
continuous, are easy to establish once everything else is in place. If the local
inverse f −1 of f is known to exist and to be differentiable, then for every
x ∈ V the fact that the identity mapping is its own derivative combines with
the chain rule to say that

idn = D(idn )x = D(f −1 ○ f )x = D(f −1 )y ○ Dfx where y = f (x),

and similarly idn = Dfx ○(Df −1 )y , where this time idn is the identity mapping
on y-space. The last formula in the theorem follows. In terms of matrices, the
formula is
(f −1 )′ (y) = f ′ (x)−1 where y = f (x).
This formula combines with Proposition 4.3.4 (differentiability implies conti-
nuity) and Corollary 3.7.3 (the entries of the inverse matrix are continuous
functions of the entries of the matrix) to show that since the mapping is con-
tinuously differentiable and the local inverse is differentiable, the local inverse
is continuously differentiable: If y varies slightly, then so does x because f −1
is continuous, hence so does f ′ (x) because f ′ is continuous, hence so does
f ′ (x)−1 , which is (f −1 )′ (y). Thus we need to show only that the local inverse
exists and is differentiable.

Proof. The proof begins with a simplification. Let T = Dfa , a linear map-
ping from Rn to Rn that is invertible because its matrix f ′ (a) has nonzero
determinant. Let
f˜ = T −1 ○ f.
By the chain rule, the derivative of f˜ at a is

Df˜a = D(T −1 ○ f )a = D(T −1 )f (a) ○ Dfa = T −1 ○ T = idn .

Also, suppose we have a local inverse g̃ of f˜, so that

g̃ ○ f˜ = idn near a

and

f˜ ○ g̃ = idn near f˜(a).


The situation is shown in the following diagram, in which V is an open set containing a, W is an open set containing f (a), and W̃ is an open set containing T −1 (f (a)) = f˜(a):

V ──f──→ W ──T −1──→ W̃ ,   with T ∶ W̃ Ð→ W  and  f˜ = T −1 ○ f ∶ V Ð→ W̃ .

The diagram shows that the way to invert f locally, going from W back to V , is to proceed through W̃ : g = g̃ ○ T −1 . Indeed, since f = T ○ f˜,

g ○ f = (g̃ ○ T −1 ) ○ (T ○ f˜) = idn near a,

and, since T −1 (f (a)) = f˜(a),

f ○ g = (T ○ f˜) ○ (g̃ ○ T −1 ) = idn near f (a).

That is, to invert f , it suffices to invert f˜. And if g̃ is differentiable then so is g = g̃ ○ T −1 . The upshot is that we may prove the theorem for f˜ rather
than f . Equivalently, we may assume with no loss of generality that Dfa =
idn and therefore that f ′ (a) = In . This normalization will let us carry out
a clean, explicit computation in the following paragraph. (Note: With the
normalization complete, our use of the symbol g to denote a local inverse of f
now ends. The mapping to be called g in the following paragraph is unrelated
to the local inverse g in this paragraph.)
Next we find a closed ball B around a where the behavior of f is somewhat
controlled by the fact that f ′ (a) = In . More specifically, we will quantify
the idea that since f ′ (x) ≈ In for x near a, also f (x̃) − f (x) ≈ x̃ − x for
x, x̃ near a. Recall that the (i, j)th entry of In is δij and that det(In ) = 1.
As x varies continuously near a, the (i, j)th entry Dj fi (x) of f ′ (x) varies
continuously near δij , and so the scalar det f ′ (x) varies continuously near 1.
Since Dj fi (a) − δij = 0 and since det f ′ (a) = 1, applying the persistence of
inequality principle (Proposition 2.3.10) n2 + 1 times shows that there exists
a closed ball B about a small enough that

∣Dj fi (x) − δij ∣ < 1/(2n2 ) for all i, j ∈ {1, . . . , n} and x ∈ B (5.2)
and
det f ′ (x) ≠ 0 for all x ∈ B. (5.3)
Let g = f − idn , a differentiable mapping near a, whose Jacobian matrix at x,
g ′ (x) = f ′ (x) − In , has (i, j)th entry Dj gi (x) = Dj fi (x) − δij . Equation (5.2)
and Lemma 5.1.3 (with c = 1/(2n2 )) show that for every two points x and x̃ in B,

∣g(x̃) − g(x)∣ ≤ (1/2)∣x̃ − x∣,
and therefore, since f = idn + g,

∣f (x̃) − f (x)∣ = ∣(x̃ − x) + (g(x̃) − g(x))∣
             ≥ ∣x̃ − x∣ − ∣g(x̃) − g(x)∣
             ≥ ∣x̃ − x∣ − (1/2)∣x̃ − x∣ (by the previous display)
             = (1/2)∣x̃ − x∣.

The previous display shows that f is injective on B, i.e., every two distinct
points of B are taken by f to distinct points of Rn . For future reference, we
note that the result of the previous calculation can be rearranged as

∣x̃ − x∣ ≤ 2∣f (x̃) − f (x)∣ for all x, x̃ ∈ B. (5.4)

The boundary ∂B of B is compact, and so is the image set f (∂B) because f is continuous. Also, f (a) ∉ f (∂B) because f is injective on B. And f (a) is not
a limit point of f (∂B) because f (∂B), being compact, is closed. Consequently,
some open ball B(f (a), 2ε) contains no point from f (∂B). (See Figure 5.3.)


Figure 5.3. Ball about f (a) away from f (∂B)

Let W = B(f (a), ε), the open ball with radius less than half the distance
from f (a) to f (∂B). Thus

∣f (a) − y∣ < ∣f (x) − y∣ for all y ∈ W and x ∈ ∂B. (5.5)

That is, for every point y of W , f (a) is closer to y than every point of f (∂B)
is close to y. (See Figure 5.4.)
The goal now is to exhibit a mapping on W that inverts f near a. In
other words, the goal is to show that for each y ∈ W , there exists a unique x
interior to B such that f (x) = y. So fix an arbitrary y ∈ W . Define a function
∆ ∶ B Ð→ R that measures for each x the square of the distance between f (x)
and y,

Figure 5.4. Ball closer to f (a) than to f (∂B)

∆(x) = ∣f (x) − y∣2 = ∑_{i=1}^{n} (fi (x) − yi )2 .
The idea is to show that for one and only one x near a, ∆(x) = 0. Because the
modulus is always nonnegative, the x we seek must minimize ∆. As mentioned
at the beginning of the chapter, this simple idea inside all the technicalities
is the heart of the proof: the x to be taken to y by f must be the x that is
taken closest to y by f .
The function ∆ is continuous and B is compact, so the extreme value
theorem guarantees that ∆ does indeed take a minimum on B. Condition (5.5)
guarantees that ∆ takes no minimum on the boundary ∂B. Therefore the
minimum of ∆ must occur at an interior point x of B; this interior point x
must be a critical point of ∆, so all partial derivatives of ∆ vanish at x. Thus
by the chain rule,
0 = Dj ∆(x) = 2 ∑_{i=1}^{n} (fi (x) − yi )Dj fi (x) for j = 1, . . . , n.

This condition is equivalent to the matrix equation


⎡ D1 f1 (x) ⋯ D1 fn (x) ⎤ ⎡ f1 (x) − y1 ⎤   ⎡ 0 ⎤
⎢     ⋮      ⋱      ⋮    ⎥ ⎢      ⋮       ⎥ = ⎢ ⋮ ⎥ ,
⎣ Dn f1 (x) ⋯ Dn fn (x) ⎦ ⎣ fn (x) − yn ⎦   ⎣ 0 ⎦
or
f ′ (x)T (f (x) − y) = 0n .
But det f ′ (x)T = det f ′ (x) ≠ 0 by condition (5.3), so f ′ (x)T is invertible, and
the only solution of the equation is f (x) − y = 0n . Thus our x is the desired x
interior to B such that f (x) = y. And there is only one such x, because f is
injective on B. We no longer need the boundary ∂B, whose role was to make
a set compact. In sum, we now know that f is injective on B and that f (B)
contains W .
Let V = f −1 (W ) ∩ B, the set of all points x ∈ B such that f (x) ∈ W .
(See Figure 5.5.) By the inverse image characterization of continuity (Theo-
rem 5.1.2), V is open. We have established that f ∶ V Ð→ W is inverted by
f −1 ∶ W Ð→ V .

Figure 5.5. The sets V and W of the inverse function theorem

The last thing to prove is that f −1 is differentiable on W . Again, reducing the problem makes it easier. By (5.3), the condition det f ′ (x) ≠ 0 is in effect
at each x ∈ V . Therefore a is no longer a distinguished point of V , and it
suffices to prove that the local inverse f −1 is differentiable at f (a). To reduce
the problem to working at the origin, consider the mapping f˜ defined by the
formula f˜(x) = f (x + a) − b. Because f (a) = b, it follows that f˜(0n ) = 0n ,
and since f˜ is f up to prepended and postpended translations, f˜ is locally
invertible at 0n and its derivative there is Df˜0 = Dfa = idn . The upshot is
that in proving that f −1 is differentiable at f (a), there is no loss of generality
in normalizing to a = 0n and f (a) = 0n while also retaining the normalization
that Dfa is the identity mapping.
So now we have that f (0n ) = 0n = f −1 (0n ) and

f (h) − h = o(h),

and we want to show that

f −1 (k) − k = o(k).

For every point k ∈ W , let h = f −1 (k). Note that ∣h∣ ≤ 2∣k∣ by condition (5.4)
with x̃ = h and x = 0n , so that f (x̃) = k and f (x) = 0n , and thus h = O(k). So
now we have

f −1 (k) − k = −(f (h) − h) = −o(h) = o(h) = o(O(k)) = o(k),

exactly as desired. That is, f −1 is indeed differentiable at 0n with the identity mapping for its derivative. For an unnormalized proof that f −1 is differentiable on W , see Exercise 5.2.9. ⊔

Note the range of mathematical skills that this proof of the inverse func-
tion theorem required. The ideas were motivated and guided by pictures, but
the actual argument was symbolic. At the level of fine detail, we normalized
the derivative to the identity in order to reduce clutter, we made an adroit
choice of quantifier in choosing a small enough B to apply the difference mag-
nification lemma with c = 1/(2n2 ), and we used the full triangle inequality to
obtain (5.4). This technique sufficed to prove that f is locally injective. Since
the proof of the difference magnification lemma used the mean value theorem
many times, the role of the mean value theorem in the multivariable inverse
function theorem is thus similar to its role in the one-variable proof reviewed
at the beginning of this chapter. However, while the one-variable proof that
f is locally surjective relied on the intermediate value theorem, the multivari-
able argument was far more elaborate. The idea was that the putative x taken
by f to a given y must be the actual x taken by f closest to y. We exploited
this idea by working in broad strokes:
• The extreme value theorem from Chapter 2 guaranteed that there is such
an actual x.
• The critical point theorem and then the chain rule from Chapter 4 de-
scribed necessary conditions associated to x.
• And finally, the linear invertibility theorem from Chapter 3 showed that
f (x) = y as desired. Very satisfyingly, the hypothesis that the derivative is
invertible sealed the argument that the mapping itself is locally invertible.
Indeed, the proof of local surjectivity used nearly every significant result from
Chapters 2 through 4 of these notes.
For an example, define f ∶ R2 Ð→ R2 by f (x, y) = (x3 − 2xy 2 , x + y). Is f
locally invertible at (1, −1)? If so, what is the best affine approximation to the
inverse near f (1, −1)? To answer the first question, calculate the Jacobian
f ′ (1, −1) = [ 3x2 − 2y 2   −4xy ; 1   1 ]∣(x,y)=(1,−1) = [ 1 4 ; 1 1 ].

This matrix is invertible with inverse f ′ (1, −1)−1 = (1/3)[ −1 4 ; 1 −1 ]. Therefore f
is locally invertible at (1, −1), and the affine approximation to f −1 near
f (1, −1) = (−1, 0) is

f −1 (−1 + h, 0 + k) ≈ [ 1 ; −1 ] + (1/3)[ −1 4 ; 1 −1 ][ h ; k ] = (1 − h/3 + 4k/3, −1 + h/3 − k/3).

The actual inverse function f −1 about (−1, 0) may not be clear, but the inverse
function theorem guarantees its existence, and its affine approximation is easy
to find.
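
For readers who want to see the approximation in action, here is a brief Python sketch (illustrative only, using no outside libraries): it feeds the approximated inverse back through f and checks that the input point is nearly recovered.

```python
# Check the affine approximation to f^{-1} near f(1, -1) = (-1, 0),
# where f(x, y) = (x^3 - 2*x*y^2, x + y).
def f(x, y):
    return (x ** 3 - 2 * x * y ** 2, x + y)

def approx_inverse(h, k):
    # (1, -1) + (1/3)[-1 4; 1 -1](h, k), from the computation above
    return (1 - h / 3 + 4 * k / 3, -1 + h / 3 - k / 3)

h, k = 0.01, -0.02
x, y = approx_inverse(h, k)
u, v = f(x, y)
print(u - (-1 + h))  # small of higher order in (h, k)
print(v - k)         # exactly 0, since f2(x, y) = x + y is already linear
```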

Exercises

5.2.1. Define f ∶ R2 Ð→ R2 by f (x, y) = (x3 + 2xy + y 2 , x2 + y). Is f locally


invertible at (1, 1)? If so, what is the best affine approximation to the inverse
near f (1, 1)?

5.2.2. Same question for f (x, y) = (x2 − y 2 , 2xy) at (2, 1).



5.2.3. Same question for C(r, θ) = (r cos θ, r sin θ) at (1, 0).


5.2.4. Same question for C(ρ, θ, φ) = (ρ cos θ sin φ, ρ sin θ sin φ, ρ cos φ) at (1, 0, π/2).
5.2.5. At what points (a, b) ∈ R2 is each of the following mappings guaranteed
to be locally invertible by the inverse function theorem? In each case, find the
best affine approximation to the inverse near f (a, b).
(a) f (x, y) = (x + y, 2xy 2 ).
(b) f (x, y) = (sin x cos y + cos x sin y, cos x cos y − sin x sin y).
5.2.6. Define f ∶ R2 Ð→ R2 by f (x, y) = (ex cos y, ex sin y). Show that f is
locally invertible at each point (a, b) ∈ R2 , but that f is not globally invertible.
Let (a, b) = (0, π3 ); let (c, d) = f (a, b); let g be the local inverse to f near (a, b).
Find an explicit formula for g, compute g ′ (c, d), and verify that it agrees with
f ′ (a, b)−1 .
5.2.7. If f and g are functions from R3 to R, show that the mapping F =
(f, g, f + g) ∶ R3 Ð→ R3 does not have a differentiable local inverse anywhere.
5.2.8. Define f ∶ R Ð→ R by

f (x) = ⎧ x + 2x2 sin(1/x)  if x ≠ 0,
        ⎩ 0                 if x = 0.

(a) Show that f is differentiable at x = 0 and that f ′ (0) ≠ 0. (Because this


is a one-dimensional problem, you may verify the old definition of derivative
rather than the new one.)
(b) Despite the result from (a), show that f is not locally invertible at
x = 0. Why doesn’t this contradict the inverse function theorem?
5.2.9. The proof of the inverse function theorem ended with a normalized
argument that the inverse function on W is again differentiable. Supply ex-
planation as necessary to the unnormalized version of the argument, as follows.
Let y be a fixed point of W , and let y + k lie in W as well. Take x = f −1 (y)
in V , and let f −1 (y + k) = x + h, thus defining h = f −1 (y + k) − f −1 (y). We
know that f ′ (x) is invertible and that

f (x + h) − f (x) − f ′ (x)h = o(h).

We want to show that

f −1 (y + k) − f −1 (y) − f ′ (x)−1 k = o(k).

Compute,

f −1 (y + k) − f −1 (y) − f ′ (x)−1 k = h − f ′ (x)−1 (f (x + h) − f (x))


= h − f ′ (x)−1 (f ′ (x)h + o(h))
= −f ′ (x)−1 o(h).

Using (5.4) yields ∣h∣ = ∣x + h − x∣ ≤ 2∣f (x + h) − f (x)∣ = 2∣k∣ = O(k), so we have

f −1 (y + k) − f −1 (y) − f ′ (x)−1 k = −f ′ (x)−1 o(O(k)) = −f ′ (x)−1 o(k).

Multiplication by the fixed matrix −f ′ (x)−1 is a linear mapping, and every linear mapping is O of its input. Altogether,

f −1 (y + k) − f −1 (y) − f ′ (x)−1 k = −f ′ (x)−1 o(k) = O(o(k)) = o(k),

as desired.

5.3 The Implicit Function Theorem


Let n and c be positive integers with c ≤ n, and let r = n − c. This section
addresses the following question:
When do c conditions on n variables locally specify c of the variables
in terms of the remaining r variables?
The symbols in this question will remain in play throughout this section. That
is:
• n = r + c is the total number of variables;
• c is the number of conditions, i.e., the number of constraints on the vari-
ables, and therefore the number of variables that might be dependent on
the others;
• and r is the number of remaining variables and therefore the number of
variables that might be free.
The word conditions (or constraints) provides a mnemonic for the symbol c,
and similarly remaining (or free) provides a mnemonic for r.
The question can be rephrased:
When is a level set locally a graph?
To understand the rephrasing, we begin by reviewing the idea of a level set,
given here in a slightly more general form than in Definition 4.8.3.

Definition 5.3.1 (Level set). Let g ∶ A Ð→ Rm (where A ⊂ Rn ) be a mapping. A level set of g is the set of points in A that map under g to some fixed
vector w in Rm ,
L = {v ∈ A ∶ g(v) = w}.
That is, L is the inverse image under g of the one-point set {w}.

Also, we review the argument in Section 4.8 that every graph is a level
set. Let A0 be a subset of Rr , and let f ∶ A0 Ð→ Rc be any mapping. Let
A = A0 × Rc (a subset of Rn ) and define a second mapping g ∶ A Ð→ Rc ,

g(x, y) = f (x) − y, (x, y) ∈ A0 × Rc .

Then the graph of f is

graph(f ) = {(x, y) ∈ A0 × Rc ∶ y = f (x)}


= {(x, y) ∈ A ∶ g(x, y) = 0c },

and this is the set of inputs to g that g takes to 0c , a level set of g as desired.
Now we return to rephrasing the question at the beginning of this section.
Let A be an open subset of Rn , and let a mapping g ∶ A Ð→ Rc have continuous
partial derivatives at every point of A. Points of A can be written

(x, y), x ∈ Rr , y ∈ Rc .

(Throughout this section, we routinely will view an n-vector as the concatenation of an r-vector and c-vector in this fashion.) Consider the level set

L = {(x, y) ∈ A ∶ g(x, y) = 0c }.

The question was whether, near a point (a, b) of L (where a ∈ Rr and b ∈ Rc ), the c scalar conditions g(x, y) = 0c on the n = c + r scalar entries of (x, y) define the c scalars of y in terms of the r scalars of x. That is, the question is whether the vector relation g(x, y) = 0c
for (x, y) near (a, b) is equivalent to a vector relation y = ϕ(x) for some
mapping ϕ that takes r-vectors near a to c-vectors near b. This is precisely
the question whether the level set L is locally the graph of such a mapping ϕ.
If the answer is yes, then we would like to understand ϕ as well as possible
by using the techniques of differential calculus. In this context we view the
mapping ϕ as implicit in the condition g = 0c , explaining the name of the
pending implicit function theorem.
The first phrasing of the question, whether c conditions on n variables
specify c of the variables in terms of the remaining r variables, is easy to
answer when the conditions are affine. Affine conditions take the matrix form
P v = w, where P ∈ Mc,n (R), v ∈ Rn , and w ∈ Rc , and P and w are fixed while
v is the vector of variables. Partition the matrix P into a left c × r block M
and a right square c × c block N , and partition the vector v into its first r
entries x and its last c entries y. Then the relation P v = w is

[M N ][ x ; y ] = w,

that is,
M x + N y = w.
Assume that N is invertible. Then subtracting M x from both sides and then
left multiplying by N −1 shows that the relation is

y = N −1 (w − M x).

Thus, when the right c × c submatrix of P is invertible, the relation P v = w explicitly specifies the last c variables of v in terms of the first r variables.
A similar statement applies to every invertible c × c submatrix of P and the
corresponding variables. A special case of this calculation, the linear case, will
be used throughout this section: for every M ∈ Mc,r (R), invertible N ∈ Mc (R),
h ∈ Rr , and k ∈ Rc ,

[M N ][ h ; k ] = 0c  ⇐⇒  k = −N −1 M h. (5.6)
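
Since this linear algebra recurs throughout the section, a short computational illustration may help. The sketch below (Python with numpy; the matrix P , the vector w, and the free values x are made-up sample data, not from the text) solves an affine system for its last c variables:

```python
# Solve affine constraints P v = w for the last c variables in terms of
# the first r: partition P = [M  N] and compute y = N^{-1}(w - M x).
import numpy as np

P = np.array([[1., 0., 2., 1., 1.],   # c = 2 conditions on n = 5 variables
              [2., 1., 0., 1., 3.]])
w = np.array([4., 1.])
r = 3                                 # so c = 2 remaining columns
M, N = P[:, :r], P[:, r:]             # left c x r block, right c x c block

x = np.array([1., -2., 0.5])          # any values for the r free variables
y = np.linalg.solve(N, w - M @ x)     # the c dependent variables
v = np.concatenate([x, y])
print(np.allclose(P @ v, w))          # True: the constraints are satisfied
```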

When the conditions are nonaffine, the situation is not so easy to analyze.
However:
• The problem is easy to linearize. That is, given a point (a, b) (where a ∈ Rr
and b ∈ Rc ) on the level set {(x, y) ∶ g(x, y) = w}, differential calculus
tells us how to describe the tangent object to the level set at the point.
Depending on the value of r, the tangent object will be a line, or a plane,
or higher-dimensional. But regardless of its dimension, it is described by
the linear conditions g ′ (a, b)v = 0c , and these conditions take the form
that we have just considered,

[M N ][ h ; k ] = 0c , M ∈ Mc,r (R), N ∈ Mc (R), h ∈ Rr , k ∈ Rc .

Thus if N is invertible then we can solve the linearized problem as in (5.6).


• The inverse function theorem says:
If the linearized inversion problem is solvable then the actual inver-
sion problem is locally solvable.
With a little work, we can use the inverse function theorem to establish
the implicit function theorem:
If the linearized level set is a graph then the actual level set is locally
a graph.
And in fact, the implicit function theorem will imply the inverse function
theorem as well.
For example, the unit circle C is described by one constraint on two vari-
ables (n = 2 and c = 1, so r = 1),

x2 + y 2 = 1.

Globally (in the large), this relation specifies neither x as a function of y nor
y as a function of x. It can’t: the circle is visibly not the graph of a function
of either sort—recall the vertical line test to check whether a curve is the
graph of a function y = ϕ(x), and analogously for the horizontal line test. The
situation does give a function, however, if one works locally (in the small) by
looking only at part of the circle at a time. Every arc in the bottom half of
the circle is described by the function

y = ϕ(x) = −√(1 − x2 ).

Similarly, every arc in the right half is described by

x = ψ(y) = √(1 − y 2 ).

Every arc in the bottom right quarter is described by both functions. (See
Figure 5.6.) On the other hand, no arc of the circle about the point (a, b) =
(1, 0) is described by a function y = ϕ(x), and no arc about (a, b) = (0, 1) is
described by a function x = ψ(y). (See Figure 5.7.) Thus, about some points
(a, b), the circle relation x2 + y 2 = 1 contains the information to specify each
variable as a function of the other. These functions are implicit in the relation.
About other points, the relation implicitly defines one variable as a function
of the other, but not the second as a function of the first.

Figure 5.6. Arc of a circle

Figure 5.7. Trickier arcs of a circle

To bring differential calculus to bear on the situation, think of the circle as a level set. Specifically, it is a level set of the function g(x, y) = x2 + y 2 ,

C = {(x, y) ∶ g(x, y) = 1}.

Let (a, b) be a point on the circle. The derivative of g at the point is

g ′ (a, b) = [2a 2b] .

The tangent line to the circle at (a, b) consists of the points (a + h, b + k) such
that (h, k) is orthogonal to g ′ (a, b),

[2a 2b][ h ; k ] = 0.

That is,
2ah + 2bk = 0.
Thus, whenever b ≠ 0 we have

k = −(a/b)h,

showing that on the tangent line, the second coordinate is a linear function
of the first, and the function has derivative −a/b. And so on the circle it-
self near (a, b), plausibly the second coordinate is a function of the first as
well, provided that b ≠ 0. Note that indeed this argument excludes the two
points (1, 0) and (−1, 0), about which y is not an implicit function of x. But
about points (a, b) ∈ C where D2 g(a, b) ≠ 0, the circle relation should im-
plicitly define y as a function of x. And at such points (say, on the lower
half-circle), the function is explicitly

ϕ(x) = −√(1 − x2 ),

so that ϕ′ (x) = x/√(1 − x2 ) = −x/y (the last minus sign is present because the
square root is positive but y is negative) and in particular,

ϕ′ (a) = −a/b.

Thus ϕ′ (a) is exactly the slope that we found a moment earlier by solving
the linear problem g ′ (a, b)v = 0 where v = (h, k) is a column vector. That is,
using the constraint g(x, y) = 0 to set up and solve the linear problem, making
no reference in the process to the function ϕ implicitly defined by the con-
straint, we found the derivative ϕ′ (a) nonetheless. The procedure illustrates
the general idea of the pending implicit function theorem:
Constraining conditions locally define some variables implicitly in
terms of others, and the implicitly defined function can be differen-
tiated without being found explicitly.
(And returning to the circle example, yet another way to find the derivative
is to differentiate the relation x2 + y 2 = 1 at a point (a, b) about which we
assume that y = ϕ(x),

2a + 2bϕ′ (a) = 0,
so that again ϕ′ (a) = −a/b. The reader may recall from elementary calculus
that this technique is called implicit differentiation.)
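
A centered-difference computation confirms the derivative numerically; the following Python sketch (illustrative only) compares it against −a/b at a sample point of the lower half-circle:

```python
# On the lower unit circle, y = phi(x) = -sqrt(1 - x^2); check that
# phi'(a) = -a/b at (a, b) = (0.6, -0.8) by a centered difference.
import math

def phi(x):
    return -math.sqrt(1 - x * x)

a = 0.6
b = phi(a)                                     # b = -0.8
h = 1e-6
numerical = (phi(a + h) - phi(a - h)) / (2 * h)
print(numerical, -a / b)                       # both approximately 0.75
```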
It may help the reader to visualize the situation if we revisit the idea of
the previous paragraph more geometrically. Since C is a level set of g, the
gradient g ′ (a, b) is orthogonal to C at the point (a, b). When g ′ (a, b) has a
nonzero y-component, C should locally have a big shadow on the x-axis, from
which there is a function ϕ back to C. (See Figure 5.8, in which the arrow
drawn is quite a bit shorter than the true gradient, for graphical reasons.)

Figure 5.8. Nonhorizontal gradient and x-shadow

Another set defined by a constraining relation is the unit sphere, also specified as a level set. Let

g(x, y, z) = x2 + y 2 + z 2 .

Then the sphere is


S = {(x, y, z) ∶ g(x, y, z) = 1}.
Imposing one condition on three variables should generally leave two of them
free (say, the first two) and define the remaining one in terms of the free ones.
That is, n = 3 and c = 1, so that r = 2. And indeed, the sphere implicitly
describes z as a function ϕ(x, y) about every point p = (a, b, c) ∈ S off the
equator, where c = 0. (So for this example we have just overridden the general
use of c as the number of constraints; here c is the third coordinate of a point on
the level set.) The equator is precisely the points where D3 g(p) = 2c vanishes.
Again geometry makes this plausible. The gradient g ′ (p) is orthogonal to S
at p. When g ′ (p) has a nonzero z-component, S should locally have a big
shadow in the (x, y)-plane from which there is a function back to S and then
to the z-axis. (See Figure 5.9.)
Figure 5.9. Function from the (x, y)-plane to the z-axis via the sphere

The argument based on calculus and linear algebra to suggest that near
points (a, b, c) ∈ S such that D3 g(a, b, c) ≠ 0, z is implicitly a function ϕ(x, y)
on S is similar to the case of the circle. The derivative of g at the point is

g ′ (a, b, c) = [2a 2b 2c] .

The tangent plane to the sphere at (a, b, c) consists of the points (a + h, b + k, c + ℓ) such that (h, k, ℓ) is orthogonal to g ′ (a, b, c),
[2a 2b 2c][ h ; k ; ℓ ] = 0.
That is,
2ah + 2bk + 2cℓ = 0.
Thus, whenever c ≠ 0 we have

ℓ = −(a/c)h − (b/c)k,

showing that on the tangent plane, the third coordinate is a linear function of
the first two, and the function has partial derivatives −a/c and −b/c. And so
on the sphere itself near (a, b, c), plausibly the third coordinate is a function of
the first two as well, provided that c ≠ 0. This argument excludes points on the
equator, about which z is not an implicit function of (x, y). But about points
(a, b, c) ∈ S where D3 g(a, b, c) ≠ 0, the sphere relation should implicitly define z
as a function of (x, y). And at such points (say, on the upper hemisphere),
the function is explicitly

ϕ(x, y) = √(1 − x2 − y 2 ),

so that ϕ′ (x, y) = −[x/√(1 − x2 − y 2 )  y/√(1 − x2 − y 2 )] = −[x/z y/z], and in particular,

ϕ′ (a, b) = −[a/c b/c].
The partial derivatives are exactly as predicted by solving the linear problem
g ′ (a, b, c)v = 0, where v = (h, k, ℓ) is a column vector, with no reference to ϕ.
(As with the circle, a third way to find the derivative is to differentiate the
sphere relation x2 + y 2 + z 2 = 1 at a point (a, b, c) about which we assume that
z = ϕ(x, y), differentiating with respect to x and then with respect to y,

2a + 2cD1 ϕ(a, b) = 0, 2b + 2cD2 ϕ(a, b) = 0.

Again we obtain ϕ′ (a, b) = −[a/c b/c].)


Next consider the intersection of the unit sphere and the 45-degree plane
z = −y. The intersection is a great circle, again naturally described as a level
set. That is, if we consider the mapping

g ∶ R3 Ð→ R2 , g(x, y, z) = (x2 + y 2 + z 2 , y + z),

then the great circle is a level set of g,

GC = {(x, y, z) ∶ g(x, y, z) = (1, 0)}.

The two conditions on the three variables should generally leave one variable
(say, the first one) free and define the other two variables in terms of it. That
is, n = 3 and c = 2, so that r = 1. Indeed, GC is a circle that is orthogonal to
the plane of the page, and away from its two points (±1, 0, 0) that are farthest
in and out of the page, it does define (y, z) locally as functions of x. (See
Figure 5.10.) This time we first proceed by linearizing the problem to obtain
the derivatives of the implicit function without finding the implicit function
ϕ = (ϕ1 , ϕ2 ) itself. The derivative matrix of g at p is

g ′ (a, b, c) = [ 2a 2b 2c ; 0 1 1 ].

The level set GC is defined by the condition that g(x, y, z) remain constant
at (1, 0) as (x, y, z) varies. Thus the tangent line to GC at a point (a, b, c)
consists of points (a + h, b + k, c + ℓ) such that neither component function of g
is instantaneously changing in the (h, k, ℓ)-direction,
[ 2a 2b 2c ; 0 1 1 ][ h ; k ; ℓ ] = [ 0 ; 0 ].
The right 2 × 2 submatrix of g ′ (a, b, c) has nonzero determinant whenever
b ≠ c, that is, at all points of GC except the two aforementioned ex-
treme points (±1, 0, 0). Assuming that b ≠ c, let M denote the first column
of g ′ (a, b, c) and let N denote the right 2 × 2 submatrix. Then by (5.6), the
linearized problem has solution

[ k ; ℓ ] = −N −1 M h = (1/(2(c − b)))[ 1 −2c ; −1 2b ][ 2a ; 0 ]h = [ −a/(2b) ; −a/(2c) ]h

(the condition c = −b was used in the last step), or

k = −(a/(2b)) h,   ℓ = −(a/(2c)) h. (5.7)
And so for all points (a + h, b + k, c + ℓ) on the tangent line to GC at (a, b, c),
the last two coordinate-offsets k and ℓ are specified in terms of the first co-
ordinate offset h via (5.7), and the component functions have partial deriva-
tives −a/(2b) and −a/(2c). (And as with the circle and the sphere, the two
partial derivatives can be obtained by implicit differentiation as well.)

Figure 5.10. y and z locally as functions of x on a great circle

To make the implicit function in the great circle relations explicit, note
that near the point p = (a, b, c) in the figure,
(y, z) = (ϕ1 (x), ϕ2 (x)) = ( −√((1 − x2 )/2), √((1 − x2 )/2) ).

At p the component functions have derivatives

ϕ′1 (a) = a/(2√((1 − a2 )/2))   and   ϕ′2 (a) = −a/(2√((1 − a2 )/2)).
But 1 − a2 = 2b2 = 2c2 , and √(b2 ) = −b since b < 0, while √(c2 ) = c since c > 0, so the derivatives are

ϕ′1 (a) = −a/(2b)   and   ϕ′2 (a) = −a/(2c).
Predictably enough, the implicitly calculated values displayed in (5.7) are
matched by these component derivatives of the true mapping ϕ that defines
y and z in terms of x for points near p on GC.
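
The match can be tested numerically as well. This Python sketch (illustrative only) differences the explicit ϕ1 and ϕ2 and compares against the values −a/(2b) and −a/(2c) from (5.7):

```python
# On the great circle x^2 + y^2 + z^2 = 1, y + z = 0, near the point p,
# y = phi1(x) = -sqrt((1 - x^2)/2) and z = phi2(x) = sqrt((1 - x^2)/2).
import math

def phi1(x):
    return -math.sqrt((1 - x * x) / 2)

def phi2(x):
    return math.sqrt((1 - x * x) / 2)

a = 0.5
b, c = phi1(a), phi2(a)
h = 1e-6
print((phi1(a + h) - phi1(a - h)) / (2 * h), -a / (2 * b))  # values agree
print((phi2(a + h) - phi2(a - h)) / (2 * h), -a / (2 * c))  # values agree
```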
In the examples of the circle, the sphere, and the great circle, the functions
implicit in the defining relations could in fact be found explicitly. But in
general, relations may snarl the variables so badly that expressing some as
functions of the others is beyond our algebraic capacity. For instance, do the
simultaneous conditions

y 2 = ez cos(y + x2 ) and y 2 + z 2 = x2 (5.8)

define y and z implicitly in terms of x near the point (1, −1, 0)? (This point
meets both conditions.) Answering this directly by solving for y and z is
manifestly unappealing. But linearizing the problem is easy. At our point
(1, −1, 0), the mapping

g(x, y, z) = (y 2 − ez cos(y + x2 ), y 2 + z 2 − x2 )

has derivative matrix

g ′ (1, −1, 0) = [ 2xez sin(y + x2 )   2y + ez sin(y + x2 )   −ez cos(y + x2 ) ; −2x   2y   2z ]∣(x,y,z)=(1,−1,0) = [ 0 −2 −1 ; −2 −2 0 ].

Since the right 2 × 2 determinant is nonzero, we expect that indeed y and z are implicit functions ϕ1 (x) and ϕ2 (x) near (1, −1, 0). Furthermore, solving the linearized problem as in the previous example with M and N similarly
defined suggests that if (y, z) = ϕ(x) = (ϕ1 (x), ϕ2 (x)) then

ϕ′ (1) = −N −1 M = −[ −2 −1 ; −2 0 ]−1 [ 0 ; −2 ] = (1/2)[ 0 1 ; 2 −2 ][ 0 ; −2 ] = [ −1 ; 2 ].

Thus for a point (x, y, z) = (1 + h, −1 + k, 0 + ℓ) near (1, −1, 0) satisfying conditions (5.8), we expect that (k, ℓ) ≈ (−h, 2h), i.e.,

for x = 1 + h, (y, z) ≈ (−1, 0) + (−h, 2h).

The implicit function theorem fulfills these expectations.
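
Indeed, a numerical experiment bears the expectations out. The sketch below (Python with numpy, illustrative only) solves conditions (5.8) for (y, z) at x = 1 + h by Newton's method, starting from the known point's (y, z) = (−1, 0):

```python
# Solve conditions (5.8) for (y, z) at x = 1 + h by Newton's method and
# compare with the implicit-function prediction (y, z) ~ (-1 - h, 2h).
import numpy as np

def g(x, y, z):
    return np.array([y**2 - np.exp(z) * np.cos(y + x**2),
                     y**2 + z**2 - x**2])

def J_yz(x, y, z):  # the partial derivatives of g in y and z only
    return np.array([[2*y + np.exp(z) * np.sin(y + x**2),
                      -np.exp(z) * np.cos(y + x**2)],
                     [2*y, 2*z]])

h = 0.01
x = 1 + h
yz = np.array([-1.0, 0.0])
for _ in range(20):
    yz = yz - np.linalg.solve(J_yz(x, *yz), g(x, *yz))
print(yz)   # approximately (-1 - h, 2h) = (-1.01, 0.02)
```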



Theorem 5.3.2 (Implicit function theorem). Let c and n be positive integers with n > c, and let r = n − c. Let A be an open subset of Rn , and let
g ∶ A Ð→ Rc have continuous partial derivatives at every point of A. Consider
the level set
L = {v ∈ A ∶ g(v) = 0c }.
Let p be a point of L, i.e., let g(p) = 0c . Let p = (a, b) where a ∈ Rr and
b ∈ Rc , and let g ′ (p) = [M N ] where M is the left c × r submatrix and N is
the remaining right square c × c submatrix.
If det N ≠ 0 then the level set L is locally a graph near p. That is, the
condition g(x, y) = 0c for (x, y) near (a, b) implicitly defines y as a function
y = ϕ(x) where ϕ takes r-vectors near a to c-vectors near b, and in particular
ϕ(a) = b. The function ϕ is differentiable at a with derivative matrix

ϕ′ (a) = −N −1 M.

Hence ϕ is well approximated near a by its affine approximation,

ϕ(a + h) ≈ b − N −1 M h.

We make three remarks before the proof.


• The condition g(x, y) = 0c could just as easily be g(x, y) = w for every fixed
point w ∈ Rc , as in our earlier examples. Normalizing to w = 0c amounts
to replacing g by g − w (with no effect on g ′ ), which we do to tidy up the
statement of the theorem.
• The implicit function theorem gives no information when det N = 0. In
this case, the condition g(x, y) = 0c may or may not define y in terms of x.
• While the theorem strictly addresses only whether the last c of n variables
subject to c conditions depend on the first r variables, it can be suitably
modified to address whether any c variables depend on the remaining ones
by checking the determinant of a suitable c × c submatrix of g ′ (p). The
modification is merely a matter of reindexing or permuting the variables,
not worth writing down formally in cumbersome notation, but the reader
should feel free to use the modified version.

Proof. Examining the derivative has already shown the theorem’s plausibility
in specific instances. Shoring up these considerations into a proof is easy with
a well-chosen change of variables and the inverse function theorem. For the
change of variables, define
G ∶ A Ð→ Rn
as follows: for all x ∈ Rr and y ∈ Rc such that (x, y) ∈ A,

G(x, y) = (x, g(x, y)).

Note that G incorporates g, but unlike g it is a map between spaces of the same dimension n. Note also that the augmentation that changes g into G is
highly reversible, being the identity mapping on the x-coordinates. That is, it
is easy to recover g from G. The mapping G affects only y-coordinates, and
it is designed to take the level set L = {(x, y) ∈ A ∶ g(x, y) = 0c } to the x-axis.
(See Figure 5.11, in which the inputs and the outputs of G are shown in the
same copy of Rn .)

Figure 5.11. Mapping A to Rn and the constrained set to x-space

The mapping G is differentiable at the point p = (a, b) with derivative matrix

G′ (a, b) = [ Ir  0r×c ; M  N ] ∈ Mn (R).
This matrix has determinant det G′ (a, b) = det N ≠ 0, and so by the inverse
function theorem, G has a local inverse mapping Φ defined near the point
G(a, b) = (a, 0c ). (See Figure 5.12.) Since the first r components of G are the
identity mapping, the same holds for the inverse. That is, the inverse takes
the form
Φ(x, y) = (x, φ(x, y)),
where φ maps n-vectors near (a, 0c ) to c-vectors near b. The inversion criterion
is that for all (x, y) near (a, b) and all (x, ỹ) near (a, 0c ),
G(x, y) = (x, ỹ) ⇐⇒ (x, y) = Φ(x, ỹ).
Equivalently, since neither G nor Φ affects x-coordinates, for all x near a, y
near b, and ỹ near 0c ,
g(x, y) = ỹ ⇐⇒ y = φ(x, ỹ). (5.9)
Also by the inverse function theorem and a short calculation,

Φ′ (a, 0c ) = G′ (a, b)−1 = [ Ir  0r×c ; −N −1 M  N −1 ].
Figure 5.12. Local inverse of G

Now we can exhibit the desired mapping implicit in the original g. Define
a mapping
ϕ(x) = φ(x, 0c ) for x near a. (5.10)
The idea is that locally this lifts the x-axis to the level set L where g(x, y) = 0c
and then projects horizontally to the y-axis. (See Figure 5.13.) For every (x, y)
near (a, b), a specialization of condition (5.9) combines with the definition
(5.10) of ϕ to give
g(x, y) = 0c ⇐⇒ y = ϕ(x).
This equivalence exhibits y as a local function of x on the level set of g, as
desired. And since by definition (5.10), ϕ is the last c component functions
of Φ restricted to the first r inputs to Φ, the derivative ϕ′ (a) is exactly the
lower left c × r block of Φ′ (a, 0c ), which is −N −1 M . This completes the proof.

Thus the implicit function theorem follows easily from the inverse function
theorem. The converse implication is even easier. Imagine a scenario in which
somehow we know the implicit function theorem but not the inverse function
theorem. Let f ∶ A Ð→ Rn (where A ⊂ Rn ) be a mapping that satisfies the
hypotheses for the inverse function theorem at a point a ∈ A. That is, f is
continuously differentiable in an open set containing a, and det f ′ (a) ≠ 0.
Define a mapping

g ∶ A × Rn Ð→ Rn , g(x, y) = f (x) − y.

(This mapping should look familiar from the beginning of this section.) Let
b = f (a). Then g(a, b) = 0, and the derivative matrix of g at (a, b) is

g ′ (a, b) = [f ′ (a) −In ] .


Figure 5.13. The implicit mapping from x-space to y-space via the level set

Since f ′ (a) is invertible, we may apply the implicit function theorem, with
the roles of c, r, and n in the theorem taken by the values n, n, and 2n here,
and with the theorem modified as in the third remark before its proof so that
we are checking whether the first n variables depend on the last n values. The
theorem supplies us with a differentiable mapping ϕ defined for values of y
near b such that for all (x, y) near (a, b),

g(x, y) = 0 ⇐⇒ x = ϕ(y).

But by the definition of g, this equivalence is

y = f (x) ⇐⇒ x = ϕ(y).

That is, ϕ inverts f . Also by the implicit function theorem, ϕ is differentiable


at b with derivative

ϕ′ (b) = −f ′ (a)−1 (−In ) = f ′ (a)−1

(as it must be), and we have recovered the inverse function theorem. In a
nutshell, the argument converts the graph y = f (x) into a level set g(x, y) = 0,
and then the implicit function theorem says that locally the level set is also
the graph of x = ϕ(y). (See Figure 5.14.)
Rederiving the inverse function theorem so easily from the implicit function
theorem is not particularly impressive, since proving the implicit function
theorem without citing the inverse function theorem would be just as hard as
the route we took of proving the inverse function theorem first. The point is
that the two theorems have essentially the same content.
We end this section with one more example. Consider the function

g ∶ R2 Ð→ R, g(x, y) = (x2 + y 2 )2 − x2 + y 2
Figure 5.14. The inverse function theorem from the implicit function theorem

and the corresponding level set, a curve in the plane,

L = {(x, y) ∈ R2 ∶ g(x, y) = 0}.

The implicit function theorem lets us analyze L qualitatively. The derivative matrix of g is

g ′ (x, y) = 4[x((x2 + y 2 ) − 1/2)   y((x2 + y 2 ) + 1/2)].

By the theorem, L is locally the graph of a function y = ϕ(x) except possibly at its points where y((x2 + y 2 ) + 1/2) = 0, which is to say y = 0. To find all
such points is to find all x such that g(x, 0) = 0. This condition is x4 − x2 = 0,
or x2 (x2 − 1) = 0, and so the points of L where locally it might not be the
graph of y = ϕ(x) are (0, 0) and (±1, 0). Provisionally we imagine L to have
vertical tangents at these points.
Similarly, L is locally the graph of a function x = ϕ(y) except possibly at
its points where x((x2 + y 2 ) − 1/2) = 0, which is to say x = 0 or x2 + y 2 = 1/2.
The condition g(0, y) = 0 is y 4 + y 2 = 0, whose only solution is y = 0. And
if x2 + y 2 = 1/2 then g(x, y) = 1/4 − x2 + y 2 = 3/4 − 2x2 , which vanishes for x = ±√(3/8), also determining y = ±√(1/8). Thus the points of L where locally it might not be the graph of x = ϕ(y) are (0, 0) and (±√(3/8), ±√(1/8)) with the
two signs independent. Provisionally we imagine L to have horizontal tangents
at these points.
However, since also we imagined a vertical tangent at (0, 0), this point
requires further analysis. Keeping only the lowest-order terms of the relation
g(x, y) = 0 gives y 2 ≈ x2 , or y ≈ ±x, and so L looks like two crossing lines of
slopes ±1 near (0, 0). This analysis suffices to sketch L, as shown in Figure 5.15.
The level set L is called a lemniscate. The lemniscate originated in astronomy,
and the study of its arc length led to profound mathematical ideas by Gauss,
Abel, and many others.
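
A quick numerical check (a Python sketch, illustrative only) confirms that the special points found above really lie on L:

```python
# Verify that the candidate vertical- and horizontal-tangent points of the
# lemniscate g(x, y) = (x^2 + y^2)^2 - x^2 + y^2 = 0 actually lie on it.
import math

def g(x, y):
    return (x*x + y*y)**2 - x*x + y*y

points = [(0.0, 0.0), (1.0, 0.0), (-1.0, 0.0),
          (math.sqrt(3/8), math.sqrt(1/8)),
          (math.sqrt(3/8), -math.sqrt(1/8)),
          (-math.sqrt(3/8), math.sqrt(1/8)),
          (-math.sqrt(3/8), -math.sqrt(1/8))]
for (x, y) in points:
    print((round(x, 4), round(y, 4)), abs(g(x, y)) < 1e-12)  # True each time
```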

Figure 5.15. Lemniscate

Exercises
5.3.1. Does the relation x2 + y + sin(xy) = 0 implicitly define y as a function
of x near the origin? If so, what is its best affine approximation? How about
x as a function of y and its affine approximation?
5.3.2. Does the relation xy − z ln y + exz = 1 implicitly define z as a function
of (x, y) near (0, 1, 1)? How about y as a function of (x, z)? When possible,
give the affine approximation to the function.
5.3.3. Do the simultaneous conditions x2 (y 2 + z 2 ) = 5 and (x − z)2 + y 2 = 2
implicitly define (y, z) as a function of x near (1, −1, 2)? If so, then what is
the function’s affine approximation?
5.3.4. Same question for the conditions x2 + y 2 = 4 and 2x2 + y 2 + 8z 2 = 8
near (2, 0, 0).
5.3.5. Do the simultaneous conditions xy + 2yz = 3xz and xyz + x − y = 1
implicitly define (x, y) as a function of z near (1, 1, 1)? How about (x, z) as a
function of y? How about (y, z) as a function of x? Give affine approximations
when possible.
5.3.6. Do the conditions xy 2 +xzu+yv 2 = 3 and u3 yz+2xv−u2 v 2 = 2 implicitly
define (u, v) in terms of (x, y, z) near the point (1, 1, 1, 1, 1)? If so, what is the
derivative matrix of the implicitly defined mapping at (1, 1, 1)?
5.3.7. Do the conditions x2 + yu + xv + w = 0 and x + y + uvw = −1 implicitly
define (x, y) in terms of (u, v, w) near (x, y, u, v, w) = (1, −1, 1, 1, −1)? If so,
what is the best affine approximation to the implicitly defined mapping?
5.3.8. Do the conditions
2x + y + 2z + u − v = 1
xy + z − u + 2v = 1
yz + xz + u2 + v = 0
define the first three variables (x, y, z) as a function ϕ(u, v) near the point
(x, y, z, u, v) = (1, 1, −1, 1, 1)? If so, find the derivative matrix ϕ′ (1, 1).

5.3.9. Define g ∶ R2 Ð→ R by g(x, y) = 2x3 − 3x2 + 2y 3 + 3y 2 and let L be the level set {(x, y) ∶ g(x, y) = 0}. Find those points of L about which y need not
be defined implicitly as a function of x, and find the points about which x
need not be defined implicitly as a function of y. Describe L precisely—the
result should explain the points you found.

5.4 Lagrange Multipliers: Geometric Motivation and Specific Examples

How close does the intersection of the planes x + y + z = 1 and x − y + 2z = −1 in
R3 come to the origin? This question is an example of an optimization prob-
lem with constraints. The goal in such problems is to maximize or minimize
some function, but with relations imposed on its variables. Equivalently, the
problem is to optimize some function whose domain is a level set.
A geometric solution of the sample problem just given is that the planes
intersect in a line through the point p = (0, 1, 0) in the direction d = (1, 1, 1) ×
(1, −1, 2), so the point-to-line distance formula from Exercise 3.10.12 answers
the question. This method is easy and efficient.
A more generic method of solution is via substitution. The equations of
the constraining planes are x + y = 1 − z and x − y = −1 − 2z; adding gives
x = −3z/2, and subtracting gives y = 1 + z/2. To finish the problem, minimize
the function d2 (z) = (−3z/2)2 + (1 + z/2)2 + z 2 , where d2 denotes distance
squared from the origin. Minimizing d2 rather than d avoids square roots.
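(Expanding gives d2 (z) = (7/2)z 2 + z + 1, which is minimized at z = −1/7; the nearest point is then (x, y, z) = (3/14, 13/14, −1/7), so the intersection of the two planes comes within √(13/14) of the origin.)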
Not all constrained problems yield readily to either of these methods. The
more irregular the conditions, the less amenable they are to geometry, and the
more tangled the variables, the less readily they distill. Merely adding more
variables to the previous problem produces a nuisance: How close does the
intersection of the planes v + w + x + y + z = 1 and v − w + 2x − y + z = −1 in R5
come to the origin? Now no geometric procedure lies conveniently at hand.
As for substitution, linear algebra shows that
\[
\begin{bmatrix} 1 & 1 & 1 & 1 & 1 \\ 1 & -1 & 2 & -1 & 1 \end{bmatrix}
\begin{bmatrix} v \\ w \\ x \\ y \\ z \end{bmatrix}
=
\begin{bmatrix} 1 \\ -1 \end{bmatrix}
\]
implies
\[
\begin{bmatrix} v \\ w \end{bmatrix}
=
\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}^{-1}
\left( \begin{bmatrix} 1 \\ -1 \end{bmatrix}
- \begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 1 \end{bmatrix}
\begin{bmatrix} x \\ y \\ z \end{bmatrix} \right)
=
\begin{bmatrix} -3x/2 - z \\ 1 + x/2 - y \end{bmatrix}.
\]
Since the resulting function $d^2(x, y, z) = (-3x/2 - z)^2 + (1 + x/2 - y)^2 + x^2 + y^2 + z^2$ is quadratic, partial differentiation and more linear algebra will find its critical points. But the process is getting tedious.
Let’s step back from specifics (but we will return to the currently unre-
solved example soon) and consider in general the necessary nature of a critical
point in a constrained problem. The discussion will take place in two stages:
first we consider the domain of the problem, and then we consider the critical
point.
The domain of the problem is the points in n-space that satisfy a set of c
constraints. To satisfy the constraints is to meet a condition
g(x) = 0c ,
where g ∶ A Ð→ Rc is a C 1 -mapping, with A ⊂ Rn an open set. That is, the
constrained set forming the domain in the problem is a level set L, the inter-
section of the level sets of the component functions gi of g. (See Figures 5.16
and 5.17. The first figure shows two individual level sets for scalar-valued
functions on R3 , and the second figure shows them together and then shows
their intersection, the level set for a vector-valued mapping.)
Figure 5.16. Level sets for two scalar-valued functions on R3
At every point p ∈ L, the set L must be locally orthogonal to each gradient
∇gi (p). (See Figures 5.18 and 5.19. The first figure shows the level sets for
the component functions of the constraint mapping, and the gradients of the
component functions at p, while the second figure shows the tangent line and
the normal plane to the level set at p. In the first figure, neither gradient is
tangent to the other surface, and so in the second figure the two gradients are
not normal to each other.) Therefore:
• L is orthogonal at p to every linear combination of the gradients,
\[
\sum_{i=1}^{c} \lambda_i \nabla g_i(p) \quad \text{where } \lambda_1, \dots, \lambda_c \text{ are scalars.}
\]
Equivalently:
Figure 5.17. The intersection is a level set for a vector-valued mapping on R3
• Every such linear combination of gradients is orthogonal to L at p.
But we want to turn this idea around and assert the converse, that:
• Every vector that is orthogonal to L at p is such a linear combination.
However, the converse does not always follow. Intuitively, the argument is that
if the gradients ∇g1 (p), . . . , ∇gc (p) are linearly independent (i.e., they point
in c nonredundant directions) then the implicit function theorem should say
that the level set L therefore looks (n − c)-dimensional near p, so the space
of vectors orthogonal to L at p is c-dimensional, and so every such vector is
indeed a linear combination of the gradients. This intuitive argument is not a
proof, but for now it is a good heuristic.
Figure 5.18. Gradients to the level sets at a point of intersection

Figure 5.19. Tangent line and normal plane to the intersection
Proceeding to the second stage of the discussion, now suppose that p is a
critical point of the restriction to L of some C 1 -function f ∶ A Ð→ R. (Thus f
has the same domain A ⊂ Rn as g.) Then for every unit vector d describing a
direction in L at p, the directional derivative Dd f (p) must be 0. But Dd f (p) =
⟨∇f (p), d⟩, so this means that:
• ∇f (p) must be orthogonal to L at p.
This observation combines with our description of the most general vector
orthogonal to L at p, in the third bullet above, to give Lagrange’s condition:
Suppose that p is a critical point of the function f restricted to the
level set L = {x ∶ g(x) = 0c } of g. If the gradients ∇gi (p) are linearly
independent, then
\[
\nabla f(p) = \sum_{i=1}^{c} \lambda_i \nabla g_i(p) \quad \text{for some scalars } \lambda_1, \dots, \lambda_c,
\]
and since p is in the level set, also
g(p) = 0c .
Approaching a constrained problem by setting up these conditions and then
working with the new variables λ1 , . . . , λc is sometimes easier than the other
methods. The λi are useful but irrelevant constants.
This discussion has derived the Lagrange multiplier criterion for the lin-
earized version of the constrained problem. The next section will use the
implicit function theorem to derive the criterion for the actual constrained
problem, and then it will give some general examples. The remainder of this
section is dedicated to specific examples.
Returning to the unresolved second example at the beginning of this sec-
tion, the functions in question are
f (v, w, x, y, z) = v 2 + w2 + x2 + y 2 + z 2
g1 (v, w, x, y, z) = v + w + x + y + z − 1
g2 (v, w, x, y, z) = v − w + 2x − y + z + 1

and the corresponding Lagrange condition and constraints are (after absorbing
a 2 into the λ’s, whose particular values are irrelevant anyway)

(v, w, x, y, z) = λ1 (1, 1, 1, 1, 1) + λ2 (1, −1, 2, −1, 1)
= (λ1 + λ2 , λ1 − λ2 , λ1 + 2λ2 , λ1 − λ2 , λ1 + λ2 )
v + w + x + y + z = 1
v − w + 2x − y + z = −1.
Substitute the expressions from the Lagrange condition into the constraints
to get 5λ1 + 2λ2 = 1 and 2λ1 + 8λ2 = −1. That is,
\[
\begin{bmatrix} 5 & 2 \\ 2 & 8 \end{bmatrix}
\begin{bmatrix} \lambda_1 \\ \lambda_2 \end{bmatrix}
=
\begin{bmatrix} 1 \\ -1 \end{bmatrix},
\]
and so, inverting the matrix to solve the system,
\[
\begin{bmatrix} \lambda_1 \\ \lambda_2 \end{bmatrix}
= \frac{1}{36}
\begin{bmatrix} 8 & -2 \\ -2 & 5 \end{bmatrix}
\begin{bmatrix} 1 \\ -1 \end{bmatrix}
=
\begin{bmatrix} 10/36 \\ -7/36 \end{bmatrix}.
\]
Note how much more convenient the two λ’s are to work with than the five
original variables. Their values are auxiliary to the original problem, but sub-
stituting back now gives the nearest point to the origin,
\[
(v, w, x, y, z) = \tfrac{1}{36}(3, 17, -4, 17, 3),
\]
and its distance from the origin is $\sqrt{612}/36$. This example is just one instance
of a general problem of finding the nearest point to the origin in Rn subject
to c affine constraints. We will solve the general problem in the next section.
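The arithmetic above is also easy to verify by machine. A minimal sketch, assuming the numpy Python library, with names mirroring the computation just carried out:

    import numpy as np

    # The two hyperplanes in R^5, written as M x = b.
    M = np.array([[1.0, 1, 1, 1, 1],
                  [1.0, -1, 2, -1, 1]])
    b = np.array([1.0, -1.0])

    # The Lagrange condition says x = M^T lambda, so (M M^T) lambda = b.
    lam = np.linalg.solve(M @ M.T, b)
    print(lam * 36)                    # [10. -7.], i.e., (10/36, -7/36)

    x = M.T @ lam                      # substitute back
    print(x * 36)                      # [ 3. 17. -4. 17.  3.]
    print(np.linalg.norm(x), np.sqrt(612) / 36)   # the two distances agree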
An example from geometry is Euclid’s least area problem. Given an angle
ABC and a point P interior to the angle as shown in Figure 5.20, what line
through P cuts off from the angle the triangle of least area?
Draw the line L through P parallel to AB and let D be its intersection
with AC. Let a denote the distance AD and let h denote the altitude from
AC to P . Both a and h are constants. Given any other line L′ through P ,
let x denote its intersection with AC and H denote the altitude from AC to
the intersection of L′ with AB. (See Figure 5.21.) The shaded triangle and its
subtriangle in the figure are similar, giving the relation x/H = (x − a)/h.
The problem is now to minimize the function f (x, H) = (1/2)xH subject to the
constraint g(x, H) = 0 where g(x, H) = (x − a)H − xh. Lagrange’s condition
∇f (x, H) = λ∇g(x, H) and the constraint g(x, H) = 0 become, after absorbing
a 2 into λ,
Figure 5.20. Setup for Euclid’s least area problem
Figure 5.21. Construction for Euclid’s least area problem
(H, x) = λ(H − h, x − a),

(x − a)H = xh.
The first relation quickly yields (x − a)H = x(H − h). Combining this with the
second shows that H −h = h, that is, H = 2h. The solution of Euclid’s problem
is, therefore, to take the segment that is bisected by P between the two sides
of the angle. (See Figure 5.22.)
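As a machine sanity check on the solution (a sketch assuming the numpy Python library, with illustrative values a = h = 1): the constraint gives H = xh/(x − a), so the cut-off area (1/2)xH = x²h/(2(x − a)) can be minimized over x > a directly.

    import numpy as np

    a, h = 1.0, 1.0                      # illustrative values
    x = np.linspace(1.001, 10, 100000)   # x must exceed a
    area = x**2 * h / (2 * (x - a))      # (1/2) x H with H = x h / (x - a)

    i = np.argmin(area)
    print(x[i], area[i])                 # x near 2a = 2, least area near 2ah = 2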
Euclid’s least area problem has the interpretation of finding the point
of tangency between the level set g(x, H) = 0, a hyperbola having asymp-
totes x = a and H = h, and the level sets of f (x, H) = (1/2)xH, a family
of hyperbolas having asymptotes x = 0 and H = 0. (See Figure 5.23, where
the dashed asymptotes meet at (a, h) and the point of tangency is visibly
(x, H) = (2a, 2h).)
Figure 5.22. Solution of Euclid’s least area problem

Figure 5.23. Level sets for Euclid’s least area problem
An example from optics is Snell’s law. A particle travels through medium 1
at speed v, and through medium 2 at speed w. If the particle travels from point
A to point B as shown (Figure 5.24) in the least possible amount of time, what
is the relation between angles α and β?
Because time is distance over speed, a little trigonometry shows that this
problem is equivalent to minimizing f (α, β) = a sec α/v + b sec β/w subject
to the constraint g(α, β) = a tan α + b tan β = d (g measures lateral distance
traveled). The Lagrange condition ∇f (α, β) = λ∇g(α, β) is
\[
\left( \frac{a}{v} \sin\alpha \sec^2\alpha,\; \frac{b}{w} \sin\beta \sec^2\beta \right)
= \lambda \left( a \sec^2\alpha,\; b \sec^2\beta \right).
\]
Therefore λ = sin α/v = sin β/w, giving Snell’s famous relation,
\[
\frac{\sin\alpha}{\sin\beta} = \frac{v}{w}.
\]
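Snell's relation can also be confirmed numerically. The following sketch (assuming the numpy Python library, with illustrative depths, separation, and speeds) minimizes the travel time over the crossing point and compares the ratio of sines to v/w:

    import numpy as np

    a, b, d = 1.0, 2.0, 3.0    # depths in the two media and lateral separation
    v, w = 1.0, 0.7            # speeds in the two media

    t = np.linspace(0, d, 200001)    # lateral position where the particle crosses
    time = np.sqrt(a**2 + t**2) / v + np.sqrt(b**2 + (d - t)**2) / w

    t0 = t[np.argmin(time)]
    sin_alpha = t0 / np.hypot(a, t0)
    sin_beta = (d - t0) / np.hypot(b, d - t0)
    print(sin_alpha / sin_beta, v / w)    # the two ratios agree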
Figure 5.24. Geometry of Snell’s law
Figure 5.25 depicts the situation using the variables x = tan α and y = tan β.
The level set of possible configurations becomes the portion of the line
$ax + by = d$ in the first quadrant, and the function to be optimized becomes
$a\sqrt{1 + x^2}/v + b\sqrt{1 + y^2}/w$. A level set for a large value of the function passes
through the point (0, d/b), the configuration with α = 0 in which the parti-
cle travels vertically in medium 1 and then travels a long path in medium 2,
and a level set for a smaller value of the function passes through the point
(d/a, 0), the configuration with β = 0 in which the particle travels a long path
in medium 1 and then travels vertically in medium 2, while a level set for an
even smaller value of the function is tangent to the line segment at its point
that describes the optimal configuration specified by Snell’s law.
Figure 5.25. Level sets for the optics problem
For an example from analytic geometry, let the function f measure the
square of the distance between the points x = (x1 , x2 ) and y = (y1 , y2 ) in the
plane,
f (x1 , x2 , y1 , y2 ) = (x1 − y1 )2 + (x2 − y2 )2 .
Fix points a = (a1 , a2 ) and b = (b1 , b2 ) in the plane, and fix positive numbers
r and s. Define
g1 (x1 , x2 ) = (x1 − a1 )2 + (x2 − a2 )2 − r2 ,
g2 (y1 , y2 ) = (y1 − b1 )2 + (y2 − b2 )2 − s2 ,
g(x1 , x2 , y1 , y2 ) = (g1 (x1 , x2 ), g2 (y1 , y2 )).
Then the set of four-tuples (x1 , x2 , y1 , y2 ) such that

g(x1 , x2 , y1 , y2 ) = (0, 0)
can be viewed as the set of pairs of points x and y that lie respectively on the
circles centered at a and b with radii r and s. Thus, to optimize the function f
subject to the constraint g = 0 is to optimize the distance between pairs of
points on the circles. The rows of the 2 × 4 matrix
\[
g'(x, y) = 2 \begin{bmatrix} x_1 - a_1 & x_2 - a_2 & 0 & 0 \\ 0 & 0 & y_1 - b_1 & y_2 - b_2 \end{bmatrix}
\]
are linearly independent because x ≠ a and y ≠ b. The Lagrange condition
works out to
(x1 − y1 , x2 − y2 , y1 − x1 , y2 − x2 ) = λ1 (x1 − a1 , x2 − a2 , 0, 0)
− λ2 (0, 0, y1 − b1 , y2 − b2 ),
or
(x − y, y − x) = λ1 (x − a, 02 ) − λ2 (02 , y − b).
The second half of the vector on the left is the additive inverse of the first, so
the condition can be rewritten as
x − y = λ1 (x − a) = λ2 (y − b).
If λ1 = 0 or λ2 = 0 then x = y and both λi are 0. Otherwise, λ1 and λ2 are
nonzero, forcing x and y to be distinct points such that
x − y ∥ x − a ∥ y − b,
and so the points x, y, a, and b are collinear. Granted, these results are obvious
geometrically, but it is pleasing to see them follow so easily from the Lagrange
multiplier condition. On the other hand, not all points x and y such that x,
y, a, and b are collinear are solutions to the problem. For example, if both
circles are bisected by the x-axis and neither circle sits inside the other, then
x and y could be the leftmost points of the circles, neither the closest nor the
farthest pair.
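A numerical experiment makes both points, that the extremizing pairs are collinear with the centers and that not every collinear pair extremizes. A sketch assuming the numpy Python library, with illustrative circles:

    import numpy as np

    a, r = np.array([0.0, 0.0]), 1.0    # first circle: center a, radius r
    b, s = np.array([5.0, 0.0]), 2.0    # second circle: center b, radius s

    theta = np.linspace(0, 2 * np.pi, 1001)
    xs = a + r * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    ys = b + s * np.stack([np.cos(theta), np.sin(theta)], axis=1)

    dist = np.linalg.norm(xs[:, None, :] - ys[None, :, :], axis=2)
    i, j = np.unravel_index(np.argmin(dist), dist.shape)
    k, l = np.unravel_index(np.argmax(dist), dist.shape)
    print(xs[i], ys[j], dist[i, j])   # nearest: (1,0) and (3,0), distance 2
    print(xs[k], ys[l], dist[k, l])   # farthest: (-1,0) and (7,0), distance 8

All four extremizing points lie on the line through the centers, while, for instance, the leftmost points (−1, 0) and (3, 0) are collinear with the centers but extremize nothing.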
The last example of this section begins by maximizing the geometric mean
of n nonnegative numbers,
f (x1 , . . . , xn ) = (x1 ⋯xn )1/n , each xi ≥ 0,
subject to the constraint that their arithmetic mean is 1,
\[
\frac{x_1 + \cdots + x_n}{n} = 1, \qquad \text{each } x_i \ge 0.
\]
The set of such (x1 , . . . , xn )-vectors is compact, being a closed subset of [0, n]n .
Since f is continuous on its domain [0, ∞)n , it is continuous on the constrained
set, and so it takes minimum and maximum values on the constrained set. At
every constrained point having some xi = 0, the function-value f = 0 is the
minimum. All other constrained points, having each xi > 0, lie in the interior
of the domain of f . The upshot is that we may assume that all xi are positive
and expect the Lagrange multiplier method to produce the maximum value
of f among the values that it produces. Especially, if the Lagrange multi-
plier method produces only one value (as it will) then that value must be the
maximum.
The constraining function is g(x1 , . . . , xn ) = (x1 + ⋯ + xn )/n, and the gra-
dients of f and g are
\[
\nabla f(x_1, \dots, x_n) = \frac{f(x_1, \dots, x_n)}{n} \left( \frac{1}{x_1}, \dots, \frac{1}{x_n} \right),
\qquad
\nabla g(x_1, \dots, x_n) = \frac{1}{n}(1, \dots, 1).
\]
The Lagrange condition ∇f = λ∇g shows that all xi are equal, and the con-
straint g = 1 forces their value to be 1. Therefore, the maximum value of the
geometric mean when the arithmetic mean is 1 is the value
f (1, . . . , 1) = (1⋯1)1/n = 1.
This Lagrange multiplier argument provides most of the proof of the following
theorem.
Theorem 5.4.1 (Arithmetic–geometric mean inequality). The geometric mean of n nonnegative numbers is at most their arithmetic mean:
\[
(a_1 \cdots a_n)^{1/n} \le \frac{a_1 + \cdots + a_n}{n} \quad \text{for all nonnegative } a_1, \dots, a_n.
\]
Proof. If any ai = 0 then the inequality is clear. Given positive numbers
a1 , . . . , an , let a = (a1 + ⋯ + an )/n and let xi = ai /a for i = 1, . . . , n. Then
(x1 + ⋯ + xn )/n = 1, and therefore
\[
(a_1 \cdots a_n)^{1/n} = a (x_1 \cdots x_n)^{1/n} \le a = \frac{a_1 + \cdots + a_n}{n}.
\]
⊓⊔
Despite these pleasing examples, Lagrange multipliers are in general no
computational panacea. Some problems of optimization with constraint are
solved at least as easily by geometry or substitution. Nonetheless, Lagrange’s
method provides a unifying idea that addresses many different types of op-
timization problem without reference to geometry or physical considerations.
In the following exercises, use whatever methods you find convenient.
Exercises
5.4.1. Find the nearest point to the origin on the intersection of the hyper-
planes x + y + z − 2w = 1 and x − y + z + w = 2 in R4 .
5.4.2. Find the nearest point on the ellipse x2 + 2y 2 = 1 to the line x + y = 4.
5.4.3. Minimize f (x, y, z) = z subject to the constraints 2x + 4y = 5, x2 + z 2 = 2y.

5.4.4. Maximize f (x, y, z) = xy + yz subject to the constraints x2 + y 2 = 2, yz = 2.

5.4.5. Find the extrema of f (x, y, z) = xy + z subject to the constraints x ≥ 0, y ≥ 0, xz + y = 4, yz + x = 4.

5.4.6. Find the rectangular box of greatest volume, having sides parallel to the coordinate axes, that can be inscribed in the ellipsoid $(x/a)^2 + (y/b)^2 + (z/c)^2 = 1$.
5.4.7. The lengths of the twelve edges of a rectangular block sum to 4, and
the areas of the six faces sum to 4α. Find the lengths of the edges when the
excess of the block’s volume over that of a cube with edge equal to the least
edge of the block is greatest.
5.4.8. A cylindrical can (with top and bottom) has volume V . Subject to this constraint, what dimensions give it the least surface area?

5.4.9. Find the distance in the plane from the point (0, 1) to the parabola y = ax2 where a > 0. Note: the answer depends on whether a > 1/2 or 0 < a ≤ 1/2.
5.4.10. This exercise extends the arithmetic–geometric mean inequality. Let $e_1, \dots, e_n$ be positive numbers with $\sum_{i=1}^{n} e_i = 1$. Maximize the function $f(x_1, \dots, x_n) = x_1^{e_1} \cdots x_n^{e_n}$ (where each $x_i \ge 0$) subject to the constraint $\sum_{i=1}^{n} e_i x_i = 1$. Use your result to derive the weighted arithmetic–geometric mean inequality,
\[
a_1^{e_1} \cdots a_n^{e_n} \le e_1 a_1 + \cdots + e_n a_n \quad \text{for all nonnegative } a_1, \dots, a_n.
\]
What values of the weights e1 , . . . , en reduce this to the basic arithmetic–geometric mean inequality?
5.4.11. Let p and q be positive numbers satisfying the equation $\frac{1}{p} + \frac{1}{q} = 1$. Maximize the function of 2n variables $f(x_1, \dots, x_n, y_1, \dots, y_n) = \sum_{i=1}^{n} x_i y_i$ subject to the constraints $\sum_{i=1}^{n} x_i^p = 1$ and $\sum_{i=1}^{n} y_i^q = 1$. Derive Hölder’s inequality: for all nonnegative $a_1, \dots, a_n, b_1, \dots, b_n$,
\[
\sum_{i=1}^{n} a_i b_i \le \left( \sum_{i=1}^{n} a_i^p \right)^{1/p} \left( \sum_{i=1}^{n} b_i^q \right)^{1/q}.
\]
5.5 Lagrange Multipliers: Analytic Proof and General Examples
Recall that the environment for optimization with constraints consists of
• an open set A ⊂ Rn ,
• a constraining C 1 -mapping g ∶ A Ð→ Rc ,
• the corresponding level set L = {v ∈ A ∶ g(v) = 0c },
• and a C 1 -function f ∶ A Ð→ R to optimize on L.
We have argued geometrically, and not fully rigorously, that if f on L is opti-
mized at a point p ∈ L then the gradient f ′ (p) is orthogonal to L at p. Also,
every linear combination of the gradients of the component functions of g
is orthogonal to L at p. We want to assert the converse, that every vector
that is orthogonal to L at p is such a linear combination. The desired con-
verse assertion does not always hold, but if it does then it gives the Lagrange
condition,
\[
\nabla f(p) = \sum_{i=1}^{c} \lambda_i \nabla g_i(p).
\]
Here is the rigorous analytic justification that the Lagrange multiplier method
usually works. The implicit function theorem will do the heavy lifting, and it
will reaffirm that the method is guaranteed only where the gradients of the
component functions of g are linearly independent. The theorem makes the
rigorous proof of the Lagrange criterion easier and more persuasive—at least
in the author’s opinion—than the heuristic argument given earlier.
Theorem 5.5.1 (Lagrange multiplier condition). Let n and c be positive
integers with n > c. Let g ∶ A Ð→ Rc (where A ⊂ Rn ) be a mapping that is
continuously differentiable at each interior point of A. Consider the level set
L = {x ∈ A ∶ g(x) = 0c }.
Let f ∶ A Ð→ R be a function. Suppose that the restriction of f to L has an
extreme value at a point p ∈ L that is an interior point of A. Suppose that f is
differentiable at p, and suppose that the c × n derivative matrix g ′ (p) contains
a c × c block that is invertible. Then the following conditions hold:
∇f (p) = λg ′ (p) for some row vector λ ∈ Rc ,
g(p) = 0c .
The proof will culminate the ideas in this chapter as follows. The inverse
function theorem says:
If the linearized inversion problem is solvable then the actual inversion
problem is locally solvable.
The inverse function theorem is equivalent to the implicit function theorem:
If the linearized level set is a graph then the actual level set is locally
a graph.
And finally, the idea for proving the Lagrange condition is:
Although the graph is a curved space, where the techniques of Chapter 4
do not apply, its domain is a straight space, where they do.
That is, the implicit function theorem lets us reduce optimization on the graph
to optimization on the domain, which we know how to do.
Proof. The second condition holds since p is a point in L. The first condition
needs to be proved. Let r = n − c, the number of variables that should remain
free under the constraint g(x) = 0c , and notate the point p as p = (a, b),
where a ∈ Rr and b ∈ Rc . Using this notation, we have g(a, b) = 0c and
g ′ (a, b) = [M N ] where M is c × r and N is c × c and invertible. (We may
assume that N is the invertible block in the hypotheses to the theorem because
we may freely permute the variables.) The implicit function theorem gives a
mapping ϕ ∶ A0 Ð→ Rc (where A0 ⊂ Rr and a is an interior point of A0 ) with
ϕ(a) = b, ϕ′ (a) = −N −1 M , and for all points (x, y) ∈ A near (a, b), g(x, y) = 0c
if and only if y = ϕ(x).
Make f depend only on the free variables by defining

f0 = f ○ (idr , ϕ) ∶ A0 Ð→ R, f0 (x) = f (x, ϕ(x)).
(See Figure 5.26.) Since the domain of f0 doesn’t curve around in some larger
space, f0 is optimized by the techniques from Chapter 4. That is, the implicit
function theorem has reduced optimization on the curved set to optimization
in Euclidean space. Specifically, the multivariable critical point theorem says
that f0 has a critical point at a,
∇f0 (a) = 0r .
Our task is to express the previous display in terms of the given data f and g.
Doing so will produce the Lagrange condition.
Because f0 = f ○ (idr , ϕ) is a composition, the chain rule says that the
condition ∇f0 (a) = 0r is ∇f (a, ϕ(a)) ⋅ (idr , ϕ)′ (a) = 0r , or
\[
\nabla f(a, b) \begin{bmatrix} I_r \\ \varphi'(a) \end{bmatrix} = 0_r.
\]
Let ∇f (a, b) = (u, v) where u ∈ Rr and v ∈ Rc are row vectors, and recall that
ϕ′ (a) = −N −1 M . The previous display becomes
\[
\begin{bmatrix} u & v \end{bmatrix} \begin{bmatrix} I_r \\ -N^{-1} M \end{bmatrix} = 0_r,
\]
giving u = vN −1 M . This expression for u and the trivial identity v = vN −1 N
combine to give in turn

[u v] = vN −1 [M N ] .
But [u v] = ∇f (a, b) and [M N ] = g ′ (a, b) and (a, b) = p. So set λ = vN −1 (a
row vector in Rc ), and the previous display is precisely Lagrange’s condition,

∇f (p) = λg ′ (p). ⊓⊔
Figure 5.26. The Lagrange multiplier criterion from the implicit function theorem
We have seen that the Lagrange multiplier condition is necessary but not
sufficient for an extreme value. That is, it can report a false positive, as in the
two-circle problem in the previous section. False positives are not a serious
problem, since inspecting all the points that meet the Lagrange condition will
determine which of them give the true extrema of f . A false negative would be
a worse situation, giving us no indication that an extreme value might exist,
much less how to find it. The following example shows that the false negative
scenario can arise without the invertible c × c block required in Theorem 5.5.1.
Let the temperature in the plane be given by
f (x, y) = x,
and consider a plane set defined by one constraint on two variables,
L = {(x, y) ∈ R2 ∶ y 2 = x3 }.

Figure 5.27. Curve with cusp
(See Figure 5.27.) Since temperature increases as we move to the right, the
coldest point of L is its leftmost point, the cusp at (0, 0). However, the La-
grange condition does not find this point. Indeed, the constraining function
is g(x, y) = x3 − y 2 (which does have continuous derivatives, notwithstanding
that its level set has a cusp: the graph of a smooth function is smooth, but
the level set of a smooth function need not be smooth—this is exactly the
issue addressed by the implicit function theorem). Therefore the Lagrange
condition and the constraint are
(1, 0) = λ(3x2 , −2y),

x3 = y 2 .
These equations have no solution. The problem is that the gradient at the cusp
is ∇g(0, 0) = (0, 0), and neither of its 1 × 1 subblocks is invertible. In general,
the Lagrange multiplier condition will not report a false negative as long as we
remember that it only claims to check for extrema at the nonsingular points
of L, the points p such that g ′ (p) has an invertible c × c subblock.
The previous section gave specific examples of the Lagrange multiplier
method. This section now gives some general families of examples.
Recall that the previous section discussed the problem of optimizing the
distance between two points in the plane, each point lying on an associated
circle. Now, as the first general example of the Lagrange multiplier method,
let (x, y) ∈ Rn × Rn denote a pair of points each from Rn , and let the function
f measure the square of the distance between such a pair,
f ∶ Rn × Rn Ð→ R, f (x, y) = ∣x − y∣2 .
Note that ∇f (x, y) = [x − y y − x], viewing x and y as row vectors. Given two
mappings g1 ∶ Rn Ð→ Rc1 and g2 ∶ Rn Ð→ Rc2 , define
g ∶ Rn × Rn Ð→ Rc1 +c2 , g(x, y) = (g1 (x), g2 (y)).
To optimize the function f subject to the constraint g(x, y) = (0c1 , 0c2 ) is
to optimize the distance between pairs of points x and y on the respective
level sets cut out of Rn by the c1 conditions g1 (x) = 0c1 and the c2 conditions
g2 (y) = 0c2 . Assuming that the Lagrange condition holds for the optimizing
pair, it is
\[
\begin{aligned}
[x - y \;\; y - x] = \lambda g'(x, y)
&= [\lambda_1 \;\; -\lambda_2]
\begin{bmatrix} g_1'(x) & 0_{c_1 \times n} \\ 0_{c_2 \times n} & g_2'(y) \end{bmatrix} \\
&= \lambda_1 (g_1'(x), 0_{c_1 \times n}) - \lambda_2 (0_{c_2 \times n}, g_2'(y)),
\end{aligned}
\]
where λ1 ∈ Rc1 and λ2 ∈ Rc2 are row vectors. The symmetry of ∇f reduces
this equality of 2n-vectors to an equality of n-vectors,
x − y = λ1 g1′ (x) = λ2 g2′ (y).
That is, either x = y or the line through x and y is normal to the first level
set at x and normal to the second level set at y, generalizing the result from
the two-circle problem. With this result in mind, you may want to revisit
Exercise 0.0.1 from the preface to these notes.
The remaining general Lagrange multiplier methods optimize a linear func-
tion or a quadratic function subject to affine constraints or a quadratic con-
straint. We gather the results in one theorem.
Theorem 5.5.2 (Low-degree optimization with constraints).
(1) Let f (x) = aT x (where a ∈ Rn ) subject to the constraint M x = b (where
M ∈ Mc,n (R) has linearly independent rows, with c < n, and b ∈ Rc ). Check
whether aT M T (M M T )−1 M = aT . If so, then f subject to the constraint is
identically aT M T (M M T )−1 b; otherwise, f subject to the constraint has no
optima.
(2) Let f (x) = xT Ax (where A ∈ Mn (R) is symmetric and invertible) subject to
the constraint M x = b (where M ∈ Mc,n (R) has linearly independent rows,
with c < n, and b ∈ Rc ). The x that optimizes f subject to the constraint
and the optimal value are
x = A−1 M T (M A−1 M T )−1 b and f (x) = bT (M A−1 M T )−1 b.
Especially when A = I, the point x such that M x = b closest to the origin
and its square distance from the origin are

x = M T (M M T )−1 b and ∣x∣2 = bT (M M T )−1 b.
(3) Let f (x) = aT x (where a ∈ Rn ) subject to the constraint xT M x = b (where
M ∈ Mn (R) is symmetric and invertible, and b ∈ R is nonzero). Check
whether aT M −1 ab > 0. If so, then the optimizing inputs and the optimal
values are
\[
x = \pm M^{-1} a\, b \big/ \sqrt{a^T M^{-1} a\, b} \quad \text{and} \quad f(x) = \pm \sqrt{a^T M^{-1} a\, b}\,.
\]
Otherwise, f subject to the constraint has no optima.
(4) Let f (x) = xT Ax (where A ∈ Mn (R) is symmetric) subject to the constraint
xT M x = b (where M ∈ Mn (R) is symmetric and invertible, and b ∈ R is
nonzero). The possible optimal values of f subject to the constraint are
f (x) = λb where λ is an eigenvalue of M −1 A.
(The term “eigenvalue” will be explained in the proof.) Especially when
A = I, the nearest square-distances from the origin on the quadratic surface
xT M x = b take the form λb where λ is an eigenvalue of M −1 .
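Before the proof, a small numerical illustration of part (2) with A = I may be helpful (a sketch assuming the numpy Python library): applied to the two planes that opened Section 5.4, the closed form recovers the nearest point found there by substitution.

    import numpy as np

    M = np.array([[1.0, 1, 1],
                  [1.0, -1, 2]])    # x + y + z = 1 and x - y + 2z = -1
    b = np.array([1.0, -1.0])

    x = M.T @ np.linalg.solve(M @ M.T, b)    # x = M^T (M M^T)^{-1} b
    print(x * 14)          # [ 3. 13. -2.], i.e., x = (3/14, 13/14, -1/7)
    print(x @ x, 13 / 14)  # squared distance b^T (M M^T)^{-1} b = 13/14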
Proof. (1) The data are (viewing vectors as columns)
f ∶ Rn Ð→ R, f (x) = aT x where a ∈ Rn ,
g ∶ Rn Ð→ Rc , g(x) = M x − b where M ∈ Mc,n (R) and b ∈ Rc .
Here we assume that c < n, i.e., there are fewer constraints than variables.
Also, we assume that the c rows of M are linearly independent in Rn , or
equivalently (invoking a result from linear algebra), that some c columns of M
are a basis of Rc , or equivalently, that some c×c subblock of M (not necessarily
contiguous columns) has nonzero determinant. The Lagrange condition and
the constraints are
aT = λT M where λ ∈ Rc ,
M x = b.
Before solving the problem, we need to consider the two relations in the pre-
vious display.
• The Lagrange condition aT = λT M is solvable for λ exactly when aT is a
linear combination of the rows of M . Since M has c rows, each of which
is a vector in Rn , and since c < n, generally aT is not a linear combination
of the rows of M , so the Lagrange conditions cannot be satisfied. That is:
Generally the constrained function has no optimum.
However, we will study the exceptional case, that aT is a linear combination
of the rows of M . In this case, the linear combination of the rows that
gives aT is unique because the rows are linearly independent. That is, if λ
exists then it is uniquely determined.
To find the only candidate λ, note that the Lagrange condition aT = λT M
gives aT M T = λT M M T , and thus λT = aT M T (M M T )−1 . This calculation’s
first step is not reversible, and so the calculation does not always show that
λ exists. But it does show that to check whether aT is a linear combination
of the rows of M , one checks whether aT M T (M M T )−1 M = aT , in which
case λT = aT M T (M M T )−1 .
Note that furthermore, the Lagrange condition aT = λT M makes no refer-
ence to x.
• The constraining condition M x = b has solutions x only if b is a linear
combination of the columns of M . Our assumptions about M guarantee
that this is the case.
With aT being a linear combination of the rows of M and with b being a linear
combination of the columns of M , the Lagrange condition and the constraints
immediately show that for every x in the constrained set,

f (x) = aT x = λT M x = λT b = aT M T (M M T )−1 b.

That is, f subject to the constraint M x = b is the constant aT M T (M M T )−1 b.
For geometric insight into the calculation, envision the space of linear
combinations of the rows of M (a c-dimensional subspace of Rn ) as a plane,
and envision the space of vectors x̃ such that M x̃ = 0c (an (n − c)-dimensional
subspace of Rn ) as an axis orthogonal to the plane. The condition aT = λT M
says that a lies in the plane, and the condition M x = b says that x lies on an
axis parallel to the x̃-axis. (From linear algebra, the solutions of M x = b are
the vectors
x = x0 + x̃,
where x0 is the unique linear combination of the rows of M such that M x0 = b,
and x̃ is any vector such that M x̃ = 0c .) The constant value of f is aT x for
every x on the axis. In particular, the value is aT x0 where x0 is the point
where the axis meets the plane.
(2) Now we optimize a quadratic function subject to affine constraints.
Here the data are
f ∶ Rn Ð→ R, f (x) = xT Ax where A ∈ Mn (R) is symmetric,
g ∶ Rn Ð→ Rc , g(x) = M x − b where M ∈ Mc,n (R) and b ∈ Rc .
As in (1), we assume that c < n, and we assume that the c rows of M are
linearly independent in Rn , i.e., some c columns of M are a basis of Rc ,
i.e., some c × c subblock of M has nonzero determinant. Thus the constraints
M x = b have solutions x for every b ∈ Rc .
To set up the Lagrange condition, we need to differentiate the quadratic
function f . Compute that
f (x + h) − f (x) = (x + h)T A(x + h) − xT Ax = 2xT Ah + hT Ah,
and so the best linear approximation of this difference is T (h) = 2xT Ah. It
follows that
∇f (x) = 2xT A.
Returning to the optimization problem, the Lagrange condition and the
constraints are
xT A = λT M where λ ∈ Rc ,
M x = b.
Having solved a particular problem of this sort in Section 5.4, we use its
particular solution to guide our solution of the general problem. The first step
was to express x in terms of λ, so here we transpose the Lagrange condition
to get Ax = M T λ, then assume that A is invertible and thus get x = A−1 M T λ.
The second step was to write the constraint in terms of λ and then solve
for λ, so here we have b = M x = M A−1 M T λ, so that λ = (M A−1 M T )−1 b,
assuming that the c × c matrix M A−1 M T is invertible. Now the optimizing
input x = A−1 M T λ is
x = A−1 M T (M A−1 M T )−1 b,

and the optimal function value f (x) = xT Ax = λT M x = λT b is

f (x) = bT (M A−1 M T )−1 b.
In particular, letting A = I, the closest point x to the origin such that M x = b
and the square of its distance from the origin are

x = M T (M M T )−1 b, ∣x∣2 = bT (M M T )−1 b.
(3) Next we optimize a linear function subject to a quadratic constraint.
The data are
f ∶ Rn Ð→ R, f (x) = aT x where a ∈ Rn ,
g ∶ Rn Ð→ R, g(x) = xT M x − b where M ∈ Mn (R) is symmetric and b ∈ R is nonzero.
The Lagrange condition and the constraint are
aT = λxT M where λ ∈ R,
xT M x = b.
Therefore the possible optimized values of f are
f (x) = aT x = λxT M x = λb,
and so to find these values it suffices to find the possible values of λ. Assuming
that M is invertible, the Lagrange condition is aT M −1 = λxT , and hence
aT M −1 ab = λxT ab = λ2 b2 = f (x)2 .
Thus (assuming that aT M −1 ab > 0) the optimal values are
\[
f(x) = \pm \sqrt{a^T M^{-1} a\, b}\,.
\]
The penultimate display also shows that $\lambda = \pm \sqrt{a^T M^{-1} a\, b}\,/\,b$, so that the Lagrange condition gives the optimizing x-values,
\[
x = \pm M^{-1} a\, b \big/ \sqrt{a^T M^{-1} a\, b}\,.
\]
One readily confirms that indeed xT M x = b for these x.
As a small geometric illustration of the sign-issues in this context, suppose
that n = 2 and $M = \left[\begin{smallmatrix} 0 & 1 \\ 1 & 0 \end{smallmatrix}\right]$, so that the quadratic constraint is $2x_1 x_2 = b$. For
b > 0 the optimizing problem is thus set on a hyperbola in the first and third
quadrants of the plane. The function to be optimized is $f(x) = a_1 x_1 + a_2 x_2$
for some a1 , a2 ∈ R. Since M is its own inverse, the quantity aT M −1 ab under
the square root is 2a1 a2 b, and thus the constrained optimization problem
has solutions only when a1 a2 > 0. Meanwhile, the level sets of f are lines of
slope −a1 /a2 , meaning that the problem has solutions only when the level sets
have negative slope. In that case, the solutions will be at the two points where
the hyperbola is tangent to a level set: a pair of opposite points, one in the
first quadrant and one in the third. For b < 0 the constraining hyperbola moves
to the second and fourth quadrants, and the problem has solutions when the
level sets of f have a positive slope.
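A quick machine check of part (3) on this hyperbola (a sketch assuming the numpy Python library, with illustrative a = (1, 2) and b = 1):

    import numpy as np

    M = np.array([[0.0, 1.0], [1.0, 0.0]])   # constraint 2 x1 x2 = b
    a = np.array([1.0, 2.0])
    b = 1.0

    Minv = np.linalg.inv(M)            # here M is its own inverse
    q = a @ Minv @ a * b               # a^T M^{-1} a b = 2 a1 a2 b = 4 > 0
    x = Minv @ a * b / np.sqrt(q)      # one of the two optimizing points
    print(x)                           # [1.  0.5]
    print(x @ M @ x, a @ x)            # constraint value 1, optimal value 2 = sqrt(q)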
(4) Finally, we optimize a quadratic function subject to a quadratic con-
straint. The data are
f ∶ Rn Ð→ R, f (x) = xT Ax where A ∈ Mn (R) is symmetric,
g ∶ Rn Ð→ R, g(x) = xT M x − b where M ∈ Mn (R) is symmetric and b ∈ R is nonzero.
The Lagrange condition and the constraint are
xT A = λxT M where λ ∈ R,
xT M x = b.
By the Lagrange condition and the constraint, the possible optimal values
of f take the form
f (x) = xT Ax = λxT M x = λb,
which we will know as soon as we find the possible values of λ, without needing
to find x. Assuming that M is invertible, the Lagrange condition gives
M −1 Ax = λx.
In other words, x must satisfy the condition that multiplying x by M −1 A gives
a scalar multiple of x. Every nonzero vector x that satisfies this condition is
called an eigenvector of M −1 A. The scalar multiple factor λ is the correspond-
ing eigenvalue. We will end the section with a brief discussion of eigenvalues.
The eigenvalues of a square matrix B are found by a systematic procedure.
The first step is to observe that the condition Bx = λx is
(B − λI)x = 0.
Since every eigenvector x is nonzero by definition, B −λI is not invertible, i.e.,
det(B − λI) = 0.
Conversely, for every λ ∈ R satisfying this equation there is at least one eigen-
vector x of B, because the equation (B − λI)x = 0 has nonzero solutions. And
so the eigenvalues are the real roots of the polynomial
pB (λ) = det(B − λI).
This polynomial is the characteristic polynomial of B, already discussed in
Exercise 4.7.10. For example, part (a) of that exercise covered the case n = 2,
showing that if $B = \left[\begin{smallmatrix} a & b \\ b & d \end{smallmatrix}\right]$ then

pB (λ) = λ2 − (a + d)λ + (ad − b2 ).
The discriminant of this quadratic polynomial is
∆ = (a + d)2 − 4(ad − b2 ) = (a − d)2 + 4b2 .
Since ∆ is nonnegative, all roots of the characteristic polynomial are real.
And a result of linear algebra says that for every positive n, all roots of
the characteristic polynomial of a symmetric n × n matrix B are real as well.
However, returning to our example, even though the square matrices A and M
are assumed to be symmetric, the product M −1 A need not be.
As a particular case of Theorem 5.5.2, part (4), if A = I then finding the
eigenvectors of M encompasses finding the points of a quadric surface that
are closest to the origin or farthest from the origin. For instance, if n = 2 and
$M = \left[\begin{smallmatrix} a & b \\ b & d \end{smallmatrix}\right]$ then we are optimizing on the set of points (x1 , x2 ) ∈ R2 such that,
say,
$a x_1^2 + 2b x_1 x_2 + d x_2^2 = 1$.
The displayed equation is the equation of a conic section. When b = 0 we have
an unrotated ellipse or hyperbola, and the only possible optimal points will
be the scalar multiples of e1 and e2 that lie on the curve. For an ellipse, a pair
of points on one axis is closest to the origin, and a pair on the other axis is
farthest; for a hyperbola, a pair on one axis is closest, and there are no points
on the other axis. In the case of a circle, the matrix M is a scalar multiple of
the identity matrix, and so all vectors are eigenvectors, compatibly with the
geometry that all points are equidistant from the origin. Similarly, if n = 3
then L is a surface such as an ellipsoid or a hyperboloid.
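The case A = I is easy to see numerically. A sketch assuming the numpy Python library: the candidate square-distances on the ellipse x^T M x = 1 are the eigenvalues of M^{-1}, and a brute-force scan of the ellipse agrees.

    import numpy as np

    M = np.array([[2.0, 1.0], [1.0, 3.0]])        # an ellipse x^T M x = 1
    print(np.sort(np.linalg.eigvalsh(np.linalg.inv(M))))   # ~[0.2764, 0.7236]

    # Brute force: push unit vectors out to the ellipse and measure |x|^2.
    theta = np.linspace(0, 2 * np.pi, 100001)
    u = np.stack([np.cos(theta), np.sin(theta)], axis=1)
    scale = np.sqrt(np.einsum('ij,jk,ik->i', u, M, u))   # u^T M u, rowwise
    x = u / scale[:, None]                               # now x^T M x = 1
    r2 = np.sum(x * x, axis=1)
    print(r2.min(), r2.max())     # matches the eigenvalues above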
Exercises
5.5.1. Let f (x, y) = y and let g(x, y) = y 3 −x4 . Graph the level set L = {(x, y) ∶
g(x, y) = 0}. Show that the Lagrange multiplier criterion does not find any
candidate points where f is optimized on L. Optimize f on L nonetheless.
5.5.2. Consider the linear mapping
g(x, y, z) = (x + 2y + 3z, 4x + 5y + 6z).
(a) Use Theorem 5.5.2, part (1), to optimize the linear function f (x, y, z) =
6x + 9y + 12z subject to the affine constraint g(x, y, z) = (7, 8).
(b) Verify without using the Lagrange multiplier method that the function
f subject to the constraint g = (7, 8) (with f and g from part (a)) is constant,
always taking the value that you found in part (a).
(c) Show that the function f (x, y, z) = 5x + 7y + z cannot be optimized
subject to any constraint g(x, y, z) = b.
5.5.3. (a) Use Theorem 5.5.2, part (2), to minimize the quadratic function
f (x, y) = x2 + y 2 subject to the affine constraint 3x + 5y = 8.
(b) Use the same result to find the extrema of f (x, y, z) = 2xy + z 2 subject
to the constraints x + y + z = 1, x + y − z = 0.
(c) Use the same result to find the nearest point to the origin on the
intersection of the hyperplanes x + y + z − 2w = 1 and x − y + z + w = 2 in R4 ,
reproducing your answer to Exercise 5.4.1.
5.5.4. (a) Use Theorem 5.5.2, part (3), to optimize f (x, y, z) = x − 2y + 2z on the sphere of radius 3 centered at the origin.
(b) Use the same result to optimize the function f (x, y, z, w) = x + y − z − w
subject to the constraint g(x, y, z, w) = 1 where g(x, y, z, w) = x2 /2 − y 2 + z 2 − w2 .
5.5.5. (a) Use Theorem 5.5.2, part (4), to optimize the function f (x, y) = 2xy
subject to the constraint g(x, y) = 1 where g(x, y) = x2 + 2y 2 .
(b) Use the same result to optimize the function f (x, y, z) = 2(xy +yz +zx)
subject to the constraint g(x, y, z) = 1 where g(x, y, z) = x2 + y 2 − z 2 .
Part II
Multivariable Integral Calculus
6 Integration
The integral of a scalar-valued function of many variables, taken over a box
of its inputs, is defined in Sections 6.1 and 6.2. Intuitively, the integral can
be understood as representing mass or volume, but the definition is purely
mathematical: the integral is a limit of sums, as in one-variable calculus.
Multivariable integration has many familiar properties—for example, the in-
tegral of a sum is the sum of the integrals. Section 6.3 shows that continuous
functions can be integrated over boxes. However, we want to carry out mul-
tivariable integration over more generally shaped regions. That is, the theory
has geometric aspects not present in the one-dimensional case, where inte-
gration is carried out over intervals. After a quick review of the one-variable
theory in Section 6.4, Section 6.5 shows that continuous functions can also be
integrated over nonboxes that have manageable shapes. The main tools for
evaluating multivariable integrals are Fubini’s theorem (Section 6.6), which
reduces an n-dimensional integral to an n-fold nesting of one-dimensional in-
tegrals, and the change of variable theorem (Section 6.7), which replaces one
multivariable integral by another that may be easier to evaluate. Section 6.8
provides some preliminaries for the proof of the change of variable theorem,
and then Section 6.9 gives the proof.
6.1 Machinery: Boxes, Partitions, and Sums
The integral represents physical ideas such as volume or mass or work, but
defining it properly in purely mathematical terms requires some care. Here is
some terminology that is standard from the calculus of one variable, except
perhaps compact (meaning closed and bounded ) from Section 2.4 of these
notes. The language describes a domain of integration and the machinery to
subdivide it.
Definition 6.1.1 (Compact interval, length, partition, subinterval). A nonempty compact interval in R is a set
I = [a, b] = {x ∈ R ∶ a ≤ x ≤ b} ,
where a and b are real numbers with a ≤ b. The length of the interval is

length(I) = b − a.

A partition of I is a set of real numbers

P = {t0 , t1 , . . . , tk }
satisfying
a = t0 < t1 < ⋯ < tk = b.
Such a partition divides I into k subintervals J1 , . . . , Jk where
Jj = [tj−1 , tj ], j = 1, . . . , k.
A generic nonempty compact subinterval of I is denoted J. (See Figure 6.1.) Since the only intervals that we are interested in are nonempty and
compact, either or both of these properties will often be tacit from now on,
rather than stated again and again. As a special case, Definition 6.1.1 says
that every length-zero interval [a, a] has only one partition, P = {a}, which
divides it into no subintervals.
Figure 6.1. Interval and subintervals
The next definition puts an initial loose stipulation on functions to be
integrated.
Definition 6.1.2 (Bounded function). Let A be a subset of R, and let
f ∶ A Ð→ R be a function. Then f is bounded if its range, {f (x) ∶ x ∈ A}, is
bounded as a set in R, as in Definition 2.4.6. That is, f is bounded if there
exists some R > 0 such that ∣f (x)∣ < R for all x ∈ A.
Visually, a function is bounded if its graph is contained inside a horizontal
strip. On the other hand, the graph of a bounded function needn’t be contained
in a vertical strip, because the domain (and therefore the graph) need not be
bounded. For example, these functions are bounded:
f (x) = sin x, f (x) = 1/(1 + x2 ), f (x) = arctan x,
and these functions are not:
f (x) = ex , f (x) = 1/x for x ≠ 0.
However, since we want to integrate a bounded function over a compact in-
terval, the entire process is set inside a rectangle in the plane.
The next definition describes approximations of the integral by finite sums,
the integral to be a limit of such sums if it exists at all. The summands involve
limits, so already these sums are analytic constructs despite being finite.
Definition 6.1.3 (One-dimensional lower sum and upper sum). Let
I be a nonempty compact interval in R, and let f ∶ I Ð→ R be a bounded
function. For every nonempty subinterval J of I, the greatest lower bound of
the values taken by f on J is denoted mJ (f ),
mJ (f ) = inf {f (x) ∶ x ∈ J} ,
and similarly, the least upper bound is denoted MJ (f ),
MJ (f ) = sup {f (x) ∶ x ∈ J} .
Let P be a partition of I into subintervals J. The lower sum of f over P is
\[
L(f, P) = \sum_J m_J(f)\,\mathrm{length}(J),
\]
and the upper sum of f over P is
\[
U(f, P) = \sum_J M_J(f)\,\mathrm{length}(J).
\]
If the interval I in Definition 6.1.3 has length zero, then the lower and
upper sums are empty, and so they are assigned the value 0 by convention.
The function f in Definition 6.1.3 is not required to be differentiable or
even continuous, only bounded. Even so, the values mJ (f ) and MJ (f ) in
the previous definition exist by the set-bound phrasing of the principle that
the real number system is complete. To review this idea, see Theorem 1.1.3.
When f is in fact continuous, the extreme value theorem (Theorem 2.4.15)
justifies substituting min and max for inf and sup in the definitions of mJ (f )
and MJ (f ), since each subinterval J is nonempty and compact. It may be eas-
iest at first to understand mJ (f ) and MJ (f ) by imagining f to be continuous
and mentally substituting appropriately. But we will need to integrate dis-
continuous functions f . Such functions may take no minimum or maximum
on J, and so we may run into a situation like the one pictured in Figure 6.2,
in which the values mJ (f ) and MJ (f ) are not actual outputs of f . Thus the
definition must be as given to make sense.
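Lower and upper sums are easy to experiment with by machine. In the following sketch (assuming the numpy Python library), the inf and sup over each subinterval are approximated by dense sampling, which is adequate for continuous f; for the decreasing function f(x) = 1/(1 + x²) on [0, 1] with P = {0, 1/2, 1}, the exact values are L(f, P) = 0.65 and U(f, P) = 0.9.

    import numpy as np

    def lower_upper_sum(f, P, samples=10001):
        # Approximate L(f, P) and U(f, P); sampling stands in for inf and sup.
        L = U = 0.0
        for left, right in zip(P, P[1:]):
            vals = f(np.linspace(left, right, samples))
            L += vals.min() * (right - left)
            U += vals.max() * (right - left)
        return L, U

    f = lambda x: 1 / (1 + x**2)
    print(lower_upper_sum(f, [0.0, 0.5, 1.0]))    # approximately (0.65, 0.9)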
The technical properties of inf and sup will figure in Lemmas 6.1.6, 6.1.8,
and 6.2.2. To see them in isolation first, we rehearse them now. So, let S
Figure 6.2. Sup and inf but no max or min
and T be nonempty sets of real numbers, both bounded. In the context of
integration, S and T will be sets of outputs of a bounded function f . This
specific description of S and T is irrelevant for the moment, but it may help
you to see later how these ideas are used in context if you now imagine S
and T on a vertical axis, as in Figure 6.2, rather than on a horizontal one. In
any case, the necessary results are as follows.
• inf(S) ≤ sup(S). In fact every lower bound of S is at most as big as every
upper bound, because every element of S lies between them. In particular,
this argument applies to the greatest lower bound inf(S) and the least
upper bound sup(S), giving the stated inequality.
• If S ⊂ T then inf(T ) ≤ inf(S) ≤ sup(S) ≤ sup(T ). We already have the
middle inequality. To establish the others, the idea is that since S is a
subset, the bounds on S are innately at least as tight as those on T . More
specifically, since inf(T ) is a lower bound of T , it is a lower bound of the
subset S, and because inf(S) is the greatest lower bound of S, the first
inequality follows. The third inequality is similar.
In particular, let I be a compact interval, let f ∶ I Ð→ R be a bounded
function, let J be a subinterval of I, let J ′ be a subinterval of J in turn,
and then take S and T to be sets of output-values of f ,
S = {f (x) ∶ x ∈ J ′ }, T = {f (x) ∶ x ∈ J}.
Then S ⊂ T because S is a set of fewer outputs than T , and so this bullet
has shown that
mJ (f ) ≤ mJ ′ (f ) ≤ MJ ′ (f ) ≤ MJ (f ).
• If s ≤ t for all s ∈ S and t ∈ T then sup(S) ≤ inf(T ). Imprecisely, the idea is
that S is entirely below T on the vertical axis, and so the smallest number
that traps S from above is still below the largest number that traps T
from below. A more careful proof is in the next section.
Graphing f over I in the usual fashion and interpreting the lower and upper
sum as sums of rectangle-areas shows that they are respectively too small
and too big to be the area under the graph. (See Figure 6.3.) Alternatively,
Figure 6.3. Too small and too big
thinking of f as the density function of a wire stretched over the interval I
shows that the lower and upper sums are too small and too big to be the
mass of the wire. The hope is that the lower and upper sums are trapping a
yet-unknown quantity (possibly to be imagined as area or mass) from each
side, and that as the partition P becomes finer, the lower and upper sums will
actually converge to this value.
All the terminology so far generalizes easily from one dimension to many,
i.e., from R to Rn . Recall that if S1 , S2 , . . . , Sn are subsets of R then their
Cartesian product is a subset of Rn ,
S1 × S2 × ⋯ × Sn = {(s1 , s2 , . . . , sn ) ∶ s1 ∈ S1 , s2 ∈ S2 , . . . , sn ∈ Sn } .
(See Figure 6.4, in which n = 2, and S1 has two components, and S2 has one
component, so that the Cartesian product S1 × S2 has two components.)
Figure 6.4. Cartesian product
Definition 6.1.4 (Compact box, box volume, partition, subbox). A
nonempty compact box in Rn is a Cartesian product

B = I1 × I2 × ⋯ × In
of nonempty compact intervals Ij for j = 1, . . . , n. The volume of the box is
the product of the lengths of its sides,
\[
\mathrm{vol}(B) = \prod_{j=1}^{n} \mathrm{length}(I_j).
\]
A partition of B is a Cartesian product of partitions Pj of Ij for j = 1, . . . , n,
P = P1 × P2 × ⋯ × Pn .
Such a partition divides B into subboxes J, each such subbox being a Carte-
sian product of subintervals. By a slight abuse of language, these are called
the subboxes of P .
(See Figure 6.5, and imagine its three-dimensional Rubik’s cube counterpart.)
Every nonempty compact box in Rn has partitions, even such boxes with
some length-zero sides. This point will arise at the very beginning of the next
section.
Figure 6.5. Box and subboxes
The definition of a bounded function f ∶ A Ð→ R, where now A is a subset
of Rn , is virtually the same as earlier in the section: again the criterion is that
its range must be bounded as a set. (In fact, the definition extends just as
easily to mappings f ∶ A Ð→ Rm , but we need only scalar-valued functions
here.)
Definition 6.1.5 (n-dimensional lower sum and upper sum). Let B be
a nonempty compact box in Rn , and let f ∶ B Ð→ R be a bounded function.
For every nonempty subbox J of B, define mJ (f ) and MJ (f ) analogously as
before,
mJ (f ) = inf {f (x) ∶ x ∈ J} and MJ (f ) = sup {f (x) ∶ x ∈ J} .
Let P be a partition of B into subboxes J. The lower sum and upper sum
of f over P are similarly
\[
L(f, P) = \sum_J m_J(f)\,\mathrm{vol}(J) \quad \text{and} \quad U(f, P) = \sum_J M_J(f)\,\mathrm{vol}(J).
\]
With minor grammatical modifications, this terminology includes the pre-
vious definition as a special case when n = 1 (e.g., volume reverts to length, as
it should revert to area when n = 2), so from now on we work in Rn . However,
keeping the cases n = 1 and n = 2 in mind should help to make the pandimen-
sional ideas of multivariable integration geometrically intuitive. If the box B
in Definition 6.1.5 has any sides of length zero then the upper and lower sums
are 0.
Graphing f over B in the usual fashion when n = 2 and interpreting the
lower and upper sum as sums of box-volumes shows that they are respectively
too small and too big to be the volume under the graph. (See Figure 6.6.)
Alternatively, if n = 2 or n = 3, then thinking of f as the density of a plate
or a block occupying the box B shows that the lower and upper sums are
too small and too big to be the object’s mass. Again, the hope is that as the
partitions become finer, the lower and upper sums will converge to a common
value that they are trapping from either side.
Figure 6.6. Too small and too big
The first result supports this intuition.
Lemma 6.1.6. For every box B, every partition P of B, and every bounded
function f ∶ B Ð→ R,
L(f, P ) ≤ U (f, P ).
Proof. For every subbox J of P , mJ (f ) ≤ MJ (f ) by the first bullet from earlier in this section with S = {f (x) ∶ x ∈ J}, while also vol(J) ≥ 0, and therefore mJ (f ) vol(J) ≤ MJ (f ) vol(J). Sum this relation over all subboxes J to get the result. ⊓⊔
The next thing to do is express the notion of taking a finer partition.
Definition 6.1.7 (Refinement). Let P and P ′ be partitions of B. Then P ′
is a refinement of P if P ′ ⊃ P .
Figure 6.7 illustrates the fact that if P ′ refines P then every subbox of P ′
is contained in a subbox of P . The literal manifestation in the figure of the
containment P ′ ⊃ P is that the set of points where a horizontal line segment
and a vertical line segment meet in the right side of the figure subsumes the
set of such points in the left side.
Refining a partition brings the lower and upper sums nearer each other:
Figure 6.7. Refinement
Lemma 6.1.8. Suppose that P ′ refines P as a partition of the box B. Then
L(f, P ) ≤ L(f, P ′ ) and U (f, P ′ ) ≤ U (f, P ).
See Figure 6.8 for a picture-proof for lower sums when n = 1, thinking of
the sums in terms of area. The formal proof is just a symbolic rendition of
the figure’s features.
Proof. Every subbox J of P divides further under the refinement P ′ into
subboxes J ′ . Since each J ′ ⊂ J, we have mJ ′ (f ) ≥ mJ (f ) by the second bullet
from earlier in this section, but even without reference to the bullet the idea
is that
Figure 6.8. Lower sum increasing under refinement
mJ ′ (f ) ≥ mJ (f ) because f has less opportunity to be small on the
subbox J ′ of J.
Thus
\[
\sum_{J' \subset J} m_{J'}(f)\,\mathrm{vol}(J') \ge \sum_{J' \subset J} m_J(f)\,\mathrm{vol}(J')
= m_J(f) \sum_{J' \subset J} \mathrm{vol}(J') = m_J(f)\,\mathrm{vol}(J).
\]
Sum the relation $\sum_{J' \subset J} m_{J'}(f)\,\mathrm{vol}(J') \ge m_J(f)\,\mathrm{vol}(J)$ over all subboxes J of P to get L(f, P ′ ) ≥ L(f, P ). The argument is similar for upper sums. ⊓⊔
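The lemma can be watched in action numerically. Continuing the sampling sketch from earlier in the section (again assuming the numpy Python library), refining P = {0, 1/2, 1} to P ′ = {0, 1/4, 1/2, 3/4, 1} raises the lower sum and lowers the upper sum:

    import numpy as np

    def lower_upper_sum(f, P, samples=10001):
        # Sampling approximation to L(f, P) and U(f, P), as before.
        L = U = 0.0
        for left, right in zip(P, P[1:]):
            vals = f(np.linspace(left, right, samples))
            L += vals.min() * (right - left)
            U += vals.max() * (right - left)
        return L, U

    f = lambda x: 1 / (1 + x**2)
    print(lower_upper_sum(f, [0.0, 0.5, 1.0]))              # (0.65, 0.9)
    print(lower_upper_sum(f, [0.0, 0.25, 0.5, 0.75, 1.0]))  # about (0.720, 0.845)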
The proof uncritically assumes that the volumes of a box’s subboxes sum
to the volume of the box. This assumption is true, and left as an exercise.
The emphasis here isn’t on boxes (which are straightforward), but on defining
the integral of a function f whose domain is a box. The next result helps
investigate whether the lower and upper sums indeed trap some value from
both sides. First we need a definition.
Definition 6.1.9 (Common refinement). Given two partitions of B,
P = P1 × P2 × ⋯ × Pn and P ′ = P1′ × P2′ × ⋯ × Pn′ ,
their common refinement is the partition
P ′′ = (P1 ∪ P1′ ) × (P2 ∪ P2′ ) × ⋯ × (Pn ∪ Pn′ ).
(See Figure 6.9.) The common refinement of two partitions P and P ′ is
certainly a partition that refines both P and P ′ , and it is the smallest such
partition. The union P ∪ P ′ is not taken as the definition of the common
refinement because it need not be a partition at all. The common refinement
does all the work for the next result.
Proposition 6.1.10 (Lower sums are at most upper sums). Let P and
P ′ be partitions of the box B, and let f ∶ B Ð→ R be any bounded function.
Then
L(f, P ) ≤ U (f, P ′ ).
Figure 6.9. Common refinement
Proof. Let P ′′ be the common refinement of P and P ′ . By the two lemmas,
L(f, P ) ≤ L(f, P ′′ ) ≤ U (f, P ′′ ) ≤ U (f, P ′ ),

proving the result. ⊓⊔
Exercises
6.1.1. (a) Let I = [0, 1], let P = {0, 1/2, 1}, let P ′ = {0, 3/8, 5/8, 1}, and let P ′′
be the common refinement of P and P ′ . What are the subintervals of P , and
what are their lengths? Same question for P ′ . Same question for P ′′ .
(b) Let B = I × I, let Q = P × {0, 1/2, 1}, let Q′ = P ′ × {0, 1/2, 1}, and let Q′′ be the common refinement of Q and Q′ . What are the subboxes of Q and what are their areas? Same question for Q′ . Same question for Q′′ .
6.1.2. Show that the lengths of the subintervals of every partition of [a, b]
sum to the length of [a, b]. Same for the areas of the subboxes of [a, b] × [c, d].
Generalize to Rn .
6.1.3. Let J = [0, 1]. Compute mJ (f ) and MJ (f ) for each of the following
functions f ∶ J Ð→ R.
(a) f (x) = x(1 − x),
(b) $f(x) = \begin{cases} 1 & \text{if } x \text{ is irrational}, \\ 1/m & \text{if } x = n/m \text{ in lowest terms, } n, m \in \mathbb{Z} \text{ and } m > 0, \end{cases}$
(c) $f(x) = \begin{cases} (1 - x)\sin(1/x) & \text{if } x \ne 0, \\ 0 & \text{if } x = 0. \end{cases}$
6.1.4. (a) Let I, P , P ′ , and P ′′ be as in Exercise 6.1.1(a), and let f (x) = x2
on I. Compute the lower sums L(f, P ), L(f, P ′ ), L(f, P ′′ ) and the correspond-
ing upper sums, and check that they conform to Lemma 6.1.6, Lemma 6.1.8,
and Proposition 6.1.10.
(b) Let B, Q, Q′ , and Q′′ be as in Exercise 6.1.1(b), and define f ∶ B Ð→ R
by
\[
f(x, y) = \begin{cases} 0 & \text{if } 0 \le x < 1/2, \\ 1 & \text{if } 1/2 \le x \le 1. \end{cases}
\]
Compute L(f, Q), L(f, Q′ ), L(f, Q′′ ), and the corresponding upper sums,
and check that they conform to Lemma 6.1.6, Lemma 6.1.8, and Proposi-
tion 6.1.10.
6.1.5. Draw the Cartesian product ([a1 , b1 ]∪[c1 , d1 ])×([a2 , b2 ]∪[c2 , d2 ]) ⊂ R2
where a1 < b1 < c1 < d1 and similarly for the other subscript.
6.1.6. When is a Cartesian product empty?

6.1.7. Show that the union of partitions of a box B need not be a partition of B.

6.1.8. Draw a picture illustrating the proof of Proposition 6.1.10 when n = 1.
6.2 Definition of the Integral
Fix a nonempty compact box B and a bounded function f ∶ B Ð→ R. The set
of lower sums of f over all partitions P of B,
{L(f, P ) ∶ P is a partition of B} ,
is nonempty because such partitions exist (as observed in the previous sec-
tion), and similarly for the set of upper sums. Proposition 6.1.10 shows that
the set of lower sums is bounded above by every upper sum, and similarly the
set of upper sums is bounded below. Thus the next definition is natural.
Definition 6.2.1 (Lower integral, upper integral, integrability, inte-
gral). The lower integral of f over B is the least upper bound of the lower
sums of f over all partitions P ,
\[
L\!\int_B f = \sup \{ L(f, P) : P \text{ is a partition of } B \}.
\]
Similarly, the upper integral of f over B is the greatest lower bound of the
upper sums of f over all partitions P ,

U ∫B f = inf {U (f, P ) ∶ P is a partition of B} .

The function f is called integrable over B if the lower and upper integrals
are equal, i.e., if L ∫B f = U ∫B f . In this case, their shared value is called the
integral of f over B and written ∫B f .

So we have a quantitative definition that seems appropriate. The integral,


if it exists, is at least as big as every lower sum and at least as small as
every upper sum; and it is specified as the common value that is approached
from below by lower sums and from above by upper sums. Less formally,
if quantities that we view as respectively too small and too big approach a
common value, then that value must be what we’re after.
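To make the definition concrete, here is a small Python sketch that computes lower and upper sums over uniform partitions, with mJ (f ) and MJ (f ) estimated by dense sampling (adequate only for tame integrands, so this is an illustration rather than part of the formal development):

    import numpy as np

    def lower_upper_sums(f, a, b, n, samples=200):
        # Uniform n-subinterval partition of [a, b]; estimate the inf and sup
        # of f on each subinterval by sampling.
        pts = np.linspace(a, b, n + 1)
        L = U = 0.0
        for lo, hi in zip(pts[:-1], pts[1:]):
            vals = f(np.linspace(lo, hi, samples))
            L += vals.min() * (hi - lo)
            U += vals.max() * (hi - lo)
        return L, U

    # For f(x) = x^2 on [0, 1] the two sums squeeze toward the integral 1/3.
    for n in (4, 16, 64, 256):
        print(n, lower_upper_sums(lambda x: x**2, 0.0, 1.0, n))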
The following lemma shows that L ∫B f ≤ U ∫B f . Its proof provides an
example of how to work with lower and upper bounds. Note that the argument
does not require a contradiction or an ε, but rather it goes directly to the point.
Lemma 6.2.2 (Persistence of order). Let L and U be nonempty sets of
real numbers such that

ℓ ≤ u    for all ℓ ∈ L and u ∈ U. (6.1)

Then sup(L) and inf(U) exist, and they satisfy

sup(L) ≤ inf(U).

Proof. The given condition (6.1) can be rephrased as

for each ℓ ∈ L, ℓ ≤ u for all u ∈ U,

meaning precisely that

each ℓ ∈ L is a lower bound of U.

Since U is nonempty and has lower bounds, it has a greatest lower bound
inf(U). Since each ℓ ∈ L is a lower bound and inf(U) is the greatest lower
bound,
ℓ ≤ inf(U) for each ℓ ∈ L,
meaning precisely that

inf(U) is an upper bound of L.

Since L is nonempty and has an upper bound, it has a least upper bound
sup(L). Since sup(L) is the least upper bound and inf(U) is an upper bound,

sup(L) ≤ inf(U).

This is the desired result. ⊔



Again let a nonempty compact box B and a bounded function f ∶ B Ð→ R
be given. The lemma shows that L ∫B f ≤ U ∫B f (exercise). Therefore, to
show that ∫B f exists, it suffices to show only that the reverse inequality
holds, L ∫B f ≥ U ∫B f .
Not all bounded functions f ∶ B Ð→ R are integrable. The standard coun-
terexample is the interval B = [0, 1] and the function


f ∶ B Ð→ R,    f (x) = {1 if x is rational; 0 if x is irrational}.

Chasing through the definitions shows that for this B and f , every lower
sum is L(f, P ) = 0, so the lower integral is L ∫B f = sup {0} = 0. Similarly,
U ∫B f = 1. Since the upper and lower integrals don’t agree, ∫B f does not
exist.
So the questions are, what functions are integrable, or at least, what are
some general classes of integrable functions, and how does one evaluate their
integrals? Working from the definitions, as in the last example, is a good
exercise in simple cases to get familiar with the machinery, but as a general
procedure it is hopelessly unwieldy. Here is one result that will help us in the
next section to show that continuous functions are integrable.
Proposition 6.2.3 (Integrability criterion). Let B be a box, and let f ∶
B Ð→ R be a bounded function. Then f is integrable over B if and only if for
every ε > 0, there exists a partition P of B such that U (f, P ) − L(f, P ) < ε.
Proof. ( Ô⇒ ) Let f be integrable over B and let ε > 0 be given. Since
∫B f − ε/2 is less than the least upper bound of the lower sums, it is not an
upper bound of the lower sums, and similarly ∫B f + ε/2 is not a lower bound
of the upper sums. Thus there exist partitions P and P ′ of B such that

L(f, P ) > ∫B f − ε/2 and U (f, P ′ ) < ∫B f + ε/2.

Let P ′′ be the common refinement of P and P ′ . Then since refining increases
lower sums and decreases upper sums, also

L(f, P ′′ ) > ∫B f − ε/2 and U (f, P ′′ ) < ∫B f + ε/2.

This shows that U (f, P ′′ ) − L(f, P ′′ ) < ε, as required.

( ⇐Ô ) We need to show that U ∫B f − L ∫B f = 0. To do so, use the little


principle that to prove that a nonnegative number is zero, it suffices to show
that it is less than every positive number. Let ε > 0 be given. By assumption
there exists a partition P such that

U (f, P ) − L(f, P ) < ε,

and by the definition of upper and lower integral, also

L(f, P ) ≤ L ∫B f ≤ U ∫B f ≤ U (f, P ).

The last two displays combine to give

U ∫B f − L ∫B f < ε.

Since the positive number ε is arbitrary, U ∫B f − L ∫B f = 0 as desired. ⊔




Here is an example of using the integrability criterion. It subsumes the
result from one-variable calculus that if ∫_a^b f exists then also ∫_a^c f and ∫_c^b f
exist for every c between a and b, and they sum to ∫_a^b f .

Proposition 6.2.4. Let B be a box, let f ∶ B Ð→ R be a bounded function,


and let P be a partition of B. If f is integrable over B then f is integrable
over each subbox J of P , in which case

∑J ∫J f = ∫B f.

Proof. Consider any partition P ′ of B that refines P . For each subbox J of P ,


let PJ′ = P ′ ∩ J, a partition of J. Let the symbol J ′ denote subboxes of P ′ ,
and compute that
L(f, P ′ ) = ∑J ′ mJ ′ (f ) vol(J ′ ) = ∑J ∑J ′ ⊂J mJ ′ (f ) vol(J ′ ) = ∑J L(f, PJ′ ).

Similarly, U (f, P ′ ) = ∑J U (f, PJ′ ).
Suppose that f is integrable over B. Let an arbitrary ε > 0 be given. By
“ Ô⇒ ” of the integrability criterion, there exists a partition P ′ of B such
that
U (f, P ′ ) − L(f, P ′ ) < ε.
Since refining a partition cannot increase the difference between the upper and
lower sums, we may replace P ′ by its common refinement with P and thus
assume that P ′ refines P . Therefore the formulas from the previous paragraph
show that
∑J (U (f, PJ′ ) − L(f, PJ′ )) < ε,
and so
U (f, PJ′ ) − L(f, PJ′ ) < ε for each subbox J of B.
Therefore f is integrable over each subbox J of B by “ ⇐Ô ” of the integra-
bility criterion.
Now assume that f is integrable over B and hence over each subbox J.
Still letting P ′ be any partition of B that refines P , the integral over each
subbox J lies between the corresponding lower and upper sums, and so

L(f, P ′ ) = ∑J L(f, PJ′ ) ≤ ∑J ∫J f ≤ ∑J U (f, PJ′ ) = U (f, P ′ ).

Thus ∑J ∫J f is an upper bound of all lower sums L(f, P ′ ) and a lower bound
of all upper sums U (f, P ′ ), giving

L ∫B f ≤ ∑J ∫J f ≤ U ∫B f.

But L ∫B f = U ∫B f = ∫B f because f is integrable over B, and so the inequal-


ities in the previous display collapse to give the desired result. ⊔


Similar techniques show that the converse of the proposition holds as well,
so that given B, f , and P , f is integrable over B if and only if f is integrable
over each subbox J, but we do not need this full result. Each of the proposition
and its converse requires both implications of the integrability criterion.
The symbol B denotes a box in the next set of exercises.

Exercises

6.2.1. Let f ∶ B Ð→ R be a bounded function. Explain how Lemma 6.2.2


shows that L ∫B f ≤ U ∫B f .

6.2.2. Let U and L be real numbers satisfying U ≥ L. Show that U = L if and


only if for all ε > 0, U − L < ε.

6.2.3. Let f ∶ B Ð→ R be the constant function f (x) = k for all x ∈ B. Show


that f is integrable over B and ∫B f = k ⋅ vol(B).

6.2.4. Granting that every interval of positive length contains both rational
and irrational numbers, fill in the details in the argument that the function
f ∶ [0, 1] Ð→ R with f (x) = 1 for rational x and f (x) = 0 for irrational x is
not integrable over [0, 1].

6.2.5. Let B = [0, 1] × [0, 1] ⊂ R2 . Define a function f ∶ B Ð→ R by




f (x, y) = {0 if 0 ≤ x < 1/2; 1 if 1/2 ≤ x ≤ 1}.

Show that f is integrable and ∫B f = 1/2.

6.2.6. This exercise shows that integration is linear. Let f ∶ B Ð→ R and


g ∶ B Ð→ R be integrable.
(a) Let P be a partition of B and let J be some subbox of P . Show that

mJ (f ) + mJ (g) ≤ mJ (f + g) ≤ MJ (f + g) ≤ MJ (f ) + MJ (g).

Show that consequently,

L(f, P ) + L(g, P ) ≤ L(f + g, P ) ≤ U (f + g, P ) ≤ U (f, P ) + U (g, P ).

(b) Part (a) of this exercise obtained comparisons between lower and upper
sums, analogously to the first paragraph of the proof of Proposition 6.2.4.
Argue analogously to the rest of the proof to show that ∫B (f + g) exists and
equals ∫B f + ∫B g. (One way to begin is to use the integrability criterion twice
and then a common refinement to show that there exists a partition P of B
such that U (f, P ) − L(f, P ) < ε/2 and U (g, P ) − L(g, P ) < ε/2.)
(c) Let c ≥ 0 be any constant. Let P be any partition of B. Show that for
every subbox J of P ,

mJ (cf ) = c mJ (f ) and MJ (cf ) = c MJ (f ).

Explain why consequently

L(cf, P ) = c L(f, P ) and U (cf, P ) = c U (f, P ).

Explain why consequently

L ∫B cf = c L ∫B f and U ∫B cf = c U ∫B f.

Explain why consequently ∫B cf exists and

∫B cf = c ∫B f.

(d) Let P be any partition of B. Show that for every subbox J of P ,

mJ (−f ) = −MJ (f ) and MJ (−f ) = −mJ (f ).

Explain why consequently

L(−f, P ) = −U (f, P ) and U (−f, P ) = −L(f, P ).

Explain why consequently

L ∫B (−f ) = −U ∫B f and U ∫B (−f ) = −L ∫B f.

Explain why consequently ∫B (−f ) exists and

∫B (−f ) = − ∫B f.

Explain why the work so far here in part (d) combines with part (c) to show
that for every c ∈ R (positive, zero, or negative), ∫B cf exists and

∫B cf = c ∫B f.

6.2.7. This exercise shows that integration preserves order. Let f ∶ B Ð→ R


and g ∶ B Ð→ R both be integrable, and suppose that f ≤ g, meaning that
f (x) ≤ g(x) for all x ∈ B. Show that ∫B f ≤ ∫B g. (Comment: Even though
f (x) ≤ g(x) for all x, upper sums for f can be bigger than lower sums for g (!),
so the argument requires a little finesse. Perhaps begin by explaining why the
previous exercise lets us show instead that ∫B (g − f ) ≥ 0. That is, introducing
the function h = g − f , we have h(x) ≥ 0 for all x and we need to show that
∫B h ≥ 0. This is precisely the original problem with g = h and f = 0, so once
one has assimilated this idea, one often says in similar contexts, “We may
take f = 0.”)

6.2.8. Suppose that f ∶ B Ð→ R is integrable, and that so is ∣f ∣. Show that


∣∫B f ∣ ≤ ∫B ∣f ∣.

6.2.9. Prove the converse to Proposition 6.2.4: Let B be a box, let f ∶ B Ð→ R


be a bounded function, and let P be a partition of B. If f is integrable over
each subbox J of P then f is integrable over B. (You may quote the formulas
from the first paragraph of the proof in the text, since that paragraph makes
no assumptions of integrability. It may help to let b denote the number of
subboxes J, so that this quantity has a handy name.)

6.3 Continuity and Integrability


Although the integrability criterion gives a test for the integrability of any
specific function f , it is cumbersome to apply case by case. But handily, it
will provide the punchline of the proof of the next theorem, which says that
a natural class of functions is integrable.

Theorem 6.3.1 (Continuity implies integrability). Let B be a box, and


let f ∶ B Ð→ R be a continuous function. Then f is integrable over B.

To prove this theorem, as we will at the end of this section, we first need to
sharpen our understanding of continuity on boxes. The version of continuity
that we’re familiar with isn’t strong enough to prove certain theorems, this
one in particular. Formulating the stronger version of continuity requires first
revising the grammar of the familiar brand.

Definition 6.3.2 (Sequential continuity). Let S ⊂ Rn be a set, and let


f ∶ S Ð→ Rm be a mapping. For every x ∈ S, f is sequentially continuous
at x if for every sequence {xν } in S converging to x, the sequence {f (xν )}
converges to f (x). The mapping f is sequentially continuous on S if f is
sequentially continuous at each point x in S.

Definition 6.3.3 (ε-δ continuity). Let S ⊂ Rn be a set, and let f ∶ S Ð→ Rm


be a mapping. For every x ∈ S, f is ε-δ continuous at x if for every ε > 0
there exists some δ > 0 such that

if x̃ ∈ S and ∣x̃ − x∣ < δ then ∣f (x̃) − f (x)∣ < ε.

The mapping f is ε-δ continuous on S if f is ε-δ continuous at each point


x in S.

Both definitions of continuity at a point x capture the idea that as inputs


to f approach x, the corresponding outputs from f should approach f (x).
This idea is exactly the substance of sequential continuity. (See Figure 6.10.)
For ε-δ continuity at x, imagine that someone has drawn a ball of radius ε
(over which you have no control, and it’s probably quite small) about the


Figure 6.10. Sequential continuity

point f (x) in Rm . The idea is that in response, you can draw a ball of some
radius—this is the δ in the definition—about the point x in S such that every
point in the δ-ball about x gets taken by f into the ε-ball about f (x). (See
Figure 6.11.)


Figure 6.11. ε-δ continuity

For example, the function f ∶ Rn Ð→ R given by f (x) = 2∣x∣ is ε-δ contin-


uous on Rn . To show this, consider any point x ∈ Rn , and let ε > 0 be given.
Set δ = ε/2. Then whenever ∣x̃ − x∣ < δ, a calculation that uses the generalized
triangle inequality at the third step shows that

∣f (x̃) − f (x)∣ = ∣2∣x̃∣ − 2∣x∣∣ = 2∣∣x̃∣ − ∣x∣∣ ≤ 2∣x̃ − x∣ < 2δ = ε,

as needed. Thus f is ε-δ continuous at x, and since x is arbitrary, f is ε-δ


continuous on Rn .
For another example, to prove that the function f ∶ R Ð→ R given by
f (x) = x² is ε-δ continuous on R, consider any x ∈ R and let ε > 0 be given.
This time set
δ = min{1, ε/(1 + 2∣x∣)}.
This choice of δ may look strange, but its first virtue is that since δ ≤ 1, for
every x̃ ∈ R with ∣x̃ − x∣ < δ, we have ∣x̃ + x∣ = ∣x̃ − x + 2x∣ ≤ ∣x̃ − x∣ + 2∣x∣ < 1 + 2∣x∣;
and its second virtue is that also δ ≤ ε/(1 + 2∣x∣). These conditions fit perfectly
into the following calculation,

∣f (x̃) − f (x)∣ = ∣x̃² − x²∣ = ∣x̃ + x∣ ∣x̃ − x∣ < (1 + 2∣x∣) ⋅ ε/(1 + 2∣x∣) = ε,

by the two virtues of δ.

And this is exactly what we needed to show that f is ε-δ continuous at x.


Since x is arbitrary, f is ε-δ continuous on R.
The tricky part of writing this sort of proof is finding the right δ. Doing
so generally requires some preliminary fiddling around on scratch paper. For
the proof just given, the initial scratch calculation would be

∣f (x̃) − f (x)∣ = ∣x̃² − x²∣ = ∣(x̃ + x)(x̃ − x)∣ = ∣x̃ + x∣ ∣x̃ − x∣,

exhibiting the quantity that we need to bound by ε as a product of two terms,


the second bounded directly by whatever δ we choose. The idea is initially
to make the first term reasonably small by stipulating that δ be at most 1,
giving as in the previous paragraph

∣x̃ + x∣ = ∣x̃ − x + 2x∣ ≤ ∣x̃ − x∣ + 2∣x∣ < 1 + 2∣x∣.

Now ∣f (x̃) − f (x)∣ ≤ (1 + 2∣x∣)∣x̃ − x∣. Next we constrain δ further to make this
estimate less than ε when ∣x̃ − x∣ < δ. Stipulating that δ be at most ε/(1 + 2∣x∣)
does so. Hence the choice of δ in the proof.
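A quick numerical spot-check of this δ is easy to run, though of course sampling proves nothing; the following Python sketch tests random points x̃ within δ of x:

    import random

    def delta_works(x, eps, trials=10000):
        # The choice from the proof: delta = min{1, eps/(1 + 2|x|)}.
        delta = min(1.0, eps / (1 + 2 * abs(x)))
        return all(abs((x + random.uniform(-delta, delta))**2 - x**2) < eps
                   for _ in range(trials))

    print(delta_works(x=100.0, eps=0.01))   # True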
To prove instead that the function f ∶ R Ð→ R given by f (x) = x² is
sequentially continuous on R, again take any x ∈ R. Consider any sequence
{xν } in R converging to x. To show that the sequence {f (xν )} in R converges
to f (x), compute that by sequence limit properties,

{f (xν )} = {(xν )²} → x² = f (x).

Since x is arbitrary, f is sequentially continuous on R. Note how much easier


this is than the ε-δ argument. Sequential continuity can be easier to establish,
as shown here, but also it can be harder to exploit.
In fact, there is no need to continue distinguishing between sequential
continuity and ε-δ continuity, because each type of continuity implies the
other.

Proposition 6.3.4 (Sequential and ε-δ continuity are equivalent). For


every set S ⊂ Rn and every mapping f ∶ S Ð→ Rm , f is sequentially continuous
on S if and only if f is ε-δ continuous on S.

Proof. Let x be any point of S.


( ⇐Ô ) Suppose that f is ε-δ continuous at x. We need to show that f is
sequentially continuous at x. So, let {xν } be a sequence in S converging to x.

To show that {f (xν )} converges to f (x) means that given an arbitrary ε > 0,
we need to exhibit a starting index N such that
for all ν > N , ∣f (xν ) − f (x)∣ < ε.
The definition of ε-δ continuity gives a δ such that
if x̃ ∈ S and ∣x̃ − x∣ < δ then ∣f (x̃) − f (x)∣ < ε.
And since {xν } converges in S to x, there is some starting index N such that
for all ν > N , ∣xν − x∣ < δ.
The last two displays combine to imply the first display, showing that f is
sequentially continuous at x.
( Ô⇒ ) Now suppose that f is not ε-δ continuous at x. Then for some ε > 0,
no δ > 0 satisfies the relevant conditions. In particular, δ = 1/ν fails the
conditions for ν = 1, 2, 3, . . . . So there is a sequence {xν } in S such that
∣xν − x∣ < 1/ν and ∣f (xν ) − f (x)∣ ≥ ε, ν = 1, 2, 3, . . . .
The display shows that f is not sequentially continuous at x.
Since the two types of continuity imply each other at each point x of S,
they imply each other on S. ⊓

The fact that the second half of this proof has to proceed by contrapo-
sition, whereas the first half is straightforward, shows that ε-δ continuity is
a little more powerful than sequential continuity on the face of it, until we
do the work of showing that they are equivalent. Also, the very definition
of ε-δ continuity seems harder for students than the definition of sequential
continuity, which is why these notes have used sequential continuity up to
now. However, the exceptionally alert reader may have recognized that the
second half of this proof is essentially identical to the proof of the persistence
of inequality principle (Proposition 2.3.10). Thus, the occasional arguments
in these notes that cited the persistence of inequality were tacitly using ε-δ
continuity already, because sequential continuity was not transparently strong
enough for their purposes. The reader who dislikes redundancy is encouraged
to rewrite the second half of this proof to quote the persistence of inequality
rather than re-prove it.
The reason that we bother with this new ε-δ type of continuity, despite
its equivalence to sequential continuity meaning that it is nothing new, is
that its grammar generalizes to describe the more powerful continuity that
we need. The two examples above of ε-δ continuity differed: in the example
f (x) = x², the choice of δ = min{1, ε/(2∣x∣ + 1)} for any given x and ε to
satisfy the definition of ε-δ continuity at x depended not only on ε but on x
as well. In the example f (x) = 2∣x∣, the choice of δ = ε/2 for any given x
and ε depended only on ε, i.e., it was independent of x. Here, one value of δ
works simultaneously at all values of x once ε is specified. This technicality
has enormous consequences.

Definition 6.3.5 (Uniform continuity). Let S ⊂ Rn be a set, and let f ∶


S Ð→ Rm be a mapping. Then f is uniformly continuous on S if for every
ε > 0 there exists some δ > 0 such that

if x, x̃ ∈ S and ∣x̃ − x∣ < δ then ∣f (x̃) − f (x)∣ < ε.

The nomenclature uniformly continuous on S is meant to emphasize that


given ε > 0, a single, uniform value of δ works in the definition of ε-δ continuity
simultaneously for all points x ∈ S. The scope of its effectiveness is large-scale.
Uniform continuity depends on both the mapping f and the set S.
A visual image may help distinguish between the old notion of continuity
(henceforth called pointwise continuity) and the new, stronger notion of
uniform continuity. Imagine the graph of a function f ∶ S Ð→ R (where S ⊂ R),
and take some input point x. Then f is pointwise continuous at x if for
every ε > 0, one can draw a rectangle of height 2ε centered at the point
(x, f (x)) that is narrow enough that the graph of f protrudes only from
the sides of the rectangle, not the top or bottom. The base of the rectangle
is 2δ, where δ comes from ε-δ continuity. Note that for a given ε, one may
need rectangles of various widths at different points. A rectangle that works
at x may not be narrow enough to work again at some other point x̃. (See
Figure 6.12, where ever-narrower rectangles are required as we move to the
left on the graph.) On the other hand, the function f is uniformly continuous
if given ε > 0, there is a single 2ε × 2δ rectangle that can slide along the entire
graph of f with its centerpoint on the graph, and the graph never protruding
from the top or bottom. (See Figure 6.13. A tacit assumption here is that the
graph of f either doesn’t extend beyond the picture frame, or it continues to
rise and fall tamely if it does.) By contrast, no single rectangle will work in
Figure 6.12.

Figure 6.12. One ε can require different values of δ at different points x



Figure 6.13. Or one δ can work uniformly for ε at all x

The domain of the nonuniformly continuous function f (x) = sin(1/x) in


Figure 6.12 is not compact, not being closed at its left endpoint. We are about
to prove that on a compact domain, uniform continuity follows for free from
pointwise continuity. In conjunction with the compactness of the boxes B over
which we integrate, this is the crucial ingredient for proving Theorem 6.3.1
(continuous functions are integrable over boxes), the goal of this section.

Theorem 6.3.6 (Continuity on compact sets is uniform). Let K ⊂ Rn


be compact, and let f ∶ K Ð→ Rm be pointwise continuous on K. Then f is
uniformly continuous on K.

As with the proof that sequential continuity implies ε-δ continuity, we


proceed by contraposition. That is, we show that in the case of a compact
domain, if f is not uniformly continuous then f cannot be continuous either.

Proof. Suppose that f is not uniformly continuous. Then for some ε > 0 there
exists no suitable uniform δ, and so in particular no reciprocal positive inte-
ger 1/ν will serve as δ in the definition of uniform continuity. Thus for each
ν ∈ Z+ there exist points xν and yν in K such that

∣xν − yν ∣ < 1/ν and ∣f (xν ) − f (yν )∣ ≥ ε. (6.2)

Consider the sequences {xν } and {yν } in K. By the sequential characteri-


zation of compactness (Theorem 2.4.13), {xν } has a convergent subsequence
converging in K; call it {xνk }. Throw away the rest of the xν ’s and throw
away the yν ’s of corresponding index, reindex the remaining terms of the two
sequences, and now {xν } converges to some p ∈ K. Since ∣xν − yν ∣ < 1/ν for
each ν (this remains true after the reindexing), {yν } converges to p as well.
So
lim xν = p = lim yν ,
and thus
f (lim xν ) = f (lim yν ).
But the second condition in (6.2) shows that

lim f (xν ) ≠ lim f (yν ),



i.e., even if both limits exist then they still cannot be equal. (If they both
exist and they agree then lim(f (xν ) − f (yν )) = 0, but this is incompatible
with the second condition in (6.2), ∣f (xν ) − f (yν )∣ ≥ ε for all ν.) The previous
two displays combine to show that

lim f (xν ) ≠ f (lim xν ) or lim f (yν ) ≠ f (lim yν ),

i.e., at least one of the left sides in the previous display doesn’t match the
corresponding right side or doesn’t exist at all. Thus f is not continuous at p.

Recall the main result that we want: If B is a box in Rn and f ∶ B Ð→ R


is continuous then ∫B f exists. The result is easy to prove now. The crucial
line of the proof is the opener.
Proof (of Theorem 6.3.1). The continuity of f on B is uniform. Thus, given
ε > 0, there exists δ > 0 such that

if x, x̃ ∈ B and ∣x̃ − x∣ < δ then ∣f (x̃) − f (x)∣ < ε/vol(B).
(We may take vol(B) > 0, making the volume safe to divide by, since otherwise
all lower sums and upper sums are 0, making the integral 0 as well, and there
is nothing to prove.) Take a partition P of B whose subboxes J have sides
of length less than δ/n. By the size bounds (Proposition 2.2.7), all points x
and x̃ in a given subbox J satisfy ∣x̃ − x∣ < δ, so

if x, x̃ ∈ J then ∣f (x̃) − f (x)∣ < ε/vol(B).
Let x and x̃ vary over J, and cite the extreme value theorem (Theorem 2.4.15)
to show that
MJ (f ) − mJ (f ) < ε/vol(B).
Multiply by vol(J) to get

MJ (f ) vol(J) − mJ (f ) vol(J) < ε vol(J)/vol(B),
and sum this relation over subboxes J to get

U (f, P ) − L(f, P ) < ε.

The integrability criterion now shows that ∫B f exists. ⊔



Integration synthesizes local data at each point of a domain into one whole.
The idea of this section is that integrating a continuous function over a box
is more than a purely local process: it requires the uniform continuity of the
function all through the box, a large-scale simultaneous estimate that holds
in consequence of the box being compact.

Exercises

6.3.1. Reread the proof that sequential and ε-δ continuity are equivalent; then
redo the proof with the book closed.
6.3.2. Let f ∶ R Ð→ R be the cubing function f (x) = x³. Give a direct proof
that f is ε-δ continuous on R. (Hint: A³ − B³ = (A − B)(A² + AB + B²).)
6.3.3. Here is a proof that the squaring function f (x) = x² is not uniformly
continuous on R. Suppose that some δ > 0 satisfies the definition of uniform
continuity for ε = 1. Set x = 1/δ and x̃ = 1/δ + δ/2. Then certainly ∣x̃ − x∣ < δ,
but

∣f (x̃) − f (x)∣ = ∣(1/δ + δ/2)² − (1/δ)²∣ = ∣1/δ² + 1 + δ²/4 − 1/δ²∣ = 1 + δ²/4 > ε.
This contradicts uniform continuity.
Is the cubing function of the previous exercise uniformly continuous on R?
On [0, 500]?
6.3.4. (a) Show that if I ⊂ R is an interval (possibly all of R), f ∶ I Ð→ R is
differentiable, and there exists a positive constant R such that ∣f ′ (x)∣ ≤ R for
all x ∈ I then f is uniformly continuous on I.
(b) Prove that sine and cosine are uniformly continuous on R.

6.3.5. Let f ∶ [0, +∞) Ð→ R be the square root function f (x) = √x. You may
take for granted that f is ε-δ continuous on [0, +∞).
(a) What does part (a) of the previous problem say about the uniform
continuity of f ?
(b) Is f uniformly continuous?
6.3.6. Let J be a box in Rn with sides of length less than δ/n. Show that all
points x and x̃ in J satisfy ∣x̃ − x∣ < δ.
6.3.7. For ∫B f to exist, it is sufficient that f ∶ B Ð→ R be continuous, but it
is not necessary. What preceding exercise provides an example of this? Here is
another example. Let B = [0, 1] and let f ∶ B Ð→ R be monotonic increasing,
meaning that if x1 < x2 in B then f (x1 ) ≤ f (x2 ). Show that such a function
is bounded, though it need not be continuous. Use the integrability criterion
to show that ∫B f exists.
6.3.8. The natural logarithm is defined as an integral. Let r ∶ R+ Ð→ R be the
reciprocal function, r(x) = 1/x for x > 0. The natural logarithm is


ln ∶ R+ Ð→ R,    ln(x) = {∫[1,x] r if x ≥ 1; −∫[x,1] r if 0 < x < 1}.

We know that the integrals in the previous display exist, because the reciprocal
function is continuous.

(a) Show that limx→∞ ln x/x = 0 as follows. Let some small ε > 0 be given.
For x > 2/ε, let u(x, ε) denote the sum of the areas of the boxes [1, 2/ε] ×[0, 1]
and [2/ε, x] × [0, ε/2]. Show that u(x, ε) ≥ ln x. (Draw a figure showing the
boxes and the graph of r, and use the words upper sum in your answer.)
Compute limx→∞ u(x, ε)/x (here ε remains fixed), and use your result to show
that u(x, ε)/x < ε for all large enough x. This shows that limx→∞ ln x/x = 0.
(b) Let a > 0 and b > 1 be fixed real numbers. Part (a) shows that

ln x/x < ln b/(a + 1) for all large x.

Explain why consequently

x^a/b^x < 1/x for all large x.

This proves that exponential growth dominates polynomial growth,


lim_{x→∞} x^a/b^x = 0,    a > 0, b > 1.

Thus, for example,


lim_{x→∞} x^1000000/1.0000001^x = 0,

even though the values of x^1000000/1.0000001^x are enormous as x begins to


grow.

6.4 Integration of Functions of One Variable

In a first calculus course one learns to do computations such as the following:


to evaluate
∫_{x=1}^{e} ((ln x)²/x) dx,
let u = ln x; then du = dx/x, and as x goes from 1 to e, u goes from 0 to 1, so
the integral equals
∫_{u=0}^{1} u² du = (u³/3) ∣_0^1 = 1/3.
Or such as this: to evaluate

√ √ ,
9 dx

0 1+ x
√ √
let u = 1 + x. Then some algebra shows that x = (u2 − 1)2 , and so dx =
4(u2 − 1)u du. Also, when x = 0, u = 1, and when x = 9, u = 2. Therefore the
integral is
278 6 Integration
2 (u2 − 1)u
√ √ = 4∫ du = 4 ∫ (u2 − 1) du
9 dx 2

0 1+ x 1 u 1
2
= 4 ( u3 − u) ∣ = .
1 16
3 1
3

Although both of these examples use substitution, they differ from each
other in a way that a first calculus course may not explain. The first substitu-
tion involved picking an x-dependent u (i.e., u = ln x) where u′ (x) (i.e., 1/x)
was present in the integral and got absorbed by the substitution. The second
substitution took an opposite form to the first: this time the x-dependent u
was inverted to produce a u-dependent x, and the factor u′ (x) was introduced
into the integral rather than eliminated from it. Somehow, two different things
are going on under the guise of u-substitution.
In this section we specialize our theory of multivariable integration to n = 1
and review two tools for evaluating one-dimensional integrals, the fundamen-
tal theorem of integral calculus (FTIC) and the change of variable theorem.
Writing these down precisely will clarify the examples we just worked. More
importantly, generalizing these results appropriately to n dimensions is the
subject of the remainder of these notes.
The multivariable integral notation of this chapter, specialized to one di-
mension, is ∫[a,b] f . For familiarity, replace this by the usual notation,

∫_a^b f = ∫[a,b] f    for a ≤ b.

As matters stand, the redefined notation ∫_a^b f makes sense only when a ≤ b,
so extend its definition to


∫_a^b f = − ∫_b^a f    for a > b.

Once this is done, the same relation between signed integrals holds regardless
of which (if either) of a and b is larger,
∫_a^b f = − ∫_b^a f    for all a and b.

Something nontrivial is happening here: when the multivariable integration of
this chapter is specialized to one dimension, it can be extended to incorporate
a sign convention to represent the order on R. If a < b then ∫_a^b describes
positive traversal along the real line from a up to b, while ∫_b^a describes negative
traversal from b down to a. This sort of thing does not obviously generalize
to higher dimensions, because Rn is not ordered.
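The sign convention is mechanical to encode. A Python sketch, with the unsigned integral over [min{a, b}, max{a, b}] approximated by a midpoint Riemann sum purely for illustration:

    def signed_integral(f, a, b, n=100000):
        # Encodes int_a^b f = -int_b^a f on top of an unsigned approximation.
        lo, hi = min(a, b), max(a, b)
        w = (hi - lo) / n
        val = sum(f(lo + (i + 0.5) * w) for i in range(n)) * w
        return val if a <= b else -val

    print(signed_integral(lambda x: x, 0, 1))   # approximately  1/2
    print(signed_integral(lambda x: x, 1, 0))   # approximately -1/2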
Casewise inspection shows that for every three points a, b, c ∈ R in any
order, and for every integrable function f ∶ [min{a, b, c}, max{a, b, c}] Ð→ R,
∫_a^c f = ∫_a^b f + ∫_b^c f.

Also, if f ∶ [min{a, b}, max{a, b}] Ð→ R takes the constant value k then
∫_a^b f = k(b − a),

again regardless of which of a and b is larger. These facts generalize Proposi-


tion 6.2.4 and Exercise 6.2.3 to signed one-variable integration.
Each of the next two theorems describes a sense in which one-variable
differentiation and integration are inverse operations. Both are called the fun-
damental theorem of integral calculus, but the second is more deserving of
the title because of how far it generalizes.

Theorem 6.4.1. Let the function f ∶ [a, b] Ð→ R be continuous. Define a


function
F ∶ [a, b] Ð→ R,    F (x) = ∫_a^x f.

Then F is differentiable on [a, b], and F ′ = f .

Proof. Let x and x + h lie in [a, b] with h ≠ 0. Study the difference quotient

(F (x + h) − F (x))/h = (∫_a^{x+h} f − ∫_a^x f )/h = (1/h) ∫_x^{x+h} f.

If h > 0 then m[x,x+h] (f ) ⋅ h ≤ ∫_x^{x+h} f ≤ M[x,x+h] (f ) ⋅ h, and dividing

through by h shows that the difference quotient lies between m[x,x+h] (f ) and
M[x,x+h] (f ). Thus the difference quotient is forced to f (x) as h goes to 0,
since f is continuous. A similar analysis applies when h < 0.
Alternatively, an argument using the characterizing property of the deriva-
tive and the Landau–Bachmann notation does not require separate cases de-
pending on the sign of h. Compute that

F (x + h) − F (x) − f (x)h = ∫_x^{x+h} (f − f (x)) = ∫_x^{x+h} o(1) = o(h).

But here the reader needs to believe, or check, the last equality. ⊔
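The theorem is also easy to watch numerically. In the following Python sketch (an illustration only), F is built from cumulative trapezoid sums, and its difference quotients track f closely:

    import numpy as np

    f = np.cos
    a, b, n = 0.0, 3.0, 100000
    x = np.linspace(a, b, n + 1)
    dx = (b - a) / n
    # F(x_i): the integral of f from a to x_i via cumulative trapezoid sums.
    F = np.concatenate([[0.0], np.cumsum((f(x[:-1]) + f(x[1:])) / 2 * dx)])
    # Difference quotients of F approximate f at the subinterval midpoints.
    dFdx = np.diff(F) / dx
    print(np.max(np.abs(dFdx - f((x[:-1] + x[1:]) / 2))))   # very small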

The alert reader will recall the convention in these notes that a mapping
can be differentiable only at an interior point of its domain. In particular,
the derivative of a function F ∶ [a, b] Ð→ R is undefined at a and b. Hence
the statement of Theorem 6.4.1 is inconsistent with our usage, and strictly
speaking the theorem should conclude that F is continuous on [a, b] and
differentiable on (a, b) with derivative F ′ = f . The given proof does show this,
since the existence of the one-sided derivative of F at each endpoint makes F
continuous there.

However, we prohibited derivatives at endpoints only to tidy up our state-


ments. An alternative would have been to make the definition that for every
compact, connected set K ⊂ Rn (both of these terms were discussed in Sec-
tion 2.4), a mapping f ∶ K Ð→ Rm is differentiable on K if there exist an
open set A ⊂ Rn containing K and an extension of f to a differentiable map-
ping f ∶ A Ð→ Rm . Here the word extension means that the new function f
on A has the same behavior on K as the old f . One reason that we avoided
this slightly more general definition is that it is tortuous to track through the
material in Chapter 4, especially for the student who is seeing the ideas for
the first time. Also, this definition requires that the critical point theorem
(stating that the extrema of a function occur at points where its derivative
is 0) be fussily rephrased to say that this criterion applies only to the extrema
that occur at the interior points of the domain. From the same preference for
tidy statements over fussy ones, we now allow the more general definition of
the derivative.
Proving the FTIC from Theorem 6.4.1 requires the observation that if two
functions F1 , F2 ∶ [a, b] Ð→ R are differentiable, and F1′ = F2′ , then F1 = F2 + c
for some constant c. The observation follows from the mean value theorem
and is an exercise.

Theorem 6.4.2 (Fundamental theorem of integral calculus). Suppose


that the function F ∶ [a, b] Ð→ R is differentiable and F ′ is continuous. Then

∫_a^b F ′ = F (b) − F (a).

Proof. Define F2 ∶ [a, b] Ð→ R by F2 (x) = ∫_a^x F ′ . Then F2′ = F ′ by the pre-

ceding theorem, so (Exercise 6.4.3) there exists a constant c such that for all
x ∈ [a, b],
F2 (x) = F (x) + c. (6.3)
Plug x = a into (6.3) to get 0 = F (a) + c, so c = −F (a). Next plug in x = b
to get F2 (b) = F (b) − F (a). Since F2 (b) = ∫_a^b F ′ by definition, the proof is

complete. ⊔

One can also prove the fundamental theorem with no reference to Theo-
rem 6.4.1, letting the mean value theorem do all the work instead. Compute
that for every partition P of [a, b], whose points are a = t0 < t1 < ⋯ < tk = b,
F (b) − F (a) = ∑_{i=1}^{k} (F (ti ) − F (ti−1 ))    (telescoping sum)
            = ∑_{i=1}^{k} F ′ (ci )(ti − ti−1 )    with each ci ∈ (ti−1 , ti ), by the MVT
            ≤ U (F ′ , P ).

Since P is arbitrary, F (b)−F (a) is a lower bound of the upper sums and hence
is at most the upper integral U ∫_a^b F ′ . Since F ′ is continuous, its integral exists

and the upper integral is the integral. That is,

F (b) − F (a) ≤ ∫_a^b F ′ .

A similar argument with lower sums gives the opposite inequality.


In one-variable calculus one learns various techniques to find antideriva-
tives; i.e., given continuous f , one finds F such that F ′ = f . Once this is done,
evaluating ∫_a^b f is merely plugging in to the FTIC. But since not all continu-
ous functions have antiderivatives that are readily found, or even possible to
write in an elementary form (for example, try f (x) = e^{−x²} or f (x) = sin(x²)),

the FTIC has its limitations.


Another tool for evaluating one-dimensional integrals is the change of vari-
able theorem. The idea is to transform one integral to another that may be
better suited to the FTIC.

Theorem 6.4.3 (Change of variable theorem; forward substitution


formula). Let φ ∶ [a, b] Ð→ R be differentiable with continuous derivative
and let f ∶ φ[a, b] Ð→ R be continuous. Then

∫_a^b (f ○ φ) ⋅ φ′ = ∫_{φ(a)}^{φ(b)} f.    (6.4)

Proof. Use Theorem 6.4.1 to define F ∶ φ[a, b] Ð→ R such that F ′ = f . By the


chain rule, F ○ φ has derivative (F ○ φ)′ = (F ′ ○ φ) ⋅ φ′ = (f ○ φ) ⋅ φ′ , which is
continuous on [a, b]. Thus by the FTIC twice,

∫_a^b (f ○ φ) ⋅ φ′ = ∫_a^b (F ○ φ)′ = (F ○ φ)(b) − (F ○ φ)(a)
                = F (φ(b)) − F (φ(a)) = ∫_{φ(a)}^{φ(b)} F ′ = ∫_{φ(a)}^{φ(b)} f.


One way to apply the change of variable theorem to an integral ∫_a^b g is
to recognize that the integrand takes the form g = (f ○ φ) ⋅ φ′ , giving the left
side of (6.4) for suitable f and φ such that the right side ∫_{φ(a)}^{φ(b)} f is easier

to evaluate. This method is called integration by forward substitution.


For instance, for the first integral ∫_{x=1}^{e} ((ln x)²/x) dx at the beginning of this
section, take
g ∶ R+ Ð→ R, g(x) = (ln x)²/x.
To evaluate ∫_1^e g, define

φ ∶ R+ Ð→ R, φ(x) = ln x

and
f ∶ R Ð→ R, f (u) = u².
Then g = (f ○ φ) ⋅ φ′ , and φ(1) = 0, φ(e) = 1, so by the change of variable
theorem,
∫_1^e g = ∫_1^e (f ○ φ) ⋅ φ′ = ∫_{φ(1)}^{φ(e)} f = ∫_0^1 f.

Since f has antiderivative F where F (u) = u³/3, the last integral equals F (1)−
F (0) = 1/3 by the FTIC.
The second integral at the beginning of the section was evaluated not by
the change of variable theorem as given, but by a consequence of it:

Corollary 6.4.4 (Inverse substitution formula). Let φ ∶ [a, b] Ð→ R be


continuous and let f ∶ φ[a, b] Ð→ R be continuous. Suppose further that φ is
invertible and that φ−1 is differentiable with continuous derivative. Then

∫_a^b (f ○ φ) = ∫_{φ(a)}^{φ(b)} f ⋅ (φ−1 )′ .

The formula in the corollary is the formula for integration by inverse


substitution. To obtain it from (6.4), consider the diagrams for forward and
inverse substitution:

[a, b] ──φ──▸ [φ(a), φ(b)]          [φ(a), φ(b)] ──φ⁻¹──▸ [a, b]
    ╲           │                         ╲            │
  f ○ φ         │ f                        f           │ f ○ φ
      ╲         ▾                           ╲          ▾
        ▸  R                                  ▸  R

Noting where the various elements of the left diagram occur in the forward
substitution formula ∫_a^b (f ○ φ) ⋅ φ′ = ∫_{φ(a)}^{φ(b)} f shows that applying the forward
substitution suitably to the right diagram gives ∫_{φ(a)}^{φ(b)} f ⋅ (φ−1 )′ = ∫_a^b (f ○ φ),
the inverse substitution formula as claimed.


To apply the formula in Corollary 6.4.4 to an integral ∫_a^b g, write the
integrand as g = f ○ φ, giving the left side, and then invert φ and differentiate
the inverse to see whether the right side is easier to evaluate. For instance, for
the second integral ∫_0^9 dx/√(1 + √x) at the beginning of the section, define

φ ∶ R≥0 Ð→ R≥1 , φ(x) = √(1 + √x)

and
f ∶ R≥1 Ð→ R, f (u) = 1/u.
Then the integral is
∫_0^9 dx/√(1 + √x) = ∫_0^9 (f ○ φ).

Let
u = φ(x) = √(1 + √x).
Then a little algebra gives
x = (u² − 1)² = φ−1 (u),
so that
(φ−1 )′ (u) = 4u(u² − 1).
Since φ(0) = 1 and φ(9) = 2, the integral becomes
∫_0^9 dx/√(1 + √x) = ∫_0^9 (f ○ φ) = ∫_1^2 f ⋅ (φ−1 )′ = 4 ∫_1^2 (u(u² − 1)/u) du,

and as before, this evaluates easily to 16/3.
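The two pieces of algebra here, the derivative of φ−1 and the final integral, can be checked symbolically; a Python sketch using sympy (assumed available, and serving only as a check):

    import sympy as sp

    u = sp.symbols('u', positive=True)
    phi_inverse = (u**2 - 1)**2
    print(sp.expand(sp.diff(phi_inverse, u)))        # 4*u**3 - 4*u, i.e. 4u(u^2 - 1)
    print(sp.integrate(4 * (u**2 - 1), (u, 1, 2)))   # 16/3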


The variable-based notation used to work the two integrals at the begin-
ning of this section, with x and u and dx and du, is much easier mnemonically
than the function-based notation used to rework them with the change of vari-
able theorem and its corollary. But a purist would object to it on two counts.
First, expressions such as (ln x)²/x and u² are not functions, they are the
outputs of functions, so strictly speaking we can’t integrate them. The prob-
lem is not serious, it is mere pedantry: we simply need to loosen our notation
to let ∫_{x=a}^{b} f (x) be synonymous with ∫_a^b f , at the cost of an unnecessary new

symbol x. This x is called a dummy variable, because another symbol would


do just as well: ∫_{y=a}^{b} f (y) and ∫_{♡=a}^{b} f (♡) also denote ∫_a^b f . At the theoretical

level, where we deal with functions as functions, this extra notation is useless
and cumbersome, but in any down-to-earth example it is in fact a convenience
because describing functions by formulas is easier and more direct than intro-
ducing new symbols to name them.
The second, more serious, objection to the variable-based notation is to
the dx, the du, and mysterious relations such as du = dx/x between them.
What kind of objects are dx and du? In a first calculus course they are typi-
cally described as infinitesimally small changes in x and u, but our theory of
integration is not based on such hazy notions; in fact, it was created in the
nineteenth century to answer objections to their validity. (Though infinitesi-
mals were revived and put on a firm footing in the 1960s, we have no business
with them here.) An alternative is to view dx and du as formal symbols that
serve, along with the integral sign ∫ , as bookends around the expression for the
function being integrated. This viewpoint leaves notation such as du = dx/x
still meaningless in its own right. In a first calculus course it may be taught
as a procedure with no real justification, whereas by contrast, the revisited
versions of the two integral-calculations of this section are visibly applications
of results that have been proved. However, the classical method is probably
easier for most of us, its notational conventions dovetailing with the change
of variable theorem and its corollary so well. So feel free to continue using it.
(And remember to switch the limits of integration when you do.)

However, to underscore that dx is an unnecessary, meaningless symbol, it


will generally not be used in these notes until it is defined in Chapter 9, as
something called a differential form.

Exercises

6.4.1. (a) Show that for three points a, b, c ∈ R in any order, and every inte-
grable function f ∶ [min{a, b, c}, max{a, b, c}] Ð→ R, ∫_a^c f = ∫_a^b f + ∫_b^c f .

(b) Show that if f ∶ [min{a, b}, max{a, b}] Ð→ R takes the constant value k
then ∫_a^b f = k(b − a), regardless of which of a and b is larger.

6.4.2. Complete the proof of Theorem 6.4.1 by analyzing the case h < 0.

6.4.3. Show that if F1 , F2 ∶ [a, b] Ð→ R are differentiable and F1′ = F2′ , then
F1 = F2 + C for some constant C. This result was used in this section to
prove the fundamental theorem of calculus (Theorem 6.4.2), so do not use
that theorem to address this exercise. However, this exercise does require a
theorem. Reducing to the case F2 = 0, as in the comment in Exercise 6.2.7,
will make this exercise a bit tidier.

6.4.4. (a) Suppose that 0 ≤ a ≤ b and f ∶ [a², b²] Ð→ R is continuous. Define


F ∶ [a, b] Ð→ R by F (x) = ∫_{a²}^{x²} f . Does F ′ exist, and if so then what is it?

(b) More generally, suppose f ∶ R Ð→ R is continuous, and α, β ∶ R Ð→ R


are differentiable. Define F ∶ R Ð→ R by F (x) = ∫_{α(x)}^{β(x)} f . Does F ′ exist, and if

so then what is it?

6.4.5. Let f ∶ [0, 1] Ð→ R be continuous and suppose that for all x ∈ [0, 1],
∫_0^x f = ∫_x^1 f . What is f ?

6.4.6. Find all differentiable functions f ∶ R≥0 Ð→ R such that for all x ∈ R≥0 ,
(f (x))² = ∫_0^x f .

6.4.7. Define f ∶ R+ Ð→ R by f (u) = e^{u+1/u}/u and F ∶ R+ Ð→ R by F (x) =
∫_1^x f . Show that F behaves somewhat like a logarithm in that F (1/x) = −F (x)
for all x ∈ R+ . Interpret this property of F as a statement about area under


the graph of f . (Hint: define φ ∶ R+ Ð→ R+ by φ(u) = 1/u, and show that
(f ○ φ) ⋅ φ′ = −f .)

6.5 Integration over Nonboxes


So far, we know that ∫B f exists if B is a box and f ∶ B Ð→ R is continuous
(Theorem 6.3.1). With some more work, the theorem can be refined to relax
these requirements. The basic idea is that ∫B f still exists if f is discontinuous
on a small enough subset of B. The idea isn’t hard conceptually, but its

justification requires some bookkeeping. Once it is established, integration


over compact sets K other than boxes is easy to define, provided that their
boundaries are suitably small.
To quantify the notion of small, and more generally the notion of set size,
let a set S ⊂ Rn be given. The characteristic function of S is


χS ∶ Rn Ð→ R,    χS (x) = {1 if x ∈ S; 0 otherwise}.

Suppose that S is bounded, meaning that S sits in some box B.


Definition 6.5.1 (Volume of a set). The volume of a bounded set S ⊂ Rn
is
vol(S) = ∫B χS    where B is any box containing S,
if this integral exists.
This definition requires several comments. At first glance it seems ill-posed.
Conceivably, ∫B χS could exist for some boxes B containing S but not others,
and it could take different values for the various B where it exists. In fact, some
technique shows that if ∫B χS exists for some box B containing S then it exists
for every such box and always takes the same value, so the definition makes
sense after all. See the exercises. Also, an exercise shows that the volume of a
box B is the same under Definition 6.5.1 as under Definition 6.1.4, as it must
be for grammatical consistency. Finally, note that not all sets have volume,
only those whose characteristic functions are integrable.
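For intuition, the definition can be exercised on a concrete set. The Python sketch below (specific to the closed unit disk S, and an illustration only) computes the lower and upper sums of χS over the uniform n × n partition of B = [−1, 1] × [−1, 1]; a subbox meets S exactly when its nearest point to the origin has norm at most 1, and lies inside S exactly when its farthest corner does:

    import numpy as np

    def disk_volume_bounds(n):
        e = np.linspace(-1.0, 1.0, n + 1)
        area = (2.0 / n)**2
        lo = hi = 0.0
        for i in range(n):
            for j in range(n):
                x0, x1, y0, y1 = e[i], e[i + 1], e[j], e[j + 1]
                # Nearest point of the subbox to the origin, coordinatewise.
                nx = 0.0 if x0 <= 0.0 <= x1 else min(abs(x0), abs(x1))
                ny = 0.0 if y0 <= 0.0 <= y1 else min(abs(y0), abs(y1))
                if np.hypot(nx, ny) <= 1.0:
                    hi += area   # subbox meets S: contributes to the upper sum
                far = np.hypot(max(abs(x0), abs(x1)), max(abs(y0), abs(y1)))
                if far <= 1.0:
                    lo += area   # subbox inside S: contributes to the lower sum
        return lo, hi

    print(disk_volume_bounds(200))  # both bounds approach pi = 3.14159...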
Sets of volume zero are small enough that they don’t interfere with inte-
gration. To prove such a result explicitly, we first translate the definition of
volume zero into statements about the machinery of the integral. Let S ⊂ Rn
sit in a box B, and let P be a partition of B. The subboxes J of P consist of
two types:
type I : J such that J ∩ S ≠ ∅
and
type II : J such that J ∩ S = ∅.
Thus S sits in the union of subboxes J of type I, and the sum of their volumes
gives an upper sum for ∫B χS .
For example, Figure 6.14 shows a circle S inside a box B, and a partition P
of B, where the type I subboxes of the partition are shaded. The shaded boxes
visibly have a small total area. Similarly, Figure 6.15 shows a smooth piece of
surface in R3 , then shows it inside a partitioned box, and Figure 6.16 shows
some of the type I subboxes of the partition. Figure 6.16 also shows a smooth
arc in R3 and some of the type I rectangles that cover it, with the ambient
box and the rest of the partition now tacit. Figure 6.16 is meant to show that
all the type I boxes, which combine to cover the surface or the arc, have a
small total volume.
The following fact is convenient.

Figure 6.14. Circle, box, partition, and type I subboxes

Figure 6.15. A two-dimensional set in R3 ; the set inside a partitioned box

Proposition 6.5.2 (Volume-zero criterion). A set S contained in a box


B has volume zero if and only if for every ε > 0 there exists a partition P of
B such that
∑_{J∶ type I} vol(J) < ε.

The proof is an exercise. This criterion makes it plausible that every


bounded smooth arc in R2 has volume zero, and similarly for a bounded
smooth arc or smooth piece of surface in R3 . The next result uses the crite-
rion to provide a general class of volume-zero sets. Recall that for every set
S ⊂ Rk and every mapping ϕ ∶ S Ð→ Rℓ , the graph of ϕ is a subset of Rk+ℓ ,

graph(ϕ) = {(x, ϕ(x)) ∶ x ∈ S}.



Figure 6.16. Some type I subboxes of the partition, and for an arc in R3

Proposition 6.5.3 (Graphs have volume zero). Let B be a box in Rm ,


and let ϕ ∶ B Ð→ R be continuous. Then graph(ϕ) has volume zero.

The idea is that the graph of the function ϕ in the proposition will describe
some of the points of discontinuity of a different function f that we want to
integrate. Thus the dimension m in the proposition is typically n − 1, where
the function f that we want to integrate has n-dimensional input.

Proof. The continuity of ϕ on B is uniform, and the image of ϕ, being com-


pact, sits in some interval I.
Let ε > 0 be given. Set ε′ equal to any positive number less than
ε/(2vol(B)) such that length(I)/ε′ is an integer. There exist a partition Q
of I whose subintervals K have length ε′ and a δ > 0 such that for all x, x̃ ∈ B,

∣x̃ − x∣ < δ Ô⇒ ∣ϕ(x̃) − ϕ(x)∣ < ε′ . (6.5)

Now take a partition P of B whose subboxes J have sides of length less than
δ/m, so that if two points are in a common subbox J then the distance between
them is less than δ. Consider the partition P × Q of B × I. For each subbox J
of P there exist at most two subboxes J × K of P × Q over J that intersect the
graph of ϕ, i.e., subboxes of type I. To see this, note that if we have three or
more such subboxes, then some pair J ×K and J ×K ′ are not vertical neighbors,
and so every hypothetical pair of points of the graph, one in each subbox, are
less than distance δ apart horizontally but at least distance ε′ apart vertically.
But by (6.5), this is impossible. (See Figure 6.17. The horizontal direction in
the figure is only a schematic of the m-dimensional box B, but the vertical
direction accurately depicts the one-dimensional codomain of ϕ.)
Now, working with subboxes J × K of P × Q, compute that

Figure 6.17. The graph meets at most two boxes over each base

∑_{type I} vol(J × K) = ∑_{type I} vol(J) ⋅ ε′        since length(K) = ε′
                    ≤ 2 ∑_J vol(J) ⋅ ε′             by the preceding paragraph
                    = 2 vol(B) ⋅ ε′ < ε             since ε′ < ε/(2vol(B)),
and the proof is complete by the volume-zero criterion. ⊔
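The shrinking cover is easy to observe for a concrete graph. A Python sketch for the increasing function ϕ(x) = x² on [0, 1], counting the cells of the uniform n × n grid that the graph meets (the per-column count uses monotonicity; an illustration only):

    import math

    def graph_cover_area(n):
        # Total area of the n x n grid cells over [0,1] x [0,1] met by the
        # graph of phi(x) = x^2.
        w = 1.0 / n
        total = 0.0
        for i in range(n):
            lo, hi = (i * w)**2, ((i + 1) * w)**2   # phi is increasing
            cells = math.floor(hi / w) - math.floor(lo / w) + 1
            total += cells * w * w
        return total

    for n in (10, 100, 1000):
        print(n, graph_cover_area(n))   # tends to 0, roughly like 2/n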

An exercise shows that every finite union of sets of volume zero also has
volume zero, and another exercise shows that every subset of a set of volume
zero also has volume zero. These facts and the preceding proposition are
enough to demonstrate that many regions have boundaries of volume zero. The
boundary of a set consists of all points simultaneously near the set and near
its complement—roughly speaking, its edge. (Unfortunately, the mathematical
terms bounded and boundary need have nothing to do with each other. A set
with a boundary need not be bounded, and a bounded set need not have any
boundary points nor contain any of its boundary points if it does have them.)
For example, the set in Figure 6.18 has a boundary consisting of four graphs
of functions on one-dimensional boxes, i.e., on intervals. Two of the boundary
pieces are graphs of functions y = ϕ(x), and the other two are graphs of
functions x = ϕ(y). Two of the four functions are constant functions.
The main result of this section is that discontinuity on a set of volume
zero does not interfere with integration.
Theorem 6.5.4 (Near-continuity implies integrability). Let B ⊂ Rn be
a box. Let f ∶ B Ð→ R be bounded, and continuous except on a set S ⊂ B of
volume zero. Then ∫B f exists.
Proof. Let ε > 0 be given.
The proof involves two partitions. Because f is bounded, there exists a
positive number R such that ∣f (x)∣ < R for all x ∈ B. Take a partition P

Figure 6.18. Boundary with area zero (the four pieces: y = 2π − x², x = sin(y), x = 2, y = 0)

of B whose subboxes J of type I (those intersecting the set S where f is


discontinuous) have volumes adding to less than ε/(4R). (See Figure 6.19, in
which the function f is the dome shape over the unit disk but is 0 outside the
unit disk, making it discontinuous on the unit circle.)

Figure 6.19. Type I subboxes of small total area

Consider some yet unspecified refinement P ′ of P dividing each subbox J


of P into further subboxes J ′ . (See Figure 6.20, in which the original boxes J
of type I remain shaded, but each box J of either type has been further parti-
tioned.) On every J ′ , MJ ′ (f ) − mJ ′ (f ) ≤ 2R, and so a short calculation shows
that regardless of how the refinement P ′ is to be specified, its subboxes J ′
that sit inside type I subboxes J of P make only a small contribution to the
difference between the lower sum and the upper sum of f over P ′ ,

∑_{J∶ type I} ∑_{J ′ ⊂J} (MJ ′ (f ) − mJ ′ (f )) vol(J ′ ) ≤ 2R ∑_{J∶ type I} ∑_{J ′ ⊂J} vol(J ′ )
    = 2R ∑_{J∶ type I} vol(J) < 2R ⋅ ε/(4R) = ε/2.    (6.6)

Figure 6.20. Refinement of the partition

To specify the refinement P ′ of P that we need, consider the type II


subboxes J of P , i.e., the union of the unshaded boxes in Figure 6.19. The
function f is continuous on each such subbox and hence integrable over it by
Theorem 6.3.1. Let the number of these subboxes be denoted N . By ( Ô⇒ )
of the integrability criterion, each type II subbox J has a partition PJ′ such
that
U (f, PJ′ ) − L(f, PJ′ ) < ε/(2N ).
Let P ′ be a partition of the full box B that refines the original partition P
and also incorporates all the partitions PJ′ of the type II subboxes J. Thus
the intersection of P ′ with any particular type II subbox J refines PJ′ . Since
refinement cannot increase the distance between lower and upper sums, an-
other short calculation shows that the subboxes J ′ of P ′ that sit inside type II
subboxes J of P also make only a small contribution to the difference between
the lower sum and the upper sum of f over P ′ ,

∑_{J∶ type II} ∑_{J ′ ⊂J} (MJ ′ (f ) − mJ ′ (f )) vol(J ′ ) ≤ ∑_{J∶ type II} (U (f, PJ′ ) − L(f, PJ′ ))
    < N ⋅ ε/(2N ) = ε/2.    (6.7)

Finally, combining (6.6) and (6.7) shows that U (f, P ′ ) − L(f, P ′ ) < ε, and so
by ( ⇐Ô ) of the integrability criterion, ∫B f exists. ⊓


To recapitulate the argument: The fact that f is bounded means that its
small set of discontinuities can’t cause much difference between lower and up-
per sums, and the continuity of f on the rest of its domain poses no obstacle
to integrability either. The only difficulty was making the ideas fit into our
box-counting definition of the integral. The reader could well object that prov-
ing Theorem 6.5.4 shouldn’t have to be this complicated. Indeed, the theory
of integration being presented here, Riemann integration, involves laborious
proofs precisely because it uses such crude technology: finite sums over boxes.
More powerful theories of integration exist, with stronger theorems and more
graceful arguments. However, those theories also entail the startup cost of as-
similating a larger, more abstract set of working ideas, making them difficult
to present as quickly as Riemann integration.
Now we can discuss integration over nonboxes.
Definition 6.5.5 (Known-integrable function). A function
f ∶ K Ð→ R
is known-integrable if K is a compact subset of Rn having boundary of
volume zero, and if f is bounded on K and is continuous on all of K except
possibly a subset of volume zero.
For example, let K = {(x, y) ∶ ∣(x, y)∣ ≤ 1} be the closed unit disk in R2 ,

and define

f ∶ K Ð→ R,    f (x, y) = {1 if x ≥ 0; −1 if x < 0}.

To see that this function is known-integrable, note that the boundary of K
is the union of the upper and lower unit semicircles, which are graphs of
continuous functions on the same 1-dimensional box,

ϕ± ∶ [−1, 1] Ð→ R, ϕ± (x) = ±√(1 − x²).
Thus the boundary of K has area zero. Furthermore, f is bounded on K, and
f is continuous on all of K except the vertical interval {0} × [−1, 1], which has
area zero by the 2-dimensional box area formula.
Definition 6.5.6 (Integral over a nonbox). Let
f ∶ K Ð→ R
be a known-integrable function. Extend its domain to Rn by defining a new
function


f˜ ∶ Rn Ð→ R,    f˜(x) = {f (x) if x ∈ K; 0 if x ∉ K}.

Then the integral of f over K is

∫K f = ∫B f˜    where B is any box containing K.

For the example just before the definition, the extended function is

f˜ ∶ R2 Ð→ R,    f˜(x, y) = {1 if ∣(x, y)∣ ≤ 1 and x ≥ 0; −1 if ∣(x, y)∣ ≤ 1 and x < 0; 0 if ∣(x, y)∣ > 1},

and to integrate the original function over the disk, we integrate the extended
function over the box B = [−1, 1] × [−1, 1].
Returning to generality, the integral on the right side of the equality in the
definition exists because f˜ is bounded and discontinuous on a set of volume
zero, as required for Theorem 6.5.4. In particular, the definition of volume is
now, sensibly enough,
vol(K) = ∫K 1.
Naturally, the result of Proposition 6.2.4, that the integral over the whole
is the sum of the integrals over the pieces, is not particular to boxes and
subboxes.

Proposition 6.5.7. Let K ⊂ Rn be a compact set whose boundary has volume


zero. Let f ∶ K Ð→ R be continuous. Further, let K = K1 ∪ K2 where each Kj
is compact and the intersection K1 ∩ K2 has volume zero. Then f is integrable
over K1 and K2 , and
∫K1 f + ∫K2 f = ∫K f.

Proof. Define


f1 ∶ K Ð→ R,    f1 (x) = {f (x) if x ∈ K1 ; 0 otherwise}.

Then f1 is known-integrable on K, and so ∫K f1 exists and equals ∫K1 f1 .


Define a corresponding function f2 ∶ K Ð→ R, for which the corresponding
conclusions hold. It follows that

∫K1 f1 + ∫K2 f2 = ∫K f1 + ∫K f2 = ∫K (f1 + f2 ).

But f1 + f2 equals f except on the volume-zero set K1 ∩ K2 , which contributes


nothing to the integral. The result follows. ⊔

Exercises

6.5.1. (a) Suppose that I1 = [a1 , b1 ], I2 = [a2 , b2 ], . . . are intervals in R. Show


that their intersection I1 ∩ I2 ∩ ⋯ is another interval (possibly empty).
(b) Suppose that S = S1 × ⋯ × Sn , T = T1 × ⋯ × Tn , U = U1 × ⋯ × Un , . . . are
Cartesian products of sets. Show that their intersection is

S ∩ T ∩ U ∩ ⋯ = (S1 ∩ T1 ∩ U1 ∩ ⋯) × ⋯ × (Sn ∩ Tn ∩ Un ∩ ⋯).

(c) Show that every intersection of boxes in Rn is another box (possibly


empty).
(d) If S is a set and T1 , T2 , T3 , . . . are all sets that contain S, show that
T1 ∩ T2 ∩ T3 ∩ ⋯ contains S.

6.5.2. Let S be a nonempty bounded subset of Rn , let B be any box con-


taining S, and let B ′ be the intersection of all boxes containing S. By the
preceding problem, B ′ is also a box containing S. Use Proposition 6.2.4 to
show that if either of ∫B χS and ∫B ′ χS exist then both exist and they are
equal. It follows, as remarked in the text, that the definition of the volume
of S is independent of the containing box B.

6.5.3. Let B ⊂ Rn be a box. Show that its volume under Definition 6.5.1
equals its volume under Definition 6.1.4. (Hint: Exercise 6.2.3.)

6.5.4. Let S be the set of rational numbers in [0, 1]. Show that under Defini-
tion 6.5.1, the volume (i.e., length) of S does not exist.

6.5.5. Prove the volume-zero criterion.

6.5.6. If S ⊂ Rn has volume zero and R is a subset of S, show that R has


volume zero. (Hint: 0 ≤ χR ≤ χS .)

6.5.7. Prove that if S1 and S2 have volume zero, then so does S1 ∪ S2 . (Hint:
χS1 ∪S2 ≤ χS1 + χS2 .)

6.5.8. Find an unbounded set with nonempty boundary, and a bounded set
with empty boundary.

6.5.9. Review Figure 6.18 and its discussion in this section. Also review the
example that begins after Definition 6.5.5 and continues after Definition 6.5.6.
Similarly, use results from this section such as Theorem 6.5.4 and Proposi-
tion 6.5.3 to explain why for each set K and function f ∶ K Ð→ R below, the
integral ∫K f exists. Draw a picture each time, taking n = 3 for the picture in
part (f).
(a) K = {(x, y) ∶ 2 ≤ y ≤ 3, 0 ≤ x ≤ 1 + ln y/y}, f(x, y) = e^{xy}.
(b) K = {(x, y) ∶ 1 ≤ x ≤ 4, 1 ≤ y ≤ x}, f(x, y) = e^{x/y}/y⁵.
(c) K = the region between the curves y = 2x² and x = 4y², f(x, y) = 1.
(d) K = {(x, y) ∶ 1 ≤ x² + y² ≤ 2}, f(x, y) = x².
(e) K = the pyramid with vertices (0, 0, 0), (3, 0, 0), (0, 3, 0), (0, 0, 3/2),
f (x, y, z) = x.
(f) K = {x ∈ Rn ∶ ∣x∣ ≤ 1} (the solid unit ball in Rn ), f (x1 , . . . , xn ) = x1 ⋯xn .

6.6 Fubini’s Theorem


With existence theorems for the integral now in hand, this section and the
next one present tools to compute integrals.
An n-fold iterated integral is n one-dimensional integrals nested inside
each other, such as

∫_{x₁=a₁}^{b₁} ∫_{x₂=a₂}^{b₂} ⋯ ∫_{xₙ=aₙ}^{bₙ} f(x₁, x₂, . . . , xₙ),

for some function f ∶ [a1 , b1 ] × ⋯ × [an , bn ] Ð→ R. An iterated integral is


definitely not the same sort of object as an n-dimensional integral. We can
evaluate an iterated integral by working from the inside out. For the innermost
integral, f is to be viewed as a function of the variable xn with its other inputs
treated as constants, and so on outward. For example,
∫_{x=0}^{1} ∫_{y=0}^{2} xy² = ∫_{x=0}^{1} [xy³/3]_{y=0}^{2} = ∫_{x=0}^{1} 8x/3 = [4x²/3]_{x=0}^{1} = 4/3.
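As a machine cross-check of this calculation, one can compare against numerical quadrature. The following is a minimal sketch, assuming Python with SciPy is available (neither is part of the text); note that SciPy's dblquad takes the inner variable as the first argument of the integrand.

```python
# Numerical check of the iterated integral just computed:
# the integral of x*y^2 over [0,1] x [0,2] should be 4/3.
from scipy.integrate import dblquad

# dblquad(f, a, b, g, h) integrates f(y, x) for x in [a, b], y in [g(x), h(x)].
value, error = dblquad(lambda y, x: x * y**2, 0, 1, lambda x: 0, lambda x: 2)
print(value, 4 / 3)  # both print 1.3333...
```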
There are n! different orders in which one can iterate n integrals, e.g., the example just worked is not the same object as ∫_{y=0}^{2} ∫_{x=0}^{1} xy². Regardless of order, each one-dimensional integral requires varying its particular input to f while


holding the other inputs fixed. The upshot of all this variable-dependence is
that there is no reasonable alternative to naming and writing the variables in
an iterated integral.
In an inner integral, outer variables may figure not only as inputs to the
integrand, but also in the limits of integration. For example, in the calculation
∫_{x=0}^{π} ∫_{y=0}^{x} cos(x + y) = ∫_{x=0}^{π} [sin(x + y)]_{y=0}^{x} = ∫_{x=0}^{π} (sin(2x) − sin(x)) = −2,

each inner integral over y is being taken over a segment of x-dependent length
as the outer variable x varies from 0 to π. (See Figure 6.21.)
Fubini’s theorem says that under suitable conditions, the n-dimensional
integral is equal to the n-fold iterated integral. The theorem thus provides an
essential calculational tool for multivariable integration.
Theorem 6.6.1 (Fubini’s theorem). Let B = [a, b] × [c, d] ⊂ R2 , and let
f ∶ B Ð→ R be bounded, and continuous except on a subset S ⊂ B of area zero,
so ∫B f exists. Suppose that for each x ∈ [a, b], the cross-sectional integral
∫_{y=c}^{d} f(x, y) exists; this happens if the cross-sectional function ϕₓ ∶ [c, d] → R given by ϕₓ(y) = f(x, y) is continuous as a function of y except on a subset of length zero, and in particular this happens if S contains only finitely many points (possibly none) having first coordinate x. Then the iterated integral ∫_{x=a}^{b} ∫_{y=c}^{d} f(x, y) also exists, and

∫_B f = ∫_{x=a}^{b} ∫_{y=c}^{d} f(x, y).

Figure 6.21. Variable range of inner integration (the inner integral runs over 0 ≤ y ≤ x as x varies from 0 to π)

For notational convenience, the theorem is stated only in two dimensions.


Replacing [a, b] and [c, d] by boxes gives a more general version with a virtu-
ally identical proof. Thinking geometrically in terms of area and volume makes
the theorem plausible in two dimensions, because each inner integral is the
area of a cross section of the volume under the graph of f . (See Figure 6.22.)

Figure 6.22. Inner integral as cross-sectional area

However, since the multiple integral and the iterated integral are defined
analytically as limits of sums, our only available method for proving the the-
orem is analytic: we must compare approximating sums for the two integrals.
We now discuss the ideas before giving the actual proof. A lower sum for the
integral ∫B f is shown geometrically on the left side of Figure 6.23. A partition
P × Q divides the box B = [a, b] × [c, d] into subboxes I × J, and the volume
of each solid region in the figure is the area of a subbox times the minimum
height of the graph over the subbox. By contrast, letting g(x) = ∫_{y=c}^{d} f(x, y)
be the area of the cross section at x, the right side of Figure 6.23 shows a lower

sum for the integral ∫_{x=a}^{b} g(x). The partition P divides the interval [a, b] into

subintervals I, and the volume of each bread-slice in the figure is the length
of a subinterval times the minimum area of the cross sections orthogonal to I.
The proof will show that because integrating in the y-direction is a finer di-
agnostic than summing minimal box-areas in the y-direction, the bread-slices
on the right side of the figure are a superset of the boxes on the left side.
Consequently, the volume beneath the bread-slices is at least the volume of
the boxes,
L(f, P × Q) ≤ L(g, P ).
By similar reasoning for upper sums, in fact we expect that

L(f, P × Q) ≤ L(g, P ) ≤ U (g, P ) ≤ U (f, P × Q). (6.8)

Since L(f, P ×Q) and U (f, P ×Q) converge to ∫B f under a suitable refinement
of P × Q, so do L(g, P ) and U (g, P ). Thus the iterated integral exists and
equals the double integral as desired. The details of turning the geometric
intuition of this paragraph into a proof of Fubini’s theorem work out fine,
provided that we carefully tend to matters in just the right order. However,
the need for care is genuine. A subtle point not illustrated by Figure 6.23 is
that
• although the boxes lie entirely beneath the bread-slices (this is a relation
between two sets),
• and although the boxes lie entirely beneath the graph (so is this),
• and although the volume of the bread-slices is at most the volume beneath
the graph (but this is a relation between two numbers),
• the bread-slices need not lie entirely beneath the graph.
Since the bread-slices need not lie entirely beneath the graph, the fact that
their volume L(g, P ) estimates the integral ∫B f from below does not follow
from pointwise considerations. The proof finesses this point by establishing
the inequalities (6.8) without reference to the integral, only then bringing the
integral into play as the limit of the extremal sums in (6.8).

Proof. For each x ∈ [a, b], define the cross-sectional function

ϕx ∶ [c, d] Ð→ R, ϕx (y) = f (x, y).

The hypotheses of Fubini’s theorem ensure that as x varies from a to b, each


cross-sectional function ϕx is continuous except at finitely many points, and
hence it is integrable on [c, d]. Give the cross-sectional integral a name,

g ∶ [a, b] → R,   g(x) = ∫_c^d ϕₓ.

The iterated integral ∫_{x=a}^{b} ∫_{y=c}^{d} f(x, y) is precisely the integral ∫_a^b g. We need to show that this exists and equals ∫_B f.



Figure 6.23. Geometry of two lower sums

Consider any partition P × Q of B into subboxes J × K. Thus P partitions


[a, b] into subintervals J, and Q partitions [c, d] into subintervals K. Take
any subinterval J of P , and take any point x of J. Note that ϕx on each K
samples f only on a cross section of J × K, and so f has more opportunity to
be small on J × K than ϕx has on K. That is,

mJ×K (f ) ≤ mK (ϕx ).

The lower sum of the cross-sectional function ϕx over the y-partition Q is a


lower bound for the cross-sectional integral g(x),

∑_K m_K(ϕₓ) length(K) = L(ϕₓ, Q) ≤ ∫_c^d ϕₓ = g(x).

The previous two displays combine to give a lower bound for the cross-
sectional integral g(x), the lower bound making reference to the interval J on
which x lies but independent of the particular point x of J,

∑_K m_{J×K}(f) length(K) ≤ g(x)   for all x ∈ J.

That is, the left side of this last display is a lower bound of all values g(x)
as x varies through J. So it is at most the greatest lower bound,

∑_K m_{J×K}(f) length(K) ≤ m_J(g).

Multiply through by the length of J to get

∑_K m_{J×K}(f) area(J × K) ≤ m_J(g) length(J).

(This inequality says that each y-directional row of boxes in the left half of
Figure 6.23 has at most the volume of the corresponding bread-slice in the
right half of the figure.) As noted at the end of the preceding paragraph, the
iterated integral is the integral of g. The estimate just obtained puts us in
a position to compare lower sums for the double integral and the iterated
integral,

L(f, P × Q) = ∑_{J,K} m_{J×K}(f) area(J × K) ≤ ∑_J m_J(g) length(J) = L(g, P).

Concatenating a virtually identical argument with upper sums gives the an-
ticipated chain of inequalities,

L(f, P × Q) ≤ L(g, P ) ≤ U (g, P ) ≤ U (f, P × Q).

The outer terms converge to ∫_B f under a suitable refinement of P × Q, and hence so do the inner terms, showing that ∫_a^b g exists and equals ∫_B f. ⊓⊔

Since we will use Fubini’s theorem to evaluate actual examples, all the
notational issues discussed in Section 6.4 arise here again. A typical notation
for examples is
∫_B f(x, y) = ∫_{x=a}^{b} ∫_{y=c}^{d} f(x, y),

where the left side is a 2-dimensional integral, the right side is an iterated
integral, and f (x, y) is an expression defining f . For example, by Fubini’s
theorem and the calculation at the beginning of this section,
∫_{[0,1]×[0,2]} xy² = ∫_{x=0}^{1} ∫_{y=0}^{2} xy² = 4/3.

Of course, an analogous theorem asserts that ∫_B f(x, y) = ∫_{y=c}^{d} ∫_{x=a}^{b} f(x, y), provided that the set S of discontinuity meets horizontal segments at only finitely many points too. In other words, the double integral also equals the other iterated integral, and consequently the two iterated integrals agree. For example, ∫_{y=0}^{2} ∫_{x=0}^{1} xy² also works out easily to 4/3.

In many applications, the integral over B is really an integral over a non-


rectangular compact set K, as defined at the end of the previous section. If
K is the area between the graphs of continuous functions ϕ1 , ϕ2 ∶ [a, b] Ð→ R,
i.e., if
K = {(x, y) ∶ a ≤ x ≤ b, ϕ1 (x) ≤ y ≤ ϕ2 (x)},

then one iterated integral takes the form ∫_{x=a}^{b} ∫_{y=ϕ₁(x)}^{ϕ₂(x)} f(x, y). Similarly, if

K = {(x, y) ∶ c ≤ y ≤ d, θ1 (y) ≤ x ≤ θ2 (y)},

then the other iterated integral is ∫_{y=c}^{d} ∫_{x=θ₁(y)}^{θ₂(y)} f(x, y). (See Figure 6.24.)

Figure 6.24. Setting up nonrectangular double integrals (the region is bounded below and above by y = ϕ₁(x) and y = ϕ₂(x), or left and right by x = θ₁(y) and x = θ₂(y))

The interchangeability of the order of integration leads to a fiendish class


of iterated integral problems in which one switches order to get a workable
integrand. For example, the iterated integral
∫_{y=0}^{2} ∫_{x=y/2}^{1} e^{−x²}

looks daunting because the integrand e^{−x²} has no convenient antiderivative, but after exchanging the order of the integrations and then carrying out a change of variable, it becomes

∫_{x=0}^{1} ∫_{y=0}^{2x} e^{−x²} = ∫_{x=0}^{1} 2x e^{−x²} = ∫_{u=0}^{1} e^{−u} = 1 − e^{−1}.
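Both orders of integration are easy to compare numerically. The following sketch again assumes Python with SciPy (an illustrative choice, not part of the text):

```python
# Check that both orders of integration give 1 - e^{-1}.
import math
from scipy.integrate import dblquad

# Original order: y from 0 to 2 outside, x from y/2 to 1 inside.
v1, _ = dblquad(lambda x, y: math.exp(-x**2), 0, 2, lambda y: y/2, lambda y: 1)
# Exchanged order: x from 0 to 1 outside, y from 0 to 2x inside.
v2, _ = dblquad(lambda y, x: math.exp(-x**2), 0, 1, lambda x: 0, lambda x: 2*x)
print(v1, v2, 1 - math.exp(-1))  # all three agree to quadrature accuracy
```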

Interchanging the order of integration can be tricky in such cases; often one
has to break K up into several pieces first, e.g.,
∫_{x=1}^{2} ∫_{y=1/x}^{2} = ∫_{y=1/2}^{1} ∫_{x=1/y}^{2} + ∫_{y=1}^{2} ∫_{x=1}^{2}.

A carefully labeled diagram facilitates this process. For example, Figure 6.25
shows the sketch that arises from the integral on the left side, and then the
resulting sketch that leads to the sum of two integrals on the right side.
Interchanging the outer two integrals in a triply iterated integral is no dif-
ferent from the double case, but interchanging the inner two is tricky, because
of the constant-but-unknown value taken by the outer variable. Sketching a
generic two-dimensional cross section usually makes the substitutions clear.
For example, consider the iterated integral
∫_{x=0}^{1} ∫_{y=x³}^{x²} ∫_{z=y}^{x²}.   (6.9)

Figure 6.25. Sketches for iterated integrals

(The function being integrated is irrelevant to this discussion of how to ex-


change the order of integration, so it is omitted from the notation.) Exchanging
the outer two integrals is carried out via the first diagram in Figure 6.26. The
diagram leads to the iterated integral

∫_{y=0}^{1} ∫_{x=√y}^{∛y} ∫_{z=y}^{x²}.

On the other hand, to exchange the inner integrals of (6.9), think of x as fixed
but generic between 0 and 1 and consider the second diagram in Figure 6.26.
This diagram shows that (6.9) is also the iterated integral
∫_{x=0}^{1} ∫_{z=x³}^{x²} ∫_{y=x³}^{z}.   (6.10)

Switching the outermost and innermost integrals of (6.9) while leaving the
middle one in place requires three successive switches of adjacent integrals.
For instance, switching the inner integrals as we just did and then doing an
outer exchange on (6.10) virtually identical to the outer exchange of a moment
earlier (substitute z for y in the first diagram of Figure 6.26) shows that (6.9)
is also

∫_{z=0}^{1} ∫_{x=√z}^{∛z} ∫_{y=x³}^{z}.

Finally, the first diagram of Figure 6.27 shows how to exchange the inner
integrals once more. The result is

∫_{z=0}^{1} ∫_{y=z^{3/2}}^{z} ∫_{x=√z}^{∛y}.

The second diagram of Figure 6.27 shows the three-dimensional figure that our
iterated integral has traversed in various fashions. It is satisfying to see how

Figure 6.26. Sketches for a triply iterated integral

this picture is compatible with the cross-sectional sketches, and to determine


which axis is which. However, the three-dimensional figure is unnecessary for
exchanging the order of integration. The author of these notes finds using two-
dimensional cross sections easier and more reliable than trying to envision an
entire volume at once. Also, the two-dimensional cross-section technique will
work in an n-fold iterated integral for every n ≥ 3, even when the whole
situation is hopelessly beyond visualizing.

Figure 6.27. Another cross section and the three-dimensional region

The unit simplex in R3 is the set

S = {(x, y, z) ∶ x ≥ 0, y ≥ 0, z ≥ 0, x + y + z ≤ 1}

(see Figure 6.28). Its centroid is (x̄, ȳ, z̄), where

x̄ = ∫_S x / vol(S),   ȳ = ∫_S y / vol(S),   z̄ = ∫_S z / vol(S).

Fubini's theorem lets us treat the integrals as iterated, giving

∫_S x = ∫_{x=0}^{1} ∫_{y=0}^{1−x} ∫_{z=0}^{1−x−y} x
      = ∫_{x=0}^{1} ∫_{y=0}^{1−x} x(1 − x − y)
      = ∫_{x=0}^{1} (1/2) x(1 − x)² = 1/24,

where the routine one-variable calculations are not shown in detail. Similarly, vol(S) = ∫_S 1 works out to 1/6, so x̄ = 1/4. By symmetry, ȳ = z̄ = 1/4 also. See the exercises for an n-dimensional generalization of this result.
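The centroid computation is easy to confirm by machine. Here is a minimal sketch, assuming Python with SciPy (tplquad integrates with its first argument innermost, matching the iteration order above):

```python
# Confirm that the centroid of the unit simplex is (1/4, 1/4, 1/4).
from scipy.integrate import tplquad

# tplquad(f, a, b, g, h, q, r) integrates f(z, y, x) with
# x in [a, b], y in [g(x), h(x)], z in [q(x, y), r(x, y)].
num, _ = tplquad(lambda z, y, x: x, 0, 1,
                 lambda x: 0, lambda x: 1 - x,
                 lambda x, y: 0, lambda x, y: 1 - x - y)
vol, _ = tplquad(lambda z, y, x: 1.0, 0, 1,
                 lambda x: 0, lambda x: 1 - x,
                 lambda x, y: 0, lambda x, y: 1 - x - y)
print(num, vol, num / vol)  # approximately 1/24, 1/6, 1/4
```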

Figure 6.28. Unit simplex

To find the volume between the two paraboloids z = 8 − x² − y² and z = x² + 3y², first set 8 − x² − y² = x² + 3y² to find that the graphs intersect over the ellipse {(x, y) ∶ (x/2)² + (y/√2)² = 1}. (See Figure 6.29.) By Fubini's theorem the volume is

V = ∫_{y=−√2}^{√2} ∫_{x=−√(4−2y²)}^{√(4−2y²)} ∫_{z=x²+3y²}^{8−x²−y²} 1 = 8π√2,

where again the one-dimensional calculations are omitted.


Another example is to find the volume of the region K common to the cylinders x² + y² = 1 and x² + z² = 1. For each x-value between −1 and 1, y and z vary independently between −√(1 − x²) and √(1 − x²). That is, the intersection of the two cylinders is a union of squares, whose corners form two tilted ellipses. (See Figure 6.30.) By the methods of this section, the integral has the same value as the iterated integral, which is

Figure 6.29. Volume between two graphs

∫_{x=−1}^{1} ∫_{y=−√(1−x²)}^{√(1−x²)} ∫_{z=−√(1−x²)}^{√(1−x²)} 1 = 4 ∫_{x=−1}^{1} (1 − x²) = 16/3.

Figure 6.30. Volume common to two cylinders

Finally, we end the section with a more theoretical example.


Proposition 6.6.2 (Differentiation under the integral sign). Consider
a function
f ∶ [a, b] × [c, d] Ð→ R.
Suppose that f and D1 f are continuous. Also consider the cross-sectional
integral function,

g ∶ [a, b] → R,   g(x) = ∫_{y=c}^{d} f(x, y).

Then g is differentiable, and g′(x) = ∫_{y=c}^{d} D₁f(x, y). That is,

(d/dx) ∫_{y=c}^{d} f(x, y) = ∫_{y=c}^{d} (∂/∂x) f(x, y).

Proof. Compute for x ∈ [a, b], using the fundamental theorem of integral cal-
culus (Theorem 6.4.2) for the second equality and then Fubini’s theorem for
the fourth,

g(x) = ∫_{y=c}^{d} f(x, y)
     = ∫_{y=c}^{d} ( ∫_{t=a}^{x} D₁f(t, y) + f(a, y) )
     = ∫_{y=c}^{d} ∫_{t=a}^{x} D₁f(t, y) + C   (where C = ∫_{y=c}^{d} f(a, y))
     = ∫_{t=a}^{x} ∫_{y=c}^{d} D₁f(t, y) + C.

We show that ∫_{y=c}^{d} D₁f(t, y) is a continuous function of t. Fix t, and let ε > 0 be given. The continuity of D₁f on its compact domain [a, b] × [c, d] is uniform, so for some δ > 0, for all t̃ such that |t̃ − t| < δ, we have |D₁f(t̃, y) − D₁f(t, y)| < ε/(d − c) for all y ∈ [c, d]. Thus for all such t̃,

|∫_{y=c}^{d} D₁f(t̃, y) − ∫_{y=c}^{d} D₁f(t, y)| ≤ ∫_{y=c}^{d} |D₁f(t̃, y) − D₁f(t, y)| < ε.

This proves the claimed continuity. Now Theorem 6.4.1 says that the derivative of the iterated integral is the inner integral evaluated at t = x,

g′(x) = ∫_{y=c}^{d} D₁f(x, y).

This is the desired result. ⊓⊔


See Exercise 6.6.10 for another example in this spirit.
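Proposition 6.6.2 also lends itself to a quick numerical experiment. The following sketch assumes Python with SciPy; the test function f is an arbitrary smooth choice, not one from the text:

```python
# Compare the derivative of g(x) = integral over [0,1] of f(x, y) dy
# against the integral of D_1 f(x, y) dy, for a smooth test function.
import math
from scipy.integrate import quad

f   = lambda x, y: math.sin(x * y) + x**2 * y
d1f = lambda x, y: y * math.cos(x * y) + 2 * x * y  # partial derivative in x

g = lambda x: quad(lambda y: f(x, y), 0, 1)[0]

x, h = 0.7, 1e-6
finite_diff = (g(x + h) - g(x - h)) / (2 * h)   # centered difference quotient
under_sign  = quad(lambda y: d1f(x, y), 0, 1)[0]
print(finite_diff, under_sign)  # agree to roughly 1e-9
```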

Exercises

6.6.1. Let S be the set of points (x, y) ∈ R2 between the x-axis and the
sine curve as x varies between 0 and 2π. Since the sine curve has two arches
between 0 and 2π, and since the area of an arch of the sine function is 2,

∫_S 1 = 4.

On the other hand,


∫_{x=0}^{2π} ∫_{y=0}^{sin x} 1 = ∫_{x=0}^{2π} sin x = 0.

Why doesn’t this contradict Fubini’s theorem?

6.6.2. Exchange the order of integration in ∫_{x=a}^{b} ∫_{y=a}^{x} f(x, y).

6.6.3. Exchange the inner order of integration in ∫_{x=0}^{1} ∫_{y=0}^{1−x} ∫_{z=0}^{x+y} f.

6.6.4. Exchange the inner order of integration in ∫_{x=0}^{1} ∫_{y=0}^{1} ∫_{z=0}^{x²+y²} f. Sketch the region of integration.

6.6.5. Evaluate ∫K f from parts (a), (b), (c), (f) of Exercise 6.5.9, except
change K to [0, 1]n for part (f).

6.6.6. Find the volume of the region K bounded by the coordinate planes,
x + y = 1, and z = x2 + y 2 . Sketch K.

6.6.7. Evaluate ∫K (1 + x + y + z)−3 where K is the unit simplex.

6.6.8. Find the volume of the region K in the first octant bounded by x = 0,
z = 0, z = y, and x = 4 − y 2 . Sketch K.

6.6.9. Find the volume of the region K between z = x² + 9y² and z = 18 − x² − 9y². Sketch K.

6.6.10. Let f ∶ R2 Ð→ R have continuous mixed second-order partial deriva-


tives, i.e., let D12 f and D21 f exist and be continuous. Rederive the familiar
fact that D12 f = D21 f as follows. If D12 f (p, q) − D21 f (p, q) > 0 at some point
(p, q) then D12 f − D21 f > 0 on some rectangle B = [a, b] × [c, d] containing
(p, q), so ∫B (D12 f − D21 f ) > 0. Obtain a contradiction by evaluating this
integral.

6.6.11. Let K and L be compact subsets of Rn with boundaries of volume


zero. Suppose that for each x1 ∈ R, the cross-sectional sets

Kx1 = {(x2 , . . . , xn ) ∶ (x1 , x2 , . . . , xn ) ∈ K}


Lx1 = {(x2 , . . . , xn ) ∶ (x1 , x2 , . . . , xn ) ∈ L}

have equal (n − 1)-dimensional volumes. Show that K and L have the same
volume. (Hint: Use Fubini’s theorem to decompose the n-dimensional volume-
integral as the iteration of a 1-dimensional integral of (n − 1)-dimensional
integrals.) Illustrate for n = 2.

6.6.12. Let x₀ be a positive real number, and let f ∶ [0, x₀] → R be continuous. Show that

∫_{x₁=0}^{x₀} ∫_{x₂=0}^{x₁} ⋯ ∫_{xₙ=0}^{xₙ₋₁} f(xₙ) = (1/(n − 1)!) ∫_{t=0}^{x₀} (x₀ − t)^{n−1} f(t).

(Use induction. The base case n = 1 is easy; then the induction hypothesis
applies to the inner (n − 1)-fold integral.)

6.6.13. Let n ∈ Z+ and r ∈ R≥0 . The n-dimensional simplex of side r is

Sn (r) = {(x1 , . . . , xn ) ∶ 0 ≤ x1 , . . . , 0 ≤ xn , x1 + ⋯ + xn ≤ r}.

(a) Sketch Sn (r) for n = 1, 2, 3, with your sketches for n = 2 and n = 3


showing that Sn (r) is a disjoint union of cross-sectional (n − 1)-dimensional
simplices of side r − xn at height xn as xn varies from 0 to r. Explain this
symbolically for general n > 1. That is, explain why

Sₙ(r) = ⊔_{xₙ∈[0,r]} Sₙ₋₁(r − xₙ) × {xₙ}.

(b) Prove that vol(S1 (r)) = r. Use part (a) and Fubini’s theorem (cf. the
hint to Exercise 6.6.11) to prove that

vol(Sₙ(r)) = ∫_{xₙ=0}^{r} vol(Sₙ₋₁(r − xₙ))   for n > 1,

and show by induction that vol(Sn (r)) = rn /n!.


(c) Use Fubini’s theorem to show that
∫_{Sₙ(r)} xₙ = ∫_{xₙ=0}^{r} ((r − xₙ)^{n−1}/(n − 1)!) xₙ.

Work this integral by substitution or by parts to get ∫_{Sₙ(r)} xₙ = r^{n+1}/(n + 1)!.
(d) The centroid of Sₙ(r) is (x̄₁, . . . , x̄ₙ), where x̄ⱼ = ∫_{Sₙ(r)} xⱼ / vol(Sₙ(r)) for each j. What are these coordinates explicitly? (Make sure your answer agrees with the case in the text.)

6.7 Change of Variable


Every point p ∈ R2 with Cartesian coordinates (x, y) is also specified by its
polar coordinates (r, θ), where r is the distance from the origin to p, and θ
is the angle from the positive x-axis to p. (See Figure 6.31.)
The angle θ is defined only up to multiples of 2π, and it isn’t defined at
all when p = (0, 0). Trigonometry expresses (x, y) in terms of (r, θ),

x = r cos θ, y = r sin θ. (6.11)



Figure 6.31. Polar coordinates

But expressing (r, θ) in terms of (x, y) is a little more subtle. Certainly

r = √(x² + y²).

Also, tan θ = y/x provided that x ≠ 0, but this doesn’t mean that θ =
arctan(y/x). Indeed, arctan isn’t even a well-defined function until its range
is specified, e.g., as (−π/2, π/2). With this particular restriction, the actual
formula for θ, even given that not both x and y are 0, is not arctan(y/x), but

θ = arctan(y/x)        if x > 0 and y ≥ 0 (this lies in [0, π/2)),
θ = π/2                if x = 0 and y > 0,
θ = arctan(y/x) + π    if x < 0 (this lies in (π/2, 3π/2)),
θ = 3π/2               if x = 0 and y < 0,
θ = arctan(y/x) + 2π   if x > 0 and y < 0 (this lies in (3π/2, 2π)).

The formula is unwieldy, to say the least. (The author probably would not
read through the whole thing if he were instead a reader. In any case, see
Figure 6.32.) A better approach is that given (x, y), the polar radius r is the
unique nonnegative number such that

r 2 = x2 + y 2 ,

and then, if r ≠ 0, the polar angle θ is the unique number in [0, 2π) such
that (6.11) holds. But still, going from polar coordinates (r, θ) to Cartesian
coordinates (x, y) as in (6.11) is considerably more convenient than conversely.
This is good, since as we will see, doing so is also more natural.
The change of variable mapping from polar to Cartesian coordinates is

Φ ∶ R≥0 × [0, 2π] Ð→ R2 , Φ(r, θ) = (r cos θ, r sin θ).

The mapping is injective except that the half-lines R≥0 × {0} and R≥0 × {2π}
both map to the nonnegative x-axis, and the vertical segment {0} × [0, 2π] is
squashed to the point (0, 0). Each horizontal half-line R≥0 × {θ} maps to the

Figure 6.32. The angle θ between 0 and 2π

Figure 6.33. The polar coordinate mapping

ray of angle θ with the positive x-axis, and each vertical segment {r} × [0, 2π]
maps to the circle of radius r. (See Figure 6.33.)
It follows that regions in the (x, y)-plane defined by radial or angular con-
straints are images under Φ of (r, θ)-regions defined by rectangular constraints.
For example, the Cartesian disk

Db = {(x, y) ∶ x2 + y 2 ≤ b2 }

is the Φ-image of the polar rectangle

Rb = {(r, θ) ∶ 0 ≤ r ≤ b, 0 ≤ θ ≤ 2π}.

(See Figure 6.34.) Similarly, the Cartesian annulus and quarter disk

Aa,b = {(x, y) ∶ a2 ≤ x2 + y 2 ≤ b2 },
Qb = {(x, y) ∶ x ≥ 0, y ≥ 0, x2 + y 2 ≤ b2 },

are the images of rectangles. (See figures 6.35 and 6.36.)



Figure 6.34. Rectangle to disk under the polar coordinate mapping

Figure 6.35. Rectangle to annulus under the polar coordinate mapping

Figure 6.36. Rectangle to quarter disk under the polar coordinate mapping

Iterated integrals over rectangles are especially convenient to evaluate, be-


cause the limits of integration for the two one-variable integrals are constants
rather than variables that interact. For example,
∫_{r=a}^{b} ∫_{θ=0}^{2π} = ∫_{θ=0}^{2π} ∫_{r=a}^{b}.

These tidy (r, θ) limits describe the (x, y) annulus Aa,b indirectly via Φ, while
the more direct approach of an (x, y)-iterated integral over Aa,b requires four
messy pieces,
∫_{x=−b}^{−a} ∫_{y=−√(b²−x²)}^{√(b²−x²)} + ∫_{x=−a}^{a} [ ∫_{y=−√(b²−x²)}^{−√(a²−x²)} + ∫_{y=√(a²−x²)}^{√(b²−x²)} ] + ∫_{x=a}^{b} ∫_{y=−√(b²−x²)}^{√(b²−x²)}.

Thus, since Fubini’s theorem equates integrals over two-dimensional regions to


twofold iterated integrals, it would be a real convenience to reduce integrating
over the (x, y)-annulus to integrating over the (r, θ) rectangle that maps to it
under Φ. The change of variable theorem will do so. This is the sense in which
it is natural to map from polar to Cartesian coordinates rather than in the
other direction.
The change of variable theorem says in some generality how to transform
an integral from one coordinate system to another. Recall that given a set
A ⊂ Rn and a differentiable mapping Φ ∶ A Ð→ Rn , the n × n matrix of partial
derivatives of Φ is denoted Φ′ ,

Φ′ = [Dj Φi ]i,j=1,...,n .

A differentiable mapping whose partial derivatives are all continuous is called


a C 1 -mapping. Also, for every set K ⊂ Rn , an interior point of K is a point
of K that is not a boundary point, and the interior of K is the set of all such
points,
K ○ = {interior points of K}.
We will discuss boundary points and interior points more carefully in the next
section. In the specific sorts of examples that arise in calculus, they are easy
enough to recognize.

Theorem 6.7.1 (Change of variable theorem for multiple integrals).


Let K ⊂ Rn be a compact and connected set having boundary of volume zero.
Let A be an open superset of K, and let

Φ ∶ A Ð→ Rn

be a C 1 -mapping such that

Φ is injective on K ○ and det Φ′ ≠ 0 on K ○ .

Let
f ∶ Φ(K) Ð→ R
be a continuous function. Then

∫_{Φ(K)} f = ∫_K (f ○ Φ) ⋅ |det Φ′|.

This section will end with a heuristic argument to support Theorem 6.7.1,
and then Section 6.9 will prove the theorem after some preliminaries in Sec-
tion 6.8. In particular, Section 6.8 will explain why the left side integral in
the theorem exists. (The right-side integral exists because the integrand is

continuous on K, which is compact and has boundary of volume zero, but


the fact that Φ(K) is nice enough for the left-side integral to exist requires
some discussion.) From now to the end of this section, the focus is on how
the theorem is used. Generally, the idea is to carry out substitutions of the
sort that were called inverse substitutions in the one-variable discussion of
Section 6.4. That is, to apply the theorem to an integral ∫D f , find a suitable
set K and mapping Φ such that D = Φ(K) and the integral ∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣
is easier to evaluate. The new integral most likely will be easier because K
has a nicer shape than D (this wasn’t an issue in the one-variable case), but
also possibly because the new integrand is more convenient.
For example, to integrate the function f (x, y) = x2 + y 2 over the annulus
Aa,b , recall the polar coordinate mapping Φ(r, θ) = (r cos θ, r sin θ), and recall
that under this mapping, the annulus is the image of a box,

Aa,b = Φ([a, b] × [0, 2π]).

The composition of the integrand with Φ is

(f ○ Φ)(r, θ) = r²,

and the polar coordinate mapping has derivative matrix

Φ′ = [ cos θ   −r sin θ ]
     [ sin θ    r cos θ ],

with absolute determinant

|det Φ′| = r.
So by the change of variable theorem, the desired integral is instead an integral
over a box in polar coordinate space,

∫_{A_{a,b}} f = ∫_{[a,b]×[0,2π]} r² ⋅ r.

By Fubini's theorem, the latter integral can be evaluated as an iterated integral,

∫_{[a,b]×[0,2π]} r³ = ∫_{θ=0}^{2π} ∫_{r=a}^{b} r³ = (π/2)(b⁴ − a⁴).
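A numerical sanity check of this change of variable (a sketch assuming Python with SciPy; the radii a = 1, b = 2 are arbitrary sample values):

```python
# Check: the integral of x^2 + y^2 over the annulus a <= |(x,y)| <= b
# equals (pi/2)(b^4 - a^4), computed here in polar coordinates.
import math
from scipy.integrate import dblquad

a, b = 1.0, 2.0
# Transformed integrand (f o Phi) * |det Phi'| = r^2 * r over the (r, theta) box.
polar, _ = dblquad(lambda r, t: r**3, 0, 2*math.pi, lambda t: a, lambda t: b)
print(polar, math.pi/2 * (b**4 - a**4))  # both about 23.5619
```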
Similarly, the half disk H_b = Φ([0, b] × [0, π]) has centroid (0, ȳ) where

ȳ = ∫_{H_b} y / area(H_b) = ∫_{θ=0}^{π} ∫_{r=0}^{b} r sin θ ⋅ r / (πb²/2) = (2b³/3)/(πb²/2) = 4b/(3π).

Indeed, 4/(3π) is somewhat less than 1/2, in conformance with our physical intuition of the centroid of a region as its balancing point.
Subtle aspects of Theorem 6.7.1 were in play for the previous two examples.
The polar change of coordinate mapping Φ(r, θ) isn’t injective on all of the

box [a, b] × [0, 2π] that parametrizes the annulus: the 2π-periodic behavior
of Φ as a function of θ maps the top and bottom edges of the box to the
same segment [a, b] of the x-axis. Furthermore, on the box [0, b] × [0, π] that
parametrizes the half disk, not only does Φ collapse the left edge of the box to
the origin in the (x, y)-plane, but also det Φ′ = 0 on the left edge of the box.
Thus we really do require the theorem’s hypotheses that Φ need be injective
only on the interior of K, and that the condition det Φ′ ≠ 0 need hold only on
the interior of K.
Just as polar coordinates are convenient for radial symmetry in R2 , cylin-
drical coordinates in R3 conveniently describe regions with symmetry about
the z-axis. A point p ∈ R3 with Cartesian coordinates (x, y, z) has cylindrical
coordinates (r, θ, z) where (r, θ) are the polar coordinates for the point (x, y).
(See Figure 6.37.)

Figure 6.37. Cylindrical coordinates

The cylindrical change of variable mapping is thus

Φ ∶ R≥0 × [0, 2π] × R Ð→ R3

given by
Φ(r, θ, z) = (r cos θ, r sin θ, z).
That is, Φ is just the polar coordinate mapping on z cross sections, so like the
polar map, it is mostly injective. Its derivative matrix is
Φ′ = ⎡ cos θ   −r sin θ   0 ⎤
     ⎢ sin θ    r cos θ   0 ⎥ ,
     ⎣ 0        0         1 ⎦

and again
∣ det Φ′ ∣ = r.
So, for example, to integrate f (x, y, z) = y 2 z over the cylinder C : x2 + y 2 ≤ 1,
0 ≤ z ≤ 2, note that C = Φ([0, 1] × [0, 2π] × [0, 2]), and therefore by the change
of variable theorem and then Fubini’s theorem,
∫_C f = ∫_{θ=0}^{2π} ∫_{r=0}^{1} ∫_{z=0}^{2} r² sin²θ ⋅ z ⋅ r = ∫_{θ=0}^{2π} sin²θ ⋅ [r⁴/4]_{r=0}^{1} ⋅ [z²/2]_{z=0}^{2} = π/2.
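This too can be confirmed by quadrature over the parametrizing box (a sketch assuming Python with SciPy):

```python
# Check: the integral of y^2 z over the cylinder x^2 + y^2 <= 1, 0 <= z <= 2,
# computed in cylindrical coordinates, equals pi/2.
import math
from scipy.integrate import tplquad

# Transformed integrand (f o Phi) * |det Phi'| = (r sin t)^2 * z * r over a box.
val, _ = tplquad(lambda z, r, t: (r * math.sin(t))**2 * z * r,
                 0, 2*math.pi, lambda t: 0, lambda t: 1,
                 lambda t, r: 0, lambda t, r: 2)
print(val, math.pi / 2)  # both about 1.5708
```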

From now on, Fubini's theorem no longer necessarily warrants comment. For another example, we evaluate the integral ∫_S √(x² + y²) where S is the region bounded by z² = x² + y², z = 0, and z = 1. (This region looks like an ice cream cone with the ice cream licked down flat.) The change of variable theorem transforms the integral into (r, θ, z)-coordinates,

∫_S √(x² + y²) = ∫_{r=0}^{1} ∫_{θ=0}^{2π} ∫_{z=r}^{1} r² = π/6.

Spherical coordinates in R3 are designed to exploit symmetry about


the origin. A point p = (x, y, z) ∈ R3 has spherical coordinates (ρ, θ, ϕ) where
the spherical radius ρ is the distance from the origin to p, the longitude θ
is the angle from the positive x-axis to the (x, y)-projection of p, and the
colatitude ϕ is the angle from the positive z-axis to p. By some geometry, the
spherical coordinate mapping is

Φ ∶ R≥0 × [0, 2π] × [0, π] Ð→ R3

given by
Φ(ρ, θ, ϕ) = (ρ cos θ sin ϕ, ρ sin θ sin ϕ, ρ cos ϕ).
The spherical coordinate mapping has derivative matrix

Φ′ = ⎡ cos θ sin ϕ   −ρ sin θ sin ϕ   ρ cos θ cos ϕ ⎤
     ⎢ sin θ sin ϕ    ρ cos θ sin ϕ   ρ sin θ cos ϕ ⎥ ,
     ⎣ cos ϕ          0               −ρ sin ϕ      ⎦

with determinant (using column-linearity)

det Φ′ = −ρ² sin ϕ ⋅ det ⎡ cos θ sin ϕ    sin θ    cos θ cos ϕ ⎤
                          ⎢ sin θ sin ϕ   −cos θ    sin θ cos ϕ ⎥
                          ⎣ cos ϕ          0        −sin ϕ      ⎦
       = −ρ² sin ϕ (cos²θ sin²ϕ + sin²θ cos²ϕ + cos²θ cos²ϕ + sin²θ sin²ϕ)
       = −ρ² sin ϕ,

so that since 0 ≤ ϕ ≤ π,

|det Φ′| = ρ² sin ϕ.
That is, the spherical coordinate mapping reverses orientation. It can be re-
defined to preserve orientation by changing ϕ to the latitude angle, varying
from −π/2 to π/2, rather than the colatitude.
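The determinant computation can also be verified symbolically. A minimal sketch, assuming Python with SymPy (an illustrative tool choice, not part of the text):

```python
# Symbolic check that the spherical coordinate mapping has
# Jacobian determinant -rho^2 sin(phi).
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)
Phi = sp.Matrix([rho * sp.cos(theta) * sp.sin(phi),
                 rho * sp.sin(theta) * sp.sin(phi),
                 rho * sp.cos(phi)])
J = Phi.jacobian([rho, theta, phi])
print(sp.simplify(J.det()))  # prints -rho**2*sin(phi)
```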
Figure 6.38 shows the image under the spherical coordinate mapping of
some (θ, ϕ)-rectangles, each having a fixed value of ρ, and similarly for Fig-
ure 6.39 for some fixed values of θ, and Figure 6.40 for some fixed values of ϕ.
Thus the spherical coordinate mapping takes boxes to regions with these sorts
of walls, such as the half ice cream cone with a bite taken out of its bottom
in Figure 6.41.

Figure 6.38. Spherical coordinates for some fixed spherical radii

For an example of the change of variable theorem using spherical coordi-


nates, the solid ball of radius r in R3 is

B3 (r) = Φ([0, r] × [0, 2π] × [0, π]),

and therefore its volume is

vol(B₃(r)) = ∫_{B₃(r)} 1 = ∫_{θ=0}^{2π} ∫_{ρ=0}^{r} ∫_{ϕ=0}^{π} ρ² sin ϕ = 2π ⋅ (r³/3) ⋅ 2 = (4/3)πr³.
It follows that the spherical shell B₃(b) − B₃(a) has volume 4π(b³ − a³)/3.
See Exercises 6.7.12 through 6.7.14 for the lovely formula giving the volume
of the n-ball for arbitrary n.
The change of variable theorem and spherical coordinates work together
to integrate over the solid ellipsoid of (positive) axes a, b, c,

Ea,b,c = {(x, y, z) ∶ (x/a)2 + (y/b)2 + (z/c)2 ≤ 1}.



Figure 6.39. Spherical coordinates for some fixed longitudes

Figure 6.40. Spherical coordinates for some fixed colatitudes

For example, to compute the integral

∫_{E_{a,b,c}} (Ax² + By² + Cz²),

first define a change of variable mapping that stretches the unit sphere into
the ellipsoid,

Φ ∶ B3 (1) Ð→ Ea,b,c , Φ(u, v, w) = (au, bv, cw).

The absolute determinant of the derivative matrix of Φ is the obvious volume-


dilation constant,

Figure 6.41. The spherical coordinate mapping on a box
Figure 6.41. The spherical coordinate mapping on a box

Φ′ = ⎡ a 0 0 ⎤
     ⎢ 0 b 0 ⎥ ,     |det Φ′| = abc.
     ⎣ 0 0 c ⎦
Let f(x, y, z) = z². Then because E_{a,b,c} = Φ(B₃(1)) and (f ○ Φ)(u, v, w) = c²w², part of the integral is

∫_{Φ(B₃(1))} f = ∫_{B₃(1)} (f ○ Φ) ⋅ |det Φ′| = abc ⋅ c² ∫_{B₃(1)} w².

Apply the change of variable theorem again, using the spherical coordinate
mapping into (u, v, w)-space,
∫_{B₃(1)} w² = ∫_{ρ=0}^{1} ∫_{θ=0}^{2π} ∫_{ϕ=0}^{π} ρ² cos²ϕ ⋅ ρ² sin ϕ = 4π/15.
By the symmetry of the symbols in the original integral, its overall value is
therefore

∫_{E_{a,b,c}} (Ax² + By² + Cz²) = (4π/15) abc (a²A + b²B + c²C).
Another example is to find the centroid of the upper hemispherical shell

S = (B₃(b) − B₃(a)) ∩ {z ≥ 0}.

By symmetry, x̄ = ȳ = 0. As for z̄, compute using spherical coordinates that

∫_S z = ∫_{ρ=a}^{b} ∫_{θ=0}^{2π} ∫_{ϕ=0}^{π/2} ρ cos ϕ ⋅ ρ² sin ϕ = (π/4)(b⁴ − a⁴).
This integral needs to be divided by the volume 2π(b3 − a3 )/3 of S to give

z̄ = 3(b⁴ − a⁴)/(8(b³ − a³)).

In particular, the centroid of the solid hemisphere is 3/8 of the way up. It
is perhaps surprising that π does not figure in this formula, as it did in the
two-dimensional case.
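As before, the spherical-coordinate computation is easy to confirm numerically (a sketch assuming Python with SciPy, with sample radii a = 1, b = 2):

```python
# Check the centroid height of the upper hemispherical shell
# S = (B3(b) - B3(a)) with z >= 0: zbar = 3(b^4 - a^4)/(8(b^3 - a^3)).
import math
from scipy.integrate import tplquad

a, b = 1.0, 2.0
# Integrate z = rho*cos(phi) times the Jacobian rho^2 sin(phi).
num, _ = tplquad(lambda p, t, r: r * math.cos(p) * r**2 * math.sin(p),
                 a, b, lambda r: 0, lambda r: 2*math.pi,
                 lambda r, t: 0, lambda r, t: math.pi/2)
vol = 2 * math.pi * (b**3 - a**3) / 3
print(num / vol, 3*(b**4 - a**4) / (8*(b**3 - a**3)))  # both about 0.8036
```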
Here is a heuristic argument to support the change of variable theorem.
Suppose that K is a box. Recall the theorem’s assertion: under certain con-
ditions,
∫_{Φ(K)} f = ∫_K (f ○ Φ) ⋅ |det Φ′|.

To argue that this equality holds, take a partition P dividing K into subboxes
J, and in each subbox choose a point xJ . If the partition is fine enough, then
each J maps under Φ to a small patch A of volume vol(A) ≈ ∣ det Φ′ (xJ )∣vol(J)
(cf. Section 3.8), and each xJ maps to a point yA ∈ A. (See Figure 6.42.) Since
the integral is a limit of weighted sums, it follows that

∫ f ≈ ∑ f (yA )vol(A)
Φ(K) A
≈ ∑ f (Φ(xJ ))∣ det Φ′ (xJ )∣vol(J)
J

≈ ∫ (f ○ Φ) ⋅ ∣ det Φ′ ∣,
K

and these should become equalities in the limit as P becomes finer. What
makes this reasoning incomplete is that the patches A are not boxes, as are
required for our theory of integration.

Figure 6.42. Change of variable

Recall from Sections 3.8 and 3.9 that the absolute value of det Φ′ (x) de-
scribes how the mapping Φ scales volume at x, while the sign of det Φ′ (x)
says whether the mapping locally preserves or reverses orientation. The fac-
tor ∣ det Φ′ ∣ in the n-dimensional change of variable theorem (rather than

the signed det Φ′ ) reflects the fact that n-dimensional integration does not
take orientation into account. This unsigned result is less satisfying than
the corresponding result in one-variable theory, which does consider ori-
entation and therefore comes with a signed change of variable theorem,
∫_{φ(a)}^{φ(b)} f = ∫_a^b (f ○ φ) ⋅ φ′. An orientation-sensitive n-dimensional integration theory will be developed in Chapter 9.

Exercises
6.7.1. Evaluate ∫S x2 + y 2 where S is the region bounded by x2 + y 2 = 2z and
z = 2. Sketch S.
6.7.2. Find the volume of the region S above x2 + y 2 = 4z and below x2 + y 2 +
z 2 = 5. Sketch S.
6.7.3. Find the volume of the region between the graphs of z = x2 + y 2 and
z = (x2 + y 2 + 1)/2.
6.7.4. Derive the spherical coordinate mapping.
6.7.5. Let Φ be the spherical coordinate mapping. Describe Φ(K) where
K = {(ρ, θ, ϕ) ∶ 0 ≤ θ ≤ 2π, 0 ≤ ϕ ≤ π/2, 0 ≤ ρ ≤ cos ϕ}.
(Hint: Along with visualizing the geometry, set θ = 0 and consider the condi-
tion ρ2 = ρ cos ϕ in Cartesian coordinates.) Same question for
K = {(ρ, θ, ϕ) ∶ 0 ≤ θ ≤ 2π, 0 ≤ ϕ ≤ π, 0 ≤ ρ ≤ sin ϕ}.
6.7.6. Evaluate ∫S xyz where S is the first octant of B3 (1).
6.7.7. Find the mass of a solid figure filling the spherical shell
S = B3 (b) − B3 (a)
with density δ(x, y, z) = x2 + y 2 + z 2 .
6.7.8. A solid sphere of radius b has density δ(x, y, z) = e^{−(x²+y²+z²)^{3/2}}. Find its mass, ∫_{B₃(b)} δ.
6.7.9. Find the centroid of the region S = B3 (a) ∩ {x2 + y 2 ≤ z 2 } ∩ {z ≥ 0}.
Sketch S.
6.7.10. (a) Prove Pappus’s theorem: Let K be a compact set in the (x, z)-
plane lying to the right of the z-axis and with boundary of area zero. Let S
be the solid obtained by rotating K about the z-axis in R3 . Then
vol(S) = 2πx ⋅ area(K),
where as always, x = ∫K x/area(K). (Use cylindrical coordinates.)
(b) What is the volume of the torus Ta,b of cross-sectional radius a and
major radius b from the center of rotation to the center of the cross-sectional
disk? (See Figure 6.43.)

Figure 6.43. Torus

6.7.11. Prove the change of scale principle: if the set K ⊂ Rn has volume
v then for every r ≥ 0, the set rK = {rx ∶ x ∈ K} has volume rn v. (Change
variables by Φ(x) = rx.)

6.7.12. (Volume of the n-ball, first version.) Let n ∈ Z+ and r ∈ R≥0 . The
n-dimensional ball of radius r is

Bₙ(r) = {x ∈ Rⁿ ∶ |x| ≤ r} = {(x₁, . . . , xₙ) ∶ x₁² + ⋯ + xₙ² ≤ r²}.

Let
vn = vol(Bn (1)).
(a) Explain how Exercise 6.7.11 reduces computing the volume of Bn (r)
to computing vn .
(b) Explain why v1 = 2 and v2 = π.
(c) Let D denote the unit disk B2 (1). Explain why for n > 2,

Bₙ(1) = ⊔_{(x₁,x₂)∈D} {(x₁, x₂)} × Bₙ₋₂(√(1 − x₁² − x₂²)).

That is, the unit n-ball is a union of cross-sectional (n − 2)-dimensional balls of radius √(1 − x₁² − x₂²) as (x₁, x₂) varies through the unit disk. Make a sketch for n = 3, the only value of n for which we can see this.
(d) Explain why for n > 2,

vₙ = vₙ₋₂ ∫_{(x₁,x₂)∈D} (1 − x₁² − x₂²)^{n/2−1}
   = vₙ₋₂ ∫_{θ=0}^{2π} ∫_{r=0}^{1} (1 − r²)^{n/2−1} ⋅ r
   = vₙ₋₂ π/(n/2).

(Use the definition of volume at the end of Section 6.5, Fubini’s theorem, the
definition of volume again, the change of scale principle from the previous
exercise, and the change of variable theorem.)
(e) Prove by induction the n even case of the formula

vₙ = π^{n/2}/(n/2)!                    for n even,
vₙ = 2ⁿ π^{(n−1)/2} ((n − 1)/2)!/n!    for n odd.
(The n odd case can be proved by induction as well, but the next two
exercises provide a better, more conceptual, approach to the volumes of odd-
dimensional balls.)

6.7.13. This exercise computes the improper integral I = ∫_{x=0}^{∞} e^{−x²}, defined as the limit lim_{R→∞} ∫_{x=0}^{R} e^{−x²}. Let I(R) = ∫_{x=0}^{R} e^{−x²} for R ≥ 0.
(a) Use Fubini's theorem to show that I(R)² = ∫_{S(R)} e^{−x²−y²}, where S(R) is the square

S(R) = {(x, y) ∶ 0 ≤ x ≤ R, 0 ≤ y ≤ R}.

(b) Let Q(R) be the quarter disk

Q(R) = {(x, y) ∶ 0 ≤ x, 0 ≤ y, x² + y² ≤ R²},

and similarly for Q(√2 R). Explain why

∫_{Q(R)} e^{−x²−y²} ≤ ∫_{S(R)} e^{−x²−y²} ≤ ∫_{Q(√2 R)} e^{−x²−y²}.

(c) Change variables, and evaluate ∫_{Q(R)} e^{−x²−y²} and ∫_{Q(√2 R)} e^{−x²−y²}. What are the limits of these two quantities as R → ∞?
(d) What is I?
6.7.14. (Volume of the n-ball, improved version) Define the gamma function as an integral,

Γ(s) = ∫_{x=0}^{∞} x^{s−1} e^{−x} dx,   s > 0.

(This improper integral is well behaved, even though it is not being carried out over a bounded region and even though the integrand is unbounded near x = 0 when 0 < s < 1. We use dx here because this exercise is computational.)
(a) Show: Γ(1) = 1, Γ(1/2) = √π, Γ(s + 1) = sΓ(s). (Substitute and see the previous exercise for the second identity, integrate by parts for the third.)
(b) Use part (a) to show that n! = Γ(n + 1) for n = 0, 1, 2, . . . . Accordingly, define x! = Γ(x + 1) for all real numbers x > −1, not only nonnegative integers.
(c) Use Exercise 6.7.12(b), Exercise 6.7.12(d), and the extended definition of the factorial in part (b) of this exercise to obtain a uniform formula for the volume of the unit n-ball,

vₙ = π^{n/2}/(n/2)!,   n = 1, 2, 3, . . . .

(We already have this formula for n even. For n odd, the argument is essentially identical to Exercise 6.7.12(e) but starting at the base case n = 1.) Thus the n-ball of radius r has volume

vol(Bₙ(r)) = (π^{n/2}/(n/2)!) rⁿ,   n = 1, 2, 3, . . . .

(d) The Legendre duplication formula for the gamma function is

Γ(2s) = 2^{2s−1} π^{−1/2} Γ(s)Γ(s + 1/2).

For odd n, what value of s shows that the values of vn from part (c) of this
exercise and from part (e) of Exercise 6.7.12 are equal?
(e) (Read-only. While the calculation of vn in these exercises shows the
effectiveness of our integration toolkit, the following heuristic argument il-
lustrates that we would profit from an even more effective theory of integra-
tion.) Decompose Euclidean space Rn into concentric n-spheres (the n-sphere
is the boundary of the n-ball), each having radius r and differential radial
thickness dr. Since each such n-sphere is obtained by removing the n-ball of
radius r from the n-ball of radius r + dr, its differential volume is

vₙ(r + dr)ⁿ − vₙrⁿ ≈ vₙ n r^{n−1} dr.

Here we ignore the higher powers of dr on the grounds that they are so much
smaller than the dr-term. Thus, reusing some ideas from a moment ago, and
using informal notation,


π^{n/2} = ( ∫_R e^{−x²} dx )ⁿ                        since the integral equals √π
        = ∫_{Rⁿ} e^{−|x|²} dV                        by Fubini's theorem
        = vₙ n ∫_{r=0}^{∞} r^{n−1} e^{−r²} dr        integrating over spherical shells
        = vₙ (n/2) ∫_{t=0}^{∞} t^{n/2−1} e^{−t} dt   substituting t = r²
        = vₙ (n/2) Γ(n/2)
        = vₙ (n/2)!.

The formula vn = π n/2 /(n/2)! follows immediately. The reason that this
induction-free argument lies outside our theoretical framework is that it in-
tegrates directly (rather than by the change of variable theorem) even as it
decomposes Rn into small pieces that aren’t boxes. Although we would prefer
a more flexible theory of integration that allows such procedures, developing
it takes correspondingly more time.
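The uniform formula from part (c) of Exercise 6.7.14 can be spot-checked by Monte Carlo sampling (a sketch assuming Python with NumPy and SciPy; n = 5 and the sample size are arbitrary choices):

```python
# Monte Carlo check of vol(B_n(1)) = pi^(n/2) / (n/2)!.
import numpy as np
from scipy.special import gamma

n, trials = 5, 10**6
pts = np.random.uniform(-1.0, 1.0, size=(trials, n))
inside = (pts**2).sum(axis=1) <= 1.0
estimate = 2.0**n * inside.mean()        # fraction of the cube [-1, 1]^n
exact = np.pi**(n/2) / gamma(n/2 + 1)    # (n/2)! = Gamma(n/2 + 1)
print(estimate, exact)  # both about 5.2638 for n = 5
```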

6.7.15. This exercise heuristically derives Stirling's formula,

n! ∼ √(2πn) (n/e)ⁿ,   n ≫ 0.

(a) Show that because n! = Γ(n + 1), it follows that n! = ∫_{t=0}^{∞} e^{n ln t − t} dt.


(b) With n fixed and t variable, show that the quantity n ln t − t takes
its maximum value n ln n − n at t = n, where its first derivative is 0 and its
second derivative is −1/n. Thus the quantity’s quadratic approximation about
its maximizing point is n ln n − n − (1/(2n))(t − n)².
(c) In the integral expression of n! from (a), replace n ln t−t by its quadratic
approximation from (b) to get

Γ(n + 1) ∼ (n/e)ⁿ ∫_{t=0}^{∞} e^{−(t−n)²/(2n)} dt.

The quantity t − n runs through (−n, ∞) as t runs through (0, ∞). Thus,
assuming that n ≫ 0, replace t − n by t and extend the integration to all of R
to get
Γ(n + 1) ∼ (n/e)ⁿ ∫_{t=−∞}^{∞} e^{−t²/(2n)} dt.

Replace t by √(2n) t, and use Exercise 6.7.13 to evaluate the resulting integral and obtain Stirling's formula.
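A short numerical table shows how good the approximation is (a sketch using only Python's standard library):

```python
# Compare n! with Stirling's approximation sqrt(2*pi*n) * (n/e)^n.
import math

for n in (5, 10, 20, 50):
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e)**n
    print(n, math.factorial(n) / stirling)  # ratio tends to 1, roughly 1 + 1/(12n)
```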

6.7.16. This exercise evaluates the improper integral

I_s = ∫_{x=−∞}^{∞} dx/(1 + x²)^s   (for every real number s > 1/2).

(a) For α > 0, make a substitution in the integral ∫_{t=0}^{∞} t^s e^{−αt} dt/t to show that it equals Γ(s)α^{−s}. Thus

α^{−s} = (1/Γ(s)) ∫_{t=0}^{∞} t^s e^{−αt} dt/t,   α > 0.

(b) Explain how a particular choice of α in (a) leads to

I_s = (1/Γ(s)) ∫_{x=−∞}^{∞} ∫_{t=0}^{∞} t^s e^{−(1+x²)t} dt/t dx.

(c) Explain how after exchanging the order of integration, a few other steps lead to

I_s = (1/Γ(s)) ∫_{t=0}^{∞} t^{s−1/2} e^{−t} dt/t ∫_{x=−∞}^{∞} e^{−x²} dx.
(d) Use earlier exercises to conclude that

I_s = √π Γ(s − 1/2)/Γ(s).

Can you check this formula for s = 1?
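The closed form is easy to test by quadrature (a sketch assuming Python with SciPy; s = 1 recovers the arctangent integral, whose value is π):

```python
# Check I_s = sqrt(pi) * Gamma(s - 1/2) / Gamma(s) by direct quadrature.
import math
from scipy.integrate import quad
from scipy.special import gamma

for s in (1.0, 1.5, 3.0):
    direct, _ = quad(lambda x: (1 + x**2)**(-s), -math.inf, math.inf)
    closed = math.sqrt(math.pi) * gamma(s - 0.5) / gamma(s)
    print(s, direct, closed)  # s = 1 gives pi in both columns
```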

6.7.17. Let A and B be positive real numbers. This exercise evaluates the improper integral

I_s = ∫_{x=−∞}^{∞} dx/(Ae^{2x} + Be^{−2x})^s   (for every real number s > 0).

(a) Recall from Exercise 6.7.16(a) that α^{−s} = (1/Γ(s)) ∫_{t=0}^{∞} t^s e^{−αt} dt/t for all α > 0. Explain how a particular choice of α leads to

I_s = (1/Γ(s)) ∫_{x=−∞}^{∞} ∫_{t=0}^{∞} t^s e^{−(Ae^{2x}+Be^{−2x})t} dt/t dx.

(b) Let x = (1/2) log u (natural logarithm) and show that

I_s = (1/(2Γ(s))) ∫_{u=0}^{∞} ∫_{t=0}^{∞} t^s e^{−(Au+Bu⁻¹)t} dt/t du/u.

Replace t by ut to get

I_s = (1/(2Γ(s))) ∫_{u=0}^{∞} ∫_{t=0}^{∞} t^s u^s e^{−(Au²+B)t} dt/t du/u.

Replace u by √u to get

I_s = (1/(4Γ(s))) ∫_{u=0}^{∞} ∫_{t=0}^{∞} t^s u^{s/2} e^{−(Au+B)t} dt/t du/u.

(c) Exchange the order of integration and replace u by u/t to get

I_s = (1/(4Γ(s))) ∫_{t=0}^{∞} ∫_{u=0}^{∞} t^{s/2} u^{s/2} e^{−(Au+Bt)} du/u dt/t.

Replace u by u/A and t by t/B to get

I_s = (1/(4Γ(s))) A^{−s/2} B^{−s/2} ∫_{t=0}^{∞} t^{s/2} e^{−t} dt/t ∫_{u=0}^{∞} u^{s/2} e^{−u} du/u.

Thus the integral is

∫_{x=−∞}^{∞} dx/(Ae^{2x} + Be^{−2x})^s = (Γ(s/2)Γ(s/2)/(4Γ(s))) A^{−s/2} B^{−s/2},   s > 0.

6.7.18. (Read-only. This exercise makes use not only of the gamma function
but of some results beyond our scope, in the hope of interesting the reader in
those ideas.)
(a) Consider any x ∈ R_{>0}, ξ ∈ R, and s ∈ R_{>1}. We show that

∫_{y=−∞}^{∞} e^{iξy}/(x + iy)^s dy = (2π/Γ(s)) e^{−xξ} ξ^{s−1} if ξ > 0,   and 0 if ξ ≤ 0.

Indeed, replacing ξ by xξ in the gamma function integral gives a variant expression of gamma that incorporates x,

Γ(s) = ∫_{ξ=0}^{∞} e^{−ξ} ξ^s dξ/ξ = x^s ∫_{ξ=0}^{∞} e^{−xξ} ξ^s dξ/ξ.
A result from complex analysis says that this formula extends from the open
half-line of positive x-values to the open half-plane of complex numbers x + iy
with x positive. That is, for every y ∈ R,

Γ(s) = (x + iy)^s ∫_{ξ=0}^{∞} e^{−(x+iy)ξ} ξ^s dξ/ξ.
This is

Γ(s)/(x + iy)^s = ∫_{ξ=−∞}^{∞} e^{−iyξ} ϕ_x(ξ) dξ   where   ϕ_x(ξ) = e^{−xξ} ξ^{s−1} if ξ > 0, and ϕ_x(ξ) = 0 if ξ ≤ 0.

The integral here is a Fourier transform. That is, letting F denote the Fourier
transform operator, the previous display says that

Γ(s)/(x + iy)^s = (Fϕ_x)(y),   y ∈ R.

The integral Γ(s) ∫_{y=−∞}^{∞} e^{iξy} (x + iy)^{−s} dy is consequently the inverse Fourier

transform at ξ of the Fourier transform of ϕx . Fourier inversion says that


the inverse Fourier transform of the Fourier transform is the original function
multiplied by 2π. Putting all of this together gives the value of the integral at
the beginning of the exercise.
(b) We introduce an n-dimensional gamma function for every positive
integer n. Let

Cn = {n × n symmetric positive definite matrices}.

The set Cn is so denoted because it forms a structure called a cone: it is


closed under addition and under dilation by positive real numbers. For n > 1,
Exercise 4.7.11 gives a decomposition

Rⁿ⁻¹ × R_{>0} × C_{n−1} ≈ Cₙ,   c × a × ξ₂ ≈ ⎡ a      cᵀ        ⎤
                                               ⎣ c   a⁻¹ccᵀ + ξ₂ ⎦ .

The nth gamma function is

Γₙ(s) = ∫_{ξ∈Cₙ} e^{−tr ξ} (det ξ)^s dξ/(det ξ)^{(n+1)/2},

in which dξ = ∏i≤j dξij is the product of the differentials of the diagonal and
superdiagonal elements of ξ, where we recall that because ξ is symmetric the
subdiagonal entries are redundant. The decomposition of Cn combines with
some other facts (which the reader is encouraged to identify, if not prove) to
show that

Γₙ(s) = ∫_{c∈Rⁿ⁻¹} ∫_{a∈R_{>0}} ∫_{ξ₂∈C_{n−1}} e^{−a⁻¹|c|² − a − tr ξ₂} ⋅ (a^s (det ξ₂)^s)/(a^{(n+1)/2} (det ξ₂)^{(n+1)/2}) dξ₂ da dc.

Replacing c by a^{1/2} c (and thus dc by a^{(n−1)/2} dc) lets the integral be separated,

Γₙ(s) = ∫_{c∈Rⁿ⁻¹} e^{−|c|²} dc ⋅ ∫_{a∈R_{>0}} e^{−a} a^s da/a ⋅ ∫_{ξ₂∈C_{n−1}} e^{−tr ξ₂} (det ξ₂)^{s−1/2} dξ₂/(det ξ₂)^{n/2}

       = π^{(n−1)/2} Γ(s) Γₙ₋₁(s − 1/2).

And iterating the argument gives the value of the nth gamma function in
terms of the basic gamma function,

Γₙ(s) = π^{(n−1)n/4} Γ(s) Γ(s − 1/2) Γ(s − 2/2) ⋯ Γ(s − (n−2)/2) Γ(s − (n−1)/2).

Similarly to part (a), one now can evaluate an integral over the vector space Vn
of n × n symmetric matrices for a given ξ ∈ Vn ,

∫_{y∈Vₙ} e^{i tr(ξy)}/det(x + iy)^s dy = ((2π)ⁿ π^{(n−1)n/2}/Γₙ(s)) e^{−tr(xξ)} (det ξ)^{s−(n+1)/2} if ξ ∈ Cₙ,   and 0 otherwise,

using the fact that the constant for Fourier inversion over the space of n × n
symmetric matrices is (2π)ⁿ π^{(n−1)n/2}.

Figure 6.44. Geodesic dome



6.7.19. Figure 6.44 shows a geodesic dome with 5-fold vertices and 6-fold
vertices. (A geodesic of the sphere is a great circle.) Figure 6.45 shows a
bird’s-eye view of the dome. The thinner edges emanate from the 5-vertices,
while four of the six edges emanating from each 6-vertex are thicker. The five
triangles that meet at a 5-vertex are isosceles, while two of the six triangles
that meet at a 6-vertex are equilateral. This exercise uses vector algebra and
the spherical coordinate system to work out the lengths and angles of the
dome. Integration and the change of variable theorem play no role in this
exercise.

Figure 6.45. Geodesic dome, bird’s-eye view

(a) Take all vertices to lie on a sphere of radius 1. The ten thick edges
around the equator form a regular 10-gon. Show that consequently the thick
edges have length
a = 2 sin(π/10) = 2 cos(2π/5).
This famous number from geometry goes back to Euclid. Note that a = ζ₅ + ζ₅⁻¹ where ζ₅ = e^{2πi/5} = cos(2π/5) + i sin(2π/5) is the fifth root of unity one-fifth of the way counterclockwise around the complex unit circle. Thus a² + a − 1 = ζ₅² + 2 + ζ₅³ + ζ₅ + ζ₅⁴ − 1, and the right side is 0 by the finite geometric sum formula. That is,

a² + a − 1 = 0,   a > 0,
and so the length of the thick edges is

a = (−1 + √5)/2 = 0.618033988 . . . .
This number is a variant of the so-called golden ratio.
(b) The dome has a point at the north pole (0, 0, 1); then a layer of five
points p0 through p4 around the north pole at some colatitude ϕ; then a

layer of ten points q0 through q9 , five at colatitude 2ϕ and the other five at
some second colatitude φ; and finally the layer of ten equatorial points r0
through r9 . The colatitude ϕ must be such that the triangle with vertices n,
q0 , and q2 is equilateral. These vertices may be taken to be

n = (0, 0, 1),
q0 , q2 = (cos(π/5) sin(2ϕ), ∓ sin(π/5) sin(2ϕ), cos(2ϕ)).

Show that the equilateral condition |n − q₀|² = |q₂ − q₀|² gives the condition cos²(ϕ) = 1/(2 − a), then sin²(ϕ) = (1 − a)/(2 − a), then tan²(ϕ) = a², so that the colatitude of the five points about the north pole is

ϕ = arctan(a) = 31.7174 . . .°.

Use the cross-sectional triangle having vertices 0, n, p0 and the law of cosines
to show that the shorter segments have length
b = √(2(1 − 1/√(2 − a))) = 0.546533057 . . . .

(Alternatively, one can find ϕ and b using the triangle with vertices n, p0 , p1 .)
For reference in part (e), show that

2ϕ = arctan(2) = 63.4349 . . .○ .

(c) Show that the angle of an isosceles triangle where its equal sides meet
at a 5-vertex is
α = 2 arcsin(a/(2b)) = 68.8619 . . .○ ,
and the angles where its unequal sides meet at 6-vertices are

β = arccos(a/(2b)) = 55.5690 . . .○ .

(d) Show that the angle where two a-segments meet along a geodesic
is 180○ − 36○ . Show that the angle where two b-segments meet along a geodesic
(this happens at the 6-vertices but not at the 5-vertices) is 180○ − ϕ.
(e) To find the colatitude φ of q1 , q3 , . . . , q9 , take q9 and q1 to be

q9 , q1 = (cos(π/5) sin(2ϕ), ∓ sin(π/5) sin(2ϕ), cos(2ϕ)),

and consider the geodesic containing them. Their cross product is normal to
the plane of the geodesic. Show that this cross product is

q9 × q1 = 2 sin(π/5) sin(2ϕ)(− cos(2ϕ), 0, cos(π/5) sin(2ϕ)).

Show by illustration that the latitude of this cross product is the colatitude φ of q₉ and q₁. Show that φ = arctan(√(2 + a)). Show further that (a + 1)² = 2 + a, so that in fact,

φ = arctan(a + 1) = 58.2825 . . .°.
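All of the dome constants above can be recomputed in a few lines (a sketch using only Python's standard library; the printed values should match the decimals in parts (a), (b), (c), and (e)):

```python
# Recompute the geodesic dome constants.
import math

a = 2 * math.sin(math.pi / 10)                 # thick edge, (sqrt(5) - 1)/2
phi = math.degrees(math.atan(a))               # colatitude of the five p-points
b = math.sqrt(2 * (1 - 1 / math.sqrt(2 - a)))  # thin edge
alpha = 2 * math.degrees(math.asin(a / (2 * b)))
beta = math.degrees(math.acos(a / (2 * b)))
phi2 = math.degrees(math.atan(a + 1))          # colatitude of q1, q3, ..., q9
print(a, phi, b, alpha, beta, phi2)
# 0.6180..., 31.7174..., 0.5465..., 68.8619..., 55.5690..., 58.2825...
```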

6.8 Topological Preliminaries for the Change of Variable Theorem
This section establishes some topological results to prepare for proving the
change of variable theorem (Theorem 6.7.1), and then the next section gives
the proof. Both sections are technical, and so the reader is invited to skim
as feels appropriate. For instance, one might focus on the discussion, the
statements, and the figures, but go light on the proofs.
In preparation for proving the change of variable theorem, we review its
statement. The statement includes the terms boundary and interior, which
we have considered only informally so far, but we soon will discuss them
more carefully. The statement also includes the term open, and the reader
is reminded that a set is called open if its complement is closed; we soon
will review the definition of a closed set. The statement includes the term
C 1 -mapping, meaning a mapping such that all partial derivatives of all of its
component functions exist and are continuous. And the statement includes
the notation K ○ for the interior of a set K. The theorem says:
Let K ⊂ Rn be a compact and connected set having boundary of volume
zero. Let A be an open superset of K, and let

Φ ∶ A Ð→ Rn

be a C 1 -mapping such that

Φ is injective on K ○ and det Φ′ ≠ 0 on K ○ .

Let
f ∶ Φ(K) Ð→ R
be a continuous function. Then

∫_{Φ(K)} f = ∫_K (f ○ Φ) ⋅ |det Φ′|.

Thus the obvious data for the theorem are K, Φ, and f . (The description of Φ
subsumes A, and in any case the role of A is auxiliary.) But also, although the
dimension n is conceptually generic but fixed, in fact the proof of the theorem
will entail induction on n, so that we should view n as a variable part of the
setup as well. Here are some comments about the data.
• The continuous image of a compact set is compact (Theorem 2.4.14), so
that Φ(K) is again compact. Similarly, by an invocation in Section 2.4,
the continuous image of a connected set is connected, so that Φ(K) is
again connected. The reader who wants to minimize invocation may in-
stead assume that K is path-connected, so that Φ(K) is again path-
connected (see Exercise 2.4.10 for the definition of path-connectedness and
the fact that path-connectedness is a topological property); the distinction

between connectedness and path-connectedness is immaterial for every ex-


ample that will arise in calculus. We soon will see that the image Φ(K)
also has boundary of volume zero, so that in fact Φ(K) inherits all of the
assumed properties of K.
• Thus both integrals in the change of variable theorem exist, because in
each case the integrand is continuous on the domain of integration and
the domain of integration is compact and has boundary of volume zero.
• The hypotheses of the theorem can be weakened or strengthened in vari-
ous ways with no effect on the outcome. Indeed, the proof of the theorem
proceeds partly by strengthening the hypotheses. The hypotheses in The-
orem 6.7.1 were chosen to make the theorem fit the applications that arise
in calculus. Especially, parametrizations by polar, cylindrical, or spheri-
cal coordinates often degenerate on the boundary of the parameter-box,
hence the conditions that Φ is injective and det Φ′ ≠ 0 being required
only on the interior K ○ . See Figure 6.46. In the figure, the polar coordi-
nate mapping collapses the left side of the parametrizing rectangle to the
origin in the parametrized disk, and it takes the top and bottom sides
of the parametrizing rectangle to the same portion of the x-axis in the
parametrized disk. Furthermore, neither the origin nor the portion of the
x-axis is on the boundary of the parametrized disk even though they both
come from the boundary of the parametrizing rectangle. On the other
hand, every nonboundary point of the parametrizing rectangle is taken
to a nonboundary point of the parametrized disk, so that every bound-
ary point of the parametrized disk comes from a boundary point of the
parametrizing rectangle.
• While the hypotheses about Φ are weaker than necessary in order to make
the theorem easier to use, the hypothesis that f is continuous is stronger
than necessary in order to make the theorem easier to prove. The theorem
continues to hold if f is assumed only to be integrable, but then the proof
requires more work. In calculus examples, f is virtually always continuous.
This subject will be revisited at the end of Chapter 7.
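
The theorem is also easy to test numerically. The following Python sketch (an illustration only, not part of the formal development; the integrand f and the grid resolutions are arbitrary choices) checks the polar coordinate instance of the formula by comparing midpoint Riemann sums over Φ(K) and over K.

import numpy as np

# Numerical check of the change of variable formula for the polar coordinate
# mapping Phi(r, t) = (r cos t, r sin t) on K = [0, 1] x [0, 2 pi], so that
# Phi(K) is the closed unit disk and |det Phi'(r, t)| = r.
f = lambda x, y: x**2 + y          # an arbitrary continuous integrand

# Right side: integral over K of (f o Phi) |det Phi'|, by a midpoint sum.
n = 1000
r = (np.arange(n) + 0.5) / n                 # midpoints of [0, 1]
t = (np.arange(n) + 0.5) * 2 * np.pi / n     # midpoints of [0, 2 pi]
R, T = np.meshgrid(r, t, indexing="ij")
rhs = np.sum(f(R * np.cos(T), R * np.sin(T)) * R) * (1 / n) * (2 * np.pi / n)

# Left side: integral of f over the unit disk, by a midpoint sum on a grid.
x = -1 + (np.arange(2 * n) + 0.5) / n        # midpoints with spacing 1/n
X, Y = np.meshgrid(x, x, indexing="ij")
lhs = np.sum(np.where(X**2 + Y**2 <= 1, f(X, Y), 0.0)) * (1 / n) ** 2

print(lhs, rhs)   # both near pi/4; the y term integrates to zero by symmetry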
This section places a few more topological ideas into play to set up the
proof of the change of variable theorem in the next section. The symbols K, A,
Φ, and f denoting the set, the open superset, the change of variable, and the
function in the theorem will retain their meanings throughout the discussion.
Symbols such as S will denote other sets, symbols such as Ψ will denote other
transformations, and symbols such as g will denote other functions.
Recall some topological ideas that we have already discussed.
• For every point a ∈ Rn and every positive real number r > 0, the open ball
centered at a of radius r is the set

B(a, r) = {x ∈ Rn ∶ ∣x − a∣ < r} .


Figure 6.46. The change of variable mapping need not behave well on the boundary

• A point a ∈ Rn is called a limit point of a set S ⊂ Rn if every open ball


centered at a contains some point x ∈ S such that x ≠ a. A subset A of Rn
is called closed if it contains all of its limit points.

Definition 6.8.1. Let S be a subset of Rn . Its closure S̄ is the smallest closed


superset of S.

Here smallest is taken in the sense of set-containment. The intersection of


closed sets is closed (Exercise 6.8.1(a)), and so S̄ is the intersection of all closed
supersets of S, including Rn . Thus S̄ exists and is unique. The special-case
definition
B̄(a, r) = {x ∈ Rn ∶ ∣x − a∣ ≤ r}
from Section 5.1 is consistent with Definition 6.8.1.
Closed sets can also be described in terms of boundary points rather than
limit points.

Definition 6.8.2. Let S be a subset of Rn . A point p ∈ Rn is called a bound-


ary point of S if for every r > 0 the open ball B(p, r) contains a point from S
and a point from the complement S c . The boundary of S, denoted ∂S, is the
set of boundary points of S.

A boundary point of a set need not be a limit point of the set, and a limit
point of a set need not be a boundary point of the set (Exercise 6.8.1(b)).
Nonetheless, similarly to the definition of closed set in the second bullet
before Definition 6.8.1, a set is closed if and only if it contains all of its
boundary points (Exercise 6.8.1(c)). The boundary of every set is closed (Ex-
ercise 6.8.1(d)). Since the definition of boundary point is symmetric in the
set and its complement, the boundary of the set is also the boundary of the
complement,
∂S = ∂(S c ).
The closure of a set is the union of the set and its boundary (Exercise 6.8.2(a)),

S̄ = S ∪ ∂S.

If S is bounded then so is its closure S̄ (Exercise 6.8.2(b)), and therefore the


closure of a bounded set is compact. The special-case definition

∂B(a, r) = {x ∈ Rn ∶ ∣x − a∣ = r}

from Section 6.1 is consistent with Definition 6.8.2.

Definition 6.8.3. An open box in Rn is a set of the form

J = (a1 , b1 ) × (a2 , b2 ) × ⋯ × (an , bn ).

The word box, unmodified, continues to mean a closed box.

Proposition 6.8.4 (Finiteness property of compact sets: special case


of the Heine–Borel theorem). Consider a compact set K ⊂ Rn . Suppose
that some collection of open boxes Ji covers K. Then a finite subcollection of
the open boxes Ji covers K.

Proof (Sketch). Suppose that no finite collection of the open boxes Ji cov-
ers K. Let B1 be a box that contains K. Partition B1 into 2n subboxes B̃ by
bisecting it in each direction. If for each subbox B̃, some finite collection of
the open boxes Ji covers K ∩ B̃, then the 2n -fold collection of these finite col-
lections in fact covers all of K. Thus no finite collection of the open boxes Ji
covers K ∩ B̃ for at least one subbox B̃ of B1 . Name some such subbox B2 ,
repeat the argument with B2 in place of B1 , and continue in this fashion,
obtaining nested boxes
B1 ⊃ B2 ⊃ B3 ⊃ ⋯
whose sides are half as long at each succeeding generation, and such that no
K ∩ Bj is covered by a finite collection of the open boxes Ji . The intersection
K ∩ B1 ∩ B2 ∩ ⋯ contains at most one point, because the boxes Bj eventually
shrink smaller than the distance between any two given distinct points. On the
other hand, since each K ∩Bj is nonempty (otherwise the empty subcollection
of the open boxes Ji would cover it), there is a sequence {cj } with each
cj ∈ K ∩ Bj ; and since K is compact and each Bj is compact and the Bj
are nested, the sequence {cj } has a subsequence that converges in K and
in each Bj , hence converging in the intersection K ∩ B1 ∩ B2 ∩ ⋯. Thus the
intersection is a single point c. Some open box Ji covers c because c ∈ K, and
so because the boxes Bj shrink to c, also Ji covers Bj for all high enough
indices j. This contradicts the fact that no K ∩ Bj is covered by finitely
many Ji . Thus the initial supposition that no finite collection of the open
boxes Ji covers K is untenable. ⊔

Although the finiteness property of compact sets plays only a small role in
these notes, the idea is important and far-reaching. For example, it lies at the
heart of sequence-free proofs that the continuous image of a compact set is

compact, the continuous image of a connected set is connected, and continuity


on compact sets is uniform.
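
In dimension one the bisection argument can even be run as a small program. The following Python sketch (an illustration only; the cover of [0, 1] below is an ad hoc choice) bisects [0, 1] until each piece lies strictly inside a single member of the cover, exactly as in the proof; compactness is what guarantees that the recursion terminates.

# Extract a finite subcover of K = [lo, hi] from a cover by open intervals,
# by the bisection scheme of the proof sketch above.
def finite_subcover(lo, hi, cover, depth=0):
    """Return finitely many members of `cover` whose union contains [lo, hi]."""
    for (a, b) in cover:
        if a < lo and hi < b:       # one open interval already suffices
            return [(a, b)]
    if depth > 50:                  # cannot happen for an actual open cover
        raise RuntimeError("the given intervals do not cover [lo, hi]")
    mid = (lo + hi) / 2             # otherwise bisect, as in the proof
    return (finite_subcover(lo, mid, cover, depth + 1)
            + finite_subcover(mid, hi, cover, depth + 1))

# The intervals (1/(k+2), 1/k) cover (0, 1); two more catch the endpoints.
cover = [(-0.1, 0.15), (0.9, 1.1)] + [(1 / (k + 2), 1 / k) for k in range(1, 200)]
print(len(set(finite_subcover(0.0, 1.0, cover))), "intervals suffice")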
The following lemma is similar to the difference magnification lemma
(Lemma 5.1.3). Its content is that although passing a box through a mapping
needn’t give another box, if the box is somewhat uniform in its dimensions
and if the mapping has bounded derivatives then the mapping takes the box
into a second box that isn’t too much bigger than the original.

Lemma 6.8.5 (Box-volume magnification lemma). Let B be a box in Rn


whose longest side is at most twice its shortest side. Let g be a differentiable
mapping from an open superset of B in Rn to Rn . Suppose that there is a
number c such that ∣Dj gi (x)∣ ≤ c for all i, j ∈ {1, . . . , n} and all x ∈ B. Then
g(B) sits in a box B ′ such that vol(B ′ ) ≤ (2nc)n vol(B).

Proof. Let x be the centerpoint of B and let x̃ be any point of B. Make the
line segment connecting x to x̃ the image of a function of one variable,

γ ∶ [0, 1] Ð→ Rn , γ(t) = x + t(x̃ − x).

Fix any i ∈ {1, . . . , n}. Identically to the proof of the difference magnification
lemma, we have for some t ∈ (0, 1),

gi (x̃) − gi (x) = ⟨gi′ (γ(t)), x̃ − x⟩.

For each j, the jth entry of the vector gi′ (γ(t)) is Dj gi (γ(t)), and we are
given that ∣Dj gi (γ(t))∣ ≤ c. Also, the jth entry of the vector x̃ − x satisfies
∣x̃j − xj ∣ ≤ ℓ/2, where ℓ is the longest side of B. Thus

∣gi (x̃) − gi (x)∣ ≤ ncℓ/2,

and so
gi (B) ⊂ [gi (x) − ncℓ/2, gi (x) + ncℓ/2].
Apply this argument for each i ∈ {1, . . . , n} to show that g(B) lies in the
box B ′ centered at g(x) having sides ncℓ and therefore having volume

vol(B ′ ) = (ncℓ)n .

On the other hand, since the shortest side of B is at least ℓ/2,

vol(B) ≥ (ℓ/2)n .

The result follows. ⊔


Using the previous two results, we can show that the property of having
volume zero is preserved under mappings that are well enough behaved. How-
ever, we need to assume more than just continuity. The property of having
volume zero is not a topological property.

Proposition 6.8.6 (Volume zero preservation under C 1 -mappings).


Let S ⊂ Rn be a compact set having volume zero. Let A be an open superset
of S, and let
Φ ∶ A Ð→ Rn
be a C 1 -mapping. Then Φ(S) again has volume zero.
Proof. For each s ∈ S there exists an rs > 0 such that the copy of the
box [−rs , rs ]n centered at s lies in A (Exercise 6.8.5(a)). Let Js denote the cor-
responding open box, i.e., a copy of (−rs , rs )n centered at s. By the finiteness
property of compact sets, a collection of finitely many of the open boxes Js
covers S, so certainly the corresponding collection U of the closed boxes does
so as well. As a finite union of compact sets, U is compact (Exercise 6.8.1(f)).
Therefore the partial derivatives Dj Φi for i, j = 1, . . . , n are uniformly contin-
uous on U , and so some constant c bounds all Dj Φi on U .
Let ε > 0 be given. Cover S by finitely many boxes Bi having total volume
less than ε/(2nc)n . After replacing each box by its intersections with the boxes
of U , we may assume that the boxes all lie in U . (Here it is relevant that the
intersection of two boxes is a box.) And after further subdividing the boxes if
necessary, we may assume that the longest side of each box is at most twice
the shortest side (Exercise 6.8.5(b)). By the box-volume magnification lemma,
the Φ-images of the boxes lie in a union of boxes Bi′ having volume

∑i vol(Bi′ ) ≤ (2nc)n ∑i vol(Bi ) < ε.



The last topological preliminary that we need is the formal definition of
interior.
Definition 6.8.7 (Interior point, interior of a set). Let S ⊂ Rn be a set.
Every nonboundary point of S is an interior point of S. Thus x is an interior
point of S if some open ball B(x, r) lies entirely in S. The interior of S is
S ○ = {interior points of S}.
The interior of every set S is open (Exercise 6.8.6(a)). Every set decom-
poses as the disjoint union of its interior points and its boundary points (Ex-
ercise 6.8.6(b)),
S = S ○ ∪ (S ∩ ∂S), S ○ ∩ ∂S = ∅.
As anticipated at the beginning of this section, we now can complete the
argument that the properties of the set K in the change of variable theorem
are preserved by the mapping Φ in the theorem.
Proposition 6.8.8. Let K ⊂ Rn be a compact and connected set having
boundary of volume zero. Let A be an open superset of K, and let Φ ∶ A Ð→ Rn
be a C 1 -mapping such that det Φ′ ≠ 0 everywhere on K ○ . Then Φ(K) is again
a compact and connected set having boundary of volume zero.

Proof. We have discussed the fact that Φ(K) is again compact and connected.
Restrict Φ to K. The inverse function theorem says that Φ maps interior points
of K to interior points of Φ(K), and thus ∂(Φ(K)) ⊂ Φ(∂K). By the volume-
zero preservation proposition, vol(Φ(∂K)) = 0. So vol(∂(Φ(K))) = 0 as well.

Exercises

6.8.1. (a) Show that every intersection—not just twofold intersections and
not even just finite-fold intersections—of closed sets is closed. (Recall from
Proposition 2.4.5 that a set S is closed if and only if every sequence in S that
converges in Rn in fact converges in S.)
(b) Show by example that a boundary point of a set need not be a limit
point of the set. Show by example that a limit point of a set need not be a
boundary point of the set.
(c) Show that a set is closed if and only if it contains each of its boundary
points. (Again recall the characterization of closed sets mentioned in part (a).)
(d) Show that the boundary of every set is closed.
(e) Show that every union of two closed sets is closed. It follows that every
union of finitely many closed sets is closed. Recall that by definition, a set is
open if its complement is closed. Explain why consequently every intersection
of finitely many open sets is open.
(f) Explain why every union of finitely many compact sets is compact.
6.8.2. Let S be any subset of Rn .
(a) Show that its closure is its union with its boundary, S = S ∪ ∂S.
(b) Show that if S is bounded then so is S.
6.8.3. (a) Which points of the proof of Proposition 6.8.4 are sketchy? Fill in
the details.
(b) Let S be an unbounded subset of Rn , meaning that S is not contained
in any ball. Find a collection of open boxes Ji that covers S but such that no
finite subcollection of the open boxes Ji covers S.
(c) Let S be a bounded but nonclosed subset of Rn , meaning that S is
bounded but missing a limit point. Find a collection of open boxes Ji that
covers S but such that no finite subcollection of the open boxes Ji covers S.
6.8.4. Let ε > 0. Consider the box B = [0, 1] × [0, ε] ⊂ R2 , and consider the
mapping g ∶ R2 Ð→ R2 given by g(x, y) = (x, x). What is the smallest box B ′
containing g(B)? What is the ratio vol(B ′ )/vol(B)? Discuss the relationship
between this example and Lemma 6.8.5.
6.8.5. The following questions are about the proof of Proposition 6.8.6.
(a) Explain why for each s ∈ S there exists an rs > 0 such that the copy of
the box [−rs , rs ]n centered at s lies in A.
(b) Explain why every box (with all sides assumed to be positive) can be
subdivided into boxes whose longest side is at most twice the shortest side.

6.8.6. Let S ⊂ Rn be any set.


(a) Show that the interior S ○ is open.
(b) Show that S decomposes as the disjoint union of its interior points and
its boundary points.

6.9 Proof of the Change of Variable Theorem


Again recall the statement of the change of variable theorem:
Let K ⊂ Rn be a compact and connected set having boundary of volume
zero. Let A be an open superset of K, and let Φ ∶ A Ð→ Rn be a C 1 -
mapping such that Φ is injective on K ○ and det Φ′ ≠ 0 on K ○ . Let
f ∶ Φ(K) Ð→ R be a continuous function. Then

∫Φ(K) f = ∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣.

We begin chipping away at the theorem by strengthening its hypotheses.

Proposition 6.9.1 (Optional hypothesis-strengthening). To prove the


change of variable theorem, it suffices to prove the theorem subject to any
combination of the following additional hypotheses:
• K is a box.
• Φ is injective on all of A.
• det Φ′ ≠ 0 on all of A.

Before proceeding to the proof of the proposition, it deserves comment


that we will not always want K to be a box. But once the proposition is
proved, we may take K to be a box or not as convenient.

Proof. Let ε > 0 be given.


Let B be a box containing K, and let P be a partition of B into subboxes J.
Define three types of subbox,

type I : J such that J ⊂ K ○ ,


type II : J such that J ∩ ∂K ≠ ∅ (and thus J ∩ ∂(B/K) ≠ ∅),
type III : J such that J ⊂ B − (K ∪ ∂K).

(In the left side of Figure 6.47, the type I subboxes are shaded and the type II
subboxes are white. There are no type III subboxes in the figure, but type III
subboxes play no role in the pending argument anyway.) The three types of
box are exclusive and exhaustive (Exercise 6.9.2(a)).
Also define a function


g ∶ B Ð→ R,    g(x) = ⎧ (f ○ Φ)(x) ⋅ ∣ det Φ′ (x)∣   if x ∈ K,
                      ⎩ 0                            if x ∉ K.


Figure 6.47. Type I and type II subboxes, image of the type I subboxes

The continuous function f is necessarily bounded on Φ(K), say by R. The


partial derivatives Dj Φi of the component functions of Φ are continuous on K,
and so the continuous function ∣ det Φ′ ∣ is bounded on the compact set K, say
by R̃. Thus RR̃ bounds g on B.
As in the proof of the volume zero preservation proposition (Proposi-
tion 6.8.6), we can cover the subset K of A by a collection U of finitely
many boxes that is again a subset of A, and so the continuous partial deriva-
tives Dj Φi of the component functions of Φ are bounded on the compact
set U , say by c. We may assume that the partition P is fine enough that all
subboxes J of type I and type II lie in U (Exercise 6.9.2(b)). And we may
assume that the longest side of each subbox J is at most twice the shortest
side. Recall that ε > 0 has been given. Because the boundary of K has volume
zero, we may further assume that the partition P is fine enough that

∑J∶type II vol(J) < min { ε/(R(2nc)n ), ε/(RR̃) }

(Exercise 6.9.2(c)).
Let
Φ(K)I = ⋃J:type I Φ(J),    Φ(K)II = Φ(K)/Φ(K)I .

(Thus Φ(K)I is shaded in the right side of Figure 6.47, while Φ(K)II is white.)
Then the integral on the left side of the equality in the change of variable
theorem decomposes into two parts,

∫Φ(K) f = ∫Φ(K)I f + ∫Φ(K)II f ,

and because Φ is injective on K ○ , the previous display can be rewritten as

∫Φ(K) f = ∑J : type I ∫Φ(J) f + ∫Φ(K)II f .    (6.12)

Also,
Φ(K)II ⊂ ⋃J : type II Φ(J),

so that
∣ ∫Φ(K)II f ∣ ≤ ∫Φ(K)II ∣f ∣ ≤ ∑J : type II ∫Φ(J) ∣f ∣.

By the box-volume magnification lemma (Lemma 6.8.5), for each box J of


type II, vol(Φ(J)) ≤ (2nc)n vol(J). Thus, by the bounds on f and on the sum
of the type II box-volumes, it follows that

∣ ∫Φ(K)II f ∣ < ε.

That is, the second term on the right side of (6.12) contributes as negligibly
as desired to the integral on the left side, which is the integral on the left side
of the change of variable theorem. In terms of Figure 6.47, the idea is that if
the boxes in the left half of the figure are refined until the sum of the white
box-areas is small enough then the integral of f over the corresponding small
white region in the right half of the figure becomes negligible.
Meanwhile, the integral on the right side of the equality in the change of
variable theorem also decomposes into two parts,

∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣ = ∑J : type I ∫J g + ∑J : type II ∫J g.    (6.13)

By the bounds on g and on the sum of the type II box-volumes,


∣ ∑J : type II ∫J g ∣ ≤ ∑J : type II ∫J ∣g∣ < ε.

That is, the second term on the right side of (6.13) contributes as negligibly
as desired to the integral on the left side, which is the integral on the right
side of the change of variable theorem. In terms of Figure 6.47, the idea is that
if the boxes in the left half of the figure are refined until the sum of the white
box-areas is small enough then the integral of (f ○ Φ) ⋅ ∣ det Φ′ ∣ over the white
boxes becomes negligible. That is, it suffices to prove the change of variable
theorem for boxes like the shaded boxes in the left half of the figure.
The type I subboxes J of the partition of the box B containing the orig-
inal K (which is not assumed to be a box) satisfy all of the additional hy-
potheses in the statement of the proposition: each J is a box, and we may
shrink the domain of Φ to the open superset K ○ of each J, where Φ is in-
jective and where det Φ′ ≠ 0. Thus, knowing the change of variable theorem
subject to any of the additional hypotheses says that the first terms on the
right sides of (6.12) and (6.13) are equal, making the integrals on the left sides
lie within ε of each other. Since ε is arbitrary, the integrals are in fact equal.
In sum, it suffices to prove the change of variable theorem assuming any of
the additional hypotheses, as desired. ⊔


Proposition 6.9.2 (Alternative optional hypothesis-strengthening).


To prove the change of variable theorem, it suffices to prove the theorem subject
to the following additional hypotheses:
• Φ(K) is a box (but now we may not assume that K is a box).
• Φ is injective on all of A.
• det Φ′ ≠ 0 on all of A.

Similarly to the remark after Proposition 6.9.1, we will not always want
the additional hypotheses.

Proof. With the previous proposition in play, the idea now is to run through its
proof in reverse, starting from the strengthened hypotheses that it grants us.
Thus we freely assume that K is a box, that the change of variable mapping Φ
is injective on all of A, and that det Φ′ ≠ 0 on all of A. By the inverse function
theorem, the superset Φ(A) of Φ(K) is open and Φ ∶ A Ð→ Φ(A) has a C 1
inverse
Φ−1 ∶ Φ(A) Ð→ A.
Let ε > 0 be given.
Let B be a box containing Φ(K), and let P be a partition of B into
subboxes J. Define three types of subbox,

type I : J such that J ⊂ Φ(K)○ ,


type II : J such that J ∩ ∂Φ(K) ≠ ∅ (and thus J ∩ ∂(B/Φ(K)) ≠ ∅),
type III : J such that J ⊂ B − (Φ(K) ∪ ∂Φ(K)).

These three types of box are exclusive and exhaustive. Also, define as before


g ∶ B Ð→ R,    g(x) = ⎧ (f ○ Φ)(x) ⋅ ∣ det Φ′ (x)∣   if x ∈ K,
                      ⎩ 0                            if x ∉ K.

Again, f is bounded on Φ(K), say by R, and ∣ det Φ′ ∣ is bounded on K, say


by R̃, so that RR̃ bounds g on B. (See Figure 6.48, in which the type I
subboxes cover nearly all of Φ(K) and their inverse images cover nearly all
of K.)
Cover the subset Φ(K) of Φ(A) by a collection U of finitely many boxes
that is again a subset of Φ(A). Then the continuous partial derivatives Dj (Φ−1 )i
of the component functions of Φ−1 are bounded on the compact set U , say
by c. We may assume that the partition P is fine enough that all subboxes J
of type I and type II lie in U . And we may assume that the longest side of each
subbox J is at most twice the shortest side. Recall that ε > 0 has been given.
Because the boundary of Φ(K) has volume zero, we may further assume that
the partition P is fine enough that

∑J∶type II vol(J) < min { ε/R, ε/(RR̃(2nc)n ) }.


Figure 6.48. Type I, II, and III subboxes, inverse image of the type I subboxes

Let
KI = ⋃J:type I Φ−1 (J),    KII = K/KI .

Then the integral on the left side of the equality in the change of variable
theorem decomposes into two parts,

∫Φ(K) f = ∑J : type I ∫J f + ∑J : type II ∫J f.    (6.14)

By the bounds on f and on the sum of the type II box-volumes,


∣ ∑J : type II ∫J f ∣ ≤ ∑J : type II ∫J ∣f ∣ < ε.

That is, the second term on the right side of (6.14) contributes as negligibly
as desired to the integral on the left side, which is the integral on the left side
of the change of variable theorem.
Meanwhile, the integral on the right side of the equality in the change of
variable theorem also decomposes into two parts,

∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣ = ∫KI g + ∫KII g,

and because Φ−1 is injective, the previous display can be rewritten as

∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣ = ∑J : type I ∫Φ−1 (J) g + ∫KII g.    (6.15)

Also,
KII ⊂ ⋃J : type II Φ−1 (J),

so that

∣ ∫KII g ∣ ≤ ∫KII ∣g∣ ≤ ∑J : type II ∫Φ−1 (J) ∣g∣.

For each box J of type II, vol(Φ−1 (J)) ≤ (2nc)n vol(J). Thus, by the bounds
on g and on the sum of the type II box-volumes, it follows that

∣ ∫KII g ∣ < ε.

That is, the second term on the right side of (6.15) contributes as negligibly
as desired to the integral on the left side, which is the integral on the right
side of the change of variable theorem.
The type I subboxes J of the partition of the box B containing the orig-
inal Φ(K) (which is not assumed to be a box) satisfy the new additional
hypothesis in the statement of the proposition. The other two additional hy-
potheses in the statement of the proposition are already assumed. Thus, know-
ing the change of variable theorem subject to the additional hypotheses says
that the first terms on the right sides of (6.14) and (6.15) are equal, making
the integrals on the left sides lie within ε of each other. Since ε is arbitrary, the
integrals are in fact equal. In sum, it suffices to prove the change of variable
theorem assuming the additional hypotheses, as desired. ⊔

Proposition 6.9.3 (Further optional hypothesis-strengthening). To


prove the change of variable theorem, it suffices to prove the theorem subject
to the additional hypothesis that f is identically 1.

As with the other hypothesis-strengthenings, we will not always want f to


be identically 1, but we may take it to be so when convenient.

Proof. We assume the strengthened hypotheses given us by Proposition 6.9.2.


Let P be a partition of the box Φ(K) into subboxes J. For each subbox J, view
the quantity MJ (f ) = sup {f (x) ∶ x ∈ J} both as a number and as a constant
function. Assume that the change of variable theorem holds for the constant
function 1 and therefore for every constant function, and compute

∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣ = ∑J ∫Φ−1 (J) (f ○ Φ) ⋅ ∣ det Φ′ ∣
                      ≤ ∑J ∫Φ−1 (J) (MJ (f ) ○ Φ) ⋅ ∣ det Φ′ ∣
                      = ∑J ∫J MJ (f )              by the assumption
                      = ∑J MJ (f ) vol(J)
                      = U (f, P ).

As a lower bound of the upper sums, ∫K (f ○Φ)⋅∣ det Φ′ ∣ is at most the integral,

∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣ ≤ ∫Φ(K) f.

A similar argument gives the opposite inequality, making the integrals equal
as desired. ⊔

The next result will allow the proof of the change of variable theorem to
decompose the change of variable mapping.

Proposition 6.9.4 (Persistence under composition). In the change of


variable theorem, suppose that the change of variable mapping is a composition

Φ=Γ ○Ψ

where the mappings


Ψ ∶ A Ð→ Rn
and
Γ ∶ Ã Ð→ Rn    (where Ã is an open superset of Ψ (K))
satisfy the hypotheses of the change of variable theorem. If

∫Ψ (K) g = ∫K (g ○ Ψ ) ⋅ ∣ det Ψ ′ ∣    for continuous functions g ∶ Ψ (K) Ð→ R

and
∫Γ (Ψ (K)) 1 = ∫Ψ (K) ∣ det Γ ′ ∣

then also
∫Φ(K) 1 = ∫K ∣ det Φ′ ∣.

Proof. The argument is a straightforward calculation using the definition of Φ,


the second given equality, the first given equality, the multiplicativity of the
determinant, the chain rule, and again the definition of Φ,

∫Φ(K) 1 = ∫Γ (Ψ (K)) 1 = ∫Ψ (K) ∣ det Γ ′ ∣
        = ∫K ∣ det(Γ ′ ○ Ψ )∣ ⋅ ∣ det Ψ ′ ∣ = ∫K ∣ det ((Γ ′ ○ Ψ ) ⋅ Ψ ′ )∣
        = ∫K ∣ det(Γ ○ Ψ )′ ∣ = ∫K ∣ det Φ′ ∣.


Proposition 6.9.5 (Linear change of variable). The change of variable


theorem holds for invertible linear mappings.

Proof. Let
T ∶ Rn Ð→ Rn

be an invertible linear mapping having matrix M . Thus T ′ (x) = M for all x.


Also, T is a composition of recombines, scales, and transpositions, and so
by the persistence of the change of variable theorem under composition, it
suffices to prove the theorem assuming that T is a recombine or a scale or a
transposition. In each case, Propositions 6.9.1 and 6.9.3 allow us to assume
that K is a box B and f = 1. Thus the desired result is simply

vol(T (B)) = ∣ det M ∣ ⋅ vol(B),

and we established this formula back in Section 3.8. ⊔
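
A quick numerical sanity check of the formula vol(T (B)) = ∣ det M ∣ ⋅ vol(B) may reassure the reader. The following Python sketch (an illustration only; the matrix M and the sample count are arbitrary choices) estimates the volume of T (B) for B = [0, 1]2 by Monte Carlo sampling of a bounding box.

import numpy as np

# Monte Carlo check of vol(T(B)) = |det M| vol(B) for T(x) = Mx, B = [0,1]^2.
rng = np.random.default_rng(0)
M = np.array([[2.0, 1.0],
              [0.5, 1.5]])          # an arbitrary invertible matrix
Minv = np.linalg.inv(M)

# A bounding box of T(B): the images of the corners determine its extent.
corners = np.array([[0, 0], [0, 1], [1, 0], [1, 1]]) @ M.T
lo, hi = corners.min(axis=0), corners.max(axis=0)

# Sample the bounding box; a sample lies in T(B) iff its preimage lies in B.
samples = lo + (hi - lo) * rng.random((200_000, 2))
pre = samples @ Minv.T
hit_fraction = np.mean(np.all((0 <= pre) & (pre <= 1), axis=1))
print(hit_fraction * np.prod(hi - lo), abs(np.linalg.det(M)))  # both near 2.5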



The change of variable theorem is proved partially by induction on the
dimension n.
Proposition 6.9.6 (Base case for the induction). The change of variable
theorem holds if n = 1.
Proof. Because n = 1, K is an interval [a, b] ⊂ R where a ≤ b. Here is where
we use the hypothesis that K is connected. Since we have not studied con-
nected sets closely, the reader is being asked to take for granted that every
compact and connected subset of R is a closed and bounded interval. (Or see
Exercise 6.9.1 for a proof that every compact and path-connected subset of R
is a closed and bounded interval.) The continuous function

Φ′ ∶ [a, b] Ð→ R

can take the value 0 only at a and b. Thus by the intermediate value theorem,
Φ′ never changes sign on [a, b]. If Φ′ ≥ 0 on [a, b] then Φ is increasing, and so
(using Theorem 6.4.3 for the second equality)

∫Φ([a,b]) f = ∫_{Φ(a)}^{Φ(b)} f = ∫_a^b (f ○ Φ) ⋅ Φ′ = ∫[a,b] (f ○ Φ) ⋅ ∣Φ′ ∣.

If Φ′ ≤ 0 on [a, b] then Φ is decreasing, and so

∫Φ([a,b]) f = ∫_{Φ(b)}^{Φ(a)} f = −∫_{Φ(a)}^{Φ(b)} f = −∫_a^b (f ○ Φ) ⋅ Φ′ = ∫[a,b] (f ○ Φ) ⋅ ∣Φ′ ∣.

Thus in either case the desired result holds. ⊔
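
The base case is easy to check numerically. The following Python sketch (an illustration only; Φ and f are arbitrary choices) uses a decreasing change of variable, and shows concretely that the absolute value on Φ′ is what makes the two sides agree.

import numpy as np

# Numerical check of the n = 1 case with a decreasing change of variable:
# Phi(u) = 1 - u**2 on K = [0, 1], so Phi(K) = [0, 1], Phi is injective on
# the interior, and Phi'(u) = -2u vanishes only at the boundary point u = 0.
f    = lambda x: np.cos(3 * x) + 2          # an arbitrary continuous integrand
Phi  = lambda u: 1 - u**2
dPhi = lambda u: -2 * u

n = 1_000_000
u = (np.arange(n) + 0.5) / n                # midpoints of [0, 1]
lhs = np.sum(f(u)) / n                      # integral of f over Phi(K) = [0, 1]
rhs = np.sum(f(Phi(u)) * np.abs(dPhi(u))) / n
print(lhs, rhs)   # agree; without the absolute value the sign would flip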



Proposition 6.9.7 (Bootstrap induction step). For every n > 1, if the
change of variable theorem holds in dimension n − 1 then it holds in dimen-
sion n subject to the additional hypothesis that the transformation Φ fixes at
least one coordinate.
A 3-dimensional transformation Φ that fixes the third coordinate is shown
in Figure 6.49. The figure makes the proof of Proposition 6.9.7 inevitable:
the desired result holds for each slice because we are assuming the change of
variable theorem in dimension n − 1, and so Fubini’s theorem gives the result
for the entire figure.

Figure 6.49. A transformation that fixes the third coordinate

Proof. Propositions 6.9.1 and 6.9.3 allow us to assume that K is a box B,


that Φ is injective on B, that det Φ′ ≠ 0 on B, and that f = 1. Also, we may
assume that the coordinate fixed by Φ is the last coordinate. There exist a
box Bn−1 ⊂ Rn−1 and an interval I = [a, b] ⊂ R such that
B = ⋃t∈I Bn−1 × {t}.

By assumption, Φ is a C 1 -mapping on an open superset A of B. For each t ∈ I

let At denote the cross section of A with last coordinate t,


At = {x ∈ Rn−1 ∶ (x, t) ∈ A}.
Then At is an open superset of Bn−1 in Rn−1 . For each t ∈ I define a mapping
Ψt ∶ At Ð→ Rn−1 , Ψt (x) = (Φ1 (x, t), . . . , Φn−1 (x, t)).
Each Ψt is a C 1 -mapping on an open superset of Bn−1 , and

Φ(B) = ⋃t∈I Ψt (Bn−1 ) × {t}.

Since Φ is injective on B and det Φ′ ≠ 0 on B, it follows that each Ψt is injective


on Bn−1 , and the formula
∣ det Ψt′ (x)∣ = ∣ det Φ′ (x, t)∣, (x, t) ∈ B (6.16)
(Exercise 6.9.3) shows that det Ψt′ ≠ 0 on Bn−1 . Thus for each t, the set Bn−1
and the transformation Ψt satisfy the change of variable theorem hypotheses
in dimension n − 1. Compute, using Fubini’s theorem, quoting the change of
variable theorem in dimension n−1, and citing formula (6.16) and again using
Fubini’s theorem, that

∫Φ(B) 1 = ∫t∈I ∫Ψt (Bn−1 ) 1 = ∫t∈I ∫Bn−1 ∣ det Ψt′ ∣ = ∫B ∣ det Φ′ ∣.




At long last we can prove the change of variable theorem for n > 1.
Proof. We may assume the result for dimension n − 1, and we may assume
that K is a box B, that A is an open superset of B, and that Φ ∶ A Ð→ Rn is
a C 1 -mapping such that Φ is injective on A and det Φ′ ≠ 0 on A. We need to
show that
∫Φ(B) 1 = ∫B ∣ det Φ′ ∣.    (6.17)

To prove the theorem, we will partition B into subboxes J, each J having an


open superset AJ on which Φ is a composition

Φ = T ○ Γ ○ Ψ,

where Ψ and Γ are C 1 -mappings that fix at least one coordinate and T is a
linear transformation. Note that Ψ , Γ , and T inherit injectivity and nonzero
determinant-derivatives from Φ, so that in particular, T is invertible. Since
the theorem holds for each of Ψ , Γ , and T , it holds for their composition. In
more detail,

∫T (Γ (Ψ (J))) 1 = ∫Γ (Ψ (J)) ∣ det T ′ ∣                       by Proposition 6.9.5
               = ∫Ψ (J) ∣ det(T ′ ○ Γ )∣ ∣ det Γ ′ ∣            by Proposition 6.9.7
               = ∫Ψ (J) ∣ det(T ○ Γ )′ ∣                        by the chain rule
               = ∫J ∣ det ((T ○ Γ )′ ○ Ψ )∣ ∣ det Ψ ′ ∣         by Proposition 6.9.7
               = ∫J ∣ det(T ○ Γ ○ Ψ )′ ∣                        by the chain rule.

That is, for each J,


∫Φ(J) 1 = ∫J ∣ det Φ′ ∣,

and so summing over all subboxes J finally gives (6.17).


To obtain the subboxes J, proceed as follows for each point x ∈ B. Let

T = DΦx

and define
Φ̃ = T −1 ○ Φ,
so that DΦ̃x = idn is the n-dimensional identity map. Introduce the nth pro-
jection function, πn (x1 , . . . , xn ) = xn , and further define

Ψ ∶ A Ð→ Rn ,    Ψ = (Φ̃1 , . . . , Φ̃n−1 , πn ),

so that DΨx = idn as well. By the inverse function theorem, Ψ is locally


invertible. Let Jx be a subbox of B containing x having an open superset Ax
such that Ψ −1 exists on Ψ (Ax ). Now define

Γ ∶ Ψ (Ax ) Ð→ Rn ,    Γ = (π1 , . . . , πn−1 , Φ̃n ○ Ψ −1 ).

Then Γ ○Ψ = Φ̃ = T −1 ○Φ on Ax , so that T ○Γ ○Ψ = Φ on Ax , and thus Ψ , Γ , and T


have the desired properties. (Figure 6.50 illustrates the decomposition for the
polar coordinate mapping. In the figure, Ψ changes only the first coordinate,
Γ changes only the second, and then the linear mapping T completes the
polar coordinate change of variable.)


Figure 6.50. Local decomposition of the polar coordinate mapping

Cover B by the collection of open interiors of the boxes Jx . By the finite-


ness property of B, some finite collection of the interiors covers B, and so
certainly the corresponding finite collection of the boxes Jx themselves cov-
ers B. Partition B into subboxes J such that each J lies in one of the finitely
many Jx , and the process is complete. ⊔

In contrast to all of this, recall the much easier proof of the one-dimensional
change of variable theorem, using the construction of an antiderivative by in-
tegrating up to a variable endpoint (Theorem 6.4.1, sometimes called the first
fundamental theorem of integral calculus) and using the (second) fundamental
theorem of integral calculus twice,

∫_a^b (f ○ φ) ⋅ φ′ = ∫_a^b (F ′ ○ φ) ⋅ φ′    where F (x) = ∫_a^x f , so F ′ = f
                  = ∫_a^b (F ○ φ)′           by the chain rule
                  = (F ○ φ)(b) − (F ○ φ)(a)  by the FTIC
                  = F (φ(b)) − F (φ(a))      by definition of composition
                  = ∫_{φ(a)}^{φ(b)} F ′      by the FTIC again
                  = ∫_{φ(a)}^{φ(b)} f        since F ′ = f .

We see that the one-variable fundamental theorem of integral calculus is doing


the bulk of the work. Chapter 9 will give us a multivariable fundamental
theorem, after which we can sketch a proof of the multivariable change of
variable theorem in the spirit of the one-variable argument just given. A fully

realized version of that proof still has to handle topological issues, but even
so it is more efficient than the long, elementary method of this section.

Exercises

6.9.1. Let K be a nonempty compact subset of R. Explain why the quantities


a = min{x ∶ x ∈ K} and b = max{x ∶ x ∈ K} exist. Now further assume that K
is path-connected, so that in particular there is a continuous function

γ ∶ [0, 1] Ð→ R

such that γ(0) = a and γ(1) = b. Explain why consequently K = [a, b].

6.9.2. (a) Explain to yourself why the three types of rectangle in the proof
of Proposition 6.9.1 are exclusive. Now suppose that the three types are not
exhaustive, i.e., some rectangle J lies partly in K ○ and partly in (B/K)○
without meeting the set ∂K = ∂(B/K). Supply details as necessary for the
following argument. Let x ∈ J lie in K ○ and let x̃ ∈ J lie in (B/K)○ . Define
a function from the unit interval to R by mapping the interval to the line
segment from x to x̃, and then mapping each point of the segment to 1 if
it lies in K and to −1 if it lies in B/K. The resulting function is continuous
on the interval, and it changes sign on the interval, but it does not take the
value 0. This is impossible, so the rectangle J cannot exist.
(b) In the proof of Proposition 6.9.1, show that we may assume that the
partition P is fine enough that all subboxes J of type I and type II lie in U .
(c) In the proof of Proposition 6.9.1, show that given ε > 0, we may assume
that the partition P is fine enough that

∑J∶type II vol(J) < min { ε/(R(2nc)n ), ε/(RR̃) }.

6.9.3. In the proof of Proposition 6.9.7, establish formula (6.16).

6.9.4. Here is a sketched variant of the endgame of the change of variable


proof: A slightly easier variant of Proposition 6.9.7 assumes that the transfor-
mation Φ changes at most one coordinate, and then the process of factoring Φ
locally as a composition can be iterated until each factor is either linear or
changes at most one coordinate. Fill in the details.
7
Approximation by Smooth Functions

Let k be a nonnegative integer. Recall that a C k -function on Rn is a function


all of whose partial derivatives up through order k exist and are continuous.
That is, to say that a function

f ∶ Rn Ð→ R

is C k is to say that f , and Dj f for j = 1, . . . , n, and Djj ′ f for j, j ′ = 1, . . . , n,


and so on up to all Dj1 ⋯jk f exist and are continuous on Rn . Various ideas
that we have discussed so far have required different values of k:
• If f is C 1 then f is differentiable in the multivariable sense of derivative
(Theorem 4.5.3).
• If f is C 2 then its mixed second-order derivatives D12 f and D21 f are equal
(Theorem 4.6.1).
• The multivariable max/min test (Proposition 4.7.8) assumes a C 2 -function.
• The inverse function theorem says that if A ⊂ Rn is open and f ∶ A Ð→ Rn
is componentwise C 1 and its derivative Dfa is invertible at a point a then
f is locally invertible about a, and the local inverse is again C 1 (Theo-
rem 5.2.1).
• A C 0 -mapping from the unit interval can fill the square, but a C 1 -mapping
cannot.
• If f (again scalar-valued now) is C 0 then it is integrable over every compact
set having boundary of volume zero (Section 6.5).
• In the change of variable formula ∫Φ(K) f = ∫K (f ○ Φ) ⋅ ∣ det Φ′ ∣ for multiple
integrals (Theorem 6.7.1), the change of variable mapping Φ is assumed
to be C 1 , and for now the integrand f is assumed to be C 0 . We will return
to this example at the very end of this chapter.
Meanwhile, a smooth function is a function on Rn all of whose partial deriva-
tives of all orders exist. Smooth functions are also called C ∞ -functions, an
appropriate notation because the derivatives of each order are continuous
in consequence of the derivatives of one-higher order existing. This chapter


briefly touches on the fact that for functions that vanish off a compact set,
C 0 -functions and C 1 -functions and C 2 -functions are well approximated by C ∞ -
functions.
The approximation technology is an integral called the convolution. The
idea is as follows. Suppose that we had a function

δ ∶ Rn Ð→ R

with the following properties:


(1) δ(x) = 0 for all x ≠ 0,
(2) ∫x∈Rn δ(x) = 1.
So conceptually the graph of δ is an infinitely high, infinitely narrow spike
above 0 having total volume 1. No such function exists, at least not in the
usual sense of function. (The function δ, known as the Dirac delta function,
is an example of a distribution, distributions being objects that generalize
functions.) Nonetheless, if δ were sensible then for every function f ∶ Rn Ð→ R
and for every x ∈ Rn , we would have in consequence that the graph of the
product of f and the x-translate of δ is an infinitely high, infinitely narrow
spike above x having total volume f (x),
(1) f (y)δ(x − y) = 0 for all y ≠ x,
(2) ∫y∈Rn f (y)δ(x − y) = f (x).
That is, granting the Dirac delta function, every function f can be expressed as
an integral. The idea motivating convolution is that if we replace the idealized
delta function by a smooth pulse ϕ, tall but finitely high, narrow but positively
wide, and having total volume 1, and if f is well enough behaved (e.g., f is
continuous and vanishes off a compact set) then we should still recover a close
approximation of f from the resulting integral,

∫y∈Rn f (y)ϕ(x − y) ≈ f (x).

The approximating integral on the left side of the previous display is the
convolution of f and ϕ evaluated at x. Although f is assumed only to be con-
tinuous, the convolution is smooth. Indeed, every xi -derivative passes through
the y-integral and ϕ is smooth, so that

(∂/∂xi ) ∫y f (y)ϕ(x − y) = ∫y f (y) (∂ϕ/∂xi )(x − y),
and similarly for higher derivatives.
One can see convolution in action visually by comparing graphs of con-
volutions to the graph of the original function. And the conceptual frame-
work for establishing the properties of convolution analytically is not difficult.
Having discussed approximation by convolution, we will freely assume in the
remaining chapters of these notes that our functions are C ∞ , i.e., that they
are smooth.

7.1 Spaces of Functions


To begin, we quantify the phrase functions that vanish off a compact set from
the chapter introduction.

Definition 7.1.1 (Support). Consider a function

f ∶ Rn Ð→ R.

The support of f is the closure of the set of its inputs that produce nonzero
outputs,
supp(f ) = {x ∈ Rn ∶ f (x) ≠ 0}.
The function f is compactly supported if its support is compact. The class
of compactly supported C k -functions is denoted Cck (Rn ). Especially, Cc0 (Rn )
denotes the class of compactly supported continuous functions.

Each class Cck (Rn ) of functions forms a vector space over R (Exercise 7.1.1).
Figure 7.1 shows a compactly supported C 0 -function on R and its support.
The graph has some corners, so the function is not C 1 .

Figure 7.1. Compactly supported continuous function on R and its support

The spaces of compactly supported functions shrink as their member-


functions are required to have more derivatives,

Cc0 (Rn ) ⊃ Cc1 (Rn ) ⊃ Cc2 (Rn ) ⊃ ⋯,

and we will see that all of the containments are proper.

Definition 7.1.2 (Test function). A test function is a compactly sup-


ported smooth function. The class of test functions is denoted Cc∞ (Rn ).

The class of test functions sits at the end of the chain of containments of
function-spaces from a moment ago,

Cc∞ (Rn ) = ⋂k≥0 Cck (Rn ),

and as an intersection of vector spaces over R, the test functions Cc∞ (Rn )
again form a vector space over R. In the chain of containments

Cc0 (Rn ) ⊃ Cc1 (Rn ) ⊃ Cc2 (Rn ) ⊃ ⋯ ⊃ Cc∞ (Rn ),

all of the containments are proper. Indeed, for a vivid example of the first
containment, Weierstrass showed how to construct a function f of one variable,
having support [0, 1], that is continuous everywhere but differentiable nowhere
on its support. The function of n variables

f0 (x1 , x2 , . . . , xn ) = f (∣(x1 , x2 , . . . , xn )∣)

thus lies in Cc0 (Rn ) but not in Cc1 (Rn ). Next, the function

f1 (x1 , x2 , . . . , xn ) = ∫_{t1 =0}^{x1 } f0 (t1 , x2 , . . . , xn )

lies in Cc1 (Rn ) but not Cc2 (Rn ), because its first partial derivative is f0 , which
does not have a first partial derivative. Defining f2 as a similar integral of f1
gives a function that lies in Cc2 (Rn ) but not Cc3 (Rn ), and so on. Finally, none
of the functions fk just described lies in Cc∞ (Rn ).
For every k > 0 and every f ∈ Cck (Rn ), the supports of the partial deriva-
tives are contained in the support of the original function,

supp(Dj f ) ⊂ supp(f ), j = 1, . . . , n.

Thus the partial derivative operators Dj take Cck (Rn ) to Cck−1 (Rn ) as sets.
The operators are linear because

Dj (f + f̃ ) = Dj f + Dj f̃ ,    f, f̃ ∈ Cck (Rn )

and
Dj (cf ) = c Dj f, f ∈ Cck (Rn ), c ∈ R.
In addition, more can be said about the Dj operators. Each space Cck (Rn ) of
functions carries an absolute value function having properties similar to the
absolute value on Euclidean space Rn . With these absolute values in place,
the partial differentiation operators are continuous.

Definition 7.1.3 (Cck (Rn ) absolute value). The absolute value function
on Cc0 (Rn ) is

∣ ∣ ∶ Cc0 (Rn ) Ð→ R, ∣f ∣ = sup{∣f (x)∣ ∶ x ∈ Rn }.

Let k be a nonnegative integer. The absolute value function on Cck (Rn ) is



∣ ∣k ∶ Cck (Rn ) Ð→ R

given by

∣f ∣k = max { ∣f ∣,
             ∣Dj f ∣          for j = 1, . . . , n,
             ∣Djj ′ f ∣       for j, j ′ = 1, . . . , n,
             ⋮
             ∣Dj1 ⋯jk f ∣     for j1 , . . . , jk = 1, . . . , n }.

That is, ∣f ∣k is the largest absolute value of f or of any derivative of f up to
order k. In particular, ∣ ∣0 = ∣ ∣.

The largest absolute values mentioned in the definition exist by the ex-
treme value theorem, because the relevant partial derivatives are compactly
supported and continuous. By contrast, we have not defined an absolute value
on the space of test functions Cc∞ (Rn ), because the obvious attempt to extend
Definition 7.1.3 to test functions would involve the maximum of an infinite
set, a maximum that certainly need not exist.
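
A small computation makes Definition 7.1.3 concrete. The following Python sketch (an illustration only) approximates ∣f ∣0 and ∣f ∣1 on a grid for f (x) = (1 − x2 )2 on [−1, 1] extended by zero, a function that lies in Cc1 (R) but not in Cc2 (R) because f ′′ jumps at x = ±1.

import numpy as np

# Grid approximations of the absolute values |f| and |f|_1 for the
# compactly supported C^1 function f(x) = (1 - x**2)**2 on [-1, 1].
x   = np.linspace(-1.0, 1.0, 200_001)
fx  = (1 - x**2) ** 2
dfx = -4 * x * (1 - x**2)          # Df, also supported on [-1, 1]

abs0 = np.max(np.abs(fx))          # |f|, attained at x = 0
abs1 = max(abs0, np.max(np.abs(dfx)))
print(abs0, abs1)                  # 1.0 and about 1.54, so |f|_0 <= |f|_1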

Proposition 7.1.4 (Cck (Rn ) absolute value properties).


(A1) Absolute value is positive: ∣f ∣k ≥ 0 for all f ∈ Cck (Rn ), and ∣f ∣k = 0 if and
only if f is the zero function.
(A2) Scaling property: ∣cf ∣k = ∣c∣ ∣f ∣k for all c ∈ R and f ∈ Cck (Rn ).
(A3) Triangle inequality: ∣f + g∣k ≤ ∣f ∣k + ∣g∣k for all f, g ∈ Cck (Rn ).

Proof. The first two properties are straightforward to check. For the third
property, note that for every f, g ∈ Cc0 (Rn ) and every x ∈ Rn ,

∣(f + g)(x)∣ ≤ ∣f (x)∣ + ∣g(x)∣ ≤ ∣f ∣ + ∣g∣.

Thus ∣f ∣ + ∣g∣ is an upper bound of all values ∣(f + g)(x)∣, so that

∣f + g∣ ≤ ∣f ∣ + ∣g∣.

That is, ∣f + g∣0 ≤ ∣f ∣0 + ∣g∣0 . If f, g ∈ Cc1 (Rn ) then the same argument shows
that also ∣Dj (f + g)∣ ≤ ∣Dj f ∣ + ∣Dj g∣ for j = 1, . . . , n, so that

∣f + g∣1 = max { ∣f + g∣, ∣Dj f + Dj g∣ for j = 1, . . . , n }
        ≤ max { ∣f ∣ + ∣g∣, ∣Dj f ∣ + ∣Dj g∣ for j = 1, . . . , n }
        ≤ max { ∣f ∣, ∣Dj f ∣ for j = 1, . . . , n } + max { ∣g∣, ∣Dj g∣ for j = 1, . . . , n }
        = ∣f ∣1 + ∣g∣1 .

(For the second inequality, note for example that


max(∣f ∣ + ∣g∣) = (∣f ∣ + ∣g∣)(x̃) for some x̃
= ∣f ∣(x̃) + ∣g∣(x̃) ≤ max ∣f ∣ + max ∣g∣,
and similarly for each partial derivative.) The proof that ∣f + g∣k ≤ ∣f ∣k + ∣g∣k
for higher values of k is more of the same. ⊔

Now we can verify the anticipated continuity of the linear operators Dj
from Cck (Rn ) to Cck−1 (Rn ).
Proposition 7.1.5 (Continuity of differentiation). For every k ≥ 1, the
partial differentiation mappings
Dj ∶ Cck (Rn ) Ð→ Cck−1 (Rn ), j = 1, . . . , n
are continuous.
Proof. Consider any function f ∈ Cck (Rn ) and any sequence {fm } in Cck (Rn ).
Suppose that
limm ∣fm − f ∣k = 0.
Then
limm ∣fm − f ∣ = 0,
limm ∣Dj fm − Dj f ∣ = 0          for j = 1, . . . , n,
limm ∣Djj ′ fm − Djj ′ f ∣ = 0    for j, j ′ = 1, . . . , n,
⋮
limm ∣Dj1 j2 ...jk fm − Dj1 j2 ...jk f ∣ = 0    for j1 , j2 , . . . , jk = 1, . . . , n.

Fix any j ∈ {1, . . . , n}. As a subset of the information in the previous display,
limm ∣Dj fm − Dj f ∣ = 0,
limm ∣Djj ′ fm − Djj ′ f ∣ = 0    for j ′ = 1, . . . , n,
⋮
limm ∣Djj2 ...jk fm − Djj2 ...jk f ∣ = 0    for j2 , . . . , jk = 1, . . . , n.

That is,
limm ∣Dj fm − Dj f ∣k−1 = 0.
The implication that we have just proved,
limm ∣fm − f ∣k = 0 Ô⇒ limm ∣Dj fm − Dj f ∣k−1 = 0,

is exactly the assertion that Dj ∶ Cck (Rn ) Ð→ Cck−1 (Rn ) is continuous, and the
proof is complete. ⊔


Again let k ≥ 1. The fact that ∣f ∣k−1 ≤ ∣f ∣k for every f ∈ Cck (Rn ) (Ex-
ercise 7.1.2) shows that for every f ∈ Cck (Rn ) and every sequence {fm }
in Cck (Rn ), if limm ∣fm − f ∣k = 0 then limm ∣fm − f ∣k−1 = 0. That is, the in-
clusion mapping
i ∶ Cck (Rn ) Ð→ Cck−1 (Rn ), i(f ) = f
is continuous.
The space Cc∞ (Rn ) of test functions is closed under partial differentiation,
meaning that the partial derivatives of a test function are again test functions
(Exercise 7.1.3).
In this chapter we will show that just as every real number x ∈ R is ap-
proximated as closely as desired by rational numbers q ∈ Q, every compactly
supported continuous function f ∈ Cck (Rn ) is approximated as closely as de-
sired by test functions g ∈ Cc∞ (Rn ). More precisely, we will show that:
For every f ∈ Cck (Rn ), there exists a sequence {fm } in Cc∞ (Rn ) such
that limm ∣fm − f ∣k = 0.
The fact that limm ∣fm − f ∣k = 0 means that given any ε > 0, there exists a
starting index m0 such that fm for all m ≥ m0 uniformly approximates f to
within ε up to kth order. That is, for all m ≥ m0 , simultaneously for all x ∈ Rn ,

∣fm (x) − f (x)∣ < ε,


∣Dj fm (x) − Dj f (x)∣ < ε for j = 1, . . . , n,
∣Djj ′ fm (x) − Djj ′ f (x)∣ < ε for j, j ′ = 1, . . . , n,

∣Dj1 ...jk fm (x) − Dj1 ...jk f (x)∣ < ε for j1 , . . . , jk = 1, . . . , n.

The use of uniform here to connote that a condition holds simultaneously


over a set of values is similar to its use in uniform continuity.

Exercises

7.1.1. Show that each class Cck (Rn ) of functions forms a vector space over R.

7.1.2. Verify that ∣f ∣k−1 ≤ ∣f ∣k for every f ∈ Cck (Rn ).

7.1.3. Explain why each partial derivative of a test function is again a test
function.

7.1.4. Let {fn } be a sequence of functions in Cc0 (Rn ), and suppose that the
sequence converges, meaning that there exists a function f ∶ Rn Ð→ R such
that limn fn (x) = f (x) for all x ∈ Rn . Must f have compact support? Must f
be continuous?

7.2 Pulse Functions


A pulse function is a useful type of test function. To construct pulse functions,
first consider the function


s ∶ R Ð→ R,    s(x) = ⎧ 0        if x ≤ 0,
                      ⎩ e−1/x    if x > 0.

(See Figure 7.2.) Each x < 0 lies in an open interval on which s is the constant
function 0, and each x > 0 lies in an open interval on which s is a composi-
tion of smooth functions, so in either case all derivatives s(k) (x) exist. More
specifically, for every nonnegative integer k, there exists a polynomial pk (x)
such that the kth derivative of s takes the form

s(k) (x) = ⎧ 0                    if x < 0,
           ⎨ pk (x)x−2k e−1/x     if x > 0,
           ⎩ ?                    if x = 0.

Only s(k) (0) is in question. However, s(0) (0) = 0, and if we assume that
s(k) (0) = 0 for some k ≥ 0 then it follows (because exponential behavior
dominates polynomial behavior) that

limh→0+ (s(k) (h) − s(k) (0))/h = limh→0+ pk (h)h−2k−1 e−1/h = 0.

That is, s(k+1) (0) exists and equals 0 as well. By induction, s(k) (0) = 0 for
all k ≥ 0. Thus s is smooth: each derivative exists, and each derivative is
continuous because the next derivative exists as well. But s is not a test
function, because its support is not compact: supp(s) = [0, ∞).

Figure 7.2. Smooth function



Now the pulse function is defined in terms of the smooth function,

p ∶ R Ð→ R,    p(x) = s(x + 1)s(−x + 1) / ∫_{x=−1}^{1} s(x + 1)s(−x + 1) .

The graph of p (Figure 7.3) explains the name pulse function. As a product
of compositions of smooth functions, p is smooth. The support of p is [−1, 1],
so p is a test function. Also, p is normalized so that

∫[−1,1] p = 1.

The maximum pulse value p(0) is therefore close to 1 because the pulse graph
is roughly a triangle of base 2, but p(0) is not exactly 1. The pulse function
p2 (x, y) = p(x)p(y) from R2 to R, having support [−1, 1]2 , is shown in Fig-
ure 7.4. A similar pulse function p3 on R3 can be imagined as a concentration
of density in a box about the origin.

Figure 7.3. Pulse function
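
The construction of p is short enough to carry out on a computer. The following Python sketch (an illustration only; the grid size is an arbitrary choice) builds s and p directly, confirms the normalization ∫[−1,1] p = 1, and evaluates the maximum value p(0).

import numpy as np

# Direct numerical construction of the smooth function s and the pulse p.
def s(x):
    # np.where evaluates both branches, so guard the reciprocal's argument.
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, np.exp(-1 / np.where(x > 0, x, 1.0)), 0.0)

def bump(x):
    return s(x + 1) * s(-x + 1)              # smooth, with support [-1, 1]

m = 100_000
t = -1 + 2 * (np.arange(m) + 0.5) / m        # midpoints of [-1, 1]
c = np.sum(bump(t)) * (2 / m)                # the normalizing integral

def p(x):
    return bump(x) / c

print(np.sum(p(t)) * (2 / m))   # equals 1 up to quadrature error
print(float(p(0.0)))            # close to 1 but, as remarked above, not exactly 1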

Exercises

7.2.1. Since the function s in this section is smooth, it has nth-degree Taylor
polynomials Tn (x) at a = 0 for all nonnegative integers n. (Here n does not
denote the dimension of Euclidean space.) For what x does s(x) = Tn (x)?

7.2.2. Let p be the pulse function defined in this section. Explain why
supp(p) = [−1, 1].

7.2.3. Let p ∶ R Ð→ R be the one-dimensional pulse function from this section.


(a) Graph the function q(x) = p(2a − b + x(b − a)), where a < b.

Figure 7.4. Two-dimensional pulse function

(b) Graph the function r(x) = ∫_{t=−1}^{x} p(t).

(c) Use the function r from part (b) to give a formula for a test function
that is 0 for x < a, climbs from 0 to 1 for a ≤ x ≤ b, is 1 for b < x < c, drops
from 1 to 0 for c ≤ x ≤ d, and is 0 for d < x.

7.3 Convolution
This section shows how to construct test functions from Cc0 (Rn )-functions. In
preparation, we introduce a handy piece of notation.
Definition 7.3.1 (Sum, difference of two sets). Let S and T be subsets
of Rn . Their sum is the set consisting of all sums of a point of S plus a point
of T ,
S + T = {s + t ∶ s ∈ S, t ∈ T }.
Their difference is similarly

S − T = {s − t ∶ s ∈ S, t ∈ T }.

Visually, S + T can be imagined as many copies of T , one based at each


point of S, or vice versa. For example, if K is a three-dimensional box and B
is a small ball about 03 then K + B is slightly larger than K, again shaped
like a box except that the edges and corners are rounded. Similarly, {0} − T is
the reflection of T through the origin. The sum or difference of two compact
sets is compact (Exercise 7.3.1(a)). The sum of the open balls B(a, r)
and B(b, s) is B(a + b, r + s) (Exercise 7.3.1(b)). The reader is alerted that
the set difference here is different from another, more common notion of set
difference, that being the elements of one set that are not elements of another,

S/T = {s ∈ S ∶ s ∉ T }.
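
For finite sets the sum of Definition 7.3.1 is immediate to compute; a minimal Python sketch, with arbitrary sample sets S and T :

# The sum of two finite subsets of R^2, per Definition 7.3.1.
S = {(0, 0), (1, 0), (0, 1)}
T = {(0, 0), (5, 5)}
S_plus_T = {(s1 + t1, s2 + t2) for (s1, s2) in S for (t1, t2) in T}
print(sorted(S_plus_T))  # a copy of S at the origin and a copy translated by (5, 5)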

Returning to Cc0 (Rn )-functions, every such function can be integrated over
all of Rn .
Definition 7.3.2 (Integral of a Cc0 (Rn )-function). Let f ∈ Cc0 (Rn ). The
integral of f is the integral of f over any box that contains its support,

∫ f = ∫B f    where supp(f ) ⊂ B.

In Definition 7.3.2 the integral on the right side exists by Theorem 6.3.1.
Also, the integral on the right side is independent of the suitable box B, always
being the integral over the intersection of all such boxes, the smallest suitable
box. Thus the integral on the left side exists and is unambiguous. We do not
bother writing ∫Rn f rather than ∫ f , because it is understood that by default
we are integrating f over Rn .
Definition 7.3.3 (Mollifying kernel). Let f ∈ Cc0 (Rn ) be a compactly sup-
ported continuous function, and let ϕ ∈ Cc∞ (Rn ) be a test function. The mol-
lifying kernel associated to f and ϕ is the function
κ ∶ Rn × Rn Ð→ R, κ(x, y) = f (y)ϕ(x − y).
For every fixed x ∈ Rn , the corresponding cross section of the mollifying kernel
is denoted κx ,
κx ∶ Rn Ð→ R, κx (y) = κ(x, y).
For each x ∈ Rn , the mollifying kernel κx (y) can be nonzero only if y ∈
supp(f ) and x − y ∈ supp(ϕ). It follows that
supp(κx ) ⊂ supp(f ) ∩ ({x} − supp(ϕ)).
Therefore κx is compactly supported. (Figure 7.5 shows an example of the
multiplicands f (y) and ϕ(x−y) of κx (y), and Figure 7.6 shows their compactly
supported product.) Also, since f and ϕ are continuous, κx is continuous.
That is, for each x, the mollifying kernel κx viewed as a function of y again
lies in Cc0 (Rn ), making it integrable by Theorem 6.3.1.
The mollifying kernel is so named for good reason. First, it is a kernel in
the sense that we integrate it to get a new function.
Definition 7.3.4 (Convolution). Let f ∈ Cc0 (Rn ) and let ϕ ∈ Cc∞ (Rn ). The
convolution of f and ϕ is the function defined by integrating the mollifying
kernel,

f ∗ ϕ ∶ Rn Ð→ R,    (f ∗ ϕ)(x) = ∫y κx (y) = ∫y f (y)ϕ(x − y).

Second, although the mollifying kernel is only as well behaved as f , inte-


grating it indeed mollifies f in the sense that the integral is as well behaved
as ϕ, i.e., the integral is a test function. Even if f is nowhere differentiable,
f ∗ ϕ has all partial derivatives of all orders while remaining compactly sup-
ported. Furthermore, the derivatives have the natural formula obtained by
passing them through the integral.


Figure 7.5. Multiplicands of the mollifying kernel


Figure 7.6. The mollifying kernel is compactly supported

Proposition 7.3.5 (Derivatives of the convolution). Let f ∈ Cc0 (Rn ) and


let ϕ ∈ Cc∞ (Rn ). Then also f ∗ ϕ ∈ Cc∞ (Rn ). Specifically, the partial derivatives
of the convolution are the convolutions with the partial derivatives,

Dj (f ∗ ϕ) = f ∗ Dj ϕ, j = 1, . . . , n,

and similarly for the higher-order partial derivatives.


The following result helps to prove Proposition 7.3.5. In its statement, the
symbol ϕ, which usually denotes a test function, instead denotes a Cc1 (Rn )-
function. The reason for the weaker hypothesis will appear soon in the proof
of Corollary 7.3.7.
Lemma 7.3.6 (Uniformity lemma for C 1 -functions). Let ϕ ∈ Cc1 (Rn ).
Given ε > 0, there exists a corresponding δ > 0 such that for all a ∈ Rn and all
nonzero h ∈ R, and for every j ∈ {1, . . . , n},

∣h∣ < δ Ô⇒ ∣ (ϕ(a + hej ) − ϕ(a))/h − Dj ϕ(a) ∣ < ε.

Proof. Fix some j in {1, . . . , n}. The mean value theorem at the jth coordinate
gives for all a ∈ Rn and all nonzero h ∈ R,

∣ (ϕ(a + hej ) − ϕ(a))/h − Dj ϕ(a) ∣ = ∣Dj ϕ(a + tej ) − Dj ϕ(a)∣    where ∣t∣ < ∣h∣.

Since Dj ϕ is continuous on Rn and is compactly supported, it is uniformly


continuous on Rn , and so given any ε > 0 there exists a corresponding δj > 0
such that for all a ∈ Rn and t ∈ R,

∣Dj ϕ(a + tej ) − Dj ϕ(a)∣ < ε if ∣t∣ < δj .

Thus
∣h∣ < δj Ô⇒ ∣ (ϕ(a + hej ) − ϕ(a))/h − Dj ϕ(a) ∣ < ε.
After running the argument of the previous paragraph for j = 1, . . . , n, de-
fine δ = min{δ1 , . . . , δn }. Then for all nonzero h ∈ R and for each j ∈ {1, . . . , n},
if ∣h∣ < δ then ∣h∣ < δj . This implication combines with the previous display to
give the result. ⊔

Now we can establish the derivative formula for the convolution.

Proof (of Proposition 7.3.5). To see that f ∗ ϕ is compactly supported, recall


the observation that for a given x, the mollifying kernel κx (y) = f (y)ϕ(x − y)
can be nonzero only at y-values such that

y ∈ supp(f ) ∩ ({x} − supp(ϕ)).

Such y can exist only if x takes the form

x = y + z, y ∈ supp(f ), z ∈ supp(ϕ).

That is, the integrand is always zero if x ∉ supp(f ) + supp(ϕ) (see Figure 7.7).
Hence,
supp(f ∗ ϕ) ⊂ supp(f ) + supp(ϕ).


Figure 7.7. The mollifying kernel is zero for x outside supp(f ) + supp(ϕ)

To show that Dj (f ∗ϕ) exists and equals f ∗Dj ϕ for j = 1, . . . , n is precisely


to show that each x-derivative passes through the y-integral,

(∂/∂xj ) ∫y f (y)ϕ(x − y) = ∫y f (y) (∂ϕ/∂xj )(x − y),    j = 1, . . . , n.

Since the integral is being taken over some box B, the equality follows from
Proposition 6.6.2. But we prove it using other methods, for reasons that will
emerge later in the chapter. The function f is bounded, say by R, so we can
estimate that for every x ∈ Rn and every nonzero h ∈ R and every j,

∣ ((f ∗ ϕ)(x + hej ) − (f ∗ ϕ)(x))/h − (f ∗ Dj ϕ)(x) ∣
    = ∣ ( ∫y f (y)ϕ(x + hej − y) − ∫y f (y)ϕ(x − y) ) / h − ∫y f (y)Dj ϕ(x − y) ∣
    = ∣ ∫y f (y) ( (ϕ(x − y + hej ) − ϕ(x − y))/h − Dj ϕ(x − y) ) ∣
    ≤ R ∫y ∣ (ϕ(x − y + hej ) − ϕ(x − y))/h − Dj ϕ(x − y) ∣ .

Assuming that ∣h∣ < 1, the support of the integrand as a function of y lies in
the bounded set
{x + tej ∶ −1 < t < 1} − supp(ϕ),
and therefore the integral can be taken over some box B. By the unifor-
mity lemma, given any ε > 0, for all small enough h the integrand is less
than ε/(R vol(B)) uniformly in y. Consequently the integral is less than ε/R.
In sum, given any ε > 0, for all small enough h we have

∣ ((f ∗ ϕ)(x + hej ) − (f ∗ ϕ)(x))/h − (f ∗ Dj ϕ)(x) ∣ < ε.

Since x is arbitrary, this gives the desired result for first-order partial deriva-
tives,
Dj (f ∗ ϕ) = f ∗ Dj ϕ, j = 1, . . . , n.
As for higher-order partial derivatives, note that Dj ϕ ∈ Cc∞ (Rn ) for each j.
So the same result for second-order partial derivatives follows,

Djj ′ (f ∗ ϕ) = Dj ′ (f ∗ Dj ϕ) = f ∗ Djj ′ ϕ, j, j ′ = 1, . . . , n,

and so on. ⊓⊔

The proof of Proposition 7.3.5 required only that each κx be integrable,


that f be bounded, and that ϕ lie in Cc1 (Rn ). We will make use of this obser-
vation in Section 7.5.
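As a minimal numerical sketch of the derivative formula (our own illustration; the step function f , the bump ϕ, and the grid are all hypothetical choices), a Riemann-sum convolution on a grid can be differentiated directly and compared against f ∗ Dϕ:

```python
import numpy as np

dx = 0.002
y = np.arange(-3.0, 3.0, dx)

f = np.where(np.abs(y) <= 1.0, 1.0, 0.0)       # bounded, compactly supported f
phi = np.zeros_like(y)                          # a smooth bump with integral 1
inside = np.abs(y) < 1.0
phi[inside] = np.exp(-1.0/(1.0 - y[inside]**2))
phi /= phi.sum() * dx

conv = np.convolve(f, phi, mode="same") * dx    # Riemann-sum stand-in for f * phi
lhs = np.gradient(conv, dx)                     # D(f * phi)
rhs = np.convolve(f, np.gradient(phi, dx), mode="same") * dx   # f * (D phi)
print(np.max(np.abs(lhs - rhs)))                # small: agreement up to discretization error
```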

If the function f lies in the subspace Cc1 (Rn ) of Cc0 (Rn ) then the partial
derivatives of the convolution pass through the integral to f as well as to ϕ.
That is, for differentiable functions, the derivative of the convolution is the
convolution of the derivative.

Corollary 7.3.7. Let k ≥ 1, let f ∈ Cck (Rn ), and let ϕ ∈ Cc∞ (Rn ). Then

Dj1 ...jk (f ∗ ϕ) = Dj1 ...jk f ∗ ϕ, j1 , . . . , jk = 1, . . . , n.

Proof. Since
(f ∗ ϕ)(x) = ∫y f (y)ϕ(x − y),

it follows by the change of variable theorem (replace y by x − y) that also

(f ∗ ϕ)(x) = ∫y f (x − y)ϕ(y).

Now the proof of the proposition works with the roles of f and ϕ exchanged
to show that Dj (f ∗ ϕ) = Dj f ∗ ϕ for j = 1, . . . , n. (Here is where it is relevant
that the uniformity lemma requires only a Cc1 (Rn )-function rather than a test
function.) Similarly, if f ∈ Cc2 (Rn ) then because Dj f ∈ Cc1 (Rn ) for j = 1, . . . , n,
it follows that

Djj ′ (f ∗ ϕ) = Djj ′ f ∗ ϕ, j, j ′ = 1, . . . , n.

The argument for higher derivatives is the same. ⊓⊔


Consider a function f ∈ Cc0 (Rn ). Now that we know that every convolution
f ∗ ϕ (where ϕ ∈ Cc∞ (Rn )) lies in Cc∞ (Rn ), the next question is to what extent
the test function f ∗ ϕ resembles the original compactly supported continuous
function f . As already noted, for every x, the integral

(f ∗ ϕ)(x) = ∫y f (y)ϕ(x − y)

refers to values of f only on {x} − supp(ϕ). Especially, if supp(ϕ) is a small


set about the origin then the convolution value (f ∗ ϕ)(x) depends only on
the behavior of the original function f near x. The next section will construct
useful test functions ϕ having small support, the idea being that convolu-
tions f ∗ ϕ with such test functions will approximate the functions f being
convolved. For example, in Figure 7.5, f (x) is small and positive, while the
integral of the mollifying kernel shown in Figure 7.6 is plausibly small and
positive as well.

Exercises

7.3.1. (a) Show that the sum of two compact sets is compact.
(b) Let B(a, r) and B(b, s) be open balls. Show that their sum is B(a +
b, r + s).
(c) Recall that there are four standard axioms for addition, either in the
context of a field or a vector space. Which of the four axioms are satisfied by
set addition, and which are not?
(d) Let 0 < a < b. Let A be the circle of radius b in the (x, y)-plane, centered
at the origin. Let B be the closed disk of radius a in the (x, z)-plane, centered
at (b, 0, 0). Describe the sum A + B.

7.3.2. Let f ∈ Cc0 (Rn ), and let ϕ ∈ Cc∞ (Rn ). Assume that ϕ ≥ 0, i.e., all
output values of ϕ are nonnegative, and assume that ∫ ϕ = 1. Suppose that R
bounds f , meaning that ∣f (x)∣ < R for all x. Show that R also bounds f ∗ ϕ.

7.4 Test Approximate Identity and Convolution


Our next technical tool is a sequence of test functions whose graphs are ever
taller and more narrow, each enclosing volume 1.

Definition 7.4.1 (Test approximate identity). A test approximate


identity is a sequence of test functions

{ϕm } = {ϕ1 , ϕ2 , ϕ3 , . . . }

such that:
(1) Each ϕm is nonnegative, i.e., each ϕm maps Rn to R≥0 .
(2) Each ϕm has integral 1, i.e., ∫ ϕm = 1 for each m.
(3) The supports of the ϕm shrink to {0}, i.e.,

supp(ϕ1 ) ⊃ supp(ϕ2 ) ⊃ ⋯,   ⋂_{m=1}^{∞} supp(ϕm ) = {0}.

We can construct a test approximate identity using the pulse function p


from Section 7.2. Define for m = 1, 2, 3, . . .

ϕm ∶ Rn Ð→ R, ϕm (x) = mn p(mx1 ) p(mx2 )⋯p(mxn ).

Then supp(ϕm ) = [−1/m, 1/m]n for each m. Here the coefficient mn is chosen
such that ∫ ϕm = 1 (Exercise 7.4.1). Figure 7.8 shows the graphs of ϕ2 , ϕ4 ,
ϕ8 , and ϕ15 when n = 1. The first three graphs have the same vertical scale,
but not the fourth. Figure 7.9 shows the graphs of ϕ1 through ϕ4 when n = 2,
all having the same vertical scale.
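A one-dimensional numerical sketch of this construction (our own; it assumes the pulse p is the smooth bump e−1/(1−x²) on (−1, 1), normalized numerically so that its integral is 1):

```python
import numpy as np

dx = 1e-4
x = np.arange(-1.5, 1.5, dx)

def pulse(u):
    # A smooth bump supported on [-1, 1]; the constant c below normalizes it.
    out = np.zeros_like(u)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0/(1.0 - u[inside]**2))
    return out

c = 1.0 / (pulse(x).sum() * dx)     # normalizing constant: the integral of c*pulse is 1

for m in [2, 4, 8, 15]:
    phi_m = m * c * pulse(m*x)      # the n = 1 case of phi_m(x) = m^n p(m x_1)...p(m x_n)
    supp = x[phi_m > 0.0]
    print(m, phi_m.sum()*dx, supp.min(), supp.max())   # integral 1, support about [-1/m, 1/m]
```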
Figure 7.8. The functions ϕ2 , ϕ4 , ϕ8 , and ϕ15 from an approximate identity

Figure 7.9. The functions ϕ1 through ϕ4 from a two-dimensional approximate identity

The identity being approximated by the sequence of test functions {ϕm } is
the Dirac delta function from the chapter introduction, denoted δ. To repeat

ideas from the introduction, δ is conceptually a unit point mass at the origin,
and so its properties should be

supp(δ) = {0}, ∫ δ = 1.

No such function exists in the orthodox sense of the word function. But regard-
less of sense, for every function f ∶ Rn Ð→ R and every x ∈ Rn , the mollifying
kernel associated to f and δ,
κx (y) = f (y)δ(x − y),

is conceptually a point of mass f (x) at x. That is, its properties should be

supp(κx ) = {x},   (f ∗ δ)(x) = ∫y κx (y) = f (x).
Under a generalized notion of function, the Dirac delta makes perfect sense as
an object called a distribution, defined by the integral in the previous display
but only for a limited class of functions:
for all x, (f ∗ δ)(x) = f (x) for test functions f .
Yes, now it is f that is restricted to be a test function. The reason for this is
that δ is not a test function, not being a function at all, and to get a good
theory of distributions such as δ, we need to restrict the functions that they
convolve with. In sum, the Dirac delta function is an identity in the sense that
f ∗ δ = f for test functions f .
Distribution theory is beyond the scope of these notes, but we may conceive
of the identity property of the Dirac delta function as the expected limiting
behavior of any test approximate identity. That is, returning to the environ-
ment of f ∈ Cc0 (Rn ) and taking any test approximate identity {ϕm }, we expect
that
lim_{m} (f ∗ ϕm ) = f for Cc0 (Rn )-functions f .
As explained in Section 7.1, this limit will be uniform, meaning that the values
(f ∗ ϕm )(x) will converge to f (x) at one rate simultaneously for all x in Rn .
See Exercise 7.4.3 for an example of nonuniform convergence.
For an example of convolution with elements of a test approximate identity,
consider the sawtooth function

f ∶ R Ð→ R,   f (x) = ⎧ ∣x∣        if ∣x∣ ≤ 1/4,
                      ⎨ 1/2 − ∣x∣  if 1/4 < ∣x∣ ≤ 1/2,
                      ⎩ 0          if 1/2 < ∣x∣.
Recall the test approximate identity {ϕm } from after Definition 7.4.1. Fig-
ure 7.10 shows f and its convolutions with ϕ2 , ϕ4 , ϕ8 , and ϕ15 . The convo-
lutions approach the original function while smoothing its corners, and the
convolutions are bounded by the bound on the original function as shown in
Exercise 7.3.2. Also, the convolutions have larger supports than the original
function, but the supports shrink toward the original support as m grows.
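The figure is easy to reproduce numerically; the following sketch (ours, using the same bump-built ϕm as in the previous snippet) also reports the uniform error, which shrinks as m grows:

```python
import numpy as np

dx = 1e-3
x = np.arange(-1.0, 1.0, dx)

ax = np.abs(x)
f = np.where(ax <= 0.25, ax, np.where(ax <= 0.5, 0.5 - ax, 0.0))   # the sawtooth

def phi_m(m):
    # m * p(m x); dividing by the Riemann sum normalizes the integral to 1.
    u = m * x
    p = np.zeros_like(u)
    inside = np.abs(u) < 1.0
    p[inside] = np.exp(-1.0/(1.0 - u[inside]**2))
    return p / (p.sum() * dx)

for m in [2, 4, 8, 15]:
    conv = np.convolve(f, phi_m(m), mode="same") * dx
    print(m, np.max(np.abs(conv - f)))   # the uniform error decreases with m
```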
The following lemma says that if compact sets shrink to a point, then
eventually they lie inside any given ball about the point. Specifically, the sets
that we have in mind are the supports of a test approximate identity.

Figure 7.10. The sawtooth function convolved with various ϕm

Lemma 7.4.2 (Shrinking sets lemma). Let

{Sm } = {S1 , S2 , S3 , . . . }

be a sequence of compact subsets of Rn such that

S1 ⊃ S2 ⊃ S3 ⊃ ⋯,   ⋂_{m=1}^{∞} Sm = {0}.

Then for every δ > 0 there exists some positive integer m0 such that

for all m ≥ m0 , Sm ⊂ B(0, δ).

Proof. Let δ > 0 be given. If no Sm lies in B(0, δ) then there exist points

x1 ∈ S1 ∖ B(0, δ),
x2 ∈ S2 ∖ B(0, δ),
x3 ∈ S3 ∖ B(0, δ),

and so on. The sequence {xm } lies in S1 , so it has a convergent subsequence.


The containments S1 ⊃ S2 ⊃ ⋯ show that replacing the sequence by the subse-
quence preserves the displayed conditions, so we may assume that the original
sequence converges. Let x denote its limit. For every m ≥ 1, the terms of the
sequence from index m onward lie in Sm , so x ∈ Sm . Thus x ∈ ⋂m Sm = {0},
i.e., x = 0. But also, ∣xm ∣ ≥ δ for each m, so ∣x∣ ≥ δ. This is a contradiction, so
we are done. ⊓⊔


The hypothesis of compactness is necessary in the shrinking sets lemma


(Exercise 7.4.2).
Theorem 7.4.3 (Cc0 (Rn )-approximation by convolutions). Consider a
function f ∈ Cc0 (Rn ) and let {ϕm } ∶ Rn Ð→ R be a test approximate identity.
Given ε > 0, there exists a positive integer m0 such that for all integers m,

m ≥ m0 Ô⇒ ∣f ∗ ϕm − f ∣ < ε.

That is, the convolutions f ∗ϕm converge uniformly to the original function f .
Proof. Let ε > 0 be given. Since the support of f is compact, f is uniformly
continuous on its support, and hence f is uniformly continuous on all of Rn .
So there exists some δ > 0 such that for all x, y ∈ Rn ,

∣y − x∣ < δ Ô⇒ ∣f (y) − f (x)∣ < ε.

Because the supports of the approximate identity functions shrink to {0},


the shrinking sets lemma says that there exists some positive integer m0 such
that for all integers m ≥ m0 , supp(ϕm ) ⊂ B(0, δ). Note that m0 depends only
on δ, which in turn depends only on ε, all of this with no reference to any
particular x ∈ Rn . Now, for all x, y ∈ Rn , and all m ≥ m0 ,

y ∈ x − supp(ϕm ) Ô⇒ y ∈ x − B(0, δ) = x + B(0, δ)


Ô⇒ ∣y − x∣ < δ
Ô⇒ ∣f (y) − f (x)∣ < ε.

Because the approximate identity functions ϕm have integral 1, we have for


all x ∈ Rn and all positive integers m,

f (x) = ∫y f (x)ϕm (x − y).

Use the fact that the approximate identity functions ϕm are nonnegative to
estimate that for all x ∈ Rn and all positive integers m,

∣(f ∗ ϕm )(x) − f (x)∣ = ∣∫y (f (y) − f (x))ϕm (x − y)∣
                      ≤ ∫y ∣f (y) − f (x)∣ϕm (x − y).

We may integrate only over y-values in x − supp(ϕm ), so that if m ≥ m0 then


the integrand is less than εϕm (x − y). That is, since the approximate identity
functions have integral 1, we have for all x ∈ Rn and all positive integers m,

m ≥ m0 Ô⇒ ∣(f ∗ ϕm )(x) − f (x)∣ < ε ∫y ϕm (x − y) = ε.

This is the desired result. Note how the argument has used all three defining
properties of the approximate identity. ⊓⊔


Corollary 7.4.4 (Cck (Rn )-approximation by convolutions). Let k be a


positive integer. Consider a function f ∈ Cck (Rn ) and let {ϕm } ∶ Rn Ð→ R be a
test approximate identity. Given ε > 0, there exists a positive integer m0 such
that for all integers m,

m ≥ m0 Ô⇒ ∣f ∗ ϕm − f ∣k < ε.

That is, the convolutions and their derivatives converge uniformly to the orig-
inal function and its derivatives up to order k.
Proof. Recall from Corollary 7.3.7 that if f ∈ Cc1 (Rn ) then for every test func-
tion ϕ, the derivative of the convolution is the convolution of the derivative,

Dj (f ∗ ϕ) = Dj f ∗ ϕ, j = 1, . . . , n.

Since the derivatives Dj f lie in Cc0 (Rn ), the theorem says that their convo-
lutions Dj f ∗ ϕm converge uniformly to the derivatives Dj f as desired. The
argument for higher derivatives is the same. ⊓⊔

Exercises

7.4.1. Recall that ∫ p = 1 where p ∶ R Ð→ R is the pulse function from


Section 7.2. Let m be any positive integer and recall the definition in this
section,
ϕm (x) = mn p(mx1 ) p(mx2 )⋯p(mxn ).
Explain why consequently ∫ ϕm = 1.
7.4.2. Find a sequence {Sm } of subsets of R satisfying all of the hypotheses
of the shrinking sets lemma except for compactness, and such that no Sm is
a subset of the interval B(0, 1) = (−1, 1).
7.4.3. This exercise illustrates a nonuniform limit. For each positive integer m,
define
fm ∶ [0, 1] Ð→ R, fm (x) = xm .


Also define

f ∶ [0, 1] Ð→ R,   f (x) = ⎧ 0 if 0 ≤ x < 1,
                           ⎩ 1 if x = 1.

(a) Using one set of axes, graph f1 , f2 , f3 , f10 , and f .
(b) Show that for every x ∈ [0, 1], limm fm (x) = f (x). That is, given ε > 0,
there exists some positive integer m0 such that for all positive integers m,

m ≥ m0 Ô⇒ ∣fm (x) − f (x)∣ < ε.

Thus the function f is the limit of the sequence of functions {fm }. That is,

for each x, f (x) = lim_{m} {fm (x)}.

(c) Now let ε = 1/2. Show that for every positive integer m, no matter how
large, there exists some corresponding x ∈ [0, 1] such that ∣fm (x) − f (x)∣ ≥ ε.
That is,

for each m, ∣fm (x) − f (x)∣ fails to be small for some x.

Thus the convergence of {fm } to f is not uniform, i.e., the functions do not
converge to the limit-function at one rate simultaneously for all x ∈ [0, 1].

7.5 Known-Integrable Functions


Recall that the slogan-title of Theorem 6.5.4 is near-continuity implies in-
tegrability. The largest space of functions that we have considered so far in
this chapter is Cc0 (Rn ), so we have not yet discussed the entire class of func-
tions that we know to be integrable. This section gives some results about
convolution and approximation for such functions.
Recall also that a function is called bounded if its outputs form a bounded
set.

Definition 7.5.1 (Known-integrable function). A function

f ∶ Rn Ð→ R

is known-integrable if it is bounded, compactly supported, and continuous


except on a set of volume zero. The class of known-integrable functions is
denoted Ic (Rn ).

Unsurprisingly, the class Ic (Rn ) forms a vector space over R.


Let f ∈ Ic (Rn ). The integral of f is the integral of f over any box that
contains its support,

∫ f = ∫B f   where supp(f ) ⊂ B.

Similarly to the remarks after Definition 7.3.2, the integral on the right side
exists, but this time by Theorem 6.5.4. The integral on the right side is inde-
pendent of the box B, and so the integral on the left side exists, is unambigu-
ous, and is understood to be the integral of f over all of Rn .
The convolution remains sensible when f is known-integrable. That is, if
f ∈ Ic (Rn ) and ϕ ∈ Cc∞ (Rn ) then for each x ∈ Rn the mollifying kernel

κx ∶ Rn Ð→ R, κx (y) = f (y)ϕ(x − y)

again lies in Ic (Rn ). And so we may continue to define the convolution of f


and ϕ as
f ∗ ϕ ∶ Rn Ð→ R,   (f ∗ ϕ)(x) = ∫y κx (y).

The formulas for convolution derivatives remain valid as well. That is, if f ∈
Ic (Rn ) and ϕ ∈ Cc∞ (Rn ) then also f ∗ ϕ ∈ Cc∞ (Rn ), and

Dj (f ∗ ϕ) = f ∗ Dj ϕ, j = 1, . . . , n,
Djj ′ (f ∗ ϕ) = f ∗ Djj ′ ϕ, j, j ′ = 1, . . . , n,

and so on. Here is where it is relevant that our proof of Proposition 7.3.5
required only that each κx be integrable, that f be bounded, and that ϕ lie
in Cc1 (Rn ).
Given a known-integrable function f ∈ Ic (Rn ) and a test approximate
identity {ϕm }, we would like the convolutions {f ∗ ϕm } to approximate f uni-
formly as m grows. But the following proposition shows that this is impossible
when f has discontinuities.
Proposition 7.5.2 (The uniform limit of continuous functions is con-
tinuous). Let
{fm } ∶ Rn Ð→ R
be a sequence of continuous functions that converges uniformly to a limit func-
tion
f ∶ Rn Ð→ R.
Then f is continuous as well.
Proof. For every two points x, x̃ ∈ Rn and for every positive integer m we have

∣f (x̃) − f (x)∣ = ∣f (x̃) − fm (x̃) + fm (x̃) − fm (x) + fm (x) − f (x)∣


≤ ∣f (x̃) − fm (x̃)∣ + ∣fm (x̃) − fm (x)∣ + ∣fm (x) − f (x)∣.

Let ε > 0 be given. For all m large enough, the first and third terms are less
than ε/3 regardless of the values of x and x̃. Fix such a value of m, and fix x.
Then since fm is continuous, the middle term is less than ε/3 if x̃ is close
enough to x. It follows that

∣f (x̃) − f (x)∣ < ε for all x̃ close enough to x.

That is, f is continuous. ⊓⊔



Thus the convergence property of convolutions must become more tech-
nical for known-integrable functions rather than compactly supported con-
tinuous functions. In preparation for proving the convergence property, the
following lemma says that if K is a compact subset of an open set then so is
the sum of K and some closed ball.
Lemma 7.5.3 (Thickening lemma). Let K and A be subsets of Rn such
that
K ⊂ A, K is compact, A is open.
Then
for some r > 0, K + B(0, r) ⊂ A.

Proof. Since K is compact, it lies in some ball B(0, R). Solving the problem
with the open set A ∩ B(0, R) in place of A also solves the original problem.
Having replaced A by A ∩ B(0, R), define a function on K that takes
positive real values,

d ∶ K Ð→ R>0 , d(a) = sup{r ∶ B(a, r) ⊂ A}.

The fact that we have shrunk A (if necessary) to lie inside the ball has ensured
that d is finite, because specifically d(a) ≤ R for all a. Fix some a ∈ K and let
r = d(a). Let {rm } be a strictly increasing sequence of positive real numbers
such that limm {rm } = r. Then B(a, rm ) ⊂ A for each m, and so

B(a, r) = ⋃_{m=1}^{∞} B(a, rm ) ⊂ A.

This argument shows that in fact,

d(a) = max{r ∶ B(a, r) ⊂ A}.

The function d is continuous. To see this, fix some point a ∈ K and let
r = d(a). Consider also a second point ã ∈ K such that ∣ã − a∣ < r, and let
r̃ = d(ã). Then
B(ã, r − ∣ã − a∣) ⊂ B(a, r) ⊂ A,
showing that r̃ ≥ r − ∣ã − a∣. Either r̃ ≤ r + ∣ã − a∣, or r̃ > r + ∣ã − a∣ ≥ r so that also
∣ã − a∣ < r̃ and the same argument shows that r ≥ r̃ − ∣ã − a∣, i.e., r̃ ≤ r + ∣ã − a∣
after all. That is, we have shown that for every a ∈ K,

ã ∈ K and ∣ã − a∣ < d(a) Ô⇒ ∣d(ã) − d(a)∣ ≤ ∣ã − a∣.

Thus d is continuous at a (given ε > 0, let δ = min{d(a), ε/2}), and since a ∈ K
is arbitrary, d is continuous on K as claimed.
Since K is compact and d is continuous, d takes a minimum value r̃ > 0.
Thus K + B(0, r̃) ⊂ A. Finally, let r = r̃/2. Then the closed ball B(0, r) lies inside the open ball B(0, r̃), and so K + B(0, r) ⊂ A as desired. ⊓⊔
Now we can establish the convergence property of convolutions for known-
integrable functions.
Theorem 7.5.4 (Ic (Rn )-approximation by convolutions). Consider a
function f ∈ Ic (Rn ) and let {ϕm } ∶ Rn Ð→ R be a test approximate identity.
Let K be a compact subset of Rn such that f is continuous on an open su-
perset of K. Given ε > 0, there exists a positive integer m0 such that for all
integers m,

m ≥ m0 Ô⇒ ∣(f ∗ ϕm )(x) − f (x)∣ < ε for all x ∈ K.

That is, the convolutions converge uniformly to the original function on com-
pact subsets of open sets where the function is continuous.

Figure 7.11. The truncated squaring function convolved with various ϕm

Proof. Let ε > 0 be given. By the thickening lemma, there exists some r > 0
such that f is continuous on K + B(0, r). Hence f is uniformly continuous on
K + B(0, r). That is, there exists δ > 0 (with δ < r) such that for all x ∈ K and
all y ∈ Rn ,
∣y − x∣ < δ Ô⇒ ∣f (y) − f (x)∣ < ε.
There exists some positive integer m0 such that for all integers m ≥ m0 ,
supp(ϕm ) ⊂ B(0, δ). For all x ∈ K, all y ∈ Rn , and all m ≥ m0 ,

y ∈ x − supp(ϕm ) Ô⇒ y ∈ x − B(0, δ) = x + B(0, δ)


Ô⇒ ∣y − x∣ < δ
Ô⇒ ∣f (y) − f (x)∣ < ε.

From here, the proof is virtually identical to the proof of Theorem 7.4.3. ⊓⊔

For example, consider the truncated squaring function




f ∶ R Ð→ R,   f (x) = ⎧ x2 if ∣x∣ ≤ 1/2,
                      ⎩ 0  if 1/2 < ∣x∣.

Note that f lies in Ic (R) rather than in Cc0 (R) because of its discontinuities
at x = ±1/2. Figure 7.11 shows f and its convolutions with ϕ2 , ϕ4 , ϕ8 , and ϕ15 .
The convolutions converge uniformly to the truncated parabola on compact
sets away from the two points of discontinuity. But the convergence is not well
behaved at or near those two points. Indeed, the function value f (±1/2) = 1/4
rather than f (±1/2) = 0 is arbitrary and has no effect on the convolution
in any case. And again the convolutions are bounded by the bound on the

original function, and their supports shrink toward the original support as m
grows.
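A numerical sketch of this contrast (ours, with grid and m-values arbitrary): the error away from ±1/2 shrinks with m, while the error at the jumps does not.

```python
import numpy as np

dx = 1e-3
x = np.arange(-1.0, 1.0, dx)
f = np.where(np.abs(x) <= 0.5, x**2, 0.0)    # the truncated squaring function

def phi_m(m):
    u = m * x
    p = np.zeros_like(u)
    inside = np.abs(u) < 1.0
    p[inside] = np.exp(-1.0/(1.0 - u[inside]**2))
    return p / (p.sum() * dx)

away = np.abs(np.abs(x) - 0.5) > 0.1         # a compact set avoiding x = +-1/2
for m in [15, 30, 60]:
    err = np.abs(np.convolve(f, phi_m(m), mode="same") * dx - f)
    print(m, err[away].max(), err.max())     # the first shrinks with m; the second does not
```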

In consequence of Ic (Rn )-approximation by convolutions, every integral


of a known-integrable function is approximated as closely as desired by the
integral of a test function. Thus the hypothesis of a continuous integrand f in
the change of variable theorem for multiple integrals (Theorem 6.7.1), men-
tioned in the last bullet of the chapter introduction, can now be weakened to
a known-integrable integrand.
8
Parametrized Curves

This chapter introduces parametrized curves as a warmup for Chapter 9 to fol-


low. The subject of Chapter 9 is integration over k-dimensional parametrized
surfaces in n-dimensional space, and the parametrized curves of this chap-
ter are the special case k = 1. Multivariable integration plays no role in this
chapter. Aside from being one-dimensional surfaces, parametrized curves are
interesting in their own right.
Section 8.1 leads into the subject of curves by introducing two specific
curves that solve problems beyond the capabilities of classical straightedge
and compass constructions. One striking idea here is the fact that by using
algebra to study geometry, we can describe precisely how the classical con-
structions are limited. Section 8.2 begins the study of parametrized curves,
meaning curves that we view not only as sets but as specified traversals of
the sets. Section 8.3 discusses the canonical parametrization of a curve by arc
length, the traversal at unit speed. Section 8.4 specializes the discussion to
curves in the plane. In this case, a local parameter called the curvature gives
a fairly complete description of curves in the large. Similarly, Section 8.5 dis-
cusses curves in three-dimensional space. Here a second local parameter called
torsion is needed along with curvature to describe curves. Finally, Section 8.6
generalizes the idea of describing a curve in optimal local coordinates to n
dimensions.

8.1 Euclidean Constructions and Two Curves

The straightedge constructs the line that passes through two given points in
the Euclidean plane. The compass constructs the circle that is centered at
a given point and has a given distance as its radius. A finite succession of
straightedge and compass constructions is called a Euclidean construction.
Physical straightedge and compass constructions are imprecise. Further-
more, there is really no such thing as a straightedge: aside from having to be


infinite, the line-constructor somehow requires a prior line for its own con-
struction. But we don’t concern ourselves with the details of actual tools for
drawing lines and circles. Instead we imagine the constructions to be ideal,
and we focus on the theoretical question of what Euclidean constructions can
or cannot accomplish.
With computer graphics being a matter of course to us today, the techno-
logical power of Euclidean constructions, however idealized, is underwhelming,
and so one might reasonably wonder why they deserve study. One point of this
section is to use the study of Euclidean constructions to demonstrate the idea
of investigating the limitations of a technology. That is, mathematical reason-
ing of one sort (in this case, algebra) can determine the capacities of some
other sort of mathematical technique (in this case, Euclidean constructions).
In a similar spirit, a subject called Galois theory uses the mathematics of fi-
nite group theory to determine the capacities of solving polynomial equations
by radicals.
In a high-school geometry course one should learn that Euclidean con-
structions have the capacity to
• bisect an angle,
• bisect a segment,
• draw the line through a given point and perpendicular to a given line,
• and draw the line through a given point and parallel to a given line.
These constructions (Exercise 8.1.1) will be taken for granted here.
Two classical problems of antiquity are trisecting the angle and doubling
the cube. This section will argue algebraically that neither of these problems
can be solved by Euclidean constructions, and then the second point of this
section is to introduce particular curves—and methods to generate them—
that solve the classical problems where Euclidean constructions fail to do so.
Take any two distinct points in the plane and denote them 0 and 1. Use
the straightedge to draw the line through them. We may as well take the
line to be horizontal with 1 appearing to the right of 0. Now define a real
number r as Euclidean if we can locate it on our number line with a Euclidean
construction. For instance, it is clear how the compass constructs the integers
from 0 to any specified n, positive or negative, in finitely many steps. Thus
the integers are Euclidean. Further, we can add an orthogonal line through
any integer. Repeating the process on such orthogonal lines gives us as much
of the integer-coordinate grid as we want.
Proposition 8.1.1. The Euclidean numbers form a subfield of R. That is, 0
and 1 are Euclidean, and if r and s are Euclidean, then so are r ± s, rs, and
(if s ≠ 0) r/s.
Proof. We have already constructed 0 and 1, and given any r and s it is
easy to construct r ± s. If s ≠ 0 then the construction shown in Figure 8.1
produces r/s. Finally, to construct rs when s ≠ 0, first construct 1/s, and then
rs = r/(1/s) is Euclidean as well. ⊓⊔

Figure 8.1. Constructing r/s

Let E denote the field of Euclidean numbers. Since Q is the smallest sub-
field of R, it follows that Q ⊂ E ⊂ R. The questions are whether E is no more
than Q, whether E is all of R, and—assuming that in fact E lies properly be-
tween Q and R—how we can describe the elements of E. The next proposition
shows that E is a proper superfield of Q.

Proposition 8.1.2. If c ≥ 0 is constructible, i.e., if c ∈ E, then so is √c.

Proof. In the construction shown in Figure 8.2 we have a semicircle of radius


(c + 1)/2 centered at ((c + 1)/2, 0). This semicircle contains the point (1, y), where

y = √(((c + 1)/2)2 − ((c − 1)/2)2 ) = √c.

(Due to a tacit assumption in the figure, this proof isn't quite complete, but
see Exercise 8.1.2.) ⊓⊔

Figure 8.2. Constructing √c

Thus, every real number expressible in terms of finitely many square roots
starting from Q, such as √(1 + √(2 + √3)), lies in E. Next we show that the
converse holds as well. That is, every number in E is expressible in finitely
many square roots starting from Q.
Definition 8.1.3. Let F be any subfield of R. A point in F is a point (x, y)
in the plane whose coordinates x and y belong to F. A line in F is a line
through two points in F. A circle in F is a circle whose center is a point in F
and whose radius is a number in F.
Exercise 8.1.3 shows that every line in F has equation ax + by + c = 0 where
a, b, c ∈ F, and every circle in F has equation x2 + y 2 + ax + by + c = 0 with
a, b, c ∈ F.
Proposition 8.1.4. Let F be any subfield of R. Let L1 , L2 be any nonparallel
lines in F, and let C1 , C2 be distinct circles in F. Then:
(1) L1 ∩ L2 is a point in F.
(2) C1 ∩ C2 is either empty or it is one or two points whose coordinates are
expressible in terms of F and a square root of a value in F.
(3) C1 ∩ L1 is either empty or it is one or two points whose coordinates are
expressible in terms of F and a square root of a value in F.
Proof. (1) is Exercise 8.1.4(a).
(2) reduces to (3), for if the circles

C1 ∶ x2 + y2 + a1 x + b1 y + c1 = 0,
C2 ∶ x2 + y2 + a2 x + b2 y + c2 = 0

intersect, then C1 ∩ C2 = C1 ∩ L where L is the line

L ∶ (a1 − a2 )x + (b1 − b2 )y + (c1 − c2 ) = 0

(Exercise 8.1.4(b)). Since C1 is a circle in F, the equations for C2 and L show


that C2 is a circle in F if and only if L is a line in F.
To prove (3), keep the equation for the circle C1 and suppose the line L1
has equation dx + ey + f = 0. The case d = 0 is Exercise 8.1.4(c). Otherwise, we
may take d = 1 after dividing through by d, an operation that keeps the other
coefficients in F. Thus x = −ey − f . Now, for (x, y) to lie in C1 ∩ L1 , we need

(−ey − f )2 + y 2 + a1 (−ey − f ) + b1 y + c1 = 0,

a condition of the form Ay 2 + By + C = 0 with A, B, C ∈ F. Solving for y


involves at most a square root over F, and then x = −ey − f involves only
further operations in F. ⊓⊔

This result characterizes the field E of constructible numbers. Points in E
are obtained by intersecting lines and circles, starting with lines and circles
in Q. By the proposition, this means taking a succession of square roots. Thus,

the field E is the set of numbers expressible in finitely many field and
square root operations starting from Q.
Now we can dispense with the two classical problems mentioned earlier.

Theorem 8.1.5. An angle of 60 degrees cannot be trisected by straightedge


and compass.

Proof. If we could construct a 20-degree angle then we could construct the


number cos(20°) (Exercise 8.1.5(a)). From trigonometry,

cos(3θ) = 4 cos3 (θ) − 3 cos(θ)

(Exercise 8.1.5(b)), so in particular, cos(20°) satisfies the cubic polynomial


relation
4x3 − 3x − 1/2 = 0.
This cubic relation has no quadratic factors, so its root cos(20°) is not con-
structible. (Strictly speaking, this last statement requires some algebraic jus-
tification, but at least it should feel plausible.) ⊓⊔
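A quick numerical sanity check of the cubic relation (our own, not part of the proof):

```python
import math

x = math.cos(math.radians(20))
print(4*x**3 - 3*x - 0.5)   # ~0: cos(20°) is indeed a root of 4x^3 - 3x - 1/2
```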

Theorem 8.1.6. The side of a cube having volume 2 is not constructible.

Proof. Indeed, the side satisfies the relation x3 − 2 = 0, which again has no
quadratic factors. ⊓⊔

Thinking algebraically has made certain complicated-seeming geometric


questions easy.
The second half of this section introduces curves to trisect the angle and
to double the cube. The first curve, the conchoid of Nicomedes, is defined
as follows. Fix a point O and a line L in the plane. For convenience, take
O = (0, 0) and L ∶ {y = b} where b > 0 is constant. Fix a positive real number d.
For each point P ∈ R2 with y-coordinate bigger than b, let ℓ(O, P ) denote the
line through O and P . The conchoid is then the set

{P ∈ R2 ∶ ℓ(O, P ) meets L at a point distance d from P }.

(See Figure 8.3.)


The conchoid can be organically generated, as shown in Figure 8.4. The
lighter piece of the device in the figure swivels at the origin, and as it swivels,
the tack at the lower end of the darker piece tracks the horizontal groove
at y = b. Thus the lighter piece slides along the length of the darker one, and
the pen at its upper end draws the conchoid.
The conchoid trisects angles, as shown in Figure 8.5. Given an angle
∠AOB with AB ⊥ OA, construct the conchoid with d = 2 ⋅ OB. (The conchoid
in this figure is rotated 90 degrees clockwise from those in the previous two
figures.) Then proceed as follows.
Figure 8.3. A conchoid

Figure 8.4. Organic generation of the conchoid

• Let E be the point on the conchoid with the same y-coordinate as B.


• Let C be the intersection point of AB and OE. Thus CE = 2 ⋅ OB by our
choice of conchoid.
• Let D be the midpoint of CE. Thus CD = DE = OB, and also BD is the
same length.
• Let α = ∠AOC. Then also α = ∠BED and α = ∠EBD.
• Let β = ∠BOD. Then also β = ∠BDO. The angle ∠AOB that we want
to trisect equals α + β.

• So the other angle at D equals π − 2α, because it is the remaining angle


in triangle BDE, but also it is visibly π − β. Thus β = 2α.
• The angle ∠AOB = α + β that we want to trisect now equals 3α, and so α
is the desired trisection.

Figure 8.5. Trisecting the angle with the conchoid

The cissoid of Diocles is defined as follows. Take a circle of radius a > 0


centered at (a, 0). Each ray emanating from the origin into the right half-
plane intersects the circle at a point C and intersects the circle’s right vertical
tangent at a point B. Let P denote the point on the ray such that OP =
CB. The cissoid is the set of all such P (see Figure 8.6).
Newton showed how to generate the cissoid organically (see Figure 8.7).
As the tack at the end of the shorter piece of the device in the figure tracks
the vertical groove at x = a, the longer piece of the device slides past the
bumper at (−a, 0). Consequently, the pen in the center of the shorter piece
draws the cissoid. Verifying that this construction indeed gives the cissoid is
Exercise 8.1.6.
The cissoid doubles the cube. In the left half of Figure 8.8, M is the
midpoint of the vertical segment from (1, 0) to (1, 1), so that the smaller
right triangle has height-to-base ratio 1/2. The line through (2, 0) and M
meets the point P on the cissoid, and the larger right triangle also has height-
to-base ratio 1/2. In the right side of the figure, the line through (0, 0) and P
Figure 8.6. The cissoid

meets the circle, and the two horizontal distances labeled x are equal by the
nature of the cissoid. Continuing to work in the right half of the figure, we
see that the right triangle with base x and height y is similar to the two other
right triangles, and the analysis of the left half of the figure has shown that
the unlabeled vertical segment in the right half has height (2 − x)/2. Thus the
similar right triangles give the relations
y/x = (2 − x)/y   and   x/y = ((2 − x)/2)/x.

It follows that

y2/x = 2 − x   and   y/x2 = 2/(2 − x).

Multiply the two equalities to get

(y/x)3 = 2.
That is, multiplying the sides of a cube by y/x doubles the volume of the
cube, as desired.

Figure 8.7. Organic generation of the cissoid

Figure 8.8. Doubling the cube with the cissoid

Exercises

8.1.1. Show how straightedge and compass constructions bisect an angle, bi-
sect a segment, draw the line through point P perpendicular to line L, and
draw the line through point P parallel to line L.

8.1.2. What tacit assumption does the proof of Proposition 8.1.2 make
about c? Complete the proof for constructible c ≥ 0 not satisfying the as-
sumption.

8.1.3. Show that for every subfield F of R, every line in F has equation ax+by+
c = 0 with a, b, c ∈ F; show that every circle in F has equation x2 +y 2 +ax+by+c =
0 with a, b, c ∈ F. Are the converses to these statements true? If the line passes
through the point p in direction d, what are the relations between p, d and
a, b, c? If the circle has center p and radius r, what are the relations between
p, r and a, b, c?
8.1.4. (a) If L1 and L2 are nonparallel lines in F, show that L1 ∩ L2 is a point
with coordinates in F.
(b) If C1 and C2 are distinct intersecting circles in F with equations x2 +
y2 + a1 x + b1 y + c1 = 0 for C1 and similarly for C2 , show that C1 ∩ C2 is equal to
C1 ∩ L where L is the line with equation (a1 − a2 )x + (b1 − b2 )y + (c1 − c2 ) = 0.


(c) Prove Proposition 8.1.4 part (3) when C1 is as in part (b) here and L1
has equation ey + f = 0 with e ≠ 0.
8.1.5. (a) Suppose that the angle θ is constructible. Show that the number
cos θ is constructible as well.
(b) Equate the real parts of the equality ei3α = (eiα )3 to establish the
trigonometric identity cos 3α = 4 cos3 α − 3 cos α.


8.1.6. Show that Newton’s organic construction really does generate the cis-
soid.

8.2 Parametrized Curves


For our purposes a curve is not specified as a subset of Rn , but instead as a
traversal of such a set.
Definition 8.2.1 (Parametrized curve). A parametrized curve is a
smooth mapping
α ∶ I Ð→ Rn
where I ⊂ R is a nonempty interval and n ≥ 1.
Here smooth means that the mapping α has derivatives of all orders. A
small technicality is that the definition should, strictly speaking, insist that if
the interval I is not open then α extends smoothly to some open superinterval
of I. We won’t bother checking this in our examples.
The interval I in the definition is the parameter interval of α. Every
point t ∈ I is a parameter value, and the corresponding point α(t) ∈ Rn is
a point on the curve. Collectively, the set of points on the curve,
α̂ = {α(t) ∶ t ∈ I},
is the trace of the curve. So the nomenclature point on the curve is a slight
abuse of language: a curve is a mapping and its trace is a set, and really
α(t) is a point on the trace of the curve. But maintaining this distinction is
pointlessly pedantic. Also, since all of our curves will be parametrized, we will
refer to them simply as curves.
8.2 Parametrized Curves 385

Definition 8.2.2 (Tangent vector, regular curve). Let α ∶ I Ð→ Rn be a


curve, and let t ∈ I. The tangent vector of α at t is α′ (t). The curve α is
regular if its tangent vector α′ (t) is nonzero for all t ∈ I.
It is often helpful to think of I as an interval of time, so that α describes
a time-dependent motion through space. Thinking in this way suggests some
more terminology.
• The tangent vector α′ (t) is also called the velocity vector of α at t.
• The scalar magnitude ∣α′ (t)∣ of the velocity vector is the speed of α at t.
Thus we may visualize α(t) as a point and the velocity α′ (t) as an arrow ema-
nating from the point in the direction of motion, the length of the arrow being
the speed of the motion. The definition of a regular curve can be rephrased
as the criterion that its time-dependent traversal never comes to a halt.
Definition 8.2.3 (Arc length of a curve). Let α ∶ I Ð→ Rn be a curve,
and let t, t′ be points of I with t < t′ . The arc length of α from t to t′ is
L(t, t′ ) = ∫_{τ=t}^{t′} ∣α′ (τ )∣ dτ.

In physical terms, this definition is a curvy version of the familiar idea that
distance equals speed times time. For a more purely mathematical definition
of a curve’s arc length, we should take the limit of the lengths of inscribed
polygonal paths. Take a partition t0 < t1 < ⋯ < tn of the parameter interval
[t, t′ ], where t0 = t and tn = t′ . The partition determines the corresponding
points on the curve, α(t0 ), α(t1 ), . . . , α(tn ). The arc length should be the
limit of the sums of the lengths of the line segments joining the points,
L(t, t′ ) = lim_{n→∞} ∑_{k=1}^{n} ∣α(tk ) − α(tk−1 )∣.

It is possible to show that for smooth curves—in fact, for C 1 -curves—the


limit exists and is equal to the integral definition of arc length. (The details
of the argument are too technical to deserve full explication here, but very
briefly: since the integral is conceptually the length of an inscribed polygon
with infinitely many sides, each infinitesimally long, and since the length of
an inscribed polygon increases when any of its segments is replaced by more
segments by adding more points of inscription, the definition of L(t, t′ ) as an
integral should be at least as big as the definition of L(t, t′ ) as a limit of sums,
and in fact this is easy to show. For the other inequality we need to argue
that the limit of sums gets as close to the integral as we wish. Since the sums
aren’t quite Riemann sums for the integral, this is where things get slightly
tricky.) Using the limit of sums as the definition of arc length is more general,
since it makes no reference to the smoothness of α. A continuous curve for
which the arc length (defined as the limit of inscribed polygon lengths) is
finite is called rectifiable. Perhaps surprisingly, not all continuous curves are

rectifiable. For that matter, the image of a continuous curve need not match
our intuition of a curve. For instance, there is a continuous mapping from the
closed interval [0, 1] to all of the square [0, 1] × [0, 1], a so-called area-filling
curve. In any case, we will continue to assume that our curves are smooth,
and we will use the integral definition of arc length.
For example, the helix is the curve α ∶ R Ð→ R3 where

α(t) = (a cos t, a sin t, bt).

Here a > 0 and b > 0 are constants. (See Figure 8.9.)

Figure 8.9. The helix

The velocity vector of the helix is

α′ (t) = (−a sin t, a cos t, b),

and so the speed is

∣α′ (t)∣ = √(a2 + b2 ).

For another example, the cycloid is the curve made by a point on a


rolling wheel of radius 1. (See Figure 8.10.) Its parametrization, in terms of
the angle θ through which the wheel has rolled, is

C(θ) = (θ − sin θ, 1 − cos θ), 0 ≤ θ ≤ 2π.

Its velocity vector is

C ′ (θ) = (1 − cos θ, sin θ), 0 ≤ θ ≤ 2π,

and so its speed is

∣C ′ (θ)∣ = √((1 − cos θ)2 + (sin θ)2 ) = √(2 − 2 cos θ)
         = √(4 ⋅ (1/2)(1 − cos θ)) = √(4 sin2 (θ/2))
         = 2∣ sin(θ/2)∣
         = 2 sin(θ/2) since sin ≥ 0 on [0, π].
So the speed of the cycloid is greatest—equal to 2—when θ = π, i.e., when the
point is at the top of the wheel. And it is least—equal to 0—when θ = 0 and
θ = 2π. These results agree with what we see when we look at the wheel of a
moving bicycle: a blur at the top and distinct spokes at the bottom.

Figure 8.10. A rolling wheel

The length of the cycloid as the parameter varies from 0 to some angle θ
is

L(0, θ) = ∫_{t=0}^{θ} 2 sin(t/2) dt = 4 ∫_{t=0}^{θ} sin(t/2) d(t/2) = 4 ∫_{τ=0}^{θ/2} sin(τ ) dτ
        = 4 − 4 cos(θ/2), 0 ≤ θ ≤ 2π.
In particular, a full arch of the cycloid has length 8.
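A numerical corroboration (our own sketch): quadrature of the speed over one arch returns 8.

```python
import numpy as np

theta = np.linspace(0.0, 2.0*np.pi, 100001)
print(np.trapz(2.0*np.sin(theta/2.0), theta))   # ~8, the length of one full arch
```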
The cycloid has amazing properties. Upside down, it is the brachis-
tochrone, the curve of steepest descent, meaning that it is the curve between
two given points along which a bead slides (without friction) most quickly.
Upside down, it is also the tautochrone, meaning that a bead starting from
any point slides (without friction) to the bottom in the same amount of time.
For another property of the cycloid, suppose that a weight swings from a string
4 units long suspended at the origin, between two upside-down cycloids. The
right-hand upside-down cycloid is

C(θ) = (θ − sin θ, cos θ − 1), 0 ≤ θ ≤ 2π.

Thus the weight’s position when it is swinging to the right is (for 0 ≤ θ ≤ π)


α(θ) = C(θ) + (4 − L(0, θ)) C ′ (θ)/∣C ′ (θ)∣
     = (θ − sin θ, cos θ − 1) + 4 cos(θ/2) (1 − cos θ, − sin θ)/(2 sin(θ/2))
     = (θ − sin θ, cos θ − 1) + 2 cot(θ/2)(1 − cos θ, − sin θ).

But since 0 ≤ θ ≤ π, we may carry out the following calculation, in which all
quantities under square root signs are nonnegative and so is the evaluation of
the square root at the last step,

cot(θ/2) = cos(θ/2)/sin(θ/2) = √((1/2)(1 + cos θ))/√((1/2)(1 − cos θ))
         = √((1 + cos θ)2/((1 − cos θ)(1 + cos θ))) = √((1 + cos θ)2/(1 − cos2 θ))
         = (1 + cos θ)/sin θ.

And so now

α(θ) = (θ − sin θ, cos θ − 1) + 2 ((1 + cos θ)/sin θ)(1 − cos θ, − sin θ)
     = (θ − sin θ, cos θ − 1) + 2(sin θ, −1 − cos θ)
     = (θ + sin θ, −3 − cos θ).

Shift the weight’s position rightward by π and upward by 2 to obtain

α(θ) + (π, 2) = (π + θ + sin θ, −1 − cos θ), 0 ≤ θ ≤ π.

On the other hand, the right half of the original upside-down cycloid is

C(π + θ) = (π + θ − sin(π + θ), cos(π + θ) − 1)


= (π + θ + sin θ, −1 − cos θ), 0 ≤ θ ≤ π.

These are identical: α(θ) + (π, 2) = C(θ + π) for 0 ≤ θ ≤ π. That is, the weight
swings along the trace of a cycloid congruent to the two others. Since the
upside-down cycloid is the tautochrone, this idea was used by Huygens
to attempt to design pendulum-clocks that would work on ships despite their
complicated motion.
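The identity α(θ) + (π, 2) = C(π + θ) is easy to spot-check numerically (a sketch of our own):

```python
import numpy as np

t = np.linspace(1e-6, np.pi, 2000)     # stay off theta = 0, where cot(theta/2) blows up
C = lambda u: np.array([u - np.sin(u), np.cos(u) - 1.0])
alpha = C(t) + (2.0/np.tan(t/2.0)) * np.array([1.0 - np.cos(t), -np.sin(t)])
print(np.max(np.abs(alpha + np.array([[np.pi], [2.0]]) - C(np.pi + t))))   # ~0
```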
The area under one arch of the cycloid is the integral

∫_{x=0}^{2π} y(x) dx

where y(x) is the function that takes the x-coordinate of a point of the cycloid
and returns its y-coordinate. As the cycloid parameter θ varies from 0 to 2π,
so does the x-coordinate of the cycloid-point,

x = x(θ) = θ − sin θ,

and the parametrization of the cycloid tells us that even without know-
ing y(x), we know that
y(x(θ)) = 1 − cos θ.
Thus the area under one arch of the cycloid is

∫_{x=0}^{2π} y(x) dx = ∫_{θ=0}^{2π} y(x(θ))x′ (θ) dθ = ∫_{θ=0}^{2π} (1 − cos θ)2 dθ,

and routine calculation shows that the area is 3π.
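As a check (our own sketch), quadrature returns the same value as 3π:

```python
import numpy as np

theta = np.linspace(0.0, 2.0*np.pi, 100001)
print(np.trapz((1.0 - np.cos(theta))**2, theta), 3.0*np.pi)   # both ~9.42478
```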


A parametrization for the conchoid of Nicomedes is

α ∶ (−π/2, π/2) Ð→ R2 , α(θ) = (b sec θ + d)(cos θ, sin θ)

where now the line L is {x = b}, rotating the conchoid a quarter turn clockwise
from before, and where the parameter θ is the usual angle from the polar
coordinate system. Every point (x, y) on the conchoid satisfies the equation

(x2 + y 2 )(x − b)2 = d2 x2 .

A parametrization for the cissoid of Diocles is

α ∶ R Ð→ R2 ,   α(t) = (2at2/(1 + t2 ), 2at3/(1 + t2 )),
where the parameter t is tan θ, with θ being the usual angle from the polar
coordinate system.
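Both parametrizations can be tested numerically against implicit equations (a sketch of our own; the constants b, d, a are arbitrary, and the cissoid equation y2 (2a − x) = x3 is the standard one, not stated above):

```python
import numpy as np

# Conchoid: every point satisfies (x^2 + y^2)(x - b)^2 = d^2 x^2.
b, d = 1.0, 2.0
th = np.linspace(-1.4, 1.4, 1001)
x = (b/np.cos(th) + d) * np.cos(th)
y = (b/np.cos(th) + d) * np.sin(th)
print(np.max(np.abs((x**2 + y**2)*(x - b)**2 - d**2 * x**2)))   # ~0

# Cissoid: the parametrized points satisfy y^2 (2a - x) = x^3.
a = 1.0
t = np.linspace(-3.0, 3.0, 1001)
x = 2.0*a*t**2/(1.0 + t**2)
y = 2.0*a*t**3/(1.0 + t**2)
print(np.max(np.abs(y**2 * (2.0*a - x) - x**3)))                # ~0
```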

Exercises

8.2.1. (a) Let α ∶ I Ð→ Rn be a regular curve that doesn’t pass through the
origin, but has a point α(t0 ) of nearest approach to the origin. Show that
the position vector α(t0 ) and the velocity vector α′ (t0 ) are orthogonal. (Hint:
If u, v ∶ I Ð→ Rn are differentiable then ⟨u, v⟩′ = ⟨u′ , v⟩ + ⟨u, v ′ ⟩—this follows
quickly from the one-variable product rule.) Does the result agree with your
geometric intuition?
(b) Find a regular curve α ∶ I Ð→ Rn that does not pass through the origin
and does not have a point of nearest approach to the origin. Does an example
exist with I compact?
8.2.2. Let α be a regular parametrized curve with α′′ (t) = 0 for all t ∈ I. What
is the nature of α?

8.2.3. Let α ∶ I Ð→ Rn be a parametrized curve and let v ∈ Rn be a fixed


vector. Assume that ⟨α′ (t), v⟩ = 0 for all t ∈ I and that ⟨α(t0 ), v⟩ = 0 for
some t0 ∈ I. Prove that ⟨α(t), v⟩ = 0 for all t ∈ I. What is the geometric idea?

8.2.4. (a) Verify the parametrization of the conchoid given in this section.
(b) Verify the relation (x2 + y 2 )(x − b)2 = d2 x2 satisfied by points on the
conchoid.

8.2.5. (a) Verify the parametrization of the cissoid given in this section. Is
this parametrization regular? What happens to α(t) and α′ (t) as t → ∞?
(b) Verify Newton’s organic generation of the cissoid.

8.3 Parametrization by Arc Length

Recall that the trace of a curve is the set of points on the curve. Thinking
of a curve as time-dependent traversal makes it clear that different curves
may well have the same trace. That is, different curves can describe different
motions along the same path. For example, the curves

α ∶ [0, 2π] Ð→ R2 ,   α(t) = (cos t, sin t)
β ∶ [0, 2π] Ð→ R2 ,   β(t) = (cos 5t, sin 5t)
γ ∶ [0, 2π] Ð→ R2 ,   γ(t) = (cos t, − sin t)
δ ∶ [0, log(2π + 1)] Ð→ R2 ,   δ(t) = (cos(et − 1), sin(et − 1))

all have the unit circle as their trace, but their traversals of the circle are
different: α traverses it once counterclockwise at unit speed, β traverses it five
times counterclockwise at speed 5, γ traverses it once clockwise at unit speed,
and δ traverses it once counterclockwise at increasing speed.
Among the four traversals, α and δ are somehow basically the same, mov-
ing from the same starting point to the same ending point in the same direc-
tion, never stopping or backing up. The similarity suggests that we should be
able to modify one into the other. On the other hand, β and γ seem essentially
different from α and from each other. The following definition describes the
idea of adjusting a curve without changing its traversal in any essential way.

Definition 8.3.1 (Equivalence of curves). Two curves α ∶ I Ð→ Rn and


β ∶ I ′ Ð→ Rn are equivalent, written

α ∼ β,

if there exists a mapping φ ∶ I Ð→ I ′ , smooth with smooth inverse, with φ′ > 0


on I, such that
α = β ○ φ.
8.3 Parametrization by Arc Length 391

For example, consider the mapping

φ ∶ [0, 2π] Ð→ [0, log(2π + 1)], φ(s) = log(s + 1).

This mapping is differentiable and so is its inverse,

φ−1 ∶ [0, log(2π + 1)] Ð→ [0, 2π], φ−1 (t) = et − 1.

Also, φ′ (s) = 1/(s + 1) is positive for all s ∈ I. Again recalling the examples α
and δ, the calculation

(δ ○ φ)(s) = (cos(elog(s+1) − 1), sin(elog(s+1) − 1)) = (cos s, sin s) = α(s)

shows that α ∼ δ, as expected.


A similar calculation with φ−1 shows that also δ ∼ α. This symmetry is a
particular instance of a general rule, whose proof is an exercise in formalism.
Proposition 8.3.2 (Properties of equivalence). Let α, β, and γ be curves.
Then:
(1) α ∼ α.
(2) If α ∼ β then β ∼ α.
(3) If α ∼ β and β ∼ γ then α ∼ γ.
In words, the relation “∼” is reflexive, symmetric, and transitive.
Among a family of equivalent regular curves, one is canonical: the curve
that traverses at unit speed. Recall that the arc length of a curve α from t
to t′ is

L(t, t′ ) = ∫_{τ=t}^{t′} ∣α′ (τ )∣ dτ.

Definition 8.3.3 (Parametrization by arc length). The curve γ is pa-


rametrized by arc length if for all points in s, s′ ∈ I with s < s′ ,

L(s, s′ ) = s′ − s.

Equivalently, γ is parametrized by arc length if ∣γ ′ (s)∣ = 1 for all s ∈ I.


As in the definition just given, we adopt the convention that a curve
parametrized by arc length is by default denoted γ rather than α, and its
parameter denoted s rather than t.
To justify the intuition that every regular curve is equivalent to some curve
parametrized by arc length, we need two familiar theorems. The first version
of the one-variable fundamental theorem of integral calculus says:
Let f ∶ [a, b] Ð→ R be continuous. Define a function

F ∶ [a, b] Ð→ R,   F (x) = ∫_{a}^{x} f.

Then F is differentiable on [a, b] and F ′ = f .



And the one-variable inverse function theorem says:


Let f ∶ I Ð→ R have a continuous derivative on I with f ′ (x) ≠ 0
for all x ∈ I. Then the image of f is an interval I ′ , and f has a
differentiable inverse g ∶ I ′ Ð→ I. For each y ∈ I ′ , the derivative of the
inverse at y is given by the formula g ′ (y) = 1/f ′ (x) where x = g(y).
These theorems let us reparametrize every regular curve by arc length.

Proposition 8.3.4. Every regular curve is equivalent to a curve parametrized


by arc length.

Proof. Let α ∶ I Ð→ Rn be regular. Thus we are tacitly assuming that α is


smooth, so that in particular α′ is continuous. Pick any parameter value t0 ∈ I
and let p0 = α(t0 ) be the corresponding point on the curve. Define the arc
length function ℓ ∶ I Ð→ R by the formula

ℓ(t) = ∫_{τ=t0}^{t} ∣α′ (τ )∣ dτ.

By the fundamental theorem, ℓ is differentiable and ℓ′ (t) = ∣α′ (t)∣. Thus ℓ′


is continuous and never vanishes, so by the inverse function theorem ℓ has a
differentiable inverse ℓ−1 ∶ I ′ Ð→ I for some interval I ′ . Define a new curve
γ ∶ I ′ Ð→ Rn by γ = α ○ ℓ−1 . Thus α and γ are equivalent, and the following
diagram commutes (meaning that either path around the triangle yields the
same result):
          α
    I ----------> Rn
     \           ↗
    ℓ \        γ
       ↘
        I′
For all t ∈ I, letting s = ℓ(t), the chain rule gives an equality of vectors,

α′ (t) = (γ ○ ℓ)′ (t) = γ ′ (s) ℓ′ (t) = γ ′ (s) ∣α′ (t)∣,

and then taking absolute values gives an equality of scalars,

∣α′ (t)∣ = ∣γ ′ (s)∣ ∣α′ (t)∣.

Since ∣α′ (t)∣ > 0 for all t because α is regular, it follows that

∣γ ′ (s)∣ = 1 for all s ∈ I ′ .

Thus γ is parametrized by arc length. ⊓⊔




So every regular curve is equivalent to a curve parametrized by arc length.


The next question about a regular curve is whether its equivalent curve that
is parametrized by arc length is unique. The answer is essentially yes. The
only choice is the starting point, determined by the choice of t0 in the proof.
Explicitly reparametrizing by arc length can be a nuisance, because it
requires computing the inverse function ℓ−1 that we invoked in the abstract
during the course of reparametrizing. (This function can be doubly hard to
write down in elementary terms, because not only is it an inverse function,
but furthermore it is the inverse function of a forward function defined as an
integral.) Since the theory guarantees that each regular curve is equivalent to
a curve parametrized by arc length, when we prove theorems in the sequel,
we may assume that we are given such curves. But on the other hand, since
reparametrizing is nontrivial computationally, we want the formulas that we
will derive later in the chapter not to assume parametrization by arc length,
so that we can apply them to regular curves in general.
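Numerically, however, the reparametrization is routine; this sketch (ours) approximates ℓ for the helix by cumulative quadrature, inverts it by interpolation, and confirms unit speed:

```python
import numpy as np

# The helix alpha(t) = (a cos t, a sin t, b t) has constant speed sqrt(a^2 + b^2).
a, b = 1.0, 0.5
t = np.linspace(0.0, 2.0*np.pi, 10001)
speed = np.sqrt(a*a + b*b) * np.ones_like(t)

# ell(t) by cumulative trapezoid quadrature; ell increases, so invert by interpolation.
ell = np.concatenate([[0.0], np.cumsum((speed[1:] + speed[:-1])/2.0 * np.diff(t))])
s = np.linspace(0.0, ell[-1], 10001)
t_of_s = np.interp(s, ell, t)                  # numerical ell^{-1}

gamma = np.stack([a*np.cos(t_of_s), a*np.sin(t_of_s), b*t_of_s])
unit_speed = np.linalg.norm(np.gradient(gamma, s[1]-s[0], axis=1), axis=0)
print(unit_speed.min(), unit_speed.max())      # both ~1: gamma is parametrized by arc length
```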

Exercises

8.3.1. Show that the equivalence “∼” on curves is reflexive, symmetric, and
transitive.

8.3.2. The parametrized curve

α ∶ [0, +∞) Ð→ R2 , α(t) = (aebt cos t, aebt sin t)

(where a > 0 and b < 0 are real constants) is called a logarithmic spiral.
(a) Show that as t → +∞, α(t) spirals in toward the origin.
(b) Show that as t → +∞, L(0, t) remains bounded. Thus the spiral has
finite length.

8.3.3. Explicitly reparametrize each curve α ∶ I Ð→ Rn with a curve γ ∶ I ′ Ð→


Rn parametrized by arc length.
(a) The ray α ∶ R>0 Ð→ Rn given by α(t) = t2 v where v is some fixed
nonzero vector.
(b) The circle α ∶ R Ð→ R2 given by α(t) = (cos et , sin et ).
(c) The helix α ∶ [0, 2π] Ð→ R3 given by α(t) = (a cos t, a sin t, bt).
(d) The cycloid α ∶ [π/2, 3π/2] Ð→ R2 given by α(t) = (t − sin t, 1 − cos t).

8.4 Plane Curves: Curvature

Let γ ∶ I Ð→ R2 be a plane curve parametrized by arc length s. We next specify


a natural coordinate system at each point of γ. Its tangent vector T (s) is

T = γ′.

So to first order, the curve is moving in the T -direction. Its normal vec-
tor N (s) is the 90-degree counterclockwise rotation of T (s). Thus the Frenet
frame {T, N } is a positive basis of R2 consisting of orthogonal unit vectors.
Before proceeding, we need to establish two handy little facts that hold in
every dimension n.
Lemma 8.4.1. (a) Let v ∶ I Ð→ Rn be a smooth mapping such that ∣v(t)∣ = c
(where c is constant) for all t. Then

⟨v, v ′ ⟩ = 0.

(b) Let v, w ∶ I Ð→ Rn be smooth mappings such that ⟨v(t), w(t)⟩ = c (where c


is constant) for all t. Then

⟨w′ , v⟩ = −⟨v ′ , w⟩.

Proof. (a) ⟨v, v ′ ⟩ = (1/2)⟨v, v⟩′ = 0. (b) ⟨w′ , v⟩ + ⟨v ′ , w⟩ = ⟨v, w⟩′ = 0. ⊓⊔

We return to dimension n = 2. Part (a) of the lemma shows that the


derivative of the tangent vector is some scalar multiple of the normal vector,

T ′ = κN, κ = κ(s) ∈ R.

The scalar-valued function κ(s) is the curvature of γ. The curvature can be


positive, negative, or zero depending on whether to second order the curve is
bending counterclockwise toward N , clockwise away from N , or not at all.
In particular, if r is a positive real number then the curve

γ(s) = r(cos(s/r), sin(s/r)), s∈R

is a circle of radius r parametrized by arc length. The tangent vector T (s) =


γ ′ (s) and its derivative T ′ (s) = γ ′′ (s) are

T = (− sin(s/r), cos(s/r)),   T ′ = (1/r)(− cos(s/r), − sin(s/r)) = (1/r)N,

showing that the curvature of the circle is the reciprocal of its radius,

κ(s) = 1/r.
In general, the absolute curvature of a curve is the reciprocal of the radius of
the best-fitting circle to γ. We omit the proof of this.
Plausibly, if we are told only that γ is some curve parametrized by arc
length, that γ begins at some point p0 with initial tangent vector T0 , and that
γ has curvature function κ(s), then we can reproduce the curve γ. This is
true but beyond our scope to show here. Nonetheless it explains that:
The combination of a set of initial conditions and the local information
of the curvature at each point of a curve is enough to recover the curve
itself, a global object.

Hence the local information—the curvature—is of interest.


To see how the Frenet frame continually adjusts itself as γ is traversed, we
differentiate T and N . Since these are orthogonal unit vectors, their derivatives
resolve nicely into components via the inner product,

T ′ = ⟨T ′ , T ⟩T + ⟨T ′ , N ⟩N,
N ′ = ⟨N ′ , T ⟩T + ⟨N ′ , N ⟩N.

The condition T ′ = κN shows that the top row inner products are ⟨T ′ , T ⟩ = 0
and ⟨T ′ , N ⟩ = κ. Since N is a unit vector, ⟨N ′ , N ⟩ = 0 by part (a) of the
lemma, and since T and N are orthogonal, ⟨N ′ , T ⟩ = −⟨T ′ , N ⟩ = −κ by part (b).
Thus the Frenet equations for a curve parametrized by arc length can be
formulated as
[ T ′ ]   [  0  κ ] [ T ]
[ N ′ ] = [ −κ  0 ] [ N ].
The geometric idea is that as we move along the curve at unit speed, the
Frenet frame continually adjusts itself so that its first vector is tangent to
the curve in the direction of motion and the second vector is ninety degrees
counterclockwise to the first. The curvature is the rate (positive, negative, or
zero) at which the first vector is bending toward the second while the second
vector preserves the ninety-degree angle between them by bending away from
the first vector as much as the first vector is bending toward it.
Since γ ′ = T and thus γ ′′ = T ′ , the first and second derivatives of every
curve γ parametrized by arc length are expressed in terms of the Frenet frame,

[ γ ′  ]   [ 1  0 ] [ T ]
[ γ ′′ ] = [ 0  κ ] [ N ].

This matrix relation shows that the local canonical form of such a curve is,
up to quadratic order,

γ(s0 + s) ≈ γ(s0 ) + sγ ′ (s0 ) + s2 γ ′′ (s0 )


1
2
= γ(s0 ) + sT + s2 N.
κ
2
That is, in (T, N )-coordinates the curve is locally (s, (κ/2)s2 ), a parabola
at the origin that opens upward or downward or not at all, depending on κ.
If we view the curve in local coordinates as we traverse its length at unit
speed, we see the parabola change its shape as κ varies, possibly narrowing
and widening, or opening to a horizontal line and then bending the other way.
This periscope-view of γ, along with knowing γ(s) and γ ′ (s) for one value s
in the parameter domain, determines γ entirely.
We want a curvature formula for every regular smooth plane curve, not
necessarily parametrized by arc length,

α ∶ I Ð→ R2 .

To derive the formula, recall that the reparametrization of α by arc length is
the curve γ characterized by a relation involving the arc length function of α,

α = γ ○ ℓ,    where ℓ(t) = ∫^t ∣α′∣ and so ℓ′ = ∣α′∣.

By the chain rule, and then by the product rule and again the chain rule,

α′ = (γ ′ ○ ℓ) ⋅ ℓ′ ,
α′′ = (γ ′ ○ ℓ) ⋅ ℓ′′ + (γ ′′ ○ ℓ) ⋅ (ℓ′ )2 .

These relations and the earlier expressions of γ ′ and γ ′′ in terms of the Frenet
frame combine to give

⎡α′ ⎤   ⎡ℓ′     0  ⎤ ⎡γ′ ○ ℓ ⎤   ⎡ℓ′     0  ⎤ ⎡1  0⎤ ⎡T⎤
⎣α′′⎦ = ⎣ℓ′′  (ℓ′)²⎦ ⎣γ′′ ○ ℓ⎦ = ⎣ℓ′′  (ℓ′)²⎦ ⎣0  κ⎦ ⎣N⎦ .

Take determinants, recalling that ℓ′ = ∣α′ ∣,

det(α′ , α′′ ) = ∣α′ ∣3 κ.

Thus the curvature is

κ = det(α′, α′′)/∣α′∣³ = (x′y′′ − x′′y′)/((x′)² + (y′)²)^{3/2}    (α = (x, y) regular).

In particular, if a curve γ is parametrized by arc length then its curvature in


coordinates is

κ = det(γ ′ , γ ′′ ) = x′ y ′′ − x′′ y ′ (γ = (x, y) parametrized by arc length).
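
These formulas lend themselves to quick numeric sanity checks. Here is a
minimal sketch in Python (assuming NumPy; the function is ours, not from
the text), approximating the derivatives by finite differences and verifying
on a circle of radius 2 that the curvature formula is insensitive to the speed
of the parametrization:

    import numpy as np

    def plane_curvature(alpha, t, h=1e-5):
        """Curvature of a regular plane curve alpha: R -> R^2 at parameter t,
        via kappa = det(alpha', alpha'') / |alpha'|^3, with central
        differences standing in for the derivatives."""
        d1 = (alpha(t + h) - alpha(t - h)) / (2 * h)              # alpha'(t)
        d2 = (alpha(t + h) - 2 * alpha(t) + alpha(t - h)) / h**2  # alpha''(t)
        return (d1[0] * d2[1] - d1[1] * d2[0]) / np.linalg.norm(d1)**3

    # A circle of radius 2 traversed at double speed; the curvature should
    # be 1/2 at every parameter value, whatever the parametrization.
    circle = lambda t: 2.0 * np.array([np.cos(2 * t), np.sin(2 * t)])
    print(plane_curvature(circle, 0.7))  # approximately 0.5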

The fact that a plane curve lies on a circle if and only if its curvature
is constant cries out to be true. (If it isn’t, then our definitions must be
misguided.) And it is easy to prove using global coordinates. However, we
prove it by working with the Frenet frame, in anticipation of the less obvious
result for space curves to follow in the next section.
Proposition 8.4.2. Let γ ∶ I Ð→ R2 be regular. Then

γ lies on a circle ⇐⇒ κ(s) is a nonzero constant for all s ∈ I.

When these conditions hold, ∣κ∣ = 1/ρ where ρ > 0 is the radius of the circle.
Proof. We may assume that γ is parametrized by arc length.
( Ô⇒ ) We will zoom in on the global condition that γ lies on a circle,
differentiating repeatedly and using the Frenet frame as our coordinate sys-
tem. In the argument, γ and its derivatives depend on the parameter s, and

so does the curvature κ, but we omit s from the notation in order to keep the
presentation light. We are given that for some fixed point p ∈ R2 and some
fixed radius ρ > 0,
∣γ − p∣ = ρ.
And by the nature of the Frenet frame, γ − p decomposes as

γ − p = ⟨γ − p, T ⟩T + ⟨γ − p, N ⟩N. (8.1)

Since ∣γ − p∣ is constant, Lemma 8.4.1(a) gives ⟨γ − p, γ ′ ⟩ = 0, and now


Lemma 8.4.1(b) gives ⟨γ − p, γ ′′ ⟩ = −⟨γ ′ , γ ′ ⟩ = −1. Since γ ′ = T and γ ′′ = κN ,
these calculations have shown that ⟨γ − p, T ⟩ = 0 and ⟨γ − p, N ⟩ = −1/κ with
κ ≠ 0. Thus (8.1) is simply

γ − p = −(1/κ)N.

But since ∣γ − p∣ = ρ, it follows that 1/κ = ±ρ is constant, and so κ is constant,


as desired.
( ⇐Ô ) Assume that κ(s) is a nonzero constant. To show that γ − p =
−(1/κ)N , compute (using the Frenet equation N ′ = −κT ) the derivative

(γ + (1/κ)N )′ = T + (1/κ)(−κT ) = 0.

So γ +(1/κ)N is indeed some fixed vector p, and γ −p = −(1/κ)N , as expected.


It follows that γ lies on the circle of radius 1/∣κ∣ centered at p. ⊓⊔

Since N = (1/κ)γ ′′ , the previous proof has shown that the differential
equation
γ − p = −(1/κ)²γ′′
arises from uniform circular motion of radius 1/∣κ∣.

Exercises

8.4.1. (a) Let a and b be positive. Find the curvature of the ellipse α(t) =
(a cos(t), b sin(t)) for t ∈ R.
(b) Let a be positive and b be negative. Find the curvature of the loga-
rithmic spiral α(t) = (aebt cos t, aebt sin t) for t ≥ 0.
8.4.2. Let γ ∶ I Ð→ R2 be parametrized by arc length. Fix any unit vector
v ∈ R2 , and define a function
θ ∶ I Ð→ R
by the conditions

cos(θ(s)) = ⟨T (s), v⟩, sin(θ(s)) = −⟨N (s), v⟩.

Thus θ is the angle that the curve γ makes with the fixed direction v. Show
that θ′ = κ. Thus our notion of curvature does indeed measure the rate at
which γ is turning.

8.5 Space Curves: Curvature and Torsion


Now we discuss space curves similarly to the discussion of plane curves at the
end of the previous section. Let γ ∶ I Ð→ R3 be parametrized by arc length s.
Its tangent vector T (s) is
T = γ′.
So to first order, the curve is moving in the T -direction. Whenever T ′ is
nonzero, the curve’s curvature κ(s) and normal vector N (s) are defined
by the conditions
T ′ = κN, κ > 0.
(Be aware that although the same equation T ′ = κN appeared in the context
of plane curves, something different is happening now. For plane curves, N
was defined as the 90-degree counterclockwise rotation of T , and the condi-
tion ⟨T, T ⟩ = 1 forced T ′ to be normal to T and hence some scalar multiple
of N . The scalar was then given the name κ, and κ could be positive, neg-
ative, or zero depending on whether to second order the curve was bending
toward N , away from N , or not at all. But now, for space curves, the condi-
tions T ′ = κN and κ > 0 define both N and κ, assuming that T ′ ≠ 0. Again by
Lemma 8.4.1(a), T ′ is normal to T , and so N is normal to T , but now it makes
no sense to speak of N being counterclockwise to T , and now κ is positive.)
Assume that T ′ is always nonzero. Then the curve’s binormal vector is

B = T × N.

Thus, the Frenet frame {T, N, B} is a positive basis of R3 consisting of or-


thogonal unit vectors.
We want to differentiate T , N , and B. The derivatives resolve into com-
ponents,

T ′ = ⟨T ′ , T ⟩T + ⟨T ′ , N ⟩N + ⟨T ′ , B⟩B,
N ′ = ⟨N ′ , T ⟩T + ⟨N ′ , N ⟩N + ⟨N ′ , B⟩B,
B ′ = ⟨B ′ , T ⟩T + ⟨B ′ , N ⟩N + ⟨B ′ , B⟩B.

The definition
T ′ = κN
shows that the top row inner products are

⟨T ′ , T ⟩ = 0, ⟨T ′ , N ⟩ = κ, ⟨T ′ , B⟩ = 0.

And since N and B are unit vectors, the other two diagonal inner products
also vanish by Lemma 8.4.1(a),

⟨N ′ , N ⟩ = ⟨B ′ , B⟩ = 0.

Lemma 8.4.1(b) shows that the first inner product of the second row is the
negative of the second inner product of the first row,
8.5 Space Curves: Curvature and Torsion 399

⟨N ′ , T ⟩ = −⟨T ′ , N ⟩ = −κ,

and so only the third inner product of the second row is a new quantity,

N ′ = −κT + τ B for the scalar function τ = ⟨N ′ , B⟩.

The function τ is the torsion of γ. It can be positive, negative, or zero,


depending on whether to third order the curve is twisting out of the (T, N )-
plane toward B, away from B, or not at all. Similarly, the first and second
inner products of the third row are the negatives of the third inner products
of the first and second rows,

⟨B ′ , T ⟩ = −⟨T ′ , B⟩ = 0, ⟨B ′ , N ⟩ = −⟨N ′ , B⟩ = −τ.

All of the derivatives computed so far can be gathered into the Frenet equa-
tions,
⎡T′⎤   ⎡ 0   κ   0⎤ ⎡T⎤
⎢N′⎥ = ⎢−κ   0   τ⎥ ⎢N⎥ .
⎣B′⎦   ⎣ 0  −τ   0⎦ ⎣B⎦
The geometric idea is that as we move along the curve, the bending of the
first natural coordinate determines the second natural coordinate; the second
natural coordinate bends away from the first as much as the first is bending
toward it, in order to preserve the ninety-degree angle between them; the
remaining bending of the second coordinate is toward or away from the third
remaining orthogonal coordinate, which bends away from or toward the
second coordinate at the same rate, in order to preserve the ninety-degree
angle between them.
The relations γ ′ = T and γ ′′ = T ′ = κN and γ ′′′ = (κN )′ = κ′ N + κN ′ , and
the second Frenet equation N ′ = −κT + τ B combine to show that
⎡γ′  ⎤   ⎡  1   0    0⎤ ⎡T⎤
⎢γ′′ ⎥ = ⎢  0   κ    0⎥ ⎢N⎥ .
⎣γ′′′⎦   ⎣−κ²   κ′  κτ⎦ ⎣B⎦
This relation shows that the local canonical form of such a curve is, up to
third order,

γ(s0 + s) ≈ γ(s0) + sγ′(s0) + (1/2)s²γ′′(s0) + (1/6)s³γ′′′(s0)
          = γ(s0) + sT + (1/2)s²κN + (1/6)s³(−κ²T + κ′N + κτB)
          = γ(s0) + (s − (κ²/6)s³) T + ((κ/2)s² + (κ′/6)s³) N + (κτ/6)s³ B.

In planar cross sections:



• In the (T, N )-plane the curve is locally (s, (κ/2)s2 ), a parabola opening
upward at the origin (see Figure 8.11, viewing the curve down the positive
B-axis).
• In the (T, B)-plane the curve is locally (s, (κτ /6)s3 ), a cubic curve inflect-
ing at the origin, rising from left to right if τ > 0 and falling if τ < 0 (see
Figure 8.12, viewing the figure up the negative N -axis).
• In the (N, B)-plane the curve is locally ((κ/2)s2 , (κτ /6)s3 ), a curve in the
right half-plane with a cusp at the origin (see Figure 8.13, viewing the
curve down the positive T -axis).
The relation of the curve to all three local coordinate axes is shown in Fig-
ure 8.14.

Figure 8.11. Space curve in local coordinates, from above

Let α be a regular curve, not necessarily parametrized by arc length. We


want formulas for its curvature and torsion. Let γ be the reparametrization
of α by arc length, so that α = γ ○ ℓ where ℓ is the arc length of α. By the
chain rule,

α′ = (γ ′ ○ ℓ)ℓ′ ,
α′′ = (γ ′ ○ ℓ)ℓ′′ + (γ ′′ ○ ℓ)ℓ′2 ,
α′′′ = (γ ′ ○ ℓ)ℓ′′′ + 3(γ ′′ ○ ℓ)ℓ′ ℓ′′ + (γ ′′′ ○ ℓ)ℓ′3 .

These relations and the earlier expressions of γ ′ and γ ′′ in terms of the Frenet
frame combine to give
⎡α′  ⎤   ⎡ℓ′      0     0 ⎤ ⎡γ′ ○ ℓ  ⎤   ⎡ℓ′      0     0 ⎤ ⎡  1   0    0⎤ ⎡T⎤
⎢α′′ ⎥ = ⎢ℓ′′    ℓ′²    0 ⎥ ⎢γ′′ ○ ℓ ⎥ = ⎢ℓ′′    ℓ′²    0 ⎥ ⎢  0   κ    0⎥ ⎢N⎥ .
⎣α′′′⎦   ⎣ℓ′′′  3ℓ′ℓ′′  ℓ′³⎦ ⎣γ′′′ ○ ℓ⎦   ⎣ℓ′′′  3ℓ′ℓ′′  ℓ′³⎦ ⎣−κ²   κ′  κτ⎦ ⎣B⎦

Figure 8.12. Space curve in local coordinates, from the side

Figure 8.13. Space curve in local coordinates, from the front

Thus α′ × α′′ = ℓ′T × (∗T + ℓ′²κN) = ℓ′³κB, and since ℓ′ = ∣α′∣, this gives
the curvature,

κ = ∣α′ × α′′∣ / ∣α′∣³.

Similarly, det(α′, α′′, α′′′) = ℓ′⁶κ²τ, giving the torsion,

τ = det(α′, α′′, α′′′) / ∣α′ × α′′∣².
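
A quick numeric check of these two formulas, in the same Python style as
before (NumPy assumed, names ours), at a point where the derivatives are
known exactly:

    import numpy as np

    def curvature_torsion(d1, d2, d3):
        """kappa and tau at a point of a regular space curve, from the
        first three derivative vectors there, via the formulas above."""
        cross = np.cross(d1, d2)
        kappa = np.linalg.norm(cross) / np.linalg.norm(d1)**3
        tau = np.linalg.det(np.column_stack([d1, d2, d3])) / (cross @ cross)
        return kappa, tau

    # Twisted cubic alpha(t) = (t, t^2, t^3) at t = 0:
    # alpha' = (1,0,0), alpha'' = (0,2,0), alpha''' = (0,0,6),
    # so kappa = |(0,0,2)|/1^3 = 2 and tau = 12/2^2 = 3 there.
    print(curvature_torsion(np.array([1., 0., 0.]),
                            np.array([0., 2., 0.]),
                            np.array([0., 0., 6.])))  # (2.0, 3.0)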

Figure 8.14. Space curve in local coordinates

As mentioned, the counterpart of Proposition 8.4.2 for space curves is


considerably less obvious. Local measurements answer the global question
whether we are moving on a sphere.

Theorem 8.5.1. Let γ ∶ I Ð→ R3 be regular, with curvature and torsion func-


tions κ, τ never zero. Consider the reciprocal curvature and torsion functions,

r = 1/κ, t = 1/τ.

Assume also that κ′ never vanishes. Then

γ lies on a sphere ⇐⇒ r2 + (r′ t)2 is constant.

When these conditions hold, r2 + (r′ t)2 = ρ2 where ρ > 0 is the radius of the
sphere.

Proof. We may assume that γ is parametrized by arc length.


( Ô⇒ ) As in the proof of Proposition 8.4.2, we zoom in on the global
condition that γ lies on a sphere, differentiating repeatedly and using the
Frenet frame. We are given that for some fixed point p ∈ R3 and some fixed
radius ρ > 0,
∣γ − p∣ = ρ.
And by the nature of the Frenet frame, γ − p decomposes as

γ − p = ⟨γ − p, T ⟩T + ⟨γ − p, N ⟩N + ⟨γ − p, B⟩B. (8.2)

Because ∣γ − p∣ is constant, Lemma 8.4.1(a) gives ⟨γ − p, γ ′ ⟩ = 0; next,


Lemma 8.4.1(b) and the fact that γ is parametrized by arc length com-
bine to give ⟨γ − p, γ ′′ ⟩ = −⟨γ ′ , γ ′ ⟩ = −1; and now Lemma 8.4.1(b) and then
Lemma 8.4.1(a) (again using the parametrization by arc length) give ⟨γ −
p, γ ′′′ ⟩ = −⟨γ ′ , γ ′′ ⟩ = 0. Since γ ′ = T and γ ′′ = κN and γ ′′′ = −κ2 T + κ′ N + κτ B,
the first two calculations have shown that ⟨γ − p, T ⟩ = 0 and ⟨γ − p, N ⟩ = −1/κ
with κ ≠ 0, and the third one therefore has shown that

0 = ⟨γ − p, −κ2 T + κ′ N + κτ B⟩ = −κ′ /κ + κτ ⟨γ − p, B⟩,

from which ⟨γ − p, B⟩ = κ′ /(κ2 τ ). Thus the description (8.2) of γ − p in the


Frenet frame is
γ − p = −(1/κ)N + κ′ /(κ2 τ )B.
Because we have defined r = 1/κ, so that r′ = −κ′ /κ2 , and because t = 1/τ , we
have
γ − p = −rN − r′ tB.
And thus r2 + (r′ t)2 = ρ2 .
( ⇐Ô ) We expect that γ = p − rN − r′ tB. So let δ = γ + rN + r′ tB and
compute δ ′ using the Frenet equations and various other results,

δ′ = T + r′N + r(−κT + τB) + (r′′t + r′t′)B − r′tτN
   = (1 − rκ)T + (r′ − r′tτ)N + (rτ + r′′t + r′t′)B
   = (r/t + r′′t + r′t′) B.

But r² + (r′t)² is constant, so its derivative is zero,

0 = 2rr′ + 2r′t(r′′t + r′t′) = 2r′t (r/t + r′′t + r′t′) .
Thus δ′ = 0 (here is where we use the hypothesis that κ′ never vanishes:
it prevents r′ from vanishing) and so indeed δ is some fixed vector p. Thus
γ = p − rN − r′tB as expected, and ∣γ − p∣² is the constant r² + (r′t)². ⊓⊔

Exercise

8.5.1. (a) Let a and b be positive. Compute the curvature κ and the torsion τ
of the helix α(t) = (a cos t, a sin t, bt).
(b) How do κ and τ behave if a is held constant and b → ∞?
(c) How do κ and τ behave if a is held constant and b → 0?
(d) How do κ and τ behave if b is held constant and a → ∞?
(e) How do κ and τ behave if b is held constant and a → 0?

8.6 General Frenet Frames and Curvatures


This section extends the Frenet frame to any number of dimensions. As with
plane curves and space curves, the basic idea is to take the derivatives of the
curve and straighten them out, giving rise to a coordinate system of orthogonal
unit vectors where each new direction takes one more derivative—and hence
one more degree of the curve’s behavior—into account. The result is a local
coordinate system natural to the curve at each of its points.
Let n ≥ 2 be an integer, and let

α ∶ I Ð→ Rn

be a regular curve. For t ∈ I define the first Frenet vector of α at t to be


(suppressing the t from the notation for brevity)

F1 = α′ /∣α′ ∣.

Thus F1 is a unit vector pointing in the same direction as the tangent vector
of α at t.
Assuming that F1′ never vanishes and that n ≥ 3, next define the first
curvature κ1 (t) of α at t and the second Frenet vector F2 (t) of α at t by the
conditions
F1′ = κ1 F2 , κ1 > 0, ∣F2 ∣ = 1.
Since ∣F1 ∣ = 1 for all t, it follows from Lemma 8.4.1(a) that ⟨F2 , F1 ⟩ = 0.
Because ⟨F2 , F1 ⟩ = 0, Lemma 8.4.1(b) gives ⟨F2′ , F1 ⟩ = −⟨F1′ , F2 ⟩ = −κ1 .
Assuming that F2′ + κ1 F1 never vanishes and that n ≥ 4, define the second
curvature κ2 (t) and the third Frenet vector F3 (t) by the conditions

F2′ = −κ1 F1 + κ2 F3 , κ2 > 0, ∣F3 ∣ = 1.

Then ⟨F3 , F1 ⟩ = 0, because −κ1 F1 is the F1 -component of F2′ . Again by


Lemma 8.4.1(a), ⟨F2′ , F2 ⟩ = 0, so that (since also ⟨F1 , F2 ⟩ = 0) ⟨F3 , F2 ⟩ = 0.
In general, suppose that 2 ≤ k ≤ n−2, and suppose that we have t-dependent
orthogonal unit Frenet vectors F1 , . . . , Fk and t-dependent positive curvature
functions κ1 , . . . , κk−1 such that (defining κ0 F0 = 0 for convenience)

Fj′ = −κj−1 Fj−1 + κj Fj+1 , j = 1, . . . , k − 1.

Since ⟨Fk , Fk ⟩ = 1, it follows by Lemma 8.4.1(a) that ⟨Fk′ , Fk ⟩ = 0. And since


⟨Fk , Fj ⟩ = 0 for j = 1, . . . , k − 1, it follows by Lemma 8.4.1(b) that for such j,

⟨Fk′, Fj⟩ = −⟨Fj′, Fk⟩
          = −⟨−κj−1 Fj−1 + κj Fj+1, Fk⟩
          = {  0       if j = 1, . . . , k − 2,
            { −κk−1    if j = k − 1.


So, assuming that Fk′ ≠ −κk−1 Fk−1 , define κk and Fk+1 by the conditions

Fk′ = −κk−1 Fk−1 + κk Fk+1 , κk > 0, ∣Fk+1 ∣ = 1.

Then the relation κk Fk+1 = Fk′ + κk−1 Fk−1 shows that ⟨Fk+1 , Fj ⟩ = 0 for j =
1, . . . , k. Use this process, assuming the nonvanishing that is needed, until
κn−2 and Fn−1 have been defined. Thus if n = 2 then the process consists only
of defining F1 ; if n = 3 then the process also defines κ1 and F2 ; if n = 4 then
the process further defines κ2 and F3 ; and so on.
Finally, define the nth Frenet vector Fn as the unique unit vector orthogo-
nal to F1 through Fn−1 such that det(F1 , F2 , . . . , Fn ) > 0, and then define the
(n − 1)st curvature κn−1 by the condition

Fn−1′ = −κn−2 Fn−2 + κn−1 Fn .

The (n − 1)st curvature need not be positive. By Lemma 8.4.1(b) yet again,
we have Fn′ = −κn−1 Fn−1 , and so the Frenet equations are
⎡F1′  ⎤   ⎡  0    κ1                      ⎤ ⎡F1  ⎤
⎢F2′  ⎥   ⎢−κ1     0    κ2                ⎥ ⎢F2  ⎥
⎢F3′  ⎥   ⎢      −κ2     0    κ3          ⎥ ⎢F3  ⎥
⎢ ⋮   ⎥ = ⎢        ⋱     ⋱     ⋱         ⎥ ⎢ ⋮  ⎥ .
⎢ ⋮   ⎥   ⎢              ⋱     ⋱      ⋱  ⎥ ⎢ ⋮  ⎥
⎢Fn−1′⎥   ⎢              −κn−2    0  κn−1 ⎥ ⎢Fn−1⎥
⎣Fn′  ⎦   ⎣                    −κn−1    0 ⎦ ⎣Fn  ⎦
The first n−1 Frenet vectors and the first n−2 curvatures can also be obtained
by applying the Gram–Schmidt process (see Exercise 2.2.16) to the vectors
α′ , . . . , α(n−1) .
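
In code, the Gram–Schmidt description is immediate. A sketch in Python
(NumPy assumed, names ours), computing the first two Frenet vectors of a
helix at a point:

    import numpy as np

    def frenet_vectors(derivs):
        """First Frenet vectors F1, ..., Fk at a point, by Gram-Schmidt on
        the derivative vectors alpha', ..., alpha^(k), which are assumed
        linearly independent there."""
        frame = []
        for d in derivs:
            v = d - sum(np.dot(d, F) * F for F in frame)  # strip earlier components
            frame.append(v / np.linalg.norm(v))
        return frame

    # Helix (cos t, sin t, t) at t = 0: alpha' = (0,1,1), alpha'' = (-1,0,0).
    F1, F2 = frenet_vectors([np.array([0., 1., 1.]), np.array([-1., 0., 0.])])
    print(F1, F2)  # F1 = (0,1,1)/sqrt(2), F2 = (-1,0,0)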
The Frenet vectors and the curvatures are independent of parametrization.
To see this, let α̃ ∶ Ĩ Ð→ Rn be a second curve equivalent to α. That is,

α = α̃ ○ φ

where φ ∶ I Ð→ Ĩ is smooth and has a smooth inverse, and φ′ > 0 on I. By the


chain rule,
α′ (t) = α̃′ (t̃) ⋅ φ′ (t) where t̃ = φ(t).
Thus α′ (t) and α̃′ (t̃) point in the same direction (because φ′ (t) > 0), and so
the corresponding first Frenet vectors are equal,

F1 (t) = F̃1 (t̃).

Since the curvatures and the rest of the Frenet vectors are described in terms
of derivatives of the first Frenet vector with respect to its variable, it follows
that the Frenet vectors and the curvatures are independent of parametrization,
as claimed,

F̃i (t̃) = Fi (t) for i = 1, . . . , n


and
κ̃i (t̃) = κi (t) for i = 1, . . . , n − 1.

Since the curvatures describe the curve in local terms, they should be
unaffected by passing the curve through a rigid motion. The remainder of this
section establishes this invariance property of curvature, partly because doing
so provides us an excuse to describe the rigid motions of Euclidean space.
Definition 8.6.1. The square matrix A ∈ Mn (R) is orthogonal if At A = I.
That is, A is orthogonal if A is invertible and its transpose is its inverse. The
set of n × n orthogonal matrices is denoted On (R).
It is straightforward to check (Exercise 8.6.1(b)) that
• the identity matrix I is orthogonal,
• if A and B are orthogonal then so is the product AB,
• and if A is orthogonal then so is the inverse A−1 .
These three facts, along with the fact that matrix multiplication is associative,
show that the orthogonal matrices form a group under matrix multiplication.
Some examples of orthogonal matrices are

⎡1   0⎤      ⎡cos θ  − sin θ⎤
⎣0  −1⎦ ,    ⎣sin θ    cos θ⎦    for every θ ∈ R.

Orthogonal matrices are characterized by the property that they preserve


inner products. That is, the following equivalence holds:

A ∈ On (R) ⇐⇒ ⟨Ax, Ay⟩ = ⟨x, y⟩ for all x, y ∈ Rn

(Exercise 8.6.2(a)). Consequently, multiplying vectors by an orthogonal ma-


trix A preserves their lengths and the angles between them. Also, if A ∈ Mn (R)
is orthogonal then det A = ±1 (Exercise 8.6.2(b)).
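
Both properties are easy to probe numerically. A small Python check (NumPy
assumed, names ours):

    import numpy as np

    def is_orthogonal(A, tol=1e-12):
        """Test the defining property A^T A = I."""
        return np.allclose(A.T @ A, np.eye(A.shape[0]), atol=tol)

    theta = 0.8
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(is_orthogonal(R), np.linalg.det(R))  # True 1.0: special orthogonal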
The orthogonal matrices of determinant 1 form the special orthogonal
group, denoted SOn (R). These matrices not only preserve length and angle,
but in addition they preserve orientation. Thus

⟨Ax, Ay⟩ = ⟨x, y⟩, x, y ∈ Rn ,


A ∈ SOn (R) ⇐⇒ {
det(Ax1 , . . . , Axn ) = det(x1 , . . . , xn ), x1 , . . . , xn ∈ Rn .

It is straightforward to check that SOn (R) forms a subgroup of On (R).


Definition 8.6.2. A bijective mapping R ∶ Rn Ð→ Rn is called rigid if

⟨R(x) − R(p), R(y) − R(p)⟩ = ⟨x − p, y − p⟩ for all p, x, y ∈ Rn .

That is, rigid maps preserve the geometry of vector differences. The next
proposition characterizes rigid mappings.

Proposition 8.6.3. The mapping R ∶ Rn Ð→ Rn is rigid if and only if R


takes the form R(x) = Ax + b with A ∈ On (R) and b ∈ Rn .

Proof. Verifying that every mapping R(x) = Ax + b where A ∈ On (R) and


b ∈ Rn is rigid is Exercise 8.6.3.
Now let R ∶ Rn Ð→ Rn be rigid. Define a related mapping

S ∶ Rn Ð→ Rn , S(x) = R(x) − R(0).

It suffices to show that S(x) = Ax for some A ∈ On (R). A small calculation


shows that S preserves inner products: for every x, y ∈ Rn ,

⟨S(x), S(y)⟩ = ⟨R(x) − R(0), R(y) − R(0)⟩ = ⟨x − 0, y − 0⟩ = ⟨x, y⟩.

Especially, if {e1 , . . . , en } is the standard basis of Rn then {S(e1 ), . . . , S(en )}


is again an orthonormal basis of Rn . Furthermore, ⟨S(x), S(ei )⟩ = ⟨x, ei ⟩ for
every x ∈ Rn and for i = 1, . . . , n. That is,

S(x1 , . . . , xn ) = x1 S(e1 ) + ⋯ + xn S(en ).

This shows that S(x) = Ax where A has columns S(e1 ), . . . , S(en ). Since
⟨S(ei ), S(ej )⟩ = ⟨ei , ej ⟩ for i, j ∈ {1, . . . , n}, in fact A ∈ On (R), as desired. ⊓⊔

Definition 8.6.4. A congruence is a rigid map R(x) = Ax + b where A is


special orthogonal.

With congruences understood, it is easy to show that they preserve cur-


vatures. Consider a regular curve

α ∶ I Ð→ Rn ,

let R(x) = Ax + b be a congruence, and define a second curve

α̃ ∶ I Ð→ Rn , α̃ = R ○ α.

Then for every t ∈ I,

α̃′ (t) = R′ (α(t)) ⋅ α′ (t) = Aα′ (t).

Thus the first Frenet vectors of the two curves satisfy the relation

F̃1 = AF1 ,

and similarly for their derivatives,

κ̃1 F̃2 = F̃1′ = (AF1 )′ = AF1′ = Aκ1 F2 = κ1 AF2 ,

so that since κ̃1 and κ1 are positive and ∣F̃2 ∣ = 1 = ∣F2 ∣ = ∣AF2 ∣,

κ̃1 = κ1 and F̃2 = AF2 .



Similarly,
κ̃i = κi , i = 1, . . . , n − 1
and
F̃i = AFi , i = 1, . . . , n.
We need A to be special orthogonal rather than just orthogonal in order that
this argument apply to the last Frenet vector and the last curvature. If A is
orthogonal but not special orthogonal then F̃n = −AFn and κ̃n−1 = −κn−1 .

Exercises

8.6.1. (a) Are the following matrices orthogonal?

⎡− cos θ   sin θ⎤        1   ⎡1   0    2⎤      ⎡a  b⎤
⎢               ⎥ ,     ───  ⎢0  √5    0⎥ ,    ⎢    ⎥ .
⎣  sin θ   cos θ⎦       √5   ⎣2   0   −1⎦      ⎣0  d⎦
(b) Confirm that the identity matrix I is orthogonal, that if A and B are
orthogonal then so is the product AB, and that if A is orthogonal then so is
its inverse A−1 .

8.6.2. (a) Prove that a matrix A ∈ Mn (R) is orthogonal if and only if


⟨Ax, Ay⟩ = ⟨x, y⟩ for all x, y ∈ Rn . (The fact that ⟨v, w⟩ = v t w essentially
gives ( Ô⇒ ). For ( ⇐Ô ), show that At A has (i, j)th entry ⟨Aei , Aej ⟩ for
i, j = 1, . . . , n, and recall that In is the matrix whose (i, j)th entry is δij .)
(b) Prove that every matrix A ∈ On (R) has determinant det A = ±1.

8.6.3. Prove that every mapping R(x) = Ax + b where A ∈ On (R) and b ∈ Rn


is rigid.
9
Integration of Differential Forms

The integration of differential forms over surfaces is characteristic of a fully


developed mathematical theory: it starts from carefully preconfigured defini-
tions and proceeds to one central theorem, whose proof is purely mechanical
because of how the definitions are rigged. Furthermore, much of the work is
algebraic, even though the theorem appears analytical. Since the motivation
for the definitions is not immediately obvious, the early stages of working
through such a body of material can feel unenlightening, but the payoff lies in
the lucidity of the later arguments and the power of the end result. The main
theorem here is often called Stokes’s theorem, but in fact it is a generaliza-
tion not only of the classical Stokes’s theorem (which is not due to Stokes; he
just liked to put it on his exams), but also of other nineteenth-century results
called the divergence theorem (or Gauss’s theorem) and Green’s theorem, and
even of the fundamental theorem of integral calculus. In fact, a better name
for the theorem to be presented here is the general FTIC.
The definitions of a surface and of the integral of a function over a surface
are given in Section 9.1. Formulas for particular integrals called flow and flux
integrals are derived in Section 9.2. The theory to follow is designed partly to
handle such integrals easily. The definitions of a differential form and of the
integral of a differential form over a surface are given in Section 9.3, and the
definitions are illustrated by examples in Sections 9.4 and 9.5. Sections 9.6
through 9.9 explain the algebraic rules of how to add differential forms and
multiply them by scalars, how to multiply differential forms, how to differen-
tiate them, and how to pass them through changes of variable. A change of
variable theorem for differential forms follows automatically in Section 9.10.
A construction of antiderivatives of forms is given in Section 9.11. Returning
to surfaces, Sections 9.12 and 9.13 define a special class of surfaces called
cubes, and a geometric boundary operator from cubes to cubes of lower di-
mension. The general FTIC is proved in Section 9.14. Section 9.15 sketches
how it leads to another proof of the classical change of variable theorem. Fi-
nally, Section 9.16 explains how the classical vector integration theorems are


special cases of the general FTIC, and Section 9.17 takes a closer look at some
of the quantities that arise in this context.

9.1 Integration of Functions over Surfaces

Having studied integration over solid regions in Rn , i.e., over subsets of Rn


with positive n-dimensional volume, we face the new problem of how to inte-
grate over surfaces of lower dimension in Rn . For example, the circle in R2 is
one-dimensional, and the torus surface in R3 is two-dimensional. Each of these
sets has volume zero as a subset of its ambient space, in which it is curving
around. In general, whatever the yet-undefined notion of a k-dimensional sub-
set of Rn means, such objects will have volume zero when k < n, and so any
attempt to integrate over them in the sense of Chapter 6 will give an integral
of zero and a dull state of affairs. Instead, the idea is to parametrize surfaces
in Rn and then define integration over a parametrized surface in terms of
integration over a noncurved parameter space.

Definition 9.1.1 (Parametrized surface). Let A be an open subset of Rn .


A k-surface in A is a smooth mapping

Φ ∶ D Ð→ A,

where D is a compact connected subset of Rk whose boundary has volume zero.


The set D is called the parameter domain of Φ.

See Figure 9.1. Here are some points to note about Definition 9.1.1:
• Recall that a subset A of Rn is called open if its complement is closed.
The definitions in this chapter need the environment of an open subset
rather than all of Rn in order to allow for functions that are not defined
everywhere. For instance, the reciprocal modulus function

1/∣ ⋅ ∣ ∶ Rn − {0} Ð→ R

is defined only on surfaces that avoid the origin. In most of the examples,
A will be all of Rn , but Exercise 9.11.1 will touch on how the subject
becomes more nuanced when it is not.
• Recall also that compact means closed and bounded. Connected means
that D consists of only one piece, as discussed informally in Section 2.4.
And as discussed informally in Section 6.5 and formally in Section 6.8, the
boundary of a set consists of all points simultaneously near the set and
near its complement—roughly speaking, its edge. Typically D will be some
region that is easy to integrate over, such as a box, whose compactness,
connectedness, and small boundary are self-evident.

• The word smooth in the definition means that the mapping Φ extends
to some open superset of D in Rk , on which it has continuous partial
derivatives of all orders. Each such partial derivative is therefore again
smooth. All mappings in this chapter are assumed to be smooth.
• When we compute, coordinates in parameter space will usually be written
as (u1 , . . . , uk ), and coordinates in Rn as (x1 , . . . , xn ).
• It may be disconcerting that a surface is by definition a mapping rather
than a set, but this is for good reason. Just as the integration of Chapter 6
was facilitated by distinguishing between functions and their outputs, the
integration of this chapter is facilitated by viewing the surfaces over which
we integrate as mappings rather than their images.
• A parametrized curve, as in Definition 8.2.1, is precisely a 1-surface.

Figure 9.1. A surface

When k = 0, Definition 9.1.1 is a little tricky. By convention, R0 is the


set of all points with no coordinates, each of the no coordinates being a real
number. (Our definition of Rn at the beginning of Chapter 2 danced around
this issue by requiring that n be positive.) There is exactly one such point,
the point (). That is, R0 consists of a single point, naturally called 0 even
though it is not (0). A 0-surface in Rn is thus a mapping

Φp ∶ R0 Ð→ Rn , Φp (0) = p,

where p is some point in Rn . In other words, Φp simply parametrizes the


point p. At the other dimensional extreme, if k = n then every compact con-
nected subset D of Rn naturally defines a corresponding n-surface in Rn by
trivially parametrizing itself,

∆ ∶ D Ð→ Rn , ∆(u) = u for all u ∈ D.



Thus Definition 9.1.1 of a surface as a mapping is silly in the particular cases


of k = 0 and k = n, when it amounts to parametrizing points using the empty
point as a parameter domain, or parametrizing solids by taking them to be
their own parameter domains and having the identity mapping map them to
themselves. But for intermediate values of k, i.e., 0 < k < n, we are going to in-
tegrate over k-dimensional subsets of Rn by traversing them, and parametriz-
ing is the natural way to do so. Especially, a 1-surface is a parametrized curve,
and a 2-surface is a parametrized surface in the usual sense of surface as in
Figure 9.1.
Let A be an open subset of Rn , let Φ ∶ D Ð→ A be a k-surface in A, and
let f ∶ A Ð→ R be a smooth function. As mentioned above, if k < n then the
integral of f over Φ(D) in the sense of Chapter 6 is zero, because Φ(D) is of
lower dimension than its ambient space Rn . However, the integral of f over Φ
can be defined more insightfully.
For each point u of the parameter domain D, the n × k derivative matrix
Φ′ (u) has as its columns vectors that are naturally viewed as tangent vectors
to Φ at Φ(u), the jth column being tangent to the curve in Φ that arises from
motion in the jth direction of the parameter domain. In symbols, the matrix
is
Φ′ (u) = [v1 ⋯ vk ]n×k ,
where each column vector vj is
               ⎡Dj Φ1 (u)⎤
vj = Dj Φ(u) = ⎢    ⋮    ⎥      .
               ⎣Dj Φn (u)⎦ n×1
The parallelepiped spanned by these vectors (see Figure 9.2) has a naturally
defined k-dimensional volume.

Figure 9.2. Tangent parallelepiped



Definition 9.1.2 (Volume of a parallelepiped). Let v1 , . . . , vk be vectors


in Rn . Let V be the n × k matrix with these vectors as its columns. Then the
k-volume of the parallelepiped spanned by the {vj } is

volk (P(v1 , . . . , vk )) = √(det(V T V )) .    (9.1)

In coordinates, this formula is



volk (P(v1 , . . . , vk )) = √(det ( [vi ⋅ vj ]i,j=1,...,k )) ,    (9.2)

where vi ⋅ vj is the inner product of vi and vj .

The matrix V in this definition is n × k, and its transpose V T is k × n, so


neither of them need be square. But the product V T V is square, k × k, and
this is the matrix whose determinant is being taken. Equation (9.2) follows
immediately from (9.1), because
        ⎡v1T⎤                 ⎡v1 ⋅ v1  ⋯  v1 ⋅ vk⎤
V T V = ⎢ ⋮ ⎥ [v1 ⋯ vk ] =    ⎢   ⋮     ⋱     ⋮   ⎥ = [vi ⋅ vj ]i,j=1,...,k .
        ⎣vkT⎦                 ⎣vk ⋅ v1  ⋯  vk ⋅ vk⎦
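
Formula (9.1) is one line of code. A sketch in Python (NumPy assumed,
names ours):

    import numpy as np

    def vol_k(vectors):
        """k-volume of the parallelepiped spanned by the given vectors
        in R^n, via formula (9.1): sqrt(det(V^T V))."""
        V = np.column_stack(vectors)
        return np.sqrt(np.linalg.det(V.T @ V))

    # Two orthogonal vectors of lengths 2 and 3 in R^3 span a rectangle
    # of area 6.
    print(vol_k([np.array([2., 0., 0.]), np.array([0., 3., 0.])]))  # 6.0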
For example, if k = 1 and γ ∶ [a, b] Ð→ Rn is a 1-surface (i.e., a curve)
in Rn , then its derivative matrix at a point u of [a, b] has one column,
         ⎡γ1′ (u)⎤
γ′ (u) = ⎢   ⋮   ⎥ .
         ⎣γn′ (u)⎦

Consequently, formula (9.2) is

length(γ′ (u)) = √(γ′ (u) ⋅ γ′ (u)).

That is, Definition 9.1.2 for k = 1 specializes to the definition of ∣γ′∣ as √(γ′ ⋅ γ′)
from Section 2.2. (Here and throughout this chapter, we drop the notational
convention that curves named γ are parametrized by arc length; thus no as-
sumption is present that ∣γ ′ ∣ = 1.) At the other extreme, if k = n then for-
mula (9.1) is
voln (P(v1 , . . . , vn )) = ∣ det(v1 , . . . , vn )∣.
That is, Definition 9.1.2 for k = n recovers the interpretation of ∣ det ∣ as volume
from Section 3.8. When k = 2, formula (9.2) is

area(P(v1 , v2 )) = √(∣v1∣²∣v2∣² − (v1 ⋅ v2)²)
                  = √(∣v1∣²∣v2∣²(1 − cos² θ12))
                  = ∣v1∣ ∣v2∣ ∣sin θ12∣,

giving the familiar formula for the area of a parallelogram. When k = 2 and also
n = 3, we can study the formula further by working in coordinates. Consider
two vectors u = (xu , yu , zu ) and v = (xv , yv , zv ). An elementary calculation
shows that the quantity under the square root in the previous display works
out to
∣u∣2 ∣v∣2 − (u ⋅ v)2 = ∣u × v∣2 .
So when k = 2 and n = 3, Definition 9.1.2 subsumes the familiar formula
area(P(v1 , v2 )) = ∣v1 × v2 ∣.
Here is an argument that (9.2) is the appropriate formula for the k-
dimensional volume of the parallelepiped spanned by the vectors v1 , . . . , vk
in Rn . (The fact that the vectors are tangent vectors to a k-surface is irrele-
vant to this discussion.) Results from linear algebra guarantee that there exist
vectors vk+1 , . . . , vn in Rn such that
• each of vk+1 through vn is a unit vector orthogonal to all the other vj ,
• det(v1 , . . . , vn ) ≥ 0.
Recall the notation in Definition 9.1.2 that V is the n×k matrix with columns
v1 , . . . , vk . Augment V to an n × n matrix W by adding the remaining vj as
columns too,
W = [v1 ⋯ vn ] = [V vk+1 ⋯ vn ] .
The scalar det(W ) is the n-dimensional volume of the parallelepiped spanned
by v1 , . . . , vn . But by the properties of vk+1 through vn , this scalar should
also be the k-dimensional volume of the parallelepiped spanned by v1 , . . . ,
vk . That is, the natural definition is (using the second property of v1 , . . . , vn
for the second equality to follow)
volk (P(v1 , . . . , vk )) = det(W ) = √((det W )²) = √(det(W T ) det(W ))
                          = √(det(W T W )).
The first property of v1 , . . . , vn shows that

W T W = ⎡V T V        0k×(n−k)⎤
        ⎣0(n−k)×k     In−k    ⎦ ,

so that det(W T W ) = det(V T V ), and the natural definition becomes the
desired formula,

volk (P(v1 , . . . , vk )) = √(det(V T V )).
The argument here generalizes the ideas used in Section 3.10 to suggest a
formula for the area of a 2-dimensional parallelogram in R3 as a 3 × 3 deter-
minant. Thus the coordinate calculation sketched in the previous paragraph
to recover the relation between parallelogram area and cross product length
in R3 was unnecessary.
With k-dimensional volume in hand, we can naturally define the integral
of a function over a k-surface.

Definition 9.1.3 (Integral of a function over a surface). Let A be an


open subset of Rn . Let Φ ∶ D Ð→ A be a k-surface in A. Let f ∶ Φ(D) Ð→ R
be a function such that f ○ Φ is smooth. Then the integral of f over Φ is

∫_Φ f = ∫_D (f ○ Φ) volk (P(D1 Φ, . . . , Dk Φ)).

In particular, the k-dimensional volume of Φ is

volk (Φ) = ∫_Φ 1 = ∫_D volk (P(D1 Φ, . . . , Dk Φ)).

By Definition 9.1.2, the k-volume factor in the surface integral is

volk (P(D1 Φ, . . . , Dk Φ)) = √(det(Φ′T Φ′)) = √(det([Di Φ ⋅ Dj Φ]i,j=1,...,k)) .

The idea of Definition 9.1.3 is that as a parameter u traverses the parameter


domain D, the composition f ○ Φ samples the function f over the surface,
while the k-volume factor makes the integral the limit of sums of many f -
weighted small tangent parallelepiped k-volumes over the surface rather than
the limit of sums of many (f ○ Φ)-weighted small box volumes over the pa-
rameter domain. (See Figure 9.3.) The k-volume factor itself is not small, as
seen in Figure 9.2, but it is the ratio of the small parallelepiped k-volume to
the small box volume shown in Figure 9.3.

Figure 9.3. Integrating over a surface

For example, let r be a positive real number and consider a 2-surface in R3 ,

Φ ∶ [0, 2π] × [0, π] Ð→ R3 , Φ(θ, ϕ) = (r cos θ sin ϕ, r sin θ sin ϕ, r cos ϕ).

This surface is the 2-sphere of radius r. Since the sphere is a surface of revo-
lution, its area is readily computed by methods from a first calculus course,
but we do so with the ideas of this section to demonstrate their use. The
derivative vectors are
     ⎡−r sin θ sin ϕ⎤        ⎡r cos θ cos ϕ⎤
v1 = ⎢ r cos θ sin ϕ⎥ , v2 = ⎢r sin θ cos ϕ⎥ ,
     ⎣      0       ⎦        ⎣   −r sin ϕ  ⎦
and so the integrand of the surface area integral is

√(∣v1∣²∣v2∣² − (v1 ⋅ v2)²) = √(r⁴ sin² ϕ) = r² sin ϕ

(note that sin ϕ ≥ 0 because ϕ ∈ [0, π]). Therefore the area is

area(Φ) = r² ∫_{θ=0}^{2π} ∫_{ϕ=0}^{π} sin ϕ = 4πr².

The fact that the sphere-area magnification factor r2 sin ϕ is the familiar vol-
ume magnification factor for spherical coordinates is clear geometrically: to
traverse the sphere, the spherical coordinates θ and ϕ vary while r stays con-
stant, and when r does vary, it moves orthogonally to the sphere-surface so
that the incremental volume is the incremental surface-area times the incre-
mental radius-change. Indeed, the vectors v1 and v2 from a few displays back
are simply the second and third columns of the spherical change of variable
derivative matrix. The reader can enjoy checking that the first column of the
spherical change of variable derivative matrix is indeed a unit vector orthog-
onal to the second and third columns.
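
The computation is also easy to replicate numerically. A midpoint-rule
sketch in Python (NumPy assumed, names ours), approximating the partial
derivatives by finite differences:

    import numpy as np

    def surface_area(Phi, u_lim, v_lim, n=200, h=1e-6):
        """Approximate the area of a 2-surface over a rectangular parameter
        domain by integrating the factor sqrt(det(Phi'^T Phi'))."""
        (u0, u1), (v0, v1) = u_lim, v_lim
        du, dv = (u1 - u0) / n, (v1 - v0) / n
        total = 0.0
        for u in u0 + du * (np.arange(n) + 0.5):
            for v in v0 + dv * (np.arange(n) + 0.5):
                D1 = (Phi(u + h, v) - Phi(u - h, v)) / (2 * h)
                D2 = (Phi(u, v + h) - Phi(u, v - h)) / (2 * h)
                gram = np.array([[D1 @ D1, D1 @ D2],
                                 [D2 @ D1, D2 @ D2]])
                total += np.sqrt(np.linalg.det(gram)) * du * dv
        return total

    sphere = lambda t, p: np.array([np.cos(t) * np.sin(p),
                                    np.sin(t) * np.sin(p), np.cos(p)])
    print(surface_area(sphere, (0, 2 * np.pi), (0, np.pi)))  # about 4*pi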
The integral in Definition 9.1.3 seems to depend on the surface Φ as a
parametrization rather than merely as a set, but in fact, the integral is unaf-
fected by reasonable changes of parametrization, because of the change of vari-
able theorem. To see this, let A be an open subset of Rn , and let Φ ∶ D Ð→ A
and Ψ ∶ D̃ Ð→ A be k-surfaces in A. Suppose that there exists a smoothly in-
vertible mapping T ∶ D Ð→ D ̃ such that Ψ ○T = Φ. In other words, T is smooth,
T is invertible, its inverse is also smooth, and the following diagram commutes
(meaning that either path around the triangle yields the same result):

        Φ
   D ──────▶ A
   │         ▲
 T │         │ Ψ
   ▼         │
   D̃ ────────╯

When such a mapping T exists, Ψ is called a reparametrization of Φ.


Let f ∶ A Ð→ R be any smooth function. Then the integral of f over the
reparametrization Ψ of Φ is

∫_D̃ (f ○ Ψ ) √(det(Ψ′T Ψ′)).

By the change of variable theorem, since D̃ = T (D), this integral is

∫_D (f ○ Ψ ○ T ) √(det((Ψ′ ○ T )T (Ψ′ ○ T ))) ∣det(T ′)∣.

But ∣det(T ′)∣ = √(det(T ′)²) = √(det(T ′T ) det(T ′)), so this becomes

∫_D (f ○ Ψ ○ T ) √(det(T ′T (Ψ′ ○ T )T (Ψ′ ○ T ) T ′)),

and by the general matrix rule B T AT AB = (AB)T AB, this is in turn

∫_D (f ○ Ψ ○ T ) √(det(((Ψ′ ○ T )T ′)T (Ψ′ ○ T )T ′)).

Finally, since Ψ ○ T = Φ, the chain rule shows that we have

∫_D (f ○ Φ) √(det(Φ′T Φ′)),

giving the integral of f over the original surface Φ, as desired.

Exercises

9.1.1. Consider two vectors u = (xu , yu , zu ) and v = (xv , yv , zv ). Calculate


that ∣u∣2 ∣v∣2 − (u ⋅ v)2 = ∣u × v∣2 .

9.1.2. Consider two vectors u = (xu , yu , zu ) and v = (xv , yv , zv ). Calculate


that the area of the parallelogram spanned by u and v is the square root
of the sum of the squares of the areas of the parallelogram’s shadows in the
(x, y)-plane, the (y, z)-plane, and the (z, x)-plane.

9.1.3. Let f (x, y, z) = x² + yz.


(a) Integrate f over the box B = [0, 1]3 .
(b) Integrate f over the parametrized curve

γ ∶ [0, 2π] Ð→ R3 , γ(t) = (cos t, sin t, t).

(c) Integrate f over the parametrized surface

S ∶ [0, 1]2 Ð→ R3 , S(u, v) = (u + v, u − v, v).

(d) Integrate f over the parametrized solid

V ∶ [0, 1]3 Ð→ R3 , V (u, v, w) = (u + v, v − w, u + w).

9.1.4. Find the surface area of the upper half of the cone at fixed angle ϕ from
the z-axis, extended outward to radius a. That is, the surface is the image of
the spherical coordinate mapping with ϕ fixed at some value between 0 and π
as ρ varies from 0 to a and θ varies from 0 to 2π.

9.1.5. (a) Let D ⊂ Rk be a parameter domain, and let f ∶ D Ð→ R be a


smooth function. Recall from Exercise 2.4.3 that the graph of f is a subset
of Rk+1 ,
G(f ) = {(u, f (u)) ∶ u ∈ D}.
Note that f is a k-surface in R, while the surface that captures the idea of the
graph of f as a k-surface in Rk+1 is not f itself but rather

Φ ∶ D Ð→ Rk+1 , Φ(u) = (u, f (u)).

Derive a formula for the k-dimensional volume of Φ. In particular, show that


when k = 2, the formula is

area(Φ) = ∫_D √(1 + (D1 f)² + (D2 f)²).

(b) What is the area of the graph of the function f ∶ D Ð→ R (where D is


the unit disk in the plane) given by f (x, y) = x² + y²?

9.2 Flow and Flux Integrals


Let A be an open subset of Rn . A mapping F ∶ A Ð→ Rn is also called a
vector field on A. (The usage of field here is unrelated to the field axioms.) If
γ ∶ I Ð→ A is a curve in A and u is a point of I, then the flow of F along γ
at u is the scalar component of F (γ(u)) tangent to γ at γ(u). If Φ ∶ D Ð→ A
is an (n − 1)-surface in A and u is a point of D, then the flux of F through Φ
at u is the scalar component of F normal to Φ at Φ(u). Surface integrals
involving the flow or the flux of a vector field arise naturally. If F is viewed as
a force field then its flow integrals, also called line integrals, measure the work
of moving along curves γ in A. If F is viewed as a velocity field describing the
motion of some fluid then its flux integrals measure the rate at which fluid
passes through permeable membranes Φ in A. Each of the classical theorems
of vector integral calculus to be proved at the end of this chapter involves a
flow integral or a flux integral.
Flow and flux integrals have a more convenient form than the general
integral of a function over a surface, in that the k-volume factor from Defini-
tion 9.1.3 (an unpleasant square root) cancels, and what remains is naturally
expressed in terms of determinants of the derivatives of the component func-
tions of Φ. These formulas rapidly become complicated, so the point of this
section is only to see what form they take.
Working first in two dimensions, consider a vector field,

F = (F1 , F2 ) ∶ R2 Ð→ R2 ,

and a curve,
γ = (γ1 , γ2 ) ∶ [a, b] Ð→ R2 .

Assuming that the derivative γ ′ is always nonzero but not assuming that γ is
parametrized by arc length, the unit tangent vector to γ at the point γ(u),
pointing in the direction of the traversal, is
T̂(γ(u)) = γ′(u) / ∣γ′(u)∣.

Note that the denominator is the length factor in Definition 9.1.3. The parallel
component of F (γ(u)) along T̂(γ(u)) has magnitude (F ⋅ T̂)(γ(u)). (See Ex-
ercise 2.2.15.) Therefore the net flow of F along γ in the direction of traversal
is ∫γ F ⋅ T̂. By Definition 9.1.3, this flow integral is

∫_γ F ⋅ T̂ = ∫_{u=a}^{b} F (γ(u)) ⋅ (γ′(u)/∣γ′(u)∣) ∣γ′(u)∣ = ∫_{u=a}^{b} F (γ(u)) ⋅ γ′(u),    (9.3)

and the length factor has canceled. In coordinates, the flow integral is

∫_γ F ⋅ T̂ = ∫_{u=a}^{b} ((F1 ○ γ)γ1′ + (F2 ○ γ)γ2′)(u).    (9.4)

On the other hand, for every vector (x, y) ∈ R2 , define (x, y)× = (−y, x). (This
seemingly ad hoc procedure of negating one of the vector entries and then
exchanging them will be revisited soon as a particular manifestation of a
general idea.) The unit normal vector to the curve γ at the point γ(u), at
angle π/2 counterclockwise from T̂(γ(u)), is

N̂ (γ(u)) = γ′(u)× / ∣γ′(u)∣.

Therefore the net flux of F through γ counterclockwise to the direction of
traversal is the flux integral

∫_γ F ⋅ N̂ = ∫_{u=a}^{b} F (γ(u)) ⋅ γ′(u)×,    (9.5)

or, in coordinates,

∫_γ F ⋅ N̂ = ∫_{u=a}^{b} ((F2 ○ γ)γ1′ − (F1 ○ γ)γ2′)(u).    (9.6)
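
A numeric rendition of these two formulas, in Python (NumPy assumed,
names ours). For the radial field F (x, y) = (x, y) on the counterclockwise
unit circle the flow is 0, and the flux is −2π because here N̂ points inward
(ninety degrees counterclockwise from the tangent):

    import numpy as np

    def flow_and_flux(F, gamma, a, b, n=20000, h=1e-6):
        """Flow (9.3) and flux (9.5) of a plane vector field along a curve,
        by the midpoint rule, with gamma' from central differences."""
        dt = (b - a) / n
        flow = flux = 0.0
        for u in a + dt * (np.arange(n) + 0.5):
            d = (gamma(u + h) - gamma(u - h)) / (2 * h)  # gamma'(u)
            F1, F2 = F(gamma(u))
            flow += (F1 * d[0] + F2 * d[1]) * dt  # integrand of (9.4)
            flux += (F2 * d[0] - F1 * d[1]) * dt  # integrand of (9.6)
        return flow, flux

    F = lambda p: (p[0], p[1])
    circle = lambda t: np.array([np.cos(t), np.sin(t)])
    print(flow_and_flux(F, circle, 0.0, 2 * np.pi))  # (0.0, -6.283...)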

Next let n = 3 and modify the vector field F suitably to

F = (F1 , F2 , F3 ) ∶ R3 Ð→ R3 .

The intrinsic expression (9.3) for the flow integral of F along a curve γ remains
unchanged in R3 , making the 3-dimensional counterpart of (9.4) in coordinates
obvious,

∫_γ F ⋅ T̂ = ∫_{u=a}^{b} ((F1 ○ γ)γ1′ + (F2 ○ γ)γ2′ + (F3 ○ γ)γ3′)(u).

As for the flux integral, consider a 2-surface in R3 ,

Φ = (Φ1 , Φ2 , Φ3 ) ∶ D Ð→ R3 .

Assuming that the two columns D1 Φ and D2 Φ of the derivative matrix Φ′ are
always linearly independent, a unit normal to the surface Φ at the point Φ(u)
(where now u = (u1 , u2 )) is obtained from their cross product,

N̂ (Φ(u)) = (D1 Φ(u) × D2 Φ(u)) / ∣D1 Φ(u) × D2 Φ(u)∣.

By property CP6 of the cross product, the denominator in this expression is


the area of the parallelogram spanned by D1 Φ(u) and D2 Φ(u), and this is
the area factor in Definition 9.1.3 of the surface integral. Therefore this factor
cancels in the flux integral of F through Φ in the N̂-direction,

∫_Φ F ⋅ N̂ = ∫_{u∈D} F (Φ(u)) ⋅ (D1 Φ(u) × D2 Φ(u)),    (9.7)

or, in coordinates,

                    ⎛ (F1 ○ Φ)(D1 Φ2 D2 Φ3 − D1 Φ3 D2 Φ2 )⎞
∫_Φ F ⋅ N̂ = ∫_{u∈D} ⎜+(F2 ○ Φ)(D1 Φ3 D2 Φ1 − D1 Φ1 D2 Φ3 )⎟ (u).    (9.8)
                    ⎝+(F3 ○ Φ)(D1 Φ1 D2 Φ2 − D1 Φ2 D2 Φ1 )⎠

Whereas the 2-dimensional flow and flux integrands and the 3-dimensional
flow integrand involved derivatives γj′ of the 1-surface γ, the integrand here
contains the determinants of all 2 × 2 subblocks of the 3 × 2 derivative matrix
of the 2-surface Φ,
     ⎡D1 Φ1  D2 Φ1 ⎤
Φ′ = ⎢D1 Φ2  D2 Φ2 ⎥ .
     ⎣D1 Φ3  D2 Φ3 ⎦
The subdeterminants give a hint about the general picture. Nonetheless, (9.8)
is forbidding enough that we should pause and think before trying to compute
more formulas.
For general n, formula (9.3) for the flow integral of a vector field along a
curve generalizes transparently,

∫_γ F ⋅ T̂ = ∫_{u=a}^{b} ((F ○ γ) ⋅ γ′)(u) = ∫_{u=a}^{b} ( ∑_{i=1}^{n} (Fi ○ γ)γi′ )(u).    (9.9)

But the generalization of formulas (9.5) through (9.8) to a formula for the flux
integral of a vector field in Rn through an (n − 1)-surface is not so obvious.
Based on (9.7), the intrinsic formula should be

∫_Φ F ⋅ N̂ = ∫_{u∈D} ((F ○ Φ) ⋅ (D1 Φ × ⋯ × Dn−1 Φ))(u),    (9.10)

where the (n − 1)-fold cross product on Rn is analogous to the 2-fold cross


product on R3 from Section 3.10. That is, the cross product should be orthog-
onal to each of the multiplicand-vectors, its length should be their (n − 1)-
dimensional volume, and when the multiplicands are linearly independent,
they should combine with their cross product to form a positive basis of Rn .
Such a cross product exists by methods virtually identical to those of
Section 3.10. What is special to three dimensions is that the cross product is
binary, i.e., it is a twofold product. In coordinates, a mnemonic formula for
the cross product in R3 , viewing the vectors as columns, is
              ⎡          e1⎤
v1 × v2 = det ⎢ v1   v2  e2⎥ .
              ⎣          e3⎦
This formula appeared in row form in Section 3.10, and it makes the corre-
sponding formula for the cross product of n − 1 vectors in Rn inevitable,
                    ⎡                e1⎤
v1 × ⋯ × vn−1 = det ⎢ v1  ⋯  vn−1    ⋮ ⎥ .    (9.11)
                    ⎣                en⎦
For example, a single vector v = (x, y) in R2 has a sort of cross product,

v× = det ⎡x  e1⎤ = (−y, x).
         ⎣y  e2⎦

This is the formula that appeared with no explanation as part of the flux
integral in R2 . That is, the generalization (9.10) of the 3-dimensional flux
integral to higher dimensions also subsumes the 2-dimensional case. Returning
to Rn , the cross product of the vectors D1 Φ(u),. . . ,Dn−1 Φ(u) is
                             ⎡                          e1⎤
(D1 Φ × ⋯ × Dn−1 Φ)(u) = det ⎢ D1 Φ(u)  ⋯  Dn−1 Φ(u)    ⋮ ⎥ .
                             ⎣                          en⎦
This determinant can be understood better by considering the data in the
matrix as rows. Recall that for i = 1, . . . , n, the ith row of the n × (n − 1)
derivative matrix Φ′ is the derivative matrix of the ith component function
of Φ,
Φ′i (u) = [D1 Φi (u) ⋯ Dn−1 Φi (u)] .
In terms of these component function derivatives, the general cross product is
                             ⎡Φ1′(u)  e1⎤                   ⎡e1  Φ1′(u)⎤
(D1 Φ × ⋯ × Dn−1 Φ)(u) = det ⎢  ⋮      ⋮⎥ = (−1)^{n−1} det ⎢ ⋮      ⋮ ⎥
                             ⎣Φn′(u)  en⎦                   ⎣en  Φn′(u)⎦

              ⎛    ⎡Φ2′(u)⎤          ⎡Φ1′(u)⎤          ⎡Φ1′(u)⎤      ⎞
              ⎜    ⎢Φ3′(u)⎥          ⎢Φ3′(u)⎥          ⎢Φ2′(u)⎥      ⎟
 = (−1)^{n−1} ⎜det ⎢Φ4′(u)⎥ e1 − det ⎢Φ4′(u)⎥ e2 + det ⎢Φ4′(u)⎥ e3 − ⋯⎟
              ⎜    ⎢   ⋮  ⎥          ⎢   ⋮  ⎥          ⎢   ⋮  ⎥      ⎟
              ⎝    ⎣Φn′(u)⎦          ⎣Φn′(u)⎦          ⎣Φn′(u)⎦      ⎠

                                         ⎡Φ1′(u)   ⎤
                n                        ⎢   ⋮     ⎥
 = (−1)^{n−1}   ∑   (−1)^{i−1}       det ⎢Φi−1′(u) ⎥ ei .
               i=1                       ⎢Φi+1′(u) ⎥
                                         ⎢   ⋮     ⎥
                                         ⎣Φn′(u)   ⎦
Thus finally, the general flux integral in coordinates is

                                                             ⎡Φ1′  ⎤
                                                             ⎢  ⋮  ⎥
                                 n                           ⎢Φi−1′⎥
∫_Φ F ⋅ N̂ = (−1)^{n−1} ∫_{u∈D} ( ∑ (−1)^{i−1} (Fi ○ Φ) det ⎢Φi+1′⎥ )(u).    (9.12)
                                i=1                          ⎢  ⋮  ⎥
                                                             ⎣Φn′  ⎦

The integrand here contains the determinants of all (n − 1) × (n − 1) subblocks


of the n × (n − 1) derivative matrix of the (n − 1)-surface Φ. The best way
to understand the notation of (9.12) is to derive (9.6) and (9.8) from it by
setting n = 2 and then n = 3.
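
The mnemonic (9.11) also translates directly into code. A sketch in Python
(NumPy assumed, names ours) that expands along the column of standard
basis vectors, so that the ith coordinate of the product is (−1)^{n−1}(−1)^{i−1}
times the determinant of the matrix with its ith row deleted:

    import numpy as np

    def cross_n(vectors):
        """Cross product of n-1 vectors in R^n, via the cofactor
        expansion of (9.11)."""
        M = np.column_stack(vectors)            # n x (n-1)
        n = M.shape[0]
        out = np.empty(n)
        for i in range(n):                      # 0-based i, so (-1)**i here
            minor = np.delete(M, i, axis=0)     # all rows but the ith
            out[i] = (-1) ** (n - 1) * (-1) ** i * np.linalg.det(minor)
        return out

    print(cross_n([np.array([1., 0., 0.]),
                   np.array([0., 1., 0.])]))    # e1 x e2 = (0, 0, 1)
    print(cross_n([np.array([3., -1.])]))       # (3,-1)^x = (1, 3)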
We end this section by mentioning one more integral. Let k = 2 and let
n = 4, and consider a 2-surface in R4 ,

Φ = (Φ1 , Φ2 , Φ3 , Φ4 ) ∶ D Ð→ R4 .

Note that Φ′ is a 4 × 2 matrix,


⎡Φ′ ⎤ ⎡D1 Φ1 D2 Φ 1 ⎤
⎢ 1⎥ ⎢ ⎥
⎢Φ′ ⎥ ⎢D Φ D2 Φ 2 ⎥
⎢ ⎥ ⎢ ⎥
Φ = ⎢ 2′ ⎥ = ⎢ 1 2 ⎥,
⎢Φ3 ⎥ ⎢D1 Φ3 D2 Φ 3 ⎥

⎢ ′⎥ ⎢ ⎥
⎢Φ4 ⎥ ⎢D1 Φ4 D2 Φ 4 ⎥
⎣ ⎦ ⎣ ⎦
so that any two of its rows form a square matrix. Consider also any six smooth
functions
F1,2 , F1,3 , F1,4 , F2,3 , F2,4 , F3,4 ∶ R4 Ð→ R.
Then we can define an integral,

        ⎛ (F1,2 ○ Φ) det ⎡Φ1′⎤ + (F1,3 ○ Φ) det ⎡Φ1′⎤ + (F1,4 ○ Φ) det ⎡Φ1′⎤ ⎞
        ⎜                ⎣Φ2′⎦                  ⎣Φ3′⎦                  ⎣Φ4′⎦ ⎟
∫_{u∈D} ⎜                                                                    ⎟ (u).
        ⎜+(F2,3 ○ Φ) det ⎡Φ2′⎤ + (F2,4 ○ Φ) det ⎡Φ2′⎤ + (F3,4 ○ Φ) det ⎡Φ3′⎤ ⎟
        ⎝                ⎣Φ3′⎦                  ⎣Φ4′⎦                  ⎣Φ4′⎦ ⎠
                                                                        (9.13)
Since the surface Φ is not 1-dimensional, this is not a flow integral. And since
Φ is not (n − 1)-dimensional, it is not a flux integral either. Nonetheless, since
the integrand contains the determinants of all 2 × 2 subblocks of the 4 × 2
derivative matrix of the 2-surface Φ, it is clearly cut from the same cloth as
the flow and flux integrands of this section. The ideas of this chapter will
encompass this integral and many others in the same vein.
As promised at the beginning of this section, the k-volume factor has
canceled in flow and flux integrals, and the remaining integrand features de-
terminants of the derivatives of the component functions of the surface of
integration. Rather than analyze such cluttered integrals, the method of this
chapter is to abstract their key properties into symbol-patterns, and then
work with the patterns algebraically instead. An analysis tracking all the de-
tails of the original setup would be excruciating to follow, not to mention
being unimaginable to recreate ourselves. Instead, we will work insightfully,
economy of ideas leading to ease of execution. Since the definitions to fol-
low do indeed distill the essence of vector integration, they will enable us to
think fluently about the phenomena that we encounter. This is real progress in
methodology, much less laborious than the classical approach. Indeed, having
seen the modern argument, it is unimaginable to want to recreate the older
one.

Exercises

9.2.1. Show that the n-dimensional cross product defined by the formula
(9.11) satisfies the property

⟨v1 × ⋯ × vn−1 , w⟩ = det(v1 , . . . , vn−1 , w) for all w ∈ Rn .

As in Section 3.10, this property characterizes the cross product uniquely.


Are there significant differences between deriving the properties of the cross
product from its characterization (cf. Proposition 3.10.2) in n dimensions
rather than in 3?
9.2.2. Derive equations (9.6) and (9.8) from equation (9.12).

9.3 Differential Forms Syntactically and Operationally


We need objects to integrate over surfaces, objects whose integrals encompass
at least the general flow integral (9.9) and flux integral (9.12) of the previous
section. Let A be an open subset of Rn . The objects are called differential


forms of order k on A or simply k-forms on A. Thus a k-form ω is some
sort of mapping
ω ∶ {k-surfaces in A} Ð→ R.
Naturally, the value ω(Φ) will be denoted ∫Φ ω. The definition of a k-form
will come in two parts. The first is syntactic: it doesn’t say what a k-form is
as a function of k-surfaces, only what kind of name a k-form can have. This
definition requires some preliminary vocabulary: an ordered k-tuple from
{1, . . . , n} is a vector

(i1 , . . . , ik ) with each ij ∈ {1, . . . , n}.

For example, the ordered 3-tuples from {1, 2} are

(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2).

A sum over the ordered k-tuples from {1, . . . , n} means simply a sum of terms
with each term corresponding to a distinct k-tuple. Thus we may think of an
ordered k-tuple (i1 , . . . , ik ) as a sort of multiple index or multiple subscript,
and for this reason we often will abbreviate it to I. These multiple subscripts
will figure prominently throughout this chapter, so you should get comfortable
with them. Exercise 9.3.1 provides some practice.

Definition 9.3.1 (Syntax of differential forms). Let A be an open subset


of Rn . A 0-form on A is a smooth function f ∶ A Ð→ R. For k ≥ 1, a k-form
on A is an element of the form
∑_{i1,...,ik=1}^{n} f(i1,...,ik) dxi1 ∧ ⋯ ∧ dxik ,

or

∑_I fI dxI ,

where each I = (i1 , . . . , ik ) is an ordered k-tuple from {1, . . . , n} and each fI


is a smooth function fI ∶ A Ð→ R.

Make the convention that the empty set I = ∅ is the only ordered 0-tuple
from {1, . . . , n}, and that the corresponding empty product dx∅ is 1. Then
the definition of a k-form for k ≥ 1 in Definition 9.3.1 also makes sense for
k = 0, and it subsumes the special definition that was given for k = 0.
For example, a differential form for n = 3 and k = 1 is

e^{x+y+z} dx + sin(yz) dy + x²z dz,

and a differential form for n = 2 and k = 2 is

y dx ∧ dx + e^x dx ∧ dy + y cos x dy ∧ dx,
9.3 Differential Forms Syntactically and Operationally 425

with the missing dy ∧ dy term tacitly understood to have the zero function as
its coefficient-function f(2,2) (x, y), and hence to be zero itself. The expression
(1/x) dx

is a 1-form on the open subset A = {x ∈ R ∶ x ≠ 0} of R, but it is not a 1-form
on all of R. The hybrid expression

z dx ∧ dy + e^x dz

is not a differential form, because it mixes an order-2 term and an order-1


term.
Before completing the definition of differential form, we need one more
piece of terminology. If M is an n × k matrix and I = (i1 , . . . , ik ) is an ordered
k-tuple from {1, . . . , n}, then MI denotes the square k × k matrix comprising
the Ith rows of M . For example, if
    ⎡1  2⎤
M = ⎢3  4⎥ ,
    ⎣5  6⎦

and if I = (3, 1), then

MI = ⎡5  6⎤ .
     ⎣1  2⎦
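
In NumPy terms (a tiny illustration, assuming 1-based tuples as in the text):

    import numpy as np

    M = np.array([[1, 2], [3, 4], [5, 6]])
    I = (3, 1)
    M_I = M[[i - 1 for i in I]]  # the Ith rows of M: row 3, then row 1
    print(M_I)                   # [[5 6]
                                 #  [1 2]]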
The second part of the definition of a k-form explains how to integrate it over
a k-surface. In this definition, a differential form in the sense of Definition 9.3.1
is called a syntactic differential form.
Definition 9.3.2 (Integration of differential forms). Let A be an open
subset of Rn . For k = 0, a syntactic 0-form ω = f on A gives rise to a function
of 0-surfaces in A, also called ω,

ω ∶ {0-surfaces in A} Ð→ R,

defined by the rule that for every point p ∈ A,

ω(Φp ) = f (p).

That is, integrating ω over a one-point surface consists simply in evaluating f


at the point. For k ≥ 1, a syntactic k-form ω = ∑I fI dxI on A gives rise to a
function of k-surfaces in A, also called ω,

ω ∶ {k-surfaces in A} Ð→ R,

defined by the rule that for every k-surface Φ ∶ D Ð→ A,

ω(Φ) = ∫_D ∑_I (fI ○ Φ) det Φ′I .    (9.14)

For all k, the integral of ω over Φ is defined to be ω(Φ),

∫_Φ ω = ω(Φ).

Formula (9.14), defining ω(Φ), is the key for everything to follow in this
chapter. It defines an integral over the image Φ(D), which may have volume
zero in Rn , by pulling back—this term will later be defined precisely—to an
integral over the parameter domain D, which is a full-dimensional set in Rk
and hence has positive k-dimensional volume.
Under Definition 9.3.2, the integral of a differential form over a surface
depends on the surface as a mapping, i.e., as a parametrization. However, it
is a straightforward exercise to show that the multivariable change of
variable theorem implies that the integral is unaffected by reasonable changes
of parametrization.
Returning to formula (9.14): despite looking like the flux integral (9.12),
it may initially be impenetrable to the reader who (like the author) does not
assimilate notation quickly. The next two sections will illustrate the formula
in specific instances, after which its general workings should be clear. Before
long, you will have an operational understanding of the definition.
Operational understanding should be complemented by structural under-
standing. The fact that the formal consequences of Definitions 9.3.1 and 9.3.2
subsume the main results of classical integral vector calculus still doesn’t ex-
plain these ad hoc definitions conceptually. For everything to play out so
nicely, the definitions must somehow be natural rather than merely clever,
and a structural sense of why they work so well might let us extend the ideas
to other contexts rather than simply tracking them. Indeed, differential forms
fit into a mathematical structure called a cotangent bundle, with each differ-
ential form being a section of the bundle. The construction of the cotangent
bundle involves the dual space of the alternation of a tensor product, all of
these formidable-sounding technologies being utterly Platonic mathematical
objects. However, understanding this language requires an investment in ideas
and abstraction, and in the author’s judgment the startup cost is much higher
without some experience first. Hence the focus of the chapter is purely op-
erational. Since formula (9.14) may be opaque to the reader for now, the
first order of business is to render it transparent by working easy concrete
examples.

Exercises

9.3.1. Write out all ordered k-tuples from {1, . . . , n} in the cases n = 4, k = 1;
n = 3, k = 2. In general, how many ordered k-tuples I = (i1 , . . . , ik ) from
{1, . . . , n} are there? How many of these are increasing, meaning that i1 <
⋯ < ik ? Write out all increasing k-tuples from {1, 2, 3, 4} for k = 1, 2, 3, 4.

9.3.2. An expression ω = ∑I fI dxI in which the sum is over only increasing k-tuples from {1, . . . , n} is called a standard presentation of ω. Write out
explicitly what a standard presentation for a k-form on R4 looks like for
k = 0, 1, 2, 3, 4.

9.4 Examples: 1-Forms

A k-form is a function of k-surfaces. That is, one can think of a k-form ω as a set of instructions: given a k-surface Φ, ω carries out some procedure on Φ
to produce a real number, ∫Φ ω.
For example, let
ω = x dy and λ = y dz,
both 1-forms on R3 . A 1-surface in R3 is a curve,

γ = (γ1 , γ2 , γ3 ) ∶ [a, b] Ð→ R3 ,

with 3 × 1 derivative matrix


     ⎡γ1′⎤
γ′ = ⎢γ2′⎥ .
     ⎣γ3′⎦

For every such curve, ω is the instructions integrate γ1 γ2′ over the parameter
domain [a, b], and similarly λ instructs to integrate γ2 γ3′ . You should work
through applying formula (9.14) to ω and λ to see how it produces these direc-
tions. Note that x and y are being treated as functions on R3 —for example,

x(a, b, c) = a for all (a, b, c),

so that x ○ γ = γ1 .
To see ω and λ work on a specific curve, consider the helix

H ∶ [0, 2π] Ð→ R3 , H(t) = (a cos t, a sin t, bt).

Its derivative matrix is


        ⎡−a sin t⎤
H′(t) = ⎢ a cos t⎥    for all t ∈ [0, 2π].
        ⎣    b   ⎦
Thus by (9.14),
∫H ω = ∫_{t=0}^{2π} a cos t ⋅ a cos t = πa²   and   ∫H λ = ∫_{t=0}^{2π} a sin t ⋅ b = 0.

Looking at the projections of the helix in the (x, y)-plane and the (y, z)-plane
suggests that these are the right values for ∫H x dy and ∫H y dz if we interpret
the symbols x dy and y dz as in one-variable calculus. (See Figure 9.4.)

Figure 9.4. Integrating 1-forms over a helix
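(A computational aside, not part of the text: the two helix integrals can be checked symbolically. The sketch below assumes Python with sympy; formula (9.14) for a curve is a single-variable integral.)

    import sympy as sp

    t, a, b = sp.symbols('t a b', positive=True)
    H = (a*sp.cos(t), a*sp.sin(t), b*t)        # the helix
    Hp = [sp.diff(c, t) for c in H]            # the entries of H'(t)

    # omega = x dy integrates (x o H) * H2'; lambda = y dz integrates (y o H) * H3'
    print(sp.integrate(H[0]*Hp[1], (t, 0, 2*sp.pi)))   # pi*a**2
    print(sp.integrate(H[1]*Hp[2], (t, 0, 2*sp.pi)))   # 0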

For another example, let


ω = dx,
a 1-form on R3 , and consider any curve

γ ∶ [a, b] Ð→ R3 , γ(t) = (γ1 (t), γ2 (t), γ3 (t)).

Then
∫γ ω = ∫_{a}^{b} (1 ○ γ) ⋅ γ1′ = ∫_{a}^{b} γ1′ = γ1(b) − γ1(a).

A change of notation makes this example more telling. Rewrite the component
functions of the curve as x, y, and z,

γ ∶ [a, b] Ð→ R3 , γ(t) = (x(t), y(t), z(t)).

So now x is not a function on R3 as in the previous example, but a function on [a, b]. The integral can be rewritten as follows:

For curves γ = (x, y, z) ∶ [a, b] Ð→ R3 ,   ∫γ dx = x(b) − x(a).

That is, the form dx does indeed measure change in x along curves. As a set
of instructions, it simply says to evaluate the x-coordinate difference from the
initial point on the curve to the final point. Returning to the helix H, it is
now clear with no further work that

∫H dx = 0,   ∫H dy = 0,   ∫H dz = 2πb.

It would be good practice with formula (9.14) to confirm these values.


To generalize the previous example, let A be an open subset of Rn , let
f ∶ A Ð→ R be any smooth function, and associate a 1-form ω to f ,

ω = D1 f dx1 + ⋯ + Dn f dxn .

Then for every curve γ ∶ [a, b] Ð→ A,

∫γ ω = ∫_{a}^{b} ((D1 f ○ γ)γ1′ + ⋯ + (Dn f ○ γ)γn′)

     = ∫_{a}^{b} (f ○ γ)′          by the chain rule in coordinates

     = (f ○ γ)∣_{a}^{b}

     = f(γ(b)) − f(γ(a)).

That is, the form ω measures change in f along curves. Indeed, ω is classically
called the total differential of f . It is tempting to give ω the name df , i.e., to
define
df = D1 f dx1 + ⋯ + Dn f dxn .
Soon we will do so as part of a more general definition.
(Recall the chain rule: If A ⊂ Rn is open, then for every smooth γ ∶ [a, b] Ð→ A and f ∶ A Ð→ R,

                                                          ⎡γ1′(t)⎤
(f ○ γ)′(t) = f′(γ(t))γ′(t) = [D1 f(γ(t)) ⋯ Dn f(γ(t))]  ⎢  ⋮   ⎥
                                                          ⎣γn′(t)⎦

            = ∑_{i=1}^{n} Di f(γ(t))γi′(t) = [∑_{i=1}^{n} (Di f ○ γ)γi′](t),

so indeed (f ○ γ)′ = ∑_{i=1}^{n} (Di f ○ γ)γi′.)
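(A computational aside, not part of the text: the endpoint formula can be sanity-checked symbolically for an arbitrarily chosen f and γ. The sketch below assumes Python with sympy; both f and the sample curve are this aside's own choices.)

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    f = x*y + sp.sin(z)                        # any smooth f will do
    gamma = (t**2, sp.exp(t), t)               # a sample curve on [0, 1]

    at = lambda t0: {v: g.subs(t, t0) for v, g in zip((x, y, z), gamma)}
    integrand = sum(sp.diff(f, v).subs(at(t)) * sp.diff(g, t)
                    for v, g in zip((x, y, z), gamma))
    lhs = sp.integrate(integrand, (t, 0, 1))   # integral of omega over gamma
    rhs = f.subs(at(1)) - f.subs(at(0))        # f at the end minus f at the start
    print(sp.simplify(lhs - rhs))              # 0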
Continuing to generalize, consider now a 1-form that does not necessarily
arise from differentiation,

ω = F1 dx1 + ⋯ + Fn dxn .

For every curve γ ∶ [a, b] Ð→ Rn the integral of ω over γ is


∫γ ω = ∫_{u=a}^{b} (∑_{i=1}^{n} (Fi ○ γ)γi′)(u),

and this is the general flow integral (9.9) of the vector field (F1 , . . . , Fn )
along γ. That is, the flow integrals from Section 9.2 are precisely the inte-
grals of 1-forms.

Exercises

9.4.1. Let ω = x dy − y dx, a 1-form on R2 . Evaluate ∫γ ω for the following curves.
(a) γ ∶ [−1, 1] Ð→ R2 , γ(t) = (t2 − 1, t3 − t);
(b) γ ∶ [0, 2] Ð→ R2 , γ(t) = (t, t2 ).

9.4.2. Let ω = z dx+x2 dy+y dz, a 1-form on R3 . Evaluate ∫γ ω for the following
two curves.
(a) γ ∶ [−1, 1] Ð→ R3 , γ(t) = (t, at2 , bt3 );
(b) γ ∶ [0, 2π] Ð→ R3 , γ(t) = (a cos t, a sin t, bt).

9.4.3. (a) Let ω = f dy where f ∶ R2 Ð→ R depends only on y. That is, f (x, y) = ϕ(y) for some ϕ ∶ R Ð→ R. Show that for every curve γ = (γ1 , γ2 ) ∶
[a, b] Ð→ R2 ,
∫γ ω = ∫_{γ2(a)}^{γ2(b)} ϕ.

(b) Let ω = f dx + g dy where f depends only on x and g depends only on y. Show that ∫γ ω = 0 whenever γ ∶ [a, b] Ð→ R2 is a closed curve, meaning
that γ(b) = γ(a).

9.5 Examples: 2-Forms on R3


To get a more complete sense of what formula (9.14) is doing, we need to study
a case with k > 1, i.e., integration on surfaces of more than one dimension.
Fortunately, the case n = 3, k = 2 is rich enough in geometry to understand in
general how k-forms on n-space work.
Consider Figure 9.5. The figure shows a 2-surface in R3 ,

Φ = (Φ1 , Φ2 , Φ3 ) ∶ D Ð→ R3 .

The parameter domain D has been partitioned into subrectangles, and the
image Φ(D) has been divided up into subpatches by mapping the grid lines
in D over to it via Φ. The subrectangle J of D maps to the subpatch B of Φ(D),
which in turn has been projected down to its shadow B(1,2) in the (x, y)-
plane. The point (uJ , vJ ) resides in J, and its image under Φ is Φ(uJ , vJ ) =
(xB , yB , zB ).
Note that B(1,2) = (Φ1 , Φ2 )(J). Rewrite this as

B(1,2) = Φ(1,2) (J).

That is, B(1,2) is the image of J under the (1, 2) component functions of Φ.
If J is small then results on determinants give

area(B(1,2) ) ≈ ∣ det Φ′(1,2) (uJ , vJ )∣ area(J).



Figure 9.5. 2-surface in 3-space

Thus, the magnification factor between subrectangles of D and (x, y)-projected subpatches of Φ(D) is (up to sign) the factor det Φ′I from formula (9.14) for
I = (1, 2). The sign is somehow keeping track of the orientation of the projected
patch, which would be reversed under projection onto the (y, x)-plane. (See
Figure 9.6.)

Figure 9.6. Projected patch and its reversal

Let ω = f dx ∧ dy, a 2-form on R3 , where f ∶ R3 Ð→ R is a smooth function. By (9.14) and Riemann sum approximation,

∫Φ ω = ∫D (f ○ Φ) det Φ′(1,2)
     ≈ ∑J (f ○ Φ)(uJ , vJ ) det Φ′(1,2) (uJ , vJ ) area(J)
     ≈ ∑B f (xB , yB , zB )(± area(B(1,2) )).

This calculation gives a geometric interpretation of what it means to integrate f dx ∧ dy over Φ: to evaluate ∫Φ f dx ∧ dy, traverse the set Φ(D) and
measure projected, oriented area in the (x, y)-plane, weighted by the density
function f . The interpretation is analogous for forms with dy ∧ dz, and so on.
For an illustrative example, consider the forms dx∧dy, dz ∧dx, and dy ∧dz
integrated over the arch surface

Φ ∶ [−1, 1] × [0, 1] Ð→ R3 , Φ(u, v) = (u, v, 1 − u2 ).

(See Figure 9.7.) The (x, y)-shadows of B1 , B2 have the same areas as J1 , J2
and positive orientation, so ∫Φ dx ∧ dy should be equal to area(D), i.e., 2. (See
the left half of Figure 9.8.) The (z, x)-shadows of B1 , B2 have area zero, so
∫Φ dz ∧ dx should be an emphatic 0. (See the right half of Figure 9.8.) The
(y, z)-shadows of B1 , B2 have the same area but opposite orientations, so
∫Φ dy ∧ dz should be 0 by some cancellation on opposite sides of the (y, z)-
plane or equivalently, cancellation in the u-direction of the parameter domain.
(See Figure 9.9.)

Figure 9.7. An arch



Figure 9.8. (x, y)-shadows and (z, x)-shadows

Figure 9.9. (y, z)-shadows

Integrating with formula (9.14) confirms this intuition. Since


           ⎡  1  0⎤
Φ′(u, v) = ⎢  0  1⎥ ,
           ⎣−2u  0⎦
we have

∫Φ dx ∧ dy = ∫D det Φ′(1,2) = ∫_{v=0}^{1} ∫_{u=−1}^{1} det [1 0; 0 1] = 2,

and similarly

∫Φ dz ∧ dx = ∫D det Φ′(3,1) = ∫_{v=0}^{1} ∫_{u=−1}^{1} det [−2u 0; 1 0] = ∫_{v} ∫_{u} 0 = 0,

∫Φ dy ∧ dz = ∫D det Φ′(2,3) = ∫_{v=0}^{1} ∫_{u=−1}^{1} det [0 1; −2u 0] = ∫_{v} ∫_{u} 2u = 0.

Note how the first integral reduces to integrating 1 over the parameter do-
main, the second integral vanishes because its integrand is zero, and the third
integral vanishes because of cancellation in the u-direction. All three of these
behaviors confirm our geometric insight into how forms should behave.
Since the differential form dx ∧ dy measures projected area in the (x, y)-
plane, the integral
∫Φ z dx ∧ dy

should give the volume under the arch. And indeed formula (9.14) gives

∫Φ z dx ∧ dy = ∫_{(u,v)∈D} (1 − u²) ⋅ 1,

which is the volume. Specifically, the integral is

∫Φ z dx ∧ dy = ∫_{v=0}^{1} ∫_{u=−1}^{1} (1 − u²) = 1 ⋅ (2 − u³/3∣_{u=−1}^{1}) = 4/3.

Similarly, since dy ∧ dz measures oriented projected area in the (y, z)-plane, integrating the differential form x dy ∧ dz should also give the volume under
the arch. Here the interesting feature is that for x > 0, the form will multiply
the positive distance from the (y, z)-plane to the arch by positive (y, z)-area,
while for x < 0, the form will multiply the negative distance from the plane to
the arch by negative (y, z)-area, again measuring a positive quantity. To see
explicitly that the integral is again 4/3, compute
∫Φ x dy ∧ dz = ∫_{v=0}^{1} ∫_{u=−1}^{1} u ⋅ 2u = 1 ⋅ (2/3)u³∣_{u=−1}^{1} = 4/3.
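(A computational aside, not part of the text: all five arch integrals follow mechanically from formula (9.14). The sketch below assumes Python with sympy; the helper function integral(f, I) is this aside's own device, not the text's notation.)

    import sympy as sp

    u, v, x, y, z = sp.symbols('u v x y z')
    Phi = sp.Matrix([u, v, 1 - u**2])             # the arch
    J = Phi.jacobian([u, v])                      # 3x2 derivative matrix

    def integral(f, I):
        # integrate f dx_I over Phi via (9.14); I holds 1-based row indices
        JI = sp.Matrix.vstack(*[J[i - 1, :] for i in I])
        integrand = f.subs({x: Phi[0], y: Phi[1], z: Phi[2]}) * JI.det()
        return sp.integrate(integrand, (u, -1, 1), (v, 0, 1))

    print(integral(sp.S(1), (1, 2)))   # 2    : dx ^ dy
    print(integral(sp.S(1), (3, 1)))   # 0    : dz ^ dx
    print(integral(sp.S(1), (2, 3)))   # 0    : dy ^ dz
    print(integral(z, (1, 2)))         # 4/3  : z dx ^ dy
    print(integral(x, (2, 3)))         # 4/3  : x dy ^ dz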

With these examples, the meaning of a k-form ω = f dxI on n-space is fairly clear:
Integrating ω over a surface Φ ∶ D Ð→ Rn means traversing the set
Φ(D) and measuring oriented, k-dimensional volume of the projection
of Φ(D) into k-space RI , weighted by the density function f .
This interpretation explains the results from integrating various 1-forms over
the helix in the previous section. Those integrals deserve reviewing in light of
this interpretation.
As the last example of this section, consider a 2-form on R3 ,

ω = F1 dx2 ∧ dx3 + F2 dx3 ∧ dx1 + F3 dx1 ∧ dx2 .

For every 2-surface Φ ∶ D Ð→ R3 the integral of ω over Φ is

∫Φ ω = ∫_{u∈D} ( (F1 ○ Φ)(D1 Φ2 D2 Φ3 − D1 Φ3 D2 Φ2 )
               + (F2 ○ Φ)(D1 Φ3 D2 Φ1 − D1 Φ1 D2 Φ3 )
               + (F3 ○ Φ)(D1 Φ1 D2 Φ2 − D1 Φ2 D2 Φ1 ) )(u),

and this is the flux integral (9.8) of the vector field (F1 , F2 , F3 ) through Φ. A
straightforward generalization of this example shows that the general integral
of an (n−1)-form over an (n−1)-surface in Rn is the general flux integral (9.12).
That is, the flux integrals from Section 9.2 are precisely the integrals of (n−1)-
forms.
Along with the last example of the previous section, this raises the follow-
ing question: why bother with k-forms for values of k other than 1 and n − 1,
and maybe also 0 and n? The answer is that the amalgamation of k-forms for
all values of k has a coherent algebraic structure, making the whole easier to
study than its parts. The remainder of the chapter is largely an elaboration
of this point.
After this discussion of the mechanics and meaning of integrating forms,
you should be ready to prove a result that has already been mentioned: inte-
gration of forms reduces to ordinary integration when k = n, and integration
of forms is unaffected by reasonable changes of parametrization. These points
are covered in the next set of exercises.

Exercises

9.5.1. Let a be a positive number. Consider a 2-surface in R3 ,

Φ ∶ [0, a] × [0, π] Ð→ R3 , Φ(r, θ) = (r cos θ, r sin θ, r2 ).

Sketch this surface, noting that θ varies from 0 to π, not from 0 to 2π. Try
to determine ∫Φ dx ∧ dy by geometric reasoning, and then check your answer
using (9.14) to evaluate the integral. Do the same for dy ∧ dz and dz ∧ dx. Do
the same for z dx ∧ dy − y dz ∧ dx.

9.5.2. Let ω = x dy ∧ dz + y dx ∧ dy, a 2-form on R3 . Evaluate ∫Φ ω when Φ is the 2-surface (a) Φ ∶ [0, 1] × [0, 1] Ð→ R3 , Φ(u, v) = (u + v, u² − v², uv); (b)
Φ ∶ [0, 2π] × [0, 1] Ð→ R3 , Φ(u, v) = (v cos u, v sin u, u).

9.5.3. Consider a 2-form on R4 ,

ω = F1,2 dx1 ∧ dx2 + F1,3 dx1 ∧ dx3 + F1,4 dx1 ∧ dx4
  + F2,3 dx2 ∧ dx3 + F2,4 dx2 ∧ dx4 + F3,4 dx3 ∧ dx4 .

Show that for every 2-surface Φ ∶ D Ð→ R4 , the integral of ω over Φ is given by formula (9.13) from near the end of Section 9.2.

9.5.4. This exercise proves that integration of k-forms on Rn reduces to standard integration when k = n.
Let D ⊂ Rn be compact and connected. Define the corresponding natural
parametrization, ∆ ∶ D Ð→ Rn , by ∆(u1 , . . . , un ) = (u1 , . . . , un ). (This is how
to turn a set in Rn , where we can integrate functions, into the corresponding

surface, where we can integrate n-forms.) Let ω = f dx1 ∧ ⋯ ∧ dxn , an n-form on Rn . Use (9.14) to show that

∫∆ ω = ∫D f.

Your solution should use the basic properties of ∆ but not the highly sub-
stantive change of variable theorem. Note that in particular if f = 1, then
ω = dx1 ∧ ⋯ ∧ dxn and ∫∆ ω = vol(D), explaining why in this case ω is called
the volume form.
Thus in Rn , we may from now on blur the distinction between integrating
the function f over a set and integrating the n-form ω = f dxI over a surface,
provided that I = (1, . . . , n) (i.e., the dxi factors appear in canonical order),
and provided that the surface is parametrized trivially.
9.5.5. This exercise proves that because of the change of variable theorem,
the integration of differential forms is invariant under orientation-preserving
reparametrizations of a surface.
Let A be an open subset of Rn . Let Φ ∶ D Ð→ A and Ψ ∶ D ̃ Ð→ A be
k-surfaces in A. Suppose that there exists a smoothly invertible mapping
T ∶ D Ð→ D ̃ such that Ψ ○ T = Φ. In other words, T is smooth, T is invertible,
its inverse is also smooth, and the following diagram commutes:

    D ──Φ──▸ A
    │        ▴
   T│        │Ψ
    ▾        │
    D̃ ───────╯

If det T ′ > 0 on D then the surface Ψ is called an orientation-preserving reparametrization of Φ, while if det T ′ < 0 on D then Ψ is an orientation-
reversing reparametrization of Φ.
(a) Let Ψ be a reparametrization as just defined. Let S = T⁻¹ ∶ D̃ Ð→ D, a
smooth mapping. Starting from the relation (S ○ T )(u) = id(u) for all u ∈ D
(where id is the identity mapping on D), differentiate, use the chain rule, and
take determinants to show that det T ′ (u) ≠ 0 for all u ∈ D.
(b) Assume now that the reparametrization Ψ is orientation-preserving.
For every n × k matrix M and every ordered k-tuple I from {1, . . . , n}, recall
that MI denotes the k × k matrix comprising the Ith rows of M . If N is a
k × k matrix, prove the equality

(M N )I = MI N.

In words, this says that

the Ith rows of (M times N ) are (the Ith rows of M ) times N .



(Suggestion: Do it first for the case I = i, that is, I denotes a single row.)
(c) Use the chain rule and part (b) to show that for every I,

det Φ′I (u) = det Ψ′I (T (u)) det T ′ (u) for all u ∈ D.

(d) Let ω = f (x) dxI , a k-form on A. Show that

∫Ψ ω = ∫_{T(D)} (f ○ Ψ) det Ψ′I .

Explain why the change of variable theorem shows that

∫Ψ ω = ∫D ((f ○ Ψ) det Ψ′I ) ○ T ⋅ det T ′ .

Explain why this shows that

∫Ψ ω = ∫Φ ω.

What would the conclusion be for orientation-reversing Ψ ?


(e) Do the results from (d) remain valid if ω has the more general form
ω = ∑I fI dxI ?

9.6 Algebra of Forms: Basic Properties


One advantage of forms over earlier setups of vector integral calculus is that
one can do much of the necessary work with them algebraically. That is,
crucial properties will follow from purely rule-driven symbolic manipulation
rather than geometric intuition or close analysis.
Let A be an open subset of Rn . Since k-forms on A are functions (functions
of k-surfaces), they come with an inherent notion of equality. The meaning of

ω1 = ω2

is that ω1 (Φ) = ω2 (Φ) for all k-surfaces Φ in A. In particular, the meaning of ω = 0 is that ω(Φ) = 0 for all Φ, where the first 0 is a form, while the second
is a real number. Addition of k-forms is defined naturally,

(ω1 + ω2 )(Φ) = ω1 (Φ) + ω2 (Φ) for all ω1 , ω2 , Φ,

where the first “+” lies between two forms, the second between two real num-
bers. Similarly, the definition of scalar multiplication is

(cω)(Φ) = c(ω(Φ)) for all c, ω, Φ.

The addition of forms here is compatible with the twofold use of summation
in the definition of forms and how they integrate. Addition and scalar multi-
plication of forms inherit all the vector space properties from corresponding

properties of addition and multiplication in the real numbers, showing that the set of all k-forms on A forms a vector space. Proving familiar-looking facts
about addition and scalar multiplication of forms reduces quickly to citing the
analogous facts in R. For example, (−1)ω = −ω for every k-form ω (where the
second minus sign denotes additive inverse), because for every k-surface Φ,

(ω + (−1)ω)(Φ) = ω(Φ) + ((−1)ω)(Φ) = ω(Φ) + (−1)(ω(Φ)) = 0,

the last equality holding since (−1)x = −x for all real numbers x.
Forms have other algebraic properties that are less familiar. For example,
on R2 , dy ∧ dx = −dx ∧ dy. This rule follows from the skew symmetry of the
determinant: for any 2-surface Φ ∶ D Ð→ R2 ,

(dy ∧ dx)(Φ) = ∫D det Φ′(2,1) = −∫D det Φ′(1,2) = −(dx ∧ dy)(Φ).

More generally, given two k-tuples I and J from {1, . . . , n}, dxJ = −dxI if J
is obtained from I by an odd number of transpositions. Thus for example,

dz ∧ dy ∧ dx = −dx ∧ dy ∧ dz

since (3, 2, 1) is obtained from (1, 2, 3) by swapping the first and third entries.
Showing this reduces again to the skew symmetry of the determinant. As a
special case, dxI = 0 whenever the k-tuple I has two matching entries. This
rule holds because exchanging those matching entries has no effect on I but
gives the negative of dxI , and so dxI = −dxI , forcing dxI = 0. One can also
verify directly that dxI = 0 if I has matching entries by referring back to the
fact that the determinant of a matrix with matching rows vanishes.
Using these rules (dy∧dx = −dx∧dy, dx∧dx = 0, and their generalizations),
one quickly convinces oneself that every k-form can be written

ω = ∑I fI dxI    (sum only over increasing I),

where a k-tuple I = (i1 , . . . , ik ) is called increasing if i1 < ⋯ < ik , as mentioned in Exercise 9.3.1. This is the standard presentation for ω mentioned in
Exercise 9.3.2. It is not hard to show that the standard presentation for ω is
unique. In particular, ω is identically zero as a function of surfaces if and only
if ω has standard presentation 0.
The next few sections will define certain operations on forms and develop
rules of algebra for manipulating the forms under these operations. Like other
rules of algebra, they will be unfamiliar at first and deserve to be scrutinized
critically, but eventually they should become second nature and you should
find yourself skipping steps fluently.
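(A computational aside, not part of the text: reduction to the standard presentation is a finite sorting process, so it can be mechanized. The sketch below assumes Python with sympy; the representation of a form as coefficient/index-tuple pairs is this aside's own device.)

    import sympy as sp

    def standardize(terms):
        # terms: list of (coefficient, index-tuple) pairs representing a sum f_I dx_I;
        # returns a dict mapping increasing tuples to combined coefficients
        out = {}
        for coeff, I in terms:
            if len(set(I)) < len(I):
                continue                          # repeated index: dx_I = 0
            sign, J = 1, list(I)
            for i in range(len(J)):               # bubble sort, one sign flip per swap
                for j in range(len(J) - 1 - i):
                    if J[j] > J[j + 1]:
                        J[j], J[j + 1] = J[j + 1], J[j]
                        sign = -sign
            key = tuple(J)
            out[key] = out.get(key, 0) + sign * coeff
        return {I: c for I, c in out.items() if c != 0}

    x, y = sp.symbols('x y')
    # x dy^dx + y dx^dy + dx^dx standardizes to (y - x) dx^dy
    print(standardize([(x, (2, 1)), (y, (1, 2)), (sp.S(1), (1, 1))]))
    # {(1, 2): -x + y}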

Exercise

9.6.1. Show that if ω is a k-form on Rn that satisfies ω = −ω, then ω = 0.



9.7 Algebra of Forms: Multiplication


Given a k-tuple and an ℓ-tuple, both from {1, . . . , n},

I = (i1 , . . . , ik ) and J = (j1 , . . . , jℓ ),

define their concatenation (I, J), a (k+ℓ)-tuple from {1, . . . , n}, in the obvious
way,
(I, J) = (i1 , . . . , ik , j1 , . . . , jℓ ).
Also, if f, g ∶ A Ð→ R are functions on an open subset of Rn then their product
f g is the function

f g ∶ A Ð→ R, (f g)(x) = f (x)g(x).

Definition 9.7.1 (Wedge product). Let A be an open subset of Rn . If ω = ∑I fI dxI and λ = ∑J gJ dxJ are respectively a k-form and an ℓ-form on A,
then their wedge product ω ∧ λ is a (k + ℓ)-form on A,

ω ∧ λ = ∑ fI gJ dx(I,J) .
I,J

That is, the wedge product is formed by following the usual distributive law
and wedge-concatenating the dx-terms.

For convenient notation, let Λk (A) denote the vector space of k-forms
on A. Thus the wedge product is a mapping,

∧ ∶ Λk (A) × Λℓ (A) Ð→ Λk+ℓ (A).

For example, a wedge product of a 1-form and a 2-form on R3 is

(f1 dx + f2 dy + f3 dz) ∧ (g1 dy ∧ dz + g2 dz ∧ dx + g3 dx ∧ dy)
    = f1 g1 dx ∧ dy ∧ dz + f1 g2 dx ∧ dz ∧ dx + f1 g3 dx ∧ dx ∧ dy
+ f2 g1 dy ∧ dy ∧ dz + f2 g2 dy ∧ dz ∧ dx + f2 g3 dy ∧ dx ∧ dy
+ f3 g1 dz ∧ dy ∧ dz + f3 g2 dz ∧ dz ∧ dx + f3 g3 dz ∧ dx ∧ dy
= (f1 g1 + f2 g2 + f3 g3 ) dx ∧ dy ∧ dz.

This example shows that the wedge product automatically encodes the inner
product in R3 , and the idea generalizes easily to Rn . For another example, a
wedge product of two 1-forms on R3 is

(xu dx + yu dy + zu dz) ∧ (xv dx + yv dy + zv dz)
    = (yu zv − zu yv ) dy ∧ dz + (zu xv − xu zv ) dz ∧ dx + (xu yv − yu xv ) dx ∧ dy.

Comparing this to the formula for the cross product in Section 3.10 shows
that the wedge product automatically encodes the cross product. Similarly, a
wedge product of two 1-forms on R2 is

(a dx + b dy) ∧ (c dx + d dy) = (ad − bc) dx ∧ dy,


showing that the wedge product encodes the 2 × 2 determinant as well.
Lemma 9.9.2 to follow will show that it encodes the general n×n determinant.
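(A computational aside, not part of the text: the cross-product identity just displayed is quick to confirm by expanding the wedge symbolically. The sketch below assumes Python with sympy; the coefficient-triple representation of a 1-form is this aside's own convention.)

    import sympy as sp

    xu, yu, zu, xv, yv, zv = sp.symbols('x_u y_u z_u x_v y_v z_v')

    def wedge_1forms(a, b):
        # wedge two 1-forms given as coefficient triples on dx, dy, dz;
        # returns the coefficients on dy^dz, dz^dx, dx^dy
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    print(wedge_1forms((xu, yu, zu), (xv, yv, zv)))
    # (y_u*z_v - y_v*z_u, -x_u*z_v + x_v*z_u, x_u*y_v - x_v*y_u): the cross product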
Naturally the wedge in Definition 9.7.1 is the same as the one in Defini-
tion 9.3.1. There is no conflict in now saying that the two wedges are the same,
since each wedge in the earlier definition sits between two 1-forms, and the
definition attached no meaning to the wedge symbol. Definition 9.3.1 also jux-
taposes functions (0-forms) and dxI terms (k-forms) without putting a wedge
between them, and it is still unclear what sort of multiplication that juxtaposi-
tion connotes. In fact, it is also a wedge product, but when we wedge-multiply
a 0-form and a k-form, we usually suppress the wedge. A basic property of
the wedge, its skew symmetry, will explain why in a moment.
Proposition 9.7.2 (Properties of the wedge product). Let A be an open
subset of Rn . The wedge product has the following properties.
(1) The wedge product distributes over form addition: for all ω ∈ Λk (A) and
λ1 , λ2 ∈ Λℓ (A),
ω ∧ (λ1 + λ2 ) = ω ∧ λ1 + ω ∧ λ2 .
(2) The wedge product is associative: for all ω ∈ Λk (A), λ ∈ Λℓ (A), and µ ∈
Λm (A),
(ω ∧ λ) ∧ µ = ω ∧ (λ ∧ µ).
(3) The wedge product is skew symmetric: for all ω ∈ Λk (A) and λ ∈ Λℓ (A),
λ ∧ ω = (−1)kℓ ω ∧ λ.
The proof is an exercise. The unfamiliar (and hence interesting) property
is the third one. The essence of its proof is to show that for every k-tuple I
and every ℓ-tuple J,
dxJ ∧ dxI = (−1)kℓ dxI ∧ dxJ .
This formula follows from counting transpositions.
Note that the skew symmetry of the wedge product reduces to symmetry
(i.e., commutativity) when either of the forms being multiplied is a 0-form.
The symmetry is why one generally doesn’t bother writing the wedge when a
0-form is involved. In fact, the wedge symbol is unnecessary in all cases, and
typically in multivariable calculus one sees, for example,
dx dy dz rather than dx ∧ dy ∧ dz.
Indeed, we could use mere juxtaposition to denote form-multiplication, but
because this new multiplication obeys unfamiliar rules, we give it a new symbol
to remind us of its novel properties as we study it.
Also, the special case of multiplying a constant function c and a k-form
ω is consistent with scalar multiplication of c (viewed now as a real number)
and ω. Thus all of our notions of multiplication are compatible.

Exercises

9.7.1. Find a wedge product of two differential forms that encodes the inner
product of R4 .

9.7.2. Find a wedge product of three differential forms that encodes the 3 × 3
determinant.

9.7.3. Prove the properties of the wedge product.

9.7.4. Prove that (ω1 + ω2 ) ∧ λ = ω1 ∧ λ + ω2 ∧ λ for all ω1 , ω2 ∈ Λk (A) and λ ∈ Λℓ (A). (Use skew symmetry, distributivity, and skew symmetry again.)

9.8 Algebra of Forms: Differentiation


Definition 9.8.1 (Derivative of a differential form). Let A be an open
subset of Rn . For each integer k ≥ 0 define the derivative mapping,

d ∶ Λk (A) Ð→ Λk+1 (A),

by the rules
df = ∑_{i=1}^{n} Di f dxi        for a 0-form f ,
dω = ∑I dfI ∧ dxI                for a k-form ω = ∑I fI dxI .

For example, we saw in Section 9.4 that for a function f , the 1-form

df = D1 f dx1 + ⋯ + Dn f dxn

is the form that measures change in f along curves. To practice this new kind
of function-differentiation in a specific case, define the function

π1 ∶ R3 Ð→ R

to be projection onto the first coordinate,

π1 (x, y, z) = x for all (x, y, z) ∈ R3 .

Then by the definition of the derivative,

dπ1 = D1 π1 dx + D2 π1 dy + D3 π1 dz = dx. (9.15)

This calculation is purely routine. In practice, however, one often blurs the
distinction between the name of a function and its output, for instance speak-
ing of the function x2 rather than the function f ∶ R Ð→ R where f (x) = x2
or the squaring function on R. Such loose nomenclature is usually harmless

enough and indeed downright essential in any explicit calculation in which we compute using a function’s values. But if we blur the distinction here between
the function π1 and its output x then the calculation of dπ1 in (9.15) can be
rewritten as
dx = dx. (!)
This is not tautological: the two sides have different meanings. The left side
is the operator d acting on the projection function x, while the right side is a
single entity, the 1-form denoted dx. The equation is better written

d(x) = dx.

However it is written, this equality ensures that there is no possible conflict between naming the differential operator d and using this same letter as part
of the definition of differential form.
Similarly, for a function f ∶ R Ð→ R of one variable, the definition of d
immediately says that
df = (df/dx) dx,
where the single, indivisible symbol df /dx is the Leibniz notation for the
derivative of f . This relation, which is sometimes presented in first-semester
calculus with nebulous meanings attached to df and dx, and which canNOT
be proved by cancellation, is now a relation between 1-forms that follows from
the definition of d. The moral is that the operator d has been so named to
make such vague, undefined formulas into definitions and theorems. For more
examples of differentiation, if

ω = x dy − y dx

then according to Definition 9.8.1,

dω = (D1 x dx + D2 x dy) ∧ dy − (D1 y dx + D2 y dy) ∧ dx = 2 dx ∧ dy.

And if
ω = x dy ∧ dz + y dz ∧ dx + z dx ∧ dy
then
dω = 3 dx ∧ dy ∧ dz.
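(A computational aside, not part of the text: both derivatives can be verified by machine. The sketch below assumes Python with sympy; the coefficient formulas it uses are the ones worked out in Exercise 9.8.1 below, and the 2-form case anticipates Exercise 9.8.2.)

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    def d_of_1form(f1, f2, f3):
        # d(f1 dx + f2 dy + f3 dz), as coefficients of dy^dz, dz^dx, dx^dy
        return (sp.diff(f3, y) - sp.diff(f2, z),
                sp.diff(f1, z) - sp.diff(f3, x),
                sp.diff(f2, x) - sp.diff(f1, y))

    def d_of_2form(g1, g2, g3):
        # d(g1 dy^dz + g2 dz^dx + g3 dx^dy), the coefficient of dx^dy^dz
        return sp.diff(g1, x) + sp.diff(g2, y) + sp.diff(g3, z)

    print(d_of_1form(-y, x, sp.S(0)))   # (0, 0, 2): d(x dy - y dx) = 2 dx^dy
    print(d_of_2form(x, y, z))          # 3: matches 3 dx^dy^dz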
The differentiation operator d commutes with sums and scalar multiples.
That is, if ω1 , ω2 are k-forms and c is a constant then

d(cω1 + ω2 ) = c dω1 + dω2 .

More interesting are the following two theorems about form differentiation.
Theorem 9.8.2 (Product rule for differential forms). Let A be an open
subset of Rn . Let ω and λ be respectively a k-form and an ℓ-form on A. Then

d(ω ∧ λ) = dω ∧ λ + (−1)k ω ∧ dλ.



Proof. Start with the case of 0-forms f and g. Then


d(f g) = ∑_{i=1}^{n} Di (f g) dxi
       = ∑_{i=1}^{n} (Di f g + f Di g) dxi
       = (∑_{i=1}^{n} Di f dxi ) g + f (∑_{i=1}^{n} Di g dxi )
       = df g + f dg.

Next consider a k-form and an ℓ-form with one term each, fI dxI and gJ dxJ .
Then

d(fI dxI ∧ gJ dxJ ) = d(fI gJ dxI ∧ dxJ )                  by definition of multiplication
                    = d(fI gJ ) ∧ dxI ∧ dxJ                by definition of d
                    = (dfI gJ + fI dgJ ) ∧ dxI ∧ dxJ       by the result for 0-forms
                    = dfI (gJ ∧ dxI ) ∧ dxJ                by distributivity
                      + fI (dgJ ∧ dxI ) ∧ dxJ              and associativity of ∧
                    = dfI ∧ (−1)^{0⋅k} (dxI ∧ gJ ) ∧ dxJ   by skew symmetry
                      + fI (−1)^{1⋅k} (dxI ∧ dgJ ) ∧ dxJ
                    = d(fI ∧ dxI ) ∧ gJ dxJ                by associativity and symmetry
                      + (−1)^k fI dxI ∧ d(gJ ∧ dxJ )       and definition of d.

Finally, in the general case, ω = ∑I ωI and λ = ∑J λJ , where each ωI is equal to fI dxI and each λJ is equal to gJ dxJ , quoting the one-term result at the
third equality,

d(ω ∧ λ) = d(∑I ωI ∧ ∑J λJ ) = ∑_{I,J} d(ωI ∧ λJ )
         = ∑_{I,J} (dωI ∧ λJ + (−1)^k ωI ∧ dλJ )
         = d∑I ωI ∧ ∑J λJ + (−1)^k ∑I ωI ∧ d∑J λJ
         = dω ∧ λ + (−1)^k ω ∧ dλ.                          ⊓⊔


Because the last step in this proof consisted only in pushing sums tediously
through the other operations, typically it will be omitted from now on, and
proofs will be carried out for the case of one-term forms.
Consider a function f (x, y) on R2 . Its derivative is

df = D1 f (x, y) dx + D2 f (x, y) dy,

and its second derivative is in turn

d²f = d(df ) = d(D1 f (x, y) dx) + d(D2 f (x, y) dy)
    = D11 f (x, y) dx ∧ dx + D12 f (x, y) dy ∧ dx
    + D21 f (x, y) dx ∧ dy + D22 f (x, y) dy ∧ dy.

The dx ∧ dx term and the dy ∧ dy term are both 0. And the other two terms
sum to 0, because the mixed partial derivatives D12 f (x, y) and D21 f (x, y)
are equal while dy ∧ dx and dx ∧ dy are opposite. Overall, then,

d²f = 0.

This phenomenon of the second derivative vanishing is completely general.

Theorem 9.8.3 (Nilpotence of d). Let A be an open subset of Rn . Then d²ω = 0 for every form ω ∈ Λk (A), where d² means d ○ d. In other words,

d² = 0.

Proof. For a 0-form f ,


df = ∑_{i=1}^{n} Di f dxi ,

and so
d²f = d(df ) = ∑_{i=1}^{n} d(Di f ) ∧ dxi = ∑_{i,j} Dij f dxj ∧ dxi .

All terms with i = j cancel because dxi ∧ dxi = 0, and the rest of the terms
cancel pairwise because for i ≠ j, Dji f = Dij f (equality of mixed partial
derivatives) and dxi ∧dxj = −dxj ∧dxi (skew symmetry of the wedge product).
Thus
d²f = 0.
Also, for a k-form dxI with constant coefficient function 1,

d(dxI ) = d(1dxI ) = (d1) ∧ dxI = 0.

Next, for a one-term k-form ω = f dxI ,

dω = df ∧ dxI ,

and so by the first two calculations,

d²ω = d(df ∧ dxI ) = d²f ∧ dxI + (−1)¹ df ∧ d(dxI ) = 0 + 0 = 0.

For a general k-form, pass sums and d²’s through each other. ⊓⊔
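(A computational aside, not part of the text: nilpotence can be spot-checked for a concrete f. The sketch below assumes Python with sympy and applies d twice via the coefficient formula of Exercise 9.8.1 below; the choice of f is arbitrary.)

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = sp.exp(x*y)*sp.sin(z) + x**3*z        # any smooth f

    f1, f2, f3 = sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)   # coefficients of df
    ddf = (sp.diff(f3, y) - sp.diff(f2, z),   # d(df), coefficient of dy^dz
           sp.diff(f1, z) - sp.diff(f3, x),   # coefficient of dz^dx
           sp.diff(f2, x) - sp.diff(f1, y))   # coefficient of dx^dy
    print([sp.simplify(c) for c in ddf])      # [0, 0, 0]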




A form ω is called

exact if ω = dλ for some form λ

and
closed if dω = 0.
Theorem 9.8.3 shows that:

Every exact form is closed.

The converse question, whether every closed form is exact, is more subtle. We
will discuss it in Section 9.11.

Exercises

9.8.1. Let ω = f dx + g dy + h dz. Show that

dω = (D2 h − D3 g) dy ∧ dz + (D3 f − D1 h) dz ∧ dx + (D1 g − D2 f ) dx ∧ dy.

9.8.2. Let ω = f dy ∧ dz + g dz ∧ dx + h dx ∧ dy. Evaluate dω.

9.8.3. Differential forms of orders 0, 1, 2, 3 on R3 are written

ω0 = φ,
ω1 = f1 dx + f2 dy + f3 dz,
ω2 = g1 dy ∧ dz + g2 dz ∧ dx + g3 dx ∧ dy,
ω3 = h dx ∧ dy ∧ dz.

(a) For a 0-form φ, what are the coefficients fi of dφ in terms of φ?


(b) For a 1-form ω1 , what are the coefficients gi of dω1 in terms of the
coefficients fi of ω1 ?
(c) For a 2-form ω2 , what is the coefficient h of dω2 in terms of the coeffi-
cients gi of ω2 ?

9.8.4. Classical vector analysis features the operator

∇ = (D1 , D2 , D3 ),

where the Di are familiar partial derivative operators. Thus, for a function
φ ∶ R3 Ð→ R,
∇φ = (D1 φ, D2 φ, D3 φ).
Similarly, for a mapping F = (f1 , f2 , f3 ) ∶ R3 Ð→ R3 , ∇ × F is defined in the
symbolically appropriate way, and for a mapping G = (g1 , g2 , g3 ) ∶ R3 Ð→ R3 ,
so is ⟨∇, G⟩. Write down explicitly the vector-valued mapping ∇ × F and the
function ⟨∇, G⟩ for F and G as just described. The vector-valued mapping ∇φ
is the gradient of φ from Section 4.8,

grad φ = ∇φ.

The vector-valued mapping ∇ × F is the curl of F ,

curl F = ∇ × F.

And the scalar-valued function ⟨∇, G⟩ is the divergence of G,

div G = ⟨∇, G⟩.

9.8.5. Continuing with the notation of the previous two problems, introduce
correspondences between the classical scalar–vector environment and the en-
vironment of differential forms, as follows. Let

ds⃗ = (dx, dy, dz),
dn⃗ = (dy ∧ dz, dz ∧ dx, dx ∧ dy),
dV = dx ∧ dy ∧ dz.

Let id be the mapping that takes each function φ ∶ R3 Ð→ R to itself, but with the output-copy of φ viewed as a 0-form. Let ⋅ds⃗ be the mapping that takes each vector-valued mapping F = (f1 , f2 , f3 ) to the 1-form

F ⋅ ds⃗ = f1 dx + f2 dy + f3 dz.

Let ⋅dn⃗ be the mapping that takes each vector-valued mapping G = (g1 , g2 , g3 ) to the 2-form

G ⋅ dn⃗ = g1 dy ∧ dz + g2 dz ∧ dx + g3 dx ∧ dy.

And let dV be the mapping that takes each function h to the 3-form

h dV = h dx ∧ dy ∧ dz.

Combine the previous problems to verify that the following diagram com-
mutes, meaning that either path around each square yields the same result.
(Do each square separately, e.g., for the middle square start from an arbitrary
(f1 , f2 , f3 ) with no assumption that it is the gradient of some function φ.)

φ ──grad──▸ (f1 , f2 , f3 ) ──curl──▸ (g1 , g2 , g3 ) ──div──▸ h
│                │                         │                   │
id              ⋅ds⃗                       ⋅dn⃗                 dV
▾                ▾                         ▾                   ▾
φ ──d──▸ f1 dx + f2 dy + f3 dz ──d──▸ g1 dy ∧ dz + g2 dz ∧ dx + g3 dx ∧ dy ──d──▸ h dx ∧ dy ∧ dz

Thus the form-differentiation operator d, specialized to three dimensions, uni-


fies the classical gradient, divergence, and curl operators.

9.8.6. Two of these operators are zero:

curl ○ grad, div ○ curl, div ○ grad.

Explain, using the diagram from the preceding exercise and the nilpotence
of d. For a function φ ∶ R3 Ð→ R, write out the harmonic equation (or
Laplace’s equation), which does not automatically hold for all φ but turns
out to be an interesting condition,

div(grad φ) = 0.

9.9 Algebra of Forms: The Pullback


Recall the change of variable theorem from Chapter 6: given a change of
variable mapping now called T (rather than Φ as in Chapter 6) and given a
function f on the range space of T , the appropriate function to integrate over
the domain is obtained by composing with T and multiplying by an absolute
determinant factor,

∫_{T(D)} f = ∫D (f ○ T ) ⋅ ∣det T ′ ∣.

A generalization to forms of the notion of composing with T lets us similarly transfer forms—rather than functions—from the range space of a mapping T
to the domain. This generalization will naturally include a determinant factor
that is no longer encumbered by absolute value signs. The next section will
show that integration of differential forms is inherently invariant under change
of variable.
We start with some examples. The familiar polar coordinate mapping from
(r, θ)-space to (x, y)-space is

(x, y) = T (r, θ) = (r cos θ, r sin θ).

Using this formula, and thinking of T as mapping from (r, θ)-space forward
to (x, y)-space, every form on (x, y)-space can naturally be converted back
into a form on (r, θ)-space, simply by substituting r cos θ for x and r sin θ
for y. If the form on (x, y)-space is named λ then the form on (r, θ)-space is
denoted T ∗ λ. For example, the 2-form that gives area on (x, y)-space,

λ = dx ∧ dy,

has a naturally corresponding 2-form on (r, θ)-space,

T ∗ λ = d(r cos θ) ∧ d(r sin θ).

Working out the derivatives and then the wedge shows that

T ∗λ = (cos θ dr − r sin θ dθ) ∧ (sin θ dr + r cos θ dθ)
     = r dr ∧ dθ.

Thus (now dropping the wedges from the notation), this process has converted
dx dy into r dr dθ as required by the change of variable theorem.
For another example, continue to let T denote the polar coordinate map-
ping, and consider a 1-form on (x, y)-space (for (x, y) ≠ (0, 0)),
ω = (x dy − y dx)/(x² + y²).
The corresponding 1-form on (r, θ)-space (for r > 0) is
T ∗ω = (r cos θ d(r sin θ) − r sin θ d(r cos θ)) / ((r cos θ)² + (r sin θ)²).

Here the differentiations give

d(r sin θ) = sin θ dr + r cos θ dθ, d(r cos θ) = cos θ dr − r sin θ dθ,

and so the form on (r, θ)-space is


T ∗ω = (r cos θ(sin θ dr + r cos θ dθ) − r sin θ(cos θ dr − r sin θ dθ)) / r² = dθ.
This result suggests that integrating ω over a curve in (x, y)-space will return
the change in angle along the curve. For example, integrating ω counterclock-
wise over the unit circle should return 2π.
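(A computational aside, not part of the text: both polar pullbacks in this section reduce to routine symbolic computation. The sketch below assumes Python with sympy.)

    import sympy as sp

    r, th = sp.symbols('r theta', positive=True)
    x, y = r*sp.cos(th), r*sp.sin(th)          # the polar coordinate mapping

    # T*(dx ^ dy) = det T' dr ^ dtheta
    Tprime = sp.Matrix([x, y]).jacobian([r, th])
    print(sp.simplify(Tprime.det()))           # r

    # T*omega for omega = (x dy - y dx)/(x**2 + y**2): coefficients on dr, dtheta
    c_dr = (x*sp.diff(y, r) - y*sp.diff(x, r)) / (x**2 + y**2)
    c_dth = (x*sp.diff(y, th) - y*sp.diff(x, th)) / (x**2 + y**2)
    print(sp.simplify(c_dr), sp.simplify(c_dth))   # 0 1, so T*omega = dtheta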
Geometrically, let γ ∶ I Ð→ R2 −{0} be a parametrized curve, let p = (x, y) =
γ(t) be a point on the curve, and view the unary cross product (x, y)× =
(−y, x) as a vector originating at p, pointing in the direction of increasing
polar angle θ. The tangent vector γ ′ (t) = (x′ (t), y ′ (t)) has component length
along the unary cross product vector as follows,
⟨(x′ , y ′ ), (−y, x)⟩ / ∣(−y, x)∣ = (xy′ − yx′) / √(x² + y²).

(See Figure 9.10.) To infinitesimalize this, multiply it by dt, and then, to make
the resulting form measure infinitesimal change in the polar angle θ along the
curve, we also need to divide by the distance from the origin to get altogether
(x dy − y dx)/(x² + y²).
For a third example, again start with the 1-form
ω = (x dy − y dx)/(x² + y²),
but this time consider a different change of variable mapping,

(x, y) = T (u, v) = (u2 − v 2 , 2uv).


Figure 9.10. Angular component of the tangent vector

The 1-form on (u, v)-space (for (u, v) ≠ (0, 0)) corresponding to ω is now

T ∗ω = ((u² − v²) d(2uv) − 2uv d(u² − v²)) / ((u² − v²)² + (2uv)²).

The derivatives are

d(2uv) = 2(v du + u dv),   d(u² − v²) = 2(u du − v dv),

and so
T ∗ω = 2 ((u² − v²)(v du + u dv) − 2uv(u du − v dv)) / (u² + v²)²
     = 2 (((u² − v²)v − 2u²v) du + ((u² − v²)u + 2uv²) dv) / (u² + v²)²
     = 2 (u dv − v du) / (u² + v²).
u + v2
Thus T ∗ ω is essentially the original form, except that it is doubled, and now
it is a form on (u, v)-space. The result of the calculation stems from the fact
that T is the complex square mapping, which doubles angles. The original
form ω, which measures change of angle in (x, y)-space, has transformed back
to the form that measures twice the change of angle in (u, v)-space. Integrating
T ∗ ω along a curve γ in (u, v)-space that misses the origin returns twice the
change in angle along this curve, and this is the change in angle along the
image-curve T ○ γ in (x, y)-space.
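(A computational aside, not part of the text: the same coefficient extraction confirms the complex-square computation. The sketch below assumes Python with sympy.)

    import sympy as sp

    u, v = sp.symbols('u v', real=True)
    x, y = u**2 - v**2, 2*u*v                  # the complex square mapping

    denom = x**2 + y**2
    c_du = (x*sp.diff(y, u) - y*sp.diff(x, u)) / denom   # coefficient of du
    c_dv = (x*sp.diff(y, v) - y*sp.diff(x, v)) / denom   # coefficient of dv
    print(sp.simplify(c_du), sp.simplify(c_dv))
    # -2*v/(u**2 + v**2)  2*u/(u**2 + v**2): T*omega = 2(u dv - v du)/(u**2 + v**2)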

Given a mapping, the natural process of changing variables in a differential form on the range of the mapping to produce a differential form on the
domain of the mapping is called pulling the differential form back through the
mapping. The general definition is as follows.

Definition 9.9.1 (Pullback of a differential form). Let k be a nonnegative integer. Let A be an open subset of Rn , and let B be an open subset
of Rm . Let
T = (T1 , . . . , Tm ) ∶ A Ð→ B
be a smooth mapping. Then T gives rise to a pullback mapping of k-forms
in the other direction,
T ∗ ∶ Λk (B) Ð→ Λk (A).
Let the coordinates on Rn be (x1 , . . . , xn ), and let the coordinates on Rm be
(y1 , . . . , ym ). For each k-tuple I = (i1 , . . . , ik ) from {1, . . . , m}, let dTI denote
dTi1 ∧ ⋯ ∧ dTik . Then the pullback of a k-form on B,

ω = ∑I fI dyI ,

is

T ∗ω = ∑I (fI ○ T ) dTI .

Since each Tij is a function on A, each dTij is a 1-form on A, and the definition makes sense. As usual, when k = 0, the empty products dyI and dTI
are interpreted as 1, and the pullback is simply composition,

T ∗ f = f ○ T.

As the examples before the definition have shown, computing pullbacks is easy
and purely mechanical: given a form ω in terms of y’s and dy’s, its pullback
T ∗ ω comes from replacing each yi in ω by the expression Ti (x1 , . . . , xn ) and
then working out the resulting d’s and wedges.
The fact that pulling the form dx ∧ dy back through the polar coordinate
mapping produced the factor r from the change of variable theorem is no
coincidence.

Lemma 9.9.2 (Wedge–determinant lemma). Define an n-form-valued function ∆ on n-tuples of n-vectors as follows. For n vectors in Rn ,

a1 = (a11 , a12 , . . . , a1n ),
a2 = (a21 , a22 , . . . , a2n ),
  ⋮
an = (an1 , an2 , . . . , ann ),

create the corresponding 1-forms,



ω1 = a11 dx1 + a12 dx2 + ⋯ + a1n dxn ,
ω2 = a21 dx1 + a22 dx2 + ⋯ + a2n dxn ,
  ⋮
ωn = an1 dx1 + an2 dx2 + ⋯ + ann dxn ,

and then define


∆(a1 , a2 , . . . , an ) = ω1 ∧ ω2 ∧ ⋯ ∧ ωn .
Then
∆(a1 , a2 , . . . , an ) = det(a1 , a2 , . . . , an ) dx1 ∧ ⋯ ∧ dxn .
That is, ∆ = det ⋅dx(1,...,n) .

We have already seen this result for n = 2 in Section 9.7 and for n = 3 in
Exercise 9.7.2.

Proof. The only increasing n-tuple from {1, . . . , n} is (1, . . . , n). As a product
of n 1-forms on Rn , ∆(a1 , a2 , . . . , an ) is an n-form on Rn , and therefore it is
a scalar-valued function δ(a1 , a2 , . . . , an ) times dx(1,...,n) . The relation

δ(a1 , a2 , . . . , an ) dx(1,...,n) = ω1 ∧ ω2 ∧ ⋯ ∧ ωn ,

where ωi is the inner product ai ⋅ (dx1 , . . . , dxn ) for each i, combines with
various properties of the wedge product to show that the following three con-
ditions hold:
• The function δ is linear in each of its vector variables, e.g.,

δ(a1 , a2 + ã2 , . . . , an ) = δ(a1 , a2 , . . . , an ) + δ(a1 , ã2 , . . . , an )

and
δ(a1 , ca2 , . . . , an ) = c δ(a1 , a2 , . . . , an ).
• The function δ is skew-symmetric, i.e., transposing two of its vector vari-
ables changes its sign.
• The function δ is normalized, i.e., δ(e1 , e2 , . . . , en ) = 1.
The determinant is the unique function satisfying these three conditions, so δ =
det. ⊓⊔

Theorem 9.9.3 (Pullback–determinant theorem). Let A be an open subset of Rn , and let B be an open subset of Rm . Let T ∶ A Ð→ B be a smooth
subset of Rn , and let B be an open subset of Rm . Let T ∶ A Ð→ B be a smooth
mapping. Let Rn have coordinates (x1 , . . . , xn ), and let Rm have coordinates
(y1 , . . . , ym ). Let I = (i1 , . . . , in ) be an n-tuple from {1, . . . , m}. Then

T ∗ dyI = det T ′I dx1 ∧ ⋯ ∧ dxn .



Proof. By definition,

T ∗ dyI = dTI = dTi1 ∧ ⋯ ∧ dTin
        = (D1 Ti1 dx1 + ⋯ + Dn Ti1 dxn )
        ∧ (D1 Ti2 dx1 + ⋯ + Dn Ti2 dxn )
          ⋮
        ∧ (D1 Tin dx1 + ⋯ + Dn Tin dxn ).

The right side is precisely ∆(T ′i1 , T ′i2 , . . . , T ′in ), so the lemma completes the proof. ⊓⊔

In particular, when m = n and I = (1, . . . , n), the theorem says that

T ∗ (dy1 ∧ ⋯ ∧ dyn ) = det T ′ dx1 ∧ ⋯ ∧ dxn ,

confirming the polar coordinate example early in this section. Similarly, if T is the spherical coordinate mapping,

T (ρ, θ, φ) = (ρ cos θ sin φ, ρ sin θ sin φ, ρ cos φ),

then the theorem tells us that

T ∗ (dx ∧ dy ∧ dz) = −ρ² sin φ dρ ∧ dθ ∧ dφ.

You may want to verify this directly to get a better feel for the pullback and the
lemma. In general, the pullback–determinant theorem can be a big time-saver
for computing pullbacks when the degree of the form equals the dimension of
the domain space. Instead of multiplying out lots of wedge products, simply
compute the relevant subdeterminant of a derivative matrix.
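(A computational aside, not part of the text: the spherical-coordinate determinant can be delegated to a machine. The sketch below assumes Python with sympy.)

    import sympy as sp

    rho, th, ph = sp.symbols('rho theta phi', positive=True)
    T = sp.Matrix([rho*sp.cos(th)*sp.sin(ph),
                   rho*sp.sin(th)*sp.sin(ph),
                   rho*sp.cos(ph)])            # the spherical coordinate mapping
    print(sp.simplify(T.jacobian([rho, th, ph]).det()))   # -rho**2*sin(phi)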
What makes the integration of differential forms invariant under change of
variable is that the pullback operator commutes with everything else in sight.

Theorem 9.9.4 (Properties of the pullback). Let A be an open subset of Rn , and let B be an open subset of Rm . Let T = (T1 , . . . , Tm ) ∶ A Ð→ B be
a smooth mapping. Then:
(1) For all ω1 , ω2 , ω ∈ Λk (B) and c ∈ R,

T ∗ (ω1 + ω2 ) = T ∗ ω1 + T ∗ ω2 ,
T ∗ (cω) = c T ∗ ω.

(2) For all ω ∈ Λk (B) and λ ∈ Λℓ (B),

T ∗ (ω ∧ λ) = (T ∗ ω) ∧ (T ∗ λ).

(3) For all ω ∈ Λk (B),


T ∗ (dω) = d(T ∗ ω).

That is, the pullback is linear, the pullback is multiplicative (meaning that
it preserves products), and the pullback of the derivative is the derivative of
the pullback. The results in the theorem can be expressed in commutative
diagrams, as in Exercise 9.8.5. Part (2) says that the following diagram com-
mutes:
Λk (B) × Λℓ (B) ──(T ∗ , T ∗ )──▸ Λk (A) × Λℓ (A)
       │∧                               │∧
       ▾                                ▾
    Λk+ℓ (B) ──────────T ∗─────────▸ Λk+ℓ (A),

and part (3) says that the following diagram commutes:

  Λk (B) ────T ∗───▸ Λk (A)
    │d                 │d
    ▾                  ▾
 Λk+1 (B) ───T ∗───▸ Λk+1 (A).

All of this is especially gratifying because the pullback itself is entirely natural.
Furthermore, the proofs are straightforward: all we need to do is compute, ap-
ply definitions, and recognize definitions. The only obstacle is that the process
requires patience.
Proof. (1) Is immediate from the definition.
(2) For one-term forms f dyI and g dyJ ,
T ∗ (f dyI ∧ g dyJ ) = T ∗ (f g dy(I,J) ) by definition of multiplication
= (f g) ○ T dT(I,J) by definition of the pullback
= f ○ T dTI ∧ g ○ T dTJ since (f g) ○ T = (f ○ T )(g ○ T )
= T ∗ (f dyI ) ∧ T ∗ (g dyJ ) by definition of the pullback.
The result on multiterm forms follows from this and (1).
(3) For a 0-form f ∶ Rm Ð→ R, compute that
T ∗ (df ) = T ∗ (∑_{i=1}^{m} Di f dyi )                         applying the definition of d
          = ∑_{i=1}^{m} (Di f ○ T ) dTi                         applying the definition of the pullback
          = ∑_{i=1}^{m} Di f ○ T ⋅ ∑_{j=1}^{n} Dj Ti dxj        applying the definition of d
          = ∑_{j=1}^{n} [∑_{i=1}^{m} (Di f ○ T ) ⋅ Dj Ti ] dxj  interchanging the sums
          = ∑_{j=1}^{n} Dj (f ○ T ) dxj                         recognizing the chain rule

          = d(f ○ T )                                           recognizing the definition of d
          = d(T ∗ f )                                           recognizing the pullback.

For a one-term k-form f dyI we have d(f dyI ) = df ∧ dyI , so by (2) and the
result for 0-forms,

T ∗ (d(f dyI )) = T ∗ (df ∧ dyI )        applying the definition of d
                = T ∗ df ∧ T ∗ dyI       since pullback and wedge commute
                = d(T ∗ f ) ∧ T ∗ dyI    by the just-established result
                = d(f ○ T ) ∧ dTI        by definition of the pullback, twice
                = d(f ○ T dTI )          recognizing the definition of d
                = d(T ∗ (f dyI ))        recognizing the pullback.

The multiterm result follows from this and (1). ⊓⊔

The pullback also behaves naturally with respect to composition.
Theorem 9.9.5 (Contravariance of the pullback). Let A be an open sub-
set of Rn , let B be an open subset of Rm , and let C be an open subset of Rℓ .
Let T ∶ A Ð→ B and S ∶ B Ð→ C be smooth mappings. Then for every form
ω ∈ Λk (C),
(S ○ T )∗ ω = (T ∗ ○ S ∗ )ω.
This peculiar-looking result—that the pullback of a composition is the com-
position of the pullbacks, but in reverse order—is grammatically inevitable.
Again, a commutative diagram expresses the idea:

Λk (C) ──S ∗──▸ Λk (B) ──T ∗──▸ Λk (A)
   ╰───────────(S ○ T )∗──────────▸

Proof. For a 0-form f ∶ C Ð→ R, the result is simply the associativity of composition,
(S ○ T )∗ f = f ○ (S ○ T ) = (f ○ S) ○ T = T ∗ (S ∗ f ) = (T ∗ ○ S ∗ )f.

Let (z1 , . . . , zℓ ) be coordinates on Rℓ . Every one-term 1-form dzq (where q is an integer from {1, . . . , ℓ}) can be viewed as d(zq ), with d the differentiation
operator and zq the qth projection function. Thus

(S ○ T )∗ dzq = d((S ○ T )∗ zq )      since derivative commutes with pullback
              = d((T ∗ ○ S ∗ )zq )    from just above, since zq is a function
              = d(T ∗ (S ∗ zq ))      by definition of composition
              = T ∗ (d(S ∗ zq ))      since derivative commutes with pullback
              = T ∗ (S ∗ dzq )        since derivative commutes with pullback
              = (T ∗ ○ S ∗ )dzq       by definition of composition.

Since every k-form is a sum of wedge products of 0-forms and 1-forms, and
since the pullback passes through sums and products, the general case follows. ⊓⊔


Recapitulating this section: To pull a differential form back though a map
is to change variables in the form naturally. Because the wedge product has
the determinant wired into it, so does the pullback. Because the pullback is
natural, it commutes with addition, scalar multiplication, wedge multiplica-
tion, and differentiation of forms, and it anticommutes with composition of mappings. That is, everything that we are doing is preserved under change of
variables.
The results of this section are the technical heart of this chapter. The
reader is encouraged to contrast their systematic algebraic proofs with the
tricky analytic estimates in the main proofs of Chapter 6. The work of this
section will allow the pending proof of the general fundamental theorem of
integral calculus to be carried out by algebra, an improvement over hand-
waving geometry or tortuous analysis. The classical integration theorems of
the nineteenth century will follow without recourse to the classical procedure
of cutting a big curvy object into many pieces and then approximating each
small piece by a straight piece instead. The classical procedure is either im-
precise or byzantine, but for those willing to think algebraically, the modern
procedure is accurate and clear.
We end this section by revisiting the third example from its beginning.
Recall that we considered the 1-form
ω = (x dy − y dx)/(x² + y²)
and the complex square mapping

(x, y) = T (u, v) = (u2 − v 2 , 2uv),

and we computed that the pullback T ∗ ω was twice ω, but written in (u, v)-
coordinates. Now we obtain the same result more conceptually in light of the
results of this section. The idea is that since ω measures change in angle, which
doubles under the complex square mapping, the result will be obvious in polar
coordinates, and furthermore, the pullback behaves so well under changes of
variable that the corresponding result for Cartesian coordinates will follow
easily as well. Thus, consider the polar coordinate mapping

Φ ∶ R>0 × R Ð→ R2 /{(0, 0)}, Φ(r, θ) = (r cos θ, r sin θ) = (u, v).

In polar coordinates, the complex square mapping can be reexpressed as

S ∶ R>0 × R Ð→ R>0 × R, S(r, θ) = (r2 , 2θ) = (r̃, θ̃).

And the polar coordinate mapping also applies to the polar coordinates that
are output by the complex square mapping,

Φ ∶ R>0 × R Ð→ R2 /{(0, 0)}, Φ(r̃, θ̃) = (r̃ cos θ̃, r̃ sin θ̃) = (x, y).

Thus we have a commutative diagram

R>0 × R ──Φ──▸ R2 /{(0, 0)}
   │S               │T
   ▾                ▾
R>0 × R ──Φ──▸ R2 /{(0, 0)}.

In terms of differential forms and pullbacks we have the resulting diagram

Λ1 (R>0 × R) ◂──Φ∗── Λ1 (R2 /{(0, 0)})
      ▴                      ▴
    S∗│                      │T ∗
Λ1 (R>0 × R) ◂──Φ∗── Λ1 (R2 /{(0, 0)}).

Now to find T ∗ω, where ω = (x dy − y dx)/(x² + y²), recall that ω pulls back through the polar coordinate mapping to dθ̃, and recall that θ̃ = 2θ. Thus we have in the second diagram

d(2θ) ◂──Φ∗── T ∗ω
   ▴             ▴
 S∗│             │T ∗
  dθ̃  ◂──Φ∗──    ω
Since d(2θ) = 2 dθ, the sought-for pullback T ∗ω must be the (u, v)-form that pulls back through the polar coordinate mapping to 2 dθ. And so T ∗ω should be the double of ω, but with u and v in place of x and y,

T ∗ω = 2 (u dv − v du)/(u² + v²).
This is the value of T ∗ ω that we computed mechanically at the beginning
of this section. Indeed, note that this second derivation of T ∗ ω makes no
reference whatsoever to the formula T (u, v) = (u2 − v 2 , 2uv), only to the fact
that in polar coordinates the complex square mapping squares the radius and
doubles the angle.
Similarly, we can use these ideas to pull the area-form λ = dx ∧ dy back
through T . Indeed, dx ∧ dy pulls back through the polar coordinate mapping
to r̃ dr̃ ∧ dθ̃, which pulls back through S to r² d(r²) ∧ d(2θ) = 4r³ dr ∧ dθ. Thus
we have a commutative diagram

4r³ dr ∧ dθ ◂──Φ∗── T ∗λ
     ▴                 ▴
   S∗│                 │T ∗
 r̃ dr̃ ∧ dθ̃  ◂──Φ∗──    λ

So T ∗λ must pull back through the polar coordinate mapping to 4r³ dr ∧ dθ. Since the area-form du ∧ dv pulls back to r dr ∧ dθ, the answer is the area form du ∧ dv multiplied by 4r² in (u, v)-coordinates. That is, since r in (u, v)-coordinates is √(u² + v²),

T ∗λ = T ∗(dx ∧ dy) = 4(u² + v²) du ∧ dv.

This formula for T ∗λ can be verified directly by purely mechanical computation.

Exercises

9.9.1. Define S ∶ R2 Ð→ R2 by S(u, v) = (u + v, uv), calling the output (x, y). Let ω = x² dy + y² dx and λ = xy dx, forms on (x, y)-space.


(a) Compute ω ∧ λ, S ′ (u, v), and (use the pullback–determinant theorem)
S ∗ (ω ∧ λ).

(b) Compute S ∗ ω, S ∗ λ, and S ∗ ω ∧ S ∗ λ. How do you check the last of these? Which of the three commutative diagrams from this section is relevant
here?
(c) Compute dω and S ∗ (dω).
(d) Compute d(S ∗ ω). How do you check this? Which commutative dia-
gram is relevant?
(e) Define T ∶ R2 Ð→ R2 by T (s, t) = (s − t, s e^t ), calling the output (u, v). Compute T ∗ (S ∗ λ).

(f) What is the composite mapping S ○ T ? Compute (S ○ T )∗ λ. How do you check this, and which commutative diagram is relevant?
9.9.2. Recall the two forms from the beginning (and the end) of this section,
x dy − y dx
ω= , λ = dx ∧ dy.
x2 + y 2
Consider a mapping from the nonzero points of (u, v)-space to nonzero points
of (x, y)-space.
(x, y) = T (u, v) = (u/(u² + v²), −v/(u² + v²)).

As at the end of this section, in light of the fact that T is the complex reciprocal
mapping, determine what T ∗ ω and T ∗ λ must be. If you wish, confirm your
answers by computing them mechanically as at the beginning of this section.
9.9.3. Consider a differential form on the punctured (x, y)-plane,
µ = (x dx + y dy)/√(x² + y²).

(a) Pull µ back through the polar coordinate mapping from the end of this
section,

(x, y) = Φ(r̃, θ̃) = (r̃ cos θ̃, r̃ sin θ̃).


In light of the value of the pullback, what must be the integral ∫γ µ where γ
is a parametrized curve in the punctured (x, y)-plane?
(b) In light of part (a), pull µ back through the complex square mapping
from this section,
(x, y) = T (u, v) = (u2 − v 2 , 2uv),
using diagrams rather than relying heavily on computation. Check your an-
swer by computation if you wish.
(c) Similarly to part (a), pull µ back through the complex reciprocal map-
ping from the previous exercise,
(x, y) = T (u, v) = (u/(u² + v²), −v/(u² + v²)),

using diagrams. Check your answer by computation if you wish.


(d) Let k be an integer. The relation x + iy = (u + iv)^k determines (x, y)
as a function T (u, v). Pull the forms ω and λ from the previous exercise and
the form µ from this exercise back through T , with no reference to any ex-
plicit formula for T . The results should in particular reproduce your previous
answers for k = 2 and k = −1.

9.9.4. Let A = R3 − {0}. Let r be a fixed positive real number. Consider a 2-surface in A,

Φ ∶ [0, 2π] × [0, π] Ð→ A, Φ(θ, ϕ) = (r cos θ sin ϕ, r sin θ sin ϕ, r cos ϕ).

Consider also a 2-form on A,

ω = −(x/r) dy ∧ dz − (y/r) dz ∧ dx − (z/r) dx ∧ dy.

Compute the derivative matrix Φ′ (θ, ϕ), and use the pullback–determinant
theorem three times to compute the pullback Φ∗ ω. Compare your answer
to the integrand of the surface integral near the end of Section 9.1 used to
compute the volume of the sphere of radius r. (It follows that ω is the area-
form for the particular surface Φ in this exercise, but not that ω is a general
area-form for all surfaces.)

9.10 Change of Variable for Differential Forms

The definition of integration and the algebra of forms combine to make a


change of variable theorem for differential forms a triviality. First, a theorem
of independent interest allows us to replace any integral of a differential form
over a parametrized surface with an integral over the trivial parametrization
of the surface’s parameter domain.

Theorem 9.10.1 (Pullback theorem). Let A be an open subset of Rn . Let ω be a k-form on A and let Φ ∶ D Ð→ A be a k-surface in A. Define a
k-surface in Rk ,

∆D ∶ D Ð→ Rk , ∆D (u) = u for all u ∈ D.

Then
∫Φ ω = ∫_{∆D} Φ∗ ω.

Proof. As usual, just do the case of a one-term form, ω = f dxI . Then

∫Φ f dxI = ∫D (f ○ Φ) det Φ′I                        by definition, as in (9.14)
         = ∫_{∆D} (f ○ Φ) det Φ′I du1 ∧ ⋯ ∧ duk      by Exercise 9.5.4
         = ∫_{∆D} (f ○ Φ) Φ∗ dxI                     by Theorem 9.9.3
         = ∫_{∆D} Φ∗ (f dxI )                        by definition of pullback.   ⊓⊔


The general change of variable theorem for differential forms follows im-
mediately from the pullback theorem and the contravariance of the pullback.

Theorem 9.10.2 (Change of variable for differential forms). Let A be an open subset of Rn , and let B be an open subset of Rm . Let T ∶ A Ð→ B
be a smooth mapping. For every k-surface in A, Φ ∶ D Ð→ A, the composition
T ○ Φ ∶ D Ð→ B is thus a k-surface in B. Let ω be a k-form on B. Then

∫_{T○Φ} ω = ∫Φ T ∗ ω.

Proof. Let ∆D ∶ D Ð→ Rk be as above. Then

∫_{T○Φ} ω = ∫_{∆D} (T ○ Φ)∗ ω = ∫_{∆D} Φ∗ (T ∗ ω) = ∫Φ T ∗ ω.   ⊓⊔


The pullback theorem is essentially equivalent to the definition of integration once one has the pullback–determinant theorem. Thus, a logically
equivalent route to ours through this material is to define integration of a
k-form in k-space as ordinary integration, and integration of a k-form in n-
space for k < n via the pullback. Doing so would have been a little tidier (there
would not be two notions of integration when k = n whose compatibility needs
to be verified), but the approach here has the advantage that one can start
integrating immediately before developing all the algebra.
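(A computational aside, not part of the text: the change of variable theorem can be spot-checked on a concrete curve. The sketch below assumes Python with sympy; the mapping T, the curve γ, and the form ω here are arbitrary choices of this aside, not the data of Exercise 9.10.1 below.)

    import sympy as sp

    t, x1, x2 = sp.symbols('t x1 x2')
    gamma = (t, t**2)                          # a curve in (x1, x2)-space on [0, 1]
    T = lambda a, b: (a + b, a*b)              # a sample smooth mapping

    # left side: integrate omega = y1 dy2 over the composed curve T o gamma
    y1, y2 = T(*gamma)
    lhs = sp.integrate(y1 * sp.diff(y2, t), (t, 0, 1))

    # right side: integrate T*omega = (x1 + x2)(x2 dx1 + x1 dx2) over gamma
    coeffs = ((x1 + x2)*x2, (x1 + x2)*x1)      # coefficients of dx1, dx2
    pull = sum(c.subs({x1: gamma[0], x2: gamma[1]}) * sp.diff(g, t)
               for c, g in zip(coeffs, gamma))
    rhs = sp.integrate(pull, (t, 0, 1))
    print(lhs, rhs)                            # 27/20 27/20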

Exercise

9.10.1. Let T ∶ R2 Ð→ R2 be given by T (x1 , x2 ) = (x1² − x2², 2x1 x2 ), calling the output (y1 , y2 ).
Let γ be the curve γ ∶ [0, 1] Ð→ R2 given by γ(t) = (1, t) mapping the unit
interval into (x1 , x2 )-space, and let T ○ γ be the corresponding curve mapping
into (y1 , y2 )-space. Let ω = y1 dy2 , a 1-form on (y1 , y2 )-space.
(a) Compute T ○ γ, and then compute ∫T ○γ ω using formula (9.14).
(b) Compute T ∗ ω, the pullback of ω by T .
(c) Compute ∫γ T ∗ ω using formula (9.14). What theorem says that the
answer here is the same as (a)?
(d) Let λ = dy1 ∧ dy2 , the area form on (y1 , y2 )-space. Compute T ∗ λ.
(e) A rectangle in the first quadrant of (x1 , x2 )-space,

R = {(x1 , x2 ) ∶ a1 ≤ x1 ≤ b1 , a2 ≤ x2 ≤ b2 },

gets taken to some indeterminate patch B = T (R) by T . Find the area of B,


∫B λ, using (d). (This exercise abuses notation slightly, identifying R with its
natural parametrization and B with the corresponding surface T ○ R.)
(f) Why does this exercise require that R lie in the first quadrant? Can
the restriction be weakened?

9.11 Closed Forms, Exact Forms, and Homotopy


Let ω be a differential form. Recall the terminology that

ω is exact if ω = dλ for some λ

and
ω is closed if dω = 0.
The nilpotence of d (the rule d2 = 0 from Theorem 9.8.3) shows that every
exact form is closed. We now show that under certain conditions, the converse
is true as well, i.e., under certain conditions a closed differential form can be
antidifferentiated.
A homotopy of a set is a process of deforming the set to a single point,
the deformation taking place entirely within the original set. For example,
consider the open ball
A = {x ∈ Rn ∶ ∣x∣ < 1}.
A mapping that shrinks the ball to its center as one unit of time elapses is

h ∶ [0, 1] × A Ð→ A, h(t, x) = tx.

The idea geometrically is that at time t = 1, h is the identity mapping so


that h(1, A) = A, while at any intermediate time t ∈ (0, 1), h(t, A) = tA is a
scaled-down copy of the ball, and finally at time t = 0, h(0, A) = {0} and the

ball has shrunk to its center. (So here we have let time flow from t = 1 to t = 0
for convenience.)
However, the geometric story just told is slightly misleading. We could
replace the ball A in the previous example by all of Euclidean space Rn , and
the map
h ∶ [0, 1] × Rn Ð→ Rn , h(t, x) = tx
would still contract Rn to {0} in the sense that each point x ∈ Rn is moved
by h to 0 as t varies from 1 to 0. However, at any intermediate time t ∈ (0, 1),
h(t, Rn ) = tRn = Rn is still all of Euclidean space. Although every point
of Rn is moved steadily by h to 0, h does not shrink the set Rn as a whole
until the very end of the process, when space collapses instantaneously to a
point. Each point x of Rn is taken close to the origin once the time t is close
enough to 0, but the required smallness of t depends on x; for no positive t,
however close to 0, is all of Rn taken close to the origin simultaneously. The
relevant language here is that homotopy is a convergent process that need not
be uniformly convergent, analogously to how a continuous function need not
be uniformly continuous. The mental movie that we naturally have of a set
shrinking to a point depicts a uniformly convergent process, and so it doesn’t
fully capture homotopy.
For another example, consider the annulus
A = {x ∈ R2 ∶ 1 < ∣x∣ < 2}.
Plausibly there is no homotopy of the annulus, meaning that the annulus
cannot be shrunk to a point by a continuous process that takes place entirely
within the annulus. But proving that there is no homotopy of the annulus is
not trivial. We will return to this point in Exercise 9.11.1.
The formal definition of a homotopy is as follows.
Definition 9.11.1 (Homotopy, contractible set). Let A be an open subset
of Rn . Let ε be a positive number and let
B = (−ε, 1 + ε) × A,
an open subset of Rn+1 . A homotopy of A is a smooth mapping
h ∶ B Ð→ A
such that for some point p of A,
h(0, x) = p and h(1, x) = x for all x ∈ A.
An open subset A of Rn that has a homotopy is called contractible.
Again, the idea is that B is a sort of cylinder over A, and that at one end
of the cylinder the homotopy gives an undisturbed copy of A, while by the
other end of the cylinder the homotopy has compressed A down to a point.
This section proves the following result.

Theorem 9.11.2 (Poincaré). Let A be a contractible subset of Rn , and


let k ≥ 1 be an integer. Then every closed k-form on A is exact.

To prepare for the proof of the theorem, we consider a cylinder over A,

B = (−ε, 1 + ε) × A,

but for now we make no reference to the pending homotopy that will have B
as its domain. Recall that the differentiation operator d increments the degree
of a differential form. Now, by contrast, we define a linear operator that takes
differential forms on B and returns differential forms of one degree lower on A.
Let the coordinates on B be (t, x) = (t, x1 , . . . , xn ) with t viewed as the zeroth
coordinate.

Definition 9.11.3. For each positive integer k, define a linear mapping of


differential forms,

c ∶ Λk (B) Ð→ Λk−1 (A), k = 1, 2, 3, . . . ,

as follows: c acts on a one-term form that contains dt by integrating its com-


ponent function in the t-direction and suppressing its dt, and c annihilates
differential forms that don’t contain dt. That is, letting I denote (k − 1)-tuples
and J denote k-tuples, all tuples being from {1, . . . , n},

c (∑_I gI (t, x) dt dxI + ∑_J gJ (t, x) dxJ ) = ∑_I (∫_{t=0}^{1} gI (t, x)) dxI .

With c in hand, we have two degree-preserving mappings from differen-


tial forms on B to differential forms on A, the compositions of c and the
differentiation operator d in either order,

cd, dc ∶ Λk (B) Ð→ Λk (A), k = 1, 2, 3, . . . .

However, note that cd proceeds from Λk (B) to Λk (A) via Λk+1 (B), while dc
proceeds via Λk−1 (A). To analyze the two compositions, compute first that
for a one-term differential form that contains dt,
(cd)(g(t, x) dt dxI ) = c (∑_{i=1}^{n} Di g(t, x) dxi dt dxI )
                      = c (− ∑_{i=1}^{n} Di g(t, x) dt dx(i,I) )
                      = − ∑_{i=1}^{n} (∫_{t=0}^{1} Di g(t, x)) dx(i,I) ,

while, using the fact that xi -derivatives pass through t-integrals for the third
equality to follow,

(dc)(g(t, x) dt dxI ) = d ((∫_{t=0}^{1} g(t, x)) dxI )
                      = ∑_{i=1}^{n} Di (∫_{t=0}^{1} g(t, x)) dx(i,I)
                      = ∑_{i=1}^{n} (∫_{t=0}^{1} Di g(t, x)) dx(i,I) .

Thus cd + dc annihilates forms that contain dt. On the other hand, for a
one-term differential form without dt,

(cd)(g(t, x) dxJ ) = c (D0 g(t, x) dt dxJ + ∑_{j=1}^{n} Dj g(t, x) dx(j,J) )
                   = (∫_{t=0}^{1} D0 g(t, x)) dxJ
                   = (g(1, x) − g(0, x)) dxJ ,

while
(dc)(g(t, x) dxJ ) = d(0) = 0.
That is, cd + dc replaces each coefficient function g(t, x) in forms without dt
by g(1, x) − g(0, x), a function of x only.
To notate the effect of cd + dc more tidily, define the two natural mappings
from A to the cross sections of B where the pending homotopy of A will end
and where it will begin,

β0 , β1 ∶ A Ð→ B, β0 (x) = (0, x), β1 (x) = (1, x).

Because β0 and β1 have ranges where t is constant, and because they don’t
affect x, their pullbacks,

β0∗ , β1∗ ∶ Λk (B) Ð→ Λk (A), k = 0, 1, 2, . . . ,

act correspondingly by replacing t with a constant and dt with 0 while pre-


serving x and dx,

β0∗ (g(t, x) dt dxI ) = 0, β1∗ (g(t, x) dt dxI ) = 0

and

β0∗ (g(t, x) dxJ ) = g(0, x) dxJ , β1∗ (g(t, x) dxJ ) = g(1, x) dxJ .

It follows that our calculations can be rephrased as Poincaré’s identity,

(cd + dc)λ = (β1∗ − β0∗ )λ, λ ∈ Λk (B), k = 1, 2, 3, . . . .

With Poincaré’s identity established, we prove Poincaré’s theorem.



Proof (of Theorem 9.11.2). We have an open subset A of Rn , a point p of A,


a cylinder B = (−ε, 1 + ε) × A for some positive number ε, and a homotopy

h ∶ B Ð→ A.

So also we have the corresponding pullback

h∗ ∶ Λk (A) Ð→ Λk (B), k = 0, 1, 2, . . . .

Let k ≥ 1, and consider a closed form ω ∈ Λk (A). Then h∗ ω ∈ Λk (B) and


ch∗ ω ∈ Λk−1 (A). We show that ch∗ ω is an antiderivative of ω by computing the
quantity (cd + dc)h∗ω in two ways. First, because the pullback and derivative
operators commute and because dω = 0,

(cd + dc)h∗ ω = ch∗ dω + dch∗ ω = d(ch∗ ω).

Second, by Poincaré’s identity and the contravariance of the pullback,

(cd + dc)h∗ ω = (β1∗ − β0∗ )h∗ ω = ((h ○ β1 )∗ − (h ○ β0 )∗ )ω.

But (h ○ β1 )(x) = h(1, x) = x and (h ○ β0 )(x) = h(0, x) = p, i.e., h ○ β1 is


the identity mapping and h ○ β0 is a constant mapping, so that (h ○ β1 )∗ has
no effect on ω, while (h ○ β0 )∗ annihilates the dx’s of ω (which are present
because k ≥ 1), thus annihilating ω. In sum, the second computation gives ω.
So the computations combine to give

d(ch∗ ω) = ω.

That is, ω is exact, as desired. ⊓⊔


Note that this process of antidifferentiating ω by taking ch∗ ω moves from A


up to the larger space B and then back down to A. In terms of algebra, the
process inserts t’s into ω by pulling it back through the homotopy and then
strips them out in a different way by applying the c operator.
We end this section with an example. Consider any closed form on R2 ,

ω = f (x, y) dx + g(x, y) dy, D2 f = D1 g.

Pull ω back through the homotopy h(t, x, y) = (tx, ty) of R2 to get

h∗ ω = f (tx, ty) d(tx) + g(tx, ty) d(ty)


= (xf (tx, ty) + yg(tx, ty)) dt + tf (tx, ty) dx + tg(tx, ty) dy.

Apply c to h∗ω in turn to get

ch∗ω = ∫_{t=0}^{1} (xf (tx, ty) + yg(tx, ty)).

This function must have derivative ω. To verify that it does, compute that its first partial derivative is

D1 ch∗ω(x, y) = ∫_{t=0}^{1} (f (tx, ty) + xD1 (f (tx, ty)) + yD1 (g(tx, ty))).

By the chain rule and then by the fact that D1 g = D2 f , the first partial
derivative is therefore

D1 ch∗ω(x, y) = ∫_{t=0}^{1} (f (tx, ty) + xD1 f (tx, ty)t + yD1 g(tx, ty)t)
              = ∫_{t=0}^{1} f (tx, ty) + ∫_{t=0}^{1} t(xD1 f (tx, ty) + yD2 f (tx, ty)).

The last integral takes the form ∫_{t=0}^{1} uv′ where u(t) = t and v(t) = f (tx, ty). And so finally, integrating by parts gives

D1 ch∗ω(x, y) = ∫_{t=0}^{1} f (tx, ty) + tf (tx, ty)∣_{t=0}^{1} − ∫_{t=0}^{1} f (tx, ty)
              = f (x, y).

Similarly D2 ch∗ ω(x, y) = g(x, y), so that indeed

d(ch∗ ω) = f (x, y) dx + g(x, y) dy = ω.
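The same calculation is easy to spot-check by machine. The following sketch, assuming the Python library sympy and illustrative only (the particular closed form is ours), runs the recipe on f (x, y) = 2xy and g(x, y) = x², producing the potential x²y:

import sympy as sp

x, y, t = sp.symbols('x y t')

# omega = f dx + g dy with D2 f = D1 g (closed): here f = 2xy, g = x^2.
f = 2*x*y
g = x**2
assert sp.diff(f, y) == sp.diff(g, x)   # closedness

# ch^* omega = integral over t in [0,1] of x f(tx,ty) + y g(tx,ty).
integrand = x*f.subs({x: t*x, y: t*y}, simultaneous=True) \
            + y*g.subs({x: t*x, y: t*y}, simultaneous=True)
potential = sp.integrate(integrand, (t, 0, 1))
print(potential)                        # x**2*y
print(sp.diff(potential, x) == f)       # True
print(sp.diff(potential, y) == g)       # True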

Exercises

9.11.1. (a) Here is a special case of showing that a closed form is exact without
recourse to Poincaré’s theorem. A function f ∶ R3 Ð→ R is called homoge-
neous of degree k if

f (tx, ty, tz) = t^k f (x, y, z) for all t ∈ R and (x, y, z) ∈ R3 .

Such a function must satisfy Euler’s identity,

xD1 f + yD2 f + zD3 f = kf.

Suppose that ω = f1 dx + f2 dy + f3 dz is a closed 1-form whose coefficient


functions are all homogeneous of degree k where k ≥ 0. Show that ω = dφ
where
φ = (1/(k + 1))(xf1 + yf2 + zf3 ).
(Suggestion: first check only the dx term of dφ, remembering that ω is closed.
The other two terms will work out similarly by symmetry.)
(b) Here is a closed form that is not exact. Let

ω = (x dy − y dx)/(x² + y²),

a 1-form on the punctured plane A = R2 − {(0, 0)}. Show that ω is closed.


Compute that integrating ω around the counterclockwise unit circle,

γ ∶ [0, 2π] Ð→ A, γ(t) = (cos t, sin t),

gives a nonzero answer. Explain why this shows that there is no 0-form (i.e.,
function) θ on the punctured plane such that ω = dθ.
(c) Use part (b) to show that there cannot exist a homotopy of the punc-
tured plane. How does this nonexistence relate to the example of the annulus
at the beginning of this section?

9.11.2. (a) Let ω = f (x, y) dx ∧ dy be a form on R2 , so that dω = 0. Find and


confirm an antiderivative of ω.
(b) Let ω = f (x, y, z) dy∧dz+g(x, y, z) dz∧dx+h(x, y, z) dx∧dy be a closed
form on R3 . (Here h does not denote a homotopy.) Find an antiderivative of ω.

9.12 Cubes and Chains

Sections 9.7 through 9.9 introduced algebraic operators on differential forms:


the wedge product, the derivative, and the pullback. The next section will
introduce a geometric operator on surfaces. The first thing to do is specialize
the definition of a surface a bit. As usual, let [0, 1] denote the unit interval.
For k ≥ 0, the unit k-box is the Cartesian product

[0, 1]k = [0, 1] × ⋯ × [0, 1] = {(u1 , . . . , uk ) ∶ ui ∈ [0, 1] for i = 1, . . . , k}.

As mentioned in Section 9.3, when k = 0 this means the one-point set whose
point is ().

Definition 9.12.1 (Singular cube, standard cube). Let A be an open


subset of Rn . A singular k-cube in A is a surface whose parameter domain
is the unit k-box,
Φ ∶ [0, 1]k Ð→ A.
In particular, the standard k-cube is

∆k ∶ [0, 1]k Ð→ Rk , ∆k (u) = u for all u ∈ [0, 1]k .

As with Definition 9.1.1 of a surface, now a cube is by definition a mapping,


and in particular, a 0-cube is the parametrization of a point. In practice, we
often blur the distinction between a mapping and its image, and under this
blurring the word cube now encompasses noncubical objects such as a torus-
surface (which is a singular 2-cube in R3 ) and a solid sphere (a singular 3-cube
in R3 ). The next definition allows us to consider more than one cube at a time.
The purpose is to integrate over several cubes in succession, integrating over
each of them a prescribed number of times.

Definition 9.12.2 (Chain). Let A be an open subset of Rn . A k-chain in


A is a finite formal linear combination

C = ∑_s νs Φ(s) ,

where each νs is an integer and each Φ(s) is a singular k-cube in A. (The


surface subscript is in parentheses only to distinguish it from a component
function subscript.)

For example, if Φ, Ψ , and Γ are singular k-cubes in Rn then

2Φ − 3Ψ + 23Γ

is a k-chain in Rn . This k-chain is not the singular k-cube that maps points u
to 2Φ(u) − 3Ψ (u) + 23Γ (u) in Rn . The term formal linear combination in the
definition means that we don’t actually carry out any additions and scalings.
Rather, the coefficients νs are to be interpreted as integration multiplicities.
A k-chain, like a k-form, is a set of instructions.

Definition 9.12.3 (Integral of a k-form over a k-chain in n-space).


Let A be an open subset of Rn . Let

C = ∑_s νs Φ(s)

be a k-chain in A, and let ω be a k-form on A. Then the integral of ω over C is

∫_C ω = ∑_s νs ∫_{Φ(s)} ω.

This definition can be written more suggestively as

∫_{∑_s νs Φ(s)} ω = ∑_s νs ∫_{Φ(s)} ω.

Although C is a formal linear combination, the operations on the right of the


equality are literal addition and multiplication in R. For example, let a and b
be points in Rn , and let Φa and Φb be the corresponding 0-cubes. Then for
every 0-form on Rn , ω = f ∶ Rn Ð→ R,

∫_{Φb −Φa} ω = f (b) − f (a).

One can define predictable rules for addition and scalar multiplication (integer
scalars) of chains, all of which will pass through the integral sign tautologically.
Especially, the change of variable theorem for differential forms extends from
integrals over surfaces to integrals over chains,

∫_{T ○C} ω = ∫_C T ∗ω.

We will quote this formula in the proof of the general FTIC.


Also, if C is a chain in A and T ∶ A Ð→ B is a mapping, then we can
naturally compose them to get a chain in B by passing sums and constant
multiples through T . That is,
if C = ∑_s νs Φ(s) then T ○ C = ∑_s νs (T ○ Φ(s) ).
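To make the formalism concrete, here is a minimal computational sketch, illustrative only (the encoding and names are ours, not the text's): a k-chain is a list of (multiplicity, cube) pairs, and both integration and composition with T pass through the list, mirroring the definitions above.

from typing import Callable, List, Tuple

Cube = Callable                    # a singular k-cube, [0,1]^k -> R^n
Chain = List[Tuple[int, Cube]]     # formal sum: [(nu_s, Phi_s), ...]

def integrate_over_chain(integral_over_cube: Callable[[Cube], float],
                         chain: Chain) -> float:
    # The defining rule: pass the multiplicities through the integral sign.
    return sum(nu * integral_over_cube(phi) for nu, phi in chain)

def compose(T: Callable, chain: Chain) -> Chain:
    # T o C: push each cube forward through T, keeping the multiplicities.
    return [(nu, (lambda u, phi=phi: T(phi(u)))) for nu, phi in chain]

# Example: for 0-cubes Phi_a, Phi_b and the 0-form f(x) = x^2, the integral
# of f over the 0-chain Phi_b - Phi_a is f(b) - f(a) = 9 - 1 = 8.
f = lambda p: p[0]**2
Phi_a = lambda u: (1.0,)
Phi_b = lambda u: (3.0,)
print(integrate_over_chain(lambda phi: f(phi(())), [(1, Phi_b), (-1, Phi_a)]))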

Exercises
9.12.1. Let A be an open subset of Rn . Consider the inner-product-like func-
tion (called a pairing)
⟨ , ⟩ ∶ {k-chains in A} × {k-forms on A} Ð→ R
defined by the rule

⟨C, ω⟩ = ∫_C ω for all suitable k-chains C and k-forms ω.

Show that this inner product is bilinear, meaning that for all suitable chains
C and Ci , all suitable forms ω and ωi , and all constants ci ,
⟨∑_i ci Ci , ω⟩ = ∑_i ci ⟨Ci , ω⟩

and
⟨C, ∑_i ci ωi ⟩ = ∑_i ci ⟨C, ωi ⟩.
It makes no sense to speak of symmetry of this pairing, because the argu-
ments cannot be exchanged.
Do you think the pairing is nondegenerate, meaning that for every fixed
chain C, if ⟨C, ω⟩ = 0 for all forms ω then C must be 0, and for every fixed
form ω, if ⟨C, ω⟩ = 0 for all chains C then ω must be 0?
9.12.2. Let A be an open subset of Rn , let B be an open subset of Rm , and
let k ≥ 0. Every smooth mapping T ∶ A Ð→ B gives rise via composition to a
corresponding pushforward mapping from k-surfaces in A to k-surfaces in B,
T∗ ∶ {k-surfaces in A} Ð→ {k-surfaces in B}, T∗ Φ = T ○ Φ.
In more detail, since a k-surface in A takes the form Φ ∶ D Ð→ A where D ⊂ Rk
is a parameter domain, the pushforward mapping is

T∗ ∶ (Φ ∶ D Ð→ A) ↦ (T ○ Φ ∶ D Ð→ B).

Using the pairing-notation of the previous exercise, which result from earlier
in this chapter can be renotated as
⟨T∗ Φ, ω⟩ = ⟨Φ, T ∗ ω⟩ for all suitable Φ and ω?
Note that the renotation shows that the pushforward and pullback are like a
pair of adjoint operators in the sense of linear algebra.

9.13 Geometry of Chains: The Boundary Operator


This section defines an operator that takes k-chains to (k − 1)-chains. The
idea is to traverse the edge of each singular k-cube in the chain, with suitable
multiplicity and orientation. The following definition gives three rules that
say how to do so. The first rule reduces taking the boundary of a k-chain
to taking the boundary of its constituent singular k-cubes. The second rule
reduces taking the boundary of a singular k-cube to taking the boundary
of the standard k-cube. The third rule, giving the procedure for taking the
boundary of the standard k-cube, is the substance of the definition. It is best
understood by working through specific cases.

Definition 9.13.1 (Boundary). Let A be an open subset of Rn . For each


k ≥ 1, define the boundary mapping

∂ ∶ {k-chains in A} Ð→ {(k − 1)-chains in A}

by the following properties:


(1) For every k-chain ∑_s νs Φ(s) ,

    ∂ (∑_s νs Φ(s) ) = ∑_s νs ∂Φ(s) .

(2) For every singular k-cube Φ,

∂Φ = Φ ○ ∂∆k .

(The composition here is of the sort defined at the end of the previous
section.)
(3) Define mappings from the standard (k−1)-cube to the faces of the standard
k-cube as follows: for every i ∈ {1, . . . , n} and α ∈ {0, 1}, the mapping to
the face where the ith coordinate equals α is

∆ki,α ∶ [0, 1]k−1 Ð→ [0, 1]k ,

given by

∆ki,α (u1 , . . . , uk−1 ) = (u1 , . . . , ui−1 , α, ui , . . . , uk−1 ).

Then
∂∆k = ∑_{i=1}^{k} ∑_{α=0}^{1} (−1)^{i+α} ∆ki,α . (9.16)

In property (2) the composition symbol “○” has been generalized a little
from its ordinary usage. Since ∂∆k is a chain ∑ µs Ψ(s) , the composition Φ○∂∆k
is defined as the corresponding chain ∑ µs Φ ○ Ψ(s) . The compositions in the
sum make sense, because by property (3), each Ψ(s) maps [0, 1]k−1 into [0, 1]k .
To remember the definition of ∆ki,α in (9.16), read its name as:

Of k variables, set the ith to α,

or just set the ith variable to α. The idea of formula (9.16) is that for each of
the directions in k-space (i = 1, . . . , k), the standard k-cube has two faces with
normal vectors in the ith direction (α = 0, 1), and we should take these two
faces with opposite orientations in order to make both normal vectors point
outward. Unlike differentiation, which increments the degree of the form it
acts on, the boundary operator decrements chain dimension.
For example, the boundary of the standard 1-cube is given by (9.16),

∂∆1 = −∆11,0 + ∆11,1 .

That is, the boundary is the right endpoint of [0, 1] with a plus and the left
endpoint with a minus. (See Figure 9.11. The figures for this section show the
images of the various mappings involved, with symbols added as a reminder
that the images are being traversed by the mappings.) One consequence of
this is that the familiar formula from the one-variable fundamental theorem
of integral calculus,
∫_{0}^{1} f ′ = f (1) − f (0),
is now expressed suggestively in the notation of differential forms as

∫_{∆1} df = ∫_{∂∆1} f.

As for the boundary of a singular 1-cube γ ∶ [0, 1] Ð→ Rn (i.e., a curve in


space) with γ(0) = a and γ(1) = b, property (2) of the boundary definition
gives
∂γ = γ ○ ∂∆1 = −γ ○ ∆11,0 + γ ○ ∆11,1 .
Thus the boundary is the curve’s endpoint b with a plus and the start-point a
with a minus. The last example of Section 9.4 now also takes on a more
suggestive expression,
∫_γ df = ∫_{∂γ} f.

Figure 9.11. Standard 1-cube and its boundary

The boundary of the standard 2-cube is again given by (9.16),

∂∆2 = −∆21,0 + ∆21,1 + ∆22,0 − ∆22,1 .



This chain traverses the boundary square of [0, 1]2 once counterclockwise.
(See Figure 9.12.) Next consider a singular 2-cube that parametrizes the unit
disk,
Φ ∶ [0, 1]2 Ð→ R2 , Φ(r, θ) = (r cos 2πθ, r sin 2πθ).
By property (2), ∂Φ = Φ ○ ∂∆2 . This chain traverses the boundary circle
once counterclockwise, two radial traversals cancel, and there is a degener-
ate mapping to the centerpoint. (See Figure 9.13.) Changing to Φ(r, θ) =
(r cos 2πθ, −r sin 2πθ) also parametrizes the unit disk, but now ∂Φ traverses
the boundary circle clockwise.

Figure 9.12. Standard 2-cube and its boundary

Figure 9.13. Boundary of a singular 2-cube

The boundary of the standard 3-cube is, by (9.16),

∂∆3 = −∆31,0 + ∆31,1 + ∆32,0 − ∆32,1 − ∆33,0 + ∆33,1 .

This chain traverses the faces of [0, 1]3 , oriented positively if we look at them
from outside the solid cube. (See Figure 9.14.)
The second boundary of the standard 2-cube works out by cancellation to

Figure 9.14. Boundary of the standard 3-cube

∂ 2 ∆2 = 0.

(See the left side of Figure 9.15.) And the second boundary of the standard
3-cube similarly is
∂ 2 ∆3 = 0.
(See the right side of Figure 9.15.) These two examples suggest that the no-
tational counterpart to the nilpotence of d is also true,

∂ 2 = 0.

The nilpotence of ∂ is indeed a theorem, and it is readily shown by a double


sum calculation in which terms cancel pairwise. (See Exercise 9.13.8.) But it
will also follow immediately from the main theorem of the chapter, the general
FTIC, which states that in a precise sense, the differentiation operator d and
the boundary operator ∂ are complementary. Their complementary nature is
why they are notated so similarly.
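The pairwise cancellation behind ∂² = 0 can also be watched by machine. The following sketch, illustrative only and assuming Python (the encoding is ours), represents each face map by the insertion that it performs, forms the signed double faces of the standard k-cube via (9.16), and confirms that every term of ∂²∆k cancels:

from collections import Counter
from itertools import product

def face(u, i, alpha):
    # Delta^k_{i,alpha} applied to u: insert alpha as the ith coordinate.
    return u[:i-1] + (alpha,) + u[i-1:]

def double_boundary(k):
    # Net signed terms of the chain d(d(Delta^k)); each composed face map
    # is identified by its effect on a tuple of k-2 symbolic variables.
    u = tuple('u%d' % m for m in range(1, k-1))
    net = Counter()
    for i, alpha in product(range(1, k+1), (0, 1)):
        for j, beta in product(range(1, k), (0, 1)):
            image = face(face(u, j, beta), i, alpha)
            net[image] += (-1)**(i+alpha) * (-1)**(j+beta)
    return net

for k in (2, 3, 4, 5):
    print(k, all(c == 0 for c in double_boundary(k).values()))   # all True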
Because integration is invariant under reparametrization, you needn’t be
too formal in computing boundaries once you understand how they work on
standard cubes. The boundary of the unit square (the 2-cube), for example, is
adequately described as its edge traversed counterclockwise at unit speed, and
so the boundary of every singular 2-cube Φ from the unit square into Rn is
simply the restriction of Φ to the edge of the square with appropriate traversal,
or any orientation-preserving reparametrization thereof. In particular, every

Figure 9.15. Second boundaries

rectangle in R2 can be obtained by scaling and translating the unit square in


an orientation-preserving fashion, so the boundary of such a rectangle is, as
one would hope, its edge, counterclockwise. More generally, a singular 2-cube
in R3 is a sort of parametrized membrane floating in space, and its boundary
is just its edge, traversed in the direction inherited from the parametrization,
as we saw for the disk. Without the parametrization, neither direction of
traversing the membrane’s edge in Rn for n > 2 is naturally preferable to
the other. Similarly in R3 , the boundary of the unit cube is its six faces,
oriented to look positive from outside the cube. In other words, an acceptable
coordinate system for a boundary face of the cube is two orthonormal vectors
whose cross product is an outward unit normal to the cube. The boundary of
every singular 3-cube Φ ∶ [0, 1]3 Ð→ R3 is the restriction of Φ to the boundary
faces of [0, 1]3 .
For example, consider the surface
Φ ∶ [0, a] × [0, 2π] × [0, b] Ð→ R3
given by the cylindrical coordinate mapping,
Φ(r, θ, z) = (r cos θ, r sin θ, z).
Although the parametrizing box is not literally [0, 1]3 , we grant ourselves
license to treat the upper limits of the parameters as 1 in determining the
signs in the formula
∂Φ = Φ ○ (−∆31,0 + ∆31,a + ∆32,0 − ∆32,2π − ∆33,0 + ∆33,b ).
Here we also grant ourselves license to use chain-addition inside the parenthe-
ses rather than compose Φ six times. The boundary components, unsigned,

are

(Φ ○ ∆31,0 )(θ, z) = (0, 0, z),


(Φ ○ ∆31,a )(θ, z) = (a cos θ, a sin θ, z),
(Φ ○ ∆32,0 )(r, z) = (r, 0, z),
(Φ ○ ∆32,2π )(r, z) = (r, 0, z),
(Φ ○ ∆33,0 )(r, θ) = (r cos θ, r sin θ, 0),
(Φ ○ ∆33,b )(r, θ) = (r cos θ, r sin θ, b).

The first component maps to the z-axis from 0 to b, which is trivial as a


2-surface in the sense that integrating any 2-form over it will give 0. The
second component maps to the vertical outside of the cylinder (the label on
the can), and the positive sign that it carries connotes that the associated
normal vector at each point of the vertical outside of the cylinder points
outward. The third and fourth components map to a solid a × b rectangle in
the (x, z)-plane, inside the solid cylinder; these are not trivial as 2-surfaces,
but the components carry opposite signs and so they cancel. The last two
components map to the bottom and the top of the cylinder, and the signs
that they carry connote that in each case the natural normal vector points
outward. Thus the boundary of the cylinder is as expected.
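These components are mechanical enough to generate by machine. The following sketch, illustrative only and assuming the Python library sympy, substitutes each face of the parameter box into Φ and prints it with the sign from (9.16):

import sympy as sp

r, theta, z, a, b = sp.symbols('r theta z a b', positive=True)
Phi = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta), z])

# The six faces: (slot i, variable, value), lower then upper limit.
faces = [(1, r, 0), (1, r, a), (2, theta, 0), (2, theta, 2*sp.pi),
         (3, z, 0), (3, z, b)]
for slot, var, val in faces:
    sign = (-1)**(slot + (0 if val == 0 else 1))   # (-1)^(i+alpha)
    print(sign, Phi.subs(var, val).T)
# The theta = 0 and theta = 2*pi faces both print (r, 0, z), exhibiting
# the cancellation described above.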
For another example, let B3 denote the solid unit ball in R3 . Let a, b, c
be positive numbers, and consider the surface that dilates the ball to the
associated solid ellipsoid

Φ ∶ B3 Ð→ R3 , Φ(x, y, z) = (ax, by, cz).

Since the parameter domain of Φ is not a box, Φ is not a singular 3-cube


even under the looser grammar that we have granted ourselves. Thus, to
compute the boundary of Φ formally, we should preparametrize B3 from a
box using the spherical coordinate system, Ψ ∶ [0, 1] × [0, 2π] × [0, π] Ð→ B3 ,
and then understand that ∂Φ really means ∂(Φ ○ Ψ ) = Φ ○ Ψ ○ ∂∆3 , where the
notation ∆3 is being stretched a little as in the previous example, because the
parameter domain of Ψ isn’t literally [0, 1]3 . Inevitably, the boundary of the
ball works out to be its spherical skin, although it is unfortunately oriented so
that the natural normal vector points inward in consequence of our spherical
coordinate system reversing orientation. (See Exercise 9.13.3.) Consequently
the boundary of the ellipsoid is its skin as well.

Exercises

9.13.1. Define a singular k-cube called the simplex, Φ ∶ [0, 1]k Ð→ Rk , by


Φ(u1 , . . . , uk ) = (u1 , (1 − u1 )u2 , (1 − u1 )(1 − u2 )u3 , . . . , ∏_{i=1}^{k−1} (1 − ui )uk ).

(a) Let (x1 , . . . , xk ) = Φ(u1 , . . . , uk ). Show that ∑_{i=1}^{k} xi = 1 − ∏_{i=1}^{k} (1 − ui ).
(b) Show that the image of Φ lies in the set (also called the simplex)

S = {(x1 , . . . , xk ) ∶ x1 ≥ 0, . . . , xk ≥ 0, ∑_{i=1}^{k} xi ≤ 1}.

(In fact, the image is all of the simplex, but showing this would take us too
far afield.)
(c) For each of the values k = 1, 2, 3, do the following. Calculate ∂Φ (the
result is a (k − 1)-chain). Graph ∂Φ by graphing each (k − 1)-cube in the chain
and indicating its coefficient (+1 or −1) beneath the graph. Each graph should
show [0, 1]k−1 and Rk .

9.13.2. Describe the boundary of the hemispherical shell H ∶ D Ð→ R3 where D is the unit disk in R2 and H(x, y) = (x, y, √(1 − x² − y²)). (You might parametrize D from [0, 1]2 and then compute the boundary of the composition, or you might simply push ∂D from this section through H.)

9.13.3. Describe the boundary of the solid unit upper hemisphere

H = {(x, y, z) ∈ R3 ∶ x² + y² + z² ≤ 1, z ≥ 0}.

(Since H is being described as a set, parametrize it.)

9.13.4. Describe the boundary of the paraboloid Φ ∶ D Ð→ R3 where again D


is the unit disk in R2 and

Φ(u, v) = (u, v, u² + v²).

9.13.5. Describe the boundary of Φ ∶ [0, 2π] × [0, π] Ð→ R3 where

Φ(θ, φ) = (cos θ sin φ, sin θ sin φ, cos φ).

(Before going straight to calculations, it will help to understand the geometry


of the problem, especially the interpretation of θ and φ in the image-space R3 .)

9.13.6. Describe the boundary of Φ ∶ [0, 1] × [0, 2π] × [0, π] Ð→ R3 where

Φ(ρ, θ, φ) = (ρ cos θ sin φ, ρ sin θ sin φ, ρ cos φ).

(Again, first make sure that you understand the geometry of the problem,
especially the interpretation of the parametrizing variables in the image-
space.) How does this exercise combine with the result ∂ 2 = 0 to bear on
Exercise 9.13.5?

9.13.7. Fix constants 0 < a < b. Describe the boundary of Φ ∶ [0, 2π] × [0, 2π] ×
[0, 1] Ð→ R3 where Φ(u, v, t) = ((b + at cos v) cos u, (b + at cos v) sin u, at sin v).
(First understand the geometry, especially the interpretation of u, v, and t in
the image-space.)

9.13.8. This exercise gives a self-contained proof that the double boundary
operator is identically zero. It suffices to show this for the double boundary
on the standard k-cube, where k ≥ 2.
(a) Explain why the double boundary is

∂²∆k = ∑_{i=1}^{k} ∑_{α=0}^{1} ∑_{j=1}^{k−1} ∑_{β=0}^{1} (−1)^{i+j+α+β} ∆^k_{i,α} ○ ∆^{k−1}_{j,β} .

(b) Show that if i ≤ j then we have

∆^k_{i,α} ○ ∆^{k−1}_{j,β} (u1 , . . . , uk−2 ) = (u1 , . . . , ui−1 , α, ui , . . . , uj−1 , β, uj , . . . , uk−2 ),

with α in the ith slot and β in the (j + 1)st slot, whereas if i > j then we have

∆^k_{i,α} ○ ∆^{k−1}_{j,β} (u1 , . . . , uk−2 ) = (u1 , . . . , uj−1 , β, uj , . . . , ui−2 , α, ui−1 , . . . , uk−2 ),

with β in the jth slot and α in the ith slot. Thus the double boundary of the standard k-cube consists of two sums, written as formal sums of functions of the variables u1 , . . . , uk−2 ,

∂²∆k = ∑_{i=1}^{k−1} ∑_{j=i}^{k−1} (−1)^{i+j+α+β} (u1 , . . . , ui−1 , α, ui , . . . , uj−1 , β, uj , . . . , uk−2 )
     + ∑_{i=1}^{k} ∑_{j=1}^{i−1} (−1)^{i+j+α+β} (u1 , . . . , uj−1 , β, uj , . . . , ui−2 , α, ui−1 , . . . , uk−2 ).

(c) Explain why the second double sum can instead be written as

∑_{j=1}^{k−1} ∑_{i=j+1}^{k} (−1)^{i+j+α+β} (u1 , . . . , uj−1 , β, uj , . . . , ui−2 , α, ui−1 , . . . , uk−2 ).

(d) Convince yourself that it is valid to replace i by i+1 in this new second
sum, and that doing so gives
− ∑_{j=1}^{k−1} ∑_{i=j}^{k−1} (−1)^{i+j+α+β} (u1 , . . . , uj−1 , β, uj , . . . , ui−1 , α, ui , . . . , uk−2 ),

now with α in the (i + 1)st slot.


(e) Convince yourself that it is valid to exchange the roles of i and j, and
to exchange the roles of α and β, and that doing so gives
− ∑_{i=1}^{k−1} ∑_{j=i}^{k−1} (−1)^{i+j+α+β} (u1 , . . . , ui−1 , α, ui , . . . , uj−1 , β, uj , . . . , uk−2 ).

This cancels the first sum, so we are done.



9.14 The General Fundamental Theorem of Integral Calculus
As mentioned in the previous section, the algebraic encoding d of the deriva-
tive (an analytic operator) and the algebraic encoding ∂ of the boundary (a
geometric operator) are complementary with respect to integration:
Theorem 9.14.1 (General FTIC). Let A be an open subset of Rn . Let C
be a k-chain in A, and let ω be a (k − 1)-form on A. Then

∫_C dω = ∫_{∂C} ω. (9.17)

Before proving the theorem, we study two examples. First, suppose that
k = n = 1, and that the 1-chain C is a singular 1-cube Φ ∶ [0, 1] Ð→ R taking 0
and 1 to some points a and b. Then the theorem says that for every suitable
smooth function f ,
∫_{a}^{b} f ′(x) dx = f (b) − f (a).
This is the one-variable fundamental theorem of integral calculus. Thus, what-
ever else we are doing, we are indeed generalizing it.
Second, to study a simple case involving more than one variable, suppose
that C = ∆2 (the standard 2-cube) and ω = f (x, y) dy for some smooth function
f ∶ [0, 1]2 Ð→ R. The derivative on the left side of (9.17) works out to

dω = D1 f (x, y) dx ∧ dy,

Exercise 9.5.4 says that we may drop the wedges from the integral of this
2-form over the full-dimensional surface ∆2 in 2-space to obtain a Chapter 6
function-integral, and so the left side of (9.17) works out to

∫_{∆2} dω = ∫_{∆2} D1 f (x, y) dx ∧ dy = ∫_{[0,1]²} D1 f.

Meanwhile, on the right side of (9.17), the boundary ∂∆2 has four pieces, but
on the two horizontal pieces dy is zero because y is constant. Thus only the
integrals over the two vertical pieces contribute, giving

∫_{∂∆2} ω = ∫_{u=0}^{1} f (1, u) − ∫_{u=0}^{1} f (0, u) = ∫_{u=0}^{1} (f (1, u) − f (0, u)).

By the one-variable fundamental theorem, the integrand is

f (1, u) − f (0, u) = ∫_{t=0}^{1} D1 f (t, u),

and so by Fubini’s theorem, the integral is

∫_{u=0}^{1} ∫_{t=0}^{1} D1 f (t, u) = ∫_{[0,1]²} D1 f.

Thus both sides of (9.17) work out to ∫[0,1]2 D1 f , making them equal, as
desired, and the general FTIC holds in this case. The first step of its proof is
essentially the same process as in this example.
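A numeric spot-check of this example, illustrative only and assuming the scipy library (the particular choice f (x, y) = x²y is ours): both sides of (9.17) over the standard 2-cube come out to 1/2.

from scipy.integrate import dblquad, quad

f = lambda x, y: x**2 * y
D1f = lambda x, y: 2*x*y

# Left side: integral of D1 f over [0,1]^2.
left, _ = dblquad(lambda y, x: D1f(x, y), 0, 1, 0, 1)

# Right side: only the two vertical edges of the boundary contribute,
# since dy vanishes on the horizontal edges.
right, _ = quad(lambda u: f(1, u) - f(0, u), 0, 1)

print(left, right)   # both 0.5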

Proof. Recall that we want to establish formula (9.17), ∫C dω = ∫∂C ω, where


C is a k-chain and ω is a (k − 1)-form. Begin with the special case that C is
the standard k-cube,
C = ∆k ,
and ω takes the form ω = f (x) dx1 ∧ ⋯ ∧ dx̂j ∧ ⋯ ∧ dxk , where x = (x1 , . . . , xk ) and the ̂ means to omit the term. Thus

ω = f (x) dxJ where J = (1, . . . , ĵ, . . . , k).

To evaluate the left side ∫C dω of (9.17), we need to compute dω. In this special
case,
dω = Dj f (x) dxj ∧ dxJ = (−1)^{j−1} Dj f dx(1,...,k) ,
and so by Exercise 9.5.4, the left side reduces to the function-integral of the
jth partial derivative over the unit box,

∫_{∆k} dω = (−1)^{j−1} ∫_{∆k} Dj f dx(1,...,k) = (−1)^{j−1} ∫_{[0,1]^k} Dj f. (9.18)

To evaluate the right side ∫∂C ω of (9.17), we need to examine the boundary
∂∆k = ∑_{i=1}^{k} ∑_{α=0}^{1} (−1)^{i+α} ∆ki,α ,

where ∆ki,α (u1 , . . . , uk−1 ) = (u1 , . . . , ui−1 , α, ui , . . . , uk−1 ). Note that

(∆ki,α )′ =
⎡ 1 0 ⋯ 0 0 ⋯ 0 ⎤
⎢ 0 1 ⋯ 0 0 ⋯ 0 ⎥
⎢ ⋮ ⋮ ⋱ ⋮ ⋮   ⋮ ⎥
⎢ 0 0 ⋯ 1 0 ⋯ 0 ⎥
⎢ 0 0 ⋯ 0 0 ⋯ 0 ⎥
⎢ 0 0 ⋯ 0 1 ⋯ 0 ⎥
⎢ ⋮ ⋮   ⋮ ⋮ ⋱ ⋮ ⎥
⎣ 0 0 ⋯ 0 0 ⋯ 1 ⎦ .
This derivative matrix is k × (k − 1), consisting of the identity matrix except
that zeros have been inserted at the ith row, displacing everything from there
downward. Meanwhile, recall that J = (1, . . . , ̂j, . . . , k), where the omitted
index j is fixed throughout this calculation. It follows that as the index i of
summation varies, the determinant of the Jth rows of the matrix is


det(∆ki,α )′J = {1 if i = j, 0 if i ≠ j}.

That is, the integral of ω = f (x) dxJ can be nonzero only for the two terms
in the boundary chain ∂∆k with i = j, parametrizing the two boundary faces
whose normal vectors point in the direction missing from dxJ :

∫_{∂∆k} f (x) dxJ = ∫_{(−1)^{j+1}(∆kj,1 −∆kj,0 )} f (x) dxJ
                 = (−1)^{j+1} ∫_{[0,1]^{k−1}} (f ○ ∆kj,1 ) ⋅ 1 − (f ○ ∆kj,0 ) ⋅ 1.

Here the last equality follows from the definition of integration over chains
and the defining formula (9.14). For every point u = (u1 , . . . , uk−1 ) ∈ [0, 1]k−1 ,
the integrand can be rewritten as an integral of the jth partial derivative by
the one-variable fundamental theorem of integral calculus,

(f ○ ∆kj,1 − f ○ ∆kj,0 )(u)
= f (u1 , . . . , uj−1 , 1, uj , . . . , uk−1 ) − f (u1 , . . . , uj−1 , 0, uj , . . . , uk−1 )
= ∫_{t∈[0,1]} Dj f (u1 , . . . , uj−1 , t, uj , . . . , uk−1 ).

Therefore, the right side of (9.17) is

∫_{∂∆k} ω = (−1)^{j+1} ∫_{u∈[0,1]^{k−1}} ∫_{t∈[0,1]} Dj f (u1 , . . . , uj−1 , t, uj , . . . , uk−1 ).

By Fubini’s theorem this is equal to the right side of (9.18), and so the general
FTIC is proved in the special case.
The rest of the proof is handled effortlessly by the machinery of forms and
chains. A general (k − 1)-form on [0, 1]k is

ω = ∑_{j=1}^{k} ωj , each ωj = fj (x) dx1 ∧ ⋯ ∧ dx̂j ∧ ⋯ ∧ dxk .

Each ωj is a form of the type covered by the special case, and dω = ∑j dωj .
So, continuing to integrate over the standard k-cube, and citing the special
case just shown for the crucial third equality,

∫_{∆k} dω = ∫_{∆k} ∑_j dωj = ∑_j ∫_{∆k} dωj = ∑_j ∫_{∂∆k} ωj = ∫_{∂∆k} ∑_j ωj = ∫_{∂∆k} ω.

Thus the theorem holds for a general form when C = ∆k .


For a singular k-cube Φ in A and for every (k − 1)-form ω on A, we now
have

∫_Φ dω = ∫_{∆k} Φ∗(dω)     by the pullback theorem
       = ∫_{∆k} d(Φ∗ω)     since derivative commutes with pullback
       = ∫_{∂∆k} Φ∗ω       since the result holds on ∆k
       = ∫_{Φ○∂∆k} ω       by the change of variable theorem for differential forms, extended to chains
       = ∫_{∂Φ} ω          by definition of boundary.

So the result holds for singular cubes.


Finally, for a k-chain C = ∑s νs Φ(s) in A and for every (k − 1)-form ω on A,

∫_C dω = ∫_{∑_s νs Φ(s)} dω = ∑_s νs ∫_{Φ(s)} dω = ∑_s νs ∫_{∂Φ(s)} ω,

with the third equality due to the result for singular cubes, and the calculation
continues

∑_s νs ∫_{∂Φ(s)} ω = ∫_{∑_s νs ∂Φ(s)} ω = ∫_{∂(∑_s νs Φ(s))} ω = ∫_{∂C} ω.

This completes the proof. ⊓⊔


The beauty of this argument is that the only analytic results that it uses
are the one-variable FTIC and Fubini’s theorem, and the only geometry that
it uses is the definition of the boundary of a standard k-cube. All the twist-
ing and turning of k-surfaces and their boundaries in n-space is filtered out
automatically by the algebra of differential forms.
Computationally, the general FTIC will sometimes give you a choice be-
tween evaluating two integrals, one of which may be easier to work. Note that
the integral of lower dimension may not be the preferable one, however; for
example, integrating over a solid 3-cube may be quicker than integrating over
the six faces of its boundary.
Conceptually the general FTIC is exciting because it allows the possi-
bility of evaluating an integral over a region by antidifferentiating and then
integrating only over the boundary of the region instead.

Exercises

9.14.1. Similarly to the second example before the proof of the general FTIC,
show that the theorem holds when C = ∆3 and ω = f (x, y, z) dz ∧ dx.

9.14.2. Prove as a corollary to the general FTIC that ∂ 2 = 0, in the sense


that ∫_{∂²C} ω = 0 for all forms ω.

9.14.3. Let C be a k-chain in Rn , f ∶ Rn Ð→ R a function, and ω a (k − 1)-


form on Rn . Use the general FTIC to prove a generalization of the formula
for integration by parts,

∫_C f dω = ∫_{∂C} f ω − ∫_C df ∧ ω.

9.14.4. Let Φ be a 4-chain in R4 with boundary ∂Φ. Suitably specialize the


general FTIC to prove the identity

∫_{∂Φ} f1 dy ∧ dz ∧ dw + f2 dz ∧ dw ∧ dx + f3 dw ∧ dx ∧ dy + f4 dx ∧ dy ∧ dz
= ∫_Φ (D1 f1 − D2 f2 + D3 f3 − D4 f4 ) dx ∧ dy ∧ dz ∧ dw.

Here the order of the variables is (x, y, z, w).

9.15 Classical Change of Variable Revisited


The most technical argument in these notes is the proof of the classical change
of variable theorem (Theorem 6.7.1) in Sections 6.8 and 6.9. The analytic re-
sults contributing to the proof were the one-variable change of variable the-
orem and Fubini’s theorem, and of these, the one-variable change of variable
theorem is a consequence of the one-variable FTIC. Meanwhile, the analytic
results contributing to the proof of the general FTIC were the one-variable
FTIC and Fubini’s theorem. Thus the proofs of the multivariable classical
change of variable theorem and of the general FTIC rely on the same analy-
sis. However, the proof of the general FTIC was easy. Now, with the general
FTIC in hand, we revisit the classical change of variable theorem, sketching
the light, graceful proof that it deserves in turn.
The first issue to address is that the classical change of variable theorem
has been quoted in this chapter, and so if we now propose to revisit its proof
then we must take care not to argue in a circle. In fact, our only uses of the clas-
sical change of variable theorem in this chapter were to prove that integrals
of functions over surfaces are independent of reparametrization (the end of
Section 9.1) and that integrals of differential forms over surfaces are indepen-
dent of orientation-preserving reparametrization (Exercise 9.5.5). The proof
of the general FTIC requires neither the classical change of variable theorem
nor independence of parametrization. Thus this chapter could have proceeded
without the classical change of variable theorem, but then requiring us to re-
member that all of its results were provisionally parametrization-dependent.
A schematic layout of the ideas is shown in Figure 9.16.
Nonetheless, even the parametrization-dependent general FTIC, which we
may grant ourselves without the classical change of variable theorem, is a
powerful result, and in particular it leads to the conceptually different proof
of the classical change of variable theorem. Once the theorem is proved, we

Figure 9.16. Layout of the main results as established so far

may conclude that this chapter’s results are independent of parametrization


after all. Being patient gives us the same results more easily. The provisional
new layout of the ideas is shown in Figure 9.17. The improved organization is
clear.
Let J be a box in Rn , and consider a smooth change of variable mapping
Φ ∶ J Ð→ Rn .
(See Figure 9.18.) Assume that
det Φ′ > 0 everywhere on J.
To prove the classical change of variable theorem, we need to show that the
following formula holds for every smooth function f ∶ Φ(J) Ð→ R:

∫_{Φ(J)} f = ∫_J (f ○ Φ) ⋅ det Φ′ .

View the mapping Φ as a singular n-cube in Rn . (Since J need not be the unit
box, the definition of a singular n-cube is being extended here slightly to allow
any box as the domain. The boundary operator extends correspondingly, as
discussed at the end of Section 9.13.) Consider the trivial parametrization of
the image of the cube,

Figure 9.17. Provisional layout of the main results after this section

∆Φ(J) ∶ Φ(J) Ð→ Rn , ∆Φ(J) (x) = x for all x ∈ Φ(J).


Let ∆J be the trivial parametrization of J. In the language of differential
forms, the formula that we need to prove is

∫_{∆Φ(J)} ω = ∫_{∆J} Φ∗ω where ω = f (x) dx. (9.19)

Here x = (x1 , . . . , xn ) and dx = dx1 ∧ ⋯ ∧ dxn , and the pullback on the right
side of the equality is Φ∗ ω = (f ○ Φ)(x) det Φ′ (x) dx. (Note that applying the
pullback theorem (Theorem 9.10.1) reduces the desired formula to

∫_{∆Φ(J)} ω = ∫_Φ ω,

i.e., to independence of parametrization, the one result in this chapter that


relied on the classical change of variable theorem.) The starting idea of this
section is to try to derive (9.19) from the general FTIC.
To see how this might be done, begin by reviewing the derivation of the
one-variable change of variable theorem from the one-variable FTIC, display-
ing the calculation in two parts,

∫_{φ(a)}^{φ(b)} f = ∫_{φ(a)}^{φ(b)} F ′ = F (φ(b)) − F (φ(a)) (9.20)

Figure 9.18. The singular cube Φ

and
∫_{a}^{b} (f ○ φ) ⋅ φ′ = ∫_{a}^{b} (F ○ φ)′ = (F ○ φ)(b) − (F ○ φ)(a). (9.21)
Since the right sides are equal, so are the left sides, giving the theorem. Here the first version of the one-variable FTIC (Theorem 6.4.1) provides the antiderivative F (x) = ∫_{φ(a)}^{x} f of f .

Now, starting from the integral on the left side of the desired equal-
ity (9.19), attempt to pattern-match the calculation (9.20) without yet wor-
rying about whether the steps are justified or even meaningful,

∫_{∆Φ(J)} ω = ∫_{∆Φ(J)} dλ = ∫_{∂∆Φ(J)} λ. (9.22)

Similarly, the integral on the right side of (9.19) looks like the integral at the
beginning of the calculation (9.21), so pattern-match again,

∫_{∆J} Φ∗ω = ∫_{∆J} d(Φ∗λ) = ∫_{∂∆J} Φ∗λ. (9.23)

Thus it suffices to show that the right sides are equal,

∫_{∂∆Φ(J)} λ = ∫_{∂∆J} Φ∗λ.

This formula looks like the desired (9.19) but with (n−1)-dimensional integrals
of (n−1)-forms. Perhaps we are discovering a proof of the multivariable change
of variable theorem by induction on the number of variables. But we need to
check whether the calculation is sensible.
Just as the one-variable calculation rewrote f as F ′ , the putative multi-
variable calculation has rewritten ω as dλ, but this needs justification. Recall
that ω = f (x) dx. Although Φ(J) is not a box, an application of Theorem 6.4.1
to the first variable shows that in the small, f takes the form

f (x1 , x2 , . . . , xn ) = D1 F (x1 , x2 , . . . , xn ).

Consequently the λ in our calculation can be taken as

λ = F (x) dx2 ∧ ⋯ ∧ dxn ,

provided that whatever we are doing is on a small enough scale. So now we


assume that the box J is small enough that the argument of this paragraph
applies at each point of the nonbox Φ(J). We can do this by partitioning the
original box J finely enough into subboxes J ′ and then carrying out the proof
for each subbox. Alternatively, by Proposition 6.9.3 we may assume that f is
identically 1 and then take F (x) = x1 . Or, to avoid any specific calculation
we may assume that the box J is small enough that Φ(J) is contractible, and
then ω has an antiderivative λ by Poincaré’s theorem, Theorem 9.11.2. Once
we have λ, the objects in (9.23) are noncontroversial and the steps are clear,
except perhaps the tacit exchange of the derivative and the pullback in the
first step of pattern-matching. The remaining issue is what to make of the
symbol-pattern ∫∆Φ(J) dλ = ∫∂∆Φ(J) λ in (9.22). Recall that ∆Φ(J) is the trivial
parametrization of Φ(J). However, in dimension n > 1, Φ(J) is not a box, so
∆Φ(J) is not a cube, and so ∂∆Φ(J) has no meaning. Even if we know the
topological boundary of Φ(J) (the points arbitrarily close to Φ(J) and to its
complement), the topological boundary inherits no canonical traversal from
the trivial parametrization. The calculation is not sensible.
A 1999 article by Peter Lax in the American Mathematical Monthly shows
how to solve this problem. Recall that we are working with a mapping

Φ ∶ J Ð→ Rn .

The box J is compact, and hence so is its continuous image Φ(J). Therefore
some large box B contains them both. If J is small enough then because
det Φ′ > 0 on J, it follows from some analysis that Φ extends to a mapping

Ψ ∶ B Ð→ Rn

such that
• Ψ is the original Φ on J,
• Ψ takes the complement of J in B to the complement of Φ(J) in B,
• Ψ is the identity mapping on the boundary of B.
(See Figure 9.19.) Furthermore, the n-form ω on the original Φ(J) can be
modified into a form ω on the larger set B such that
• ω is the original ω on Φ(J),
• ω = 0 essentially everywhere off the original Φ(J).
And now that the nonbox Φ(J) has been replaced by the box B, the calcula-
tion of the antiderivative form λ such that ω = dλ works in the large.
Let ∆B denote the trivial parametrization of B. Then the properties of Ψ
and ω show that the desired equality (9.19) has become

Figure 9.19. The extension of Φ to B

∫_{∆B} ω = ∫_{∆B} Ψ∗ω,

the integrals on both sides now being taken over the same box B. Again
pattern-matching the one-variable proof shows that the integral on the left
side is
∫_{∆B} ω = ∫_{∆B} dλ = ∫_{∂∆B} λ
and the integral on the right side is

∫_{∆B} Ψ∗ω = ∫_{∆B} d(Ψ∗λ) = ∫_{∂∆B} Ψ∗λ,

where everything here makes sense. Thus the problem is reduced to proving
that
∫_{∂∆B} λ = ∫_{∂∆B} Ψ∗λ.
And now the desired equality is immediate: since Ψ is the identity mapping on
the boundary of B, the pullback Ψ ∗ in the right-side integral of the previous
display does nothing, and the two integrals are equal. (See Exercise 9.15.1 for a
slight variant of this argument.) The multivariable argument has ended exactly
as the one-variable argument did. We did not need to argue by induction after
all.
In sum, the general FTIC lets us side-step the traditional proof of the
classical change of variable theorem, by expanding the environment of the
problem to a larger box and then reducing the scope of the question to the
larger box’s boundary. On the boundary there is no longer any difference
between the two quantities that we want to be equal, and so we are done.
The reader may well object that the argument here is only heuristic, and
that there is no reason to believe that its missing technical details will be
any less onerous than those of the usual proof of the classical change of variable

theorem. The difficulty of the usual proof is that it involves nonboxes, while
the analytic details of how this argument proceeds from the nonbox Φ(J) to
a box B were not given. Along with the extensions of Φ and ω to B being
invoked, the partitioning of J into small enough subboxes was handwaved.
Furthermore, the change of variable mapping Φ is assumed here to be smooth,
whereas in Theorem 6.7.1 it need only be C 1 . But none of these matters is
serious. A second article by Lax, written in response to such objections, shows
how to take care of them. Although some analysis is admittedly being elided
here, the new argument nonetheless feels more graceful to the author of these
notes than the older one.

Exercise

9.15.1. Show that in the argument at the end of this section, we could instead
reason about the integral on the right side that

∫ Ψ ∗ ω = ∫ dλ = ∫ λ.
∆B Ψ ∂Ψ

Thus the problem is reduced to proving that ∫∂∆B λ = ∫∂Ψ λ. Explain why the
desired equality is immediate.

9.16 The Classical Theorems


The classical integration theorems of vector calculus arise from specializing n
and k in the general FTIC. As already noted, the values n = k = 1 give the
one-variable FTIC,
∫_{a}^{b} (df /dx) dx = f (b) − f (a).
If k = 1 but n is left arbitrary then the result is familiar from Section 9.4. For
a curve γ ∶ [0, 1] Ð→ Rn , let a = γ(0) and b = γ(1). Then

∫_γ (∂f /∂x1 ) dx1 + ⋯ + (∂f /∂xn ) dxn = f (b) − f (a).

Setting n = 2, k = 2 gives Green’s theorem: Let A be an open subset


of R2 . For every singular 2-cube Φ in A and functions f, g ∶ A Ð→ R,

∬_Φ (∂g/∂x − ∂f /∂y) dx ∧ dy = ∫_{∂Φ} f dx + g dy.

The double integral sign is used on the left side of Green’s theorem to em-
phasize that the integral is two-dimensional. Naturally the classical statement
doesn’t refer to a singular cube or include a wedge. Instead, the idea classi-
cally is to view Φ as a set in the plane and require a traversal of ∂Φ (also

Figure 9.20. Traversing the boundary in Green’s theorem

viewed as a set) such that Φ is always to the left as one moves along ∂Φ.
Other than this, the boundary integral is independent of how the boundary is
traversed because the whole theory is invariant under orientation-preserving
reparametrization. (See Figure 9.20.)
Green’s theorem has two geometric interpretations. To understand them,
first let A ⊂ R2 be open and think of a vector-valued mapping F⃗ ∶ A Ð→ R2
as defining a fluid flow in A. Define two related scalar-valued functions on A,
curl F⃗ = D1 F2 − D2 F1 and div F⃗ = D1 F1 + D2 F2 .
These are two-dimensional versions of the quantities from exercises 9.8.4
and 9.8.5. Now consider a point p in A. Note that curl F⃗ (p) and div F⃗ (p)
depend only on the derivatives of F⃗ at p, not on F⃗ (p) itself. So replacing F⃗
by F⃗ − F⃗ (p), we may assume that F⃗ (p) = 0, i.e., the fluid flow is stationary
at p. Recall that D1 F2 is the rate of change of the vertical component of F
with respect to change in the horizontal component of its input, and D2 F1 is
the rate of change of the horizontal component of F with respect to change
in the vertical component of its input. The left side of Figure 9.21 shows a
scenario in which the two terms D1 F2 and −D2 F1 of (curl F⃗ )(p) are positive.
The figure illustrates why curl F⃗ is interpreted as measuring the vorticity of F⃗
at p, its tendency to rotate a paddle wheel at p counterclockwise. Similarly,
D1 F1 is the rate of change of the horizontal component of F with respect
to change in the horizontal component of its input, and D2 F2 is the rate of
change of the vertical component of F with respect to change in the verti-
cal component of its input. The right side of Figure 9.21 shows a scenario in
which the terms of (div F⃗ )(p) are positive. The figure illustrates why div F⃗
is viewed as measuring the extent to which fluid is spreading out from p, i.e.,
how much fluid is being pumped into or drained out of the system at the
point. Specifically, the left side of the figure shows the vector field

F⃗ (x, y) = (−y, x)

whose curl and divergence at the origin are

(curl F⃗ )(0) = 2, (div F⃗ )(0) = 0,

and the right side shows (with some artistic license taken to make the figure
legible rather than accurate) the vector field

F⃗ (x, y) = (x, y)

whose curl and divergence at the origin are

(curl F⃗ )(0) = 0, (div F⃗ )(0) = 2.

Figure 9.21. Positive curl and positive divergence

For the two geometric interpretations of Green's theorem, introduce the notation

ds⃗ = (dx, dy), dn⃗ = (dy, −dx), dA = dx ∧ dy.

The form-vectors ds⃗ and dn⃗ on ∂Φ are viewed respectively as differential increment around the boundary and differential outward normal (see Exercise 9.16.1), while dA is differential area. Then setting F⃗ = (f, g) and F⃗ = (g, −f ) respectively shows that Green's theorem says that

∬_Φ curl F⃗ dA = ∫_{∂Φ} F⃗ ⋅ ds⃗ and ∬_Φ div F⃗ dA = ∫_{∂Φ} F⃗ ⋅ dn⃗.

The resulting two interpretations are


the net counterclockwise vorticity of F⃗ throughout Φ
equals the net flow of F⃗ counterclockwise around ∂Φ

and
the net positive rate of creation of fluid by F⃗ throughout Φ
equals the net flux of F⃗ outward through ∂Φ.
These interpretations appeal strongly to physical intuition.
We can also bring dimensional analysis to bear on the integrals in Green’s
theorem. Again view the vector field F⃗ as a velocity field describing a fluid
flow. Thus each component function of F⃗ carries units of length over time
(for instance, m/s). The partial derivatives that make up curl F⃗ and div F⃗
are derivatives with respect to space-variables, so the curl and the divergence
carry units of reciprocal time (1/s). The units of the area-integral on the left
side of Green’s theorem are thus area over time (1/s ⋅ m2 = m2 /s), as are the
units of the path-integral on the right side (m/s ⋅ m = m2 /s as well). Thus both
integrals measure area per unit of time. If the fluid is incompressible then
area of fluid is proportional to mass of fluid, and so both integrals essentially
measure fluid per unit of time: the amount of fluid being created throughout
the region per unit of time, and the amount of fluid passing through the
boundary per unit of time; or the amount of fluid circulating throughout the
region per unit of time, and the amount of fluid flowing along the boundary
per unit of time.
The physical interpretations of divergence and curl will be discussed more
carefully in the next section.
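Here is a numeric illustration of both forms of Green's theorem, assuming numpy and scipy and not part of the text: for F⃗ (x, y) = (−y, x) on the unit disk, curl F⃗ = 2 and div F⃗ = 0, so the circulation around the boundary should be 2π and the outward flux should be 0.

import numpy as np
from scipy.integrate import quad

# Circulation form: net vorticity = 2 * area(disk) = 2*pi, and the
# boundary integral of F . ds around the counterclockwise unit circle
# matches it.
circ, _ = quad(lambda t: (-np.sin(t))*(-np.sin(t)) + np.cos(t)*np.cos(t),
               0, 2*np.pi)
print(circ, 2*np.pi)   # both ~6.2832

# Flux form: div F = 0, matching the outward flux F . dn through the circle.
flux, _ = quad(lambda t: (-np.sin(t))*np.cos(t) - np.cos(t)*(-np.sin(t)),
               0, 2*np.pi)
print(flux)            # ~0.0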
Setting n = 3, k = 2 gives Stokes’s theorem: Let A be an open subset
of R3 . For a singular 2-cube Φ in A and functions f, g, h ∶ A Ð→ R,

∬_Φ (∂h/∂y − ∂g/∂z) dy ∧ dz + (∂f /∂z − ∂h/∂x) dz ∧ dx + (∂g/∂x − ∂f /∂y) dx ∧ dy
= ∫_{∂Φ} f dx + g dy + h dz.

Introduce the notation

ds⃗ = (dx, dy, dz) and dn⃗ = (dy ∧ dz, dz ∧ dx, dx ∧ dy),

and for a vector-valued mapping F⃗ ∶ R3 Ð→ R3 define

curl F⃗ = (D2 F3 − D3 F2 , D3 F1 − D1 F3 , D1 F2 − D2 F1 ).

Then setting F⃗ = (f, g, h) shows that Stokes’s theorem is

∬_Φ curl F⃗ ⋅ dn⃗ = ∫_{∂Φ} F⃗ ⋅ ds⃗.

As with Green’s theorem, the classical statement doesn’t refer to a singular


cube or include a wedge. Instead, Φ is an orientable two-dimensional set in
space, and its boundary ∂Φ is traversed counterclockwise about its normal
vectors. The integrals in the previous display are both independent of how Φ
and ∂Φ are parametrized, provided that the geometry is as just described.

To interpret Stokes’s theorem, think of a mapping F⃗ ∶ R3 Ð→ R3 as de-


scribing a fluid flow in space. The mapping curl F⃗ is interpreted as measuring
the local vorticity of F⃗ around each positive coordinate direction. The form-vector dn⃗ on Φ is viewed as differential outward normal, while ds⃗ on ∂Φ is
vector dn on Φ is viewed as differential outward normal, while ds on ∂Φ is
viewed as differential increment around the boundary. Thus the interpreta-
tion of Stokes’s theorem is a 3-dimensional version of the first interpretation
of Green’s theorem,
the net tangent vorticity of F⃗ throughout Φ
equals the net flow of F⃗ around ∂Φ.

Setting n = 3, k = 3 gives the divergence theorem (or Gauss’s the-


orem): Let A be an open subset of R3 . For a singular 3-cube Φ in A and
functions f, g, h ∶ A Ð→ R,

∭_Φ (∂f /∂x + ∂g/∂y + ∂h/∂z) dx ∧ dy ∧ dz = ∬_{∂Φ} f dy ∧ dz + g dz ∧ dx + h dx ∧ dy.

Introduce the notation


dV = dx ∧ dy ∧ dz,
and for a vector-valued mapping F⃗ ∶ R3 Ð→ R3 define

div F⃗ = D1 F1 + D2 F2 + D3 F3 .

Then setting F⃗ = (f, g, h) shows that the divergence theorem is

∭_Φ div F⃗ dV = ∬_{∂Φ} F⃗ ⋅ dn⃗.

Thus the interpretation of the divergence theorem is a 3-dimensional version


of the second interpretation of Green’s theorem,
the net positive creation of fluid by F⃗ throughout Φ
equals the net flux of F⃗ outward through ∂Φ.
Again, the classical theorem views Φ and ∂Φ as sets, as long as whatever
parametrization of ∂Φ is used to compute the right-side integral has the same
orientation as the boundary of the parametrization of Φ used to compute the
left-side integral.
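A numeric illustration, assuming numpy and scipy and not part of the text: for F⃗ (x, y, z) = (x, y, z) on the solid unit ball, div F⃗ = 3, so the volume integral is 3 ⋅ (4π/3) = 4π, while on the unit sphere F⃗ ⋅ n = 1 and the flux is the sphere's area, again 4π.

import numpy as np
from scipy.integrate import dblquad

# Flux integral in spherical coordinates: F . n dA = 1 * sin(phi) dphi dtheta.
flux, _ = dblquad(lambda phi, theta: np.sin(phi), 0, 2*np.pi, 0, np.pi)
print(flux, 4*np.pi)   # both ~12.566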

Exercises

9.16.1. (a) Let γ ∶ [0, 1] Ð→ R2 , t ↦ γ(t) be a curve, and recall the form-vectors on R2 , ds⃗ = (dx, dy) and dn⃗ = (dy, −dx). Compute the pullbacks γ∗(ds⃗) and γ∗(dn⃗) and explain why these are interpreted as differential tangent and normal vectors to γ.

(b) Let $\gamma : [0, 1] \to \mathbb{R}^3$, $t \mapsto \gamma(t)$ be a curve and $\Phi : [0, 1]^2 \to \mathbb{R}^3$,
$(u, v) \mapsto \Phi(u, v)$ a surface, and recall the form-vectors on $\mathbb{R}^3$, $\vec{ds} = (dx, dy, dz)$ and
$\vec{dn} = (dy \wedge dz,\ dz \wedge dx,\ dx \wedge dy)$. Compute the pullbacks $\gamma^*(\vec{ds})$ and $\Phi^*(\vec{dn})$ and
explain why these are interpreted respectively as differential tangent vector
to γ and differential normal vector to Φ.

9.16.2. Use Green’s theorem to show that for a planar region Φ,

$$\operatorname{area}(\Phi) = \int_{\partial\Phi} x\,dy = -\int_{\partial\Phi} y\,dx.$$

Thus one can measure the area of a planar set by traversing its bound-
ary. (This principle was used to construct ingenious area-measuring machines
called planimeters before Green’s theorem was ever written down.)
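Discretizing $\int_{\partial\Phi} x\,dy$ over a polygonal approximation of the boundary gives the classical shoelace rule; the sketch below (illustrative only, assuming the numpy library) recovers the area of the unit disk from boundary samples alone, a planimeter in software.

import numpy as np

def boundary_area(xs, ys):
    # Trapezoid-rule discretization of the boundary integral of x dy over
    # a closed counterclockwise polygon; algebraically this is the
    # shoelace formula.
    x_next = np.roll(xs, -1)
    y_next = np.roll(ys, -1)
    return np.sum(0.5 * (xs + x_next) * (y_next - ys))

t = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
print(boundary_area(np.cos(t), np.sin(t)))  # approximately pi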

9.16.3. Let H be the upper unit hemispherical shell,

$$H = \{(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 = 1,\ z \geq 0\}.$$

Define a vector-valued function on $\mathbb{R}^3$,

$$\vec{F}(x, y, z) = (x + y + z,\ xy + yz + zx,\ xyz).$$

Use Stokes’s theorem to calculate $\iint_H \operatorname{curl}\vec{F} \cdot \vec{dn}$.

9.16.4. Use the divergence theorem to evaluate

$$\int_{\partial H} x^2\,dy \wedge dz + y^2\,dz \wedge dx + z^2\,dx \wedge dy,$$

where ∂H is the boundary of the solid unit hemisphere

$$H = \{(x, y, z) \in \mathbb{R}^3 : x^2 + y^2 + z^2 \leq 1,\ z \geq 0\}.$$

(Thus ∂H is the union of the unit disk in the (x, y)-plane and the unit upper
hemispherical shell.) Feel free to cancel terms by citing symmetry if you’re
confident of what you’re doing.

9.16.5. Let g and h be functions on $\mathbb{R}^3$. Recall the operator $\nabla = (D_1, D_2, D_3)$,
which takes scalar-valued functions to vector-valued functions. As usual, define
the Laplacian operator to be $\Delta = D_{11} + D_{22} + D_{33}$. From an earlier exercise,
$\Delta = \operatorname{div} \circ \operatorname{grad}$.
(a) Prove that $\operatorname{div}(g\,\nabla h) = g\,\Delta h + \nabla g \cdot \nabla h$.
(b) If D is a closed compact subset of $\mathbb{R}^3$ with positively oriented boundary ∂D, prove that

$$\iiint_D (g\,\Delta h + \nabla g \cdot \nabla h)\,dV = \iint_{\partial D} g\,\nabla h \cdot \vec{dn}.$$

(Here n is the unit outward normal to D and ∇h⋅n is the directional derivative
of h in the direction of n.) Interchange g and h and subtract the resulting
formula from the first one to get

$$\iiint_D (g\,\Delta h - h\,\Delta g)\,dV = \iint_{\partial D} (g\,\nabla h - h\,\nabla g) \cdot \vec{dn}.$$

These two formulas are Green’s identities.


(c) Assume that h is harmonic, meaning that it satisfies the harmonic
equation ∆h = 0.
Take g = h and use Green’s first identity to conclude that if h = 0 on the
boundary ∂D then h = 0 on all of D.
Take g = 1 and use Green’s second identity to show that

$$\iint_{\partial D} \nabla h \cdot \vec{dn} = 0.$$

What does this say about harmonic functions and flux?

9.17 Divergence and Curl in Polar Coordinates


The picture-explanations given in the previous section to interpret the diver-
gence and the curl are not entirely satisfying. Working with the polar coor-
dinate system further quantifies the ideas and makes them more coherent by
applying to both operators in the same way.
Rather than study the divergence and the curl of a vector field F̃ at a
general point p, we may study the divergence and the curl of the modified
vector field
$$F(x) = \tilde{F}(x + p) - \tilde{F}(p)$$
at the convenient particular point 0, at which the value of F is 0 as well. That
is, we may normalize the point p to be 0 by prepending a translation of the
domain, and we also may normalize F (0) to 0 by postpending a translation
of the range. With this in mind, let $A \subset \mathbb{R}^2$ be an open set that contains the
origin, and let F be a continuous vector field on A that is stationary at the
origin,

$$F = (f_1, f_2) : A \to \mathbb{R}^2, \qquad F(0) = 0.$$
At every point other than the origin, F resolves into a radial component and
an angular component. Specifically,

F = Fr + Fθ ,

where

Fr = fr r̂, fr = F ⋅ r̂, r̂ = (cos θ, sin θ) = (x, y)/∣(x, y)∣,


Fθ = fθ θ̂, fθ = F ⋅ θ̂, θ̂ = r̂× = (− sin θ, cos θ) = (−y, x)/∣(x, y)∣.

(Recall that the unary cross product $(x, y)^\times = (-y, x)$ in $\mathbb{R}^2$ rotates vectors
90 degrees counterclockwise.) Here fr is positive if Fr points outward and
negative if Fr points inward, and fθ is positive if Fθ points counterclockwise
and negative if Fθ points clockwise. Since F (0) = 0, the resolution of F into
radial and angular components extends continuously to the origin, fr (0) =
fθ (0) = 0, so that Fr (0) = Fθ (0) = 0 even though r̂ and θ̂ are undefined at
the origin.
The goal of this section is to express the divergence and the curl of F
at the origin in terms of the polar coordinate system derivatives that seem
naturally suited to describe them, the radial derivative of the scalar radial
component of F ,
$$D_r f_r(0) = \lim_{r \to 0^+} \frac{f_r(r\cos\theta, r\sin\theta)}{r},$$
and the radial derivative of the scalar angular component of F ,
$$D_r f_\theta(0) = \lim_{r \to 0^+} \frac{f_\theta(r\cos\theta, r\sin\theta)}{r}.$$
However, matters aren’t as simple here as one might hope. For one thing,
the limits are stringent in the sense that they must always exist and take
the same values regardless of how θ behaves as r → 0+ . Also, although F
is differentiable at the origin if its vector radial and angular components Fr
and Fθ are differentiable at the origin, the converse is not true. So first we
need sufficient conditions for the converse, i.e., sufficient conditions for the
components to be differentiable at the origin. Necessary conditions are always
easier to find, so Proposition 9.17.1 will do so, and then Proposition 9.17.2 will
show that the necessary conditions are sufficient. The conditions in question
are the Cauchy–Riemann equations,

$$D_1 f_1(0) = D_2 f_2(0),$$
$$D_1 f_2(0) = -D_2 f_1(0).$$

When the Cauchy–Riemann equations hold, we can describe the divergence


and the curl of F at the origin in polar terms, as desired. This will be the
content of Theorem 9.17.3.
Before we proceed to the details, a brief geometric discussion of the
Cauchy–Riemann equations may be helpful. The equation D1 f1 = D2 f2 de-
scribes the left side of Figure 9.22, in which the radial component of F on
the horizontal axis is growing at the same rate as the radial component on
the vertical axis. Similarly, the equation D2 f1 = −D1 f2 describes the right
side of the figure, in which the angular component on the vertical axis is
growing at the same rate as the angular component on the horizontal axis.
Combined with differentiability at the origin, these two conditions will imply
that moving outward in any direction, the radial component of F is growing
at the same rate as it is on the axes, and similarly for the angular component.
Thus the two limits that define the radial derivatives of the radial and angular

components of F at 0 (these were displayed in the previous paragraph) are
indeed independent of θ. An example of this situation, with radial and angular
components both present, is shown in Figure 9.23.

Figure 9.22. Geometry of the Cauchy–Riemann equations individually

Figure 9.23. Geometry of the Cauchy–Riemann equations together

As mentioned, the necessity of the Cauchy–Riemann equations is the natural starting point.

Proposition 9.17.1 (Polar differentiability implies differentiability
and the Cauchy–Riemann equations). Let $A \subset \mathbb{R}^2$ be an open set that
contains the origin, and let F be a continuous vector field on A that is stationary at the origin,

$$F = (f_1, f_2) : A \to \mathbb{R}^2, \qquad F(0) = 0.$$

Assume that the vector radial and angular components Fr and Fθ of F are
differentiable at the origin. Then F is differentiable at the origin, and the
Cauchy–Riemann equations hold at the origin.
For example, the vector field F (x, y) = (x, 0) is differentiable at the origin,
but since D1 f1 (0) = 1 and D2 f2 (0) = 0, it does not satisfy the Cauchy–
Riemann equations, and so the derivatives of the radial and angular compo-
nents of F at the origin do not exist.
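To see this failure directly (a short verification added for illustration): for $F(x, y) = (x, 0)$ the scalar radial component is

$$f_r = F \cdot \hat{r} = x\cos\theta = r\cos^2\theta, \qquad\text{so}\qquad \frac{f_r(r\cos\theta, r\sin\theta)}{r} = \cos^2\theta,$$

which depends on θ, and hence the limit defining $D_r f_r(0)$ does not exist.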
Proof. As already noted, the differentiability of F at the origin is immedi-
ate, because F = Fr + Fθ and the sum of differentiable mappings is again
differentiable. We need to establish the Cauchy–Riemann equations.
The radial component Fr is stationary at the origin, and we are given
that it is differentiable at the origin. By the componentwise nature of differ-
entiability, the first component Fr,1 of Fr is differentiable at the origin, and
so necessarily both partial derivatives of Fr,1 exist at 0. Since Fr,1 vanishes
on the y-axis, the second partial derivative is 0. Thus the differentiability
criterion for the first component of Fr is

$$F_{r,1}(h, k) - h\,D_1 F_{r,1}(0) = o(h, k).$$

To further study the condition in the previous display, use the formula



$$F_r(x, y) = \begin{cases} \dfrac{f_r(x, y)}{|(x, y)|}\,(x, y) & \text{if } (x, y) \neq 0, \\ 0 & \text{if } (x, y) = 0, \end{cases}$$
to substitute h fr (h, k)/∣(h, k)∣ for Fr,1 (h, k). Also, because Fθ is angular,
Fθ,1 vanishes on the x-axis, and so D1 Fθ,1 (0) = 0; thus, since f1 = Fr,1 + Fθ,1 ,
we may substitute D1 f1 (0) for D1 Fr,1 (0) as well. Altogether the condition
becomes
$$h\bigl(f_r(h, k)/|(h, k)| - D_1 f_1(0)\bigr) = o(h, k).$$
A similar argument using the second component Fr,2 of Fr shows that

$$k\bigl(f_r(h, k)/|(h, k)| - D_2 f_2(0)\bigr) = o(h, k).$$

And so we have shown that the first Cauchy–Riemann equation holds and a
little more,
$$\lim_{(h,k) \to 0} \frac{f_r(h, k)}{|(h, k)|} = D_1 f_1(0) = D_2 f_2(0).$$

For the second Cauchy–Riemann equation we could essentially repeat the
argument just given, but a quicker way is to consider the radial component
of the vector field $-F^\times = f_\theta\,\hat{r} - f_r\,\hat{\theta}$,


$$(-F^\times)_r(x, y) = \begin{cases} \dfrac{f_\theta(x, y)}{|(x, y)|}\,(x, y) & \text{if } (x, y) \neq 0, \\ 0 & \text{if } (x, y) = 0. \end{cases}$$

This radial component is differentiable at the origin since it is a rotation of the
angular component of the original F, which we are given to be differentiable
at the origin. And $-F^\times = (f_2, -f_1)$ in Cartesian coordinates, so as just argued,

$$\lim_{(h,k) \to 0} \frac{f_\theta(h, k)}{|(h, k)|} = D_1 f_2(0) = -D_2 f_1(0).$$

This last display encompasses the second Cauchy–Riemann equation at the origin.
Note that the argument has used the full strength of the hypotheses, i.e.,
it has used the differentiability at the origin of each component function of $F_r$
and each component function of $F_\theta$. ⊓⊔

As mentioned, the converse to Proposition 9.17.1 holds too.

Proposition 9.17.2 (Differentiability and the Cauchy–Riemann equations imply polar differentiability). Let $A \subset \mathbb{R}^2$ be an open set that contains the origin, and let F be a continuous vector field on A that is stationary
at the origin,

$$F = (f_1, f_2) : A \to \mathbb{R}^2, \qquad F(0) = 0.$$
Assume that F is differentiable at the origin, and assume that the Cauchy–
Riemann equations hold at the origin. Then the vector radial and angular
components Fr and Fθ are differentiable at the origin.

Proof. Let $a = D_1 f_1(0)$ and let $b = D_1 f_2(0)$. By the Cauchy–Riemann
equations, also $a = D_2 f_2(0)$ and $b = -D_2 f_1(0)$, so that the Jacobian matrix of F
at 0 is

$$F'(0) = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}.$$
The condition that F is differentiable at 0 is

$$F(h, k) - (ah - bk,\ bh + ak) = o(h, k).$$

Decompose the quantity in the previous display into radial and angular com-
ponents,

$$F(h, k) - (ah - bk,\ bh + ak) = \bigl(F_r(h, k) - a(h, k)\bigr) + \bigl(F_\theta(h, k) - b(-k, h)\bigr).$$

Since the components are at most as long as the vector,

$$F_r(h, k) - a(h, k) = o(h, k) \qquad\text{and}\qquad F_\theta(h, k) - b(-k, h) = o(h, k).$$

That is, $F_r$ and $F_\theta$ are differentiable at the origin with respective Jacobian
matrices

$$F_r'(0) = \begin{bmatrix} a & 0 \\ 0 & a \end{bmatrix} \qquad\text{and}\qquad F_\theta'(0) = \begin{bmatrix} 0 & -b \\ b & 0 \end{bmatrix}.$$

This completes the proof. ⊓⊔


Now we can return to the divergence and the curl.

Theorem 9.17.3 (Divergence and curl in polar coordinates). Let $A \subset \mathbb{R}^2$ be a region containing the origin, and let F be a continuous vector
field on A that is stationary at the origin,

$$F = (f_1, f_2) : A \to \mathbb{R}^2, \qquad F(0) = 0.$$

Assume that F is differentiable at the origin and that the Cauchy–Riemann
equations hold at the origin. Then the radial derivatives of the scalar radial
and angular components of F at the origin,

$$D_r f_r(0) = \lim_{r \to 0^+} \frac{f_r(r\cos\theta, r\sin\theta)}{r}$$

and

$$D_r f_\theta(0) = \lim_{r \to 0^+} \frac{f_\theta(r\cos\theta, r\sin\theta)}{r},$$
both exist independently of how θ behaves as r shrinks to 0. Furthermore,
the divergence of F at the origin is twice the radial derivative of the radial
component,
$$(\operatorname{div} F)(0) = 2\,D_r f_r(0),$$
and the curl of F at the origin is twice the radial derivative of the angular
component,
$$(\operatorname{curl} F)(0) = 2\,D_r f_\theta(0).$$

Proof. By Proposition 9.17.2, the angular and radial components of F are
differentiable at the origin, so that the hypotheses of Proposition 9.17.1 are
met. The first limit in the statement of the theorem was calculated in the
proof of Proposition 9.17.1,

$$D_r f_r(0) = \lim_{(h,k) \to 0} \frac{f_r(h, k)}{|(h, k)|} = D_1 f_1(0) = D_2 f_2(0).$$

This makes the formula for the divergence immediate,

$$(\operatorname{div} F)(0) = D_1 f_1(0) + D_2 f_2(0) = 2\,D_r f_r(0).$$

Similarly, again recalling the proof of Proposition 9.17.1,

$$D_r f_\theta(0) = \lim_{(h,k) \to 0} \frac{f_\theta(h, k)}{|(h, k)|} = D_1 f_2(0) = -D_2 f_1(0),$$

so that
$$(\operatorname{curl} F)(0) = D_1 f_2(0) - D_2 f_1(0) = 2\,D_r f_\theta(0). \qquad ⊓⊔$$
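For a concrete check of the theorem (an illustrative example added here): consider the linear vector field

$$F(x, y) = (ax - by,\ bx + ay),$$

whose Jacobian at 0 is the matrix $F'(0)$ displayed in the proof of Proposition 9.17.2, so the Cauchy–Riemann equations hold. A short computation gives $f_r = F \cdot \hat{r} = ar$ and $f_\theta = F \cdot \hat{\theta} = br$, so $D_r f_r(0) = a$ and $D_r f_\theta(0) = b$, independently of θ, while directly $(\operatorname{div} F)(0) = a + a = 2a$ and $(\operatorname{curl} F)(0) = b - (-b) = 2b$, as the theorem asserts.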



If F is a velocity field then the limit in the formula

$$(\operatorname{curl} F)(0) = 2 \lim_{r \to 0^+} \frac{f_\theta(r\cos\theta, r\sin\theta)}{r}$$
has the interpretation of the angular velocity of F at the origin. That is:
When the Cauchy–Riemann equations hold, the curl is twice the an-
gular velocity.
Indeed, the angular velocity ω away from the origin is by definition the rate
of increase of the polar angle θ with the motion of F. This is not the counterclockwise
component $f_\theta$ itself, but rather $\omega = f_\theta/r$, i.e., ω is the quotient $f_\theta(h, k)/|(h, k)|$ that appeared
in the proof of Proposition 9.17.1. To understand this, think of a uniformly
spinning disk such as a record on a turntable. At each point except the center,
the angular velocity is the same. But the speed of motion is not constant over
the disk; it is the angular velocity times the distance from the center. That is,
the angular velocity is the speed divided by the radius, as claimed. In these
terms, the proof showed that the angular velocity ω extends continuously to 0,
and that (curl F )(0) is twice the extended value ω(0).
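For instance (an added illustration): the rigid rotation $F(x, y) = \omega_0(-y, x)$ has $f_\theta = F \cdot \hat{\theta} = \omega_0 r$, so the angular velocity $f_\theta/r = \omega_0$ is the same at every point, and indeed

$$(\operatorname{curl} F)(0) = D_1 f_2(0) - D_2 f_1(0) = \omega_0 - (-\omega_0) = 2\omega_0,$$

twice the angular velocity.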
Also, if F is a velocity field then the right side of the formula
$$(\operatorname{div} F)(0) = 2 \lim_{r \to 0^+} \frac{f_r(r\cos\theta, r\sin\theta)}{r}$$
has the interpretation of the flux density of F at the origin. That is:
When the Cauchy–Riemann equations hold, the divergence is the flux
density.
To understand this, think of a planar region of incompressible fluid about the
origin, and let r be a positive number small enough that the fluid fills the area
inside the circle of radius r. Suppose that new fluid is being added throughout
the interior of the circle, at rate c per unit of area. Thus fluid is being added
to the area inside the circle at total rate πr²c. Here c is called the flux density
over the circle, and it is measured in reciprocal time units, while the units
of πr²c are area over time. Since the fluid is incompressible, πr²c is also the
rate at which fluid is passing normally outward through the circle. And since
the circle has circumference 2πr, fluid is therefore passing normally outward
through each point of the circle with radial velocity
$$f_r(r\cos\theta, r\sin\theta) = \frac{\pi r^2 c}{2\pi r} = \frac{rc}{2}.$$
Consequently,
$$2 \cdot \frac{f_r(r\cos\theta, r\sin\theta)}{r} = c.$$
Now let r shrink to 0. The left side of the display goes to the divergence of F
at 0, and the right side becomes the continuous extension to radius 0 of the
flux density over the circle of radius r. That is, the divergence is the flux
density when fluid is being added at a single point.
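As a matching illustration (added here, not in the original): the radial field $F(x, y) = \tfrac{c}{2}(x, y)$ has $f_r = F \cdot \hat{r} = \tfrac{c}{2}r$, exactly the radial velocity derived above, and directly

$$(\operatorname{div} F)(0) = \tfrac{c}{2} + \tfrac{c}{2} = c,$$

the flux density.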

Exercises

9.17.1. Put $\mathbb{R}^2$ into correspondence with the complex number field $\mathbb{C}$ as follows:

$$\begin{bmatrix} x \\ y \end{bmatrix} \longleftrightarrow x + iy.$$

Show that the correspondence extends to

$$\begin{bmatrix} a & -b \\ b & a \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} \longleftrightarrow (a + ib)(x + iy).$$

Show also that the correspondence preserves absolute value, i.e.,

$$\left|\begin{bmatrix} x \\ y \end{bmatrix}\right| = |x + iy|,$$

where the first absolute value is on $\mathbb{R}^2$ and the second one on $\mathbb{C}$.

9.17.2. Let $A \subset \mathbb{R}^2$ be an open set that contains the origin, and let $F : A \to \mathbb{R}^2$ be a vector field on A that is stationary at the origin. Define a complex-valued function of a complex variable corresponding to F,

$$f(x + iy) = f_1(x, y) + i f_2(x, y), \qquad (x, y) \in A.$$

Then f is called complex-differentiable at 0 if the following limit exists:

$$\lim_{\Delta z \to 0} \frac{f(z + \Delta z) - f(z)}{\Delta z}.$$

The limit is denoted $f'(z)$.
(a) Suppose that f is complex-differentiable at 0. Compute f ′ (z) first by
letting ∆z go to 0 along the x-axis, and again by letting ∆z go to 0 along
the y-axis. Explain how your calculation shows that the Cauchy–Riemann
equations hold at 0.
(b) Show also that if f is complex-differentiable at 0 then F is vector-differentiable at 0, meaning differentiable in the usual sense. Suppose that f
is complex-differentiable at 0, and that $f'(0) = re^{i\theta}$. Show that

$$(\operatorname{div} F)(0) = 2r\cos\theta, \qquad (\operatorname{curl} F)(0) = 2r\sin\theta.$$

(c) Suppose that F is vector-differentiable at 0 and that the Cauchy–Riemann equations hold at 0. Show that f is complex-differentiable at 0.