LectureNotes237
Contents

1 The Topology of R^n
  1.1 Sets and notation
    1.1.1 Basic Set Theory
    1.1.2 Operations on Sets
    1.1.3 Functions Between Sets
  1.2 Structures on R^n
    1.2.1 The Vector Space Structure
    1.2.2 Of Lengths and Such
    1.2.3 Cross product
  1.3 Open, Closed, and Everything in Between
  1.4 Sequences and Completeness
    1.4.1 Sequences in R
    1.4.2 Sequences in R^m
    1.4.3 Completeness
  1.5 Continuity
  1.6 Compactness
  1.7 Connectedness
  1.8 Uniform Continuity

2 Differential Calculus
  2.1 Derivatives
    2.1.1 Single Variable: R → R
    2.1.2 Vector Valued: R → R^n
    2.1.3 Multivariable: R^n → R
    2.1.4 Functions R^n → R^m
  2.2 The Chain Rule
  2.3 The Mean Value Theorem
  2.4 Higher Order Partials
    2.4.1 Second-Order Partial Derivatives
    2.4.2 The Chain Rule
    2.4.3 Higher-Order Partials
    2.4.4 Multi-indices
  2.5 Taylor Series
    2.5.1 A Quick Review
    2.5.2 Multivariate Taylor Series
    2.5.3 The Hessian Matrix
  2.6 Optimization
    2.6.1 Critical Points
    2.6.2 Constrained Optimization

3 Local Invertibility
  3.1 Implicit Function Theorem
    3.1.1 Scalar Valued Functions
    3.1.2 The General Case
    3.1.3 The Inverse Function Theorem
  3.2 Curves, Surfaces, and Manifolds
    3.2.1 Curves in R^2
    3.2.2 Surfaces in R^3
    3.2.3 Dimension k-manifolds in R^n

4 Integration
  4.1 Integration on R
    4.1.1 Riemann Sums
    4.1.2 Properties of the Integral
    4.1.3 Sufficient Conditions for Integrability
  4.2 Integration in R^n
    4.2.1 Integration in the Plane
    4.2.2 Integration Beyond 2-dimensions
  4.3 Iterated Integrals
  4.4 Change of Variables
    4.4.1 Coordinates
    4.4.2 Integration
1 The Topology of R^n
As we start our adventure into the world of multivariate and vector calculus, we must first ensure
that everybody is on the same page in terms of notation and basic set theory. While it is entirely
possible that the reader may already be passingly familiar with all of the following topics, one
could dedicate an entire course to exploring this subject, so it is worth meditating over, even if only
superficially. We will begin by reviewing sets and the fundamental operations on sets, then follow
this with functions between such sets.
1.1 Sets and notation

1.1.1 Basic Set Theory

A set is any collection of distinct objects. Some examples of sets might include

the alphabet = {a, b, c, . . . , x, y, z},    Universities in Toronto = {UofT, Ryerson, York},
and sets are often specified in set-builder notation,

{x ∈ S : P(x)},
which consists of all the elements in S which make P true. For example, if M is the set of months
in the year, then
This was an example where the resulting set was still finite, but it still demonstrates the compactness of set-builder notation.
The following are some important infinite sets that we will see throughout the course:
© 2015 Tyler Holden
We can also talk about subsets, which are collections of items in a set and indicated with a ⊆ sign. For example, if P is the set of prime numbers, then P ⊆ Z, since every element on the left (a prime number) is also an element of the right (an integer). Alternatively, one has N ⊆ Z ⊆ Q ⊆ R. There is a particular distinguished set, known as the empty set and denoted by ∅, which contains no elements. Recalling the definition of a vacuous truth, it is not too hard to convince oneself that the empty set is a subset of every set!
2. T = {x ∈ R : x = a^(1/2), a ∈ N},    4. V = {x ∈ Z : x = 3n, n ∈ N}.
1.1.2 Operations on Sets

Union and Intersection   Let S be a set and choose two sets A, B ⊆ S. We define the union of A and B to be

A ∪ B = {x ∈ S : x ∈ A or x ∈ B},

and the intersection of A and B to be

A ∩ B = {x ∈ S : x ∈ A and x ∈ B}.
Figure 1: Left: The union of two sets is the collection of all elements which are in either set (though remember that elements of sets are distinct, so we do not permit duplicates). Right: The intersection of two sets consists of all elements which are common to both sets.
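Union and intersection mirror the built-in set operations of most programming languages. A quick illustration in Python (the sets A and B here are my own examples, not from the notes):

```python
# union and intersection with Python's built-in set type
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}
union = A | B   # {x : x in A or x in B}
inter = A & B   # {x : x in A and x in B}
assert union == {1, 2, 3, 4, 5, 6}
assert inter == {3, 4}
```

Note that, just as in the definition, duplicates are not permitted: the elements 3 and 4 appear only once in the union.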
Example 1.1
Let I ⊆ N be an indexing set. Given a collection of sets {A_i}_{i∈I} in S, one can take the intersection or union over the entire collection, and this is often written as

⋃_{i∈I} A_i = {x ∈ S : there exists i ∈ I with x ∈ A_i},    ⋂_{i∈I} A_i = {x ∈ S : x ∈ A_i for all i ∈ I}.
Example 1.2
Consider the set {x ∈ R : sin(x) > 0}. Write this set as an infinite union of intervals.

Solution. We are well familiar with the fact that sin(x) > 0 on (0, π), (2π, 3π), (4π, 5π), etc. If we let the interval In = (2nπ, (2n + 1)π), then the aforementioned intervals are I0, I1, and I2. We can convince ourselves that sin(x) > 0 on any of the In, and hence

{x ∈ R : sin(x) > 0} = ⋃_{n∈Z} In = ⋃_{n∈Z} (2nπ, (2n + 1)π).
Example 1.3
Define In = (0, 1/n) ⊆ R. Determine I = ⋂_{n∈N} In.

Solution. By definition, I consists of the elements which are in In for every n ∈ N. We claim that I cannot contain any positive real number. Indeed, if p > 0 then there exists n ∈ N such that 1/n < p, which means that p ∉ Ik for all k ≥ n, and hence p cannot be in I. Since I has no positive real numbers, and certainly cannot contain any non-positive real numbers, we conclude that I = ∅.
Exercise: Let In = (−n, n) ⊆ R for n ∈ N. Determine both ⋃_n In and ⋂_n In.
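The argument in Example 1.3 can also be checked numerically: for any fixed p > 0, p falls outside In = (0, 1/n) as soon as 1/n < p. A small Python sketch, where the choice p = 0.01 is my own:

```python
p = 0.01
# p lies in I_n = (0, 1/n) exactly when 0 < p < 1/n
in_I = lambda p, n: 0 < p < 1 / n

# find the first n with 1/n <= p; beyond it, p is excluded from every I_k
n = 1
while in_I(p, n):
    n += 1
assert n == 100  # p = 0.01 is not in I_100 = (0, 0.01)
assert all(not in_I(p, k) for k in range(n, n + 1000))
```

Since every positive p is eventually excluded, no positive number survives into the intersection, matching the conclusion I = ∅.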
Complement   If A ⊆ S then the complement of A with respect to S is all elements which are not in A; that is,

A^c = {x ∈ S : x ∉ A}.
Figure 2: The complement of a set A with respect to S is the set of all elements which are in S
but not in A.
Example 1.4
Determine the complement of I = ⋃_{n∈Z} (2nπ, (2n + 1)π) from Example 1.2, with respect to R.
Solution. Since I contains all the open intervals of the form (2nπ, (2n + 1)π), we expect its complement to contain everything else. Namely,

I^c = ⋃_{n∈Z} [(2n − 1)π, 2nπ].
Exercise:
Cartesian Product   The Cartesian product of two sets A and B is the collection of ordered pairs, one from A and one from B; namely,

A × B = {(a, b) : a ∈ A, b ∈ B}.
A geometric way (which does not generalize well) is to visualize the Cartesian product as sticking a copy of B onto each element of A, or vice-versa. For our purposes, the main example of the product will be to define higher dimensional spaces. For example, we know that we can represent the plane R^2 as an ordered pair of points R^2 = {(x, y) : x, y ∈ R}, while three dimensional space is an ordered triple R^3 = {(x, y, z) : x, y, z ∈ R}. In this sense, we see that R^2 = R × R and R^3 = R × R × R, and this motivates the more general definition of R^n as the set of ordered n-tuples,

R^n = R × · · · × R   (n times).
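The Cartesian product is mirrored by itertools.product in Python; a small illustration with sets of my own choosing:

```python
from itertools import product

A = {1, 2}
B = {"x", "y"}
AxB = sorted(product(A, B))  # all ordered pairs (a, b) with a in A, b in B
assert AxB == [(1, "x"), (1, "y"), (2, "x"), (2, "y")]
assert len(AxB) == len(A) * len(B)  # |A x B| = |A| . |B|
```

The counting identity in the last line is the reason the product is a natural way to build higher dimensional spaces out of copies of R.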
Exercise: We have swept some things under the rug in defining R^n, largely because the true nature is technical and boring. There is no immediate reason to suspect that R × R × R should be well defined: we first need to check that the Cartesian product is associative; that is, (R × R) × R = R × (R × R). By definition, the left-hand side is {((a, b), c) : a, b, c ∈ R}, while the right-hand side is {(a, (b, c)) : a, b, c ∈ R}. Syntactically, neither of these looks the same as R^3 = {(a, b, c) : a, b, c ∈ R}, but nonetheless they all define the same data.
1.1.3 Functions Between Sets

Given two sets A, B, a function f : A → B is a map which assigns to every point in A a unique point of B. If a ∈ A, we usually denote the corresponding element of B by f(a). When specifying the function, one may write a ↦ f(a). The set A is termed the domain, while B is termed the codomain.

It is important to note that not every element of B needs to be hit by f; that is, B is not necessarily the range of f. Rather, B represents the ambient space to which f maps. Also, if either the domain or codomain changes, the function itself changes. This is because the data of the domain and codomain are intrinsic to the definition of a function. For example, f : R → R given by f(x) = x² is a different function than g : R → [0, ∞), g(x) = x².
Definition 1.5
Let f : A → B be a function. If U ⊆ A, the image of U under f is the set

f(U) = {y ∈ B : there exists x ∈ U with f(x) = y} = {f(x) : x ∈ U}.

If V ⊆ B, the preimage of V under f is the set

f⁻¹(V) = {x ∈ A : f(x) ∈ V}.

Note that despite being written as f⁻¹(V), the preimage of a set does not say anything about the existence of an inverse function.
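For finite sets, images and preimages can be computed directly. A sketch in Python with f(x) = x² on a small domain of my own choosing:

```python
f = lambda x: x * x
U = {-2, -1, 0, 1, 2}                        # a finite "domain"
image = {f(x) for x in U}                    # f(U) = {f(x) : x in U}
preimage = {x for x in U if f(x) in {1, 4}}  # f^{-1}({1, 4})
assert image == {0, 1, 4}
assert preimage == {-2, -1, 1, 2}
# the preimage is well defined even though f is not invertible here
```

Notice that the preimage of {1, 4} contains four points, illustrating that f⁻¹(V) makes sense with no inverse function in sight.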
Example 1.6
On the other hand, since f([0, 1]) = [0, 1], we know that f⁻¹(f([0, 1])) = f⁻¹([0, 1]), for which
Example 1.7
Let f : R³ → R² be given by f(x, y, z) = (x, y). If S² is the unit sphere,

S² = {(x, y, z) ∈ R³ : x² + y² + z² = 1},

determine f(S²).
Solution. Let (a, b, c) ∈ S² so that a² + b² + c² = 1. The image of this point under f is f(a, b, c) = (a, b). It must be the case that a² + b² ≤ 1, and so f(S²) ⊆ D² = {(x, y) ∈ R² : x² + y² ≤ 1}. We claim that this is actually an equality; that is, f(S²) = D². In general, to show that two sets A and B are equal, we need to show A ⊆ B and B ⊆ A. As we have already shown that f(S²) ⊆ D², we must now show that D² ⊆ f(S²).

Let (a, b) ∈ D² so that a² + b² ≤ 1. Let c = √(1 − a² − b²), which is well-defined by hypothesis. Then a² + b² + c² = 1 so that (a, b, c) ∈ S², and f(a, b, c) = (a, b). Thus f(S²) = D².
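Both halves of Example 1.7 can be spot-checked numerically: random points of the sphere project into the disk, and any point of the disk lifts back to the sphere via c = √(1 − a² − b²). A sketch (the random sampling and the sample point (0.6, 0.3) are my own choices):

```python
import math
import random

random.seed(0)
f = lambda x, y, z: (x, y)  # projection R^3 -> R^2

# random points on the unit sphere S^2 project into the disk D^2
for _ in range(1000):
    x, y, z = (random.gauss(0, 1) for _ in range(3))
    r = math.sqrt(x * x + y * y + z * z)
    a, b = f(x / r, y / r, z / r)
    assert a * a + b * b <= 1 + 1e-12

# conversely, any (a, b) in D^2 is hit: lift it to the sphere
a, b = 0.6, 0.3
c = math.sqrt(1 - a * a - b * b)
assert abs(a * a + b * b + c * c - 1) < 1e-12  # (a, b, c) lies on S^2
assert f(a, b, c) == (a, b)
```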
Definition 1.8
Let f : A → B be a function. We say that f is injective if f(a1) = f(a2) implies a1 = a2; that f is surjective if for every b ∈ B there exists a ∈ A with f(a) = b; and that f is bijective if it is both injective and surjective.

Notice that the choice of domain and codomain is exceptionally important in determining whether a function is injective or surjective. For example, the function f : R → R given by f(x) = x² is not surjective (it misses the negative real numbers), while the function f : R → [0, ∞) given by the same formula is surjective (there are no negative real numbers to miss).
Example 1.9
Solution.
1. The function f is certainly not injective, since f(x, y, a) = (x, y) = f(x, y, b) for any a and b. On the other hand, it is surjective, since if (x0, y0) ∈ R² then f(x0, y0, 0) = (x0, y0).

2. The function g is injective: to see this, note that if g(a1, b1) = g(a2, b2) then (e^{a1}, (a1² + 1)b1) = (e^{a2}, (a2² + 1)b2), which can only happen if e^{a1} = e^{a2}. Since the exponential function is injective, a1 = a2. This in turn implies that (a1² + 1) = (a2² + 1), and neither can be zero, so dividing the second components we get b1 = b2 as required. On the other hand, g is not surjective. For example, there is no point which maps to (0, 0).

3. This function is both injective and surjective. Both are left as simple exercises. We conclude that h is bijective.
1.2 Structures on R^n

1.2.1 The Vector Space Structure

Any student familiar with linear algebra knows that R^n admits a vector space structure: one can add vectors in R^n and multiply them by scalars. For those uninitiated, we briefly review the subject here.
Very roughly, a real vector space is any set in which two elements may be added to get another
element of the set, as well as multiplied by a real number. Additionally, there must be an element
0 such that summing against zero does nothing. The full collection of axioms that define a vector
space are too many to write down and are the topic of a linear algebra course, so we refer the student to their favourite textbook.

The elements of the set are called vectors, while the real number multiples are called scalars. For notation's sake, we will denote vectors by bold font x and scalars by non-bold font.
Recall that elements x ∈ R^n just look like n-tuples of real numbers. If x = (x1, . . . , xn) and y = (y1, . . . , yn) are elements of R^n, we can add them together and multiply by a scalar c ∈ R in a pointwise fashion:

x + y = (x1 + y1, x2 + y2, . . . , xn + yn),    cx = (cx1, cx2, . . . , cxn).

The zero vector is, unsurprisingly, the n-tuple consisting entirely of zeroes: 0 = (0, . . . , 0). See Figure 3 for a visualization of vector addition and scalar multiplication.
The diligent student might notice that I have been sloppy in writing vectors: there is a technical
but subtle difference between vectors written horizontally and those written vertically. Once again,
we will not be terribly concerned with the distinction in this course, so we will use whichever
convention is simplest. In the event that the distinction is necessary, we will mention that point
explicitly at the time.
Figure 3: One may think of a vector as either representing a point in the plane (represented by the black dots) or as a direction with magnitude (represented by the red arrows). Here v1 = (1, 1) and v2 = (2, −1); the blue arrows correspond to the sum v1 + v2 = (3, 0) and the scalar multiple 2v1 = (2, 2). Notice that both are simply computed pointwise.
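The pointwise operations map directly onto code; a quick check of the Figure 3 computations in Python, using plain tuples to stay dependency-free and taking v2 = (2, −1) consistent with the sum v1 + v2 = (3, 0) shown there:

```python
def add(x, y):
    """Pointwise vector addition in R^n."""
    return tuple(a + b for a, b in zip(x, y))

def scale(c, x):
    """Scalar multiplication in R^n."""
    return tuple(c * a for a in x)

v1, v2 = (1, 1), (2, -1)
assert add(v1, v2) == (3, 0)   # v1 + v2
assert scale(2, v1) == (2, 2)  # 2 v1
```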
1.2.2 Of Lengths and Such

There are three intimately related structures which we will now impose on R^n, namely the notions of an inner product, a norm, and a metric. The first is that of an inner product. While there
are many different kinds of inner products, the one with which we will be most concerned is the Euclidean inner product, also known simply as the dot product. Given two vectors x = (x1, . . . , xn) and y = (y1, . . . , yn) in R^n, we write

⟨x, y⟩ = x · y := Σ_{i=1}^n xi yi = x1y1 + x2y2 + · · · + xnyn.
Geometrically, the dot product x · y is the length of the projection of x onto the unit vector in the y direction, or vice versa. More precisely, if y ∈ R^n is nonzero then ŷ = y/‖y‖ is a unit vector that points in the same direction as y, and

(⟨x, y⟩/‖y‖) ŷ = (⟨x, y⟩/‖y‖²) y

is the projection of x onto y. If ⟨v, w⟩ = 0, we say that v and w are orthogonal, which we recognize will happen precisely when v and w are perpendicular.
Figure 4: The inner product of x and y, written x · y, is the length of the projection of the vector x onto y.
Example 1.10
Let v = (1, 1, 2) and w = (2, 0, 4). Compute ⟨v, w⟩.

Solution. By definition,

⟨v, w⟩ = v1w1 + v2w2 + v3w3 = (1 · 2) + (1 · 0) + (2 · 4) = 10.
Proposition 1.11
Let x, y, z ∈ R^n and c ∈ R. Then ⟨x, y⟩ = ⟨y, x⟩; ⟨x + cy, z⟩ = ⟨x, z⟩ + c⟨y, z⟩; and ⟨x, x⟩ ≥ 0, with equality if and only if x = 0.

These properties are straightforward to verify and are left as an exercise for the student.
The next structure is called a norm, and prescribes a way of measuring the length of a vector. Our motivation comes from the one-dimensional case, where we know that the absolute value |·| is used to measure distance. As such, we define ‖·‖ : R^n → R as the function

‖x‖ := √⟨x, x⟩ = (Σ_{i=1}^n xi²)^{1/2} = √(x1² + x2² + · · · + xn²).
First, we recognize that this generalizes the Pythagorean Theorem in R², since if x = (x, y) then the vector x looks like the hypotenuse of a triangle with side lengths x and y. The length of the hypotenuse is just √(x² + y²) = ‖x‖ (see Figure 5).
Figure 5: In R², the length of a vector x = (x, y) can be derived from the Pythagorean theorem: ‖x‖ = √(x² + y²). The norm ‖·‖ generalizes this notion to multiple dimensions.
Exercise: Let x = (x, y, z) ∈ R³. Determine the length of this vector using the Pythagorean theorem and confirm that one gets the same value as ‖x‖.
Example 1.12
A very important relationship between the inner product and the norm is the Cauchy-Schwarz inequality:

Proposition 1.13
If x, y ∈ R^n then

|⟨x, y⟩| ≤ ‖x‖‖y‖.
This proof is not terribly enlightening, nor is it very intuitive. The student may refer to the
textbook for a proof.
Example 1.14
Let v = (1, 1, 2) and w = (2, 0, 4) as in Example 1.10. Compute ‖v‖ and ‖w‖ and confirm that the Cauchy-Schwarz inequality holds.

Solution. We already saw that ⟨v, w⟩ = 10. Computing the norms, one gets

‖v‖ = √6,    ‖w‖ = √20,

so that ‖v‖²‖w‖² = 120, which is greater than ⟨v, w⟩² = 100.
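The computations of Example 1.14 are easy to confirm in code, and Cauchy-Schwarz can be spot-checked on random vectors as well (the random test at the end is my own addition):

```python
import math
import random

def dot(x, y):
    """Euclidean inner product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm, ||x|| = sqrt(<x, x>)."""
    return math.sqrt(dot(x, x))

v, w = (1, 1, 2), (2, 0, 4)
assert dot(v, w) == 10                        # Example 1.10
assert math.isclose(norm(v), math.sqrt(6))    # ||v|| = sqrt(6)
assert math.isclose(norm(w), math.sqrt(20))   # ||w|| = sqrt(20)
assert abs(dot(v, w)) <= norm(v) * norm(w)    # Cauchy-Schwarz

random.seed(1)
for _ in range(100):
    x = [random.uniform(-5, 5) for _ in range(4)]
    y = [random.uniform(-5, 5) for _ in range(4)]
    assert abs(dot(x, y)) <= norm(x) * norm(y) + 1e-9
```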
Proposition 1.15
Let x, y ∈ R^n and c ∈ R. Then ‖x‖ ≥ 0 with equality if and only if x = 0; ‖cx‖ = |c|‖x‖; and ‖x + y‖ ≤ ‖x‖ + ‖y‖ (the Triangle Inequality).

Proof. The first two properties follow immediately from properties of the inner product and are left as an exercise for the student. We resolve thus to prove the Triangle Inequality. Here one has

‖x + y‖² = ⟨x + y, x + y⟩ = ‖x‖² + 2⟨x, y⟩ + ‖y‖² ≤ ‖x‖² + 2‖x‖‖y‖ + ‖y‖² = (‖x‖ + ‖y‖)²,

where the middle inequality is Cauchy-Schwarz. By taking the square root of both sides, we get the desired result.
The triangle inequality is so named because it relates the sides of a triangle. Indeed, if x, y ∈ R^n and we form the triangle whose vertices are the points 0, x, and (x + y), then the length of x + y will be less than the sum of the other two side lengths. Equality will occur precisely when x = cy for some c ≥ 0.
Finally, one has a metric, which is a method for determining the distance between two vectors. If x = (x1, . . . , xn) and y = (y1, . . . , yn) then the Euclidean metric is

d(x, y) = ‖x − y‖ = (Σ_{i=1}^n (xi − yi)²)^{1/2} = √((x1 − y1)² + · · · + (xn − yn)²).
Proposition 1.16
Let x, y, z ∈ R³. Then d(x, y) ≥ 0 with equality if and only if x = y; d(x, y) = d(y, x); and d(x, z) ≤ d(x, y) + d(y, z).

All of these properties follow immediately from the properties of the norm and are left as an exercise for the student. We will often omit the d(x, y) notation, as some students may find it confusing, though this is typically how metrics are denoted in more abstract courses.
1.2.3 Cross product

In R³, the cross product of two vectors is a way of determining a third vector which is orthogonal to the original two. It is defined as follows: if v = (v1, v2, v3) and w = (w1, w2, w3) then

v × w = (v2w3 − w2v3, w1v3 − v1w3, v1w2 − w1v2).
This is rather terrible to remember though, so if the student is familiar with determinants, it can be written as

v × w = det | i   j   k  |
            | v1  v2  v3 |
            | w1  w2  w3 |.
Example 1.17
As we mentioned, this new vector should be orthogonal to the other two. Computing the dot
products, we have
Exercise:
1. Show that v × w = −(w × v).
3. Show that if w = λv for some λ ∈ R, then v × w = 0. Conclude that the cross product of two vectors in R³ is non-zero if and only if the vectors are linearly independent.
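The defining properties of the cross product (orthogonality, anti-symmetry, vanishing on parallel vectors) can all be verified directly from the component formula; a sketch in Python with sample vectors of my own choosing:

```python
def cross(v, w):
    """Cross product in R^3, from the component formula."""
    return (v[1] * w[2] - v[2] * w[1],
            v[2] * w[0] - v[0] * w[2],
            v[0] * w[1] - v[1] * w[0])

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

v, w = (1, 2, 3), (4, 5, 6)
u = cross(v, w)
assert u == (-3, 6, -3)
assert dot(u, v) == 0 and dot(u, w) == 0    # orthogonal to both
assert cross(w, v) == tuple(-c for c in u)  # v x w = -(w x v)
assert cross(v, (2, 4, 6)) == (0, 0, 0)     # parallel vectors give 0
```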
1.3 Open, Closed, and Everything in Between

The goal of the next several sections is to discuss the notion of topology, which is the coarse-grained geometry and structure of a space. In single variable calculus, one was exposed to the notions of open intervals (a, b), closed intervals [a, b], and the knowledge that some intervals are neither open nor closed. What motivates the nomenclature for these sets? Intuitively, the idea seems to be that the set (a, b) does not contain its endpoints a and b: it contains points which are arbitrarily close, but not those two specific points. A closed interval does contain its endpoints; it is closed off.

Our goal is to extend this notion to R^n, where the addition of dimensions significantly complicates our picture. However, we can at least start somewhere nice, by defining the generalization of an interval:
Definition 1.18
Let x ∈ R^n and r > 0 a real number. We define the open ball of radius r at the point x as

B_r(x) := {y ∈ R^n : ‖x − y‖ < r}.
Recalling that ‖x − y‖ is the distance between x and y, the open ball B_r(x) is nothing more than the collection of points which are a distance less than r from x. This indeed generalizes the interval, since in R¹ we have B_r(x) = (x − r, x + r).
Figure 7: The open ball of radius r centred at x consists of all points which are a distance less than r from x.
Definition 1.19
A set S ⊆ R^n is bounded if there exists some R > 0 such that S ⊆ B_R(0).

One hopes that this is fairly intuitive: a set is bounded if we can put a ball around it. If we can place a ball around the set, it cannot grow arbitrarily large. For example, the set S = {(x, y) ∈ R² : xy > 0} consists of the first and third quadrants of the plane. Since both x and y can become arbitrarily large in absolute value, no ball centred at the origin entirely contains S. On the other hand, C = {(x, y) ∈ R² : (x − a)² + (y − b)² ≤ r²} is bounded for any choice of a, b, r ∈ R.
Exercise: For an arbitrary choice of a, b, r ∈ R, determine an open ball that bounds C as defined above.
These balls will be our way of looking around a point; namely, if we know something about B_r(x), then we know what is happening within a distance r of the point x. We can use these open balls to define different types of points of interest:
Definition 1.20
Let S ⊆ R^n be an arbitrary set. A point x ∈ S is an interior point of S if there exists an r > 0 such that B_r(x) ⊆ S. A point x ∈ R^n is a boundary point of S if for every r > 0, the ball B_r(x) intersects both S and S^c.

If S is a set, we define the interior of S, denoted S^int, to be the collection of interior points of S. We define the boundary of S, denoted ∂S, to be the collection of boundary points of S.
We should take a moment and think about these definitions, and why they make sense. Let us
start with a boundary point. Intuitively, a boundary point is any point which occurs at the very
fringe of the set; that is, if I push a little further I will leave the set. An interior point should be
a point inside of S, such that if I move in any direction a sufficiently small distance, I stay within
the set. This is exactly what Definition 1.20 is conveying. By definition, if x is an interior point then we must have that x ∈ S; however, boundary points do not need to be in the set. We start with an example.
Figure 8: The point b is a boundary point. No matter what size ball we place around b, that ball will intersect both S and S^c. On the other hand, p is an interior point, since we can place a ball around it which lives entirely within S.
Example 1.21
Let S = (−1, 1]. What are the interior points and the boundary points of S?
Solution. We claim that any point in (−1, 1) is an interior point. To see that this is the case, let p ∈ (−1, 1) be an arbitrary point. We need to place a ball around p which lives entirely within (−1, 1). To do this, assume without loss of generality that p ≥ 0. If p = 0 then we can set r = 1/2, and B_{1/2}(0) = (−1/2, 1/2) ⊆ (−1, 1). Thus assume that p ≠ 0 and let r = (1 − p)/2, which represents half the distance from p to 1.

We claim that B_r(p) ⊆ (−1, 1). Indeed, let x ∈ B_r(p) be any point, so that |x − p| < r by definition. Then

|x| = |x − p + p| ≤ |x − p| + p < r + p = (1 − p)/2 + p = (1 + p)/2 < 1,

where in the last inequality we have used the fact that p < 1, so 1 + p < 2. Thus x ∈ (−1, 1), and since x was arbitrary, B_r(p) ⊆ (−1, 1).
The boundary points are ±1, where we note that even though −1 ∉ (−1, 1], it is still a boundary point. To see that +1 is a boundary point, let r > 0 be arbitrary, so that B_r(1) = (1 − r, 1 + r). We then have
Example 1.22
Determine ∂Q, the boundary of the rational numbers Q, as a subset of R.

Solution. We claim that ∂Q = R. Since both the irrationals and rationals are dense in the real numbers, we know that every non-empty open interval in R contains both a rational and an irrational number. Thus let x ∈ R be any real number, and r > 0 be arbitrary. The set B_r(x) is an open interval around x, and contains a rational number, showing that B_r(x) ∩ Q ≠ ∅. Similarly, B_r(x) contains an irrational number, showing that B_r(x) ∩ Q^c ≠ ∅, so x ∈ ∂Q. Since x was arbitrary, we conclude that ∂Q = R.
Exercise: Show that it is impossible for a boundary point to also be an interior point, and
vice-versa.
Definition 1.23
A set S ⊆ R^n is said to be open if every point of S is an interior point; that is, S is open if for every x ∈ S there exists an r > 0 such that B_r(x) ⊆ S. The set S is closed if S^c is open.
Example 1.24
Show that the upper half plane S = {(x, y) ∈ R² : y > 0} is open.

Solution. We need to show that around every point in S we can place an open ball that remains entirely within S. Choose a point p = (px, py) ∈ S, so that py > 0, and let r = py/2. Consider the ball B_r(p), which we claim lives entirely within S. To see that this is the case, choose any other point q = (qx, qy) ∈ B_r(p). Now

|qy − py| ≤ ‖q − p‖ < r = py/2,

which implies that qy > py − py/2 = py/2 > 0. Since qy > 0 this shows that q ∈ S, and since q was arbitrary, B_r(p) ⊆ S as required.
Figure 9: The upper half plane is open. For any point p, look at its y-coordinate py and use the ball of radius py/2.
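The estimate in Example 1.24 can be spot-checked numerically: every point of B_r(p) with r = py/2 keeps a positive y-coordinate. A sketch, where the base point and the random sampling are my own choices:

```python
import math
import random

random.seed(2)
px, py = 0.7, 0.4  # a point p in the upper half plane
r = py / 2
for _ in range(1000):
    # random point q strictly inside the open ball B_r(p)
    t = random.uniform(0, 2 * math.pi)
    s = r * math.sqrt(random.random()) * 0.999
    qx, qy = px + s * math.cos(t), py + s * math.sin(t)
    assert qy > 0              # q stays in the upper half plane
    assert qy > py / 2 - 1e-9  # in fact q_y > p_y / 2, as in the example
```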
Example 1.25
Show that the open ball B_r(x) is itself an open set.

Solution. One certainly hopes that this result is true, otherwise the name would be quite the misnomer. To show this, let x ∈ R^n and r > 0 both be arbitrary, and consider S = B_r(x). We need to show that every point in S can in turn be enclosed with a smaller ball which lives entirely within S. Choose some p ∈ S and let d = ‖x − p‖, so that d < r by definition.
We claim that the ball of radius r′ = (r − d)/2 > 0 will work. To see this, choose an arbitrary y ∈ B_{r′}(p), so that ‖p − y‖ < r′. One has that

‖x − y‖ ≤ ‖x − p‖ + ‖p − y‖               (triangle inequality)
        < d + r′ = d + (r − d)/2 = (r + d)/2
        < r                                (since d < r).

Since y was arbitrary, B_{r′}(p) ⊆ B_r(x) as required.
Proposition 1.26
A set S ⊆ R^n is closed if and only if ∂S ⊆ S.

Proof. [⇒] Assume that S is closed, and for the sake of contradiction assume that ∂S ⊄ S. Choose an element x ∈ ∂S which is not in S, so that x ∈ S^c. Now since S is closed, S^c is open, so we can find an ε > 0 such that B_ε(x) ⊆ S^c. However, this is a contradiction: since x ∈ ∂S, every open ball around x must intersect both S and S^c, and this shows that B_ε(x) is an open ball around x which fails to intersect S. We thus conclude ∂S ⊆ S as required.
[⇐] We will proceed by contrapositive, and show that if S is not closed, then ∂S ⊄ S. If S is not closed, then S^c is not open, and hence there is some point x ∈ S^c such that for every r > 0,
B_r(x) ∩ S ≠ ∅. Certainly B_r(x) ∩ S^c ≠ ∅ (since both sets contain x), and hence x ∈ ∂S. Thus x is a point in ∂S ∩ S^c, and so ∂S ⊄ S.
Just as in the case of intervals in R, it is possible for a set to be neither open nor closed. A previous exercise showed that a point in a set cannot be both a boundary point and an interior point, so failing to be open somehow amounts to containing some of your boundary points. If you have all of your boundary points, Proposition 1.26 shows that you are actually closed. Thus sets which are neither open nor closed contain some of their boundary points, but not all of them. By adding all the boundary points, we can close off a set.
Definition 1.27
If S ⊆ R^n then the closure of S is the set S̄ = S ∪ ∂S.
Exercise:
The closure of an interval (a, b) is the closed interval [a, b]. The student should check that the closure of the open ball in R^n is

B̄_r(x) = {y ∈ R^n : ‖x − y‖ ≤ r},

where we note that the inequality need no longer be strict. Similarly, the closure of the open half plane in Example 1.24 is the closed half plane {(x, y) ∈ R² : y ≥ 0}.
1.4 Sequences and Completeness

The student is already passingly familiar with the notion of sequences in R. We will quickly review the pertinent points before introducing sequences in R^n.
1.4.1 Sequences in R
The rough idea of a sequence is that it resembles an ordered collection of real numbers. We can make this more formal by writing a sequence as a map x : N → R, so that x(n) is a real number. For example, the sequence x(n) = n² is such that x(1) = 1, x(2) = 4, x(3) = 9, and so on. For brevity of notation, one often writes xn instead of x(n), so that for example, x4 = 16. In addition to this, we often choose to conflate the function x itself with its (ordered) image in R, in
This is the subsequence which picks out every other member of the original sequence:

x1  x2  x3  x4  x5  x6  x7  x8  x9  x10
1   4   9   16  25  36  49  64  81  100

Figure 11: The subsequence x_{n_k} picks out every other number from the sequence, and defines a new sequence.
Definition 1.28
Let (xn)_{n=1}^∞ be a sequence in R. We say that

1. (xn) is bounded if there exists an M > 0 such that |xn| ≤ M for every n ∈ N,
2. (xn) is increasing if xn+1 > xn for every n ∈ N, and decreasing if xn+1 < xn for every n ∈ N.
Example 1.29
Determine whether each of the following sequences is bounded, increasing, or decreasing:

1. xn = 1/n,
2. yn = (−1)^n,
3. zn = 2^n.
Solution.

1. The sequence xn = 1/n is bounded and decreasing. Setting M = 1, we have |xn| = 1/n ≤ 1 since n ≥ 1. In addition, it is well known that xn+1 = 1/(n+1) < 1/n = xn, so the sequence is decreasing as required.

2. This sequence just oscillates between the numbers ±1, so it is certainly bounded (choose M = 1). On the other hand, it is neither increasing nor decreasing, since y2n > y2n+1 but also y2n+2 > y2n+1.

3. This sequence is increasing, since zn+1 = 2^{n+1} = 2zn > zn. However, the sequence is unbounded.
We can talk about when such sequences converge, in a manner similar to horizontal asymptotes
of functions:
Definition 1.30
If (xn)_{n=1}^∞ is a sequence in R, we say that (xn) converges with limit L, written as (xn) → L, if for every ε > 0 there exists an N ∈ N such that whenever n > N, |xn − L| < ε.
This definition says that by progressing sufficiently far into the sequence, we can ensure that
the xn get arbitrarily close to L.
Example 1.31
Show that the sequence xn = 1/n converges with limit 0.

Solution. Let ε > 0 be given, and choose an N ∈ N such that 1/N < ε. If n > N then

|xn − 0| = 1/n < 1/N < ε.
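The ε-N bookkeeping of Example 1.31 can be played out in code: given ε, the index N = ⌊1/ε⌋ + 1 is one valid choice (this particular formula for N is my own, chosen to guarantee 1/N < ε). A sketch:

```python
import math

def tail_bound_index(eps):
    """One valid N with 1/N < eps, for the sequence x_n = 1/n."""
    return math.floor(1 / eps) + 1

for eps in (0.5, 0.1, 0.003):
    N = tail_bound_index(eps)
    # every term past N is within eps of the limit 0
    assert all(abs(1 / n - 0) < eps for n in range(N + 1, N + 10000))

assert tail_bound_index(0.5) == 3
```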
It seems intuitive to expect that the limit of a sequence should be unique, so that we may talk
about the limit of the sequence. We demonstrate this with the following proposition:
Proposition 1.32
If (xn)_{n=1}^∞ is a sequence in R such that (xn) → x and (xn) → y, then x = y; that is, limits of convergent sequences are unique.
Proof. It is sufficient to show that for every ε > 0 we have |x − y| < ε. Indeed, this will show that |x − y| cannot be positive, and so must necessarily be zero, from which it will follow that x = y.

Let ε > 0 be arbitrary. As (xn) → x there exists N1 ∈ N such that for all n > N1, |xn − x| < ε/2. By the same token, there exists N2 ∈ N such that for all n > N2, |xn − y| < ε/2. Let N = max{N1, N2} and fix any n > N, so that

|x − y| ≤ |x − xn| + |xn − y| < ε/2 + ε/2 = ε.
Our profound laziness as mathematicians means that if we can avoid doing more work, we will.
We can use the following proposition, akin to the limit laws for functions, to infer convergence of
sequences and their limits.
Proposition 1.33
Let (an) and (bn) be sequences in R with (an) → L and (bn) → M. Then
1. the sequence (an + bn) converges and (an + bn) → L + M,
2. for any c ∈ R, the sequence (c·an) converges and (c·an) → cL,
3. the sequence (an·bn) converges and (an·bn) → LM,
4. if M ≠ 0 then the sequence (an/bn) converges and (an/bn) → L/M.
Proof. The proofs of these are almost identical to those of the limit laws for functions. We will prove (1) and leave the remainder as a (non-trivial) exercise.
Assume that (an) → L and (bn) → M. Let ε > 0 be given and choose N1, N2 ∈ N such that if n ≥ N1 then |an − L| < ε/2 and if n ≥ N2 then |bn − M| < ε/2. Let N = max {N1, N2}, so that if n ≥ N then
|an + bn − (L + M)| ≤ |an − L| + |bn − M| < ε/2 + ε/2 = ε.
There is also a version of the Squeeze Theorem for sequences, again proven in almost an identical
fashion:
Theorem 1.34: Squeeze Theorem for Sequences
Let (an), (bn), and (cn) be sequences in R, and assume that for sufficiently large n we have an ≤ bn ≤ cn. If (an) → L and (cn) → L, then (bn) is also convergent with limit L.
We know from our previous experience that every convergent sequence is bounded (prove it yourself!). A partial converse is the following:
Theorem 1.35: Monotone Convergence Theorem
If (an) is bounded from above and non-decreasing, then (an) is convergent with its limit given by sup {an : n ∈ N}.
Proof. Let L = supn an, which we know exists by the completeness axiom. Let ε > 0 be given. By definition of the supremum, we know that there exists some M ∈ N such that
L − ε < aM ≤ L.
Since (an) is non-decreasing, for every n > M we have L − ε < aM ≤ an ≤ L, and hence |an − L| < ε, showing that (an) → L.
A similar argument shows that the theorem also holds if (an) is bounded from below and non-increasing.
Example 1.36
Determine whether the sequence an = 2^n/n! is convergent.
Solution. Note that
an+1/an = (2^(n+1)/(n+1)!)·(n!/2^n) = 2/(n+1),
so that if n ≥ 2 we have an+1 < an and the sequence is eventually decreasing. It is easy to see that an is always positive, hence bounded below by 0. By the Monotone Convergence Theorem we know that (an) converges.
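Tabulating the first several terms numerically (a Python sketch of our own, not part of the notes) illustrates both the monotonicity for n ≥ 2 and the lower bound of 0:

```python
import math

def a(n):
    """The n-th term a_n = 2^n / n!."""
    return 2**n / math.factorial(n)

terms = [a(n) for n in range(1, 30)]
# The ratio a_{n+1}/a_n = 2/(n+1) < 1 once n >= 2, so the tail is decreasing.
decreasing_tail = all(terms[i + 1] < terms[i] for i in range(1, len(terms) - 1))
positive = all(t > 0 for t in terms)
print(decreasing_tail, positive, terms[-1])  # the terms shrink toward 0
```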
Proposition 1.37
If (xn)∞n=1 → x is a convergent sequence, then every subsequence (xnk) is also convergent with the same limit.
Proof. Let (xnk) be any subsequence and note that nk ≥ k for all k (prove this). Let ε > 0 be given. Since (xn) → x there exists N ∈ N such that if n > N then |xn − x| < ε. This N will also work for (xnk), since if k > N then nk ≥ k > N, implying that |xnk − x| < ε.
1.4.2 Sequences in Rm
With our rapid review of sequences in R, we can now begin considering sequences in Rm. A sequence in Rm is any function x : N → Rm. Just like before, we will often write such sequences as xn := x(n). For example, the map x(n) = (n, n² − 1) is a sequence in R² whose first few elements are given by
x1 = (1, 0), x2 = (2, 3), x3 = (3, 8), x4 = (4, 15), x5 = (5, 24), ...
If k ↦ nk is a strictly increasing map N → N, then one can define a subsequence by xnk := x(nk). For example, if nk = 3k − 1, then
xn1 = (2, 3), xn2 = (5, 24), xn3 = (8, 63), xn4 = (11, 120), ...
Our interest lies principally with sequences which converge, for which the definition is almost
identical to the one for sequences in R:
Definition 1.39
Let (xn)∞n=1 be a sequence in Rm. We say that (xn) converges with limit x ∈ Rm, written (xn) → x, if for every ε > 0 there exists an N ∈ N such that whenever n > N, ‖xn − x‖ < ε.
The corresponding theorems about uniqueness of limits, the limit laws, and the Squeeze Theorem all hold in Rm as well as R, with the only effective change being that the absolute value |·| becomes the norm ‖·‖. The student is encouraged to prove these in multiple dimensions to check that they also work.
We may feel comfortable with the definition of convergent sequences, since it is only a slight
modification of what we have seen repetitively in both this course and its prequel. However, with
our discussion of balls, we now have an opportunity to associate a strong geometric interpretation
to the idea of convergence. The condition ‖xn − x‖ < ε says that xn is in Bε(x), so the definition of convergence can equivalently be colloquialized by saying that (xn) → x if
Every ball around x contains all but finitely many points of the sequence.
Example 1.40
Show that the sequence xn = (xn, yn) = (1/n, 1/n²) converges to (0, 0).
Solution. Let ε > 0 be given, and choose N such that 1/N < ε/2. If n > N then 1/n < 1/N, and
‖(xn, yn) − (0, 0)‖ = √(1/n² + 1/n⁴) = (1/n)√(1 + 1/n²) ≤ (1/n)√2 ≤ 2/n < 2/N < ε.
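The inequality chain in this solution can be spot-checked numerically; the sketch below (our own Python, not from the notes) confirms the bound ‖(1/n, 1/n²)‖ ≤ 2/n used above.

```python
import math

def norm(v):
    """Euclidean norm of a point in R^2."""
    return math.sqrt(sum(c * c for c in v))

# ||(1/n, 1/n^2)|| = (1/n) * sqrt(1 + 1/n^2) <= sqrt(2)/n <= 2/n.
bound_holds = all(norm((1 / n, 1 / n**2)) <= 2 / n for n in range(1, 2000))
print(bound_holds)  # prints True
```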
One can use sequences to characterize the closure of a set, and hence determine whether or not a set is closed. If S ⊆ Rn, we say that (xn) is a sequence in S if xn ∈ S for every n ∈ N. The closure of S is the collection of all limit points of convergent sequences in S:
Proposition 1.41
If S ⊆ Rn, then x ∈ S̄ if and only if there exists a convergent sequence (xn) in S such that (xn) → x.
Proof. [⇒] Assume that x ∈ S̄, so that every ball around x intersects S. We need to construct a sequence in S which converges to x. For each n ∈ N, choose an element xn ∈ B1/n(x) ∩ S, which is non-empty by assumption (see Figure 12). By construction, the sequence (xn)∞n=1 is a sequence in S, so we need only show that (xn) → x. Let ε > 0 be given and choose N such that 1/N < ε. When n > N we have 1/n < 1/N < ε, and by construction xn ∈ B1/n(x) ⊆ Bε(x), or equivalently
‖xn − x‖ < 1/n < ε.
[⇐] Let (xn)∞n=1 be a convergent sequence in S with limit point x. If ε > 0 is any arbitrary real number, then there exists an N ∈ N such that for all n > N we have xn ∈ Bε(x). Since xn ∈ S, this implies that Bε(x) ∩ S ≠ ∅. Since ε was arbitrary, x is either a point of S or a limit point of S. In either case, x ∈ S̄.
Since we know that a set S is closed if and only if S = S̄, one immediately gets the following corollary.
Corollary 1.42
A set S ⊆ Rn is closed if and only if whenever (xn) is a convergent sequence in S with (xn) → x, the limit x lies in S.
Figure 12: In the proof of Proposition 1.41, we need to construct a sequence which converges to x.
This is done by choosing points in successively smaller balls around x.
1.4.3 Completeness
Our goal in this section is to extract convergent subsequences from bounded sequences, in an
effort to facilitate our future discussion of compactness. We have already reviewed the Monotone
Convergence Theorem, from which the following (equivalent) Theorem follows:
Theorem 1.43: Nested Interval Theorem
Suppose that I1 ⊇ I2 ⊇ I3 ⊇ ··· is a nested collection of closed intervals Ik = [ak, bk], and (bk − ak) → 0; that is, the lengths of the intervals shrink to zero. Then the intersection of these intervals is non-empty, and in particular consists of a single element, say p. Notationally,
⋂∞k=1 Ik = {p}.
Since the lengths of the intervals tend to zero, this is the only possible point in the intersection
(once again, provide a more rigorous proof of this fact).
Theorem 1.44
Every bounded sequence in R has a convergent subsequence.
Proof. The idea of the proof will be to exploit Theorem 1.43 by successively bisecting an interval containing the sequence. This will lead to a chain of nested intervals, which must have a single point in common. We will then construct a subsequence which converges to this point.
More formally, let (an)∞n=1 be a bounded sequence, and M > 0 be such that |an| ≤ M for all n ∈ N. In particular, an ∈ [−M, M]. Consider the two halves [−M, 0] and [0, M], one of which must contain infinitely many elements of the sequence. Call this interval I1. We inductively construct the closed interval In as follows: Assume that In−1 has been given, and split In−1 into two halves. At least one of these halves must contain infinitely many elements of the sequence, so choose one and call it In.
By construction,
I1 ⊇ I2 ⊇ I3 ⊇ ···,
and the length of the interval Ik is M/2^(k−1). Clearly, as k → ∞ the length of the subintervals tends to 0, and as such the Nested Interval Theorem implies there exists a point p which is contained in every such interval.
We now construct a subsequence which converges to p. Let an1 be any element of (an) which lives in I1. We construct ank inductively as follows: Assume that ank−1 has been specified. Since Ik contains infinitely many elements of the sequence, there exists an element in Ik which is further along the sequence than ank−1. Call this element ank.
Finally, we show that (ank) → p. Let ε > 0 be given and choose N ∈ N such that M/2^(N−1) < ε. If k > N then
|ank − p| ≤ (length of Ik) = M/2^(k−1) < M/2^(N−1) < ε,
as required.
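The bisection argument above is effectively an algorithm, and can be imitated on a long finite prefix of a sequence. In the Python sketch below (ours, not from the notes), "contains many remaining terms" stands in for "contains infinitely many elements", and the function names are our own.

```python
def bisection_subsequence(seq, M, depth=30):
    """Mimic the proof of Theorem 1.44 on a finite prefix: repeatedly halve
    [-M, M], keep a half containing many of the remaining terms (a finite
    proxy for "infinitely many"), and pick a later term inside each half."""
    lo, hi = -M, M
    picked, last = [], -1
    for _ in range(depth):
        mid = (lo + hi) / 2
        left = [i for i, x in enumerate(seq) if i > last and lo <= x <= mid]
        right = [i for i, x in enumerate(seq) if i > last and mid <= x <= hi]
        if len(left) >= len(right):
            hi, pool = mid, left
        else:
            lo, pool = mid, right
        if not pool:  # the finite prefix ran out of suitable later terms
            break
        last = pool[0]
        picked.append(seq[last])
    return picked

# A bounded sequence with no limit: x_n = (-1)^n * (1 + 1/(n+1)).
xs = [(-1)**n * (1 + 1 / (n + 1)) for n in range(10_000)]
sub = bisection_subsequence(xs, M=2)
print(sub[-1])  # the extracted terms cluster near the single point -1
```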
We wish to extend this to discuss sequences in Rn . Though it no longer makes sense to talk
about increasing or decreasing sequences (there is no natural way of ordering n-tuples), we can still
talk about when a sequence is bounded.
Definition 1.45
A sequence (xn)∞n=1 in Rm is bounded if there exists M > 0 such that ‖xn‖ ≤ M for every n ∈ N.
Proposition 1.46
Every bounded sequence in Rn has a convergent subsequence.
Proof. We will give the explicit proof for n = 2, which contains all the important ideas, and comment on how to generalize it afterwards. Let xn = (xn, yn) be a bounded sequence in R². Note that |xn| ≤ ‖xn‖ and |yn| ≤ ‖xn‖, so the sequences (xn) and (yn) are each bounded in R. It is very tempting to
simply take a convergent subsequence of each, but the problem is that we cannot stitch them back
together (See Remark 1.38).
Instead, let (xnk) be a convergent subsequence of (xn), with limit, say, x. Using the same indices, consider the subsequence (ynk). This sequence does not necessarily converge, but it is bounded, so it in turn has a convergent subsequence (ynkℓ) → y. We claim that the (sub)subsequence (xnkℓ, ynkℓ) converges. We already know that (ynkℓ) → y. Furthermore, since (xnkℓ) is a subsequence of (xnk), which we know is convergent, Proposition 1.37 implies that (xnkℓ) → x. By Exercise 1.4.2, since each component converges, (xnkℓ, ynkℓ) → (x, y), as required.
Definition 1.47
A sequence (xn)∞n=1 in Rn is said to be Cauchy if for every ε > 0 there exists an N ∈ N such that whenever k, n > N, ‖xk − xn‖ < ε.
Cauchy sequences are precisely those sequences whose elements get closer together the further
we travel into the sequence. Indeed, if we translate the definition of a Cauchy sequence, it says
By going far enough into a Cauchy sequence, we can ensure that any two elements are
as close together as we want.
The benefit of Cauchy sequences is that they seem to encapsulate the basic behaviour of a convergent
sequence, but one does not need to know a priori the limit itself. The following proposition confirms
this suspicion.
Proposition 1.48
If (xn)∞n=1 is a sequence in Rn, then (xn) is Cauchy if and only if (xn) is convergent.
Proof. [⇐] This is the easier of the two directions. Assume that (xn) is convergent with limit point x and let ε > 0 be given. Choose N ∈ N such that if n > N then ‖xn − x‖ < ε/2. We claim that this N works for the definition of a Cauchy sequence. Indeed, let k, n > N, so that
‖xk − xn‖ ≤ ‖xk − x‖ + ‖xn − x‖ < ε/2 + ε/2 = ε,
as required.
[⇒] Conversely, let us now assume that (xn) is Cauchy. We will first show that (xn) is bounded. Setting ε = 1, there exists an N ∈ N such that whenever n > N, ‖xn − xN‖ < 1, from which it follows that
‖xn‖ ≤ ‖xn − xN‖ + ‖xN‖ < 1 + ‖xN‖.
By setting M = max {‖x1‖, . . . , ‖xN‖, 1 + ‖xN‖} we get ‖xn‖ ≤ M for all n ∈ N.
By Proposition 1.46, (xn) thus has a convergent subsequence (xnk), say with limit point x. We now claim that the original sequence actually converges to x as well. Indeed, let ε > 0 be chosen, and N1 ∈ N be such that for all k, ℓ > N1 we have ‖xk − xℓ‖ < ε/2. Similarly, choose K ∈ N such that
for all k > K we have ‖xnk − x‖ < ε/2. Fix an integer k > N1 such that nk > K, so that if n > N1 we have
‖xn − x‖ ≤ ‖xn − xnk‖ + ‖xnk − x‖ < ε/2 + ε/2 = ε.
Definition 1.49
We say that S ⊆ Rn is complete if every Cauchy sequence in S converges to a point of S.
Proposition 1.48 implies that Rn is complete. We leave it as an exercise for the student to show that S ⊆ Rn is complete if and only if S is closed.
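The practical value of the Cauchy criterion is that convergence can be detected from the terms alone, with no limit known in advance. The Python sketch below (ours, not part of the notes) illustrates this for the partial sums of Σ 1/j², whose limit π²/6 is never used in the computation.

```python
def partial_sums(num_terms):
    """Return the partial sums s_n = sum_{j=1}^{n} 1/j^2."""
    s, out = 0.0, []
    for j in range(1, num_terms + 1):
        s += 1.0 / (j * j)
        out.append(s)
    return out

s = partial_sums(5000)
# Cauchy behaviour: far enough into the sequence, any two terms are close,
# which certifies convergence without naming the limit.
gap = max(abs(s[i] - s[j]) for i in range(4000, 4100) for j in range(4000, 4100))
print(gap)  # a tiny number
```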
1.5 Continuity
We pause our discussion of sequences for the moment (to be resumed quite shortly) to discuss the notion of limits and continuity for functions of several variables. Let us briefly recall the definitions in a single variable, after which we will generalize our discussion to multiple dimensions.
Definition 1.50
Let f : R → R with c, L ∈ R. We say that lim_{x→c} f(x) = L if for every ε > 0 there exists δ > 0 such that whenever 0 < |x − c| < δ, then |f(x) − L| < ε.
We say that f is continuous at c if lim_{x→c} f(x) = f(c). If f is continuous at every point in its domain, we simply say that f is continuous.
Continuity is a way of saying that the function behaves well under limits, or equivalently that limits can be taken inside the function, since
lim_{x→c} f(x) = f( lim_{x→c} x ) = f(c).
This idea that continuous functions permit one to interchange the function evaluation with the
limit will become more evident in a second. We presume that the student is still familiar with these
notions (albeit perhaps a bit rusty), so we will not explore them further at this time and instead
pass to multivariable functions.
Of particular interest will be functions of the form f : Rn → R (though a similar conversation holds for functions f : Rn → Rm). For functions of a single variable, the idea of a limit is that as x gets arbitrarily close to c, the function value f(x) becomes arbitrarily close to L. These notions were made formal by way of the absolute value, which measured distance: |x − c| is the distance between x and c, while |f(x) − L| is the distance between f(x) and L. In Rn we have adapted to use the norm to measure distance, so it seems natural to replace |x − c| with ‖x − c‖.
Definition 1.51
Let f : Rn → Rm with c ∈ Rn and L ∈ Rm. We say that
lim_{x→c} f(x) = L
if for every ε > 0 there exists a δ > 0 such that whenever 0 < ‖x − c‖ < δ, then ‖f(x) − L‖ < ε.
Note that these are different norms; the norm in ‖x − c‖ is the Rn norm, while the norm in ‖f(x) − L‖ is the Rm norm. The student is likely familiar with how unwieldy the ε-δ approach to limits can be, and we assure the reader that this situation is significantly exacerbated in multiple dimensions.
Example 1.52
Solution. Recall that in general, for any arbitrary (a, b) ∈ R² one has
|a| ≤ √(a² + b²), |b| ≤ √(a² + b²). (1.1)
Let ε > 0 be given and choose δ = ε/2. Assume that (x, y) ∈ R² satisfies ‖(x, y) − (1, 1)‖ < δ, so that
Example 1.53
Show that lim_{(x,y)→(0,0)} xy/√(x² + y²) = 0.
Solution. Let ε > 0 be given and choose δ = ε. If (x, y) ∈ R² satisfies 0 < ‖(x, y)‖ < δ then
|xy/√(x² + y²) − 0| = |x||y|/√(x² + y²) ≤ (√(x² + y²)·√(x² + y²))/√(x² + y²) = √(x² + y²) = ‖(x, y)‖ < δ = ε.
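The key inequality |xy|/√(x² + y²) ≤ ‖(x, y)‖ can be sanity-checked at random points; the Python sketch below is ours, not from the notes.

```python
import math
import random

def f(x, y):
    """The function xy / sqrt(x^2 + y^2), defined away from the origin."""
    return x * y / math.hypot(x, y)

random.seed(1)
pts = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
# |f(x, y)| <= ||(x, y)|| since |x| and |y| are each at most ||(x, y)||.
ineq_ok = all(abs(f(x, y)) <= math.hypot(x, y) for (x, y) in pts)
print(ineq_ok)  # prints True
```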
Example 1.54
Solution. Let ε > 0 be given and choose δ = ε/3. Notice that
= 3‖(x − 1, y)‖²,
and as such
‖f(x) − L‖ ≤ 3‖x − (1, 0)‖ < 3δ = ε.
Recall that
lim_{x→c} f(x) exists ⟺ lim_{x→c⁻} f(x) and lim_{x→c⁺} f(x) both exist and agree.
This represents the fact that the limit exists if and only if the limit is the same regardless of which path I take to get to c. The problem in Rn is much more difficult, since even in R² the number of ways in which a limit can be approached is infinite. In Example 1.53 we took the limit as (x, y) → (0, 0). We can approach the origin (0, 0) along the x-axis, the y-axis, or along any line in R² (see Figure 13). In fact, one need not even approach along lines: one can approach along any path in R² that leads to the origin. For the limit to exist overall, the limit along every possible path to the origin must exist, and they must all be equal.
Figure 13: Even in R2 , there are infinitely many ways of approaching a point. For a limit to exist,
the limit along each path must exist and must be equal to that achieved from every other path.
Example 1.55
Show that the limit lim_{(x,y)→(0,0)} x²y²/(x⁴ + y⁴) does not exist.
Solution. Let us approach the origin along the straight lines y = mx, where m ∈ R is arbitrary. If the limit exists, it must be the same regardless of our choice of m. Let f(x, y) = x²y²/(x⁴ + y⁴), and note
lim_{x→0} f(x, mx) = lim_{x→0} x²(mx)²/(x⁴ + (mx)⁴) = lim_{x→0} m²x⁴/((m⁴ + 1)x⁴) = m²/(m⁴ + 1).
This limit clearly depends upon the choice of m, and so we conclude that the limit does not exist.
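One can watch this failure numerically: evaluating f very close to the origin along lines of different slopes produces different values (a Python sketch of ours, not from the notes).

```python
def f(x, y):
    """f(x, y) = x^2 y^2 / (x^4 + y^4), defined away from the origin."""
    return (x * x) * (y * y) / (x**4 + y**4)

# Along y = m x the value is m^2 / (m^4 + 1), independent of x, so shrinking
# x toward 0 along different slopes approaches different numbers.
t = 1e-8
values = {m: f(t, m * t) for m in (0.5, 1.0, 2.0)}
print(values)  # different "limits" per slope, so the two-variable limit fails
```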
The inquisitive reader might suspect that it is only straight lines that pose problems. For example, could it be the case that if the limit exists along every line y = mx then the limit can be guaranteed to exist? The following example shows that this is not the case:
Example 1.56
Show that the function f(x, y) = 2xy²/(x² + y⁴) admits a limit along every line y = mx, but fails to admit a limit along the parabolas x = my².
Solution. Along any line y = mx one can check that lim_{x→0} f(x, mx) = 0 (exercise). On the other hand, taking the limit along the parabolas x = my²,
lim_{y→0} f(my², y) = lim_{y→0} 2(my²)y²/((my²)² + y⁴) = lim_{y→0} 2my⁴/((m² + 1)y⁴) = 2m/(m² + 1),
and this clearly depends on m. We conclude that the limit does not exist.
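A quick numerical comparison makes the same point (Python, ours, not part of the notes): along the line y = x the values of f vanish near the origin, while along the parabola x = y² they are identically 1.

```python
def f(x, y):
    """f(x, y) = 2 x y^2 / (x^2 + y^4), defined away from the origin."""
    return 2 * x * y * y / (x * x + y**4)

t = 1e-4
along_line = f(t, t)          # = 2t/(1 + t^2), which tends to 0 with t
along_parabola = f(t * t, t)  # = 2t^4 / (2t^4) = 1 for every t != 0
print(along_line, along_parabola)
```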
Things seem rather hopeless: The ε-δ definition is tricky to work with, and the above examples show that we cannot evaluate limits simply by testing typical paths. What progress can we possibly make? Our salvation lies with the fact that the Squeeze Theorem also holds for functions f : Rn → R.
Theorem 1.57: Multivariable Squeeze Theorem
Let f, g, h : Rn → R satisfy g(x) ≤ f(x) ≤ h(x) for all x in a neighbourhood of c. If lim_{x→c} g(x) = L = lim_{x→c} h(x), then lim_{x→c} f(x) = L.
The proof is identical to that of the single variable squeeze theorem, so we leave it as an exercise.
Example 1.58
Show that lim_{(x,y)→(0,0)} 3x²y²/(x² + y²) = 0.
Example 1.59
Determine the limit lim_{(x,y)→(0,0)} y⁴sin²(xy)/(x² + y²).
Solution. Taking absolute values and using the facts that |sin(xy)| ≤ 1 and y² ≤ x² + y², we get
0 ≤ |y⁴sin²(xy)/(x² + y²)| ≤ y⁴/(x² + y²) ≤ y²(x² + y²)/(x² + y²) = y².
As both sides tend to zero as (x, y) → (0, 0), we conclude that
lim_{(x,y)→(0,0)} y⁴sin²(xy)/(x² + y²) = 0.
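The squeeze bound 0 ≤ f(x, y) ≤ y² can be verified at random sample points (a Python sketch of ours, not from the notes):

```python
import math
import random

def f(x, y):
    """f(x, y) = y^4 sin^2(xy) / (x^2 + y^2), defined away from the origin."""
    return y**4 * math.sin(x * y)**2 / (x * x + y * y)

random.seed(0)
samples = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(1000)]
# The two bounds used with the Squeeze Theorem: 0 <= f(x, y) <= y^2.
squeeze_ok = all(0 <= f(x, y) <= y * y for (x, y) in samples)
print(squeeze_ok)  # prints True
```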
Now that we have tools for discussing limits, we can move on to the notion of continuity, which in a multivariable context is nearly identical to the single variable definition.
Definition 1.60
A function f : Rn → Rm is continuous at c ∈ Rn if lim_{x→c} f(x) = f(c).
For example, the function f(x, y) = y⁴sin²(xy)/(x² + y²) from Example 1.59 is undefined at (0, 0), but if we define
g(x, y) = y⁴sin²(xy)/(x² + y²) if (x, y) ≠ (0, 0), and g(x, y) = 0 if (x, y) = (0, 0),
then g is continuous at (0, 0), since lim_{(x,y)→(0,0)} g(x, y) = 0 = g(0, 0).
³Recall that −|f(x)| ≤ f(x) ≤ |f(x)|, so if |f(x)| → 0 as x → c, the Squeeze Theorem implies that f(x) → 0 as x → c.
Figure 14: If (an) → a, then by going far enough into our sequence (blue) we can guarantee that we will be in a δ-neighbourhood of a. The images of these points are the f(an) (brown), which live in the desired ε-neighbourhood because of the continuity of f.
Theorem 1.61
A function f : Rn → Rm is continuous at a ∈ Rn if and only if for every sequence (an) with (an) → a, the image sequence satisfies (f(an)) → f(a).
Proof. [⇒] Assume that f is continuous, and let (an) → a. We want to show that (f(an)) → f(a). Let ε > 0 be given. Since f is continuous, there exists a δ > 0 such that for each x satisfying ‖x − a‖ < δ we have ‖f(x) − f(a)‖ < ε. Since (an) is convergent, there exists an N ∈ N such that for all n ≥ N we have ‖an − a‖ < δ. Combining these, we see that whenever n ≥ N we have ‖f(an) − f(a)‖ < ε, so that (f(an)) → f(a).
[⇐] Conversely, suppose that f maps every sequence converging to a to a sequence converging to f(a), but that f is not continuous at a. Then there exists an ε > 0 such that for each δ = 1/n there is a point an with ‖an − a‖ < 1/n yet ‖f(an) − f(a)‖ ≥ ε. But then (an) → a while (f(an)) does not converge to f(a), a contradiction.
Theorem 1.61 shows that a function is continuous if and only if it maps convergent sequences to convergent sequences. This is precisely what we mean when we say that we can interchange a function with the limit, since if (xn) → a then
lim_{n→∞} f(xn) = f( lim_{n→∞} xn ) = f(a).
Theorem 1.62
A function f : Rn → Rm is continuous if and only if for every open set U ⊆ Rm, the preimage f⁻¹(U) is open in Rn.
Proof. [⇒] Assume that f is continuous, and let U ⊆ Rm be an open set. Let x ∈ f⁻¹(U) be arbitrary and consider f(x) ∈ U. Since U is open, there exists an ε > 0 such that Bε(f(x)) ⊆ U, and since f is continuous, let δ > 0 be the choice of delta which corresponds to this epsilon at x. We claim that Bδ(x) ⊆ f⁻¹(U). Indeed, let y ∈ Bδ(x) so that ‖x − y‖ < δ. By continuity, ‖f(x) − f(y)‖ < ε, which shows that f(y) ∈ Bε(f(x)) ⊆ U, thus y ∈ f⁻¹(U) as required.
Figure 15: To show that the pre-image of open sets is open, we use the fact that the condition ‖f(x) − f(y)‖ < ε is exactly the same thing as saying f(y) lies in the ε-ball around f(x).
[⇐] Assume that the preimage of open sets is open, for which we want to show that f is continuous, say at x. Let ε > 0 be given, and set U = Bε(f(x)). Certainly we have x ∈ f⁻¹(U), and since this is an open set by assumption, there exists a δ > 0 such that Bδ(x) ⊆ f⁻¹(U). We claim that this choice of delta will satisfy the continuity requirement. Indeed, let y be a point such that ‖x − y‖ < δ; that is, y ∈ Bδ(x). Since Bδ(x) ⊆ f⁻¹(U) we know that f(y) ∈ f(Bδ(x)) ⊆ U = Bε(f(x)); that is, ‖f(y) − f(x)‖ < ε, as required.
Example 1.63
Show that the set S = {(x, y) ∈ R² : y > 0} is open.
Solution. This is the same set as in Example 1.24, wherein we showed that S was open by constructing an open ball around every point. Consider the function f : R² → R given by f(x, y) = y. The student can convince him/herself that this function is continuous, and moreover, that S = f⁻¹((0, ∞)). Since (0, ∞) is open in R and f is continuous, it follows that S is open as well.
Many of the theorems about continuous functions in a single variable carry over to multiple
dimensions, for example
Theorem 1.64
This is a simple theorem to prove, so it is left as an exercise for the student. It is straightforward to show that the maps (x, y) ↦ x ± y and (x, y) ↦ xy are continuous, which immediately implies that the sum and product of continuous functions are also continuous.
1.6 Compactness
In our study of calculus on R, there is a very real sense in which the sets [a, b] are the best behaved: They are closed, which means we need not worry about the distinction between infimum/supremum and minimum/maximum, and they are bounded, so we need not worry about wandering off to infinity. In fact, one might recall that the Extreme Value Theorem was stated for an interval of this type.
We have since explored the notions of closedness and boundedness in multiple dimensions, and once again it seems as though the same benefits afforded in the single variable case also extend to Rn. We give such sets a very special name:
Definition 1.65
A set S Rn is compact if it is both closed and bounded.
Example 1.66
1. As mentioned, the interval [a, b] ⊆ R is compact. More generally, any closed ball B̄r(x) ⊆ Rn is compact.
2. As finite unions of closed and bounded sets are closed and bounded, the finite union of compact sets is compact.
3. The set consisting of a single point is compact. By the previous example, every finite
set is also compact.
5. The rationals Q ⊆ R are neither closed nor bounded, and hence are not compact.
Exercise: Prove property (2); that is, show that a finite union of compact sets is still com-
pact. Give an example to show that the result is not true if we allow infinite unions.
It turns out that this definition, while convenient conceptually, does not lend itself to proving
results about compact sets. Nor does it generalize to more abstract spaces. As such, we have the
following two equivalent definitions of compactness:
Theorem 1.67
Let S ⊆ Rn. The following are equivalent:
1. S is compact,
2. [Bolzano-Weierstrass] Every sequence in S admits a subsequence which converges to a point of S,
3. [Heine-Borel] Every open cover of S admits a finite subcover; that is, if {Ui}i∈I is a collection of open sets such that S ⊆ ⋃i∈I Ui, then there exists a finite subset J ⊆ I such that S ⊆ ⋃i∈J Ui.
Proof. This is typically stated as two separate theorems: The Heine-Borel Theorem and the
Bolzano-Weierstrass Theorem, in which one shows that each of the corresponding alternate defini-
tions are equivalent to closed and bounded. We will only prove Bolzano-Weierstrass, as Heine-Borel
is more complicated.
[(1) ⇒ (2)] Suppose that S is closed and bounded, and let (xn)∞n=1 be a sequence in S. Since S is bounded, so too is (xn), in which case Theorem 1.44 implies there exists a convergent subsequence (xnk) → x. A priori, we only know that x ∈ Rn, but since S is closed, by Corollary 1.42 we know that x ∈ S. Thus (xnk) is a subsequence converging to a point of S.
[(2) ⇒ (1)] We will proceed by contrapositive. Assume therefore that S is either not closed or not bounded.
If S is not closed, there exists x ∈ S̄ \ S. By Proposition 1.41 there exists a sequence (xn)∞n=1 in S such that (xn) → x. Since (xn) converges, by Proposition 1.37 every subsequence also converges, and to the same limit point x ∉ S. Thus (xn) is a sequence in S with no subsequence converging to a point of S.
Now assume that S is not bounded. One can easily construct a sequence (xn) such that ‖xn‖ → ∞. Necessarily, any subsequence of (xn) also satisfies this property, and so (xn) has no convergent subsequence.
Remark 1.68 There are further characterizations of compactness beyond those of Theorem 1.67, and in a more general topological context the characterizations above are no longer equivalent to one another. The statement corresponding to Heine-Borel is taken as the true definition of compactness (in some contexts known as quasi-compactness), while the Bolzano-Weierstrass characterization is referred to as sequential compactness.
One of the more potent results about compact sets is the following theorem.
Theorem 1.69
The continuous image of a compact set is compact. More precisely, if f : Rn → Rm is continuous and K ⊆ Rn is compact, then f(K) is compact.
Proof. We will proceed via the Bolzano-Weierstrass characterization. Consider a sequence (yn)∞n=1 in f(K), for which our intent is to find a convergent subsequence. By definition of f(K), for each yn there exists an xn ∈ K such that yn = f(xn), and hence we can define a sequence (xn) in K. Since K is compact, there exists a convergent subsequence (xnk) → x, with x ∈ K.
We claim that the corresponding subsequence (ynk) converges to f(x). Indeed, since f is continuous, we know that
lim_{k→∞} f(xnk) = f( lim_{k→∞} xnk ) = f(x),
and since x ∈ K, we know f(x) ∈ f(K). Thus (ynk) is a convergent subsequence in f(K), and we conclude that f(K) is compact.
Figure 16: Proof of Theorem 1.69. We start with a random sequence (yn ) in f (K) and look at the
points (xn ) in K which map to (yn ). We choose a convergent subsequence (xnk ) (green), and use
that subsequence to define a convergent sequence (ynk ) (green) in f (K).
This now immediately implies a familiar theorem from single variable calculus:
Theorem 1.70: Extreme Value Theorem
If K ⊆ Rn is compact and f : Rn → R is continuous, then f attains its maximum and minimum on K; that is, there exist xmin, xmax ∈ K such that f(xmin) ≤ f(x) ≤ f(xmax) for every x ∈ K.
Proof. Since f is continuous and K is compact, by Theorem 1.69 we know that f(K) is compact, and as such is both closed and bounded. Since f(K) ⊆ R, the completeness axiom implies that sup f(K) and inf f(K) both exist. Since f(K) is closed, the supremum and infimum are actually in f(K), so there exist xmin, xmax ∈ K such that
f(xmin) = inf f(K), f(xmax) = sup f(K),
and by definition of inf and sup, for every x ∈ K,
f(xmin) = inf f(K) ≤ f(x) ≤ sup f(K) = f(xmax),
as required.
1.7 Connectedness
Connectedness is an expansive and important topic, but one which is also quite subtle. The true definition embodies pathological cases which will not be of concern in the majority of our work, and so it is more intuitive to introduce a related notion known as path connectedness.
Intuitively, we would like something to be connected if it cannot be decomposed into two separate pieces. Hence we might say that a set S is not connected if there exist non-empty S1, S2 such that S = S1 ∪ S2 and S1 ∩ S2 = ∅. This latter condition is important to guarantee that the two sets do not overlap. Unfortunately, this condition does not actually capture the idea we are trying to convey. For example, one expects that the interval S = (0, 2) should be connected: it looks like all one piece. Nonetheless, we can write (0, 2) = (0, 1) ∪ [1, 2), so that if S1 = (0, 1) and S2 = [1, 2) then S = S1 ∪ S2 and S1 ∩ S2 = ∅.
The remedy is to enforce a condition on the closure of each set; namely, that S̄1 ∩ S2 = ∅ and S1 ∩ S̄2 = ∅. Ensuring that these intersections are empty ensures that our sets are far enough apart.
Definition 1.71
A set S ⊆ Rn is said to be disconnected if there exist non-empty S1, S2 ⊆ S such that
1. S = S1 ∪ S2,
2. S̄1 ∩ S2 = ∅ and S1 ∩ S̄2 = ∅.
In this case, the pair (S1, S2) is called a disconnection of S. A set which is not disconnected is said to be connected.
This definition is such that it is much easier to show that a set is disconnected rather than
connected, since to show that a set is connected we must then show that there is no disconnection
amongst all possible candidates.
Example 1.72
Show that each of the following sets is disconnected:
1. S = [0, 1] ∪ [2, 3] ⊆ R,
2. Q ⊆ R,
3. T = {(x, y) ∈ R² : y ≠ x}.
Solution.
1. The disconnection for this case is evident: by setting S1 = [0, 1] and S2 = [2, 3], all conditions are satisfied. Hence (S1, S2) is a disconnection of S.
2. Set S1 = Q ∩ (−∞, √2) and S2 = Q ∩ (√2, ∞), so that Q = S1 ∪ S2 since √2 is irrational. Moreover,
S̄1 ∩ S2 ⊆ (−∞, √2] ∩ (√2, ∞) = ∅,
and similarly S1 ∩ S̄2 = ∅, so (S1, S2) is a disconnection of Q.
3. Our set T looks like the plane with the line y = x removed. Since the line y = x somehow splits the space, one might be unsurprised that this set is disconnected. Let S1 = {(x, y) : y > x} and S2 = {(x, y) : y < x}, so that T = S1 ∪ S2. Furthermore,
S̄1 ∩ S2 ⊆ {(x, y) : y ≥ x} ∩ {(x, y) : y < x} = ∅,
and similarly S1 ∩ S̄2 = ∅, so (S1, S2) is a disconnection of T.
Remark 1.73 Examples (2) and (3) above show that the elements of the disconnection
can be arbitrarily close to one another yet still form a disconnection.
Proposition 1.74
A subset of R is connected if and only if it is an interval.
Despite the simplicity of the statement, the proof of this result is non-trivial. It can be found in the textbook, so we leave it as an exercise for the interested student.
So in general, it seems that proving that a set is connected can prove quite bothersome. An
excellent tool for proving connectedness will be the following weaker definition:
Definition 1.75
If S ⊆ Rn then a path in S is any continuous map γ : [0, 1] → S. We say that S is path-connected if for every two points a, b ∈ S there exists a path γ : [0, 1] → S such that γ(0) = a and γ(1) = b.
Intuitively, a set is path connected if between any two points in our set, we can draw a curve connecting those two points which never leaves the set.
Example 1.76
Show that any interval [a, b] ⊆ R is path connected.
Solution. Let c, d ∈ [a, b] be arbitrary, and define the map γ : [0, 1] → [a, b] by γ(t) = td + (1 − t)c. One can easily check that γ is continuous, and γ(0) = c, γ(1) = d. We conclude that [a, b] is path connected.
Naturally, the solution above would also work for (half) open intervals. We invite the student
to compare this proof to the one for Proposition 1.74, to see the difference in complexity required
to show that a set is connected as compared to path connected.
Example 1.77
Show that the set S = {(x, y) ∈ R² : x ≠ 0} ∪ {(0, 0)} is path connected.
Solution. Consider Figure 17, which suggests how we might proceed. If the two points lie in the same half of the plane, we can connect them with a straight line. If they lie in separate halves of the plane, we can connect them with paths that first go through the origin.
Figure 17: If a and b1 lie in the same half-plane, we can connect them with a straight line. If a and b2 lie in opposite half-planes, we can connect them with a path through the origin.
Choose two points a = (a1, a2), b = (b1, b2) ∈ S. Our first case will be to assume that both a and b lie in the same half of the plane. Without loss of generality, assume that a1, b1 > 0. Define the path
γ(t) = at + (1 − t)b = (a1t + (1 − t)b1, a2t + (1 − t)b2).
Since a1 and b1 are both positive, the x-coordinate of the path, a1t + (1 − t)b1, is also always positive. Thus γ is a path entirely in S.
For our other case, assume then that a1 < 0 and b1 > 0. Consider the two paths γ1(t) = a(1 − t) and γ2(t) = bt, both of which are paths between their respective points and the origin which remain entirely within S. By concatenating these paths, we can define a new path
γ(t) = γ1(2t) for t ∈ [0, 1/2], and γ(t) = γ2(2t − 1) for t ∈ [1/2, 1].
It is easy to check that γ is continuous, γ(0) = a and γ(1) = b. As each constituent path lies entirely within S, so too does the concatenated path, as required. We conclude that S is path connected.
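The concatenation step can be written out concretely. The following Python sketch is ours, not from the notes; the particular points, the helper names, and the membership test for S (matching this example's set) are all assumptions made for illustration.

```python
def concatenate(g1, g2):
    """Concatenate paths g1, g2 : [0, 1] -> R^2 with g1(1) == g2(0)."""
    def g(t):
        return g1(2 * t) if t <= 0.5 else g2(2 * t - 1)
    return g

a, b = (-3.0, 1.0), (2.0, 5.0)                   # points in opposite half-planes
g1 = lambda t: (a[0] * (1 - t), a[1] * (1 - t))  # straight line from a to the origin
g2 = lambda t: (b[0] * t, b[1] * t)              # straight line from the origin to b
g = concatenate(g1, g2)

def in_S(p):
    """Membership in S = {(x, y) : x != 0} together with the origin."""
    return p[0] != 0 or (p[0] == 0 and p[1] == 0)

endpoints_ok = g(0.0) == a and g(1.0) == b
stays_in_S = all(in_S(g(k / 1000)) for k in range(1001))
print(endpoints_ok, stays_in_S)
```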
Theorem 1.78
The continuous image of a (path) connected set is (path) connected. More precisely, if
f : Rn Rm is continuous and S Rn is (path) connected, then f (S) is (path) connected.
Proof. We will show the (much simpler) proof when S is path connected, and leave the connected case as an exercise.
Assume then that S is path connected, and consider f(S). Let a, b ∈ f(S) be any two points, and choose x, y ∈ S such that f(x) = a and f(y) = b. Since S is path connected, there is a path γ : [0, 1] → S such that γ(0) = x and γ(1) = y. We claim that f ∘ γ : [0, 1] → f(S) is a path in f(S) connecting a and b. Indeed, since γ and f are both continuous, their composition f ∘ γ is also continuous. Evaluating the endpoints, we have (f ∘ γ)(0) = f(γ(0)) = f(x) = a and (f ∘ γ)(1) = f(γ(1)) = f(y) = b, as required.
Proof. Regardless of whether we take V to be connected or path connected, we know that the image f(V) is an interval. Since f(a), f(b) ∈ f(V), then [f(a), f(b)] ⊆ f(V), and the result follows.
I have mentioned that path connectedness is a strictly stronger notion than connectedness; that is, any path connected set is necessarily connected, but the converse need not be true. This is demonstrated by the following proposition and the example thereafter.
Proposition 1.80

If S ⊆ Rn is path connected, then S is connected.
Proof. We will proceed by contradiction. Assume then that S ⊆ Rn is path connected but not connected, so there exists a disconnection (S1, S2). Choose a ∈ S1 and b ∈ S2 and let γ : [0, 1] → S be a path from a to b. Since γ is continuous, P = γ([0, 1]) is necessarily connected. On the other hand, let P1 = P ∩ S1 and P2 = P ∩ S2, so that P1 ∪ P2 = (S1 ∪ S2) ∩ P = P, while

P̄1 ∩ P2 = (P̄ ∩ S̄1) ∩ S2 ⊆ S̄1 ∩ S2 = ∅,

and similarly P1 ∩ P̄2 = ∅. Thus (P1, P2) is a disconnection of P, contradicting the fact that P is connected.
To see that connected does not imply path connected, consider the following set, known as the Topologist's Sine Curve:

{ (x, sin(1/x)) : x ∈ R \ {0} } ∪ {(0, 0)}.

It is possible to show that this set is connected (convince yourself of this) but not path connected (also convince yourself of this). Thus path connectedness is not equivalent to connectedness. A partial converse is given by the following:
Proposition 1.81

If S ⊆ Rn is open and connected, then S is path connected.
1.8 Uniform Continuity
Stronger than continuity, there is a notion of uniform continuity, which plays more nicely with Cauchy sequences than simple continuity. The idea is as follows: if we write out the ε-δ definition of continuity in full quantifiers, we get

for every x ∈ D and every ε > 0, there exists δ > 0 such that for all y ∈ D, ‖x − y‖ < δ implies ‖f(x) − f(y)‖ < ε.

The fact that the delta is specified after both the ε and the point x means that δ = δ(ε, x) is a function of both these terms; that is, changing either ε or the point x will change the necessary value of δ. This is perhaps unsurprising, since the choice of δ really corresponds to how quickly the function is growing at a point (see Figure 18).

The idea of uniform continuity is that given a fixed ε > 0, one can find a single δ > 0 which works for every point x.
Definition 1.82

Let D ⊆ Rn and f : D → Rm. We say that f is uniformly continuous if for every ε > 0 there exists a δ > 0 such that for every x, y ∈ D satisfying ‖x − y‖ < δ, we have ‖f(x) − f(y)‖ < ε.
As stated, the definition of uniform continuity implies that δ depends only upon the choice of ε, not on the particular point that we choose. Intuitively, uniformly continuous functions are in some sense bounded in how quickly they are permitted to grow.
Figure 18: For a fixed ε > 0, the value of δ depends on the choice of the point x. In fact, the faster a function grows at a point, the smaller the corresponding δ will be.
Example 1.83

Show that the function f : R → R given by f(x) = 2x + 5 is uniformly continuous.

Solution. Let ε > 0 and choose δ = ε/2. Let x, y ∈ R be any points such that |x − y| < δ, and notice that

|f(x) − f(y)| = |(2x + 5) − (2y + 5)| = 2|x − y| < 2δ = ε.
The domain is an exceptionally important piece of information when determining uniform con-
tinuity, as the following example shows.
Example 1.84

Show that g : [−2, 2] → R given by g(x) = x² is uniformly continuous, but that f : R → R given by f(x) = x² is not.

Solution. Let ε > 0 and choose δ = ε/4. Let x, y ∈ [−2, 2] be such that |x − y| < δ. Since x, y ∈ [−2, 2] we know that |x + y| ≤ |x| + |y| ≤ 4, and moreover

|g(x) − g(y)| = |x² − y²| = |x + y||x − y| ≤ 4|x − y| < 4δ = ε

as required. On the other hand, suppose for the sake of contradiction that f is uniformly continuous. Let ε = 1 and choose the δ > 0 guaranteed by uniform continuity. Choose x ∈ R such that x > 1/δ, and set y = x + δ/2. Clearly |x − y| < δ, but

|f(x) − f(y)| = |x² − (x + δ/2)²| = xδ + δ²/4 > δ · (1/δ) = 1 = ε,
which is a contradiction.
Notice that the proof that f fails to be uniformly continuous cannot be applied to g, precisely because g is only defined on the interval [−2, 2], and as such we cannot guarantee there exists an x with x > 1/δ.
So far, our examples have been limited to functions f : R → R, and naturally the situation becomes more complicated in higher dimensions. Luckily, with the use of compactness, we can prove the following theorem:
Theorem 1.85

If D ⊆ Rn is compact and f : D → Rm is continuous, then f is uniformly continuous on D.
Proof. The proof of this theorem is particularly slick using the Heine-Borel theorem, but as our
characterization of compactness has been principally the Bolzano-Weierstrass theorem, we will
proceed with that.
Assume, for the sake of contradiction, that f is not uniformly continuous; that is, there exists an ε > 0 such that for all δ > 0 we can find a pair x, y ∈ D such that ‖x − y‖ < δ but ‖f(x) − f(y)‖ ≥ ε. Let ε > 0 be as given, and for each n ∈ N let δn = 1/n. Define xn, yn to be the corresponding pair such that ‖xn − yn‖ < δn and ‖f(xn) − f(yn)‖ ≥ ε.

The sequence (xn) is a sequence in the compact set D, and so by Bolzano-Weierstrass it has a convergent subsequence (xnk) → x. Since ‖xn − yn‖ → 0, one can show that (ynk) → x as well. Since f is continuous, f(xnk) → f(x) and f(ynk) → f(x), so that ‖f(xnk) − f(ynk)‖ → 0, contradicting the fact that ‖f(xnk) − f(ynk)‖ ≥ ε for every k.
Exercise: Prove the above theorem using the open covering version of compactness.
This allows us to immediately deduce that some functions are uniformly continuous, without having to go through the trouble of proving the ε-δ version. For example, the function f(x, y) = √(sin(x) + cos²(y)), defined on the closed ball B̄1(0), is uniformly continuous by virtue of the fact that f is continuous and B̄1(0) is compact.
2 Differential Calculus
2.1 Derivatives
2.1.1 Single Variable: R → R
Recall that a function f : R → R is differentiable at a point a if

lim_{h→0} (f(a + h) − f(a))/h exists.

Equivalently, we may write f(a + h) = f(a) + mh + error(h), where error(h) is the corresponding error in the linear approximation. For the approximation to be good, the error should go to zero faster than linearly in h; that is,

lim_{h→0} error(h)/h = 0.

Substituting error(h) = f(a + h) − f(a) − mh, differentiability at a becomes

lim_{h→0} (f(a + h) − f(a) − mh)/h = 0.    (2.1)
One can manipulate (2.1) to show that m = f′(a) under the usual definition. Of course, everything we know about single variable calculus is still true: the product rule, the chain rule, our theorems regarding differentiability. We will not replicate the list here, for it is too long, and the student should be well familiar with it.
2.1.2 Vector Valued: R → Rn
The first and simplest generalization of the derivative comes from looking at vector valued functions γ : R → Rn. Such functions are often visualized as parameterized paths in Rn.
Example 2.2
1. Consider the function γ1 : [0, 2π) → R², t ↦ (cos(t), sin(t)). By plotting the values of the function for t ∈ [0, 2π), we see that γ1 traces out the unit circle in R².

2. The map γ2 : (0, ∞) → R² given by γ2(t) = (t cos(t), t sin(t)) is a spiral (see Figure 19b).

3. The function γ3 : R → R³ given by γ3(t) = (t, cos(t), sin(t)) is a helix (see Figure 19a).
Figure 19: Examples of parameterized curves and their derivatives. Left: A helix in R³. Right: A spiral in R².
Definition 2.3

We say that a function γ : R → Rn is differentiable at t0 if

γ′(t0) = lim_{h→0} (γ(t0 + h) − γ(t0))/h
       = ( lim_{h→0} (γ1(t0 + h) − γ1(t0))/h , . . . , lim_{h→0} (γn(t0 + h) − γn(t0))/h )

exists. If γ is differentiable at every point in its domain, we will say that γ is differentiable.
Thus a vector-valued function of a single variable is differentiable precisely when each of its component functions is differentiable, and the derivative may be computed by differentiating each component separately. For example, we can immediately deduce that the curve γ(t) = (e^t, cos(t²), (t² + 1)⁻¹) is differentiable everywhere, since each of its component functions is differentiable everywhere, and moreover its derivative is given by

γ′(t) = ( e^t, −2t sin(t²), −2t/(1 + t²)² ).
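Componentwise differentiation is easy to sanity-check numerically: a central difference quotient applied to each component of γ should agree with the derivative computed above. The sample point and tolerance below are arbitrary choices.

```python
import math

def gamma(t):
    # the curve from the text: (e^t, cos(t^2), 1/(t^2 + 1))
    return (math.exp(t), math.cos(t**2), 1.0 / (t**2 + 1))

def dgamma(t):
    # the componentwise derivative computed in the text
    return (math.exp(t), -2*t*math.sin(t**2), -2*t / (1 + t**2)**2)

def central_diff(f, t, h=1e-6):
    # differentiate each component separately via a symmetric quotient
    return tuple((a - b) / (2*h) for a, b in zip(f(t + h), f(t - h)))

t0 = 1.3
numeric = central_diff(gamma, t0)
exact = dgamma(t0)
print(all(abs(n - e) < 1e-6 for n, e in zip(numeric, exact)))  # True
```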
Example 2.4

Solution. In every case we need only read off the derivatives of each component.

The usual differentiation rules also carry over componentwise. If φ : R → R and f, g : R → Rn are differentiable, then

1. (φf)′ = φ′f + φf′,

2. (f · g)′ = f′ · g + f · g′.

Proof. We will do the proof for (2) and leave the others as an exercise for the student. Let f(t) = (f1(t), . . . , fn(t)) and g(t) = (g1(t), . . . , gn(t)). Differentiating their dot product yields

d/dt (f(t) · g(t)) = d/dt (f1(t)g1(t) + · · · + fn(t)gn(t))
                   = f1′(t)g1(t) + f1(t)g1′(t) + · · · + fn′(t)gn(t) + fn(t)gn′(t)
                   = f′(t) · g(t) + f(t) · g′(t).
2.1.3 Multivariable: Rn → R

The previous section represented the simplest generalization of the derivative to multiple dimensions. In this section we will now examine what happens when our function takes in multiple parameters. This situation is significantly more complicated: in the previous two sections, only a single variable existed to be differentiated. Now that our functions have multiple parameters, making sense of how to meaningfully define a derivative becomes its own challenge.
Let f : Rn → R be a function. We can visualize this function by thinking about its graph,
as illustrated in Figure 20 (the graph of f, drawn as a surface lying over Rn). In this case, what does it mean to behave linearly? The correct notion of a linear object in Rn+1 is that of an n-plane. An n-plane through the origin has the equation
c1 x1 + c2 x2 + · · · + cn xn + cn+1 xn+1 = c · x = 0.

In the case where n = 1, this reduces to c1 x1 + c2 x2 = 0, which we recognize as a straight line through the origin. If we instead would like this plane to pass through a point a ∈ Rn+1, we can change this to

0 = c · (x − a) = c · x − c · a.
Recall that the limit h → 0 means that we approach 0 from every possible direction, which is why we had to use an n-plane to capture the idea of approaching the point a from every conceivable direction. One can show that the equation of the tangent n-plane at f(a) is given by xn+1 = f(a) + ∇f(a) · (x − a), so that f(a + h) − f(a) − ∇f(a) · h represents the error between the value of the function and that of the tangent plane. Once again, the condition on differentiability means that this error goes to zero faster than linearly.
Example 2.7

Show that the function f(x, y) = x² + y² is differentiable at the point a = (1, 0) with ∇f(1, 0) = (2, 0). Determine more generally what ∇f(a) should be for general a.
Solution. With a = (1, 0) and ∇f(a) = (2, 0), we compute

lim_{h→0} (f(a + h) − f(a) − ∇f(a) · h)/‖h‖ = lim_{h→0} ((1 + h1)² + h2² − 1 − 2h1)/‖h‖ = lim_{h→0} (h1² + h2²)/‖h‖ = lim_{h→0} ‖h‖ = 0,
which is precisely what we wanted to show. More generally, let a = (x, y) and ∇f(a) = (c1, c2), so that differentiability becomes

lim_{h→0} ((2x − c1)h1 + (2y − c2)h2 + h1² + h2²)/‖h‖ = 0.

If either 2x − c1 or 2y − c2 is nonzero, then this limit does not exist, which implies that c1 = 2x and c2 = 2y; that is, ∇f(x, y) = (2x, 2y).
Remark 2.8

1. It was very necessary that we considered the entire f(a + h) − f(a) − ∇f(a) · h term, since the cancellations were necessary to ensure that the limit exists. As such, we cannot just drop the ∇f(a) · h as we were able to do when our functions were maps from R to R.

2. Notice that our gradient ∇f(x, y) = (2x, 2y) contains the terms 2x and 2y, which are the derivatives of x² and y² respectively. So it seems like the gradient might still be related to the one-dimensional derivatives.
Theorem 2.9

If f : Rn → R is differentiable at a ∈ Rn, then f is continuous at a.
Proof. Writing

f(a + h) − f(a) = [ (f(a + h) − f(a) − ∇f(a) · h)/‖h‖ ] ‖h‖ + ∇f(a) · h,

the first term is a product of two quantities each tending to zero, and the second also tends to zero, so lim_{h→0} (f(a + h) − f(a)) = 0.
Partial Derivatives: We take a small detour to develop some machinery before returning to
differentiability. We have seen that it can be very difficult to capture how a limit approaches a
point in multiple dimensions, precisely because there are infinitely many possible ways to approach
a point. The same argument works for differentiability: There is no obvious way of writing down
the rate of change of a function in infinitely many directions simultaneously.
However, we know from linear algebra that we do not need to be able to describe every vector in Rn, only a finite subset of basis vectors, from which every other vector can be built through a linear combination. We will apply this idea here, and determine the rate of change of the function f in each of the standard unit vector directions.
Definition 2.10

Write (x1, . . . , xn) to denote the coordinates of Rn. If f : Rn → R, we define the partial derivative of f with respect to xi at a = (a1, . . . , an) ∈ Rn as

∂f/∂xi (a) = lim_{h→0} (f(a1, . . . , ai + h, . . . , an) − f(a1, . . . , an))/h.

That is, ∂f/∂xi is the one-variable derivative of f(x1, . . . , xn) with respect to xi, where all other variables are held constant.
Example 2.11

Compute the partial derivatives of f(x, y, z) = xy + sin(x²z) + e^y/z².

Solution. Remember that when computing the partial derivative with respect to xi, we treat all other variables as constants. Hence

∂f/∂x = y + 2xz cos(x²z),
∂f/∂y = x + e^y/z²,
∂f/∂z = x² cos(x²z) − 2e^y/z³.
It can be quite cumbersome to write ∂f/∂xi, so we will often interchange it with any of the following when it is unambiguous:

∂f/∂xi,   ∂xi f,   ∂i f,   fxi,   fi.

This will be particularly convenient when we start taking higher order partial derivatives.
Recall that in Example 2.7 we showed that if f(x, y) = x² + y², then ∇f(x, y) = (2x, 2y) = (∂x f, ∂y f). Is this just a coincidence, or does it hold more generally?
Theorem 2.12

If f : Rn → R is differentiable at a ∈ Rn, then all of the partial derivatives of f exist at a, and moreover ∇f(a) = (∂x1 f(a), . . . , ∂xn f(a)).
Proof. This is actually a fairly natural result. If differentiability means that the limit exists from every direction, and partial derivatives only capture information about a single direction, it seems natural that one would imply the other.

More directly, let ei = (0, . . . , 0, 1, 0, . . . , 0) be the standard unit vector with a 1 in the i-th position. We are going to use our knowledge that the function is differentiable and approach along the i-th coordinate axis. Indeed, let h = hei and ∇f(a) = (c1, . . . , cn), so that

0 = lim_{h→0} (f(a + h) − f(a) − ∇f(a) · h)/‖h‖
  = lim_{h→0} [ (f(a1, . . . , ai + h, . . . , an) − f(a1, . . . , an))/h − ci ].

[Note: The final equation above is always true, but how we arrive at it depends on the sign of h, since ‖h‖ = |h|. Convince yourself of this!] Re-arranging gives ∂f/∂xi (a) = ci. Since this holds for arbitrary i, we conclude that

∇f(a) = (∂x1 f, . . . , ∂xn f).
It is important to note, however, that the converse of this theorem is not true; that is, it is possible for the partial derivatives to exist but for the function to not be differentiable. Indeed, it is precisely because the partials only measure the behaviour in finitely many directions that the converse direction does not hold. Consider

f(x, y) = { xy/(x² + y²)   if (x, y) ≠ (0, 0)                (2.2)
          { 0              if (x, y) = (0, 0).
We know that this function is not continuous at (0, 0) (for example, approach along the line y = mx), and so it has no chance of being differentiable at (0, 0). Nonetheless, the partial derivatives exist at (0, 0), since

∂f/∂x (0, 0) = lim_{h→0} (f(h, 0) − f(0, 0))/h = lim_{h→0} 0/h = 0,

and similarly ∂f/∂y (0, 0) = 0.
Theorem 2.13

Let f : Rn → R and a ∈ Rn. If the partial derivatives ∂i f(x) all exist and are continuous in an open neighbourhood of a, then f is differentiable at a.
The proof of this theorem is very slick, but not terribly enlightening. As such, we omit it and leave it as an exercise for the student (the proof may be found in Folland). Once again let f be the function in (2.2). Notice that its partial derivatives are given by

∂f/∂x = (y³ − x²y)/(x² + y²)²,   ∂f/∂y = (x³ − xy²)/(x² + y²)²

and these functions do not have limits as (x, y) → (0, 0) (for example, ∂f/∂x is identically 0 along the line y = x, but equals 1/y along the y-axis). Hence the partial derivatives are not continuous, and Theorem 2.13 does not apply.
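The discontinuity of the partials at the origin is easy to see numerically: sampling ∂f/∂x near (0, 0) along two different paths gives wildly different values, so no limit can exist.

```python
def fx(x, y):
    # the partial derivative df/dx of f(x,y) = xy/(x^2 + y^2), valid away from the origin
    return (y**3 - x**2 * y) / (x**2 + y**2)**2

# along the line y = x the partial is identically zero ...
print(fx(1e-3, 1e-3))   # approximately 0
# ... but along the y-axis it equals 1/y, which blows up near the origin,
# so fx has no limit at (0, 0) and cannot be continuous there
print(fx(0.0, 1e-3))    # approximately 1000
```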
Definition 2.14

We define the collection of C¹ functions on Rn to be

C¹(Rn, R) = { f : Rn → R : ∂i f exists and is continuous for i = 1, . . . , n }.

That is, a function f is C¹ if all of its partial derivatives exist and are continuous.
All C¹ functions are automatically differentiable by Theorem 2.13; however, there are differentiable functions which are not C¹. For example, the function

f(x, y) = { (x² + y²) sin(1/(x² + y²))   if (x, y) ≠ (0, 0)          (2.3)
          { 0                            if (x, y) = (0, 0)

is everywhere differentiable, but its partial derivatives are not continuous at (0, 0).

Exercise: Show that (2.3) is differentiable but that its partial derivatives are not continuous at (0, 0).
We have presented a lot of theorems and counter-examples, so let's take a moment to summarize what we have said:
Directional Derivatives: Partial derivatives gave us the ability to determine how a function changes along the coordinate axes, but what if we want to see how the function is changing along other vectors? This is done via directional derivatives:
Definition 2.15

Let f : Rn → R and a ∈ Rn. If u ∈ Rn is a unit vector (‖u‖ = 1), then the directional derivative of f in the direction u at a is

Du f(a) = lim_{t→0} (f(a + tu) − f(a))/t = d/dt f(a + tu) |_{t=0}.
This represents an idea that is prevalent throughout mathematics, and especially the field of differential geometry. First of all, notice that γ : R → Rn given by γ(t) = a + tu is the straight line through a in the direction of u, and hence is a curve. By composing with f, we get g = f ∘ γ : R → R, which is just a normal one-variable function, and hence can be differentiated as usual. We know that γ′(t) = u is the velocity vector of the curve, so to see how the function behaves in the direction u, we look at how f behaves along γ in a neighbourhood of our point a, and differentiate at t = 0 to get the behaviour in this direction.
Example 2.16

Using the definition, compute the directional derivative of f(x, y) = sin(xy) + e^x at the point a = (0, 0) in the direction u = (1/√5, 2/√5).
Theorem 2.17

If f : Rn → R is differentiable at a ∈ Rn, then for every unit vector u the directional derivative Du f(a) exists, and moreover Du f(a) = ∇f(a) · u.
Proof. The idea is almost exactly the same as Theorem 2.12. We will approach along the line a + tu and use differentiability to conclude that the limit exists. As such, let h = tu for t ∈ R, so that

0 = lim_{t→0} (f(a + tu) − f(a) − ∇f(a) · (tu))/‖tu‖
  = lim_{t→0} (f(a + tu) − f(a) − t∇f(a) · u)/t
  = lim_{t→0} (f(a + tu) − f(a))/t − ∇f(a) · u,

where we have used ‖tu‖ = |t| (the sign of t is handled exactly as in Theorem 2.12).
Example 2.18
Verify the result from Example 2.16 by using the above theorem.
Solution. Our function f(x, y) = sin(xy) + e^x is clearly differentiable, as its partial derivatives exist and are continuous:

∂f/∂x = y cos(xy) + e^x,   ∂f/∂y = x cos(xy).
At the point a = (0, 0) the gradient is ∇f(0, 0) = (1, 0), and so

Du f(0, 0) = ∇f(0, 0) · u = (1, 0) · (1/√5, 2/√5) = 1/√5.
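The agreement between the limit definition and the gradient formula can be checked numerically: the difference quotient below approximates Du f(0, 0) directly from the definition and compares it to 1/√5.

```python
import math

def f(x, y):
    return math.sin(x*y) + math.exp(x)

u = (1/math.sqrt(5), 2/math.sqrt(5))   # unit vector from the example
a = (0.0, 0.0)

def dir_deriv(g, a, u, t=1e-6):
    # symmetric difference quotient along the line a + t u
    return (g(a[0] + t*u[0], a[1] + t*u[1])
            - g(a[0] - t*u[0], a[1] - t*u[1])) / (2*t)

# gradient formula gives grad f(0,0) . u = 1/sqrt(5)
print(abs(dir_deriv(f, a, u) - 1/math.sqrt(5)) < 1e-8)  # True
```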
Exercise: Show that the converse of Theorem 2.17 is false; that is, there is a function in
which every directional derivative exists at a point, but the function is not differentiable.
2.1.4 Functions Rn → Rm
Our motivation for defining the derivative thus far has been that a function is differentiable if it can be approximated by a linear function, with an error that tends to zero faster than linearly. In such instances, that linear approximation is what we call the derivative. The same story will hold for functions f : Rn → Rm.
So what does it mean for a function L : Rn → Rm to be linear? Here, the word linear has the same interpretation as it does in linear algebra; that is, for every x, y ∈ Rn and every scalar c ∈ R,

L(x + y) = L(x) + L(y)   and   L(cx) = cL(x).
The student is hopefully familiar with the fact that such maps can be represented by matrices with respect to some basis. In particular, if L : Rn → Rm, then L must take an n-vector to an m-vector, and thus must be represented by an m × n matrix, say A. In this basis, we can write L(x) = Ax. Thus we would like to say something along the lines of: a function f : Rn → Rm is differentiable at a ∈ Rn if there exists a matrix A such that

f(a + h) = f(a) + Ah + error(h).
For this approximation to do a good job, the error should tend to zero faster than linearly, leading
us to the following definition:
Definition 2.19

A function f : Rn → Rm is differentiable at the point a ∈ Rn if there exists an m × n matrix A such that

lim_{h→0} ‖f(a + h) − f(a) − Ah‖_{Rm} / ‖h‖_{Rn} = 0.    (2.4)

We often denote the quantity A by Df(a), referred to as the Jacobian matrix.
Example 2.20

We will have to proceed by the Squeeze Theorem. Taking the entire difference quotient into consideration, we have

0 ≤ ‖f(a + h) − f(a) − Ah‖_{Rm} / ‖h‖_{Rn} = |h1| √(h1² + h3²) / √(h1² + h2² + h3²) ≤ |h1| √(h1² + h3²) / √(h1² + h3²) = |h1|.

As both the upper and lower bounds tend to 0 as h → 0, we conclude that f is differentiable, as required.
We would like to find a much better way of determining Df(a) than using the limit definition. If f is differentiable, then

lim_{h→0} ‖f(a + h) − f(a) − Df(a)h‖ / ‖h‖ = 0.

Furthermore, the norm of a vector tends to zero if and only if each of its components also tends to zero. Write [Df(a)]i for the i-th row of Df(a) and let f(x) = (f1(x), . . . , fm(x)). Then (2.4) is equivalent to the statement that for each i = 1, . . . , m we have

lim_{h→0} |fi(a + h) − fi(a) − [Df(a)]i · h| / ‖h‖ = 0.

Notice that this is exactly the definition of the gradient, and so [Df(a)]i = ∇fi(a). We thus get the following result for free:
Proposition 2.21

If f : Rn → Rm is differentiable at a, with component functions f = (f1, . . . , fm), then Df(a) is the m × n matrix whose i-th row is ∇fi(a); that is, [Df(a)]ij = ∂fi/∂xj (a).
Example 2.22

Compute the Jacobian matrix of the function f(r, θ) = (r sin(θ), r cos(θ)).

Solution. By definition, the derivative is the matrix of partial derivatives, so we can compute this to be

Df(r, θ) = [ sin(θ)   r cos(θ) ]
           [ cos(θ)  −r sin(θ) ].
Example 2.23
2.2 The Chain Rule
The proof of the Chain Rule in even a single dimension is tricky. The addition of multiple dimensions only serves to make the proof messy, so we will omit it. Here it now becomes important to make the distinction between which objects are treated as rows and which are treated as columns. If f : Rn → Rm, then Df(a) should reduce to a gradient when m = 1, and should be a curve derivative when n = 1. In particular, this implies that the gradient of a function Rn → R is a row vector, while the derivative of a function R → Rn is a column vector.
There are a few notable cases that we should take into account. Let g : R → Rn and f : Rn → R, so that f ∘ g : R → R. By the Chain Rule, we must then have

d/dt (f ∘ g)(t) = ∇f(g(t)) · g′(t)
               = ∂f/∂x1 |_{g(t)} g1′(t) + · · · + ∂f/∂xn |_{g(t)} gn′(t).

Using Leibniz notation, let y = f(x) and set (x1, . . . , xn) = g(t) = (g1(t), . . . , gn(t)), so that gi′(t) = dxi/dt. Our derivative now becomes

d/dt (f ∘ g) = ∂y/∂x1 dx1/dt + · · · + ∂y/∂xn dxn/dt.
Once again, it seems as though the derivatives are cancelling, though this is not the case.
Example 2.25
Let g(t) = (sin(t), cos(t), t²) and f(x, y, z) = x² + y² + xyz. Determine the derivative of f ∘ g.
Solution. One does not need to use the Chain Rule here, since we can explicitly write

f(g(t)) = f(sin(t), cos(t), t²) = sin²(t) + cos²(t) + t² sin(t) cos(t) = 1 + t² sin(t) cos(t),

whose derivative is 2t sin(t) cos(t) + t²(cos²(t) − sin²(t)). Alternatively, by the Chain Rule,

∇f(g(t)) · g′(t) = (2 sin(t) + t² cos(t), 2 cos(t) + t² sin(t), sin(t) cos(t)) · (cos(t), −sin(t), 2t)
                 = 2 sin(t) cos(t) + t² cos²(t) − 2 cos(t) sin(t) − t² sin²(t) + 2t sin(t) cos(t)
                 = 2t sin(t) cos(t) + t² cos²(t) − t² sin²(t).
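Both computations can be compared numerically at an arbitrary sample point: the chain-rule dot product and the closed form found above agree to within rounding error.

```python
import math

def grad_f(x, y, z):
    # gradient of f(x,y,z) = x^2 + y^2 + xyz
    return (2*x + y*z, 2*y + x*z, x*y)

def g(t):
    return (math.sin(t), math.cos(t), t**2)

def dg(t):
    return (math.cos(t), -math.sin(t), 2*t)

t0 = 0.9
# chain rule: grad f(g(t)) . g'(t)
chain = sum(a*b for a, b in zip(grad_f(*g(t0)), dg(t0)))
# the closed form found in the text
closed = 2*t0*math.sin(t0)*math.cos(t0) + t0**2*(math.cos(t0)**2 - math.sin(t0)**2)
print(abs(chain - closed) < 1e-9)  # True
```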
More generally, if g : Rn → Rm and f : Rm → R, the Chain Rule reads

D(f ∘ g)(x) = ∇f(g(x)) Dg(x)
            = ( ∂f/∂x1, . . . , ∂f/∂xm ) [ ∂g1/∂t1  ∂g1/∂t2  · · ·  ∂g1/∂tn ]
                                         [ ∂g2/∂t1  ∂g2/∂t2  · · ·  ∂g2/∂tn ]
                                         [    ⋮        ⋮      ⋱        ⋮    ]
                                         [ ∂gm/∂t1  ∂gm/∂t2  · · ·  ∂gm/∂tn ].

Componentwise, with y = f(x) and (x1, . . . , xm) = g(t1, . . . , tn), this says

∂(f ∘ g)/∂ti = ∂y/∂x1 ∂x1/∂ti + ∂y/∂x2 ∂x2/∂ti + · · · + ∂y/∂xm ∂xm/∂ti.
Example 2.26

Let g(t1, t2) = (t1² t2, t1 t2²) and f(x, y) = x + e^y. Compute ∇(f ∘ g).

Solution. Here (f ∘ g)(t1, t2) = t1² t2 + e^{t1 t2²}, and so

∇(f ∘ g)(t1, t2) = ( 2t1 t2 + t2² e^{t1 t2²}, t1² + 2t1 t2 e^{t1 t2²} ).
Similarly, if g : R → Rn and f : Rn → Rm, then f ∘ g : R → Rm is a curve, and

d/dt (f ∘ g)(t) = Df(g(t)) g′(t).
Example 2.27

Let f(x, y) = (xy, x + y, x − y) and g(t) = (t, t²). Compute d/dt (f ∘ g)(t).
Solution. Computing directly,

(f ∘ g)(t) = (t³, t + t², t − t²)ᵀ

and so

d/dt (f ∘ g)(t) = (3t², 1 + 2t, 1 − 2t)ᵀ.

On the other hand,

Df(x, y) = [ y   x ]
           [ 1   1 ]        and   g′(t) = (1, 2t)ᵀ,
           [ 1  −1 ]

so by the Chain Rule

d/dt (f ∘ g)(t) = Df(g(t)) g′(t) = [ t²   t ] [ 1  ]   [ 3t²    ]
                                   [ 1    1 ] [ 2t ] = [ 1 + 2t ]
                                   [ 1   −1 ]          [ 1 − 2t ].
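The matrix form of the Chain Rule is easy to verify numerically for this example: multiplying the 3 × 2 Jacobian of f, evaluated at g(t), by the column vector g′(t) reproduces the derivative computed directly.

```python
def Df(x, y):
    # 3x2 Jacobian of f(x,y) = (xy, x+y, x-y); rows are the gradients of the components
    return [[y, x], [1, 1], [1, -1]]

def dg(t):
    # g'(t) for g(t) = (t, t^2)
    return [1, 2*t]

def matvec(A, v):
    return [sum(a*b for a, b in zip(row, v)) for row in A]

t0 = 2.0
chain = matvec(Df(t0, t0**2), dg(t0))        # Df(g(t)) g'(t)
direct = [3*t0**2, 1 + 2*t0, 1 - 2*t0]       # derivative of (t^3, t+t^2, t-t^2)
print(chain == direct)  # True
```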
Exercise: In Example 2.28 we used the Chain Rule without explicitly computing the map
f g. Write down the map f g, compute its derivative, and verify the result of Example
2.28.
(a) The function f (x, y) = (x + y, y 2 ) acts on an orthogonal grid in the way pictured.
(b) The function f (x, y) = (x cos(y), x sin(y)) acts on an orthogonal grid in the way pictured.
Figure 21: One can visualize maps Rn → Rm by how they map orthogonal grids.
output (elements of Rm ). If we are lucky and m = n, we can try to visualize how such functions
work by looking at how orthogonal grids transform (see Figure 21).
So what should derivatives do in this regime? The idea is roughly as follows: given a point a ∈ Rn and an infinitesimal change in a direction u, we want to characterize how our function transforms that infinitesimal change. Alternatively, pretend that we are driving a car in Rn and our path is described by the curve γ : R → Rn satisfying

γ(0) = a,   γ′(0) = u;

that is, we pass through the point a at time t = 0, and there we have a certain velocity vector u. Now let f : Rn → Rm be a differentiable (hence continuous) function. The composition f ∘ γ : R → Rm is a path in Rm, and so (f ∘ γ)′(0) = v describes the velocity vector at the point (f ∘ γ)(0) = f(a). By the chain rule, we know that

v = (f ∘ γ)′(0) = Df(γ(0)) γ′(0) = Df(a) u;

namely, Df(a) describes how our velocity vector u transforms into the velocity vector v. In fact, this holds regardless of the choice of curve through a, and so

Df(a) describes how velocity vectors through a transform into velocity vectors through f(a).
Change in scale: The quantity Df(a) represents how velocity vectors transform at the point a. If f : Rn → Rn, then Df(a) is actually a square matrix. A result that the student may be familiar with is that given a linear transformation A : Rn → Rn and a set S,

Area(A(S)) = |det(A)| Area(S).

Of course, we have not been very careful about what the word area means, but this is something we will fix in a later section. Thus Df(a) can tell us information about how infinitesimal volumes change near a, and leads to the following:
Definition 2.29

If f : Rn → Rn is differentiable at a, then we define the Jacobian (determinant) of f to be det Df(a).
The Jacobian will appear a great deal in later sections, but we will not have too much occasion
to use it now. The idea is that the Jacobian describes infinitesimally how areas change under the
map f .
Example 2.30

Determine the Jacobian determinants of the maps f(r, θ) = (r cos(θ), r sin(θ)) and g(x, y) = (x + y, y²).
Solution. These are the maps plotted in Figure 21, and it is a straightforward exercise to compute the Jacobian matrices to be

Df(r, θ) = [ cos(θ)  −r sin(θ) ]        Dg(x, y) = [ 1   1  ]
           [ sin(θ)   r cos(θ) ]                   [ 0   2y ].

Thus taking determinants, we get the Jacobian determinants

det Df(r, θ) = r,   det Dg(x, y) = 2y.
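A quick numerical check of the two Jacobian determinants, using the matrices computed above:

```python
import math

def jacobian_det_f(r, th):
    # Df(r,th) = [[cos th, -r sin th], [sin th, r cos th]] for f = (r cos th, r sin th)
    a, b = math.cos(th), -r*math.sin(th)
    c, d = math.sin(th), r*math.cos(th)
    return a*d - b*c          # should equal r, since cos^2 + sin^2 = 1

def jacobian_det_g(x, y):
    # Dg(x,y) = [[1, 1], [0, 2y]] for g = (x + y, y^2)
    return 1*(2*y) - 1*0      # should equal 2y

print(abs(jacobian_det_f(2.0, 0.7) - 2.0) < 1e-12)  # True
print(jacobian_det_g(5.0, 3.0))                      # 6.0
```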
2.3 The Mean Value Theorem

The Mean Value Theorem is one of the most interesting theorems in mathematics. It appears relatively innocuous at first sight, but leads to a plethora of powerful results. In this section we will take a brief moment to examine whether the MVT generalizes to multiple dimensions, and if so, what form that generalization takes.

To begin with, we recall the statement of the Mean Value Theorem:
Theorem 2.31: Mean Value Theorem

If f : [a, b] → R is continuous on [a, b] and differentiable on (a, b), then there exists c ∈ (a, b) such that

f(b) − f(a) = f′(c)(b − a).    (2.5)
One can apply the MVT to prove several important results, such as the following:
Figure 22: The Mean Value Theorem says that there is a point c on this graph at which the tangent line has the same slope as the secant between (a, f(a)) and (b, f(b)).
1. If f : [a, b] → R is differentiable with bounded derivative, say |f′(x)| ≤ M for all x ∈ [a, b], then |f(y) − f(x)| ≤ M|y − x| for all x, y ∈ [a, b].

2. If f′(x) ≡ 0 for all x ∈ [a, b], then f is constant on [a, b].

3. If f′(x) > 0 for all x ∈ [a, b], then f is an increasing (and hence injective) function.
This is but a short collection of useful theorems; naturally, there are many more.
As a first look at whether or not the MVT generalizes, we should consider functions of the type f : R → Rn. If one were to guess what kind of statement a mean value theorem might take here, it would probably be something of the form: for suitable f : [a, b] → Rn there exists c ∈ (a, b) such that

f(b) − f(a) = f′(c)(b − a).

One should check that the equality sign above even makes sense. The left-hand side consists of a vector in Rn, while the right-hand side consists of multiplying a scalar (b − a) with a vector f′(c). Okay, so the result does make sense. However, applying this to even simple functions immediately results in nonsense.
For example, consider the function f : [0, 2π] → R² given by f(t) = (cos(t), sin(t)). This certainly satisfies our hypotheses, as it is everywhere continuous and everywhere differentiable. On the other hand, f(0) = (1, 0) and f(2π) = (1, 0), so that f(2π) − f(0) = (0, 0). However, this would then imply that there exists a c such that

(0, 0) = f′(c)(2π − 0) = 2π(−sin(c), cos(c)),

and this is impossible, since there is no point at which both sin(t) and cos(t) are zero.
There is a way to fix this, but we are not interested in how to do this at the moment.
Theorem 2.32: Mean Value Theorem in Rn

Let U ⊆ Rn and let a, b ∈ U be such that the straight line connecting them lives entirely within U; more precisely, the curve γ : [0, 1] → Rn given by γ(t) = (1 − t)a + tb satisfies γ(t) ∈ U for all t ∈ [0, 1]. If f : U → R is a function such that f ∘ γ is continuous on [0, 1] and differentiable on (0, 1), then there exists a t0 ∈ (0, 1) such that, with c = γ(t0),

f(b) − f(a) = ∇f(c) · (b − a).
Proof. The idea is that we use the chain rule to reduce this multivariate function to a real-valued function of a single variable. Thinking of the line γ(t) = a(1 − t) + tb as a copy of the interval [0, 1] inside of U, restricting f to this line gives a function f ∘ γ : [0, 1] → R to which we can apply the original MVT.

More formally, we know that f ∘ γ : [0, 1] → R is continuous on [0, 1] and differentiable on (0, 1), so by the Mean Value Theorem there exists t0 ∈ (0, 1) such that

(f ∘ γ)(1) − (f ∘ γ)(0) = (f ∘ γ)′(t0).

Now (f ∘ γ)(1) = f(γ(1)) = f(b) and (f ∘ γ)(0) = f(γ(0)) = f(a). In addition, the Chain Rule tells us that

(f ∘ γ)′(t0) = ∇f(γ(t0)) · γ′(t0) = ∇f(c) · (b − a).

Combining everything together, we get f(b) − f(a) = ∇f(c) · (b − a), as required.
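As a numerical illustration, take the hypothetical choices f(x, y) = x² + y², a = (0, 0) and b = (1, 2) (these do not appear in the text). The theorem promises some t0 ∈ (0, 1) with f(b) − f(a) = ∇f(γ(t0)) · (b − a); here the gradient term is linear in t, and the mean value point turns out to be t0 = 1/2.

```python
def f(x, y):
    return x**2 + y**2

a, b = (0.0, 0.0), (1.0, 2.0)

def grad_dot(t):
    # grad f(gamma(t)) . (b - a), where gamma(t) = (1-t)a + t b
    x = (1 - t)*a[0] + t*b[0]
    y = (1 - t)*a[1] + t*b[1]
    return 2*x*(b[0] - a[0]) + 2*y*(b[1] - a[1])

target = f(*b) - f(*a)            # f(b) - f(a) = 5
# grad_dot(t) = 10 t for these choices, so t0 = 1/2 and c = (1/2, 1)
print(abs(grad_dot(0.5) - target) < 1e-12)  # True
```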
Important to the statement of the Mean Value Theorem is the fact that the line segment
connecting a and b lives entirely within U . Conveniently, we have already seen that convex sets
satisfy this property for any pair of points within the set.
Corollary 2.33

If U ⊆ Rn is convex and f : U → R is differentiable with ‖∇f(x)‖ ≤ M for all x ∈ U, then |f(y) − f(x)| ≤ M‖y − x‖ for all x, y ∈ U.
Corollary 2.34

If U ⊆ Rn is convex and f : U → R is differentiable with ∇f(x) = 0 for all x ∈ U, then f is constant on U.
Exercise: The proofs of Corollaries 2.33 and 2.34 are almost identical to their single variable
equivalents. Prove these theorems.
2.4 Higher Order Partials
For differentiable functions of the type f : R → R, a lot of information about f could be derived not only from its first derivative f′, but from its higher order derivatives f⁽ⁿ⁾. For example, if f represents some physical quantity such as position as a function of time, we know that f′ is its velocity, f″ is its acceleration, and f⁽³⁾ is its jerk. This means that the higher-order derivatives are essential when modelling differential equations. We used an infinite number of derivatives when computing Taylor series, and we exploited the second derivative test to determine optimality of critical points. All of these applications and more will extend to functions f : Rn → R.
The first step is second-order derivatives; that is, to differentiate a function twice. Interestingly though, we now have many different ways of computing a second derivative. For example, if f : R² → R then there are four possible second derivatives:

∂xx f = ∂/∂x (∂f/∂x),   ∂xy f = ∂/∂x (∂f/∂y),   ∂yx f = ∂/∂y (∂f/∂x),   ∂yy f = ∂/∂y (∂f/∂y).

The terms ∂xx f, ∂yy f are called pure partial derivatives, while ∂xy f, ∂yx f are called mixed partial derivatives. In general, given a function f : Rn → R, there are n² different second-order partial derivatives.
Example 2.35

Determine the second-order partial derivatives of the function f(x, y) = e^{xy} + x² sin(y).

Solution. This is a matter of straightforward computation. The first order partial derivatives are given by

∂f/∂x = y e^{xy} + 2x sin(y),   ∂f/∂y = x e^{xy} + x² cos(y).

To compute the second order partials, we treat each of the first order partials as functions of x and y and repeat the process:

∂xx f = y² e^{xy} + 2 sin(y),   ∂xy f = ∂yx f = (1 + xy) e^{xy} + 2x cos(y),   ∂yy f = x² e^{xy} − x² sin(y).
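The equality of the mixed partials can be double-checked numerically with a second-order central difference; the hand-computed mixed partial below is ∂/∂y of (y e^{xy} + 2x sin y).

```python
import math

def f(x, y):
    return math.exp(x*y) + x**2 * math.sin(y)

def mixed(g, x, y, h=1e-4):
    # central-difference approximation to the mixed partial of g at (x, y)
    return (g(x+h, y+h) - g(x+h, y-h) - g(x-h, y+h) + g(x-h, y-h)) / (4*h*h)

def fxy(x, y):
    # the mixed partial computed by hand: (1 + xy) e^{xy} + 2x cos(y)
    return math.exp(x*y) * (1 + x*y) + 2*x*math.cos(y)

x0, y0 = 0.3, -0.8
print(abs(mixed(f, x0, y0) - fxy(x0, y0)) < 1e-5)  # True
```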
Example 2.36
The fact that ∂xx f = ∂yy f in Example 2.36 is a consequence of the symmetry of the function f(x, y) = e^{cos(xy)}. However, somewhat more surprising is that in both of the previous two examples our mixed partial derivatives were the same. It turns out that this is a fairly common occurrence.
Theorem 2.37: Clairaut's Theorem

If f : Rn → R is such that the mixed partial derivatives ∂i ∂j f and ∂j ∂i f exist and are continuous on an open set U ⊆ Rn, then ∂i ∂j f = ∂j ∂i f on U.
This is a technical theorem, and to present a readable version of this proof will require some sort
of sophistry (either making an argument about the ability to interchange limits, or an argument
about the existence of points in the Mean Value Theorem). In either case, we encourage the student
to think hard about this theorem, but to not worry about the proof. To make our lives a little bit
easier, we introduce the following class of functions:
Definition 2.38
Let $U \subseteq \mathbb{R}^n$ be an open set. We define $C^2(U, \mathbb{R})$ to be the collection of $f : \mathbb{R}^n \to \mathbb{R}$ whose
second partial derivatives exist and are continuous at every point in $U$.

If $f$ is a $C^2$ function, Clairaut's theorem immediately implies that its mixed partial derivatives
exist, are continuous, and hence are equal.
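The equality of mixed partials is easy to probe numerically. The following sketch (our own, not part of the notes; the helper names are ours) differentiates the analytic first partials of the function from Example 2.35 by central differences and checks that the two mixed partials agree at a sample point:

```python
import math

# f from Example 2.35, together with its analytic first partials
f  = lambda x, y: math.exp(x * y) + x**2 * math.sin(y)
fx = lambda x, y: y * math.exp(x * y) + 2 * x * math.sin(y)
fy = lambda x, y: x * math.exp(x * y) + x**2 * math.cos(y)

def d_dy(g, x, y, h=1e-5):
    """Central-difference derivative of g with respect to y."""
    return (g(x, y + h) - g(x, y - h)) / (2 * h)

def d_dx(g, x, y, h=1e-5):
    """Central-difference derivative of g with respect to x."""
    return (g(x + h, y) - g(x - h, y)) / (2 * h)

# d_yx f differentiates fx by y; d_xy f differentiates fy by x
a = d_dy(fx, 0.7, -0.3)
b = d_dx(fy, 0.7, -0.3)
print(abs(a - b) < 1e-6)
```

The two numbers agree to well within the finite-difference error, exactly as Clairaut's theorem predicts for this $C^2$ function.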
Despite having constantly and consistently cautioned against treating differentials as fractions,
there have not been too many instances to date where ignoring this advice could have caused any
damage. Here at last our efforts will be vindicated, as we show the student some of the deeper
subtleties in using higher-order partial derivatives in conjunction with the chain rule.
Let's start with a simple but general example. To make a point, we will write all partial
derivatives using Leibniz notation. Let $u = f(x, y)$ and suppose that both $x, y$ are functions of
$(s, t)$; that is, $x(s, t)$ and $y(s, t)$. Let's say that we wish to compute $\frac{\partial^2 u}{\partial s^2}$. Using the chain rule, we
can find the first order partial as
$$\frac{\partial u}{\partial s} = \frac{\partial u}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial u}{\partial y}\frac{\partial y}{\partial s}.$$
Differentiating once more,
$$\frac{\partial^2 u}{\partial s^2} = \frac{\partial}{\partial s}\frac{\partial u}{\partial s} = \frac{\partial}{\partial s}\left(\frac{\partial u}{\partial x}\frac{\partial x}{\partial s}\right) + \frac{\partial}{\partial s}\left(\frac{\partial u}{\partial y}\frac{\partial y}{\partial s}\right).$$
Taking the first term,
\begin{align*}
\frac{\partial}{\partial s}\left(\frac{\partial u}{\partial x}\frac{\partial x}{\partial s}\right)
&= \left[\frac{\partial}{\partial s}\frac{\partial u}{\partial x}\right]\frac{\partial x}{\partial s} + \frac{\partial u}{\partial x}\frac{\partial^2 x}{\partial s^2} && \text{product rule} \\
&= \left[\frac{\partial^2 u}{\partial x^2}\frac{\partial x}{\partial s} + \frac{\partial^2 u}{\partial x \partial y}\frac{\partial y}{\partial s}\right]\frac{\partial x}{\partial s} + \frac{\partial u}{\partial x}\frac{\partial^2 x}{\partial s^2} && \text{chain rule} \\
&= \frac{\partial^2 u}{\partial x^2}\left(\frac{\partial x}{\partial s}\right)^2 + \frac{\partial^2 u}{\partial x \partial y}\frac{\partial y}{\partial s}\frac{\partial x}{\partial s} + \frac{\partial u}{\partial x}\frac{\partial^2 x}{\partial s^2}.
\end{align*}
An identical computation applied to the second term gives
$$\frac{\partial}{\partial s}\left(\frac{\partial u}{\partial y}\frac{\partial y}{\partial s}\right) = \frac{\partial^2 u}{\partial y^2}\left(\frac{\partial y}{\partial s}\right)^2 + \frac{\partial^2 u}{\partial x \partial y}\frac{\partial y}{\partial s}\frac{\partial x}{\partial s} + \frac{\partial u}{\partial y}\frac{\partial^2 y}{\partial s^2}.$$
Adding the two terms together, we conclude that
$$\frac{\partial^2 u}{\partial s^2} = \frac{\partial^2 u}{\partial x^2}\left(\frac{\partial x}{\partial s}\right)^2 + \frac{\partial^2 u}{\partial y^2}\left(\frac{\partial y}{\partial s}\right)^2 + 2\frac{\partial^2 u}{\partial x \partial y}\frac{\partial y}{\partial s}\frac{\partial x}{\partial s} + \frac{\partial u}{\partial x}\frac{\partial^2 x}{\partial s^2} + \frac{\partial u}{\partial y}\frac{\partial^2 y}{\partial s^2}. \tag{2.6}$$
This is only a single partial derivative. The same procedure must also be used to compute the remaining second-order partials $\frac{\partial^2 u}{\partial s \partial t}$ and $\frac{\partial^2 u}{\partial t^2}$. These are left as exercises for the student.
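Equation (2.6) can be checked numerically. The snippet below (a sketch of ours, not part of the notes) uses the hypothetical choices $f(x, y) = x^2 y$, $x(s, t) = s + t$, $y(s, t) = st$, and compares a direct second difference of $u(s, t) = f(x(s,t), y(s,t))$ against the right-hand side of (2.6):

```python
# Hand-computed partials of f(x, y) = x^2 * y
f   = lambda x, y: x**2 * y
fx  = lambda x, y: 2 * x * y
fy  = lambda x, y: x**2
fxx = lambda x, y: 2 * y
fxy = lambda x, y: 2 * x
fyy = lambda x, y: 0.0

def rhs(s, t):
    """Right-hand side of equation (2.6) for x = s + t, y = s * t."""
    x, y = s + t, s * t
    xs, ys = 1.0, t        # dx/ds and dy/ds
    xss, yss = 0.0, 0.0    # second s-derivatives of x and y both vanish
    return (fxx(x, y) * xs**2 + fyy(x, y) * ys**2
            + 2 * fxy(x, y) * xs * ys
            + fx(x, y) * xss + fy(x, y) * yss)

def lhs(s, t, h=1e-4):
    """d^2u/ds^2 by a central second difference on u(s, t) = f(x, y)."""
    u = lambda ss: f(ss + t, ss * t)
    return (u(s + h) - 2 * u(s) + u(s - h)) / h**2

print(abs(lhs(1.5, 0.5) - rhs(1.5, 0.5)) < 1e-5)
```

Both sides agree, which is a useful guard against dropping one of the five terms in (2.6).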
Exercise: Hurt your brain a little bit more! Let $u = f(x, y, s)$ and $x(s, t)$ and $y(s, t)$. Now
determine $\partial_{ss} u$.
We have limited our discussion to just second-order partial derivatives, in hopes that this simplest
of cases would serve as a gentle introduction. Even in this case though, Equation (2.6) shows
that things can get unpleasant very quickly. We begin by generalizing Clairaut's theorem to higher
dimensions.
Definition 2.39
If $U \subseteq \mathbb{R}^n$ is an open set, then for $k \in \mathbb{N}$ we define $C^k(U, \mathbb{R})$ to be the collection of functions
$f : \mathbb{R}^n \to \mathbb{R}$ such that the $k$-th order partial derivatives of $f$ all exist and are continuous on
$U$. If the partials exist and are continuous for all $k$, we say that $f$ is of type $C^\infty(U, \mathbb{R})$.
If $f : U \subseteq \mathbb{R}^n \to \mathbb{R}$ is of type $C^k$, then
$$\partial_{i_1, \ldots, i_k} f = \partial_{j_1, \ldots, j_k} f$$
whenever $(j_1, \ldots, j_k)$ is a permutation of $(i_1, \ldots, i_k)$.

Notice that
$$C^k(U, \mathbb{R}) \subseteq C^{k-1}(U, \mathbb{R}) \subseteq C^{k-2}(U, \mathbb{R}) \subseteq \cdots \subseteq C^1(U, \mathbb{R}).$$
So in particular, if $f$ is of type $C^k$, then we know that the mixed partials all agree up to and
including order $k$.
Now let's make sure that we understand what Clairaut's theorem is saying. For example, if
$f : \mathbb{R}^3 \to \mathbb{R}$ is of type $C^4$, then the theorem does not say that all the fourth order derivatives are
the same (there are $3^4 = 81$ fourth order derivatives). Rather, the theorem says the partial derivatives
of the same type are equivalent:
2.4.4 Multi-indices
When a function is of type $C^k$, then we know that in computing a $k$-th order derivative the order of
the derivatives does not matter, only the total number of derivatives we take with respect to each
variable. This suggests a very convenient notation. In the above example, we can write $(2, 1, 1)$ to
capture the fact that we are differentiating the first variable twice, the second variable once, and
the third variable once. This leads us to the notion of a multi-index.
A multi-index is a tuple of non-negative integers $\alpha = (\alpha_1, \ldots, \alpha_n)$. The order of $\alpha$ is the sum
of its components
$$|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_n.$$
We define the multi-index factorial to be
$$\alpha! = \alpha_1! \, \alpha_2! \cdots \alpha_n!,$$
the multi-index exponential to be
$$x^\alpha = x_1^{\alpha_1} x_2^{\alpha_2} \cdots x_n^{\alpha_n},$$
and if $f : \mathbb{R}^n \to \mathbb{R}$ we write
$$\partial^\alpha f = \frac{\partial^{|\alpha|} f}{\partial x_1^{\alpha_1} \partial x_2^{\alpha_2} \cdots \partial x_n^{\alpha_n}}.$$
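Multi-index bookkeeping is easy to mechanize. Here is a small sketch of our own (the function names are ours, not notation from the notes) implementing $|\alpha|$, $\alpha!$, and $x^\alpha$, tried on the multi-index $\alpha = (2, 1, 1)$ from above:

```python
import math
from functools import reduce

def order(alpha):
    """|alpha| = alpha_1 + ... + alpha_n."""
    return sum(alpha)

def mi_factorial(alpha):
    """alpha! = alpha_1! * alpha_2! * ... * alpha_n!."""
    return reduce(lambda acc, a: acc * math.factorial(a), alpha, 1)

def mi_power(x, alpha):
    """x^alpha = x_1^{alpha_1} * ... * x_n^{alpha_n}."""
    return reduce(lambda acc, pair: acc * pair[0]**pair[1], zip(x, alpha), 1)

alpha = (2, 1, 1)           # differentiate x1 twice, x2 once, x3 once
print(order(alpha))         # 4
print(mi_factorial(alpha))  # 2
print(mi_power((2.0, 3.0, 5.0), alpha))  # 2^2 * 3 * 5 = 60.0
```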
The multi-index factorial and exponential will be crucial pieces of notation in the following
section. For now, we would like to capitalize on partial derivative notation. So for example, if
$f : \mathbb{R}^4 \to \mathbb{R}$ and we endow $\mathbb{R}^4$ with the coordinates $(x, y, z, w)$, then
et cetera.
2.5 Taylor Series

Before talking about how multivariate Taylor series work, let's review what we learned in the single
variable case. We have seen that the derivative can be used as a tool for linearly approximating a
function. If $f$ is differentiable at a point $a$, then for $x$ near $a$ we have the approximation
$$f(x) \approx f(a) + f'(a)(x - a).$$
Note that this is also sometimes written in terms of the distance $h = x - a$ from $a$, so that
$$f(a + h) \approx f(a) + f'(a)h.$$
Again, the top equation is a function of the absolute position $x$, while the bottom equation is a
function of the relative distance $h$. The relationship between these two representations of Taylor
series is akin to the two equivalent definitions for the derivative at $a$:
$$f'(a) = \lim_{x \to a} \frac{f(x) - f(a)}{x - a} = \lim_{h \to 0} \frac{f(a + h) - f(a)}{h}.$$
Now one can extend the conversation beyond just linear approximations, and introduce quadratic,
cubic, and quartic approximations. More generally, given some $n \in \mathbb{N}$ we can set
$$p_{n,a}(x) = c_n (x - a)^n + c_{n-1} (x - a)^{n-1} + \cdots + c_1 (x - a) + c_0$$
and ask what conditions on the $c_k$ guarantee that $f^{(k)}(a) = p_{n,a}^{(k)}(a)$.
This is a fairly straightforward exercise, and the student will find that
$$c_k = \frac{f^{(k)}(a)}{k!}, \quad \text{so that} \quad p_{n,a}(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!} (x - a)^k.$$
In order to ensure that this is a good approximation, we need to look at the error term $r_{n,a}(x) =
f(x) - p_{n,a}(x)$. In particular, for $p_{n,a}(x)$ to represent a good $k$-th order approximation to $f$, we
should require that the remainder tends to zero faster than $k$-th order; that is,
$$\lim_{x \to a} \frac{r_{n,a}(x)}{(x - a)^k} = 0.$$
There are many different approximations to $r_{n,a}(x)$, which vary depending on the regularity of the
function (is $f$ of type $C^n$ or $C^{n+1}$?), or on the technique used to approximate the error. In general
we will only be working with $C^\infty$ functions, so we are not going to concern ourselves too much
with regularity. It is quite a mess to introduce all of the technical approximations, so we content
ourselves with only deriving a single one, called Lagrange's form of the remainder.
Proof. All of the conditions of Rolle's theorem apply with $f(a) = f(b)$, so there exists a $\xi_1 \in (a, b)$
such that $f'(\xi_1) = 0$. Similarly, we know that $f'$ is continuous on $[a, b]$ and differentiable on
$(a, b)$, and $f'(a) = f'(\xi_1) = 0$, so there exists $\xi_2 \in (a, \xi_1)$ such that $f''(\xi_2) = 0$. We can continue
inductively in this fashion, until $f^{(n)}(a) = f^{(n)}(\xi_n) = 0$, so that there exists $c := \xi_{n+1} \in (a, \xi_n) \subseteq (a, b)$
such that $f^{(n+1)}(c) = 0$, as required.
$$r_{n,a}(x) = \frac{f^{(n+1)}(c)}{(n+1)!} (x - a)^{n+1}. \tag{2.7}$$
Proof. Assume for the moment that $x > a$ and define the function
$$g(t) = r_{n,a}(t) - r_{n,a}(x) \frac{(t - a)^{n+1}}{(x - a)^{n+1}}$$
so that $g(a) = g(x) = 0$. Writing $r_{n,a}(t) = f(t) - p_{n,a}(t)$ we have
$$g(t) = f(t) - f(a) - f'(a)(t - a) - \frac{f''(a)}{2}(t - a)^2 - \cdots - \frac{f^{(n)}(a)}{n!}(t - a)^n - r_{n,a}(x)\frac{(t - a)^{n+1}}{(x - a)^{n+1}}.$$
It is straightforward to check that
$$g^{(k)}(t) = f^{(k)}(t) - f^{(k)}(a) - f^{(k+1)}(a)(t - a) - \cdots - \frac{f^{(n)}(a)}{(n - k)!}(t - a)^{n-k} - r_{n,a}(x)\frac{(n+1)!}{(n+1-k)!}\frac{(t - a)^{n+1-k}}{(x - a)^{n+1}}$$
so that $g^{(k)}(a) = 0$ for all $k = 1, \ldots, n$. By the Higher Order Rolle's Theorem, we know there exists
a $c \in (a, x)$ such that $g^{(n+1)}(c) = 0$, but this is precisely equivalent to
$$0 = g^{(n+1)}(c) = f^{(n+1)}(c) - r_{n,a}(x)\frac{(n+1)!}{(x - a)^{n+1}},$$
which we can re-arrange to get (2.7).
Corollary 2.43

Let $I$ be an open interval containing $a$. If $f$ is of type $C^{n+1}$ on $I$, then
$$\lim_{x \to a} \frac{r_{n,a}(x)}{|x - a|^n} = 0.$$
Proof. Since $f$ is of type $C^{n+1}$ we know that $f^{(n+1)}$ is continuous on $I$. Since $I$ is open and $a \in I$,
we can find a closed interval $J$ such that $a \in J \subseteq I$. By the Extreme Value Theorem, there exists
$M > 0$ such that $|f^{(n+1)}(x)| \leq M$ for all $x \in J$. Since $f$ is $n + 1$ times differentiable in a
neighbourhood of $a$, Theorem 2.42 implies that for $x \in J$,
$$\frac{|r_{n,a}(x)|}{|x - a|^n} \leq \frac{M}{(n+1)!}|x - a| \xrightarrow{x \to a} 0.$$
This corollary implies that the Taylor remainder is a good approximation, since the error vanishes
faster than order $n$. Moreover, in the proof we found that we could bound $r_{n,a}(x)$ as
$$|r_{n,a}(x)| \leq \frac{M}{(n+1)!} |x - a|^{n+1} \tag{2.8}$$
for some M > 0. This allows us to determine error bounds on Taylor series.
Example 2.44
Let $f(x) = \sin(x)$ and $g(x) = e^x$. Determine the number of terms needed in the Taylor series
to ensure that the Taylor polynomials at $a = 0$ are accurate to within 8 decimal places on
$[-1, 1]$.
Solution. This is a problem you might have if you worked for a classical calculator company. If
your calculator is only capable of holding eight significant digits then you need only ensure accuracy
to eight digits, so you need to determine how many terms of the Taylor polynomial you need to
program.
For $f(x) = \sin(x)$ we know that regardless of how many derivatives we take, $|f^{(k)}(x)| \leq 1$ for
all $x$, and since we are looking at the interval $[-1, 1]$, we know that $|x - a| = |x| \leq 1$. Substituting
this information into (2.8) we get that $|r_{k,0}(x)| \leq [(k + 1)!]^{-1}$. We need to find a value of $k$ such
that $1/(k + 1)! < 10^{-8}$. The student can check that this first happens when $k = 11$.

Similarly, for $g(x) = e^x$ we know that $g^{(k)}(x) = e^x$, and on the interval $[-1, 1]$ we can bound
this as $|g^{(k)}(x)| < 3$. We still have $|x - a| \leq 1$, so (2.8) gives us $|r_{k,0}(x)| \leq 3[(k + 1)!]^{-1}$, which also
becomes smaller than $10^{-8}$ when $k = 11$.
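The factorial bound can be checked with a few lines of Python (a throwaway sketch of ours, not part of the notes):

```python
import math

def terms_needed(M, tol=1e-8):
    """Smallest k with M / (k + 1)! < tol -- the bound (2.8) with |x - a| <= 1."""
    k = 0
    while M / math.factorial(k + 1) >= tol:
        k += 1
    return k

print(terms_needed(1))  # bound |f^(k)| <= 1 for sin(x): k = 11
print(terms_needed(3))  # bound |g^(k)| <= 3 for e^x on [-1, 1]: k = 11
```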
Just like with the Multivariate Mean Value Theorem, we will introduce the multivariate Taylor
Series by examining what happens when we restrict our function to a line. For simplicity, assume
that $S \subseteq \mathbb{R}^n$ is a convex set and choose some point $a = (a_1, \ldots, a_n) \in S$ around which we will
compute our Taylor series for $f : S \to \mathbb{R}$. Let $x_0 = (x_0^1, \ldots, x_0^n) \in S$ be some point at which we
want to compute $f(x_0)$ and consider the line
$$\gamma(t) = a + t(x_0 - a), \qquad t \in [0, 1].$$
Pre-composing $f$ with $\gamma$ we get the function $g : \mathbb{R} \to \mathbb{R}$, $g(t) = f(\gamma(t))$. Notice that $g(0) = f(a)$ and
$g(1) = f(x_0)$. Furthermore, since $g$ is a real-valued function of a single variable, it admits a Taylor
polynomial centred at $0$, which can be evaluated at $t = 1$:
$$g(1) = \sum_{k=0}^{n} \frac{g^{(k)}(0)}{k!} + \text{remainder}. \tag{2.9}$$
Let's look at the derivatives of $g$. The first derivative is easily computed via the chain rule, and we
get
$$g'(t) = (x_0 - a) \cdot \nabla f(a + t(x_0 - a)).$$
If we think of $\nabla = \left(\partial_{x_1}, \ldots, \partial_{x_n}\right)$ then we can define a new operator
$$(x_0 - a) \cdot \nabla = (x_0^1 - a_1)\frac{\partial}{\partial x_1} + \cdots + (x_0^n - a_n)\frac{\partial}{\partial x_n},$$
and $g'(t) = [(x_0 - a) \cdot \nabla] f(a + t(x_0 - a))$. Differentiating $k$ times in general will give us
$$g^{(k)}(t) = [(x_0 - a) \cdot \nabla]^k f(a + t(x_0 - a)).$$
This is theoretically complete, but computationally quite messy. Let's see if we can get a better
grip on what these operators $[(x_0 - a) \cdot \nabla]^k$ look like. For the sake of determining what this looks
like, let $n = 2$ and $a = (0, 0)$, so that
$$[(x_0, y_0) \cdot \nabla]^2 f = \left(x_0 \frac{\partial}{\partial x} + y_0 \frac{\partial}{\partial y}\right)^2 f = x_0^2 f_{xx} + x_0 y_0 f_{xy} + y_0 x_0 f_{yx} + y_0^2 f_{yy} = \sum_{|\alpha| = 2} \frac{2!}{\alpha!} (x_0, y_0)^\alpha \, \partial^\alpha f.$$
Notice that we get a perfect correspondence between the coefficient and the derivatives. For example, the coefficient of $f_{yx}$ is $y_0 x_0$. The last line is written in multi-index notation, where the order
of every multi-index is $2$. One can imagine this also works for general $n$ and general $a$, so that
$$[(x_0 - a) \cdot \nabla]^k f(a) = \sum_{|\alpha| = k} \frac{k!}{\alpha!} (\partial^\alpha f)(a) \, (x_0 - a)^\alpha.$$
$$f(x) = \sum_{|\alpha| \leq n} \frac{(\partial^\alpha f)(a)}{\alpha!} (x - a)^\alpha + r_{n,a}(x)$$
Example 2.45
Determine the 2nd order Taylor polynomial for $f(x, y) = \sin(x^2 + y^2)$ about $a = (0, 0)$.
Example 2.46
Determine the 2nd order Taylor polynomial for $f(x, y) = xe^y$ at $a = (0, 0)$.
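For Example 2.46 one can compute by hand that $f(0,0) = 0$, $\nabla f(0,0) = (1, 0)$, and the only non-zero second partials at the origin are $\partial_{xy} f = \partial_{yx} f = 1$, which gives the degree-2 polynomial $p_2(x, y) = x + xy$. A quick numerical sketch (ours, not from the notes) confirms that the remainder vanishes to third order:

```python
import math

f  = lambda x, y: x * math.exp(y)
p2 = lambda x, y: x + x * y   # hand-computed 2nd-order Taylor polynomial at (0, 0)

# the remainder should vanish faster than second order: |f - p2| = O(r^3)
for r in (1e-1, 1e-2, 1e-3):
    err = abs(f(r, r) - p2(r, r))
    print(err < 10 * r**3)
```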
Something interesting is happening here: We know that the Taylor series for $e^x$ and $\sin(x)$ are
$$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!}, \qquad \sin(x) = \sum_{k=0}^{\infty} \frac{(-1)^k x^{2k+1}}{(2k+1)!}.$$
We know that if $f : \mathbb{R}^n \to \mathbb{R}$ is at least class $C^2$, then there are $n^2$ second-order partial derivatives
$\partial_{ij} f$, $i, j \in \{1, \ldots, n\}$. Moreover, the mixed partial derivatives commute, so that $\partial_{ij} f = \partial_{ji} f$. This
information can all be conveniently written in terms of a matrix:
Definition 2.47
If $f : \mathbb{R}^n \to \mathbb{R}$ is of class $C^2$ then the Hessian matrix of $f$ at $a \in \mathbb{R}^n$ is the symmetric
$n \times n$ matrix of second order partial derivatives:
$$H(a) = \begin{pmatrix} \partial_{11} f(a) & \partial_{12} f(a) & \cdots & \partial_{1n} f(a) \\ \partial_{21} f(a) & \partial_{22} f(a) & \cdots & \partial_{2n} f(a) \\ \vdots & \vdots & \ddots & \vdots \\ \partial_{n1} f(a) & \partial_{n2} f(a) & \cdots & \partial_{nn} f(a) \end{pmatrix}.$$
The Hessian matrix makes writing down the Taylor series of a function very compact. Notice
that the first order terms of the Taylor expansion are given by
$$\sum_{|\alpha| = 1} \frac{1}{\alpha!} (\partial^\alpha f)(a)(x_0 - a)^\alpha = \nabla f(a) \cdot (x_0 - a).$$
Similarly, the second order terms involve the second-order partials and can be written as
$$\sum_{|\alpha| = 2} \frac{1}{\alpha!} (\partial^\alpha f)(a)(x_0 - a)^\alpha = \frac{1}{2} (x_0 - a)^T H(a) (x_0 - a),$$
Example 2.48
Determine the Hessian of the function $f(x, y, z) = x^2 y + e^{yz}$ at the point $(1, 1, 0)$.
Solution. We start by computing the gradient $\nabla f = (2xy, x^2 + ze^{yz}, ye^{yz})$. The Hessian is now the
matrix of second order partial derivatives, and may be computed as
$$H(x, y, z) = \begin{pmatrix} 2y & 2x & 0 \\ 2x & z^2 e^{yz} & e^{yz}(1 + zy) \\ 0 & e^{yz}(1 + zy) & y^2 e^{yz} \end{pmatrix}.$$
We can take one extra step and evaluate the gradient at this point, $\nabla f(1, 1, 0) = (2, 1, 1)$, and write
down the Taylor series:
\begin{align*}
f(x) &= f(1, 1, 0) + \nabla f(1, 1, 0) \cdot \begin{pmatrix} x - 1 \\ y - 1 \\ z \end{pmatrix} + \frac{1}{2}(x - 1, y - 1, z) \, H(1, 1, 0) \begin{pmatrix} x - 1 \\ y - 1 \\ z \end{pmatrix} + O(\|x\|^3) \\
&= 2 + (2, 1, 1) \cdot \begin{pmatrix} x - 1 \\ y - 1 \\ z \end{pmatrix} + \frac{1}{2}(x - 1, y - 1, z) \begin{pmatrix} 2 & 2 & 0 \\ 2 & 0 & 1 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} x - 1 \\ y - 1 \\ z \end{pmatrix} + O(\|x\|^3) \\
&= 2 + 2(x - 1) + (y - 1) + z + (x - 1)^2 + 2(x - 1)(y - 1) + (y - 1)z + \frac{1}{2}z^2 + O(\|x\|^3).
\end{align*}
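The Hessian in Example 2.48 can be double-checked with finite differences. The snippet below (our own sketch, not part of the notes) rebuilds $H(1, 1, 0)$ numerically and compares it to the matrix computed by hand:

```python
import math

def f(v):
    x, y, z = v
    return x**2 * y + math.exp(y * z)

def hessian(f, v, h=1e-4):
    """Symmetric finite-difference Hessian of f at v."""
    n = len(v)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            vpp = list(v); vpp[i] += h; vpp[j] += h
            vpm = list(v); vpm[i] += h; vpm[j] -= h
            vmp = list(v); vmp[i] -= h; vmp[j] += h
            vmm = list(v); vmm[i] -= h; vmm[j] -= h
            H[i][j] = (f(vpp) - f(vpm) - f(vmp) + f(vmm)) / (4 * h * h)
    return H

H = hessian(f, (1.0, 1.0, 0.0))
expected = [[2, 2, 0], [2, 0, 1], [0, 1, 1]]
print(all(abs(H[i][j] - expected[i][j]) < 1e-4
          for i in range(3) for j in range(3)))
```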
We can make even further strides if we allow ourselves to import a powerful theorem from linear
algebra:
Theorem 2.49: Spectral Theorem

If $A$ is a symmetric $n \times n$ matrix with real entries, then the eigenvalues of $A$ are all real, and there is an orthonormal basis of $\mathbb{R}^n$ consisting of eigenvectors of $A$.

Writing $A$ in the basis guaranteed by the Spectral Theorem is called the eigendecomposition
of $A$. In the eigendecomposition, the matrix $A$ becomes a diagonal matrix with the eigenvalues on the
diagonal. We will make use of the spectral theorem in the following section.
2.6 Optimization
When dealing with differentiable real-valued functions of a single variable $f : [a, b] \to \mathbb{R}$ we had
a standard procedure for determining maxima and minima. This amounted to checking critical
points on the interior $(a, b)$ and then checking the boundary points. The necessity of checking the
boundary separately arose from the non-differentiability of the function at the boundary. In the
Definition 2.51
If $f : \mathbb{R}^n \to \mathbb{R}$ is differentiable, we say that $c \in \mathbb{R}^n$ is a critical point of $f$ if $\nabla f(c) = 0$.$^a$ If
$c$ is a critical point, we say that $f(c)$ is a critical value. All points which are not critical are
termed regular points.

$^a$ More generally, if $f : \mathbb{R}^n \to \mathbb{R}^k$ then we say that $c \in \mathbb{R}^n$ is a critical point if $Df(c)$ does not have maximal
rank.
We see that the above definition of a critical point agrees with our usual definition when
$n = 1$; namely, that $f'(c) = 0$.
Example 2.52
$$f(x, y) = x^3 + y^3, \qquad g(x, y, z) = xy + xz + x$$
Notice that critical points do not need to be isolated: one can have entire curves or planes
represent critical points. The important property of critical points is that they give a schema for
determining when a point is a maximum or minimum, through the following theorem:
Proposition 2.53

Let $f : D \subseteq \mathbb{R} \to \mathbb{R}$ be differentiable. If $c$ is an interior point of $D$ at which $f$ attains a local maximum or minimum, then $f'(c) = 0$.
Proof. We shall do the proof for the case when $c$ corresponds to a local maximum and leave the proof
of the other case to the student. Since $c$ is a local maximum, we know there is some neighbourhood
$I \subseteq D$ of $c$ such that for all $x \in I$, $f(x) \leq f(c)$.
Since $c$ corresponds to a maximum of $f$, for all $h > 0$ sufficiently small so that $c + h \in I$, we
have that $f(c + h) \leq f(c)$. Hence $f(c + h) - f(c) \leq 0$, and since $h$ is positive, the difference quotient
satisfies $\frac{f(c + h) - f(c)}{h} \leq 0$. In the limit as $h \to 0^+$ we thus have
$$\lim_{h \to 0^+} \frac{f(c + h) - f(c)}{h} \leq 0. \tag{2.10}$$
Similarly, if $h < 0$ we still have $f(c + h) - f(c) \leq 0$, but now with a negative denominator our
difference quotient is non-negative and
$$\lim_{h \to 0^-} \frac{f(c + h) - f(c)}{h} \geq 0. \tag{2.11}$$
Combining (2.10) and (2.11) and using the fact that $f$ is differentiable at $c$, we have
$$0 \leq \lim_{h \to 0^-} \frac{f(c + h) - f(c)}{h} = f'(c) = \lim_{h \to 0^+} \frac{f(c + h) - f(c)}{h} \leq 0,$$
which implies that $f'(c) = 0$.
Of course, we know that this proposition is only necessary, not sufficient; that is, there are
critical points which do not yield extrema. The quintessential example is the function $f(x) = x^3$,
which has a critical point at $x = 0$, despite this point being neither a maximum nor minimum. A
more interesting example, which we leave for the student, is the function $f(x) = x - \sin(x)$, which
has infinitely many critical points but no local maxima or minima.
Our theme for the last several sections has been to adapt our single-variable theorems to mul-
tivariate theorems by examining the behaviour of functions through a line. This part will be no
different.
Corollary 2.54

Let $f : \mathbb{R}^n \to \mathbb{R}$ be differentiable. If $f$ attains a local maximum or minimum at $c \in \mathbb{R}^n$, then $\nabla f(c) = 0$.
Proof. We do the case where $c$ is a maximum and leave the other case as an exercise. Since $c$ is a
maximum, we know there is a neighbourhood $U \subseteq \mathbb{R}^n$ containing $c$ such that $f(x) \leq f(c)$ for all
$x \in U$. Since this holds in general, it certainly holds locally along any line through $c$; that is, for
any unit vector $u \in \mathbb{R}^n$ there exists $\epsilon > 0$ such that for all $t \in (-\epsilon, \epsilon)$, we have
$$g(t) := f(c + tu) \leq f(c) = g(0).$$
Since $g$ attains its maximum at $t = 0$ (an interior point), Proposition 2.53 implies that $g'(0) = 0$.
Using the chain rule, this implies that $\nabla f(c) \cdot u = 0$. This holds for all unit vectors $u$, so in
particular if we let $u = e_i = (0, \ldots, 1, \ldots, 0)$ be one of the standard basis vectors, then
$$0 = \nabla f(c) \cdot e_i = \partial_{x_i} f(c).$$
Once again, this theorem will be necessary, but not sufficient. For example, consider the functions
$f_1(x, y) = x^2 + y^2$ and $f_2(x, y) = y^2 - x^2$. Both functions have critical points at $(x, y) = (0, 0)$;
however, the former is a minimum while the latter is not. In particular, the latter function gives an
example of a saddle point. Graphing functions is a terrible way to determine maxima and minima
though, so we need to develop another criterion for determining extrema. This comes in the form of
the second derivative test.
Proposition 2.55

Let $f : \mathbb{R}^n \to \mathbb{R}$ be $C^2$ and let $c$ be a critical point of $f$ with Hessian $H(c)$.

1. If all of the eigenvalues of $H(c)$ are positive, then $c$ is a local minimum.

2. If all of the eigenvalues of $H(c)$ are negative, then $c$ is a local maximum.

3. If $H(c)$ has a mix of positive and negative eigenvalues, then $c$ is a saddle point.
We will not give the proof of this proposition, but instead present a heuristic which essentially
captures the idea of the proof. Recall from our discussion at the end of 2.5.3 that $H(c)$ admits an
eigendecomposition with eigenvalues $\{\lambda_i\}_{i=1}^n$. As $\nabla f(c) = 0$, the second-order Taylor polynomial
tells us that in this basis, our function is approximately
$$f(x) \approx f(c) + \frac{1}{2}(x - c)^T H(c)(x - c) = f(c) + \frac{1}{2}\sum_{i=1}^{n} \lambda_i (x_i - c_i)^2.$$
If all of the eigenvalues are positive, this function is approximately an upward facing paraboloid
centered at $c$, meaning that it has a minimum. Similarly, if all the eigenvalues are negative, it is a
downward facing paraboloid and hence $c$ is a maximum. In the case where the $\lambda_i$ are of mixed sign,
the function looks like a minimum in the direction of the eigenvectors corresponding
to positive eigenvalues, and a maximum in the direction of eigenvectors corresponding to negative
eigenvalues, and hence $c$ is a saddle point.
Example 2.56
Determine the critical points of the function $f(x, y) = x^4 - 2x^2 + y^3 - 6y$ and classify each
as a maximum, minimum, or saddle point.
Solution. The gradient can be quickly computed to be $\nabla f(x, y) = (4x(x^2 - 1), 3(y^2 - 2))$. The first
component is zero when $x = 0, \pm 1$ and the second component is zero when $y = \pm\sqrt{2}$, giving six
critical points: $(0, \pm\sqrt{2})$, $(1, \pm\sqrt{2})$, and $(-1, \pm\sqrt{2})$. The Hessian is easily computed to be
$$H(x, y) = \begin{pmatrix} 12x^2 - 4 & 0 \\ 0 & 6y \end{pmatrix}.$$
Since the matrix is diagonal, its eigenvalues are exactly $12x^2 - 4$ and $6y$. Thus the maximum
is $(0, -\sqrt{2})$, the minima are $(\pm 1, \sqrt{2})$, and the other three points are saddles.
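The classification in Example 2.56 is easy to verify mechanically, since the Hessian is diagonal. Here is a short sketch of ours (not from the notes) that confirms each point is critical and applies the eigenvalue test:

```python
import math

# Gradient and diagonal-Hessian eigenvalues of f(x, y) = x^4 - 2x^2 + y^3 - 6y
grad = lambda x, y: (4 * x * (x**2 - 1), 3 * (y**2 - 2))
eigs = lambda x, y: (12 * x**2 - 4, 6 * y)

def classify(x, y):
    l1, l2 = eigs(x, y)
    if l1 > 0 and l2 > 0:
        return "min"
    if l1 < 0 and l2 < 0:
        return "max"
    return "saddle"

r2 = math.sqrt(2)
points = [(0, r2), (0, -r2), (1, r2), (1, -r2), (-1, r2), (-1, -r2)]
for (x, y) in points:
    gx, gy = grad(x, y)
    assert abs(gx) < 1e-12 and abs(gy) < 1e-12  # confirm it is a critical point
print({p: classify(*p) for p in points})
```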
There is one additional kind of critical point which can appear. The above discussion of maxima,
minima, and saddle points amounted to the function looking as though it had either a maximum
or a minimum in every direction, and whether or not those directions all agreed with one another.
This has not yet captured the idea of an inflection point.
Definition 2.57
If $f : \mathbb{R}^n \to \mathbb{R}$ is $C^2$ and $c$ is a critical point of $f$, then we say that $c$ is a degenerate critical
point if $\operatorname{rank} H(c) < n$.
Example 2.58
Show that the function $f(x, y) = y^2 - x^3$ has a degenerate critical point at $(x, y) = (0, 0)$.
Solution. The gradient is $\nabla f(x, y) = (-3x^2, 2y)$, which indeed has a critical point at $(0, 0)$. Furthermore, the Hessian is
$$H(x, y) = \begin{pmatrix} -6x & 0 \\ 0 & 2 \end{pmatrix}, \qquad H(0, 0) = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix},$$
so $H(0, 0)$ has rank 1, and we conclude that $(0, 0)$ is a degenerate critical point.
In the special case of a function $f : \mathbb{R}^2 \to \mathbb{R}$, one can use the determinant of the Hessian to
quickly ascertain whether critical points are maxima or minima.
Proposition 2.59

Let $f : \mathbb{R}^2 \to \mathbb{R}$ be $C^2$ with critical point $c$. If $\det H(c) < 0$ then $c$ is a saddle point. If $\det H(c) > 0$, then $c$ is a local minimum when $\partial_{11} f(c) > 0$ and a local maximum when $\partial_{11} f(c) < 0$.
Proof. For any matrix, the determinant is the product of the eigenvalues (this follows immediately
from the spectral theorem and the fact that the determinant is basis independent). Since f : R2 R
the Hessian has two eigenvalues. If the determinant is negative, this means that the two eigenvalues
have different signs and hence the critical point is a saddle point. If the determinant is positive,
both eigenvalues have the same sign, and we need only determine if both are positive or negative.
This last check can be done by looking at $\partial_{11} f$.
The previous section introduced the notion of critical points, which can be used to determine
maxima/minima on the interior of a set. However, what happens when we are given a set with
empty interior? Similarly, if one is told to optimize over a compact set, it is not sufficient to only
optimize over the interior, one must also check the boundary.
We have seen problems of constrained optimization before. A typical example might consist of
something along the lines of
You are building a fenced rectangular pasture, with one edge located along a river.
Given that you have 200m of fencing, find the dimensions which maximize the area
of the pasture.
[Figure 23: A rectangular pasture bordered on one side by a river, with two fenced sides of length $x$ and one of length $y$.]
Translating this problem into mathematics, we let $x$ be the length and $y$ be the width of the pasture.
We must then maximize the function $f(x, y) = xy$ subject to the constraint $2x + y = 200$. The
equation $2x + y = 200$ is a line in $\mathbb{R}^2$, so we are being asked to determine the maximum value of
the function $f$ along this line. The way that this was typically handled in first year was to use
the constraint to rewrite one variable in terms of another, and use this to reduce our function to a
single variable. For example, if we write $y = 200 - 2x$ then
$$f(x, 200 - 2x) = x(200 - 2x) = 200x - 2x^2.$$
The lone critical point of this function occurs at $x = 50$, which gives a value of $y = 100$, and one
can quickly check that this is the max.
Another technique that one could employ is the following: Recognizing that $2x + y = 200$ is
just a line in $\mathbb{R}^2$, we can parameterize that line by a function $\gamma(t) = (t, 200 - 2t)$. The composition
$f \circ \gamma$ is now a function in terms of the independent parameter $t$, yielding $f(\gamma(t)) = 200t - 2t^2$, which
of course gives the same answer.
The fact that our constraint was just a simple line made this problem exceptionally simple.
What if we wanted to optimize over a more difficult one-dimensional space, or even a two dimen-
sional surface? Once again we can try to emulate the procedures above, and we may even meet
with some success. However, there is a more novel way of approaching such problems, using the
method of Lagrange multipliers.
Theorem 2.60

Let $G : \mathbb{R}^n \to \mathbb{R}$ be $C^1$, set $S = G^{-1}(0)$, and suppose $\nabla G(x) \neq 0$ for every $x \in S$. If $f : \mathbb{R}^n \to \mathbb{R}$ is differentiable and attains a local maximum or minimum on $S$ at $c$, then there exists $\lambda \in \mathbb{R}$ such that
$$\nabla f(c) = \lambda \nabla G(c).$$

Proof. Let $\gamma : (-\epsilon, \epsilon) \to S$ be any path such that $\gamma(0) = c$, so that $\gamma'(0)$ is a vector which is tangent
to $S$ at $c$. Since $\gamma(t) \in S$ for all $t \in (-\epsilon, \epsilon)$, by the definition of $S$ we must have $G(\gamma(t)) = 0$.
Differentiating at $t = 0$ yields the identity
$$0 = \nabla G(c) \cdot \gamma'(0).$$
Example 2.61
Use the method of Lagrange multipliers to solve the problem given in Figure 23.
Solution. The constraint in our fencing problem is given by the function $G(x, y) = 2x + y - 200 = 0$.
We can easily compute $\nabla f(x, y) = (y, x)$ and $\nabla G(x, y) = (2, 1)$, so by the method of Lagrange
multipliers, there exists $\lambda \in \mathbb{R}$ such that $\nabla f(x, y) = \lambda \nabla G(x, y)$; that is,
$$\begin{pmatrix} y \\ x \end{pmatrix} = \lambda \begin{pmatrix} 2 \\ 1 \end{pmatrix}.$$
We thus know that $y = 2\lambda$, $x = \lambda$, and substituting this into $2x + y = 200$ gives $4\lambda = 200$. Thus
$\lambda = 50$, from which we conclude that $y = 2\lambda = 100$ and $x = \lambda = 50$ as required.
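The multiplier answer agrees with a brute-force scan along the constraint (a throwaway check of ours, not part of the notes):

```python
# Parameterize the constraint 2x + y = 200 by x; the enclosed area is
# A(x) = x * (200 - 2x). A scan over integer x recovers the same maximizer
# found with the Lagrange multiplier.
area = lambda x: x * (200 - 2 * x)
best = max(range(0, 101), key=area)
print(best, 200 - 2 * best, area(best))  # x = 50, y = 100, area 5000
```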
Example 2.62

Determine the extreme values of $f(x, y, z) = xyz$ subject to the constraint $x^2 + 2y^2 + 3z^2 = 1$.
$^4$ Here we are sweeping some stuff under the rug. In particular, one must believe us that since $G$ is $C^1$ then
$S = G^{-1}(0)$ is a smooth surface, so that its tangent plane has dimension $n - 1$.
Solution. The constraint equation is given by $G(x, y, z) = x^2 + 2y^2 + 3z^2 - 1 = 0$. When we compute
our gradients, the method of Lagrange multipliers gives the following system of equations:
\begin{align*}
yz &= 2\lambda x \\
xz &= 4\lambda y \\
xy &= 6\lambda z
\end{align*}
If we combine this with the constraint $x^2 + 2y^2 + 3z^2 = 1$ we have four equations in four unknowns,
though all the equations are certainly non-linear! Herein we must be clever, and start manipulating
our equations to try and solve for $(x, y, z)$. Notice that if we play with the term $xyz$ then depending
on how we use the associativity of multiplication, we can get an additional set of conditions. For
example,
$$xyz = x(yz) = 2\lambda x^2, \qquad xyz = y(xz) = 4\lambda y^2, \qquad xyz = z(xy) = 6\lambda z^2,$$
and all of these must be equal. We can make a small simplification by removing a factor of $2\lambda$ to get
$$x^2 = 2y^2 = 3z^2. \tag{2.12}$$
Example 2.63
Determine the maximum and minimum of the function $f(x, y) = x^2 + 2y^2$ on the disk
$x^2 + y^2 \leq 4$.
Solution. We begin by determining critical points on the interior. Here we have $\nabla f(x, y) = (2x, 4y)$,
which can only be $(0, 0)$ if $x = y = 0$. Here we have $f(0, 0) = 0$.
Next we determine the extreme points on the boundary $x^2 + y^2 = 4$, for which we set up the
constraint function $G(x, y) = x^2 + y^2 - 4$ with gradient $\nabla G(x, y) = (2x, 2y)$. Using the method of
Lagrange multipliers, we obtain the system
\begin{align*}
2x &= 2\lambda x \\
4y &= 2\lambda y
\end{align*}
Case 1 ($x \neq 0$): If $x \neq 0$ then we can solve $2x = 2\lambda x$ to find that $\lambda = 1$. This implies that
$4y = 2y$, which is only possible if $y = 0$. Plugging this into the constraint gives $x^2 = 4$ so that
$x = \pm 2$, so our candidate points are $(\pm 2, 0)$, which give values $f(\pm 2, 0) = 4$.

Case 2 ($y \neq 0$): If $y \neq 0$ then we can solve $4y = 2\lambda y$ to find that $\lambda = 2$. This implies that
$2x = 4x$, which is only possible if $x = 0$. Solving the constraint equation thus gives the candidates
$(0, \pm 2)$, which give values $f(0, \pm 2) = 8$.

The case where $\lambda = 0$ gives no additional information. Hence we conclude that the minimum
occurs at $(0, 0)$ with a value of $f(0, 0) = 0$, while the maximum occurs at the two points $(0, \pm 2)$
with a value of $f(0, \pm 2) = 8$.
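A crude grid search over the disk (our own sanity check, not part of the notes) reproduces these values:

```python
f = lambda x, y: x**2 + 2 * y**2

# sample the closed disk x^2 + y^2 <= 4 on a 201 x 201 grid
vals = []
for i in range(201):
    for j in range(201):
        x = -2 + 4 * i / 200
        y = -2 + 4 * j / 200
        if x * x + y * y <= 4:
            vals.append(f(x, y))
print(min(vals), max(vals))  # minimum 0 at the origin, maximum 8 at (0, +-2)
```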
If multiple constraints are given, the procedure is similar, except that we now need additional
multipliers. More precisely, if $G : \mathbb{R}^n \to \mathbb{R}^m$ is given by $G(x) = (G_1(x), \ldots, G_m(x))$, we set
$S = G^{-1}(0)$, and we are tasked with optimizing $f : S \to \mathbb{R}$, then if $c \in S$ is a maximum or
minimum there exist $\lambda_1, \ldots, \lambda_m \in \mathbb{R}$ such that
$$\nabla f(c) = \sum_{i=1}^{m} \lambda_i \nabla G_i(c).$$
3 Local Invertibility
Given the plethora of ways of defining functions, curves, or surfaces over $\mathbb{R}^n$, a natural question
which arises is whether such characterizations are (locally) invertible. For example, given a function
$f \in C^1(\mathbb{R}^2, \mathbb{R}^2)$, $(x, y) \mapsto (xy, xe^y)$, is there a differentiable function $f^{-1} : \mathbb{R}^2 \to \mathbb{R}^2$ which inverts
it everywhere? If not, can we find a function which at least inverts it locally, or perhaps conditions
which tell us which points are troublesome for inverting?

Alternatively, what if one is given the zero locus of a $C^1$ function $F(x, y) = 0$ and is asked to
determine $y$ as a function of $x$? What conditions guarantee that this is possible? This section is
dedicated to elucidating this information.
3.1 Implicit Function Theorem

We begin by analyzing the latter case first; namely, given a $C^1$ function $F : \mathbb{R}^{n+k} \to \mathbb{R}^k$, when can
we solve the equation
$$F(x_1, \ldots, x_n, y_1, \ldots, y_k) = 0$$
for the $y_i$ as functions of the $x_i$? More precisely, do there exist $C^1$ functions $f_i : \mathbb{R}^n \to \mathbb{R}$ such that
$y_i = f_i(x_1, \ldots, x_n)$? In this great of generality, it can be difficult to see the forest for the trees, so
we treat the $k = 1$ case as a special example to glean some insight into the problem.
Consider a function $F \in C^1(\mathbb{R}^{n+1}, \mathbb{R})$ and endow $\mathbb{R}^{n+1}$ with the coordinates $(x_1, \ldots, x_n, y)$, whose
purpose is to make our exposition clear with regards to which variable is solved in terms of the
other variables. Can we solve the equation $F(x, y) = 0$ for $y$ as a function of $x$? Alternatively, can
we realize $F(x, y) = 0$ as the graph of a function $y = f(x)$? Some simple examples suggest that
the answer could be yes.
Unfortunately, it turns out that such examples are exceptionally rare and in general the answer
is no:
[Figure: (a) A plot of the graph $(x^2 + 1)y^3 = 1$. It is easy to believe that this curve can be written as the
graph of a function. (b) The circle $x^2 + y^2 = 1$ cannot be written as the graph of a function: it fails the
vertical line test.]
The primary difference between Examples 3.1 and 3.2 is that the former was in a sense injective
with respect to $y$ (since $y^3$ is one-to-one) while the latter was not ($y^2$ is two-to-one). This
example in hand, the situation seems rather dire: even such simple examples preclude the hope of
solving one variable in terms of another. Nonetheless, one could argue that there are parts of the
circle $x^2 + y^2 = 1$ that look like graphs, one being $y = \sqrt{1 - x^2}$ while the other is $y = -\sqrt{1 - x^2}$.
If it was our lofty goal of solving $y$ as a function of $x$ everywhere that presented a problem, perhaps
by restricting ourselves to local solutions we might make more progress.
Since calculus is, in many ways, the study of functions by looking at their linear approximations,
let's see what happens in the simplest non-trivial case where $F(x, y)$ is linear:
$$F(x, y) = \alpha_1 x_1 + \cdots + \alpha_n x_n + \beta y + c = \sum_{i=1}^{n} \alpha_i x_i + \beta y + c.$$
In this case, it is easy to see that we can solve for $y$ as a function of $x$ so long as $\beta \neq 0$. Now recall
that if $F(x, y)$ is a (not necessarily linear) $C^1$ function, and $(a, b)$ satisfies $F(a, b) = 0$, then the
equation of the tangent hyperplane at $(a, b)$ is given by
$$\frac{\partial F}{\partial x_1}(a, b)x_1 + \cdots + \frac{\partial F}{\partial x_n}(a, b)x_n + \frac{\partial F}{\partial y}(a, b)y + d = \nabla_x F(a, b) \cdot x + \frac{\partial F}{\partial y}(a, b)y + d = 0$$
for some constant $d$. By analogy, $\frac{\partial F}{\partial y}(a, b)$ plays the role of $\beta$, which suggests that so long as
$\frac{\partial F}{\partial y}(a, b) \neq 0$, $y$ should be solvable as a function of $x$ in a neighbourhood of $(a, b)$.
Theorem 3.3
If $F(x, y)$ is $C^1$ on some neighbourhood $U \subseteq \mathbb{R}^{n+1}$ of the point $(a, b) \in \mathbb{R}^{n+1}$, $F(a, b) = 0$, and
$\frac{\partial F}{\partial y}(a, b) \neq 0$, then there exists an $r > 0$ together with a unique $C^1$ function $f : B_a(r) \to \mathbb{R}$
such that $F(x, f(x)) = 0$ for all $x \in B_a(r)$.
Proof. We break our proof into several steps: we begin by showing that there is an $r > 0$ such that
for each $x_0 \in B_a(r)$ there exists a unique $y_0$ such that $F(x_0, y_0) = 0$. We call the mapping which
takes $x_0 \mapsto y_0$ the function $f$. After this, we show that this function is actually differentiable.

Existence and Uniqueness: The spirit of this part of the proof is actually akin to the proof of
injectivity mentioned in the previous aside. Without loss of generality, assume that $\frac{\partial F}{\partial y}(a, b) > 0$,
so that there exists $r_1 > 0$ such that $\partial_y F > 0$ on the neighbourhood $B_{(a,b)}(r_1) \subseteq \mathbb{R}^{n+1}$. By taking a
smaller $r_1$ if necessary, we can ensure that $B_{(a,b)}(r_1) \subseteq U$.

Now the positivity of $\partial_y F$ on $B_{(a,b)}(r_1)$ ensures that
$$F(a, b - r_1) < 0, \qquad F(a, b + r_1) > 0.$$
Once again, by continuity there exist $\delta_1, \delta_2 > 0$ such that $F(x, b - r_1) < 0$ for all $x \in B_a(\delta_1)$ and
$F(x, b + r_1) > 0$ for all $x \in B_a(\delta_2)$. Let $r = \min\{r_1, \delta_1, \delta_2\}$, so that for any fixed $x_0 \in B_a(r)$ we
have $F(x_0, b - r_1) < 0$ and $F(x_0, b + r_1) > 0$. By the single variable Intermediate Value Theorem,
there is at least one $y_0 \in (b - r_1, b + r_1)$ such that $F(x_0, y_0) = 0$. Furthermore, because $F(x_0, y)$ is strictly
increasing as a function of $y$, this $y_0$ is unique by the Mean Value Theorem.
[Figure: (a) The graph of $F(x, y) = x^2 + y^2 - 1$, wherein the blue represents where $F(x, y) < 0$ and the red
where $F(x, y) > 0$. The arrows are the values of $\frac{\partial F}{\partial y}$. (b) Notice how the bottom of the rectangle lies
entirely within the blue, and the top lies entirely within the red.]
Differentiability: Fix some $x_0 \in B_a(r)$, set $y_0 = f(x_0)$, and choose $h \in \mathbb{R}$ sufficiently small so
that $h_i = he_i$ satisfies $x_0 + h_i \in B_a(r)$. Define $k = f(x_0 + h_i) - f(x_0)$ to be the $i$th difference,
so that $y_0 + k = f(x_0 + h_i)$. Now $F(x_0 + h_i, y_0 + k) = F(x_0, y_0) = 0$ since both points lie
in $B_a(r)$, so by the Mean Value Theorem there exists some $t \in (0, 1)$ such that
$$0 = F(x_0 + h_i, y_0 + k) - F(x_0, y_0) = h\frac{\partial F}{\partial x_i}(x_0 + th_i, y_0 + tk) + k\frac{\partial F}{\partial y}(x_0 + th_i, y_0 + tk).$$
Re-arranging we can write
$$\frac{f(x_0 + h_i) - f(x_0)}{h} = \frac{k}{h} = -\frac{\frac{\partial F}{\partial x_i}(x_0 + th_i, y_0 + tk)}{\frac{\partial F}{\partial y}(x_0 + th_i, y_0 + tk)}.$$
As the quotient on the right-hand-side consists of continuous functions and $\frac{\partial F}{\partial y} \neq 0$ in $B_{(a,b)}(r)$,
taking the $h \to 0$ limit yields
$$\frac{\partial f}{\partial x_i} = -\frac{\frac{\partial F}{\partial x_i}(x_0, y_0)}{\frac{\partial F}{\partial y}(x_0, y_0)}, \tag{3.1}$$
which is a continuous function.
A useful consequence of the proof of Theorem 3.3 is equation (3.1), which gives a formula for the partial derivatives of f(x). This is not surprising though, since if y = f(x) satisfies F(x, f(x)) = 0 then we may differentiate with respect to xj to find that

0 = ∂jF + ∂_{n+1}F · ∂jf,    so that    ∂jf = − ∂jF / ∂_{n+1}F,

which agrees with what we found in the course of the proof.
Recall that when implicit differentiation is used in first year calculus, we wave our hands and
tell the student to assume that what we are doing is kosher. The Implicit Function Theorem is the
theorem which justifies the fact that this can be done in general (though naturally only at the places
where the theorem actually applies). Furthermore, while equation (3.1) is useful theoretically, it
effectively amounts to implicit differentiation, which is what we will often use to actually compute
these derivatives.
Corollary 3.4
This corollary is of course immediate. The fact that ∇F ≠ 0 means that at every point, one of the components ∂jF ≠ 0. We may then apply Theorem 3.3 to solve for xj in terms of the remaining variables, and the result follows.
Example 3.5 Recall that the circle defined by the zero locus of F(x, y) = x² + y² − 1 cannot globally be solved for either x or y. However, ∇F(x, y) = (2x, 2y), which means that whenever y ≠ 0 we may determine y in terms of x, and vice versa. Indeed, this is what we expect, since any neighbourhood about the points (±1, 0) is necessarily two-to-one in terms of y. Furthermore, if y ≠ 0, let y = f(x) be the local solution for y in terms of x. From equation (3.1) the derivative df/dx is then

df/dx = − ∂1F/∂2F = − 2x/2y = − x/y.

This agrees with both implicit differentiation as well as explicit differentiation of y = ±√(1 − x²), and is left as an exercise for the student.
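Formula (3.1) is also easy to test numerically. The following is a minimal sketch in plain Python (the helper name f and the sample point are ours), comparing a central finite difference of the explicit solution y = √(1 − x²) against −x/y:

```python
import math

# Local solution y = f(x) of x^2 + y^2 - 1 = 0 on the upper semicircle.
def f(x):
    return math.sqrt(1 - x**2)

x0 = 0.6
h = 1e-6
numeric = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central finite difference
formula = -x0 / f(x0)                        # -x/y, as in equation (3.1)

assert abs(numeric - formula) < 1e-6
```

At x0 = 0.6 we have y = 0.8, so both quantities are close to −0.75.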
Example 3.6
Consider the function F(x, y, z) = (2x + y³ − z²)^{1/2} − cos(z). If S = F⁻¹(0), determine which variables may be determined by the others in a neighbourhood of (1, −1, 0) and compute the corresponding partial derivatives.

Solution. First notice that F(1, −1, 0) = 0, so that this point is in S. We need to determine which partial derivatives are non-zero at (1, −1, 0), so we compute to find

∇F = (1/√(2x + y³ − z²)) · (1, (3/2)y², −z + √(2x + y³ − z²) sin(z)).

At the point (1, −1, 0) this reduces to ∇F(1, −1, 0) = (1, 3/2, 0), so we may find C¹ functions f and g such that x = f(y, z) and y = g(x, z), but it is not possible to solve for z in a neighbourhood of (1, −1, 0).
For x = f(y, z), we have

∂f/∂y = − ∂yF/∂xF = − (3/2) y²,
∂f/∂z = − ∂zF/∂xF = z − sin(z) √(2x + y³ − z²).

Similarly, for y = g(x, z) we have

∂g/∂x = − ∂xF/∂yF = − 2/(3y²),
∂g/∂z = − ∂zF/∂yF = (2z − 2 sin(z) √(2x + y³ − z²)) / (3y²).
Again, the student may check that this is consistent with implicit differentiation.
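The derivative ∂f/∂y = −(3/2)y² can also be checked numerically. The sketch below assumes (as this note reads the example) a base point with y = −1, where F does vanish; solving F = 0 for x explicitly gives x = (cos²z − y³ + z²)/2:

```python
import math

# Solving F(x, y, z) = sqrt(2x + y^3 - z^2) - cos(z) = 0 for x explicitly.
def f(y, z):
    return (math.cos(z)**2 - y**3 + z**2) / 2

y0, z0, h = -1.0, 0.0, 1e-6
assert abs(f(y0, z0) - 1.0) < 1e-12          # the base point (1, -1, 0) lies on S

numeric = (f(y0 + h, z0) - f(y0 - h, z0)) / (2 * h)
formula = -1.5 * y0**2                       # -(3/2) y^2 from the theorem

assert abs(numeric - formula) < 1e-6
```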
The general case of a C¹ function F : R^{n+k} → R^k is not much more difficult: the major change will be evaluating what the analogous condition to ∂F/∂y ≠ 0 should be. Let x = (x1, ..., xn) and y = (y1, ..., yk). We once again return to the case where F(x, y) is a linear function. In this case, let A ∈ M_{k×n}(R) and B ∈ M_{k×k}(R) be real matrices, and define F(x, y) = Ax + By + c for some c ∈ R^k. If (x0, y0) is some point where F(x0, y0) = 0, then we can express y as a function of x if and only if the matrix B is invertible.
If F(x, y) is now a general function, the set F(x, y) = 0 defines a surface of dimension at most n in R^{n+k}. Let F(x, y) = (F1(x, y), ..., Fk(x, y)) for C¹ functions Fi : R^{n+k} → R. The Jacobian of F(x, y) is given by

dF = [ ∂Fi/∂xj | ∂Fi/∂yl ] = [ A(x, y) | B(x, y) ],

where A(x, y) = [∂Fi/∂xj] ∈ M_{k×n}(C¹(R^{n+k})) and B(x, y) = [∂Fi/∂yl] ∈ M_{k×k}(C¹(R^{n+k})), so that the tangent plane to F⁻¹(0) at (x0, y0) is given by A(x0, y0)x + B(x0, y0)y + d = 0. This tells us that our analogue to ∂F/∂y ≠ 0 from the single-variable case should now be that the matrix [∂Fi/∂yj]_{ij} be invertible.
Theorem 3.7: General Implicit Function Theorem
The proof of this theorem is done via induction on k, but it is quite messy and not particularly
enlightening so we omit the proof. Of more immediate interest is whether we can determine an
equation for the partial derivatives of the fk (x). The boring answer to this is that we simply
differentiate the equation F (x, f (x)) = 0 with respect to xj to determine the result, but this does
not clear things up as much as a simple example.
Example 3.8

Differentiating in the variables (u, v),

d_{(u,v)}F = [ 3u²   2v  ]          d_{(u,v)}F(2, 1, 2, 1) = [ 12   2 ]
             [ 4u   12v³ ],                                  [  8  12 ],

which has determinant 128 ≠ 0 and so is invertible. By Theorem 3.7 we know that (u, v) may thus be determined as functions of (x, y) in a neighbourhood of this point; say u = g1(x, y) and v = g2(x, y).

Now in order to determine the derivatives, we differentiate the function F(x, y, u, v) implicitly with respect to x and y, keeping in mind that u = g1(x, y) and v = g2(x, y). Differentiating with respect to x, we thus have

2x + 3u² ∂u/∂x + 2v ∂v/∂x = 0,
2y + 4u ∂u/∂x + 12v³ ∂v/∂x = 0.

In matrix form this reads

[ 3u²   2v  ] [ ∂u/∂x ]   [ −2x ]
[ 4u   12v³ ] [ ∂v/∂x ] = [ −2y ],

and inverting the matrix yields

∂u/∂x = (4vy − 24xv³) / (36u²v³ − 8uv),    ∂v/∂x = (8ux − 6u²y) / (36u²v³ − 8uv).
Note that this solution makes sense in spite of the fact that the u and v appear in the solution,
since u = g1 (x, y) and v = g2 (x, y) implies that these are functions of x, y alone.
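The 2×2 system can be solved numerically as a sanity check. A sketch in plain Python via Cramer's rule, at the evaluation point (x, y, u, v) = (2, 1, 2, 1):

```python
# Solve  [3u^2   2v  ] [du/dx]   [-2x]
#        [4u    12v^3] [dv/dx] = [-2y]   at (x, y, u, v) = (2, 1, 2, 1).
x, y, u, v = 2.0, 1.0, 2.0, 1.0
a11, a12 = 3 * u**2, 2 * v
a21, a22 = 4 * u, 12 * v**3
b1, b2 = -2 * x, -2 * y

det = a11 * a22 - a12 * a21            # 144 - 16 = 128, so the matrix is invertible
du_dx = (b1 * a22 - a12 * b2) / det    # Cramer's rule
dv_dx = (a11 * b2 - b1 * a21) / det

# agreement with the closed forms from the worked example
assert abs(du_dx - (4*v*y - 24*x*v**3) / (36*u**2*v**3 - 8*u*v)) < 1e-12
assert abs(dv_dx - (8*u*x - 6*u**2*y) / (36*u**2*v**3 - 8*u*v)) < 1e-12
```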
If we are clever, we can use the Implicit Function Theorem to say something about invertibility. Consider for example a function F : R² → R and its zero locus S = F⁻¹(0). If both ∂xF and ∂yF are non-zero at a point (a, b), the Implicit Function Theorem implies that we can write y in
terms of x and vice-versa, in a neighbourhood of (a, b). More precisely, there exist f and g such that y = f(x) and x = g(y) locally.
By taking a sufficiently small neighbourhood around (a, b), we can guarantee that both f and
g are injective (convince yourself this is true), and so by single variable results, both f and g have
inverses. For example, this means that f⁻¹(y) = x. But the Implicit Function Theorem also told us that the function g satisfying g(y) = x was unique, so necessarily g = f⁻¹.
We conclude that the Implicit Function Theorem might be able to say something about deter-
mining when a function is invertible. This culminates in the following theorem:
Theorem 3.9: The Inverse Function Theorem
It turns out that the Inverse Function Theorem and the Implicit Function Theorem are actually equivalent; that is, the Implicit Function Theorem can be proven from scratch and then used to prove the Inverse Function Theorem, or vice versa. We already have the Implicit Function Theorem, so we might as well use it, not to mention that the from-scratch proof of the Inverse Function Theorem is rather lengthy and uses the Contraction Mapping Theorem.
Example 3.10
Since e^{2x} is never zero, Df(x, y) will be invertible for any choice of (x, y), so the Inverse Function Theorem can be applied everywhere.
The Implicit Function Theorem is the key to determining the appropriate definition of what it means for something to be smooth. Intuitively, an object is smooth if it contains no corners, such as a sphere. On the other hand, something like a cube will not be smooth, as each vertex and edge of the cube is sharp. It turns out that this is not the best way of thinking about smoothness: for example, what happens when we are given a curve such as the lemniscate (see Figure 26)? When we draw the lemniscate, we can do it in a smooth fashion so that no sharp edges ever appear; nonetheless there does seem to be something odd about what happens at the overlap point.

Figure 26: The sphere should be smooth, the cube should not be, but who knows about the lemniscate?
The criteria that we will see will look mysterious in the absence of geometric intuition, but the
key to testing smoothness is to look at the tangent space. At each point on a curve, surface, or
higher dimensional object, there is the notion of a line, plane, or hyperplane which is tangent to
that point. Each of these tangent spaces just looks like a vector space, and thus has an associated
dimension. A space is smooth if its tangent space does not change dimension.
For example, we will see that every point on the sphere has a two dimensional tangent space.
For the cube, the interior of the faces will have two dimensional tangent spaces, while the edges
will have 1-dimensional tangent spaces, and the vertices will have a 0-dimensional tangent space.
There are several ways of actually defining these spaces, such as via the graph of a function,
through a parameterization, or as the zero locus of a function. In each of these cases, the dimensions
of the domain and codomain will play an important role. In this section, we take an introductory
look at the relationship between 1-dimensional curves, 2-dimensional surfaces, and n-dimensional
manifolds.
3.2.1 Curves in R²

We have thus far seen three different ways of defining one-dimensional objects. Here we pay particular attention to the case of one-dimensional objects in R².

1. As the graph of a function: Let f : R → R; the corresponding curve is the graph

Γ(f) = {(x, f(x)) : x ∈ R}.

2. As the zero locus of a function: Let F : R² → R, and let C = F⁻¹(0). This object will generally be one-dimensional, as F(x, y) = 0 yields one equation in two variables, meaning we can (locally) solve for one in terms of the other.

3. As the image of a parametric function: Let p : (a, b) → R² be given by t ↦ (p1(t), p2(t)), and define the curve to be p((a, b)).

For example, the curve which is the graph of f(x) = x^{2/3} is the same as the zero locus of F(x, y) = y³ − x² and the parametric curve p(t) = (t³, t²).

Figure 27: The curve defined by the graph of f(x) = x^{2/3}, the zero locus of F(x, y) = y³ − x², and the parametric function p(t) = (t³, t²).
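That the three descriptions carve out the same set is easy to verify pointwise. A small sketch (taking the graph function to be f(x) = x^{2/3}, consistent with F(x, y) = y³ − x²):

```python
# The three descriptions of the cuspidal curve agree.
def f(x):                 # graph:       y = x^(2/3)
    return (x * x) ** (1.0 / 3.0)

def F(x, y):              # zero locus:  y^3 - x^2 = 0
    return y**3 - x**2

def p(t):                 # parametric:  t -> (t^3, t^2)
    return (t**3, t**2)

for t in [-1.5, -0.5, 0.0, 0.5, 1.5]:
    x, y = p(t)
    assert abs(F(x, y)) < 1e-9    # p(t) lies on the zero locus of F
    assert abs(y - f(x)) < 1e-9   # and on the graph of f
```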
Unfortunately, the set of all curves defined by one method need not be the same as those defined
by another.
Proposition 3.11

Every curve that can be expressed as the graph of a function f : R → R may also be written as the zero locus of a function F : R² → R and parametrically as the image of p : R → R².
Proof. Fix some f : R → R with graph Γ(f) = {(x, f(x)) : x ∈ R}. Define the function F(x, y) = y − f(x) and notice that

F⁻¹(0) = {(x, y) : y = f(x)} = Γ(f).
In the parametric case, define the parametric function p : R → R² by t ↦ (t, f(t)). Once again we have

p(R) = {(t, f(t)) : t ∈ R} = Γ(f),
as required.
The converse of this proposition is not true. For example, the circle is impossible to write
as the graph of a function, since it fails the vertical line test. However, the circle is the zero
locus of F(x, y) = x² + y² − 1, or the image of the function p : [0, 2π) → R² given by p(t) = (cos(t), sin(t)). Since graphs cannot describe even simple shapes like a circle, they are rarely used
to define manifolds.
Now we are more interested in assessing the smoothness properties of a curve C. Our time
studying calculus has told us that if the function f : R → R is C¹ then its graph defines a curve
which is smooth. However, since not all curves of interest can be written as the graphs of functions,
this will fail to be a good definition. Instead, we will just require that the curve locally look like
the graph of a smooth function:
Definition 3.12
A connected set C ⊆ R² is said to be a smooth curve if every point a ∈ C has a neighbourhood N on which C ∩ N is the graph of a C¹ function. If C is not connected, then we shall say that C is smooth if each of its connected components is a smooth curve.
Unfortunately, if the curve is defined as a zero locus or parametrically, we will not be able to read off the smoothness of the curve just by looking at the defining function. To see what kind of things can go wrong, consider the function F(x, y) = y³ − x² (Figure 27). This is certainly a C¹ function (and is in fact infinitely differentiable), but the zero locus it defines is the curve y³ = x². The student can easily check that this curve is not differentiable at x = 0, and so cannot be written as the graph of a C¹ function.
Thus the best we can hope for is a local statement. Our work with the Implicit Function Theorem tells us that ∇F ≠ 0 on F⁻¹(0) will guarantee that our curve locally looks like the graph of a C¹ function, but what is the condition we should impose on the parametric definition? The following is a
first step in the right direction:
Theorem 3.13

2. Let p : (a, b) → R² be a C¹ function and let S = p((a, b)). If t0 ∈ (a, b) satisfies p′(t0) ≠ 0, then there is an open subinterval I ⊆ (a, b) such that p(I) is the graph of a C¹-curve.

Proof of (2): The image of p is p((a, b)) = {(p1(t), p2(t)) : t ∈ (a, b)}. Morally, we would like to change variables by setting x = p1(t), inverting to write t = p1⁻¹(x), and substituting to write C as C = {(x, p2(p1⁻¹(x)))}. However, there is no reason to suspect that p1 should be invertible.
The solution to this is effectively given by the Implicit Function Theorem, which in a sense says that we can locally invert.

More rigorously, since p′(t0) = (p1′(t0), p2′(t0)) ≠ 0, one of the pi′(t0) ≠ 0. Without loss of generality, assume that p1′(t0) ≠ 0. Define the C¹ function F(x, t) = x − p1(t) with x0 = p1(t0), so that F(x0, t0) = 0 and ∂tF(x0, t0) = −p1′(t0) ≠ 0. By the Implicit Function Theorem, we may solve for t in terms of x; that is, there exists a C¹ function g such that t = g(x) in a neighbourhood of (x0, t0). Thus, near p(t0),

p(t) = (p1(t), p2(t)) = (x, p2(g(x))).

Since p2 and g are both C¹, so too is their composition p2 ∘ g, and this shows that the image of p(t) is the graph of the C¹-curve p2 ∘ g, as required.
Example 3.14
Solution. Differentiating, p′(t) = (cos(t) − t sin(t), sin(t) + t cos(t)), and it is not clear whether or not this function is ever zero. Instead, let's try to rewrite this curve as the zero locus of a function F : R² → R. Set x = t cos(t) and y = t sin(t) and notice that x² + y² = t². We can isolate t by recognizing that t = arctan(y/x), and so the curve as defined is the same thing as the zero locus of the function F(x, y) = x² + y² − arctan²(y/x). Its gradient is given by

∇F(x, y) = ( 2x + 2y arctan(y/x)/(x² + y²),  2y − 2x arctan(y/x)/(x² + y²) ).

The only time that this can be zero is when (x, y) = (0, 0); however, the restriction t ∈ (0, 2π) makes this impossible.
An important point to note is that part (2) of the theorem indicates there is an open interval I ⊆ (a, b) on which the function is the graph of a C¹-curve, but this does not mean that there is an open neighbourhood in R² on which this is a smooth curve. What could possibly go wrong? Well, the map could fail to be injective:
Example 3.15 Under Definition 3.12 we know that the lemniscate cannot be a smooth curve, since at the point of overlap there is no neighbourhood on which the function looks like the graph of a smooth curve. In parametric equations, one has (Figure 28)

p : R → R²,   t ↦ (1/(1 + sin²(t))) (cos(t), sin(t) cos(t)).    (3.3)

Notice that

p′(t) = (1/(1 + sin²(t))²) ( −sin(t)[2 + cos²(t)],  cos(2t)[1 + sin²(t)] − sin(t) cos(t) sin(2t) ).

This is never zero, since the first component vanishes only at t = nπ, whereas the second component is identically 1 at nπ. But the function is certainly not injective, since it is periodic. In fact, even restricting our attention to (0, 2π) we see that p(π/2) = p(3π/2) = (0, 0). Thus there is no neighbourhood of (0, 0) where p((0, 2π)) looks like the graph of a C¹ function. Even worse, there are two different values of the derivative at (0, 0) depending on whether we take t = π/2 or 3π/2:

p′(π/2) = −½ (1, 1),   p′(3π/2) = ½ (1, −1).
Figure 28: The lemniscate drawn with the parametric equation given by (3.3). The thick blue line is the set p((π/4, 3π/4)), which is the graph of a C¹-function. However, the whole curve fails to be smooth.
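The double pass through the origin, and the two distinct tangent directions there, can be confirmed numerically from (3.3). A sketch using central differences (helper names p and dp are ours):

```python
import math

def p(t):
    # lemniscate parametrization, equation (3.3)
    d = 1 + math.sin(t)**2
    return (math.cos(t) / d, math.sin(t) * math.cos(t) / d)

def dp(t, h=1e-6):
    # central-difference approximation to p'(t)
    (x1, y1), (x2, y2) = p(t + h), p(t - h)
    return ((x1 - x2) / (2 * h), (y1 - y2) / (2 * h))

a, b = p(math.pi / 2), p(3 * math.pi / 2)
assert max(abs(a[0]), abs(a[1]), abs(b[0]), abs(b[1])) < 1e-12  # both hit (0, 0)

va, vb = dp(math.pi / 2), dp(3 * math.pi / 2)
# the two passes through the origin have different tangent directions
assert abs(va[0] - vb[0]) > 0.5 or abs(va[1] - vb[1]) > 0.5
```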
So what exactly did Theorem 3.13 tell us? It told us that since p′(π/2) ≠ 0, there is a neighbourhood around π/2 whose image looks like the graph of a C¹-function. However, this did not take into consideration the more global nature of the curve. The way to remedy this situation is
as follows:
Definition 3.16

If I ⊆ R is an interval, a C¹ map γ : I → R² is said to be regular if γ′(t) ≠ 0 for all t ∈ I, and simple if γ is injective.

Hence if γ is regular, Theorem 3.13 implies that there is a neighbourhood of each point whose image looks like the graph of a C¹-function. Simplicity guarantees that no funny overlaps can happen, and this is what is needed for the curve to be smooth.
Example 3.17
Solution. Differentiating, we get p′(t) = (3t², 2e^{2t}), and since the second component is never zero, p is certainly regular. On the other hand, both component functions t³ and e^{2t} are injective, implying that p is also injective, so p is simple. We conclude that the image of p is smooth.
Summary: There are three ways to define curves: as the graph of a function R → R, as the zero locus of a function R² → R, or as the image of a parameterization R → R². The graph of a C¹ function is automatically smooth; a zero locus F⁻¹(0) is smooth provided ∇F ≠ 0 on F⁻¹(0); and the image of a parameterization is smooth provided the parameterization is regular and simple.

Curves in Rⁿ: Generalizing our discussion so far, a curve in Rⁿ may be described in one of three ways: as the graph of a function R → R^{n−1}, as the zero locus of a function F : Rⁿ → R^{n−1}, or as the image of a parametric function p : R → Rⁿ.
Dimension of the Tangent Space: It was mentioned previously that the idea is to examine the dimension of the tangent space, and see whether or not it changes as we move along our curve. Notice that when p : R → Rⁿ, the vector p′(t0) is tangent to the curve. Hence so long as p′(t0) ≠ 0, the tangent space is always one dimensional. Conversely, if the curve is given by the zero locus of F : Rⁿ → R^{n−1}, then there are (n − 1) vectors {∇Fi}. If these are linearly independent, they span an (n − 1)-dimensional subspace of Rⁿ. The orthogonal complement of this subspace is the tangent space, which will again be one dimensional.
3.2.2 Surfaces in R³

We have looked at 1-dimensional subsets of R² and how to generalize them to Rⁿ. Now we increase the dimension of the space itself. The simplest case will be to look at 2-dimensional subsets (surfaces) of R³, after which we will have a sufficient idea of the general procedure to be able to discuss k-dimensional subsets of Rⁿ.
Much akin to our treatment of curves, there are three ways to discuss surfaces: as the graph of a function R² → R, as the zero locus of a function R³ → R, or as the image of a parametric function R² → R³.
Example 3.18 Fix r, R > 0 and consider the parameterization g : [0, 2π) × [0, 2π) → R³ given by

g(θ, φ) = ((R + r cos(θ)) cos(φ), (R + r cos(θ)) sin(φ), r sin(θ)),

or equivalently, the zero locus of the function

F(x, y, z) = (√(x² + y²) − R)² + z² − r².

These define a torus, where r is the radius of the sweeping circle, and R is the distance from the origin to the center of that circle.
Definition 3.19

A smooth surface in R³ is a connected subset S ⊆ R³ such that for every p ∈ S there exists a neighbourhood N of p such that S ∩ N is the graph of a C¹ function f : R² → R.

The Implicit Function Theorem again tells us that F⁻¹(0) will be a smooth surface so long as ∇F(x) ≠ 0 for all x ∈ F⁻¹(0), but the case of the parametric definition is slightly more subtle. Consider a linear parametric function

p(s, t) = us + vt + w.
If u and v are linearly dependent, the image of p will be a line, while if they are linearly independent the image will define a plane. Since we are only interested in bona fide surfaces, we need to preclude the possibility of a line. When p(s, t) is not just a linear function, the usual generalization tells us that we need the tangent vectors to be linearly independent; that is, ∂p/∂s and ∂p/∂t must be linearly independent. This can equivalently be phrased as saying that the matrix

[ ∂p/∂s   ∂p/∂t ]

must be full-rank.
Theorem 3.20
The proof of this theorem is almost identical to that of Theorem 3.13 and is left as an exercise for the student. It should be evident that a connected level set of a C¹ map F : R³ → R with nowhere-vanishing gradient defines a smooth surface. For the parametric definition, we once again require both regularity of the surface (linearly independent tangents) and simplicity (global injectivity).
Example 3.21
Find a zero-locus description of the surface, and find (using both the parametric and zero
locus pictures) the points where the surface is singular.
and this is certainly never zero. This implies that the surface does not have any singularities.
Surfaces in General: More generally, a surface in Rⁿ may be defined by the zero locus of a function F : Rⁿ → R^{n−2}, or the image of a parametric function p : R² → Rⁿ. To see what the general conditions for smoothness should be, we again think about the tangent space. For the zero-locus picture, there are (n − 2) elements in the set {∇Fi}. If they are linearly independent, then they span an (n − 2)-dimensional subspace of Rⁿ, whose orthogonal complement is the tangent space of the surface. In the parametric picture, ∂1p and ∂2p form a basis for the tangent space, so for this to be two dimensional we require that they are everywhere linearly independent.
We now discuss (one last time!) how to form k-dimensional subsets of Rⁿ. There are two methods we will consider:
If our space is M = F⁻¹(0), the condition which guarantees that the defined object is a smooth k-manifold is that

rank DF(x) = n − k   for all x ∈ F⁻¹(0),

while if M = p(U) for U ⊆ R^k, then we must have that p is injective on U and

rank [∂1p(t) ··· ∂kp(t)] = k   for all t ∈ U.

In fact, rather than remembering which dimension corresponds to which, it is sufficient to state that either DF(x) or Dp(t) must have maximal rank at every point on the surface. Rather than rehash our tangent space argument in this case, the student should try to convince themselves that rank DF(x0) is the dimension of the normal plane at x0, while rank [∂ip(t0)] is the dimension of the tangent plane at t0.
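The maximal-rank test is easy to run numerically. A sketch using the torus parameterization of Example 3.18 with hypothetical radii R = 2, r = 1, estimating the tangent vectors by central differences:

```python
import numpy as np

R, r = 2.0, 1.0   # hypothetical torus radii with R > r

def p(theta, phi):
    return np.array([(R + r * np.cos(theta)) * np.cos(phi),
                     (R + r * np.cos(theta)) * np.sin(phi),
                     r * np.sin(theta)])

def tangent_matrix(theta, phi, h=1e-6):
    # columns approximate the tangent vectors dp/dtheta and dp/dphi
    c1 = (p(theta + h, phi) - p(theta - h, phi)) / (2 * h)
    c2 = (p(theta, phi + h) - p(theta, phi - h)) / (2 * h)
    return np.column_stack([c1, c2])

J = tangent_matrix(0.7, 1.3)
assert J.shape == (3, 2)
assert np.linalg.matrix_rank(J, tol=1e-4) == 2   # maximal rank: a smooth point
```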
4 Integration
Having effectively completed our study of differential calculus, we now move on to integral calculus.
Students often find integral calculus more difficult than differential calculus, typically because
computations are not nearly as straightforward as the recipe book style offered by differentiation.
Nonetheless, it turns out that integration is actually a far more sound theory: it is easier to make
rigorous in general contexts.
We will begin with a review of integration on the line (I say "review" because it will almost certainly be new material), before moving on to the general theory of integration in several variables.
4.1 Integration on R
Given a sufficiently nice function f : R R, the idea of integration on the interval [a, b] is to
estimate the signed⁵ area between the graph of the function and the x-axis. The heuristic idea of
how to proceed is to divide [a, b] into subintervals and approximate the height of the function by
rectangles. We then take a limit as the length of the subintervals goes to zero, and if we get a
well-defined number, we call that the integral.
Unfortunately, there is no canonical choice either for how to divide [a, b] or for how high to make the rectangles. Typical choices for height often include left/right endpoints, or inf/sup values of the function on each subinterval, but of course these are not the only choices.
Aside: It turns out that Riemann integration, or integrating by partitions of the domain, is an
inferior choice as there are many functions which are not integrable. A much more prudent choice
is to actually break up the range of the function and integrate that way, in a manner known as
Lebesgue integration. Unfortunately, Lebesgue integration is beyond the scope of the course.
that is, the length of P is the length of the longest subinterval determined by consecutive points of P: if P = {a = x0 < x1 < ··· < xn = b}, then ℓ(P) = max_{1≤i≤n} (xi − xi−1).

One should think of partitions as a way of dividing the interval [a, b] into subintervals. For example, on [0, 1] we think of the partition P = {0 < 1/3 < 2/3 < 1} as breaking [0, 1] into [0, 1/3] ∪ [1/3, 2/3] ∪ [2/3, 1]. If P_{[a,b]} is the set of all finite partitions of [a, b], then ℓ : P_{[a,b]} → R⁺ gives us a worst-case scenario for the length of the subintervals, in much the same way as the sup-norm.
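Computing ℓ(P) is a one-liner; a sketch (the helper name mesh is ours):

```python
def mesh(P):
    """Length (mesh) l(P) of a partition P = [x0 < x1 < ... < xn]."""
    return max(b - a for a, b in zip(P, P[1:]))

P = [0, 1/3, 2/3, 1]
assert abs(mesh(P) - 1/3) < 1e-12      # every subinterval has length 1/3
assert mesh([0, 0.1, 0.5, 1]) == 0.5   # the worst (widest) subinterval
```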
⁵ Signed area simply means that area above the x-axis is counted as positive, while area below the x-axis is counted as negative.
The idea is that when we do integration, we are going to want to take partitions whose subintervals get smaller, corresponding to letting the width of our approximating rectangles shrink. The number ℓ(P) then describes the widest subinterval, which in a sense is our worst rectangle.
Definition 4.2
If P and Q are two partitions of [a, b], then Q is a refinement of P if P Q.
Note that two partitions P and Q cannot in general be compared, since one need not be a subset of the other. However, it is not too hard to see that any two partitions in P_{[a,b]} admit a common refinement: given P, Q ∈ P_{[a,b]}, define R = P ∪ Q, so that P ⊆ R and Q ⊆ R.
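The common-refinement construction can be sketched directly (the helper name refine is ours, with partitions represented as sorted lists of points):

```python
def refine(P, Q):
    """Common refinement R = P union Q of two partitions of the same interval."""
    return sorted(set(P) | set(Q))

P = [0, 0.5, 1]
Q = [0, 0.25, 0.75, 1]
R = refine(P, Q)
assert set(P) <= set(R) and set(Q) <= set(R)   # R refines both P and Q
assert R == [0, 0.25, 0.5, 0.75, 1]
```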
Definition 4.4

Given a function f : [a, b] → R, a Riemann sum of f with respect to the partition P = {x0 < x1 < ··· < xn−1 < xn} is any sum of the form

S(f, P) = Σ_{i=1}^{n} f(ti)(xi − xi−1),   ti ∈ [xi−1, xi].
Note that while the Riemann sum S(f, P) certainly depends on how we choose the sampling ti, we will often choose to ignore this fact. Some typical choices of Riemann sum that the student has likely seen amount to particular choices of the ti. In the first case, we have the left- and right-endpoint Riemann sums

L(f, P) = Σ_{i=1}^{n} f(xi−1)(xi − xi−1),   R(f, P) = Σ_{i=1}^{n} f(xi)(xi − xi−1).
Of far greater use are the lower and upper Riemann sums, defined as follows. Fix a partition P ∈ P_{[a,b]} and a bounded f : [a, b] → R. Define

mi = inf {f(x) : x ∈ [xi−1, xi]},   Mi = sup {f(x) : x ∈ [xi−1, xi]},

so that mi is the smallest value that f takes on [xi−1, xi] while Mi is the largest. Now set

U(f, P) = Σ_{i=1}^{n} Mi(xi − xi−1),   u(f, P) = Σ_{i=1}^{n} mi(xi − xi−1).
The idea of the integral is that regardless of what partition we choose or how we choose to
sample the partition, we should always arrive at the same answer. This leads us to the formal
definition of Riemann integrability.
Definition 4.5

We say that a function f : [a, b] → R is Riemann integrable on [a, b] with integral I if for every ε > 0 there exists a δ > 0 such that whenever P ∈ P_{[a,b]} satisfies ℓ(P) < δ, then |S(f, P) − I| < ε.

The element I is often denoted I = ∫_a^b f(x) dx.
We know that the student abhors the idea of ε-δ proofs, so it would not surprise us if headaches currently abound. Let's take a moment and read into what the definition of integrability really means: roughly speaking, a function is Riemann integrable with integral I if we can approximate I arbitrarily well by taking a sufficiently fine partition P.
There are many different ways of defining Riemann integrability depending on how one chooses
to set up the problem. We give here a statement of some equivalent definitions:
Theorem 4.6

1. f is Riemann integrable,

2. sup_P u(f, P) = inf_P U(f, P), the supremum and infimum being taken over all P ∈ P_{[a,b]},

3. For every ε > 0 there exists a partition P ∈ P_{[a,b]} such that U(f, P) − u(f, P) < ε,

4. For every ε > 0 there exists a δ > 0 such that whenever P, Q ∈ P_{[a,b]} satisfy ℓ(P) < δ and ℓ(Q) < δ, then |S(f, P) − S(f, Q)| < ε.
Students coming from Math 137 will recognize definition (2) as the statement that the lower
and upper integrals are equal. Indeed, the supremum over the lower Riemann sums is the lower
integral, and vice-versa for the upper integrals.
Each of these definitions offers its own advantage. For example, (1) and (2) are useful for theoretical reasons but are highly intractable for determining which functions are actually integrable.
On the other hand, (3) and (4) are exceptionally useful as they do not require one to actually know
the integral. In particular, (3) is useful because the upper and lower Riemann sums are nicely
behaved, while (4) is useful because it offers the flexibility to choose samplings.
Example 4.7
Solution. If c = 0 then there is nothing to do. Let us use definition (3) to proceed, and assume without loss of generality that c > 0. The advantage of using definition (3) is that we get to choose the partition, which gives us a great deal of power. Let n be any positive integer such that c(b − a)²/n < ε (more on how to choose this later). Since our function is increasing, minima will occur at left endpoints, and maxima will occur at right endpoints. Choose the uniform partition of [a, b]
into n subintervals, P = {a = x0 < x1 < ··· < xn = b} where xi = a + (b − a)i/n, so that

u(f, P) = Σ_{k=0}^{n−1} f(xk)(xk+1 − xk) = (c(b − a)/n) Σ_{k=0}^{n−1} xk,

U(f, P) = Σ_{k=0}^{n−1} f(xk+1)(xk+1 − xk) = (c(b − a)/n) Σ_{k=0}^{n−1} xk+1.

Taking the difference, the sum telescopes:

U(f, P) − u(f, P) = (c(b − a)/n) Σ_{k=0}^{n−1} (xk+1 − xk) = (c(b − a)/n)(b − a) = c(b − a)²/n < ε.
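The telescoping identity U(f, P) − u(f, P) = c(b − a)²/n can be confirmed numerically; a sketch (the helper name upper_minus_lower is ours):

```python
def upper_minus_lower(c, a, b, n):
    """U(f,P) - u(f,P) for f(x) = c x on the uniform n-piece partition of [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    u = sum(c * x0 * (x1 - x0) for x0, x1 in zip(xs, xs[1:]))  # left endpoints
    U = sum(c * x1 * (x1 - x0) for x0, x1 in zip(xs, xs[1:]))  # right endpoints
    return U - u

c, a, b = 2.0, 1.0, 3.0
for n in [10, 100, 1000]:
    # telescoping gives exactly c (b - a)^2 / n
    assert abs(upper_minus_lower(c, a, b, n) - c * (b - a)**2 / n) < 1e-9
```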
Figure 30: One can visually see why the difference between U(f, P) and u(f, P) results in a telescoping sum. For example, the red rectangle on [x1, x2] has the same area as the blue rectangle on [x2, x3], so they cancel in the difference.
Example 4.8
Solution. Let P = {0 = x0 < x1 < ··· < xn = 1} be an arbitrary partition of [0, 1], and recall that Q is dense in [0, 1] while the irrationals R \ Q are dense in [0, 1]. Hence on each subinterval [xi−1, xi] we have

mi = inf {f(x) : x ∈ [xi−1, xi]} = 0,   Mi = sup {f(x) : x ∈ [xi−1, xi]} = 1,

so in particular

U(f, P) = Σ_{i=1}^{n} Mi(xi − xi−1) = Σ_{i=1}^{n} (xi − xi−1) = xn − x0 = 1,
u(f, P) = Σ_{i=1}^{n} mi(xi − xi−1) = 0,

so that U(f, P) − u(f, P) = 1. Since this holds for arbitrary partitions, any ε < 1 will fail the definition of integrability, so the indicator function of Q is not integrable.
Theorem 4.9
5. Monotonicity of the Integral: If f, g are integrable on [a, b] and f(x) ≤ g(x) for all x ∈ [a, b], then

∫_a^b f(x) dx ≤ ∫_a^b g(x) dx.
These are standard and fairly fundamental results. We will not go into the proofs at this time, but encourage the student to give them a try.
Of course, we also have the following important theorem which guarantees that integral calculus
is actually computable:
Theorem 4.10: The Fundamental Theorem of Calculus
1. If f is integrable on [a, b], for x ∈ [a, b] define F(x) = ∫_a^x f(t) dt. The function F is continuous on [a, b]; moreover, F′(x) exists and equals f(x) at every point x at which f is continuous.
The Fundamental Theorem says that, up to functions being "almost the same" and additive constants, the processes of integration and differentiation are mutually inverting. The proof is a standard exercise and so we omit it.
Theorem 4.6 gave multiple equivalent definitions of integrability, each with its own strengths depending on context. Of great use were parts (3) and (4), which gave conditions for integrability without needing to know the limiting integral. Unfortunately, these criteria fail to really expound upon which of our everyday functions are integrable.

There are a great many functions, absent any regularity conditions such as continuity or differentiability, which prove to be integrable. Example 4.8 shows that there are also functions which fail to be integrable. We will develop several sufficient conditions for integrability, one of which looks similar to Bolzano–Weierstrass and one of which amounts to being "almost continuous," which is certainly the case with most functions we have seen and will see.
Theorem 4.11
If $f : [a,b] \to \mathbb{R}$ is bounded and monotone, then $f$ is integrable on $[a,b]$.
Proof. The idea of the proof is that the upper and lower Riemann sums are very easy to write down for monotone functions, and the fact that $f$ is additionally bounded means that we can make the difference between the upper and lower Riemann sums arbitrarily small (which is one of our integrability conditions). In fact, the proof is effectively identical to the one given in Example 4.7 (see Figure 30).
More formally, assume without loss of generality that $f$ is increasing on $[a,b]$ (just replace $f$ with $-f$ if it is decreasing and apply Theorem 4.9 (3)). For any partition $P = \{a = x_0 < x_1 < \cdots < x_n = b\}$, the lower and upper Riemann sums are determined by the left- and right-endpoints of each subinterval:
\[
u(f,P) = \sum_{i=1}^n f(x_{i-1})(x_i - x_{i-1}), \qquad U(f,P) = \sum_{i=1}^n f(x_i)(x_i - x_{i-1}).
\]
Let $\epsilon > 0$ be given and choose $\delta < \epsilon[f(b) - f(a)]^{-1}$. Let $P$ be any partition of $[a,b]$ such that $\ell(P) < \delta$, so that
\begin{align*}
U(f,P) - u(f,P) &= \sum_{i=1}^n [f(x_i) - f(x_{i-1})](x_i - x_{i-1}) \\
&\leq \delta \sum_{i=1}^n [f(x_i) - f(x_{i-1})] = \delta(f(b) - f(a)) \\
&< \frac{\epsilon}{f(b) - f(a)}\,(f(b) - f(a)) = \epsilon.
\end{align*}
Since $\epsilon$ was arbitrary, Theorem 4.6 part (3) implies that $f$ is integrable.
Note: We could have used uniform partitions here, which would have removed the need to take $\delta < \epsilon[f(b) - f(a)]^{-1}$. Try repeating the proof with uniform partitions to test whether you actually understand it.
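To watch the telescoping in the proof numerically, here is a small sketch (with $f = \arctan$ as an arbitrary bounded increasing function): on a uniform $n$-piece partition the difference $U(f,P) - u(f,P)$ collapses to exactly $(f(b)-f(a))(b-a)/n$.

```python
import math

def gap(f, a, b, n):
    """U(f,P) - u(f,P) for an increasing f on the uniform n-piece partition.
    Upper sums use right endpoints, lower sums left endpoints."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    U = sum(f(xs[i]) * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    u = sum(f(xs[i - 1]) * (xs[i] - xs[i - 1]) for i in range(1, n + 1))
    return U - u

f = math.atan                    # bounded and increasing on [0, 4]
for n in (10, 100, 1000):
    print(n, gap(f, 0.0, 4.0, n))   # telescopes to (f(4) - f(0)) * 4 / n
```
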
Theorem 4.12
If $f : [a,b] \to \mathbb{R}$ is continuous, then $f$ is integrable on $[a,b]$.
It is tempting to use Theorem 4.11, since $f$ is certainly bounded and we should be able to restrict $f$ to intervals on which it is monotone. Applying Theorem 4.9 part (1), we would then be done. However, this does not work, since it can be shown that there are continuous functions on $[a,b]$ which are not monotone on any interval! (Think about the function $\sin(1/x)$ and convince yourself that it is not monotone on any interval around $0$. Such functions are similar.) Luckily, we can prove the theorem directly:
Proof. The idea is as follows: continuous functions on compact sets are necessarily uniformly continuous. In effect, this means that we can control how quickly our function varies by choosing neighbourhoods of identical but sufficiently small size. By choosing a partition whose length is smaller than these neighbourhoods, we can control the distance between the maximum and minimum of the function on each subinterval, and force the upper and lower Riemann sums to converge.
More formally: let $\epsilon > 0$ be given. Since any continuous function on a compact set is uniformly continuous, we can find a $\delta > 0$ such that whenever $|x - y| < \delta$ then $|f(x) - f(y)| < \frac{\epsilon}{b-a}$. Now let $P = \{x_0 < \cdots < x_n\}$ be a partition such that $\ell(P) < \delta$. The restriction of $f$ to each subinterval $[x_{i-1}, x_i]$ is still continuous, and so by the Extreme Value Theorem $f$ must attain its maximum and minimum on $[x_{i-1}, x_i]$; say at points $M^*$ and $m^*$, so that $M_i = f(M^*)$ and $m_i = f(m^*)$. Since $M^*, m^* \in [x_{i-1}, x_i]$ we have $|M^* - m^*| \leq |x_i - x_{i-1}| < \delta$, so that
\[
M_i - m_i = |M_i - m_i| = |f(M^*) - f(m^*)| < \frac{\epsilon}{b-a}.
\]
Hence the difference in Riemann sums becomes
\[
U(f,P) - u(f,P) = \sum_{i=1}^n (M_i - m_i)(x_i - x_{i-1}) \leq \frac{\epsilon}{b-a} \sum_{i=1}^n (x_i - x_{i-1}) = \frac{\epsilon}{b-a}(b-a) = \epsilon.
\]
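The same squeezing can be watched numerically. The sketch below estimates $U(f,P) - u(f,P)$ on uniform partitions for a continuous but badly oscillating function; the sup and inf on each subinterval are only estimated by sampling, so this is an illustration rather than a proof.

```python
import math

def riemann_gap(f, a, b, n, samples=50):
    """Estimate U(f,P) - u(f,P) on a uniform n-piece partition, with the
    sup/inf on each subinterval approximated by dense sampling."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        vals = [f(a + i * h + j * h / samples) for j in range(samples + 1)]
        total += (max(vals) - min(vals)) * h
    return total

f = lambda x: math.sin(1.0 / (x + 0.1))   # continuous on [0, 1], wildly oscillating
print(riemann_gap(f, 0.0, 1.0, 10))
print(riemann_gap(f, 0.0, 1.0, 1000))     # far smaller: the sums squeeze together
```
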
With any luck, your previous courses have taught you that integration over a single point yields an integral of $0$, regardless of the function. In essence, this occurs because a single point has no width, and so any Riemann sum over it is zero. We should be able to readily extend this to any finite number of points, so that an integral over a finite set is still zero, but what happens when we want to talk about infinitely many points? What does it mean to have zero width in this case?
Figure 31: Since our function is uniformly continuous, whenever $|x - y| < \delta$ then $|f(x) - f(y)| < \frac{\epsilon}{b-a}$. By choosing a partition for which the maximal length of a subinterval is less than $\delta$, we can ensure that the difference between the upper and lower sums on each region is bounded.
Definition 4.13
If $I = [a,b]$, let the length of $I$ be $\ell(I) = b - a$. If $\mathcal{P}(\mathbb{R})$ is the power set of $\mathbb{R}$, we define the Jordan outer measure* as the function $m : \mathcal{P}(\mathbb{R}) \to \mathbb{R}_{\geq 0}$ given by
\[
m(S) = \inf \left\{ \sum_{k=1}^n \ell(I_k) : \text{each } I_k \text{ is an interval and } S \subseteq \bigcup_{k=1}^n I_k \right\}.
\]
If $m(\partial S)$ exists and $m(\partial S) = 0$, we say that $S$ is Jordan measurable. If $m(S) = 0$ we say that $S$ has Jordan measure zero.

*There is a much more useful notion called the Lebesgue measure, which is essentially the same as the Jordan measure except that we no longer consider a finite covering by intervals, and instead take a countable collection of intervals.
Most well-behaved sets that we can think of are Jordan measurable. An example of a set which is not Jordan measurable is $\mathbb{Q} \cap [0,1]$. Notice that $m(\partial(\mathbb{Q} \cap [0,1])) = m([0,1]) = 1$, so that its boundary does not have zero measure.
Example 4.14
Let $S$ be a set containing a single point. Show that $S$ has zero Jordan measure.

Solution. Let $S = \{x\}$ so that the point has a name. It suffices to show that for every $\epsilon > 0$, $m(S) \leq \epsilon$ (why?). Notice that $I_\epsilon = \left[x - \frac{\epsilon}{2}, x + \frac{\epsilon}{2}\right]$ covers $S$, and $\ell(I_\epsilon) = \epsilon$. Since $m(S)$ is the infimum over all such covers, we have $m(S) \leq \ell(I_\epsilon) = \epsilon$ as required.
Since integration does not seem to recognize individual points, we suspect that changing a function at a single point should not change the value of its integral.
It seems likely that $f$ and $g$ have the same integral on $[0,2]$. To show this, we apply a tried-and-tested analysis technique, which essentially involves ignoring the point where the functions differ and taking a limit. More rigorously, for sufficiently small $\epsilon > 0$, let $U_\epsilon = (1-\epsilon, 1+\epsilon)$. On $V_\epsilon = [0,2] \setminus U_\epsilon = [0, 1-\epsilon] \cup [1+\epsilon, 2]$ we have that $f(x) = g(x)$, and these are integrable since they are continuous on $V_\epsilon$. Furthermore, by Theorem 4.9 we have
\[
\int_0^2 g(x)\,dx = \int_{V_\epsilon} g(x)\,dx + \int_{U_\epsilon} g(x)\,dx = \int_{V_\epsilon} f(x)\,dx + \int_{U_\epsilon} g(x)\,dx.
\]
We want to show that in the limit $\epsilon \to 0$ we get $\int_{U_\epsilon} g(x)\,dx \to 0$, so that $\int_0^2 f(x)\,dx = \int_0^2 g(x)\,dx$. While the approximation is rather terrible, notice that $g(x) \geq 0$ for all $x \in U_\epsilon$ and
\[
\max_{x \in U_\epsilon} g(x) = 10^6,
\]
so that $0 \leq \int_{U_\epsilon} g(x)\,dx \leq 2\epsilon \cdot 10^6 \to 0$ as $\epsilon \to 0$.
Theorem 4.16
If $S \subseteq [a,b]$ is a set of Jordan measure zero, and $f : [a,b] \to \mathbb{R}$ is bounded and continuous everywhere except possibly on $S$, then $f$ is integrable.
Proof. Let $M$ and $m$ be the supremum and infimum of $f$ on $[a,b]$, and let $\epsilon > 0$ be given. Since $S$ has Jordan measure zero, we can find a finite collection of intervals $(I_j)_{j=1}^k$ such that $S \subseteq \bigcup_j I_j \subseteq [a,b]$ and $\sum_j \ell(I_j) < \frac{\epsilon}{2(M-m)}$. Set $W = \bigcup_j I_j$ and $V = [a,b] \setminus W$. Since $f$ is continuous on $V$, it is integrable on $V$ and hence there exists some partition $P$ such that $U(f|_V, P) - u(f|_V, P) < \frac{\epsilon}{2}$. If necessary, refine $P$ so that it contains the endpoints of the intervals $I_j$. Writing the upper and lower Riemann sums over $[a,b]$, since we already know how to bound the $V$ contribution, we need now only look at the $W$ contribution.
Figure 32: The set $W = I_1 \cup I_2$ contains the discontinuities of our function. Since our function is continuous away from $W$, we can make the difference between the upper and lower sums as small as we want, hence we need only bound the function on $W$. The difference in height will always be at worst $M - m$, but we can make the lengths of the intervals $I_1$ and $I_2$ as small as we want, making the $W$ contribution arbitrarily small.
Corollary 4.17
If $f, g$ are integrable on $[a,b]$ and $f = g$ up to a set of Jordan measure zero, then $\int_a^b f(x)\,dx = \int_a^b g(x)\,dx$.
This is an easy corollary, whose proof effectively emulates that of Remark 4.15, so we leave it
as an exercise for the student.
4.2 Integration in Rn
The process of integration in $\mathbb{R}^n$ is effectively identical to that in $\mathbb{R}$, except that now we must use rectangles instead of intervals, rectangles being a natural analog of higher-dimensional intervals. We start by focusing on $\mathbb{R}^2$ to gain familiarity with the concepts before moving to general $\mathbb{R}^n$.
Note: It could be argued that the generalization of a closed interval $[a,b]$ is a closed ball. One can develop the following theory with balls, but taking the area/volume of balls usually involves a nasty factor of $\pi$ hanging around. We want to avoid this, so let us just use rectangles.
By realizing (non-canonically) $\mathbb{R}^2 = \mathbb{R} \times \mathbb{R}$, we can define a rectangle $R$ in $\mathbb{R}^2$ as any set which can be written as $R = [a,b] \times [c,d]$: this truly looks like a rectangle when drawn in the plane. A partition of $R$ may then be given by partitions of $[a,b]$ and $[c,d]$; namely, if $P_x = \{a = x_0 < \cdots < x_n = b\}$ and $P_y = \{c = y_0 < \cdots < y_m = d\}$ are partitions of their respective intervals, then $P = P_x \times P_y$ is a partition of $R$, with subrectangles
\[
R_{ij} = [x_{i-1}, x_i] \times [y_{j-1}, y_j], \qquad i = 1, \ldots, n, \quad j = 1, \ldots, m.
\]
It should be intuitively clear that the area of $R_{ij}$ is given by $A(R_{ij}) = (x_i - x_{i-1})(y_j - y_{j-1})$, in which case a Riemann sum for $f : \mathbb{R}^2 \to \mathbb{R}$ over the partition $P$ is given by
\[
S(f,P) = \sum_{\substack{i = 1, \ldots, n \\ j = 1, \ldots, m}} f(t_{ij}) A(R_{ij}), \qquad t_{ij} \in R_{ij}.
\]
The notion of left- and right-Riemann sums no longer makes sense, but certainly the upper and lower Riemann sums are still well-defined:
\[
U(f,P) = \sum_{i,j} \left[\sup_{x \in R_{ij}} f(x)\right] A(R_{ij}), \qquad u(f,P) = \sum_{i,j} \left[\inf_{x \in R_{ij}} f(x)\right] A(R_{ij}).
\]
The usual definitions of Riemann integrability then carry over directly from Definition 4.5. Restricting ourselves to just one definition for the moment, we will say that $f : R \to \mathbb{R}$ is Riemann integrable if for any $\epsilon > 0$ we can find a partition $P$ such that $U(f,P) - u(f,P) < \epsilon$, and we will write the integral as
\[
\iint_R f \, dA, \qquad \text{or} \qquad \iint_R f(x,y)\,dx\,dy.
\]
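As a concrete sketch of these definitions (assumed helper names; the sup and inf on each subrectangle are only estimated by sampling), one can watch the upper and lower sums squeeze toward the integral for $f(x,y) = x + y$ on $[0,1] \times [0,1]$, whose exact integral is $1$:

```python
def upper_lower(f, px, py, samples=10):
    """Upper and lower Riemann sums of f over the grid partition px x py,
    with sup/inf on each subrectangle estimated by sampling."""
    U = u = 0.0
    for i in range(1, len(px)):
        for j in range(1, len(py)):
            area = (px[i] - px[i - 1]) * (py[j] - py[j - 1])
            vals = [f(px[i - 1] + s * (px[i] - px[i - 1]) / samples,
                      py[j - 1] + t * (py[j] - py[j - 1]) / samples)
                    for s in range(samples + 1) for t in range(samples + 1)]
            U += max(vals) * area
            u += min(vals) * area
    return U, u

f = lambda x, y: x + y
for n in (4, 16, 64):
    grid = [i / n for i in range(n + 1)]
    U, u = upper_lower(f, grid, grid)
    print(n, u, U)    # u < 1 < U, with U - u shrinking like 2/n
```
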
Theorem 4.18
4. Subnormality: If $f$ is integrable on $R$, then $|f|$ is integrable on $R$ and $\left|\iint_R f\,dA\right| \leq \iint_R |f|\,dA$.
For any reasonably nice set, one can think of the Jordan measure as the area. For example, if $B^2 = \{(x,y) \in \mathbb{R}^2 : x^2 + y^2 \leq 1\}$ is the unit disk, then $m(B^2) = \pi$ (though this is extremely tough to show by hand!). Intuitively, the zero-measure sets of $\mathbb{R}^2$ are those which do not have any area, and one would suspect that one-dimensional objects should have no area.
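Although computing $m(B^2)$ by hand is tough, it is easy to watch numerically. The sketch below (an illustration, not one of the notes' methods) counts small grid squares whose centers land in the disk; the total area of those squares approaches $\pi$:

```python
import math

n = 2000                         # grid resolution over [-1, 1] x [-1, 1]
h = 2.0 / n
count = sum(1 for i in range(n) for j in range(n)
            if (-1 + (i + 0.5) * h) ** 2 + (-1 + (j + 0.5) * h) ** 2 <= 1.0)
area = count * h * h
print(area, math.pi)             # area is close to pi
```
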
Example 4.20
Show that the set $S = [0,1] \times \{0\} \subseteq \mathbb{R}^2$ has zero Jordan measure.
each of which has an area of $\frac{1}{k^2}$. The student can check that $S \subseteq \bigcup_{i=0}^{k-1} R_i$, so that the $\{R_i\}$ cover $S$. Moreover, there are exactly $k$ such squares, so their total area is $k \cdot \frac{1}{k^2} = \frac{1}{k}$. Since the Jordan measure is the infimum over all possible covers, we have that $m(S) \leq \frac{1}{k}$. Since we chose $k$ arbitrarily, we can make this bound as small as we want, showing that $m(S) = 0$.
This likely seemed like an unnecessarily difficult way of doing the problem: certainly we could have just placed a rectangle of length $1$ and height $\frac{1}{k}$ around the interval and let the height shrink to zero. The important point here is that as we let $k$ grow, the number of squares increases in proportion to $k$, while the area of each decreases in proportion to $\frac{1}{k^2}$.
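The trade-off is easy to tabulate: with $k$ squares of side $1/k$, the covering count grows linearly while the total covered area still shrinks.

```python
# k squares of side 1/k cover [0,1] x {0}; each has area 1/k^2,
# so the total covered area is k * (1/k)^2 = 1/k -> 0.
for k in (10, 100, 1000):
    print(k, k * (1.0 / k) ** 2)
```
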
Theorem 4.21
If $f : \mathbb{R} \to \mathbb{R}^2$ is of class $C^1$, then for every interval $I \subseteq \mathbb{R}$ we have that $f(I)$ has zero content (that is, Jordan measure zero).
Proof. As mentioned before, the idea of the proof is that the image of an interval under a $C^1$ function has no width, but how do we show this? By thinking of $f(t) = (f_1(t), f_2(t))$ as a curve, its derivative $f'(t) = (f_1'(t), f_2'(t))$ represents the velocity of the curve. If we take the maximum horizontal speed $C = \max |f_1'(t)|$, then by restricting to an interval $[a,b]$, we see that the maximum horizontal distance the function can travel is bounded above by $C(b-a)$; that is, distance = speed $\times$ time. Proceeding similarly in the vertical direction means that we can put $f([a,b])$ into a box whose area is proportional to $C^2(b-a)^2$, and since we have control over how to partition our curve, we can always force this number to be as small as we want.
More formally, let $I$ be a fixed interval and $\epsilon > 0$ be given. Since $f$ is of class $C^1$, $|f'(t)|$ is continuous and hence attains its max on $I$. Let $S = \max_{t \in I} |f'(t)|$ and choose an integer $k$ such that $k > \ell(I)^2 S^2 / \epsilon$. Let $P$ be a uniform partition of $I$ into $k$ subintervals and notice then that
\[
\ell(P) = \frac{\ell(I)}{k} < \frac{1}{S}\sqrt{\frac{\epsilon}{k}}.
\]
Fix a subinterval $[x_{i-1}, x_i]$ and apply the Mean Value Theorem to the component functions to conclude that $|f_1(x_i) - f_1(x_{i-1})| \leq S\,\ell(P) < \sqrt{\epsilon/k}$,
and similarly $|f_2(x_i) - f_2(x_{i-1})| < \sqrt{\epsilon/k}$. Hence $f([x_{i-1}, x_i])$ is contained in a box whose area is at most $\sqrt{\epsilon/k} \cdot \sqrt{\epsilon/k} = \epsilon/k$. Since there are $k$ such subintervals, $f(I)$ can be covered by $k$ rectangles whose total area is at most $k \cdot \frac{\epsilon}{k} = \epsilon$. Since $\epsilon$ was arbitrary, this completes the proof.
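The box-covering argument can also be watched numerically. The sketch below (assumed helper names; box extents are estimated by sampling) covers the unit circle, a $C^1$ curve, with one bounding box per subinterval; the total area shrinks like $1/k$.

```python
import math

def covering_area(f, a, b, k, samples=50):
    """Total area of k bounding boxes covering the curve f([a, b]),
    one box per subinterval, extents estimated by sampling."""
    h = (b - a) / k
    total = 0.0
    for i in range(k):
        ts = [a + i * h + j * h / samples for j in range(samples + 1)]
        xs = [f(t)[0] for t in ts]
        ys = [f(t)[1] for t in ts]
        total += (max(xs) - min(xs)) * (max(ys) - min(ys))
    return total

curve = lambda t: (math.cos(t), math.sin(t))      # a C^1 curve in R^2
for k in (10, 100, 1000):
    print(k, covering_area(curve, 0.0, 2 * math.pi, k))   # shrinks like 1/k
```
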
Figure 33: By looking at the maximum speed that the function attains, one can find the worst-case box into which each piece of the curve (marked by black dots) fits. As we increase the number of subintervals, the number of necessary boxes increases linearly, while the area of each box decreases quadratically.
Definition 4.22
A curve $f : [a,b] \to \mathbb{R}^n$ is said to be piecewise $C^1$ if it is $C^1$ at all but a finite number of points.
Corollary 4.23
If $S \subseteq \mathbb{R}^2$ has boundary defined by a piecewise $C^1$ curve, then $S$ is Jordan measurable.

Proof. The proof is immediate: if $S$ has a boundary defined by a piecewise $C^1$ curve, then its boundary has zero Jordan measure by Theorem 4.21, which is precisely the definition of $S$ being Jordan measurable.
Theorem 4.24
If $R$ is a rectangle and $f : R \to \mathbb{R}$ is bounded with its set of discontinuities of Jordan measure zero, then $f$ is integrable on $R$.
Thus the function $f\chi_S : R \to \mathbb{R}$ is just $f(x)$ on $S$ and is identically $0$ everywhere else. Note that the choice of enveloping rectangle really doesn't affect $f\chi_S$, since we have extended $f$ by zero outside of $S$. We would now like to check that $f\chi_S$ is integrable on $R$, so that it makes sense to write down $\iint_S f\,dA$.
Theorem 4.25
If $S$ is Jordan measurable and $f : S \to \mathbb{R}$ is bounded with its set of discontinuities of Jordan measure zero, then $f\chi_S$ is integrable on any rectangle $R \supseteq S$.
Proof. It is easy to convince ourselves that the discontinuities of the characteristic function $\chi_S$ occur exactly at the boundary $\partial S$. If $S$ is Jordan measurable, then $m(\partial S) = 0$. The discontinuities of $f$ also have Jordan measure zero, hence the set of discontinuities of $f\chi_S$ has zero measure, so this function is integrable.
More rigorously, fix a rectangle $R$ such that $S \subseteq R$. Let $D$ be the set of discontinuities of $f$ and note that the set of discontinuities of $\chi_S$ is given by $\partial S$. It then follows that the set of discontinuities of $f\chi_S$ on $R$ is contained in $D \cup \partial S$. Since the union of two zero-measure sets has zero measure, $f\chi_S$ has zero-measure discontinuities on $R$ and hence is Riemann integrable by Theorem 4.24.
Corollary 4.26
If $S \subseteq \mathbb{R}^2$ is Jordan measurable, then $m(S) = \int_S \chi_S$.
Now we generalize things for (hopefully!) the last time. A rectangle in $\mathbb{R}^n$ is any set of the form
\[
R = [a_1, b_1] \times \cdots \times [a_n, b_n],
\]
and has volume $V(R) = (b_1 - a_1) \cdots (b_n - a_n)$. A partition of $R$ may be specified by $n$ partitions, one decomposing each $[a_i, b_i]$. For $(i_1, \ldots, i_n)$ a collection of positive integers, let $R_{(i_1,\ldots,i_n)}$ be the subrectangle corresponding to the $(i_1, \ldots, i_n)$ element. A tagged Riemann sum over $R$ is any sum of the form
\[
S(f,P) = \sum_{(i_1,\ldots,i_n)} f\left(t_{(i_1,\ldots,i_n)}\right) V\left(R_{(i_1,\ldots,i_n)}\right), \qquad t_{(i_1,\ldots,i_n)} \in R_{(i_1,\ldots,i_n)}.
\]
As usual, one can define the upper $U(f,P)$ and lower $u(f,P)$ Riemann sums using the supremum and infimum, in which case we say that $f : R \subseteq \mathbb{R}^n \to \mathbb{R}$ is integrable precisely when for every $\epsilon > 0$ there exists a partition $P$ such that
\[
U(f,P) - u(f,P) < \epsilon.
\]
To extend the definition of the integral beyond rectangles, we once again introduce the Jordan measure. The Jordan measure of a set $S$ is defined as the infimum of the volumes of all covering collections of rectangles, and $S$ is Jordan measurable if its boundary has measure zero. If $k < n$ then the image of a $C^1$ map $f : \mathbb{R}^k \to \mathbb{R}^n$ has Jordan measure zero. A function $f : S \to \mathbb{R}$ is then integrable if $S$ is Jordan measurable and the set of discontinuities of $f$ on $S$ has Jordan measure zero. We denote the integral of such a function by
\[
\int_S f\,dV = \int \cdots \int_S f(\mathbf{x})\,d^n\mathbf{x} = \int \cdots \int_S f(x_1, \ldots, x_n)\,dx_1 \cdots dx_n.
\]
Let $S \subseteq \mathbb{R}^n$ be a compact, connected, and Jordan measurable set, with continuous functions $f, g : S \to \mathbb{R}$ satisfying $g \geq 0$. Then there exists a point $a \in S$ such that
\[
\int_S f(\mathbf{x}) g(\mathbf{x})\,d^n\mathbf{x} = f(a) \int_S g(\mathbf{x})\,d^n\mathbf{x}.
\]
Proof. Since $S$ is compact and $f$ is continuous on $S$, it attains its max and min on $S$, say $M$ and $m$ respectively. Since $g \geq 0$ we have
\[
m \int_S g(\mathbf{x})\,d^n\mathbf{x} \leq \int_S f(\mathbf{x}) g(\mathbf{x})\,d^n\mathbf{x} \leq M \int_S g(\mathbf{x})\,d^n\mathbf{x},
\]
or equivalently
\[
m \leq \frac{\int_S f(\mathbf{x}) g(\mathbf{x})\,d^n\mathbf{x}}{\int_S g(\mathbf{x})\,d^n\mathbf{x}} \leq M.
\]
Since $f$ is continuous and $S$ is connected, $f$ attains every value in $[m, M]$, and hence the Intermediate Value Theorem implies the middle term is $f(a)$ for some $a \in S$, as required.
The student has likely noticed that this section is filled with theory and zero computation. The reason is that computing integrals in multiple dimensions is an incredibly difficult thing to do: for any partitioning subrectangle, we are looking at the supremum/infimum of our function restricted to that $n$-dimensional rectangle. In a sense, we have to integrate in all $n$ dimensions simultaneously. This is not easy to do, so our next section will introduce a method by which we integrate our function in slices.
4.3 Iterated Integrals
In developing the theory of integration in the plane and higher, it was necessary to consider partitions of rectangles and hence, in essence, to consider the integral of a function with respect to an infinitesimal area $dA$. Of importance is that this area term encapsulates information about every dimension simultaneously, but simultaneity is a computational obstacle. For example, when learning to differentiate a multivariate function, we needed to invest a great deal of energy into simply analyzing the change of the function in a single, specific direction (i.e. the partial derivatives). If we want to know how the function is changing in an arbitrary direction, we have the directional derivative $d_u f = \nabla f \cdot u$, so that the gradient $\nabla f$ somehow represents the simultaneous derivative of $f$ at any point.
Consider now the problem of computing the upper sum U (f, P ) for a function f on a partition
P . For each subrectangle Rij , one would need to determine the supremum of f on Rij . If our
function is C 1 , even this involves solving for critical points on the interior, then using the method
of Lagrange multipliers on the boundary. What a nightmare!
From our single variable calculus days, we know that integration is often more difficult than the formulaic, recipe-following nature of differentiation. The fact that "simultaneous differentiation" required so much work does not bode well for the idea of simultaneous integration. So, as mathematicians, we won't bother trying to figure it out. Instead, we will apply the mathematician's favourite tool: we will reduce simultaneous integration to a problem we have solved before, namely one-dimensional integration.
As always, we start out with a rectangle $R = [a,b] \times [c,d]$ in the plane, partitioned by $P = P_x \times P_y = \{x_0 < \cdots < x_n\} \times \{y_0 < \cdots < y_m\}$. The prototypical Riemann sum which corresponds to this partition is
\[
S(f,P) = \sum_{\substack{i \in \{1,\ldots,n\} \\ j \in \{1,\ldots,m\}}} f(\tilde{x}_i, \tilde{y}_j) A(R_{ij}) = \sum_{i,j} f(\tilde{x}_i, \tilde{y}_j)\,\Delta x_i\,\Delta y_j
\]
where $(\tilde{x}_i, \tilde{y}_j) \in [x_{i-1}, x_i] \times [y_{j-1}, y_j]$, $\Delta x_i = x_i - x_{i-1}$, and $\Delta y_j = y_j - y_{j-1}$. Now if we look at
Now strictly speaking, what we have done here is not kosher, since in particular we had to assume two things:
1. The limit $\ell(P) \to 0$ is equivalent to first taking $\ell(P_x) \to 0$ and then $\ell(P_y) \to 0$, and
2. each slice $f_y(x) = f(x,y)$ is integrable on $[a,b]$, as is the function $g(y) = \int_a^b f(x,y)\,dx$ on $[c,d]$.
If we make these assumptions and add a pinch of rigour (which we will not do here), we get Fubini's Theorem: if $f$ is integrable on $R = [a,b] \times [c,d]$ and the slice functions above are integrable, then
\[
\iint_R f\,dA = \int_c^d \left[\int_a^b f(x,y)\,dx\right] dy.
\]
Of course, the theorem also holds with the roles of $x$ and $y$ reversed.
Example 4.29
Determine the volume under the function $f(x,y) = x e^{x^2 - y}$ on the rectangle $R = [0,1] \times [0,1]$.
Solution. Since $f$ is a continuous function on $R$, it is integrable, and so certainly each of the slices $f_y(x)$ or $f_x(y)$ is integrable as well. We will do the calculation both ways to show that the integral
Figure 35: Fixing a $y_0$, we look at the function $f(x, y_0)$. If this function is integrable for each $y_0$, then the value of $g(y_0)$ is precisely $\int_a^b f(x, y_0)\,dx$, the shaded region. If $g$ is also integrable, then we can compute the integral of $f$ by these slices.
yields the same result. If we integrate first with respect to $x$ and then $y$, we have
\begin{align*}
\int_0^1 \int_0^1 x e^{x^2 - y}\,dx\,dy &= \int_0^1 \left[\frac{1}{2} e^{x^2 - y}\right]_{x=0}^{1} dy \\
&= \frac{1}{2}(e-1) \int_0^1 e^{-y}\,dy \\
&= \frac{1}{2}(e-1)\left[-e^{-y}\right]_0^1 = \frac{1}{2}(e-1)(1 - e^{-1}) \\
&= \cosh(1) - 1.
\end{align*}
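A numerical cross-check of Example 4.29 (a rough midpoint-rule sketch, not the notes' method): the two-dimensional midpoint sum should land near $\cosh(1) - 1 \approx 0.5431$.

```python
import math

def midpoint_2d(f, a, b, c, d, n=400):
    """Midpoint-rule approximation of the integral of f over [a,b] x [c,d]."""
    hx, hy = (b - a) / n, (d - c) / n
    return hx * hy * sum(f(a + (i + 0.5) * hx, c + (j + 0.5) * hy)
                         for i in range(n) for j in range(n))

f = lambda x, y: x * math.exp(x * x - y)
approx = midpoint_2d(f, 0.0, 1.0, 0.0, 1.0)
print(approx, math.cosh(1.0) - 1.0)    # both close to 0.5431
```
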
Of course, the above example was very simple since we could decompose our function as $f(x,y) = f_1(x) f_2(y)$, but the result still holds even when such a decomposition is not possible.
Now rectangles are rather boring objects over which to integrate, so we again look at Jordan measurable sets $S \subseteq \mathbb{R}^2$. In particular, we will suppose that $S$ has its boundary defined by piecewise $C^1$ curves; say
\[
S = \{(x,y) : a \leq x \leq b,\ \varphi(x) \leq y \leq \psi(x)\}.
\]
In this case, our integration becomes
\[
\iint_S f\,dA = \int_a^b \left[\int_{\varphi(x)}^{\psi(x)} f(x,y)\,dy\right] dx.
\]
Oftentimes, the most difficult part of solving an iterated integral problem is determining the bounding functions, though sometimes we are fortunate and they are already prescribed.
Example 4.30
Find the integral of the function $f(x,y) = \dfrac{y}{x^5 + 1}$ on the intersection of
\[
\{y \geq 0\} \cap \{x \leq 1\} \cap \{y \leq x^2\}.
\]

Solution. In any situation involving iterated integrals, it is best to draw a diagram of the region over which we are integrating. In our case, the region may be summarily described as
\[
S = \{(x,y) : 0 \leq x \leq 1,\ 0 \leq y \leq x^2\}.
\]
Note that the region in Example 4.30 could also have been described by
\[
S = \{0 \leq y \leq 1,\ \sqrt{y} \leq x \leq 1\},
\]
Figure 36: The triangular region $S$ bounded by $y = x$, $x = 0$, and $y = 1$.
This probably would not have worked as nicely though, since $\frac{1}{x^5+1}$ is not easy to integrate. This suggests that being able to rewrite our domain is a useful skill: sometimes we are given the boundary, but the problem is not amenable to the given description.
Example 4.31
Determine the integral of the function $f(x,y) = e^{y^2}$ on the region bounded by the lines $y = 1$, $x = 0$ and $y = x$.

Solution. The region is a simple triangle, shown in Figure 36, which can be written as either of the following two sets:
\[
S = \{0 \leq x \leq 1,\ x \leq y \leq 1\} = \{0 \leq y \leq 1,\ 0 \leq x \leq y\}.
\]
Using the first description, we would have to integrate $e^{y^2}$ with respect to $y$ first, but the function $e^{y^2}$ has no elementary antiderivative, and we are stuck. On the other hand, using the second description gives
\begin{align*}
\iint_S f\,dA &= \int_0^1 \int_0^y e^{y^2}\,dx\,dy \\
&= \int_0^1 \left[x e^{y^2}\right]_{x=0}^{x=y} dy = \int_0^1 y e^{y^2}\,dy \\
&= \left[\frac{1}{2} e^{y^2}\right]_{y=0}^{1} = \frac{1}{2}(e-1).
\end{align*}
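Even though the first description defeats us analytically, nothing stops a numerical check that both orders agree (a sketch with assumed helper names): the $dx$-first order has the closed-form inner integral $y e^{y^2}$, while the $dy$-first order is approximated purely numerically; both land near $\frac{1}{2}(e-1) \approx 0.8591$.

```python
import math

# Order 1: S = {0 <= y <= 1, 0 <= x <= y}; the inner dx-integral is y * e^(y^2),
# so only the outer integral needs a midpoint approximation.
n1 = 2000
I1 = (1.0 / n1) * sum((j + 0.5) / n1 * math.exp(((j + 0.5) / n1) ** 2)
                      for j in range(n1))

# Order 2: S = {0 <= x <= 1, x <= y <= 1}; e^(y^2) has no elementary
# antiderivative, so the inner dy-integral is approximated numerically too.
def inner(x, m=400):
    k = (1.0 - x) / m
    return k * sum(math.exp((x + (j + 0.5) * k) ** 2) for j in range(m))

n2 = 400
I2 = (1.0 / n2) * sum(inner((i + 0.5) / n2) for i in range(n2))

print(I1, I2, (math.e - 1) / 2)    # all three close to 0.8591
```
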
Example 4.32
Set up an iterated integral over the region bounded by the curve $y^2 = 2x + 6$ and the line $y = x - 1$.
Solution. We begin by drawing a rough picture of what the boundary looks like. Notice that the intersection of the two curves occurs when
\[
(x-1)^2 = 2x + 6, \qquad x^2 - 4x - 5 = 0, \qquad x = 5, -1,
\]
which corresponds to the pairs $(-1, -2)$ and $(5, 4)$. Now our figure shows that it will be very hard to write this region as $\{a \leq x \leq b,\ \varphi(x) \leq y \leq \psi(x)\}$, so instead we switch the roles of the variables. In that case, notice that we can write $S$ as
\[
S = \left\{-2 \leq y \leq 4,\ \tfrac{1}{2}y^2 - 3 \leq x \leq y + 1\right\}.
\]
Thus far we have been fortunate: most of our examples are clearly $C^1$ on the region on which they are defined, and all the hypotheses of Fubini's theorem are easily verified. However, there are instances where Fubini's theorem does not hold, as the following example demonstrates.
Example 4.33
xy(x2 y 2 )
Consider the function f (x, y) = on the rectangle R = [0, 1] [0, 1].
(x2 + y 2 )3
Solution. Let us naïvely assume that Fubini's theorem applies. Notice that $f$ is antisymmetric in $x$ and $y$: swapping the variables introduces a negative sign in the numerator. Computing the inner integral with the substitution $u = x^2 + y^2$, $du = 2x\,dx$:
\begin{align*}
\int_0^1 \frac{xy(x^2 - y^2)}{(x^2+y^2)^3}\,dx &= \frac{1}{2}\int_{y^2}^{1+y^2} \frac{y(u - 2y^2)}{u^3}\,du \\
&= \frac{y}{2}\int_{y^2}^{1+y^2} u^{-2}\,du - y^3 \int_{y^2}^{1+y^2} u^{-3}\,du \\
&= \left[-\frac{y}{2u} + \frac{y^3}{2u^2}\right]_{y^2}^{1+y^2} \\
&= -\frac{y}{2(1+y^2)} + \frac{y^3}{2(1+y^2)^2} \qquad \text{(the lower-limit terms cancel)} \\
&= -\frac{y}{2(1+y^2)^2}.
\end{align*}
Integrating this in $y$ (with the substitution $u = 1 + y^2$) then gives
\[
\int_0^1 -\frac{y}{2(1+y^2)^2}\,dy = \frac{1}{4}\left[\frac{1}{u}\right]_1^2 = -\frac{1}{8}.
\]
The computation in the other order is exactly the same, except one gets an extra negative sign coming from the original substitution. Thus
\[
\int_0^1 \int_0^1 \frac{xy(x^2 - y^2)}{(x^2+y^2)^3}\,dy\,dx = \frac{1}{8} \neq -\frac{1}{8} = \int_0^1 \int_0^1 \frac{xy(x^2 - y^2)}{(x^2+y^2)^3}\,dx\,dy,
\]
and the iterated integrals are not equal. The reason Fubini's theorem fails is that $f$ is not integrable on $R$. Indeed, $f$ is not even bounded on $R$, and so certainly cannot be integrable.
One might wonder whether the only way the two orders can disagree is by a minus sign. The answer is no, as can be checked by using a non-symmetric rectangle: as an exercise, the student should check that if the rectangle $R = [0,2] \times [0,1]$ is used instead, the resulting integrals differ in value as well as in sign.
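The failure is easy to exhibit numerically using the closed-form inner integrals derived in the example: $-y/(2(1+y^2)^2)$ when $x$ is integrated first and, by the antisymmetry $f(x,y) = -f(y,x)$, $+x/(2(1+x^2)^2)$ when $y$ is integrated first. Only the outer integrals are approximated below:

```python
def midpoint(g, a, b, n=20000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# Outer integrals of the two closed-form inner integrals:
dx_first = midpoint(lambda y: -y / (2 * (1 + y * y) ** 2), 0.0, 1.0)
dy_first = midpoint(lambda x: x / (2 * (1 + x * x) ** 2), 0.0, 1.0)
print(dx_first, dy_first)    # close to -1/8 and +1/8: the orders disagree
```
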
Triple Integrals: Of course, we have limited our discussion thus far to functions of two variables, but there was no reason to (other than to keep ourselves from headaches). Naturally, we can extend to three dimensions and beyond, and so perform integration in $n$ variables. However, because drawing diagrams is so critical for doing iterated integrals, we typically avoid doing them in four dimensions or greater. In this course, we will not see integrals in more than three variables.
This being said, what happens when we want to integrate a function of three variables? The solution is to proceed just as before, except that now we write our domain as
\[
S = \{a \leq x \leq b,\ \varphi(x) \leq y \leq \psi(x),\ \alpha(x,y) \leq z \leq \beta(x,y)\}.
\]
Example 4.34
Determine $\iiint_S z\,dV$ if $S$ is the set bounded by the planes $x = 0$, $y = 0$, $z = 0$ and $x + y + z = 1$.
Solution. This shape is a tetrahedron whose vertices are the three standard basis vectors $\{e_i\}_{i=1,2,3}$ and the origin $(0,0,0)$. Now $0 \leq x \leq 1$ is evident, and projecting into the $xy$-plane we see that $0 \leq y \leq 1 - x$. Finally, we clearly have $0 \leq z \leq 1 - x - y$, so that
\begin{align*}
\iiint_S z\,dV &= \int_0^1 \int_0^{1-x} \int_0^{1-x-y} z\,dz\,dy\,dx \\
&= \int_0^1 \int_0^{1-x} \left[\frac{z^2}{2}\right]_0^{1-x-y} dy\,dx \\
&= \frac{1}{2}\int_0^1 \int_0^{1-x} (1-x-y)^2\,dy\,dx = \frac{1}{2}\int_0^1 \left[-\frac{(1-x-y)^3}{3}\right]_0^{1-x} dx \\
&= \frac{1}{6}\int_0^1 (1-x)^3\,dx = \frac{1}{6}\left[-\frac{(1-x)^4}{4}\right]_0^1 = \frac{1}{24}.
\end{align*}
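A numerical sanity check on Example 4.34 (a midpoint-rule sketch; the inner $z$-integral is taken in closed form as $(1-x-y)^2/2$):

```python
# 2D midpoint rule over the triangle 0 <= y <= 1 - x, using the
# closed-form inner integral: int_0^{1-x-y} z dz = (1 - x - y)^2 / 2.
n = 800
h = 1.0 / n
total = 0.0
for i in range(n):
    for j in range(n):
        x, y = (i + 0.5) * h, (j + 0.5) * h
        if x + y < 1.0:
            total += h * h * (1.0 - x - y) ** 2 / 2.0
print(total, 1.0 / 24.0)    # both close to 0.0417
```
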
Example 4.35
Determine $\iiint_S (2x + 4z)\,dV$ where $S$ is the region bounded by the surfaces $y = x$, $z = x$, $z = 0$, and $y = x^2$.

Solution. The student should stare at these equations for some time and try to visualize the region. In particular, a nice parameterization is given by
\[
S = \{0 \leq x \leq 1,\ x^2 \leq y \leq x,\ 0 \leq z \leq x\}.
\]
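The notes break off before the computation here. As a hedged check (a sketch using the parameterization above, not the notes' own solution), the two inner integrals collapse in closed form, $\int_0^x (2x+4z)\,dz = 4x^2$ and $\int_{x^2}^{x} 4x^2\,dy = 4x^2(x - x^2)$, and the remaining single integral evaluates numerically to about $1/5$:

```python
# Inner two integrals in closed form (using S as parameterized above):
#   int_0^x (2x + 4z) dz = 4x^2,   int_{x^2}^x 4x^2 dy = 4x^2 (x - x^2).
# Midpoint rule on the remaining outer integral of 4x^3 - 4x^4 over [0, 1].
n = 10000
h = 1.0 / n
total = h * sum(4 * ((i + 0.5) * h) ** 3 - 4 * ((i + 0.5) * h) ** 4
                for i in range(n))
print(total)    # close to 1/5
```
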
4.4 Change of Variables
There is a great idea amongst physicists that the properties of a physical system should be independent of how you choose to look at that system. Consider, for example, a driver racing around a circular track. We should be able to determine fundamental physical facts about the driver regardless of whether we are looking at the driver from the stands, from the center of the track, or even from the backseat of the car. However, each point of view offers its own advantages and disadvantages. For example, the observer at the center of the track only sees a change in the angle of the car relative to the observer, with the distance remaining constant. On the other hand, the backseat observer will see the driver experience the fictitious centrifugal force, while the external observers will simply see inertia.
Another exceptionally important example is the theory of special relativity. Effectively, if one starts with the simple (but unintuitive) assumption that the speed of light is constant in every frame of reference, then much of the theory of special relativity (such as time/length dilation and the breaking of simultaneity) can be derived simply by analyzing what happens from different viewpoints. This section is dedicated to analyzing how this is done mathematically, and how we can use it to make headway on difficult integrals.
4.4.1 Coordinates
It is difficult to describe what we mean by a set of coordinates without using more technical language. The effective idea is that a coordinate system should be a way of (uniquely) and continuously describing a point in your space. Cartesian coordinates are those with which we are most familiar, and are given by $(x,y)$, describing the horizontal and vertical displacement of a point from the origin. However, the origin itself corresponds to an arbitrary choice: choose some other point in the plane and call that the origin, and notice that, fundamentally, our space has not changed. For example, the circle $x^2 + y^2 = 1$ is in many ways the same as the circle $(x-a)^2 + (y-b)^2 = 1$ for any choice of $(a,b)$; we have simply moved it. Such a transformation is called a translation, and is described by the function $f(x,y) = (x - a, y - b)$.
Similarly, one might choose to change how we measure distances, resulting in a scaling of the form $f(x,y) = (\lambda x, \mu y)$ for $\lambda, \mu \neq 0$ (when $\lambda < 0$ this corresponds to a reflection about the $y$-axis, and similarly $\mu < 0$ is a reflection about the $x$-axis). We could even rotate our coordinate
system by an angle $\theta$ via the map $f(x,y) = (\cos(\theta)x + \sin(\theta)y,\ \cos(\theta)y - \sin(\theta)x)$. Combining scalings, rotations, and translations, one gets the affine transformations $f(x,y) = (c_1 x + c_2 y + c_3,\ d_1 x + d_2 y + d_3)$.
But of course we have seen many other types of coordinate systems. For example, polar coordinates are described by the function $(x,y) = f(r,\theta) = (r\cos(\theta), r\sin(\theta))$. In $\mathbb{R}^3$ we have cylindrical and spherical coordinates:
\[
(x,y,z) = (r\cos(\theta),\ r\sin(\theta),\ z), \qquad (x,y,z) = (\rho\sin(\varphi)\cos(\theta),\ \rho\sin(\varphi)\sin(\theta),\ \rho\cos(\varphi)).
\]
One problem faced with these coordinate systems is that without restrictions on $r, \theta, \rho, \varphi$, the representation may not be unique! For example, in polar coordinates the pairs $(r, \theta)$, $(r, \theta + 2\pi)$, and $(-r, \theta + \pi)$ all represent the same point. For polar coordinates we thus demand that $r \in (0,\infty)$ and $\theta \in [0, 2\pi)$. For spherical coordinates, one takes $\rho \in (0,\infty)$, $\varphi \in [0,\pi]$, and $\theta \in [0,2\pi)$. Unfortunately, this means that we must make a sacrifice in the collection of points we are able to represent; for example, the origin $(0,0)$ cannot be written in polar coordinates. Hence our function is a map $f : (0,\infty) \times [0,2\pi) \to \mathbb{R}^2 \setminus \{(0,0)\}$.
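A quick sketch of the bijectivity claim (assumed helper names; note that `math.atan2` returns angles in $(-\pi, \pi]$, so we shift into $[0, 2\pi)$): converting from polar to Cartesian and back recovers the original pair on the restricted domain.

```python
import math

def to_cartesian(r, theta):
    return (r * math.cos(theta), r * math.sin(theta))

def to_polar(x, y):
    # atan2 gives angles in (-pi, pi]; reduce mod 2*pi into [0, 2*pi)
    return (math.hypot(x, y), math.atan2(y, x) % (2 * math.pi))

r, theta = 2.0, 5.0                  # a point in (0, inf) x [0, 2*pi)
x, y = to_cartesian(r, theta)
r2, theta2 = to_polar(x, y)
print((r, theta), (r2, theta2))      # the round trip recovers the pair
```
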
There are countless other coordinate systems one might want to use, for example $(x,y) = f(s,t) = (e^s, t^2)$, though again we run into uniqueness issues and need to restrict our sets in order to have a good coordinate system. In this case, our good coordinate system is the map $f : \mathbb{R} \times [0,\infty) \to (0,\infty) \times [0,\infty)$.
So what restrictions should we place on $f$ to ensure that we have a good coordinate system between sets $U, V \subseteq \mathbb{R}^n$? Just for things to be unique, we should certainly require that $f : U \to V$ be bijective (so that $f^{-1} : V \to U$ exists), and for things to play well with calculus, we should also require that $f$ and $f^{-1}$ be differentiable.
Definition 4.36
If $U, V \subseteq \mathbb{R}^n$ and $f : U \to V$ is a $C^1$ bijection with $C^1$ inverse $f^{-1} : V \to U$, then we say that $f$ is a diffeomorphism.
It may not be immediately obvious that f 1 is differentiable, but the absence of the origin (0, 0)
ensures that this is the case.
Let's see how areas change under this transformation. Consider a rectangle $[a,b] \times [\alpha,\beta]$ in $(r,\theta)$-space; that is, $a \leq r \leq b$ and $\alpha \leq \theta \leq \beta$. Applying $f$, we get an arc segment, as illustrated in Figure 37.

Figure 37: How a simple square in polar coordinates (left) changes under the map $f(r,\theta) = (r\cos(\theta), r\sin(\theta))$.
Exterior Algebra: There is another technique for deriving this relationship, although the rigours of the theory would take us very far afield. Instead, we can summarize the basic rules for manipulating infinitesimal terms. Let $dx$, $dy$, and $dz$ represent three infinitesimal terms (though this generalizes to a higher number of terms, and not just Cartesian coordinates).
1. The order of multiplication matters: $dx\,dy \neq dy\,dx$, so pay careful attention to the ordering; interchanging two adjacent terms introduces a minus sign:
\[
dx\,dy\,dz = -dy\,dx\,dz = -dx\,dz\,dy,
\]
130
2015
c Tyler Holden
4.4 Change of Variables 4 Integration
and any product in which a term repeats vanishes:
\[
dx\,dx = 0, \qquad dx\,dy\,dy = 0, \qquad dx\,dz\,dx = 0.
\]
In fact, notice that these last two rules are very similar to determinants: interchanging two columns introduces a minus sign, and if two rows are linearly dependent, the determinant is zero. This is no coincidence, as it turns out that the exterior algebra of infinitesimals is intimately related to determinants.
4.4.2 Integration
The content of this section is extraordinarily useful, but the proofs are cumbersome and not particularly enlightening. Consequently, we will motivate the situation by analyzing what happens in the one-dimensional case, before stating the major theorem (without proof).
In the one-dimensional case, there is not much in the way of variable changing that can be done! Nonetheless, the student has already seen a plethora of examples which greatly emulate coordinate changing: the method of substitution. For example, when evaluating the integral
∫_2^3 x/(x² − 1) dx,
the student should (hopefully) realize that the appropriate substitution here is u = x² − 1, so that du = 2x dx and the integral becomes
∫_2^3 x/(x² − 1) dx = (1/2) ∫_3^8 (1/u) du = (1/2) [ln |u|]_3^8 = (1/2)(ln 8 − ln 3).
In effect, the theory behind why this works is that we have realized that working in the x-coordinate system is rather silly, since it makes our integral look complicated. By changing to the u = x² − 1 coordinate system, the integral reduces to something which we can easily solve.
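If you want to sanity-check this substitution symbolically, the following sketch uses sympy (an outside tool, not part of these notes) to confirm that both sides agree:

```python
import sympy as sp

x, u = sp.symbols("x u", positive=True)

# Left-hand side: the integral in the original x-coordinate system.
direct = sp.integrate(x / (x**2 - 1), (x, 2, 3))

# Right-hand side: after u = x^2 - 1 (so du = 2x dx), the bounds become 3 and 8.
substituted = sp.Rational(1, 2) * sp.integrate(1 / u, (u, 3, 8))

print(sp.simplify(direct - substituted))  # 0
```

Both expressions evaluate to (1/2) ln(8/3).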
The theory is as follows (though our presentation might seem a bit backwards compared to how such integrals are usually computed): the Fundamental Theorem of Calculus easily tells us that
∫_a^b f(g(x)) g′(x) dx = ∫_{g(a)}^{g(b)} f(u) du,   (4.2)
where u = g(x), so that du = g′(x) dx. The idea is that by introducing the auxiliary function u = g(x) we were able to greatly reduce the problem to something more elementary, and that is the goal of changing variables.
Unfortunately, there is never a single way to change variables, and this can make our notation a bit of a headache. For example, what if we had instead chosen the substitution u = 1 − x² in the previous example, so that the integral became
∫_2^3 x/(x² − 1) dx = (1/2) ∫_{−3}^{−8} (1/u) du ?
Notice that the bounds of integration are in the wrong order, since certainly −3 > −8. We of course fix this by introducing a negative sign and interchanging the bounds, and arrive at the same answer, but the point is that we do not want to have to worry about whether we have changed the orientation⁶ of the interval (since this will become a grand nightmare in multiple dimensions!).
Hence if I = [a, b], we will write (4.2) as
∫_I f(g(x)) |g′(x)| dx = ∫_{g(I)} f(u) du.
What is bothersome about this equation is that g appears on both sides. If g is a change of coordinates (so that it is a diffeomorphism on I), then there is no harm in replacing g with g⁻¹. Let J = g(I) so that we get
∫_{g⁻¹(J)} f(g(x)) |g′(x)| dx = ∫_J f(u) du.
The n-dimensional analogue replaces g′ by the derivative matrix: if G is a diffeomorphism, then ∫_{G⁻¹(J)} f(G(x)) |det DG(x)| dx = ∫_J f(u) du. The term |det DG| is known as the Jacobian of the change of variables.
Again, this proof is laborious and of no great value, so we omit it here. Note that this effectively says that the element |det DG(x)| represents the scaling of the volume element we had discussed before. Indeed, previously we saw that the change in area resulting from using polar coordinates was to multiply by r. If (x, y) = G(r, θ) = (r cos(θ), r sin(θ)) then
|det DG(r, θ)| = |det [ cos(θ)  −r sin(θ) ; sin(θ)  r cos(θ) ]|
              = r cos²(θ) + r sin²(θ)
              = r,
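The same determinant can be checked symbolically; here is a small sympy sketch (sympy is my choice of tool, not part of the notes):

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)

# The polar coordinate map G(r, theta) = (r cos(theta), r sin(theta)).
G = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])

# DG is the matrix of partial derivatives; its determinant is the Jacobian.
jac = sp.simplify(G.jacobian([r, theta]).det())
print(jac)  # r
```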
⁶ This is a remarkably subtle but important point that does not manifest in one dimension but proves to be truly inconvenient in higher dimensions. There is an entire theory of orientability of surfaces and higher-dimensional spaces, and if your space is not orientable then it is difficult to do integration.
which exactly agrees with our previous assessment that dx dy = r dr dθ. The following are the two most often used coordinate changes in three dimensions:
Example 4.39
In cylindrical coordinates, (x, y, z) = G(r, θ, z) = (r cos(θ), r sin(θ), z), and one computes |det DG| = r, so that dx dy dz = r dr dθ dz.
This is not terribly surprising: cylindrical coordinates are polar coordinates with the z-direction unaffected. Hence we only expect the scaling to occur in the xy-dimensions, and this is indeed what we see.
Example 4.40
Let (u, v) = (e^r cos(θ), e^r sin(θ)). Determine du dv as a function of dr dθ, and vice versa.
Solution. The Jacobian is
|det [ e^r cos(θ)  −e^r sin(θ) ; e^r sin(θ)  e^r cos(θ) ]| = e^{2r}(cos²(θ) + sin²(θ)) = e^{2r},
and so du dv = e^{2r} dr dθ.
To compute dr dθ in terms of du dv one could try to find the inverse of the coordinate transformation, but that would prove exceptionally difficult. Instead, recognize that u² + v² = e^{2r}, and hence
dr dθ = (1/e^{2r}) du dv = du dv / (u² + v²).
Example 4.41
Let R = {(x, y) ∈ R² : 1 ≤ x² + y² ≤ 3}. Evaluate ∬_R e^(x²+y²) dA.
Solution. Our region R is simply the annulus between the circles of radius 1 and √3, so we use polar coordinates. Let (x, y) = G(r, θ) = (r cos(θ), r sin(θ)), so that S = [1, √3] × [0, 2π) is just a rectangle in (r, θ)-space, and G : S → R is a diffeomorphism. Integrating using change of variables gives
∬_R e^(x²+y²) dA = ∫_1^{√3} ∫_0^{2π} e^(r²) r dθ dr = 2π ∫_1^{√3} r e^(r²) dr = π [e^(r²)]_{r=1}^{√3} = π(e³ − e).
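A quick symbolic check of this change of variables, using sympy as an outside tool (not part of the notes):

```python
import sympy as sp

r, theta = sp.symbols("r theta", positive=True)

# Polar integrand e^{r^2} times the Jacobian r, over 1 <= r <= sqrt(3), 0 <= theta <= 2*pi.
polar = sp.integrate(sp.exp(r**2) * r, (r, 1, sp.sqrt(3)), (theta, 0, 2 * sp.pi))

print(sp.simplify(polar - sp.pi * (sp.exp(3) - sp.E)))  # 0
```

So the iterated integral really does equal π(e³ − e).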
Example 4.42
Let S be the region bounded by the curves xy = 1, xy = 3, x² − y² = 1 and x² − y² = 4. Compute ∬_S (x² + y²) dA.
Solution. The region suggests that we should take a change of variables of the form u = xy and v = x² − y², so that setting
T = {1 ≤ u ≤ 3, 1 ≤ v ≤ 4}
makes T the image of S, a rectangle in (u, v)-space. The Jacobian of this map is
det [ y  x ; 2x  −2y ] = −2(x² + y²),
so that du dv = 2(x² + y²) dx dy, and hence
∬_S (x² + y²) dA = ∬_T (1/2) du dv = (1/2) area(T) = (1/2)(2)(3) = 3.
Example 4.43
Find the volume bounded between the sphere x² + y² + z² = 4 and the cylinder x² + y² = 1.
Solution. Let B be the bounded region. Let's use cylindrical coordinates, so that dx dy dz = r dr dθ dz. By drawing a picture, it is clear that the region is symmetric under reflection in the xy-plane, so we need only find the volume B⁺ bounded by the upper hemisphere and the cylinder. The volume will be governed by r ∈ (0, 1) and θ ∈ (0, 2π), but our z coordinate will be
represented by z = √(4 − x² − y²) = √(4 − r²). Our integral is thus
vol(B⁺) = ∫_0^{2π} ∫_0^1 ∫_0^{√(4−r²)} r dz dr dθ
        = ∫_0^{2π} ∫_0^1 r √(4 − r²) dr dθ
        = ∫_0^{2π} [ −(1/3)(4 − r²)^{3/2} ]_{r=0}^{1} dθ
        = (2π/3)(8 − 3√3).
Hence the full volume is 2 vol(B⁺) = (4π/3)(8 − 3√3).
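As an outside check (sympy, not part of the notes), the iterated integral in cylindrical coordinates gives the same volume:

```python
import sympy as sp

r, theta, z = sp.symbols("r theta z", positive=True)

# Upper half B+: 0 <= z <= sqrt(4 - r^2), 0 <= r <= 1, 0 <= theta <= 2*pi,
# with cylindrical volume element r dz dr dtheta.
upper = sp.integrate(r, (z, 0, sp.sqrt(4 - r**2)), (r, 0, 1), (theta, 0, 2 * sp.pi))
total = 2 * upper

print(sp.simplify(total - sp.Rational(4, 3) * sp.pi * (8 - 3 * sp.sqrt(3))))  # 0
```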
5 Vector Fields
[Figure: the vector F(2, 1) drawn as an arrow based at the point (2, 1).]
We can visualize vector fields by choosing multiple points and drawing the vectors which correspond to them, as in Figure 38.
Vector fields can be used to describe physical fields and forces. For example, the force exhibited
at a single point by an electromagnetic field or gravity may be conveyed as a vector field. On the
other hand, a vector field might describe the flow of a liquid, such as water in a stream or air over a wing. Our goal in this section is to see how we can use vector fields to compute useful quantities,
which often have physical interpretations such as flux or work.
5.1 Vector Derivatives
Depending on whether a given function is vector-valued or not, there are multiple different kinds of derivatives that we can take. In this section, we are going to look at four such operators: the gradient (which you have already seen), the divergence, the curl, and the Laplacian. It turns out that the first three of these are all actually the same operator in disguise, but that is a rather advanced topic which we (probably) won't cover in this class. The trick in all of these cases is to think of the nabla operator ∇ as a vector whose components are the partial derivative operators. Hence in Rⁿ, the nabla operator is
∇ = (∂/∂x₁, ∂/∂x₂, …, ∂/∂xₙ).
Figure 38: A visualization of several vector fields. Using many points gives us an intuitive idea for
how these vectors flow.
The gradient ∇f measures how quickly the function f is changing along each of the given coordinate axes, and in its totality gives the direction of steepest ascent. As an example computation, if f(x, y, z) = z sin(xy) then
∇f(x, y, z) = ( ∂/∂x [z sin(xy)], ∂/∂y [z sin(xy)], ∂/∂z [z sin(xy)] )
            = (zy cos(xy), zx cos(xy), sin(xy)).
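This gradient can be recomputed symbolically; the following snippet uses sympy (an outside tool, not part of the notes):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")
f = z * sp.sin(x * y)

# The gradient is the vector of partial derivatives of f.
grad_f = [sp.diff(f, v) for v in (x, y, z)]
print(grad_f)  # [y*z*cos(x*y), x*z*cos(x*y), sin(x*y)]
```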
The divergence of a vector field F = (F₁, …, Fₙ) is
div F = ∇ · F = ∂F₁/∂x₁ + ⋯ + ∂Fₙ/∂xₙ.
The divergence is a measure of the infinitesimal flux of the vector field; that is, the amount of the field which is passing through an infinitesimal surface area. As an example, if F(x, y, z) = (x², y², z²) then
div F(x, y, z) = ∂/∂x [x²] + ∂/∂y [y²] + ∂/∂z [z²] = 2x + 2y + 2z.
The curl of a vector field F = (F₁, F₂, F₃) on R³ is
curl F = ∇ × F = ( ∂F₃/∂y − ∂F₂/∂z, ∂F₁/∂z − ∂F₃/∂x, ∂F₂/∂x − ∂F₁/∂y ).
The curl measures the infinitesimal circulation of the vector field. Furthermore, notice that the curl only makes sense in R³. There are higher-dimensional analogues of the curl, but they are very messy to write down. For example, if F(x, y, z) = (x²y, xyz, x²y²) then
curl F(x, y, z) = (2x²y − xy, −2xy², yz − x²).
The Laplacian of a scalar function f is
∇²f = ∇ · ∇f = ∂²f/∂x₁² + ⋯ + ∂²f/∂xₙ².
Notice that one can also write ∇² = ∇ · ∇, so that the Laplacian is the divergence of the gradient. In essence, the Laplacian measures the infinitesimal rate of change of the function f in outward rays along spheres. If f(x, y, z) = x²y + z³, then an example of computing the Laplacian is given by
∇²f(x, y, z) = ∂²/∂x² [x²y + z³] + ∂²/∂y² [x²y + z³] + ∂²/∂z² [x²y + z³] = 2y + 6z.
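Both example computations can be checked symbolically; the sketch below uses sympy (my choice of tool, not part of the notes):

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

# Divergence of F = (x^2, y^2, z^2), as in the example above.
F = (x**2, y**2, z**2)
div_F = sum(sp.diff(Fi, v) for Fi, v in zip(F, (x, y, z)))

# Laplacian of f = x^2*y + z^3, as in the example above.
f = x**2 * y + z**3
lap_f = sum(sp.diff(f, v, 2) for v in (x, y, z))

print(div_F)  # 2*x + 2*y + 2*z
print(lap_f)  # 2*y + 6*z
```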
All of these vector derivatives are exceptionally important in physics and mathematics. Of
particular note is the Laplacian, which is central to the study of partial differential equations and
harmonic analysis.
We have already seen the gradient: it physically represents the direction of steepest ascent. The names divergence and curl are also chosen with purpose. We do not yet have the tools,
but one can show that the curl of a vector field in R3 corresponds to infinitesimal circulation of
the vector field (how quickly the field is spinning around), while the divergence is the infinitesimal
flux of the vector field (how quickly the field is spreading out). For this reason, if F is a vector
field such that curl F = 0, we say that F is irrotational. Similarly, if div F = 0 we say that F is
incompressible.
Proposition 5.1
Proof. The majority of these are straightforward if laborious, so we will only do one as an example. Let's show that
curl(fG) = f curl G + (grad f) × G.
Looking at the first component,
[curl(fG)]₁ = ∂/∂y (fG₃) − ∂/∂z (fG₂)
           = f ∂G₃/∂y + G₃ ∂f/∂y − f ∂G₂/∂z − G₂ ∂f/∂z
           = f (∂G₃/∂y − ∂G₂/∂z) + (G₃ ∂f/∂y − G₂ ∂f/∂z)
           = f (curl G)₁ + [(grad f) × G]₁
           = [f curl G + (grad f) × G]₁.
Hence the x-coordinates of both vectors agree. Since all other components follow precisely the same
reasoning (just replace y and z with z and x respectively) the result follows.
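Rather than grind out the remaining components by hand, one can check the whole identity symbolically; the snippet below uses sympy (an outside tool), with f and G chosen arbitrarily:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def grad(f):
    return sp.Matrix([sp.diff(f, v) for v in (x, y, z)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

# Arbitrary sample fields; the identity holds for any smooth choice.
f = x * sp.sin(y * z)
G = sp.Matrix([x * y, y * z**2, x + z])

lhs = curl(f * G)
rhs = f * curl(G) + grad(f).cross(G)
print(sp.simplify(lhs - rhs))  # Matrix([[0], [0], [0]])
```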
The next two identities worth pointing out are that curl(grad f) = 0 and div(curl F) = 0.
In fact, in higher-level mathematics this is effectively contained in the definitions of divergence and curl. A very nice diagram is the following:
{scalar functions} --grad--> {vector fields} --curl--> {vector fields} --div--> {scalar functions},
which is (up to renaming some things) called the de Rham complex.
5.2 Arc Length
While arc length is a subject that can be discussed in introductory calculus, its generalization to curves in Rⁿ will prove important for this section, so we reiterate its treatment here. Given a curve, one can naively attempt to calculate its arc length by approximating it with successively finer and finer piecewise linear curves. The non-calculus way of doing this requires suprema and partitions and involves introducing a notion of rectifiability (roughly, the property of having a well-defined finite length).
This offers the advantage that it allows us to compute the arc length of many curves which cannot
be described as C 1 functions, but will not be useful for our purposes.
Instead, let's use our formidable calculus experience to formulate an expression for arc length. Recall that given two points x = (x₁, …, xₙ) and y = (y₁, …, yₙ) in Rⁿ, their distance is
d(x, y) = [ Σ_{i=1}^n (xᵢ − yᵢ)² ]^{1/2}.
Figure 39: Left: We can approximate a C¹ curve by a piecewise linear curve. Right: In the infinitesimal limit, the length of each piecewise linear segment is |dx|.
We often write ds = |dx|, which we call an element of arc. The total arc length will then be given by integrating ds over the length of the curve. As it stands, this is not a particularly attractive element to work with, so to facilitate our computations we introduce a parameterization of our curve. Assume that C is given by the equation x = g(t), where g : [a, b] → Rⁿ, so that
dx = g′(t) dt = (dg₁/dt, …, dgₙ/dt) dt,
and hence the arc length is ∫_a^b |g′(t)| dt.
This has a particularly appealing physical interpretation: if g(t) describes the position of a particle with respect to time, then g′(t) is its velocity and |g′(t)| is its speed. By integrating the speed over all time, we then get the distance travelled, which is precisely the arc length.
Example 5.2
Show that the circumference of the circle of radius r is 2πr.
Solution. This is a result with which we are all familiar, but that familiarity is because we were told that it is true, and not because we have ever derived the solution ourselves. Our curve in question is the circle of radius r, which admits the very simple parametric description g(t) = (r cos(t), r sin(t)) for t ∈ [0, 2π]. Then |g′(t)| = r, and the arc length is ∫_0^{2π} r dt = 2πr, as required.
Notes:
1. Typically, the arc element ds = |g′(t)| dt is exceptionally difficult to integrate, since the square root of a sum is rarely amenable to standard tricks.
2. The arc length formula computes total distance travelled, not necessarily the arc length of the graph of the curve. For example, in Example 5.2, letting 0 ≤ t ≤ 4π corresponds to traversing the circle twice. It is easy to see that the arc length in this case is 4πr = 2 · 2πr. Thus even though the graph only shows a single circle, the parameterization travelled the circle twice.
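Both observations are easy to verify symbolically; here sympy (an outside tool, not part of the notes) computes the arc length for one and two traversals of the circle of radius R:

```python
import sympy as sp

t, R = sp.symbols("t R", positive=True)

# Parameterize the circle of radius R and compute the speed |g'(t)|.
g = sp.Matrix([R * sp.cos(t), R * sp.sin(t)])
speed = sp.sqrt(sp.simplify(g.diff(t).dot(g.diff(t))))  # simplifies to R

once = sp.integrate(speed, (t, 0, 2 * sp.pi))   # one traversal
twice = sp.integrate(speed, (t, 0, 4 * sp.pi))  # two traversals

print(once, twice)
```

The results are 2πR and 4πR respectively: the parameterization, not the graph, determines the answer.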
5.3 Line Integrals

Proposition 5.3
Scalar functions: The next type of integration we are going to look at is a generalization of our multivariable integration. Let f : U ⊆ Rⁿ → R be a continuous function and C ⊆ U a smooth curve. We already know how to integrate f over U to get the total volume between the graph of f(x) and the Rⁿ plane. What if instead we wanted to know the area which lies between the curve C and the graph of f(x)? The answer is the line integral of a scalar function, defined as follows: let g : [a, b] → Rⁿ be a parameterization of C, and take
∫_C f ds = ∫_a^b f(g(t)) |g′(t)| dt.
If we think about this formula, we will see that it gives the desired result. In particular, if f ≡ 1 then our formula just returns the formula for arc length. By including the f(g(t)) term, we are weighting each point of the curve by the value that f takes there. Integrating over all of [a, b] then gives the area under f(x) which lies over the curve C.
Example 5.4
Compute ∫_C 1/(1 + z) ds, where C is the curve g(t) = (2 sin(t), 2 cos(t), t²) for t ∈ [0, 1].
Figure 40: Integrating a scalar-valued function over U yields the area under the graph of f, but only along the curve C.
Solution. Since z = t² along the curve,
f(g(t)) = f(2 sin(t), 2 cos(t), t²) = 1/(1 + t²),
while g′(t) = (2 cos(t), −2 sin(t), 2t) gives |g′(t)| = 2√(1 + t²), so our line integral becomes
∫_C 1/(1 + z) ds = ∫_0^1 [1/(1 + t²)] · 2√(1 + t²) dt
                = 2 ∫_0^1 dt/√(1 + t²)
                = 2 ∫_0^{π/4} sec²(θ)/sec(θ) dθ     (let t = tan(θ))
                = 2 [ln |sec(θ) + tan(θ)|]_0^{π/4}
                = 2 [ln |t + √(1 + t²)|]_0^1
                = 2 ln(√2 + 1).
Alternatively, one could realize that (d/dt) sinh⁻¹(t) = 1/√(1 + t²) and that sinh⁻¹(t) = ln(t + √(t² + 1)).
Vector Fields: In my experience, line integrals of scalar functions are not particularly interesting
and do not often manifest naturally (either in mathematics or otherwise). However, of far greater
utility is computing line integrals through vector fields. The set up is as follows: Let F : Rn Rn
be a vector field, and C Rn be some smooth curve. Parameterize this curve by g : [a, b] Rn ,
and think of the vector field acting on the curve at each point t₀ ∈ [a, b]. Our goal is to compute
the total amount of work done by the vector field on the curve.
To put this into a more physical setting, imagine a fish that swims along a curve C, and let
F : R3 R3 be a vector field describing the current of the water at each point. As the fish swims
through the water, the current acts on the fish by pushing it in the direction indicated by the vector
field. We want to compute the amount of work done by vector field on the fish.
If we are travelling in the direction dx then the force experienced is given by F · dx. In our original setup (in Rⁿ) our line integral is thus
∫_C F · dx = ∫_C (F₁ dx₁ + ⋯ + Fₙ dxₙ) = ∫_a^b F(g(t)) · g′(t) dt.
The change of variables formula will quickly convince you that this formula's magnitude is invariant under change of parameterization, but notice that it can change sign. In our fish analogy, imagine
the fish swimming with the current in a river, versus swimming exactly the same path but against
the current of the river. In both cases, the total magnitude of work is the same, but in one instance
the fish had to exert energy to work against the current, and in the other the fish was moved by
the current with no energy required. Hence orientation can change the sign of the line integral.
Furthermore, notice that multiplying and dividing by the speed |g′(t)|, and recalling that the element of arc satisfies ds = |g′(t)| dt, we have
∫_a^b F(g(t)) · [g′(t)/|g′(t)|] |g′(t)| dt = ∫_a^b F(g(t)) · T(t) ds,
where T(t) = g′(t)/|g′(t)| is the unit tangent vector. The component F(g(t)) · T(t) is the projection of F in the direction of T, and is precisely the component of the field F doing work in the direction of T.
Example 5.5
Solution. Clearly g′(t) = (1, 2t, 2t) and F(g(t)) = F(t, t², t²) = (t⁵, t⁴, t²), so their dot product yields
F(g(t)) · g′(t) = (t⁵, t⁴, t²) · (1, 2t, 2t) = t⁵ + 2t⁵ + 2t³ = 3t⁵ + 2t³.
Integrating gives
∫_0^1 F(g(t)) · g′(t) dt = ∫_0^1 (3t⁵ + 2t³) dt = [t⁶/2 + t⁴/2]_0^1 = 1.
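A quick check of the final integration with sympy (an outside tool; the integrand 3t⁵ + 2t³ is taken from the computation above):

```python
import sympy as sp

t = sp.symbols("t")

# F(g(t)) . g'(t) = 3t^5 + 2t^3, as computed above.
value = sp.integrate(3 * t**5 + 2 * t**3, (t, 0, 1))
print(value)  # 1
```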
Example 5.6
Evaluate the line integral ∫_C F · dx if F is the same vector field as in Example 5.5, but C is the curve
C = {(x, y, z) : x² + y² = 1, z = 1}.
Solution. We can parameterize C via the function g(t) = (cos(t), sin(t), 1) where 0 ≤ t ≤ 2π. One can compute that the resulting integrand vanishes, so the integral is zero.
These examples are not typical, in that they were actually easily solved. Example 5.5 was simple because everything was written as polynomials, while Example 5.6 magically became zero before we ever had to integrate. In general, line integrals will yield exceptionally nasty integrands, necessitating that we expand our line integral toolbox.

5.4 Green's Theorem

Line integrals can be quite tricky to compute, and so we would like to develop tools to facilitate their computation. Before proceeding, we will have to make a few preliminary definitions:
Definition 5.7
1. A simple closed curve of Rⁿ is a simple curve (see Definition 3.16) whose endpoints coincide.
2. A regular region is a compact set S ⊆ Rⁿ which is the closure of its interior.
Example 5.8
1. The circle S¹ is a simple closed curve. Indeed, choose g(t) = (cos(t), sin(t)), 0 ≤ t ≤ 2π, to parameterize the circle. It is clearly injective on (0, 2π), and the endpoints coincide since g(0) = g(2π).
2. The set [0, 1] ∪ {2} is certainly compact, but is not a regular region. Indeed, its interior is the set (0, 1), whose closure is [0, 1].
Given a regular region S ⊆ R² with a piecewise smooth boundary ∂S, we define the Stokes orientation to be the orientation of the boundary such that the interior of the set is on the left.
Figure 41: A regular region with a piecewise smooth boundary and the Stokes orientation. Notice
that the orientation on the internal boundary is the opposite of the external boundary.
Notice in particular that this will mean that a space with a hole in it will have its external boundary
oriented counter-clockwise, while the interior boundary will be oriented clockwise.
Theorem 5.9: Green's Theorem
If S ⊆ R² is a regular region with piecewise smooth boundary ∂S, endowed with the Stokes orientation, and F : R² → R² is a C¹ vector field, then
∫_{∂S} F · dx = ∬_S (∂F₂/∂x₁ − ∂F₁/∂x₂) dA.
The full proof requires an argument that every region S satisfying the hypotheses admits a decomposition into simple sets. Rather than harp on this point, we will examine how the proof would ideally work given such a simple set.
Before doing that however, let us take a second to think about what this theorem is saying:
Depending on how we choose to look at it, we can determine what is happening on the interior of S
just by looking at something on its boundary ∂S; or, vice versa, we can determine something about what's happening on the boundary of S just by looking at what's happening on the interior. A
priori, this is quite a surprising result: why should information about the interior and boundary
be in any way related?
On the other hand, an argument can be made that the Fundamental Theorem of Calculus only
cares about information on the boundary (and this will manifest in the proof), or alternatively that
if we know how our vector field is leaving/entering the set, then we can infer a lot of information
about the interior. Either way, Green's theorem is powerful and deserves some time spent in contemplation.
Proof. Recall from our time doing iterated integrals that it is often convenient to be able to write a
region as being bounded either as a function of x or a function of y. We will say that S is x-simple
if it can be written as
S = {a ≤ x ≤ b, φ₁(x) ≤ y ≤ φ₂(x)},
and y-simple if
S = {c ≤ y ≤ d, ψ₁(y) ≤ x ≤ ψ₂(y)}.
Assume then that S is both x-simple and y-simple, and consider for now only the x-simple
146
2015
c Tyler Holden
5.4 Greens Theorem 5 Vector Fields
[Figure 42: an x-simple region, bounded below by the graph of φ₁(x), above by φ₂(x), and by the vertical lines x = a and x = b; the four boundary edges are labelled C₁, C₂, C₃, C₄.]
description. Label the edges ∂S = C₁ + C₂ + C₃ + C₄ as illustrated in Figure 42. For this part of the proof, we are going to compute ∫_C F₁ dx. On the straight line components (corresponding to x = a and x = b) we have dx = 0, and hence
∫_C F₁ dx = ∫_{C₂} F₁ dx + ∫_{C₄} F₁ dx.
Notice that φ₂(x) runs from b to a in the Stokes orientation, so we must introduce a negative sign to get
∫_C F₁ dx = ∫_a^b F₁(x, φ₁(x)) dx − ∫_a^b F₁(x, φ₂(x)) dx.   (5.2)
On the other hand, applying the Fundamental Theorem of Calculus to the following iterated integral:
∬_S (∂F₁/∂y) dA = ∫_a^b ∫_{φ₁(x)}^{φ₂(x)} (∂F₁/∂y) dy dx = ∫_a^b [F₁(x, φ₂(x)) − F₁(x, φ₁(x))] dx,   (5.3)
which is exactly the negative of (5.2), so that ∫_C F₁ dx = −∬_S (∂F₁/∂y) dA.
Proceeding in precisely the same manner, but using the y-simple description of S, results in
∫_{∂S} F₂ dy = ∬_S (∂F₂/∂x) dA.
Adding the two identities gives Green's theorem for such a simple set.
More generally, the remainder of the proof hinges upon the ability to decompose S into subsets which are both x- and y-simple; namely S = S₁ ∪ ⋯ ∪ Sₙ where the Sᵢ have disjoint interiors and are
Figure 43: To prove Green's Theorem on more general regions, we decompose the region into subregions which are both x- and y-simple.
xy-simple. We will omit the proof of the fact that any regular region with piecewise smooth boundary admits such a decomposition.
Notice that interior boundaries (those that make up part of the boundary of Si but not of S)
have orientations which cancel each other out. By the additivity of line integrals and iterated
integrals, the result then follows.
Example 5.10
Evaluate ∮_C (2y + √(1 + x⁵)) dx + (5x − e^(y²)) dy, where C is the circle of radius R, oriented counter-clockwise.
Solution. This would be an exceptionally difficult integral to calculate if one were not permitted to use Green's Theorem; however, it becomes almost trivial after applying Green's Theorem. Let D be the interior of the radius-R circle, which we know has area πR². Green's Theorem gives
∮_C (2y + √(1 + x⁵)) dx + (5x − e^(y²)) dy = ∬_D (∂F₂/∂x − ∂F₁/∂y) dA
                                           = ∬_D (5 − 2) dA
                                           = 3πR².
We did not even have to compute the iterated integral, since we know the area of D!
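Green's Theorem can be double-checked here by numerically evaluating the line integral itself; the sketch below uses mpmath (an outside tool, not part of the notes) with R = 1, and should return approximately 3π:

```python
import mpmath as mp

# F = (2y + sqrt(1 + x^5), 5x - exp(y^2)) around the circle of radius R = 1,
# parameterized counter-clockwise by (cos t, sin t).
R = 1.0

def integrand(t):
    x, y = R * mp.cos(t), R * mp.sin(t)
    dx, dy = -R * mp.sin(t), R * mp.cos(t)
    F1 = 2 * y + mp.sqrt(1 + x**5)
    F2 = 5 * x - mp.exp(y**2)
    return F1 * dx + F2 * dy

line_integral = mp.quad(integrand, [0, 2 * mp.pi])
print(line_integral)  # approximately 3*pi = 9.4247...
```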
Example 5.11
Determine the line integral ∫_{∂S} F · dx, where F(x, y) = (1, xy) and ∂S is the boundary of the triangle S whose vertices are (0, 0), (1, 0), and (1, 1), oriented counter-clockwise.
Solution. Green's Theorem gives
∬_S (∂F₂/∂x − ∂F₁/∂y) dA = ∬_S y dA = ∫_0^1 ∫_0^x y dy dx = ∫_0^1 (x²/2) dx = 1/6.
Let's compute the line integral directly and verify that we get the same result. Parameterize the line C₁ by g₁(t) = (t, 0) for 0 ≤ t ≤ 1, yielding
∫_{C₁} F · dx = ∫_0^1 F(g₁(t)) · g₁′(t) dt = ∫_0^1 (1, 0) · (1, 0) dt = 1.
Next, parameterize C₂ by g₂(t) = (1, t) for 0 ≤ t ≤ 1, so that
∫_{C₂} F · dx = ∫_0^1 (1, t) · (0, 1) dt = 1/2.
Finally, we parameterize C₃ by g₃(t) = (1 − t, 1 − t) for 0 ≤ t ≤ 1 (we choose this over g₃(t) = (t, t) to keep the correct orientation):
∫_{C₃} F · dx = ∫_0^1 (1, (1 − t)²) · (−1, −1) dt = ∫_0^1 [−1 − (1 − t)²] dt = −4/3.
Summing, 1 + 1/2 − 4/3 = 1/6, exactly as we expected.
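Both sides of this verification can be recomputed with sympy (an outside tool); the edge parameterizations below follow the solution above:

```python
import sympy as sp

t, x, y = sp.symbols("t x y")

def F(px, py):
    # F(x, y) = (1, x*y) evaluated at a path point (px, py).
    return (sp.Integer(1), px * py)

# Each edge as (path point, velocity vector), all for 0 <= t <= 1.
edges = [
    ((t, sp.Integer(0)), (1, 0)),   # C1: bottom edge (t, 0)
    ((sp.Integer(1), t), (0, 1)),   # C2: right edge (1, t)
    ((1 - t, 1 - t), (-1, -1)),     # C3: hypotenuse, reversed for orientation
]

total = sum(
    sp.integrate(F(px, py)[0] * vx + F(px, py)[1] * vy, (t, 0, 1))
    for (px, py), (vx, vy) in edges
)

# Green's theorem side: integrate d(xy)/dx - d(1)/dy = y over 0 <= y <= x <= 1.
greens = sp.integrate(y, (y, 0, x), (x, 0, 1))

print(total, greens)  # 1/6 1/6
```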
5.5 Exact and Closed Vector Fields

Line integrals can have some pretty surprising properties. In fact, it turns out that one can actually use line integrals to tell you something about the geometry of a surface (though this is a rather advanced topic for this course). Nonetheless, let's set up the framework for utilizing and understanding how this works.
Theorem 5.12: FTC for Line Integrals
If f : Rⁿ → R is C¹ and C is an oriented curve with parameterization γ : [a, b] → Rⁿ, then
∫_C ∇f · dx = f(γ(b)) − f(γ(a)).
In particular, the integral only depends on the endpoints γ(a) and γ(b) of the curve C.
Proof. Assume that F = ∇f and let C be some oriented curve with parameterization γ : [a, b] → Rⁿ. Straightforward computation then reveals that
∫_C F · dx = ∫_a^b F(γ(t)) · γ′(t) dt
           = ∫_a^b ∇f(γ(t)) · γ′(t) dt
           = ∫_a^b (d/dt) f(γ(t)) dt     (by the chain rule)
           = f(γ(b)) − f(γ(a)).
In general we know that the choice of curve makes a significant difference to the value of the line integral, so this theorem tells us that there is a particular class of vector fields for which the line integral does not seem to care about the path we choose. These vector fields are so important that we give them a special name.
Definition 5.13
Any vector field F : Rⁿ → Rⁿ satisfying F = ∇f for some C¹ function f : Rⁿ → R is called an exact vector field. The function f is sometimes referred to as a scalar potential.
Example 5.14
Determine whether the following vector fields are exact and, if so, find a scalar potential:
2. G(x, y, z) = (2xy, x² + cos(z), −y sin(z)).
3. H(x, y, z) = (x + y, x + z, y + z).
150
2015
c Tyler Holden
5.5 Exact and Closed Vector Fields 5 Vector Fields
∂f/∂y = ∂g/∂y + (the y-derivative of the terms already found),
and compare this to F₂. With any luck, we will be able to solve for g(y, z) up to a function h(z), and then perform a similar technique to compute h. Of course, at the end of the day we can only determine f up to a constant, but this constant will not affect the value of the integral.
2. This example requires a bit more work. We integrate the first component with respect to x to get f(x, y, z) = x²y + g(y, z) for some function g(y, z). Differentiating with respect to y and setting ∂f/∂y = G₂, we get
∂f/∂y = x² + ∂g/∂y = x² + cos(z),  so that  ∂g/∂y = cos(z).
We integrate to find that g(y, z) = y cos(z) + h(z) for some yet-to-be-determined function h(z), giving f(x, y, z) = x²y + y cos(z) + h(z). Differentiating with respect to z, we get ∂f/∂z = −y sin(z) + h′(z), which agrees with G₃ = −y sin(z) precisely when h′(z) = 0. Hence h(z) is a constant, which we might as well set to 0, and we conclude that f(x, y, z) = x²y + y cos(z).
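A quick sympy check (an outside tool, not part of the notes) that the recovered potential really has G as its gradient:

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

f = x**2 * y + y * sp.cos(z)  # the scalar potential found above
G = [2 * x * y, x**2 + sp.cos(z), -y * sp.sin(z)]

grad_f = [sp.diff(f, v) for v in (x, y, z)]
print([sp.simplify(a - b) for a, b in zip(grad_f, G)])  # [0, 0, 0]
```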
Example 5.15
Determine the line integral ∫_C F · dx, where F(x, y, z) = (2xy, x² + cos(z), −y sin(z)) and C is the curve
C = {(x, y, z) : x² + y² + z² = 1, y = z, y ≥ 0}.
Solution. The curve C lies on the intersection of the unit sphere S² and the plane z = y. This would normally be a full circle, except that the condition y ≥ 0 ensures that we only pass through one hemisphere. One could parameterize this and try to compute the line integral by hand, except that the result is truly terrifying. Instead, all one needs to realize is that the endpoints of this curve are (±1, 0, 0). Furthermore, in Example 5.14 we showed that F = ∇f where f(x, y, z) = x²y + y cos(z). Consequently, the line integral is just
∫_C F · dx = f(1, 0, 0) − f(−1, 0, 0) = 0.
We would like to explore whether there are other vector fields for which line integrals only depend upon endpoints. To this end, we have the following lemma:
Lemma 5.16
If F is a continuous vector field on an open set U ⊆ Rⁿ, then the following are equivalent:
1. If C₁ and C₂ are any two oriented curves in U with the same endpoints, then
∫_{C₁} F · dx = ∫_{C₂} F · dx.
2. For every closed curve C in U, ∫_C F · dx = 0.
Proof.
(1)⇒(2) Let C be a closed curve, pick a point a on C, and declare that C has both endpoints equal to a. Clearly, these are the same endpoints as the constant curve at a, which we call C̃, and so by (1) we have
∫_C F · dx = ∫_{C̃} F · dx = 0,
where we note that integrating over the constant curve will certainly give a result of zero.
(2)⇒(1) Let the endpoints of C₁ be called a and b. Since C₂ has the same endpoints, we may define a closed curve C as the one which traverses C₁ from a to b, and then traverses C₂ from b to a. This traversal of C₂ has the opposite orientation of C₂ itself, so applying (2) we get
0 = ∫_C F · dx = ∫_{C₁} F · dx − ∫_{C₂} F · dx,
from which the result follows.
Any vector field which satisfies either of the above (equivalent) conditions is called a conservative vector field. The name is derived from physics: in a system in which energy is conserved, only the initial and terminal configurations of the state determine the energy transfer, and the system ignores everything else which happens in between.
The FTC for Line Integrals tells us that exact vector fields are conservative. It turns out that this exhausts the list of all conservative vector fields.
Theorem 5.17
Let S ⊆ Rⁿ be open. A continuous vector field F : S → Rⁿ is conservative if and only if it is exact.
[Figure 44: the curve C_x from a to x, extended by the straight segment L_{x,h} from x to x + heᵢ to give the curve C_{x+h}.]
Proof. (⇐) This follows from the Fundamental Theorem of Calculus for Line Integrals.
(⇒) Conversely, assume that F is a conservative vector field. Without loss of generality, we shall assume that S is connected (otherwise, just do the following on each connected component). Fix some point a ∈ S; for each x ∈ S let C_x be a curve from a to x (which always exists, since open connected sets are path-connected), and define the function
f(x) = ∫_{C_x} F · dx.
This is well defined since, by assumption, the definition is invariant of our choice of curve C_x. Now we claim that ∇f = F and that f is C¹, and both claims will be obvious if we show that ∂ᵢf = Fᵢ for each i = 1, …, n.
To see that this is the case, fix x ∈ S and choose h = heᵢ sufficiently small so that the line segment L_{x,h} between x and x + h remains in S (see Figure 44). Let C_{x+h} be C_x followed by L_{x,h}, so that
f(x + h) = ∫_{C_{x+h}} F · dx = ∫_{C_x} F · dx + ∫_{L_{x,h}} F · dx = f(x) + ∫_{L_{x,h}} F · dx.
Parameterize the line L_{x,h} by g(t) = x + teᵢ for 0 ≤ t ≤ h, so that g′(t) = eᵢ and
∫_{L_{x,h}} F · dx = ∫_0^h F(x₁, …, x_{i−1}, xᵢ + t, x_{i+1}, …, xₙ) · (0, …, 0, 1, 0, …, 0) dt
                  = ∫_0^h Fᵢ(x₁, …, xᵢ + t, …, xₙ) dt.
Computing ∂f/∂xᵢ, we have
∂f/∂xᵢ = lim_{h→0} [f(x + h) − f(x)]/h = lim_{h→0} (1/h) ∫_{L_{x,h}} F · dx
       = lim_{h→0} (1/h) ∫_0^h Fᵢ(x₁, …, xᵢ + t, …, xₙ) dt
       = Fᵢ(x)     (by L'Hôpital's rule),
as required.
Theorem 5.17 is a very nice characterization, but it is fairly intractable to compute all possible line integrals, and it can often be difficult to ascertain whether a given field is the gradient of a function.
There is a relatively simple necessary condition to determine whether a vector field is conservative. If F = ∇f then Fᵢ = ∂ᵢf. Since mixed partials commute, we then have
∂ᵢFⱼ = ∂ᵢ∂ⱼf = ∂ⱼ∂ᵢf = ∂ⱼFᵢ,
or alternatively
∂Fᵢ/∂xⱼ − ∂Fⱼ/∂xᵢ = 0,  i ≠ j.   (5.4)
Vector fields which satisfy (5.4) are called closed vector fields. However, not all closed vector fields are exact. Also, notice that if we are working in R³ then the components given in (5.4) correspond to the components of the curl. Hence closed vector fields on R³ are irrotational.
Example 5.18. Consider the vector field F(x, y) = (−y, x)/(x² + y²), defined on the open set R² \ {(0, 0)}. It is easy to see that
∂F₂/∂x = ∂/∂x [x/(x² + y²)] = (y² − x²)/(x² + y²)²
∂F₁/∂y = ∂/∂y [−y/(x² + y²)] = (y² − x²)/(x² + y²)²,
so F is closed. On the other hand, parameterizing the unit circle by g(t) = (cos(t), sin(t)), one finds
∮_{S¹} F · dx = ∫_0^{2π} (sin²(t) + cos²(t)) dt = 2π.
If F were conservative, this would have to be zero; hence F is an example of a closed vector field which is not exact.
So what went wrong in the above example? Essentially, because our vector field F fails to be defined at the origin (0, 0), our space has a hole there, and the line integral was able to detect that hole. In fact, try computing the above line integral around any closed curve which does not enclose the origin, and you will see that the result is zero.
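The failure of exactness can be seen concretely by computing the circulation around the unit circle; the snippet below uses sympy (an outside tool, not part of the notes):

```python
import sympy as sp

t = sp.symbols("t", real=True)

# F(x, y) = (-y, x)/(x^2 + y^2) along the unit circle g(t) = (cos t, sin t).
x, y = sp.cos(t), sp.sin(t)
dx, dy = sp.diff(x, t), sp.diff(y, t)
integrand = (-y * dx + x * dy) / (x**2 + y**2)

circulation = sp.integrate(sp.simplify(integrand), (t, 0, 2 * sp.pi))
print(circulation)  # 2*pi
```

The nonzero answer 2π shows F cannot be a gradient on R² \ {(0, 0)}.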
It turns out that closed vector fields are locally exact. In order to describe what we mean, we
must introduce a new definition:
Definition 5.19
A set U ⊆ Rⁿ is said to be star-shaped if there exists a point a ∈ U such that for every point x ∈ U, the straight line connecting x to a is contained in U.
Notice that every convex set is star-shaped, though the converse need not be true. For example, Figure 45 gives an example of a star-shaped set in R² that is not convex.
Theorem 5.20: Poincaré Lemma
If U ⊆ Rⁿ is open and star-shaped, then every closed vector field on U is exact.
Proof. Without loss of generality, let's assume that U is star-shaped about the origin. For any x ∈ U let γ_x(t) = tx be the straight line connecting the origin to x, and define the function
f(x) = ∫_{γ_x} F · dx = ∫_0^1 [F₁(tx)x₁ + ⋯ + Fₙ(tx)xₙ] dt.
Notice that this is well defined, since γ_x(t) ∈ U for all t and there is a unique straight line connecting
Figure 45: An example of a star shaped set which is not convex. The point a satisfies the required
definition, while the point b does not.
Differentiating under the integral sign and using the fact that F is closed, one checks that ∂f/∂x_i = F_i. Hence ∇f = F, as required.
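The Poincaré construction can be sanity-checked numerically. The sketch below (an illustration; the sample field, quadrature, and step sizes are arbitrary choices) builds f(x) = \int_0^1 [F_1(tx)x_1 + F_2(tx)x_2] dt for a closed field on R^2 and checks that its gradient recovers F:

```python
import math

# A closed (indeed exact) field on R^2: F = grad(x^2 * y).
def F(x, y):
    return 2 * x * y, x * x

def f(x, y, n=20_000):
    """Potential from the Poincare-lemma construction, by midpoint quadrature:
    f(x, y) = integral_0^1 [F1(tx, ty) * x + F2(tx, ty) * y] dt."""
    total, dt = 0.0, 1.0 / n
    for k in range(n):
        t = (k + 0.5) * dt
        F1, F2 = F(t * x, t * y)
        total += (F1 * x + F2 * y) * dt
    return total

# Numerically check grad f = F at a sample point via central differences.
x0, y0, h = 1.3, -0.7, 1e-5
gx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
gy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
```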
5.6 Surface Integrals

Line integrals captured the idea of a vector field doing work on a particle as it travelled a particular
path. A similar idea is the surface integral, which calculates the amount of flux of a vector field
passing through a surface.
Just as when we calculated the arc length of a curve, in order to compute surface area we are going to heuristically examine what an area element might look like. To do this, recall that given two linearly independent vectors v, w ∈ R^3, the area of the parallelogram with vertices 0, v, w, v + w is given by |v × w| (Figure 46). The idea is to use this, but applied to infinitesimal elements, so that the infinitesimal area element is dA = |dx × dy|. If we set dA = |dx × dy|, it should not come as a surprise that the surface area of a surface S is
A(S) = \iint_S dA,
Figure 46: The area of a parallelogram in R2 is given by the determinant of the defining edges.
but just as in the case of arc-length, this is an infeasible formula if we don't have a parameterization of the surface.
Thus let G : R ⊆ R^2 → R^3 be a parameterization of a surface S in R^3, and fix some (u_0, v_0) ∈ R^2. Applying infinitesimal translations du and dv to (u_0, v_0), we get the corresponding vectors
G(u, v + dv) - G(u, v) = \frac{\partial G}{\partial v} dv, \qquad G(u + du, v) - G(u, v) = \frac{\partial G}{\partial u} du.
These are the two vectors we will use to compute the area element. Just as in the R^3 case, we take their cross product to get
dA = \left| \frac{\partial G}{\partial u} \times \frac{\partial G}{\partial v} \right| du \, dv,
and integrating over the whole surface thus gives us our surface area
A(S) = \iint_R \left| \frac{\partial G}{\partial u} \times \frac{\partial G}{\partial v} \right| du \, dv.
If we use coordinates, this expression will even look a little like our arc-length formula. Set (x, y, z) = G(u, v), so that \frac{\partial G}{\partial u} = (x_u, y_u, z_u) and \frac{\partial G}{\partial v} = (x_v, y_v, z_v). Notice that
\left| \frac{\partial G}{\partial u} \times \frac{\partial G}{\partial v} \right| = \left| \det \begin{pmatrix} e_1 & e_2 & e_3 \\ x_u & y_u & z_u \\ x_v & y_v & z_v \end{pmatrix} \right| = \left| (y_u z_v - z_u y_v, \; z_u x_v - x_u z_v, \; x_u y_v - y_u x_v) \right| = \sqrt{ \left( \frac{\partial(y,z)}{\partial(u,v)} \right)^2 + \left( \frac{\partial(z,x)}{\partial(u,v)} \right)^2 + \left( \frac{\partial(x,y)}{\partial(u,v)} \right)^2 }.
This works if we have a parameterization of our surface, but what if we are given S as the graph of a C^1-function? Recall that if z = f(x, y) then we can write this parametrically as G(u, v) = (u, v, f(u, v)), in which case
\left| \frac{\partial G}{\partial u} \times \frac{\partial G}{\partial v} \right| = \sqrt{ 1 + \left( \frac{\partial f}{\partial x} \right)^2 + \left( \frac{\partial f}{\partial y} \right)^2 }.
Example 5.21
Find the surface area of surface defined by x2 + y 2 + z = 25, lying above the xy-plane.
Solution. We can write our surface as the graph of the function z = 25 - x^2 - y^2, so that
\frac{\partial z}{\partial x} = -2x, \qquad \frac{\partial z}{\partial y} = -2y,
and the surface element is dA = \sqrt{1 + 4x^2 + 4y^2} \, dx \, dy. The easiest way to integrate this is through polar coordinates. Notice that in this case we have z = 25 - r^2, and since z > 0 this implies that 0 ≤ r ≤ 5 and 0 ≤ θ ≤ 2π. Thus our integral becomes
A(S) = \iint_S \sqrt{1 + 4x^2 + 4y^2} \, dx \, dy = \int_0^{2\pi} \int_0^5 r\sqrt{1 + 4r^2} \, dr \, d\theta
= \frac{\pi}{4} \int_1^{101} \sqrt{u} \, du \qquad (u = 1 + 4r^2)
= \frac{\pi}{6} u^{3/2} \Big|_1^{101} = \frac{\pi}{6}\left(101^{3/2} - 1\right).
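This value can be confirmed with a quick numerical quadrature (an illustrative sketch; the sample count is an arbitrary choice):

```python
import math

# Midpoint-rule check of Example 5.21: A(S) = 2*pi * integral_0^5 r*sqrt(1 + 4r^2) dr.
n = 50_000
dr = 5.0 / n
total = 0.0
for k in range(n):
    r = (k + 0.5) * dr
    total += r * math.sqrt(1 + 4 * r * r) * dr
approx = 2 * math.pi * total

# Closed form derived in the example.
exact = math.pi / 6 * (101 ** 1.5 - 1)
```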
As with line integrals, surface integrals are going to depend on a choice of orientation, so what does it mean to orient a surface? An orientation of a surface S is a consistent (read: continuous) choice of unit normal vector to the surface. One can think of this as saying "I wish to everywhere define what it means to be right-handed," and an orientation does exactly that.
The idea of a surface integral is thus the following: given a vector field F : R^3 → R^3 and a surface S, we want to compute the flux of the vector field through the surface. Effectively, if we again think of a vector field as representing forces or the flow of a fluid, the flux represents the amount of force/fluid passing through S. The component of the vector field travelling in the direction n is F · n, and so the surface integral is given by integrating these components:
\iint_S F \cdot n \, dA.
Notice that F · n is precisely the vector field projected onto the normal of the surface, and so represents the part of the vector field passing through the surface. Of course, this is not easily computed without a parameterization. If G : R ⊆ R^2 → S ⊆ R^3 is such a parameterization, our flux becomes
\iint_S F \cdot n \, dA = \iint_R F(G(u, v)) \cdot \left( \frac{\partial G}{\partial u} \times \frac{\partial G}{\partial v} \right) du \, dv.
Example 5.22

Evaluate the flux of F(x, y, z) = (x^2 + y, y^2 z, x^2 y) through the surface S = [0, 1] × [0, 1] × {0}, oriented pointing in the positive z-direction.
Sometimes it is necessary to break our surfaces into several pieces in order to determine the
integral, as the next example demonstrates.
Example 5.23

Evaluate \iint_S F \cdot n \, dA where F(x, y, z) = (0, y, -z) and S is the surface defined by
S = \{ y = x^2 + z^2 : 0 ≤ y ≤ 1 \} \cup \{ x^2 + z^2 ≤ 1 : y = 1 \},
oriented with outward-pointing normal.
Solution. This space looks like a paraboloid, capped by the unit disk. Rather than trying to handle both parts of S at the same time, we break it into the paraboloid S_1 and the disk D separately.
We can parameterize the paraboloid as G(r, θ) = (r \cos θ, r^2, r \sin θ) with 0 ≤ r ≤ 1 and 0 ≤ θ ≤ 2π. Then
\frac{\partial G}{\partial r} = (\cos θ, 2r, \sin θ), \qquad \frac{\partial G}{\partial θ} = (-r \sin θ, 0, r \cos θ).
The cross product is then easily computed and we get
\frac{\partial G}{\partial r} \times \frac{\partial G}{\partial θ} = \left( 2r^2 \cos θ, \; -r, \; 2r^2 \sin θ \right).
r
160
2015
c Tyler Holden
5.7 The Divergence Theorem 5 Vector Fields
Hence
F(G(r, θ)) \cdot \left( \frac{\partial G}{\partial r} \times \frac{\partial G}{\partial θ} \right) = (0, r^2, -r \sin θ) \cdot (2r^2 \cos θ, -r, 2r^2 \sin θ) = -r^3\left( 1 + 2\sin^2 θ \right),
and integrating,
\iint_{S_1} F \cdot n \, dA = \int_0^{2\pi} \int_0^1 -r^3 (1 + 2\sin^2 θ) \, dr \, dθ = -\frac{1}{4} \cdot 4\pi = -\pi.
Now the tricky part of doing the cap is making sure that we choose a parameterization of the cap which gives the Stokes orientation (that is, always points in the positive y-direction). The student can verify that G(r, θ) = (r \cos θ, 1, r \sin θ) satisfies
\frac{\partial G}{\partial r} \times \frac{\partial G}{\partial θ} = (0, -r, 0),
so that this is actually oriented the wrong way! This is fine, and we can continue to work with this parameterization, so long as we remember to re-introduce a negative sign at the end of our computation. Now
\iint_D F \cdot n \, dA = \int_0^{2\pi} \int_0^1 (-r) \, dr \, dθ = -\pi,
so properly orienting gives the result π. Adding both contributions we get (-π) + π = 0, so we conclude that the flux is 0.
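Both pieces of this flux computation can be checked numerically. The sketch below is an illustration only; in particular the field F(x, y, z) = (0, y, -z) is the reconstruction used above (the extraction lost the minus signs), and the grid size is an arbitrary choice:

```python
import math

def flux(F, G, Gu, Gv, u1, v1, n=400):
    """Midpoint approximation of the flux of F through the surface G over
    [0, u1] x [0, v1], using the normal Gu x Gv (no re-orientation)."""
    total, du, dv = 0.0, u1 / n, v1 / n
    for i in range(n):
        for j in range(n):
            u, v = (i + 0.5) * du, (j + 0.5) * dv
            x, y, z = G(u, v)
            a, b, c = Gu(u, v)
            d, e, f = Gv(u, v)
            nx, ny, nz = b * f - c * e, c * d - a * f, a * e - b * d
            Fx, Fy, Fz = F(x, y, z)
            total += (Fx * nx + Fy * ny + Fz * nz) * du * dv
    return total

F = lambda x, y, z: (0.0, y, -z)   # reconstructed field from the example

# Paraboloid y = x^2 + z^2, parameterized by (r cos t, r^2, r sin t).
parab = flux(F,
             lambda r, t: (r * math.cos(t), r * r, r * math.sin(t)),
             lambda r, t: (math.cos(t), 2 * r, math.sin(t)),
             lambda r, t: (-r * math.sin(t), 0.0, r * math.cos(t)),
             1.0, 2 * math.pi)

# Cap y = 1: the natural parameterization has the wrong-way normal (0, -r, 0),
# so flip the sign afterwards, as in the text.
cap = -flux(F,
            lambda r, t: (r * math.cos(t), 1.0, r * math.sin(t)),
            lambda r, t: (math.cos(t), 0.0, math.sin(t)),
            lambda r, t: (-r * math.sin(t), 0.0, r * math.cos(t)),
            1.0, 2 * math.pi)
```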
5.7 The Divergence Theorem

The Divergence Theorem (or Gauss' Theorem) is the analog of Green's Theorem for surface integrals.

Theorem 5.24: Divergence Theorem

Let R ⊆ R^3 be a regular region with piecewise smooth boundary ∂R, oriented with outward-pointing normal n, and let F be a C^1 vector field on a neighbourhood of R. Then
\iint_{\partial R} F \cdot n \, dA = \iiint_R \operatorname{div} F \, dV.
Proof. As with Green's Theorem, we will only provide a very simplified proof which nonetheless captures the idea of the Divergence Theorem.
Assume that R is an xy-simple set, so that we can write it as
R = \{ (x, y, z) : (x, y) ∈ W, \; φ_1(x, y) ≤ z ≤ φ_2(x, y) \},
where W ⊆ R^2 is some regular region. Breaking the statement of the Divergence Theorem into its vector components, our aim is to show that
\iint_{\partial R} F_3 e_3 \cdot n \, dA = \iiint_R \frac{\partial F_3}{\partial x_3} \, dV,
where e_3 = (0, 0, 1) is the standard unit vector in the positive z-direction. Note that e_3 · n = 0 along the vertical sides lying over the boundary of W, while e_3 is consistent with the orientation of the top surface (x, y, φ_2(x, y)) and opposite to the orientation of the bottom surface (x, y, φ_1(x, y)). Thus,
\iint_{\partial R} F_3 e_3 \cdot n \, dA = \iint_W \left[ F_3(x, y, φ_2(x, y)) - F_3(x, y, φ_1(x, y)) \right] dx \, dy
= \iint_W \int_{φ_1(x,y)}^{φ_2(x,y)} \frac{\partial F_3}{\partial x_3}(x, y, z) \, dz \, dx \, dy
= \iiint_R \frac{\partial F_3}{\partial x_3} \, dV.
Example 5.25

Evaluate the flux of F through the boundary of the cube C defined by
C = \{ -1 ≤ x ≤ 1, \; -1 ≤ y ≤ 1, \; 0 ≤ z ≤ 2 \},
oriented so that the normal points outwards.
Solution. This would normally be a rather tedious exercise: the vector field provides no obvious symmetry, requiring that we compute the surface integral through each of the six faces of the cube separately and then add them all up (try it yourself!). With the Divergence Theorem however, this becomes much simpler. It is not too hard to see that the cube is a regular region and F is a C^1 vector field, hence the Divergence Theorem applies and
\iint_S F \cdot n \, dA = \iiint_C \operatorname{div} F \, dV
= \int_{-1}^1 \int_{-1}^1 \int_0^2 \left( 3y^2 + x \right) dz \, dy \, dx
= 2 \int_{-1}^1 \left[ y^3 + xy \right]_{y=-1}^{y=1} dx
= 4 \int_{-1}^1 (1 + x) \, dx = 8.
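A direct numerical check of this triple integral (an illustrative sketch; since the integrand 3y^2 + x does not depend on z, the z-integral just contributes a factor of 2):

```python
# Midpoint-rule integral of div F = 3y^2 + x over [-1,1] x [-1,1] x [0,2].
n = 200
h = 2.0 / n
total = 0.0
for i in range(n):
    x = -1 + (i + 0.5) * h
    for j in range(n):
        y = -1 + (j + 0.5) * h
        total += (3 * y * y + x) * h * h * 2.0   # factor 2 = length in z
```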
Example 5.26

Evaluate \iint_S F \cdot n \, dA, where S is the surface
S = \{ x^2 + y^2 + z = 4 \} \cup \{ x^2 + y^2 ≤ 4, \; z = 0 \},
oriented with outward-pointing normal.
Solution. This is an almost impossible exercise to approach from the definition, so instead we use the Divergence Theorem. One can easily compute that div F = 3x^2 + 3y^2. Hence if V is the filled paraboloid, so that ∂V = S, then our surface integral becomes
\iint_S F \cdot n \, dA = \iiint_V (3x^2 + 3y^2) \, dV = \iint_D \int_0^{4 - x^2 - y^2} 3(x^2 + y^2) \, dz \, dx \, dy,
where D is the disk x^2 + y^2 ≤ 4. Changing to polar coordinates (or cylindrical if we skip the previous step) gives
\int_0^{2\pi} \int_0^2 \int_0^{4 - r^2} 3r^3 \, dz \, dr \, dθ = 6\pi \int_0^2 r^3(4 - r^2) \, dr = 6\pi \left( 16 - \frac{64}{6} \right) = 32\pi.
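The final radial integral can be confirmed numerically (an illustrative sketch; the sample count is an arbitrary choice):

```python
import math

# After the Divergence Theorem: flux = 2*pi * integral_0^2 3 r^3 (4 - r^2) dr.
n = 100_000
dr = 2.0 / n
total = 0.0
for k in range(n):
    r = (k + 0.5) * dr
    total += 3 * r ** 3 * (4 - r * r) * dr
approx = 2 * math.pi * total
```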
5.8 Stokes' Theorem

Stokes' Theorem, in another form, is the ultimate theorem from which Green's Theorem and the Divergence Theorem are derived, albeit we will not likely see this form in this class. Hence we present to you the "baby" Stokes' Theorem. The idea of Stokes' Theorem is that we take a step back, and examine how one computes line integrals in R^3 in general.
Unsurprisingly, the theorem says something about looking at the boundary of a set. Since we know that such integrations depend upon orientation, we need to define our final notion of positive/Stokes orientation. Let S be a smooth surface in R^3 with piecewise smooth boundary ∂S. We say that ∂S is positively oriented, or endowed with the Stokes orientation with respect to S, if whenever t is the tangent vector of a parameterization of ∂S and n is the orientation of S, then n × t points into S. More informally, if we walk around ∂S with our body aligned with n, then S will be on our left.
Theorem 5.27: Stokes' Theorem

Let S be a smooth surface with piecewise smooth boundary ∂S, endowed with the Stokes orientation. If F : R^3 → R^3 is a C^1 vector field in a neighbourhood of S, then
\int_{\partial S} F \cdot dx = \iint_S (\operatorname{curl} F) \cdot n \, dA.
Proof. First note that if S is just a region in the xy-plane, then n = (0, 0, 1) and so
(\operatorname{curl} F) \cdot n = \frac{\partial F_2}{\partial x_1} - \frac{\partial F_1}{\partial x_2}.
Thus we see that Stokes' Theorem in the xy-plane is just Green's Theorem.
Now assume that S is a surface which does not live in one of the coordinate planes and let G : W → S be a parameterization of S, where the region W lives in the uv-plane. Furthermore, assume that G preserves boundaries and gives an orientation which coincides with the orientation of S (if G(u, v) gives the opposite orientation, just switch the roles of u and v).
The idea is that since the boundaries are preserved under G, and since Stokes' Theorem in the plane is just Green's Theorem, we will pull back the calculation to the uv-plane and apply Green's Theorem. As always, we shall do this component by component; in particular, we shall just look at the F_1 component. In effect, take F = (F_1, 0, 0), so that this amounts to showing
\int_{\partial S} F_1 \, dx_1 = \iint_S \left( 0, \frac{\partial F_1}{\partial x_3}, -\frac{\partial F_1}{\partial x_2} \right) \cdot n \, dA.
On the other hand, using the Chain Rule and Green's Theorem, the left-hand side yields
\int_{\partial W} F_1 \left( \frac{\partial x}{\partial u} \, du + \frac{\partial x}{\partial v} \, dv \right) = \iint_W \left[ \frac{\partial}{\partial u}\left( F_1 \frac{\partial x}{\partial v} \right) - \frac{\partial}{\partial v}\left( F_1 \frac{\partial x}{\partial u} \right) \right] du \, dv
= \iint_W \left[ \frac{\partial F_1}{\partial x_3} \frac{\partial(z, x)}{\partial(u, v)} - \frac{\partial F_1}{\partial x_2} \frac{\partial(x, y)}{\partial(u, v)} \right] du \, dv.
Example 5.28

Verify Stokes' Theorem for the vector field F(x, y, z) = (x, z, 2y) and the curve C parameterized by g(t) = (\cos t, \sin t, \cos t), t ∈ [0, 2π].

Solution. Along C we have F(g(t)) = (\cos t, \cos t, 2\sin t) and g'(t) = (-\sin t, \cos t, -\sin t), so that
\oint_C F \cdot dr = \int_0^{2\pi} (\cos θ, \cos θ, 2\sin θ) \cdot (-\sin θ, \cos θ, -\sin θ) \, dθ
= \int_0^{2\pi} \left[ -\cos θ \sin θ + \cos^2 θ - 2\sin^2 θ \right] dθ
= 0 + \pi - 2\pi = -\pi.
We can parameterize our surface in almost exactly the same way as the curve (though now we let our radius vary) as g(r, θ) = (r \cos θ, r \sin θ, r \cos θ). Hence
\frac{\partial g}{\partial r} = (\cos θ, \sin θ, \cos θ), \qquad \frac{\partial g}{\partial θ} = (-r \sin θ, r \cos θ, -r \sin θ),
giving an area element of
\frac{\partial g}{\partial r} \times \frac{\partial g}{\partial θ} = (-r, 0, r).
Since curl F = (1, 0, 0), integrating gives
\iint_S (\operatorname{curl} F) \cdot n \, dA = \int_0^{2\pi} \int_0^1 (1, 0, 0) \cdot (-r, 0, r) \, dr \, dθ = \int_0^{2\pi} \int_0^1 -r \, dr \, dθ = -2\pi \left[ \tfrac{1}{2} r^2 \right]_{r=0}^{1} = -\pi,
agreeing with the line integral.
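Both sides of this instance of Stokes' Theorem can be checked numerically. The sketch below is an illustration only; in particular the field F(x, y, z) = (x, z, 2y) is the reconstruction used above, chosen to be consistent with the displayed computation:

```python
import math

# Assumed (reconstructed) data: F(x, y, z) = (x, z, 2y), with constant
# curl F = (1, 0, 0), and the curve g(t) = (cos t, sin t, cos t).
n = 20_000
dt = 2 * math.pi / n
line = 0.0
for k in range(n):
    t = (k + 0.5) * dt
    Fx, Fy, Fz = math.cos(t), math.cos(t), 2 * math.sin(t)   # F(g(t))
    # g'(t) = (-sin t, cos t, -sin t)
    line += (Fx * -math.sin(t) + Fy * math.cos(t) + Fz * -math.sin(t)) * dt

# Surface side: curl F . (g_r x g_theta) = (1, 0, 0) . (-r, 0, r) = -r,
# integrated over 0 <= r <= 1 and 0 <= theta <= 2*pi.
m = 10_000
dr = 1.0 / m
surface = 2 * math.pi * sum(-((i + 0.5) * dr) * dr for i in range(m))
```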
Example 5.29

Solution. It is clear that ∂S is just the unit circle in the xy-plane, and so we can parameterize it as g(t) = (\cos t, \sin t, 0) for t ∈ [0, 2π]; however, the vector field makes this integral almost impossible to compute directly. Our goal is thus to use Stokes' Theorem, so we compute the curl to be
\nabla \times F = \left( \frac{\partial F_3}{\partial y} - \frac{\partial F_2}{\partial z}, \; \frac{\partial F_1}{\partial z} - \frac{\partial F_3}{\partial x}, \; \frac{\partial F_2}{\partial x} - \frac{\partial F_1}{\partial y} \right)
= \left( \frac{z}{2\sqrt{1 + x^2 + zy}}, \; x e^z - \frac{zy}{\sqrt{1 + x^2 + zy}}, \; x^2 \right).
Unfortunately, the unit normal on S is constantly changing and the integral is still rather horrific. However, one of the beautiful things about Stokes' Theorem is that it tells us that the line integral over C can be computed in terms of an integral over S, but it does not say which S that has to be. In particular, if there is a more convenient S to choose, we should take it!
We notice then that C is just the boundary of the unit disk S' = \{(x, y, 0) : x^2 + y^2 ≤ 1\}, over which the unit normal is the constant vector (0, 0, 1).
5.9 Tensor Products

We know that, given two vectors in the same space, there isn't a very meaningful way of multiplying them together to get a vector back. Sure, one can perform pointwise multiplication on the components, but the object that is returned is not useful for studying vector spaces. Furthermore, what happens if we want to multiply two vectors which come from different vector spaces?
We are faced with the following challenge: given two R-vector spaces V and W, is there a meaningful way to multiply them together? What is "meaningful"? Our motivation is the following two examples:
1. To devise a method for describing product states. This is especially useful in statistical and quantum mechanics, as will be described later.
2. To approximate or describe multilinear objects via linear objects, in the most efficient way
possible. This is the reason of greatest interest to mathematicians, and will be our principal
motivation.
Again, the important property here is the idea of multilinearity, which we define below:
Definition 5.30

Let V_1, …, V_n and W be vector spaces and T : V_1 × ⋯ × V_n → W be a map. We say that T is multilinear if for each i ∈ {1, …, n}, the map T is linear in the i-th component when all other components are held constant.
Remark 5.31 Consider the map f : R × R → R given by f(x, y) = xy. This map is multilinear since
f(x_1 + x_2, y) = (x_1 + x_2)y = x_1 y + x_2 y = f(x_1, y) + f(x_2, y),
and similarly f(x, y_1 + y_2) = f(x, y_1) + f(x, y_2). However, f is not a linear map of vector spaces, since
f\big( (x_1, y_1) + (x_2, y_2) \big) = f(x_1 + x_2, y_1 + y_2) = (x_1 + x_2)(y_1 + y_2)
= x_1 y_1 + x_1 y_2 + x_2 y_1 + x_2 y_2
= \left[ f(x_1, y_1) + f(x_2, y_2) \right] + \left[ f(x_1, y_2) + f(x_2, y_1) \right].
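The distinction in this remark is easy to test concretely. A small sketch (illustrative only; the sample values are arbitrary):

```python
# f(x, y) = x * y is linear in each slot separately...
f = lambda x, y: x * y

x1, x2, y1, y2 = 2.0, 3.0, 5.0, 7.0
slot1_additive = f(x1 + x2, y1) == f(x1, y1) + f(x2, y1)
slot2_additive = f(x1, y1 + y2) == f(x1, y1) + f(x1, y2)

# ...but it is NOT linear as a map R^2 -> R: the cross terms spoil additivity.
joint_additive = f(x1 + x2, y1 + y2) == f(x1, y1) + f(x2, y2)
```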
There are lots of interesting multilinear maps that appear in linear algebra, but the failure of their linearity means they cannot be properly studied within the realm of linear algebra (where only linear maps are permitted). For example, the determinant map is multilinear: if dim V = n then det : V × ⋯ × V → R (with n factors) is a multilinear map. The student can check that several other familiar maps are also multilinear (but not linear).
The properties that we would like our product to satisfy should be natural, in the sense that it should behave very much like a product; in particular, if we temporarily write the product of v ∈ V and w ∈ W as (v, w), then it should satisfy
(v_1 + v_2, w) = (v_1, w) + (v_2, w),
(v, w_1 + w_2) = (v, w_1) + (v, w_2),
(cv, w) = c(v, w) = (v, cw), \qquad c ∈ R.
In order to ensure that these things happen, we will effectively force them to happen.
Definition 5.32
Given a set S, we define the free vector space on S to be a vector space F (S) such that S is
a basis for F (S).
It turns out that free vector spaces are unique up to an invertible linear map (an isomorphism); this is easily seen since vector spaces are determined up to isomorphism by the cardinality of their dimension.
Example 5.33 If S = {v_1, v_2, v_3} then F(S) is the (real) vector space with S as a basis. In particular, the elements of S are linearly independent and span F(S), so every vector v ∈ F(S) can be written uniquely as
v = c_1 v_1 + c_2 v_2 + c_3 v_3
for some c_i ∈ R. If {e_i} are the standard basis vectors for R^3, then T : F(S) → R^3 given by T(v_i) = e_i is an invertible linear map, so that F(S) ≅ R^3.
To ensure that our desired properties hold, we will construct a vector space with these properties. Consider the space F(V × W), which is the free vector space whose basis is given by all the elements of V × W. Note that this is a very large vector space: if one of V and W is not the trivial vector space, then F(V × W) is an infinite dimensional vector space.
Next, we will consider the subspace S ⊆ F(V × W) generated by the elements
(v_1 + v_2, w) - (v_1, w) - (v_2, w), \qquad (v, w_1 + w_2) - (v, w_1) - (v, w_2),
(cv, w) - c(v, w), \qquad (v, cw) - c(v, w),
for v, v_i ∈ V, w, w_i ∈ W, and c ∈ R, and define the tensor product as the quotient V ⊗ W = F(V × W)/S, writing v ⊗ w for the image of (v, w).
Proposition 5.34

If V, W are real vector spaces and dim V = n, dim W = m, then the following facts hold true in the vector space V ⊗ W:
1. (v_1 + v_2) ⊗ w = v_1 ⊗ w + v_2 ⊗ w and v ⊗ (w_1 + w_2) = v ⊗ w_1 + v ⊗ w_2;
2. (cv) ⊗ w = c(v ⊗ w) = v ⊗ (cw) for all c ∈ R;
3. if {e_i} is a basis for V and {f_j} is a basis for W, then {e_i ⊗ f_j} is a basis for V ⊗ W, so that dim(V ⊗ W) = nm;
4. every bilinear map f : V × W → U corresponds to a linear map F : V ⊗ W → U satisfying F(v ⊗ w) = f(v, w).

This proposition is fairly involved, so we will omit its proof. However, note that property (4) in particular tells us that we can use the tensor product to turn multilinear maps into linear maps over a different vector space, and hence study those maps using the tools of linear algebra. In fact, the correspondence in (4) is bijective, so we will sometimes not distinguish between f and F.
Dual Spaces: If V is a vector space, we define its dual vector space, denoted V^*, as
V^* = \{ f : V → R : f \text{ is linear} \}.
The student can check that this is a vector space, and that if dim V = n then dim V^* = n. Furthermore, there is a canonical isomorphism Φ : V → (V^*)^* defined by Φ(v)f = f(v).
Let {e_i} be a basis for V. We define the dual basis {f_i} of V^* to be the basis which satisfies the condition
f_i(e_j) = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}.
Now if f : V^k → R is a multilinear map, then the corresponding linear function on the tensor product space (which we will also denote by f) is a linear map f : V^{\otimes k} → R; that is, f ∈ (V^{\otimes k})^* ≅ (V^*)^{\otimes k}. Since {f_i} is a basis for V^*, the set {f_{i_1} ⊗ ⋯ ⊗ f_{i_k}} is a basis for (V^*)^{\otimes k}, and hence we can write
f = \sum_{(i_1, \ldots, i_k)} c_{i_1, \ldots, i_k} \, f_{i_1} ⊗ ⋯ ⊗ f_{i_k}.
For notation's sake, this is rather clumsy. Recall that in the discussion of Taylor series we learned about multi-indices. If I = (i_1, …, i_k) we will denote f_I = f_{i_1} ⊗ ⋯ ⊗ f_{i_k}, and we can rewrite the above as
f = \sum_I c_I f_I.
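This expansion can be made concrete: evaluating a multilinear map on tuples of basis vectors produces its coefficients c_I, and the sum over the dual tensor basis reproduces the map. A small sketch (illustrative only; the bilinear map chosen here is the 2 × 2 determinant):

```python
from itertools import product

# A bilinear map f : R^2 x R^2 -> R (here f(v, w) = det[v w]).
def f(v, w):
    return v[0] * w[1] - v[1] * w[0]

e = [(1.0, 0.0), (0.0, 1.0)]  # standard basis of R^2

# Coefficients c_I = f(e_{i1}, e_{i2}) in the basis {f_{i1} (x) f_{i2}}.
c = {I: f(e[I[0]], e[I[1]]) for I in product(range(2), repeat=2)}

# Reconstruct f from the expansion f = sum_I c_I f_I, where the dual-basis
# tensor acts by f_I(v, w) = v[i1] * w[i2].
def f_expanded(v, w):
    return sum(cI * v[I[0]] * w[I[1]] for I, cI in c.items())
```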
We can go one step further, and say that if f : V_1 → W_1 and g : V_2 → W_2 are both linear maps, there is an induced map f ⊗ g : V_1 ⊗ V_2 → W_1 ⊗ W_2 given by
(f ⊗ g)(v_1 ⊗ v_2) = f(v_1) ⊗ g(v_2).
If {f_i} and {h_j} are dual bases for V_1^* and V_2^* respectively, and we write f ∈ V_1^* and g ∈ V_2^* in terms of these bases as
f = \sum_i c_i f_i, \qquad g = \sum_j d_j h_j,
then their product is
f ⊗ g = \sum_{i,j} c_i d_j \, f_i ⊗ h_j.
Exercise:
1. Check that (V^*)^{\otimes k} ≅ (V^{\otimes k})^*.
So tensor products give us a means of studying multilinear maps using linear tools, so long as we are willing to modify our vector space. There are two very important types of multilinear maps in which one is typically interested. Let f : V × ⋯ × V → W be a multilinear map. We say that f is symmetric if
f(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n) = f(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n),
and anti-symmetric (or alternating) if
f(v_1, \ldots, v_j, \ldots, v_i, \ldots, v_n) = -f(v_1, \ldots, v_i, \ldots, v_j, \ldots, v_n).
Symmetric tensors often arise in the study of inner products or hermitian products, since those maps are symmetric multilinear. However, this is not the goal of our discussion, so we will not spend much time thinking about symmetric maps. Instead, we will be more interested in anti-symmetric maps.
Proposition 5.35

This proposition is not too difficult and its proof is left as an exercise for the student. It can be shown that the collection of k-multilinear alternating maps is a vector subspace of the space of k-multilinear maps, and as such we will denote this set by Λ^k(V^*). To determine the dimension of this subspace, we need to introduce a basis:
Let V be a vector space with basis {e_i} and let {f_i} be the dual basis for V^*. If I = (i_1, …, i_k) is a multi-index, define the map f^I : V^k → R by
f^I(v_1, \ldots, v_k) = \det \begin{pmatrix} f_{i_1}(v_1) & f_{i_1}(v_2) & \cdots & f_{i_1}(v_k) \\ f_{i_2}(v_1) & f_{i_2}(v_2) & \cdots & f_{i_2}(v_k) \\ \vdots & \vdots & \ddots & \vdots \\ f_{i_k}(v_1) & f_{i_k}(v_2) & \cdots & f_{i_k}(v_k) \end{pmatrix}.
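Because f^I is a determinant, swapping two of its arguments flips the sign, which is exactly the alternating property. A small sketch for k = 2 in R^3 (illustrative only; f_i here picks out the i-th coordinate, i.e. the standard dual basis):

```python
# f^I(v1, v2) = det of the 2 x 2 matrix with (l, j) entry f_{i_l}(v_j),
# where f_i(v) = v[i] is the standard dual basis of R^3.
def f_I(I, v1, v2):
    i1, i2 = I
    return v1[i1] * v2[i2] - v2[i1] * v1[i2]

v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0)
a = f_I((0, 2), v, w)   # uses coordinates 0 and 2 of each vector
b = f_I((0, 2), w, v)   # swapping the arguments flips the sign
```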
Proposition 5.36

If V is an n-dimensional vector space with dual basis {f_i} for V^*, the set
\{ f^{(i_1, \ldots, i_k)} : i_1 < i_2 < \cdots < i_k \}
is a basis for Λ^k(V^*). Consequently, dim Λ^k(V^*) = \binom{n}{k}.
We define the wedge product as the following map on the f^I defined above,
f^I ∧ f^J = f^{IJ},
and extend linearly.
Proposition 5.37

1. The wedge product is bilinear;
2. if v ∈ Λ^k(V^*) and w ∈ Λ^ℓ(V^*), then v ∧ w = (-1)^{kℓ} w ∧ v;
3. associativity: (u ∧ v) ∧ w = u ∧ (v ∧ w);
4. if I = (i_1, …, i_k) then f^I = f^{i_1} ∧ ⋯ ∧ f^{i_k}.
Okay, that is enough about tensors in general. It is now time to look at differential forms and how they are defined. Let S be an n-manifold, and for each p ∈ S let V_p be the tangent space at p. Choose a basis {v_1^p, …, v_n^p} for this tangent space, and let {dx_1^p, …, dx_n^p} be a basis of its dual space V_p^*. A differential k-form is a C^1 function ω : S → ⨆_{p∈S} Λ^k(V_p^*); that is, a function which assigns to every point p ∈ S an alternating k-tensor built from the dual space of the tangent space at p. The collection of differential k-forms on S is denoted Ω^k(S).
Let us consider the case when S is a 3-manifold.
The 1-forms are functions which look like p ↦ f(p) dx_1^p + g(p) dx_2^p + h(p) dx_3^p. We will often drop the p-dependence and just write f dx_1 + g dx_2 + h dx_3.
The 2-forms look like f dx_1 ∧ dx_2 + g dx_1 ∧ dx_3 + h dx_2 ∧ dx_3.
Exterior Derivative: The exterior derivative is a map d : Ω^k(S) → Ω^{k+1}(S) defined as follows. If f : S → R is a function, then
df = \sum_{k=1}^n \frac{\partial f}{\partial x_k} dx_k,
and d extends to a general k-form ω = \sum_I f_I \, dx_I by dω = \sum_I df_I ∧ dx_I.
Relation to Vector Fields: In R^3 there are ways to realize differential forms as vector fields. Identify the 1-form F_1 dx + F_2 dy + F_3 dz with the vector field F = (F_1, F_2, F_3), and identify the 2-form F_1 dy∧dz - F_2 dx∧dz + F_3 dx∧dy with the vector field F = (F_1, F_2, F_3).
These identifications allow us to realize the exterior derivative as our vector derivatives: gradient, curl, and divergence. Indeed, if f : S → R is a 0-form/function, then
df = \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial z} dz \longleftrightarrow \left( \frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}, \frac{\partial f}{\partial z} \right) = \nabla f.
If ω = F_1 dx + F_2 dy + F_3 dz ↔ F = (F_1, F_2, F_3), then dω is the 2-form corresponding to ∇ × F; similarly, if ω is the 2-form corresponding to F, then dω = (div F) dx ∧ dy ∧ dz.
Interestingly, one can show that d ∘ d = 0 regardless of the dimension of the manifold and the forms to which it is being applied.
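Under the identifications above, d ∘ d = 0 specializes to the familiar identities curl(grad f) = 0 and div(curl F) = 0, which can be checked symbolically (a sketch using sympy; the sample function and field are arbitrary choices):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# d(df) = 0 corresponds to curl(grad f) = 0.
f = x**2 * sp.sin(y) * sp.exp(z)
grad = [sp.diff(f, v) for v in (x, y, z)]
curl_grad = [sp.diff(grad[2], y) - sp.diff(grad[1], z),
             sp.diff(grad[0], z) - sp.diff(grad[2], x),
             sp.diff(grad[1], x) - sp.diff(grad[0], y)]

# d(d omega) = 0 for a 1-form corresponds to div(curl F) = 0.
Fv = [x * y * z, sp.cos(x) + y**2, sp.exp(x * z)]
curl_F = [sp.diff(Fv[2], y) - sp.diff(Fv[1], z),
          sp.diff(Fv[0], z) - sp.diff(Fv[2], x),
          sp.diff(Fv[1], x) - sp.diff(Fv[0], y)]
div_curl = sum(sp.diff(curl_F[i], v) for i, v in enumerate((x, y, z)))
```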
Pullbacks: Let F : S → T be a function between manifolds, and let {dx_1, …, dx_n} be differential forms on T. One can define the pullback of a differential form on T to be the differential form on S given by
F^*(f \, dx_1 ∧ \cdots ∧ dx_n) = (f ∘ F) \, d(x_1 ∘ F) ∧ \cdots ∧ d(x_n ∘ F),
where x_i ∘ F = F_i is the i-th component of the function F. For example, let S = [0, 1] × [0, 2π] and T = D^1 where D^1 is the unit disk. Define the map F : S → T by F(r, θ) = (r \cos θ, r \sin θ). The pullback of the form dx ∧ dy is then given by
F^*(dx ∧ dy) = d(r \cos θ) ∧ d(r \sin θ) = (\cos θ \, dr - r \sin θ \, dθ) ∧ (\sin θ \, dr + r \cos θ \, dθ) = r \, dr ∧ dθ.
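The coefficient r appearing in the pullback of dx ∧ dy under the polar map is just the Jacobian determinant, which can be checked symbolically (a sketch using sympy):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)

# The polar-coordinate map F(r, theta) = (r cos theta, r sin theta).
X = r * sp.cos(th)
Y = r * sp.sin(th)

# F*(dx ^ dy) = det(Jacobian) dr ^ dtheta; the determinant should be r.
J = sp.Matrix([[sp.diff(X, r), sp.diff(X, th)],
               [sp.diff(Y, r), sp.diff(Y, th)]])
coeff = sp.simplify(J.det())
```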
In fact, if we think carefully about how differential forms are defined, then if F : S → T is a diffeomorphism with {x_1, …, x_n} the coordinates of S and {y_1, …, y_n} the coordinates of T, then
F^*(f \, dy_1 ∧ \cdots ∧ dy_n) = (f ∘ F) \det\left( \frac{\partial F_i}{\partial x_j} \right) dx_1 ∧ \cdots ∧ dx_n.
Effectively, this allows us to write the Change of Variable Theorem as follows:
Theorem 5.38: Change of Variables

If F : S → T is a diffeomorphism and ω is an n-form on T, then
\int_T ω = \int_S F^* ω.

Note however that this version of the Change of Variables Theorem does keep track of orientation, so it is not quite identical to Theorem 4.38.
Stokes' Theorem: The power of differential forms is that they allow us to generalize Stokes' Theorem to higher dimensions, and to see in fact that Green's Theorem, the Divergence Theorem, and the baby Stokes' Theorem are all equivalent. As with all the aforementioned cases, one needs some suitable notion of the orientation of the boundary with respect to the thing which it bounds. As a general rule, we orient the boundary in a manner that points inwards.
Theorem 5.39: Stokes' Theorem

If S is a smooth k-manifold with (suitably oriented) boundary ∂S and ω is a (k-1)-form on S, then
\int_{\partial S} ω = \int_S dω.
Our notions of closed and exact vector fields now extend to the context of differential forms.

Definition 5.40

Let ω be an n-form on a set U ⊆ R^k. We say that ω is closed if dω = 0, and exact if there is an (n-1)-form η such that ω = dη.
It was mentioned before that d ∘ d = 0. This means that all exact forms are closed, since if ω = dη then dω = d(dη) = 0. In particular, this means that B^k(S) ⊆ Z^k(S), where Z^k(S) denotes the closed k-forms on S and B^k(S) the exact k-forms. In general, the converse is not true. For example, the one-form
-\frac{y}{x^2 + y^2} dx + \frac{x}{x^2 + y^2} dy \; ∈ \; Ω^1(R^2 \setminus \{0\})
is closed but not exact. As in the case of conservative vector fields, the problem is somehow captured by the presence of the hole at the origin. If our space does not have holes, then all closed forms will be exact, as exemplified by the following generalized version of Poincaré's Lemma:
Theorem 5.41: Poincaré's Lemma

If U ⊆ R^k is a star-shaped open set, then every closed form on U is exact.
Finally, one can precisely measure the failure of closed forms to be exact by computing the de Rham cohomology of a space. Let S be a smooth k-manifold, and recognize that we have the following chain of vector spaces:
0 \xrightarrow{\;d\;} Ω^0(S) \xrightarrow{\;d\;} Ω^1(S) \xrightarrow{\;d\;} \cdots \xrightarrow{\;d\;} Ω^{k-1}(S) \xrightarrow{\;d\;} Ω^k(S) \xrightarrow{\;d\;} 0.
The kernel of d consists of the closed forms, and the image of d consists of the exact forms, so one defines the k-th de Rham cohomology as
H^k_{dR}(S) = Z^k(S)/B^k(S) = \ker\left( d : Ω^k(S) → Ω^{k+1}(S) \right) \big/ \operatorname{im}\left( d : Ω^{k-1}(S) → Ω^k(S) \right).