Introduction To Vector Spaces, Vector Algebras, and Vector Geometries
Richard A. Smith
Preface
Vector spaces appear so frequently in both pure and applied mathematics that I felt that a work that promotes self-study of some of their more elementary appearances, in a manner that emphasizes some basic algebraic and geometric concepts, would be welcome. Undergraduate mathematics majors in their junior or senior year are the intended primary readers, and I hope that they and others sufficiently mathematically prepared will find its study worthwhile.
Copyright 2005-2010. Permission granted to transmit electronically. May
not be sold by others for profit.
Richard A. Smith
[email protected]
Contents
1 Fundamentals of Structure
1.1 What is a Vector Space?
1.2 Some Vector Space Examples
1.3 Subspace, Linear Combination and Span
1.4 Independent Set, Basis and Dimension
1.5 Sum and Direct Sum of Subspaces
1.6 Problems
2 Fundamentals of Maps
2.1 Structure Preservation and Isomorphism
2.2 Kernel, Level Sets and Quotient Space
2.3 Short Exact Sequences
2.4 Projections and Reflections on V = W ⊕ X
2.5 Problems
5 Vector Algebras
5.1 The New Element: Vector Multiplication
5.2 Quotient Algebras
5.3 Algebra Tensor Product
5.4 The Tensor Algebras of a Vector Space
5.5 The Exterior Algebra of a Vector Space
5.6 The Symmetric Algebra of a Vector Space
5.7 Null Pairs and Cancellation
5.8 Problems
Chapter 1
Fundamentals of Structure
Four things which one might reasonably expect to be true are true.
Exercise 1.3.1 As a vector space over itself, R has no proper nonzero subspace. The set of all integers is a subgroup, but not a subspace.
Exercise 1.3.3 For any set S of vectors, ⟨S⟩ is the intersection of all subspaces that contain S, and therefore ⟨S⟩ itself is the only subspace of ⟨S⟩ that contains S.
From the result of the exercise above and Zorn's Lemma applied to independent sets which contain a given independent set, we obtain the following important result.
Theorem 2 Every vector space has a basis, and, more generally, every independent set is contained in some basis.
Exercise 1.4.4 Consider a finite spanning set. Among its subsets that also
span, there is at least one of smallest size. Thus a basis must exist for any
vector space that has a finite spanning set, independently verifying what we
already know from Theorem 2.
Exercise 1.4.5 Over any field F, the set of the n n-tuples that have a 1 in one position and 0 in all other positions is a basis for F^n. (This basis is called the standard basis for F^n over F.)
Exercise 1.4.6 A finite set of vectors is a basis for a vector space if and
only if each vector in the vector space has a unique representation as a linear
combination of this set: {x1 , . . . , xn } (with distinct xi , of course) is a basis
if and only if each v = a1 · x1 + · · · + an · xn for unique scalars a1 , . . . , an .
Exercise 1.4.7 If S is a finite independent set, Σ_{s∈S} cs · s = Σ_{s∈S} ds · s implies cs = ds for all s.
Example 7 In this brief digression we now apply the preceding two propositions. Let v0, v1, . . . , vn be vectors in a vector space over the field F, and suppose that v0 is in the span V of the other vi. Then the equation
ξ1 · v1 + · · · + ξn · vn = v0,
which we may rewrite as
ξ1 · v1 + · · · + ξm · vm = v0 − ξm+1 · vm+1 − · · · − ξn · vn
(supposing, after renumbering if necessary, that {v1, . . . , vm} is a basis for V), has a unique solution for ξ1, . . . , ξm for each fixed set of values we give the other ξi, since the right-hand side is always in V. Thus ξ1, . . . , ξm are functions of the variables ξm+1, . . . , ξn, where each of these variables is allowed to range freely over the entire field F. When F is the field R of real numbers, we have deduced a special case of the Implicit Function Theorem of multivariable calculus. When the vi are d-tuples (i. e., elements of F^d), the original vector equation is the same thing as the general set of d consistent numerical linear equations in n unknowns, and our result describes how, in the general solution, n − m of the ξi are arbitrary parameters upon which the remaining m of the ξi depend.
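To see the parameter dependence concretely, here is a minimal numerical sketch (Python with NumPy; the vectors are made up for illustration and are not from the text). For each value assigned to the free variable ξ3, the remaining ξ1, ξ2 are uniquely determined:

    import numpy as np

    # Hypothetical data: v1, v2 independent; v3 = v1 + v2 dependent on them.
    v1 = np.array([1.0, 0.0, 1.0])
    v2 = np.array([0.0, 1.0, 1.0])
    v3 = v1 + v2
    v0 = 2.0 * v1 + 3.0 * v2          # v0 lies in the span V

    for xi3 in (0.0, 1.0, -2.5):      # let the free variable range at will
        rhs = v0 - xi3 * v3           # right-hand side stays in V
        sol, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), rhs, rcond=None)
        xi1, xi2 = sol                # the dependent variables
        assert np.allclose(xi1 * v1 + xi2 * v2 + xi3 * v3, v0)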
We have now reached a point where we are able to give the following key result: if B is a basis for the vector space V and C is an independent set in V, then the elements of C may, one at a time, replace elements of B in such a way that each set obtained along the way is still a basis for V.
Proof: The theorem holds in the case when no elements are replaced.
Suppose that the elements y1, . . . , yN of C have replaced N elements of B and the resulting set BN is still a basis for V. An element yN+1 of C ∖ {y1, . . . , yN} has a unique representation as a linear combination with nonzero coefficients of some finite nonempty subset X = {x1, . . . , xK} of BN. There must be an element x∗ ∈ X ∖ {y1, . . . , yN}, because yN+1 cannot be a linear combination of {y1, . . . , yN} since C is independent. In BN replace x∗ with yN+1. Clearly, the result BN+1 still spans V.
The assumption that x ∈ BN+1 is a linear combination of the other elements of BN+1 will be seen to always contradict the independence of BN, proving that BN+1 is independent. For if x = yN+1, then we can immediately solve for x∗ as a linear combination of the other elements of BN. And if x ≠ yN+1, writing x = a · yN+1 + y, where y is a linear combination of elements of BN+1 ∖ {x, yN+1}, and substituting for yN+1 its unique representation in terms of X, we obtain a nontrivial linear dependence among elements of BN, again a contradiction.
Corollary 9 If a vector space has a finite basis, then each of its bases is
finite, and each has the same number of elements.
Exercise 1.4.9 Let the vector space V have the same finite dimension as its
subspace U. Then U = V.
Proof: Let B be an infinite basis for a vector space and let C be another basis for the same space. For each y ∈ C let Xy be the finite nonempty subset of B such that y is the linear combination of its elements with nonzero coefficients. Then B = ⋃_{y∈C} Xy, for otherwise some element of B would lie in the span of the others. Since C must then also be infinite, for each y we have |Xy| ≤ |C| and therefore |B| ≤ |C × C| = |C|, where the last equality is a well-known, but not so easily proved, result found in books on set theory and in online references such as the Wikipedia entry for Cardinal number. Similarly, we find |C| ≤ |B|. The Schroeder-Bernstein theorem then gives |B| = |C|.
Exercise 1.5.1 For any sets of vectors, the sum of their spans is the span
of their union.
Exercise 1.5.2 Each sum of subspaces is equal to the set of all finite sums
of vectors selected from the various subspace summands, no two vector sum-
mands being selected from the same subspace summand.
Example 11 In R2 , let X denote the x-axis, let Y denote the y-axis, and
let D denote the line where y = x. Then (X + Y) ∩ D = D while (X ∩ D) +
(Y ∩ D) = 0.
Proof: Σ′ ⊂ Σ, where Σ denotes the sum of all the summands with the exception of U and Σ′ denotes the sum in question. By the definition of direct sum, U ∩ Σ = 0, and hence U ∩ Σ′ = 0, as was to be shown.
This lemma leads immediately to the following useful result.
1.6 Problems
1. Assuming only that + is a group operation, not necessarily that of an
Abelian group, deduce from the rules for scalars operating on vectors
that u + v = v + u for any vectors v and u.
2. Let T be a subspace that does not contain the vectors u and v. Then
v ∈ ⟨T ∪ {u}⟩ ⇔ u ∈ ⟨T ∪ {v}⟩.
4. Let B and B′ be bases for the same vector space. Then for every x ∈ B there exists x′ ∈ B′ such that both (B ∖ {x}) ∪ {x′} and (B′ ∖ {x′}) ∪ {x} are also bases.
Chapter 2
Fundamentals of Maps
Exercise 2.1.1 Let V and W be vector spaces over the same field and let S
be a spanning set for V. Then any map f : V → W is already completely
determined by its values on S.
Theorem 19 Let V and W be vector spaces over the same field and let B be
a basis for V. Then given any function f0 : B → W, there is a unique map
f : V → W such that f agrees with f0 on B.
A map is one-to-one if and only if it sends each independent set to an independent set, as we now prove.
Proof: A one-to-one map sends its domain bijectively onto its image and this image is a subspace of the codomain. Suppose a set in the one-to-one map's image is dependent. Then clearly the inverse image of this set is also dependent. An independent set therefore cannot be sent to a dependent set by a one-to-one map.
On the other hand, suppose that a map sends the distinct vectors u and
v to the same image vector. Then it sends the nonzero vector v − u to 0, and
hence it sends the independent set {v − u} to the dependent set {0}.
It is easy to see that any map preserves dependent sequences, and one-
to-one maps preserve dependent sets.
Because their inverses are also maps, the bijective maps are the isomorphisms of vector spaces. If there is a bijective map from the vector space V onto the vector space W, we say that W is isomorphic to V. The notation V ≅ W is commonly used to indicate that V and W are isomorphic. Viewed as a relation between vector spaces, isomorphism is reflexive, symmetric and transitive, hence is an equivalence relation. If two spaces are isomorphic to each other we say that each is an alias of the other.
Theorem 22 Let two vector spaces be over the same field. Then they are
isomorphic if and only if they have bases of the same cardinality.
The following well-known and often useful result may seem somewhat surprising on first encounter: a map between vector spaces of equal finite dimension is one-to-one if and only if it is onto. For suppose such a map is onto. Since its domain and codomain have equal dimension, the image of a basis must in fact then be a minimal spanning set for the codomain, and therefore a basis for it. Expressing elements in terms of a basis and its image, we find that, due to unique representation, if the map sends u and v to the same element, then u = v.
For infinite-dimensional spaces, the preceding theorem fails to hold. It
is a pleasant dividend of finite dimension, analogous to the result that a
function between equinumerous finite sets is one-to-one if and only if it is
onto.
Proposition 26 Let L be a level set and K be the kernel of the map f . Then
for any v in L, L = v + K = {v + x | x ∈ K}.
Given any vector v in the domain of f , it must be in some level set, and
now we know that this level set is v + K, independent of the f which has
domain V and kernel K.
Corollary 28 Maps with the same domain and the same kernel have iden-
tical level set families. For maps with the same domain, then, the kernel
uniquely determines the level sets.
Corollary 30 V/K depends only on V and K, and not on the map f that
has domain V and kernel K.
The next result tells us, for one thing, that kernel subspaces are not special, and hence the quotient space exists for any subspace K. (In the vector space V, the subspace K̄ is a complementary subspace, or complement, of the subspace K if K and K̄ have disjoint bases that together form a basis for V, or, what is the same, V = K ⊕ K̄. Every subspace K of V has at least one complement K̄, because any basis A of K is contained in some basis B of V, and we may then take K̄ = ⟨B ∖ A⟩.)
If we specify domain V and kernel K, we have the level sets turned into
the vector space V/K as an alias of the image of any map with that domain
and kernel. Thus V/K can be viewed as a generic image for maps with
domain V and kernel K. To round out the picture, we now single out a map
that sends V onto V/K.
V → V/K ←→ Image(f) ↪ W
where V → V/K is generic, denoting the natural projection p, and the remainder is the nongeneric specialization to f via an isomorphism and an
inclusion map. Our next result details how the generic map p in fact serves
in a more general way as a universal factor of maps with kernel containing
K.
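As a quick illustration of level sets and cosets, here is a minimal Python sketch with hypothetical choices V = R², K = the x-axis, and f(x, y) = y (all names are ours, not the text's):

    # The level sets of f are exactly the cosets v + K.
    def f(v):
        x, y = v
        return y                      # kernel K = {(x, 0)}

    def coset_rep(v):
        x, y = v
        return (0.0, y)               # a canonical representative of v + K

    u, w = (3.0, 5.0), (-1.0, 5.0)
    assert f(u) == f(w)                   # same level set ...
    assert coset_rep(u) == coset_rep(w)   # ... same coset of K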
Exercise 2.3.1 Let V be R3 , let H be the (x, y)-plane, and let K be the
x-axis. Interpret the theorem above for this case.
Now consider the diagram below where X and Y are subspaces and again
the rows and columns are short exact sequences and the second maps are
inclusions while the third maps are natural projections.
0 0
↓ ↓
0 → X ∩Y → Y → Y/(X ∩ Y) → 0
↓ ↓
0 → X → X +Y → (X + Y)/X → 0
↓ ↓
X /(X ∩ Y) (X + Y)/Y
↓ ↓
0 0
That these really are maps is easy to establish. We call PW|X the projection
onto W along X and we call RW|X the reflection in W along X . (The
function φ in the proof of Proposition 31 was the projection onto K̄ along K,
so we have already employed the projection type to advantage.) Denoting
the identity map on V by I, we have
PX |W = I − PW|X
and
RW|X = PW|X − PX |W = I − 2PX |W = 2PW|X − I.
It bears mention that a given W generally has many different complements,
and if X and Y are two such, PW|X and PW|Y will generally differ.
The image of PW|X is W and its kernel is X . Thus PW|X is a self-map
with the special property that its image and its kernel are complements. The
image and kernel of RW|X are also complements, but trivially, as the kernel
of RW|X is 0 = {0}. The kernel of RW|X is 0 because w and x are equal only
if they are both 0 since W ∩ X = 0 from the definition of direct sum.
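The following minimal NumPy sketch (made-up complementary subspaces of R², not from the text) realizes PW|X and RW|X as matrices and checks the identities above:

    import numpy as np

    # W = <(1, 0)>, X = <(1, 1)>; the columns of B form a basis of V = R^2.
    B = np.array([[1.0, 1.0],
                  [0.0, 1.0]])
    # P_{W|X}: keep the W-component of v = w + x, discard the X-component.
    P = B @ np.diag([1.0, 0.0]) @ np.linalg.inv(B)
    R = 2.0 * P - np.eye(2)               # R_{W|X} = 2 P_{W|X} - I

    v = np.array([3.0, 2.0])              # v = 1*(1,0) + 2*(1,1)
    assert np.allclose(P @ v, [1.0, 0.0])
    assert np.allclose(P @ P, P)          # idempotent
    assert np.allclose(R @ R, np.eye(2))  # a reflection is an involution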
A double application of PW|X has the same effect as a single application: PW|X² = PW|X ◦ PW|X = PW|X (PW|X is idempotent). We then have PW|X ◦ PX|W = PW|X ◦ (I − PW|X) = PW|X − PW|X² = 0, and similarly PX|W ◦ PW|X = 0.
2.5 Problems
8. There is a map on R2 that sends all three of (1,0), (1,1), and (1,2) to the
same nonzero vector and therefore sends a dependent set to an independent
one.
Chapter 3
More on Maps and Structure
Example 40 Let N denote the natural numbers {0, 1, . . .}. Let V be the subspace of the function space R^N consisting of only those real sequences a0, a1, . . . that are ultimately zero, i. e., for which there exists a smallest N ∈ N (called the length of the ultimately zero sequence), such that an = 0 for all n ≥ N. Then φ : V → R defined by φ(a0, a1, . . .) = Σ_{n∈N} αn an is a linear functional given any (not necessarily ultimately zero) sequence of real coefficients α0, α1, . . .. (Given α0, α1, . . . that is not ultimately zero, it cannot be replaced by an ultimately zero sequence that produces the same functional, since there would always be sequences a0, a1, . . . of greater length for which the results would differ.) The B# derived from the basis B = {(1, 0, 0, . . .), (0, 1, 0, 0, . . .), . . .} for V does not span V⊤ because B# only contains elements corresponding to sequences α0, α1, . . . that are ultimately zero. Also note that by Theorem 19, each element of V⊤ is determined by an unrestricted choice of value at each and every element of B.
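A short Python sketch of this example, representing an ultimately zero sequence by the finite list of its terms up to its length (the function names are ours, not the text's):

    def phi(alpha, a):
        # alpha: an arbitrary coefficient sequence, as a function n -> alpha_n
        # a: an ultimately zero sequence, as the finite list [a0, a1, ...]
        return sum(alpha(n) * an for n, an in enumerate(a))

    alpha = lambda n: 1.0 / (n + 1)   # not ultimately zero, yet phi is defined
    assert phi(alpha, [2.0, 0.0, 6.0]) == 4.0   # 2/1 + 0/2 + 6/3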
Suppose that v and w are in V and that Fv = Fw, so that for all f ∈ V⊤, f(v) = f(w). Then f(v − w) = 0 for every f ∈ V⊤. Let B be a basis for V, and for each x ∈ B let x⊤ be the corresponding coordinate function. Let X be a finite subset of B such that v − w = Σ_{x∈X} ax · x. Applying x⊤ to v − w for each x ∈ X, we conclude that there is no nonzero ax. Hence v = w, and Θ is one-to-one as claimed.
Thus it has been shown that each vector v ∈ V defines a unique element Fv ∈ V⊤⊤. But the question still remains as to whether Θ is onto, that is, whether every F ∈ V⊤⊤ is an Fv for some v ∈ V. We claim that Θ is onto if V is finite-dimensional. This is because if we have bases B and B⊤ as above and we form the basis B⊤⊤ = {x1⊤⊤, . . . , xn⊤⊤} dual to B⊤, we readily verify that xi⊤⊤ = Fxi for each i. Thus for F = a1 · x1⊤⊤ + · · · + an · xn⊤⊤ and v = a1 · x1 + · · · + an · xn, we have F = Fv.
We have proved the following.
Θ does not depend on any choice of basis, so V and Θ(V) ⊂ V⊤⊤ are isomorphic in a "natural" basis-invariant manner. We say that Θ naturally embeds V in V⊤⊤ and we call Θ the natural injection of V into V⊤⊤, or, when it is onto, the natural correspondence between V and V⊤⊤. Independent of basis choices, a vector in V can be considered to act directly on V⊤ as a linear functional. If one writes f(v), where f ∈ V⊤ and v ∈ V, either of f or v can be the variable. When f is the variable, then v is really playing the rôle of a linear functional on V⊤, i. e., an element of V⊤⊤. One often sees notation such as ⟨f, v⟩ used for f(v) in order to more clearly indicate that f and v are to be viewed as being on similar footing, with either being a functional of the other. Thus, as convenience dictates, we may identify V with Θ(V) ⊂ V⊤⊤, and in particular, we agree always to do so whenever Θ is onto.
3.3 Annihilators
Let S be a subset of the vector space V. By the annihilator S⁰ of S is meant the subset of V⊤ which contains all the linear functionals f such that f(s) = 0 for all s ∈ S. It is easy to see that S⁰ is actually a subspace of V⊤ and that S⁰ = ⟨S⟩⁰. Obviously, {0}⁰ = V⊤. Also, since a linear functional that is not the zero functional, i. e., which is not identically zero, must have at least one nonzero value, V⁰ = {0}.
Exercise 3.3.1 Because of the natural injection of V into V⊤⊤, the only x ∈ V such that f(x) = 0 for all f ∈ V⊤ is x = 0.
Extending a basis {x1, . . . , xm} for U to a basis {x1, . . . , xn} for V, if f = b1 · x1⊤ + · · · + bn · xn⊤ ∈ U⁰, then f(x) = 0 for each x ∈ {x1, . . . , xm}, so that b1 = · · · = bm = 0, and therefore U⁰ ⊂ ⟨xm+1⊤, . . . , xn⊤⟩.
(where x⊤ ∈ B⊤ and y⊤ ∈ C⊤ are coordinate functions corresponding respectively to x ∈ B and y ∈ C, of course).
Theorem 43 The kernel of f⊤ is the annihilator of the image of the map f : V → W.
When the codomain of f has finite dimension m, say, then the image of f has finite dimension r, say. Hence, the kernel of f⊤ (which is the annihilator of the image of f) must have dimension m − r. By the Rank Plus Nullity Theorem, the rank of f⊤ must then be r. Thus we have the following result.
Exercise 3.4.3 For Φ ⊂ V⊤, let Φ⁰ denote the subset of V that contains all the vectors v such that ϕ(v) = 0 for all ϕ ∈ Φ. Prove analogs of Theorems 42 and 43. Finally prove that when the map f has a finite-dimensional domain, the rank of f⊤ equals the rank of f (another Rank Theorem).
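A quick numerical check of this last Rank Theorem (NumPy, with a random made-up matrix of known rank):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 7))  # rank 3
    assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(A.T) == 3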
This result is most often applied in the case when V = W and the iso-
morphism f (now an automorphism) is specifically being used to effect a
change of basis and the contragredient is being used to effect the correspond-
ing change of the dual basis.
Selecting from a product space only those functions that have finitely many nonzero values gives a subspace called the weak product space. As an example, the set of all formal power series with coefficients in a field F is a product space (isomorphic to the F^N of the exercise above) for which the corresponding weak product space is the set of all polynomials with coefficients in F. The weak product space always equals the direct sum of the "internalized" factors, and for that reason is also called the external direct sum. We will write ⊎_{x∈D} Wx for the weak product space derived from the product Π_{x∈D} Wx.
For weak products, it is the maps from them, rather than the maps into
them, that have the special property.
Theorem 47 There exists exactly one map F : ⊎_{x∈D} Wx → V such that F ◦ ηx = ϕx, where for each x ∈ D, ϕx : Wx → V is a prescribed map.
Proof: The function F that sends f ∈ ⊎_{x∈D} Wx to Σ_{{x∈D | f(x)≠0}} ϕx(f(x)) is readily seen to be a map such that F ◦ ηx = ϕx. Suppose that the map G : ⊎_{x∈D} Wx → V satisfies G ◦ ηx = ϕx for each x ∈ D. Let f ∈ ⊎_{x∈D} Wx. Then f = Σ_{{x∈D | f(x)≠0}} ηx(f(x)), so that G(f) = Σ_{{x∈D | f(x)≠0}} G(ηx(f(x))) = Σ_{{x∈D | f(x)≠0}} ϕx(f(x)) = F(f), which shows the uniqueness of F.
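A minimal computational sketch of this universal property (Python; an element f of the weak product is modeled, hypothetically, as a finitely supported dict, with every Wx = V = R):

    def F(f, phi):
        # f: dict x -> f(x), zero outside its keys (finite support)
        # phi: dict x -> the prescribed map phi_x : Wx -> V
        return sum(phi[x](fx) for x, fx in f.items() if fx != 0)

    phi = {'a': lambda t: 2 * t, 'b': lambda t: -t, 'c': lambda t: t}
    f = {'a': 1.0, 'b': 3.0}          # eta_a(1) + eta_b(3)
    assert F(f, phi) == -1.0          # 2*1 + (-1)*3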
Exercise 3.7.1 ⊎_{x∈D} Wx = ⊕_{x∈D} ηx(Wx).
Exercise 3.7.2 ⊕_{x∈D} Ux ≅ ⊎_{x∈D} Ux in a manner not requiring any choice of basis.
The last two theorems lay a foundation for two important isomorphisms.
As before, with all vector spaces over the same field, let the vector space V
be given, and let a vector space Wx be given for each x in some set D. Then,
utilizing map spaces, we may form the pairs of vector spaces
M = Π_{x∈D} {V → Wx},  N = {V → Π_{x∈D} Wx}
and
M′ = Π_{x∈D} {Wx → V},  N′ = {⊎_{x∈D} Wx → V}.
Theorem 48 M ≅ N and M′ ≅ N′.
Proof:
f ∈ M means that for each x ∈ D, f (x) = ϕx for some map ϕx : V →
Wx , and hence there exists exactly one map F ∈ N such that πx ◦ F = f (x)
for each x ∈ D. This association of F with f therefore constitutes a well-
defined function Λ : M → N which is easily seen to be a map. The map
Λ is one-to-one. For if Λ sends both f and g to F , then for each x ∈ D,
Exercise 3.7.4 (⊎_{x∈D} Wx)⊤ ≅ Π_{x∈D} Wx⊤. (Compare with Example 40 at the beginning of this chapter.)
We note that the isomorphisms of the two exercises and theorem above
do not depend on any choice of basis. This will generally be the case for
the isomorphisms we will be establishing. From now on, we will usually skip
pointing out when an isomorphism does not depend on specific choices, but
will endeavor to point out any opposite cases.
Up to isomorphism, the order of the factors in a product does not matter.
This can be proved quite readily directly from the product definition, but the
theorem above on maps into a product gives an interesting approach from
another perspective. We also incorporate the “obvious” result that using
aliases of the factors does not affect the product, up to isomorphism.
Theorem 49 Π_{x∈D} Wx ≅ Π_{x∈D} W′σ(x), where σ is a bijection of D onto itself, and for each x ∈ D there is an isomorphism θx : W′x → Wx.
Proof: In Theorem 46, take the product to be Π_{x∈D} Wx, take V = Π_{x∈D} W′σ(x), and take ϕx = θx ◦ πσ⁻¹(x), so that there exists a map Ψ from Π_{x∈D} W′σ(x) to Π_{x∈D} Wx such that πx ◦ Ψ = θx ◦ πσ⁻¹(x). Interchanging the product spaces and applying the theorem again, there exists a map Φ from Π_{x∈D} Wx to Π_{x∈D} W′σ(x) such that πσ⁻¹(x) ◦ Φ = θx⁻¹ ◦ πx. Then πx ◦ Ψ ◦ Φ = θx ◦ πσ⁻¹(x) ◦ Φ = θx ◦ θx⁻¹ ◦ πx = πx for each x ∈ D, so that Ψ ◦ Φ is the identity; similarly, Φ ◦ Ψ is the identity, and hence Ψ is an isomorphism.
3.8 Problems
1. Let V be a vector space over the field F. Let f ∈ V⊤. Then f⊤ is the map that sends the element ϕ of (F¹)⊤ to (ϕ(1)) · f ∈ V⊤.
the equation
f(x) = w
has a solution for every w if and only if the equation
f(x) = 0
has only the trivial solution x = 0. (This result is sometimes called the Alternative Theorem. However, see the next problem.)
The equation
f(x) = w
has a solution if and only if
w ∈ (Kernel f⊤)⁰,
i. e., if and only if ϕ(w) = 0 for all ϕ ∈ Kernel f⊤. (The reason this
6. Let V and W have the respective finite dimensions m and n over the finite field F of q elements. How many maps V → W are there? If m ≥ n, how many of these are onto; if m ≤ n, how many are one-to-one; and if m = n, how many are isomorphisms?
Chapter 4
Multilinearity and Tensoring
f(v1, . . . , vk−1, a · u + b · v, vk+1, . . . , vn) = a · f(v1, . . . , vk−1, u, vk+1, . . . , vn) + b · f(v1, . . . , vk−1, v, vk+1, . . . , vn)
Theorem 50 Let V1 , . . . , Vn , W be vector spaces over the same field and for
each Vi let the basis Bi be given. Then given the function f0 : B1 × · · · × Bn →
W, there is a unique multilinear transformation f : V1 × · · · × Vn → W such
that f agrees with f0 on B1 × · · · × Bn .
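For n = 2 this says that a bilinear function is freely determined by its values on pairs of basis vectors; a minimal NumPy sketch with made-up values f0(xi, yj):

    import numpy as np

    f0 = np.array([[1.0, 2.0],
                   [3.0, 4.0]])       # f0(xi, yj), standard bases of R^2

    def f(u, v):
        # bilinear extension: f(u, v) = sum_{i,j} u_i v_j f0(xi, yj)
        return u @ f0 @ v

    u, v = np.array([1.0, -1.0]), np.array([2.0, 5.0])
    assert np.isclose(f(3 * u, v), 3 * f(u, v))   # linear in each slot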
V1⊤ ⊗ · · · ⊗ Vn⊤ is a subspace of the space {V1 × · · · × Vn → F} of n-linear functionals, and it is easy to see that both have the same dimension, so they are actually equal. This and our observations immediately above then give us the following result.
Theorem 54 Let V1, . . . , Vn be vector spaces over the field F, and write {V1 × · · · × Vn → F} for the space of n-linear functionals. Then
(V1 ⊗ · · · ⊗ Vn)⊤ ≅ {V1 × · · · × Vn → F}.
If the Vi are all finite-dimensional, we also have
V1 ⊗ · · · ⊗ Vn ≅ {V1 × · · · × Vn → F}⊤
and
(V1 ⊗ · · · ⊗ Vn)⊤ ≅ V1⊤ ⊗ · · · ⊗ Vn⊤ = {V1 × · · · × Vn → F}.
Exercise 4.4.1 Make sense of the tensor product concept in the case where
n = 1, so that we have 1-linearity and the tensor product of a single factor.
Exercise 4.4.2 A tensor product space is spanned by the image of its associated tensor product function: ⟨{v1 ⊗ · · · ⊗ vn | v1 ∈ V1, . . . , vn ∈ Vn}⟩ = V1 ⊗ · · · ⊗ Vn.
that Θ ◦ Υ = Υ′ ◦ Φ, and there is a (unique) map Θ′ : V′⊗ → V⊗ such that Θ′ ◦ Υ′ = Υ ◦ Φ⁻¹. From this we readily deduce that Υ′ = Θ ◦ Θ′ ◦ Υ′ and Υ = Θ′ ◦ Θ ◦ Υ. Hence the map Θ has the inverse Θ′ and is therefore an isomorphism.
Up to isomorphism, tensor multiplication of vector spaces may be performed iteratively.
Theorem 58 Let V1, . . . , Vn be vector spaces over the same field. Then for any integer k such that 1 ≤ k < n there is an isomorphism
Θ : (V1 ⊗ · · · ⊗ Vk) ⊗ (Vk+1 ⊗ · · · ⊗ Vn) → V1 ⊗ · · · ⊗ Vn
such that
Θ((v1 ⊗ · · · ⊗ vk) ⊗ (vk+1 ⊗ · · · ⊗ vn)) = v1 ⊗ · · · ⊗ vn.
Proof: Set V× = V1 × · · · × Vn, V⊗ = V1 ⊗ · · · ⊗ Vn, V′× = V1 × · · · × Vk, V′⊗ = V1 ⊗ · · · ⊗ Vk, V′′× = Vk+1 × · · · × Vn, and V′′⊗ = Vk+1 ⊗ · · · ⊗ Vn. For fixed vk+1 ⊗ · · · ⊗ vn ∈ V′′⊗ we define the k-linear function f_{vk+1⊗···⊗vn} : V′× → V⊗ by f_{vk+1⊗···⊗vn}(v1, . . . , vk) = v1 ⊗ · · · ⊗ vn. Corresponding to f_{vk+1⊗···⊗vn} is the (unique) map Θ_{vk+1⊗···⊗vn} : V′⊗ → V⊗ such that Θ_{vk+1⊗···⊗vn}(v1 ⊗ · · · ⊗ vk) = v1 ⊗ · · · ⊗ vn. But Θ_{v1⊗···⊗vk}(vk+1 ⊗ · · · ⊗ vn) = Θ_{vk+1⊗···⊗vn}(v1 ⊗ · · · ⊗ vk), so that the formula is equivalent to
f(x, y) = Σ Σ a_{v1⊗···⊗vk} · b_{vk+1⊗···⊗vn} · Θ_{vk+1⊗···⊗vn}(v1 ⊗ · · · ⊗ vk),
and we obtain a map Θ with
Θ((v1 ⊗ · · · ⊗ vk) ⊗ (vk+1 ⊗ · · · ⊗ vn)) = v1 ⊗ · · · ⊗ vn.
We define an n-linear function f′ : V× → V′⊗ ⊗ V′′⊗ by setting
f′(v1, . . . , vn) = (v1 ⊗ · · · ⊗ vk) ⊗ (vk+1 ⊗ · · · ⊗ vn).
Corresponding to f′ is the (unique) map Θ′ : V⊗ → V′⊗ ⊗ V′′⊗ such that
Θ′(v1 ⊗ · · · ⊗ vn) = (v1 ⊗ · · · ⊗ vk) ⊗ (vk+1 ⊗ · · · ⊗ vn).
Thus each of Θ ◦ Θ′ and Θ′ ◦ Θ coincides with the identity on a spanning set for its domain, so each is in fact the identity map. Hence the map Θ has the inverse Θ′ and therefore is an isomorphism.
Exercise 4.4.3 (· · · ((V1 ⊗ V2) ⊗ V3) ⊗ · · · ⊗ Vn−1) ⊗ Vn ≅ V1 ⊗ · · · ⊗ Vn.
Corollary 59 (V1 ⊗ V2) ⊗ V3 ≅ V1 ⊗ (V2 ⊗ V3) ≅ V1 ⊗ V2 ⊗ V3.
The two preceding theorems form the foundation for the following general
associativity result.
Theorem 60 Tensor products involving the same vector spaces are isomor-
phic no matter how the factors are grouped.
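In coordinates the Kronecker product realizes the tensor product of tuples, and there regrouping the factors is invisible; a small NumPy sketch (made-up vectors, not from the text):

    import numpy as np

    a, b, c = np.array([1., 2.]), np.array([3., 5.]), np.array([7., 11.])
    lhs = np.kron(np.kron(a, b), c)   # (a (x) b) (x) c
    rhs = np.kron(a, np.kron(b, c))   # a (x) (b (x) c)
    assert np.allclose(lhs, rhs)      # same 8 coordinates either way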
Theorem 61 Let V be a vector space over the field F, and let F be considered also to be a vector space over itself. Then there is an isomorphism from F ⊗ V (and another from V ⊗ F) to V sending a ⊗ v (and the other sending v ⊗ a) to a · v.
4.5 Problems
1. Suppose that in V1 ⊗ V2 we have u ⊗ v = u ⊗ w for some nonzero u ∈ V1. Is it then possible for the vectors v and w in V2 to be different?
2. When all the Vi are finite-dimensional, V1⊤ ⊗ · · · ⊗ Vn⊤ ≅ (V1 ⊗ · · · ⊗ Vn)⊤ via the unique map Φ : V1⊤ ⊗ · · · ⊗ Vn⊤ → (V1 ⊗ · · · ⊗ Vn)⊤ for which
Φ(ϕ1 ⊗ · · · ⊗ ϕn) = ϕ1 ⊗ · · · ⊗ ϕn,
the right-hand side being viewed as an n-linear functional. Thus, for example, if the Vi are all equal to V, and V has the basis B, then Φ(x1⊤ ⊗ · · · ⊗ xn⊤) = x1⊤ ⊗ · · · ⊗ xn⊤ = (x1 ⊗ · · · ⊗ xn)⊤, where each xi ∈ B.
Chapter 5
Vector Algebras
(t + u) ∗ v = t ∗ v + u ∗ v, and u ∗ (v + w) = u ∗ v + u ∗ w
a · (u ∗ v) = (a · u) ∗ v = u ∗ (a · v)
Exercise 5.1.1 v ∗ 0 = 0 ∗ v = 0.
Exercise 5.2.1 Let V be a vector algebra and let K be the kernel of an algebra
map from V. Then in V/K, (u + K) ∗ (v + K) = u ∗ v + K for all u, v ∈ V.
(u + I) ∗ (v + I) = (u ∗ v) + I
For any vector algebra V and ideal I ◁ V, the natural projection function p : V → V/I that sends v to the coset v + I is a vector space map with kernel I, according to Proposition 33. p is also an algebra map if we employ the modular product, for then p(u) ∗ p(v) = (u + I) ∗ (v + I) = u ∗ v + I = p(u ∗ v). Thus we have the following result.
(v ⊗ w) ∗ (v′ ⊗ w′) = (v ∗ v′) ⊗ (w ∗ w′)
and extending as a bilinear function on (V ⊗ W) × (V ⊗ W) so that
(Σi vi ⊗ wi) ∗ (Σj v′j ⊗ w′j) = Σ_{i,j} (vi ∗ v′j) ⊗ (wi ∗ w′j).
This does give a well-defined vector multiplication for V ⊗ W.
Proof: Consider the function from V × W × V × W to V ⊗ W that sends (v, w, v′, w′) to (v ∗ v′) ⊗ (w ∗ w′). This function is linear in each variable separately and therefore there is a vector space map from V ⊗ W ⊗ V ⊗ W to V ⊗ W that sends v ⊗ w ⊗ v′ ⊗ w′ to (v ∗ v′) ⊗ (w ∗ w′). Since there is an isomorphism from (V ⊗ W) ⊗ (V ⊗ W) to V ⊗ W ⊗ V ⊗ W that sends (v ⊗ w) ⊗ (v′ ⊗ w′) to v ⊗ w ⊗ v′ ⊗ w′, there is therefore a vector space map from (V ⊗ W) ⊗ (V ⊗ W) to V ⊗ W that sends (v ⊗ w) ⊗ (v′ ⊗ w′) to (v ∗ v′) ⊗ (w ∗ w′), and corresponding to this vector space map is a bilinear function µ : (V ⊗ W) × (V ⊗ W) → V ⊗ W that sends (v ⊗ w, v′ ⊗ w′) to (v ∗ v′) ⊗ (w ∗ w′).
V ⊗ W with this vector multiplication function is the algebra tensor product of the vector algebras V and W over the field F.
Exercise 5.3.1 If the vector algebras V and W over the field F are both
commutative, or both associative, or both unital, then so is their algebra
tensor product.
v1 ⊗ · · · ⊗ vp = Σ_{j≠i} aj · v1 ⊗ · · · ⊗ vi−1 ⊗ vj ⊗ vi+1 ⊗ · · · ⊗ vp
where Sp denotes the set of all permutations of {1, . . . , p}. In the event that
two of the factors, say vl and vm , l < m, are equal in r, the terms in the
summation above occur in pairs with the same coefficient since
where Ap is the set of all even permutations of {1, . . . , p}. We say that the
two e-products v1 ⊗ · · · ⊗ vp and vσ(1) ⊗ · · · ⊗ vσ(p) are of the same parity,
or of opposite parity, according as σ is an even, or an odd, permutation of
{1, . . . , p}. We see therefore that the dependent basis monomials, along with
the sums of pairs of independent basis monomials of opposite parity, span D.
Let X = {x1, . . . , xp} be a subset of the basis B for V. From the elements of X, K = p! independent basis monomials may be formed by multiplying together the elements of X in the various possible orders. Let T1 = {t1, t3, . . . , tK−1} be the independent basis monomials of degree p, with factors from X, and of the same parity as t1 = x1 ⊗ · · · ⊗ xp, and let T2 = {t2, t4, . . . , tK} be those of the opposite parity. Then
T = {t1, t1 + t2, t2 + t3, . . . , tK−1 + tK}
is a set of K independent elements with the same span as T1 ∪ T2 . Moreover,
we claim that for any s ∈ T1 and any t ∈ T2, s + t is in the span of T ∖ {t1}.
It suffices to show this for s+t of the form ti +tk where i < k and exactly one
of i, k is odd, or, what is the same, for s + t = ti + ti+2j+1 . But ti + ti+2j+1 =
(ti + ti+1 ) − (ti+1 + ti+2 ) + (ti+2 + ti+3 ) − · · · + (ti+2j + ti+2j+1 ), verifying the
claim. The following result is now clear.
Proposition 64 Let B be a basis for V. From each nonempty subset X =
{x1 , . . . , xp } of p elements of B, let the sets T1 = {t1 , t3, . . . , tK−1 } and T2 =
{t2 , t4 , . . . , tK } comprising in total the K = p! degree p independent basis
monomials be formed, T1 being those that are of the same parity as t1 =
x1 ⊗ · · · ⊗ xp , and T2 being those of parity opposite to t1 , and let
T = {t1 , t1 + t2 , t2 + t3 , . . . , tK−1 + tK } .
Then ⟨T⟩ = ⟨T1 ∪ T2⟩ and if s ∈ T1 and t ∈ T2, then s + t ∈ ⟨T ∖ {t1}⟩.
Let A0 denote the set of all dependent basis monomials based on B, let A1 denote the union of all the sets T ∖ {t1} for all p, and let E denote the union of all the singleton sets {t1} for all p, and {1}. Then A0 ∪ A1 ∪ E is a basis for ⊗V, A0 ∪ A1 is a basis for the ideal D of all linear combinations of dependent e-products in ⊗V, and E is a basis for a complementary subspace of D.
Inasmuch as any independent set of vectors is part of some basis, in the
light of Proposition 31 we then immediately infer the following useful result
which also assures us that in suppressing the dependent e-products we have
not suppressed any independent ones.
We may restrict the domain of the natural projection π∧ : ⊗V → ∧V to ⊗^p V and the codomain to ∧^p V and thereby obtain the vector space map π∧p : ⊗^p V → ∧^p V, which has kernel D^p = D ∩ ⊗^p V. It is easy to see that ∧^p V = ⊗^p V / D^p, so that π∧p is the natural projection of ⊗^p V onto ⊗^p V / D^p. Let Υp : V^p → ⊗^p V be a tensor product function, and let f be a p-linear function from V^p to a vector space W over the same field as V. Then by the universal property of the tensor product, there is a unique vector space map f⊗ : ⊗^p V → W such that f = f⊗ ◦ Υp. Assume now that f is zero on dependent p-tuples, so that the corresponding f⊗ is zero on dependent e-products of degree p. Applying Theorem 34, we infer the existence of a unique map f∧ : ∧^p V → W such that f⊗ = f∧ ◦ π∧p. Thus there is a universal property for the pth exterior power, which we now officially record as a theorem. (An alternating p-linear function is one that vanishes on dependent p-tuples.)
The map ∧^p f of the above exercise is commonly called the pth exterior power of f.
Let f : V → V be a vector space map from the n-dimensional vector space V to itself. Then by the above exercise there is a unique map ∧^n f : ∧^n V → ∧^n V such that ∧^n f(v1 ∧ · · · ∧ vn) = f(v1) ∧ · · · ∧ f(vn). Since ∧^n V is 1-dimensional, ∧^n f(t) = a · t for some scalar a = det f, the determinant of f. Note that the determinant of f is independent of any basis choice. However, the determinant is only defined for self-maps on finite-dimensional spaces.
Suppose the n-dimensional vector space aliases V and W have the respective bases {x1, . . . , xn} and {y1, . . . , yn}. The map f : V → W then sends xj to Σi ai,j · yi for some scalars ai,j. We have
f(x1) ∧ · · · ∧ f(xn) = (Σi ai,1 · yi) ∧ · · · ∧ (Σi ai,n · yi)
= Σ_{σ∈Sn} aσ(1),1 · · · aσ(n),n · yσ(1) ∧ · · · ∧ yσ(n)
= (Σ_{σ∈Sn} (−1)^σ aσ(1),1 · · · aσ(n),n) · y1 ∧ · · · ∧ yn.
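The last display is the familiar permutation-sum formula for det f; the following small Python sketch (made-up 3 × 3 matrix) checks it against numpy's determinant:

    from itertools import permutations
    import numpy as np

    def sign(p):
        # parity of the permutation p via its cycle decomposition
        s, seen = 1, set()
        for i in range(len(p)):
            if i in seen:
                continue
            j, cycle = i, 0
            while j not in seen:
                seen.add(j)
                j = p[j]
                cycle += 1
            s *= (-1) ** (cycle - 1)
        return s

    A = np.array([[2., 1., 0.], [1., 3., 1.], [0., 1., 4.]])
    det = sum(sign(p) * np.prod([A[p[j], j] for j in range(3)])
              for p in permutations(range(3)))
    assert np.isclose(det, np.linalg.det(A))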
The universal property of exterior powers leads at once to the conclusion that if V is over the field F, the subspace {V^p → F} of the map space consisting of the alternating p-linear functionals is isomorphic to (∧^p V)⊤. There is also an analog of Theorem 54, which we now start to develop by exploring the coordinate functions on ∧^p V relative to a basis B for V. Let t = v1 ∧ · · · ∧ vp be an exterior e-product in ∧^p V. Let x1 ∧ · · · ∧ xp be a typical basis vector of ∧^p V, formed from elements xi ∈ B, and let t be expanded in terms of such basis vectors. Then the coordinate function (x1 ∧ · · · ∧ xp)⊤ corresponding to x1 ∧ · · · ∧ xp is the element of (∧^p V)⊤ that gives the coefficient of x1 ∧ · · · ∧ xp in this expansion of t. This coefficient is readily seen to be det[xi⊤(vj)], where xi⊤ is the coordinate function on V that corresponds to xi ∈ B. Thus we are led to consider what are easily seen to be alternating p-linear functionals fφ1,...,φp of the form fφ1,...,φp(v1, . . . , vp) = det[φi(vj)], where the φi are linear functionals on V.
M = {(ξ1 , ν1 ), . . . , (ξm , νm )}
where the ξj are the distinct elements of {x1 , . . . , xp } and νj is the multiplicity
with which ξj appears as a factor in t. Putting ν = (ν1 , . . . , νm ) and |ν| =
ν1 + · · · + νm , we then have |ν| = p. Given a particular multiset M, let
T0 = {t1 , . . . , tK } be the set of all basis monomials to which M corresponds.
Here K = |ν|!/ν! is the multinomial coefficient (|ν| over ν), where ν! = (ν1!) · · · (νm!). From each such T0 we may form the related set T =
ν! = (ν1 !) · · · (νm !). From each such T0 we may form the related set T =
{t1 , t1 − t2 , . . . , tK−1 − tK } with the same span. Now ti − ti+j = (ti − ti+1 ) +
· · · + (ti+j−1 − ti+j ) so that the difference of any two elements of T0 is in
hT r {t1 }i. We therefore have the following result.
Proof: Suppose that the algebra has no null pair. The equation u ∗ x = u ∗ y with u ≠ 0 implies that u ∗ (x − y) = 0, which then implies that x − y = 0 since the algebra has no null pair. Similarly, x ∗ v = y ∗ v with v ≠ 0 implies x − y = 0. Thus in each case x = y, as was to be shown.
On the other hand, assume that cancellation is supported. Suppose that u ∗ v = 0 with u ≠ 0. Then u ∗ v = u ∗ 0, and canceling u gives v = 0. Hence the algebra has no null pair.
5.8 Problems
1. Give an example of an algebra map f : A → B where A and B are both unital with unit elements 1A and 1B respectively, but f(1A) ≠ 1B.
Chapter 6
Vector Affine Geometry
v + b1 · (x1 − w1 ) + · · · + bn · (xn − wn )
v1 + a2 · (v2 − v1 ) + · · · + an · (vn − v1 ) ,
and our typical directional expression may be written as the linear expression
a2 · (v2 − v1 ) + · · · + an · (vn − v1 )
(Assigning Ø the standard dimension of −1, the formula above may be written
An affine frame for the affine flat Φ is an affine independent subset that
affine-spans the flat.
Exercise 6.6.3 An affine frame for the affine flat Φ is a subset of the form
{v} ∪ (v + B) where B is a basis for the directional subspace Φv .
Exercise 6.6.4 Let A be an affine frame for the affine flat Φ. Then each vector of Φ has a unique expression as an affine combination Σ_{x∈X} ax · x, where X is some finite subset of A and all of the scalars ax are nonzero.
Exercise 6.6.5 Let the affine flat Φ have the finite affine frame X. Then each vector of Φ has a unique expression as an affine combination Σ_{x∈X} ax · x. The scalars ax are the barycentric coordinates of the vector relative to the frame X.
f (h) = α (v + h) − α (v)
for all h ∈ Φv . Then f is a vector space map that is independent of the choice
of v.
The vector space map f that corresponds to the affine map α as in the
theorem above is the underlying vector space map of α, which we will
denote by αv . Thus for any affine map α we have
α (v) = αv (v − u) + α (u)
Other nice properties of affine maps are now quite apparent, as the fol-
lowing exercises detail.
Exercise 6.7.8 Under an affine map, the images of strictly parallel affine
flats are strictly parallel affine flats.
Letting h = t − u and
τh (v) = v + h = δt,u;1 (v)
for all v ∈ Φ gives the special dilation τh : Φ → Φ which we call a translation
by h. Every translation is invertible, with τh⁻¹ = τ−h, and τ0 is the identity
map. The translations on Φ correspond one-to-one with the vectors of the
directional subspace Φv of which Φ is a coset. We have τh ◦ τk = τk ◦ τh =
τk+h and thus under the operation of composition, the translations on Φ are
isomorphic to the additive group of Φv . Since every translation is a dilation
δt,u;b with b = 1, from the proposition above we infer the following result.
6.10 Problems
1. When F is the smallest field, namely the field {0, 1} of just two elements,
any nonempty subset S of a vector space V over F contains the translate
v + a · (x − v) for every v, x ∈ S and every a ∈ F. Not all such S are affine
flats, however.
On the other hand, if 1 + 1 ≠ 0 in F, a nonempty subset S of a vector
space over F is an affine flat if it contains v + a · (x − v) for every v, x ∈ S
and every a ∈ F.
What about when F is a field of 4 elements?
Chapter 7
Basic Affine Results and Methods
7.1 Notations
Throughout this chapter, V is our usual vector space over the field F. We
generally omit any extra assumptions regarding V or F (such as dim V ≠ 0)
that each result might need to make its hypotheses realizable. Without fur-
ther special mention, we will frequently use the convenient abuse of notation
P = {P } so that the point P ∈ A (V) is the singleton set that contains the
vector P ∈ V.
Proposition 78 There is one and only one line that contains the distinct
points P and Q.
On the other hand, suppose that l is a line that contains both P and Q.
Since l is a line that contains P , it has the form l = P + lv where lv is one-
dimensional. Since l is an affine flat that contains both P and Q, it contains
the translate P + (Q − P) so that Q − P ∈ lv, and therefore lv = kv and
l = k.
The line that contains the distinct points P and Q is P +A Q, of course.
We will write P Q to mean the line P +A Q for (necessarily distinct) points
P and Q.
Exercise 7.2.1 Dimensional considerations then imply that the affine sum
of distinct intersecting lines is a plane.
Proposition 80 Given a point P and a line l, there is one and only one
line m that contains P and is parallel to l.
7.4 Barycenters
We now introduce a concept analogous to the physical concept of “center
of gravity," which will help us visualize the result of computing a linear expression and give us some convenient notation. Given the points A1, . . . , An, consider the linear expression a1 · A1 + · · · + an · An of weight a1 + · · · + an ≠ 0
based on A1 , . . . , An . The barycenter based on (the factors of the terms of)
this linear expression is the unique point X such that
(a1 + · · · + an ) · X = a1 · A1 + · · · + an · An .
(If the weight were zero, there would be no such unique point X.) The barycenter of a1 · A1 + · · · + an · An, which of course lies in the affine span of {A1, . . . , An}, will be denoted by £[a1 · A1 + · · · + an · An]. Clearly, for any scalar m ≠ 0, £[(ma1) · A1 + · · · + (man) · An] = £[a1 · A1 + · · · + an · An], and this homogeneity property allows us to use £[a1 · A1 + · · · + an · An] as a convenient way to refer to an affine expression by referring to any one of its nonzero multiples instead. Also, supposing that 1 ≤ k < n, a = a1 + · · · + ak ≠ 0, a′ = ak+1 + · · · + an ≠ 0,
Xk = £[a1 · A1 + · · · + ak · Ak],
and
Xk′ = £[ak+1 · Ak+1 + · · · + an · An],
it is easy to see that if a + a′ ≠ 0 then
£[a1 · A1 + · · · + an · An] = £[a · Xk + a′ · Xk′].
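A minimal Python sketch of the £[·] notation (made-up points; the function name is ours, not the text's), checking the homogeneity property and the piecemeal identity just stated:

    import numpy as np

    def barycenter(weights, points):
        w = float(sum(weights))
        assert w != 0.0               # the weight must be nonzero
        return sum(a * np.asarray(P) for a, P in zip(weights, points)) / w

    A1, A2, A3 = [0., 0.], [4., 0.], [0., 4.]
    X = barycenter([1, 1, 1], [A1, A2, A3])                     # centroid
    assert np.allclose(X, barycenter([5, 5, 5], [A1, A2, A3]))  # homogeneity
    X12 = barycenter([1, 1], [A1, A2])
    assert np.allclose(X, barycenter([2, 1], [X12, A3]))        # piecemeal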
C′ = £[a · A + qb′ · B′ + qc · C]
so that k = 1 and
so that l = 1 and
C′′ = £[(1 − qp)a · A + b′ · B′].
a′′ · A′′ + (1 − pq)b′′ · B′′ + (−c′′) · C′′ = 0
[Figure for the above exercise: the point £[a · A + b · B] on line AB, together with A, B, C, and X.]
The next result is the affine version of Proposition VI.2 from Euclid’s
Elements. By P QR we will always mean the plane that is the affine sum of
three given (necessarily noncollinear) points P, Q, R.
Theorem 83 (Similarity Theorem) Let X be a point in ABC and not on AB, BC, or CA, and let W = BX ∩ AC. Then, with each ratio below understood as the ratio of two parallel vectors,
CX ∥ AB ⇒ WC/WA = WX/WB = CX/AB
and
WC/WA = WX/WB ⇒ CX ∥ AB.
[Two possible figures for the Similarity Theorem.]
Proof: "Let X be a point in ABC and not on AB, BC, or CA" is just the geometric way to say that there exist nonzero scalars a, b, c such that X = £[a · A + b · B + c · C]. Suppose first that CX ∥ AB. Then by the Subcenter Exercise above, a + b = 0 and we may replace a with −b to give X = £[(−b) · A + b · B + c · C] or c · (X − C) = b · (B − A). Hence CX/AB = b/c. Employing the Subcenter Exercise again, we find that W = BX ∩ AC = £[(−b) · A + c · C], so that W divides AC in the ratio c : −b and therefore WC/WA = b/c. X may also be obtained by the piecemeal calculation X = £[(c − b) · W + b · B], which is the same as c · (X − W) = b · (B − W) or WX/WB = b/c. This proves the first part.
On the other hand, assume that WC/WA = WX/WB. We may suppose that these two equal vector ratios both equal b/c for two nonzero scalars b, c. Then c · (C − W) = b · (A − W) and c · (X − W) = b · (B − W). Subtracting the first of these two equations from the second, we obtain c · (X − C) = b · (B − A), which shows that CX ∥ AB as desired.
[Figure for the Theorem of Menelaus.]
On the other hand, suppose that the product of the ratios is −1. We may suppose that
A′ = £[1 · B + x · C], B′ = £[y · A + x · C], C′ = £[y · A + z · B],
where none of x, y, z, 1 + x, y + x, y + z is zero. Thus
(x/1) · (y/x) · (z/y) = −1
and hence z = −1. Then we have
(1 + x) · A′ + (−y − x) · B′ + (y − 1) · C′ = 0
and (1 + x) + (−y − x) + (y − 1) = 0. We conclude in light of Exercise 6.4.2 that A′, B′, C′ are collinear as required.
[Figures for the Theorem of Ceva.]
Chapter 8
An Enhanced Affine Environment
A (Φ+ ), and the rôle of Φv then will be played by its alias {0}×Φv . To simplify
the notation, we will refer to {a} × Φv as Φa , so that Φ1 is playing the rôle of
Φ and Φ0 is playing the rôle of Φv . The rôle of any subflat Ψ ⊂ Φ is played by
the subflat Ψ1 = {1} × Ψv ⊂ Φ1 , of course. No matter what holds for Φ and
its directional subspace, Φ1 is disjoint from its directional subspace Φ0 . Lines
through the point O = {0} in Φ+ are of two fundamentally different types
with regard to Φ1 : either such a line meets Φ1 or it does not. The nonzero
vectors of the lines through O that meet Φ1 will be called point vectors, and
the nonzero vectors of the remaining lines through O (those that lie in Φ0 )
will be called direction vectors. There is a unique line through O passing
through any given point P of Φ1 , and since any nonzero vector in that line
may be used as an identifier of P , that line and each of its point vectors will
be said to represent P . Point vectors are homogeneous representatives of
their points in the sense that multiplication by any nonzero scalar gives a
result that still represents exactly the same point. To complete our view,
each line through O lying in Φ0 will be called a direction, and it and each of
its nonzero vectors will be said to represent that direction. Thus the zero
vector stands alone in not representing either a point or a direction.
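A tiny Python sketch of this homogeneous representation for a plane (the extra coordinate is written last; all names are ours, not the text's):

    def classify(v):
        # v = (x, y, a): point vector if a != 0, direction vector otherwise
        x, y, a = v
        if a == 0:
            return ('direction', None) if (x, y) != (0.0, 0.0) else ('zero', None)
        return ('point', (x / a, y / a))  # the point of Phi_1 represented

    assert classify((2., 4., 2.)) == ('point', (1., 2.))  # any nonzero multiple
    assert classify((1., 2., 1.)) == ('point', (1., 2.))  # represents the same point
    assert classify((1., 1., 0.))[0] == 'direction'       # lies in Phi_0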
[Figure: illustrating the inflation of Φ, with Φ+, Φ1, Φ0, the origin O, and a point P.]
We are now close to the viewpoint of projective geometry where the lines
through 0 actually are the points, and the n-dimensional subspaces are the
(n − 1)-dimensional flats of a projective structure. However, the inflated-flat
viewpoint is still basically an affine viewpoint, but an enhanced one that is
close to the pure projective one. From now on, we assume that any affine
flat has been inflated and freely use the resulting enhanced affine viewpoint.
Suppose now that P, Q, R ∈ Φ1 are distinct points represented by the
point vectors p, q, r. Our geometric intuition indicates that P, Q, R are
collinear in Φ1 if and only if p, q, r are coplanar in Φ+ . The following propo-
sition confirms this.
(k/a) · p + (l/b) · q + (m/c) · r = 0.
On the other hand, suppose that there are scalars f, g, h, not all zero, such that f · p + g · q + h · r = 0. Then
fa · (i + u) + gb · (i + v) + hc · (i + w) = 0.
Since i ∉ Φ0, fa + gb + hc = 0 and P, Q, R are collinear by Exercise 6.4.2.
the flat where all the points lie is completely separated from its directional
subspace.
If we are given two distinct points P, Q represented by point vectors p, q, the above proposition implies that any point vector in the span of {p, q} represents a point on the line PQ, and conversely, that every point of PQ in Φ1 is represented by a point vector of ⟨{p, q}⟩. But besides point vectors, there are other nonzero vectors in ⟨{p, q}⟩, namely direction vectors. It is easy to see that the single direction contained in ⟨{p, q}⟩ is V = ⟨{p, q}⟩ ∩ Φ0, which we refer to as the direction of PQ. We will consider V to be a generalized point of PQ, and we think of V as lying "at infinity" on PQ. Doing this will allow us to treat all nonzero vectors of ⟨{p, q}⟩ in a uniform manner and to say that ⟨{p, q}⟩ represents the line PQ. This will turn out to have the benefit of eliminating the need for separate consideration of related "parallel cases" for many of our results.
Exercise 8.1.1 Let P, Q be distinct points represented by point vectors p, q, and let the direction V be represented by the direction vector v. Then the lines ⟨{p, v}⟩ ∩ Φ1 and ⟨{q, v}⟩ ∩ Φ1 are parallel and ⟨{p, v}⟩ ∩ ⟨{q, v}⟩ = ⟨{v}⟩. (Thus we speak of the parallel lines PV and QV that meet at V.)
Besides lines like PV and QV of the above exercise, there can be lines of the form VW where V and W are distinct directions. We speak of such lines as lines at infinity. If V is represented by v and W by w, then the nonzero vectors of ⟨{v, w}⟩ (all of which are direction vectors) are exactly the vectors that represent the points of VW. Thus we view the "compound direction" ⟨{v, w}⟩ as a generalized line. Each plane has its line at infinity, and planes have the same line at infinity if and only if they are parallel.
Proof: The corresponding small letter will always stand for a representing vector. We can find representing vectors such that
p = a + a′ = b + b′ = c + c′
so that
a − b = b′ − a′ = c′′
b − c = c′ − b′ = a′′
c − a = a′ − c′ = b′′.
Hence a′′ + b′′ + c′′ = 0.
While we were able to use simpler equations, this proof is not essentially
different from the previous one. What is more important is that new in-
terpretations now arise from letting various of the generalized points be at
infinity. We depict two of these possible new interpretations in the pair of
figures below.
[Two figures depicting these new possible interpretations.]
can say that the fact that any two of A′′, B′′, C′′ lie at infinity means that the third must also. Thus the right-hand figure depicts the result that if any two of the three corresponding pairs AB, A′B′, etc., are pairs of parallels, then the third is also.
Within the context of an inflated flat we can readily show that there is a
one-to-one correspondence between the (n − 1)-dimensional subflats of Φ and
the n-dimensional subspaces of Φ+ . Moreover, n points of Φ affine-span one
of its (n − 1)-dimensional subflats if and only if every set of vectors which
represents those points is an independent set in Φ+ . (The concept of affine
span of points is generalized through use of the span of the representing
vectors to include all points, not just “finite” ones.)
The criterion for independence provided by exterior algebra (Corollary 65)
leads to the conclusion that a nonzero exterior e-product (a blade) represents
a subspace of Φ+ and therefore also represents a subflat of Φ in the same
homogeneous fashion that a single nonzero vector represents a point. That
is, if {v1 , . . . , vn } is an independent n-element set of vectors in Φ+ , then
the n-blade v1 ∧ · · · ∧ vn homogeneously represents an (n − 1)-dimensional
subflat of Φ. This is based on the following result.
then
v1 ∧ · · · ∧ vn = Σ_{i1=1}^{d} · · · Σ_{in=1}^{d} ai1,1 · · · ain,n · xi1 ∧ · · · ∧ xin,
and collecting terms on a particular set of basis elements (those that have subscripts that increase from left to right) we then get
Σ_{1 ≤ i1 < ··· < in ≤ d} (Σσ (−1)^σ aσ(i1),1 · · · aσ(in),n) · xi1 ∧ · · · ∧ xin,
in which each inner sum is the determinant of the n × n minor of [ai,j] with rows i1, . . . , in.
We may phrase this result in terms of points and infer the results of the
following exercise which also contains a well-known result about the inde-
pendence of the columns of a d × n matrix and the determinants of its n × n
minors.
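Concretely, the Plücker coordinates of an n-blade v1 ∧ · · · ∧ vn are the determinants of the n × n minors of the d × n matrix of coordinate columns; a minimal NumPy sketch with d = 4, n = 2 and made-up columns:

    import numpy as np
    from itertools import combinations

    V = np.array([[1., 0.],
                  [0., 1.],
                  [2., 3.],
                  [4., 5.]])          # columns are the coordinates of v1, v2

    pluecker = {rows: np.linalg.det(V[list(rows), :])
                for rows in combinations(range(4), 2)}
    # The columns are independent iff some 2 x 2 minor is nonzero.
    assert any(abs(p) > 1e-12 for p in pluecker.values())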
We now point out and justify what is from a geometric standpoint perhaps the most important result concerning the annihilator, namely that for any subspaces W and X,
(W + X)⁰ = W⁰ ∩ X⁰
and
W⁰ + X⁰ = (W ∩ X)⁰.
Now suppose that f and f⁻¹ are given in terms of the elements of the basis B by
f(xj) = Σ_{i=1}^{d} ai,j · xi and f⁻¹(xj) = Σ_{i=1}^{d} αi,j · xi.
Therefore
ω = t1 ∧ · · · ∧ tn = (Σ_{i=1}^{d} ai,1 · xi) ∧ · · · ∧ (Σ_{i=1}^{d} ai,n · xi)
and
ω′ = tn+1⊤ ∧ · · · ∧ td⊤ = (Σ_{i=1}^{d} αn+1,i · xi⊤) ∧ · · · ∧ (Σ_{i=1}^{d} αd,i · xi⊤).
are complements in {1, . . . , d}. As our coordinates we take the coefficients of the relevant blades in the expressions for ω and ω′ above, namely the determinants
pi1,...,in = Σσ (−1)^σ aσ(i1),1 · · · aσ(in),n
and
πin+1,...,id = Σσ (−1)^σ αn+1,σ(in+1) · · · αd,σ(id),
where each sum is over all permutations σ of the indicated subscript set, and, as usual, (−1)^σ = +1 or −1 according as the permutation σ is even or odd.
The following result attributed to the noted German mathematician Carl
G. J. Jacobi (1804-1851) provides the final key.
Then
det[ci,j] = (det g) · det[γi,j],
where [ci,j] is the n × n matrix with elements ci,j = bi,j, 1 ≤ i, j ≤ n, and [γi,j] is the (d − n) × (d − n) matrix with elements γi,j = βi+n,j+n, 1 ≤ i, j ≤ d − n.
Then
g(h(xj)) = g(xj) for j = 1, . . . , n, and g(h(xj)) = xj for j = n + 1, . . . , d.
We have
g⁻¹(xj) = f⁻¹(q⁻¹(xj)) = f⁻¹(xρ(j)) = Σ_{i=1}^{d} αi,ρ(j) · xi = Σ_{i=1}^{d} αi,ij · xi = Σ_{i=1}^{d} βi,j · xi.
Theorem 90 Let Φ+ have the basis B = {x1, . . . , xd}. Choose a basis B∧ for ∧Φ+ made up of exterior products (including the empty product, which we take to equal 1) of the elements of B. Similarly choose a basis B⊤∧ for ∧Φ+⊤ made up of exterior products of the elements of B⊤. Let H : ∧Φ+ → ∧Φ+⊤ be the vector space map such that for each xi1 ∧ · · · ∧ xin ∈ B∧
Exercise 8.6.2 The H of the theorem above does not depend on the particular order of the factors used to make up each element of the chosen bases B∧ and B⊤∧. That is, for any n and any permutation ρ of {1, . . . , d} sending 1, . . . , d to i1, . . . , id respectively, we have
H(xi1 ∧ · · · ∧ xin) = (−1)^ρ · xin+1⊤ ∧ · · · ∧ xid⊤
and therefore
H⁻¹(xin+1⊤ ∧ · · · ∧ xid⊤) = (−1)^ρ · xi1 ∧ · · · ∧ xin.
Exercise 8.8.1 The proposition above implies, as expected, that the point represented by the basis vector x1 lies in the coordinate hyperplanes represented by (xi⊤)⁰, i = 2, . . . , d.
Proposition 92 Let 1 ≤ k ≤ n ≤ d. The flat represented by the subspace X = (xi1⊤)⁰ ∩ · · · ∩ (xik⊤)⁰ intersects the flat represented by the n-dimensional subspace Y in a flat of dimension at least n − k if and only if the flat represented by Y has a zero Plücker coordinate corresponding to each of the basis elements xj1 ∧ · · · ∧ xjn such that {i1, . . . , ik} ⊂ {j1, . . . , jn}.
Proof: Suppose first that the flat represented by Y has a zero Plücker coordinate corresponding to each of the basis elements xj1 ∧ · · · ∧ xjn such that {i1, . . . , ik} ⊂ {j1, . . . , jn}. The flats represented by X and Y do intersect, since by the previous proposition the flat represented by Y intersects (several, perhaps, but one is enough) a flat represented by a subspace (xj1⊤)⁰ ∩ · · · ∩ (xjn⊤)⁰ such that {i1, . . . , ik} ⊂ {j1, . . . , jn}, and must therefore intersect the flat represented by its factor X. Denote by x̄m the basis element that represents the hyperplane (xm⊤)⁰. Then X is represented by the regressive product ξ = x̄i1 ∨ · · · ∨ x̄ik, and each basis n-blade xj1 ∧ · · · ∧ xjn is a regressive product of the d − n hyperplane blades x̄mn+1, . . . , x̄md such that mn+1, . . . , md ∈ {1, . . . , d} ∖ {j1, . . . , jn}. Hence the regressive product η ∨ ξ, where η is a blade that represents the flat that Y represents, consists of a sum of such terms, and upon careful scrutiny of these terms we find that each is zero, but only because the flat represented by Y has a zero Plücker coordinate corresponding to each of the basis elements xj1 ∧ · · · ∧ xjn such that {i1, . . . , ik} ⊂ {j1, . . . , jn}. The terms such that {j1, . . . , jn} excludes at least one element of {i1, . . . , ik} have each excluded i appearing as an m, while the terms such that {i1, . . . , ik} ⊂ {j1, . . . , jn} have no i appearing as an m. Hence η ∨ ξ = 0 if and only if the flat represented by Y has a zero Plücker coordinate corresponding to each of the basis elements xj1 ∧ · · · ∧ xjn such that {i1, . . . , ik} ⊂ {j1, . . . , jn}. But η ∨ ξ = 0 if and only if the dimension of X + Y is strictly less than d. The dimension of X is d − k and the dimension of Y is n, so that by Grassmann's Relation (Corollary 37)
dim(X ∩ Y) = dim X + dim Y − dim(X + Y) > (d − k) + n − d = n − k    (∗)
and hence
n − k < dim(X ∩ Y).
The flat represented by X ∩ Y thus has dimension at least n − k.
Suppose on the other hand that X and Y intersect in a flat of dimension at least n − k. Then, reversing the steps above, we recover Equation (∗), so that η ∨ ξ = 0, which can hold only if the flat represented by Y has a zero Plücker coordinate corresponding to each of the basis elements xj1 ∧ · · · ∧ xjn such that {i1, . . . , ik} ⊂ {j1, . . . , jn}.
Exercise 8.8.2 From Grassmann’s Relation alone, show that the subspaces
X and Y of the proposition above always intersect in a subspace of dimension
at least n − k. Hence, without requiring any zero Plücker coordinates, we get
the result that the flats represented by X and Y must intersect in a flat of
dimension at least n − k − 1.
We wish to compute its intersection with the plane $\bar{2}$, which is then the line
$$(-\bar{3} + \bar{4} + \bar{1}) \vee \bar{2} = -\bar{3} \vee \bar{2} + \bar{4} \vee \bar{2} + \bar{1} \vee \bar{2} = -14 + 13 - 34.$$
We now have an answer in the form of a line blade, but we may wish also to know at least two points that determine the line. A point on a line can be obtained by taking its regressive product with any coordinate plane that does not contain it. The result of taking the regressive product of our line with each coordinate plane is as follows.
$$(-\bar{3} + \bar{4} + \bar{1}) \vee \bar{2} \vee \bar{1} = -\bar{3} \vee \bar{2} \vee \bar{1} + \bar{4} \vee \bar{2} \vee \bar{1} = 4 - 3$$
$$(-\bar{3} + \bar{4} + \bar{1}) \vee \bar{2} \vee \bar{2} = 0$$
$$(-\bar{3} + \bar{4} + \bar{1}) \vee \bar{2} \vee \bar{3} = \bar{4} \vee \bar{2} \vee \bar{3} + \bar{1} \vee \bar{2} \vee \bar{3} = -1 - 4$$
$$(-\bar{3} + \bar{4} + \bar{1}) \vee \bar{2} \vee \bar{4} = -\bar{3} \vee \bar{2} \vee \bar{4} + \bar{1} \vee \bar{2} \vee \bar{4} = -1 - 3.$$
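These computations can be reproduced mechanically. The sketch below is our own Python code, with our own helper names and sign conventions: $\bar{m}$ is taken to be the increasing-order wedge of the basis vectors other than $x_m$, and the regressive product is computed as $H^{-1}(H(s) \wedge H(t))$. With these choices it recovers the line blade $-14 + 13 - 34$ above; point representatives may come out scaled by a nonzero factor relative to the text's, which is harmless for homogeneous representation.

```python
from collections import defaultdict

d = 4

def perm_sign(seq):
    """Sign of the permutation sorting seq (distinct entries assumed)."""
    s, seq = 1, list(seq)
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            if seq[i] > seq[j]:
                s = -s
    return s

def wedge(a, b):
    """Progressive product of multivectors stored as
    {sorted index tuple: coefficient} dictionaries (0-based indices)."""
    out = defaultdict(float)
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue                      # repeated factor: term is 0
            out[tuple(sorted(I + J))] += perm_sign(I + J) * x * y
    return {k: v for k, v in out.items() if v != 0}

def H(a):
    """Complement map x_I -> sign * x_{I^c}, sign = sign of (I, I^c)."""
    out = {}
    for I, x in a.items():
        Ic = tuple(i for i in range(d) if i not in I)
        out[Ic] = perm_sign(I + Ic) * x
    return out

def H_inv(a):
    out = {}
    for K, x in a.items():
        I = tuple(i for i in range(d) if i not in K)
        out[I] = x / perm_sign(I + K)
    return out

def vee(a, b):
    """Regressive product via s ∨ t = H^{-1}(H(s) ∧ H(t))."""
    return H_inv(wedge(H(a), H(b)))

def bar(m):
    """A blade for the coordinate plane omitting x_m (0-based), taken in
    increasing order; the text's bar-blades may differ from these in sign."""
    return {tuple(i for i in range(d) if i != m): 1.0}

def add(*terms):
    """Linear combination given as (coefficient, multivector) pairs."""
    out = defaultdict(float)
    for c, mv in terms:
        for k, v in mv.items():
            out[k] += c * v
    return {k: v for k, v in out.items() if v != 0}

# The plane -bar3 + bar4 + bar1 of the example (indices shifted down by 1):
plane = add((-1, bar(2)), (1, bar(3)), (1, bar(0)))
line = vee(plane, bar(1))     # intersection with the plane bar2
print(line)                   # {(0,3): -1, (0,2): 1, (2,3): -1}: -14 + 13 - 34
print(vee(line, bar(1)))      # {}: the line lies in the plane bar2
print(vee(line, bar(0)))      # a point on the line (here x3 - x4, i.e. -(4 - 3))
```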
Exercise 8.9.1 Find the intersection and the affine sum of the two lines $\bar{1} \vee (\bar{1} + \bar{2} + \bar{3} + \bar{4})$ and $(\bar{1} + \bar{2}) \vee (\bar{1} + \bar{3} + \bar{4})$.
Exercise 8.9.2 Suppose that the regressive product of two blades equals a nonzero scalar. What is the meaning of this both for the corresponding subspaces of $\Phi^{+}$ and for the corresponding flats of $\Phi$? Give an example where the blades both are lines in space.
Proof: We may assume that the given Plücker coordinates were obtained as minor determinants in the manner described in Section 8.4 above. Using the notation of that section, each extended coordinate array value may be written in determinant form as
$$P_{j_1, \ldots, j_n} = \begin{vmatrix} a_{j_1,1} & \cdots & a_{j_1,n} \\ \vdots & & \vdots \\ a_{j_n,1} & \cdots & a_{j_n,n} \end{vmatrix}.$$
We assume without loss of generality that the given nonzero entry is $P_{1,\ldots,n}$. We will first show that each of the $n$ files containing $P_{1,\ldots,n}$ is in the span of the columns $A_j$ of the matrix $A = [a_{i,j}]$, so that each is a coordinate $d$-tuple of a vector in the subspace represented by the blade with the given Plücker coordinates. Let the files containing $P_{1,\ldots,n}$ form the columns $B_j$ of the $d \times n$ matrix $B = [b_{i,j}]$, so that the file forming the column $B_j$ has the elements
$$b_{i,j} = P_{1,\ldots,j-1,i,j+1,\ldots,n} = \begin{vmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{j-1,1} & \cdots & a_{j-1,n} \\ a_{i,1} & \cdots & a_{i,n} \\ a_{j+1,1} & \cdots & a_{j+1,n} \\ \vdots & & \vdots \\ a_{n,1} & \cdots & a_{n,n} \end{vmatrix} = \sum_{k=1}^{n} c_{j,k}\, a_{i,k},$$
where the last expression follows upon expanding the determinant about the row $j$ containing the $a_{i,k}$. Note that the coefficients $c_{j,k}$ are independent of $i$, and hence
$$B_j = \sum_{k=1}^{n} c_{j,k} A_k,$$
showing that the file forming the column $B_j$ is in the span of the columns of $A$.
To verify the independence of the vectors for which the $B_j$ are coordinate $d$-tuples, let us examine the matrix $B$. Observe that the top $n$ rows of $B$ are $P_{1,\ldots,n}$ times the $n \times n$ identity matrix. Hence $B$ contains an $n \times n$ minor with a nonzero determinant, and the vectors $w_j$ for which the $B_j$ are coordinate $d$-tuples must therefore be independent, since it follows from what we found in Section 8.4 that $w_1 \wedge \cdots \wedge w_n$ is not zero.
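As a concrete illustration of the proof's file construction, here is a small numpy sketch (our own setup and names) for $d = 4$, $n = 2$: starting from a rank-2 matrix $A$, it forms the extended coordinate values as minors, extracts the two files through the nonzero entry $P_{1,2}$, and checks that the files span the same plane and factor the same blade.

```python
# Factoring a 2-blade in dimension 4 from its Plucker coordinates
# (a sketch; 0-based indices, so the text's P_{1,2} is P(0, 1) here).
import itertools
import numpy as np

d, n = 4, 2
A = np.array([[1., 0.],
              [2., 1.],
              [0., 3.],
              [1., 1.]])          # columns span the plane; blade = A1 ^ A2

def P(j1, j2):
    """Extended coordinate array value: antisymmetric in its indices."""
    return np.linalg.det(A[[j1, j2], :])

# P(0, 1) = 1 is nonzero, so take the two files through it; for n = 2
# these have elements b_{i,1} = P(i, 1) and b_{i,2} = P(0, i).
B = np.array([[P(i, 1) for i in range(d)],
              [P(0, i) for i in range(d)]]).T

# The files are coordinate 4-tuples of independent vectors in the plane:
assert np.linalg.matrix_rank(np.hstack([A, B])) == 2
# and they factor the blade: the 2x2 minors of B reproduce the Plucker
# coordinates up to the common factor P(0,1) ** (n - 1).
for j1, j2 in itertools.combinations(range(d), 2):
    assert np.isclose(np.linalg.det(B[[j1, j2], :]), P(0, 1) * P(j1, j2))
print("files:", B.T)
```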
Exercise 8.10.1 For each nonzero extended coordinate, apply the method
of the proposition above to the Plücker coordinates of the line obtained in
the first example of the previous section. Characterize the sets of nonzero
extended coordinates that are sufficient to use to yield all essentially different
obtainable factorizations. Compare your results to the points that in the
example were obtained by intersecting with coordinate planes.
Exercise 8.10.2 Use the method of the proposition above to factor the line $\lambda$ of the second example of the previous section into the regressive product of plane blades. Do this first by factoring $H(\lambda)$ in $\bigwedge \Phi^{+\top}$ and then carrying the result back to $\bigwedge \Phi^{+}$. Then also do it by factoring $\lambda$ directly in the exterior algebra on $\bigwedge \Phi^{+}$ that the $\vee$ product induces (where the "vectors" are the plane blades).
Here $\gamma$ is expanded both in coordinates adapted to the $u$s, where the sum is over all subsets $I \subset \{1, \ldots, l\}$ such that $|I| = n$ ($|I|$ denotes the number of elements of the set $I$), and in coordinates adapted to the $v$s, where the expansion adapted to the $v$s is intentionally set up using $(-1)^{I I^c}$ rather than $(-1)^{I^c I}$ ($I^c$ denoting the complement of the index set $I$), and the sums are over only those subsets $I$ such that $|I| = n$. The $a_I$ and $b_I$ are determined by
$$a_I \cdot u_1 \wedge \cdots \wedge u_l = u_{I^c} \wedge \gamma \qquad \text{and} \qquad b_I \cdot v_1 \wedge \cdots \wedge v_m = \gamma \wedge v_{I^c}.$$
We can get some useful expressions for the $a_I$ and $b_I$ by first creating some blades that represent $U$ and $V$ and contain $\gamma$ as a factor. To simplify our writing, denote $u_1 \wedge \cdots \wedge u_l$ by $\alpha$ and $v_1 \wedge \cdots \wedge v_m$ by $\beta$. Supposing that $\gamma \neq 0$, we may choose blades $\widetilde{\alpha}$ and $\widetilde{\beta}$ and nonzero scalars $a$ and $b$ such that $\widetilde{\alpha} \wedge \gamma = a \cdot \alpha$ and $\gamma \wedge \widetilde{\beta} = b \cdot \beta$. We then find that
$$a_I \cdot \alpha \wedge \widetilde{\beta} = b \cdot u_{I^c} \wedge \beta \qquad \text{and} \qquad b_I \cdot \widetilde{\alpha} \wedge \beta = a \cdot \alpha \wedge v_{I^c},$$
and since
$$a \cdot \alpha \wedge \widetilde{\beta} = \widetilde{\alpha} \wedge \gamma \wedge \widetilde{\beta} = b \cdot \widetilde{\alpha} \wedge \beta,$$
we then have
$$\frac{a_I}{ab} \cdot \widetilde{\alpha} \wedge \gamma \wedge \widetilde{\beta} = u_{I^c} \wedge \beta \qquad \text{and} \qquad \frac{b_I}{ab} \cdot \widetilde{\alpha} \wedge \gamma \wedge \widetilde{\beta} = \alpha \wedge v_{I^c}.$$
$\widetilde{\alpha} \wedge \gamma \wedge \widetilde{\beta}$, $u_{I^c} \wedge \beta$, and $\alpha \wedge v_{I^c}$ are each the progressive product of $d$ vectors and therefore are scalar multiples of the progressive product of all the vectors of any basis for $\Phi^{+}$, such as the fixed underlying basis $B = \{x_1, \ldots, x_d\}$ that we use for the purposes of defining the isomorphism $H : \bigwedge \Phi^{+} \to \bigwedge \Phi^{+\top}$ and the resulting regressive product that it produces. We might as well also then use $B$ for extracting scalar multipliers in the above equations. We therefore define $[\xi]$, the bracket of the progressive product $\xi$ of $d$ vectors, via the equation $\xi = [\xi] \cdot x_1 \wedge \cdots \wedge x_d$. $[\xi]$ is zero when $\xi = 0$ and is nonzero otherwise (when $\xi$ is a $d$-blade). Thus we have
$$a_I = \frac{ab}{[\widetilde{\alpha} \wedge \gamma \wedge \widetilde{\beta}]} \cdot [u_{I^c} \wedge \beta] \qquad \text{and} \qquad b_I = \frac{ab}{[\widetilde{\alpha} \wedge \gamma \wedge \widetilde{\beta}]} \cdot [\alpha \wedge v_{I^c}].$$
Dropping the factor that is independent of $I$, we find that we have the two equal expressions
$$\sum_{\substack{I \subset \{1,\ldots,l\} \\ |I| = n}} (-1)^{I^c I} [u_{I^c} \wedge \beta] \cdot u_I = \sum_{\substack{I \subset \{1,\ldots,m\} \\ |I| = n}} (-1)^{I I^c} [\alpha \wedge v_{I^c}] \cdot v_I. \qquad (**)$$
where
$$\rho = \begin{pmatrix} 1 & \cdots & n & n+1 & \cdots & n+d-l & n+d-l+1 & \cdots & d \\ j_1 & \cdots & j_n & j_{l+1} & \cdots & j_d & j_{n+1} & \cdots & j_l \end{pmatrix},$$
so that $\alpha \vee \beta = (-1)^{\sigma} \cdot x_{j_1} \wedge \cdots \wedge x_{j_n} = \mu(\alpha, \beta)$. This example leads us to conjecture that it is always the case that $\mu(\alpha, \beta) = \alpha \vee \beta$, a result that we will soon prove.
Exercise 8.12.1 Verify the results of the example above in the case where $d = 4$, $u_1 = x_3$, $u_2 = x_2$, $u_3 = x_1$, and $v_1 = x_3$, $v_2 = x_2$, $v_3 = x_4$. (In evaluating the second expression of $(**)$, be careful to base $I$ and $I^c$ on the subscripts of the $v$s, not on the subscripts of the $x$s, and notice that $\sigma$ comes from the bracket.)
While $\vee$ is a bilinear function defined on all of $\bigwedge \Phi^{+} \times \bigwedge \Phi^{+}$, $\mu$ is only defined for pairs of blades that have degrees that sum to at least $d$. Extending $\mu$ as a bilinear function defined on all of $\bigwedge \Phi^{+} \times \bigwedge \Phi^{+}$ will allow us to prove equality with $\vee$ by considering only what happens to the elements of $B^{\wedge} \times B^{\wedge}$. We thus define the bilinear function $\widetilde{\mu} : \bigwedge \Phi^{+} \times \bigwedge \Phi^{+} \to \bigwedge \Phi^{+}$ by defining it on $B^{\wedge} \times B^{\wedge}$ as
$$\widetilde{\mu}(x_J, x_K) = \begin{cases} \mu(x_J, x_K) & \text{if } |J| + |K| \ge d, \\ 0 & \text{otherwise,} \end{cases}$$
where $J, K \subset \{1, \ldots, d\}$. We now show that when $\alpha$ and $\beta$ are blades that have degrees that sum to at least $d$, $\widetilde{\mu}(\alpha, \beta) = \mu(\alpha, \beta)$, so that $\widetilde{\mu}$ extends $\mu$.
Suppose that
$$\alpha = \sum_J a_J \cdot x_J \qquad \text{and} \qquad \beta = \sum_K b_K \cdot x_K.$$
Then, since the bracket is clearly linear, using in turn each of the expressions of $(**)$ which define $\mu$, we find that
$$\mu(\alpha, \beta) = \sum_I (-1)^{I^c I} \Big[\, u_{I^c} \wedge \sum_K b_K \cdot x_K \Big] \cdot u_I = \sum_K b_K \cdot \mu(\alpha, x_K) = \sum_K b_K \cdot \sum_J a_J \cdot \mu(x_J, x_K) = \widetilde{\mu}(\alpha, \beta),$$
and $\widetilde{\mu}$ indeed extends $\mu$. We now are ready to formally state and prove the conjectured result.
Theorem 94 For any $\eta, \zeta \in \bigwedge \Phi^{+}$, $\widetilde{\mu}(\eta, \zeta) = \eta \vee \zeta$. Hence for blades $\alpha = u_1 \wedge \cdots \wedge u_l$ and $\beta = v_1 \wedge \cdots \wedge v_m$ of respective degrees $l$ and $m$ such that $l + m \ge d$, we have
$$\alpha \vee \beta = \sum_{\substack{I \subset \{1,\ldots,l\} \\ |I| = n}} (-1)^{I^c I} [u_{I^c} \wedge \beta] \cdot u_I = \sum_{\substack{I \subset \{1,\ldots,m\} \\ |I| = n}} (-1)^{I I^c} [\alpha \wedge v_{I^c}] \cdot v_I, \qquad (*\vee*)$$
where $n = l + m - d$.
have been verified in the example above. The forms of $\alpha$ and $\beta$ are sufficiently general that every case we need to verify is obtainable by separately permuting factors inside each of $\alpha$ and $\beta$. This merely results in $\eta = (-1)^{\tau} \cdot \alpha$ and $\zeta = (-1)^{\upsilon} \cdot \beta$, so that $\eta \vee \zeta = (-1)^{\tau} (-1)^{\upsilon} \cdot \alpha \vee \beta = \widetilde{\mu}(\eta, \zeta)$ by bilinearity.
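A quick numerical spot check of the two expressions in $(*\vee*)$ is possible with nothing more than determinants. The sketch below (our own code and names, using the sign conventions of the reconstruction above; it verifies only that the two sums agree with each other, not the normalization of $\vee$) takes $d = 4$, $l = 3$, $m = 2$, so that $n = 1$ and both sums are ordinary vectors that can be compared directly.

```python
# Spot check that the u-side and v-side sums of (* v *) agree (a sketch).
import itertools
import numpy as np

rng = np.random.default_rng(0)
d, l, m = 4, 3, 2
n = l + m - d                                         # here n = 1
u = rng.integers(-3, 4, size=(l, d)).astype(float)    # rows are u_1..u_l
v = rng.integers(-3, 4, size=(m, d)).astype(float)    # rows are v_1..v_m

def bracket(*vectors):
    """[xi] for a progressive product of d vectors: the determinant of
    their coordinate matrix with respect to the fixed basis."""
    return np.linalg.det(np.array(vectors))

def shuffle_sign(first, second):
    """Sign of the arrangement (first, second) of sorted(first + second)."""
    seq = list(first) + list(second)
    inv = sum(1 for i in range(len(seq)) for j in range(i + 1, len(seq))
              if seq[i] > seq[j])
    return -1 if inv % 2 else 1

def u_side():
    total = np.zeros(d)
    for I in itertools.combinations(range(l), n):
        Ic = tuple(k for k in range(l) if k not in I)
        total += (shuffle_sign(Ic, I) * bracket(*u[list(Ic)], *v)
                  * u[I[0]])                 # n = 1: u_I is a single vector
    return total

def v_side():
    total = np.zeros(d)
    for I in itertools.combinations(range(m), n):
        Ic = tuple(k for k in range(m) if k not in I)
        total += (shuffle_sign(I, Ic) * bracket(*u, *v[list(Ic)])
                  * v[I[0]])
    return total

print(u_side(), v_side())
assert np.allclose(u_side(), v_side())       # the two sums agree
```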
8.13 Problems
1. Theorem 87 and its proof require that P be distinct from the other
points. Broaden the theorem by giving a similar proof for the case where P
coincides with A.
2. What is the effect on H of a change in the assignment of the subscript
labels 1, . . . , d to the vectors of B?
3. Determine the proportionality factor between two regressive products
as the determinant of a vector space map.
each of which must be evaluated at the same $k$-th vector of some sequence of vectors. The resulting determinant is then a function of $x_1^{\top}, \ldots, x_d^{\top}$, each potentially evaluated once at each of $d - n$ arguments, which we identify with a blade in $\bigwedge \Phi^{+\top}$ via the isomorphism of Theorem 71.
Chapter 9

Vector Projective Geometry
$P^{+}(V) = P(V) \cup \{0\}$. All flats are joins of points, with the null flat being the empty join. If $X$ is a subspace of $V$, then $V(X) \subset V(V)$ gives us a projective geometry in its own right, a subgeometry of $V(V)$, with $P(X) \subset P(V)$.

$V$ is conceptually the same as our $\Phi^{+}$ of the previous chapter, except that it is stripped of all references to the affine flat $\Phi$. No points of $P(V)$ are special. The directions that were viewed as special "points at infinity" in the viewpoint of the previous chapter become just ordinary points without any designated $\Phi^{0}$ for them to inhabit. We will at times find it useful to designate some hyperplane through $0$ in $V$ to serve as a $\Phi^{0}$ and thereby allow $V$ to be interpreted in a generalized affine manner. However, independent of any generalized affine interpretation, we have the useful concept of homogeneous representation of points by vectors, which we continue to exploit.
[Figure: a configuration of points labeled $A$, $B$, $C$, $A'$, $B'$, $C'$, $A''$, $B''$, $C''$.]
each $i$ we must have $g(x_i) = a_i \cdot f(x_i)$ for some nonzero scalar $a_i$. Also $g(x_1 + \cdots + x_d) = a \cdot f(x_1 + \cdots + x_d)$ for some nonzero scalar $a$. But then
$$a_1 \cdot f(x_1) + \cdots + a_d \cdot f(x_d) = a \cdot f(x_1) + \cdots + a \cdot f(x_d),$$
and since $\{f(x_1), \ldots, f(x_d)\}$ is an independent set, $a_i = a$ for all $i$. Hence $g(x_i) = a \cdot f(x_i)$ for all $i$ and therefore $g = a \cdot f$. Thus, similar to the classes of proportional vectors as the representative classes for points, we have the classes of proportional vector space isomorphisms as the representative classes for projective transformations. A vector space isomorphism homogeneously represents a projective transformation in the same fashion that a vector represents a point. For the record, we now formally state this result as the following theorem.
Theorem 99 Two isomorphisms between finite-dimensional vector spaces
induce the same projective transformation if and only if these isomorphisms
are proportional.
It is clear that projective transformations send projective frames to projective frames. Given an arbitrary projective frame for $V(V)$ and another for $V(W)$, there is an obvious projective transformation that sends the one to the other. This projective transformation, in fact, is uniquely determined.
Theorem 100 Let X0 , . . . , Xd and Y0 , . . . , Yd be projective frames for V (V)
and V (W), respectively. Then there is a unique projective transformation
from P (V) to P (W) which for each i sends Xi to Yi .
Proof: Let $x_1, \ldots, x_d$ be representative vectors corresponding to $X_1, \ldots, X_d$ and such that $x_0 = x_1 + \cdots + x_d$ represents $X_0$. Similarly, let $y_1, \ldots, y_d$ be representative vectors corresponding to $Y_1, \ldots, Y_d$ and such that $y_0 = y_1 + \cdots + y_d$ represents $Y_0$. Then the vector space map $f : V \to W$ that sends $x_i$ to $y_i$ for $i > 0$ induces a projective transformation that for each $i$ sends $X_i$ to $Y_i$. Suppose that the vector space map $g : V \to W$ also induces a projective transformation that for each $i$ sends $X_i$ to $Y_i$. Then there are nonzero scalars $a_0, \ldots, a_d$ such that, for each $i$, $g(x_i) = a_i \cdot y_i$, so that then
$$g(x_0) = g(x_1 + \cdots + x_d) = a_1 \cdot y_1 + \cdots + a_d \cdot y_d = a_0 \cdot (y_1 + \cdots + y_d),$$
and since $\{y_1, \ldots, y_d\}$ is an independent set, it must be that all the $a_i$ are equal to $a_0$. Hence $g = a_0 \cdot f$, so that $f$ and $g$ induce exactly the same projective transformation.
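The proof's construction is easy to carry out numerically. The following numpy sketch (our own helper names; $d = 3$) rescales arbitrary representatives so that the frame's first $d$ points sum to a representative of the unit point $X_0$, builds the map $f$, and confirms that the induced projective transformation sends each frame point to its target.

```python
# Constructing the unique projective transformation sending one projective
# frame to another (a sketch; d = 3, representatives chosen at random).
import numpy as np

def frame_representatives(Z, z0):
    """Columns of Z represent X_1..X_d and z0 represents X_0; return the
    rescaled columns x_i with x_1 + ... + x_d representing X_0."""
    c = np.linalg.solve(Z, z0)      # z0 = sum_i c_i z_i, all c_i nonzero
    return Z * c                    # scale column i by c_i

rng = np.random.default_rng(1)
Z, Y = rng.random((3, 3)), rng.random((3, 3))
z0, y0 = Z @ np.array([1., 2., 3.]), Y @ np.array([2., 1., 1.])

X = frame_representatives(Z, z0)
W = frame_representatives(Y, y0)
f = W @ np.linalg.inv(X)            # the vector space map with f(x_i) = w_i

# f sends each frame point (including the unit point) to its target, up to
# the nonzero scalar that homogeneous representation allows:
for a, b in list(zip(X.T, W.T)) + [(z0, y0)]:
    ratio = (f @ a) / b
    assert np.allclose(ratio, ratio[0])
```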
Exercise 9.5.1 The composite of vector space maps induces the composite of
the separately induced projective maps, and the identity induces the identity.
[Figure: central projection in $V$ of dimension 3, showing the center $C$, the target line $t$, a point $P$ with representative $v = x + w$, and its image $Q$ represented by $w$.]
The figure above, not the usual schematic but rather a full depiction of the actual subspaces in $V$, illustrates central projection in the simple case where $V$ is of dimension 3 and the target $t$ is a projective line (a 2-dimensional subspace of $V$). Let $P$ be represented by the vector $v$. Then we can (uniquely) express $v$ as $v = x + w$ where $x \in C$ and $w \in t$, since $V = C \oplus t$. But $w = v - x \in CP$, so $w \in CP \cap t$, and therefore $w$ represents the point $Q$ that is the central projection of $P$.
9.7 Problems
Then for all vectors $u$ and $v$, $a(u) = a(v)$. Thus Theorem 99 holds in the infinite-dimensional case as well. (Consider separately the cases where $u$ and $v$ represent the same point in $P(V)$ and where they do not. Write $g(u)$ in two different ways in the former case, and write $g(u + v)$ in two different ways in the latter case.)
10.2 Nondegeneracy
A pairing $g$ is called nondegenerate if for each nonzero $v$, there is some $w$ for which $g(v, w) \neq 0$, and for each nonzero $w$, there is some $v$ for which $g(v, w) \neq 0$.
Exercise 10.2.1 For any vector space $V$ over $F$, the natural evaluation pairing $e : V^{\top} \times V \to F$, defined by $e(f, v) = f(v)$ for $f \in V^{\top}$ and $v \in V$, puts $V^{\top}$ in duality with $V$. (Each $v \neq 0$ is part of some basis and therefore has a coordinate function $v^{\top}$.)
The nice thing about nondegeneracy is that it makes the included maps one-to-one, so that they map different vectors to different functionals. For, if the distinct vectors $s$ and $t$ of $V$ both map to the same functional (so that $g_t = g_s$), then $g_u$, where $u = t - s \neq 0$, is the zero functional on $W$. Hence for the nonzero vector $u$, $g_u(w) = g(u, w) = 0$ for all $w \in W$, and $g$ therefore fails to be nondegenerate.
Exercise 10.2.2 If the included maps are both one-to-one, the pairing is
nondegenerate. (The one-to-one included maps send only the zero vector to
the zero functional.)
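A tiny numerical illustration (our own, in numpy) of the connection between degeneracy and the included maps: when the matrix of values $g(x_i, y_j)$ on bases is singular, some nonzero vector is sent to the zero functional.

```python
# Degeneracy of a pairing given on bases by a matrix M, with
# g(v, w) = v^T M w (a sketch; the example matrix is ours).
import numpy as np

M = np.array([[1., 2.],
              [2., 4.]])            # singular: second row = 2 x first

# The included map sends v to the functional w -> v^T M w, whose
# coordinate row is v^T M; a null vector of M^T gives the zero row.
u = np.array([2., -1.])             # M^T u = 0
print(u @ M)                        # [0. 0.]: g(u, w) = 0 for every w
print(np.linalg.det(M))             # 0.0: degeneracy matches det = 0
```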
Notice that the two included maps of a scalar product are identical, and, abusing notation somewhat, we will use the same letter $g$ to refer both to the included map and to the scalar product itself. $g : V \times V \to F$ and $g : V \to V^{\top}$ will not be readily confused. When $V$ is finite-dimensional, the scalar product's included map $g$ is an isomorphism that allows each $\varphi \in V^{\top}$ to be represented by the vector $g^{-1}(\varphi)$ via
$$\varphi(v) = e(\varphi, v) = g(g^{-1}(\varphi), v),$$
which shows how the two seemingly different pairings, $e$ and $g$, are essentially the same when $V$ is a finite-dimensional scalar product space. The natural nondegenerate pairing $e$ between $V^{\top}$ and $V$ thus takes on various
for all $i$ and $j$. On the other hand, if some basis $\{y_1, \ldots, y_d\}$ satisfies $g(y_i, x_j) = (g(y_i))(x_j) = \delta_{i,j}$ for all $i$ and $j$, then $g(y_i)$ must by definition be the coordinate function $x_i^{\top}$, which then makes $y_i = g^{-1}(x_i^{\top}) = x_i^{\perp}$. We therefore conclude that biorthogonality characterizes the reciprocal. Hence, if we replace $B$ with $B^{\perp}$, the original biorthogonality formula displayed above tells us that $(B^{\perp})^{\perp} = B$.
For each element $x_j$ of the above basis $B$, $g(x_j) \in V^{\top}$ has a (unique) dual basis representation of the form
$$g(x_j) = \sum_{i=1}^{d} g_{i,j} \cdot x_i^{\top}.$$
Applying $g^{-1}$ to this result, we get the following formula that expresses each vector of $B$ in terms of the vectors of $B^{\perp}$:
$$x_j = \sum_{i=1}^{d} g(x_i, x_j) \cdot x_i^{\perp}.$$
In the other direction, each vector of $B^{\perp}$ is expressed in terms of the vectors of $B$ by
$$x_j^{\perp} = \sum_{i=1}^{d} g^{i,j} \cdot x_i,$$
where $g^{i,j}$ is the element in the $i$th row and $j$th column of the inverse of
$$[g(x_i, x_j)] = [g_{i,j}] = \begin{bmatrix} g(x_1, x_1) & \cdots & g(x_1, x_d) \\ \vdots & & \vdots \\ g(x_d, x_1) & \cdots & g(x_d, x_d) \end{bmatrix},$$
for we have
$$\sum_{i=1}^{d} g^{i,j} \cdot x_i = \sum_{i=1}^{d} g^{i,j} \sum_{k=1}^{d} g_{k,i} \cdot x_k^{\perp} = \sum_{k=1}^{d} \left( \sum_{i=1}^{d} g_{k,i} g^{i,j} \right) \cdot x_k^{\perp} = \sum_{k=1}^{d} \delta_{k,j} \cdot x_k^{\perp} = x_j^{\perp}.$$
Exercise 10.4.1 $x_1 \wedge \cdots \wedge x_d = G \cdot x_1^{\perp} \wedge \cdots \wedge x_d^{\perp}$, where $G = \det[g(x_i, x_j)]$.
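In coordinates, the reciprocal-basis formulas amount to a matrix inversion. A short numpy sketch (our own, with the $x_i$ taken to be the standard basis of $\mathbb{R}^3$ so that the columns of the inverse Gram matrix are the coordinates of the $x_j^{\perp}$), which also checks the determinant identity of the exercise above:

```python
# Reciprocal basis from the inverse Gram matrix (a sketch).
import numpy as np

G_mat = np.array([[2., 1., 0.],
                  [1., 2., 1.],
                  [0., 1., 2.]])    # g(x_i, x_j) for the standard basis x_i

recip = np.linalg.inv(G_mat)        # column j: coordinates of x_j^perp

# Biorthogonality g(x_i^perp, x_j) = delta_{i,j}:
print(np.allclose(recip.T @ G_mat, np.eye(3)))

# Exercise 10.4.1 read in coordinates:
# det[x_1 ... x_d] = det(G_mat) * det[x_1^perp ... x_d^perp].
print(np.isclose(np.linalg.det(np.eye(3)),
                 np.linalg.det(G_mat) * np.linalg.det(recip)))
```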
and using the result we found in the previous section, the new included map $\widetilde{g} : V^{\top} \to V$ has the (unique) basis representation
$$\widetilde{g}(x_j^{\top}) = \sum_{i=1}^{d} g(x_i^{\perp}, x_j^{\perp}) \cdot x_i,$$
which has the same right-hand side as the preceding basis representation formula, so we see that $\widetilde{g}(x_j^{\top}) = x_j^{\perp}$. The included map $\widetilde{g}$ must therefore be $g^{-1}$. Applying $g = \widetilde{g}^{-1}$ to both sides of the basis representation formula for $\widetilde{g}(x_j^{\top})$, we get
$$x_j^{\top} = \sum_{i=1}^{d} g(x_i^{\perp}, x_j^{\perp}) \cdot g(x_i),$$
or
$$x_j^{\top} = \sum_{i=1}^{d} g(x_i^{\perp}, x_j^{\perp}) \cdot (x_i^{\top})^{\perp}.$$
Exercise 10.5.1 $[g(x_i^{\perp}, x_j^{\perp})] = [g^{i,j}]$, i.e., $[g(x_i^{\perp}, x_j^{\perp})] = [g(x_i, x_j)]^{-1} = [g_{i,j}]^{-1}$.
With a standard scalar product, the vectors of the reciprocal of the chosen
basis can be found in terms of those of the chosen basis without computation.
Exercise 10.7.3 The included map $g$ for a standard scalar product satisfies $g(x_i) = \eta_i \cdot x_i^{\top}$ for each basis vector $x_i$ of the chosen basis, and also then $x_i^{\perp} = g^{-1}(x_i^{\top}) = \eta_i^{-1} \cdot x_i$ for each $i$.
It is not always true that an orthonormal basis exists for each possible scalar product that can be specified on a vector space. As long as $1 + 1 \neq 0$ in $F$, an orthogonal basis can always be found, but due to a lack of square roots in $F$, it may be impossible to find any orthogonal basis that can be normalized. However, over the real numbers, normalization is always possible, and in fact, over the reals, the number of $g(x_i, x_i)$ that equal $-1$ is always the same for every orthonormal basis of a given scalar product space, due to the well-known Sylvester's Law of Inertia (named for James Joseph Sylvester, who published a proof in 1852).
Exercise 10.7.4 Let $F = \{0, 1\}$ and let $V = F^2$. Let $x_1$ and $x_2$ be the standard basis vectors and let $[g(x_i, x_j)] = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$. Then for this $g$, $V$ has no orthogonal basis.
gives
$$*(x_{i_1} \wedge \cdots \wedge x_{i_n}) = (-1)^{\rho} \cdot x_{i_{n+1}}^{\perp} \wedge \cdots \wedge x_{i_d}^{\perp},$$
Exercise 10.8.1 In $\mathbb{R}^2$, with $B_H$ the standard basis $x_1 = (1, 0)$, $x_2 = (0, 1)$, compute $*((a, b))$ and check the orthogonality for each of these scalar products $g$ with matrix
$$\begin{bmatrix} g(x_1, x_1) & g(x_1, x_2) \\ g(x_2, x_1) & g(x_2, x_2) \end{bmatrix} = \text{a)} \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad \text{b)} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}, \quad \text{c)} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \text{d)} \begin{bmatrix} 5/3 & 4/3 \\ 4/3 & 5/3 \end{bmatrix}.$$
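A computational sketch of this exercise (our own code; it uses the relations $*x_1 = x_2^{\perp}$ and $*x_2 = -x_1^{\perp}$ that follow from the display above, extended linearly, and checks $g(v, *v) = 0$ for each of the four matrices):

```python
# Hodge star of a vector in R^2 for several scalar products (a sketch).
import numpy as np

def star(v, M):
    """Star of v = (a, b) for the scalar product with Gram matrix M over
    the standard basis: *v = a * x2^perp - b * x1^perp."""
    recip = np.linalg.inv(M)        # column j: coordinates of x_j^perp
    a, b = v
    return a * recip[:, 1] - b * recip[:, 0]

mats = [np.eye(2),
        np.diag([1., -1.]),
        np.array([[0., 1.], [1., 0.]]),
        np.array([[5., 4.], [4., 5.]]) / 3]

v = np.array([2., 3.])              # a sample (a, b)
for M in mats:
    w = star(v, M)
    print(w, v @ M @ w)             # g(v, *v) is 0 (up to rounding) each time
```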
The Hodge star was defined above by the simple formula $\bigwedge g^{-1} \circ H$. This formula could additionally be scaled, as some authors do, in order to meet some particular normalization criterion, such as making a blade and its star in some sense represent the same geometric content. For example, $\bigwedge g^{-1} \circ H$ is sometimes scaled up by the factor $\sqrt{|\det[g(x_i, x_j)]|}$ when $F$ is the field of real numbers. However, for the time being at least, $F$ will be kept general, and the unscaled simple formula will continue to be our definition.
The concept of the $p$th exterior power of a map, introduced by Exercise 5.5.5, will now find employment. Consider the $p$th exterior power $\bigwedge^p g : \bigwedge^p V \to \bigwedge^p V^{\top}$ of the included map $g$ of the original scalar product on $V$.
We have
$$\bigwedge\nolimits^p g(x_J) = g(x_{j_1}) \wedge \cdots \wedge g(x_{j_p}) = \left( \sum_{i_1=1}^{d} g(x_{i_1}, x_{j_1}) \cdot x_{i_1}^{\top} \right) \wedge \cdots \wedge \left( \sum_{i_p=1}^{d} g(x_{i_p}, x_{j_p}) \cdot x_{i_p}^{\top} \right),$$
and the $p$th exterior power of the original included map has a basis representation
$$\bigwedge\nolimits^p g(x_J) = \sum_{|I|=p} g_{I,J} \cdot x_I^{\top},$$
from which it follows that $g(x_I, x_J) = G \cdot g(*x_I, *x_J)$ once we have attributed the $(-1)^{\rho} (-1)^{\sigma}$ to the rearranged rows and columns of the big determinant.
will be the new version of the Hodge star on $\bigwedge^p V$ that we will now scrutinize and compare with the original version. We will focus on elements $s$ of degree $p$ (meaning those in $\bigwedge^p V$), but we do intend this $*$ to be applicable to arbitrary elements of $\bigwedge V$ by applying it separately to the components of each degree and summing the results.

If we have an element $r \in \bigwedge^{d-p} V$ that we allege is our new $*s$ for some $s \in \bigwedge^p V$, we can verify the allegation by showing that $g(r, t)$ is the coefficient of $x_1 \wedge \cdots \wedge x_d$ in $s \wedge t$ for all $t$ in a basis for $\bigwedge^{d-p} V$. If we show this for such an $r$ corresponding to each $s$ in a basis for $\bigwedge^p V$, then we will have completely determined the new $*$ on $\bigwedge^p V$.
Let us see what happens when we choose the designated basis $B$ to be the basis $B_H$ used in defining the annihilator blade map $H$, with exactly the same assignment of subscript labels to the basis vectors $x_i$. With $s$ and $t$ equal to the respective basis monomials $x_I = x_{i_1} \wedge \cdots \wedge x_{i_p}$ and $x_J = x_{j_{p+1}} \wedge \cdots \wedge x_{j_d}$, we have
$$s \wedge t = x_I \wedge x_J = \varepsilon_{I,J} \cdot x_1 \wedge \cdots \wedge x_d,$$
where $\varepsilon_{I,J} = 0$ if $\{j_{p+1}, \ldots, j_d\}$ is not complementary to $\{i_1, \ldots, i_p\}$ in $\{1, \ldots, d\}$, and otherwise $\varepsilon_{I,J} = (-1)^{\sigma}$ with $\sigma$ the permutation $i_1, \ldots, i_p, j_{p+1}, \ldots, j_d$ of $1, \ldots, d$. The original Hodge star gives $g(*s, t) = \varepsilon_{I,J}$ as well, so that, defined using the same basis ordered the same way, our new version of the Hodge star is exactly the same as the original version.
Exercise 10.10.1 For a given scalar product space, using $B' = \{x_1', \ldots, x_d'\}$ instead of $B = \{x_1, \ldots, x_d\}$ to define the Hodge star gives $h \cdot *$ instead of $*$, where $h$ is the nonzero factor, independent of $p$, such that $h \cdot x_1' \wedge \cdots \wedge x_d' = x_1 \wedge \cdots \wedge x_d$. Thus, no matter which definition is used for either, or what basis is used in defining either, for any two of our Hodge stars, over all of $\bigwedge V$ the values of one are the same constant scalar multiple of the values of the other. That is, ignoring scale, for a given scalar product space $V$ all of our Hodge stars on $\bigwedge V$ are identical, and the scale depends only on the basis choice (including labeling) for $V$ used in each definition.
Exercise 10.10.2 (Continuation) Suppose that for the scalar product $g$, the bases $B$ and $B'$ have the same Gram determinant, i.e., $\det[g(x_i, x_j)] = \det[g(x_i', x_j')]$, or equivalently, $g(x_1 \wedge \cdots \wedge x_d, x_1 \wedge \cdots \wedge x_d) = g(x_1' \wedge \cdots \wedge x_d', x_1' \wedge \cdots \wedge x_d')$. Taking it as a given that in a field, $1$ has only itself and $-1$ as square roots, $B$ and $B'$ then produce the same Hodge star up to sign.
The following result is now apparent.
Proposition 102 Two bases of a given scalar product space yield the same
Hodge star up to sign if and only if they have the same Gram determinant.
$$r \wedge *s = s \wedge *r.$$
Applying Proposition 101, we also get
$$s \wedge *r = G^{-1} g(r, s) \cdot x_1 \wedge \cdots \wedge x_d,$$
where $G = \det[g(x_i, x_j)]$. The results of the next two exercises then follow readily.
Exercise 10.10.3 For all $r, s \in \bigwedge^p V$,
$$g(r, s) = G \cdot *(r \wedge *s) = G \cdot *(s \wedge *r).$$

Exercise 10.10.4 For any $r \in \bigwedge^p V$,
$$*r = G^{-1} \sum_{|I|=p} (-1)^{\rho} g(r, x_I) \cdot x_{I^c},$$
where $I^c$ denotes the complement of $I$ in $\{1, \ldots, d\}$ and $\rho$ is the permutation listing the elements of $I$ and then those of $I^c$, each in increasing order.
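In the small case $d = 2$, $p = 1$, the displayed formula reads $*r = G^{-1}(g(r, x_1) \cdot x_2 - g(r, x_2) \cdot x_1)$, and it can be spot-checked against the direct definition (a sketch in our own code, for one of the Gram matrices from Exercise 10.8.1):

```python
# Checking the Exercise 10.10.4 formula for d = 2, p = 1 (a sketch).
import numpy as np

M = np.array([[5., 4.], [4., 5.]]) / 3     # Gram matrix; here G = det(M) = 1
G = np.linalg.det(M)
recip = np.linalg.inv(M)                   # columns: coordinates of x_j^perp

def star_direct(r):
    """*x1 = x2^perp and *x2 = -x1^perp, extended linearly."""
    return r[0] * recip[:, 1] - r[1] * recip[:, 0]

def star_formula(r):
    """The reconstructed formula *r = G^{-1}(g(r,x1) x2 - g(r,x2) x1)."""
    g = lambda a, b: a @ M @ b
    e1, e2 = np.eye(2)
    return (g(r, e1) * e2 - g(r, e2) * e1) / G

r = np.array([1., 4.])
print(star_direct(r), star_formula(r))     # the two agree
assert np.allclose(star_direct(r), star_formula(r))
```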
for $*$. Thus, applying $\widetilde{*}$ to an exterior product of vectors from the dual basis $B_H^{\top}$ gives
$$\widetilde{*}\left( x_{i_1}^{\top} \wedge \cdots \wedge x_{i_n}^{\top} \right) = H\left( x_{i_1}^{\perp} \wedge \cdots \wedge x_{i_n}^{\perp} \right).$$
Employing the usual standard scalar product with $B_H$ as its chosen basis, we then have, for example, $\widetilde{*}(x_1^{\top} \wedge \cdots \wedge x_n^{\top}) = x_{n+1}^{\top} \wedge \cdots \wedge x_d^{\top}$.
10.12 Problems
1. For finite-dimensional spaces identified with their double duals, the
included maps belonging to a pairing are the duals of each other.
2. A pairing of a vector space with itself is reflexive if and only if it is
either symmetric or alternating.
3. Give an example of vectors u, v, w such that u⊥v and v⊥w but it is
not the case that u⊥w.
4. The pairing $g$ of two $d$-dimensional vector spaces with respective bases $\{x_1, \ldots, x_d\}$ and $\{y_1, \ldots, y_d\}$ is nondegenerate if and only if
$$\det \begin{bmatrix} g(x_1, y_1) & \cdots & g(x_1, y_d) \\ \vdots & & \vdots \\ g(x_d, y_1) & \cdots & g(x_d, y_d) \end{bmatrix} \neq 0.$$
6. When $F$ is the field with just two elements, so that $1 + 1 = 0$, any scalar product on $F^2$ will make some nonzero vector orthogonal to itself.

7. For a scalar product space over a field where $1 + 1 \neq 0$, the scalar product $g$ satisfies $g(w - v, w - v) = g(v, v) + g(w, w)$ if and only if $v \perp w$.
8. What, if anything, is the Hodge star on $\bigwedge V$ when $\dim V = 1$?
9. With the same setup as Exercise 10.10.4,
$$r = \sum_{|I|=p} (-1)^{\rho} g(*r, x_{I^c}) \cdot x_I = \sum_{|J|=d-p} (-1)^{J^c J} g(*r, x_J) \cdot x_{J^c},$$
while
$$*(*r) = G^{-1} \sum_{|J|=d-p} (-1)^{J J^c} g(*r, x_J) \cdot x_{J^c}.$$
10. $*^{-1}(*s \wedge *t) = H^{-1}(H(s) \wedge H(t)) = s \vee t$ for $s, t \in \bigwedge V$. On the other hand,

where $\{x_1, \ldots, x_d\}$ is the basis used in defining $*$, and $G = \det[g(x_i, x_j)]$.
13. Using $\{x_1, \ldots, x_d\}$ as the basis in defining $*$, and using multi-index notation as in Section 10.9 above, then
$$*x_J = G^{-1} \sum_{|I|=|J|} (-1)^{\rho} g(x_I, x_J) \cdot x_{I^c},$$
but
$$\widetilde{*}\, x_J^{\top} = \sum_{|I|=|J|} (-1)^{\rho}\, \widetilde{g}(x_I^{\top}, x_J^{\top}) \cdot x_{I^c}^{\top},$$
15. If two bases have the same Gram determinant with respect to a given
scalar product, then they have the same Gram determinant with respect
to any scalar product.
16. All chosen bases that produce the same positive standard scalar product
also produce the same Hodge star up to sign.
17. $x_J^{\perp} = (x_J)^{\perp}$.