Linear Algebra
M. Thamban Nair · Arindama Singh
Department of Mathematics, Indian Institute of Technology Madras, Chennai, Tamil Nadu, India
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721, Singapore
Preface
Linear Algebra deals with the most fundamental ideas of mathematics in an abstract
but easily understood form. The notions and techniques employed in Linear
Algebra are widely spread across various topics and are found in almost every
branch of mathematics, more prominently, in Differential Equations, Functional
Analysis, and Optimization, which have wide applications in science and engi-
neering. The ideas and techniques from Linear Algebra have a ubiquitous presence
in Statistics, Commerce, and Management where problems of solving systems of
linear equations come naturally. Thus, for anyone who carries out a theoretical or
computational investigation of mathematical problems, it is more than a necessity to
equip oneself with the concepts and results in Linear Algebra, and apply them with
confidence.
Organization
Chapter 5 asks how and when a linear operator on a vector space may fix a line while acting on the vectors. This naturally leads to the concepts of eigenvalues and eigenvectors. The notion of fixing a line is further generalized to invariant subspaces and generalized eigenvectors. It gives rise to polynomials that annihilate a linear operator, and to the ascent of an eigenvalue of a linear operator. Various estimates involving the ascent and the geometric and algebraic multiplicities of an eigenvalue are derived to present a clear view.
Chapter 6 takes up the issue of representing a linear operator as a matrix by using the information on its eigenvalues. Starting with diagonalization, it proceeds to Schur triangularization, block-diagonalization, and the Jordan canonical form, characterizing similarity of matrices.
Chapter 7 tackles the spectral representation of linear operators on inner product spaces. It proves the spectral theorem for normal operators in a finite-dimensional setting, and once more for self-adjoint operators with a somewhat different flavour. It also discusses the singular value decomposition and the polar decomposition of matrices, which have much significance in applications.
Special Features
There are places where the approach has become non-conventional. For example, the rank theorem is proved even before elementary operations are introduced; the relation between the ascent, the geometric multiplicity, and the algebraic multiplicity of an eigenvalue is derived in the main text; and information on the dimensions of generalized eigenspaces is used to construct the Jordan form. Instead of proving results on matrices directly, a result is first proved for linear transformations and then interpreted for matrices as a particular case. Some of the other features are:
• Each definition is preceded by a motivating dialogue and succeeded by one or
more examples
• The treatment is fairly elaborate and lively
• Exercises are collected at the end of each section so that a student is not distracted from the main topic. The sole aim of these exercises is to reinforce the notions discussed so far
• Each chapter ends with a section listing problems. Unlike the exercises at the end of each section, these problems are theoretical, and sometimes unusual and hard, requiring the guidance of a teacher
• It puts emphasis on the underlying geometric idea leading to specific results
noted down as theorems
• It lays stress on using the already discussed material by recalling and referring
back to a similar situation or a known result
• It promotes interactive learning, building the confidence of the student
• It uses the operator theoretic method rather than elementary row operations. The latter are primarily used as a computational tool, reinforcing and realizing the conceptual understanding
Target Audience
This is a textbook primarily meant for a one- or two-semester course at the junior level. At IIT Madras, such a course is offered to master's students, in their fourth year after schooling, and some portions of it are also offered to undergraduate engineering students in their third semester. Naturally, the problems at the end of each chapter are tried by such master's students and sometimes by unusually bright engineering students.
The book contains a bit more than what can be worked out (not just covered) in a semester. The primary reason is that these topics form a prerequisite for undertaking any meaningful research in analysis and applied mathematics. The secondary reason is the variety of syllabi followed at universities across the globe. Thus, different courses on Linear Algebra can be offered by stressing suitable topics and merely mentioning others. The authors have taught different courses at different levels from this book, sticking to the core topics.
The core topics include vector spaces, up to dimension (Sects. 1.1–1.5), linear
transformation, up to change of basis (Sects. 2.1–2.5), a quick review of determi-
nant (Sect. 3.5), linear equations (Sect. 3.6), inner product space, up to orthogonal
and orthonormal bases (Sects. 4.1–4.5), eigenvalues and eigenvectors, up to
eigenspaces (Sects. 5.1–5.3), the characteristic polynomial in Sect. 5.5, and diag-
onalizability in Sect. 6.1. Depending on the stress in certain aspects, some of the
proofs from these core topics can be omitted and other topics can be added.
Contents
1 Vector Spaces
1.1 Vector Space
1.2 Subspaces
1.3 Linear Span
1.4 Linear Independence
1.5 Basis and Dimension
1.6 Basis of Any Vector Space
1.7 Sums of Subspaces
1.8 Quotient Space
1.9 Problems
2 Linear Transformations
2.1 Linearity
2.2 Rank and Nullity
2.3 Isomorphisms
2.4 Matrix Representation
2.5 Change of Basis
2.6 Space of Linear Transformations
2.7 Problems
3 Elementary Operations
3.1 Elementary Row Operations
3.2 Row Echelon Form
3.3 Row Reduced Echelon Form
3.4 Reduction to Rank Echelon Form
3.5 Determinant
3.6 Linear Equations
3.7 Gaussian and Gauss–Jordan Elimination
3.8 Problems
Chapter 1
Vector Spaces
A vector in the plane is an object with a certain length and a certain direction. Conventionally, it is represented by an arrow with an initial point and an endpoint, the endpoint being the arrowhead. We work with plane vectors by adding them, sub-
tracting one from the other, and by multiplying them with a number. We see that the
plane vectors have a structure, which is revealed through the two operations, namely
addition and multiplication by a number, also called scalar multiplication. These
operations can be seen in an alternate way by identifying the vectors with points in
the plane. The identification goes as follows.
Since only length and direction matter and not exactly the initial or the endpoints,
we may think of each vector having its initial point at the origin. The endpoint
can then be identified with the vector itself. With O as the origin with Cartesian
coordinates (0, 0) and P as the point with Cartesian coordinates (a, b), the vector
−→OP is identified with the point (a, b) in the plane
R2 = {(α, β) : α ∈ R, β ∈ R}.
Then the familiar parallelogram law for addition of vectors translates to component-
wise addition. If u, v are vectors with initial point (0, 0) and endpoints (a, b) and
(c, d), respectively, then the vector u + v has initial point (0, 0) and endpoint (a +
c, b + d). Similarly, for a real number α, the vector αu has the initial point (0, 0)
and endpoint (αa, αb).
Thus, (−1) u, which equals (−a, −b), represents the additive inverse −u of the
vector u; the direction of −u is opposite to that of u. Now, the plane is simply viewed
as a set of all plane vectors.
Similarly, in the three-dimensional space, you may identify a vector with a point
by first translating the vector to have its initial point as the origin and its arrow
head as the required point. The sum of two vectors in three dimensions gives rise
to the component-wise sum of two points. A real number α times a vector gives
a vector whose components are multiplied by α. That is, if u = (a1, b1, c1) and v = (a2, b2, c2), then
u + v = (a1 + a2, b1 + b2, c1 + c2), αu = (αa1, αb1, αc1).
Notice that the zero vector, written as 0, is identified with the point (0, 0, 0), and the
vector −u = (−a1 , −b1 , −c1 ) satisfies u + (−u) = 0.
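For instance, with u = (1, 2, 3) and v = (4, −1, 0), we have u + v = (5, 1, 3), 2u = (2, 4, 6), and −u = (−1, −2, −3), so that u + (−u) = (0, 0, 0) = 0.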
The notion of a vector space is an abstraction of the familiar set of vectors in two
or three dimensions. The idea is to keep the familiar properties of addition of vectors
and multiplication of a vector by a scalar. The set of scalars can be any field. For
obtaining interesting geometrical results, we may have to restrict the field of scalars.
In this book, the field F denotes either the field R of real numbers or the field C of
complex numbers.
Definition 1.1 A vector space over F is a nonempty set V along with two operations,
namely
(a) addition, which associates each pair (x, y) of elements x, y ∈ V with a unique
element in V , denoted by x + y, and
(b) scalar multiplication, which associates each pair (α, x), for α ∈ F and x ∈ V ,
with a unique element in V , denoted by αx,
satisfying the following conditions:
(1) For all x, y ∈ V, x + y = y + x.
(2) For all x, y, z ∈ V, (x + y) + z = x + (y + z).
(3) There exists an element in V , called a zero vector, denoted by 0, such that for
all x ∈ V, x + 0 = x.
(4) For each x ∈ V , there exists an element in V , denoted by −x, and called an
additive inverse of x, such that x + (−x) = 0.
(5) For all α ∈ F and for all x, y ∈ V, α(x + y) = αx + αy.
(6) For all α, β ∈ F and for all x ∈ V, (α + β)x = αx + βx.
(7) For all α, β ∈ F and for all x ∈ V, (αβ)x = α(βx).
(8) For all x ∈ V, 1x = x.
Elements of F are called scalars, and elements of a vector space V are called vec-
tors. A vector space V over R is called a real vector space, and a vector space over C
is called a complex vector space. As a convention, we shorten the expression “a vec-
tor space over F” to “a vector space”. We denote vectors by the letters u, v, w, x, y, z
with or without subscripts, and scalars by the letters a, b, c, d, α, β, γ , δ with or
without subscripts.
You have ready-made examples of vector spaces. The plane
R2 = {(a, b) : a, b ∈ R} and the three-dimensional space
R3 = {(a, b, c) : a, b, c ∈ R}
are real vector spaces. Notice that R is a vector space over R, and C is a vector
space over C as well as over R. Before presenting more examples of vector spaces,
we observe some subtleties about the conditions (3) and (4) in Definition 1.1. It is
unusual to write a particular symbol such as 0 for all zero vectors. It is also unusual to
write −x for all additive inverses of x. The philosophical hurdle will be over once we
prove that a zero vector is unique and an additive inverse of a vector is also unique.
Theorem 1.2 In any vector space the following statements are true:
(1) There exists exactly one zero vector.
(2) Each vector has exactly one additive inverse.
Proof (1) Suppose 0 and 0̃ are both zero vectors in V. Then, using conditions (1) and (3) for each of them, we obtain
0̃ = 0̃ + 0 = 0 + 0̃ = 0.
(2) Let x ∈ V, and let x′ and x″ be additive inverses of x. Then
x′ = x′ + 0 = x′ + (x + x″) = (x′ + x) + x″ = (x + x′) + x″ = 0 + x″ = x″ + 0 = x″.
Theorem 1.2 justifies the use of the symbols 0 for the zero vector and −x for the
additive inverse of the vector x. Of course, we could have used any other symbol,
say, θ for the zero vector and x̃ for the additive inverse of x; but the symbols 0 and
−x follow the custom. Note that −0 = 0. We also write y − x instead of y + (−x)
for all vectors x and y.
Notice the double meanings used in Definition 1.1. The addition of scalars as well
as of vectors is denoted by the same symbol +, and the multiplication of scalars
as well as of a vector with a scalar is written by just concatenating the elements.
Similarly, 0 denotes the zero vector as well as the scalar zero. Even the notation for
the additive inverse of vector x is −x; just the way we write −α for the additive
inverse of a scalar α. You should get acquainted with the double meanings.
It is easy to check that in every vector space V over F,
0 + x = x, x + (y − x) = y for all x, y ∈ V.
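To see the first identity, use conditions (1) and (3): 0 + x = x + 0 = x. For the second, conditions (1), (2), and (4) give x + (y − x) = x + ((−x) + y) = (x + (−x)) + y = 0 + y = y.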
Every vector space contains at least one element, the zero vector. On the other
hand, the singleton {0} is a vector space; it is called the zero space or the trivial
vector space. In general, we will be concerned with nonzero vector spaces, which
contain nonzero elements. A nonzero vector space is also called a nontrivial vector
space.
In a vector space, addition of two elements is allowed. This is generalized by
induction to a sum of any finite number of vectors. But an infinite sum of vectors is
altogether a different matter; it requires analytic notions such as convergence.
Example 1.3 In the following, the sets along with the specified addition and scalar
multiplication are vector spaces. (Verify.)
(1) Consider the set Fn of all n-tuples of scalars, that is,
Fn := {(a1 , . . . , an ) : a1 , . . . , an ∈ F}.
We assume that two elements in Fn are equal when their respective components are
equal. For x = (a1, . . . , an), y = (b1, . . . , bn) ∈ Fn, and α ∈ F, define the addition and scalar multiplication component-wise, that is,
x + y := (a1 + b1, . . . , an + bn), αx := (αa1, . . . , αan).
Then Fn is a vector space over F; its zero vector is (0, . . . , 0), and −(a1, . . . , an) = (−a1, . . . , −an).
(2) We use the notation Fm×n for the set of all m × n matrices with entries from F.
A matrix A ∈ Fm×n is usually written as
     ⎡ a11 · · · a1n ⎤
A =  ⎢  ⋮          ⋮  ⎥ ,
     ⎣ am1 · · · amn ⎦
that is, A = [aij] with aij ∈ F for 1 ≤ i ≤ m and 1 ≤ j ≤ n.
We say that two matrices are equal when their respective entries are equal. That is, for A = [aij] and B = [bij], we write A = B if and only if aij = bij for all i, j. For A = [aij], B = [bij] ∈ Fm×n and α ∈ F, define addition and scalar multiplication entry-wise, that is, A + B := [aij + bij] and αA := [αaij]. With these operations of addition and scalar multiplication, Fm×n becomes a vector space over F. The zero vector in Fm×n is the zero matrix, i.e. the matrix with all entries 0, and the additive inverse of A = [aij] ∈ Fm×n is the matrix −A := [−aij].
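For instance, consider A, B ∈ R2×2, where A has rows (1, 2) and (3, 4), and B has rows (0, 1) and (1, 0). Then A + B has rows (1, 3) and (4, 4), and 2A has rows (2, 4) and (6, 8).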
(3) For n ∈ {0, 1, 2, . . .}, let Pn (F) denote the set of all polynomials (in the variable
t) of degree at most n, with coefficients in F. That is, x ∈ Pn (F) if and only if x is
of the form
x = a0 + a1 t + · · · + an t n
for some a0, a1, . . . , an ∈ F. Addition and scalar multiplication in Pn (F) are defined coefficient-wise: for such x, y = b0 + b1 t + · · · + bn t n, and α ∈ F,
x + y := (a0 + b0) + (a1 + b1)t + · · · + (an + bn)t n, αx := αa0 + αa1 t + · · · + αan t n.
The zero vector in Pn (F) is the polynomial with all its coefficients zero, and
−(a0 + a1 t + · · · + an t n) = −a0 − a1 t − · · · − an t n.
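For instance, in P2 (R), (1 + 2t − t 2) + (3 − t + 4t 2) = 4 + t + 3t 2 and 3(1 + 2t − t 2) = 3 + 6t − 3t 2.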
(4) Let P(F) denote the set of all polynomials (in the variable t) with coefficients in F, of all degrees, with the usual addition and scalar multiplication of polynomials. Then P(F) is a vector space over F.
(5) Let V be the set of all sequences (an) of scalars. For x = (an), y = (bn) ∈ V and α ∈ F, define the addition and scalar multiplication term-wise, that is, x + y := (an + bn) and αx := (αan).
With this addition and scalar multiplication, V is a vector space, where its zero vector
is the sequence with each term as zero, and −(an ) = (−an ). This space is called the
sequence space and is denoted by F∞ .
(6) Let S be a nonempty set. Let V be a vector space over F. Let F(S, V ) be the set of
all functions from S into V . As usual, x = y for x, y ∈ F(S, V ) when x(s) = y(s)
for each s ∈ S. For x, y ∈ F(S, V ) and α ∈ F, define x + y and αx point-wise; that
is,
(x + y)(s) := x(s) + y(s), (αx)(s) := αx(s) for s ∈ S.
Then F(S, V ) is a vector space over F, with the zero vector being the function that maps each s ∈ S to the zero vector of V, and the additive inverse of x being the function −x given by (−x)(s) := −x(s) for s ∈ S. We sometimes refer to this space as a function space.
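For instance, take S = R and V = R. If x(s) = s and y(s) = s2 for every s ∈ R, then (x + y)(s) = s + s2 and (2x)(s) = 2s for every s ∈ R, and both x + y and 2x again belong to F(R, R).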
Comments on Notation: Pn (R) denotes the real vector space of all polynomials of
degree at most n with real coefficients. Pn (C) denotes the complex vector space of
all polynomials of degree at most n with complex coefficients. Similarly, P(R) is the
real vector space of all polynomials with real coefficients, and P(C) is the complex
vector space of all polynomials with complex coefficients. Note that C is also a vector
space over R. Similarly, Pn (C) and P(C) are vector spaces over R. More generally,
if V is a complex vector space, then it is also a real vector space. If at all we require
to regard any vector space over C also as a vector space over R, we will specifically
mention it.
As particular cases of Example 1.3(2), (Read: Example 1.3 Part 2) we have the
vector spaces Fm×1 , the set of all column vectors of size m, and F1×n , the set of
all row vectors of size n. To save space, we use the transpose notation in writing
a column vector. That is, a column vector v of size n with its entries a1 , . . . , an is
written as
⎡ a1 ⎤
⎢  ⋮  ⎥   or as   [a1 · · · an ]T .
⎣ an ⎦
By putting the superscript T over a row vector v we mean that the column vector is
obtained by taking transpose of the row vector v. When the column vectors are writ-
ten by the lower case letters u, v, w, x, y, z with or without subscripts (sometimes
superscripts), the corresponding row vectors will be written with the transpose nota-
tion, that is, as u T , v T . Further, we will not distinguish between the square brackets
and the parentheses. Usually, we will write a row vector with parentheses. Thus, we
will not distinguish between
[a1 · · · an ] and (a1, . . . , an).
Thus we regard Fn the same as F1×n. We may recall that by taking the transpose of a matrix in Fm×n, we obtain a matrix in Fn×m. That is, if A = [aij] ∈ Fm×n, then its transpose is the matrix AT := [bij] ∈ Fn×m with bij = aji for all i, j.
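For instance, if A ∈ R2×3 has rows (1, 2, 3) and (4, 5, 6), then AT ∈ R3×2 has rows (1, 4), (2, 5), and (3, 6).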
Many vector spaces can be viewed as function spaces. For example, with S = N
and V = F, we obtain the sequence space of Example 1.3(5). With S = {1, . . . , n}
and V = F, each function in F(S, F) can be specified by an n-tuple of its function
values. Therefore, the vector space F({1, . . . , n}, F) can be viewed as Fn and also
as Fn×1 . Some more examples of function spaces follow.
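(As a concrete instance of this identification, the function x ∈ F({1, 2, 3}, F) with x(1) = 2, x(2) = 5, and x(3) = −1 corresponds to the triple (2, 5, −1) ∈ F3 and to the column vector [2 5 −1]T ∈ F3×1.)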
Example 1.4 (1) Let I be an interval, and let C(I, R) denote the set of all real-valued
continuous functions defined on I . For x, y ∈ C(I, R) and α ∈ R, define x + y and
αx point-wise as in Example 1.3(6).
The functions x + y and αx are in C(I, R). Then C(I, R) is a real vector space
with the zero element as the zero function and the additive inverse of x ∈ C(I, R) as
the function −x defined by (−x)(t) = −x(t) for all t ∈ I.
(2) Let R([a, b], R) denote the set of all real-valued Riemann integrable functions
on [a, b]. Define addition and scalar multiplication point-wise, as in Example 1.3(6).
From the theory of Riemann integration, it follows that if x, y ∈ R([a, b], R) and
α ∈ R, then x + y, αx ∈ R([a, b], R). It is a real vector space.
(3) For k ∈ N, let C k ([a, b], F) denote the set of all functions x from [a, b] to F such
that the kth derivative x (k) exists and is continuous on [a, b].
Define addition and scalar multiplication point-wise, as in Example 1.3(6). Then
C k ([a, b], F) is a vector space. Notice that C k ([a, b], F) ⊆ C([a, b], F) for each k ∈ N.
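For instance, the function x given by x(t) = |t| belongs to C([−1, 1], R) but not to C 1 ([−1, 1], R), since it is continuous on [−1, 1] but not differentiable at 0.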
Example 1.5 Let V1, . . . , Vn be vector spaces over F. Consider the Cartesian product
V = V1 × · · · × Vn = {(x1, . . . , xn) : x1 ∈ V1, . . . , xn ∈ Vn}.
For x = (x1, . . . , xn), y = (y1, . . . , yn) ∈ V and α ∈ F, define the addition and scalar multiplication component-wise, that is, x + y := (x1 + y1, . . . , xn + yn) and αx := (αx1, . . . , αxn). Then V is a vector space over F.
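For instance, taking V1 = · · · = Vn = F, the product space V1 × · · · × Vn is precisely the space Fn of Example 1.3(1). Similarly, R × R2 may be identified with R3 by identifying (a, (b, c)) with (a, b, c).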
In each of the following cases, check whether V is a vector space over F with respect to the given operations:
6. V is the set of all polynomials of degree 5 with real coefficients, F = R, and the operations are the addition and scalar multiplication of polynomials.
7. S is a nonempty set, s ∈ S, V is the set of all functions f : S → R with f (s) = 0,
F = R, and the operations are the addition and scalar multiplication of functions.
8. V is the set of all functions f : R → C satisfying f (−t) = f (t), F = R, and the
operations are the addition and scalar multiplication of functions.
9. V = {x}, where x is some symbol, and addition and scalar multiplication are
defined as x + x = x, αx = x for all α ∈ F.
1.2 Subspaces
A subset of a vector space may or may not be a vector space. It will be interesting
if a subset forms a vector space over the same underlying field and with the same
operations of addition and scalar multiplication inherited from the given vector space.
If U is a subset of a vector space V (over the field F), then the operations of addition
and scalar multiplication in U inherited from V are defined as follows:
Let x, y ∈ U, α ∈ F. Consider x, y as elements of V. The vector x + y in V
is the result of the inherited addition of x and y in U. Similarly, the vector αx
in V is the result of the inherited scalar multiplication of α with x in U.
In order that the operations of addition (x, y) → x + y and scalar multiplication
(α, x) → αx are well-defined operations on U , we require the vectors x + y and
αx to lie in U . This condition is described by asserting that U is closed under the
inherited operations.
Notice that the closure conditions in Theorem 1.8 can be replaced by the following
single condition:
For each scalar α ∈ F and for all x, y ∈ U , x + αy ∈ U.
(1) Consider U = {(a, 0) : a ∈ R}. For x = (a, 0), y = (b, 0) ∈ U and α ∈ R, we have x + αy = (a + αb, 0) ∈ U. By Theorem 1.8, U is a subspace of R2. Notice that the zero vector of U is the same (0, 0) as in R2, and −(a, 0) = (−a, 0).
(2) The set U = {(a, b) ∈ R2 : 2a + 3b = 0} is a subspace of R2 (Verify).
(3) Let Q denote the set of all rational numbers. Q is not a subspace of the real vector space R since 1 ∈ Q but √2 · 1 ∉ Q. Similarly, Q2 is not a subspace of R2.
(4) Consider C as a complex vector space. Let U = {a + i0 : a ∈ R}. We see that 1 ∈ U but i · 1 = i ∉ U. Therefore, U is not a subspace of C.
However, if we consider C as a real vector space, then U is a subspace of C. In this sense, U = R is a subspace of the real vector space C.
(5) Consider the spaces Pm (F) and Pn (F), where m ≤ n. Each polynomial of degree
at most m is also a polynomial of degree at most n. Thus, Pm (F) ⊆ Pn (F). Further,
Pm (F) is closed under the operations of addition and scalar multiplication inherited
from Pn (F). So, Pm (F) is a subspace of Pn (F) for any m ≤ n.
Also, for each n ∈ N, Pn (F) is a subspace of P(F).
(6) In Examples 1.4(1)–(2), both C([a, b], R) and R([a, b], R) are vector spaces.
Since C([a, b], R) ⊆ R([a, b], R) and the operations of addition and scalar multipli-
cation in C([a, b], R) are inherited from R([a, b], R), we conclude that C([a, b], R)
is a subspace of R([a, b], R).
(7) Consider C k ([a, b], F) of Example 1.4(3). For all α ∈ F, x, y ∈ C k ([a, b], F), we
have x + y ∈ C k ([a, b], F) and αx ∈ C k ([a, b], F). By Theorem 1.8, C k ([a, b], F) is
a subspace of C([a, b], F).
(8) Given α1, . . . , αn ∈ F, U = {(b1, . . . , bn) ∈ Fn : α1 b1 + · · · + αn bn = 0} is a subspace of Fn. When (α1, . . . , αn) is a nonzero n-tuple, the subspace U is a hyperplane passing through the origin in n dimensions. This terminology is partially borrowed from the case of F = R and n = 3, when the subspace {(b1, b2, b3) ∈ R3 : α1 b1 + α2 b2 + α3 b3 = 0} of R3 is a plane passing through the origin. However,
W = {(b1, . . . , bn) ∈ Fn : α1 b1 + · · · + αn bn = 1}
is not a subspace of Fn, since the zero vector (0, . . . , 0) does not belong to W.
(9) Let P([a, b], R) be the vector space P(R) where each polynomial is considered as
a function from [a, b] to R. Then the space P([a, b], R) is a subspace of C k ([a, b], R)
for each k ≥ 1.
For subsets S1, . . . , Sn of a vector space V, their sum is defined by
S1 + · · · + Sn := {x1 + · · · + xn ∈ V : xi ∈ Si, i = 1, . . . , n}.
As expected, the sum of two subspaces is a subspace, and the proof generalizes easily to any finite sum: if V1, . . . , Vn are subspaces of V, then V1 + · · · + Vn is a subspace of V.
For example, consider the subspaces V1 = {(a, b, c) ∈ R3 : a + b + c = 0} and V2 = {(a, b, c) ∈ R3 : a + 2b + 3c = 0} of R3. Then
V1 ∩ V2 = {(a, b, c) ∈ R3 : a + b + c = 0 = a + 2b + 3c},
which is a straight line through the origin. Both V1 ∩ V2 and V1 + V2 are subspaces of R3. In this case, we show that V1 + V2 = R3. For this, it is enough to show that R3 ⊆ V1 + V2. This requires expressing any (a, b, c) ∈ R3 as (a1 + a2, b1 + b2, c1 + c2) for some (a1, b1, c1) ∈ V1 and (a2, b2, c2) ∈ V2. This demands determining the six unknowns a1, b1, c1, a2, b2, c2 from the five linear equations
a1 + a2 = a, b1 + b2 = b, c1 + c2 = c, a1 + b1 + c1 = 0, a2 + 2b2 + 3c2 = 0.
It may be verified that with
a1 = −a − 2b − 2c, b1 = a + 2b + c, c1 = c,
a2 = 2a + 2b + 2c, b2 = −a − b − c, c2 = 0,
the five equations above are satisfied. Thus, (a1, b1, c1) ∈ V1, (a2, b2, c2) ∈ V2, and (a, b, c) = (a1, b1, c1) + (a2, b2, c2), as desired.
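For instance, with (a, b, c) = (1, 0, 0), these formulas give (a1, b1, c1) = (−1, 1, 0) ∈ V1 and (a2, b2, c2) = (2, −1, 0) ∈ V2, and indeed (−1, 1, 0) + (2, −1, 0) = (1, 0, 0).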
1. In each of the following cases, check whether U is a subspace of V with the operations inherited from V :
(l) V = C k [a, b], for k ∈ N, U = P[a, b], the set of all polynomials considered as functions on [a, b].
(m) V = C([0, 1], R), U = { f ∈ V : f is differentiable}.
2. For α ∈ F, let Vα = {(a, b, c) ∈ F3 : a + b + c = α}. Show that Vα is a subspace
of F3 if and only if α = 0.
3. Give an example of a nonempty subset of R2 which is closed under addition
and under additive inverse (i.e. if u is in the subset, then so is −u), but is not a
subspace of R2 .
4. Give an example of a nonempty subset of R2 which is closed under scalar mul-
tiplication but is not a subspace of R2 .
5. Suppose U is a subspace of V and V is a subspace of W. Show that U is a
subspace of W.
6. Give an example of subspaces of C3 whose union is not a subspace of C3 .
7. Show by a counter-example that if U + W = U + X for subspaces U, W, X of
V , then W need not be equal to X.
8. Let m ∈ N. Does the set {0} ∪ {x ∈ P(R) : degree of x is equal to m} form a
subspace of P(R)?
9. Prove that the only nontrivial proper subspaces of R2 are straight lines passing
through the origin.
10. Let U = {(a, b) ∈ R2 : a = b}. Find a subspace V of R2 such that U + V = R2
and U ∩ V = {(0, 0)}. Is such a V unique?
11. Let U be the subspace of P(F) consisting of all polynomials of the form at 3 + bt 7
for a, b ∈ F. Find a subspace V of P(F) such that U + V = P(F) and U ∩ V =
{0}.
12. Let U and W be subspaces of a vector space V. Prove the following:
(a) U ∪ W = V if and only if U = V or W = V.
(b) U ∪ W is a subspace of V if and only if U ⊆ W or W ⊆ U.
13. Let U = {A ∈ Fn×n : A T = A} and let W = {A ∈ Fn×n : A T = −A}. Matrices
in U are called symmetric matrices, and matrices in W are called skew-symmetric
matrices. Show that U and W are subspaces of Fn×n , Fn×n = U + W , and U ∩
W = {0}.
Moreover, for u 1 , . . . , u n in V ,
span{u 1 , . . . , u n } = {α1 u 1 + · · · + αn u n : α1 , . . . , αn ∈ F}.
In what follows, we also use the Kronecker delta defined by
δi j = 1 if i = j and δi j = 0 if i ≠ j, for i, j ∈ N.
Example 1.15 (1) In R3, consider the set S = {(1, 0, 0), (0, 2, 0), (0, 0, 3), (2, 1, 3)}. A linear combination of elements of S is a vector of the form
α1 (1, 0, 0) + α2 (0, 2, 0) + α3 (0, 0, 3) + α4 (2, 1, 3)
for some scalars α1, α2, α3, α4. Since span(S) is the set of all linear combinations of elements of S, it contains all vectors that can be expressed in the above form. For instance, (1, 2, 3), (4, 2, 9) ∈ span(S) since
(1, 2, 3) = 1(1, 0, 0) + 1(0, 2, 0) + 1(0, 0, 3) and (4, 2, 9) = 4(1, 0, 0) + 1(0, 2, 0) + 3(0, 0, 3).
It can be seen that span of any two vectors not in a straight line containing the
origin is the plane containing those two vectors and the origin.
(3) For each j ∈ {1, . . . , n}, let e j be the vector in Fn whose jth coordinate is 1 and all
other coordinates are 0, that is, e j = (δ1 j , . . . , δn j ). Then for any (α1 , . . . , αn ) ∈ Fn ,
we have
(α1 , . . . , αn ) = α1 e1 + · · · + αn en .
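In particular, Fn = span{e1, . . . , en }. For instance, in F3, (5, −2, 7) = 5e1 − 2e2 + 7e3, where e1 = (1, 0, 0), e2 = (0, 1, 0), and e3 = (0, 0, 1).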
(4) Consider the vector spaces P(F) and Pn (F). Define the polynomials u j := t j−1
for j ∈ N. Then Pn (F) is the span of {u 1 , . . . , u n+1 }, and P(F) is the span of
{u 1 , u 2 , . . .}.
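For instance, 3 − t + 2t 2 = 3u 1 − u 2 + 2u 3, so that 3 − t + 2t 2 ∈ span{u 1 , u 2 , u 3 } = P2 (F).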
(5) Let V = F∞ , the set of all sequences with scalar entries. For each n ∈ N, let en
be the sequence whose nth term is 1 and all other terms are 0, that is, en = (δn1, δn2, δn3, . . .).
Then span{e1, e2, . . .} is the space of all scalar sequences having only a finite number of nonzero terms. This space is usually denoted by c00 (N, F), also as c00. Notice that c00 (N, F) ≠ F∞; for instance, the constant sequence (1, 1, 1, . . .) belongs to F∞ but not to c00 (N, F).
Theorem 1.16 Let S be a subset of a vector space V. Then the following statements
are true:
(1) span(S) is a subspace of V , and it is the smallest subspace containing S.
(2) span(S) is the intersection of all subspaces of V that contain S.
Proof (1) Let x, y ∈ span(S) and let α ∈ F. Then x = a1 x1 + · · · + an xn and y = b1 y1 + · · · + bm ym for some x1, . . . , xn, y1, . . . , ym ∈ S and scalars a1, . . . , an, b1, . . . , bm ∈ F. Hence
x + y = a1 x1 + · · · + an xn + b1 y1 + · · · + bm ym ∈ span(S),
αx = αa1 x1 + · · · + αan xn ∈ span(S).
Theorem 1.16 implies that taking the span of a subset amounts to extending the subset to a subspace in a minimalistic way.
Some useful consequences of the notion of span are contained in the following
theorem.
Theorem 1.17 Let S, S1 and S2 be subsets of a vector space V. Then the following
are true:
(1) S = span(S) if and only if S is a subspace of V.
(2) span(span(S)) = span(S).
(3) If S1 ⊆ S2 , then span(S1 ) ⊆ span(S2 ).
(4) span(S1 ) + span(S2 ) = span(S1 ∪ S2 ).
(5) If x ∈ S, then span(S) = span{x} + span(S\{x}).
Proof (1) Since span(S) is a subspace of V , the condition S = span(S) implies that
S is a subspace of V. Conversely, if S is a subspace of V , then the minimal subspace
containing S is S. By Theorem 1.16, span(S) = S.
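As an illustration of Theorem 1.17(4), take S1 = {(1, 0)} and S2 = {(0, 1)} in R2. Then span(S1) = {(a, 0) : a ∈ R}, span(S2) = {(0, b) : b ∈ R}, and span(S1) + span(S2) = R2 = span(S1 ∪ S2).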
Theorems 1.16 and 1.18 show that, in extending the union of two subspaces to a subspace, one cannot do better than taking their sum.
Any vector space spans itself. But there can be much smaller subsets that also
span the space as Example 1.15 shows. Note that both ∅ and {0} are spanning sets
of the vector space {0}.
If S is a spanning set of V and x ∈ span(S\{x}), then S\{x} is also a spanning set of V. For, in this case, the vector x is a linear combination of some vectors from S\{x}; and if a vector v ∈ V is written as a linear combination in which x appears, then we can replace x by such a linear combination to obtain v as a linear combination of vectors in which x does not appear.
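For instance, S = {(1, 0), (0, 1), (1, 1)} is a spanning set of R2; since (1, 1) = (1, 0) + (0, 1) ∈ span(S\{(1, 1)}), the smaller set {(1, 0), (0, 1)} is also a spanning set of R2.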