100% found this document useful (4 votes)
12K views163 pages

SK Mapa Linear Algebra

Uploaded by

Subhadip Sinha
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
100% found this document useful (4 votes)
12K views163 pages

SK Mapa Linear Algebra

Uploaded by

Subhadip Sinha
Copyright
© © All Rights Reserved
We take content rights seriously. If you suspect this is your content, claim it here.
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 163

HIGHER ALGEBRA

152
VECTOR SPACES 153

This shows that a+ WE L{/31 + W, /3-i. + W .... , /3,. + W}.


We prove that the set { .81 + W , 13-i. + 1-V, ... , (3,. + W} is linearly inde.;.·; 4. Let Ube the subspace of JR 3 generated by the set {(\• O, l),Q(2 ,~•~} - F io d
pendent. Let us consider the relation . , _· two different subspaces P, Q of lR3 such that U $ P = JR • U $ - ·
p 1 (/31 + W) + P2 (f32 + W) + · · · + Pn (/3n + W) = 0 + W, where Pi E F;
Then (p1B1 + W) + (,pi.Bz + W) + · · · + (Pn.8-n + lV) = W 5. Let U ={( ; : ) :a = b= 0} and W ={( ; ~ ) :c = d = O} be
or. (pi 81 + P2/J2 + · · · + Pnf3n) + W = lV. subspace.s of lR2 x 2 - Show that U EB i,v· = lR2x2
Therefore Pl .81 + P2f3z + · · · + Pn/3n E W .
6 . Let s be the vector space of all n x n real symmetr~c n1atrices and T _be
Since { 01, et2 .. .. , Om.} is 3. basis of W. CP1/31 + P2f3z + · · · + Pn/3n) =· the vtietor space of all n x n real s kew synunetric matnces. Prove that dtm
Q101 + Q202 + Qm Om for some Qi E F . _c; = n(,~+ 1 > and dim T = n(,~-l) . Hence prove that the space lRnxn of all n X n
This gives Q101 + Q202 + QmOm - P1f31 - P2fh. - · · · - Pnf3n = 0. real matrices is the direct sum of Sand T .
Since { a: 1 , 0:2, . .. /31, /32, .. . , /3,.} is a linearly independent set,t we : :·
, Om,
Deduce that an n x n real matrix can be expressed as the sum of f',n n X n
real symmetric matrix and an n x n real skew symmetric matrix.
have Q1 = Q2 = · · · = Qm = 0, Pl = P2 = · · · = Pn = 0.
This proves that the set {/31 + lV. /32 + W, . .. , /3n + W} is linearly 7. Let V = R 3 and l,V be a subspace of V generated by the vectors
independent. So it is a basis of V /W. (l,0,0) , (l,1,0) . Find a basis of the quotient space V/l--V. Verify that dim
V/\,l/ = dim V- dim tV .
The dimension of V/W = n = (m + n)-; m = dim V- dim l-fT .
This completes the proof. 8. Let V = lR4 a nd lV be a subspace of V generated by the vectors
(l,0 , 0,U) , (1 , 1 , 0 , 0) . Find a basis of the quotient space \.•"/lV. Verify that
Worked Example. dim V/\,V = dim V- dim l-V.
1. Let V = JR3 and W be a subspace of V generated by the vectors '
(1, 0, 0) , (1 , 1, 0) . Find a basis of the quotient space V/W. Verify that 4.9. Row space and column space of a matrix.
dim V/W = dim V- dim W. Let A be an m x n matrix over a field F . Each row of .4 is an n-tuplc
Let a: = (1 , 0, 0), ,8 = (1 , 1. 0) . Since the vectors a , f3 are line!3-l"ly .. vector in pn. Let the row vectors of .4 be denoted by a: 1 , a: 2 , . . . , o:.,,,, .
independent, { a:, /3} is a basis of W. The linearly independent set ,8 o:, Each column of A is an m-~uple vector in F= . Let the column vectors
in V can be extended to a basis of V. be denoted by a'1 , G2 . . . . , &n .
If -y = (0, 0, 1) then the set { a:, /3, -y} is linearly independent in V and The row vectors of A generate a vector space which is said to be the
therefore it is a basis of V. A basis of the quotient space V /lV is -y + W row space of A and is denoted by R(A) and the column vectors generate
and therefore dim V /W = 1. a vector space which is said to be the column space of A a.nd is denoted
Here dim V=3. dim W=2 and dim V/W= 1= dim V- dim W. by C(A) . Clearly, R(A) is a subspace ·of Fn and C(A) is a subspace of
Fm .
Exercises 8 Row rank and column rank.

1. U = L{(2, O, 1), (3, 1, O)}, W = L{(l, 0, 0), (0, 1, 0)} . Fiud <lim U, dim W,
Definition. The dimension of R(A) is said to be the row rank of A and
dim (Un W) and dim (U + W). the dimension of C(.4) is said to be the column rank of A .
Since R(A) C F", the row rank of A $ n . Since C(A) c prn. the
2. Two subspaces of IR3 are U =
{(x, y, z) : x + y + z = 0} and w = {(x, y , z) ·= column rank of A $ m.
x + 2y - z = 0} . Find dim U, dim W, dim Un W , dim (U + W).
Example.
3. = { ( : : ) :a + b = 0}, = {( : : ) ;c+d = 0}
~2 i0 ~1 ) . The row space of A
U W are sub-
spaces of lR2x2- Find dim U, dim W, dim (Un W) and dim (U + W). 1. Let A = ( is the linear span of the

row vectors a1 = (1. 2 , 0) . 02 = (0, 1 , 3) . a- 3 = (2. 0 . 1).


~
154 HIGHER ALGEBRA ] VECTOR SPACES 155 '
p

~ '1
Therefore the row sp~ce_of A= {c 1a: 1 + c2 a: 2 + c3a:3 ·: c.;. E JR} .
T~e set { a1, a:2, 0:3} 1s linearly independent. Therefore the row rank
··/I Since an elementary matrix is non-singular, the product P1P2 . . . Pr
is non-singular.
of A is 3. ::}.!
By Theorem 4.9.1, A and B have the same row spaces.
The C_:>lumn space of_A is the linear span of the column vectors
(1,0,2) , a:2 = (2,1.0)}.a:3 = (0,3; 1). 61 ::::: -:.l Theorem 4.9.3. Let R be a non-zero row reduced echelon matrix row
Therefore ~he ~011:_mn_ sp~e of A= { c 1& 1 + c2 & 2 + c3a3 : ci E JR} .
The set {0:1 , a:2 , 03} 1s lmearly independent. Therefore the colum
rank of A is 3.
11
n _:'.'•_<_,.
·'".:.,,.
·.,- ,
_.
equivalent to an m x n matrix .4. Then the non-zero row vectors of R
form a basis of the row space of A .
Proof. Let R = (a.,J),n,n and 0:1 , 0:2, .. . , Ur be the non-zero row vectors
;:·,;1 of R. Then the row space of R is generated by a1 , a2 , ... , ar, 0, ... , () (0
Theorem 4.9.1. Let A be an m x n matrix and P be an m x m matrix _·;·~·f l being counted m - r times) . .The set of generators { o: 1, a2, .. . , ar, ()} is
over the same field F. Then the row space of PA is a subspace of the .·.'. 1 linearly dependent as it contains the null vector 0. The null vector () can
row space of A. \ be deleted from the generating set, and it is certain that the non-zero
l
l
In particular, if P be non-singular, then the matrices A and p A have . , vectors 0:1 , 0:2 , •.. , O:r generate the row sp~e of R.
the same row spaces. · ··_- 'j We need only to prove linear independence of the set { o: 1, a: 2 , • •. , a:r }.
Proof. Let P = (pij)m ,m, A= (aij)m ,n · Let O:i = (ail, a.;2 , . .. , a;n)-

row:~;:f:°~{E~,: ;::::~~t::;:~::,::::·~,:::+be th; ,


Pim0m2 , · · • · • · ,Pi1a1n + Pi2a2n + · · · + PimOmn) =
Since R is a row reduced echelon matrix, there are positive integers
k1, k2 , ... , kr satisfying the following conditions.
(i) the leading 1 of ai occurs in column ki ,
(ii) k1 < k2 < · · · < kr ,
Pil (au, a12, • • •, a1n)+Pi2(a21, a22 , • • •, a2n)+ · · ·+Pim(am1, am2 : . . . , amn) (iii) a i kj = 6ij,
(iv) aij = 0 if j < ki.
= Pi101 + Pi2a:2 + · · · + PimO:'m -
Let us consider the relation c1a1 + c2a2 + · · • + CrO:r = 0, where c/s
Each Pi is a linear combination of the vectors a1, 0:2 , . .. a:m . arc scalars.
Therefore L{p1,P2, .. . , Pm} C L{a:1,a2, . . . ,am}, by Theorem 4.3.7. Then c1(a11, a12 , .. . , a1n) +c2(a21 , a22, . . . , a:in) + · · · +
That is, the row space of PA is a subspace of the row space of A . . . (i) Cr(arl , arz ,, ·., arn) = (0 , 0, . . . , 0) .
Second Part. Since Pis non-singular, p- 1 exists. Equating k1 th, k2th, ... , krth components only and noting that
1
Considering the product p- 1 (PA), we have ! a1k1 =1, a2k1 = 0, . .. '<lrk1 = 0
alk 2 =O,a2k 2 =l, ... ,a,.·1.2 =0
the row space of p- 1 (PA) is a subspace of the row space of PA. i
That is , the row space of A is a subspace of the row space of PA .. .(ii) I

From (i) and (ii) it follows that the row space of A = . II U1kr = 0 , a2k.. = 0, ... , <lrkr = 1,
the row space i we have C1 = c2 = · · · = Cr = 0.
of PA..
I This proves that the set {01 , 0:2, . . . , Ur} is linearly independent and
Corollary 1. If A be an m x n matrbc and P be a non-singular m x m therefore it is a basis of the row space of R.
matrix, the row rank of A = the row rank of PA . Since R is row equivalent to A, the row space of A is same as that of
2. If .4 be an m x n matrix and P be a non-singular n x n matrix, the R and therefore { 0:1 , a2, ... , a:r} is a basis of the row space of A
column rank of A = the column rank of AP. Corollary. The row rank of a row reduced echelon matrix R is the
number of non-zero rows of R
Theorem 4.9.2. Row equivalent matrices have the same row spaces.
Proof. Let A and B be two row equivalent matrices. Then there exist Theorem 4.9.4. The row rank of an m x n matrix .4. is equal to its
e le m e ntary matrices P1 , P2, ... , Pr such that B = Pi P2 ... P r .4 . determinant rank.
156 HIGHER ALGEBR A

Proof. Let A be row equival ent to a row reduced echelon matrix R having;

.
1 '
.

01
VECTO R SPACES

= bn"fl + ~1 ")'2 + · · · + br1 'Yr,


157

r non-zer o rows . Then the row rank of A is r , by Theore m 4.9.3. Ag~


d'i = b12")'1 + ~2")':l + · · · + br2'Yr,
since A is row equival ent to a row reduced echelon matrix having r
zero rows the determ inant rank of A is r , by Theore m 3.6.5.
S. non--. I
· · -' ·. ·
Thus r = row rank of A = determ inant rank of A . = b1n'Yl + ~n'Y2 + · · · + brn'Yr•
ci'n

Coroll ary 1. The column rank of a matrix A is equal to its determi nant:'.:;; This shows that each of the colum:n vectors ai, a2, •••, d'n belongs to
:':.:· the linear span of r vectors ")'1, ")'2, ..• , 'Yr a.nd therefo re
rank
Proof. By the theorem ,
..; :. f;s{ .· the column rank of A$ r. ... (i)
the row rank of At = the determ inant rank of At. Now r = the row rank of A = the column rank of At.
But the row rank of At = the column rank of A; and Also the column rank of At :s; the row rank of At by just what we
the determ inant rank of At = the determ inant rank of A. have done ~d the row rank of At = the column rank of A. Theref ore
r :s; column rank of A.
Therefo re the column rank of A is equal to its determ inant rank. . . :: :t:. · (ii)
Combin ing (i) and (ii), the row rank of A= the column rankof A.
Coroll ary 2. For an m x n matrix A, the row rank of A
rank of A.
Theor em 4.9.5. For an m x n matrix A, the row rank of A

= the column:}Y;; ·
·"· •,· · Theor e~ A Let A and B be two matrice s over the same field F
such that. AB is defined. Then rank of AB$ min{ra nk of A, rank of B}
.
= the /-
column rank of A. ~ roof. Let A= (a;;)m, n,B = (b.j).;,,,, . Let /31,/h., . .. ,/3n be the row
• vectors of B. Let P1, />2, ... , Pm be the row vectors of AB. Then
Indepe ndent Proof.
Proof. Let A = (aij )-m,n and 01, 0:2;.. . , .",i ·i!! ·"';.:
O:-m be the row vectors, >},; .
Pl = an/31 + a12/32 + · · · + a1n/3n,
& 1 , a 2 , .. . , c:fn be the column vectors of A.
P2 = ~1!31 + a22/J2 + · · · + a2n/3n,
._ .:/ii·
Let the row rank of A= rand S = {th, /32, . . . , !3r} be a basis of the·:.Jff
row space of A where /3i = ( bil, bi2, ... , bin) and bij = akj for some)i:
,. Pm = a..n1f31 + a..n2/3i + · · · + °'mn/3n,
,-._ :::::~ Each p;. is a. linear combin ation of the vectors !31, /3i, ... , f3n.
Since S is a basis, 01 = C11,61 + c12!h + · · · + C1r/3r So L{,01, P2, ... , Pm} C L{/31, /32, ... , /3n}, by Theore m 4.3.7.
02 = C21/31 + C22/32 + · .. + C2r/Jr
... ...
:J
- .r
, ·.
Therefo re the row space of AB is a subspac e of the row space of B.
It follows that row rank of AB $ row rank of B. This implies that
Om, = C-m1/31 + Cm2/32 + · · · + Cmr/3r, where Cij are · '/ ,. rank of AB·::; ·rank of B
suitabl e scalars . (i)
~ -.- ,
The jth compo nent of ~i is Conside ring the produc t Bt At, we deduce that rank of Bt At :::; rank
aij and the jth compon ent of_ _-_-<·-
+ ci 2 fh + •••+ Cir/3r of N. That is, rank of (ABl $ rank of At. This implies that
e;1f31 lS C;1b1j + Ci2b2j + · · · + Cirbrj),
This hol~-for _·. · rank of AB $ rank of A
1, 2, .. . , m . Therefo re (ii) - - -
::.. 1
a1j = c11b1j + c12b2j + · · · + C1rbrj, :,: ,;_' \
Combin ing (i) a.nd (ii), rank of AB :5 min {rank of A, rank of ~ _...
a2j = C21 b1j + C22b2j + · · · + C2rbrj, .. \ Coroll ary 1. If A be non-sin gular, rank of AB= rank of B . ·
.~ Proof. Since A is non-sin gular, A- 1 exists. Co~ide ring the matrix
,· j
. 1 A- 1 (AB), we have rank of A- 1 (AB) $ rank of AB.
en
That is, rank of B $ rank of AB. But rank of AB $ rank of B .
C21 I
= "Ir• Then Hence it follows that rank of AB = rank of B.
Coroll ary 2. If B be non-sin gular, rank of AB= rank of A.
Crnl
158 HIGHER ALGEBRA VECTOR SPACES 159

Theorem 4.9.7. Let A and B be m x n matrices over a field F. Then Since Q- 1 is non-singular and the rank of T is r, the column rank of
rank of (A+ B) :s; rank of A+ rank of B. TQ- 1 is r. That is, the rank of TQ- 1 is r . ,
Proof Let a:1 , 0:2 , • • • . O:m be the row vectors of A; {31 , f3.i , . ... Bm be the Thus A is the product of two matrices p- l S and TQ- 1 , each of rank
row vectors of B. Then the row vectors of A + B are o: 1 + {31 . 02 + r . This completes the proof.
/3,i , .. . ' O:m, + /3m- .
The vectors 01, 02, .. . , o:m generate R(A), the row space of A; the . Worked Examples.
vectors /31, /h , • .. , /3m generate R(B), the row space 6f B and the vectors• 1. Determine the row rank and the column rank of the matrix A and

0~ i n
0:1 + ,81 , 0:2 + /32 , • • •, O:m + 0m generate R(A+ B), the row space of A+B. verify that the row rank of A = column rank of A, where
Because R( .4) + R(B) = {u + v : u E R(A), v E R(B)}, the vectors
C\'.1 + ,8 1, a:2 + /32, .. • ,Om+ /3m lie in the subspace R(A) + R(B). .
A~
But R(A + B) is the smallest subspace containing the vectors 01
+
/31 , a:2 + /32, .. . , om+ /Jm - So R(A + B) C R(A) + R(B) . Let us apply elementary row operations on A to reduce it to a row
Hence dim R(A + B) :s; dim [R(A) + R(B)]
As R(A). R(B) are subspccs of a finitldimcnsiona.l vector space pm,
dim (R(A) + R(B)]= dim R(A)+ dim R(B)- dim (R(A) n R(B)] .
... (i)

l
I
A~u ; !
echelon matrix.

1 4
~3 ) R2:-~R1
R3-2R1
( OOl -i
-1
~0 -~ )
-9

Therefore dim (R(A) + R(B)] ::5 dim R(A)+ dim R(B)


Frnm (i) and (ii) it follows that dim R(A + B) :s; dim R(A)+ dim
··: (ii)
l
I
~ ( ~0 -1
i R~+R 2
~0 (

0 0 0
~)
-9
-~ )

0
Ri_::_R
2
= R, say. ~ ~ ~
R(B) , i.e .. row rank of A+ B ::5 row rank of A+ row rank of B . l Risa row echelon matrix. The non-zero row vectors of Rare (1 ,0 ,2 ,-
H e nce rank of (A+ B) ::5 rank of A+ rank of B .
!
3), (0,1,0,9). These form a basis of the row space of A . Therefore the
1 row rank of .4. = 2.
T h is completes the proof. 1
l1 To determi.ne the column rank of A let us apply elementary row op-
ii
Theorem 4.9.8. Factorisation theorem erations on the matrix At .
An m x n matrix of rank r can be expressed as the product of two
m a trices , e n.ch of rank r .
Proof. Let A be an m x n matrix of rank r . hen there exist non-singular
m a trices P and Q of order m and n respectively such that P AQ =
Ir Or.·11 - r ) = R , say.
( O rn - r .r O rn - r, n-r
At =

- R '.J (
(
i ; i)
4
3

~0
6
9

2
2
6
~

1 )
( ; ; i)
4
3

R1 - 2R.i
6
9
2
6

( ~0
R.i - 2R1
R3 - 4R1
R4 - 3R1

001
u -~)
= B,
2
-1
-2
3

say.
-2
3

- -2
l -~ Rs + 2R2
0 3 R4 - 3R1 O O
R i::i the fully reduced normal form of A . R can be expressed as the
B is a row eche lon matrix. The non-ze ro row vectors o f B are (1,0 ,
product ST where S = ( 0
Ir ) , an m x r matrix of rank r and -1) . (0 ,1,1) . These form a basis of the row space of .4.t, i.e. , of the column
T11-r.r
T = Ur , Or.n -r) , an r x n matrix of rank r . s pa ce of A a nd consequently the column rank of A = 2 .

Since P a n d Q a r c no n-singular, p - l and Q- 1 both exist and are Therefore the row rank o f A. = t he co lumn rank of rL
non-singular . PA Q = ST =;, A = (P- 1 S ) (TQ- 1 ) . 2. Examine linear dep ende ri.ce of t h e set o f vect o rs {(l, -1. 2, 4),
Since p - is non-singu lar and the r a nk of S is r, the row rank of
1 (2.-1.5 , 7).(-1 , 3 , 1 , -2)} in JR. 4 .
p- 1 Sis r . That is, t h e rank o f p - i s is r. Leto= (1.-1 , 2 , 4). 6 = (2 . -1. 5. 7) , 1' = ( -1.3.1. - 2) .
1i VECTOR SPACES 161
160 HIGHER ALGEBRA I
Let us consider the matrix A whose row vectors a,re o:, /3, 'Y and apply
elementary row operations on A to reduce it to a row echelon matrix.
p-•s =( ~ 1
1
1
~
1
).( ~ D=(
\ 0
1
1
2 n
A= ( ~ =i ; ~) 2
R:a- R
1
(
1
0 -i i _1 ) TQ-1 =( ~
A=( i
n-u 0
1
0
1
0
-~) = (
1
0
0
1
-1
2 )-
R 3 +Ri

DU
-1 3 1 -2 0 2 3 2
0

R1±..R
2

R:s- 2 R 2
( ~0 ~0 i -i )
1 4
1;;2-_ 3 (
3
:_
,..,
~0 ~0 g1 =~ )
4
= R, say.
Hence 1
-1
2 ).

Risa row echelon matrix row equivalent to A. There are -3 non-zero Exercise s 9
rows in R. So the row rank of R, and consequentl y the row rank of A is
3. 1. Determine the row rank of the matrix having each set as the row vectors
Therefore { o, /3, 'Y} generates a vector space of dimension 3 and hence and hence examine the linear dependence of the set.
the set {a, /3, 'Y} is linearly independen t. (i) {( 4, 2, 5), (3, 0, 1), (5, 4, 9)} ,

(111)
(ii) {(-2 , 0 , 0, 3), (1, 5, 3, O), (3, 2, 1, 6), (3, 5, 3, -3)},
(iii) {(4, 1, 3, 2), (2, 4, 4, 3) , (1 , 2, 0, 0), (0, 1, 1, 1), (1, 3, 1, l)}.
3. Show that the rank of the matrix A ::: 1 2 3 is 2. Express A

(;) u! ;). oi ! n- u~ 1n
2. Find a basis for the row space of the matrix.
2 1 0
as the product of two matrices, each of rank 2.
Let us apply elementary operations on A to reduce it to the fully
reduced normal form. (n) (;ii)

1) (1 01) (1 00)
0
1 3. Find a basis for the colwnn space of the matrix.
2 R1.:::_R2 0 1 2 C3.:::...C1 0 1 0

n
1
-2
mU ( n
2
~
Ra+R:a Q Q Q Ca-2C:a 0 0 Q 3 1 1 1 1

=( h
-1
02 ,1 ) = R, say. R is
the fully reduced normal form of A .
3
1 (
-1)
0
1
, (;;} 0
1
6
1
(iii) 1
2
2
1
2
1
01,2 01,1
R = Ea2(l)E12 (-l)Ea1(-2 )~1(-l)A{E 31(-l)P{E a•2(-2)}t = FAQ, 4. A= ( 5) 1
3
2
0 ~ . Examine if (1, 1, 1), (1, -1, 1) are in
where P = Ea2(l)E12 (-l)Ea1(-2 )E21(-l), and -1 4
Q = {Ea1(-l)}t {Ea2(-2)} t. (i) the row space of A, (ii) the column space of A .
5. If A be a singular matrix prove that
R can also be expressed as the product ST, where S = ( 2
1,2
) c5 , a (i) the row vectors of A a.re linearly dependent;
3 x 2 matrix of rank 2 and T = (I2, 02,1), a 2 x 3 matrix of rank 2. (ii) the column vectors of A are linearly dependent .

P=(-i -~ ~)-P- =(!; 1~).


6. If A be a non singular matrix prove that
1 (i) the row vectors of A are linearly independent ;
-3 1 1 2 1 (ii) the column vectors of A are linearly independent .

Q=u ! =n q-•=u ! -n 1. A is an m x r matrix and Bis an r x n matrix.


(i) If the rank of the matrix AB is m, prove that rank of A is m ;
(ii) if the rank of the matrix AB is n, prove that rank of B is n .
PAQ = ST gives A = (P- 1 S )(TQ- 1 ) .
162 HIGHER ALGEBRA VECTOR SPACES
163

8. If A _1;!.e a rectangula r matrix, prove that either its row vectors or its column 3. X l + 2x 1 - X3 = 0
. vectors or both the sets are linearly dependent . 2x1 + x2 - 2xa = O.
9, If B is a non-null m x 1 matrix and C is a non-null 1 x n matrix prove that (I, 0, I) is a solution of the system. (2, 0, 2), (3, 0, 3) are also
the rank of BC is 1. solutions. In fact k(l, 0, I) is a solution for each real number k. Thus the
10. If au m x n matrix A be of rank 1 prove that A can be expressed the-. system has many solutions.
prod~ct BC, where B is a non-null m x 1 matrix and C is a non-null 1 x n
n1atnx. Matrix represen tation.

~ ~ ~
( ..,:.n~.)' B--( bt~1 )·
11. Show that the rank of A= ( ) is 2 .
1 0 1 Let A= (a,j)m,n, X =
Express A a.s the product of two matrices , each of rank 2.
Then the system (i) can be expressed as AX = B. The matrix A is
4.10. System of Linear Equation s. said to be the coefficien t matrix of the system and the matrix
A system of m linear equations in n unknowns x 1 , x2 ,
form
. ... x,.. is of the
A= ( :~~... :~: :~: :~ ) .
, also denoted by (A, b), is said
a11X1 + + · · · + '11nXn
a12X2 b1 am1 am2 amn bm
a21X1 + a22X2 + · · · + a2,.x,. b2 to be the augmente d matrix of the system.
The system AX = B is said to be a homogene ous system
am.1X1 + am2X2 + · · · + amnXn bm.,
(i) B = O; otherwise , a non-homo geneous system.
if
where a i;'s and bi's are clements of a field F, called tho field of scalars.
a i j 's are called coefficien ts of the system. In particular , aii 's
and b,. 's are Two systems AX = B and C X = D are said to be equivalen t systems
real (or complex) numbers when F is the field JR (or C) . .. if the augmente d matrices (A, b) and (C, d) be row equivalen t.
An ordered set (c 1, c2 , . . . , c,..) where Ci E F, is said to be a solu~ion Theorem 4.10.1. Let AX= Band RX= S be two equivalen t systems
of the system (i) if each equation of the system is satisfied by and o be a solution of AX= B . Then o is also a solution of RX= S.
X1 = C1 , X2 = C2, ·. · ,X-n = Cn - Proof. Let the equations of the system AX = B be
Thcrcfor e a solution of the system can be considere d as an n-tuple
vector of V-n(F). In particular , if the field of scalars be JR, a solution of Ji - a11x1 + a12X2 + · · · + a1nXn - b1 = 0
the system is a vector in JR" . h - a21x1 + a22x2 + · · · + a2nXn - b-i = 0
A ::;y::;tem of equations is said to be consisten t if it has a solution.
Otherwis e. it is said to be inconsiste nt. fm - a,.,.1x1 + am2X2 + · · · + Om.nXn - b,.,. =0
and let a= (c1, c2, . . . , en) be a solution of the system.
Exampl es.
1. X1 + 2x2 =3 Let us apply elementar y row operation R;,i on the augmente d matrix
3X 1 + X2 = 4. (A, b) of the system. Then the ith and the jth equations J. and Ii , are
( 1 . 1) j::; a solution. There is no other solution of the system. interchan ged and the others remain unchange d. Therefore (c1, c2, ... , en)
is also a solution of the new system.
2. XJ + 2x2 = 3 This implies that if R;,j(A, b) = (C, d), then (c1, c2, .. . , en) is also a
3x 1 + 6x2 = 7. solution of the system C X = D.
Thi::; !;ystem has no solution . This is not a consisten t system.
164 HIGHER ALGEBRA

Let us apply elementa ry row operatio n cRi on the augment ed matrix


7 VECTOR SPACES 165

~ ( ~ ~ -i ~ )-
0 0 3
(A. , b) of the system. Then the ith equation fi, is multiplie d by c(=/ O)
and the other equation s remain unchang ed. Therefor e ( c1 , c2 , . .. , en) is o o 1 o· ,
R1_-=.R3
· -R 2+RaQ
( ~ 1
0
0
f
1
0
also a solution of the new system. Hence the system is equivale nt to x1 = 3
This implies that if cRi (A, b) = (C, d), then ( c:1, c2 , .. . , en) is also a X2 = 1
solution of the system C X =
D. X3 = 0
Let us apply elementa ry row operatio n Ri + cRj on the augment ed and therefore the solution is (3, 1, O).
matrix (A. b) of the system. Then the ith equation Ji is replaced by Ji+ Note. The coefficie nt mat.rix A is row equivale nt to the identity mat~ix
c/j and the other equation s remain unchang ed. Therefor e (c , c , . . . , en) J~ and so it is non-sing ular. This also suggests that the system admits
1 2
is also a solution of the new system.
of a unique solution.
This implies that· if Ri + cRj(A , b) = (C,d), then (c 1 , c , .. . ,Cn) is
2
also a solution of the system CX = D. 2. Solve the system of equation s
x1 + 3x2 + x2 = 0
Since (R, s) can be obtained from (A, b) by a finite number of elemen-
tary row operatio ns of the above types, a solution of the system AX = B
2X1 - X2 + X3 =
0.
This is a non-hom ogeneou s system. The coefficie nt matrix of the
is also a solution of the system RX= S.
Corolla ry. If one of the two equivale °' systems be inconsis tent the
system is A = ( ~ _~ i) and the augmen ted matrix is A =
other is also so. '
To examine the solvabili ty of the system AX = B, or to determin e the
.
( ~ -1 i g).
solutions (or solution) of the system, when it is consisten t, the obvious· Let us apply elementa ry row operatio ns on A tq reduce it to a row
procedur e is to apply such elementa ry row operatio ns on the augment ed reduced echelon matrix.
matrix (A, b) as will reduce it to a row reduced echelon matrix. 1 3 1 0 ) R2~R1 ( 1
. . I' 3 1
( 2 -1 1 0 0 -7 -1
Worked Examp les.
1. Solve the system of equation s
X1+ X2 = 4
-~2 ( ~ ~ i g) R1-=~2 ( ~ ~ t
The given system is equivale nt to
X2 - X3 = 1 + !x3 = 0

:Trr~rr::,h::::::.::n~:::i~. t:
Xt
2x1 + x2 + 4xa = 7.
X2 + ~X3 = 0.
This is R.
Assignin g to x 3 an arbitrary real number c, we have the solution
system is A 0~
:c1= - 74 c , X2 = - 71 c,, X 3 -- C, .
-½ ,

u n
Therefor e the solution is ( - ~c, -,}c, c), i.e. , c ( - ~, 1), where c is
1 0 an arbitrary real number. The solution can be also equivale ntly expresse d
1 - 1 as k(4, 1, -7), where k is an arbitrary real number.
1 4 Note 1. Instead of consider ing the augment ed matrix we can consider
1

u_: -!
Let us apply elementa ry row operatio ns on A to reduce it to a. row only the coefficie nt matrix A in the case of a homogen eous systei;n AX =
reduced echelon matrix. O, since the last column of the augmented matrix is the zero column and

A n.~ •• -1
3
1

n this column remains unchang ed under elementa ry row operatio ns.


Note 2 .. Since k is an arbitrai"y real number, the number of solution s of
the system is infinite. o = (4, 1, - 7) is a solution and all solution s are of
. +.,.....,.

·1·
.!.
r
ii

166 HIGHER ALGE?RA VECTOR SPACES 167

~o
1

the form ko:, where k E lR. So the solutions form a subspace of JR 3 and
the dimension of this subspace is ! .
i
J
2
1
-1
1
3 _ 4
10) R 1 -2R~
- ( -,
1
0
0

1 3
5
2
4
Ij -3 -1 12
Ra+3R2
0 0 0 0
\'
3. Solve, if possible, the system of equations
X1 + 2x2 - X3 = 10 The given system is equivalent to i
-xi + x2 + 2x3 = 2
2x1 +x2 - 3xa = 2.
'1
j
X1 - 2 X3 = 2
x2 + ½xa = 4.
i i
t Assigning to x 3 an arbitrary real number c, the solution is l
1
(jc+2,-½c+4,c) .
:I
This can be expressed as c(j , -½, 1) + (2, 4, 0), where c is an arbitrary
real number.

(-i 2
1
1
-1
2
-3
Let us apply elementary row operations on A to reduce it to a. row
Note 1. Since c is arbitrary, the number of solutions is infinite.
Note 2: (2, 4, 0) is a particular solution of the system. c(j -
general solution of the associated homogeneous system.
½, 1) is the
reduced echelon matrix.

(-i 2 ..... 1
1
1 -3
2 2
2
10)
R2.±_.R1
Ra-2R1
O
O
2 -1
3 1c·
-3 -1 -18
12 10)
-, )·
j
l
Homogeneous System.
We first discuss some properties of the solutions of a homogeneous
system. The system is necessarily a consistent one, since (0, 0, ... , 0) is

~( 1
0
2 -1
1
0 -3 -1
1
3 _ 4
10 )
18
R1-2R2
Ra+3R2
0
0
1
0
5

3
0
2
4
-6 J
always a solution of the system. This solution is said to be the trivial so-
lution of the system. Our main interest will be in the non-zero solutions,
if there be any, of the system.

1~T~~ 2 .
The given system is equivalent to
The solutions of a homogeneous system AX = 0 in
X1 - 2 x3 =2 1 n unknowns where A is an m x n matrix over a field F, form a subspace
~.c2 + lxa =4 of Vn(F) .
0= -6.
The last equation disallows the existence of any solution of the equiv- Proof. The system being always a consistent system has a solution which
alent system. Therefore the given system is inconsistent. is an n-tuple vector in Vn(F). Let S be the set of all solutions of the
system.
4. Solve, if possible, the system of equations -
Case 1. The zero solution is the only solut ion of the system.
X1 + 2X2 - X3 = 10 Then S = {0} and this is a subspace of Vn(F) .
-x 1 +x2 + 2xa = 2
Case 2. The system has many solutons.
2x1 + x2 - 3xa = 8. Leto:= (c1,c2, ... ,Cn) ES and c E F.
Since a is a solution of the system, ailc1 + £ii2C2 + · · · + ainCn = 0 for
This is a non-homogeneous system. Let us apply elem~ntary row i = 1,2, .. . ,m.

rt-r;r)
operations on the augmented matrix of the system to reduce 1t to a row ail (cci) + Oi2(cc2) + · · · + Oin(cen)
= c(ai'1 ci) + ai2C2 + · · · + ainCn) = 0 for i = 1, 2, ... , m .
redu(ce~ :::2~1 ( 001 3
2 -1
1 10)
12 This shows that (cc1 , cc2, . .. , ccn) is a solution of the system.
-3 -1 -12 Therefore a E S => ca ES (i)
2 1 -3 8
168 HIGHER ALGEBRA
·1 j VECTOR SPACES 169
I

Let a= (c1, c2, • • . ,Cn), /3 = (di, d2, ... , dn) ES. Then d1a 1 + d 2a2. + • • • + drar + dr+1(e11a1 + e12a2 + · · · + e1retr) +
Since a, /3 are solutions of the system, · · · + dn(en-r10:1 + en-r202 + · · · + en-rrar) = (}
auci + ai2C2 + · · · + ainCn = 0 and or, (d1 +dr+1e11 +dr+2e21 +· · ·+dnen-r1)a1 +(d2+dr+1e12+dr +2e22+
and1 + ai2d2 + · · · + aindn = 0 for =1,2, .. . ,m. • · · + dnen-r2)a2 + · · · · · · + (dr + dr+1e1r + · · · + dnen-rr) 0 r = 0.
Therefore ai1(c1 + d1) + ai2(c2 + d2) + · · · + ain(Cn + dn)
Since a 1, a 2 , .. . , Or are linearly independent,
= (ailc1 + ai2C2 + · · · + ainCn) + (and1 + ai2d2 + · · · + aindn)
= 0 for i = 1, 2, .. . , m. -dr+l en - dr+2e21 - · · · - dnen-rl
This shows that ( c1 + d 1, c2 + d2, ... , Cn + dn) is a solution of the -dr+1e12 - dr+2e22 - · · · - dnen-r2
system. Therefore a ES, /3 ES~ a+ f3 ES (ii)
From (i) and (ii) it follows that Sis a subspace of Vn(F).
This completes the proof. Therefore f; = (d1,d2 , ... , dr,dr+l, · · · ,dn)
-dr+1(e11 , e12 , . . . , e1r, -1 , 0, 0 , .. . ; 0)
Note. The subspace of solutions of the homogeneous system AX == O -dr+2(e21, e22, .. . , e2r, 0, -1, . .. , 0)
is denoted by X(A).
- · · · - dn(en-rl, en-r2, en-rr, 0, 0 , • • •, -1)
Theorem 4. uf'/.
Let AX = 0 be a homogeneou~ system of n unknowns = -dr+16 - dr+26 - · · · - dnl;n-r·
and X(A) ~;~~lution space of the system. Then This shows that any solution vector f; is a linear combination of
rank of A+ rank of X(A) = n. 6 , 6, ... , l;n-r and since these solution vectors fa , 6, . . . , l;n-r are lin-
Proof. Let rank of A = r. Then A has r independent column vectors. early independent , the rank (dimension) of the solution space is n - r.
Without loss of generality, we can assume that first r column vectors · Therefore rank of A +rank of X (A) = r + (n -r) = n and the theorem
01, 02 , ... , or of A are linearly independent. Then the remaining column is done.
vectors Or+i, .. . , On can be expressed as Note. f; 1 , {2, ... , {n-r are the basis vectors of the solution space of the
+ e12a2 + · · · + homogeneous system. So any solution can be expressed as c 16 + c 26 +
Or+l = e11a1 e1rCtr
• • • +Cn-rl;n-r, where ci's are arbitrary scalars. This is called the general
Ctr+2 = e21c:t1 + e22et2 + · · · + e2r'ar
solution of the homogeneous system.
Ctn = ~ Y - If the number of equations be less than the number of un-
for suitable scalars eii. knowns in a homogeneous system AX = 0, then the system admits of a
Equivalently, e11c:t1 + e12c:t2 + · · · + e1rO:r - O:r+l = 0 non-zero solution.
e21 a1 + e220:2 + · · · + e2rOr - Or+2 = 0 Proof. Let the order of A be m x n. Then m < n and rank of A < n.
As rank of A + rank of X(A) = n, we have rank of X(A) > O and
= 0. this proves that there is a non-zero solution of the system.
The relations show that
6 = (en, e12 , .. . , e1r, -1, 0,0..... ., 0), Theorem 4.10.4. The homogeneous system AX = 0 containing n
{2 = (e21,e221•--·•, ~2r,O,-l,O, ... ,0), equations in n unknowns has a non-zero solution if and only if rank of
A<n.
{n-r = (en-rl,en-r2,· · ·•en-rr,0,0, ... ,-1) Proof. Let (c1, c2, . . . , en) be a ndn-zero solution of the system.
are solutions of the system.
LetA = (aij) - Then a11c1 + a12c2 + · · · + a1nCn = 0
But {1 , {2, . .. , {n-r are linearly independent, because a21C1, + a22c2 + · · · + a2nCn = 0
C1{i + C2{2 + · · · + Cn-r{n-r = 0 implies C1 = C2 = ' · · = Cn-r = 0.
Let { = (d1 , d2 , ... , dr , . .. , dn) be any solution of the system. = 0.
170 HIGHER ALGEBRA VECTOR SPACES 171

~ince ( c1 , c2 , . • . , Cn) is non-zero, at least one of. the components, sa.y X1 + b1r+1Xr+l+ b1r+2Xr+2 + · · · + b1nXn 0
Cj, 1s non-zero. j x2+ b2r+1Xr+l + b2r+2Xr+2 + · · · + b2nXn 0
a11 a12
i
Cjalj a1n
det A= a21 a22 Cja2j a2n
Xr + brr+1Xr+l + brr+2Xr+2 + ··· + brnXn = 0.
Cj-
= A solution can be obtained by choosing arbitrary scalars for
an1 an2 Cjanj ann Xr+l , Xr+2·· ·· , Xn-
a11 a12 C1a11 + C2a12 + · · · + Cjaij + · · · + Cnaln- a1n j Let Xr+l = -c1, Xr+2 = -c2 , • • • , Xn = -Cn-r·
a21 a22 C1a21 + C2a22 + · · · + Cja2j + · · · + Cna2n a2n 1., Then the general solution of the system is
i
an1 an2 + c2an2 + · · · + Cjanj + · · · + CnUnn
C1Un1 ann
1
j (c 1 b1r+1 + c2b1r+2 + · · · + Cn-rb1n, C1b2r+l + c2b2r+1 + · · · +
Cn.:...rb2n , ... , c1brr+l + c2brr+2 + · · · + Cn-rbrn, -C1, -c2, · · ·, -Cn-r)
[c; = c1C1 + c2C2 + · · · + Cj-lCj-1 + cj + Cj+1Cj+1 + · · · + CnCn]
= C1 (b1r+l, b2r+l, • • •, brr+l, -1, 0, 0, · · ·, 0)
= 0, since the jth column is the zero column. +c2(b1r+2, b2r+2, ... , brr+2, 0, -1, 0, • • • , O)
Since Cj =I=- 0, <let A = 0 and this implies that rank of A < n. +
+cn-r(bin, b2n, .. . , brn, 0, 0, 0, ... , -1), where C1, C2, . . . , Cn-r are ar-

Conversely, let rank of A < n. Let X(A) be the solution space of the bitrary scalars.
homogeneous system. Then rank of X(A) + rank of A= n. Since rank
of A < n, it follows that rank of X(A) > 0. This proves that there is a Worked Example (continued).
non-zero solution of the system. l 5. Solve the system of equations
l x+ 2y +z- 3w 0
Note. The number of solutions in this case is infinite. l 2x+4y+3z+w = 0
I 3x + 6y + 4z - 2w = 0.
Method of solution of a homogeneous system
Let the system of equations be AX = 0. The matrix A can be 1 This is a homogeneous system. Let A be the coefficient matrix of the

reduced to a row-echelon matrix R by elementary row operations on A. l system.


Let us apply elementary row operations on A to reduce it to a row
If the rank of A be r, then R has r non-zero rows. The leading 1 's of the !
reduced echelon matrix.
l
non-zero rows appear in columns k1, k2, . .. , kr where k1 < k2 < · · · <
~ ~ ~ -10)
0
2 0
kr. By suitable interchange of columns (i.e., by suitably renaming the A R2~R1 ( 0 1 7 .
unknowns) , R takes the form R3-3R1 O O l 0 0 0

1 0 0 bir+l b1r+2 bin The given system is equivalent to


0 1 0 b2r+l b2r+2 b2n x+2y- l0w = 0
z +1w = 0.
0 0 1 brr+l brr+2 brn Choosing y = c, w = d, where c, d are arbitrary real numbers; the
0 0 0 0 0 0 solution is (-2c + lOd, c, -7d, d) = c(-2, 1, 0, 0) + d(lO, 0, -7, 1).

0 0 0 0 0 0 Note. The solutions form a vector space generated by the vectors


(-2, 1, 0, 0) and (10, 0, -7, 1) which are linearly independent. So the rank
Therefore, by suitable adjustment of renaming the unknowns, if nec- of the solution space is 2. The rank of the matrix A is 2. Therefore rank
essary, the equivalent system is of A+ rank of X(A) = 4 (the number of unknowns).
172 HIGHER ALGEBRA VECTOR SPACES
173

Non-hom ogeneou s system. Let Y E T . Then AY = 0.


We have seen that a non-homo geneous system may not have a solu-
l A(Y + Xo) = AY + AXo = B. This shows that Y + Xo ES.
tion.
ll Let Z ES. Then AZ= Band also AXo = B.
First of all; we discuss the solvability of a non-homo geneous system.
l
-...-~l
A(Z - Xo) = AZ - AXo = 0. This shows that Z - Xo ET.
Theorem 4.10.5. A necessary and sufficient condition for a non.- ·l Thus YET=;. Y + Xo ES, and Z ES=;. Z- Xo ET.
homogene ous system AX = B to be consisten t is l
This completes the proof.
_ rank of A = rank of A, Corollary . If the non-homo geneous system AX = B be consil:Jten t, the
where A is the augmente d matrix of the system.
system possesses only one solution or infinitely many solutions according
Proof. Let 01, 02, ... , On be the column vectors of A and o 1 a 2 0 {3 a.<; the associated homogene ous system possesses only the
zero solution
be those of the augmente d matrix (A, b). ' ,. · ·' n, or inffnitely many solutions.
Suppose that there exists a solution (c 1 , c2 , •.. , en) of the system.
Existenc e and number of solutions of the non-hom ogeneou s sys-
Then c1a1 + c202 + · · · + CnOn = f3 ••• •.. (i)
tem AX = B, where A is an m x n matrix.
Let S= {01,02, ... ,an}. T= {01,02, ... ,on,/3}.
Since Sc T, we have L(S) c L(T). Case 1. m =n.
The system is consistent if and only if rank of A = rank of A. For a
Using {i), we can say that each elem;nt of Tis a linear combinati on
- of the vectors of S. Therefore L(T) c L(S). consistent system, two cases arise.
Conse9ue ntly, L(S) = L(T), i.e., the column space of A= the column - Subcase (i) - Rank of A = rank of ii= n.
space of A. Here A is non singular and so A - 1 exists.
Therefore the column rank of A= the column rank of A. The syst_e m possesses the unique solution X = A- 1 B .
Conseque ntly, rank of A = rank of .A. Subcase (ii). Rank of A = rank of ii< n.
Conversel11; let rank of A = rank of A = r. The associated homogene ous system AX = 0 has infinitely many
Then r columns of A are linearly independ ent. Without loss of gener- solutions and therefore the system AX = B possesses infinitely many
ality, we can assume .01, 02, ••. , Or are linearly independe nt. Since rank solutions.
of A = r, ~hese column vectors are also linearly independ ent column
Case 2. m < n.
vectors of A and /3 is a linear combinat ion of o 1 , 02, ... , Or. The system is consistent if and only if rank of A = rank of A $ m .
Hence /3 = d10:1 + d2et2 + · · · + d.ror for some scalars d •. If consistent , rank of A = rank of ii < .n.
It follows that {d1;d2, ... , d.r, 0, 0, ... , 0) is a solution of the system. In the consistent case the homogene ous system AX = 0 bas infinitely
Hence the system is consisten t. many solutions and therefore the system AX = B possesses infinitely
many solutions.
This completes the proof.
Case 3. m > n.
Note. The solutions of a consisten t non-homo geneous system AX =B The system is consistent if and only if rank of A
do not form a vector space bf::cause (0, 0, ... , 0) is not a solution. = rank of ii $ n.
For a consistent system, two cases arise.
Theorem 4.10.6. If the non-homo geneous system AX = B possesses a Subcase (i). Rank of A = rank of ii= n.
solution Xo then all solutions of the system are obtained by adding Xo Let X(A) be the solution space of the homogene ous system AX= 0.
to the general solution of the associate d homogene ous system AX = O. Then rank of X(A) + ran\( of A = n gives rank of X(A) = 0. The
Proof. Let S be the set of all solutions of the system AX = 8 and T be system AX = 0 possesses only the zero solution and therefore the system
the set of all solutions of the associated homogene ous system AX = 0 - AX= B possesses only one solution.

HA-33
174 HIGHER ALGEBRA VECTOR SPACES 175

1
Subcase (ii). Rank of A = rank of A< n . Worked Examples (continued).
The ass?ciated homogeneous system AX = O possesses infinitely 1 6. Solve, if possible.
many solut_10ns and therefore the system AX = B possesses infinitel f
+ 2y + Z - 3W = {ii) X + 2y + Z - 3w
many solutions. y
! (i) X 1 = 1
2x + 4y + 3z + w = 3 2x+4y+3z+w = 3
3x + 6y + 4z - 2w = 5, 3x + 6y + 4z - 2w = 4.
Method of solution of a non-homogeneou s system.
(i) This is a non-homogeneous system.
Let the system of equations be AX = B where A is an m x n matrix

0
and let rank of A= rank of A= r . 2 1 -3
The coefficient matrix of the system is A = 4 3 1 ) and
L~t ~ be_ reduced by elementary row operations to the row echelon . 6 4 -2
matnx R which by suitable interchange of columns takes the form
1
0
0
1
0
0
b1r+l
b2r+l
b1r+2
b2r+2
bin
b2n
di
d2
the augmented matrix is A
.
=
C 2
3
2
4
6
1
3
4
-3
1
-2 5
1
3
)
0 0
Let us apply elementary row operations on A .
1 dr
-r ~ )
brr+l brr+2 brn

~0 0~ 1~ ~ 0~ 0~ ~
0 0 0 0 0 0 0
A R2=.=R1 ( R1 -R2 ( - ~l ) .
0 0 0 0 0 0
R3-3R1 7 2 R3-R2 0 0
0
Therefore, by suitable adjustment of renaming the unknowns, if nec- Here rank of A = 3 and rank of A = 2. Since rank of A ,:/ rank of A,
essary, the equivalent system is the system is inconsistent.

X1 + b1r+1Xr+l + b1r+2Xr+2 + · · · + b1nXn = d1 (ii) This is a n(oriiho~o~ene~~) system. The coefficient matrix of the
X2 + b2r+1Xr+l + b2r+2Xr+2 + · · · + b2nXn = d2
system is A = 2 4 3 1
Xr + brr+lXr+l + brr+2Xr+2 + · · · + brnXn dr-
3 6 4 -2
A solution can be obtained by choosing arbitrary scalars for 1 2 1 -3
Xr+l - Xr+2 , - - - - Xn- Let Xr+l = -C1,Xr+2 = -C2, · · · ,Xn = -Cn-r·

Then the general solution of the system is


(di + c1.b1r+l + c2b1r+2 + ... + Cn-rbln,
and the augmented matrix is A =

Let us apply elementary row operations on A to reduce it to a row


reduced echelon matrix.
( ~
4
6
3
4
1
-2 D·
di + C1b2r+l + C2b2r+2 + · · · + Cn-rf>in, ... ,
dr

=
+ C1 brr+l

( d1 . d2 ,
+ C2brr+2 + · · · + C,,-rbrn, -Ci, -C2, ... , -C,._r)
. . ., dr , 0, 0 ... . , 0)
+ c 1 (b 1r+ l , 02r+l , - - •, brr+l,
+ c2(b1r+2 , h 2r +2 , • - • ,

+
-1, 0, 0, . .. , 0)
brr+2 , 0 , -1, 0, . . . , 0)
A R2:-~R1
R3-3R1

Here rank of A
(~
Q
~ ~
Q

=
l
-;
7
A = 2 . So the system is consistent.
rank of
The given system is equivalent to x + 2y - 10w = O
2
0
0
0
1
0
-10
7
0 n
+ cn - r ( b 1.,. . l;-2.,,, ... , b r n , O, O, O, .. . , - l ) , where C1,C2, .. . , cn-r are ar- z + 1w = 1.
bi trn. ry s c,11.ars.
Choosing y = c, w = d, where c , d are arbitrary real numbers, the
N ote. The sol ution (di . ch ,- .. , dr , 0,0 , . . .• O) is a particular solution of solution is (-2c + 10d, c , 1 - 7d, d)
t lw ~Yl>~!rn, 0btained by t &.king Xr + 1 = Xr+2 = · · · = Xn = 0. = (0, 0, 1, 0) + c(-2, 1, 0 , 0) + d{lO , 0, -7, 1).
176 HIGHER ALGEBRA VECTOR SPACES 177

7. Solve the system of equations 3x


X + 2y+
+ y + 2z
Z 1
3 in integers. A - ! ~ -t )
( and the augmente d matrix of the system is

u~ -t :. )
X + 7y + 2z 1

_
This is a non-homo geneous system.
1 2 1
The augmente d matrix of the system is

Let us apply elementar y row operation s on


reduced echelbn matrix.
A- ( 3
1
.1
7
2
2 D·
A to reduce it to a row
A-
Let us apply elementar y row operation s on A.

~ ; _ ~1 tb2 ) R2.::_,R1 ( ~0 2~ _; !- 1 )
( 7 R3-5R1 -4 b2 - 5

H)
2 1 2 5
- R2-3R1 ( l
A - 0 -5 -1 01 ) -!R:i ( .o1
Rs-R1 O
5
--+ 1 0 3 -b+2
1 0 0 5 R1-R2
010 1 -2 b- 1
0
1
f
s
R3-2R2 ( 0 0 b2 - 2b- 3
If b2 _ 2b _ 3 = o, then rank of A= rank of A= 2 and therefore the
0 0
system is consistent .
The given system is equivalen t to If b2 - 2b - 3 :/= 0, then rank of A = 3, rank of A = 2 and since rank
x+ fz = 1 of A :/= rank of A, the system is inconsiste nt.
y + 5 z = 0.
Therefore if a= 1, b :/= -1, 3, the system has no solution; and
Choosing z = c, the solution is (1 - c, f -½c,c), where c E JR. if a= I, b = -1 or if ,.z. = 1, b = 3, the system has many solutions.
Since the solutions are to be in integers, c = 5k where k is an arbitrary
integer . Hence the solution is (1 - 3k, -k, 5k) = (1, 0, 0) + k(-3, -1, 5),
k being an integer. Exercis es 10
1. Solve the equations
8. Determin e the condition s for which the system of equations
(i) + y + 3z
X 0 (ii) x+y-z- w 0

x+y+z 1
2x+y+z = 0 x-y+z- w 0.
3x+2y+4 z 0,
x+2y- z b 2. Find the solution of the system of equations in rational numbers.
5x+ 7y+az b2 (i) 2X + 3y + Z 0, (ii) X + 4y + Z 0
4x +y- z 0.
admits of (i) only one solution, (ii) no solution , (iii) many solutions.
3. Find the solution of the system of equations in integers.
The system has a unique solution if the coefficien t determina nt be
(i) X + 2y + Z O (ii) · X - 3y + 4z
non-zero. 0
3x+y+2 z 0, 3x+y-2 z 0.
1 1 1
The coefficien t determin ant= 1 2 -1 = a - l. 4. Find·a·linea.r·horilogene€>US equation in x1,x2,xa,x 4 such that x1 = l,x2 =
1;x3 = l,x 4 = l; x 1 = l,x2 = -1,xa = -l,x4 .= 1 and X1 = 2,x2 = 3,xa =
1
5 7 a
If a - 1 :/= 0, i.e., if a :/= 1, the system has only one solution. 3, x 4 =2 are solutions of the equation. ·
5. Find a linear homogeneo us -system of two independe nt equations in x1, x2,
If a =
1, the system has either no solution or many solutions.
x 3, x such that x 1 = l,x 2 = 2,x3 = 3,x4 = 4 and x1 = 2,x~ = 3,xa = 4,x4 =
4
When a = 1, the coefficien t matrix of the system is 1 are solutions of the system.
.
178 VECTOR SPACES 179
HIGHER ALGEBRA

6. Examine the solvability of the system of equations and solve, if possible. 4.11. Application to Geometry.
(i) x + y + z I 4.11.1. Intersection of two lines in Euclidean plane.
( ii) x+y+z = 1
2x + y + 2z = 1 2x + y + 2z = 2 Let the Jines be a11x1 + a12x2 = bi,
X + 2y + 3z = 0, 3x + 2y + 3z = 5. a21X1 + a22x2 = b2.
7. For what values of a the system of equations is consistent? Solve completely
in each consistent case. Let A =( a11
a 21
a12 )
a22
B = ( aua21"
a12
a22
b1) .
b2 .
(i) x-y+z 1 (ii)
X + 2y + 4z a
x+y+z
2x + 3y - z
1 Let°'•= (ai 1 , ai2), i = 1, 2, /3, = (au, ai2, bi), i = 1, 2.
a+l
X +4y+ 6z 2x +y + 5z a2 + 1. . In order to investigate the nature of solutions of the given non homo-
geneous system of equations, we are to consider the following cases.
8. For what values of k the system of equations has a non-trivial solution?
Solve in each case. Case 1. Rank of A = 2.
(i) x+y+z kx There is a unique solution of the system since det A =/:- O. Therefore,
(ii) x + 2y + 3z kx
x+y+z = ky 2x+y+3z = ky
the lines intersect in a point.
x+y+z kz, 2x + 3y + z kz. Case 2. Rank of A= 1, rank of B = 2.
9. Determine the conditions for which the system of equations has (a) only The system is inconsistent and therefore there is no solution of the
one solution, (b) no solution, (c) many solutions. system. The lines are parallel.
(i) X + 2y +z 1 (ii) x+y+z Case 3. Rank of A = 1, rank of B = l.
b
2x +y+3z b 2x+y+3z = b+l The system is consistent and there are infinite number of solutiollB
x + ay + 3z b + 1, 5x+2y+az b2. since rank of A < 2.
Since rank of B = 1, /31, /h are linearly dependent. Therefore /32 =
a c{3 1 for some non-zero real number c.
10. Solv e the system of equations b and use the solution to
X1 + X2 C
This shows that the two equations are identical and therefore ·the lines
0 1 1 are coincident.
fi nd t h e inverse of the matrix A=
(
1
1
0
1
1
0 )· Examples.

+ X2 + X3
1. The lines 2x + 3y = 3 and x + 2y = 1 intersect in a point, since the
11. So lve the s ystem of equations
-X1
X1
X1
- X2 + X3
+ X2 - X3
=
=
=
a
b and use the rank of the matrix ( i ;) is 2.

-!
C

I 2. The lines ~ + 3y = 3 and 4x + 6y = 7 are parallel, since the _rank of


s oiution to fi nd t h e inverse of the matrix A= ( -; 1
-1
)- the matrix ( ~ : ) is 1 and the rank of ( ! ! ~) is 2.
12. Solve the syst e m s AX= E1,AX = E2,AX = E3, 3. The lines 2x + 3y = 3 and 6x + 9y = 9 are identical, since the rank of

l) the matrix ( ! :) = the rank of ( ~ ! !) = 1.


wh ere E i = (g . Hence find A- 1 .
4.11.2. Intersection of two planes in Euclidean 3-space.

~
1
Let the planes be a11x1 + a12x2a13x3 = b1,
(i ) A = ( 1
a21X1 + a22x2a23x3 = ½.
180 HIGHER ALGEBRA VECTOR SPACES 181

Let A =( a11 a12 a13 ) B - ( au a12 a13 bi ) In this case, only one of 01 , 02 , ~ 3 , say a 1 is independ ent. Therefore
a21 a22 a23 ' - a21 a22 a23 b2 ' 0:2 = ca1, 0:3 = da1, c, d being non-zero real numbers.
a, = (an, a,2, Oi3), i = 1, 2,· (3i - (a il, a,2,
· ai3, bi ) , i· ==. 1 , 2 • The planes are perpendic ular to a common direction , the direction
The following cases come up. ·;ector being (au, a12, a13).

Case 1. Rank of A= 2, rank of B = 2. Sub case (i). Rank of A= 1, rank of B 1. =


In this case, {h = c/31, /33 = d/31,
. In ~his c~, the soluti.?n space of the homogene ous system AX
1s of _dimensio n 1. Let (li, l2, l3) be ·the independe nt generator of the
=o This shows that the equations are identical and therefore the planes
solution space. Then 0i1li + a,2h + a;3 l3 = 0, i = 1, 2. n.re coinciden t.
This shows that the planes · are parallel to the direction (l 1 l l '.\r Sub case (ii). Rank of A= 1, rank of B = 2.
S'1nce r ank o f A = 21
' Leta,. The system of equations is inconsiste nt and therefore the planes have
rank of B, the system admits of a. solution.
(p1,P2,P3 ) be a solution. no common point.
Then the general solution of the system is r (h, h, la) + (p1, p 2 , p ) Since rank of B = 2, only two of /31, /h, {33, say /31, /32 are independ ent.
3 · Therefore the planes (i) and (ii) are parallel, by case 2 (i) of 4.11.2.
= (hr+ Pl, hr+ P2, lar + Pa), where r is an arbitrary real number.
Therefore the planes intersect a.long the line Now /33 = c1P1 + c2/32 where c1, c2 E JR and (c1, c2) =/= (0, 0).
~ = =2,P2 = -z;-- = r. If c1 = 0, /33 = c2f32, Therefore , the planes (ii) and (iii) are identical.
~ :,;3-Jf3
2
If c2 = 0, /33 = c1f31, Therefore , the planes (i) and (iii) are identical,
Case 2. Rank of A = 1. If c1 =I= 0, c2 =I= 0, the three planes are parallel.
. In this _case, 01, 02 are linearly dependen t. Then a 2 = ca1 , where c
Therefore , in this subcase, the planes are parallel, two of which may
1s a. non-zero real number. This shows that the planes are perpendic ular
to the direction (a11,a12,a 1a). be coinciden t.

Sub case (i). Rank of A= 1, rank of B = 2. Case 2. Rank of A = 2.


In this case, the system is inconsiste nt and therefore admits of no · In this case, · the solution space of the homogene ous system AX -
0 is of dimension 1. Let (h, l2, l3) be the independ ent solution of the
solution. The planes are parallel.
homogene ous system.
Sub case (ii). Rank of A = 1, rank of B = 1. Then 0i1l1 + 0-i2l2 + a,ala = 0, i = 1, 2, 3.
~n thi_s case, 132 =
c/31, where c ,/: 0. This shows that the equations Therefore the planes are all parallel to the common · direction
are 1dent1cal. Therefore the planes are coinciden t.
(li, l2, l3) .
4.11.8. Intersect ion of three planes in Euclidea n 8-space. Sub case (i). Rank of A =--= 2, rank of B = 2.
The system is consistent . Let (p1, p2, p3) be a solution of the system
Let the planes be a11x1 + a12~2 + a13X3 = b1 •.. (i) The general solution is r(h, l2, l3) + (pi, P2, p 3 ), r E JR. ·
+ a22:r:2 + a23x3 = ~ ...
a21:r:1 (ii) That is, (x1,x2,xa ) = (hr.+p1, l~r+p2,l ar+p 3 ).
a31x1 + aa2:r:2 + a33X3 = ba ... (iii) Therefore the planes intersect along the line
Let A= (4i;), B = (A, b), ai == (0ii,0i2,a .a), ~-i-i- -- =,,P?
=!.=.1!.!.
, -- ~
7s =r.
f3i = (an, Oi2, a,a, b,), i == 1, 2, 3. Sub case (ii). Rank of A = 2, rank of B = 3.
The following cases come up. The system is inconsiste nt an~ therefore admits of no solution. The
Cwse 1. Rank of A= 1. . f31,l {h,
vectors . d{33 ared linearly independe nt. Two of 01 , 02 , ....
,..,3, s"'v
a..J a1, a2
are lmear y lll epen ent.
'l

VECTOR SPACES
I
182 HIGHER ALGEBRA 183 l

Therefore the planes (i) and (ii) intersect along a line parallel to the Here rank of A= 2, rank of B = 3. The solution of the homogeneous
direction (h, '2, l3), by case 1 of 4.11.2. system AX = O is c(l, 1, 1). The planes are all parallel to the direction
( 1, 1 , 1). The first and the sec~:md planes are paral~el. ;1'he third plane
Let 0:3 = ca:1 + da2 where (c, d) =I= (0, 0). intersects the other two along )mes parallel to the d1rect1on (1, 1, 1).
If c = 0, 0:3 = da:2, d =I= O; but /33 =I= d/32 • :I'herefd're the planes (ii) and
(iii) a.re parallel. 7. Let the planes be 5x1 + 3x2 + 7x3 = 4
If d -= 0, a:a = co2, c =I= 0; but /33 =/: c/32 • Therefore the planes (i) and 2x1 + x2 + 3xa = 1
(iii) a.re parallel. + 3x2 + llxa = 2.
7x1
. If c ¥: 0, d =I= 0 then 01, 03 and also a: 2 , a: 3 a.re linearly independent.'
pairs .. Th~refore the planes (i) and (iii') intersect along a line parallel to
the d1rect1on (h, h, la) and also the planes (ii) and (iii) intersect alorig a.
line parallel to the direction (h, h, l 3 ), by case 1 of 4.11..2.
Ij
l
' j
Here rank of A = 2, rank of B = 2.
The system of equations is consistent. It is equivalent to
X1 +2X3 = -1
Therefore either 1 X2 - X3 = 3.
(i) two of the planes a.re parallel and the third intersects them, or Taking x 3 = r, the general solution is (-1 - 2r, 3 + r, r).
(ii) three planes intersect in pairs along three parallel lines and the 'rherefore x 1 = -1 - 2r, x2 = 3 + r, xa = r.
planes form a prism, the axis of the prjsm being parallel to (h, h, la). Hence 'the planes intersect along the line
:i:1~1 _ xa-3 _ x;,-0 _ r
.
Case 3. Rank of A = 3. - 1 - 1 -

In this case, the system of equations admits of a unique solution, since


det A :,/: 0. The planes intersect in one point only. Exercises 11
~xamples · ( continued ).
1. Examine the nature of intersection of the triad of planes
4. Let the planes be 2x1 + 3x2 - xa = 0
(i) 2x - y +z + 2y + 4z = 7, 5x + 3y - z = O;
= 5,:z:
3x1 + 3x2 + xa = 2
(ii) X - 2y = 0, 3x + y + Z = 8, 2:i: + 3y + Z = 1;
X1 - X2 + 2x3 = 5.
(iii) x + y - z = 3, 5x + 2y + z = 1, 2x + 2y - 2z = 1;
Here det A :,/: 0 and therefore the planes intersect in one point only. (iv) 2x + y + 2z = 6, y + z = 4, 4:i: + y + 3z = 8;
The point is (4, -3, -1).
(v) 2x-y= 1,3y-2z=3,3x-z=3.
5. Let the planes be x1 + x2 - 2xa =3 2. Show that plaries bx - ay = n, cy - bz = l, az - ex= m intersect in a line if
Xi - 2x2 + X3 = 3 al+ bm +en= 0 and the direction of the line is (a, b, c).
X1 - X3 = 1. 3. For what vah,te of k the planes
Here rank of A = 2, rank of B = 3. The solution of the homogeneous (i) x - 4y + 5z = k, x - y + 2z = 3 and 2x + y + z . = O;
system AX = 0 is c(l, 1, 1), '1Y'here c is an arbitrary real number. The (ii) x + y + z = 2, 3:z: + y - 2.z = k and 2x + 4y + 7z = k + 2
planes intersect in pairs along lines which a.re parallel to the.direction (1, intersect in a. line? Find the equations of the line in that case.
1, 1). Therefore the planes form a prism, the axis of the prism is parallel
to the direction (1, 1, 1). 4. For what values of k the planes

6. Let the planes be x1+ x2 - 2xa = 3 (i) x +y+z= 2, 3x + 11 - 2z =k and 2x + 411 + 7z= k + 1;
X1 - 2x2 + X3 = 3
(ii) x + y + 1 = 0, 4:z: + y - z =k and 5:z: - y - 2z = k2
form a triangular prism ?
2x1 + 2x2 - 4.xa = 1.
'i
VECTOR SPACES 187
186 HIGHER ALGEBRA

Note. In particular, if the Euclidean space be Rn with standard inner


5. In Pn, let us define (J. g) = J~ 1 f(t)g(t)dt. for
f,g E Pn.
product, taking a= (a 1 , a 2 , . . . ,an),fJ = (b1,b2, • ·· • bn) the inequality
Then (, ) also satisfies all the conditions for a real inner product.
takes the form
Therefore the vector space Pn equipped with this real inner product (a 1b1 + a2b2 + · · · + anbn) 2 ::=; (af +a~+· ·· + a~)(bf + b~ + · · · + b~),
becomes a Euclidean space different from the space described in Ex.4. the equality holds when
Definition. Norm of a vector. either (i) <Li = 0, or bi = 0, or both ai = 0 and bi = 0 for ·i = 1, 2, . .. , n;
or (ii) ai = h-:bi for some non-zero real k, i = l. 2, ... , n.
If Cl:' be a vector in a Euclidean space V with the inner product (, ),
the norm of a, denoted by lla:11 . is dcfim~d by ! l u l l = ~ - Theoem 4.12.3. In a Euclidean space V, two vectors a, /3 are linearly
Theorem 4.12.1. Let a be a vector in a Euclidean space V and llo:11 be dependent if and only if l(a:,/3)1 = llallll/311-
its norm. Proof. Let a: , (3 be linearly dependent.
Then (i) llcal! = lcllla:11, c being a real number; If one or both of a:, /3 be null, then the equality holds.

(ii) lla:11 > 0 unless a= 0 and 11011 = 0. If a, f3 be both non-null, then a = k/3 for some non-zero real k.
2
In this case , llall = !kill.BIi and (a./3) = (k/3,/3) = k(,6,/3) = kll/311 -
Proof. (i) llcall = ✓ (ca:,ca:) = ✓c 2 (a:.a:) = icl ✓ (a:,a:) = icllla:11- Therefore l(a,/3)1 = lklll/311 2 = llallll/311 -
(ii) a: # 0 implies (a:, a:) > 0 and tMrefore llall > 0. Conversely, let l(o:, /3)1 = lla:1111/311 -
If a:= fl. then (a: . a:) = (0 . 0) = 0 and therefore lla:11 = 0. a, .Bare linearly independent implies l(a:,/3)1 < llallll/311, by case 3 of
the previous theorem.
4.12.2. Schwarz's inequality. Contrapositively, l(a,/3)1 -/:. llallll/311 implies a , {3 are linearly depen-
For a ny two vectors a.[] in a Euclidean space V, dent . But by Schwarz's inequality, l(a,/3)1 ::=; liallll/311 for all a,/3 in V.
l(o- . t:1)1 ::=; llall ll,811, Therefore l(a, /3)1 = llall II.Bil implies a, .B are linearly dependent.
the equality holds whe n o, fJ are linearly dependent. This completes the proof.
Proof. Case 1. Let one or both of a:. 3 b e null. Then both s ides being Note. a,/3 are linearly dependent implies l(a ,,6)1 = llallll/311-
zero, t he equality h o ld t:i. But (i) o. , /3 are linearly dependent may not imply (a, /3) = llall 11/311 -
Case 2. Let a. i3 b e non-null and linearly dependent . Then there exists For example, let a= (1 , 2, 3). /3 = .(-2, -4, -6);
a n on-ze ro r eal number k such that u = k:t'J. (ii) a: , /3 are linearly dependent may not imply (a,,B) = -llallll/311. For
example, let a = (1, 2 , 3), /3 = (2, 4, 6) .
Then lla:11 = if.: I11/111 and (a. /3) = (k /3 . 3 ) = k (/3. /3 ) = klli3ll 2 .
· Therefore l(a: .f-3)1 = lklllt111 2 = ll a llll.3 11- 4.12.4. Triangle inequality.
Case 3. Le t a , /3 be not linearly d ependent. Then a: - k/3 'I= 0 for all If a , /3 be any two vectors in a Euclidean space V , then
real k. Ila: + -/311 s 11°11 + II .B Ii.
T h er efore (a - k /3 , a - k /3) > 0 for all rea l k
Proof. From the properties of an inner product ,
or , (a:, a ) - 2k (a, .6 ) + k 2 (/3, /3) > 0 for all real k .
Since (a, a) , (a:, /3) . (/3 . /3 ) are all real and (/3 , f3) 'I= 0, the left-hand
Ila+ ,611 2 = (a+ /3 , a+ ,6 ) = (a:, a:)+ 2(o:, /3) + (/3, ;3)
= lio:11 2 + 2(a , f3) + 11011 2
side is a real q uad r at ic poly nomial in k and since it is positive for all
S lla:11 2 + 2lla:II 11/311 + 11 .6 11 2 , by Schwarz's inequality
Teal values of k , t he discri m ina nt of the quadratic polynomial must be 2
negat ive, for otherwise the p olynomia l would be zero for some real k . (llall + 11 /3 11) .
T h us (o ,/3) 2 -(o ,o: ) (/3 . /1) < 0, whe nce l(a . .B)I < llallll/311- Therefore lln + .611 ::=; llnll + 11 /3 11 -
This com pletes the p roof. This completes the proof.
·-,
1
188 HIGHER ALGEBRA li VECTOR SPACES 189
/
Note. Ila+ /311 = llall + ll611 implies (a. ,6) = llallll/311 and this again Exampl e 6. In a.n n x n real orthogo nal matrix, the row vectors form
implies o:, /3 are linearly depende nt . an orthonor mal set and the column vectors form another orthono rmal
Since o:, /3 are linearly depende nt may not imply (o:, .6) = llall 11/311, set in lR" with standard inner product .
Ila+ .BIi may not be equal to Hall + 11/311 if a. /3 are linearly depende nt.
For example , let o: = (1, 2, 3), f3 = (-1, -2, -3). Let A be a. real n x n ·orthogo nal matrix. Then AAt In. Let
0:1, 0:2, •.. , an be the row vectors of A.
Then the column vectors of At are 01, a2, ... , On.
Definit ions.
l The ijth element of AAt
A vector o: in a Euclidea n space V is said to be a unit vector if = the inner ·product of the ith row vector of A and the jth column
llo:11 = l. If o: be a non-zero vector in V, then u¼rra is a unit vector.
In a Euclide an space, a vector a is said to be orthogon al to a vect~r
{3 if (o:,/3) = 0. Since (o:,/3) = (.6,a), if a be orthogon al to /3 then /3 is
I
l
= (ai, aj)-
Since AAt = !.,., (ai, aj) = 0 if i
vector of At

=/= j
orthogo nal to a. In this case, o:, .6 are said to be orthogon al. j
= 1 if i = j.
The null vector 0 is orthogon al to any non-null vector o: and also it is This proves that {a 1 ,a2, ... ,on} is an orthono rmal set.
orthogo nal to itself. This follows from the property of an inner product.
In a similar manner it can be shown that the column vectors of A
4.12.5. Pythag oras theorem . form an orthonor mal set.
If a, /3 be two orthogo nal vectors in a Euclidea n space V, then Note. The row vectors of In, i.e ., the vectors t:1,
11° + /311 2 = 110:ll 2 + 11/311 2.
I orthonor mal set in ]Rn with standard inner product .
t:2, ..• , E:n form an
l
Proof. llo: + ,611 2 = (a+ /3, o: + /3) = (o:, a:)+ 2(a:, /3) + (/3, /3) j
Theore m 4.12.7. An orthogon al set of non-null vectors in a Euclidea n
= llo:11 2 + II bll 2 , since (a:, /3) = 0. ! space Vis linearly independ ent.
This complet es the proof.
\ Proof. Let {,81,/32, ... ,/Jr} be an orthogon al set of non-null vectors. Let
4.12.6. Paralle logram law. us consider the relation
If o:, /3 be any two vectors in a Euclidea n space V, then ci/31 + c2f32 + · · · + Cr/3r = 0, where c., are real numbers .
llo: + /311 2 + Ila: - /311 2 = 211all 2 + 211/311 2 • +
Then (c1,81 + c2.82 + · · · Cr.8r, /3-.) = (0, /3i) = 0 for i = 1, 2, ... , r.
Proof. Ila+ /311 2 = (a+ /3, o: + /3) = (a. a) + 2(a, /3) + (/3, /3). or, c1 (/31, /3;.) + c2(/32, /3i) + · · · + c;.(/3;., /3i) + · · · + Cr(/3r, /3;.) = 0
or, Ci(/3i,/3i) = 0, since (/3;.,/3j) = O,j =I= i
Ila - /311 2 = (a: - /3. a - /3) =(a.a) - 2(a, /3) + (/3, /3).
Since /3i is non-null , (/3i, /3;.) > 0 and therefore Ci = 0.
Therefo re Ila:+ /311 2 + Ila - /311 2 = 2(o. a)+ 2(/3, /3) = 2llall 2 + 211/311 2 .
This complet es the proof. This proves that the set {/31, /32, ... , /3r} is linearly indepen dent.
Corolla ry. An orthonor mal set of vectors in a Euclidea n space is linearly
Defi.::ii tions. independ ent.
A set of vectors {,6 1 , /32 , .. . , /3r} in a Euclidea n space is said to be
orthogon al if (/3i, {)j) = 0 wheneve r i # j. Definiti ons.

A set of vectors {,6 1 , /32, .. . , /Jr} in a Euclidea n space is said to be Let /3 be a fixed non-zero vector in a Euclidea n space V . Then for
a non-zero vector a: in V there exists a unique real number c such
orthono rmal if (/3i, /3j) = 0 for i =I= j that
o: - c/3 is orthogon al to /3.
= 1 for -i = j.
Note. An orthogo nal set of vectors may contain the null vector 0 but c is determin ed by ( a: - c/3,'/3) = 0 . Therefor e ( o:, /3) = c(/3, /3) , giving
an orthono rmal set contains only non-null vectors. c=~-

HA-34
HIGHER ALGEBRA
l
190 V E CTOR S P AC E S 191

c is said to b e the scalar component (or component) of o a long {3 and


The orem 4.12.9. B e sse l's inequality.
cf] is said to be the projection of o upon f3 .
If {{31 , {32 • •. . , f3r } be a n orthonormal s e t of vectors in a Eucliden.n
Theorem 4.12.8. If {,81 , !32 . . . . , !3r} be an orthogonal set of non null s p ncc V. then fo r a ny v ector o in V.
v ectors in a Euclidean space V then any vector f3 in L{/3 1 , /32 , . .. , f3r}
has the unique representation f3 = c1f31 + c2/32 + · · · + Cr/3r. where Ci is \\o\1 2 ~ Ci + c~ + · · · + c; .
the scalar component of {3 along f3i . whe re c; is the scala r compone nt of a: along 6 •. i = 1 , 2 , .. • . r •

Proof. Since { /31 , 132, ... , f3r} is an orthogonal set of non null vectors , P roof. For all i (i = 1, 2, . . . , r) . ci = (Ct:, /3-; ) . since (/3i , /3.) = l.
it is linearly independent and there fore it is a basis of the s ubspace u - c 1 6 1 - c2 (h - • • • - crf3r is orthogonal to e a ch /3i, 1 ::; i::; r, since
L{/31 Jh, . . . , /Jr} . (a: - cu'31 - c2fh - · · · - Cr/3r , /3;) = (o,/3;) - Ci(/Ji , /3;.) = 0.
Therefore /3 can be expressed as /3 = c1f31 + c2,82 + ··· + Cr .Br, whe re It follows that o - c 1{31 - c 2{32 - · · · - cr!3r is orthogonal to c i/31 +
Ci are unique real numbers. c2B2 + · · · + Cr/3r -
Now (/3, /Ji )= ci(l3. , Bi), since ({3j , Bi) = 0 for j =I=- i. = (Q - cif31 - c2/32 - · · · - Cr/3r) + (c1/31 + C2/J2 + · · · + Cr/3r)-
Ct:
SoCi = t~·~.~
= the scalar component {3 along ,8; . By Pythagoras. Theore m,
Note. Every vector {3 in a Euclidean spate V is the sum of its projections \\od\ 2 \lo - 2
C1/31 - c2f3'!. - · · · - Cr/3r\\ + l\c11'.11 + · · · + Cr/3r\1 2
a long the vectors of an orthogonal basis of V . ~ l\c1/31 + C'). .62 + · · · + crf3r \1 2 • since a norm is ~ 0 .
Corollary. If { 8 1 , /32 , . .. , /Jr} be an orthonormal set of vectors in a But llc1 ,61 + C2.62 + ... + Cr ,Br\1 2
Euclidean space V, any vector /3 in L{.61, /32, .. . , f3r} can be expressed as
= (c1,81 + C2/32 + · · · + Cr ,f3r, C1,61 + C2/3'!. + · · · + Cr/3r)
/3 = (/3, /31 ).81 + (/3, /h)/32 + · · · + (/3, /3r )/Jr-
= Ci + c~ + · · · + c;., since {/31 , /32 , . . . , .6r } is a n orthonormal set.
Worked Example. Consequently, \lo\1 2 ~Ci+ c~ + · · · + c;.
1. Prove that the set of vectors {(1, 2, 2), (2. -2. 1) , (2.1, -2)} is an or- This completes the proof.
thogonal basis of the Euclidean space JR.3 with standard inner product . 1 ·
E x press ( 4 , 3, 2) as a linear combination of these basis vectors. Note. The equality occurs if \la: - ci/31 - c2f32 - · · • - Cr/Jr\1 2 = 0 ,
i..e ., if a.= c1/31 + c2/J2 +· · ·+Cr.B r, i.e., if a: E L{/31 , .6 2, . . . . /3r} -
Let {31 = (1. 2, 2) , /h = (2, -2, 1). ,8 3 = (2, 1. -2).
Then (/31 , /32 ) = O. (/32 , /3:{) = 0 , (/33, /31) = 0 . Thcorem. 4.12.10. Parseval's theoren1.
So {.l3i. {32 , {33 } is an orthogonal set of non-zero vectors and therefore If { {31 , /32, . . . , 6 11 } be an orthonormal basis of a Euclidean space V,
it is linearly independent. the n for any vector C\ in V,
Since JR.3 is a v ector space of dimension 3, {/31 . f32 , B:~} is a basis of IR3 . \\a:\\ 2 =Ci+~+ · ··+ C~,
Let .6 = (4. 3, 2). So {3 = c1/31 +c2.B 2+c3{33, where Ci is the component where Ci is the scalar component of o alo ng f3;, i = 1, 2 . . . . . n .
of (3 along 13-t . Proof. Since { f31, f32, .. . , ,Br,} is a. basis of V, a ny vector a: E V can be
C = (/3 .~ 1) = 4.1+3 2+2.2 =
1 (;➔ 1 ,th) 9 li , expressed a::; a= c1/J1 +c2.62+· · ·+cn f3-n, where Ci is the scalar component
of a: along {3;., i = l , 2 . . .. , r .
_ ( /3 ,fh) _ 4.2+:3 .-2+2.1 = !
C2 - c.12.Jh ) - 9 9' So \la:1\ 2 = (a:, a:) (cifJ1 + · · · + c,J3.,... c1/31 + · · · + c,1fJn)
C _ U3Jh) _ 4.2 + :1.1+2.-2 = ~. c21 + c22 + · · · + C27 1 , since
-· {6
. 1 • 132 , .. . , .Br }
3 - (J ,. 13:. ) - () ;., is n.n orthonormal set .
T h e r e fore {3 = \~ 8 1 + ! /32 + ~ ,B;,, . This C(: mplet.cs the proof.
192 VECTO R SPACES
HIGHER ALGEBR A 193

Theor em 4.12.11 . Let {/31, /32, ... , Br} be an orthogo nal set If r < n, then L{,81, /32, .. . , /3r} is a proper subspac e of V and
of non null so
vectors in a Euclide an space V and o: be a vector in V - L{/3 , /3 there exists a vector O:r+l in V such that O:r+1 <t L{/31.132 , . ..
1 2 , ... , Br} . , /3r} - We
If the scalar compon ent of o: along (3i be Ci , then assert that the set {/31, /32, . . . , /3r, O'.r+d is linearly indepen dent.

(i) /3 = Let us conside r the relation c1/31 + c2/32 + · · · + Cr/3r + Cr+10:r+


o: - c1/31 - c2/32 - · · · - Cr/3r is orthogo nal to ca.ch /3i; and 1 = 0,
where c,. E JR. Then Cr+l must be zero, for otherwi se O'r+l would
(ii) L{fJ1, /32, • • •, /3r, a}= L{/31 , f32, ... , /3r, /3}. belong
to L{,81 , ,82 , . . . , /3·r } . The linear indepen dence of {/31, /32, ....
/3r} and
Proof. (i) (/3, /31) (o: - C1/31 - c2/32 - · · · - Cr/3r, /31) Cr+l = 0 togethe r imply that C1 = c2 = · · · = Cr+l = 0 and this
proveH
( o:, ,6 i) - c1 (.61, /31) = 0, since c1 = S;;r:J
1
~ •
our assertio n.
Let .Br+l = O:r+1-d 1,61 -d2,B 2-· · ·-dr/3r where c4 is the sca.lar compo-
Similar ly, (/3, /32) = 0, ... , (/3, /3r) = 0.
nent of O:r+i along /3i. Then /3r+l i= 0 and is orthogo nal to /31.
Therefo re /3 is orthogo nal to each /3i. /32, ... , /3r
and thus an orthogo nal set of r + l non null vectors {/31, /32, ..
(ii) Let S = {/31,/32 , ... ,/3r,o:}, T = {/31,/32, ... ,f3r,/3}. . , !3r+1} is
obtaine d in V as an extensi on of the set {/31, /32 , ... , /3r}.
We have /3 = o: - C1/31 - c2/32 - · · · - Cr/3r
· and O'. = /3 + C1/31 + C2/32 + · · · + Cr ,6r
(i) If r + 1 = n , then { /31, /32, ... , /3r+ 1} is an orthogo nal bru:;is of V.
(ii)
Using (i) we can say that each vector in Tis a linear combin ation
If r+ l < n, then by repeate d applica tion of the proced ure describ ed
of above we obtain in a finite number of steps an orthogo nal set of
the vectors in S . Therefo re L(T) c L(S)~ n vectors
{ /31 , /h, . .. , /3r+ 1, . • . , .Bn } in V .
Using (ii) we can say that each vector in Sis a linear combin ation This set being an orthogo nal set of non-nul l vectors , is linearly
of inde-
the vectors in T . Therefo re L( S) C L(T) , penden t. Further more, this being a linearly indepen dent set of
It follows that L(S) = L(T). n vectors
in Vis a basis of V.
This comple tes the proof.
This comple tes the proof.
Note 1. The theorem is also valid when o: E L{/31, /32, . .. ,
f3r}. In this Corolla ry. An orthono rmal set of vectors in a finite dimens
case, a = c 1 /3 1 + c2/32 + · · · + cr/3r, where Ci is the scalar compon ional Eu-
ent of o: clidean space V , if not a basis of V, can be extende d to an orthono
along /3i. Clearly , /3 = 0 and therefo re rmal
basis of V .
( i) /3 is orthogo nal to each /3-., and
Worke d Examp le (contin ued.)
(ii) £{/31,/ 32,·· ·, /3r,/3} = L{/31,/ 32,···•/3 r,0} = L{/31 , /32 , . . .
, /3r}= 2. Extend the set of vectors {(2, 3, -1), (1, -2, -4)} to an
L { /3 1 , (32 , ... , f3r. a}, since O'. is a linear combin ation of /31 , /32, ... , !3r orthogo nal
. basis of the Euclide an space JR.3 with standar d inner produc t
and then
2. An orthogo nal set of r non-nul l vectors {.81 , /32, . .. , /3r} find the associa ted orthono rmal basis.
can be
extend ed to an orthogo nal set of r + l non-nul l vectors with the Let =
a vector a: lying in V - L{/31, /32, . . . , /3r}. The theorem gives
help of 01 (2, 3, -1), 02 = (1, -2. -4).
a clue to 01,02 are orthogo nal vectors . Let 0:3 = (0,0.1) . Then {a .a .a:d
the extensi on . 1 2
2 3 -1
Theor em 4.12.1 2. An orthogo nal set of non null vectors is linearly indepen dent because 1 -2 -4 =/- 0.
in a finite
dimens ional Euclide an space V, if not a basis of V, can be extende 0 0 1
d to
an orthogo nal basis of V . So {01, 0:2, 03} is a basis of JR .
3

Proof. Let d im V = n and let {/31, /32, ... , /3r} be an ortho?o nal_ Let /3 =
set of 0:3 - C101 - c2a2, where c 1 = ((u:.,oi)), c 2 -- (o 3 , 02 /
uon null vectors in V where 1 S r S n. So { /31 , 82 , .. . , /3r} 1s 0 1 , a- 1 (a:2,0: 2) ·
a linearly Then {3 is orthogo nal to a 1· and a2 and L{ a: 1. 02 , 03}
indepe ndent set. = L{ 01. 02. /3} •
= Therefo re { o 1 , 0:2, fJ} is an orthogo nal basis of JR 3 .
If r n , then {81 , th, .. . , /Jr} is an orthogo nal basis of V.

j
194 HIGHER · ALGEBRA 195
VECTOR SPACES

c1 = - 1~ , c2 = - 2i and therefore Worked Examples (continued) . .


f3 = (0, 0, 1) + 1\ (2, 3, -1) + 2i (1, -2, -4) = (½, -½, ½). 3. Use Gram-Schmidt process to obtain an orthogonal basis
3, 4)} of the Euclidean space 1R
3
from the
with stan-
basis set { (1, 0, 1), (1, 1, 1) , (1,
Hence an extended . orthogonal basis is {(2 " 3 ' -1) , (1 , -2 ' -4) , dard inner product.
1 1 1
( 3 , - 6 , 6 )} and the associated orthonormal basis is
Let a: 1= (1, 0, 1), o:2 = (1, 1, 1), 0:3 = (1, 3, 4).
{ }ct(2. 3, -1), Jk(l , -2, -4) , ~(2, -1, 1) } · ·l 02
Let ,81 = o: 1 and f32 = a 2 - c 1 {31 , where c 1 is the scalar component of
along {31 . Then 132 is orthogonal to ./31 and L {./31, /32} = L{ ./31 , 02} =
Theorem 4.12.13. Every non-null subspace W of a finite dimensional lt £{01 , a2}.
Euclidean space V possesses an orthonormal basis.
c1= ~~~ :g~~ = 1. Therefore f3i = 0:2 - /31 = (0, 1, 0) .
Proof. Let { 01 , a:2, ... , O:r} be a basis of W. An orthogonal basis will Let {33 = a 3 - d 1(31 - d 2/3i, where d 1, d2 are scalar components of 0:3
be obtained by a method known as Gram-Schmidt process of orthog- along {31 , .rJ.i respectively.
onalisation. Since the basis vectors are none zero, we pick up one of
them , say 01, and consider as the first member of the new basis. For Then /33 is orthogonal to f31 , 132 and L{/31, /32 , /33} = L{ 0:1 , 0:2, 0:3}.
convenience we rename it /31 . d - (0:3,/h) - (03,0:1} - ~ 1 d2 - 03,/3, -- 21 -- 3
l - (.81,,9i) - (0:1,0:1) - 2 - ,82,/32 0

/31 = 0:1-
• Therefore 133 = (1, 3, 4) - ~(l, 0, 1) - 3(0, 1, 0) = H-1, 0 , 1) .
Let /32 = 02
c1/J1, where ci/31 is the projection of a: 2 upon {31.
-
Therefore an orthogonal basis is {(l, 0, 1), (0, 1, 0), J(-1, 0, l)}.
Then /32 is orthogonal to /31 and L{/31, 132} = L{/31, 02} = L{ a 1, a: 2 }.
4. Use Gram-Schmidt process to obtain an orthonormal basis of the sub-
lh = a2 - /31 · t~:J~J space of the Euclidean space JR4 with standard inner product, generated
a3 f/. L{ .81 , /32}. Let /33 = a3 - d1/J1 - d2f32, where di/31, d2f32 are the by the linearly independent set {(1, 1, 0, 1), (1, 1, 0, 0), (0, 1, 0, l)}.
projections of 03 upon /31, 132 respectively. Let a 1 = (1 , 1, 0 , 1), a2 = (1, 1, 0 , 0) , a3 = (0, 1, 0, 1) .
Then /33 is orthogonal to 01,/h and L{/31,/32,/33} = L{/31,/32,a3 } = Let /31 = 0:1 and 132 = a2 - c1/31, where c1/31 is the projection of o 2
L{ a-1 , a2, a3}. upon /31-
a _ _ ,.., _ (cr3 ,,h /3 _ ~03,,82~ r.,
/J",j - ,,8, fJ2.
..... 3 flt ,,81 1 .82 Then /32 is orthogonal to /31 and L{/31, 132} = L{/31 , 0:2} ....: L{ 01 , 0:2} .

04 f/. L{/31, /h , /Jg}. Let /34 = 04 - r1/31 - r2/J2 - r3/33, where Ct=~= t~~:~~~ = ~-
r 1/31 , r 2/h , r 3 {33 are the projections of 04 upon /31, /32 , 13a respectively. /32 = o: 2 - Jo 1 = (1 , 1, 0, 0) - j(l; 1, 0, 1) = ½(1 , 1, 0, -2).
Then {34 is orthogonal to /31, 132, {33 and L{ /31 , !32, /33, /34} = Let /3-;; = a3 - d1/31 - d2/32, where d1 ,B1, d2fh. are projections of o: 3
L{./31 , B2 , .B3,a4} = L{a1,02,a3,04}. upon /31 and 132 respectively.
_ (il/)) f3 _ ~04,~2) a _ 04 ,,83
,,83) a_ Then /33 is orthogonal to /31, 132 and L {/31, 132 , /33} = L{ 0:1 , 02 , o 3}.
/34 - Q:4 - ( ' l fJ3 •
. 1 ,8, ,,8,) fJ2 (,83
d _ (a:i ..th~ _ a d 2 = ~a:i,.82) = _!.
1 - (/J1 ,{Jl - 3' /3,,,82) 2
This process terminates after a finite number of steps because at every °

step one vector of the given basis is replaced by a vector in the desired /33 = (0, 1, 0, 1) - Hl,l, 0, 1) + ½(1, 1, 0, -2) = ½(-1 , 1, 0 , 0) .
orthogonal basis. Finally we obtain Therefore o.n orthogonal basis of the subspace is
/3r = 0: . _ 7
~r,glj a
1 , 1 fJl
_ ~*•$2f f32 _
, :i
.. . _ bor ,l3r-1) /3r-l,
( r- 1 ,,B,._ i)
and {(l, 1, o', 1), ½{l. 1, 0, -2), ½(-1 , 1, 0, 0)}
and the corresponding orthonormal basis is ·
{(31 , /h , .. . , .Br} is an orthogonal basis of the subspace W .
This completes the proof.
{ "73(1, 1, 0, 1) , 75(1 , 1, 0, -2), 72(- l. 1, 0, 0)} ·
VECTOR SPACES
196 HIGHER ALGEBRA 197

4 . 13. Orthogo nal complem ent of a subspace . Let a,/3 ES. Then (a, /3i) = 0 and (/3,/3i) = 0 for i = 1,2, . .. ,r.
This implies (a+ /3, /3i) = 0 for i = 1, 2, . .. , r .
Theorem 4.13. 1. In a Euclidean space V if a vector be orthogona l to This shows a+ /3 is orthogona l to L{/31, 132 , • •. , /3r }, i.e., o: + .B is
a set of vectors, then it is orthogona l to every vector belonging to the orthogona l to P. Therefore a ES, /3 ES implies et+ /3 ES. ..
subspace spanned by the set of vectors. .. . (ii)
From (i) and (ii) it follows that S is a subspace of V.
Proof. Let a E V and a: be orthogona l to the vectors {31 , 132, . . . , /3r in V.
Note. This sµbspace S is denoted by p.l..
Then (et, .81) = 0 , (a:, /h) = 0, ... , (a:, /3r) = 0.
Let P = L{/31. />i , ... , .Br} and
for c.; E lR.
e E P. Then e= c1/31 +c2/>i+- · ·+Cr/3r Theorem 4.13.4. Let P be a subspace of a finite dimensio nal Euclidean
space V. Then V = P EB p.l..
(a,~) (a, c1/31) + · · · +(at, Cr/3r) Proof. Case 1. Let P = {0}. Then p.l. = V and the theorem is obvious.
=c1 (o, /31) + c;(o, /32) + · · · + Cr(o, /3r) Case 2. Let P -::f. { 9} and let {.81 , 132 , .. . , .Br} be an orthogon al basis of P .
0, since (a:, .Bi) = 0 for i = 1, 2 , . .. , r . This can be extended to an orthogona l basis {,81, 132, . • • , /3r, .Br+ 1, • • • , /3n}
This proves that et is orthogona l to every vector of P. ofV.
Note. In this case o is said to be orthogona l to the subspace P . /3r+i is orthogona l to each of /31, 132, . . . , .Br .
Therefore /Jr+l E p.l. . Similarly, f3r+2, .. . , .Bn E p.l. ·
Theorem 4.13.2. Let P be a subspace eof a finite dimension al Euclidean
Since {/3r+1, f3r+2, .. . , /3n} is an orthogona l set, it is linearly indepen-
space V. A vector et in V is orthogona l to P if and only if a: is orthogona l dent in V and this being therefore linearly independ ent in p.l. is either
to every vector of a generatin g set of P. a basis of p.i., or can be extended to a basis of p.l..
Proof. Let { .81 , 132 , .. . , .Br} be a set of generator s of P . Therefore n - r :5 dim p.l. < n.
Let a be orthogon al to P . Then o is orthogona l to every vector of P P and p.l. being subspaces of V, P + p.l. is a subspace of V and
and therefore a is orthogon al to each _8;,. i = 1, 2, .... r. · furthermo re P n p.1. = {0}, since O is the only vector in V orthogon al to
Converse ly, let 0t be orthogona l to each /3i. By the previous theorem, a: · itself.
is orthogon al to L{,81 , 132 , . . . , .Br}, i.e., a is orthogona l to P. The relation dim (P+P.1.) = dimP+ dim p.1. - dim (PnP.l.)'g ives
dim (P + p.l.) = dimP + dim p.l. ~ r + (n- r), i.e., ~ n ...... (i)
Theorem . 4.13.3. Let P be a subspace of a finite dimension al Euclidean Again, P + p.1. being a subspace of V, dim (P + pJ..) S n . .. . .. (ii)
space V. The set of all vectors in V · which are orthogona l to P is a
subspace -of V. From (i) and (ii), dim P + p.l. = n and this i.Jnplies P + pJ.. = V.
·
Hence V = P EB p.l. since P + p.1. = V and P n pJ.. = {8}.
Proof. Let S be the set. Since 9 in Vis orthogona l to every vector in P , This completes the proof.
0 E S and therefore S is non-empt y.
Note. {/Jr+l, /3r+2, .. . , /3n} ·is an orthogona l basis of pJ...
Case 1. S = {0}. Then Sis a. subspace of V .
Definitio n. The subspace p.l. is said to be the orthogona l compleme nt
Case 2. Let S ~ {0} and let a ES.
of Pin V .
Let {.Bi , {3-i, •• . , /3r} be a set of generator s of P . Since a: is orthogona l
to P , a is orthogon al to each of /31 , /h., .. . , /3r. Since an orthogona l set of non null vectoP,> in a Euclidean space V can
Then (a, /3;,) = 0 for i = 1, 2, . . . , r and this implies (ca, /3;) = 0 for l;>e extended to an orthogona l basis of V in a unique way, p.l. is uniquely
a11· c ER and for i = 1, 2, . .. ,r. deterriline d by P. Thus although a. given subspace P may have many
This shows ca is orthogon al to L{/31, /3-i., .. . , /3r}, i.e., ca: is orthogona l compleme nts in V, its orthogona l complem ent pJ.. is unique.
to P for all c E R . Thus a. finite dimension al Euclidean space V is the direct sum of any
Therefor e ca E S for all c E JR. .. ... (i) subspace of V and its orthogona l complem ent in V.
J
198 HlGHER ALGEBR A VECTO R SPACES 199

Worke d Examp les.


Exerc ises 12
L In Euclide an space IR. 3 with standar d inner produc t, let
P be the
subspac e generat ed by the vectors (1,1,0) and (0,1,1). Find p.1._ 1. In IR.'3, let = (ai, a 2 , a 3 ), ,8 = (bi, b2 , b3 ) . Determ ine whethe r (,) is a real
O
Leto= (1.1 , 0), ,6 = (0.1, 1) and let 'Y = (a ,a ,a ) E p.1._ l
inner product for IR. 3 if(,) be defined by
1 2 3
Then (a: . '")') = 0 and (/3, 1 ) = 0. Therefo re a1 + a2 = 0, a2 + a =

where k E JR.
3
Toking a2 = k , we have a1 = a3 = -k and therefor e 'Y = k(-1, 1, -1),
o.
.
.1
l
t (i) (a:, /3)
(ii) (a, (3)
a1b1 + a2b2 + a3b3 I;
=I
= (a1 + a2 + a3)(b1 + b2 + b3);
i (iii) (o,,B) = a1b1 + (a2 + a3)(b:i + b3);
So p.1. is the subspac e generat ed by the vector ( -1, 1, -1).
i (iv) (a,(3) = a1b1 + (a2 + a3)(b:z + b3) + a3b3 .
2. A is a real m x n matrix. Show that the solution space of the
system · .!
of equatio ns AX = 0 is the orthogo nal comple ment of the row space 2. Prove that for vectors a, /3 in a Euclide an space
A in the Euclide an space !Rn with standar d inner produc t.
of· .f (a, (3) = 0
V,
(i) Ila+ .BIi = Ila - /311;
if and only if
Let A = ( G.ij )m,n, a;j E JR and a:1, a:2, ... , Orn be the row vectors
of (ii) (a,(3) = 0 if and only if Ila+ f311 2 = llall 2 + 11/311 2 ;
A . The row space P = L { a:1, 02, ... , Orn}, a subspac e of ]Rn. (iii) (a+ /3, a - (3) = 0 if and only if llall = 11.Bll-
Let Q be the solution space of AX= 0 and let~= (t • t , . . •, t,.) be 3. Prove that the set of vectors {(2,3, -1), (1, -2,-4) , (2, -1,
1 2 l)} is an orthog-
a ~olutio n of the system. onal basis of the Euclide an space lR 3 with standar d inner product
Then ai1t1 + G.i2t2 +···+G.intn = o\or i = 1, 2, ... , ·m , Le., (a:i, e) =o projecti ons of the vector a= (1, 1, 1) a.long these basis vectors
. Find the
and verify that
for i = 1, 2, ... , m . a: is the sum of its projecti ons a.long these basis vectors.
-l
Therefo re e is orthogo nal to each ai and therefor e ,; is orthogo nal .• l
P , i .e. , ,; E pJ.... Thus e E Q =>,; E p.1.. Therefo re QC p.1. .... (i)
to 4. Use Gram-S chmidt process to convert the given basis of the
Euclide an space
IR3 with standar d inner product into an orthogo nal basis.
Let 1J = (u1 , u2, ... , un) E p.l.. Then 1] is orthogo nal to each of
the (i) {(1,2,- 2),(2,0 ,l),(1,l , O)}; (ii) {(l,l , O),(O,l , l),(1,0 ,l)} .
generat ors a-1 , 02, ... , Orn of P.
Therefo re an U1 + Ui2U2 + · · · + Uin Un = -0 for i = 1, 2, ... , m. 5. Apply Gram-S chmidt process to find an orthono rmal basis for
the Euclide an
This shows that 17 is a solution of the system, i.e., TJ E Q. space R 3 with standar d inner product , that contains the vectors
Thus 77 E pi. => T/ E Q. Therefo re p.1. C Q . ... (ii) ( .) ( 1 -1 O)·
✓2' ✓ 2'
(") ( r 1 -1) ( 2 -1 1 )
I ' ll '7a• 73• 73 ' °7s ' 76 • '7s .
From (i) and (ii) p.l. = Q. That is, the solution space of the system 6. Apply Gram-S chmidt process to obtain an orthono rmal basis
of the subspac e
AX = 0 is the orthogo nal comple ment of the row space of A. of the Euclide an space R 4 with standar d inner product , spanned
by the vectors
Note. Since P $ p.l. = R"-, we have P $ Q = lRn. (i) (l,l,O, l),(l,-2 ,0 , 0),(l,O ,-l , 2);
Therefo re dim P+dim Q =n, i.e., rank of A +?·ank of X(A) = (ii) (1, 1, 1, 1), (1, l, -1 , -1), (1 , 2, 0 , 2).
n,
where X (A) is the solutio n space. j 7. Find an orthono rmal basis of the row space of the matrix:
3. P is a subspac e of a Euclide an space V of finite dimens ion !
j
~),
. Prove l l
i ~
l l
that p .1. .1. = P.
Let d im V = n . Let {/31 , /h. , . . . , !3r} be an orthogo nal basis of P and
le t {/31 , . . . , /Jr- f3r +1, .. . , /3n} be an extende d orthogo nal basis of
V.
I
l (i) (
2 l
2 1
2
(ii) ( 3 l
2 3 D
8. Find the orthogo nal complem ent of the row space of the
ma.t ri.-c:
P = L{/31, /3?. , . . . , /3r}- Since P $ p.l. = V, pJ... = L{f3r+l • . . . ,/3n}-
Si.ncc p .1. ..L iB t he orthogo nal complm ent of p.1. in V
L { f3r + 1, , .. , /1n:} , we have pl. .l. =
and p.l.
L {/31, /32, .. .. .Br}. Therefo re p.l. .l.
(o) ( l
2 1
3 :.?
1 :2
~) ·
( ii ) (
1
3
4
i ~ )-
= [Hint. The solution sp11<..-e of the system of equat ions ,-\X =
0 w i·t h t_h e given m atri.."<
P.
A as the coeff>cie nt matri x . is the orthogon al complem ent o f
the row space o f A .J
{1.·
200
HIGHE R ALGEB RA : :~ 1
VECTO R SPACE S
t 201
4.14. Matr ic polyn omial s.
i'
Let us· consid er a 2 x 2 matrix A =(
+ x + 1 x + 2x )x
2 3
.:
.
-j G(x) = ( ; ~ ) +( ~ ~ ) x.
3:t3 + x 4x 2 + 3 . . l
~ ~ ) x+( ~ : )
whose el~m~ nts are real polyno mials in x. A can . be expres
polyno mial m x sed aa the
·
:• l
·• .:•- ····-~~ 1·.l
Then F(x) + G(x) = ( : ; ) + ( x
2
.

( ~ ~ ) x3 + ( ~ ~)x 2
+( ! ~) +( ~ 1) F(x)G( x)
': ~ ! ( i ~ ) ( ; ~ ) +[ ( i ~ ) ( ~ ~ ) + ( ;
=
~ ) ( ! ·~))
x
whose coeffic ients are -real matric es of order 2 x 2.
.._.:; .,1 X

Such a polyno mial is said to be a matric polyno mial. The


degree of the · . · l +[( ; ~ ) ( ~ ; ) + ( ~ : ) ( ; ~ ) ] x2 +( ~ : ) ( ~ ; )
matric polyno mial is the degree 9f the consti tuent polyno
mial of highes t -' -- ·. - it x3
degree appea ring in the matrix A .
In genera l, if A be a.n n x n matrix whose elemen ts a.re real ( comple
· · = ( li ~ ) +( 21 138 ) x +( ~~ !: ) x2 +( ;! 183 ) x3.
x) · I
polyno mials in x, then A ca.n be expres sed as a ma.tric polyno
coeffic ients are n x n real (comp lex) matric es.
mial whose
.ll G(x)F (x) =
( ; ~ ) ( i ~ ) +[ ( ~ ~
) ( ; g) + ( ~ ; ) ( ~ ~ ) ]
Two matric polyno mials F(x) and G~x) whose coeffic ients X
are matri-
ces of the same order .over the same field a.re said to be equal
if they have + [( ; ~ ) ( ~ : ) +( ~ ; ) ( ; ~ )] + ( ~ ; ) ( ~ : )
x2
the sam.e degree and the coeffic ients of like powers of x be x3
equal matric es.
Let F(x) =Ao+ A1x + · · · + Anxn, = ( ~ :1 ) + ( 1;
6
29 ) x +( i32
2
4~ ) x +( ;
6 3
28 ) x •
G(x) =Bo+ B1x + · · · + B,,,.xm
be two matric polyno mials whose coeffic ients are square
same order over the same field. Then the sum F(z) + G(x)
ma.tric es of the
and the
2. Let F(x) = ( ! ~) x +( ~ ~ )-
.. Expres s adj F(x) as a matric polyno mial and verify that F(x)
produ ct F(x)G (x) are define d by . adj
F(x) = dct F(x)I2 .
F(x) + G(x) - (Ao+ Bo)+ (A1 + B1)x +· · ·+(Am + Bm)xm
+Am+lxm+l + .. . + Anxn' if m < n F( x) = ( X : 1 2x ;- 1 ) .
- (Ao + Bo) + (A1 + B1)x + · · · + (An+ Bn)xn
+Bn+ lxn+l + ... + Bmxm , if n < m
= (Ao+ Bo)+ (A1 + B1)x +···+ (An+ Bn)xn ,
adj F(x) =( -x3-1 -i ) x + (
-2xx - 1 ) = ( -~ ~~ , -~ )-
if m = n. ·
F(x)G (x) =Co+ C1x + · · · + Cm+nx m+n, where
F(x). F(x) =( ~ ~ )( -~ -i ) x~+
adj

Cr= AoBr + A1Br -l + ... + ArBo, r = 1,2, . . . ,m +n


taking An+l = An+2 = . .. = An+n.. = 0, Bm+l = Bm+2 = .. . = B.,.+n = o;
[( ~ ! )( -~ -i ) + ( ~ ~ )( -~ -~) 1x+
Note. G(x)F (x} can be define d in a si~~ar _m~ne r.
F(x)G (x)_ 'F
G(x)F (x), in genera l, becaus e matrix mult1p hcat1o n 1S not commu
( ~ ! ) ( -~ -~ )=
Exam ples.
tative.
( -~ -~ )x +( ~ ~ )~+( -~ ~~)2

1 5 = -2J2 x 2 + Ox - 12 = (-2x2 - l)I2 and det F(x) = -2x2 - 1.


L Let F(x) = 3 0 Theref ore F(x). adj F(x) = det F(x).f<J (verifie d) .
VECTOR SPACES 203
202 HIGHER ALGEBRA

4.15. Characteristic equation. -Bo = coin,


ABo - B1 = c1ln,
Let A be an n x nm,..'. :x over a field F. Then det (A - xln) is said AB1 - B2 = c2In,
to be the characteristic polynomial of A and is denoted by '11 A(x). The
equation 'I/J.4.(x) = 0 is said to be the characteristic equation of A.
Let A= (aij)- Then '11 A(x) =
l
• I ABn-2 - Bn-1
ABn-1
=
=
Cn-tln,
Cnln. .
.
Pre-multiplying the -relations b An An-1 An-2 A I respectively
au - x a12 a1n Y , , '···• ' ~
and adding, we have eoAn + c1An-l +' · · + C,.-1A + Cr.In - 0 '
a21 a22 - X a2n
This completes the proof.
ant ann -x Cayley-Hamilton theorem gives a method of computing A- 1 when A
= c0 xn + c 1 xn-l + ... + c,., where co= (-1)'" and Cr= (-1y•-r.(sum is a non-singular matrix.
of the principal minors of A of order r].
Let the ~haracteristic equation of A be coxn + cixn-l + · · · + Cn = 0.
In particular, c1 = (-1)n- 1(a11 +a22 + ... +a,m) = (-l)n-l trace A, By Cayley-Ha.milton theorem, caAn + c1An-l + · · · + Cn-1A + Cnin = 0.
and Cn = <let A.
Since en = det A -:f. 0, c;; 1 exists in F. Multiplying by -c;; 1 , we have
The degree of the characteristic equation is same as the order of the -c;; 1 (c0 An + c1An-l + · · · + Cn-1A) - In= 0
matrix A and the coefficients are scalars belonging to F. or, -c; 1 (coAn-l + c1An- 2 + · · · + Cn-1In)A = In .
[Note that the determinant of the submatrix of an nxn matrix A obtained From the definition and uniqueness of an inverse it follows that
by deleting i1th, i2th, ... , in-rth rows and i1th, i2th, ... , in-rth columns, f
. .4- 1 = -c;; 1 (coAn-l + c1An- 2 + · · · + Cn-1In),
is a principal minor of order r of A.]
Thus A- 1 is expressed as a polynomial in A with scalar coefficients .
Theorem 4.15.1. Cayley-Hamilton theorem.
Every square matrix satisfies its own characteristic equation. Worked Examples.
The theorem states that if A be an n x n matrix and the characteristic 1. Use Cayley-Hamilton theorem to find A- 1 , where A= ( ~ ~ ) .
polynomial of A be coxn + c1 xn-: + · · · + Cn, then
coAn + c1An-l + . .. + Cnln = 0. The charaeteristic equation of A is I 2
; x 5
~x \ = O
Proof. Let .4 be an n x n matrix. Then det (A- xln) = coxn + c1x••- 1 +
or, x - 7x + 7 = 0.
2
•• • + c,.,. A - Inx is a rriatric polynomial in x of degree 1 and adj (A - By Cayley-Hamilton theorem, A 2 - 7 A+ 7 /2 = 0
Inx) is a matric polynomial in x of degree n - 1. since each element of
or, A(A - 112) = -7h or, --½.4(A - 712) = I2.
adj (A - xln) (i.e., a cofactor of an element of the matrix A - Inx) is a
polynomial in x of degree n - 1 at most. This gives A- 1 = -½(A - 712) = ½ ( -~ -; ) .
Let adj (A - I.n.x) = Boxn-l + B1xn- 2 + · · · + B,.,_1, where each B;
is an n x n matrix.
(A - xln ). adj (A - xln) =(det (A - xln)] In gives
2. Use Cayley-Hamilton theorem to find A 50 , where A =
(01
The characteristic equation of A is x 2 - 2x + l = 0 .
(A-Inx)(Boxn- 1 +B1xn- 2 +• · ·+Bn-1) = (coxn +c1_xn-l +· · ·+cn)Iri
By Cayley-Hamilton theorem, A 2 -2A+I2 = 0 or, A2 -A = A-h_ .
or, A(Baxn-l +B1xn- 2 +- · ·+Br,-d-(Boxn+ B1xn-l +· · ·+B.,._1x) Therefore A 3 - A 2 = A 2 - , A= A - 12. · · · , A 50 - .4 49 = .4. - / 2 .
= (coln )xn + (c1I.,.)xn-l + · · · + (cnl.,.).
Equating coefficients of like powers of x, we have Adding, we have A. 50 = 50.4. - 4912 =( ~ 5
~ ) .

You might also like