SK Mapa Linear Algebra

VECTOR SPACES
1. U = L{(2, 0, 1), (3, 1, 0)}, W = L{(1, 0, 0), (0, 1, 0)}. Find dim U, dim W, dim (U ∩ W) and dim (U + W).

2. Two subspaces of R^3 are U = {(x, y, z) : x + y + z = 0} and W = {(x, y, z) : x + 2y - z = 0}. Find dim U, dim W, dim (U ∩ W), dim (U + W).

3. U = { ( a b / c d ) : a + b = 0 }, W = { ( a b / c d ) : c + d = 0 }. U, W are subspaces of R^(2x2). Find dim U, dim W, dim (U ∩ W) and dim (U + W).

Definition. The dimension of R(A) is said to be the row rank of A and the dimension of C(A) is said to be the column rank of A.
Since R(A) ⊂ F^n, the row rank of A ≤ n. Since C(A) ⊂ F^m, the column rank of A ≤ m.

Example.
1. Let A = ( 1 2 0 / 0 1 3 / 2 0 1 ). The row space of A is the linear span of the row vectors α1 = (1, 2, 0), α2 = (0, 1, 3), α3 = (2, 0, 1). Therefore the row space of A = {c1α1 + c2α2 + c3α3 : ci ∈ R}. The set {α1, α2, α3} is linearly independent. Therefore the row rank of A is 3.
The column space of A is the linear span of the column vectors ᾱ1 = (1, 0, 2), ᾱ2 = (2, 1, 0), ᾱ3 = (0, 3, 1). Therefore the column space of A = {c1ᾱ1 + c2ᾱ2 + c3ᾱ3 : ci ∈ R}. The set {ᾱ1, ᾱ2, ᾱ3} is linearly independent. Therefore the column rank of A is 3.

Theorem 4.9.1. Let A be an m × n matrix and P be an m × m matrix over the same field F. Then the row space of PA is a subspace of the row space of A. In particular, if P be non-singular, then the matrices A and PA have the same row spaces.
Proof. Let P = (pij)m,m, A = (aij)m,n and let α1, α2, ..., αm be the row vectors of A. The ith row vector of PA is pi1α1 + pi2α2 + ... + pimαm, a linear combination of the row vectors of A. Therefore every row vector of PA lies in the row space of A and the row space of PA is a subspace of the row space of A. ... (i)
If P be non-singular, then A = P⁻¹(PA) and, by the same argument, the row space of A is a subspace of the row space of PA. ... (ii)
From (i) and (ii) it follows that the row space of A = the row space of PA.

Corollary 1. If A be an m × n matrix and P be a non-singular m × m matrix, the row rank of A = the row rank of PA.
Corollary 2. If A be an m × n matrix and P be a non-singular n × n matrix, the column rank of A = the column rank of AP.

Theorem 4.9.2. Row equivalent matrices have the same row spaces.
Proof. Let A and B be two row equivalent matrices. Then there exist elementary matrices P1, P2, ..., Pr such that B = P1P2...PrA. Since an elementary matrix is non-singular, the product P1P2...Pr is non-singular. By Theorem 4.9.1, A and B have the same row spaces.

Theorem 4.9.3. Let R be a non-zero row reduced echelon matrix row equivalent to an m × n matrix A. Then the non-zero row vectors of R form a basis of the row space of A.
Proof. Let R = (aij)m,n and α1, α2, ..., αr be the non-zero row vectors of R. Then the row space of R is generated by α1, α2, ..., αr, θ, ..., θ (θ being counted m - r times). The set of generators {α1, α2, ..., αr, θ} is linearly dependent as it contains the null vector θ. The null vector θ can be deleted from the generating set, and the non-zero vectors α1, α2, ..., αr generate the row space of R.
We need only to prove linear independence of the set {α1, α2, ..., αr}. Let c1α1 + c2α2 + ... + crαr = θ and let k1, k2, ..., kr be the columns in which the leading elements of α1, α2, ..., αr appear. Since aiki = 1 and ajki = 0 for j ≠ i, comparing the kith components we have c1 = c2 = ... = cr = 0.
This proves that the set {α1, α2, ..., αr} is linearly independent and therefore it is a basis of the row space of R.
Since R is row equivalent to A, the row space of A is the same as that of R and therefore {α1, α2, ..., αr} is a basis of the row space of A.

Corollary. The row rank of a row reduced echelon matrix R is the number of non-zero rows of R.

Theorem 4.9.4. The row rank of an m × n matrix A is equal to its determinant rank.
Proof. Let A be row equivalent to a row reduced echelon matrix R having r non-zero rows. Then the row rank of A is r.
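The ranks in Example 1 above can be verified mechanically. A minimal sketch using Python's sympy library (the use of sympy is our choice for illustration; the text computes these by hand):

```python
from sympy import Matrix

# The matrix of Example 1 above.
A = Matrix([[1, 2, 0],
            [0, 1, 3],
            [2, 0, 1]])

row_rank = len(A.rowspace())      # dimension of the row space R(A)
col_rank = len(A.columnspace())   # dimension of the column space C(A)
print(row_rank, col_rank)         # 3 3
```

That the two numbers agree is no accident: Theorem 4.9.5 below shows that the row rank and the column rank are equal for every matrix.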
Corollary 1. The column rank of a matrix A is equal to its determinant rank.
Proof. By the theorem,
the row rank of At = the determinant rank of At.
But the row rank of At = the column rank of A; and
the determinant rank of At = the determinant rank of A.
Therefore the column rank of A is equal to its determinant rank.

Corollary 2. For an m × n matrix A, the row rank of A = the column rank of A.

Theorem 4.9.5. For an m × n matrix A, the row rank of A = the column rank of A.
Independent Proof. Let A = (aij)m,n and α1, α2, ..., αm be the row vectors, ᾱ1, ᾱ2, ..., ᾱn be the column vectors of A.
Let the row rank of A = r and S = {β1, β2, ..., βr} be a basis of the row space of A, where βi = (bi1, bi2, ..., bin).
Since S is a basis,
α1 = c11β1 + c12β2 + ... + c1rβr
α2 = c21β1 + c22β2 + ... + c2rβr
... ... ...
αm = cm1β1 + cm2β2 + ... + cmrβr, where cij are suitable scalars.
The jth component of αi is aij and the jth component of ci1β1 + ci2β2 + ... + cirβr is ci1b1j + ci2b2j + ... + cirbrj. This holds for i = 1, 2, ..., m. Therefore
a1j = c11b1j + c12b2j + ... + c1rbrj,
a2j = c21b1j + c22b2j + ... + c2rbrj,
... ... ...
amj = cm1b1j + cm2b2j + ... + cmrbrj.
Let γ1 = (c11, c21, ..., cm1), γ2 = (c12, c22, ..., cm2), ..., γr = (c1r, c2r, ..., cmr). Then
ᾱj = b1jγ1 + b2jγ2 + ... + brjγr for j = 1, 2, ..., n.
This shows that each of the column vectors ᾱ1, ᾱ2, ..., ᾱn belongs to the linear span of the r vectors γ1, γ2, ..., γr and therefore
the column rank of A ≤ r. ... (i)
Now r = the row rank of A = the column rank of At. Also the column rank of At ≤ the row rank of At by just what we have done, and the row rank of At = the column rank of A. Therefore
r ≤ the column rank of A. ... (ii)
Combining (i) and (ii), the row rank of A = the column rank of A.

Theorem 4.9.6. Let A and B be two matrices over the same field F such that AB is defined. Then rank of AB ≤ min{rank of A, rank of B}.
Proof. Let A = (aij)m,n, B = (bij)n,p. Let β1, β2, ..., βn be the row vectors of B. Let ρ1, ρ2, ..., ρm be the row vectors of AB. Then
ρ1 = a11β1 + a12β2 + ... + a1nβn,
ρ2 = a21β1 + a22β2 + ... + a2nβn,
... ... ...
ρm = am1β1 + am2β2 + ... + amnβn.
Each ρi is a linear combination of the vectors β1, β2, ..., βn. So L{ρ1, ρ2, ..., ρm} ⊂ L{β1, β2, ..., βn}, by Theorem 4.3.7.
Therefore the row space of AB is a subspace of the row space of B. It follows that row rank of AB ≤ row rank of B. This implies that
rank of AB ≤ rank of B. ... (i)
Considering the product BtAt, we deduce that rank of BtAt ≤ rank of At. That is, rank of (AB)t ≤ rank of At. This implies that
rank of AB ≤ rank of A. ... (ii)
Combining (i) and (ii), rank of AB ≤ min{rank of A, rank of B}.

Corollary 1. If A be non-singular, rank of AB = rank of B.
Proof. Since A is non-singular, A⁻¹ exists. Considering the matrix A⁻¹(AB), we have rank of A⁻¹(AB) ≤ rank of AB. That is, rank of B ≤ rank of AB. But rank of AB ≤ rank of B.
Hence it follows that rank of AB = rank of B.

Corollary 2. If B be non-singular, rank of AB = rank of A.
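Theorem 4.9.6's bound, and the equality of row and column rank, are easy to probe numerically. A small sketch with matrices chosen only for illustration (they are not from the text):

```python
from sympy import Matrix

A = Matrix([[1, 2], [2, 4], [0, 1]])   # 3 x 2 matrix of rank 2
B = Matrix([[1, 0, 1], [1, 0, 1]])     # 2 x 3 matrix of rank 1

# rank of AB is bounded by the smaller of the two ranks.
print(A.rank(), B.rank(), (A * B).rank())   # 2 1 1

# Row rank equals column rank: rank(A) = rank(A^t).
print(A.rank() == A.T.rank())               # True
```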
Theorem 4.9.7. Let A and B be m × n matrices over a field F. Then rank of (A + B) ≤ rank of A + rank of B.
Proof. Let α1, α2, ..., αm be the row vectors of A; β1, β2, ..., βm be the row vectors of B. Then the row vectors of A + B are α1 + β1, α2 + β2, ..., αm + βm.
The vectors α1, α2, ..., αm generate R(A), the row space of A; the vectors β1, β2, ..., βm generate R(B), the row space of B; and the vectors α1 + β1, α2 + β2, ..., αm + βm generate R(A + B), the row space of A + B.
Because R(A) + R(B) = {u + v : u ∈ R(A), v ∈ R(B)}, the vectors α1 + β1, α2 + β2, ..., αm + βm lie in the subspace R(A) + R(B).
But R(A + B) is the smallest subspace containing the vectors α1 + β1, α2 + β2, ..., αm + βm. So R(A + B) ⊂ R(A) + R(B).
Hence dim R(A + B) ≤ dim [R(A) + R(B)].
As R(A), R(B) are subspaces of the finite dimensional vector space F^n,
dim [R(A) + R(B)] = dim R(A) + dim R(B) - dim [R(A) ∩ R(B)] ≤ dim R(A) + dim R(B). ... (i)
Therefore dim R(A + B) ≤ dim R(A) + dim R(B), i.e., row rank of A + B ≤ row rank of A + row rank of B.
Hence rank of (A + B) ≤ rank of A + rank of B.
This completes the proof.
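The bound of Theorem 4.9.7 is attained, for instance, when the row spaces of A and B intersect only in θ; a quick numerical sketch (the matrices are illustrative, not from the text):

```python
from sympy import Matrix

A = Matrix([[1, 1, 1], [0, 0, 0], [0, 0, 0]])   # rank 1
B = Matrix([[0, 0, 0], [1, 2, 3], [0, 0, 0]])   # rank 1

# The row spaces meet only in the null vector, so the bound is attained.
print((A + B).rank())   # 2
```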
Theorem 4.9.8. Factorisation theorem.
An m × n matrix of rank r can be expressed as the product of two matrices, each of rank r.
Proof. Let A be an m × n matrix of rank r. Then there exist non-singular matrices P and Q of order m and n respectively such that
PAQ = ( Ir Or,n-r / Om-r,r Om-r,n-r ) = R, say.
R is the fully reduced normal form of A. R can be expressed as the product ST, where S = ( Ir / Om-r,r ), an m × r matrix of rank r, and T = ( Ir Or,n-r ), an r × n matrix of rank r.
Since P and Q are non-singular, P⁻¹ and Q⁻¹ both exist and are non-singular. PAQ = ST ⇒ A = (P⁻¹S)(TQ⁻¹).
Since P⁻¹ is non-singular and the rank of S is r, the row rank of P⁻¹S is r. That is, the rank of P⁻¹S is r.
Since Q⁻¹ is non-singular and the rank of T is r, the column rank of TQ⁻¹ is r. That is, the rank of TQ⁻¹ is r.
Thus A is the product of two matrices P⁻¹S and TQ⁻¹, each of rank r. This completes the proof.

Worked Examples.
1. Determine the row rank and the column rank of the matrix A and verify that the row rank of A = the column rank of A, where
A = ( 1 0 2 -3 / 2 1 4 3 / 1 1 2 6 ).
Let us apply elementary row operations on A to reduce it to a row echelon matrix. Applying R2 - 2R1 and R3 - R1, then R3 - R2, we obtain
( 1 0 2 -3 / 0 1 0 9 / 0 0 0 0 ) = R, say.
R is a row echelon matrix. The non-zero row vectors of R are (1, 0, 2, -3), (0, 1, 0, 9). These form a basis of the row space of A. Therefore the row rank of A = 2.
To determine the column rank of A let us apply elementary row operations on the matrix At.
At = ( 1 2 1 / 0 1 1 / 2 4 2 / -3 3 6 ).
Applying R3 - 2R1 and R4 + 3R1, then R4 - 9R2 and R1 - 2R2, we obtain
( 1 0 -1 / 0 1 1 / 0 0 0 / 0 0 0 ) = B, say.
B is a row echelon matrix. The non-zero row vectors of B are (1, 0, -1), (0, 1, 1). These form a basis of the row space of At, i.e., of the column space of A, and consequently the column rank of A = 2.
Therefore the row rank of A = the column rank of A.

2. Examine linear dependence of the set of vectors {(1, -1, 2, 4), (2, -1, 5, 7), (-1, 3, 1, -2)} in R^4.
Let α = (1, -1, 2, 4), β = (2, -1, 5, 7), γ = (-1, 3, 1, -2).
Let us consider the matrix A whose row vectors are α, β, γ and apply elementary row operations on A to reduce it to a row echelon matrix.
A = ( 1 -1 2 4 / 2 -1 5 7 / -1 3 1 -2 ).
Applying R2 - 2R1 and R3 + R1, then R3 - 2R2, we obtain
( 1 -1 2 4 / 0 1 1 -1 / 0 0 1 4 ) = R, say.
R is a row echelon matrix row equivalent to A. There are 3 non-zero rows in R. So the row rank of R, and consequently the row rank of A, is 3.
Therefore {α, β, γ} generates a vector space of dimension 3 and hence the set {α, β, γ} is linearly independent.

3. Show that the rank of the matrix A = ( 1 1 1 / 1 2 3 / 2 1 0 ) is 2. Express A as the product of two matrices, each of rank 2.
Let us apply elementary operations on A to reduce it to the fully reduced normal form. Applying the row operations R2 - R1 and R3 - 2R1, then R1 - R2 and R3 + R2, we obtain
( 1 0 -1 / 0 1 2 / 0 0 0 ),
and then applying the column operations C3 + C1 and C3 - 2C2, we obtain
( 1 0 0 / 0 1 0 / 0 0 0 ) = R, say.
R is the fully reduced normal form of A.
R = E32(1)E12(-1)E31(-2)E21(-1) A {E31(1)}t{E32(-2)}t = PAQ,
where P = E32(1)E12(-1)E31(-2)E21(-1) and Q = {E31(1)}t{E32(-2)}t.
R can also be expressed as the product ST, where S = ( 1 0 / 0 1 / 0 0 ), a 3 × 2 matrix of rank 2, and T = ( 1 0 0 / 0 1 0 ), a 2 × 3 matrix of rank 2.
PAQ = ST ⇒ A = (P⁻¹S)(TQ⁻¹).
P⁻¹S = ( 1 1 0 / 1 2 0 / 2 1 1 )( 1 0 / 0 1 / 0 0 ) = ( 1 1 / 1 2 / 2 1 ) and
TQ⁻¹ = ( 1 0 0 / 0 1 0 )( 1 0 -1 / 0 1 2 / 0 0 1 ) = ( 1 0 -1 / 0 1 2 ).
Hence A = ( 1 1 / 1 2 / 2 1 )( 1 0 -1 / 0 1 2 ), a product of two matrices, each of rank 2.

Exercises 9

1. Determine the row rank of the matrix having each set as the row vectors and hence examine the linear dependence of the set.
(i) {(4, 2, 5), (3, 0, 1), (5, 4, 9)},
(ii) {(-2, 0, 0, 3), (1, 5, 3, 0), (3, 2, 1, 6), (3, 5, 3, -3)},
(iii) {(4, 1, 3, 2), (2, 4, 4, 3), (1, 2, 0, 0), (0, 1, 1, 1), (1, 3, 1, 1)}.
2. Find a basis for the row space of the matrix.
3. Find a basis for the column space of the matrix.
4. Examine if (1, 1, 1), (1, -1, 1) are in (i) the row space of A, (ii) the column space of A.
5. If A be a singular matrix, prove that
(i) the row vectors of A are linearly dependent;
(ii) the column vectors of A are linearly dependent.
8. If A be a rectangular matrix, prove that either its row vectors or its column vectors or both the sets are linearly dependent.
9. If B is a non-null m × 1 matrix and C is a non-null 1 × n matrix, prove that the rank of BC is 1.
10. If an m × n matrix A be of rank 1, prove that A can be expressed as the product BC, where B is a non-null m × 1 matrix and C is a non-null 1 × n matrix.
11. Show that the rank of A is 2. Express A as the product of two matrices, each of rank 2.
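A factorisation of the kind required in Worked Example 3 can be computed for any matrix from its row reduced echelon form: take the pivot columns of A as the left factor and the non-zero rows of the echelon form as the right factor. This is a standard construction in the spirit of Theorem 4.9.8, though not the P, Q route of its proof; a sketch (the helper name rank_factor is ours):

```python
from sympy import Matrix

def rank_factor(A):
    # Returns C (m x r) and R (r x n), each of rank r, with A = C * R.
    rref_A, pivots = A.rref()
    r = len(pivots)
    C = A[:, list(pivots)]   # the pivot columns of A
    R = rref_A[:r, :]        # the non-zero rows of rref(A)
    return C, R

A = Matrix([[1, 1, 1], [1, 2, 3], [2, 1, 0]])   # a rank-2 matrix
C, R = rank_factor(A)
print(C * R == A)   # True
```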
4.10. System of Linear Equations.

A system of m linear equations in n unknowns x1, x2, ..., xn is of the form
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
... ... ...
am1x1 + am2x2 + ... + amnxn = bm, ... (i)
where aij's and bi's are elements of a field F, called the field of scalars. aij's are called coefficients of the system. In particular, aij's and bi's are real (or complex) numbers when F is the field R (or C).
An ordered set (c1, c2, ..., cn), where ci ∈ F, is said to be a solution of the system (i) if each equation of the system is satisfied by x1 = c1, x2 = c2, ..., xn = cn.
Therefore a solution of the system can be considered as an n-tuple vector of Vn(F). In particular, if the field of scalars be R, a solution of the system is a vector in R^n.
A system of equations is said to be consistent if it has a solution. Otherwise, it is said to be inconsistent.

Examples.
1. x1 + 2x2 = 3
3x1 + x2 = 4.
(1, 1) is a solution. There is no other solution of the system.
2. x1 + 2x2 = 3
3x1 + 6x2 = 7.
This system has no solution. This is not a consistent system.
3. x1 + 2x2 - x3 = 0
2x1 + x2 - 2x3 = 0.
(1, 0, 1) is a solution. (2, 0, 2), (3, 0, 3) are also solutions. In fact k(1, 0, 1) is a solution for each real number k. Thus the system has many solutions.

Matrix representation.
Let A = (aij)m,n, X = ( x1 / x2 / ... / xn ), B = ( b1 / b2 / ... / bm ).
Then the system (i) can be expressed as AX = B. The matrix A is said to be the coefficient matrix of the system and the matrix
Ã = ( a11 a12 ... a1n b1 / a21 a22 ... a2n b2 / ... / am1 am2 ... amn bm ),
also denoted by (A, b), is said to be the augmented matrix of the system.
The system AX = B is said to be a homogeneous system if B = O; otherwise, a non-homogeneous system.
Two systems AX = B and CX = D are said to be equivalent systems if the augmented matrices (A, b) and (C, d) be row equivalent.

Theorem 4.10.1. Let AX = B and RX = S be two equivalent systems and α be a solution of AX = B. Then α is also a solution of RX = S.
Proof. Let the equations of the system AX = B be
f1 ≡ a11x1 + a12x2 + ... + a1nxn - b1 = 0
f2 ≡ a21x1 + a22x2 + ... + a2nxn - b2 = 0
... ... ...
fm ≡ am1x1 + am2x2 + ... + amnxn - bm = 0
and let α = (c1, c2, ..., cn) be a solution of the system.
Let us apply elementary row operation Rij on the augmented matrix (A, b) of the system. Then the ith and the jth equations fi and fj are interchanged and the others remain unchanged. Therefore (c1, c2, ..., cn) is also a solution of the new system.
This implies that if Rij(A, b) = (C, d), then (c1, c2, ..., cn) is also a solution of the system CX = D.
Let us apply elementary row operation cRi on the augmented matrix (A, b) of the system. Then the ith equation fi is multiplied by c (≠ 0) and the other equations remain unchanged. Therefore (c1, c2, ..., cn) is also a solution of the new system.
This implies that if cRi(A, b) = (C, d), then (c1, c2, ..., cn) is also a solution of the system CX = D.
Let us apply elementary row operation Ri + cRj on the augmented matrix (A, b) of the system. Then the ith equation fi is replaced by fi + cfj and the other equations remain unchanged. Therefore (c1, c2, ..., cn) is also a solution of the new system.
This implies that if Ri + cRj(A, b) = (C, d), then (c1, c2, ..., cn) is also a solution of the system CX = D.
Since (R, s) can be obtained from (A, b) by a finite number of elementary row operations of the above types, a solution of the system AX = B is also a solution of the system RX = S.

Corollary. If one of the two equivalent systems be inconsistent, the other is also so.

To examine the solvability of the system AX = B, or to determine the solutions (or solution) of the system, when it is consistent, the obvious procedure is to apply such elementary row operations on the augmented matrix (A, b) as will reduce it to a row reduced echelon matrix.

Worked Examples.
1. Solve the system of equations
x1 + x2 = 4
x2 - x3 = 1
2x1 + x2 + 4x3 = 7.
This is a non-homogeneous system. The augmented matrix of the system is
(A, b) = ( 1 1 0 4 / 0 1 -1 1 / 2 1 4 7 ).
Let us apply elementary row operations on (A, b) to reduce it to a row reduced echelon matrix. Applying R3 - 2R1, R3 + R2, (1/3)R3, then R2 + R3 and R1 - R2, we obtain
( 1 0 0 3 / 0 1 0 1 / 0 0 1 0 ).
Hence the system is equivalent to
x1 = 3
x2 = 1
x3 = 0
and therefore the solution is (3, 1, 0).
Note. The coefficient matrix A is row equivalent to the identity matrix I3 and so it is non-singular. This also suggests that the system admits of a unique solution.

2. Solve the system of equations
x1 + 3x2 + x3 = 0
2x1 - x2 + x3 = 0.
This is a homogeneous system. The coefficient matrix of the system is A = ( 1 3 1 / 2 -1 1 ).
Let us apply elementary row operations on A to reduce it to a row reduced echelon matrix. Applying R2 - 2R1, (-1/7)R2, then R1 - 3R2, we obtain
( 1 0 4/7 / 0 1 1/7 ) = R, say.
The given system is equivalent to
x1 + (4/7)x3 = 0
x2 + (1/7)x3 = 0.
Assigning to x3 an arbitrary real number c, we have the solution x1 = -(4/7)c, x2 = -(1/7)c, x3 = c.
Therefore the solution is (-(4/7)c, -(1/7)c, c), i.e., c(-4/7, -1/7, 1), where c is an arbitrary real number. The solution can be also equivalently expressed as k(4, 1, -7), where k is an arbitrary real number.
Note 1. Instead of considering the augmented matrix we can consider only the coefficient matrix A in the case of a homogeneous system AX = O, since the last column of the augmented matrix is the zero column and remains so under elementary row operations.
Note 2. Every solution is of the form kα, where α = (4, 1, -7) and k ∈ R. So the solutions form a subspace of R^3 and the dimension of this subspace is 1.
3. Solve, if possible, the system of equations
x1 + 2x2 - x3 = 10
-x1 + x2 + 2x3 = 2
2x1 + x2 - 3x3 = 2.
This is a non-homogeneous system. Let us apply elementary row operations on the augmented matrix
( 1 2 -1 10 / -1 1 2 2 / 2 1 -3 2 )
to reduce it to a row reduced echelon matrix. Applying R2 + R1 and R3 - 2R1, then (1/3)R2, then R1 - 2R2 and R3 + 3R2, we obtain
( 1 0 -5/3 2 / 0 1 1/3 4 / 0 0 0 -6 ).
The given system is equivalent to
x1 - (5/3)x3 = 2
x2 + (1/3)x3 = 4
0 = -6.
The last equation disallows the existence of any solution of the equivalent system. Therefore the given system is inconsistent.

4. Solve, if possible, the system of equations
x1 + 2x2 - x3 = 10
-x1 + x2 + 2x3 = 2
2x1 + x2 - 3x3 = 8.
This is a non-homogeneous system. Let us apply elementary row operations on the augmented matrix
( 1 2 -1 10 / -1 1 2 2 / 2 1 -3 8 )
to reduce it to a row reduced echelon matrix. Applying R2 + R1 and R3 - 2R1, then (1/3)R2, then R1 - 2R2 and R3 + 3R2, we obtain
( 1 0 -5/3 2 / 0 1 1/3 4 / 0 0 0 0 ).
The given system is equivalent to
x1 - (5/3)x3 = 2
x2 + (1/3)x3 = 4.
Assigning to x3 an arbitrary real number c, the solution is ((5/3)c + 2, -(1/3)c + 4, c).
This can be expressed as c(5/3, -1/3, 1) + (2, 4, 0), where c is an arbitrary real number.
Note 1. Since c is arbitrary, the number of solutions is infinite.
Note 2. (2, 4, 0) is a particular solution of the system. c(5/3, -1/3, 1) is the general solution of the associated homogeneous system.

Homogeneous System.
We first discuss some properties of the solutions of a homogeneous system. The system is necessarily a consistent one, since (0, 0, ..., 0) is always a solution of the system. This solution is said to be the trivial solution of the system. Our main interest will be in the non-zero solutions, if there be any, of the system.

Theorem 4.10.2. The solutions of a homogeneous system AX = O in n unknowns, where A is an m × n matrix over a field F, form a subspace of Vn(F).
Proof. The system being always a consistent system has a solution which is an n-tuple vector in Vn(F). Let S be the set of all solutions of the system.
Case 1. The zero solution is the only solution of the system.
Then S = {θ} and this is a subspace of Vn(F).
Case 2. The system has many solutions.
Let α = (c1, c2, ..., cn) ∈ S and c ∈ F.
Since α is a solution of the system, ai1c1 + ai2c2 + ... + aincn = 0 for i = 1, 2, ..., m.
Therefore ai1(cc1) + ai2(cc2) + ... + ain(ccn) = c(ai1c1 + ai2c2 + ... + aincn) = 0 for i = 1, 2, ..., m.
This shows that (cc1, cc2, ..., ccn) is a solution of the system.
Therefore α ∈ S ⇒ cα ∈ S. ... (i)
Let α = (c1, c2, ..., cn), β = (d1, d2, ..., dn) ∈ S.
Since α, β are solutions of the system,
ai1c1 + ai2c2 + ... + aincn = 0 and
ai1d1 + ai2d2 + ... + aindn = 0 for i = 1, 2, ..., m.
Therefore ai1(c1 + d1) + ai2(c2 + d2) + ... + ain(cn + dn)
= (ai1c1 + ai2c2 + ... + aincn) + (ai1d1 + ai2d2 + ... + aindn)
= 0 for i = 1, 2, ..., m.
This shows that (c1 + d1, c2 + d2, ..., cn + dn) is a solution of the system.
Therefore α ∈ S, β ∈ S ⇒ α + β ∈ S. ... (ii)
From (i) and (ii) it follows that S is a subspace of Vn(F).
This completes the proof.

Note. The subspace of solutions of the homogeneous system AX = O is denoted by X(A).

Theorem 4.10.3. Let AX = O be a homogeneous system of n unknowns and X(A) be the solution space of the system. Then
rank of A + rank of X(A) = n.
Proof. Let rank of A = r. Then A has r independent column vectors. Without loss of generality, we can assume that the first r column vectors α1, α2, ..., αr of A are linearly independent. Then the remaining column vectors αr+1, ..., αn can be expressed as
αr+1 = e11α1 + e12α2 + ... + e1rαr
αr+2 = e21α1 + e22α2 + ... + e2rαr
... ... ...
αn = en-r1α1 + en-r2α2 + ... + en-rrαr
for suitable scalars eij.
Equivalently,
e11α1 + e12α2 + ... + e1rαr - αr+1 = θ
e21α1 + e22α2 + ... + e2rαr - αr+2 = θ
... ... ...
en-r1α1 + en-r2α2 + ... + en-rrαr - αn = θ.
The relations show that
ξ1 = (e11, e12, ..., e1r, -1, 0, 0, ..., 0),
ξ2 = (e21, e22, ..., e2r, 0, -1, 0, ..., 0),
... ... ...
ξn-r = (en-r1, en-r2, ..., en-rr, 0, 0, ..., -1)
are solutions of the system.
But ξ1, ξ2, ..., ξn-r are linearly independent, because c1ξ1 + c2ξ2 + ... + cn-rξn-r = θ implies c1 = c2 = ... = cn-r = 0.
Let ξ = (d1, d2, ..., dr, ..., dn) be any solution of the system. Then d1α1 + d2α2 + ... + dnαn = θ, i.e.,
d1α1 + d2α2 + ... + drαr + dr+1(e11α1 + e12α2 + ... + e1rαr) + ... + dn(en-r1α1 + en-r2α2 + ... + en-rrαr) = θ
or, (d1 + dr+1e11 + dr+2e21 + ... + dnen-r1)α1 + (d2 + dr+1e12 + dr+2e22 + ... + dnen-r2)α2 + ... + (dr + dr+1e1r + ... + dnen-rr)αr = θ.
Since α1, α2, ..., αr are linearly independent,
d1 = -dr+1e11 - dr+2e21 - ... - dnen-r1
d2 = -dr+1e12 - dr+2e22 - ... - dnen-r2
... ... ...
dr = -dr+1e1r - dr+2e2r - ... - dnen-rr.
Therefore ξ = (d1, d2, ..., dr, dr+1, ..., dn)
= -dr+1(e11, e12, ..., e1r, -1, 0, 0, ..., 0)
- dr+2(e21, e22, ..., e2r, 0, -1, ..., 0)
- ... - dn(en-r1, en-r2, ..., en-rr, 0, 0, ..., -1)
= -dr+1ξ1 - dr+2ξ2 - ... - dnξn-r.
This shows that any solution vector ξ is a linear combination of ξ1, ξ2, ..., ξn-r and since these solution vectors ξ1, ξ2, ..., ξn-r are linearly independent, the rank (dimension) of the solution space is n - r.
Therefore rank of A + rank of X(A) = r + (n - r) = n and the theorem is done.

Note. ξ1, ξ2, ..., ξn-r are the basis vectors of the solution space of the homogeneous system. So any solution can be expressed as c1ξ1 + c2ξ2 + ... + cn-rξn-r, where ci's are arbitrary scalars. This is called the general solution of the homogeneous system.

Corollary. If the number of equations be less than the number of unknowns in a homogeneous system AX = O, then the system admits of a non-zero solution.
Proof. Let the order of A be m × n. Then m < n and rank of A < n.
As rank of A + rank of X(A) = n, we have rank of X(A) > 0 and this proves that there is a non-zero solution of the system.

Theorem 4.10.4. The homogeneous system AX = O containing n equations in n unknowns has a non-zero solution if and only if rank of A < n.
Proof. Let (c1, c2, ..., cn) be a non-zero solution of the system. Let A = (aij). Then
a11c1 + a12c2 + ... + a1ncn = 0
a21c1 + a22c2 + ... + a2ncn = 0
... ... ...
an1c1 + an2c2 + ... + anncn = 0.
Since (c1, c2, ..., cn) is non-zero, at least one of the components, say cj, is non-zero.
cj det A = | a11 a12 ... cja1j ... a1n / a21 a22 ... cja2j ... a2n / ... / an1 an2 ... cjanj ... ann |
= | a11 a12 ... c1a11 + c2a12 + ... + cja1j + ... + cna1n ... a1n / a21 a22 ... c1a21 + c2a22 + ... + cja2j + ... + cna2n ... a2n / ... / an1 an2 ... c1an1 + c2an2 + ... + cjanj + ... + cnann ... ann |
[by the column operation Cj → c1C1 + c2C2 + ... + cj-1Cj-1 + cjCj + cj+1Cj+1 + ... + cnCn]
= 0, since the jth column is the zero column.
Since cj ≠ 0, det A = 0 and this implies that rank of A < n.
Conversely, let rank of A < n. Let X(A) be the solution space of the homogeneous system. Then rank of X(A) + rank of A = n. Since rank of A < n, it follows that rank of X(A) > 0. This proves that there is a non-zero solution of the system.
Note. The number of solutions in this case is infinite.

Method of solution of a homogeneous system.
Let the system of equations be AX = O. The matrix A can be reduced by elementary row operations to a row reduced echelon matrix R, and the system RX = O is equivalent to AX = O. Let rank of A = r. Then, by suitable renaming of the unknowns if necessary, the equivalent system takes the form
x1 + b1r+1xr+1 + b1r+2xr+2 + ... + b1nxn = 0
x2 + b2r+1xr+1 + b2r+2xr+2 + ... + b2nxn = 0
... ... ...
xr + brr+1xr+1 + brr+2xr+2 + ... + brnxn = 0.
A solution can be obtained by choosing arbitrary scalars for xr+1, xr+2, ..., xn.
Let xr+1 = -c1, xr+2 = -c2, ..., xn = -cn-r.
Then the general solution of the system is
(c1b1r+1 + c2b1r+2 + ... + cn-rb1n, c1b2r+1 + c2b2r+2 + ... + cn-rb2n, ..., c1brr+1 + c2brr+2 + ... + cn-rbrn, -c1, -c2, ..., -cn-r)
= c1(b1r+1, b2r+1, ..., brr+1, -1, 0, 0, ..., 0)
+ c2(b1r+2, b2r+2, ..., brr+2, 0, -1, 0, ..., 0)
+ ...
+ cn-r(b1n, b2n, ..., brn, 0, 0, 0, ..., -1), where c1, c2, ..., cn-r are arbitrary scalars.

Worked Example (continued).
5. Solve the system of equations
x + 2y + z - 3w = 0
2x + 4y + 3z + w = 0
3x + 6y + 4z - 2w = 0.
This is a homogeneous system. Let A be the coefficient matrix of the system.
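The method just described is what a null-space computation carries out. For the system of Worked Example 5 the rank of A is 2, so the solution space has dimension 4 - 2 = 2, in line with Theorem 4.10.3; a sketch in sympy:

```python
from sympy import Matrix

# Worked Example 5: x + 2y + z - 3w = 0, 2x + 4y + 3z + w = 0,
# 3x + 6y + 4z - 2w = 0.
A = Matrix([[1, 2, 1, -3], [2, 4, 3, 1], [3, 6, 4, -2]])
basis = A.nullspace()       # basis vectors of the solution space
print(A.rank(), len(basis))   # 2 2
```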
Subcase (ii). Rank of Ã = rank of A < n.
The associated homogeneous system AX = O possesses infinitely many solutions and therefore the system AX = B possesses infinitely many solutions.

Method of solution of a non-homogeneous system.
Let the system of equations be AX = B, where A is an m × n matrix, and let rank of Ã = rank of A = r.
Let Ã be reduced by elementary row operations to the row echelon matrix R̃, which by suitable interchange of columns takes the form
( 1 0 ... 0 b1r+1 b1r+2 ... b1n d1 / 0 1 ... 0 b2r+1 b2r+2 ... b2n d2 / ... / 0 0 ... 1 brr+1 brr+2 ... brn dr / 0 0 ... 0 0 0 ... 0 0 / ... / 0 0 ... 0 0 0 ... 0 0 ).
Therefore, by suitable adjustment of renaming the unknowns, if necessary, the equivalent system is
x1 + b1r+1xr+1 + b1r+2xr+2 + ... + b1nxn = d1
x2 + b2r+1xr+1 + b2r+2xr+2 + ... + b2nxn = d2
... ... ...
xr + brr+1xr+1 + brr+2xr+2 + ... + brnxn = dr.
A solution can be obtained by choosing arbitrary scalars for xr+1, xr+2, ..., xn. Let xr+1 = -c1, xr+2 = -c2, ..., xn = -cn-r.
Then the general solution of the system is
(d1 + c1b1r+1 + c2b1r+2 + ... + cn-rb1n, d2 + c1b2r+1 + c2b2r+2 + ... + cn-rb2n, ..., dr + c1brr+1 + c2brr+2 + ... + cn-rbrn, -c1, -c2, ..., -cn-r)
= (d1, d2, ..., dr, 0, 0, ..., 0)
+ c1(b1r+1, b2r+1, ..., brr+1, -1, 0, 0, ..., 0)
+ c2(b1r+2, b2r+2, ..., brr+2, 0, -1, 0, ..., 0)
+ ...
+ cn-r(b1n, b2n, ..., brn, 0, 0, 0, ..., -1), where c1, c2, ..., cn-r are arbitrary scalars.
Note. The solution (d1, d2, ..., dr, 0, 0, ..., 0) is a particular solution of the system, obtained by taking xr+1 = xr+2 = ... = xn = 0.

Worked Examples (continued).
6. Solve, if possible,
(i) x + 2y + z - 3w = 1
2x + 4y + 3z + w = 3
3x + 6y + 4z - 2w = 5;
(ii) x + 2y + z - 3w = 1
2x + 4y + 3z + w = 3
3x + 6y + 4z - 2w = 4.
(i) This is a non-homogeneous system. The coefficient matrix of the system is A = ( 1 2 1 -3 / 2 4 3 1 / 3 6 4 -2 ) and the augmented matrix is Ã = ( 1 2 1 -3 1 / 2 4 3 1 3 / 3 6 4 -2 5 ).
Let us apply elementary row operations on Ã. Applying R2 - 2R1 and R3 - 3R1, then R1 - R2 and R3 - R2, we obtain
( 1 2 0 -10 0 / 0 0 1 7 1 / 0 0 0 0 1 ).
Here rank of Ã = 3 and rank of A = 2. Since rank of A ≠ rank of Ã, the system is inconsistent.
(ii) This is a non-homogeneous system. The coefficient matrix is the same A and the augmented matrix is Ã = ( 1 2 1 -3 1 / 2 4 3 1 3 / 3 6 4 -2 4 ).
Applying R2 - 2R1 and R3 - 3R1, then R1 - R2 and R3 - R2, we obtain
( 1 2 0 -10 0 / 0 0 1 7 1 / 0 0 0 0 0 ).
Here rank of Ã = rank of A = 2. So the system is consistent.
The given system is equivalent to
x + 2y - 10w = 0
z + 7w = 1.
Choosing y = c, w = d, where c, d are arbitrary real numbers, the solution is
(-2c + 10d, c, 1 - 7d, d) = (0, 0, 1, 0) + c(-2, 1, 0, 0) + d(10, 0, -7, 1).
7. Find the solution of the system of equations in integers:
x + 7y + 2z = 1
x + 2y + z = 1.
This is a non-homogeneous system. The augmented matrix of the system is
( 1 7 2 1 / 1 2 1 1 ).
Applying R2 - R1, then (-1/5)R2, then R1 - 7R2, we obtain
( 1 0 3/5 1 / 0 1 1/5 0 ).
The given system is equivalent to
x + (3/5)z = 1
y + (1/5)z = 0.
Choosing z = c, the solution is (1 - (3/5)c, -(1/5)c, c), where c ∈ R.
Since the solutions are to be in integers, c = 5k, where k is an arbitrary integer. Hence the solution is (1 - 3k, -k, 5k) = (1, 0, 0) + k(-3, -1, 5), k being an integer.

8. Determine the conditions for which the system of equations
x + y + z = 1
x + 2y - z = b
5x + 7y + az = b²
admits of (i) only one solution, (ii) no solution, (iii) many solutions.
The system has a unique solution if the coefficient determinant be non-zero.
The coefficient determinant = | 1 1 1 / 1 2 -1 / 5 7 a | = a - 1.
If a - 1 ≠ 0, i.e., if a ≠ 1, the system has only one solution.
If a = 1, the system has either no solution or many solutions.
When a = 1, the augmented matrix of the system is
( 1 1 1 1 / 1 2 -1 b / 5 7 1 b² ).
Applying R2 - R1 and R3 - 5R1, then R1 - R2 and R3 - 2R2, we obtain
( 1 0 3 2 - b / 0 1 -2 b - 1 / 0 0 0 b² - 2b - 3 ).
If b² - 2b - 3 = 0, then rank of Ã = rank of A = 2 and therefore the system is consistent.
If b² - 2b - 3 ≠ 0, then rank of Ã = 3, rank of A = 2 and since rank of A ≠ rank of Ã, the system is inconsistent.
Therefore if a = 1, b ≠ -1, 3, the system has no solution; and if a = 1, b = -1 or if a = 1, b = 3, the system has many solutions.

Exercises 10

1. Solve the equations
(i) x + y + 3z = 0
2x + y + z = 0
3x + 2y + 4z = 0;
(ii) x + y - z - w = 0
x - y + z - w = 0.
2. Find the solution of the system of equations in rational numbers.
(i) 2x + 3y + z = 0
4x + y - z = 0;
(ii) x + 4y + z = 0.
3. Find the solution of the system of equations in integers.
(i) x + 2y + z = 0
3x + y + 2z = 0;
(ii) x - 3y + 4z = 0
3x + y - 2z = 0.
4. Find a linear homogeneous equation in x1, x2, x3, x4 such that x1 = 1, x2 = 1, x3 = 1, x4 = 1; x1 = 1, x2 = -1, x3 = -1, x4 = 1 and x1 = 2, x2 = 3, x3 = 3, x4 = 2 are solutions of the equation.
5. Find a linear homogeneous system of two independent equations in x1, x2, x3, x4 such that x1 = 1, x2 = 2, x3 = 3, x4 = 4 and x1 = 2, x2 = 3, x3 = 4, x4 = 1 are solutions of the system.
6. Examine the solvability of the system of equations and solve, if possible.
(i) x + y + z = 1
2x + y + 2z = 1
x + 2y + 3z = 0;
(ii) x + y + z = 1
2x + y + 2z = 2
3x + 2y + 3z = 5.
7. For what values of a the system of equations is consistent? Solve completely in each consistent case.
(i) x - y + z = 1
x + y + z = 1
x + 4y + 6z = a + 1;
(ii) x + 2y + 4z = a
2x + 3y - z = 1
2x + y + 5z = a² + 1.
8. For what values of k the system of equations has a non-trivial solution? Solve in each case.
(i) x + y + z = kx
x + y + z = ky
x + y + z = kz;
(ii) x + 2y + 3z = kx
2x + y + 3z = ky
2x + 3y + z = kz.
9. Determine the conditions for which the system of equations has (a) only one solution, (b) no solution, (c) many solutions.
(i) x + 2y + z = 1
2x + y + 3z = b
x + ay + 3z = b + 1;
(ii) x + y + z = b
2x + y + 3z = b + 1
5x + 2y + az = b².
10. Solve the system of equations
x2 + x3 = a
x1 + x3 = b
x1 + x2 = c
and use the solution to find the inverse of the matrix A = ( 0 1 1 / 1 0 1 / 1 1 0 ).
11. Solve the system of equations
-x1 + x2 + x3 = a
x1 - x2 + x3 = b
x1 + x2 - x3 = c
and use the solution to find the inverse of the matrix A = ( -1 1 1 / 1 -1 1 / 1 1 -1 ).

4.11. Application to Geometry.

4.11.1. Intersection of two lines in Euclidean plane.
Let the lines be
a11x1 + a12x2 = b1,
a21x1 + a22x2 = b2.
Let A = ( a11 a12 / a21 a22 ), B = ( a11 a12 b1 / a21 a22 b2 ).
Let αi = (ai1, ai2), i = 1, 2, βi = (ai1, ai2, bi), i = 1, 2.
In order to investigate the nature of solutions of the given non-homogeneous system of equations, we are to consider the following cases.
Case 1. Rank of A = 2.
There is a unique solution of the system since det A ≠ 0. Therefore the lines intersect in a point.
Case 2. Rank of A = 1, rank of B = 2.
The system is inconsistent and therefore there is no solution of the system. The lines are parallel.
Case 3. Rank of A = 1, rank of B = 1.
The system is consistent and there are an infinite number of solutions since rank of A < 2.
Since rank of B = 1, β1, β2 are linearly dependent. Therefore β2 = cβ1 for some non-zero real number c. This shows that the two equations are identical and therefore the lines are coincident.

Examples.
1. The lines 2x + 3y = 3 and x + 2y = 1 intersect in a point, since the rank of the matrix ( 2 3 / 1 2 ) is 2.
-!
C
~
1
Let the planes be a11x1 + a12x2a13x3 = b1,
(i ) A = ( 1
a21X1 + a22x2a23x3 = ½.
180 HIGHER ALGEBRA VECTOR SPACES 181
Let A =( a11 a12 a13 ) B - ( au a12 a13 bi ) In this case, only one of 01 , 02 , ~ 3 , say a 1 is independ ent. Therefore
a21 a22 a23 ' - a21 a22 a23 b2 ' 0:2 = ca1, 0:3 = da1, c, d being non-zero real numbers.
a, = (an, a,2, Oi3), i = 1, 2,· (3i - (a il, a,2,
· ai3, bi ) , i· ==. 1 , 2 • The planes are perpendic ular to a common direction , the direction
The following cases come up. ·;ector being (au, a12, a13).
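The three rank cases of 4.11.1 can be turned directly into a small classifier. This is an illustrative sketch: the function name and the use of NumPy's matrix_rank are my own choices, not the text's.

```python
import numpy as np

def classify_lines(a11, a12, b1, a21, a22, b2):
    """Classify the lines a11*x + a12*y = b1 and a21*x + a22*y = b2
    by the ranks of the coefficient matrix A and the augmented matrix B,
    following the three cases in the text."""
    A = np.array([[a11, a12], [a21, a22]], dtype=float)
    B = np.array([[a11, a12, b1], [a21, a22, b2]], dtype=float)
    rA = np.linalg.matrix_rank(A)
    rB = np.linalg.matrix_rank(B)
    if rA == 2:
        return "intersect in a point"
    if rA == 1 and rB == 2:
        return "parallel"
    return "coincident"

print(classify_lines(2, 3, 3, 1, 2, 1))   # Example 1 of the text
print(classify_lines(1, 2, 1, 2, 4, 5))   # rank A = 1, rank B = 2
print(classify_lines(1, 2, 1, 2, 4, 2))   # rank A = rank B = 1
```

The same rank comparison extends verbatim to planes in space; only the shapes of A and B change.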
Therefore the planes (i) and (ii) intersect along a line parallel to the direction (l₁, l₂, l₃), by case 1 of 4.11.2.

Let α₃ = cα₁ + dα₂ where (c, d) ≠ (0, 0).
If c = 0, then α₃ = dα₂, d ≠ 0, but β₃ ≠ dβ₂. Therefore the planes (ii) and (iii) are parallel.
If d = 0, then α₃ = cα₁, c ≠ 0, but β₃ ≠ cβ₁. Therefore the planes (i) and (iii) are parallel.
If c ≠ 0, d ≠ 0 then α₁, α₃ and also α₂, α₃ are linearly independent in pairs. Therefore the planes (i) and (iii) intersect along a line parallel to the direction (l₁, l₂, l₃) and also the planes (ii) and (iii) intersect along a line parallel to the direction (l₁, l₂, l₃), by case 1 of 4.11.2.

Therefore either
(i) two of the planes are parallel and the third intersects them, or
(ii) the three planes intersect in pairs along three parallel lines and the planes form a prism, the axis of the prism being parallel to (l₁, l₂, l₃).

Case 3. Rank of A = 3.

6. Let the planes be x₁ + x₂ − 2x₃ = 3
2x₁ + 2x₂ − 4x₃ = 1
x₁ − 2x₂ + x₃ = 3.
Here rank of A = 2, rank of B = 3. The solution of the homogeneous system AX = 0 is c(1, 1, 1). The planes are all parallel to the direction (1, 1, 1). The first and the second planes are parallel. The third plane intersects the other two along lines parallel to the direction (1, 1, 1).

7. Let the planes be 5x₁ + 3x₂ + 7x₃ = 4
2x₁ + x₂ + 3x₃ = 1
7x₁ + 3x₂ + 11x₃ = 2.
Here rank of A = 2, rank of B = 2.
The system of equations is consistent. It is equivalent to
x₁ + 2x₃ = −1
x₂ − x₃ = 3.
Taking x₃ = r, the general solution is (−1 − 2r, 3 + r, r). Therefore x₁ = −1 − 2r, x₂ = 3 + r, x₃ = r.
Hence the planes intersect along the line
(x₁ + 1)/(−2) = (x₂ − 3)/1 = (x₃ − 0)/1 = r.

For what values of k do the planes
(i) x + y + z = 2, 3x + y − 2z = k and 2x + 4y + 7z = k + 1;
(ii) x + y + 1 = 0, 4x + y − z = k and 5x − y − 2z = k²
form a triangular prism?
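Worked example 7 above can be checked numerically. The sketch below assumes NumPy and verifies both the ranks and the stated general solution (−1 − 2r, 3 + r, r).

```python
import numpy as np

# Planes of worked example 7.
A = np.array([[5, 3, 7], [2, 1, 3], [7, 3, 11]], dtype=float)
b = np.array([4, 1, 2], dtype=float)

print(np.linalg.matrix_rank(A))                        # 2
print(np.linalg.matrix_rank(np.column_stack([A, b])))  # 2, so consistent

# General solution from the text: (x1, x2, x3) = (-1 - 2r, 3 + r, r).
for r in (0.0, 1.0, -2.5):
    x = np.array([-1 - 2 * r, 3 + r, r])
    assert np.allclose(A @ x, b)
print("general solution verified")
```

Equal ranks of the coefficient and augmented matrices confirm consistency; the one-parameter family confirms that the three planes meet in a common line.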
(ii) ||α|| > 0 unless α = θ, and ||θ|| = 0.

Proof. (i) ||cα|| = √(cα, cα) = √(c²(α, α)) = |c| √(α, α) = |c| ||α||.
(ii) α ≠ θ implies (α, α) > 0 and therefore ||α|| > 0.
If α = θ, then (α, α) = (θ, θ) = 0 and therefore ||α|| = 0.

4.12.2. Schwarz's inequality.
For any two vectors α, β in a Euclidean space V,
|(α, β)| ≤ ||α|| ||β||,
the equality holds when α, β are linearly dependent.

Proof. Case 1. Let one or both of α, β be null. Then both sides being zero, the equality holds.

Case 2. Let α, β be non-null and linearly dependent. Then there exists a non-zero real number k such that α = kβ.
Then ||α|| = |k| ||β|| and (α, β) = (kβ, β) = k(β, β) = k||β||².
Therefore |(α, β)| = |k| ||β||² = ||α|| ||β||.

Case 3. Let α, β be not linearly dependent. Then α − kβ ≠ θ for all real k.
Therefore (α − kβ, α − kβ) > 0 for all real k,
or, (α, α) − 2k(α, β) + k²(β, β) > 0 for all real k.
Since (α, α), (α, β), (β, β) are all real and (β, β) ≠ 0, the left-hand side is a real quadratic polynomial in k and since it is positive for all real values of k, the discriminant of the quadratic polynomial must be negative, for otherwise the polynomial would be zero for some real k.
Thus (α, β)² − (α, α)(β, β) < 0, whence |(α, β)| < ||α|| ||β||.
This completes the proof.

If α, β be both non-null, then α = kβ for some non-zero real k.
In this case, ||α|| = |k| ||β|| and (α, β) = (kβ, β) = k(β, β) = k||β||².
Therefore |(α, β)| = |k| ||β||² = ||α|| ||β||.
Conversely, let |(α, β)| = ||α|| ||β||.
α, β are linearly independent implies |(α, β)| < ||α|| ||β||, by case 3 of the previous theorem.
Contrapositively, |(α, β)| not less than ||α|| ||β|| implies α, β are linearly dependent. But by Schwarz's inequality, |(α, β)| ≤ ||α|| ||β|| for all α, β in V. Therefore |(α, β)| = ||α|| ||β|| implies α, β are linearly dependent.
This completes the proof.

Note. α, β are linearly dependent implies |(α, β)| = ||α|| ||β||.
But (i) α, β are linearly dependent may not imply (α, β) = ||α|| ||β||. For example, let α = (1, 2, 3), β = (−2, −4, −6);
(ii) α, β are linearly dependent may not imply (α, β) = −||α|| ||β||. For example, let α = (1, 2, 3), β = (2, 4, 6).

4.12.4. Triangle inequality.
If α, β be any two vectors in a Euclidean space V, then
||α + β|| ≤ ||α|| + ||β||.

Proof. From the properties of an inner product,
||α + β||² = (α + β, α + β) = (α, α) + 2(α, β) + (β, β)
= ||α||² + 2(α, β) + ||β||²
≤ ||α||² + 2||α|| ||β|| + ||β||², by Schwarz's inequality,
= (||α|| + ||β||)².
Therefore ||α + β|| ≤ ||α|| + ||β||.
This completes the proof.
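Both inequalities are easy to exercise numerically. A quick randomized check in NumPy (small tolerances added only to absorb floating-point rounding):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    a = rng.normal(size=4)
    b = rng.normal(size=4)
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    assert abs(a @ b) <= na * nb + 1e-12                 # Schwarz's inequality
    assert np.linalg.norm(a + b) <= na + nb + 1e-12      # triangle inequality

# Equality in Schwarz's inequality for linearly dependent vectors:
a = np.array([1.0, 2.0, 3.0])
b = -2 * a
assert np.isclose(abs(a @ b), np.linalg.norm(a) * np.linalg.norm(b))
print("both inequalities verified")
```

The dependent pair at the end also illustrates note (i) above: (α, β) here is negative, yet |(α, β)| still attains ||α|| ||β||.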
Note. ||α + β|| = ||α|| + ||β|| implies (α, β) = ||α|| ||β|| and this again implies α, β are linearly dependent.
Since α, β are linearly dependent may not imply (α, β) = ||α|| ||β||, ||α + β|| may not be equal to ||α|| + ||β|| if α, β are linearly dependent. For example, let α = (1, 2, 3), β = (−1, −2, −3).

Definitions.
A vector α in a Euclidean space V is said to be a unit vector if ||α|| = 1. If α be a non-zero vector in V, then (1/||α||)α is a unit vector.
In a Euclidean space, a vector α is said to be orthogonal to a vector β if (α, β) = 0. Since (α, β) = (β, α), if α be orthogonal to β then β is orthogonal to α. In this case, α, β are said to be orthogonal.
The null vector θ is orthogonal to any non-null vector α and also it is orthogonal to itself. This follows from the property of an inner product.

4.12.5. Pythagoras theorem.
If α, β be two orthogonal vectors in a Euclidean space V, then
||α + β||² = ||α||² + ||β||².

Proof. ||α + β||² = (α + β, α + β) = (α, α) + 2(α, β) + (β, β)
= ||α||² + ||β||², since (α, β) = 0.
This completes the proof.

4.12.6. Parallelogram law.
If α, β be any two vectors in a Euclidean space V, then
||α + β||² + ||α − β||² = 2||α||² + 2||β||².

Proof. ||α + β||² = (α + β, α + β) = (α, α) + 2(α, β) + (β, β).
||α − β||² = (α − β, α − β) = (α, α) − 2(α, β) + (β, β).
Therefore ||α + β||² + ||α − β||² = 2(α, α) + 2(β, β) = 2||α||² + 2||β||².
This completes the proof.

Definitions.
A set of vectors {β₁, β₂, ..., βᵣ} in a Euclidean space is said to be orthogonal if (βᵢ, βⱼ) = 0 whenever i ≠ j.
A set of vectors {β₁, β₂, ..., βᵣ} in a Euclidean space is said to be orthonormal if (βᵢ, βⱼ) = 0 for i ≠ j and = 1 for i = j.

Note. An orthogonal set of vectors may contain the null vector θ but an orthonormal set contains only non-null vectors.

Example 6. In an n × n real orthogonal matrix, the row vectors form an orthonormal set and the column vectors form another orthonormal set in ℝⁿ with standard inner product.

Let A be a real n × n orthogonal matrix. Then AAᵗ = Iₙ. Let α₁, α₂, ..., αₙ be the row vectors of A.
Then the column vectors of Aᵗ are α₁, α₂, ..., αₙ.
The ijth element of AAᵗ
= the inner product of the ith row vector of A and the jth column vector of Aᵗ
= (αᵢ, αⱼ).
Since AAᵗ = Iₙ, (αᵢ, αⱼ) = 0 if i ≠ j and = 1 if i = j.
This proves that {α₁, α₂, ..., αₙ} is an orthonormal set.
In a similar manner it can be shown that the column vectors of A form an orthonormal set.

Note. The row vectors of Iₙ, i.e., the vectors ε₁, ε₂, ..., εₙ, form an orthonormal set in ℝⁿ with standard inner product.

Theorem 4.12.7. An orthogonal set of non-null vectors in a Euclidean space V is linearly independent.

Proof. Let {β₁, β₂, ..., βᵣ} be an orthogonal set of non-null vectors. Let us consider the relation
c₁β₁ + c₂β₂ + ··· + cᵣβᵣ = θ, where cᵢ are real numbers.
Then (c₁β₁ + c₂β₂ + ··· + cᵣβᵣ, βᵢ) = (θ, βᵢ) = 0 for i = 1, 2, ..., r,
or, c₁(β₁, βᵢ) + c₂(β₂, βᵢ) + ··· + cᵢ(βᵢ, βᵢ) + ··· + cᵣ(βᵣ, βᵢ) = 0,
or, cᵢ(βᵢ, βᵢ) = 0, since (βⱼ, βᵢ) = 0 for j ≠ i.
Since βᵢ is non-null, (βᵢ, βᵢ) > 0 and therefore cᵢ = 0.
This proves that the set {β₁, β₂, ..., βᵣ} is linearly independent.

Corollary. An orthonormal set of vectors in a Euclidean space is linearly independent.

Definitions.
Let β be a fixed non-zero vector in a Euclidean space V. Then for a non-zero vector α in V there exists a unique real number c such that α − cβ is orthogonal to β.
c is determined by (α − cβ, β) = 0. Therefore (α, β) = c(β, β), giving c = (α, β)/(β, β).
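Example 6 can be illustrated with any concrete orthogonal matrix. Here a 2 × 2 rotation matrix (my choice of example, not the text's) is checked with NumPy: its rows and columns both form orthonormal sets.

```python
import numpy as np

# A rotation matrix is a real orthogonal matrix: A A^t = I_n.
t = 0.7
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

assert np.allclose(A @ A.T, np.eye(2))

# Gram matrices: entry (i, j) is (alpha_i, alpha_j), which must be
# 0 for i != j and 1 for i = j, for rows and for columns alike.
G_rows = A @ A.T
G_cols = A.T @ A
assert np.allclose(G_rows, np.eye(2)) and np.allclose(G_cols, np.eye(2))
print("rows and columns form orthonormal sets")
```

Computing the Gram matrix is exactly the argument of the text: AAᵗ collects all the inner products (αᵢ, αⱼ) at once.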
Proof. Since {β₁, β₂, ..., βᵣ} is an orthogonal set of non-null vectors, it is linearly independent and therefore it is a basis of the subspace L{β₁, β₂, ..., βᵣ}.
Therefore β can be expressed as β = c₁β₁ + c₂β₂ + ··· + cᵣβᵣ, where cᵢ are unique real numbers.
Now (β, βᵢ) = cᵢ(βᵢ, βᵢ), since (βⱼ, βᵢ) = 0 for j ≠ i.
So cᵢ = (β, βᵢ)/(βᵢ, βᵢ) = the scalar component of β along βᵢ.

Note. Every vector β in a Euclidean space V is the sum of its projections along the vectors of an orthogonal basis of V.

Corollary. If {β₁, β₂, ..., βᵣ} be an orthonormal set of vectors in a Euclidean space V, any vector β in L{β₁, β₂, ..., βᵣ} can be expressed as
β = (β, β₁)β₁ + (β, β₂)β₂ + ··· + (β, βᵣ)βᵣ.

Worked Example.
1. Prove that the set of vectors {(1, 2, 2), (2, −2, 1), (2, 1, −2)} is an orthogonal basis of the Euclidean space ℝ³ with standard inner product. Express (4, 3, 2) as a linear combination of these basis vectors.

Let β₁ = (1, 2, 2), β₂ = (2, −2, 1), β₃ = (2, 1, −2).
Then (β₁, β₂) = 0, (β₂, β₃) = 0, (β₃, β₁) = 0.
So {β₁, β₂, β₃} is an orthogonal set of non-zero vectors and therefore it is linearly independent. Since ℝ³ is a vector space of dimension 3, {β₁, β₂, β₃} is a basis of ℝ³.
Let β = (4, 3, 2). So β = c₁β₁ + c₂β₂ + c₃β₃, where cᵢ is the component of β along βᵢ.
c₁ = (β, β₁)/(β₁, β₁) = (4·1 + 3·2 + 2·2)/9 = 14/9,
c₂ = (β, β₂)/(β₂, β₂) = (4·2 + 3·(−2) + 2·1)/9 = 4/9,
c₃ = (β, β₃)/(β₃, β₃) = (4·2 + 3·1 + 2·(−2))/9 = 7/9.
Therefore β = (14/9)β₁ + (4/9)β₂ + (7/9)β₃.

Theorem 4.12.9. Bessel's inequality.
If {β₁, β₂, ..., βᵣ} be an orthonormal set of vectors in a Euclidean space V and cᵢ be the scalar component of a vector α along βᵢ, then
||α||² ≥ c₁² + c₂² + ··· + cᵣ².

Proof. For all i (i = 1, 2, ..., r), cᵢ = (α, βᵢ), since (βᵢ, βᵢ) = 1.
α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ is orthogonal to each βᵢ, 1 ≤ i ≤ r, since
(α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ, βᵢ) = (α, βᵢ) − cᵢ(βᵢ, βᵢ) = 0.
It follows that α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ is orthogonal to c₁β₁ + c₂β₂ + ··· + cᵣβᵣ.
α = (α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ) + (c₁β₁ + c₂β₂ + ··· + cᵣβᵣ).
By Pythagoras' theorem,
||α||² = ||α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ||² + ||c₁β₁ + c₂β₂ + ··· + cᵣβᵣ||²
≥ ||c₁β₁ + c₂β₂ + ··· + cᵣβᵣ||², since a norm is ≥ 0.
But ||c₁β₁ + c₂β₂ + ··· + cᵣβᵣ||²
= (c₁β₁ + c₂β₂ + ··· + cᵣβᵣ, c₁β₁ + c₂β₂ + ··· + cᵣβᵣ)
= c₁² + c₂² + ··· + cᵣ², since {β₁, β₂, ..., βᵣ} is an orthonormal set.
Consequently, ||α||² ≥ c₁² + c₂² + ··· + cᵣ².
This completes the proof.

Note. The equality occurs if ||α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ||² = 0, i.e., if α = c₁β₁ + c₂β₂ + ··· + cᵣβᵣ, i.e., if α ∈ L{β₁, β₂, ..., βᵣ}.

Theorem 4.12.10. Parseval's theorem.
If {β₁, β₂, ..., βₙ} be an orthonormal basis of a Euclidean space V, then for any vector α in V,
||α||² = c₁² + c₂² + ··· + cₙ²,
where cᵢ is the scalar component of α along βᵢ, i = 1, 2, ..., n.

Proof. Since {β₁, β₂, ..., βₙ} is a basis of V, any vector α ∈ V can be expressed as α = c₁β₁ + c₂β₂ + ··· + cₙβₙ, where cᵢ is the scalar component of α along βᵢ, i = 1, 2, ..., n.
So ||α||² = (α, α) = (c₁β₁ + ··· + cₙβₙ, c₁β₁ + ··· + cₙβₙ)
= c₁² + c₂² + ··· + cₙ², since {β₁, β₂, ..., βₙ} is an orthonormal set.
This completes the proof.
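Both the worked example's expansion and Parseval's theorem can be checked numerically with NumPy:

```python
import numpy as np

# Orthogonal basis and vector from the worked example above.
b1 = np.array([1.0, 2.0, 2.0])
b2 = np.array([2.0, -2.0, 1.0])
b3 = np.array([2.0, 1.0, -2.0])
beta = np.array([4.0, 3.0, 2.0])

# Scalar components c_i = (beta, b_i) / (b_i, b_i).
c = [(beta @ b) / (b @ b) for b in (b1, b2, b3)]
assert np.allclose(c, [14 / 9, 4 / 9, 7 / 9])
assert np.allclose(c[0] * b1 + c[1] * b2 + c[2] * b3, beta)

# Parseval's theorem for the normalised (orthonormal) basis:
# ||beta||^2 equals the sum of squared components along unit vectors.
u = [b / np.linalg.norm(b) for b in (b1, b2, b3)]
assert np.isclose(sum((beta @ e) ** 2 for e in u), beta @ beta)
print("expansion and Parseval verified")
```

Each βᵢ here has norm 3, so the components along the unit vectors are 3cᵢ, and the sum of their squares is indeed ||β||² = 29.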
Theorem 4.12.11. Let {β₁, β₂, ..., βᵣ} be an orthogonal set of non-null vectors in a Euclidean space V and α be a vector in V − L{β₁, β₂, ..., βᵣ}. If the scalar component of α along βᵢ be cᵢ, then α − c₁β₁ − c₂β₂ − ··· − cᵣβᵣ is a non-null vector orthogonal to each of β₁, β₂, ..., βᵣ.

Proof. Let dim V = n and let {β₁, β₂, ..., βᵣ} be an orthogonal set of non-null vectors in V, where 1 ≤ r ≤ n. So {β₁, β₂, ..., βᵣ} is a linearly independent set.
If r = n, then {β₁, β₂, ..., βᵣ} is an orthogonal basis of V.
If r < n, then L{β₁, β₂, ..., βᵣ} is a proper subspace of V and so there exists a vector αᵣ₊₁ in V such that αᵣ₊₁ ∉ L{β₁, β₂, ..., βᵣ}. We assert that the set {β₁, β₂, ..., βᵣ, αᵣ₊₁} is linearly independent.

Let β = α₃ − c₁α₁ − c₂α₂, where c₁ = (α₃, α₁)/(α₁, α₁), c₂ = (α₃, α₂)/(α₂, α₂).
Then β is orthogonal to α₁ and α₂ and L{α₁, α₂, α₃} = L{α₁, α₂, β}.
Therefore {α₁, α₂, β} is an orthogonal basis of ℝ³.
β₁ = α₁.
Let β₂ = α₂ − c₁β₁, where c₁β₁ is the projection of α₂ upon β₁.
Then β₂ is orthogonal to β₁ and L{β₁, β₂} = L{β₁, α₂} = L{α₁, α₂}.
β₂ = α₂ − ((α₂, β₁)/(β₁, β₁))β₁.
α₃ ∉ L{β₁, β₂}. Let β₃ = α₃ − d₁β₁ − d₂β₂, where d₁β₁, d₂β₂ are the projections of α₃ upon β₁, β₂ respectively.
Then β₃ is orthogonal to β₁, β₂ and L{β₁, β₂, β₃} = L{β₁, β₂, α₃} = L{α₁, α₂, α₃}.
β₃ = α₃ − ((α₃, β₁)/(β₁, β₁))β₁ − ((α₃, β₂)/(β₂, β₂))β₂.
α₄ ∉ L{β₁, β₂, β₃}. Let β₄ = α₄ − r₁β₁ − r₂β₂ − r₃β₃, where r₁β₁, r₂β₂, r₃β₃ are the projections of α₄ upon β₁, β₂, β₃ respectively.
Then β₄ is orthogonal to β₁, β₂, β₃ and L{β₁, β₂, β₃, β₄} = L{β₁, β₂, β₃, α₄} = L{α₁, α₂, α₃, α₄}.
β₄ = α₄ − ((α₄, β₁)/(β₁, β₁))β₁ − ((α₄, β₂)/(β₂, β₂))β₂ − ((α₄, β₃)/(β₃, β₃))β₃.
This process terminates after a finite number of steps because at every step one vector of the given basis is replaced by a vector in the desired orthogonal basis. Finally we obtain
βᵣ = αᵣ − ((αᵣ, β₁)/(β₁, β₁))β₁ − ((αᵣ, β₂)/(β₂, β₂))β₂ − ··· − ((αᵣ, βᵣ₋₁)/(βᵣ₋₁, βᵣ₋₁))βᵣ₋₁,
and {β₁, β₂, ..., βᵣ} is an orthogonal basis of the subspace W.
This completes the proof.

Therefore β₃ = (1, 3, 4) − (5/2)(1, 0, 1) − 3(0, 1, 0) = (3/2)(−1, 0, 1).
Therefore an orthogonal basis is {(1, 0, 1), (0, 1, 0), (3/2)(−1, 0, 1)}.

4. Use Gram-Schmidt process to obtain an orthonormal basis of the subspace of the Euclidean space ℝ⁴ with standard inner product, generated by the linearly independent set {(1, 1, 0, 1), (1, 1, 0, 0), (0, 1, 0, 1)}.

Let α₁ = (1, 1, 0, 1), α₂ = (1, 1, 0, 0), α₃ = (0, 1, 0, 1).
Let β₁ = α₁ and β₂ = α₂ − c₁β₁, where c₁β₁ is the projection of α₂ upon β₁.
Then β₂ is orthogonal to β₁ and L{β₁, β₂} = L{β₁, α₂} = L{α₁, α₂}.
c₁ = (α₂, β₁)/(β₁, β₁) = 2/3.
β₂ = α₂ − (2/3)α₁ = (1, 1, 0, 0) − (2/3)(1, 1, 0, 1) = (1/3)(1, 1, 0, −2).
Let β₃ = α₃ − d₁β₁ − d₂β₂, where d₁β₁, d₂β₂ are projections of α₃ upon β₁ and β₂ respectively.
Then β₃ is orthogonal to β₁, β₂ and L{β₁, β₂, β₃} = L{α₁, α₂, α₃}.
d₁ = (α₃, β₁)/(β₁, β₁) = 2/3, d₂ = (α₃, β₂)/(β₂, β₂) = −1/2.
β₃ = (0, 1, 0, 1) − (2/3)(1, 1, 0, 1) + (1/6)(1, 1, 0, −2) = (1/2)(−1, 1, 0, 0).
Therefore an orthogonal basis of the subspace is
{(1, 1, 0, 1), (1/3)(1, 1, 0, −2), (1/2)(−1, 1, 0, 0)}
and the corresponding orthonormal basis is
{(1/√3)(1, 1, 0, 1), (1/√6)(1, 1, 0, −2), (1/√2)(−1, 1, 0, 0)}.
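The construction in the proof translates line for line into code. This sketch (the function name is my own) reproduces worked example 4 above.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthogonalise a linearly independent list, as in the text:
    beta_k = alpha_k - sum_j ((alpha_k, beta_j)/(beta_j, beta_j)) beta_j."""
    basis = []
    for a in vectors:
        b = a - sum(((a @ q) / (q @ q)) * q for q in basis)
        basis.append(b)
    return basis

alphas = [np.array(v, dtype=float)
          for v in [(1, 1, 0, 1), (1, 1, 0, 0), (0, 1, 0, 1)]]
betas = gram_schmidt(alphas)
orthonormal = [b / np.linalg.norm(b) for b in betas]

# Matches the worked example: (1,1,0,1), (1/3)(1,1,0,-2), (1/2)(-1,1,0,0).
assert np.allclose(betas[0], [1, 1, 0, 1])
assert np.allclose(betas[1], np.array([1, 1, 0, -2]) / 3)
assert np.allclose(betas[2], np.array([-1, 1, 0, 0]) / 2)
print("Gram-Schmidt reproduces the worked example")
```

In floating-point practice a QR factorisation is the numerically stabler route to the same orthonormal basis; the loop above mirrors the textbook derivation instead.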
4.13. Orthogonal complement of a subspace.

Theorem 4.13.1. In a Euclidean space V if a vector be orthogonal to a set of vectors, then it is orthogonal to every vector belonging to the subspace spanned by the set of vectors.

Proof. Let α ∈ V and α be orthogonal to the vectors β₁, β₂, ..., βᵣ in V.
Then (α, β₁) = 0, (α, β₂) = 0, ..., (α, βᵣ) = 0.
Let P = L{β₁, β₂, ..., βᵣ} and ξ ∈ P. Then ξ = c₁β₁ + c₂β₂ + ··· + cᵣβᵣ for cᵢ ∈ ℝ.
(α, ξ) = (α, c₁β₁) + ··· + (α, cᵣβᵣ)
= c₁(α, β₁) + c₂(α, β₂) + ··· + cᵣ(α, βᵣ)
= 0, since (α, βᵢ) = 0 for i = 1, 2, ..., r.
This proves that α is orthogonal to every vector of P.

Note. In this case α is said to be orthogonal to the subspace P.

Theorem 4.13.2. Let P be a subspace of a finite dimensional Euclidean space V. A vector α in V is orthogonal to P if and only if α is orthogonal to every vector of a generating set of P.

Proof. Let {β₁, β₂, ..., βᵣ} be a set of generators of P.
Let α be orthogonal to P. Then α is orthogonal to every vector of P and therefore α is orthogonal to each βᵢ, i = 1, 2, ..., r.
Conversely, let α be orthogonal to each βᵢ. By the previous theorem, α is orthogonal to L{β₁, β₂, ..., βᵣ}, i.e., α is orthogonal to P.

Theorem 4.13.3. Let P be a subspace of a finite dimensional Euclidean space V. The set of all vectors in V which are orthogonal to P is a subspace of V.

Proof. Let S be the set. Since θ in V is orthogonal to every vector in P, θ ∈ S and therefore S is non-empty.
Case 1. S = {θ}. Then S is a subspace of V.
Case 2. Let S ≠ {θ} and let α ∈ S.
Let {β₁, β₂, ..., βᵣ} be a set of generators of P. Since α is orthogonal to P, α is orthogonal to each of β₁, β₂, ..., βᵣ.
Then (α, βᵢ) = 0 for i = 1, 2, ..., r and this implies (cα, βᵢ) = 0 for all c ∈ ℝ and for i = 1, 2, ..., r.
This shows cα is orthogonal to L{β₁, β₂, ..., βᵣ}, i.e., cα is orthogonal to P for all c ∈ ℝ.
Therefore cα ∈ S for all c ∈ ℝ. ... (i)
Let α, β ∈ S. Then (α, βᵢ) = 0 and (β, βᵢ) = 0 for i = 1, 2, ..., r.
This implies (α + β, βᵢ) = 0 for i = 1, 2, ..., r.
This shows α + β is orthogonal to L{β₁, β₂, ..., βᵣ}, i.e., α + β is orthogonal to P. Therefore α ∈ S, β ∈ S implies α + β ∈ S. ... (ii)
From (i) and (ii) it follows that S is a subspace of V.

Note. This subspace S is denoted by P⊥.

Theorem 4.13.4. Let P be a subspace of a finite dimensional Euclidean space V. Then V = P ⊕ P⊥.

Proof. Case 1. Let P = {θ}. Then P⊥ = V and the theorem is obvious.
Case 2. Let P ≠ {θ} and let {β₁, β₂, ..., βᵣ} be an orthogonal basis of P.
This can be extended to an orthogonal basis {β₁, β₂, ..., βᵣ, βᵣ₊₁, ..., βₙ} of V.
βᵣ₊₁ is orthogonal to each of β₁, β₂, ..., βᵣ.
Therefore βᵣ₊₁ ∈ P⊥. Similarly, βᵣ₊₂, ..., βₙ ∈ P⊥.
Since {βᵣ₊₁, βᵣ₊₂, ..., βₙ} is an orthogonal set, it is linearly independent in V and this being therefore linearly independent in P⊥ is either a basis of P⊥, or can be extended to a basis of P⊥.
Therefore n − r ≤ dim P⊥ < n.
P and P⊥ being subspaces of V, P + P⊥ is a subspace of V and furthermore P ∩ P⊥ = {θ}, since θ is the only vector in V orthogonal to itself.
The relation dim(P + P⊥) = dim P + dim P⊥ − dim(P ∩ P⊥) gives
dim(P + P⊥) = dim P + dim P⊥ ≥ r + (n − r), i.e., ≥ n. ... (i)
Again, P + P⊥ being a subspace of V, dim(P + P⊥) ≤ n. ... (ii)
From (i) and (ii), dim(P + P⊥) = n and this implies P + P⊥ = V.
Hence V = P ⊕ P⊥ since P + P⊥ = V and P ∩ P⊥ = {θ}.
This completes the proof.

Note. {βᵣ₊₁, βᵣ₊₂, ..., βₙ} is an orthogonal basis of P⊥.

Definition. The subspace P⊥ is said to be the orthogonal complement of P in V.

P⊥ is uniquely determined by P, since it is the set of all vectors in V orthogonal to P. Thus although a given subspace P may have many complements in V, its orthogonal complement P⊥ is unique.
Thus a finite dimensional Euclidean space V is the direct sum of any subspace of V and its orthogonal complement in V.
where k ∈ ℝ. Taking a₂ = k, we have a₁ = a₃ = −k and therefore γ = k(−1, 1, −1).
So P⊥ is the subspace generated by the vector (−1, 1, −1).

2. A is a real m × n matrix. Show that the solution space of the system of equations AX = 0 is the orthogonal complement of the row space of A in the Euclidean space ℝⁿ with standard inner product.

Let A = (aᵢⱼ)ₘ,ₙ, aᵢⱼ ∈ ℝ and α₁, α₂, ..., αₘ be the row vectors of A. The row space P = L{α₁, α₂, ..., αₘ}, a subspace of ℝⁿ.
Let Q be the solution space of AX = 0 and let ξ = (t₁, t₂, ..., tₙ) be a solution of the system.
Then aᵢ₁t₁ + aᵢ₂t₂ + ··· + aᵢₙtₙ = 0 for i = 1, 2, ..., m, i.e., (αᵢ, ξ) = 0 for i = 1, 2, ..., m.
Therefore ξ is orthogonal to each αᵢ and therefore ξ is orthogonal to P, i.e., ξ ∈ P⊥. Thus ξ ∈ Q implies ξ ∈ P⊥. Therefore Q ⊂ P⊥. ... (i)
Let η = (u₁, u₂, ..., uₙ) ∈ P⊥. Then η is orthogonal to each of the generators α₁, α₂, ..., αₘ of P.
Therefore aᵢ₁u₁ + aᵢ₂u₂ + ··· + aᵢₙuₙ = 0 for i = 1, 2, ..., m.
This shows that η is a solution of the system, i.e., η ∈ Q.
Thus η ∈ P⊥ implies η ∈ Q. Therefore P⊥ ⊂ Q. ... (ii)
From (i) and (ii) P⊥ = Q. That is, the solution space of the system AX = 0 is the orthogonal complement of the row space of A.

Note. Since P ⊕ P⊥ = ℝⁿ, we have P ⊕ Q = ℝⁿ.
Therefore dim P + dim Q = n, i.e., rank of A + dim X(A) = n, where X(A) is the solution space.

3. P is a subspace of a Euclidean space V of finite dimension. Prove that P⊥⊥ = P.

Let dim V = n. Let {β₁, β₂, ..., βᵣ} be an orthogonal basis of P and let {β₁, ..., βᵣ, βᵣ₊₁, ..., βₙ} be an extended orthogonal basis of V.
P = L{β₁, β₂, ..., βᵣ}. Since P ⊕ P⊥ = V, P⊥ = L{βᵣ₊₁, ..., βₙ}.
Since P⊥⊥ is the orthogonal complement of P⊥ in V and P⊥ = L{βᵣ₊₁, ..., βₙ}, we have P⊥⊥ = L{β₁, β₂, ..., βᵣ}. Therefore P⊥⊥ = P.

Exercises.

1. Examine if (α, β) defines an inner product of ℝ³, where α = (a₁, a₂, a₃), β = (b₁, b₂, b₃) and
(i) (α, β) = a₁b₁ + a₂b₂ + a₃b₃;
(ii) (α, β) = (a₁ + a₂ + a₃)(b₁ + b₂ + b₃);
(iii) (α, β) = a₁b₁ + (a₂ + a₃)(b₂ + b₃);
(iv) (α, β) = a₁b₁ + (a₂ + a₃)(b₂ + b₃) + a₃b₃.

2. Prove that for vectors α, β in a Euclidean space V,
(i) (α, β) = 0 if and only if ||α + β|| = ||α − β||;
(ii) (α, β) = 0 if and only if ||α + β||² = ||α||² + ||β||²;
(iii) (α + β, α − β) = 0 if and only if ||α|| = ||β||.

3. Prove that the set of vectors {(2, 3, −1), (1, −2, −4), (2, −1, 1)} is an orthogonal basis of the Euclidean space ℝ³ with standard inner product. Find the projections of the vector α = (1, 1, 1) along these basis vectors and verify that α is the sum of its projections along these basis vectors.

4. Use Gram-Schmidt process to convert the given basis of the Euclidean space ℝ³ with standard inner product into an orthogonal basis.
(i) {(1, 2, −2), (2, 0, 1), (1, 1, 0)};  (ii) {(1, 1, 0), (0, 1, 1), (1, 0, 1)}.

5. Apply Gram-Schmidt process to find an orthonormal basis for the Euclidean space ℝ³ with standard inner product, that contains the vectors
(i) (1/√2, −1/√2, 0);  (ii) (1/√3, 1/√3, −1/√3), (2/√6, −1/√6, 1/√6).

6. Apply Gram-Schmidt process to obtain an orthonormal basis of the subspace of the Euclidean space ℝ⁴ with standard inner product, spanned by the vectors
(i) (1, 1, 0, 1), (1, −2, 0, 0), (1, 0, −1, 2);  (ii) (1, 1, 1, 1), (1, 1, −1, −1), (1, 2, 0, 2).

7. Find an orthonormal basis of the row space of the matrix:
(i) …  (ii) …

8. Find the orthogonal complement of the row space of the matrix:
(i) …  (ii) …
[Hint. The solution space of the system of equations AX = 0, with the given matrix A as the coefficient matrix, is the orthogonal complement of the row space of A.]
4.14. Matric polynomials.

Let us consider a 2 × 2 matrix
A = ( x³ + x + 1   x² + 2x ; 3x³ + x   4x² + 3 )
whose elements are real polynomials in x. A can be expressed as the polynomial in x
A = ( 1 0 ; 3 0 )x³ + ( 0 1 ; 0 4 )x² + ( 1 2 ; 1 0 )x + ( 1 0 ; 0 3 ),
whose coefficients are real matrices of order 2 × 2.

G(x) = ( … ) + ( … )x.
Then F(x) + G(x) = ( … ) + ( … )x, and
F(x)G(x) = ( … )( … ) + [ ( … )( … ) + ( … )( … ) ]x + ( … )x².
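The expansion of A into matrix coefficients can be checked by evaluating the matric polynomial at a few scalar values. A NumPy sketch; the Horner-style evaluate helper is my own, not the text's.

```python
import numpy as np

# Coefficient matrices read off from the expansion of A above.
C3 = np.array([[1, 0], [3, 0]])
C2 = np.array([[0, 1], [0, 4]])
C1 = np.array([[1, 2], [1, 0]])
C0 = np.array([[1, 0], [0, 3]])

def evaluate(coeffs, x):
    """Horner evaluation of a matric polynomial, highest degree first."""
    result = np.zeros_like(coeffs[0])
    for c in coeffs:
        result = result * x + c
    return result

for x in (0, 1, 2, -3):
    entrywise = np.array([[x**3 + x + 1, x**2 + 2 * x],
                          [3 * x**3 + x, 4 * x**2 + 3]])
    assert np.array_equal(evaluate([C3, C2, C1, C0], x), entrywise)
print("matrix-coefficient expansion agrees with the entrywise polynomials")
```

Agreement at more scalar points than the degree forces the two polynomial matrices to be identical entry by entry.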