
Department of Mathematics and Computing

Engineering Mathematics-II
Tutorial Sheet-I (SOLUTIONS of Q.1 to Q.16)
Winter 2024-25

January 22, 2025

sol(1)

To find the rank of the given matrix, we need to determine the maximum number of linearly
independent rows or columns in the matrix.
For this, we transform the matrix into row-reduced echelon form.

The given matrix A is:

    [  a  −1   1 ]
    [ −1   a  −1 ]
    [ −1  −1   a ]
    [  1   1   1 ]

Now apply elementary row operations.

R1 ↔ R4 (so that the pivot a11 = 1; we then reduce a21 = −1 and a31 = −1 to zero):

    [  1   1   1 ]
    [ −1   a  −1 ]
    [ −1  −1   a ]
    [  a  −1   1 ]

R2 ← R2 + R1, R3 ← R3 + R1, R4 ← R4 + R1:

    [  1    1    1  ]
    [  0   a+1   0  ]
    [  0    0   a+1 ]
    [ a+1   0    2  ]

R2 ↔ R3, then R2 ↔ R4:

    [  1    1    1  ]
    [ a+1   0    2  ]
    [  0   a+1   0  ]
    [  0    0   a+1 ]

Assuming a ≠ −1 (the case a = −1 is treated separately below):

R1 ← R1 − (1/(a+1))R4, R2 ← R2 − (2/(a+1))R4:

    [  1    1    0  ]
    [ a+1   0    0  ]
    [  0   a+1   0  ]
    [  0    0   a+1 ]

R1 ← R1 − (1/(a+1))R2:

    [  0    1    0  ]
    [ a+1   0    0  ]
    [  0   a+1   0  ]
    [  0    0   a+1 ]

R1 ← R1 − (1/(a+1))R3:

    [  0    0    0  ]
    [ a+1   0    0  ]
    [  0   a+1   0  ]
    [  0    0   a+1 ]

Finally, move the zero row to the bottom (R1 ↔ R2, R2 ↔ R3, R3 ↔ R4):

    [ a+1   0    0  ]
    [  0   a+1   0  ]
    [  0    0   a+1 ]
    [  0    0    0  ]

From this row-reduced echelon form, we observe that:

1. When a ≠ −1, there are 3 non-zero, independent rows. Therefore, the rank of A is 3.

2. When a = −1: substituting a = −1 into the matrix A, we get:

    A = [ −1  −1   1 ]
        [ −1  −1  −1 ]
        [ −1  −1  −1 ]
        [  1   1   1 ]

R1 ↔ R4:

    [  1   1   1 ]
    [ −1  −1  −1 ]
    [ −1  −1  −1 ]
    [ −1  −1   1 ]

R2 ← R2 + R1, R3 ← R3 + R1, R4 ← R4 + R1:

    [ 1  1  1 ]
    [ 0  0  0 ]
    [ 0  0  0 ]
    [ 0  0  2 ]

R2 ↔ R4:

    [ 1  1  1 ]
    [ 0  0  2 ]
    [ 0  0  0 ]
    [ 0  0  0 ]

Hence the row-reduced echelon form becomes:

    [ 1  1  1 ]
    [ 0  0  2 ]
    [ 0  0  0 ]
    [ 0  0  0 ]

Clearly, there are 2 nonzero rows, and they are independent. Therefore, the rank of A is 2.
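As a quick numerical cross-check of the two cases (a sketch, not part of the original solution), numpy's `matrix_rank` can be evaluated at a sample value a ≠ −1 and at a = −1:

```python
import numpy as np

def A(a):
    # The 4x3 matrix from the problem, for a given value of a
    return np.array([[a, -1, 1],
                     [-1, a, -1],
                     [-1, -1, a],
                     [1, 1, 1]], dtype=float)

# Generic value a != -1: rank should be 3
rank_generic = np.linalg.matrix_rank(A(2.0))

# Special value a = -1: rank drops to 2
rank_special = np.linalg.matrix_rank(A(-1.0))

print(rank_generic, rank_special)
```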

sol(2)

The rank of the matrix is 2 if and only if every 3 × 3 minor is zero while at least one 2 × 2 minor is nonzero; equivalently, after row reduction there must be exactly 2 nonzero rows.

Perform row reduction to simplify the matrix:

    [ 1    3      −3       x    ]
    [ 2    2       x      −4    ]
    [ 1   1−x    2x+1   −5−3x   ]

Subtract 2R1 from R2, and subtract R1 from R3:

    [ 1    3      −3       x    ]
    [ 0   −4      x+6    −4−2x  ]
    [ 0  −2−x    2x+4    −5−4x  ]

Divide R2 by −4:

    [ 1    3       −3         x    ]
    [ 0    1    −(x+6)/4    1+x/2  ]
    [ 0  −2−x     2x+4      −5−4x  ]

To finish the reduction, eliminate the second entry of R3: R3 ← R3 + (2+x)R2:

    [ 1    3      −3           x        ]
    [ 0    1   −(x+6)/4      1+x/2      ]
    [ 0    0   (4−x²)/4   (x²−4x−6)/2   ]

For the rank to be 2, the bottom row must be zero, i.e., 4 − x² = 0 and x² − 4x − 6 = 0 simultaneously. The first gives x = ±2, while the second gives x = 2 ± √10, so no value of x satisfies both.

Hence there does not exist any value of x ∈ R such that the rank of A is 2.
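This conclusion can be double-checked symbolically: the rank drops below 3 exactly when all four 3 × 3 minors of the matrix vanish, and (assuming sympy is available) one can verify that the minors share no common root:

```python
from itertools import combinations
import sympy as sp

x = sp.symbols('x')
# The 3x4 matrix from the problem
M = sp.Matrix([[1, 3, -3, x],
               [2, 2, x, -4],
               [1, 1 - x, 2*x + 1, -5 - 3*x]])

# Rank < 3 exactly when every 3x3 minor vanishes simultaneously
minors = [M[:, list(cols)].det() for cols in combinations(range(4), 3)]
common_roots = sp.solve(minors, x)
print(common_roots)
```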

sol(3)

The inverse of a square matrix A can be found using the augmented matrix [A|I], where I is the identity matrix. Perform row reduction on [A|I] to transform A into I; the resulting right-hand side will then be A⁻¹.

(a) For matrix A:

The augmented matrix is

    [ 0  2  4 | 1  0  0 ]
    [ 2  4  2 | 0  1  0 ]
    [ 3  3  1 | 0  0  1 ]

R1 ↔ R2 (since the pivot in the first column is zero). We reduce A to the identity matrix I to find the inverse of matrix A; that is why we perform elementary row operations:

    [ 2  4  2 | 0  1  0 ]
    [ 0  2  4 | 1  0  0 ]
    [ 3  3  1 | 0  0  1 ]

R1 ← (1/2)R1, R2 ← (1/2)R2 (to make the pivots in both rows 1):

    [ 1  2  1 | 0    1/2  0 ]
    [ 0  1  2 | 1/2   0   0 ]
    [ 3  3  1 | 0     0   1 ]

R3 ← R3 − 3R1:

    [ 1   2   1 | 0    1/2   0 ]
    [ 0   1   2 | 1/2   0    0 ]
    [ 0  −3  −2 | 0   −3/2   1 ]

R3 ← R3 + 3R2:

    [ 1  2  1 | 0     1/2   0 ]
    [ 0  1  2 | 1/2    0    0 ]
    [ 0  0  4 | 3/2  −3/2   1 ]

R3 ← (1/4)R3:

    [ 1  2  1 | 0     1/2    0  ]
    [ 0  1  2 | 1/2    0     0  ]
    [ 0  0  1 | 3/8  −3/8   1/4 ]

R1 ← R1 − R3, R2 ← R2 − 2R3:

    [ 1  2  0 | −3/8   7/8  −1/4 ]
    [ 0  1  0 | −1/4   3/4  −1/2 ]
    [ 0  0  1 |  3/8  −3/8   1/4 ]

R1 ← R1 − 2R2:

    [ 1  0  0 |  1/8  −5/8   3/4 ]
    [ 0  1  0 | −1/4   3/4  −1/2 ]
    [ 0  0  1 |  3/8  −3/8   1/4 ]

Thus, the inverse of A is:

    A⁻¹ = [  1/8  −5/8   3/4 ]
          [ −1/4   3/4  −1/2 ]
          [  3/8  −3/8   1/4 ]

(b) For matrix B:

The augmented matrix is

    [ 1  1  2 | 1  0  0 ]
    [ 2  4  4 | 0  1  0 ]
    [ 3  3  7 | 0  0  1 ]

R2 ← R2 − 2R1, R3 ← R3 − 3R1:

    [ 1  1  2 |  1  0  0 ]
    [ 0  2  0 | −2  1  0 ]
    [ 0  0  1 | −3  0  1 ]

R2 ← (1/2)R2:

    [ 1  1  2 |  1   0   0 ]
    [ 0  1  0 | −1  1/2  0 ]
    [ 0  0  1 | −3   0   1 ]

R1 ← R1 − R2:

    [ 1  0  2 |  2  −1/2  0 ]
    [ 0  1  0 | −1   1/2  0 ]
    [ 0  0  1 | −3    0   1 ]

R1 ← R1 − 2R3:

    [ 1  0  0 |  8  −1/2  −2 ]
    [ 0  1  0 | −1   1/2   0 ]
    [ 0  0  1 | −3    0    1 ]

Thus, the inverse of B is:

    B⁻¹ = [  8  −1/2  −2 ]
          [ −1   1/2   0 ]
          [ −3    0    1 ]
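Both inverses can be spot-checked numerically (a sketch, assuming numpy is available):

```python
import numpy as np

A = np.array([[0, 2, 4], [2, 4, 2], [3, 3, 1]], dtype=float)
A_inv = np.array([[1/8, -5/8, 3/4],
                  [-1/4, 3/4, -1/2],
                  [3/8, -3/8, 1/4]])

B = np.array([[1, 1, 2], [2, 4, 4], [3, 3, 7]], dtype=float)
B_inv = np.array([[8, -1/2, -2],
                  [-1, 1/2, 0],
                  [-3, 0, 1]])

# Each product should be the 3x3 identity matrix
ok_A = np.allclose(A @ A_inv, np.eye(3))
ok_B = np.allclose(B @ B_inv, np.eye(3))
print(ok_A, ok_B)
```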

sol (4)

(a)

Given system is:


x1 + x2 = 4,
x2 − x3 = 1,
2x1 + x2 + 4x3 = 7.

Its matrix form is:

    [ 1  1   0 ] [x1]   [ 4 ]
    [ 0  1  −1 ] [x2] = [ 1 ]
    [ 2  1   4 ] [x3]   [ 7 ]

The augmented matrix is

    [ 1  1   0 | 4 ]
    [ 0  1  −1 | 1 ]
    [ 2  1   4 | 7 ]

R3 ← R3 − 2R1:

    [ 1   1   0 |  4 ]
    [ 0   1  −1 |  1 ]
    [ 0  −1   4 | −1 ]

R3 ← R3 + R2:

    [ 1  1   0 | 4 ]
    [ 0  1  −1 | 1 ]
    [ 0  0   3 | 0 ]

R3 ← (1/3)R3:

    [ 1  1   0 | 4 ]
    [ 0  1  −1 | 1 ]
    [ 0  0   1 | 0 ]

R2 ← R2 + R3:

    [ 1  1  0 | 4 ]
    [ 0  1  0 | 1 ]
    [ 0  0  1 | 0 ]

R1 ← R1 − R2:

    [ 1  0  0 | 3 ]
    [ 0  1  0 | 1 ]
    [ 0  0  1 | 0 ]

Therefore the required solution is:

    (x1, x2, x3) = (3, 1, 0).
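A quick numerical cross-check with numpy's solver:

```python
import numpy as np

A = np.array([[1, 1, 0], [0, 1, -1], [2, 1, 4]], dtype=float)
b = np.array([4, 1, 7], dtype=float)

# Solve A x = b directly
x = np.linalg.solve(A, b)
print(x)
```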

(b)

Given system is:


x1 + 3x2 + x3 = 0,
2x1 − x2 + x3 = 0.

Matrix form is:

    [ 1   3  1 ] [x1]   [ 0 ]
    [ 2  −1  1 ] [x2] = [ 0 ]
                 [x3]

The augmented matrix is:

    [ 1   3  1 | 0 ]
    [ 2  −1  1 | 0 ]

R2 ← R2 − 2R1:

    [ 1   3   1 | 0 ]
    [ 0  −7  −1 | 0 ]

Now,

    x1 + 3x2 + x3 = 0,
    −7x2 − x3 = 0.

From the second equation, x3 = −7x2; substituting into the first, x1 = −3x2 − x3 = 4x2.

The required solution is:

    (x1, x2, x3) = x2 (4, 1, −7),

where x2 is a real number.
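One can verify numerically that the direction (4, 1, −7) indeed lies in the null space of the coefficient matrix:

```python
import numpy as np

A = np.array([[1, 3, 1], [2, -1, 1]], dtype=float)
v = np.array([4, 1, -7], dtype=float)   # direction (4, 1, -7) from the solution

# A v should be the zero vector
residual = A @ v
print(residual)
```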

(c)

Given system is:

    x1 + 2x2 − x3 = 10,
    −x1 + x2 + 2x3 = 2,
    2x1 + x2 − 3x3 = 2.

Matrix form:

    [  1  2  −1 ] [x1]   [ 10 ]
    [ −1  1   2 ] [x2] = [  2 ]
    [  2  1  −3 ] [x3]   [  2 ]

The augmented matrix is

    [  1  2  −1 | 10 ]
    [ −1  1   2 |  2 ]
    [  2  1  −3 |  2 ]

R2 ← R2 + R1, R3 ← R3 − 2R1:

    [ 1   2  −1 |  10 ]
    [ 0   3   1 |  12 ]
    [ 0  −3  −1 | −18 ]

R3 ← R3 + R2:

    [ 1  2  −1 | 10 ]
    [ 0  3   1 | 12 ]
    [ 0  0   0 | −6 ]

Now

    x1 + 2x2 − x3 = 10,
    3x2 + x3 = 12,
    0·x1 + 0·x2 + 0·x3 = −6.

Clearly, the last equation in the system is absurd, and therefore the given system is inconsistent.

(d)

Given system is:


x1 + x2 + x3 − 3x4 = 1,
2x1 + 4x2 + 3x3 + x4 = 3,
3x1 + 6x2 + 4x3 − 2x4 = 4.

Matrix form is:

    [ 1  1  1  −3 ] [x1]   [ 1 ]
    [ 2  4  3   1 ] [x2] = [ 3 ]
    [ 3  6  4  −2 ] [x3]   [ 4 ]
                    [x4]

The augmented matrix is

    [ 1  1  1  −3 | 1 ]
    [ 2  4  3   1 | 3 ]
    [ 3  6  4  −2 | 4 ]

R2 ← R2 − 2R1, R3 ← R3 − 3R1:

    [ 1  1  1  −3 | 1 ]
    [ 0  2  1   7 | 1 ]
    [ 0  3  1   7 | 1 ]

R3 ← R3 − R2:

    [ 1  1  1  −3 | 1 ]
    [ 0  2  1   7 | 1 ]
    [ 0  1  0   0 | 0 ]

Now,

    x1 + x2 + x3 − 3x4 = 1,
    2x2 + x3 + 7x4 = 1,
    x2 = 0.

Back-substituting: x2 = 0, x3 = 1 − 7x4, and x1 = 1 − x2 − x3 + 3x4 = 10x4. The required solution is:

    (x1, x2, x3, x4) = (0, 0, 1, 0) + x4 (10, 0, −7, 1),   x4 ∈ R.
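A spot-check that the one-parameter family above satisfies all three equations for several values of x4 (a sketch using numpy):

```python
import numpy as np

A = np.array([[1, 1, 1, -3],
              [2, 4, 3, 1],
              [3, 6, 4, -2]], dtype=float)
b = np.array([1, 3, 4], dtype=float)

# General solution: particular part (0,0,1,0) plus x4 * (10,0,-7,1)
particular = np.array([0, 0, 1, 0], dtype=float)
direction = np.array([10, 0, -7, 1], dtype=float)

all_ok = all(np.allclose(A @ (particular + t * direction), b)
             for t in [-2.0, 0.0, 1.0, 3.5])
print(all_ok)
```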

(e)

Given system is:


x1 + 2x2 − x3 = 10,
−x1 + x2 + 2x3 = 2,
2x1 + x2 − 3x3 = 8.

Matrix form is:

    [  1  2  −1 ] [x1]   [ 10 ]
    [ −1  1   2 ] [x2] = [  2 ]
    [  2  1  −3 ] [x3]   [  8 ]

The augmented matrix is

    [  1  2  −1 | 10 ]
    [ −1  1   2 |  2 ]
    [  2  1  −3 |  8 ]

R2 ← R2 + R1, R3 ← R3 − 2R1:

    [ 1   2  −1 |  10 ]
    [ 0   3   1 |  12 ]
    [ 0  −3  −1 | −12 ]

R3 ← R3 + R2:

    [ 1  2  −1 | 10 ]
    [ 0  3   1 | 12 ]
    [ 0  0   0 |  0 ]

R2 ← (1/3)R2:

    [ 1  2  −1  | 10 ]
    [ 0  1  1/3 |  4 ]
    [ 0  0   0  |  0 ]

R1 ← R1 − 2R2:

    [ 1  0  −5/3 | 2 ]
    [ 0  1   1/3 | 4 ]
    [ 0  0    0  | 0 ]

Now

    x1 − (5/3)x3 = 2,
    x2 + (1/3)x3 = 4.

Therefore the solution is:

    (x1, x2, x3) = ( (5x3)/3 + 2, −(x3)/3 + 4, x3 ),

where x3 is a real number.
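A spot-check of this parametric solution for a few values of x3, using the reduced system x1 − (5/3)x3 = 2, x2 + (1/3)x3 = 4:

```python
import numpy as np

A = np.array([[1, 2, -1], [-1, 1, 2], [2, 1, -3]], dtype=float)
b = np.array([10, 2, 8], dtype=float)

def solution(x3):
    # (x1, x2, x3) = (2 + 5*x3/3, 4 - x3/3, x3)
    return np.array([2 + 5 * x3 / 3, 4 - x3 / 3, x3])

all_ok = all(np.allclose(A @ solution(t), b) for t in [-3.0, 0.0, 1.0, 6.0])
print(all_ok)
```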

sol (5)

The system has

1. A unique solution: if the determinant of the coefficient matrix is nonzero.

2. No solution: if the system is inconsistent.

3. Many solutions: if the determinant is zero and the system is consistent.

Given the system of equations:


x + y + z = 1,
x + 2y − z = b,
5x + 7y + az = b2 ,
the augmented matrix is:

    [ 1  1   1 | 1  ]
    [ 1  2  −1 | b  ]
    [ 5  7   a | b² ]

R2 ← R2 − R1, R3 ← R3 − 5R1:

    [ 1  1   1   | 1      ]
    [ 0  1  −2   | b − 1  ]
    [ 0  2  a−5  | b² − 5 ]

R3 ← R3 − 2R2:

    [ 1  1   1   | 1           ]
    [ 0  1  −2   | b − 1       ]
    [ 0  0  a−1  | b² − 2b − 3 ]

Now

    x + y + z = 1,
    y − 2z = b − 1,
    (a − 1)z = b² − 2b − 3 = (b − 3)(b + 1).

The determinant of the coefficient matrix is:

    Δ = det [ 1  1   1 ]
            [ 1  2  −1 ] = a − 1.
            [ 5  7   a ]

Therefore,

1. Only one solution: Δ ≠ 0, i.e., a ≠ 1.

2. No solution: a = 1 and b ≠ −1, 3, for then the last equation reads 0 = (b − 3)(b + 1) ≠ 0.

3. Many solutions: a = 1 and b = −1 or b = 3.
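The determinant and the consistency condition can be confirmed symbolically (assuming sympy is available):

```python
import sympy as sp

a, b = sp.symbols('a b')
coeff = sp.Matrix([[1, 1, 1], [1, 2, -1], [5, 7, a]])

# Determinant of the coefficient matrix, expected to be a - 1
det = sp.expand(coeff.det())
print(det)

# Right-hand side of the last reduced equation, expected to factor as (b-3)(b+1)
rhs = sp.expand(b**2 - 2*b - 3)
print(sp.factor(rhs))
```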

Solution 6(a):

Let V = C[a, b] be the set of all continuous functions on the interval [a, b] with the given operations
on V as follows:

• Addition: (f + g)(x) = f (x) + g(x), ∀f, g ∈ C[a, b], x ∈ [a, b].

• Scalar multiplication: (λ · f )(x) = λf (x), ∀λ ∈ R, f ∈ C[a, b], x ∈ [a, b].

We verify that V satisfies the vector space axioms over R:

1. Closure under addition: If f, g ∈ C[a, b], then f + g ∈ C[a, b] because the sum of contin-
uous functions is continuous.

2. Associativity of addition: For f, g, h ∈ C[a, b] and ∀ x ∈ [a, b],

((f +g)+h)(x) = (f +g)(x)+h(x) = (f (x)+g(x))+h(x) = f (x)+(g(x)+h(x)) = (f +(g+h))(x).

3. Existence of the zero vector: The zero function f0 (x) = 0 is in C[a, b], and for f ∈ C[a, b]
and ∀ x ∈ [a, b],
(f + f0 )(x) = f (x) + f0 (x) = f (x) + 0 = f (x).

4. Existence of additive inverses: For f ∈ C[a, b] and ∀ x ∈ [a, b], the function −f ∈ C[a, b]
satisfies
(f + (−f ))(x) = f (x) − f (x) = 0 = f0 (x).

5. Commutativity of addition: For f, g ∈ C[a, b] and ∀ x ∈ [a, b],

(f + g)(x) = f (x) + g(x) = g(x) + f (x) = (g + f )(x).

6. Closure under scalar multiplication: If f ∈ C[a, b] and λ ∈ R, then λf ∈ C[a, b] because
scalar multiplication preserves continuity.

7. Compatibility of scalar multiplication with field multiplication: For λ, µ ∈ R, f ∈


C[a, b] and ∀ x ∈ [a, b],

((λµ) · f )(x) = (λµ)f (x) = λ(µf (x)) = (λ · (µ · f ))(x).

8. Distributivity of scalar multiplication over vector addition: For λ ∈ R, f, g ∈ C[a, b]and


∀ x ∈ [a, b],

(λ · (f + g))(x) = λ(f (x) + g(x)) = λf (x) + λg(x) = ((λ · f ) + (λ · g))(x).

9. Distributivity of scalar multiplication over field addition: For λ, µ ∈ R, f ∈ C[a, b] and


∀ x ∈ [a, b]

((λ + µ) · f )(x) = (λ + µ)f (x) = λf (x) + µf (x) = ((λ · f ) + (µ · f ))(x).

10. Identity element of scalar multiplication: For f ∈ C[a, b] and ∀ x ∈ [a, b],

(1 · f )(x) = 1 · f (x) = f (x).

Thus, V = C[a, b] is a vector space over R.

Note: To prove axioms 2, 3, 4, 5, 7, 8, 9, 10, the properties of real numbers are used in intermediate steps.

Solution 6(b)

Let V be the set of all n × n Hermitian matrices over C:

    V = {A ∈ C^{n×n} | A = A†},

where A† = (Ā)ᵀ is the conjugate transpose of A, with the operations on V as:

• Addition: A + B is the usual matrix addition.

• Scalar multiplication: λ · A is the usual scalar multiplication, where λ ∈ C.

The closure axiom for scalar multiplication does not hold: if A ∈ V and λ ∈ C, then

    (λA)† = λ̄ A† = λ̄ A,

which differs from λA whenever λ is not real. For example, take λ = i and a Hermitian matrix A ≠ 0; then (iA)† = −iA, so iA is skew-Hermitian, not Hermitian. Hence V is not a vector space over C.
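A concrete numerical illustration with a sample 2 × 2 Hermitian matrix (the matrix entries are an arbitrary choice):

```python
import numpy as np

# A sample 2x2 Hermitian matrix: equal to its conjugate transpose
A = np.array([[2, 1 + 1j],
              [1 - 1j, -3]])
assert np.allclose(A, A.conj().T)

iA = 1j * A
is_hermitian = np.allclose(iA, iA.conj().T)        # fails: iA != (iA)†
is_skew_hermitian = np.allclose(iA, -iA.conj().T)  # holds: iA is skew-Hermitian
print(is_hermitian, is_skew_hermitian)
```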

Solution 6(c)

Let V = R[x] be the set of all polynomials with real coefficients with the given operations on V
as:

• Addition: (f + g)(x) = f (x) + g(x), ∀f, g ∈ R[x].

• Scalar multiplication: (λ · f )(x) = λf (x), ∀λ ∈ R, f ∈ R[x].

We verify that V satisfies the vector space axioms over R:

1. Closure under addition: If f, g ∈ R[x] with degrees m and n respectively, then f + g ∈ R[x]
with degree ≤ max(m, n), because the sum of two polynomials is again a polynomial.
A detailed explanation of closure of addition:

Case 1: m < n

Let
f (x) = am xm + am−1 xm−1 + · · · + a0 ,

and
g(x) = bn xn + bn−1 xn−1 + · · · + b0 ,

where m < n.
Their sum is:

f (x) + g(x) = bn xn + bn−1 xn−1 + · · · + bm+1 xm+1 + (am + bm )xm + · · · + (a0 + b0 ).

The highest degree term is bn xn because n > m. Thus,

deg(f + g) = n.

This shows that f + g is in V = R[x]

Case 2: m = n

Let
f (x) = am xm + am−1 xm−1 + · · · + a0 ,

and
g(x) = bm xm + bm−1 xm−1 + · · · + b0 ,

where m = n.

Their sum is:

f (x) + g(x) = (am + bm )xm + (am−1 + bm−1 )xm−1 + · · · + (a0 + b0 ).

- If am + bm ̸= 0, the highest degree term is (am + bm )xm , so:

deg(f + g) = m = n.

- If am + bm = 0, the highest degree term comes from the next highest degree term, and:

deg(f + g) < m.

This shows that f + g is in V = R[x]

Case 3: m > n

Let
f (x) = am xm + am−1 xm−1 + · · · + a0 ,

and
g(x) = bn xn + bn−1 xn−1 + · · · + b0 ,

where m > n.
Their sum is:

f (x) + g(x) = am xm + am−1 xm−1 + · · · + an+1 xn+1 + (an + bn )xn + · · · + (a0 + b0 ).

The highest degree term is am xm because m > n. Thus,

deg(f + g) = m.

This shows that f + g is in V = R[x]

2. Associativity of addition: For f, g, h ∈ R[x]:

(f (x) + g(x)) + h(x) = f (x) + (g(x) + h(x)).

• Let the polynomials be:

f (x) = a0 + a1 x + a2 x2 + · · · + am xm ,

g(x) = b0 + b1 x + b2 x2 + · · · + bn xn ,

h(x) = c0 + c1 x + c2 x2 + · · · + cp xp .

• Left-hand side: (f (x) + g(x)) + h(x)


(a) Compute f (x) + g(x):

f (x) + g(x) = (a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x2 + . . .

(b) Add h(x) to this:


 
(f (x) + g(x)) + h(x) = (a0 + b0 ) + c0 + (a1 + b1 ) + c1 x + . . .

• Right-hand side: f (x) + (g(x) + h(x))


(a) Compute g(x) + h(x):

g(x) + h(x) = (b0 + c0 ) + (b1 + c1 )x + (b2 + c2 )x2 + . . .

(b) Add f (x) to this:


 
f (x) + (g(x) + h(x)) = a0 + (b0 + c0 ) + a1 + (b1 + c1 ) x + . . .

• Compare coefficients:

(ak + bk ) + ck = ak + (bk + ck ) (associativity of real numbers).

• Thus:
(f (x) + g(x)) + h(x) = f (x) + (g(x) + h(x)).

(f + g) + h = f + (g + h) (polynomial addition is associative).

3. Existence of the zero vector: The zero polynomial p0 (x) = 0 satisfies p0 + f = f for all
f ∈ R[x].

4. Existence of additive inverses: For f ∈ R[x], the polynomial −f satisfies f + (−f ) = p0 .

5. Commutativity of Polynomial Addition We want to show that:

f (x) + g(x) = g(x) + f (x).

• Let the polynomials be:

f (x) = a0 + a1 x + a2 x2 + · · · + am xm ,

g(x) = b0 + b1 x + b2 x2 + · · · + bn xn .

• Compute f (x) + g(x):

f (x) + g(x) = (a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x2 + . . .

• Compute g(x) + f (x):

g(x) + f (x) = (b0 + a0 ) + (b1 + a1 )x + (b2 + a2 )x2 + . . .

• Compare coefficients:

ak + bk = bk + ak (commutativity of real numbers).

• Thus:
f (x) + g(x) = g(x) + f (x).

6. Closure under scalar multiplication: If f ∈ R[x] with degree = n and λ ∈ R, then


λf ∈ R[x] because scalar multiplication preserves the polynomial form.

7. Compatibility of scalar multiplication with field multiplication:


For λ, µ ∈ R and f (x) ∈ R[x]:
(λµ)f = λ(µf ).

Let f (x) = a0 + a1 x + a2 x2 + · · · + an xn .

Left-hand side:
(λµ)f (x) = (λµ)(a0 + a1 x + a2 x2 + · · · + an xn ).

= (λµ)a0 + (λµ)a1 x + (λµ)a2 x2 + · · · + (λµ)an xn .

Right-hand side:

    λ(µf (x)) = λ(µ(a0 + a1 x + a2 x² + · · · + an xⁿ))
              = λ(µa0 ) + λ(µa1 )x + λ(µa2 )x² + · · · + λ(µan )xⁿ.

On comparison:

    (λµ)ak = λ(µak ) for all k (associativity of real numbers).

Thus:
(λµ)f = λ(µf ).

8. Distributivity of scalar multiplication over vector addition:


For λ ∈ R and f (x), g(x) ∈ R[x]:

λ(f + g) = λf + λg.

Let f (x) = a0 + a1 x + · · · + an xn and g(x) = b0 + b1 x + · · · + bm xm .


Left-hand side:

    λ(f (x) + g(x)) = λ((a0 + b0 ) + (a1 + b1 )x + (a2 + b2 )x² + · · ·)
                    = λ(a0 + b0 ) + λ(a1 + b1 )x + λ(a2 + b2 )x² + · · ·.

Right-hand side:

    λf (x) + λg(x) = (λa0 + λa1 x + · · ·) + (λb0 + λb1 x + · · ·)
                   = (λa0 + λb0 ) + (λa1 + λb1 )x + (λa2 + λb2 )x² + · · ·.
Comparison:

λ(ak + bk ) = λak + λbk for all k (distributive property of real numbers).

Thus:
λ(f + g) = λf + λg.

9. Distributivity of scalar multiplication over field addition:


For λ, µ ∈ R and f (x) ∈ R[x]:

(λ + µ)f = λf + µf.

Let f (x) = a0 + a1 x + a2 x2 + · · · + an xn .
Left-hand side:

(λ + µ)f (x) = (λ + µ)(a0 + a1 x + a2 x2 + · · · + an xn ).

= (λ + µ)a0 + (λ + µ)a1 x + (λ + µ)a2 x2 + · · · + (λ + µ)an xn .

Right-hand side:

    λf (x) + µf (x) = (λa0 + λa1 x + · · ·) + (µa0 + µa1 x + · · ·)
                    = (λa0 + µa0 ) + (λa1 + µa1 )x + (λa2 + µa2 )x² + · · ·.

Comparison:

(λ + µ)ak = λak + µak for all k (distributive property of real numbers).

Thus:
(λ + µ)f = λf + µf.

10. Identity element of scalar multiplication:


For f (x) ∈ R[x]:
1 · f = f.

Let f (x) = a0 + a1 x + a2 x2 + · · · + an xn .
Compute:
1 · f (x) = 1 · (a0 + a1 x + a2 x2 + · · · + an xn ).

= (1 · a0 ) + (1 · a1 )x + (1 · a2 )x2 + · · · + (1 · an )xn .

Since 1 · ak = ak for all k, we have:

1 · f (x) = a0 + a1 x + a2 x2 + · · · + an xn = f (x).

Thus, V = R[x], the set of all polynomials with real coefficients, is a vector space over R.

Solution 6(d)

Let V = R∞ be the set of all infinite sequences of real numbers:

    V = { a = {an } | an ∈ R, ∀ n ∈ N },

with the operations on V as follows:

• Addition: For a = {an }, b = {bn } ∈ R∞,

    a + b = {an + bn }.

• Scalar multiplication: For a = {an } ∈ R∞ and λ ∈ R,

    λ · a = {λan }.

We verify that V satisfies the vector space axioms over R:

1. Closure under addition: If a, b ∈ R∞, then a + b = {an + bn }. Since an , bn ∈ R, it
follows that an + bn ∈ R, so a + b ∈ R∞.

2. Associativity of addition: For a, b, c ∈ R∞:

    (a + b) + c = {(an + bn ) + cn } = {an + (bn + cn )} = a + (b + c).

3. Existence of the zero vector: The zero sequence 0 = {0} satisfies a + 0 = {an + 0} = a
for all a ∈ R∞.

4. Existence of additive inverses: For a = {an } ∈ R∞, the sequence −a = {−an }
satisfies:

    a + (−a) = {an − an } = {0}.

5. Commutativity of addition: For a, b ∈ R∞:

    a + b = {an + bn } = {bn + an } = b + a.

6. Closure under scalar multiplication: If a ∈ R∞ and λ ∈ R, then λa = {λan }. Since
an ∈ R, it follows that λan ∈ R, so λa ∈ R∞.

7. Compatibility of scalar multiplication with field multiplication: For λ, µ ∈ R and a = {an }:

    (λµ)a = {(λµ)an } = {λ(µan )} = λ(µa).

8. Distributivity of scalar multiplication over vector addition: For λ ∈ R and a, b ∈ R∞:

    λ(a + b) = {λ(an + bn )} = {λan + λbn } = λa + λb.

9. Distributivity of scalar multiplication over field addition: For λ, µ ∈ R and a = {an }:

    (λ + µ)a = {(λ + µ)an } = {λan + µan } = λa + µa.

10. Identity element of scalar multiplication: For a = {an }:

    1 · a = {1 · an } = {an } = a.

Thus, V = R∞, the set of all infinite sequences of real numbers, is a vector space over R.

Solution 6(e)

Let V = R+ be the set of all positive real numbers with the operations on V as follows:

• Addition: For x, y ∈ R+ ,

x + y = xy (usual multiplication of real numbers).

• Scalar multiplication: For x ∈ R+ and λ ∈ R,

λx = xλ .

We verify that V satisfies the vector space axioms over R:

1. Closure under addition: If x, y ∈ R+ , then x + y = xy. Since the product of two positive
real numbers is positive, x + y ∈ R+ .

2. Associativity of addition: For x, y, z ∈ R+ :

(x + y) + z = (xy)z = x(yz) = x + (y + z).

3. Existence of the zero vector: The multiplicative identity 1 ∈ R+ serves as the zero vector,
since for any x ∈ R+ ,
x + 1 = x · 1 = x.

4. Existence of additive inverses: For x ∈ R+ , the inverse x−1 = (1/x) ∈ R+ satisfies:

x + x−1 = x · x−1 = 1.

5. Commutativity of addition: For x, y ∈ R+ :

x + y = xy = yx = y + x.

6. Closure under scalar multiplication: If x ∈ R+ and λ ∈ R, then λx = xλ . Since a positive


number raised to a real power is positive, λx ∈ R+ .

7. Compatibility of scalar multiplication with field multiplication: For λ, µ ∈ R and x ∈


R+ :
(λµ)x = xλµ = (xλ )µ = λ(µx).

8. Distributivity of scalar multiplication over vector addition: For λ ∈ R and x, y ∈ R+ :

λ(x + y) = λ(xy) = (xy)λ = xλ y λ = (λx) + (λy).

9. Distributivity of scalar multiplication over field addition: For λ, µ ∈ R and x ∈ R+ :

(λ + µ)x = xλ+µ = xλ xµ = (λx) + (µx).

10. Identity element of scalar multiplication: For x ∈ R+ :

1 · x = x1 = x.

Thus, V = R+ , the set of all positive real numbers, with the given operations, is a vector space
over R.
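A numerical spot-check of a few of these axioms under the exotic operations (the sample values are an arbitrary choice):

```python
import math

# Exotic operations on V = R+ (positive reals):
#   vector addition  x (+) y := x * y
#   scalar multiple  lam (.) x := x ** lam
def add(x, y):
    return x * y

def smul(lam, x):
    return x ** lam

x, y, lam, mu = 3.0, 5.0, 2.0, -0.5

checks = [
    math.isclose(add(x, 1.0), x),                    # 1 acts as the zero vector
    math.isclose(add(x, 1.0 / x), 1.0),              # additive inverse of x is 1/x
    math.isclose(smul(lam + mu, x),
                 add(smul(lam, x), smul(mu, x))),    # (lam+mu).x = lam.x (+) mu.x
    math.isclose(smul(lam, add(x, y)),
                 add(smul(lam, x), smul(lam, y))),   # lam.(x(+)y) = lam.x (+) lam.y
    math.isclose(smul(lam * mu, x),
                 smul(lam, smul(mu, x))),            # (lam*mu).x = lam.(mu.x)
]
print(all(checks))
```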

Solution6(f)

Let V be the set of all real-valued functions defined on an open interval I that are continuous
everywhere on I except at a finite number of points, where they may be discontinuous with the
given operations on V as follows:

• Addition: For f, g ∈ V , define pointwise addition as:

(f + g)(x) = f (x) + g(x), ∀x ∈ I.

• Scalar multiplication: For f ∈ V and λ ∈ R, define scalar multiplication as:

(λ · f )(x) = λf (x), ∀x ∈ I.

We verify that V satisfies the vector space axioms over R:

1. Closure under addition: If f, g ∈ V , then f + g ∈ V . Since the sum of two real-valued


functions with at most a finite number of discontinuities is itself a function with at most a
finite number of discontinuities, f + g ∈ V .

2. Associativity of addition: For f, g, h ∈ V :

((f + g) + h)(x) = (f (x) + g(x)) + h(x) = f (x) + (g(x) + h(x)) = (f + (g + h))(x).

3. Existence of the zero vector: The zero function f0 (x) = 0, which is continuous everywhere
on I, satisfies:

(f + f0 )(x) = f (x) + f0 (x) = f (x) + 0 = f (x), ∀f ∈ V. ∀x ∈ I.

4. Existence of additive inverses: For f ∈ V , the function −f satisfies:

(f + (−f ))(x) = f (x) − f (x) = 0 = f0 (x), ∀x ∈ I.

5. Commutativity of addition: For f, g ∈ V :

(f + g)(x) = f (x) + g(x) = g(x) + f (x) = (g + f )(x).

6. Closure under scalar multiplication: If f ∈ V and λ ∈ R, then λf ∈ V . Scalar multi-


plication does not introduce additional discontinuities, so λf has at most a finite number of
discontinuities and is in V .

7. Compatibility of scalar multiplication with field multiplication: For λ, µ ∈ R, f ∈ V and ∀ x ∈ I:

    ((λµ) · f )(x) = (λµ)f (x) = λ(µf (x)) = (λ · (µ · f ))(x).

8. Distributivity of scalar multiplication over vector addition: For λ ∈ R, f, g ∈ V and ∀ x ∈ I:

    (λ · (f + g))(x) = λ(f (x) + g(x)) = λf (x) + λg(x) = ((λ · f ) + (λ · g))(x).

9. Distributivity of scalar multiplication over field addition: For λ, µ ∈ R, f ∈ V and ∀ x ∈ I:

    ((λ + µ) · f )(x) = (λ + µ)f (x) = λf (x) + µf (x) = ((λ · f ) + (µ · f ))(x).

10. Identity element of scalar multiplication: For f ∈ V and ∀ x ∈ I:

    (1 · f )(x) = 1 · f (x) = f (x).

Thus, V , the set of all real-valued functions with at most a finite number of discontinuities on I, is
a vector space over R.

Solution 6(g)

Let V = {tα : R → R | tα (x) = x + α, α ∈ R}, where tα is a translation function with the given
operations on V as follows:

• Addition (composition of mappings): For tα , tβ ∈ V ,

tα ◦ tβ (x) = tα (tβ (x)) = tα (x + β) = (x + β) + α = x + (α + β).

• Scalar multiplication: For tα ∈ V and λ ∈ R,

λtα (x) = tαλ (x) = x + αλ.

We verify that V satisfies the vector space axioms over R:

1. Closure under addition (composition): If tα , tβ ∈ V , then their composition tα ◦ tβ satis-


fies:
tα ◦ tβ (x) = x + (α + β).

Since α + β ∈ R, tα ◦ tβ ∈ V .

2. Associativity of addition: For tα , tβ , tγ ∈ V :

(tα ◦ tβ ) ◦ tγ (x) = tα ◦ (tβ ◦ tγ )(x) = x + (α + β + γ).

3. Existence of the zero vector: The identity mapping t0 (x) = x acts as the zero vector, since
for any tα ∈ V :
tα ◦ t0 (x) = tα (x) = x + α, t0 ◦ tα (x) = tα (x).

4. Existence of additive inverses: For tα ∈ V , the inverse t−α ∈ V satisfies:

tα ◦ t−α (x) = t−α ◦ tα (x) = x + (α − α) = x.

5. Commutativity of addition: For tα , tβ ∈ V :

tα ◦ tβ (x) = x + (α + β) = x + (β + α) = tβ ◦ tα (x).

6. Closure under scalar multiplication: If tα ∈ V and λ ∈ R, then:

λtα (x) = tαλ (x) = x + αλ.

Since αλ ∈ R, λtα ∈ V .

7. Compatibility of scalar multiplication with field multiplication: For λ, µ ∈ R and tα ∈ V:

    (λµ)tα (x) = tα(λµ) (x) = x + α(λµ) = t(αµ)λ (x) = (λ(µtα ))(x).

8. Distributivity of scalar multiplication over vector addition: For λ ∈ R and tα , tβ ∈ V:

    λ(tα ◦ tβ )(x) = λtα+β (x) = t(α+β)λ (x) = (tαλ ◦ tβλ )(x) = ((λtα ) ◦ (λtβ ))(x).

9. Distributivity of scalar multiplication over field addition: For λ, µ ∈ R and tα ∈ V:

    (λ + µ)tα (x) = tα(λ+µ) (x) = (tαλ ◦ tαµ )(x) = ((λtα ) ◦ (µtα ))(x).

10. Identity element of scalar multiplication: For tα ∈ V :

1 · tα (x) = tα·1 (x) = tα (x).

Thus, V , the set of translation functions of the form tα (x) = x + α, is a vector space over R.

Solution 6(h)

Compatibility of scalar multiplication with field multiplication: For λ, µ ∈ R and (x, y) ∈ R2 :

λ(µ(x, y)) = (λµ)(x, y).

Substituting the operations defined for V :

λ(µ(x, y)) = λ(3µx, y) = (3λ(3µx), y) = (9λµx, y),

while
(λµ)(x, y) = (3(λµ)x, y) = (3λµx, y).

Since 9λµx ≠ 3λµx (for x ≠ 0), this axiom is violated. Hence V does not form a vector space.
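A concrete numerical check of the violated axiom (sample values chosen arbitrarily):

```python
# The scalar multiplication proposed in 6(h): lam . (x, y) = (3*lam*x, y)
def smul(lam, v):
    x, y = v
    return (3 * lam * x, y)

lam, mu, v = 2.0, 5.0, (1.0, 4.0)

lhs = smul(lam, smul(mu, v))   # lam.(mu.v) -> (9*lam*mu*x, y)
rhs = smul(lam * mu, v)        # (lam*mu).v -> (3*lam*mu*x, y)
print(lhs, rhs)
```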

Solution 6(i)

Addition is not closed.

Let P (x) = x⁴ + 2x³ + 3x² + x + 5 and Q(x) = −x⁴ + x³ − 2x² + 2x − 4.

    P (x) + Q(x) = (x⁴ − x⁴) + (2x³ + x³) + (3x² − 2x²) + (x + 2x) + (5 − 4)
                 = 3x³ + x² + 3x + 1.

Thus, P (x) + Q(x) is a polynomial of degree 3, while P (x) and Q(x) are both degree-4 polynomials; this shows that the closure axiom for addition does not hold. Hence the set does not form a vector space.
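The degree drop can be checked directly on the coefficient arrays:

```python
import numpy as np

# Coefficients listed highest degree first
P = np.array([1, 2, 3, 1, 5], dtype=float)     # x^4 + 2x^3 + 3x^2 + x + 5
Q = np.array([-1, 1, -2, 2, -4], dtype=float)  # -x^4 + x^3 - 2x^2 + 2x - 4

S = P + Q
# Degree = position of the first nonzero coefficient, counted from the left
degree = len(S) - 1 - np.flatnonzero(S)[0]
print(degree)
```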

Solution 6(j)

Scalar multiplication is not closed.

Closure under scalar multiplication fails: for A ∈ V and λ ∈ C, we have A† = −A. Then:

    (λA)† = λ̄ A† = λ̄(−A) = −λ̄ A ≠ −(λA)

whenever λ is not real. For example, take λ = i: if A is skew-Hermitian, then

    (iA)† = −i A† = −i(−A) = iA,

so iA is Hermitian, not skew-Hermitian. Hence V is not a vector space over C.

Solution 7

Determining Subspaces of R[x]

Let R[x] be the set of all polynomials with real coefficients. We verify whether each set S is a
subspace of R[x] by checking the following conditions:

1. The zero polynomial 0(x) = 0 is in S.

2. S is closed under addition: If f (x), g(x) ∈ S, then f (x) + g(x) ∈ S.

3. S is closed under scalar multiplication: If f (x) ∈ S and λ ∈ R, then λf (x) ∈ S.

(a) S = Rn [x]

The set Rn [x] is the set of all polynomials of degree at most n.

• The zero polynomial is in Rn [x] since its degree is −∞ (by convention).

• If f (x), g(x) ∈ Rn [x], then f (x) + g(x) is also a polynomial of degree at most n.

• If f (x) ∈ Rn [x] and λ ∈ R, then λf (x) is also a polynomial of degree at most n.

Thus, S = Rn [x] is a subspace.

(b) S = {f (x) ∈ R[x] : f (x) = f (1 − x), ∀x}

• The zero polynomial satisfies p0 (x) = 0 = p0 (1 − x), so p0 (x) ∈ S.

• If f (x), g(x) ∈ S, then f (x) = f (1 − x) and g(x) = g(1 − x). For their sum:

(f + g)(x) = f (x) + g(x) = f (1 − x) + g(1 − x) = (f + g)(1 − x).

Thus, f + g ∈ S.

• If f (x) ∈ S and λ ∈ R, then:

(λf )(x) = λf (x) = λf (1 − x) = (λf )(1 − x).

Thus, λf ∈ S.

Hence, S is a subspace.

(c) S = {f (x) ∈ R[x] : f (x) = f (−x), ∀x}

• The zero polynomial satisfies p0 (x) = p0 (−x), so p0 (x) ∈ S.

• If f (x), g(x) ∈ S, then f (x) = f (−x) and g(x) = g(−x). For their sum:

(f + g)(x) = f (x) + g(x) = f (−x) + g(−x) = (f + g)(−x).

Thus, f + g ∈ S.

• If f (x) ∈ S and λ ∈ R, then:

(λf )(x) = λf (x) = λf (−x) = (λf )(−x).

Thus, λf ∈ S.

Hence, S is a subspace.

(d) S = {f (x) ∈ R[x] : f (1) ≥ 0}

• The zero polynomial satisfies p0 (1) = 0 ≥ 0, so p0 (x) ∈ S.

• Let f (x), g(x) ∈ S. Then f (1) ≥ 0 and g(1) ≥ 0. For their sum:

(f + g)(1) = f (1) + g(1) ≥ 0.

Thus, f + g ∈ S.

• However, closure under scalar multiplication fails: if f (1) > 0 and λ < 0, then (λf )(1) = λf (1) < 0, so λf ∉ S. For example, f (x) = x lies in S but (−1)f does not.

Therefore, S is not a subspace.

(e) S = {f (x) ∈ R[x] : f ′ (0) + f (0) = 0}

• The zero polynomial satisfies p′0 (0) = 0 and p0 (0) = 0, so p′0 (0) + p0 (0) = 0. Thus,
p0 (x) ∈ S.

• If f (x), g(x) ∈ S, then f ′ (0) + f (0) = 0 and g ′ (0) + g(0) = 0. For their sum:

(f + g)′ (0) + (f + g)(0) = f ′ (0) + g ′ (0) + f (0) + g(0) = 0.

Thus, f + g ∈ S.

• If f (x) ∈ S and λ ∈ R, then:

(λf )′ (0) + (λf )(0) = λf ′ (0) + λf (0) = λ(f ′ (0) + f (0)) = λ · 0 = 0.

Thus, λf ∈ S.

Hence, S is a subspace.

(f) S = {f (x) ∈ R[x] : f (x) has a root in [−1, 1]}

• The zero polynomial satisfies p0 (x) = 0, which is true for all x, so p0 (x) ∈ S.

• If f (x) ∈ S and g(x) ∈ S, f (x) and g(x) each have a root in [−1, 1]. However, (f + g)(x)
may not have a root in [−1, 1]. For example, f (x) = x + 1 and g(x) = −x have roots in
[−1, 1], but f (x) + g(x) = 1 does not.

Thus, S is not a subspace.
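The counterexample can be checked numerically (sampling h = f + g on a grid over [−1, 1]):

```python
import numpy as np

f = np.polynomial.Polynomial([1, 1])    # f(x) = 1 + x, root at x = -1
g = np.polynomial.Polynomial([0, -1])   # g(x) = -x, root at x = 0

h = f + g                               # h(x) = 1, a nonzero constant
grid = np.linspace(-1, 1, 201)
has_root = bool(np.any(np.isclose(h(grid), 0.0)))
print(f(-1.0), g(0.0), has_root)
```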

Solution 8

Determining Subspaces of Rn

Let Rn be the vector space of n-tuples of real numbers. To determine whether a subset S is a
subspace, we verify the following conditions:

1. The zero vector 0 = (0, 0, . . . , 0) ∈ S.

2. S is closed under vector addition: If u, v ∈ S, then u + v ∈ S.

3. S is closed under scalar multiplication: If u ∈ S and λ ∈ R, then λu ∈ S.

(a) S = {(x1 , x2 , . . . , xn ) ∈ Rn : xn = 0}

• The zero vector 0 = (0, 0, . . . , 0) satisfies xn = 0, so 0 ∈ S.

• If u, v ∈ S, then un = 0 and vn = 0. For their sum:

(u + v)n = un + vn = 0 + 0 = 0.

Thus, u + v ∈ S.

• If u ∈ S and λ ∈ R, then un = 0. For scalar multiplication:

(λu)n = λun = λ · 0 = 0.

Thus, λu ∈ S.

Hence, S is a subspace.

(b) S = {(x1 , x2 , . . . , xn ) ∈ Rn : x1 + x2 + · · · + xn = 0}

• The zero vector 0 = (0, 0, . . . , 0) satisfies x1 + x2 + · · · + xn = 0, so 0 ∈ S.

• If u, v ∈ S, then u1 + u2 + · · · + un = 0 and v1 + v2 + · · · + vn = 0. For their sum:

(u + v)1 + (u + v)2 + · · · + (u + v)n = (u1 + v1 ) + (u2 + v2 ) + · · · + (un + vn ) = 0.

Thus, u + v ∈ S.

• If u ∈ S and λ ∈ R, then u1 + u2 + · · · + un = 0. For scalar multiplication:

(λu)1 + (λu)2 + · · · + (λu)n = λ(u1 + u2 + · · · + un ) = λ · 0 = 0.

Thus, λu ∈ S.

Hence, S is a subspace.

(c) S = {(x1 , x2 , . . . , xn ) ∈ Rn : x1² + x2² + · · · + xn² ≥ 1}

• The zero vector 0 = (0, 0, . . . , 0) gives 0² + 0² + · · · + 0² = 0 < 1, so 0 ∉ S.

Since the zero vector is not in S, S is not a subspace.

(d) S = {(x1 , x2 , . . . , xn ) ∈ Rn : xi = xn−i+1 , ∀i = 1, 2, . . . , n}

• The zero vector 0 = (0, 0, . . . , 0) satisfies xi = xn−i+1 = 0 for all i, so 0 ∈ S.

• If u, v ∈ S, then ui = un−i+1 and vi = vn−i+1 . For their sum:

(u + v)i = ui + vi and (u + v)n−i+1 = un−i+1 + vn−i+1 .

Since ui = un−i+1 and vi = vn−i+1 , we have (u + v)i = (u + v)n−i+1 . Thus, u + v ∈ S.

• If u ∈ S and λ ∈ R, then ui = un−i+1 . For scalar multiplication:

(λu)i = λui and (λu)n−i+1 = λun−i+1 .

Since ui = un−i+1 , we have (λu)i = (λu)n−i+1 . Thus, λu ∈ S.

Hence, S is a subspace.

Solution 9

Determining Subspaces of M2×2(R)

Let M2×2 (R) denote the vector space of all 2 × 2 real matrices. To determine whether a subset
S ⊆ M2×2 (R) is a subspace, we verify the following conditions:

1. The zero matrix O ∈ S.

2. S is closed under addition: If A, B ∈ S, then A + B ∈ S.

3. S is closed under scalar multiplication: If A ∈ S and λ ∈ R, then λA ∈ S.


Let A = [a b; c d] ∈ M2×2 (R).

(a) S = { [a b; c d] ∈ M2×2 (R) : a + b = 0 }

• The zero matrix O = [0 0; 0 0] satisfies 0 + 0 = 0. Thus, O ∈ S.

• If A = [a b; c d] ∈ S and B = [a′ b′; c′ d′] ∈ S, then a + b = 0 and a′ + b′ = 0. For their sum A + B:

    (a + a′) + (b + b′) = (a + b) + (a′ + b′) = 0 + 0 = 0.

Thus, A + B ∈ S.

• If A = [a b; c d] ∈ S and λ ∈ R, then a + b = 0. For the scalar multiple λA:

    (λa) + (λb) = λ(a + b) = λ · 0 = 0.

Thus, λA ∈ S.

36
Hence, S is a subspace.

(b) S = { [a b; c d] ∈ M2×2 (R) : a + b + c + d = 0 }

• The zero matrix O satisfies 0 + 0 + 0 + 0 = 0. Thus, O ∈ S.

• If A = [a b; c d] ∈ S and B = [a′ b′; c′ d′] ∈ S, then a + b + c + d = 0 and a′ + b′ + c′ + d′ = 0. For their sum:

    (a + a′) + (b + b′) + (c + c′) + (d + d′) = (a + b + c + d) + (a′ + b′ + c′ + d′) = 0 + 0 = 0.

Thus, A + B ∈ S.

• If A = [a b; c d] ∈ S and λ ∈ R, then a + b + c + d = 0. For the scalar multiple λA:

    (λa) + (λb) + (λc) + (λd) = λ(a + b + c + d) = λ · 0 = 0.

Thus, λA ∈ S.

Hence, S is a subspace.

(" # " # )
a b a b
(c) S = ∈ M2×2 (R) : det =0
c d c d

• The zero matrix 0 satisfies det(0) = 0, so 0 ∈ S.

• If A, B ∈ S, then det(A) = 0 and det(B) = 0, but det(A + B) need not be 0, since in general

det(A + B) ̸= det(A) + det(B).

For example:

A = [1 0; 0 0], B = [0 0; 0 1], det(A) = 0, det(B) = 0, but det(A + B) = det [1 0; 0 1] = 1 ̸= 0.

Thus, S is not closed under addition.

Hence, S is not a subspace.
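The counterexample can be verified numerically. A minimal sketch in Python (the helper names `det2` and `add2` are ours, not from the solution):

```python
# 2x2 determinant: det [a b; c d] = a*d - b*c
def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# entrywise sum of two 2x2 matrices
def add2(m, n):
    return [[m[i][j] + n[i][j] for j in range(2)] for i in range(2)]

A = [[1, 0], [0, 0]]
B = [[0, 0], [0, 1]]

print(det2(A), det2(B))    # 0 0 -> both A and B lie in S
print(det2(add2(A, B)))    # 1   -> A + B does not lie in S
```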

(" # )
a b
(d) S = ∈ M2×2 (R) : b = c = 0
c d

• The zero matrix 0 satisfies b = c = 0. Thus, 0 ∈ S.


" # " #
a b a′ b ′
• If A = ∈ M2×2 (R), B = ′ ′ ∈ M2×2 (R), then b = c = 0 and b′ = c′ = 0 for
c d c d
both A and B. For their sum:

b + b′ = 0 + 0 = 0, c + c′ = 0 + 0 = 0.

Thus, A + B ∈ S.

• If A ∈ S and λ ∈ R, then b = c = 0. For scalar multiplication:

λb = λ · 0 = 0, λc = λ · 0 = 0.

Thus, λA ∈ S.

Hence, S is a subspace.

(e) S = {A ∈ M2×2 (R) : A = A^T }

• The zero matrix O satisfies O = O^T . Thus, O ∈ S.

• If A, B ∈ S, then A = A^T and B = B^T . For their sum:

(A + B)^T = A^T + B^T = A + B.

Thus, A + B ∈ S.

• If A ∈ S and λ ∈ R, then A = A^T . For scalar multiplication:

(λA)^T = λA^T = λA.

Thus, λA ∈ S.

Hence, S is a subspace.
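As a numerical illustration of the closure argument (an instance check on one sample pair, not a proof; the helper functions are ours):

```python
# transpose of a square matrix given as a list of rows
def transpose(m):
    return [[m[j][i] for j in range(len(m))] for i in range(len(m[0]))]

def is_symmetric(m):
    return m == transpose(m)

def add(m, n):
    return [[m[i][j] + n[i][j] for j in range(len(m[0]))] for i in range(len(m))]

def scale(k, m):
    return [[k * m[i][j] for j in range(len(m[0]))] for i in range(len(m))]

A = [[1, 2], [2, 5]]
B = [[0, -3], [-3, 4]]

print(is_symmetric(A), is_symmetric(B))  # True True
print(is_symmetric(add(A, B)))           # True: closed under addition
print(is_symmetric(scale(7, A)))         # True: closed under scalar multiplication
```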

(f) S = {A ∈ M2×2 (R) : A = −A^T }

• The zero matrix O satisfies O = −O^T . Thus, O ∈ S.

• If A, B ∈ S, then A = −A^T and B = −B^T . For their sum:

(A + B)^T = A^T + B^T = −A − B = −(A + B).

Thus, A + B ∈ S.

• If A ∈ S and λ ∈ R, then A = −A^T . For scalar multiplication:

(λA)^T = λA^T = λ(−A) = −(λA).

Thus, λA ∈ S.

Hence, S is a subspace.

(" # )
a b
(g) S = ∈ M2×2 (R) : c = 0
c d
• The zero matrix O satisfies c = 0. Thus, O ∈ S.
" # " #
a b a′ b ′
• A= ∈ M2×2 (R), B = ′ ′ ∈ M2×2 (R), then c = 0 and c′ = 0. For their sum:
c d c d

c + c′ = 0 + 0 = 0.

Thus, A + B ∈ S.

• If A ∈ S and λ ∈ R, then c = 0. For scalar multiplication:

λc = λ · 0 = 0.

Thus, λA ∈ S.

Hence, S is a subspace.

(" # )
a b
(h) S = ∈ M2×2 (R) : b = 0
c d

• The zero matrix O satisfies b = 0. Thus, O ∈ S.


" # " #
a b a′ b ′
• If A = ∈ M2×2 (R), B = ′ ′ ∈ M2×2 (R), then b = 0 and b′ = 0 . For their
c d c d
sum:
b + b′ = 0 + 0 = 0.

Thus, A + B ∈ S.

• If A ∈ S and λ ∈ R, then b = 0. For scalar multiplication:

λb = λ · 0 = 0.

Thus, λA ∈ S.

Hence, S is a subspace.

Solution 10

Determining Subspaces of C[0, 1]

Let C[0, 1] denote the vector space of all continuous functions defined on the interval [0, 1]. To
determine whether a subset S ⊆ C[0, 1] is a subspace, we check the following conditions:

1. The zero function f0 (x) = 0, ∀x ∈ [0, 1], is in S.

2. S is closed under addition: If f, g ∈ S, then f + g ∈ S.

3. S is closed under scalar multiplication: If f ∈ S and λ ∈ R, then λf ∈ S.

(a) S = {f ∈ C[0, 1] : f (0) = 0}

• The zero function f0 (x) = 0, ∀x ∈ [0, 1], satisfies f0 (0) = 0. Thus, f0 ∈ S.

• If f, g ∈ S, then f (0) = 0 and g(0) = 0. For their sum:

(f + g)(0) = f (0) + g(0) = 0 + 0 = 0.

Thus, f + g ∈ S.

• If f ∈ S and λ ∈ R, then f (0) = 0. For scalar multiplication:

(λf )(0) = λf (0) = λ · 0 = 0.

Thus, λf ∈ S.

Hence, S is a subspace.
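A quick instance check of the closure argument with two sample functions vanishing at 0 (illustrative only; the choices of `f` and `g` are ours):

```python
import math

f = lambda x: math.sin(x)      # f(0) = 0, so f is in S
g = lambda x: x * (x - 1)      # g(0) = 0, so g is in S

h = lambda x: f(x) + g(x)      # the sum f + g
k = lambda x: 3.5 * f(x)       # a scalar multiple of f

print(h(0), k(0))  # 0.0 0.0 -> the sum and the scalar multiple stay in S
```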

(b) S = {f ∈ C[0, 1] : f (0) = 0, f (1) = 0}

• The zero function f0 (x) = 0, ∀x ∈ [0, 1], satisfies f0 (0) = 0 and f0 (1) = 0. Thus, f0 ∈ S.

• If f, g ∈ S, then f (0) = 0, f (1) = 0, g(0) = 0, and g(1) = 0. For their sum:

(f + g)(0) = f (0) + g(0) = 0 + 0 = 0, (f + g)(1) = f (1) + g(1) = 0 + 0 = 0.

Thus, f + g ∈ S.

• If f ∈ S and λ ∈ R, then f (0) = 0 and f (1) = 0. For scalar multiplication:

(λf )(0) = λf (0) = λ · 0 = 0, (λf )(1) = λf (1) = λ · 0 = 0.

Thus, λf ∈ S.

Hence, S is a subspace.

(c) S = D[0, 1], the set of differentiable functions on [0, 1]

• The zero function f0 (x) = 0, ∀x ∈ [0, 1], is differentiable, so f0 ∈ S.

• If f, g ∈ S, then f and g are differentiable on [0, 1]. The sum f + g is differentiable, since:

(f + g)′ (x) = f ′ (x) + g ′ (x), ∀x ∈ [0, 1].

Thus, f + g ∈ S.

• If f ∈ S and λ ∈ R, then f is differentiable. The scalar multiple λf is differentiable, since:

(λf )′ (x) = λf ′ (x), ∀x ∈ [0, 1].

Thus, λf ∈ S.

Hence, S = D[0, 1] is a subspace.

Solution 11

Subspaces of R2

A subspace of the vector space R2 is a subset W ⊆ R2 that satisfies the following three properties:

1. W contains the zero vector, i.e., 0 = (0, 0) ∈ W .

2. W is closed under vector addition, i.e., for all u, v ∈ W ,

u + v ∈ W.

3. W is closed under scalar multiplication, i.e., for all u ∈ W and λ ∈ R,

λu ∈ W.

The subspaces of R2 are:

• The trivial subspace: {(0, 0)}, which contains only the zero vector.

• All lines through the origin: For any nonzero vector v = (a, b) ∈ R2 , the set:

{tv | t ∈ R} = {t(a, b) | t ∈ R}

is a subspace of R2 . These are lines passing through the origin.

• The entire space: R2 , which includes all vectors in R2 .

Thus, the subspaces of R2 are:

{(0, 0)}, all lines through the origin, R2 .

Subspaces of R3

Similarly, a subspace of R3 satisfies the same three properties (contains the zero vector, closed
under addition, and closed under scalar multiplication). The subspaces of R3 are:

• The trivial subspace: {(0, 0, 0)}, which contains only the zero vector.

• All lines through the origin: For any nonzero vector v = (a, b, c) ∈ R3 , the set:

{tv | t ∈ R} = {t(a, b, c) | t ∈ R}

is a subspace of R3 . These are lines passing through the origin.

• All planes through the origin: For any two linearly independent vectors u, v ∈ R3 , the set:

{su + tv | s, t ∈ R}

forms a plane through the origin.

• The entire space: R3 , which includes all vectors in R3 .

Thus, the subspaces of R3 are:

{(0, 0, 0)}, all lines through the origin, all planes through the origin, R3 .

Solution of 12(a)

Any set containing the zero vector is linearly dependent.

E.g.: A = {(a, b), (0, 0)}.

0 · (a, b) + 1 · (0, 0) = (0, 0) is a nontrivial linear combination (the coefficient of (0, 0) is
nonzero) that gives the zero vector, so the set is linearly dependent.
So, the given statement is true.

Solution of 12(b)

If S is a linearly dependent set, then each vector in S is a linear combination of the other
vectors in S.
E.g.: Let S = {(1, 0), (0, 0)}. The set is linearly dependent, since 0 · (1, 0) + 1 · (0, 0) = (0, 0).
However, there is no α such that α(0, 0) = (1, 0), so (1, 0) is not a linear combination of the
other vector in S.
So, the given statement is false.

Solution of 12(c)

Subsets of linearly independent sets are linearly independent (L.I.).

Let S be L.I. and let A ⊂ S. Suppose A is not L.I. Then there exists v ∈ A which can be
written as a linear combination of the other elements of A:
v = α1 v1 + α2 v2 + ... + αn vn .
Since v ∈ A and A ⊂ S, this is a nontrivial dependence among elements of S, so S is L.D.,
which contradicts the assumption that S is L.I.
Therefore subsets of linearly independent sets are linearly independent.
So, the given statement is true.

Solution of 12(d)

Subsets of linearly dependent sets are linearly dependent.

A subset of a linearly dependent set may or may not be linearly dependent.

E.g.: S = {(1, 0), (0, 1), (0, 0)} is L.D., but its subset A = {(1, 0), (0, 1)} is L.I.
So, the given statement is false.

Solution of 13(a)

Given that A = {x^3 + 2x^2 , −x^2 + 3x + 1, x^3 − x^2 + 2x − 1} in P3 (R).

Setting

aP1 (x) + bP2 (x) + cP3 (x) = 0,

a(x^3 + 2x^2 ) + b(−x^2 + 3x + 1) + c(x^3 − x^2 + 2x − 1) = 0
(a + c)x^3 + (2a − b − c)x^2 + (3b + 2c)x + (b − c) = 0.

Now, comparing coefficients:

(a + c) = 0
(2a − b − c) = 0
(3b + 2c) = 0
(b − c) = 0

Here a = −c, b = −2c/3 and b = c, so the only possible solution is b = c = 0, which implies a = 0.
Therefore given set is linearly independent.
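The conclusion can be cross-checked by computing the rank of the coefficient matrix of the homogeneous system. A sketch in Python with exact fractions (the `rank` routine is our addition, not part of the solution):

```python
from fractions import Fraction

# Columns are the coefficient vectors of the three polynomials in the
# basis (x^3, x^2, x, 1): x^3 + 2x^2, -x^2 + 3x + 1, x^3 - x^2 + 2x - 1.
M = [[1,  0,  1],
     [2, -1, -1],
     [0,  3,  2],
     [0,  1, -1]]

def rank(m):
    # Gauss-Jordan elimination over the rationals
    m = [[Fraction(v) for v in row] for row in m]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

print(rank(M))  # 3 = number of vectors, so the set is linearly independent
```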

Solution of 13(b)

Given that A = {(1, 2, 2), (2, 1, 2), (2, 2, 1)} in R3 .


Now

a(1, 2, 2) + b(2, 1, 2) + c(2, 2, 1) = 0


(a + 2b + 2c, 2a + b + 2c, 2a + 2b + c) = 0

here

a + 2b + 2c = 0
2a + b + 2c = 0
2a + 2b + c = 0

As a matrix equation:

[1 2 2; 2 1 2; 2 2 1] · [a; b; c] = [0; 0; 0]

Here, row-reducing the augmented matrix:

[1 2 2 : 0; 2 1 2 : 0; 2 2 1 : 0]

=⇒ [1 2 2 : 0; 0 −3 −2 : 0; 0 −2 −3 : 0]   (R2 ←− R2 − 2R1 , R3 ←− R3 − 2R1 )

=⇒ [1 2 2 : 0; 0 −3 −2 : 0; 0 0 −5/3 : 0]   (R3 ←− R3 − (2/3)R2 )

Now

(−5/3)c = 0 =⇒ c = 0
−3b − 2c = 0 =⇒ b = 0
a + 2b + 2c = 0 =⇒ a = 0

So, A is L.I.
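Equivalently, the coefficient matrix is invertible: its determinant (cofactor expansion along the first row) is nonzero, so only the trivial solution exists. A small check:

```python
def det3(m):
    # cofactor expansion of a 3x3 determinant along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

M = [[1, 2, 2],
     [2, 1, 2],
     [2, 2, 1]]

print(det3(M))  # 5 != 0, so a = b = c = 0 is the only solution
```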

Solution of 13(c)
(" # " #)
1 −3 −2 6
X= , in ∈ M2×2 (R)
−2 4 4 −8
" # " #
1 −3 −2 6
let A = and B =
−2 4 4 −8

or

" # " # " #


1 −3 −2 6 0 0
a +b =
−2 4 4 −8 0 0

now

a − 2b = 0
−3a + 6b = 0
−2a + 4b = 0
4a − 8b = 0
=⇒ a = 2b

So, taking b = 1 (hence a = 2):

2 [1 −3; −2 4] + [−2 6; 4 −8] = [0 0; 0 0],

i.e. 2A + B = 0, or B = −2A.

so X is L.D.
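The dependence relation B = −2A can be confirmed entrywise:

```python
A = [[1, -3], [-2, 4]]
B = [[-2, 6], [4, -8]]

# every entry of B equals -2 times the corresponding entry of A,
# so B = -2A and {A, B} is linearly dependent
dependent = all(B[i][j] == -2 * A[i][j] for i in range(2) for j in range(2))
print(dependent)  # True
```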

Solution of 14

Let u and v be distinct vectors in a vector space V over F. We have to show that {u, v} is
linearly dependent if and only if one of u, v is a multiple of the other.

Suppose {u, v} is L.D. Then there exist scalars a, b ∈ F, not both zero, such that au + bv = ⃗0
(the zero vector).

Suppose a ̸= 0. Then

au + bv = ⃗0
=⇒ au = −bv
=⇒ a^{-1} au = a^{-1} (−bv)

{Since a ̸= 0, ∃ a^{-1} ∈ F with a^{-1} a = 1}

=⇒ u = (−a^{-1} b)v
Therefore u is a multiple of v.
Now suppose b ̸= 0. Then

au + bv = ⃗0
=⇒ bv = −au
=⇒ b^{-1} bv = b^{-1} (−au)

{Since b ̸= 0, ∃ b^{-1} ∈ F with b^{-1} b = 1}

=⇒ v = (−b^{-1} a)u
Therefore v is a multiple of u.

Conversely:

1. Suppose u is a multiple of v, i.e. ∃ a ∈ F such that

u = av
=⇒ u − av = ⃗0,

which expresses the zero vector as a nontrivial linear combination of u and v (the coefficient
of u is 1 ̸= 0).

2. Suppose v is a multiple of u, i.e. ∃ a ∈ F such that

v = au
=⇒ v − au = ⃗0,

again a nontrivial linear combination of u and v giving the zero vector.

In either case {u, v} is linearly dependent.

Solution of 15

Let {u, v, w} be L.I. in a real vector space V. We have to show that {λu, λv, λw}, {u + λv, v, w},
{u + v, u + w, v + w}, {u + v + w, v + w, w} are L.I. in V and that {u + λv, v + λw, w + λu} may
not be linearly independent in V, where λ ∈ R and λ ̸= 0.

Now

Let us assume that {λu, λv, λw} is not L.I. Then there exist scalars a, b, c (not all zero) such that

a(λu) + b(λv) + c(λw) = 0

(aλ)u + (bλ)v + (cλ)w = 0.

Since λ ̸= 0, the scalars aλ, bλ, cλ are not all zero, so this is a nontrivial dependence among
u, v, w. But u, v and w are L.I., a contradiction. Hence {λu, λv, λw} is L.I.
Next

Let us consider that {u + λv, v, w} is not L.I. Then there exist scalars a, b, c (not all zero) such that

a(u + λv) + bv + cw = 0
au + (aλ + b)v + cw = 0.

Since u, v, w are L.I., this forces a = 0, aλ + b = 0 and c = 0, hence a = b = c = 0, contradicting
that a, b, c are not all zero. Hence {u + λv, v, w} is L.I.

Again

Let us consider that {u + v + w, v + w, w} is not L.I. Then there exist scalars a, b, c (not all zero)
such that

a(u + v + w) + b(v + w) + cw = 0
au + (a + b)v + (a + b + c)w = 0.

Since u, v, w are L.I., this forces a = 0, a + b = 0 and a + b + c = 0, hence a = b = c = 0,
contradicting that a, b, c are not all zero. Hence {u + v + w, v + w, w} is L.I.

Check {u + λv, v + λw, w + λu}

1. let λ = 0 then {u, v, w} −→ which is already L.I.

2. λ ̸= 0: let a, b, c be scalars in R with

a(u + λv) + b(v + λw) + c(w + λu) = 0


(a + cλ)u + (b + aλ)v + (c + bλ)w = 0

Since {u, v, w} is L.I., the above equation forces all three coefficients to vanish:

a + cλ = 0
b + aλ = 0
c + bλ = 0

Here, in matrix form:

[1 0 λ; λ 1 0; 0 λ 1] · [a; b; c] = [0; 0; 0]

Row reduction (R2 ←− R2 − λR1 , then R3 ←− R3 − λR2 ) gives

[1 0 λ : 0; 0 1 −λ^2 : 0; 0 0 1 + λ^3 : 0]
If 1 + λ^3 ̸= 0, the system forces a = b = c = 0 and the set is L.I. But

1 + λ^3 = 0 =⇒ λ^3 = −1 =⇒ λ = −1,

and for λ = −1 the last row vanishes, so c is free: b = λ^2 c = c and a = −λc = c, giving
a = b = c = t for any t ∈ R. For example, with t = 1,

(u − v) + (v − w) + (w − u) = 0.

So for λ = −1 the set {u + λv, v + λw, w + λu} is L.D.
Hence, {u + λv, v + λw, w + λu} may not be L.I. in V.
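The λ = −1 case can be checked concretely, taking the standard basis of R³ as the independent triple (an illustration we add; any L.I. triple works):

```python
u, v, w = (1, 0, 0), (0, 1, 0), (0, 0, 1)  # a linearly independent triple

lam = -1
x = tuple(ui + lam * vi for ui, vi in zip(u, v))  # u + lam*v = u - v
y = tuple(vi + lam * wi for vi, wi in zip(v, w))  # v + lam*w = v - w
z = tuple(wi + lam * ui for wi, ui in zip(w, u))  # w + lam*u = w - u

# a = b = c = 1 is a nontrivial combination giving the zero vector
print(tuple(p + q + r for p, q, r in zip(x, y, z)))  # (0, 0, 0)
```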

Solution of 16
 
For A = [1 2 5; 3 0 7; −1 4 3], examine whether (1, 1, 1) and (1, −1, 1) are in (a) the row
space of A, (b) the column space of A.

(a) Let a(1, 2, 5) + b(3, 0, 7) + c(−1, 4, 3) = (1, 1, 1).


so

a + 3b − c = 1
2a + 4c = 1
5a + 7b + 3c = 1

Now, row-reducing the augmented matrix:

[1 3 −1 : 1; 2 0 4 : 1; 5 7 3 : 1]

=⇒ [1 3 −1 : 1; 0 −6 6 : −1; 0 −8 8 : −4]   (R2 ←− R2 − 2R1 , R3 ←− R3 − 5R1 )

=⇒ [1 3 −1 : 1; 0 −6 6 : −1; 0 0 0 : −8/3]   (R3 ←− R3 − (4/3)R2 )

=⇒ Rank(A) ̸= Rank(A:B)
=⇒ No solution.

So (1, 1, 1) is not in row space of A.
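The rank comparison can be reproduced with exact rational elimination; a sketch (the `rank` helper is our addition):

```python
from fractions import Fraction

def rank(m):
    # Gauss-Jordan elimination over the rationals
    m = [[Fraction(v) for v in row] for row in m]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# a*(1,2,5) + b*(3,0,7) + c*(-1,4,3) = (1,1,1): the columns of the
# coefficient matrix are the rows of A
M   = [[1, 3, -1], [2, 0, 4], [5, 7, 3]]
aug = [row + [t] for row, t in zip(M, [1, 1, 1])]

print(rank(M), rank(aug))  # 2 3 -> inconsistent, so (1,1,1) is not in the row space
```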

Here
Let a(1, 2, 5) + b(3, 0, 7) + c(−1, 4, 3) = (1, −1, 1).
so

a + 3b − c = 1
2a + 4c = −1
5a + 7b + 3c = 1

Now, row-reducing the augmented matrix (a11 = 1 is the pivot element; we reduce a21 , a31 to zero):

[1 3 −1 : 1; 2 0 4 : −1; 5 7 3 : 1]

=⇒ [1 3 −1 : 1; 0 −6 6 : −3; 0 −8 8 : −4]   (R2 ←− R2 − 2R1 , R3 ←− R3 − 5R1 )

=⇒ [1 3 −1 : 1; 0 −6 6 : −3; 0 0 0 : 0]   (R3 ←− R3 − (4/3)R2 )

=⇒ −6b + 6c = −3 =⇒ 2b = 2c + 1 =⇒ b = (2c + 1)/2
and
=⇒ a + 3b − c = 1 =⇒ a + (6c + 3)/2 − c = 1 =⇒ a = −1/2 − 2c.

Let c = 0; then b = 1/2 and a = −1/2, so

(−1/2)(1, 2, 5) + (1/2)(3, 0, 7) = (1, −1, 1).

Hence (1, −1, 1) is in row space of A.

(b) Now for column space of A

Let a(1, 3, −1) + b(2, 0, 4) + c(5, 7, 3) = (1, −1, 1).


so

a + 2b + 5c = 1
3a + 7c = −1
−a + 4b + 3c = 1

Now, row-reducing the augmented matrix:

[1 2 5 : 1; 3 0 7 : −1; −1 4 3 : 1]

=⇒ [1 2 5 : 1; 0 −6 −8 : −4; 0 6 8 : 2]   (R2 ←− R2 − 3R1 , R3 ←− R3 + R1 )

=⇒ [1 2 5 : 1; 0 −6 −8 : −4; 0 0 0 : −2]   (R3 ←− R3 + R2 )

=⇒ Rank(A) ̸= Rank(A:B)
=⇒ No solution.
So (1, −1, 1) is not in column space of A.
Next
Let a(1, 3, −1) + b(2, 0, 4) + c(5, 7, 3) = (1, 1, 1).
so

a + 2b + 5c = 1
3a + 7c = 1
−a + 4b + 3c = 1

Now, row-reducing the augmented matrix:

[1 2 5 : 1; 3 0 7 : 1; −1 4 3 : 1]

=⇒ [1 2 5 : 1; 0 −6 −8 : −2; 0 6 8 : 2]   (R2 ←− R2 − 3R1 , R3 ←− R3 + R1 )

=⇒ [1 2 5 : 1; 0 −6 −8 : −2; 0 0 0 : 0]   (R3 ←− R3 + R2 )

=⇒ −6b − 8c = −2 =⇒ 6b = 2 − 8c =⇒ b = (1 − 4c)/3
and
=⇒ a + 2b + 5c = 1 =⇒ a + 2(1 − 4c)/3 + 5c = 1 =⇒ a = (1 − 7c)/3.

Let c = 0; then b = 1/3 and a = 1/3, so

(1/3)(1, 3, −1) + (1/3)(2, 0, 4) = (1, 1, 1).

Hence (1, 1, 1) is in column space of A.
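Both membership certificates found above can be verified directly with exact fractions:

```python
from fractions import Fraction as F

# Row space: (-1/2)*(1,2,5) + (1/2)*(3,0,7) should equal (1,-1,1)
r1, r2 = (1, 2, 5), (3, 0, 7)
row_comb = tuple(F(-1, 2) * a + F(1, 2) * b for a, b in zip(r1, r2))
print(row_comb == (1, -1, 1))  # True

# Column space: (1/3)*(1,3,-1) + (1/3)*(2,0,4) should equal (1,1,1)
c1, c2 = (1, 3, -1), (2, 0, 4)
col_comb = tuple(F(1, 3) * a + F(1, 3) * b for a, b in zip(c1, c2))
print(col_comb == (1, 1, 1))  # True
```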
