Engineering Mathematics-II
Tutorial Sheet-I (Solutions of Q.1 to Q.16)
Winter 2024-25
sol(1)
To find the rank of the given matrix, we need to determine the maximum number of linearly
independent rows or columns in the matrix.
For this, we transform the matrix into row echelon form.
Now
\[
A=\begin{pmatrix} a & -1 & 1\\ -1 & a & -1\\ -1 & -1 & a\\ 1 & 1 & 1 \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} 1 & 1 & 1\\ -1 & a & -1\\ -1 & -1 & a\\ a & -1 & 1 \end{pmatrix}
\quad R_1 \leftrightarrow R_4.
\]
Here $a_{11}=1$ is the pivot element; we reduce $a_{21}=-1$ and $a_{31}=-1$ to zero.
\[
\;\Longrightarrow\;
\begin{pmatrix} 1 & 1 & 1\\ 0 & a+1 & 0\\ 0 & 0 & a+1\\ a+1 & 0 & 2 \end{pmatrix}
\quad R_2 \leftarrow R_2+R_1,\; R_3 \leftarrow R_3+R_1,\; R_4 \leftarrow R_4+R_1
\]
\[
\;\Longrightarrow\;
\begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & a+1\\ 0 & a+1 & 0\\ a+1 & 0 & 2 \end{pmatrix}
\quad R_2 \leftrightarrow R_3
\;\Longrightarrow\;
\begin{pmatrix} 1 & 1 & 1\\ a+1 & 0 & 2\\ 0 & a+1 & 0\\ 0 & 0 & a+1 \end{pmatrix}
\quad R_2 \leftrightarrow R_4.
\]
For $a \neq -1$ we may divide by $a+1$ and continue:
\[
\;\Longrightarrow\;
\begin{pmatrix} 1 & 1 & 0\\ a+1 & 0 & 0\\ 0 & a+1 & 0\\ 0 & 0 & a+1 \end{pmatrix}
\quad R_1 \leftarrow R_1-\tfrac{1}{a+1}R_4,\; R_2 \leftarrow R_2-\tfrac{2}{a+1}R_4
\]
\[
\;\Longrightarrow\;
\begin{pmatrix} 0 & 1 & 0\\ a+1 & 0 & 0\\ 0 & a+1 & 0\\ 0 & 0 & a+1 \end{pmatrix}
\quad R_1 \leftarrow R_1-\tfrac{1}{a+1}R_2
\;\Longrightarrow\;
\begin{pmatrix} 0 & 0 & 0\\ a+1 & 0 & 0\\ 0 & a+1 & 0\\ 0 & 0 & a+1 \end{pmatrix}
\quad R_1 \leftarrow R_1-\tfrac{1}{a+1}R_3
\]
\[
\;\Longrightarrow\;
\begin{pmatrix} a+1 & 0 & 0\\ 0 & a+1 & 0\\ 0 & 0 & a+1\\ 0 & 0 & 0 \end{pmatrix}
\quad \text{(move the zero row to the bottom: } R_1 \leftrightarrow R_2 \leftrightarrow R_3 \leftrightarrow R_4\text{)}.
\]
1. When $a \neq -1$, there are 3 non-zero, linearly independent rows. Therefore, the rank of $A$ is 3.
2. When $a = -1$, the matrix obtained after the first elimination step becomes
\[
\begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 2 \end{pmatrix}
\;\Longrightarrow\;
\begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & 2\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}
\quad R_2 \leftrightarrow R_4.
\]
Hence the row echelon form is
\[
\begin{pmatrix} 1 & 1 & 1\\ 0 & 0 & 2\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}.
\]
Clearly, there are 2 nonzero rows and they are linearly independent. Therefore, the rank of A is 2.
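As a quick cross-check of both cases, the rank can also be computed with SymPy; this is only a verification sketch, with the matrix and the parameter name a taken from the working above.

```python
import sympy as sp

a = sp.symbols('a')
A = sp.Matrix([[a, -1, 1],
               [-1, a, -1],
               [-1, -1, a],
               [1, 1, 1]])

# Generic case: SymPy treats the symbol a as avoiding special values such as a = -1.
print(A.rank())              # 3

# Special case: substitute a = -1 and recompute the rank.
print(A.subs(a, -1).rank())  # 2
```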
sol(2)
The rank of the matrix is 2 if and only if every 3 × 3 minor is zero while at least one 2 × 2 minor is nonzero.
An intermediate step of the row reduction contains the rows
\[
\begin{pmatrix} 1 & 1-x & 2x+1 & -5-3x \end{pmatrix},
\qquad
\begin{pmatrix} 0 & -2-x & 5+3x & -5-x \end{pmatrix}.
\]
Dividing $R_2$ by $-4$, the working matrix becomes
\[
\begin{pmatrix} 1 & 3 & -3 & x\\[2pt] 0 & 1 & -\tfrac{x+6}{4} & 1+\tfrac{x}{2}\\[2pt] 0 & -2-x & 5+3x & -5-x \end{pmatrix}.
\]
The matrix is now row reduced. For the rank to be 2, the bottom row would have to reduce to zero; however, there is no value of x for which the entries −2 − x, 5 + 3x and −5 − x vanish simultaneously.
Hence there does not exist any value of x ∈ R such that the rank of A is 2.
sol(3)
The inverse of a square matrix A can be found using the augmented matrix [A | I], where I is the
identity matrix. Performing row reduction on [A | I] until A is transformed into I, the resulting
right-hand block is A⁻¹.
For A we form the augmented matrix [A | I]:
\[
\left(\begin{array}{ccc|ccc} 0&2&4&1&0&0\\ 2&4&2&0&1&0\\ 3&3&1&0&0&1 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 2&4&2&0&1&0\\ 0&2&4&1&0&0\\ 3&3&1&0&0&1 \end{array}\right)
\quad R_1 \leftrightarrow R_2 \ \text{(the pivot in the first column is zero)}.
\]
We reduce A to the identity matrix I to find the inverse of A; that is why we perform elementary row operations.
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&2&1&0&\tfrac12&0\\ 0&1&2&\tfrac12&0&0\\ 3&3&1&0&0&1 \end{array}\right)
\quad R_1 \leftarrow \tfrac12 R_1,\; R_2 \leftarrow \tfrac12 R_2
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&2&1&0&\tfrac12&0\\ 0&1&2&\tfrac12&0&0\\ 0&-3&-2&0&-\tfrac32&1 \end{array}\right)
\quad R_3 \leftarrow R_3-3R_1
\]
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&2&1&0&\tfrac12&0\\ 0&1&2&\tfrac12&0&0\\ 0&0&4&\tfrac32&-\tfrac32&1 \end{array}\right)
\quad R_3 \leftarrow R_3+3R_2
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&2&1&0&\tfrac12&0\\ 0&1&2&\tfrac12&0&0\\ 0&0&1&\tfrac38&-\tfrac38&\tfrac14 \end{array}\right)
\quad R_3 \leftarrow \tfrac14 R_3
\]
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&2&0&-\tfrac38&\tfrac78&-\tfrac14\\ 0&1&0&-\tfrac14&\tfrac34&-\tfrac12\\ 0&0&1&\tfrac38&-\tfrac38&\tfrac14 \end{array}\right)
\quad R_1 \leftarrow R_1-R_3,\; R_2 \leftarrow R_2-2R_3
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&0&0&\tfrac18&-\tfrac58&\tfrac34\\ 0&1&0&-\tfrac14&\tfrac34&-\tfrac12\\ 0&0&1&\tfrac38&-\tfrac38&\tfrac14 \end{array}\right)
\quad R_1 \leftarrow R_1-2R_2.
\]
Thus, the inverse of A is
\[
A^{-1}=\begin{pmatrix} \tfrac18 & -\tfrac58 & \tfrac34\\ -\tfrac14 & \tfrac34 & -\tfrac12\\ \tfrac38 & -\tfrac38 & \tfrac14 \end{pmatrix}.
\]
Similarly, for B we form the augmented matrix [B | I]:
\[
\left(\begin{array}{ccc|ccc} 1&1&2&1&0&0\\ 2&4&4&0&1&0\\ 3&3&7&0&0&1 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&1&2&1&0&0\\ 0&2&0&-2&1&0\\ 0&0&1&-3&0&1 \end{array}\right)
\quad R_2 \leftarrow R_2-2R_1,\; R_3 \leftarrow R_3-3R_1
\]
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&1&2&1&0&0\\ 0&1&0&-1&\tfrac12&0\\ 0&0&1&-3&0&1 \end{array}\right)
\quad R_2 \leftarrow \tfrac12 R_2
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&0&2&2&-\tfrac12&0\\ 0&1&0&-1&\tfrac12&0\\ 0&0&1&-3&0&1 \end{array}\right)
\quad R_1 \leftarrow R_1-R_2
\]
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|ccc} 1&0&0&8&-\tfrac12&-2\\ 0&1&0&-1&\tfrac12&0\\ 0&0&1&-3&0&1 \end{array}\right)
\quad R_1 \leftarrow R_1-2R_3.
\]
Thus, the inverse of B is:
\[
B^{-1}=\begin{pmatrix} 8 & -\tfrac12 & -2\\ -1 & \tfrac12 & 0\\ -3 & 0 & 1 \end{pmatrix}.
\]
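As a sanity check, both inverses can be verified numerically; a minimal NumPy sketch using the matrices A and B and the inverses obtained above:

```python
import numpy as np

# Matrices from Solution 3 and the inverses obtained by row reduction.
A = np.array([[0., 2., 4.], [2., 4., 2.], [3., 3., 1.]])
B = np.array([[1., 1., 2.], [2., 4., 4.], [3., 3., 7.]])

A_inv = np.array([[1/8, -5/8, 3/4], [-1/4, 3/4, -1/2], [3/8, -3/8, 1/4]])
B_inv = np.array([[8., -1/2, -2.], [-1., 1/2, 0.], [-3., 0., 1.]])

# Both products should be (numerically) the 3x3 identity matrix.
print(np.allclose(A @ A_inv, np.eye(3)))   # True
print(np.allclose(B @ B_inv, np.eye(3)))   # True
```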
sol (4)
(a)
The system in matrix form is
\[
\begin{pmatrix} 1&1&0\\ 0&1&-1\\ 2&1&4 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
=\begin{pmatrix} 4\\ 1\\ 7 \end{pmatrix}.
\]
The augmented matrix is
\[
\left(\begin{array}{ccc|c} 1&1&0&4\\ 0&1&-1&1\\ 2&1&4&7 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&1&0&4\\ 0&1&-1&1\\ 0&-1&4&-1 \end{array}\right)
\quad R_3 \leftarrow R_3-2R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&1&0&4\\ 0&1&-1&1\\ 0&0&3&0 \end{array}\right)
\quad R_3 \leftarrow R_3+R_2
\]
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&1&0&4\\ 0&1&-1&1\\ 0&0&1&0 \end{array}\right)
\quad R_3 \leftarrow \tfrac13 R_3
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&1&0&4\\ 0&1&0&1\\ 0&0&1&0 \end{array}\right)
\quad R_2 \leftarrow R_2+R_3
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&0&0&3\\ 0&1&0&1\\ 0&0&1&0 \end{array}\right)
\quad R_1 \leftarrow R_1-R_2.
\]
Therefore the required solution is
\[
(x_1,\, x_2,\, x_3) = (3,\, 1,\, 0).
\]
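Since the coefficient matrix here is invertible, the answer can also be reproduced numerically; a small sketch with the matrix and right-hand side of part (a):

```python
import numpy as np

# Coefficient matrix and right-hand side of the system in part (a).
A = np.array([[1., 1., 0.], [0., 1., -1.], [2., 1., 4.]])
b = np.array([4., 1., 7.])

# For a square system with a unique solution, np.linalg.solve reproduces the answer.
print(np.linalg.solve(A, b))   # [3. 1. 0.]
```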
(b)
Now, the reduced homogeneous system is
\begin{align*}
x_1 + 3x_2 + x_3 &= 0,\\
-7x_2 - x_3 &= 0,
\end{align*}
so $x_3 = -7x_2$ and hence $x_1 = -3x_2 - x_3 = 4x_2$. Taking $x_2 = t$ as the free parameter, the solution set is
\[
(x_1, x_2, x_3) = t\,(4,\, 1,\, -7), \qquad t \in \mathbb{R}.
\]
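A quick cross-check of this solution family, using only the reduced coefficient matrix shown above (the original system of part (b) is not reproduced here):

```python
import sympy as sp

# Reduced coefficient matrix of the homogeneous system from part (b).
M = sp.Matrix([[1, 3, 1],
               [0, -7, -1]])

# The null space is one-dimensional; its basis vector is proportional to (4, 1, -7).
basis = M.nullspace()[0]
print(basis.T)   # e.g. Matrix([[-4/7, -1/7, 1]]), a scalar multiple of (4, 1, -7)
```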
(c)
Matrix form:
\[
\begin{pmatrix} 1&2&-1\\ -1&1&2\\ 2&1&-3 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
=\begin{pmatrix} 10\\ 2\\ 2 \end{pmatrix}.
\]
The augmented matrix is
\[
\left(\begin{array}{ccc|c} 1&2&-1&10\\ -1&1&2&2\\ 2&1&-3&2 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&-1&10\\ 0&3&1&12\\ 0&-3&-1&-18 \end{array}\right)
\quad R_2 \leftarrow R_2+R_1,\; R_3 \leftarrow R_3-2R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&-1&10\\ 0&3&1&12\\ 0&0&0&-6 \end{array}\right)
\quad R_3 \leftarrow R_3+R_2.
\]
Now
\begin{align*}
x_1 + 2x_2 - x_3 &= 10,\\
3x_2 + x_3 &= 12,\\
0\cdot x_1 + 0\cdot x_2 + 0\cdot x_3 &= -6.
\end{align*}
Clearly, the last equation is absurd, and therefore the given system is inconsistent.
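The inconsistency can also be detected by comparing ranks; a brief numerical sketch with the matrix and right-hand side of part (c):

```python
import numpy as np

A = np.array([[1., 2., -1.], [-1., 1., 2.], [2., 1., -3.]])
b = np.array([[10.], [2.], [2.]])

# rank(A) < rank([A | b]) signals an inconsistent system.
rank_A = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.hstack([A, b]))
print(rank_A, rank_Ab)      # 2 3
print(rank_A == rank_Ab)    # False -> no solution
```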
(d)
The augmented matrix is
\[
\left(\begin{array}{cccc|c} 1&1&1&-3&1\\ 2&4&3&1&3\\ 3&6&4&-2&4 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{cccc|c} 1&1&1&-3&1\\ 0&2&1&7&1\\ 0&3&1&7&1 \end{array}\right)
\quad R_2 \leftarrow R_2-2R_1,\; R_3 \leftarrow R_3-3R_1
\;\Longrightarrow\;
\left(\begin{array}{cccc|c} 1&1&1&-3&1\\ 0&2&1&7&1\\ 0&1&0&0&0 \end{array}\right)
\quad R_3 \leftarrow R_3-R_2.
\]
Now,
\begin{align*}
x_1 + x_2 + x_3 - 3x_4 &= 1,\\
2x_2 + x_3 + 7x_4 &= 1,\\
x_2 &= 0.
\end{align*}
Back substitution gives $x_2 = 0$, $x_3 = 1 - 7x_4$ and $x_1 = 1 - x_3 + 3x_4 = 10x_4$, with $x_4$ free. Hence the system has infinitely many solutions:
\[
(x_1, x_2, x_3, x_4) = (10t,\ 0,\ 1-7t,\ t), \qquad t \in \mathbb{R}.
\]
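A symbolic cross-check of this one-parameter family (equations taken from the augmented matrix of part (d)):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
eqs = [
    sp.Eq(x1 + x2 + x3 - 3*x4, 1),
    sp.Eq(2*x1 + 4*x2 + 3*x3 + x4, 3),
    sp.Eq(3*x1 + 6*x2 + 4*x3 - 2*x4, 4),
]

# linsolve returns the solution set parametrised by the free unknown x4.
print(sp.linsolve(eqs, [x1, x2, x3, x4]))   # the set {(10*x4, 0, 1 - 7*x4, x4)}
```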
(e)
The system in matrix form is
\[
\begin{pmatrix} 1&2&-1\\ -1&1&2\\ 2&1&-3 \end{pmatrix}
\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix}
=\begin{pmatrix} 10\\ 2\\ 8 \end{pmatrix}.
\]
The augmented matrix is
\[
\left(\begin{array}{ccc|c} 1&2&-1&10\\ -1&1&2&2\\ 2&1&-3&8 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&-1&10\\ 0&3&1&12\\ 0&-3&-1&-12 \end{array}\right)
\quad R_2 \leftarrow R_2+R_1,\; R_3 \leftarrow R_3-2R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&-1&10\\ 0&3&1&12\\ 0&0&0&0 \end{array}\right)
\quad R_3 \leftarrow R_3+R_2
\]
\[
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&-1&10\\ 0&1&\tfrac13&4\\ 0&0&0&0 \end{array}\right)
\quad R_2 \leftarrow \tfrac13 R_2
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&0&-\tfrac53&2\\ 0&1&\tfrac13&4\\ 0&0&0&0 \end{array}\right)
\quad R_1 \leftarrow R_1-2R_2.
\]
Now
\begin{align*}
x_1 - \tfrac{5}{3}x_3 &= 2,\\
x_2 + \tfrac{1}{3}x_3 &= 4,
\end{align*}
with $x_3$ free, so the system has infinitely many solutions:
\[
(x_1, x_2, x_3) = \bigl(2 + \tfrac{5}{3}t,\ 4 - \tfrac{1}{3}t,\ t\bigr), \qquad t \in \mathbb{R}.
\]
sol (5)
For the system
\[
x + y + z = 1,\qquad x + 2y - z = b,\qquad 5x + 7y + az = b^2,
\]
there are three possibilities:
1. Unique solution: if the determinant of the coefficient matrix is nonzero.
2. No solution: if the determinant is zero and the system is inconsistent.
3. Many solutions: if the determinant is zero and the augmented matrix is consistent.
\[
\left(\begin{array}{ccc|c} 1&1&1&1\\ 1&2&-1&b\\ 5&7&a&b^2 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&1&1&1\\ 0&1&-2&b-1\\ 0&2&a-5&b^2-5 \end{array}\right)
\quad R_2 \leftarrow R_2-R_1,\; R_3 \leftarrow R_3-5R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&1&1&1\\ 0&1&-2&b-1\\ 0&0&a-1&b^2-2b-3 \end{array}\right)
\quad R_3 \leftarrow R_3-2R_2.
\]
Now
\begin{align*}
x + y + z &= 1,\\
y - 2z &= b - 1,\\
(a-1)z &= b^2 - 2b - 3.
\end{align*}
The determinant of the coefficient matrix is
\[
\Delta=\begin{vmatrix} 1&1&1\\ 1&2&-1\\ 5&7&a \end{vmatrix}=a-1.
\]
Therefore, everything hinges on the last equation $(a-1)z = b^2 - 2b - 3 = (b-3)(b+1)$:
- If $a \neq 1$, the system has a unique solution for every $b$.
- If $a = 1$ and $b \neq 3$, $b \neq -1$, the last equation reads $0 = (b-3)(b+1) \neq 0$, so the system has no solution.
- If $a = 1$ and $b = 3$ or $b = -1$, the last equation becomes $0 = 0$ and the system has infinitely many solutions.
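A small symbolic sketch of the three cases; the sample values a = 1, b = 2 and b = 3 are chosen here only for illustration, guided by the factorisation (b − 3)(b + 1):

```python
import sympy as sp

x, y, z, a, b = sp.symbols('x y z a b')
eqs = [sp.Eq(x + y + z, 1),
       sp.Eq(x + 2*y - z, b),
       sp.Eq(5*x + 7*y + a*z, b**2)]

A = sp.Matrix([[1, 1, 1], [1, 2, -1], [5, 7, a]])
print(sp.factor(A.det()))                                            # a - 1

# a = 1, b = 2: inconsistent, so the solution set is empty.
print(sp.linsolve([e.subs({a: 1, b: 2}) for e in eqs], [x, y, z]))   # EmptySet
# a = 1, b = 3: a one-parameter family of solutions.
print(sp.linsolve([e.subs({a: 1, b: 3}) for e in eqs], [x, y, z]))
```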
Solution 6(a):
Let V = C[a, b] be the set of all continuous functions on the interval [a, b] with the given operations
on V as follows:
1. Closure under addition: If f, g ∈ C[a, b], then f + g ∈ C[a, b] because the sum of contin-
uous functions is continuous.
3. Existence of the zero vector: The zero function f0 (x) = 0 is in C[a, b], and for f ∈ C[a, b]
and ∀ x ∈ [a, b],
(f + f0 )(x) = f (x) + f0 (x) = f (x) + 0 = f (x).
4. Existence of additive inverses: For f ∈ C[a, b] and ∀ x ∈ [a, b], the function −f ∈ C[a, b]
satisfies
(f + (−f ))(x) = f (x) − f (x) = 0 = f0 (x).
6. Closure under scalar multiplication: If f ∈ C[a, b] and λ ∈ R, then λf ∈ C[a, b], because
scalar multiplication preserves continuity.
10. Identity element of scalar multiplication: For f ∈ C[a, b] and ∀ x ∈ [a, b],
Note: To prove axioms 2, 3, 4, 5, 7, 8, 9 and 10, the properties of real numbers are used in the
intermediate steps.
Solution 6(b)
V = {A ∈ C^{n×n} | A = A†},
where A† = \bar{A}^{T} is the conjugate transpose of A, with the operations on V as:
Solution 6(c)
Let V = R[x] be the set of all polynomials with real coefficients with the given operations on V
as:
1. Closure under addition: If f, g ∈ R[x] have degrees m and n respectively, then f + g ∈ R[x]
with deg(f + g) ≤ max(m, n), because the sum of two polynomials is again a polynomial.
A detailed explanation of closure of addition:
Case 1: m < n
Let
f (x) = am xm + am−1 xm−1 + · · · + a0 ,
and
g(x) = bn xn + bn−1 xn−1 + · · · + b0 ,
where m < n.
Their sum is f(x) + g(x) = b_n x^n + · · · + (a_m + b_m)x^m + · · · + (a_0 + b_0), so
deg(f + g) = n.
Case 2: m = n
Let
f (x) = am xm + am−1 xm−1 + · · · + a0 ,
and
g(x) = bm xm + bm−1 xm−1 + · · · + b0 ,
where m = n.
Their sum is f(x) + g(x) = (a_m + b_m)x^m + · · · + (a_0 + b_0).
- If a_m + b_m ≠ 0, then deg(f + g) = m = n.
- If a_m + b_m = 0, the highest-degree terms cancel and the degree comes from a lower-order term, so
deg(f + g) < m.
Case 3: m > n
Let
f (x) = am xm + am−1 xm−1 + · · · + a0 ,
and
g(x) = bn xn + bn−1 xn−1 + · · · + b0 ,
where m > n.
Their sum is f(x) + g(x) = a_m x^m + · · · + (a_n + b_n)x^n + · · · + (a_0 + b_0), so
deg(f + g) = m.
• Let the polynomials be:
f (x) = a0 + a1 x + a2 x2 + · · · + am xm ,
g(x) = b0 + b1 x + b2 x2 + · · · + bn xn ,
h(x) = c0 + c1 x + c2 x2 + · · · + cp xp .
• Compare coefficients: for every k, (a_k + b_k) + c_k = a_k + (b_k + c_k), by associativity of addition in R.
• Thus:
(f(x) + g(x)) + h(x) = f(x) + (g(x) + h(x)).
3. Existence of the zero vector: The zero polynomial p_0(x) = 0 satisfies p_0 + f = f for all f ∈ R[x].
Commutativity of addition: Let
f(x) = a_0 + a_1 x + a_2 x² + · · · + a_m x^m,
g(x) = b_0 + b_1 x + b_2 x² + · · · + b_n x^n.
• Compare coefficients: for every k, a_k + b_k = b_k + a_k, by commutativity of addition in R.
• Thus:
f(x) + g(x) = g(x) + f(x).
Associativity of scalar multiplication: we check (λµ)f = λ(µf).
Let f (x) = a0 + a1 x + a2 x2 + · · · + an xn .
Left-hand side:
(λµ)f(x) = (λµ)a_0 + (λµ)a_1 x + · · · + (λµ)a_n x^n.
Right-hand side:
λ(µf(x)) = λ(µa_0 + µa_1 x + · · · + µa_n x^n) = (λµ)a_0 + (λµ)a_1 x + · · · + (λµ)a_n x^n.
On comparison, the coefficients agree for every power of x. Thus:
(λµ)f = λ(µf).
Distributivity of scalar multiplication over vector addition: For f, g ∈ R[x] and λ ∈ R,
λ(f + g) = λf + λg.
Left-hand side:
λ(f(x) + g(x)) = λ(a_0 + b_0) + λ(a_1 + b_1)x + · · · = (λa_0 + λb_0) + (λa_1 + λb_1)x + · · · .
Right-hand side:
λf(x) + λg(x) = (λa_0 + λa_1 x + · · ·) + (λb_0 + λb_1 x + · · ·).
Comparing coefficients, the two sides agree. Thus:
λ(f + g) = λf + λg.
Distributivity of scalar multiplication over scalar addition: For λ, µ ∈ R and f ∈ R[x],
(λ + µ)f = λf + µf.
Let f(x) = a_0 + a_1 x + a_2 x² + · · · + a_n x^n.
Left-hand side:
(λ + µ)f(x) = (λ + µ)a_0 + (λ + µ)a_1 x + · · · = (λa_0 + µa_0) + (λa_1 + µa_1)x + · · · .
Right-hand side:
λf(x) + µf(x) = (λa_0 + λa_1 x + · · ·) + (µa_0 + µa_1 x + · · ·).
Comparing coefficients, the two sides agree. Thus:
(λ + µ)f = λf + µf.
Let f (x) = a0 + a1 x + a2 x2 + · · · + an xn .
Compute:
1 · f (x) = 1 · (a0 + a1 x + a2 x2 + · · · + an xn ).
= (1 · a0 ) + (1 · a1 )x + (1 · a2 )x2 + · · · + (1 · an )xn .
Since 1 · ak = ak for all k, we have:
1 · f (x) = a0 + a1 x + a2 x2 + · · · + an xn = f (x).
Thus, V = R[x], the set of all polynomials with real coefficients, is a vector space over R.
Solution 6(d)
V = {a = {a_n}_{n=1}^∞ | a_n ∈ R, ∀ n ∈ N},
with addition and scalar multiplication defined componentwise:
(a + b) = {a_n + b_n}_{n=1}^∞,   (λ · a) = {λ a_n}_{n=1}^∞.
Associativity of addition:
((a + b) + c) = {(a_n + b_n) + c_n}_{n=1}^∞ = {a_n + (b_n + c_n)}_{n=1}^∞ = (a + (b + c)).
4. Existence of additive inverses: For a = {a_n}_{n=1}^∞ ∈ R^∞, the sequence −a = {−a_n}_{n=1}^∞ satisfies
a + (−a) = {a_n − a_n}_{n=1}^∞ = {0}_{n=1}^∞.
Commutativity of addition:
(a + b) = {a_n + b_n}_{n=1}^∞ = {b_n + a_n}_{n=1}^∞ = (b + a).
Identity element of scalar multiplication:
1 · a = {1 · a_n}_{n=1}^∞ = {a_n}_{n=1}^∞ = a.
Thus, V = R∞ , the set of all infinite sequences of real numbers, is a vector space over R.
Solution 6(e)
Let V = R+ be the set of all positive real numbers with the operations on V as follows:
• Addition: For x, y ∈ R⁺,
x + y := xy.
• Scalar multiplication: For x ∈ R⁺ and λ ∈ R,
λ · x := x^λ.
1. Closure under addition: If x, y ∈ R+ , then x + y = xy. Since the product of two positive
real numbers is positive, x + y ∈ R+ .
3. Existence of the zero vector: The multiplicative identity 1 ∈ R+ serves as the zero vector,
since for any x ∈ R+ ,
x + 1 = x · 1 = x.
4. Existence of additive inverses: For x ∈ R⁺, the inverse is x⁻¹ ∈ R⁺, since
x + x⁻¹ = x · x⁻¹ = 1 (the zero vector).
5. Commutativity of addition: For x, y ∈ R⁺,
x + y = xy = yx = y + x.
9. Distributivity of scalar multiplication over field addition: For λ, µ ∈ R and x ∈ R⁺,
(λ + µ) · x = x^{λ+µ} = x^λ · x^µ = (λ · x) + (µ · x).
10. Identity element of scalar multiplication: For x ∈ R⁺,
1 · x = x¹ = x.
Thus, V = R+ , the set of all positive real numbers, with the given operations, is a vector space
over R.
Solution 6(f)
Let V be the set of all real-valued functions defined on an open interval I that are continuous
everywhere on I except at a finite number of points, where they may be discontinuous with the
given operations on V as follows:
(f + g)(x) = f(x) + g(x),   (λ · f)(x) = λf(x), ∀x ∈ I.
3. Existence of the zero vector: The zero function f0 (x) = 0, which is continuous everywhere
on I, satisfies:
10. Identity element of scalar multiplication: (1 · f)(x) = 1 · f(x) = f(x), ∀x ∈ I.
Thus, V , the set of all real-valued functions with at most a finite number of discontinuities on I, is
a vector space over R.
Solution 6(g)
Let V = {tα : R → R | tα (x) = x + α, α ∈ R}, where tα is a translation function with the given
operations on V as follows: addition is composition, tα + tβ := tα ◦ tβ, and scalar multiplication is λ · tα := t_{λα}.
1. Closure under addition: (tα ◦ tβ)(x) = tα(x + β) = x + (β + α) = t_{α+β}(x). Since α + β ∈ R, tα ◦ tβ ∈ V.
3. Existence of the zero vector: The identity mapping t0 (x) = x acts as the zero vector, since
for any tα ∈ V :
tα ◦ t0 (x) = tα (x) = x + α, t0 ◦ tα (x) = tα (x).
tα ◦ tβ (x) = x + (α + β) = x + (β + α) = tβ ◦ tα (x).
6. Closure under scalar multiplication: If tα ∈ V and λ ∈ R, then λ · tα = t_{λα}. Since λα ∈ R, λ · tα ∈ V.
Distributivity: λ(tα ◦ tβ) = λ t_{α+β} = t_{λ(α+β)} = t_{λα+λβ} = t_{λα} ◦ t_{λβ} = (λtα) ◦ (λtβ).
Thus, V , the set of translation functions of the form tα (x) = x + α, is a vector space over R.
Solution 6(h)
Here scalar multiplication is defined by λ · (x, y) = (3λx, y). Checking the axiom (λµ)v = λ(µv):
λ(µ · (x, y)) = λ · (3µx, y) = (3λ(3µx), y) = (9λµx, y),
while
(λµ) · (x, y) = (3(λµ)x, y) = (3λµx, y).
Since 9λµx ≠ 3λµx in general, this axiom is violated, so V does not form a vector space.
Solution 6(i)
For example, take P(x) = x⁴ + x³ and Q(x) = −x⁴, both of degree 4. Their sum P(x) + Q(x) = x³ has degree 3, so the closure axiom for addition does not hold.
Hence V does not form a vector space.
Solution 6(j)
Solution 7
Let R[x] be the set of all polynomials with real coefficients. We verify whether each set S is a
subspace of R[x] by checking the following conditions:
(a) S = Rn [x]
• If f(x), g(x) ∈ Rn[x], then f(x) + g(x) is also a polynomial of degree at most n, and for λ ∈ R, λf(x) is again of degree at most n. Hence, S is a subspace.
Next, consider S = {f ∈ R[x] : f(x) = f(1 − x) for all x}:
• If f(x), g(x) ∈ S, then f(x) = f(1 − x) and g(x) = g(1 − x). For their sum:
(f + g)(x) = f(x) + g(x) = f(1 − x) + g(1 − x) = (f + g)(1 − x).
Thus, f + g ∈ S.
• If f(x) ∈ S and λ ∈ R, then:
(λf)(x) = λf(x) = λf(1 − x) = (λf)(1 − x).
Thus, λf ∈ S.
Hence, S is a subspace.
Next, consider S = {f ∈ R[x] : f(x) = f(−x) for all x}:
• If f(x), g(x) ∈ S, then f(x) = f(−x) and g(x) = g(−x). For their sum:
(f + g)(x) = f(x) + g(x) = f(−x) + g(−x) = (f + g)(−x).
Thus, f + g ∈ S.
• Similarly, (λf)(x) = λf(x) = λf(−x) = (λf)(−x) for λ ∈ R.
Thus, λf ∈ S.
Hence, S is a subspace.
Next, consider S = {f ∈ R[x] : f(1) ≥ 0}:
• Let f(x), g(x) ∈ S. Then f(1) ≥ 0 and g(1) ≥ 0. For their sum, (f + g)(1) = f(1) + g(1) ≥ 0.
Thus, f + g ∈ S.
• However, scalar multiplication fails: if f(1) > 0 and λ < 0, then (λf)(1) = λf(1) < 0, so λf ∉ S.
Therefore, S is not a subspace.
Next, consider S = {f ∈ R[x] : f′(0) + f(0) = 0}:
• The zero polynomial satisfies p′_0(0) = 0 and p_0(0) = 0, so p′_0(0) + p_0(0) = 0. Thus, p_0(x) ∈ S.
• If f(x), g(x) ∈ S, then f′(0) + f(0) = 0 and g′(0) + g(0) = 0. For their sum:
(f + g)′(0) + (f + g)(0) = (f′(0) + f(0)) + (g′(0) + g(0)) = 0.
Thus, f + g ∈ S.
• For λ ∈ R, (λf)′(0) + (λf)(0) = λ(f′(0) + f(0)) = 0.
Thus, λf ∈ S.
Hence, S is a subspace.
Next, consider S = {f ∈ R[x] : f has a root in [−1, 1]}:
• The zero polynomial satisfies p_0(x) = 0 for all x, so p_0(x) ∈ S.
• If f(x) ∈ S and g(x) ∈ S, then f(x) and g(x) each have a root in [−1, 1]. However, (f + g)(x)
may not have a root in [−1, 1]. For example, f(x) = x + 1 and g(x) = −x have roots in
[−1, 1], but f(x) + g(x) = 1 does not. Therefore, S is not a subspace.
Solution 8
Determining Subspaces of Rn
Let Rn be the vector space of n-tuples of real numbers. To determine whether a subset S is a
subspace, we verify the following conditions:
(a) S = {(x1 , x2 , . . . , xn ) ∈ Rn : xn = 0}
• The zero vector (0, 0, . . . , 0) has nth component 0, so it lies in S.
• If u, v ∈ S, then (u + v)_n = u_n + v_n = 0 + 0 = 0. Thus, u + v ∈ S.
• If u ∈ S and λ ∈ R, then (λu)_n = λu_n = λ · 0 = 0. Thus, λu ∈ S.
Hence, S is a subspace.
(b) S = {(x1 , x2 , . . . , xn ) ∈ Rn : x1 + x2 + · · · + xn = 0}
• If u, v ∈ S, then u_1 + u_2 + · · · + u_n = 0 and v_1 + v_2 + · · · + v_n = 0. For their sum:
(u_1 + v_1) + · · · + (u_n + v_n) = (u_1 + · · · + u_n) + (v_1 + · · · + v_n) = 0 + 0 = 0.
Thus, u + v ∈ S.
• If u ∈ S and λ ∈ R, then λu_1 + · · · + λu_n = λ(u_1 + · · · + u_n) = 0. Thus, λu ∈ S.
Hence, S is a subspace.
Solution 9
Let M2×2 (R) denote the vector space of all 2 × 2 real matrices. To determine whether a subset
S ⊆ M2×2 (R) is a subspace, we verify the following conditions:
(" # )
a b
(a) S = ∈ M2×2 (R) : a + b = 0
c d
" #
0 0
• The zero matrix O = satisfies 0 + 0 = 0. Thus, O ∈ S.
0 0
" # " #
′ ′
a b a b
• If A = ∈ M2×2 (R), B = ′ ′ ∈ M2×2 (R), then a + b = 0 and a′ + b′ = 0. For
c d c d
their sum: A+B,
(a + a′ ) + (b + b′ ) = (a + b) + (a′ + b′ ) = 0 + 0 = 0.
Thus, A + B ∈ S.
" #
a b
• If A = ∈ M2×2 (R) and λ ∈ R, then a + b = 0. For scalar multiplication λA:
c d
Thus, λA ∈ S.
Hence, S is a subspace.
(" # )
a b
(b) S = ∈ M2×2 (R) : a + b + c + d = 0
c d
(a + a′ ) + (b + b′ ) + (c + c′ ) + (d + d′ ) = (a + b + c + d) + (a′ + b′ + c′ + d′ ) = 0 + 0 = 0.
Thus, A + B ∈ S.
" #
a b
• If A = ∈ M2×2 (R) and λ ∈ R, then a + b + c + d = 0. For scalar multiplication λA:
c d
Thus, λA ∈ S.
Hence, S is a subspace.
(" # " # )
a b a b
(c) S = ∈ M2×2 (R) : det =0
c d c d
For example:
\[
A = \begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix},\quad B = \begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix}:\quad
\det(A) = 0,\ \det(B) = 0,\ \text{but } \det(A + B) = \det\begin{pmatrix} 1 & 0\\ 0 & 1 \end{pmatrix} = 1 \neq 0.
\]
Thus, S is not closed under addition, and hence S is not a subspace.
(" # )
a b
(d) S = ∈ M2×2 (R) : b = c = 0
c d
b + b′ = 0 + 0 = 0, c + c′ = 0 + 0 = 0.
Thus, A + B ∈ S.
• If A ∈ S and λ ∈ R, then λb = λ · 0 = 0 and λc = λ · 0 = 0.
Thus, λA ∈ S.
Hence, S is a subspace.
(e) S = {A ∈ M_{2×2}(R) : A = A^T} (symmetric matrices)
• If A, B ∈ S, then A = A^T and B = B^T. For their sum:
(A + B)^T = A^T + B^T = A + B.
Thus, A + B ∈ S.
• If A ∈ S and λ ∈ R, then A = A^T. For scalar multiplication: (λA)^T = λA^T = λA.
Thus, λA ∈ S.
Hence, S is a subspace.
Thus, A + B ∈ S.
Thus, λA ∈ S.
Hence, S is a subspace.
(" # )
a b
(g) S = ∈ M2×2 (R) : c = 0
c d
• The zero matrix O satisfies c = 0. Thus, O ∈ S.
" # " #
a b a′ b ′
• A= ∈ M2×2 (R), B = ′ ′ ∈ M2×2 (R), then c = 0 and c′ = 0. For their sum:
c d c d
c + c′ = 0 + 0 = 0.
Thus, A + B ∈ S.
• If A ∈ S and λ ∈ R, then c = 0. For scalar multiplication:
λc = λ · 0 = 0.
Thus, λA ∈ S.
Hence, S is a subspace.
(" # )
a b
(h) S = ∈ M2×2 (R) : b = 0
c d
Thus, A + B ∈ S.
λb = λ · 0 = 0.
Thus, λA ∈ S.
Hence, S is a subspace.
Solution 10
Let C[0, 1] denote the vector space of all continuous functions defined on the interval [0, 1]. To
determine whether a subset S ⊆ C[0, 1] is a subspace, we check the following conditions:
1. The zero function f0 (x) = 0, ∀x ∈ [0, 1], is in S.
Thus, f + g ∈ S.
Thus, λf ∈ S.
Hence, S is a subspace.
• The zero function f0 (x) = 0, ∀x ∈ [0, 1], satisfies f0 (0) = 0 and f0 (1) = 0. Thus, f0 ∈ S.
• If f, g ∈ S, then (f + g)(0) = f(0) + g(0) = 0 and (f + g)(1) = f(1) + g(1) = 0. Thus, f + g ∈ S.
• If f ∈ S and λ ∈ R, then (λf)(0) = λf(0) = 0 and (λf)(1) = λf(1) = 0. Thus, λf ∈ S.
Hence, S is a subspace.
• Next, consider S = {f ∈ C[0, 1] : f is differentiable on [0, 1]}. If f, g ∈ S, then f and g are differentiable on [0, 1]. The sum f + g is differentiable, since (f + g)′ = f′ + g′. Thus, f + g ∈ S.
• If f ∈ S and λ ∈ R, then λf is differentiable with (λf)′ = λf′. Thus, λf ∈ S.
Hence, S is a subspace.
Solution 11
Subspaces of R2
A subspace of the vector space R² is a subset W ⊆ R² that satisfies the following three properties:
1. W contains the zero vector.
2. W is closed under addition, i.e., for all u, v ∈ W,
u + v ∈ W.
3. W is closed under scalar multiplication, i.e., for all u ∈ W and λ ∈ R,
λu ∈ W.
• The trivial subspace: {(0, 0)}, which contains only the zero vector.
• All lines through the origin: For any nonzero vector v = (a, b) ∈ R2 , the set:
{tv | t ∈ R} = {t(a, b) | t ∈ R}
forms a line through the origin.
• The whole space R² itself.
Hence the subspaces of R² are: {(0, 0)}, all lines through the origin, and R².
Subspaces of R3
Similarly, a subspace of R3 satisfies the same three properties (contains the zero vector, closed
under addition, and closed under scalar multiplication). The subspaces of R3 are:
• The trivial subspace: {(0, 0, 0)}, which contains only the zero vector.
• All lines through the origin: For any nonzero vector v = (a, b, c) ∈ R3 , the set:
{tv | t ∈ R} = {t(a, b, c) | t ∈ R}
• All planes through the origin: For any two linearly independent vectors u, v ∈ R3 , the set:
{su + tv | s, t ∈ R}
forms a plane through the origin.
• The whole space R³ itself.
Hence the subspaces of R³ are:
{(0, 0, 0)}, all lines through the origin, all planes through the origin, R³.
Solution of 12(a)
Solution of 12(b)
Statement: If S is a linearly dependent set, then each vector in S is a linear combination of the other vectors in S.
Counterexample: Let S = {(1, 0), (0, 0)}. The set is linearly dependent and (0, 0) = 0 · (1, 0), but there is no α such that α(0, 0) = (1, 0).
So the given statement is false.
Solution of 12(c)
Claim: If S is linearly independent (L.I.), then every subset A ⊂ S is also L.I.
Suppose A is not L.I. Then there exists v ∈ A that can be written as a linear combination of the other elements of A:
v = α_1 v_1 + α_2 v_2 + · · · + α_n v_n.
Since v, v_1, . . . , v_n ∈ A ⊆ S, the same relation holds among elements of S, so S is linearly dependent. This contradicts the assumption that S is L.I.
Therefore subsets of linearly independent sets are linearly independent (L.I.).
So the given statement is true.
Solution of 12(d)
A subset of a linearly dependent set may or may not be linearly dependent.
E.g.: S = {(1, 0), (0, 1), (0, 0)} is L.D., but its subset A = {(1, 0), (0, 1)} ⊂ S is L.I.
So the given statement is false.
Solution of 13(a)
Now
a + b = 0,
2a − b − c = 0,
3a + 2c = 0,
b − c = 0.
From the first and last equations, a = −b and b = c, so a = −c; the third equation gives a = −2c/3.
Hence −c = −2c/3, which forces c = 0, and then b = 0 and a = 0.
Therefore the given set is linearly independent.
Solution of 13(b)
Here
a + 2b + 2c = 0,
2a + b + 2c = 0,
2a + 2b + c = 0.
In matrix form:
\[
\begin{pmatrix} 1&2&2\\ 2&1&2\\ 2&2&1 \end{pmatrix}
\begin{pmatrix} a\\ b\\ c \end{pmatrix}
=\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}.
\]
Row reducing the augmented matrix:
\[
\left(\begin{array}{ccc|c} 1&2&2&0\\ 2&1&2&0\\ 2&2&1&0 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&2&0\\ 0&-3&-2&0\\ 0&-2&-3&0 \end{array}\right)
\quad R_2 \leftarrow R_2-2R_1,\; R_3 \leftarrow R_3-2R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&2&0\\ 0&-3&-2&0\\ 0&0&-\tfrac53&0 \end{array}\right)
\quad R_3 \leftarrow R_3-\tfrac23 R_2.
\]
Back substitution:
−(5/3)c = 0 ⟹ c = 0,   −3b − 2c = 0 ⟹ b = 0,   a + 2b + 2c = 0 ⟹ a = 0.
So, A is L.I.
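Numerically, independence is equivalent to the 3 × 3 coefficient matrix having full rank; a minimal check:

```python
import numpy as np

M = np.array([[1., 2., 2.],
              [2., 1., 2.],
              [2., 2., 1.]])

# Full rank (3) means the homogeneous system M @ (a, b, c) = 0 has only the
# trivial solution, i.e. the three vectors are linearly independent.
print(np.linalg.matrix_rank(M))   # 3
print(np.linalg.det(M))           # approx. 5 (nonzero)
```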
Solution of 13(c)
(" # " #)
1 −3 −2 6
X= , in ∈ M2×2 (R)
−2 4 4 −8
" # " #
1 −3 −2 6
let A = and B =
−2 4 4 −8
or
now
47
a − 2b = 0
−3a + 6b = 0
−2a + 4b = 0
4a − 8b = 0
=⇒ a = 2b
So
2A + B = 0
B = −2A
so X is L.D.
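A one-line numerical confirmation of the dependence relation found above:

```python
import numpy as np

A = np.array([[1, -3], [-2, 4]])
B = np.array([[-2, 6], [4, -8]])

# B is a scalar multiple of A (B = -2A), so {A, B} is linearly dependent.
print(np.array_equal(B, -2 * A))   # True
```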
Solution of 14
Let u, v be distinct vectors in a vector space V over F. We have to show that {u, v} is linearly dependent if and only if one of u, v is a scalar multiple of the other.
(⟹) Suppose {u, v} is L.D. Then there exist scalars a, b ∈ F, not both zero, such that au + bv = 0 (the zero vector).
If a ≠ 0, then
au + bv = 0 ⟹ au = −bv ⟹ u = a⁻¹(−b)v = (−b/a)v,
so u is a multiple of v. Similarly, if b ≠ 0, then
au + bv = 0 ⟹ bv = −au ⟹ v = b⁻¹(−a)u = (−a/b)u,
so v is a multiple of u.
(⟸) Conversely, if u = av for some scalar a, then u − av = 0 is a non-trivial linear relation (the coefficient of u is 1 ≠ 0), so {u, v} is L.D. Likewise, if v = au, then v − au = 0 and again {u, v} is L.D.
Solution of 15
Let {u, v, w} be L.I. in a real vector space V. We have to show that {λu, λv, λw}, {u + λv, v, w},
{u + v, u + w, v + w} and {u + v + w, v + w, w} are L.I. in V, and that {u + λv, v + λw, w + λu} may not be
linearly independent in V, where λ ∈ R and λ ≠ 0.
Now
Let us assume that {λu, λv, λw} is not L.I. Then there exist scalars a, b, c (not all zero) such that
a(λu) + b(λv) + c(λw) = 0, i.e. (aλ)u + (bλ)v + (cλ)w = 0,
with aλ, bλ, cλ ∈ R not all zero (since λ ≠ 0). But u, v and w are L.I., which is a contradiction. Hence {λu, λv, λw} is L.I.
Next
Let us assume that {u + λv, v, w} is not L.I. Then there exist scalars a, b, c (not all zero) such that
a(u + λv) + bv + cw = 0, i.e. au + (aλ + b)v + cw = 0.
Writing b′ = aλ + b ∈ R, this is a non-trivial relation au + b′v + cw = 0 (if a = 0 then b′ = b, and one of b, c is nonzero). But u, v and w are L.I., which is a contradiction. Hence {u + λv, v, w} is L.I.
Again
Let us assume that {u + v + w, v + w, w} is not L.I. Then there exist scalars a, b, c (not all zero) such that
a(u + v + w) + b(v + w) + cw = 0, i.e. au + (a + b)v + (a + b + c)w = 0.
If the coefficients a, a + b, a + b + c were all zero, then a = b = c = 0, a contradiction; so this is a non-trivial relation. But u, v and w are L.I., which is a contradiction. Hence {u + v + w, v + w, w} is L.I.
Similarly, if a(u + v) + b(u + w) + c(v + w) = 0, then (a + b)u + (a + c)v + (b + c)w = 0, and linear independence of u, v, w gives a + b = a + c = b + c = 0, hence a = b = c = 0. So {u + v, u + w, v + w} is also L.I.
Finally, consider {u + λv, v + λw, w + λu}. Suppose
a(u + λv) + b(v + λw) + c(w + λu) = 0, i.e. (a + cλ)u + (b + aλ)v + (c + bλ)w = 0.
Since {u, v, w} is L.I., this forces
a + cλ = 0,   b + aλ = 0,   c + bλ = 0,
that is,
\[
\begin{pmatrix} 1&0&\lambda\\ \lambda&1&0\\ 0&\lambda&1 \end{pmatrix}
\begin{pmatrix} a\\ b\\ c \end{pmatrix}
=\begin{pmatrix} 0\\ 0\\ 0 \end{pmatrix}
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&0&\lambda&0\\ 0&1&-\lambda^2&0\\ 0&0&1+\lambda^3&0 \end{array}\right).
\]
If 1 + λ³ ≠ 0, the only solution is a = b = c = 0 and the set is L.I. But if 1 + λ³ = 0, i.e. λ = −1, there are non-trivial solutions (a = b = c), and indeed
(u − v) + (v − w) + (w − u) = 0,
so for λ = −1 the set {u + λv, v + λw, w + λu} is L.D.
Hence, {u + λv, v + λw, w + λu} may not be L.I. in V.
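A concrete illustration of the last case: here u, v, w are taken, purely as an example, to be the standard basis of R³ (any linearly independent triple would do):

```python
import numpy as np

# u, v, w chosen as the standard basis vectors of R^3 for illustration.
u, v, w = np.eye(3)

for lam in (-1.0, 2.0):
    # Columns are u + lam*v, v + lam*w, w + lam*u.
    M = np.column_stack([u + lam * v, v + lam * w, w + lam * u])
    print(lam, np.linalg.matrix_rank(M))   # rank 2 for lam = -1 (dependent), 3 otherwise
```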
Solution of 16
For
\[
A = \begin{pmatrix} 1 & 2 & 5\\ 3 & 0 & 7\\ -1 & 4 & 3 \end{pmatrix},
\]
examine whether (1, 1, 1) and (1, −1, 1) are in (a) the row space of A, (b) the column space of A.
(a) Row space. Let a(1, 2, 5) + b(3, 0, 7) + c(−1, 4, 3) = (1, 1, 1), so
a + 3b − c = 1,
2a + 4c = 1,
5a + 7b + 3c = 1.
Now
\[
\left(\begin{array}{ccc|c} 1&3&-1&1\\ 2&0&4&1\\ 5&7&3&1 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&3&-1&1\\ 0&-6&6&-1\\ 0&-8&8&-4 \end{array}\right)
\quad R_2 \leftarrow R_2-2R_1,\; R_3 \leftarrow R_3-5R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&3&-1&1\\ 0&-6&6&-1\\ 0&0&0&-\tfrac83 \end{array}\right)
\quad R_3 \leftarrow R_3-\tfrac43 R_2.
\]
⟹ rank of the coefficient matrix ≠ rank of the augmented matrix
⟹ no solution.
So (1, 1, 1) is not in the row space of A.
Here
Let a(1, 2, 5) + b(3, 0, 7) + c(−1, 4, 3) = (1, −1, 1), so
a + 3b − c = 1,
2a + 4c = −1,
5a + 7b + 3c = 1.
Now
\[
\left(\begin{array}{ccc|c} 1&3&-1&1\\ 2&0&4&-1\\ 5&7&3&1 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&3&-1&1\\ 0&-6&6&-3\\ 0&-8&8&-4 \end{array}\right)
\quad R_2 \leftarrow R_2-2R_1,\; R_3 \leftarrow R_3-5R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&3&-1&1\\ 0&-6&6&-3\\ 0&0&0&0 \end{array}\right)
\quad R_3 \leftarrow R_3-\tfrac43 R_2.
\]
The system is consistent:
−6b + 6c = −3 ⟹ b = (2c + 1)/2, and a + 3b − c = 1 ⟹ a = −\tfrac12 − 2c.
Let c = 0; then b = \tfrac12 and a = −\tfrac12, so
\[
-\tfrac12 (1, 2, 5) + \tfrac12 (3, 0, 7) = (1, -1, 1).
\]
Hence (1, −1, 1) is in the row space of A.
(b) Now for the column space of A.
Let a(1, 3, −1) + b(2, 0, 4) + c(5, 7, 3) = (1, −1, 1), so
a + 2b + 5c = 1,
3a + 7c = −1,
−a + 4b + 3c = 1.
Now
\[
\left(\begin{array}{ccc|c} 1&2&5&1\\ 3&0&7&-1\\ -1&4&3&1 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&5&1\\ 0&-6&-8&-4\\ 0&6&8&2 \end{array}\right)
\quad R_2 \leftarrow R_2-3R_1,\; R_3 \leftarrow R_3+R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&5&1\\ 0&-6&-8&-4\\ 0&0&0&-2 \end{array}\right)
\quad R_3 \leftarrow R_3+R_2.
\]
⟹ rank of the coefficient matrix ≠ rank of the augmented matrix
⟹ no solution.
So (1, −1, 1) is not in the column space of A.
Next
Let a(1, 3, −1) + b(2, 0, 4) + c(5, 7, 3) = (1, 1, 1), so
a + 2b + 5c = 1,
3a + 7c = 1,
−a + 4b + 3c = 1.
Now
\[
\left(\begin{array}{ccc|c} 1&2&5&1\\ 3&0&7&1\\ -1&4&3&1 \end{array}\right)
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&5&1\\ 0&-6&-8&-2\\ 0&6&8&2 \end{array}\right)
\quad R_2 \leftarrow R_2-3R_1,\; R_3 \leftarrow R_3+R_1
\;\Longrightarrow\;
\left(\begin{array}{ccc|c} 1&2&5&1\\ 0&-6&-8&-2\\ 0&0&0&0 \end{array}\right)
\quad R_3 \leftarrow R_3+R_2.
\]
The system is consistent:
−6b − 8c = −2 ⟹ b = (1 − 4c)/3, and a + 2b + 5c = 1 ⟹ a = (1 − 7c)/3.
Let c = 0; then b = \tfrac13 and a = \tfrac13, so
\[
\tfrac13 (1, 3, -1) + \tfrac13 (2, 0, 4) = (1, 1, 1).
\]
Hence (1, 1, 1) is in the column space of A.
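Both parts can be cross-checked by comparing ranks: a vector v lies in the row space of A exactly when appending v as an extra row does not increase the rank, and in the column space when appending it as an extra column does not. A short NumPy sketch:

```python
import numpy as np

A = np.array([[1., 2., 5.], [3., 0., 7.], [-1., 4., 3.]])
r = np.linalg.matrix_rank(A)   # rank of A is 2

for v in (np.array([1., 1., 1.]), np.array([1., -1., 1.])):
    in_row = np.linalg.matrix_rank(np.vstack([A, v])) == r           # unchanged rank -> v in row space
    in_col = np.linalg.matrix_rank(np.hstack([A, v[:, None]])) == r  # unchanged rank -> v in column space
    print(v, in_row, in_col)
# Expected: (1, 1, 1) -> row: False, column: True;  (1, -1, 1) -> row: True, column: False
```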