Linear Algebra Tutorial on Independence and Bases

The document is a tutorial for a Linear Algebra with Differential Equations course, focusing on determining linear independence and bases for sets of vectors in R3. It includes exercises on finding bases for solution spaces of homogeneous systems and performing transformations in R3. Additionally, it covers Gaussian elimination and least squares solutions for linear systems.

Tutorial 3

MA1513 – Linear Algebra with Differential Equations

Tutor: Christian Go | [Link]@[Link]

1. Determine whether the following sets are (i) linearly independent, and
(ii) bases for R3 .

(a) 𝑆 1 = {(1, 0, −1) , (−1, 2, 3)}


(b) 𝑆 2 = {(1, 0, −1) , (−1, 2, 3) , (0, 3, 0)}
(c) 𝑆 3 = {(1, 0, −1) , (−1, 2, 3) , (0, 3, 3)}
(d) 𝑆 4 = {(1, 0, −1) , (−1, 2, 3) , (0, 3, 0) , (1, −1, 1)}

To begin, recall that a basis for R3 is a set of three linearly independent vectors or, equivalently, a set of three vectors that span R3.

(a) The two vectors in the set are not scalar multiples and so 𝑆 1 is
linearly independent. 𝑆 1 is not a basis for R3 since it only contains
two vectors.
(b) We consider the vector equation
𝑐 1 (1, 0, −1) + 𝑐 2 (−1, 2, 3) + 𝑐 3 (0, 3, 0) = 0.
Then, 𝑆 2 is linearly independent if and only if the above vector
equation has only the trivial solution. Row-reducing,

    (  1 -1  0 | 0 )   R3+R1     ( 1 -1  0 | 0 )
    (  0  2  3 | 0 )  -------->  ( 0  2  3 | 0 )
    ( -1  3  0 | 0 )   R3-R2     ( 0  0 -3 | 0 )
Hence, the vector equation has only the trivial solution, and S2 is linearly independent. Since S2 is a set of three linearly independent vectors in R3, it must span R3, and is therefore a basis for R3.
(c) We consider the vector equation
𝑐 1 (1, 0, −1) + 𝑐 2 (−1, 2, 3) + 𝑐 3 (0, 3, 3) = 0.
Notice that we may write the vector equation as a matrix equation,

    (  1 -1  0 ) ( c1 )   ( 0 )
    (  0  2  3 ) ( c2 ) = ( 0 ) .
    ( -1  3  3 ) ( c3 )   ( 0 )
The homogeneous system will have only the trivial solution if and
only if the coefficient matrix is invertible. We check its determinant:
    |  1 -1  0 |
    |  0  2  3 |  =  1*(2*3 - 3*3) - (-1)*(0*3 - 3*(-1)) + 0*(0*3 - 2*(-1))  =  -3 + 3 + 0  =  0.
    | -1  3  3 |


Since the determinant is zero, the homogeneous system has non-
trivial solutions; thus, 𝑆 3 is linearly dependent, and 𝑆 3 is not a basis
for R3 .
(d) 𝑆 4 is a set of four vectors in R3 : since any more than 3 vectors in
R3 cannot be linearly independent, 𝑆 4 must be linearly dependent,
and is hence not a basis for R3 .
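The conclusions above can be spot-checked numerically. The following NumPy sketch (not part of the original tutorial, whose code is MATLAB) tests S2 and S3 via the determinant and S4 via the rank:

```python
import numpy as np

# Rows of the inner arrays are the vectors; .T puts them in the columns.
# A set of 3 vectors is a basis for R^3 iff this 3x3 matrix is invertible.
S2 = np.array([[1, 0, -1], [-1, 2, 3], [0, 3, 0]]).T
S3 = np.array([[1, 0, -1], [-1, 2, 3], [0, 3, 3]]).T

print(np.linalg.det(S2))  # nonzero -> S2 is independent, hence a basis
print(np.linalg.det(S3))  # zero -> S3 is dependent

# S4 has four vectors in R^3, so its rank (at most 3) settles dependence.
S4 = np.array([[1, 0, -1], [-1, 2, 3], [0, 3, 0], [1, -1, 1]]).T
print(np.linalg.matrix_rank(S4))  # 3 < 4 vectors -> dependent
```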

2. Find a basis for and the dimension of the solution space of each of the
following homogeneous systems.
(a)  x1 + 3x2 - x3 + 2x4 = 0
         - 3x2 + x3      = 0

(b)  x1 + 3x2 - x3 + 2x4 = 0
         - 3x2 + x3      = 0
     x1             - x4 = 0

(a) Setting up the augmented matrix of the homogeneous system,


    ( 1  3 -1  2 | 0 )   R1+R2       ( 1  0  0    2 | 0 )
    ( 0 -3  1  0 | 0 )  ---------->  ( 0  1 -1/3  0 | 0 )
                         -(1/3)R2

Assigning arbitrary parameters to 𝑥 3 and 𝑥 4 , say 𝑥 3 = 𝑠 and 𝑥 4 = 𝑡,


we have
    x1 = -2t,   x2 = (1/3)s,   x3 = s,   x4 = t,   for s, t ∈ R.
Then,
    ( x1 )       (  0  )       ( -2 )
    ( x2 )  =  s ( 1/3 )  +  t (  0 )  ,   for s, t ∈ R.
    ( x3 )       (  1  )       (  0 )
    ( x4 )       (  0  )       (  1 )
Thus, the set {(0, 1/3, 1, 0) , (−2, 0, 0, 1)} is a basis for the solution
space, and the dimension of the solution space is 2.
(b) Setting up the augmented matrix of the homogeneous system,

    ( 1  3 -1  2 | 0 )   R3-R1, R3-R2, R1+R2,       ( 1  0  0    0 | 0 )
    ( 0 -3  1  0 | 0 )  ------------------------->  ( 0  1 -1/3  0 | 0 )
    ( 1  0  0 -1 | 0 )   -(1/3)R2, -(1/3)R3, R1-2R3 ( 0  0  0    1 | 0 )
Assigning an arbitrary parameter 𝑥 3 = 𝑡, we have

    ( x1 )     (  0  )       (  0  )
    ( x2 )  =  ( t/3 )  =  t ( 1/3 )  ,   for t ∈ R.
    ( x3 )     (  t  )       (  1  )
    ( x4 )     (  0  )       (  0  )
Thus, {(0, 1/3, 1, 0)} is a basis for the solution space, and the dimen-
sion of the solution space is 1.
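As a numerical cross-check (not in the original tutorial, and in NumPy rather than MATLAB), we can verify that the claimed basis vectors actually solve the corresponding homogeneous systems:

```python
import numpy as np

# Coefficient matrices of the two homogeneous systems.
A_a = np.array([[1, 3, -1, 2],
                [0, -3, 1, 0]])
A_b = np.array([[1, 3, -1, 2],
                [0, -3, 1, 0],
                [1, 0, 0, -1]])

# Claimed basis vectors for the solution spaces.
v1 = np.array([0, 1/3, 1, 0])
v2 = np.array([-2, 0, 0, 1])

print(A_a @ v1, A_a @ v2)  # both zero: v1 and v2 solve system (a)
print(A_b @ v1)            # zero: v1 solves system (b)
print(A_b @ v2)            # nonzero: v2 is not in the smaller solution space
```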

3. The regular coordinate axes for the xyz-space are rotated about the z-axis through 45°, counterclockwise when viewed from the positive z-axis.

(a) Find a basis 𝑆 consisting of unit vectors that determine the new
coordinate axes.
(b) State the coordinates of the vector 𝒗 = (1, 1, 1) relative to the new
axes.
(c) Find a matrix 𝑴 such that 𝑴𝒗 = (𝒗)𝑆 .

Since we rotate about the z-axis, it remains fixed. To visualize what happens on the xy-plane, we take a top view of xyz-space, so that the z-axis points out of the page. [Margin figure: top view showing the new x'- and y'-axes, each at 45° from the x- and y-axes.]

(a) To find a basis that determines the new coordinate axes, it suffices to consider how the standard basis vectors of R3 are transformed by the rotation. Observe that while (0, 0, 1) is fixed,

    (1, 0, 0) ↦ (√2/2, √2/2, 0),    (0, 1, 0) ↦ (-√2/2, √2/2, 0).

Note that the unit vector along the 𝑧-axis remains unchanged by the
rotation. Hence, we have a basis
    S = { (√2/2, √2/2, 0),  (-√2/2, √2/2, 0),  (0, 0, 1) }.

(b) We wish to express 𝒗 in terms of the vectors in 𝑆; that is,


    x (√2/2, √2/2, 0) + y (-√2/2, √2/2, 0) + z (0, 0, 1) = (1, 1, 1).
Solving this system,
    ( √2/2  -√2/2  0 | 1 )            ( 1  0  0 | √2 )
    ( √2/2   √2/2  0 | 1 )  --GJE-->  ( 0  1  0 | 0  )
    (  0      0    1 | 1 )            ( 0  0  1 | 1  )

so (x, y, z) = (√2, 0, 1).
Hence, (v)_S = (√2, 0, 1).
(c) Observe that our linear combination in the previous part can be
expressed as the matrix product 𝑨𝒙 = 𝒃, where
    ( √2/2  -√2/2  0 ) ( √2 )   ( 1 )
    ( √2/2   √2/2  0 ) ( 0  ) = ( 1 ) .
    (  0      0    1 ) ( 1  )   ( 1 )

Since the columns of A form a basis, it follows that A is a square matrix whose columns are linearly independent; hence, A must be invertible. Then,
    ( √2 )   ( √2/2  -√2/2  0 )^(-1) ( 1 )   (  √2/2  √2/2  0 ) ( 1 )
    ( 0  ) = ( √2/2   √2/2  0 )      ( 1 ) = ( -√2/2  √2/2  0 ) ( 1 ) .
    ( 1  )   (  0      0    1 )      ( 1 )   (   0     0    1 ) ( 1 )

Hence, we may take M = A^(-1), the rightmost matrix above. (Indeed, since A is a rotation matrix, it is orthogonal, so A^(-1) = A^T.)
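A quick NumPy sketch (not part of the original tutorial) confirms both the coordinates (v)_S and the orthogonality shortcut M = A^T:

```python
import numpy as np

t = np.pi / 4  # 45 degrees
# Columns of A are the rotated basis vectors, i.e. the set S.
A = np.array([[np.cos(t), -np.sin(t), 0],
              [np.sin(t),  np.cos(t), 0],
              [0,          0,         1]])

v = np.array([1.0, 1.0, 1.0])
M = np.linalg.inv(A)        # M v gives the coordinates of v relative to S
print(M @ v)                # (sqrt(2), 0, 1)
print(np.allclose(M, A.T))  # True: a rotation matrix is orthogonal
```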

4. Let
        (  1 -1  0 )         ( x )         ( 1 )
    A = (  0  1 -1 ) ,   x = ( y ) ,   b = ( 1 ) .
        ( -1  0  1 )         ( z )         ( 1 )
        (  1  1  1 )                       ( 1 )
(a) Show that the linear system 𝑨𝒙 = 𝒃 has no solution.
(b) Find the least squares solution to 𝑨𝒙 = 𝒃.
(c) Find the projection of 𝒃 onto span {(1, 0, −1, 1) , (−1, 1, 0, 1) , (0, −1, 1, 1)} .
(d) Find a vector that is orthogonal to all three vectors (1, 0, −1, 1) , (−1, 1, 0, 1) , (0, −1, 1, 1) .

(a) Performing Gaussian Elimination on (𝑨 | 𝒃) ,

    (  1 -1  0 | 1 )   R3+R1, R4-R1,      ( 1 -1  0 |  1 )
    (  0  1 -1 | 1 )   R3+R2, R4-2R2,     ( 0  1 -1 |  1 )
    ( -1  0  1 | 1 )  ----------------->  ( 0  0  3 | -2 )
    (  1  1  1 | 1 )   R3<->R4, (1/3)R4   ( 0  0  0 |  1 )
Observe that the last row corresponds to the equation 0 = 1, imply-
ing that the system is inconsistent and 𝑨𝒙 = 𝒃 has no solution.
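The same conclusion can be reached by comparing ranks; the following NumPy sketch (an addition, not the tutorial's own MATLAB code) checks that rank(A) < rank(A | b):

```python
import numpy as np

A = np.array([[1, -1, 0],
              [0, 1, -1],
              [-1, 0, 1],
              [1, 1, 1]])
b = np.array([1, 1, 1, 1])

# A linear system is consistent iff rank(A) == rank of the augmented matrix.
aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))  # 3 vs 4: inconsistent
```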
(b) The least squares solution to an inconsistent system is simply the
solution to the consistent system 𝑨𝑇 𝑨𝒙 = 𝑨𝑇 𝒃. Hence,

Computing A^T A and A^T b,

    ( 3 0 0 ) ( x )   ( 1 )
    ( 0 3 0 ) ( y ) = ( 1 ) ,
    ( 0 0 3 ) ( z )   ( 1 )
and hence, we have the solution 𝑥 = 𝑦 = 𝑧 = 1/3.
(c) Observe that the vectors in the linear span form the columns of the matrix A; that is, this linear span is precisely the column space of A. Hence, the projection of b onto the column space of A is simply p = Au, where u is a least squares solution. Then,

    p = A u = A (1/3, 1/3, 1/3)^T = (0, 0, 0, 1)^T .

(d) Since 𝒑 is the projection of 𝒃 onto the span of the three vectors,
their difference must be orthogonal to the span of those vectors (and
hence the vectors themselves):

    b - p = (1, 1, 1, 1)^T - (0, 0, 0, 1)^T = (1, 1, 1, 0)^T .
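Parts (b)-(d) can all be verified in one NumPy sketch (not part of the original tutorial): the least squares solution, the projection, and the orthogonality of b - p to the columns of A.

```python
import numpy as np

A = np.array([[1, -1, 0],
              [0, 1, -1],
              [-1, 0, 1],
              [1, 1, 1]])
b = np.array([1.0, 1.0, 1.0, 1.0])

u, *_ = np.linalg.lstsq(A, b, rcond=None)  # least squares solution
p = A @ u                                   # projection of b onto col(A)
r = b - p                                   # component orthogonal to col(A)

print(u)        # approximately [1/3, 1/3, 1/3]
print(p)        # approximately [0, 0, 0, 1]
print(r)        # approximately [1, 1, 1, 0]
print(A.T @ r)  # zero vector: r is orthogonal to all three columns
```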

5. To measure the takeoff performance of an airplane, the horizontal position of the plane was measured every second, from t = 0 to t = 12. The positions (in meters) are given in the table.

    t  |   y
    ---+------
    0  |   0
    1  |   2.9
    2  |   9.8
    3  |  20.1
    4  |  34.3
    5  |  52.8
    6  |  74.0
    7  |  98.5
    8  | 126.7
    9  | 157.2
    10 | 190.3
    11 | 228.6
    12 | 269.0

(a) Find the least squares cubic curve y = a + bt + ct^2 + dt^3 for this data set.
(b) Use the result from the previous part to estimate the velocity of the plane when t = 4.5 seconds.

(a) We wish to find the curve y = a + bt + ct^2 + dt^3 that best fits the 13 data points (t, y). Notice that this yields a linear system with 13 equations (one for each data point), in the unknowns a, b, c, d.
Let us write the system in the form Tx = y, where each row in the matrix T corresponds to the coefficients 1, t, t^2, t^3 of one linear equation, x is the vector of variables, and y is the vector of constants; that is, the measured positions at each time t.
To begin, notice that the first column of T consists of all ones, while the second column of T consists of the numbers 0 through 12.
>> t1 = ones(13,1)
>> t2 = [0:12]'
The third and fourth columns of T are precisely the squares and cubes of the numbers 0 through 12. We can use the ".^" operator to compute element-wise powers of the vector t2.
>> t3 = t2.^2
>> t4 = t2.^3
Hence, we have the matrix 𝑻 , as well as the column matrix 𝒚.
>> T = [t1 t2 t3 t4]
>> y = [0 2.9 9.8 20.1 34.3 52.8 74.0 98.5 126.7 157.2 190.3 228.6 269.0]'
To find the least squares solution to this linear system, it suffices to
solve the normal equation 𝑻 𝑇 𝑻 𝒙 = 𝑻 𝑇 𝒚.
>> rref([T'*T T'*y])

    rref( T^T T | T^T y ) = ( 1 0 0 0 | -0.1758 )       a ≈ -0.1758
                            ( 0 1 0 0 |  1.1716 )       b ≈  1.1716
                            ( 0 0 1 0 |  1.9453 )  =>   c ≈  1.9453
                            ( 0 0 0 1 | -0.0147 )       d ≈ -0.0147
We thus have a unique least squares solution, and the cubic curve of
best fit is given by

𝑦 = −0.1758 + 1.1716𝑡 + 1.9453𝑡 2 − 0.0147𝑡 3 .

(b) The velocity function v(t) of the plane is given by the first derivative of its position function; hence,

    v(t) = y'(t) = 1.1716 + 2(1.9453) t - 3(0.0147) t^2 .

Thus, at t = 4.5 s, the velocity is approximately v(4.5) ≈ 17.7863 meters per second.

6. For each of the following matrices

        (  1  2  0 )         ( 2  1  4  1  2 )         (  1  4  5  8 )
        (  0  1  1 )         ( 4  2  2  3  2 )         ( -1  4  3  0 )
    A = ( -1  3  6 ) ,   B = ( 2  1 -2  2  0 ) ,   C = (  2  0  2  1 ) ,
        (  2  1  0 )         ( 6  3  6  4  4 )
        (  3  1 -1 )
find a basis for the row space and a basis for the column space; find
a basis for the nullspace; and find the nullity of the matrix. For each
matrix, verify the Dimension Theorem for Matrices.
We have previously found the reduced row-echelon forms of these matrices.
              ( 1 0 0 )
              ( 0 1 0 )
    rref(A) = ( 0 0 1 )
              ( 0 0 0 )
              ( 0 0 0 )
Consider the matrix 𝑨, which we have established has rank 3.

• By considering the reduced row-echelon form of A, we find that the row space of A is given by the basis R_A = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.
• The pivot columns in rref (𝑨) correspond to linearly independent
columns in 𝑨. Hence, a basis for the column space of 𝑨 is given by


    C_A = { (1, 0, -1, 2, 3)^T ,  (2, 1, 3, 1, 1)^T ,  (0, 1, 6, 0, -1)^T } .
• The nullspace of 𝑨 is the solution space of the homogeneous system
𝑨𝒙 = 0. The system only has the trivial solution, and the basis for
the nullspace is the empty set 𝑁𝐴 = {} .
• The nullity is the dimension of the nullspace: nullity (𝑨) = 0.
• Indeed, the sum of the rank and nullity of 𝑨 is 3 + 0 = 3, equal to the
number of columns of 𝑨.
              ( 1  1/2  0   5/6  1/3 )
    rref(B) = ( 0   0   1  -1/6  1/3 )
              ( 0   0   0    0    0  )
              ( 0   0   0    0    0  )
Consider the matrix 𝑩, which we have established has rank 2.

• By considering the reduced row-echelon form of B, we find that the row space of B is given by the basis R_B = {(1, 1/2, 0, 5/6, 1/3), (0, 0, 1, -1/6, 1/3)}.

• The pivot columns in rref(B) correspond to linearly independent columns in B. Hence, a basis for the column space of B is given by

    C_B = { (2, 4, 2, 6)^T ,  (4, 2, -2, 6)^T } .

• The nullspace of B is the solution space of the homogeneous system Bx = 0. Assigning arbitrary parameters to the non-pivot columns, the reduced row-echelon form of B gives us the solution

    ( x1 )       ( -1/2 )       ( -5/6 )       ( -1/3 )
    ( x2 )       (   1  )       (   0  )       (   0  )
    ( x3 )  =  r (   0  )  +  s (  1/6 )  +  t ( -1/3 )  ,   for r, s, t ∈ R.
    ( x4 )       (   0  )       (   1  )       (   0  )
    ( x5 )       (   0  )       (   0  )       (   1  )
Thus, a basis for the nullspace is given by

    N_B = { (-1/2, 1, 0, 0, 0)^T ,  (-5/6, 0, 1/6, 1, 0)^T ,  (-1/3, 0, -1/3, 0, 1)^T } .

• The dimension of the nullspace of B is nullity(B) = 3.
• Indeed, the sum of the rank and nullity of B is 2 + 3 = 5, equal to the number of columns of B.

              ( 1 0 1 0 )
    rref(C) = ( 0 1 1 0 )
              ( 0 0 0 1 )
Consider the matrix 𝑪, which we have established has rank 3.

• By considering the reduced row-echelon form of C, we find that the row space of C is given by the basis R_C = {(1, 0, 1, 0), (0, 1, 1, 0), (0, 0, 0, 1)}.
• The pivot columns in rref(C) correspond to linearly independent columns in C. Here the pivots lie in the first, second, and fourth columns, so a basis for the column space of C is given by

    C_C = { (1, -1, 2)^T ,  (4, 4, 0)^T ,  (8, 0, 1)^T } .
• The nullspace of 𝑪 is the solution space of the homogeneous system
𝑪𝒙 = 0. Assigning arbitrary parameters to non-pivot columns, the
reduced row-echelon form of 𝑪 gives us the solution

    ( x1 )       ( -1 )
    ( x2 )  =  t ( -1 )  ,   for t ∈ R,   so   N_C = { (-1, -1, 1, 0)^T } .
    ( x3 )       (  1 )
    ( x4 )       (  0 )

• The dimension of the nullspace of C is nullity(C) = 1.
• Indeed, the sum of the rank and nullity of C is 3 + 1 = 4, equal to the number of columns of C.
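The Dimension Theorem checks for all three matrices can be verified at once; this NumPy sketch (an addition, not part of the tutorial) recomputes each rank and nullity:

```python
import numpy as np

# The three matrices of the exercise.
A = np.array([[1, 2, 0], [0, 1, 1], [-1, 3, 6], [2, 1, 0], [3, 1, -1]])
B = np.array([[2, 1, 4, 1, 2], [4, 2, 2, 3, 2], [2, 1, -2, 2, 0], [6, 3, 6, 4, 4]])
C = np.array([[1, 4, 5, 8], [-1, 4, 3, 0], [2, 0, 2, 1]])

# Rank + nullity must equal the number of columns (Dimension Theorem).
for name, M in [("A", A), ("B", B), ("C", C)]:
    rank = np.linalg.matrix_rank(M)
    nullity = M.shape[1] - rank
    print(name, rank, nullity)
```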

7. Consider two 3 × 4 matrices A and B with respective row-echelon forms

          ( 1 0 1 2 )           ( 0 1 0  1 )
    R_A = ( 0 0 1 1 ) ,   R_B = ( 0 0 1 -1 ) .
          ( 0 0 0 0 )           ( 0 0 0  1 )
Determine if we have enough information to find:

(a) the matrices 𝑨 and 𝑩,


(b) the row spaces of 𝑨 and 𝑩,
(c) the column spaces of 𝑨 and 𝑩.

If possible, write down a basis for each of the row spaces and column
spaces of 𝑨 and 𝑩.

(a) We are unable to recover the original matrices A and B without knowing the elementary row operations taken to obtain the corresponding row-echelon forms.
(b) The row space of a matrix is preserved by elementary row opera-
tions; thus,

    row(A) = span{(1, 0, 1, 2), (0, 0, 1, 1), (0, 0, 0, 0)},
    row(B) = span{(0, 1, 0, 1), (0, 0, 1, -1), (0, 0, 0, 1)}.

In particular, the linearly independent rows of R_A and R_B form bases for the row spaces of A and B, respectively, so a basis for row(A) is given by {(1, 0, 1, 2), (0, 0, 1, 1)}, and a basis for row(B) is given by {(0, 1, 0, 1), (0, 0, 1, -1), (0, 0, 0, 1)}.
(c) Linear independence of columns is preserved by elementary row
operations. Thus, the linearly independent columns of 𝑹 𝑨 and 𝑹 𝑩
determine which columns of 𝑨 and 𝑩 form a basis for the column
space of each matrix.
Consider R_A: the pivots lie in the first and third columns; thus, the first and third columns of the original matrix A would form a basis for col(A) ⊆ R3. Since we have no information on the original matrix A, we are unable to determine col(A).
Now, consider 𝑹 𝑩 : the second, third, and fourth columns are lin-
early independent; hence, the second, third, and fourth columns
of 𝑩 form a basis for col (𝑩) ⊆ R3 . Note that, in this case, three
linearly independent vectors in R3 span col (𝑩): thus, col (𝑩) must
be all of R3 . Though we do not have the original columns of 𝑩, we
know that they form a basis for all of R3 . Thus, we may take any
basis for R3 as our basis for col (𝑩) , for instance, the standard basis:

    (1, 0, 0)^T ,   (0, 1, 0)^T ,   (0, 0, 1)^T .
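The rank argument behind parts (b) and (c) can be checked numerically; this NumPy sketch (an addition, not part of the tutorial) confirms that R_B has rank 3 while R_A has rank 2:

```python
import numpy as np

# Row-echelon forms from the exercise.
R_A = np.array([[1, 0, 1, 2], [0, 0, 1, 1], [0, 0, 0, 0]])
R_B = np.array([[0, 1, 0, 1], [0, 0, 1, -1], [0, 0, 0, 1]])

# rank(R_B) = 3 = dim R^3, so col(B) must be all of R^3;
# rank(R_A) = 2, so col(A) is a 2-dimensional subspace we cannot pin down.
print(np.linalg.matrix_rank(R_A), np.linalg.matrix_rank(R_B))
```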
