
Module in Linear Algebra
Prepared by: Domarjun Sibayan Taguinod
Mathematics Instructor

LESSON 1: Systems of Linear Equations


Learning Outcomes:

At the end of the lesson, the students are expected to:


a. Give examples of linear equations
b. Find a polynomial function that fits a given set of points using a system of linear equations
c. Solve systems of linear equations using the Gaussian elimination method

Lesson 1.1: Linear Equations

The basic problem of linear algebra is to solve a system of linear equations. A linear equation in
the n variables, or unknowns, x₁, x₂, …, xₙ is an equation of the form

a₁x₁ + a₂x₂ + ⋯ + aₙxₙ = b

where b and the coefficients aᵢ are constants. A finite collection of such linear equations is
called a linear system. To solve a system means to find all values of the variables that satisfy all
the equations in the system simultaneously. For example, consider the following system, which
consists of two linear equations in two unknowns:

x₁ + x₂ = 3
3x₁ − 2x₂ = 4

Although there are infinitely many solutions to each equation separately, there is only one pair of numbers x₁ and x₂ which satisfies both equations at the same time. This ordered pair, (x₁, x₂) = (2, 1), is called the solution to the system.
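As a quick numerical check (not part of the original lesson), the same system can be handed to a linear-algebra library. The sketch below assumes NumPy and the system reconstructed above:

```python
import numpy as np

# Coefficient matrix and constants for the system
#    x1 +  x2 = 3
#   3x1 - 2x2 = 4
A = np.array([[1.0, 1.0],
              [3.0, -2.0]])
b = np.array([3.0, 4.0])

# np.linalg.solve returns the unique solution of A x = b
x = np.linalg.solve(A, b)
print(x)  # [2. 1.]  ->  (x1, x2) = (2, 1)
```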

Lesson 1.2: Solutions on Systems of Linear Equations

The analysis of linear systems will begin by determining the possibilities for the solutions.
Despite the fact that the system can contain any number of equations, each of which can involve
any number of unknowns, the result that describes the possible number of solutions to a linear
system is simple and definitive. The fundamental ideas will be illustrated in the following
examples.

Example 1:

Solve this system of equations by graphing.

To solve using graphing, graph both equations on the same set of coordinate axes and see where
the graphs cross. The ordered pair at the point of intersection becomes the solution (see Figure
1).

Check the solution.


The solution is x = 3, y = –2.

Figure 1. Two linear equations.

Solving systems of equations by graphing is practical only when the solution lies close to the origin and consists of integers; even then, the solution read from the graph is an approximation obtained by eye. For those reasons, graphing is used least frequently of all the solution methods.

Here are two things to keep in mind:

 Dependent system. If the two graphs coincide—that is, if they are actually two versions
of the same equation—then the system is called a dependent system, and its solution can
be expressed as either of the two original equations.
 Inconsistent system. If the two graphs are parallel—that is, if there is no point of
intersection—then the system is called an inconsistent system, and its solution is
expressed as the empty set { }, also written ∅ (the null set).
Example 2:

Solve this system of equations using elimination.

All the equations are already in the required form.

Choose a variable to eliminate, say x, and select two equations with which to eliminate it, say
equations (1) and (2).

Select a different set of two equations, say equations (2) and (3), and eliminate the same variable.

Solve the system created by equations (4) and (5).

Now, substitute z = 3 into equation (4) to find y.


Use the values of y and z found in the previous steps and substitute into any equation involving the remaining variable.

Using equation (2), 

Check the solution in all three original equations.


The solution is x = –1, y = 2, z = 3.

Example 3:

Solve this system of equations using the elimination method.

Write all equations in standard form.

Notice that equation (1) already has the y eliminated. Therefore, use equations (2) and (3) to
eliminate y. Then use this result, together with equation (1), to solve for x and z. Use these results
and substitute into either equation (2) or (3) to find y.

Substitute z = 3 into equation (1).


Substitute x = 4 and z = 3 into equation (2).

Use the original equations to check the solution (the check is left to you).

The solution is x = 4, y = –2, z = 3.

Lesson 1.3: Gaussian Elimination

The purpose of this lesson is to describe how the solutions to a linear system are actually found.
The fundamental idea is to add multiples of one equation to the others in order to eliminate a
variable and to continue this process until only one variable is left. Once this final variable is
determined, its value is substituted back into the other equations in order to evaluate the
remaining unknowns. This method, characterized by step‐by‐step elimination of the variables, is
called Gaussian elimination.
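The procedure just described can also be written as a short program. The following Python sketch is an illustration only, not part of the original lesson; it assumes NumPy, a square coefficient matrix, and a system with exactly one solution:

```python
import numpy as np

def gaussian_elimination(A, b):
    """Solve A x = b by step-by-step elimination and back-substitution.

    Minimal sketch: assumes A is square and the system has a unique solution.
    """
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)

    # Forward elimination: use row k to create zeros below the diagonal in column k.
    for k in range(n - 1):
        # Swap in the row with the largest entry in column k (partial pivoting)
        # so the pivot used for elimination is never zero.
        p = k + int(np.argmax(np.abs(A[k:, k])))
        if p != k:
            A[[k, p]] = A[[p, k]]
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]   # subtract m times row k so A[i, k] becomes 0
            b[i] -= m * b[k]

    # Back-substitution: solve the last equation first, then work upward.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x
```

For instance, `gaussian_elimination(np.array([[1.0, 1.0], [3.0, -2.0]]), np.array([3.0, 4.0]))` returns the solution (2, 1) of the system in Example 1 below.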

Example 1: Solve this system:

x + y = 3
3x − 2y = 4

Multiplying the first equation by −3 and adding the result to the second equation eliminates the variable x:

x + y = 3
−5y = −5

This final equation, −5 y = −5, immediately implies y = 1. Back‐substitution of y = 1 into the
original first equation, x + y = 3, yields x = 2. (Back‐substitution of y = 1 into the original second
equation, 3 x − 2 y = 4, would also yield x = 2.) The solution of this system is therefore (x, y) =
(2, 1), as noted in Lesson 1.1.

Gaussian elimination is usually carried out using matrices. This method reduces the effort in
finding the solutions by eliminating the need to explicitly write the variables at each step. The
previous example will be redone using matrices.

Example 2: Solve this system:

x + y = 3
3x − 2y = 4

The first step is to write the coefficients of the unknowns in a matrix:

[ 1   1 ]
[ 3  −2 ]

This is called the coefficient matrix of the system. Next, the coefficient matrix is augmented by writing the constants that appear on the right-hand sides of the equations as an additional column:

[ 1   1 | 3 ]
[ 3  −2 | 4 ]

This is called the augmented matrix, and each row corresponds to an equation in the given
system. The first row, r 1 = (1, 1, 3), corresponds to the first equation, 1 x + 1 y = 3, and the
second row, r 2 = (3, −2, 4), corresponds to the second equation, 3 x − 2 y = 4. You may choose
to include a vertical line—as shown above—to separate the coefficients of the unknowns from
the extra column representing the constants.

Now, the counterpart of eliminating a variable from an equation in the system is changing one of
the entries in the coefficient matrix to zero. Likewise, the counterpart of adding a multiple of one
equation to another is adding a multiple of one row to another row. Adding −3 times the first row of the augmented matrix to the second row yields

[ 1   1 |  3 ]
[ 0  −5 | −5 ]

The new second row translates into −5 y = −5, which means y = 1. Back‐substitution into the first
row (that is, into the equation that represents the first row) yields x = 2 and, therefore, the
solution to the system: (x, y) = (2, 1).
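To replay this matrix computation in code, here is a minimal sketch, assuming NumPy (not part of the original lesson); it performs the single row operation from the example and then reads off the solution:

```python
import numpy as np

# Augmented matrix [A | b] for  x + y = 3,  3x - 2y = 4
M = np.array([[1.0,  1.0, 3.0],
              [3.0, -2.0, 4.0]])

# The elimination step from the example: add -3 times row 1 to row 2
M[1] += -3.0 * M[0]
print(M)        # [[ 1.  1.  3.]
                #  [ 0. -5. -5.]]

# Read the solution off the reduced rows
y = M[1, 2] / M[1, 1]                  # -5y = -5  ->  y = 1
x = (M[0, 2] - M[0, 1] * y) / M[0, 0]  #  x + y = 3  ->  x = 2
print(x, y)     # 2.0 1.0
```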

Gaussian elimination can be summarized as follows. Given a linear system expressed in matrix
form, A x = b, first write down the corresponding augmented matrix:

[ A | b ]
Then, perform a sequence of elementary row operations, which are any of the following:

Type 1. Interchange any two rows.

Type 2. Multiply a row by a nonzero constant.

Type 3. Add a multiple of one row to another row.

The goal of these operations is to transform, or reduce, the original augmented matrix into one of the form [ A′ | b′ ], where A′ is upper triangular (a′ᵢⱼ = 0 for i > j), any zero rows appear at the bottom of the matrix, and the first nonzero entry in any row is to the right of the first nonzero entry in any higher row; such a matrix is said to be in echelon form. The solutions of the system
represented by the simpler augmented matrix, [ A′ | b′], can be found by inspection of the bottom
rows and back‐substitution into the higher rows. Since elementary row operations do not change
the solutions of the system, the vectors x which satisfy the simpler system A′ x = b′ are precisely
those that satisfy the original system, A x = b.
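Each of the three elementary row operations corresponds to a one-line array manipulation. The snippet below is purely illustrative, assuming NumPy:

```python
import numpy as np

# Start from the augmented matrix of the worked example above
M = np.array([[1.0,  1.0, 3.0],
              [3.0, -2.0, 4.0]])

# Type 1: interchange two rows
M[[0, 1]] = M[[1, 0]]

# Type 2: multiply a row by a nonzero constant
M[1] *= 2.0

# Type 3: add a multiple of one row to another row
M[0] += -3.0 * M[1]

# None of these operations changes the solution set of the system
print(M)
```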

Example 3: Solve the following system using Gaussian elimination:

  

The augmented matrix which represents this system is

  

The first goal is to produce zeros below the first entry in the first column, which translates into
eliminating the first variable, x, from the second and third equations. The row operations which
accomplish this are as follows:

[ 1  −2   1 |  0 ]
[ 0   5  −5 |  5 ]
[ 0   1  −3 | −1 ]

The second goal is to produce a zero below the second entry in the second column, which
translates into eliminating the second variable, y, from the third equation. One way to accomplish
this would be to add −1/5 times the second row to the third row. However, to avoid fractions,
there is another option: first interchange rows two and three. Interchanging two rows merely
interchanges the equations, which clearly will not alter the solution of the system:

[ 1  −2   1 |  0 ]
[ 0   1  −3 | −1 ]
[ 0   5  −5 |  5 ]
Now, add −5 times the second row to the third row:

[ 1  −2   1 |  0 ]
[ 0   1  −3 | −1 ]
[ 0   0  10 | 10 ]

Since the coefficient matrix has been transformed into echelon form, the “forward” part of
Gaussian elimination is complete. What remains now is to use the third row to evaluate the third
unknown, then to back‐substitute into the second row to evaluate the second unknown, and,
finally, to back-substitute into the first row to evaluate the first unknown.

The third row of the final matrix translates into 10 z = 10, which gives z = 1. Back‐substitution of
this value into the second row, which represents the equation y − 3 z = −1, yields y = 2. Back‐
substitution of both these values into the first row, which represents the equation x − 2 y + z = 0,
gives x = 3. The solution of this system is therefore (x, y, z) = (3, 2, 1).
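The three rows of the final echelon form quoted above can be checked numerically against the solution (3, 2, 1). A minimal check, assuming NumPy:

```python
import numpy as np

# Echelon-form system read from the final matrix of the example:
#   x - 2y +   z =  0
#        y -  3z = -1
#            10z = 10
U = np.array([[1.0, -2.0,  1.0],
              [0.0,  1.0, -3.0],
              [0.0,  0.0, 10.0]])
c = np.array([0.0, -1.0, 10.0])

solution = np.array([3.0, 2.0, 1.0])
print(U @ solution - c)   # [0. 0. 0.]  ->  (3, 2, 1) satisfies every row
```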

Example 4: Solve the following system using Gaussian elimination:

  

For this system, the augmented matrix (vertical line omitted) is

  

First, multiply row 1 by 1/2:

  

Now, adding −1 times the first row to the second row yields zeros below the first entry in the
first column:

  

Interchanging the second and third rows then gives the desired upper‐triangular coefficient
matrix: 
The third row now says z = 4. Back‐substituting this value into the second row gives y = 1, and
back‐substitution of both these values into the first row yields x = −2. The solution of this system
is therefore (x, y, z) = (−2, 1, 4).

Lesson 1.4: Gauss‐Jordan Elimination

Gaussian elimination proceeds by performing elementary row operations to produce zeros below
the diagonal of the coefficient matrix to reduce it to echelon form. (Recall that a matrix A′ = [a′ᵢⱼ] is in echelon form when a′ᵢⱼ = 0 for i > j, any zero rows appear at the bottom of the matrix, and the first nonzero entry in any row is to the right of the first nonzero entry in any higher row.)
Once this is done, inspection of the bottom row(s) and back‐substitution into the upper rows
determine the values of the unknowns.

However, it is possible to reduce (or eliminate entirely) the computations involved in back‐
substitution by performing additional row operations to transform the matrix from echelon form
to reduced echelon form. A matrix is in reduced echelon form when, in addition to being in echelon form, each column that contains a leading entry (usually scaled to be 1) has zeros not just below that entry but also above that entry. Loosely speaking, Gaussian elimination works
from the top down, to produce a matrix in echelon form, whereas Gauss‐Jordan
elimination continues where Gaussian left off by then working from the bottom up to produce a
matrix in reduced echelon form. The technique will be illustrated in the following example.
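Before the worked example, here is how this might look in code. The function below is an illustrative sketch only (assuming NumPy, a square coefficient matrix, and a unique solution), not the lesson's own material:

```python
import numpy as np

def gauss_jordan(A, b):
    """Reduce the augmented matrix [A | b] to reduced echelon form and return the solution.

    Minimal sketch: assumes A is square and the system has a unique solution.
    """
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    n = A.shape[0]

    for k in range(n):
        # Bring the largest available pivot into row k, then scale the pivot to 1.
        p = k + int(np.argmax(np.abs(M[k:, k])))
        M[[k, p]] = M[[p, k]]
        M[k] /= M[k, k]
        # Clear column k both below and above the pivot, so no back-substitution is needed.
        for i in range(n):
            if i != k:
                M[i] -= M[i, k] * M[k]

    return M[:, -1]   # with A reduced to the identity, the last column is the solution
```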

Example 5: The height, y, of an object thrown into the air is known to be given by a quadratic function of t (time) of the form y = at² + bt + c. If the object is at height y = 23/4 at time t = 1/2, at height y = 7 at time t = 1, and at height y = 2 at time t = 2, determine the coefficients a, b, and c.

Since t = 1/2 gives y = 23/4, substituting into y = at² + bt + c yields the equation

(1/4)a + (1/2)b + c = 23/4,

while the other two conditions, y(t = 1) = 7 and y(t = 2) = 2, give the following equations for a, b, and c:

a + b + c = 7
4a + 2b + c = 2

Therefore, the goal is to solve the system

(1/4)a + (1/2)b + c = 23/4
a + b + c = 7
4a + 2b + c = 2

The augmented matrix for this system is reduced as follows: 


At this point, the forward part of Gaussian elimination is finished, since the coefficient matrix
has been reduced to echelon form. However, to illustrate Gauss-Jordan elimination, additional elementary row operations are performed to clear the entries above each pivot, producing the reduced echelon form

[ 1  0  0 | −5 ]
[ 0  1  0 | 10 ]
[ 0  0  1 |  2 ]

This final matrix immediately gives the solution: a = −5, b = 10, and c = 2.
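The same coefficients can be recovered numerically from the three conditions. A quick check, assuming NumPy:

```python
import numpy as np

# Rows correspond to t = 1/2, 1, 2 and columns to the unknowns a, b, c
# in y = a t^2 + b t + c.
A = np.array([[0.25, 0.5, 1.0],
              [1.0,  1.0, 1.0],
              [4.0,  2.0, 1.0]])
y = np.array([23 / 4, 7.0, 2.0])

a, b, c = np.linalg.solve(A, y)
print(a, b, c)   # -5.0 10.0 2.0 (up to rounding)
```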

Example 6: Solve the following system using Gaussian elimination:

  

The augmented matrix for this system is

  

Multiples of the first row are added to the other rows to produce zeros below the first entry in the
first column:

  

Next, −1 times the second row is added to the third row:


  

The third row now says 0 x + 0 y + 0 z = 1, an equation that cannot be satisfied by any values
of x, y, and z. The process stops: this system has no solutions.
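In code, this failure is easy to detect once the augmented matrix is in echelon form: some row has all-zero coefficients but a nonzero constant. The sketch below is illustrative only; it assumes NumPy, and the sample matrix mirrors the echelon form described in this and the next example rather than being quoted from the lesson:

```python
import numpy as np

def is_inconsistent(M, tol=1e-12):
    """True if some row of an echelon-form augmented matrix reads 0 = nonzero."""
    coeffs, consts = M[:, :-1], M[:, -1]
    zero_coeff_rows = np.all(np.abs(coeffs) < tol, axis=1)
    return bool(np.any(zero_coeff_rows & (np.abs(consts) > tol)))

# An echelon form whose last row says 0x + 0y + 0z = 1: no solutions
M = np.array([[1.0,  1.0, -3.0,  4.0],
              [0.0, -1.0,  5.0, -6.0],
              [0.0,  0.0,  0.0,  1.0]])
print(is_inconsistent(M))   # True
```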

The previous example shows how Gaussian elimination reveals an inconsistent system. A slight
alteration of that system (for example, changing the constant term “7” in the third equation to a
“6”) will illustrate a system with infinitely many solutions.

Example 7: Solve the following system using Gaussian elimination:

  

The same operations applied to the augmented matrix of the system in Example 6 are applied to the augmented matrix for the present system:

[ 1   1  −3 |  4 ]
[ 0  −1   5 | −6 ]
[ 0   0   0 |  0 ]

Here, the third row translates into 0x + 0y + 0z = 0, an equation which is satisfied by any x, y, and z. Since this offers no constraint on the unknowns, there are not three conditions on the unknowns, only two (represented by the two nonzero rows in the final augmented matrix). Since there are 3 unknowns but only 2 constraints, 3 − 2 = 1 of the unknowns, z say, is arbitrary; this is called a free variable. Let z = t, where t is any real number. Back-substitution of z = t into the second row (−y + 5z = −6) gives

y = 6 + 5t

Back-substituting z = t and y = 6 + 5t into the first row (x + y − 3z = 4) determines x:

x = 4 − (6 + 5t) + 3t = −2 − 2t

Therefore, every solution of the system has the form

(x, y, z) = (−2 − 2t, 6 + 5t, t)     (*)

where t is any real number. There are infinitely many solutions, since every real value of t gives a different particular solution. For example, choosing t = 1 gives (x, y, z) = (−4, 11, 1), while t = −3 gives (x, y, z) = (4, −9, −3), and so on. Geometrically, this system represents three planes
in R 3 that intersect in a line, and (*) is a parametric equation for this line.
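The parametric solution can be verified numerically against the two nonzero rows quoted above. A small check, assuming NumPy:

```python
import numpy as np

# The two nonzero rows of the reduced augmented matrix:
#   x + y - 3z =  4
#      -y + 5z = -6
A = np.array([[1.0,  1.0, -3.0],
              [0.0, -1.0,  5.0]])
b = np.array([4.0, -6.0])

for t in [-3.0, 0.0, 1.0, 2.5]:
    xyz = np.array([-2 - 2 * t, 6 + 5 * t, t])   # the parametric family (*)
    print(t, np.allclose(A @ xyz, b))            # True for every t
```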

Example 7 provided an illustration of a system with infinitely many solutions, of how this case arises, and of how the solution is written. Every linear system that possesses infinitely many solutions must contain at least one arbitrary parameter (free variable). Once the augmented
matrix has been reduced to echelon form, the number of free variables is equal to the total
number of unknowns minus the number of nonzero rows:

number of free variables = (number of unknowns) − (number of nonzero rows)

This agrees with the general result that a linear system with fewer equations than unknowns, if consistent, has infinitely many solutions. The condition “fewer equations than unknowns” means that the number of rows in the coefficient matrix is less than the number of unknowns. Therefore, the equation above implies that there must be at least one free variable. Since such a variable can,
by definition, take on infinitely many values, the system will have infinitely many solutions.
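This counting rule translates directly into code. A minimal sketch, assuming NumPy and an augmented matrix that is already in echelon form and consistent:

```python
import numpy as np

def num_free_variables(M, tol=1e-12):
    """Unknowns minus nonzero rows, for a consistent echelon-form augmented matrix [A' | b']."""
    unknowns = M.shape[1] - 1        # the last column holds the constants b'
    nonzero_rows = int(np.sum(np.any(np.abs(M) > tol, axis=1)))
    return unknowns - nonzero_rows

# Echelon form from Example 7: three unknowns, two nonzero rows -> one free variable
M = np.array([[1.0,  1.0, -3.0,  4.0],
              [0.0, -1.0,  5.0, -6.0],
              [0.0,  0.0,  0.0,  0.0]])
print(num_free_variables(M))   # 1
```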

Example 8: Find all solutions to the system

  

First, note that there are four unknowns, but only three equations. Therefore, if the system is
consistent, it is guaranteed to have infinitely many solutions, a condition characterized by at least
one parameter in the general solution. After the corresponding augmented matrix is constructed,
Gaussian elimination yields 

The fact that only two nonzero rows remain in the echelon form of the augmented matrix means that 4 − 2 = 2 of the variables are free.

Therefore, selecting y and z as the free variables, let y = t₁ and z = t₂. The second row of the
reduced augmented matrix implies

    
and the first row then gives

Thus, the solutions of the system have the form

    

where t₁ and t₂ are allowed to take on any real values.

Lesson 1.5: Polynomial Curve Fitting

Suppose a polynomial

P(x) = a₀ + a₁x + a₂x² + ⋯ + aₙ₋₁x^(n−1)

is required to pass through the n points

(x₁, y₁), (x₂, y₂), …, (xₙ, yₙ).

Substituting each point into P(x) gives one linear equation in the unknown coefficients a₀, a₁, …, aₙ₋₁:

a₀ + a₁x₁ + a₂x₁² + ⋯ + aₙ₋₁x₁^(n−1) = y₁
a₀ + a₁x₂ + a₂x₂² + ⋯ + aₙ₋₁x₂^(n−1) = y₂
a₀ + a₁x₃ + a₂x₃² + ⋯ + aₙ₋₁x₃^(n−1) = y₃
⋮
a₀ + a₁xₙ + a₂xₙ² + ⋯ + aₙ₋₁xₙ^(n−1) = yₙ

Solving this system of n equations in the n unknown coefficients, for example by Gaussian or Gauss-Jordan elimination, produces the polynomial that fits the given points.

Example:

Find a polynomial that fits the points (1, 4), (2, 0), and (3, 12).

Solution:

The x-values are (1, 2, 3) and the y-values are (4, 0, 12). Since there are three points, fit a polynomial of degree 2:

P(x) = a₀ + a₁x + a₂x²

P(1) = a₀ + a₁(1) + a₂(1)² = 4
P(2) = a₀ + a₁(2) + a₂(2)² = 0
P(3) = a₀ + a₁(3) + a₂(3)² = 12

This gives the system

a₀ + a₁ + a₂ = 4
a₀ + 2a₁ + 4a₂ = 0
a₀ + 3a₁ + 9a₂ = 12

By Gauss-Jordan elimination, the values are a₀ = 24, a₁ = −28, and a₂ = 8.

The polynomial that fits the points (1, 4), (2, 0), and (3, 12) is P(x) = 24 − 28x + 8x².
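The same fit can be reproduced numerically by building the coefficient matrix of this system (a Vandermonde matrix) and solving it. A minimal sketch, assuming NumPy:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, 12.0])

# Row i of V is (1, x_i, x_i^2), matching a0 + a1*x_i + a2*x_i^2 = y_i
V = np.vander(x, N=3, increasing=True)
a0, a1, a2 = np.linalg.solve(V, y)
print(a0, a1, a2)                                    # 24.0 -28.0 8.0 (up to rounding)

# Confirm that P(x) = 24 - 28x + 8x^2 passes through every given point
print(np.allclose(V @ np.array([a0, a1, a2]), y))    # True
```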
