
Numerical Computing - I

UNIT 3 SOLUTION OF LINEAR ALGEBRAIC EQUATIONS
Structure

3.0 Introduction
3.1 Objectives
3.2 Gauss Elimination Method
3.3 Pitfalls of Gauss Elimination Method
3.4 Gauss Elimination Method with Partial Pivoting
3.5 LU Decomposition Method
3.6 Iterative Methods
3.7 Summary
3.8 Exercises
3.9 Solutions to Exercises

3.0 INTRODUCTION

Systems of linear equations arise in many areas of study, both directly in modelling
physical situations and indirectly in the numerical solution of other mathematical
models. Linear algebraic equations occur in linear optimization theory, least
squares fitting of data, the numerical solution of ordinary and partial differential
equations, statistical inference, etc. Therefore, finding the numerical solution of a
system of linear equations is an important area of study.

From your study of algebra, you must be familiar with the following two common
methods of solving a system of linear equations:

1) By the elimination of the variables by elementary row operations.


2) By the use of determinants, a method better known as Cramer’s rule.

When a small number of equations is involved, Cramer's rule appears to be better
than the elimination method. However, Cramer's rule is completely impractical when a
large number of equations is to be solved, because n+1 determinants must be
computed for n unknowns.

Numerical methods for solving linear algebraic systems can be divided into two
classes: direct and iterative. Direct methods are those which, in the absence of
round-off or other errors, yield the exact solution in a finite number of arithmetic
operations. Iterative methods, on the other hand, start with an initial guess and, by
applying a suitable procedure, give successively better approximations.

To understand the numerical methods for solving linear systems of equations, it is
necessary to have some knowledge of the properties of matrices. You might have
studied matrices, determinants and their properties in your linear algebra course.

In this unit, we shall discuss two direct methods, namely, the Gauss elimination method
and the LU decomposition method, and the iterative methods, viz., the Jacobi method,
the Gauss-Seidel method and the successive over-relaxation method. These methods are
frequently used to solve systems of linear equations.

3.1 OBJECTIVES

After studying this unit, you should be able to:

• state the difference between direct and iterative methods for solving a system of
linear equations;
• learn how to solve a system of linear equations by Gauss elimination method;
• understand the effect of round off errors on the solution obtained by Gauss
elimination method;
• learn how to modify Gauss elimination method to Gaussian elimination with
partial pivoting to avoid pitfalls of the former method;
• learn LU decomposition method to solve a system of linear equations;
• learn how to find inverse of a square matrix numerically;
• learn how to obtain the solution of a system of linear equations by using an
iterative method, and
• state whether an iterative method will converge or not.

3.2 GAUSS ELIMINATION METHOD

One of the most popular techniques for solving simultaneous linear equations is the
Gaussian elimination method. Carl Friedrich Gauss, the great 19th century
mathematician, suggested this elimination method as a part of his proof of a particular
theorem. Computational scientists use this "proof" as a direct computational method.
The approach is designed to solve a general set of n equations in n unknowns

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 + \cdots + a_{2n}x_n &= b_2 \\
&\ \vdots \\
a_{n1}x_1 + a_{n2}x_2 + a_{n3}x_3 + \cdots + a_{nn}x_n &= b_n
\end{aligned} \tag{1}$$

In matrix form, we write Ax = b, where

$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}, \qquad
x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}, \qquad
b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_n \end{bmatrix}$$

Gaussian elimination consists of two steps:

1) Forward Elimination: In this step, the elementary row operations are applied on
the augmented matrix [A|b] to transform the coefficient matrix A into upper
triangular form.

2) Back Substitution: In this step, starting from the last equation, each of the
unknowns is found by back substitution.

Forward Elimination of Unknowns: In this first step, the first unknown $x_1$ is
eliminated from all rows below the first row. The first equation is selected as the
pivot equation to eliminate $x_1$. So, to eliminate $x_1$ in the second equation, one divides
the first equation by $a_{11}$ (hence called the pivot element) and then multiplies it by
$a_{21}$. That is, the same as multiplying the first equation by $a_{21}/a_{11}$ to give

$$a_{21}x_1 + \frac{a_{21}}{a_{11}}a_{12}x_2 + \cdots + \frac{a_{21}}{a_{11}}a_{1n}x_n = \frac{a_{21}}{a_{11}}b_1$$

Now, this equation is subtracted from the second equation to give

$$\left(a_{22} - \frac{a_{21}}{a_{11}}a_{12}\right)x_2 + \cdots + \left(a_{2n} - \frac{a_{21}}{a_{11}}a_{1n}\right)x_n = b_2 - \frac{a_{21}}{a_{11}}b_1$$

or $a'_{22}x_2 + \cdots + a'_{2n}x_n = b'_2$,

where $a'_{22} = a_{22} - \dfrac{a_{21}}{a_{11}}a_{12}, \; \ldots, \; a'_{2n} = a_{2n} - \dfrac{a_{21}}{a_{11}}a_{1n}, \quad b'_2 = b_2 - \dfrac{a_{21}}{a_{11}}b_1.$

This procedure of eliminating $x_1$ is now repeated for the third equation through the
$n$th equation, to reduce the set of equations to

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2 \\
a'_{32}x_2 + a'_{33}x_3 + \cdots + a'_{3n}x_n &= b'_3 \\
&\ \vdots \\
a'_{n2}x_2 + a'_{n3}x_3 + \cdots + a'_{nn}x_n &= b'_n
\end{aligned} \tag{2}$$

This completes the first step of forward elimination. Now, for the second step of
forward elimination, we start with the second equation as the pivot equation and
$a'_{22}$ as the pivot element. So, to eliminate $x_2$ in the third equation, one divides the
second equation by $a'_{22}$ (the pivot element) and then multiplies it by $a'_{32}$; that is,
the same as multiplying the second equation by $a'_{32}/a'_{22}$ and subtracting it from the
third equation.

This makes the coefficient of $x_2$ zero in the third equation. The same procedure is
now repeated for the fourth equation till the nth equation to give

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2 \\
a''_{33}x_3 + \cdots + a''_{3n}x_n &= b''_3 \\
&\ \vdots \\
a''_{n3}x_3 + \cdots + a''_{nn}x_n &= b''_n
\end{aligned} \tag{3}$$

The next steps of forward elimination are done by using the third equation as a pivot
equation and so on. That is, there will be a total of (n–1) steps of forward elimination.
At the end of (n–1) steps of forward elimination, we get the set of equations

$$\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 + \cdots + a_{1n}x_n &= b_1 \\
a'_{22}x_2 + a'_{23}x_3 + \cdots + a'_{2n}x_n &= b'_2 \\
a''_{33}x_3 + \cdots + a''_{3n}x_n &= b''_3 \\
&\ \vdots \\
a^{(n-1)}_{nn}x_n &= b^{(n-1)}_n
\end{aligned} \tag{4}$$

Back Substitution: Now, the equations are solved starting from the last equation, as it
has only one unknown. We obtain

$$x_n = \frac{b^{(n-1)}_n}{a^{(n-1)}_{nn}}$$

Now, we solve the (n−1)th equation to give

$$x_{n-1} = \frac{1}{a^{(n-2)}_{n-1,n-1}}\left[b^{(n-2)}_{n-1} - a^{(n-2)}_{n-1,n}x_n\right]$$

since $x_n$ is already determined.

We repeat the procedure until x1 is determined. The solution is given by

$$x_n = \frac{b^{(n-1)}_n}{a^{(n-1)}_{nn}}, \qquad
x_i = \frac{b^{(i-1)}_i - \sum_{j=i+1}^{n} a^{(i-1)}_{ij}x_j}{a^{(i-1)}_{ii}}, \quad i = n-1, n-2, \ldots, 1 \tag{5}$$

Example 1: Solve the following linear system of equations

$$x_1 + x_2 + x_3 = 3, \quad 4x_1 + 3x_2 + 4x_3 = 8, \quad 9x_1 + 3x_2 + 4x_3 = 7$$

using the Gauss elimination method.

Solution: In augmented form, we write the system as

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 4 & 3 & 4 & 8 \\ 9 & 3 & 4 & 7 \end{array}\right]$$

Subtracting 4 times the first row from the second row gives

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -1 & 0 & -4 \\ 9 & 3 & 4 & 7 \end{array}\right]$$

Subtracting 9 times the first row from the third row, we get

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -1 & 0 & -4 \\ 0 & -6 & -5 & -20 \end{array}\right]$$

Subtracting 6 times the second row from the third row gives

$$\left[\begin{array}{ccc|c} 1 & 1 & 1 & 3 \\ 0 & -1 & 0 & -4 \\ 0 & 0 & -5 & 4 \end{array}\right]$$

Restoring the transformed matrix as equations gives

$$x_1 + x_2 + x_3 = 3, \quad -x_2 = -4, \quad -5x_3 = 4$$

Solving the last equation, we get $x_3 = -\frac{4}{5}$. Solving the second equation, we get
$x_2 = 4$, and the first equation gives $x_1 = 3 - x_2 - x_3 = 3 - 4 + \frac{4}{5} = -\frac{1}{5}$.

Example 2: Use Gauss elimination to solve

$$\begin{aligned}
10x_1 - 7x_2 &= 7 \\
-3x_1 + 2.099x_2 + 6x_3 &= 3.901 \\
5x_1 - x_2 + 5x_3 &= 6
\end{aligned}$$

correct to six significant digits.

Solution: In matrix form, we write

$$\begin{bmatrix} 10 & -7 & 0 \\ -3 & 2.099 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 3.901 \\ 6 \end{bmatrix}$$

Multiplying the first row by 3/10 and adding it to the second equation, we get

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 5 & -1 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 6 \end{bmatrix}$$

Multiplying the first row by 5/10 and subtracting it from the third equation, we get

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 2.5 & 5 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 2.5 \end{bmatrix}$$

This completes the first step of forward elimination.

Multiplying the second equation by 2.5/(−0.001) = −2500 and subtracting it from the
third equation, we obtain

$$\begin{bmatrix} 10 & -7 & 0 \\ 0 & -0.001 & 6 \\ 0 & 0 & 15005 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7 \\ 6.001 \\ 15005 \end{bmatrix}$$

We can now solve the above equations by back substitution. From the third equation,
we get

$$15005x_3 = 15005, \quad \text{or} \quad x_3 = 1.$$

Substituting the value of $x_3$ in the second equation, we get

$$-0.001x_2 + 6x_3 = 6.001, \quad \text{or} \quad -0.001x_2 = 6.001 - 6 = 0.001, \quad \text{or} \quad x_2 = -1.$$

Substituting the values of $x_3$ and $x_2$ in the first equation, we get

$$10x_1 - 7x_2 = 7, \quad \text{or} \quad 10x_1 = 7 + 7x_2 = 0, \quad \text{or} \quad x_1 = 0.$$

Hence, the solution is $[0 \;\; -1 \;\; 1]^T$.

3.3 PITFALLS OF GAUSS ELIMINATION METHOD

There are two pitfalls in the Gauss elimination method.

Division by zero: It is possible that division by zero may occur during the forward
elimination steps. For example, for the set of equations

$$\begin{aligned}
10x_2 - 7x_3 &= 7 \\
6x_1 + 2.099x_2 - 3x_3 &= 3.901 \\
5x_1 - x_2 + 5x_3 &= 6
\end{aligned}$$

the coefficient of $x_1$ in the first (pivot) equation is zero during the first forward
elimination step, and hence normalisation would require division by zero.

Round-off error: The Gauss elimination method is prone to round-off errors. This is
especially true when a large number of equations is involved, as errors propagate.
Also, the subtraction of almost equal numbers may create large errors. We illustrate
this through the following examples.

Example 3: Solve the following linear equations


10 −5 x + y = 1.0,
x + y = 2.0 (6)
correct to 4 places of accuracy.

Solution: For 4 places of accuracy the solution is , x ≈ y ≈ 1.0 .


Applying the Gauss elimination method (dividing by the pivot element $10^{-5}$), we get

$$x + 10^5 y = 10^5$$
$$(1 - 10^5)y = 2.0 - 10^5.$$

Now, $10^5 - 1$, when rounded to four places of accuracy, becomes $10^5$. Similarly,
$10^5 - 2$, when rounded to four places of accuracy, becomes $10^5$.

Hence, from the second equation we get $10^5 y = 10^5$, or $y = 1.0$.

Substituting in the first equation, we get $x = 0.0$, which is not the solution.
Such errors can also arise when we perform computations with a smaller number of digits.
To avoid these computational disasters, we apply partial pivoting to Gauss elimination.
3.4 GAUSS ELIMINATION METHOD WITH PARTIAL PIVOTING

We perform the following modification to the Gauss elimination method. At the
beginning of each step of forward elimination, a row interchange is done, if necessary,
based on the following criterion. If there are n equations, then there are (n − 1)
forward elimination steps. At the beginning of the kth step of forward elimination, we
find the maximum of

$$|a_{kk}|, \ |a_{k+1,k}|, \ \ldots, \ |a_{nk}|,$$

that is, the maximum in magnitude of the elements on or below the diagonal in the
kth column. If the maximum of these values is $|a_{pk}|$ in the pth row, k ≤ p ≤ n, then we
interchange rows p and k. The other steps of forward elimination are the same as in the
Gauss elimination method. The back substitution steps remain exactly the same as in the
Gauss elimination method.
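As a sketch, the only change from the earlier gauss_eliminate code is the row interchange at the start of each step; the function below is again our own illustration, assuming NumPy.

import numpy as np

def gauss_eliminate_pp(A, b):
    # Gauss elimination with partial pivoting: at step k, bring the row with
    # the largest |a_ik|, i = k..n-1, into the pivot position, then eliminate.
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + int(np.argmax(np.abs(A[k:, k])))   # row of largest pivot candidate
        if p != k:
            A[[k, p]] = A[[p, k]]                  # interchange rows p and k
            b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# System (6): naive elimination returned x = 0.0, but with pivoting we get ~[1, 1].
A = np.array([[1e-5, 1.0], [1.0, 1.0]])
b = np.array([1.0, 2.0])
print(gauss_eliminate_pp(A, b))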

Example 4: Consider Example 3. We now apply partial pivoting to system (6).

Solution: Since $|a_{11}| < |a_{21}|$, we interchange the first and second rows (equations).
We obtain the new system as

$$x + y = 2.0$$
$$10^{-5}x + y = 1.0$$

On elimination, we get the second equation as $y = 1.0$, correct to 4 places. Substituting
in the first equation, we get $x = 1.0$, which is the correct solution.

3.5 LU DECOMPOSITION METHOD

The Gauss elimination method has the disadvantage that the right-hand sides are
modified (repeatedly) during the steps of elimination. The LU decomposition method
has the property that the matrix modification (or decomposition) step can be
performed independently of the right-hand side vector. This feature is quite useful in
practice, and therefore the LU decomposition method is usually chosen for computations.

In this method, the coefficient matrix is written as a product of two matrices

A = LU (7)

where L is a lower triangular matrix and U is an upper triangular matrix.

Now, the original system of equations, A x = b becomes


LU x = b (8)

Now, set U x = y, then, (8) becomes


Ly = b (9)

The rationale behind this approach is that the two triangular systems, Ly = b and
Ux = y, are both easy to solve. Since L is a lower triangular matrix, Ly = b can be
solved for y using the forward substitution step. Since U is an upper triangular matrix,
Ux = y can be solved for x using the back substitution algorithm.

We define writing A as LU as the Decomposition Step. We discuss the following
three approaches of decomposition using 4 × 4 matrices.

Doolittle Decomposition

We choose $l_{ii} = 1$, i = 1, 2, 3, 4, and write

$$\begin{bmatrix} 1 & 0 & 0 & 0 \\ l_{21} & 1 & 0 & 0 \\ l_{31} & l_{32} & 1 & 0 \\ l_{41} & l_{42} & l_{43} & 1 \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} & u_{14} \\ 0 & u_{22} & u_{23} & u_{24} \\ 0 & 0 & u_{33} & u_{34} \\ 0 & 0 & 0 & u_{44} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \tag{10}$$

Because of the specific structure of the matrices, we can derive a systematic set of
formulae for the components of L and U.

Crout Decomposition:

We choose $u_{ii} = 1$, i = 1, 2, 3, 4, and write

$$\begin{bmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & l_{22} & 0 & 0 \\ l_{31} & l_{32} & l_{33} & 0 \\ l_{41} & l_{42} & l_{43} & l_{44} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} & u_{14} \\ 0 & 1 & u_{23} & u_{24} \\ 0 & 0 & 1 & u_{34} \\ 0 & 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \tag{11}$$

The evaluation of the components of L and U is done in a similar fashion as above.

Cholesky Factorization:

If A is a symmetric and positive definite matrix, then we can write the decomposition
as $A = LL^T$, where L is the lower triangular matrix

$$L = \begin{bmatrix} l_{11} & 0 & 0 & 0 \\ l_{21} & l_{22} & 0 & 0 \\ l_{31} & l_{32} & l_{33} & 0 \\ l_{41} & l_{42} & l_{43} & l_{44} \end{bmatrix} \tag{12}$$
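For completeness, here is a short sketch of how the Cholesky factor can be computed, assuming A is symmetric positive definite; this illustrative loop mirrors what numpy.linalg.cholesky computes.

import numpy as np

def cholesky(A):
    # Cholesky factorization A = L L^T for a symmetric positive definite A.
    n = A.shape[0]
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = A[i, j] - L[i, :j] @ L[j, :j]     # subtract already-known products
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

# A small SPD example: L = [[2, 0], [1, sqrt(2)]].
A = np.array([[4.0, 2.0], [2.0, 3.0]])
print(cholesky(A))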

We now describe the rationale behind the choice of $l_{ii} = 1$ in (10) or $u_{ii} = 1$ in (11).
Consider the decomposition of a 3 × 3 matrix as follows:

$$\begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} u_{11} & u_{12} & u_{13} \\ 0 & u_{22} & u_{23} \\ 0 & 0 & u_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}$$

$$\begin{bmatrix} l_{11}u_{11} & l_{11}u_{12} & l_{11}u_{13} \\ l_{21}u_{11} & l_{21}u_{12} + l_{22}u_{22} & l_{21}u_{13} + l_{22}u_{23} \\ l_{31}u_{11} & l_{31}u_{12} + l_{32}u_{22} & l_{31}u_{13} + l_{32}u_{23} + l_{33}u_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{13}$$
We note that L has 1+2+3 = 6 unknowns and U has 3+2+1 = 6 unknowns, that is, a total
of 12 unknowns. Comparing the elements of the matrices on the left and right hand
sides of (13), we get 9 equations to determine 12 unknowns. Hence, we have 3 arbitrary
parameters, the choice of which can be made in advance. Therefore, to make
computations easy, we choose $l_{ii} = 1$ in the Doolittle method and $u_{ii} = 1$ in Crout's
method.

In the general case of decomposition of an N × N matrix, L has 1+2+3+…+N =
N(N+1)/2 unknowns and U also has N(N+1)/2 unknowns, that is, a total of $N^2 + N$
unknowns. Comparing the elements of A and the product LU, we obtain $N^2$
equations. Hence, we have N arbitrary parameters. Therefore, we choose either
$l_{ii} = 1$ or $u_{ii} = 1$, i = 1, 2, …, N.

Now, let us give the formulas for the Doolittle and Crout decompositions.

Doolittle Method: Here $l_{ii} = 1$, i = 1 to N. In this case, generalisation of (13) gives

$u_{1j} = a_{1j}$, j = 1 to N,
$l_{i1} = a_{i1}/a_{11}$, i = 2 to N,
$u_{2j} = a_{2j} - l_{21}u_{1j}$, j = 2 to N,
$l_{i2} = (a_{i2} - l_{i1}u_{12})/u_{22}$, i = 3 to N, and so on.

Crout's Method: Here $u_{ii} = 1$, i = 1 to N. In this case, we get

$l_{i1} = a_{i1}$, i = 1 to N,
$u_{1j} = a_{1j}/a_{11}$, j = 2 to N,
$l_{i2} = a_{i2} - l_{i1}u_{12}$, i = 2 to N,
$u_{2j} = (a_{2j} - l_{21}u_{1j})/l_{22}$, j = 3 to N, and so on.
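These recurrences extend row by row and column by column. A compact Python sketch of the Doolittle decomposition ($l_{ii} = 1$) follows, assuming no zero pivots are encountered; the function name doolittle is ours.

import numpy as np

def doolittle(A):
    # Doolittle decomposition A = LU with unit diagonal in L.
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros((n, n))
    for k in range(n):
        # k-th row of U: u_kj = a_kj - sum_{s<k} l_ks u_sj
        for j in range(k, n):
            U[k, j] = A[k, j] - L[k, :k] @ U[:k, j]
        # k-th column of L: l_ik = (a_ik - sum_{s<k} l_is u_sk) / u_kk
        for i in range(k + 1, n):
            L[i, k] = (A[i, k] - L[i, :k] @ U[:k, k]) / U[k, k]
    return L, U

Crout's decomposition follows the same pattern with the roles of the divisions swapped: the rows of U are divided by $l_{ii}$ instead.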

Example 5: Given the following system of linear equations, determine the value of
each of the variables using the LU decomposition method.

$$6x_1 - 2x_2 = 14$$
$$9x_1 - x_2 + x_3 = 21$$
$$3x_1 + 7x_2 + 5x_3 = 9$$

Solution: We write A = LU, with $u_{ii} = 1$ (Crout), as

$$\begin{bmatrix} 6 & -2 & 0 \\ 9 & -1 & 1 \\ 3 & 7 & 5 \end{bmatrix} = \begin{bmatrix} l_{11} & 0 & 0 \\ l_{21} & l_{22} & 0 \\ l_{31} & l_{32} & l_{33} \end{bmatrix} \begin{bmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} l_{11} & l_{11}u_{12} & l_{11}u_{13} \\ l_{21} & l_{21}u_{12} + l_{22} & l_{21}u_{13} + l_{22}u_{23} \\ l_{31} & l_{31}u_{12} + l_{32} & l_{31}u_{13} + l_{32}u_{23} + l_{33} \end{bmatrix}$$

We obtain $l_{11} = 6$, $l_{21} = 9$, $l_{31} = 3$; $l_{11}u_{12} = -2$, so $u_{12} = -1/3$;
$l_{11}u_{13} = 0$, so $u_{13} = 0$;

$l_{21}u_{12} + l_{22} = -1$, so $l_{22} = -1 + 3 = 2$; $l_{21}u_{13} + l_{22}u_{23} = 1$, so $u_{23} = 1/2$;

$l_{31}u_{12} + l_{32} = 7$, so $l_{32} = 7 + 1 = 8$; $l_{31}u_{13} + l_{32}u_{23} + l_{33} = 5$, so $l_{33} = 5 - 4 = 1$.

Hence,

$$L = \begin{bmatrix} 6 & 0 & 0 \\ 9 & 2 & 0 \\ 3 & 8 & 1 \end{bmatrix}, \qquad U = \begin{bmatrix} 1 & -1/3 & 0 \\ 0 & 1 & 1/2 \\ 0 & 0 & 1 \end{bmatrix}$$

Solving Ly = b,

$$\begin{bmatrix} 6 & 0 & 0 \\ 9 & 2 & 0 \\ 3 & 8 & 1 \end{bmatrix} \begin{bmatrix} y_1 \\ y_2 \\ y_3 \end{bmatrix} = \begin{bmatrix} 14 \\ 21 \\ 9 \end{bmatrix}$$

we get $y_1 = \frac{14}{6} = \frac{7}{3}$; $9y_1 + 2y_2 = 21$, so $y_2 = \frac{1}{2}(21 - 21) = 0$;
$3y_1 + 8y_2 + y_3 = 9$, so $y_3 = 9 - 7 = 2$.

Solving Ux = y,

$$\begin{bmatrix} 1 & -1/3 & 0 \\ 0 & 1 & 1/2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix} = \begin{bmatrix} 7/3 \\ 0 \\ 2 \end{bmatrix}$$

we get $x_3 = 2$; $x_2 + \frac{1}{2}x_3 = 0$, so $x_2 = -1$; $x_1 - \frac{1}{3}x_2 = \frac{7}{3}$,
so $x_1 = \frac{7}{3} - \frac{1}{3} = 2$.

The solution vector is $[2 \;\; -1 \;\; 2]^T$.

Solving the System of Equations

After decomposing A as A = LU, the next step is to compute the solution. We have
LUx = b; set Ux = y. First solve Ly = b by forward substitution. Then solve Ux = y by
backward substitution to get the solution vector x.
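Here is a sketch of the two substitution sweeps, reusing the doolittle function from the earlier sketch; applied to Example 5 it reproduces the solution [2, −1, 2].

import numpy as np

def lu_solve(L, U, b):
    # Solve LUx = b: forward substitution for Ly = b,
    # then back substitution for Ux = y.
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # top to bottom
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # bottom to top
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

# The system of Example 5, using doolittle() from the sketch above:
A = np.array([[6.0, -2.0, 0.0], [9.0, -1.0, 1.0], [3.0, 7.0, 5.0]])
b = np.array([14.0, 21.0, 9.0])
L, U = doolittle(A)
print(lu_solve(L, U, b))   # [ 2., -1.,  2.]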

3.6 ITERATIVE METHODS

Iterate means repeat. Hence, an iterative method repeats its process over and over,
each time using the current approximation to produce a better approximation for the
true solution, until the current approximation is sufficiently close to the true solution –
or until you realize that the sequence of approximations resulting from these iterations
is not converging to the true solution.

Given an initial guess or approximation x(0) for the true solution x, we use x(0) to find
a new approximation x(1), then we use x(1) to find the better approximation x(2), and so
on. We expect that x(k) → x as k → ∞ ; that is, our approximations should become
closer to the true solution as we take more iterations of this process.

Since we do not actually have the true solution x, we cannot check to see how close
our current approximation x(k) is to x. One common way to check the closeness of x(k)
to x is, instead, to check how close Ax(k) is to Ax, that is, how close Ax(k) is to b.

Another way to check the accuracy of our current approximation is to look at the
magnitude of the difference between successive approximations, | x(k) − x(k-1) |. We
expect x(k) to be close to x if | x(k) − x(k-1) | is small.

The Jacobi Method

This method is also called the Gauss-Jacobi method. In the Jacobi method, the first
equation is used to solve for x1, the second equation to solve for x2, etc. That is,

$$x_1 = \frac{1}{a_{11}}\left[b_1 - (a_{12}x_2 + \cdots + a_{1n}x_n)\right]$$

$$x_2 = \frac{1}{a_{22}}\left[b_2 - (a_{21}x_1 + a_{23}x_3 + \cdots + a_{2n}x_n)\right]$$

If in the ith equation

$$\sum_{j=1}^{n} a_{ij}x_j = b_i \tag{15}$$

we solve for the value of $x_i$, we obtain

$$x_i = \Big(b_i - \sum_{j \ne i} a_{ij}x_j\Big)\Big/a_{ii} \tag{16}$$

This suggests an iterative method defined by

$$x_i^{(k)} = \Big(b_i - \sum_{j \ne i} a_{ij}x_j^{(k-1)}\Big)\Big/a_{ii} \tag{17}$$
which is the Jacobi method. Note that the order in which the equations are solved is
irrelevant, since the Jacobi method treats them independently. For this reason, the
Jacobi method is also known as the method of simultaneous displacements, since the
updates could in principle be done simultaneously.
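A minimal Python sketch of iteration (17) follows, stopping on the successive-difference test described earlier; the names jacobi, tol and max_iter are our own illustrative choices.

import numpy as np

def jacobi(A, b, x0, tol=1e-4, max_iter=100):
    # Jacobi iteration (17): every component of the new iterate uses
    # only the previous iterate, so all updates are simultaneous.
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            s = A[i, :] @ x - A[i, i] * x[i]     # sum over j != i of a_ij x_j
            x_new[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x_new - x)) < tol:      # |x^(k) - x^(k-1)| small
            return x_new, k
        x = x_new
    return x, max_iter

# The system of Example 2 further below: x + y - z = 0, -x + 3y = 2, x - 2z = -3.
A = np.array([[1.0, 1.0, -1.0], [-1.0, 3.0, 0.0], [1.0, 0.0, -2.0]])
b = np.array([0.0, 2.0, -3.0])
x, k = jacobi(A, b, np.array([0.8, 0.8, 2.1]))
print(x, k)   # approaches [1, 1, 2]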

The Jacobi method can also be written in matrix notation.

Let A be written as A = L + D + U, where L is the strictly lower triangular part, D the
diagonal part and U the strictly upper triangular part.

$$L = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ a_{21} & 0 & \cdots & 0 \\ \vdots & \ddots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & 0 \end{bmatrix}, \quad
U = \begin{bmatrix} 0 & a_{12} & \cdots & a_{1n} \\ 0 & 0 & \ddots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}, \quad
D = \begin{bmatrix} a_{11} & & & \\ & a_{22} & & \\ & & \ddots & \\ & & & a_{nn} \end{bmatrix}$$

Therefore, we have

(L + D + U)X = b, or DX = −(L + U)X + b.

Since $a_{ii} \ne 0$, $D^{-1}$ exists and is equal to

$D^{-1} = \mathrm{diag}(1/a_{11}, 1/a_{22}, \ldots, 1/a_{nn})$.

Inverting D, we write the iteration as

$$X^{(k+1)} = -D^{-1}(L + U)X^{(k)} + D^{-1}b \tag{18}$$
$$= M_J X^{(k)} + C \tag{19}$$

where $M_J = -D^{-1}(L + U)$ and $C = D^{-1}b$.

The matrix $M_J$ is called the iteration matrix. Convergence of the method depends on
the properties of the matrix $M_J$.
Diagonally dominant: A matrix A is said to be diagonally dominant if

$$|a_{ii}| \ge \sum_{j=1, j \ne i}^{n} |a_{ij}| \tag{20}$$

with strict inequality satisfied for at least one row.

Convergence: (i) The Jacobi method converges when the matrix A is diagonally
dominant. However, this is a sufficient condition, not a necessary condition.

(ii) The Jacobi method converges if and only if the spectral radius $\rho(M_J) < 1$, where
the spectral radius of a matrix is $\max_i |\lambda_i|$ and the $\lambda_i$ are the eigenvalues
of $M_J$. This is a necessary and sufficient condition. If no initial approximation is
known, we may assume X(0) = 0.
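The spectral-radius test can be checked numerically. Here is a sketch, assuming NumPy; the helper name jacobi_iteration_matrix is illustrative.

import numpy as np

def jacobi_iteration_matrix(A):
    # M_J = -D^{-1}(L + U), as in (19), together with its spectral radius.
    D = np.diag(np.diag(A))
    LU = A - D                          # strictly lower plus strictly upper part
    M = -np.linalg.solve(D, LU)         # apply D^{-1} without forming it explicitly
    rho = max(abs(np.linalg.eigvals(M)))
    return M, rho

# The matrix of Example 2 below is not diagonally dominant,
# yet rho(M_J) is about 0.41 < 1, so the Jacobi iteration still converges.
A = np.array([[1.0, 1.0, -1.0], [-1.0, 3.0, 0.0], [1.0, 0.0, -2.0]])
M, rho = jacobi_iteration_matrix(A)
print(rho)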

Exercise 1: Are the following matrices diagonally dominant?

$$A = \begin{bmatrix} 2 & -5.81 & 34 \\ 45 & 43 & 1 \\ 123 & 16 & 1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 124 & 34 & 56 \\ 23 & 53 & 5 \\ 96 & 34 & 129 \end{bmatrix}$$

Solution: In A, all three rows violate condition (20). Hence, A is not diagonally
dominant.

In B, in the third row, |129| = 129 < 96 + 34 = 130. Therefore, B is also not diagonally
dominant.

Example 2: Solve the following system of equations

$$x + y - z = 0, \quad -x + 3y = 2, \quad x - 2z = -3$$

by the Jacobi method, both directly and in matrix form. Assume the initial solution
vector to be $[0.8 \;\; 0.8 \;\; 2.1]^T$.

Solution: We write the Jacobi method as

$$x^{(k+1)} = -y^{(k)} + z^{(k)}, \quad y^{(k+1)} = \frac{1}{3}\left(2 + x^{(k)}\right), \quad z^{(k+1)} = \frac{1}{2}\left(3 + x^{(k)}\right)$$

With $x^{(0)} = 0.8$, $y^{(0)} = 0.8$, $z^{(0)} = 2.1$, we get the following approximations.

x(1) = 1.3, y(1) = 0.933333, z(1) = 1.9;
x(2) = 0.966667, y(2) = 1.1, z(2) = 2.15;
x(3) = 1.05, y(3) = 0.988889, z(3) = 1.983333;
x(4) = 0.994444, y(4) = 1.016667, z(4) = 2.025;
x(5) = 1.008333, y(5) = 0.998148, z(5) = 1.997222;
x(6) = 0.999074, y(6) = 1.002778, z(6) = 2.004167;
x(7) = 1.001389, y(7) = 0.999691, z(7) = 1.999537;
x(8) = 0.999846, y(8) = 1.000463, z(8) = 2.000694;
x(9) = 1.000231, y(9) = 0.999949, z(9) = 1.999923.

At this stage, we have

| x(9) – x(8) | = 0.0004, | y(9) – y(8) | = 0.0005, | z(9) – z(8) | = 0.0008.

Therefore, the 9th iteration is correct to two decimal places.

Let us represent the matrix A in the form

$$A = L + D + U = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & -2 \end{bmatrix} + \begin{bmatrix} 0 & 1 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}$$

We have

$$M_J = -D^{-1}(L + U) = -\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/3 & 0 \\ 0 & 0 & -1/2 \end{bmatrix} \begin{bmatrix} 0 & 1 & -1 \\ -1 & 0 & 0 \\ 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & -1 & 1 \\ 1/3 & 0 & 0 \\ 1/2 & 0 & 0 \end{bmatrix}$$

$$C = D^{-1}b = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1/3 & 0 \\ 0 & 0 & -1/2 \end{bmatrix} \begin{bmatrix} 0 \\ 2 \\ -3 \end{bmatrix} = \begin{bmatrix} 0 \\ 2/3 \\ 3/2 \end{bmatrix}$$

Therefore, the Jacobi method gives

$$X^{(k+1)} = \begin{bmatrix} 0 & -1 & 1 \\ 1/3 & 0 & 0 \\ 1/2 & 0 & 0 \end{bmatrix} X^{(k)} + \begin{bmatrix} 0 \\ 2/3 \\ 3/2 \end{bmatrix}$$

The initial approximation is given as $X^{(0)} = [0.8 \;\; 0.8 \;\; 2.1]^T$. Then, we have

$$X^{(1)} = \begin{bmatrix} 0 & -1 & 1 \\ 1/3 & 0 & 0 \\ 1/2 & 0 & 0 \end{bmatrix} \begin{bmatrix} 0.8 \\ 0.8 \\ 2.1 \end{bmatrix} + \begin{bmatrix} 0 \\ 2/3 \\ 3/2 \end{bmatrix} = \begin{bmatrix} 1.3 \\ 0.933333 \\ 1.9 \end{bmatrix}$$

which is the same as $X^{(1)}$ obtained earlier.

Since the two procedures (direct and in matrix form) are identical, we get the same
approximations $X^{(2)}, \ldots, X^{(9)}$. The exact solution is $x = [1 \;\; 1 \;\; 2]^T$.

Note that the coefficient matrix A is not diagonally dominant. But we have obtained the
solution correct to two decimal places in 9 iterations. This shows that the requirement
that A be diagonally dominant is only a sufficient condition.

Example 3: Solve by Jacobi’s method the following system of linear equations.

2x1 – x2 + x3 = –1
x1 + 2x2 – x3 = 6
x1 – x2 + 2x3 = –3.

Solution: This system can be written as

x1 = 0.5 x2 – 0.5 x3 – 0.5


x2 = – 0.5 x1 + 0.5 x3 + 3.0
x3 = – 0.5 x1 + 0.5 x2 – 1.5
So the Jacobi iteration is

$$\begin{bmatrix} x_1^{(k+1)} \\ x_2^{(k+1)} \\ x_3^{(k+1)} \end{bmatrix} = \begin{bmatrix} 0.0 & 0.5 & -0.5 \\ -0.5 & 0.0 & 0.5 \\ -0.5 & 0.5 & 0.0 \end{bmatrix} \begin{bmatrix} x_1^{(k)} \\ x_2^{(k)} \\ x_3^{(k)} \end{bmatrix} + \begin{bmatrix} -0.5 \\ 3.0 \\ -1.5 \end{bmatrix}$$

Since no initial approximation is given, we start with $x^{(0)} = (0, 0, 0)^T$. We get the
following approximations.

X(1) = [– 0.5000 3.0000 – 1.5000]T


X(2) = [1.7500 2.5000 0.2500]T
X(3) = [0.6250 2.2500 – 1.1250]T
X(4) = [1.1875 2.1250 – 0.6875]T
X(5) = [0.9063 2.0625 – 1.0313]T
X(6) = [1.0469 2.0313 – 0.9219]T
X(7) = [0.9766 2.0156 – 1.0078]T
X(8) = [1.0117 2.0078 – 0.9805]T
X(9) = [0.9941 2.0039 – 1.0020]T
X(10) = [1.0029 2.0020 – 0.9951]T
X(11) = [0.9985 2.0010 – 1.0005]T
X(12) = [1.0007 2.0005 – 0.9988]T
X(13) = [0.9996 2.0002 – 1.0001]T
X(14) = [1.0002 2.0001 – 0.9997]T

After 14 iterations, the errors in the solutions are

| x1(14) – x1(13) | = 0.0006, | x2(14) – x2(13) | = 0.0001, | x3(14) – x3(13) | = 0.0004.

The solution x(14) is therefore almost correct to 3 decimal places.

The Gauss-Seidel Method

We observe from Examples 2 and 3 that, even for a 3 × 3 system, the number of
iterations taken by the Jacobi method (to achieve 2 or 3 decimal accuracy) is large.
For large systems, the number of iterations required may run into thousands. Hence,
the Jacobi method is slow. We also observe that when the variable $x_i$ is being iterated
in, say, the k-th iteration, the variables $x_1, \ldots, x_{i-1}$ have already been updated in
the k-th iteration; however, these updated values are not being used to compute
$x_i^{(k)}$. This is the disadvantage of the Jacobi method. If we use all the currently
available values, we call it the Gauss-Seidel method.

Therefore, the Gauss-Seidel method is defined by

$$x_i^{(k)} = \Big(b_i - \sum_{j<i} a_{ij}x_j^{(k)} - \sum_{j>i} a_{ij}x_j^{(k-1)}\Big)\Big/a_{ii} \tag{21}$$

Two important facts about the Gauss-Seidel method should be noted. First, the
computations in (21) are serial. Since, each component of the new iterate depends
upon all previously computed components, the updates cannot be done simultaneously
as in the Jacobi method. Second, the new iterate depends upon the order in which the
equations are being used. The Gauss-Seidel method is sometimes called the method of
successive displacements to indicate the dependence of the iterates on the ordering. If
this ordering is changed, the components of the new iterate (and not just their order)
will also change.

To derive the matrix formulation, we write

AX = (L + D + U)X = b, or (L + D)X = −UX + b.

The Gauss-Seidel method can then be expressed as

$$X^{(k+1)} = -(L + D)^{-1}UX^{(k)} + (L + D)^{-1}b = M_G X^{(k)} + C \tag{22}$$

where $M_G = -(L + D)^{-1}U$ is the iteration matrix and $C = (L + D)^{-1}b$. Again,
convergence depends on the properties of $M_G$: if the spectral radius $\rho(M_G) < 1$, the
iteration always converges for any initial solution vector. Further, it is known that the
Gauss-Seidel method converges at least two times faster than the Jacobi method.
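A Python sketch of iteration (21) follows; the only difference from the jacobi sketch earlier is that updated components are used as soon as they are computed.

import numpy as np

def gauss_seidel(A, b, x0, tol=1e-4, max_iter=100):
    # Gauss-Seidel iteration (21): x is overwritten in place, so each
    # component update immediately uses the newest available values.
    n = len(b)
    x = x0.astype(float).copy()
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x, k
    return x, max_iter

# The system of Example 3: converges to [1, 2, -1] in fewer sweeps than Jacobi.
A = np.array([[2.0, -1.0, 1.0], [1.0, 2.0, -1.0], [1.0, -1.0, 2.0]])
b = np.array([-1.0, 6.0, -3.0])
x, k = gauss_seidel(A, b, np.zeros(3))
print(x, k)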

Example 4: Solve the system in Example 2 by the Gauss-Seidel method. Write its
matrix form.

Solution: The Gauss-Seidel method for solving the system in Example 2 is given by

$$x^{(k+1)} = -y^{(k)} + z^{(k)}, \quad y^{(k+1)} = \frac{1}{3}\left(2 + x^{(k+1)}\right), \quad z^{(k+1)} = \frac{1}{2}\left(3 + x^{(k+1)}\right)$$

With $x^{(0)} = 0.8$, $y^{(0)} = 0.8$, $z^{(0)} = 2.1$, we obtain the following results.

x(1) = 1.3, y(1) = 1.1, z(1) = 2.15;


x(2) = 1.05, y(2) = 1.01667, z(2) = 2.025;
x(3) = 1.00833, y(3) = 1.00278, z(3) = 2.004165;
x(4) = 1.001385, y(4) = 1.00046, z(4) = 2.00069;
x(5) = 1.00023, y(5) = 1.000077, z(5) = 2.000115;

The errors after the 5th iteration are

| x(5) – x(4) | = 0.0012, | y(5) – y(4) | = 0.00038, | z(5) – z(4) | = 0.00057.

Thus, we have two decimal place accuracy in 5 iterations, while 9 iterations were
required in the Jacobi method.

The matrix form can be written as

$$X^{(k+1)} = -\begin{bmatrix} 1 & 0 & 0 \\ -1 & 3 & 0 \\ 1 & 0 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 0 & 1 & -1 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix} X^{(k)} + \begin{bmatrix} 1 & 0 & 0 \\ -1 & 3 & 0 \\ 1 & 0 & -2 \end{bmatrix}^{-1} \begin{bmatrix} 0 \\ 2 \\ -3 \end{bmatrix}$$

$$= \frac{1}{6}\begin{bmatrix} 0 & -6 & 6 \\ 0 & -2 & 2 \\ 0 & -3 & 3 \end{bmatrix} X^{(k)} + \frac{1}{6}\begin{bmatrix} 0 \\ 4 \\ 9 \end{bmatrix}$$

Starting with $X^{(0)} = [0.8 \;\; 0.8 \;\; 2.1]^T$, we get the same iterated values as above.

Example 5: Solve the system given in Example 3 by the Gauss-Seidel method.

Solution: For Gauss-Seidel iterations, the system in Example 3 can be written as

$$x_1^{(k+1)} = 0.5x_2^{(k)} - 0.5x_3^{(k)} - 0.5$$
$$x_2^{(k+1)} = -0.5x_1^{(k+1)} + 0.5x_3^{(k)} + 3.0$$
$$x_3^{(k+1)} = -0.5x_1^{(k+1)} + 0.5x_2^{(k+1)} - 1.5$$
Starting with $(0, 0, 0)^T$, we get the following values:
X(1) = [– 0.5000 3.2500 0.3750]T
X(2) = [0.9375 2.7188 – 0.6094]T
X(3) = [1.1641 2.1133 – 1.0254]T
X(4) = [1.0693 1.9526 – 1.0583]T
X(5) = [1.0055 1.9681 – 1.0187]T
X(6) = [0.9934 1.9939 – 0.9997]T
X(7) = [0.9968 2.0017 – 0.9976]T
X(8) = [0.9996 2.0014 – 0.9991]T
X(9) = [1.0003 2.0003 – 1.0000]T
X(10) = [1.0001 1.9999 – 1.0001]T

After 10 iterations, the errors in the solutions are

| x1(10) – x1(9) | = 0.0002, | x2(10) – x2(9) | = 0.0004, | x3(10) – x3(9) | = 0.0001.

The solutions are correct to 3 decimal places.

3.7 SUMMARY

In this unit, we have discussed direct and iterative methods for solving a system of
linear equations. Under the direct category, we have discussed the Gauss elimination
method and the LU decomposition method. We have also discussed the method of
finding the inverse of a square matrix. Further, under the category of iterative
methods, we have discussed the Jacobi and Gauss-Seidel methods.

3.8 EXERCISES
E1. Solve the following systems using the Gauss elimination method.

(a) 3x1 + 2x2 + 3x3 = 5,        (b) 3x1 + x2 + x3 = 1.8,
    x1 + 4x2 + 2x3 = 4,             2x1 + 4x2 + x3 = 2.7,
    2x1 + 4x2 + 8x3 = 8.            x1 + 3x2 + 5x3 = 4.0.

(c) x1 − x2 + x3 = 0,           (d) 3x1 + x2 = 5,
    2x1 + 3x2 + x3 − 2x4 = −7,      x1 + 3x2 + 6x3 = 6,
    3x1 + x2 − x3 + 4x4 = 12,       4x2 + x3 + 3x4 = 7,
    3x2 − 5x3 + x4 = 9.             x3 + 5x4 = 8.

E2. Solve the following systems using the LU decomposition method.


(a) 3x + y + z = 3, (b) 2x + y + z = 5,
x + 4y + 2z = 0, x + 3y + 2z = 4,
2x + y + 5z = 4, −x + y + 6z = 4,
(c) 4x + y + 2z = 3.6, (d) 3x + y = −2,
x + 3y + z = 2.5, x + 3y - z = 0,
2x + y + 2z = 4.0, − y + 7z = 13.

E3. For problems in 1(a), (b); 2(a), (b), (c), (d), obtain the solution to 3 decimals
using the Jacobi and Gauss-Seidel methods. Write the matrix formulations
also. Assume the initial solution vectors respectively as
also. Assume the initial solution vectors respectively as
(i) [0.8, 0.6, 0.5]T,
(ii) [0.3, 0.3, 0.6]T,
(iii) [0.9, −0.6, 0.6]T,
(iv) [1.9, 0.2, 0.9]T,
(v) [0.2, 0.5, 1.1]T,
(vi) [−1.1, 0.9, 2.1]T.

3.9 SOLUTIONS TO EXERCISES


1. (a) 1, ½, ½. (b) 0.3, 0.4, 0.5.
   (c) 1, −1, −2, 2. (d) 3/2, ½, ½, 3/2.
2. (a) 1, −1/2, 1/2. (b) 2, 0, 1.
   (c) 0.3, 0.4, 1. (d) −1, 1, 2. (You can also try the $LL^T$ decomposition.)

3. Refer to the worked examples in Section 3.6.

