Linear System: 2011 Intro. to Computational Mathematics LAB Session

This document summarizes various methods for solving linear systems of equations, including Gaussian elimination, LU decomposition, and iterative methods. It begins with an introduction to solving linear systems and the infeasibility of direct methods for large systems. It then covers LU decomposition and Gaussian elimination, forward/backward substitution, permutation to deal with zero pivots, and iterative methods like Jacobi, Gauss-Seidel, and SOR. The document concludes that the most efficient method depends on the properties of the coefficient matrix A.


2011 Intro. to Computational Mathematics LAB session: Linear System

Chi-Hao Li, Department of Mathematics, Oct. 19th, 2011.

CONTENT
1. Introduction.
2. LU-decomposition and Gaussian Elimination.
3. Forward and Backward Substitution.
4. Permutation.
5. Iterative Methods.
6. Conclusion.
7. Exercise.

1. Introduction
Linear systems appear everywhere in real life. Sometimes the problem is small, but in other situations it can be extremely large. For example, to forecast the weather we collect observational data and images, analyze them, and then make a prediction; the analysis step can easily lead to a system with on the order of 10^8 unknowns, and there is little time to solve it. A classical way to solve the linear system Ax = b is Cramer's rule:

x_i = det(A_i) / det(A),

where A_i is the matrix A with its i-th column replaced by b.

However, the difficulty grows quickly as n increases. How quickly? Solving a system with 50 variables by Cramer's rule with cofactor expansion requires on the order of 50! floating-point operations; assuming the CPU computes 10^9 flops per second (a flop here includes a floating-point addition and a floating-point multiplication), it would take about 9.6 x 10^47 years, which is extremely ridiculous! Thus, we should find some better way to deal with this problem. The first is Gaussian elimination.
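To make Cramer's rule concrete, here is a minimal NumPy sketch (the lab itself uses MATLAB; this Python sketch only mirrors the idea, and the 3x3 system is a made-up example):

```python
import numpy as np

# Hypothetical 3x3 system, used only for illustration.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([3.0, 5.0, 3.0])

# Cramer's rule: replace the i-th column of A by b and take determinants.
x_cramer = np.empty(3)
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                 # A_i: column i replaced by b
    x_cramer[i] = np.linalg.det(Ai) / np.linalg.det(A)

# An elimination-based solver for comparison.
x_solve = np.linalg.solve(A, b)
```

Both approaches agree on small systems; the point of the flop count above is that the determinant-based route does not scale.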

2. LU-decomposition and Gaussian Elimination.


A brief example of the LU-decomposition is stated as follows. Let

A = [ 1   2   3   4   5
      2   3   2   2   1
      3   3   6   8   0 ]

First, we should eliminate the 2 in position (2,1), so we multiply the first row by -2/1 = -2 and add it to the second row, which leads to:

A^(1) = [ 1   2   3   4   5
          0  -1  -4  -6  -9
          3   3   6   8   0 ]

Then do the same thing from the first row to the third row; this time the multiplier should be -3/1 = -3. After a similar step eliminates the (3,2) entry (with multiplier -3), we finally arrive at A = LU with

L = [ 1   0   0
      2   1   0
      3   3   1 ]

U = [ 1   2   3   4   5
      0  -1  -4  -6  -9
      0   0   9  14  12 ]

Here we produce a lower triangular matrix L and an upper triangular matrix U. Note that L is a square matrix whose subdiagonal entries record the multipliers used above (with signs flipped), and U has the same size as A.
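The elimination steps above can be sketched in NumPy (a Python stand-in for the MATLAB used in this lab; the matrix is the 3x5 example from the text):

```python
import numpy as np

# The 3x5 example from the text.
A = np.array([[1.0, 2.0, 3.0, 4.0, 5.0],
              [2.0, 3.0, 2.0, 2.0, 1.0],
              [3.0, 3.0, 6.0, 8.0, 0.0]])

m, n = A.shape
U = A.copy()
L = np.eye(m)
for k in range(min(m, n)):
    for i in range(k + 1, m):
        L[i, k] = U[i, k] / U[k, k]    # multiplier, recorded in L
        U[i, :] -= L[i, k] * U[k, :]   # row operation zeroing U[i, k]
```

Running this reproduces the L and U shown above, and L @ U recovers A.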

3. Forward and Backward substitution.


When solving Ax = b, the LU-decomposition is useful. Assuming no permutation is needed (permutation will be discussed later), we can write the system as

LU x = b.

Thus, defining U x = y, we have L(U x) = L y = b. Here L is lower triangular with unit diagonal, say of size n, so the first component of y satisfies y_1 = b_1. Looking at the second row, we have l_21 y_1 + y_2 = b_2, so y_2 = b_2 - l_21 y_1 is easily computed. Continuing the same process, we obtain all of y. This is called forward substitution, since it solves for y from the top down.
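A minimal NumPy sketch of forward substitution (the L and b values are the unit lower triangular factor from the earlier example and an illustrative right-hand side):

```python
import numpy as np

def forward_substitution(L, b):
    """Solve L y = b for a nonsingular lower-triangular L, top down."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):
        # subtract the already-known components, then divide by the diagonal
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

L = np.array([[1.0, 0.0, 0.0],
              [2.0, 1.0, 0.0],
              [3.0, 3.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
y = forward_substitution(L, b)
```

Each y_i uses only the components y_1, ..., y_{i-1} already computed, exactly as in the derivation above.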

Now we come to the problem Ux = y


Look at the last (n-th) row of U: we have u_nn x_n = y_n, thus x_n = y_n / u_nn. Next, we look at the (n-1)-th row. There we have u_{n-1,n-1} x_{n-1} + u_{n-1,n} x_n = y_{n-1}, which is the same as x_{n-1} = (y_{n-1} - u_{n-1,n} x_n) / u_{n-1,n-1}.

Again, continue the process and we get the desired solution x. This is called backward substitution. In conclusion, although the LU-decomposition is somewhat expensive, namely O(n^3), it provides a better way to obtain x = A^{-1} b than actually computing the inverse of A.
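The mirror-image sketch for backward substitution (the U and y values here are made up for illustration):

```python
import numpy as np

def backward_substitution(U, y):
    """Solve U x = y for a nonsingular upper-triangular U, bottom up."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        # subtract the already-known trailing components, then divide
        x[i] = (y[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

U = np.array([[2.0, 1.0, 1.0],
              [0.0, 3.0, 2.0],
              [0.0, 0.0, 4.0]])
y = np.array([5.0, 8.0, 4.0])
x = backward_substitution(U, y)
```

Together with forward substitution, this completes the two O(n^2) triangular solves that follow the O(n^3) factorization.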

4. Permutation.
Here we have another example:

A = [ 1   2   3
      2   4   5
      7   8   9 ]

When setting up the LU-factorization of A, we face a problem. In the second step, we obtain

A^(2) = [ 1   2   3
          0   0  -1
          0  -6 -12 ]

The 0 in position (2,2), i.e. the second pivot, makes the divisor zero and consequently terminates the algorithm. So this factorization isn't perfect, even though the matrix is invertible. To fix this, our remedy is to multiply by a permutation matrix on the left:

P = [ 1   0   0
      0   0   1
      0   1   0 ]

which exchanges the second and third rows. In this case, the new pivot becomes nonzero, and the elimination process can continue.
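The zero-pivot breakdown and its fix can be reproduced in a NumPy sketch (again a Python stand-in for MATLAB, using the 3x3 example and the P from the text):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 5.0],
              [7.0, 8.0, 9.0]])

# First elimination step: zero out column 0 below the pivot.
A2 = A.copy()
A2[1] -= 2.0 * A2[0]
A2[2] -= 7.0 * A2[0]
# A2[1, 1] is now 0: the second pivot vanishes and plain elimination stalls.

# Permutation matrix that exchanges the second and third rows.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
A2p = P @ A2   # after the swap the (2,2) pivot is -6, so elimination continues
```

This is the idea behind partial pivoting, which factorizations such as MATLAB's lu apply automatically, producing PA = LU.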

5. Iterative methods.
This topic goes beyond the lecture, but it is another point of view on dealing with linear systems. The basic concept of the iterative methods is to split the square matrix A first:

A = M + N,   (M + N) x = b.

Rearranging terms (assuming M^{-1} exists), this can be rewritten as

x = M^{-1} b - M^{-1} N x.

Hence, just as we did in solving f(x) = 0, we can define the iterative formula intuitively:

x_{n+1} = M^{-1} b - M^{-1} N x_n.

Theorems show that the convergence of this formula depends on the eigenvalues of M^{-1} N. That is, if all the eigenvalues λ_i of M^{-1} N satisfy |λ_i| < 1, then we can expect x_n → x as n grows large, where x is the desired solution. The next issue is how to choose M and N. There are some common ways to do it (writing A = [a_ij]):

1. Jacobi method. Take A = D + R, where D is the diagonal part of A and R = A - D is the off-diagonal part:

D = [ a_11   0    ...   0
       0    a_22  ...   0
       .     .    .     .
       0     0    ...  a_nn ]

R = [  0    a_12  ...  a_1n
      a_21   0    ...  a_2n
       .     .    .     .
      a_n1  a_n2  ...   0  ]

So (D + R) x = b gives D x + R x = b, i.e. x = D^{-1}(b - R x), and we arrive at:

x_n = D^{-1}(b - R x_{n-1}).

2. Gauss-Seidel method. Take A = L + D + U, where D is the diagonal part as before, L is the strictly lower triangular part, and U is the strictly upper triangular part of A:

L = [  0     0    ...  0
      a_21   0    ...  0
       .     .    .    .
      a_n1  a_n2  ...  0 ]

U = [ 0  a_12  ...  a_1n
      0   0    ...  a_2n
      .   .    .     .
      0   0    ...   0  ]

So (L + D + U) x = b gives (L + D) x = b - U x, i.e. x = (L + D)^{-1}(b - U x) (note that we need L + D to be invertible!). Finally,

x_n = (L + D)^{-1}(b - U x_{n-1}).

3. SOR (successive over-relaxation). The formula for this method is:

x_n = (D + ωL)^{-1}(ωb - [ωU + (ω - 1)D] x_{n-1}),

where ω is a given relaxation parameter. Note that the case ω = 1 is just the same as the Gauss-Seidel method.
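All three iterations above can be sketched together in NumPy (the tridiagonal test matrix, right-hand side, iteration count, and ω = 1.1 are illustrative choices, not from the text; the matrix is strictly diagonally dominant, so every eigenvalue condition above is satisfied):

```python
import numpy as np

# Strictly diagonally dominant test matrix: all three methods converge.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])   # exact solution is x = (1, 1, 1)

D = np.diag(np.diag(A))   # diagonal part
L = np.tril(A, -1)        # strictly lower triangular part
U = np.triu(A, 1)         # strictly upper triangular part
R = L + U                 # off-diagonal part used by Jacobi

def iterate(M, rhs, steps=50):
    """Run x <- M^{-1} rhs(x) from the zero vector for a fixed step count."""
    x = np.zeros_like(b)
    for _ in range(steps):
        x = np.linalg.solve(M, rhs(x))
    return x

x_jacobi = iterate(D,     lambda x: b - R @ x)
x_gs     = iterate(D + L, lambda x: b - U @ x)
w = 1.1   # relaxation parameter, chosen arbitrarily for the demo
x_sor    = iterate(D + w * L, lambda x: w * b - (w * U + (w - 1) * D) @ x)
```

In practice one would stop on a residual tolerance rather than a fixed step count, and exploit the triangular structure of M instead of a general solve; both shortcuts are taken here only to keep the sketch short.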

6. Conclusion.
Solving a linear system seems simple, but for large-scale problems it can be difficult. Thus, if we have some prior knowledge about the system, we can choose a better method to solve it. For example, if we know that A is positive-definite but dense, then the LU-factorization with forward and backward substitution may not be very efficient, and we can instead try the iterative Gauss-Seidel method, which converges when A is symmetric positive-definite. One way to evaluate the coefficient matrix A is the condition number κ(A), defined as

κ(A) = ||A||_p ||A^{-1}||_p,

where ||·||_p denotes the matrix p-norm. By the submultiplicative property of matrix norms, κ(A) ≥ 1. Theorems show that if κ(A) is close to 1, then the system is well-conditioned, which implies that the computed output will behave normally. In MATLAB, the simple command

>> cond(A)

shows the condition number of the matrix A, with p = 2, the matrix 2-norm. In conclusion, the more knowledge we can obtain from our problem, the more we improve our efficiency and accuracy in solving it.
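NumPy's np.linalg.cond mirrors MATLAB's cond, defaulting to the 2-norm; here is a quick contrast between a perfectly conditioned matrix and the notoriously ill-conditioned Hilbert family (the 3x3 size is an arbitrary choice for the demo):

```python
import numpy as np

I = np.eye(3)                                   # identity: kappa = 1 exactly
hilbert = np.array([[1.0 / (i + j + 1) for j in range(3)]
                    for i in range(3)])         # 3x3 Hilbert matrix

kappa_I = np.linalg.cond(I)        # perfectly conditioned
kappa_H = np.linalg.cond(hilbert)  # ill-conditioned even at size 3
```

Even at size 3 the Hilbert matrix has a condition number in the hundreds, so small perturbations of b can be amplified substantially in the computed x.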

7. Exercise.
Write a MATLAB function with the header

function [L, U, k] = func_your(A)

INPUT: an m-by-n matrix A.
OUTPUT: L, lower triangular; U, upper triangular; and k, the position of the first zero pivot (if there is no zero pivot, output min(m, n), the number of elimination steps).
