Linear System: 2011 Intro. To Computation Mathematics LAB Session
Linear System
CONTENT
1. Introduction.
2. LU-decomposition and Gaussian Elimination.
3. Backward and Forward Substitution.
4. Permutation.
5. Iterative methods.
6. Conclusion.
7. Exercise.
1. Introduction
Linear systems appear everywhere in real life. Sometimes the system is small, but in other situations it can be extremely large. For example, to forecast the weather we collect observation data and photos, analyze them, and then make a prediction; the analysis step, however, may involve a system of size around 10^8, with little time available to solve it. A classical way to solve the linear system Ax = b is Cramer's Rule:

    x_i = det(A_i) / det(A),

where A_i is the matrix A with its i-th column replaced by b.
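For concreteness, Cramer's Rule can be sketched in a few lines of NumPy (the function name cramer_solve and the 2-by-2 example are ours, for illustration only):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b.
    Only practical for very small n."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                    # replace i-th column by b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))               # approximately [0.8, 1.4]
```

Each unknown costs one extra determinant, which is exactly why the method becomes hopeless as n grows.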
However, the difficulty grows quickly as n increases. How bad is it? In fact, to solve a system with 50 variables, assuming the CPU performs 10^9 flops per second (a flop here counts one floating-point addition and one floating-point multiplication), Cramer's Rule would take about 9.6 × 10^47 years, which is absurd. Thus, we should find some better way to deal with this problem. The first is Gaussian Elimination.

2. LU-decomposition and Gaussian Elimination.
Consider the matrix

    A = [ 1  2  3  4  5
          2  5 10 14 19
          3  3  6  8  0 ]

First, we should eliminate the 2 at position (2,1), so we multiply the first row by -2/1 = -2 and add it to the second row, which leads to

    [ 1  2  3  4  5
      0  1  4  6  9
      3  3  6  8  0 ]

Then do the same thing from the first row to the third row; this time the multiplier is -3/1 = -3. Carrying out the same process on the second column (the (3,2) entry becomes -3, so the multiplier is 3), we finally arrive at

    A = LU,  where  L = [ 1  0  0        U = [ 1  2  3  4  5
                          2  1  0              0  1  4  6  9
                          3 -3  1 ],           0  0  9 14 12 ]

Here we have produced a lower triangular matrix L and an upper triangular matrix U. Note that L is a square matrix recording the negated multipliers at the corresponding positions, and U has the same size as A.
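The elimination steps above can be sketched in NumPy (lu_no_pivot is our name; no pivoting is done, so a zero pivot would break it, as Section 4 discusses):

```python
import numpy as np

def lu_no_pivot(A):
    """Gaussian elimination without pivoting, producing A = L @ U.
    L stores the (negated) elimination multipliers below a unit diagonal."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.eye(n)
    U = A.copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            m = U[i, k] / U[k, k]       # assumes the pivot U[k, k] is nonzero
            L[i, k] = m
            U[i, k:] -= m * U[k, k:]    # subtract m times the pivot row
    return L, U

A = np.array([[1., 2., 3., 4., 5.],
              [2., 5., 10., 14., 19.],
              [3., 3., 6., 8., 0.]])
L, U = lu_no_pivot(A)
print(L)    # lower triangular, unit diagonal
print(U)    # upper triangular, same size as A
```

Note that L gathers the multipliers exactly as in the worked example, so L @ U reproduces A.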
3. Backward and Forward Substitution.

Once A = LU is available, we solve Ax = b in two triangular steps: first solve Ly = b by forward substitution, then Ux = y by backward substitution, and we obtain the desired solution x. In conclusion, although the LU-decomposition is somewhat expensive, i.e. O(n^3), it still provides a better way than computing the inverse of A to get x = A^{-1} b.
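The two substitution sweeps can be sketched as follows (forward_sub and backward_sub are our names; the 2-by-2 factors are a toy example):

```python
import numpy as np

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L with nonzero diagonal."""
    n = len(b)
    y = np.zeros(n)
    for i in range(n):                       # top row first
        y[i] = (b[i] - L[i, :i] @ y[:i]) / L[i, i]
    return y

def backward_sub(U, y):
    """Solve U x = y for upper-triangular U with nonzero diagonal."""
    n = len(y)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):           # bottom row first
        x[i] = (y[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

# Solve Ax = b given a known factorization A = L U.
L = np.array([[1., 0.], [2., 1.]])
U = np.array([[1., 2.], [0., 3.]])
b = np.array([5., 13.])
x = backward_sub(U, forward_sub(L, b))
print(x)    # approximately [3, 1]
```

Each sweep costs only O(n^2), which is why one factorization can be reused cheaply for many right-hand sides b.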
4. Permutation.
Here we have another example:

    A = [ 1  2  3
          2  4  5
          7  8  9 ]

When we set up the LU-factorization for A, we face a problem. In the second step we obtain

    A(2) = [ 1  2   3
             0  0  -1
             0 -6 -12 ]

The 0 at position (2,2), the second pivot, causes the divisor to be 0 and consequently terminates the algorithm. So this factorization isn't perfect, although the matrix is invertible. To remedy this, we multiply by a permutation matrix on the left:

    P = [ 1  0  0
          0  0  1
          0  1  0 ]

which exchanges the second and third rows of A. In this case, the new pivot becomes nonzero, and the elimination process can be continued.
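The row exchange can be seen directly in NumPy (the matrices are the ones from the example above):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 5.],
              [7., 8., 9.]])

# Eliminating column 1 leaves a zero in the (2,2) pivot position:
A2 = A.copy()
A2[1] -= 2 * A2[0]
A2[2] -= 7 * A2[0]
print(A2[1, 1])     # 0.0 -- elimination would divide by this pivot

# The permutation matrix exchanging rows 2 and 3 repairs the pivot:
P = np.array([[1., 0., 0.],
              [0., 0., 1.],
              [0., 1., 0.]])
A2 = P @ A2
print(A2[1, 1])     # -6.0, nonzero, so elimination can continue
```

In practice one factors PA = LU, choosing P so that every pivot encountered is nonzero (partial pivoting also picks the largest available pivot for stability).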
5. Iterative methods.
This section goes beyond the lecture, but it offers another point of view on linear systems. The basic concept of an iterative method is to split the square matrix A first:

    A = M + N,  (M + N)x = b.

Rearranging (assuming M^{-1} exists), this can be rewritten as

    x = M^{-1} b - M^{-1} N x.

Hence, just as we did when solving f(x) = 0, we can define an iterative formula intuitively:

    x_{n+1} = M^{-1} b - M^{-1} N x_n.

Theorems show that the convergence of this formula depends on the eigenvalues of M^{-1} N. That is, if every eigenvalue λ_i of M^{-1} N satisfies |λ_i| < 1, then x_n → x as n → ∞, which is the desired solution. The next issue is how to choose M and N. There are some common ways to do it (writing A = [a_ij]):

1. Jacobi method. Take A = D + R, where D is the diagonal part of A and R = A - D is the off-diagonal remainder:

    D = [ a11  0  ...  0           R = [  0  a12 ... a1n
           0  a22 ...  0                 a21  0  ... a2n
           :       .   :                  :       .   :
           0   0  ... ann ],             an1 an2 ...  0  ]

So (D + R)x = b, Dx = b - Rx, x = D^{-1}(b - Rx), and we arrive at

    x_n = D^{-1}(b - R x_{n-1}).

2. Gauss-Seidel method. Take A = L + D + U, where D is the diagonal part as above, L is the strictly lower triangular part, and U the strictly upper triangular part:

    L = [  0   0  ... 0            U = [ 0  a12 ... a1n
          a21  0  ... 0                  0   0  ... a2n
           :      .   :                  :      .   :
          an1 an2 ... 0 ],               0   0  ...  0  ]

So (L + D + U)x = b, (L + D)x = b - Ux, x = (L + D)^{-1}(b - Ux) (note that we need L + D to be invertible!). Finally,

    x_n = (L + D)^{-1}(b - U x_{n-1}).

3. SOR (successive over-relaxation). The formula for this method is

    x_n = (D + ωL)^{-1} (ωb - [ωU + (ω - 1)D] x_{n-1}),

where ω is a given parameter. Note that the case ω = 1 is just the same as the Gauss-Seidel method.
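The Jacobi iteration above can be sketched as follows (the function jacobi and the small diagonally dominant example are ours; strict diagonal dominance guarantees the eigenvalue condition |λ_i(D^{-1}R)| < 1):

```python
import numpy as np

def jacobi(A, b, x0=None, iters=50):
    """Jacobi iteration x_n = D^{-1} (b - R x_{n-1}), with A = D + R."""
    A = np.asarray(A, dtype=float)
    d = np.diag(A)                 # the diagonal of A (the matrix D)
    R = A - np.diag(d)             # off-diagonal remainder
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = (b - R @ x) / d        # dividing by d applies D^{-1}
    return x

A = np.array([[4., 1.], [2., 5.]])   # strictly diagonally dominant
b = np.array([6., 9.])
print(jacobi(A, b))                  # approaches the exact solution [7/6, 4/3]
```

Gauss-Seidel follows the same pattern with M = L + D, and in code it amounts to using each updated component immediately within the sweep.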
6. Conclusion.
Solving a linear system seems simple, but for large-scale problems it can be difficult. Thus, if we have some prior knowledge of the system, we can choose a better method to solve it. For example, if we know that A is positive-definite but dense, then the LU-factorization with forward and backward substitution might not be so efficient, and we can instead try the iterative Gauss-Seidel method, since it converges when A is symmetric positive-definite. One way to assess the coefficient matrix A is the condition number κ(A), defined as

    κ(A) = ||A||_p ||A^{-1}||_p,

where ||·||_p denotes the matrix p-norm. Since ||I|| <= ||A|| ||A^{-1}|| by submultiplicativity of the norm, κ(A) >= 1. Theorems show that if κ(A) is close to 1, then the system is stable, which implies that the computational output will behave normally. In MATLAB, the simple command

    >> cond(A)

shows the condition number of the matrix A with p = 2, the matrix 2-norm. In conclusion, the more knowledge we can obtain about our problem, the more we improve our efficiency and accuracy in solving it.
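NumPy's analogue of MATLAB's cond is numpy.linalg.cond; a quick illustration (the two matrices are ours, chosen to contrast a stable and a nearly singular system):

```python
import numpy as np

# Well-conditioned: singular values 2 and 1, so kappa_2 = 2.
A = np.array([[2., 0.],
              [0., 1.]])
print(np.linalg.cond(A, 2))

# Ill-conditioned: the rows are nearly parallel, so kappa_2 is huge
# and small perturbations of b can change the solution drastically.
B = np.array([[1., 1.],
              [1., 1.0001]])
print(np.linalg.cond(B, 2))
```

As a rule of thumb, a condition number around 10^k means roughly k decimal digits of accuracy can be lost in the computed solution.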
7. Exercise.
Write a MATLAB function with the header

    function [L, U, k] = func_your(A)

INPUT: an m-by-n matrix A.
OUTPUT: L, lower triangular; U, upper triangular; and k, the position of the first zero pivot (if there is no zero pivot, output min(m, n), the number of elimination steps).