
Numerical Analysis.

Chapter One: Solution of Simultaneous Linear and Nonlinear Algebraic Equations.

- Solution of Linear Algebraic Equations


- Direct Method
- Cramer's Rule Method
- Gauss Elimination Method
- Gauss Jordan Method
- Matrix Inverse Method
- Indirect Method
- Gauss Seidel Method
- Solution of Non-linear Algebraic Equations
- Gauss Seidel Method
- Newton Iteration Method
- Eigenvalues and Eigenvectors
Solution of Linear Algebraic Equation

Consider a set of simultaneous linear equations with four unknowns:

𝑎11 𝑥1 + 𝑎12 𝑥2 + 𝑎13 𝑥3 + 𝑎14 𝑥4 = 𝑏1

𝑎21 𝑥1 + 𝑎22 𝑥2 + 𝑎23 𝑥3 + 𝑎24 𝑥4 = 𝑏2

𝑎31 𝑥1 + 𝑎32 𝑥2 + 𝑎33 𝑥3 + 𝑎34 𝑥4 = 𝑏3

𝑎41 𝑥1 + 𝑎42 𝑥2 + 𝑎43 𝑥3 + 𝑎44 𝑥4 = 𝑏4

The coefficient matrix is

[a11 a12 a13 a14]
[a21 a22 a23 a24] = [A]
[a31 a32 a33 a34]
[a41 a42 a43 a44]

and the system in matrix form is

[a11 a12 a13 a14] [x1]   [b1]
[a21 a22 a23 a24] [x2] = [b2]
[a31 a32 a33 a34] [x3]   [b3]
[a41 a42 a43 a44] [x4]   [b4]

[A] . {X} = [B]

Notes:

1- If all the elements of the lower triangle are zero, the matrix is called an upper triangular matrix.
2- If all the elements of the upper triangle are zero, the matrix is called a lower triangular matrix.

Methods of Solving Simultaneous Linear Equations.

1-Direct Method

This approach is used for solving small sets of equations (sets with no zero coefficients, or only a few, are called dense systems). Direct methods find the values of X without requiring initial guesses.

2-Indirect Method

This approach is used for solving large sets of equations (sets with many zero coefficients that do not form a banded matrix are called sparse systems). Indirect methods find the values of X iteratively, starting from initial guesses.

Direct Method

Cramer Rule Method

If the coefficient matrix A of a system AX = B of n linear equations in n unknowns is nonsingular,
the system has the unique solution

X1 = |D1| / |A| ,  X2 = |D2| / |A| ,  X3 = |D3| / |A|

where Di is the matrix obtained from A by replacing the i-th column of A with the column vector B.

Example 1 : Solve the system of linear equation

2X + 3Y = 8

4X + 5Y=11

Solution: write in matrix form

[2 3] [X]   [ 8]
[4 5] [Y] = [11]

|A| = 2x5 - 3x4 = -2

|X| = 8x5 - 3x11 = 7        (column 1 of A replaced by B)

|Y| = 2x11 - 8x4 = -10      (column 2 of A replaced by B)

X = |X| / |A| = 7 / (-2) = -3.5

Y = |Y| / |A| = -10 / (-2) = 5

Example 2 : Solve the system of linear equation by Cramer Rule Method

𝑋1 + 2𝑋2 + 3𝑋3 = -2

2𝑋1 + 3𝑋2 + 2𝑋3 = 0

3𝑋1 + 3𝑋2 + 4𝑋3 = -1

Solution: write in matrix form

[1 2 3] [X1]   [-2]
[2 3 2] [X2] = [ 0]
[3 3 4] [X3]   [-1]

|A| = 1x(3x4-2x3) - 2x(2x4-2x3) + 3x(2x3-3x3) = 6 - 4 - 9 = -7

|D1| = -2x(3x4-2x3) - 2x(0x4-2x(-1)) + 3x(0x3-3x(-1)) = -7      (column 1 replaced by B)

|D2| = 1x(0x4-2x(-1)) - (-2)x(2x4-2x3) + 3x(2x(-1)-0x3) = 0     (column 2 replaced by B)

|D3| = 1x(3x(-1)-0x3) - 2x(2x(-1)-0x3) + (-2)x(2x3-3x3) = 7     (column 3 replaced by B)

X1 = |D1| / |A| = -7 / -7 = 1

X2 = |D2| / |A| = 0 / -7 = 0

X3 = |D3| / |A| = 7 / -7 = -1
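The hand calculations in Examples 1 and 2 can be checked with a short program. The sketch below (plain Python; the function names are illustrative) computes determinants by cofactor expansion, as in the notes, and applies Cramer's rule:

```python
def det(m):
    """Determinant by cofactor expansion along the first row."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

def cramer(a, b):
    """Solve A x = b by Cramer's rule: x_i = det(D_i) / det(A)."""
    d = det(a)
    x = []
    for i in range(len(b)):
        # D_i: column i of A replaced by the vector b
        di = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]
        x.append(det(di) / d)
    return x

# Example 2 above: x1 = 1, x2 = 0, x3 = -1
print(cramer([[1, 2, 3], [2, 3, 2], [3, 3, 4]], [-2, 0, -1]))
```

Cramer's rule is convenient for small dense systems but becomes far too expensive for large ones, which is why the elimination methods below are preferred in practice.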

Gauss Elimination Method

This is one of the most popular and effective methods for solving square systems of
equations.

Consider the following set of equations

𝑎11 𝑥1 + 𝑎12 𝑥2 + 𝑎13 𝑥3 + 𝑎14 𝑥4 = 𝑏1

𝑎21 𝑥1 + 𝑎22 𝑥2 + 𝑎23 𝑥3 + 𝑎24 𝑥4 = 𝑏2

𝑎31 𝑥1 + 𝑎32 𝑥2 + 𝑎33 𝑥3 + 𝑎34 𝑥4 = 𝑏3

𝑎41 𝑥1 + 𝑎42 𝑥2 + 𝑎43 𝑥3 + 𝑎44 𝑥4 = 𝑏4

𝑎11 𝑎12 𝑎13 𝑎14 𝑥1 𝑏1

𝑎21 𝑎22 𝑎23 𝑎24 𝑥2 = 𝑏2

𝑎31 𝑎32 𝑎33 𝑎34 𝑥3 𝑏3

𝑎41 𝑎42 𝑎43 𝑎44 𝑥4 𝑏4


This method works by making each element of the main diagonal equal to 1 and every element of
the lower triangle equal to zero.

After n operations:

[1 a'12 a'13 a'14] [x1]   [b'1]
[0  1   a'23 a'24] [x2] = [b'2]
[0  0    1   a'34] [x3]   [b'3]
[0  0    0    1  ] [x4]   [b'4]

Back substitution then gives:

x4 = b'4

x3 = b'3 - a'34 x4

x2 = b'2 - a'23 x3 - a'24 x4

x1 = b'1 - a'12 x2 - a'13 x3 - a'14 x4

Gauss Jordan Method

This method is an extension of the Gauss elimination method. The only difference is that all
elements in both the upper and lower triangles are reduced to zero; that is, every coefficient
above and below the main diagonal becomes zero.

𝑎11 𝑎12 𝑎13 𝑎14 𝑥1 𝑏1

𝑎21 𝑎22 𝑎23 𝑎24 𝑥2 = 𝑏2

𝑎31 𝑎32 𝑎33 𝑎34 𝑥3 𝑏3

𝑎41 𝑎42 𝑎43 𝑎44 𝑥4 𝑏4

[𝐴] . {𝑋} = [𝐵]

After n of operation:-

1 0 0 0 𝑥1 𝑏1′

0 1 0 0 𝑥2 = 𝑏2′

0 0 1 0 𝑥3 𝑏3′

0 0 0 1 𝑥4 𝑏4′
Note:-

It is common to rearrange (pivot) the coefficients before solving any set of equations: we place large values on the main diagonal, with the largest value in the first row and first column. When rows are exchanged, the corresponding values in [B] must be exchanged as well; when columns are exchanged, the corresponding unknowns in {X} must be exchanged.

We do not apply pivoting in the following cases:

1- When the determinant of the matrix equals zero (|A| = 0).

2- When the matrix is a banded matrix.

Example:-

-3 1 0 2 𝑥1 13

1 3 2 1 𝑥2 = 4

4 2 1 -5 𝑥3 -20

-3 -2 -3 -10 𝑥4 -25

We exchange the fourth column with the first column, so the unknowns in {X} are exchanged
accordingly:

2 1 0 -3 𝑥4 13

1 3 2 1 𝑥2 = 4

-5 2 1 4 𝑥3 -20

-10 -2 -3 -3 𝑥1 -25

After that we exchange the fourth row with the first row, so the values in [B] are exchanged
accordingly:

-10 -2 -3 -3 𝑥4 -25

1 3 2 1 𝑥2 = 4

-5 2 1 4 𝑥3 -20

2 1 0 -3 𝑥1 13

Example: Solve the system of linear equation by


1-Gauss Elimination Method 2- Gauss Jordan Method

3𝑋1 - 𝑋2 + 2𝑋3 = 12

𝑋1 + 2𝑋2 + 3𝑋3 = 11

2𝑋1 - 2𝑋2 - 𝑋3 = 2

Solution: write in matrix form

[3 -1  2] [x1]   [12]
[1  2  3] [x2] = [11]
[2 -2 -1] [x3]   [ 2]

1-Gauss Elimination Method

Divide R1 by 3:

[1 -1/3 2/3] [x1]   [ 4]
[1  2    3 ] [x2] = [11]
[2 -2   -1 ] [x3]   [ 2]

Subtract R1 from R2 and 2R1 from R3:

[1 -1/3  2/3] [x1]   [ 4]
[0  7/3  7/3] [x2] = [ 7]
[0 -4/3 -7/3] [x3]   [-6]

Multiply R2 by 3/7:

[1 -1/3  2/3] [x1]   [ 4]
[0  1    1  ] [x2] = [ 3]
[0 -4/3 -7/3] [x3]   [-6]

Add (4/3)R2 to R3:

[1 -1/3 2/3] [x1]   [ 4]
[0  1   1  ] [x2] = [ 3]
[0  0  -1  ] [x3]   [-2]

Multiply R3 by -1:

[1 -1/3 2/3] [x1]   [4]
[0  1   1  ] [x2] = [3]
[0  0   1  ] [x3]   [2]

Back substitution:

x3 = 2

x2 = 3 - x3 = 3 - 2 = 1

x1 = 4 + (1/3) x2 - (2/3) x3 = 4 + 1/3 - 4/3 = 3
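The forward-elimination and back-substitution steps above translate directly into code. This is an illustrative sketch (no pivoting, so it assumes nonzero pivots, as in the worked example):

```python
def gauss_eliminate(a, b):
    """Reduce A to upper-triangular form with a unit diagonal,
    then recover x by back substitution."""
    n = len(b)
    a = [row[:] for row in a]   # work on copies
    b = b[:]
    for k in range(n):
        pivot = a[k][k]
        a[k] = [v / pivot for v in a[k]]
        b[k] /= pivot
        for i in range(k + 1, n):        # zero out the column below the pivot
            factor = a[i][k]
            a[i] = [a[i][j] - factor * a[k][j] for j in range(n)]
            b[i] -= factor * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):       # back substitution
        x[i] = b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))
    return x

# Worked example above: x1 = 3, x2 = 1, x3 = 2
print(gauss_eliminate([[3, -1, 2], [1, 2, 3], [2, -2, -1]], [12, 11, 2]))
```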

2-Gauss Jordan Method

Starting from the triangular form obtained above:

[1 -1/3 2/3] [x1]   [4]
[0  1   1  ] [x2] = [3]
[0  0   1  ] [x3]   [2]

Subtract R3 from R2:

[1 -1/3 2/3] [x1]   [4]
[0  1   0  ] [x2] = [1]
[0  0   1  ] [x3]   [2]

Subtract (2/3)R3 from R1:

[1 -1/3 0] [x1]   [8/3]
[0  1   0] [x2] = [ 1 ]
[0  0   1] [x3]   [ 2 ]

Add (1/3)R2 to R1:

[1 0 0] [x1]   [3]
[0 1 0] [x2] = [1]
[0 0 1] [x3]   [2]

𝑥1 =3 , 𝑥2 =1 , 𝑥3 =2
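Gauss-Jordan differs from elimination only in that each pivot clears its entire column, so the right-hand side holds the solution directly with no back substitution. A minimal sketch (again assuming nonzero pivots):

```python
def gauss_jordan(a, b):
    """Reduce the augmented matrix [A | b] until A becomes the identity;
    the last column then holds the solution."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]   # augmented matrix
    for k in range(n):
        pivot = m[k][k]
        m[k] = [v / pivot for v in m[k]]
        for i in range(n):
            if i != k:                   # clear the column above AND below
                factor = m[i][k]
                m[i] = [m[i][j] - factor * m[k][j] for j in range(n + 1)]
    return [m[i][n] for i in range(n)]

# Same system as above: x1 = 3, x2 = 1, x3 = 2
print(gauss_jordan([[3, -1, 2], [1, 2, 3], [2, -2, -1]], [12, 11, 2]))
```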

Matrix Inversion Method

[𝐴] . {𝑋} = [𝐵]

Multiplying both sides by [𝐴]−1

[𝐴]−1 [𝐴] . {𝑋} = [𝐴]−1 [𝐵]

[𝐼] . {𝑋} = [𝐴]−1 [𝐵]

{𝑋} = [𝐴]−1 [𝐵]

{X} = (adj[A] / |A|) [B]

Example: Solve the system of linear equation by Inverse Matrix method.

2𝑋1 + 3𝑋2 + 𝑋3 = 9

𝑋1 + 2𝑋2 + 3𝑋3 = 6

3𝑋1 + 𝑋2 + 2𝑋3 = 8

Solution:- Inverse of Matrix

2 3 1
[𝐴] = [1 2 3]
3 1 2

Cij = (-1)^(i+j) Mij , where Cij is the cofactor and Mij is the minor of element aij (the determinant that remains after deleting row i and column j).

C11 = (-1)^2 (2x2 - 3x1) = 1

C12 = (-1)^3 (1x2 - 3x3) = 7

C13 = (-1)^4 (1x1 - 2x3) = -5

C21 = (-1)^3 (3x2 - 1x1) = -5

C22 = (-1)^4 (2x2 - 3x1) = 1

C23 = (-1)^5 (2x1 - 3x3) = 7

C31 = (-1)^4 (3x3 - 1x2) = 7

C32 = (-1)^5 (2x3 - 1x1) = -5

C33 = (-1)^6 (2x2 - 3x1) = 1

The cofactor matrix is

[C] = [ 1  7 -5]
      [-5  1  7]
      [ 7 -5  1]

and the adjugate is its transpose:

adj[A] = [ 1 -5  7]
         [ 7  1 -5]
         [-5  7  1]
[A]^-1 = adj[A] / |A|

|A| = 2x(4-3) - 3x(2-9) + 1x(1-6) = 2 + 21 - 5 = 18

[A]^-1 = (1/18) [ 1 -5  7]
                [ 7  1 -5]
                [-5  7  1]

{X} = [A]^-1 [B]

[x1]          [ 1 -5  7] [9]
[x2] = (1/18) [ 7  1 -5] [6]
[x3]          [-5  7  1] [8]

x1 = (1/18)(9 - 30 + 56) = 35/18

x2 = (1/18)(63 + 6 - 40) = 29/18

x3 = (1/18)(-45 + 42 + 8) = 5/18

OR

[x1]          [35]
[x2] = (1/18) [29]
[x3]          [ 5]
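The cofactor/adjugate construction above maps directly to code. The sketch below uses NumPy (an assumption; the notes do not prescribe a library) to rebuild A^-1 from minors and solve the example system:

```python
import numpy as np

A = np.array([[2.0, 3.0, 1.0],
              [1.0, 2.0, 3.0],
              [3.0, 1.0, 2.0]])
b = np.array([9.0, 6.0, 8.0])

# Cofactor matrix: C_ij = (-1)**(i+j) * M_ij, where M_ij is the minor
C = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
        C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)

A_inv = C.T / np.linalg.det(A)   # adj(A) = C^T, A^-1 = adj(A) / |A|
x = A_inv @ b
print(x)                          # expected 35/18, 29/18, 5/18
```

In production code one would call `np.linalg.solve(A, b)` directly; forming the inverse explicitly is shown here only because it mirrors the hand method.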

Matrix Inversion by Gauss – Jordan Method

The matrix form has generally the form of

[𝐴] . {𝑋} = [𝐵]

To find (x).

{𝑋} = [𝐴]−1 [𝐵]


To find [𝐴]−1

To find [A]^-1, augment [A] with the identity matrix, [A | I], and apply Gauss-Jordan row operations until the left block becomes [I]; the right block is then [A]^-1.

To check the correctness of [A]^-1:

[𝐴]−1 . [𝐴] = [𝐼]

We do not compute the inverse of a matrix in the following cases:

1- If the determinant |A| = 0.

2- When the matrix is ill-conditioned, i.e. when

det[A] / sqrt( Σi Σj (aij)^2 ) < 1

Example: find the inverse of matrix [𝐴] and then find the value of x.

2 1 1
[𝐴] = [1 2 1]
1 1 2

Form the augmented matrix [A | I]:

[2 1 1 | 1 0 0]
[1 2 1 | 0 1 0]
[1 1 2 | 0 0 1]

Divide R1 by 2:

[1 1/2 1/2 | 1/2 0 0]
[1 2   1   | 0   1 0]
[1 1   2   | 0   0 1]

Subtract R1 from R2 and from R3:

[1 1/2 1/2 |  1/2 0 0]
[0 3/2 1/2 | -1/2 1 0]
[0 1/2 3/2 | -1/2 0 1]

Multiply R2 by 2/3, then subtract (1/2)R2 from R3:

[1 1/2 1/2 |  1/2  0    0]
[0 1   1/3 | -1/3  2/3  0]
[0 0   4/3 | -1/3 -1/3  1]

Multiply R3 by 3/4:

[1 1/2 1/2 |  1/2  0    0  ]
[0 1   1/3 | -1/3  2/3  0  ]
[0 0   1   | -1/4 -1/4  3/4]

Subtract (1/3)R3 from R2 and (1/2)R3 from R1:

[1 1/2 0 |  5/8  1/8 -3/8]
[0 1   0 | -1/4  3/4 -1/4]
[0 0   1 | -1/4 -1/4  3/4]

Subtract (1/2)R2 from R1:

[1 0 0 |  3/4 -1/4 -1/4]
[0 1 0 | -1/4  3/4 -1/4]
[0 0 1 | -1/4 -1/4  3/4]

So

         [ 3/4 -1/4 -1/4]
[A]^-1 = [-1/4  3/4 -1/4]
         [-1/4 -1/4  3/4]

For check

[𝐴]−1 . [𝐴] = [𝐼]

{𝑋} = [𝐴]−1 [𝐵]

[x1]   [ 3/4 -1/4 -1/4] [b1]
[x2] = [-1/4  3/4 -1/4] [b2]
[x3]   [-1/4 -1/4  3/4] [b3]
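The augmented-matrix procedure [A | I] → [I | A^-1] can be written as a small routine; the example matrix reproduces the 3/4, -1/4 entries found above. A plain-Python sketch (no pivoting):

```python
def gj_inverse(a):
    """Augment A with the identity and apply Gauss-Jordan row operations;
    when the left block becomes I, the right block is A^-1."""
    n = len(a)
    # build [A | I]
    m = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(a)]
    for k in range(n):
        pivot = m[k][k]
        m[k] = [v / pivot for v in m[k]]
        for i in range(n):
            if i != k:
                f = m[i][k]
                m[i] = [m[i][j] - f * m[k][j] for j in range(2 * n)]
    return [row[n:] for row in m]   # the right block

# Example above: diagonal 3/4, off-diagonal -1/4
print(gj_inverse([[2.0, 1.0, 1.0], [1.0, 2.0, 1.0], [1.0, 1.0, 2.0]]))
```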

Indirect Method

Gauss Seidel Method

Gauss-Seidel iteration is the most popular and one of the most powerful iterative
techniques for solving sets of linear equations. Consider a set of linear equations:

𝑎11 𝑥1 + 𝑎12 𝑥2 + 𝑎13 𝑥3 = 𝑏1

𝑎21 𝑥1 + 𝑎22 𝑥2 + 𝑎23 𝑥3 = 𝑏2 --------------------------------- (1)

𝑎31 𝑥1 + 𝑎32 𝑥2 + 𝑎33 𝑥3 = 𝑏3

We now solve for the unknowns

(°) 𝑏1 −𝑎12 𝑥2 − 𝑎13 𝑥3


𝑥1 = 𝑎11

(°) 𝑏2 −𝑎21 𝑥1 − 𝑎23 𝑥3


𝑥2 = ------------------------------------------ (2)
𝑎22

(°) 𝑏3 − 𝑎31 𝑥1 − 𝑎32 𝑥2


𝑥3 =
𝑎33

Initial values are needed for x1, x2, x3; denote them (x1, x2, x3)^(0). Substituting these
into system (2) gives the first-iteration values (x1, x2, x3)^(1):

x1^(1) = (b1 - a12 x2^(0) - a13 x3^(0)) / a11

x2^(1) = (b2 - a21 x1^(1) - a23 x3^(0)) / a22

x3^(1) = (b3 - a31 x1^(1) - a32 x2^(1)) / a33

The new values of the unknowns replace the old values. The operation continues until a
convergence criterion is satisfied:

|xi^(n+1) - xi^(n)| ≤ ε        Absolute convergence criterion

OR

|xi^(n+1) - xi^(n)| / |xi^(n+1)| ≤ ε        Relative convergence criterion

Generally, the values of x1, x2, and x3 for any iteration can be found as:

x1^(n+1) = (b1 - a12 x2^(n) - a13 x3^(n)) / a11

x2^(n+1) = (b2 - a21 x1^(n+1) - a23 x3^(n)) / a22

x3^(n+1) = (b3 - a31 x1^(n+1) - a32 x2^(n+1)) / a33

Convergence cannot be guaranteed unless:

1- The diagonal elements are not zero.

2- The matrix is diagonally dominant; for the matrix

[a11 a12 a13]
[a21 a22 a23]
[a31 a32 a33]

|a11| must be > |a12| + |a13|

|a22| must be > |a21| + |a23|

|a33| must be > |a31| + |a32|

Generally, |aii| > Σ (j=1..n, j≠i) |aij| for every row.

Example: carry out the first three iterations of the following set of equations using the
Gauss-Seidel method. Use initial values (guess) x1 = x2 = x3 = 1.

8X1 + 2X2 + 3X3 = 30

X1 - 9X2 + 2X3 = 1

2X1 + 3X2 + 6X3 = 31

Solution:

Rearranging each equation for its diagonal unknown:

x1^(n+1) = (30 - 2 x2^(n) - 3 x3^(n)) / 8

x2^(n+1) = (-1 + x1^(n+1) + 2 x3^(n)) / 9

x3^(n+1) = (31 - 2 x1^(n+1) - 3 x2^(n+1)) / 6

First iteration, with x1^(0) = x2^(0) = x3^(0) = 1:

x1^(1) = (30 - 2(1) - 3(1)) / 8 = 25/8 = 3.125

x2^(1) = (-1 + 3.125 + 2(1)) / 9 = 4.125/9 = 0.4583

x3^(1) = (31 - 2(3.125) - 3(0.4583)) / 6 = 3.896

For the second iteration:

x1^(2) = (30 - 2(0.4583) - 3(3.896)) / 8 = 2.1745

x2^(2) = (-1 + 2.1745 + 2(3.896)) / 9 = 0.9963

x3^(2) = (31 - 2(2.1745) - 3(0.9963)) / 6 = 3.9437

After the third iteration:

(x1, x2, x3)^(3) = (2.022, 0.9899, 3.997)

while the exact solution is

(x1, x2, x3) = (2, 1, 4)
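The three hand iterations above can be reproduced programmatically. In this sketch each sweep immediately uses the newest available values, which is what distinguishes Gauss-Seidel from Jacobi iteration:

```python
def gauss_seidel(a, b, x, iterations):
    """Gauss-Seidel sweeps: x_i is updated in place, so later unknowns
    in the same sweep already see the new values."""
    n = len(b)
    x = x[:]
    for _ in range(iterations):
        for i in range(n):
            s = sum(a[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / a[i][i]
    return x

a = [[8.0, 2.0, 3.0], [1.0, -9.0, 2.0], [2.0, 3.0, 6.0]]
b = [30.0, 1.0, 31.0]
# Three sweeps from (1, 1, 1): approaches the exact solution (2, 1, 4)
print(gauss_seidel(a, b, [1.0, 1.0, 1.0], 3))
```

Note that the system is diagonally dominant (8 > 2 + 3, 9 > 1 + 2, 6 > 2 + 3), which is why the sweeps converge.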
Solution of Non-linear Algebraic Equation

1- Gauss Seidel Method

2-Newton – Iteration Method

Systems of nonlinear algebraic equations arise in many problems, such as optimization
and numerical integration.

𝒇𝟏 ( 𝑥1′ , 𝑥2′ , -------------------- , 𝑥𝑛′ ) = 0

𝒇𝟐 ( 𝑥1′ , 𝑥2′ , -------------------- , 𝑥𝑛′ ) = 0

𝒇𝒏 ( 𝑥1′ , 𝑥2′ , -------------------- , 𝑥𝑛′ ) = 0

𝑥1′ = 𝑥1 + ℎ1

𝑥𝑛′ = 𝑥𝑛 + ℎ𝑛

By Taylor series:

f(x1') = f(x1 + h1) = f(x1) + h1 f'(x1) + ------- + (h1^n / n!) f^(n)(x1)

f(x2') = f(x2 + h2) = f(x2) + h2 f'(x2) + ------- + (h2^n / n!) f^(n)(x2)

f(xn') = f(xn + hn) = f(xn) + hn f'(xn) + ------- + (hn^n / n!) f^(n)(xn)

For functions of several variables:

f1(x1', x2', --------, xn') = f1(x1 + h1, x2 + h2, --------, xn + hn)

= f1(x1, x2, --------, xn) + h1 ∂f1/∂x1 + h2 ∂f1/∂x2 + h3 ∂f1/∂x3 + -------- + hn ∂f1/∂xn + higher-order terms

fn(x1', x2', --------, xn') = fn(x1, x2, --------, xn) + h1 ∂fn/∂x1 + h2 ∂fn/∂x2 + h3 ∂fn/∂x3 + ----- + hn ∂fn/∂xn
+ higher-order terms
Setting each fi(x1', --------, xn') = 0 and neglecting the higher-order terms gives, in matrix form:

[∂f1/∂x1  ∂f1/∂x2  ∂f1/∂x3  ---------  ∂f1/∂xn] [h1]     [f1]
[∂f2/∂x1  ∂f2/∂x2  ∂f2/∂x3  ---------  ∂f2/∂xn] [h2]     [f2]
[∂f3/∂x1  ∂f3/∂x2  ∂f3/∂x3  ---------  ∂f3/∂xn] [h3] = - [f3]
[∂fn/∂x1  ∂fn/∂x2  ∂fn/∂x3  ---------  ∂fn/∂xn] [hn]     [fn]

The coefficient matrix is the Jacobian matrix [J]:

[J] . {h} = - [f]

{h} = - [J]^-1 [f]

x^(n+1) = x^(n) + h

x^(n+1) = x^(n) - [J]^-1 [f]

Example: estimate one set of roots for the following system of nonlinear algebraic equation
using Newton-Raphson method.

𝑥12 + 𝑥22 = 18

𝑥1 - 𝑥2 = 0

Solution:

f1(x1, x2) = x1^2 + x2^2 - 18

f2(x1, x2) = x1 - x2

We begin with an initial guess, say x1 = 2, x2 = 2.

For x1 = 2, x2 = 2 we evaluate:

f1 = x1^2 + x2^2 - 18 = 2^2 + 2^2 - 18 = -10

f2 = x1 - x2 = 2 - 2 = 0
      [∂f1/∂x1  ∂f1/∂x2]   [2x1  2x2]
[J] = [∂f2/∂x1  ∂f2/∂x2] = [ 1    -1 ]

[J]^-1 = adj[J] / |J|

adj[J] = [-1  -2x2]
         [-1   2x1]

det[J] = -2(x1 + x2)

         [-1  -2x2]                        [1   2x2]
[J]^-1 = [-1   2x1] / (-2(x1 + x2))   =    [1  -2x1] / (2(x1 + x2))

{h} = - [J]^-1 [f]

[h1]                       [1   2x2] [f1]
[h2] = - (1/(2(x1+x2))) .  [1  -2x1] [f2]

For x1 = x2 = 2:

[h1]             [1   4] [-10]
[h2] = - (1/8) . [1  -4] [  0]

h1 = 10/8 = 1.25

h2 = 10/8 = 1.25

x^(n+1) = x^(n) + h:

x1 = 2 + 1.25 = 3.25

x2 = 2 + 1.25 = 3.25

For the second iteration:

f1 = 3.25^2 + 3.25^2 - 18 = 3.125

f2 = 3.25 - 3.25 = 0

[h1]              [1   6.5] [3.125]
[h2] = - (1/13) . [1  -6.5] [  0  ]

h1 = -0.24038

h2 = -0.24038

x1 = 3.25 - 0.24038 = 3.0096

x2 = 3.25 - 0.24038 = 3.0096

For the third iteration:

f1 = 3.0096^2 + 3.0096^2 - 18 = 0.11538 , f2 = 0

[h1]                   [1   6.0192] [0.11538]
[h2] = - (1/12.0384) . [1  -6.0192] [   0   ]

h1 = -0.11538/12.0384 = -0.00958

h2 = -0.00958

x1 = 3.0096 - 0.00958 = 3.0000

x2 = 3.0096 - 0.00958 = 3.0000

For check:

x1^2 + x2^2 = 18    →    3.0000^2 + 3.0000^2 ≈ 18
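The Newton-Raphson iteration x^(n+1) = x^(n) - [J]^-1 f can be sketched with NumPy (an assumption). Rather than forming [J]^-1 explicitly, the sketch solves J h = -f, which gives the same step for this 2x2 system:

```python
import numpy as np

def f(x):
    """Residuals of the example system."""
    return np.array([x[0]**2 + x[1]**2 - 18.0, x[0] - x[1]])

def jacobian(x):
    """Analytical Jacobian of f."""
    return np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])

x = np.array([2.0, 2.0])                      # initial guess from the example
for _ in range(5):
    h = np.linalg.solve(jacobian(x), -f(x))   # J h = -f
    x = x + h
print(x)                                      # converges to (3, 3)
```

The first step reproduces the hand calculation: h = (1.25, 1.25), giving x = (3.25, 3.25).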

Eigen Value and Eigen Vector

A variety of practical problems having to do with alternating current and voltage and
other oscillatory phenomena lead to linear algebraic system of the type

𝑎11 𝑥1 + 𝑎12 𝑥2 + --------- + 𝑎1𝑛 𝑥𝑛 = 𝜆𝑥1

𝑎21 𝑥1 + 𝑎22 𝑥2 + ---------- + 𝑎2𝑛 𝑥𝑛 = 𝜆𝑥2

𝑎𝑛1 𝑥1 + 𝑎𝑛2 𝑥2 + ----------- + 𝑎𝑛𝑛 𝑥𝑛 = 𝜆𝑥𝑛

AX=λX

Where: λ : Eigen values , X : Eigen vector

Example: Find the eigenvalues and eigenvectors of

A = [2 1]
    [1 2]

(A - λI) X = 0:

[2-λ   1 ] [x1]
[ 1   2-λ] [x2] = 0

|2-λ   1 |
| 1   2-λ| = (2 - λ)^2 - 1 = 0

λ1 = 3 , λ2 = 1

For λ = 3:

-x1 + x2 = 0

x1 - x2 = 0

x2 = x1, so the eigenvector is {x1; x1}, i.e. any multiple of {1; 1}.

For λ = 1:

x1 + x2 = 0

x1 + x2 = 0

x2 = -x1, so the eigenvector is {x1; -x1}, i.e. any multiple of {1; -1}.
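The eigenvalues 3 and 1 and the eigenvector directions (1, 1) and (1, -1) can be verified numerically; this sketch uses NumPy (an assumption):

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # 3 and 1 (order not guaranteed)

# Each column of `eigenvectors` is a normalized eigenvector:
# for lambda = 3 it is proportional to (1, 1), for lambda = 1 to (1, -1).
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)   # A X = lambda X
```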
Sheet NO.2: Solution of Simultaneous Linear and Nonlinear Algebraic Equations

Q1: solve the system of linear algebraic equations using Cramer’s Rule.

6X – 3Y + 12Z = 15

2X + 3Y + 5Z = 10

4X – 2Y + 8Z = 21

Q2: solve the system of linear algebraic equations using Cramer’s Rule.

2X – 6Y + 3Z = 9

3X + Y - 2Z = 5

3X – 9Y + 6Z = 3

Q3: Solve the following set of equations using the following method.

3𝑋1 + 𝑋2 - 𝑋3 = 2

𝑋1 + 4𝑋2 + 𝑋3 = 12

2𝑋1 + 𝑋2 + 2𝑋3 = 10

Use Gauss – Elimination method and Cramer’s Rule.

Ans, 𝑥1 = 1 , 𝑥2 = 2 , 𝑥3 = 3

Q4: Solve the following set of equations using the following method.

7𝑋1 + 𝑋2 + 2𝑋3 = 47

−𝑋1 + 4𝑋2 - 𝑋3 = 19

3𝑋1 + 15𝑋2 + 20𝑋3 = 87

Use Gauss – Jordan method and Cramer’s Rule.

Ans, 𝑥1 = 6.165, 𝑥2 = 6.019, 𝑥3 = -1.089

Q5: Solve the following set of equations using Newton-Raphson with initial values

𝑥1 =0.5 , 𝑥2 = 1.5 , 𝑥3 = 2.5

𝑋1 + 𝑋2 + 𝑋3 = 6

𝑥1 . 𝑥2 + 𝑥1 . 𝑥3 + 𝑥2 . 𝑥3 = 11
𝑥1 . 𝑥2 . 𝑥3 = 6

Ans, 𝑥1 = 1 , 𝑥2 = 2 , 𝑥3 = 3

Q6: Solve the following set of equations using the Gauss-Seidel method and the Newton-
Raphson method with initial values x = y = z = 1

4x + y^2 + z = 11

x + 4y + z^2 = 18

x^2 + y + 4z = 15

Ans, 𝑥1 = 1 , 𝑥2 = 2 , 𝑥3 = 3

Q7: Determine the equation of the parabola of the form Y = AX^2 + BX + C which
passes through the points (3,20), (2,13) and (1,8). Use the Gauss-Seidel method with initial
values A = 0.5, B = 1.5, C = 2.5. Carry out only two iterations.

Q8: Using Gauss – Elimination method to find the stream flow rates by material balance
as,

0.52 𝐹2 + 0.3 𝐹1 + 𝐹3 + 0.01 = 0

0.5 𝐹1 + 𝐹2 + 1.9 𝐹3 = 0.67

0.44 + 0.3 𝐹2 + 0.5 𝐹3 = -0.1 𝐹1

Q9: Using Matrix – Inverse method to find the stream flow rates (𝐹1 , 𝐹2 , 𝐹3 ) by
material balance as,

0.52 𝐹2 + 0.3 𝐹1 + 𝐹3 + 0.01 = 0

0.5 𝐹1 + 𝐹2 + 1.9 𝐹3 = 0.67

0.44 + 0.3 𝐹2 + 0.5 𝐹3 = -0.1 𝐹1

Q10: Use the Newton-Raphson method to find the roots of the two functions.

f( r , 𝜃 ) = log r – tan 𝜃 – 𝜃 ------------------- 1

f( r , 𝜃 ) = csc 𝜃 + ln r – 𝜃 ----------------------2

start with 𝑟1 = 1 and 𝜃1 = 1 as initial values.


Q11: Use the Newton-Raphson method to find the roots of the two functions.

f(x,y) = ln (x+y) + sin xy - 2 = 0 ---------------------- 1

f(x,y) = 0.52x+3y + log x = 0 -------------------------- 2

start with x1 = 1 and y1 = 2 as initial values.

Q12 : Find the Eigen values and Eigen vector.

3 1 4
IF A = [0 2 6]
0 0 5
Chapter Two: Approximations, Errors and Taylor Series.
Accuracy and Precision
The errors associated with both calculations and measurements can be characterized with regard to their
accuracy and precision. Accuracy refers to how closely a computed or measured value agrees with the true
value. Precision refers to how closely individual computed or measured values agree with each other

Error Definitions
Numerical errors arise from the use of approximations to represent exact mathematical operations and
quantities. These include truncation errors, which result when approximations are used to represent exact
mathematical procedures, and round-off errors, which result when numbers having limited significant
figures are used to represent exact numbers. For both types, the relationship between the exact, or true,
result and the approximation can be formulated as
True value = approximation + error ------------------------- (1)
By rearranging Eq. (1), we find that the numerical error is equal to the discrepancy between the truth and
the approximation, as in
Et = true value − approximation ------------------------------ (2)
where Et is used to designate the exact value of the error. The subscript t is included to designate that this is
the “true” error. This is in contrast to other cases, as described shortly, where an “approximate” estimate of
the error must be employed. A shortcoming of this definition is that it takes no account of the order of
magnitude of the value under examination. For example, an error of a centimeter is much more significant
if we are measuring a rivet rather than a bridge. One way to account for the magnitudes of the quantities
being evaluated is to normalize the error to the true value, as in
True fractional relative error = true error / true value
where, as specified by Eq. (2 ), error = true value − approximation. The relative error can also be
multiplied by 100 percent to express it as
εt = ( true error / true value ) 100% --------------------------------(3)
where εt designates the true percent relative error
Notice that for Eqs. (2) and (3), E and ε are subscripted with a t to signify that the
error is normalized to the true value. For numerical methods, the true value will be known only when we
deal with functions that can be solved analytically. Such will typically be the case when we investigate the
theoretical behavior of a particular technique for simple systems. However, in real-world applications, we
will obviously not know the true answer a priori. For these situations, an alternative is to normalize the
error using the best available estimate of the true value, that is, to the approximation itself, as in
εa = ( approximate error / approximation ) 100% ----------------------- (4)
where the subscript a signifies that the error is normalized to an approximate value. Note also that for real-
world applications, Eq. (2) cannot be used to calculate the error term for Eq. (4). One of the challenges of
numerical methods is to determine error estimates in the absence of knowledge regarding the true value.
For example, certain numerical methods use an iterative approach to compute answers. In such an
approach, a present approximation is made on the basis of a previous approximation. This process is
performed repeatedly, or iteratively, to successively compute (we hope) better and better approximations.
For such cases, the error is often estimated as the difference between previous and current approximations.
Thus, percent relative error is determined according to
εa = [( current approximation − previous approximation ) / current approximation ] 100% ------------- (5)
This and other approaches for expressing errors will be elaborated on in subsequent chapters. The signs of
Eqs. (2) through (5) may be either positive or negative. If the approximation is greater than the true value
(or the previous approximation is greater than the current approximation), the error is negative; if the
approximation is less than the true value, the error is positive. Also, for Eqs. (3) to (5), the denominator
may be less than zero, which can also lead to a negative error. Often, when performing computations, we
may not be concerned with the sign of the error, but we are interested in whether the percent absolute value
is lower than a prespecified percent tolerance εs . Therefore, it is often useful to employ the absolute value
of Eqs. (2) through (5). For such cases, the computation is repeated until

|εa| < εs ------------------------------------------------- (6)
If this relationship holds, our result is assumed to be within the prespecified acceptable level εs . Note that
for the remainder of this text, we will almost exclusively employ absolute values when we use relative
errors. It is also convenient to relate these errors to the number of significant figures in the approximation.
It can be shown (Scarborough, 1966) that if the following criterion is met, we can be assured that the result
is correct to at least n significant figures:

εs = (0.5 × 10^(2-n)) % ------------------------------------------------- (7)

Taylor Series
Taylor’s theorem and its associated formula, the Taylor series, is of great value in the study of numerical
methods. In essence, the Taylor series provides a means to predict a function value at one point in terms of
the function value and its derivatives at another point. In particular, the theorem states that any smooth
function can be approximated as a polynomial. A useful way to gain insight into the Taylor series is to
build it term by term. For example, the first term in the series is
f(xi+1) = f(xi ) ------------------------------------ (8)
This relationship, called the zero-order approximation, indicates that the value of f at the new point is the
same as its value at the old point. This result makes intuitive sense because if xi and xi+1 are close to each
other, it is likely that the new value is probably similar to the old value. Equation (8) provides a perfect
estimate if the function being approximated is, in fact, a constant. However, if the function changes at all
over the interval, additional terms
----------------------------------------------------------------------------------------------------------------------------- ----
Taylor Theorem
If the function f and its first n + 1 derivatives are continuous on an interval containing a and x, then the value of the
function at x is given by
f(x) = f(a) + f'(a)(x-a) + (f''(a)/2!)(x-a)^2 + (f'''(a)/3!)(x-a)^3 + -------- + (f^(n)(a)/n!)(x-a)^n + Rn -----(A)
Where the remainder 𝑅𝑛 is defined as

Rn = ∫a^x ((x-t)^n / n!) f^(n+1)(t) dt ------------------------------ (B)

Where t = a dummy variable. Equation (A) is called the Taylor series or Taylor’s formula. If the remainder is omitted,
the right side of Eq. (A) is the Taylor polynomial approximation to f (x). In essence, the theorem states that any smooth
function can be approximated as a polynomial. Equation (B) is but one way, called the integral form, by which the
remainder can be expressed. An alternative formulation can be derived on the basis of the integral mean-value theorem.

of the Taylor series are required to provide a better estimate. For example, the first-order approximation is
developed by adding another term to yield
f(𝑥𝑖+1 ) = f(𝑥𝑖 ) + 𝑓 ′ (𝑥𝑖 ) (𝑥𝑖+1 - 𝑥𝑖 ) ------------------------------- (9)
The additional first-order term consists of a slope f'(xi) multiplied by the distance between xi and xi+1. Thus,
the expression is now in the form of a straight line and is capable of predicting an increase or decrease of
the function between xi and xi+1. Although Eq. (9) can predict a change, it is exact only for a straight-line,
or linear, trend. Therefore, a second-order term is added to the series to capture some of the curvature that
the function might exhibit:

f(xi+1) = f(xi) + f'(xi)(xi+1 - xi) + (f''(xi)/2!)(xi+1 - xi)^2 ------------------------- (10)
In a similar manner, additional terms can be included to develop the complete Taylor series Expansion:

f(xi+1) = f(xi) + f'(xi)(xi+1 - xi) + (f''(xi)/2!)(xi+1 - xi)^2 + (f'''(xi)/3!)(xi+1 - xi)^3 + ------ + (f^(n)(xi)/n!)(xi+1 - xi)^n
+ Rn ----------------------------------- (11)

Note that because Eq. (11) is an infinite series, an equal sign replaces the approximate sign that was used in
Eqs. (8) through (10). A remainder term is included to account for all terms from n + 1 to infinity:
Rn = (f^(n+1)(ξ)/(n+1)!) (xi+1 - xi)^(n+1) ---------------------------- (12)

Where the subscript n connotes that this is the remainder for the nth-order approximation and ξ is a value of
x that lies somewhere between xi and xi+1. The introduction of the ξ is so important that we will devote an
entire section to its derivation. For the time being, it is sufficient to recognize that there is such a value that
provides an exact determination of the error.
It is often convenient to simplify the Taylor series by defining a step size h = xi+1 – xi and expressing Eq.
(11) as
f(xi+1) = f(xi) + f'(xi) h + (f''(xi)/2!) h^2 + (f'''(xi)/3!) h^3 + ------ + (f^(n)(xi)/n!) h^n + Rn --------------(13)

Rn = (f^(n+1)(ξ)/(n+1)!) h^(n+1) ---------------------------- (14)
In general, the nth-order Taylor series expansion will be exact for an nth-order polynomial. For other
differentiable and continuous functions, such as exponentials and sinusoids, a finite number of terms will
not yield an exact estimate. Each additional term will contribute some improvement, however slight, to the
approximation.
Although the above is true, the practical value of Taylor series expansions is that, in most cases, the
inclusion of only a few terms will result in an approximation that is close enough to the true value for
practical purposes. The assessment of how many terms are required to get “close enough” is based on the
remainder term of the expansion. Recall that the remainder term is of the general form of Eq. (14 ). This
relationship has two major drawbacks. First, ξ is not known exactly but merely lies somewhere between xi
and xi+1. Second, to evaluate Eq. ( 14 ), we need to determine the (n + 1)th derivative of f (x). To do this,
we need to know f (x). However, if we knew f (x), there would be no need to perform the Taylor series
expansion in the present context!
Despite this dilemma, Eq. (14) is still useful for gaining insight into truncation errors. This is because we
do have control over the term h in the equation. In other words, we can choose how far away from x we
want to evaluate f (x), and we can control the number of terms we include in the expansion. Consequently,
Eq. ( 14 ) is usually expressed as
Rn = O(ℎ𝑛+1 )
where the nomenclature O(ℎ𝑛+1 ) means that the truncation error is of the order of ℎ𝑛+1 . That is, the error
is proportional to the step size h raised to the (n + l)th power. Although this approximation implies nothing
regarding the magnitude of the derivatives that multiply ℎ𝑛+1 . , it is extremely useful in judging the
comparative error of numerical methods based on Taylor series expansions. For example, if the error is
O(h), halving the step size will halve the error. On the other hand, if the error is O(h2), halving the step size
will quarter the error.
In general, we can usually assume that the truncation error is decreased by the addition of terms to the
Taylor series. In many cases, if h is sufficiently small, the first- and other lower-order terms usually
account for a disproportionately high percent of the error. Thus, only a few terms are required to obtain an
adequate estimate.

Example: Suppose that you have the task of measuring the lengths of a bridge and a rivet and come up with
9999 and 9 cm, respectively. If the true values are 10,000 and 10 cm, respectively, compute (a) the true
error and (b) the true percent relative error for each case.
Solution:
(a) The error for measuring the bridge is
Et = 10,000 − 9999 = 1 cm
and for the rivet it is
Et = 10 − 9 = 1 cm
(b) The percent relative error for the bridge is
εt = (1/10,000 ) 100% = 0.01%
and for the rivet it is
εt =( 1/10) 100% = 10%
Thus, although both measurements have an error of 1 cm, the relative error for the rivet is much greater.
We would conclude that we have done an adequate job of measuring the bridge, whereas our estimate for
the rivet leaves something to be desired.
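Equations (2) and (3) translate directly into code. A minimal sketch reproducing the bridge/rivet comparison:

```python
def true_error(true_value, approximation):
    """Et = true value - approximation, Eq. (2)."""
    return true_value - approximation

def true_percent_relative_error(true_value, approximation):
    """epsilon_t = (true error / true value) * 100%, Eq. (3)."""
    return (true_value - approximation) / true_value * 100.0

# Bridge: a 1 cm error is only 0.01 %; rivet: the same 1 cm is 10 %.
print(true_error(10000, 9999), true_percent_relative_error(10000, 9999))
print(true_error(10, 9), true_percent_relative_error(10, 9))
```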

Example: In mathematics, functions can often be represented by infinite series. For example, the
exponential function can be computed using
e^x = 1 + x + x^2/2! + x^3/3! + ------------- + x^n/n!

Estimate e^0.5 by adding terms until the result is correct to at least three significant figures.
Solution:
First, Eq. ( ) can be employed to determine the error criterion that ensures a result is correct to at least
three significant figures:
εs = (0.5 × 102-3 )% = 0.05%
Thus, we will add terms to the series until εa falls below this level. The first estimate is simply equal to Eq.
( ) with a single term. Thus, the first estimate is equal to 1. The second estimate is then generated by
adding the second term, as in
𝑒 𝑥 = 1 + x or for x = 0.5 𝑒 0.5 = 1+0.5 = 1.5
This represents a true percent relative error of [Eq. ]
εt = [( 1.648721 − 1.5) / 1.648721 ] x 100% = 9.02%
Equation ( ) can be used to determine an approximate estimate of the error, as in
εa = [(1.5 – 1)/1.5]x100% = 33.3%
Because εa is not less than the required value of εs,we would continue the computation by adding another
term, x2/2!, and repeating the error calculations. The process is continued until εa < εs . The entire
computation can be summarized as

Terms Result εt (%) εa (%)


1 1 39.3
2 1.5 9.02 33.3
3 1.625 1.44 7.69
4 1.645833333 0.175 1.27
5 1.648437500 0.0172 0.158
6 1.648697917 0.00142 0.0158

Thus, after six terms are included, the approximate error falls below εs = 0.05% and the computation is
terminated
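The term-by-term procedure of this example can be automated: keep adding series terms until εa (Eq. 5) drops below εs. A sketch (the stopping value 0.05% comes from the three-significant-figure criterion used above):

```python
import math

def exp_series(x, es):
    """Sum Maclaurin-series terms for e**x until the approximate
    percent relative error falls below the stopping criterion es."""
    total, term, n = 1.0, 1.0, 0   # first estimate: one term, e**x ~ 1
    ea = 100.0
    while ea >= es:
        n += 1
        term *= x / n              # next term: x**n / n!
        previous = total
        total += term
        ea = abs((total - previous) / total) * 100.0
    return total, n + 1, ea        # value, number of terms used, final ea

value, terms, ea = exp_series(0.5, 0.05)
print(terms, value, abs((math.exp(0.5) - value) / math.exp(0.5)) * 100)
```

As in the table, the loop stops after six terms with a result of 1.648697917.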
Example: Use Taylor series expansions with n = 0 to 6 to approximate f (x) = cos x at xi+1 = π/3 on the
basis of the value of f (x) and its derivatives at xi = π/4. Note that this means that h = π/3 − π/4 = π/12.
Solution:
Our knowledge of the true function means that we can determine the correct value f (π/3) =0.5.

The zero-order approximation is

f(π/3) ≈ cos(π/4) = 0.707106781

which represents a percent relative error of

εt = [(0.5 − 0.707106781) / 0.5] x 100% = − 41.4%

For the first-order approximation, we add the first-derivative term, where f'(x) = − sin x:

f(π/3) ≈ cos(π/4) − sin(π/4)(π/12) = 0.521986659

which has εt = − 4.40 percent.

For the second-order approximation, we add the second-derivative term, where f''(x) = − cos x:

f(π/3) ≈ cos(π/4) − sin(π/4)(π/12) − (cos(π/4)/2!)(π/12)^2 = 0.497754491

with εt = 0.449 percent. Thus, the inclusion of additional terms results in an improved estimate.
The process can be continued and the results listed, as in Table.
Order n f (n) (x) f(π/3) 𝜺𝒕
0 cos x 0.707106781 − 41.4
1 − sin x 0.521986659 − 4.4
2 − cos x 0.497754491 0.449
3 sin x 0.499869147 2.62 × 10− 2
4 cos x 0.500007551 − 1.51 × 10− 3
5 − sin x 0.500000304 − 6.08 × 10− 5
6 − cos x 0.499999988 2.44 × 10− 6

Once we have added the third-order term, the error is reduced to 2.62 × 10^-2 percent, which means that we have
attained 99.9738 percent of the true value. Consequently, although the addition of more terms will reduce
the error further, the improvement becomes negligible.
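The table can be regenerated by evaluating the truncated expansion for each order. This sketch exploits the fact that the derivatives of cos x cycle with period 4 (the values match the table above):

```python
import math

def taylor_cos(xi, h, order):
    """Taylor expansion of cos about xi, evaluated at xi + h,
    truncated after the given order."""
    # f, f', f'', f''' of cos x; the pattern repeats every 4 derivatives
    derivatives = [math.cos,
                   lambda t: -math.sin(t),
                   lambda t: -math.cos(t),
                   math.sin]
    total = 0.0
    for n in range(order + 1):
        total += derivatives[n % 4](xi) * h**n / math.factorial(n)
    return total

xi, h = math.pi / 4, math.pi / 12
for n in (0, 1, 2, 6):
    approx = taylor_cos(xi, h, n)
    et = (0.5 - approx) / 0.5 * 100.0   # true value is cos(pi/3) = 0.5
    print(n, approx, et)
```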
