Numerical Analysis Book

The document is a teaching material for a Numerical Analysis II course at Wolkite University, covering topics such as interpolation, numerical integration, approximation theory, and numerical methods for ordinary differential equations. It aims to provide students with a foundational understanding of numerical analysis techniques and their applications in engineering and science. The material includes detailed explanations, methods, and examples to facilitate self-study and improve students' programming skills in MATLAB.


WOLKITE UNIVERSITY

COLLEGE OF NATURAL AND COMPUTATIONAL SCIENCE
DEPARTMENT OF MATHEMATICS
NUMERICAL ANALYSIS II (Math3162)

Written by:

Mekashaw Ali (MSc., Numerical Analysis)
Desta Sodano (MSc., Numerical Analysis)

Edited by:
Tamirat Aklilu (MSc., Numerical Analysis)
Nuri Abera (MSc., Numerical Analysis)

January, 2020
Wolkite University, Ethiopia
Contents

Preface vii

1 Revision of Interpolation and Numerical Integration (12 Hrs) 1

1.1 Interpolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

1.1.1 Interpolation With Equal Interval . . . . . . . . . . . . . 5

1.1.2 Interpolation With Unequal Spacing . . . . . . . . . . . . 9

1.2 Numerical Integration . . . . . . . . . . . . . . . . . . . . . . . 16

1.2.1 Trapezoidal Rule . . . . . . . . . . . . . . . . . . . . . . . 18

1.2.2 Simpson’s one-third (1/3) rule . . . . . . . . . . . . . . . 19

1.2.3 Simpson’s three-eighth (3/8) rule . . . . . . . . . . . . . 20

1.2.4 Weddle’s rule . . . . . . . . . . . . . . . . . . . . . . . . . 21

1.2.5 Errors in quadrature formulas . . . . . . . . . . . . . . . 22

1.2.6 Gaussian Quadrature Formula . . . . . . . . . . . . . . 27

1.2.7 Multiple Integrals . . . . . . . . . . . . . . . . . . . . . . 32

2 Approximation Theory (12 hrs) 39


2.1 Discrete Least-Square Approximation . . . . . . . . . . . . . . . 41

2.1.1 Linear Least Square Method . . . . . . . . . . . . . . . . 42

2.1.2 Non-Linear Least Square Method . . . . . . . . . . . . . 45

2.2 Continuous Least-Square Approximation . . . . . . . . . . . . 50

2.2.1 Least Square Approximation of a Function Using Monomial Polynomials . . . . 50

2.2.2 Least Square Approximation of a Function Using Orthogonal Polynomials . . . . 52

2.2.3 Least Square Approximation of a Function Using Legendre’s Polynomials . . . . 55

3 Numerical Methods For Ordinary Differential Equations (16 Hrs) 61

3.1 Initial Value Problems . . . . . . . . . . . . . . . . . . . . . . . . 63

3.1.1 Taylor’s Series method for nth order . . . . . . . . . . . . 65

3.1.2 Euler’s Method . . . . . . . . . . . . . . . . . . . . . . . . 67

3.1.3 Modified Euler’s method . . . . . . . . . . . . . . . . . . 70

3.1.4 Runge-Kutta Method . . . . . . . . . . . . . . . . . . . . 72

3.1.5 Multi-step Methods . . . . . . . . . . . . . . . . . . . . . 76

3.1.6 Higher Order Equations and Systems . . . . . . . . . . . 83

3.2 Boundary Value Problems . . . . . . . . . . . . . . . . . . . . . 87

3.2.1 Linear Shooting method . . . . . . . . . . . . . . . . . . 90

3.2.2 Shooting Method for Non-Linear Boundary Value Problems . . . . 94

3.2.3 Finite Difference Method for Linear Problems . . . . . . 95

3.2.4 Finite Difference Method for Non-Linear Boundary Problems . . . . 99

4 Eigenvalue Problems (8 Hrs) 109

4.1 Eigenvalues and Eigenvectors . . . . . . . . . . . . . . . . . . . 110

4.2 Iterative Methods for Eigenvalues . . . . . . . . . . . . . . . . . 115

4.2.1 Power Method . . . . . . . . . . . . . . . . . . . . . . . . . 115

4.3 QR Factorization Method and Householder Method . . . . . . . 118

Preface

This module (teaching material) provides an introduction to the topics of Numerical Analysis II that are typically encountered (and used) in undergraduate engineering, science, and mathematics courses. We want you to learn enough about the mathematical functions in MATLAB that you will be able to use them correctly, appreciate their limitations, and modify them when necessary to suit your own needs. The topics include a revision of numerical integration, approximation theory, numerical methods for ordinary differential equations, eigenvalues and singular values, and partial differential equations. The module gives the learner a brief introduction to basic concepts: a revision of interpolation and numerical integration, approximation theory, the numerical solution of ordinary differential equations, and eigenvalue problems.

The first chapter revises the basic concepts of interpolation and numerical integration. In it, numerical differentiation using the forward and backward difference interpolation formulas is treated, and the numerical integration techniques of the Trapezoidal rule and Simpson’s one-third and three-eighth rules are illustrated. The second chapter is about numerical methods for approximation theory, such as the discrete and continuous least-square methods. In the third chapter, the numerical solution of ordinary differential equations is presented. For initial value problems the methods covered are Euler’s method, the Taylor series method of order N, and the fourth-order Runge-Kutta method; for boundary value problems the linear shooting method, the shooting method for non-linear problems, and the finite difference methods for linear and non-linear problems are discussed. All the methods are illustrated with examples. In the fourth chapter, numerical methods for eigenvalue problems are discussed: the basic properties of eigenvalues and eigenvectors, the power method for finding dominant eigenvalues, Householder’s method, and the QR algorithm.

Rationale

We have been teaching Numerical Analysis for seven years at Wolkite University to science and engineering students. We have noticed that the following problems affect students’ performance and motivation while taking the course:

♣ Lack of programming experience.

♣ Limited time to devote to developing algorithms or reading material before class.

♣ Weak background in mathematics, such as calculus, solving systems of equations, and solving basic differential equations.

♣ If a student falls behind in one of the topics covered, it is difficult to get back on track, given the cumulative nature of the course.

Given that we use MATLAB, and that programming experience is an important issue for student performance, we have developed a series of tutorials in MATLAB, and we would like to test their usefulness as well as how to improve them. Critical thinking about the course will also help to determine what kind of programming experience we require, how we can help students gain that skill (if they don’t have it), and what we expect from them at the end of the course from the programming point of view. To improve students’ performance, we decided to write this book so that students can use it as a supplementary text. We organized this small laboratory manual for science and engineering students who take a course in numerical methods/numerical analysis. It will help them as a self-study guide for their computing practice in solving problems. We briefly discuss the MATLAB techniques for matrix analysis, such as the matrix inverse, determinant, transpose, and matrix arithmetic. More details are also given on the solution of nonlinear equations, especially the bisection and Newton methods, and on the solution of linear systems by methods such as Gaussian elimination. We developed short methods to find solutions to science and engineering problems using symbolic computer coding.

Module Description

This Module covers: Revision of Interpolation and Numerical Integration, Approximation Theory, Numerical Solutions of Differential Equations, and Eigen value Problems.
1 Revision of Interpolation and Numerical Integration (12 Hrs)

Objectives

Upon completion of this chapter, students should be able to:

• Define interpolation.

• Interpolate a polynomial using Lagrange’s interpolation formula, the Divided Difference interpolation formula, and Newton’s forward and backward interpolation formulas.

• Define numerical integration.


• List methods of Numerical integration.

• Integrate functions using Simpson’s methods, the Trapezoidal method, and Gaussian quadrature.

• Evaluate multiple integrals.

Introduction

In this chapter, we discuss the problem of approximating a given function by polynomials. There are two main uses of these approximating polynomials. The first use is to reconstruct the function f(x) when it is not given explicitly and only values of f(x) are given at a set of distinct points called nodes or tabular points. The second use is to perform the operations intended for f(x), such as determination of roots, differentiation, and integration, using the approximating polynomial P(x). The approximating polynomial P(x) can also be used to predict the value of f(x) at a non-tabular point. The deviation of P(x) from f(x), that is f(x) − P(x), is called the error of approximation. Let f(x) be a continuous function defined on some interval [a, b], and let f(x) be prescribed at n + 1 distinct tabular points x0, x1, x2, ..., xn such that a = x0 < x1 < x2 < ... < xn = b. The distinct tabular points x0, x1, x2, ..., xn may be non-equispaced, or equispaced, that is xi − xi−1 = h for i = 1, 2, 3, ..., n. The problem of polynomial approximation is to find a polynomial P(x), of degree ≤ n, which fits the given data exactly, that is:

Pn(xi) = f(xi), i = 0, 1, 2, ..., n     (1.1)

The polynomial Pn(x) is called the interpolating polynomial. The conditions given in (1.1) are called the interpolating conditions. The interpolation polynomial fitting a given data set is unique; we may express it in various forms, but it is otherwise the same polynomial. For example, f(x) = x² − 2x − 1 can be written as x² − 2x − 1 = −2 + (x − 1) + (x − 1)(x − 2). As a particular illustration, suppose a census of the population of Ethiopia is taken every 10 years. The following table lists the population, in thousands of people, from 1950 to 2000, and the data are also represented in Figure 1.1.

Year             1950     1960     1970     1980     1990     2000
Population P(t)  151,326  179,323  203,302  226,542  249,633  281,422

Figure 1.1: Population census

In reviewing these data, we might ask whether they could be used to provide a reasonable estimate of the population in, say, 1965 or 1978. Prediction of this type, or simply approximating a polynomial that expresses such data, is called interpolation. More precisely, if the data are given at (x0, y0), (x1, y1), (x2, y2), ..., (xn, yn), then computing the value of y at a value of x inside [x0, xn] is called interpolation, while computing the value of y at a value of x outside [x0, xn] is called extrapolation. For example, the process of computing the population of the above data in 1945 or 2020 is called extrapolation. Interpolation is required in many engineering and science applications that use tabular data as input. Polynomials are the most common choice of interpolating functions because they are easy to evaluate, differentiate, and integrate.

1.1 Interpolation

Objectives

At the end of this section students must be able to:

? Define interpolation.

? Differentiate equal spacing and unequal spacing data.

? Interpolate polynomial using Newton’s forward interpolation formula.

? Interpolate polynomial using Newton’s backward interpolation formula.

? Interpolate polynomial using Lagrange interpolation formula.

? Interpolate polynomial using Newton’s Divided Difference interpolation formula.

Brainstorming Question 1.1. .

1. Define interpolation and extrapolation.

2. Interpolate the function f (x) = ex sin(x) using f (1) = 2.2874, f (2) = 6.7188
and f (3) = 2.8345.

3. Can you think of real applications of interpolation?

1.1.1 Interpolation With Equal Interval

A. Newton’s Forward Difference Interpolation Formula

Given the set of n + 1 values (x0, y0), (x1, y1), (x2, y2), ..., (xn, yn) of x and y, it is required to find yn(x), a polynomial of degree n, such that y and yn(x) agree at the tabulated points. Let the values of x be equidistant, that is:

xi − xi−1 = h, for i = 1, 2, 3, ..., n

Therefore,

x1 = x0 + h

x2 = x0 + 2h
..
.

xi = x0 + ih

Let yn(x) be a polynomial of degree n; it may be written as:

yn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + a3(x − x0)(x − x1)(x − x2) + ... + an(x − x0)(x − x1)···(x − xn−1)     (1.2)

Here, the coefficients a0, a1, ..., an are computed as follows:

Putting x = x0 in (1.2) gives
yn(x0) = a0 ⇒ y0 = a0 (since the other terms in (1.2) vanish).
Again, putting x = x1 in (1.2) gives
yn(x1) = a0 + a1(x1 − x0) ⇒ yn(x1) = y1 = a0 + a1h (because x1 − x0 = h).
Now, solving for a1 gives:

a1 = (y1 − y0)/h = Δy0/h

Similarly, putting x = x2, x = x3, ..., x = xn in (1.2), we obtain

a2 = Δ²y0/(2!h²), a3 = Δ³y0/(3!h³), ..., an = Δⁿy0/(n!hⁿ)
Now, substituting these values of the ai’s in (1.2), we obtain

yn(x) = y0 + (Δy0/h)(x − x0) + (Δ²y0/(2!h²))(x − x0)(x − x1) + (Δ³y0/(3!h³))(x − x0)(x − x1)(x − x2) + ... + (Δⁿy0/(n!hⁿ))(x − x0)(x − x1)···(x − xn−1)     (1.3)

Again, putting x = x0 + ph in (1.3) and rearranging, we get:

yn(x) = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + ... + [p(p − 1)(p − 2)···(p − (n − 1))/n!]Δⁿy0     (1.4)

This equation (1.4) is called Newton’s Forward Interpolation Formula.
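The construction above translates directly into code. The module’s labs use MATLAB, but the following Python sketch (the function names are our own) builds the forward difference table and evaluates formula (1.4) by accumulating the factors p, (p − 1), ... one at a time:

```python
def forward_differences(ys):
    """Forward difference table: row k holds the k-th differences, Δ^k y."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(xs, ys, x):
    """Evaluate the Newton forward interpolation polynomial (1.4) at x.
    Assumes the xs are equally spaced with spacing h."""
    h = xs[1] - xs[0]
    diffs = forward_differences(ys)
    p = (x - xs[0]) / h
    total, coeff = float(ys[0]), 1.0
    for k in range(1, len(xs)):
        coeff *= (p - (k - 1)) / k        # p(p-1)...(p-(k-1)) / k!
        total += coeff * diffs[k][0]      # ... times Δ^k y0
    return total
```

For the data x = 0, 1, 2, 3 and y = 1, 2, 1, 10 of Example 1.1 below, newton_forward reproduces the tabulated values exactly and returns 1.0 at x = 1.5, in agreement with the interpolating polynomial 2x³ − 7x² + 6x + 1.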

B. Newton’s Backward Difference Interpolation Formula

Given the set of n + 1 values (x0, y0), (x1, y1), (x2, y2), ..., (xn, yn) of x and y, it is required to find yn(x), a polynomial of degree n, such that y and yn(x) agree at the tabulated points. Let the values of x be equidistant, that is:

xi − xi−1 = h, for i = 1, 2, 3, ..., n

Therefore,

x1 = x0 + h

x2 = x0 + 2h
..
.

xi = x0 + ih

Like (1.2), let yn(x) be a polynomial of degree n, written as:

yn(x) = a0 + a1(x − xn) + a2(x − xn)(x − xn−1) + a3(x − xn)(x − xn−1)(x − xn−2) + ... + an(x − xn)(x − xn−1)···(x − x0)     (1.5)

Here, the coefficients a0, a1, ..., an are computed as follows:
Putting x = xn in (1.5) gives
yn(xn) = a0 ⇒ yn = a0 (since the other terms in (1.5) vanish).
Again, putting x = xn−1 in (1.5) gives
yn(xn−1) = yn−1 = a0 + a1(xn−1 − xn) ⇒ yn−1 = yn − a1h.
Now, solving for a1 gives:

a1 = (yn − yn−1)/h = ∇yn/h

Similarly, putting x = xn−2, x = xn−3, ..., x = x0 in (1.5), we obtain

a2 = ∇²yn/(2!h²), a3 = ∇³yn/(3!h³), ..., an = ∇ⁿyn/(n!hⁿ)
Now, substituting these values of the ai’s in (1.5), we obtain

yn(x) = yn + (∇yn/h)(x − xn) + (∇²yn/(2!h²))(x − xn)(x − xn−1) + (∇³yn/(3!h³))(x − xn)(x − xn−1)(x − xn−2) + ... + (∇ⁿyn/(n!hⁿ))(x − xn)(x − xn−1)···(x − x0)     (1.6)

Again, putting x = xn + ph in (1.6) and rearranging, we get:

yn(x) = yn + p∇yn + [p(p + 1)/2!]∇²yn + [p(p + 1)(p + 2)/3!]∇³yn + ... + [p(p + 1)(p + 2)···(p + (n − 1))/n!]∇ⁿyn     (1.7)

This equation (1.7) is called Newton’s Backward Interpolation Formula.

Example 1.1. Using Newton’s forward difference interpolation formula, find the form of the function y = f(x) from the following table:

x 0 1 2 3
f(x) 1 2 1 10

Solution 1.1. The forward difference table for the data is given in Figure 1.2.

Figure 1.2: Forward Difference Table

Since n = 3, the cubic Newton’s forward difference interpolation polynomial becomes:

y3(x) = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0

where p = (x − x0)/h = (x − 0)/1 = x. Hence

y3(x) = y0 + xΔy0 + [x(x − 1)/2!]Δ²y0 + [x(x − 1)(x − 2)/3!]Δ³y0
      = 1 + x(1) + [x(x − 1)/2!](−2) + [x(x − 1)(x − 2)/3!](12)
      = 1 + x − (x² − x) + 2x(x² − 3x + 2)
      = 2x³ − 7x² + 6x + 1

Example 1.2. Find the interpolating polynomial corresponding to the data (1, 5), (2, 9), (3, 14), and (4, 21), using Newton’s backward difference interpolation polynomial.

Solution 1.2. We have the following backward difference table for the data, written in tabular form in Figure 1.3.

Figure 1.3: Backward Difference Table

Since n = 3, the cubic Newton’s backward difference interpolation polynomial becomes:

y3(x) = y3 + p∇y3 + [p(p + 1)/2!]∇²y3 + [p(p + 1)(p + 2)/3!]∇³y3

where p = (x − x3)/h = (x − 4)/1 = x − 4, so that p + 1 = x − 3 and p + 2 = x − 2. Hence

y3(x) = 21 + (x − 4)(7) + [(x − 4)(x − 3)/2!](2) + [(x − 4)(x − 3)(x − 2)/3!](1)
      = x³/6 − x²/2 + 26x/6 + 1
1.1.2 Interpolation With Unequal Spacing

A. Lagrange Interpolation

Given the data

x     x0     x1     x2     ···   xn
f(x)  f(x0)  f(x1)  f(x2)  ···   f(xn)

at distinct, unevenly spaced (non-uniform) points x0, x1, x2, ..., xn (the data may also be given at evenly spaced points), we can fit a unique polynomial of degree ≤ n. Since the interpolating polynomial must use all the ordinates f(x0), f(x1), f(x2), ..., f(xn), it can be written as a linear combination of these ordinates. That is, we can write the polynomial as:

Pn (x) = L0 (x)f (x0 ) + L1 (x)f (x1 ) + L2 (x)f (x2 ) + ... + Ln (x)f (xn )

= L0 (x)f0 + L1 (x)f1 + L2 (x)f2 + ... + Ln (x)fn

Pn(x) = Σ_{i=0}^{n} Li(x) fi     (1.8)

In (1.8) Pn (x) is called Lagrange interpolating polynomial.

• For the linear Lagrange polynomial: L0(x) = (x − x1)/(x0 − x1) and L1(x) = (x − x0)/(x1 − x0).

• For the quadratic Lagrange polynomial: L0(x) = (x − x1)(x − x2)/[(x0 − x1)(x0 − x2)], L1(x) = (x − x0)(x − x2)/[(x1 − x0)(x1 − x2)], and L2(x) = (x − x0)(x − x1)/[(x2 − x0)(x2 − x1)], etc.

Example 1.3. Determine the linear Lagrange interpolating polynomial that


passes through the points (2, 4) and (5, 1).

Solution 1.3. In this case we have

L0(x) = (x − x1)/(x0 − x1) = (x − 5)/(2 − 5) = (x − 5)/(−3), and
L1(x) = (x − x0)/(x1 − x0) = (x − 2)/(5 − 2) = (x − 2)/3.

Therefore,

Pn(x) = L0(x)f(x0) + L1(x)f(x1) = [(x − 5)/(−3)](4) + [(x − 2)/3](1) = 6 − x

Exercise 1.1. Given that f(0) = 1, f(1) = 3, and f(3) = 55, find the unique polynomial of degree 2 or less which fits the given data.
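A direct implementation of (1.8) is short. The Python sketch below (the function name `lagrange` is our own; the module’s own labs use MATLAB) forms each Li(x) as the product of the factors above and sums the weighted ordinates:

```python
def lagrange(xs, ys, x):
    """Evaluate P_n(x) = sum of L_i(x) * f_i, as in (1.8)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        Li = 1.0   # build L_i(x) = product over j != i of (x - x_j)/(x_i - x_j)
        for j, xj in enumerate(xs):
            if j != i:
                Li *= (x - xj) / (xi - xj)
        total += Li * yi
    return total
```

For the points (2, 4) and (5, 1) of Example 1.3, lagrange([2, 5], [4, 1], x) agrees with 6 − x at every x; the same function can be used to check the answer to Exercise 1.1.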

B. Newton’s Divided Difference Interpolation

Divided Difference

Let the data values, (xi , f (xi )), i = 0, 1, 2, ..., n be given. We define the di-
vided differences as follows.
First divided difference: Consider any two consecutive data values (xi , f (xi ))
and (xi+1 , f (xi+1 )). Then, we define the first divided difference as;

f[xi, xi+1] = (f(xi+1) − f(xi)) / (xi+1 − xi), i = 0, 1, 2, ..., n − 1

Therefore, f[x0, x1] = (f(x1) − f(x0))/(x1 − x0), f[x1, x2] = (f(x2) − f(x1))/(x2 − x1), etc.

Note that:

f[xi, xi+1] = f[xi+1, xi] = f(xi)/(xi − xi+1) + f(xi+1)/(xi+1 − xi), i = 0, 1, 2, ..., n − 1

We say that the divided differences are symmetrical about their argu-
ments.
Second divided difference: Consider any three consecutive data values
(xi , f (xi )), (xi+1 , f (xi+1 )) and (xi+2 , f (xi+2 )) Then, we define the second

divided difference as:

f[xi, xi+1, xi+2] = (f[xi+1, xi+2] − f[xi, xi+1]) / (xi+2 − xi), i = 0, 1, 2, ..., n − 2

Therefore, f[x0, x1, x2] = (f[x1, x2] − f[x0, x1])/(x2 − x0), f[x1, x2, x3] = (f[x2, x3] − f[x1, x2])/(x3 − x1), etc.

The nth divided difference, using all the data values in the table, is defined as

f[x0, x1, x2, ..., xn] = (f[x1, x2, ..., xn] − f[x0, x1, ..., xn−1]) / (xn − x0)     (1.9)

The divided differences can be written in a tabular form in Table below:

Example 1.4. Obtain the divided difference table for the data:

x −1 0 2 3
f (x) −8 3 1 12

Solution 1.4. We have the following divided difference table for the data:

x    f(x)   1st d.d.   2nd d.d.   3rd d.d.
−1   −8
             11
0    3                  −4
             −1                    2
2    1                  4
             11
3    12

We mentioned earlier that the interpolating polynomial representing a
given data values is unique, but the polynomial can be represented in
various forms. If we write the interpolating polynomial as;

f (x) = pn (x) = a0 + a1 (x − x0 ) + a2 (x − x0 )(x − x1 ) + a3 (x − x0 )(x − x1 )(x − x2 )

+ ... + an (x − x0 )(x − x1 )...(x − xn−1 )


(1.10)

The polynomial fits the data: pn(xi) = f(xi), i = 0, 1, 2, ..., n.

Setting x = x0 gives pn(x0) = f(x0) = f0 = a0, since the remaining terms vanish.
Setting x = x1 gives

pn(x1) = f(x1) = f1 = a0 + a1(x1 − x0) ⇒ a1 = (f1 − a0)/(x1 − x0) = (f1 − f0)/(x1 − x0) = f[x0, x1]

Similarly;

a2 = f [x0 , x1 , x2 ], ..., an = f [x0 , x1 , x2 , ..., xn ]

So, we can write the interpolating polynomial using Divided Difference as;

f (x) = pn (x) = f (x0 ) + f [x0 , x1 ](x − x0 ) + f [x0 , x1 , x2 ](x − x0 )(x − x1 )+

f [x0 , x1 , x2 , x3 ](x − x0 )(x − x1 )(x − x2 ) (1.11)

+ ... + f [x0 , x1 , x2 , ..., xn ](x − x0 )(x − x1 )...(x − xn−1 )

This polynomial is called the Newton’s divided difference interpolating


polynomial.
For example, for a second order polynomial, given (x0, y0), (x1, y1) and (x2, y2):

p2(x) = f(x0) + f[x0, x1](x − x0) + f[x0, x1, x2](x − x0)(x − x1)

Example 1.5. Find f(x) as a polynomial in x for the following data by Newton’s divided difference formula.

x     −4    −1  0  2  5
f(x)  1245  33  5  9  1335

Solution 1.5. First we form the divided difference table for the data:

x    f(x)   1st d.d.   2nd d.d.   3rd d.d.   4th d.d.
−4   1245
             −404
−1   33                 94
             −28                   −14
0    5                  10                    3
             2                     13
2    9                  88
             442
5    1335

The Newton’s divided difference formula (1.11) then gives

f(x) = 1245 + (−404)(x + 4) + 94(x + 4)(x + 1) + (−14)(x + 4)(x + 1)x + 3(x + 4)(x + 1)x(x − 2)
     = 3x⁴ − 5x³ + 6x² − 14x + 5
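The table of Example 1.5 can be checked mechanically. In the Python sketch below (the function names are our own; the module’s labs use MATLAB), each pass of divided_differences produces one column of the table, and newton_divided evaluates formula (1.11) by accumulating the product (x − x0)(x − x1)··· term by term:

```python
def divided_differences(xs, ys):
    """Top-edge coefficients f[x0], f[x0,x1], ..., f[x0,...,xn] of the d.d. table."""
    n = len(xs)
    rows = [list(ys)]
    for k in range(1, n):
        prev = rows[-1]
        rows.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                     for i in range(n - k)])
    return [row[0] for row in rows]

def newton_divided(xs, ys, x):
    """Evaluate the Newton divided difference polynomial (1.11) at x."""
    coeffs = divided_differences(xs, ys)
    total, prod = 0.0, 1.0
    for c, xi in zip(coeffs, xs):
        total += c * prod
        prod *= (x - xi)
    return total
```

For the data of Example 1.5, divided_differences returns the coefficients 1245, −404, 94, −14, 3, matching the top edge of the hand-built table.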
Self check exercise 1.1.

1. Find f(x) as a polynomial in x for the following data by Newton’s forward interpolation formula.

x     −2  −1  0   1   2   3
f(x)  9   16  17  18  44  81

2. Compute f(−2.2) and f(−2.67) using the interpolated polynomial.

3. Evaluate y = e^{2x} at x = 0.005 for the following data.

x                  0  0.1     0.2     0.3     0.4
f(x) = y = e^{2x}  1  1.2214  1.4918  1.8221  2.255

4. Compute y(0.35) from question 3 above using Newton’s backward interpolation formula.

5. Find f(x) as a polynomial in x for the following data by Newton’s backward interpolation formula.

x     −2  −1  0   1   2   3
f(x)  9   16  17  18  44  81

6. Compute f(2.5) and f(2.98) using the interpolated polynomial above.

7. Find f(x) as a polynomial in x for the following data by Newton’s divided difference formula.

x     −2  −1  0   1   3   4
f(x)  9   16  17  18  44  81

8. Compute f(1.5) and f(1.98) using the interpolated polynomial above.

9. Is the interpolated polynomial obtained using different formulas unique?
1.2 Numerical Integration

Objectives

At the end of this section students must be able to:

? Define numerical integration.

? List numerical integration techniques/methods.

? Integrate a function using Trapezoidal rule.

? Integrate a function using Simpson’s 1/3 rule.

? Integrate a function using simpson’s 3/8 rule.

? Integrate a function using Weddle’s rule.

? Analyze the error of numerical integration methods.

Brainstorming Question 1.2.

 When is numerical integration more important than analytic integration?

Introduction

Some functions may not be analytically integrable. In such cases, the help of numerical analysis is very important for computing the given definite integral. In this chapter we will see some of the numerical methods that we apply to integrate a function numerically over a given interval.

Definition 1.1. Numerical integration is used to obtain approximate answers for definite integrals that cannot be solved analytically (that have no explicit anti-derivative). Numerical integration is the process of finding the numerical value of a definite integral

I = ∫_a^b f(x) dx     (1.12)

where the function y = f(x) is not known explicitly or is difficult to integrate analytically.

The process of evaluating a definite integral from a set of tabulated values of the integrand f(x) is called numerical integration. This process, when applied to a function of a single variable, is known as quadrature. The problem of numerical integration, like that of numerical differentiation, is solved by representing f(x) by an interpolation formula and then integrating it between the given limits. In this way, we can derive quadrature formulas for the approximate integration of a function defined only by a set of numerical values.

Newton-Cotes Quadrature Formula

Let

I = ∫_a^b f(x) dx

where y = f(x) takes the values y0, y1, y2, ..., yn for x = x0, x1, x2, ..., xn. Let us divide the interval (a, b) into n sub-intervals of width h, so that x0 = a, x1 = x0 + h, x2 = x0 + 2h, ..., xn = x0 + nh = b. Then

I = ∫_{x0}^{x0+nh} f(x) dx

Putting x = x0 + rh, dx = h dr, in the above integral gives

I = h ∫_0^n f(x0 + rh) dr
  = h ∫_0^n [ y0 + rΔy0 + (r(r − 1)/2!)Δ²y0 + (r(r − 1)(r − 2)/3!)Δ³y0 + ... + (r(r − 1)(r − 2)···(r − (n − 1))/n!)Δⁿy0 ] dr

by Newton’s forward interpolation formula. Integrating term by term gives

I = ∫_{x0}^{x0+nh} f(x) dx
  = nh [ y0 + (n/2)Δy0 + (n(2n − 3)/12)Δ²y0 + (n(n − 2)²/24)Δ³y0
       + (n⁴/5 − 3n³/2 + 11n²/3 − 3n)(Δ⁴y0/4!)
       + (n⁵/6 − 2n⁴ + 35n³/4 − 50n²/3 + 12n)(Δ⁵y0/5!)
       + (n⁶/7 − 15n⁵/6 + 17n⁴ − 225n³/4 + 274n²/3 − 60n)(Δ⁶y0/6!) + ... ]     (*)

The above equation (∗) is known as Newton-Cotes quadrature formula.


From this general formula , we deduce the following important quadrature
rules by taking n = 1, 2, 3, ....

1.2.1 Trapezoidal Rule

Putting n = 1 in equation (∗) above and taking the curve through (x0, y0) and (x1, y1) as a straight line, i.e. a polynomial of first degree, so that differences of order higher than the first become zero, we get:

∫_{x0}^{x0+h} f(x) dx = h[ y0 + (1/2)Δy0 ] = (h/2)(y0 + y1)

Similarly,

∫_{x0+h}^{x0+2h} f(x) dx = h[ y1 + (1/2)Δy1 ] = (h/2)(y1 + y2)
⋮
∫_{x0+(n−1)h}^{x0+nh} f(x) dx = h[ yn−1 + (1/2)Δyn−1 ] = (h/2)(yn−1 + yn)

Adding these n integrals, we obtain:

∫_{x0}^{x0+nh} f(x) dx = (h/2)[(y0 + yn) + 2(y1 + y2 + y3 + ... + yn−1)]     (1.13)

This is known as the Trapezoidal rule.


Obs. The area of each strip (trapezium) is found separately. The area under the curve between the ordinates x = x0 and x = x0 + nh is then approximately equal to the sum of the areas of the n trapeziums.
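In code, the composite rule (1.13) is a one-line sum over the sampled ordinates. A minimal Python sketch (the module’s labs use MATLAB; the function name here is ours):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule (1.13) over n equal sub-intervals of [a, b]."""
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]   # y0, y1, ..., yn
    return (h / 2) * (ys[0] + ys[-1] + 2 * sum(ys[1:-1]))
```

For instance, trapezoidal(lambda x: 1/(1 + x*x), 0, 6, 6) returns approximately 1.4108, matching the hand computation of Example 1.6 later in this section.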

1.2.2 Simpson’s one-third (1/3) rule

Putting n = 2 in equation (∗) above and taking the curve through (x0, y0), (x1, y1) and (x2, y2) as a parabola, i.e. a polynomial of second degree, so that differences of order higher than the second vanish, we get

∫_{x0}^{x0+2h} f(x) dx = 2h[ y0 + Δy0 + (1/6)Δ²y0 ] = (h/3)(y0 + 4y1 + y2)

Similarly,

∫_{x0+2h}^{x0+4h} f(x) dx = (h/3)(y2 + 4y3 + y4)
⋮
∫_{x0+(n−2)h}^{x0+nh} f(x) dx = (h/3)(yn−2 + 4yn−1 + yn)

Adding these n/2 integrals, we obtain:

∫_{x0}^{x0+nh} f(x) dx = (h/3)[(y0 + yn) + 4(y1 + y3 + y5 + ... + yn−1) + 2(y2 + y4 + ... + yn−2)]     (1.14)

This is known as Simpson’s one-third rule, or simply Simpson’s rule, and it is the most commonly used.
Obs. While applying (1.14), the given interval must be divided into an even number of equal sub-intervals, since we find the area of two strips at a time (i.e. n must be even).
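A sketch of the composite rule (1.14) in Python (our own function name; the module’s labs use MATLAB), with the evenness of n checked up front as the observation requires:

```python
def simpson_13(f, a, b, n):
    """Composite Simpson's 1/3 rule (1.14); n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's 1/3 rule needs an even number of sub-intervals")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 3) * (ys[0] + ys[-1]
                      + 4 * sum(ys[1:-1:2])    # odd-indexed ordinates y1, y3, ...
                      + 2 * sum(ys[2:-1:2]))   # even interior ordinates y2, y4, ...
```

Because the rule integrates cubics exactly, simpson_13(lambda x: x**3, 0, 1, 2) already returns 0.25 to machine precision.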

1.2.3 Simpson’s three-eighth (3/8) rule

Putting n = 3 in equation (∗) above and taking the curve through (xi, yi), i = 0, 1, 2, 3 as a cubic, i.e. a polynomial of third degree, so that differences of order higher than the third vanish, we get

∫_{x0}^{x0+3h} f(x) dx = 3h[ y0 + (3/2)Δy0 + (3/4)Δ²y0 + (1/8)Δ³y0 ] = (3h/8)(y0 + 3y1 + 3y2 + y3)

Similarly,

∫_{x0+3h}^{x0+6h} f(x) dx = (3h/8)(y3 + 3y4 + 3y5 + y6)
⋮

Adding all such expressions from x0 to x0 + nh, where n is a multiple of 3,


we obtain

∫_{x0}^{x0+nh} f(x) dx = (3h/8)[(y0 + yn) + 3(y1 + y2 + y4 + y5 + ... + yn−2 + yn−1) + 2(y3 + y6 + ... + yn−3)]     (1.15)

which is known as Simpson’s three-eighth rule.


Obs. While applying (1.15), the number of sub-intervals should be taken
as a multiple of 3.

1.2.4 Weddle’s rule

Putting n = 6 in equation (∗) above and neglecting all differences above the sixth order, we obtain

∫_{x0}^{x0+6h} f(x) dx = 6h[ y0 + 3Δy0 + (9/2)Δ²y0 + 4Δ³y0 + (123/60)Δ⁴y0 + (11/20)Δ⁵y0 + (1/6)(41/140)Δ⁶y0 ]
                       ≈ (3h/10)(y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6)

where the coefficient 41/140 of Δ⁶y0 has been replaced by 42/140 = 3/10, which introduces only a negligible error.

Similarly,

∫_{x0+6h}^{x0+12h} f(x) dx = (3h/10)(y6 + 5y7 + y8 + 6y9 + y10 + 5y11 + y12)

..
.

and so on. Adding all these integrals from x0 to x0 + nh, where n is a multiple of 6, we obtain

∫_{x0}^{x0+nh} f(x) dx = (3h/10)[(y0 + yn) + 5(y1 + y5 + y7 + y11 + ...) + (y2 + y4 + y8 + y10 + ...) + 6(y3 + y9 + y15 + ...) + 2(y6 + y12 + ...)]     (1.16)

This is known as Weddle’s rule.


Obs. While applying (1.16), the number of sub-intervals should be taken
as a multiple of 6. Weddle’s rule is generally more accurate than any of
the others. Of the two Simpson’s rules, the 1/3 rule is better.

1.2.5 Errors in quadrature formulas

The error in the quadrature formulas is given by

E = ∫_a^b y dx − ∫_a^b p(x) dx     (1.17)

where p(x) is the polynomial representing the function y = f (x), in the


interval [a, b].
i. Error in Trapezoidal rule
Expanding y = f(x) around x = x0 by Taylor’s series, we get:

y = y0 + (x − x0)y0′ + ((x − x0)²/2!)y0″ + ((x − x0)³/3!)y0‴ + ...     (1.18)

Therefore,

∫_{x0}^{x1} y dx = ∫_{x0}^{x0+h} [ y0 + (x − x0)y0′ + ((x − x0)²/2!)y0″ + ((x − x0)³/3!)y0‴ + ... ] dx
                 = h y0 + (h²/2!)y0′ + (h³/3!)y0″ + (h⁴/4!)y0‴ + ...     (1.19)

Also, A1 = area of the first trapezium in the interval [x0, x1]:

A1 = (h/2)(y0 + y1)     (1.20)

Putting x = x0 + h and y = y1 in (1.18), we get:

y1 = y0 + h y0′ + (h²/2!)y0″ + (h³/3!)y0‴ + ...     (1.21)

Substituting this value of y1 in (1.20), we get:

A1 = (h/2)[ y0 + y0 + h y0′ + (h²/2!)y0″ + (h³/3!)y0‴ + ... ]
   = h y0 + (h²/2)y0′ + (h³/(2·2!))y0″ + (h⁴/(2·3!))y0‴ + ...     (1.22)

Therefore, the error in the interval [x0, x1] is

∫_{x0}^{x1} y dx − A1 = (1.19) − (1.22) = (1/3! − 1/(2·2!)) h³ y0″ + ... = −(h³/12) y0″

That is, the principal part of the error in [x0, x1] is −(h³/12)y0″.
Similarly, the principal part of the error in [x1, x2] is −(h³/12)y1″, and so on.
Hence the total error is E = −(h³/12)(y0″ + y1″ + y2″ + ... + yn−1″).


Assuming that y″(X) is the largest in magnitude of the n quantities y0″, y1″, ..., yn−1″, we obtain

|E| < (nh³/12)|y″(X)| = ((b − a)h²/12)|y″(X)|     (1.23)

since nh = b − a. Hence the error in the trapezoidal rule is of order h².
ii. Error in Simpson’s 1/3 rule
Expanding y = f(x) around x = x0 by Taylor’s series, we get:

y = y0 + (x − x0)y0′ + ((x − x0)²/2!)y0″ + ((x − x0)³/3!)y0‴ + ...     (1.24)

Therefore, over the first double strip, we get:

∫_{x0}^{x2} y dx = ∫_{x0}^{x0+2h} [ y0 + (x − x0)y0′ + ((x − x0)²/2!)y0″ + ((x − x0)³/3!)y0‴ + ... ] dx
                 = 2h y0 + (4h²/2!)y0′ + (8h³/3!)y0″ + (16h⁴/4!)y0‴ + (32h⁵/5!)y0⁽ⁱᵛ⁾ + ...     (1.25)

Also, A1 = area over the first double strip by Simpson’s 1/3 rule on [x0, x2]:

A1 = (h/3)(y0 + 4y1 + y2)     (1.26)

Putting x = x0 + h and y = y1 in (1.24), we get:

y1 = y0 + h y0′ + (h²/2!)y0″ + (h³/3!)y0‴ + (h⁴/4!)y0⁽ⁱᵛ⁾ + ...     (1.27)

Again, putting x = x0 + 2h and y = y2 in (1.24), we have:

y2 = y0 + 2h y0′ + (4h²/2!)y0″ + (8h³/3!)y0‴ + (16h⁴/4!)y0⁽ⁱᵛ⁾ + ...     (1.28)

Substituting these values of y1 and y2 in (1.26), we get:

A1 = (h/3)[ y0 + 4(y0 + h y0′ + (h²/2!)y0″ + (h³/3!)y0‴ + (h⁴/4!)y0⁽ⁱᵛ⁾ + ...) + y0 + 2h y0′ + (4h²/2!)y0″ + (8h³/3!)y0‴ + (16h⁴/4!)y0⁽ⁱᵛ⁾ + ... ]
   = 2h y0 + 2h² y0′ + (4h³/3)y0″ + (2h⁴/3)y0‴ + (5h⁵/18)y0⁽ⁱᵛ⁾ + ...     (1.29)

Therefore, the error in the interval [x_0, x_2] is

\int_{x_0}^{x_2} y\,dx - A_1 = [(1.25) - (1.29)] = \left( \frac{4}{15} - \frac{5}{18} \right)h^5 y_0^{iv} + \cdots = -\frac{h^5}{90}y_0^{iv}

Similarly, the principal part of the error in [x_2, x_4] is -\frac{h^5}{90}y_2^{iv}.
Hence the total error is

E = -\frac{h^5}{90}\left( y_0^{iv} + y_1^{iv} + y_2^{iv} + \cdots + y_{n-1}^{iv} \right)

Assuming that y^{iv}(X) is the largest in magnitude of the n quantities y_0^{iv}, y_1^{iv}, \ldots, y_{n-1}^{iv}, we obtain

E < -\frac{nh^5}{90}y^{iv}(X) = -\frac{(b-a)h^4}{180}y^{iv}(X) \qquad (1.30)

since 2nh = b - a.
Hence the error in Simpson's 1/3 rule is of order h^4.

iii. Error in Simpson’s 3/8 rule

Proceeding as above, here the principal part of the error in the interval
[x0 , x3 ]

E = -\frac{3h^5}{80}y^{iv}(X) \qquad (1.31)

iv. Error in Weddle's rule

Proceeding as above, here the principal part of the error in the interval [x_0, x_6] is

E = -\frac{h^7}{140}y^{vi}(X) \qquad (1.32)

Example 1.6.
Evaluate

\int_0^6 \frac{1}{1+x^2}\,dx

By using:

a. Trapezoidal rule.

b. Simpson’s 1/3 rule.

c. Simpson’s 3/8 rule.

d. Weddle's rule.

e. Compare the results with its actual value.

Solution 1.6. Divide the interval (0, 6) into six equal parts, each of width h = 1. The values of f(x) = \frac{1}{1+x^2} are given below:

x 0 1 2 3 4 5 6
f (x) 1 0.5 0.2 0.1 0.0588 0.0385 0.027
y y0 y1 y2 y3 y4 y5 y6

i. Solution by Trapezoidal rule:

\int_0^6 \frac{dx}{1+x^2} = \frac{h}{2}\left[ (y_0 + y_6) + 2(y_1 + y_2 + y_3 + y_4 + y_5) \right]
= \frac{1}{2}\left[ (1 + 0.027) + 2(0.5 + 0.2 + 0.1 + 0.0588 + 0.0385) \right]
= 1.4108.

ii. Solution by Simpson’s 1/3 rule:

\int_0^6 \frac{dx}{1+x^2} = \frac{h}{3}\left[ (y_0 + y_6) + 4(y_1 + y_3 + y_5) + 2(y_2 + y_4) \right]
= \frac{1}{3}\left[ (1 + 0.027) + 4(0.5 + 0.1 + 0.0385) + 2(0.2 + 0.0588) \right]
= 1.3662.

iii. Solution by Simpson’s 3/8 rule:

\int_0^6 \frac{dx}{1+x^2} = \frac{3h}{8}\left[ (y_0 + y_6) + 3(y_1 + y_2 + y_4 + y_5) + 2y_3 \right]
= \frac{3}{8}\left[ (1 + 0.027) + 3(0.5 + 0.2 + 0.0588 + 0.0385) + 2(0.1) \right]
= 1.3571.

iv. Solution by Weddle's rule:

\int_0^6 \frac{dx}{1+x^2} = \frac{3h}{10}\left[ y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + y_6 \right]
= 0.3\left[ 1 + 5(0.5) + 0.2 + 6(0.1) + 0.0588 + 5(0.0385) + 0.027 \right]
= 1.3735

Also;
\int_0^6 \frac{dx}{1+x^2} = \tan^{-1}(6) - \tan^{-1}(0) = 1.4056

This shows that the value of the integral found by Weddle's rule is the nearest to the actual value, followed by the value given by Simpson's 1/3 rule.
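The four computations of Example 1.6 are easy to check by machine. The following Python sketch is our own illustration of the formulas above (the function names are ours; the course's stated language is MATLAB, so treat this only as a translation of the rules):

```python
import math

def trapezoidal(f, a, b, n):
    # Composite trapezoidal rule with n equal subintervals.
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

def simpson13(f, a, b, n):
    # Composite Simpson's 1/3 rule; n must be even.
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h / 3 * (f(a) + f(b) + 4 * odd + 2 * even)

def simpson38(f, a, b, n):
    # Composite Simpson's 3/8 rule; n must be a multiple of 3.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 3 * sum(f(a + i * h) for i in range(1, n) if i % 3 != 0)
    s += 2 * sum(f(a + i * h) for i in range(3, n, 3))
    return 3 * h / 8 * s

def weddle(f, a, b):
    # Weddle's rule over one block of six subintervals.
    h = (b - a) / 6
    y = [f(a + i * h) for i in range(7)]
    return 3 * h / 10 * (y[0] + 5*y[1] + y[2] + 6*y[3] + y[4] + 5*y[5] + y[6])

f = lambda x: 1 / (1 + x * x)
print(trapezoidal(f, 0, 6, 6))  # ≈ 1.4108
print(simpson13(f, 0, 6, 6))    # ≈ 1.3662
print(simpson38(f, 0, 6, 6))    # ≈ 1.3571
print(weddle(f, 0, 6))          # ≈ 1.3734
print(math.atan(6))             # exact value ≈ 1.4056
```

Running the rules at full floating-point precision reproduces the table values; Weddle's rule gives 1.3734 here because the hand computation above used the rounded values 0.0588, 0.0385 and 0.027.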

Exercise 1.2.
Compute the following using the trapezoidal rule, Simpson's rules and Weddle's rule with h = 0.01.

a. \int_0^{\pi/2} x\sin(x)\,dx \qquad b. \int_0^1 \sqrt{\sin(x) + \cos(x)}\,dx \qquad c. \int_0^{0.3} (1 - 8x^3)^{\frac{1}{2}}\,dx

1.2.6 Gaussian Quadrature Formula

Gaussian quadrature chooses the points of evaluation in an optimal, rather than equally spaced, way: the nodes x_1, x_2, x_3, \ldots, x_n in the interval [a, b] and the coefficients c_1, c_2, c_3, \ldots, c_n are chosen to minimize the expected error of the approximation

\int_a^b f(x)\,dx \approx \sum_{i=1}^{n} c_i f(x_i) \qquad (1.33)

The c_1, c_2, c_3, \ldots, c_n and x_1, x_2, x_3, \ldots, x_n give us 2n parameters to choose, so the rule can be made exact on the class of polynomials of degree at most 2n - 1.

First transform any finite interval [a, b] to [-1, 1]:

\Rightarrow \int_a^b f(x)\,dx = \int_{-1}^1 g(t)\,dt \qquad (1.34)

The conversion of the limits of integration is accomplished by letting x = mt + c:

b = m(1) + c \quad \text{and} \quad a = m(-1) + c \;\Rightarrow\; m = \frac{b-a}{2} \;\text{ and }\; c = \frac{b+a}{2}

\Rightarrow x = \frac{b-a}{2}t + \frac{b+a}{2} \quad \text{and} \quad dx = \frac{b-a}{2}\,dt

\int_a^b f(x)\,dx = \left( \frac{b-a}{2} \right) \int_{-1}^1 f\left( \frac{b-a}{2}t + \frac{b+a}{2} \right) dt \qquad (1.35)

a. One Point Gaussian Quadrature Formula

For n = 1, 2n - 1 = 2(1) - 1 = 1 (degree one polynomial):

\int_a^b f(x)\,dx \approx c_1 f(x_1), \quad a \le x_1 \le b, \quad f(x) = a_0 + a_1 x

\int_a^b (a_0 + a_1 x)\,dx = a_0(b - a) + a_1\left( \frac{b^2 - a^2}{2} \right) \qquad (1.36)

c_1 f(x_1) = c_1(a_0 + a_1 x_1) = c_1 a_0 + a_1(c_1 x_1) \qquad (1.37)

By equating the coefficients of a_0 and a_1 in (1.36) and (1.37), we have

c_1 = b - a \quad \text{and} \quad c_1 x_1 = \frac{b^2 - a^2}{2}

Now, c_1 x_1 = (b - a)x_1 \Rightarrow (b - a)x_1 = \frac{b^2 - a^2}{2} \Rightarrow x_1 = \frac{a + b}{2}

Therefore,

\int_a^b f(x)\,dx \approx c_1 f(x_1) = (b - a)f\left( \frac{a+b}{2} \right) \qquad (1.38)

Here, if a = -1 and b = 1, then (1.38) becomes

\int_{-1}^1 f(x)\,dx \approx c_1 f(x_1) = (1 - (-1))f\left( \frac{-1+1}{2} \right) = 2f(0) \qquad (1.39)

Example 1.7. Evaluate the integral


\int_{0.1}^{1.3} 5xe^{-2x}\,dx \qquad (1.40)

Using one point Gaussian quadrature rule.

Solution 1.7. Here, f(x) = 5xe^{-2x}, a = 0.1 and b = 1.3. So, by (1.38), we have

\int_{0.1}^{1.3} 5xe^{-2x}\,dx = (1.3 - 0.1)f\left( \frac{0.1 + 1.3}{2} \right) = 1.2f(0.7) = 1.2\left( 5(0.7)e^{-2(0.7)} \right) = 1.0357 \qquad (1.41)

Note that the exact solution is 0.8939, so the error is |0.8939 − 1.0357| = 0.1418

b. Two Point Gaussian Quadrature Formula

For n = 2, 2n - 1 = 2(2) - 1 = 3 (degree three polynomial):

\int_{-1}^1 f(x)\,dx \approx c_1 f(x_1) + c_2 f(x_2), \quad -1 \le x_1 \le 1 \text{ and } -1 \le x_2 \le 1

and choose a function of degree three, f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3.
From the integral calculus,

\int_{-1}^1 f(x)\,dx = \int_{-1}^1 \left[ a_0 + a_1 x + a_2 x^2 + a_3 x^3 \right] dx = a_0(2) + a_1(0) + a_2(2/3) + a_3(0) \qquad (1.42)

and also,

c_1 f(x_1) + c_2 f(x_2) = c_1(a_0 + a_1 x_1 + a_2 x_1^2 + a_3 x_1^3) + c_2(a_0 + a_1 x_2 + a_2 x_2^2 + a_3 x_2^3)
= a_0(c_1 + c_2) + a_1(c_1 x_1 + c_2 x_2) + a_2(c_1 x_1^2 + c_2 x_2^2) + a_3(c_1 x_1^3 + c_2 x_2^3)
Now, equating the coefficients of a_0, a_1, a_2, a_3 in (1.42) and the expression above, we have the following system of four equations:

c_1 + c_2 = 2
c_1 x_1 + c_2 x_2 = 0
c_1 x_1^2 + c_2 x_2^2 = 2/3
c_1 x_1^3 + c_2 x_2^3 = 0

Solving this system using any method for non-linear systems, we get c_1 = 1, c_2 = 1, x_1 = \pm\frac{1}{\sqrt{3}} and x_2 = \mp\frac{1}{\sqrt{3}}.
Therefore, the two point Gaussian quadrature formula is given by

\int_{-1}^1 f(x)\,dx \approx c_1 f(x_1) + c_2 f(x_2) = f\left( -\frac{1}{\sqrt{3}} \right) + f\left( \frac{1}{\sqrt{3}} \right) \qquad (1.43)

Example 1.8. Evaluate the integral


\int_{0.1}^{1.3} 5xe^{-2x}\,dx \qquad (1.44)

Using Two point Gaussian quadrature rule.

Solution 1.8. First transform the interval [0.1, 1.3] to [-1, 1] by letting x = mt + c:

1.3 = m(1) + c \quad \text{and} \quad 0.1 = m(-1) + c \;\Rightarrow\; m = 0.6 \text{ and } c = 0.7 \;\Rightarrow\; x = 0.6t + 0.7

Then,

f(x) = 5xe^{-2x}, \quad x = 0.6t + 0.7
f(0.6t + 0.7) = 5(0.6t + 0.7)e^{-2(0.6t+0.7)} = (3t + 3.5)e^{-1.2t-1.4}
\text{and } dx = 0.6\,dt

Therefore,

\int_{0.1}^{1.3} 5xe^{-2x}\,dx = \int_{-1}^1 (3t + 3.5)e^{-1.2t-1.4}(0.6)\,dt = \int_{-1}^1 (1.8t + 2.1)e^{-1.2t-1.4}\,dt
= f\left( -\frac{1}{\sqrt{3}} \right) + f\left( \frac{1}{\sqrt{3}} \right), \quad \text{with } f(t) = (1.8t + 2.1)e^{-1.2t-1.4}
= \left( 1.8\left(-\tfrac{1}{\sqrt{3}}\right) + 2.1 \right)e^{-1.2(-\frac{1}{\sqrt{3}})-1.4} + \left( 1.8\left(\tfrac{1}{\sqrt{3}}\right) + 2.1 \right)e^{-1.2(\frac{1}{\sqrt{3}})-1.4}
= 0.52299 + 0.38719 = 0.91018

Note that the exact solution is, 0.8939, so the error is |0.8939 − 0.91018| =
0.01628, which is more accurate than one point Gaussian Quadrature rule.
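The change of variable used above can be folded directly into the rule, so no manual transformation is needed. A Python sketch (the helper name gauss2 is ours):

```python
import math

def gauss2(f, a, b):
    # Two point Gauss-Legendre rule on [a, b]:
    # map t in [-1, 1] to x = m*t + c, then sum m*(f(x1) + f(x2)).
    m, c = (b - a) / 2, (b + a) / 2
    t = 1 / math.sqrt(3)
    return m * (f(m * (-t) + c) + f(m * t + c))

f = lambda x: 5 * x * math.exp(-2 * x)
print(gauss2(f, 0.1, 1.3))  # ≈ 0.91018
```

The factor m = (b − a)/2 accounts for dx = m dt, exactly as in (1.35).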

Example 1.9. Evaluate the integral


\int_0^1 \frac{dx}{1+x^2}

Using Two point Gaussian quadrature rule.

Solution 1.9. Exercise

c. Three Point Gaussian Quadrature Formula

For n = 3, 2n - 1 = 2(3) - 1 = 5 (degree five polynomial):

\int_{-1}^1 f(x)\,dx \approx c_1 f(x_1) + c_2 f(x_2) + c_3 f(x_3), \quad -1 \le x_1, x_2, x_3 \le 1

and choose a function of degree five, f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 + a_5 x^5.
Then, using a procedure similar to that of the one point and two point Gaussian quadrature formulas, we obtain the three point Gaussian quadrature formula

\int_{-1}^1 f(x)\,dx \approx c_1 f(x_1) + c_2 f(x_2) + c_3 f(x_3) = \frac{5}{9}f\left( \sqrt{\frac{3}{5}} \right) + \frac{8}{9}f(0) + \frac{5}{9}f\left( -\sqrt{\frac{3}{5}} \right) \qquad (1.45)

Table of the roots x_i and coefficients c_i of the Gauss-Legendre quadrature:

Example. Approximate \int_{-1}^1 e^x \cos x\,dx using Gauss-Legendre quadrature with n = 3.
Solution.
Since n = 3, we use the three point Gaussian quadrature formula given in (1.45); taking the entries from the above table, we obtain

\int_{-1}^1 e^x \cos x\,dx \approx 1.93340

The exact solution is 1.9334214, so the error is 0.0000214, which is very small.
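Since this integral is already over [−1, 1], formula (1.45) applies directly. A Python sketch (gauss3 is our name):

```python
import math

def gauss3(f):
    # Three point Gauss-Legendre rule on [-1, 1]:
    # nodes ±sqrt(3/5) and 0, weights 5/9, 8/9, 5/9.
    r = math.sqrt(3 / 5)
    return 5/9 * f(-r) + 8/9 * f(0) + 5/9 * f(r)

g = lambda x: math.exp(x) * math.cos(x)
print(gauss3(g))  # ≈ 1.9334
```

Three nodes suffice here because the rule is exact for polynomials of degree 5, and e^x cos x is well approximated by such a polynomial on [−1, 1].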

1.2.7 Multiple Integrals

Double integral

A double integral is a definite integral over a two dimensional region \mathcal{R}, denoted by:

I = \iint_{\mathcal{R}} f(x, y)\,d\mathcal{R} = \int_a^b \left( \int_c^d f(x, y)\,dy \right) dx = \int_c^d \left( \int_a^b f(x, y)\,dx \right) dy \qquad (1.46)

Example 1.10. .
Evaluate the integral;
I = \int_2^{2.6} \left( \int_4^{4.6} \frac{1}{xy}\,dx \right) dy

a. By taking h = 0.2 in the x direction and k = 0.3 in the y direction, using Simpson's 3/8 rule in the x direction and Simpson's 1/3 rule in the y direction.

b. In both direction using Trapezoidal rule.

Solution 1.10. First we calculate the values of f(x, y) = \frac{1}{xy}; the results are given in the following table.
in the following table.

a. Applying Simpson’s 3/8 rule in x direction

i. For y = 2;

I_0 = \frac{3}{8}(0.2)\left[ (0.125 + 0.1087) + 3(0.1191 + 0.1136) \right] = 0.0698

ii. For y = 2.3;

I_1 = \frac{3}{8}(0.2)\left[ (0.1087 + 0.0945) + 3(0.1035 + 0.0988) \right] = 0.0608

iii. For y = 2.6;

I_2 = \frac{3}{8}(0.2)\left[ (0.0962 + 0.0836) + 3(0.0916 + 0.0874) \right] = 0.0538

Then, use Simpson’s 1/3 rule in y direction for the data given in the

following table.

y 2 2.3 2.6
f (y) 0.0698 0.0608 0.0538

I = \frac{0.3}{3}\left[ (0.0698 + 0.0538) + 4(0.0608) \right] = 0.0367

b. Applying Trapezoidal rule in x-direction, we get;

i. For y = 2;

I_0 = \frac{0.2}{2}\left[ (0.125 + 0.1087) + 2(0.1191 + 0.1136) \right] = 0.0698
ii. For y = 2.3;

I_1 = \frac{0.2}{2}\left[ (0.1087 + 0.0945) + 2(0.1035 + 0.0988) \right] = 0.0608

iii. For y = 2.6;

I_2 = \frac{0.2}{2}\left[ (0.0962 + 0.0836) + 2(0.0916 + 0.0874) \right] = 0.0538

Again, use Trapezoidal rule in y direction for the data given in the

following table.

y 2 2.3 2.6
f (y) 0.0698 0.0608 0.0538

I = \frac{0.3}{2}\left[ (0.0698 + 0.0538) + 2(0.0608) \right] = 0.0368
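Part (b), the trapezoidal rule in both directions, amounts to applying the one dimensional rule in x for each tabulated y, and then once more in y. A Python sketch (trap2d is our name; h = 0.2 gives nx = 3 subintervals in x and k = 0.3 gives ny = 2 in y):

```python
def trap2d(f, a, b, c, d, nx, ny):
    # Composite trapezoidal rule applied in x, then in y.
    hx, hy = (b - a) / nx, (d - c) / ny

    def trap_x(y):
        # One dimensional trapezoid over x in [a, b] at fixed y.
        s = f(a, y) + f(b, y) + 2 * sum(f(a + i * hx, y) for i in range(1, nx))
        return hx * s / 2

    rows = [trap_x(c + j * hy) for j in range(ny + 1)]
    return hy * (rows[0] + rows[-1] + 2 * sum(rows[1:-1])) / 2

f = lambda x, y: 1 / (x * y)
print(trap2d(f, 4, 4.6, 2, 2.6, 3, 2))  # ≈ 0.0368
```

The exact value is ln(4.6/4)·ln(2.6/2) ≈ 0.03667, so the trapezoidal result 0.0368 overestimates slightly, as expected for a convex integrand.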

Self check exercise 1.2. .
1. Evaluate the integral
\int_0^{1.2} \frac{1}{1+x^2}\,dx,

taking n=6 and using


i. Trapezoidal rule

ii. Simpson’s 1/3 rule

iii. Simpson's 3/8 rule

iv. Weddle's rule

v. Compute the error of each methods.

vi. Which method is more accurate?

2. Evaluate the integral


\int_0^1 e^{x^2}\,dx \qquad (1.48)

using Weddle's rule and taking 6 subdivisions.

3. Evaluate the integral;


I = \int_1^2 \left( \int_1^2 \frac{dy}{(x^2 + y^2)^{\frac{1}{2}}} \right) dx

a. By taking h = k = 0.25 in both direction and using Trapezoidal rule.

b. In both direction using Simpson’s rule with h = k = 0.25.

Unit Summary

• Interpolation is the process of finding the value of a function within a given set of data.

• Interpolation can be done for two types of given data. These are equal
spacing and unequal spacing data.

• For equal spacing data we can use Newton’s forward interpolation


formula, Newton’s backward interpolation formula, Newton’s central
difference interpolation formula e.t.c.

• For unequal spacing data we can use Lagrange's interpolation formula, Newton's divided difference formula, e.t.c.

• Newton’s forward interpolation formula used to interpolate the value


of the function near to the beginning of the set of tabular data points.
Newton’s backward interpolation formula used to interpolate the value
of the function near to the end of the set of tabular data points.

• Numerical integration is used to obtain approximate answers for def-


inite integrals that cannot be solved analytically (has no explicit anti-
derivative).

• Numerical integration is a process of finding the numerical value of a


definite integral.

• We can use Trapezoidal rule, Simpson’s 1/3 rule, Simpson’s 3/8 rule,
waddle’s rule, Gaussian quadrature rule, e.t.c. to integrate numeri-
cally.

Review Exercise 1

1. Find sin(50°) from the following data using forward and backward difference

x 0° 15° 30° 45° 60° 75° 90°
y = f(x) 0 0.259 0.5 0.707 0.866 0.966 1.00

2. Use Newton’s Divided Difference formula and evaluate f (3), given

x 3.2 2.7 1.0 4.8 5.6


y = f (x) 22 17.8 14.2 38.3 51.7

3. Find sin 85° using the backward difference interpolation formula for the data

x 0° 15° 30° 45° 60° 75° 90°
y = f(x) 0 0.259 0.5 0.707 0.866 0.966 1.00

4. Find a cubic polynomial which takes the values y(0) = 1, y(1) = 0, y(2) = 1, and y(3) = 10 using Newton's backward difference formula, and find y(2.5).

5. A second degree polynomial passes through (0, 1), (1, 3), (2, 7), (3, 13).
Then find the polynomial.

6. Use Lagrange interpolation formula to find y when x = 5 from the


data (0, 1), (1,3), (3,13) and (8, 123).

7. Find the value of


\int_1^5 \log_{10} x\,dx,

taking 8 subintervals and correct to four s.f., by the trapezoidal rule

8. Find the value of


\int_0^{0.6} e^x\,dx,

taking 6 subintervals and correct to five s.f., by Simpson's 1/3 rule

9. Calculate
\int_0^{\pi/2} e^{\sin x}\,dx,

taking 3 subintervals and correct to four d.p., by Simpson's 3/8 rule

10. Find the value of


\int_4^{5.2} \log_e x\,dx,

taking 6 subintervals and correct to four d.p., by the trapezoidal rule

References

1. Richard L. Burden, Numerical Analysis, ninth edition.

2. Rao V. Dukkipati, Numerical methods, 2010.

3. G. Shanker Rao, Numerical Analysis, revised third edition, 2006.

4. Peter V. O'Neil, Advanced Engineering Mathematics.

2 Approximation Theory (12 hrs)

Objectives

At the end of this unit students should be able to:

• Define Approximation and curve fitting.

• Fit the data using Discrete least square approximation.

• Fit the data using Continuous least square approximation.

• Approximate a function using the least square method.

• Approximate a function using the least square method with orthogonal polynomials.

• Approximate a function using the least square method with Legendre's polynomials.

Introduction

In mathematics and other sciences, approximation theory is concerned with how functions can best be approximated by simpler functions, and with quantitatively characterizing the errors introduced thereby. Note that what is meant by best and simpler will depend on the application. Approximation theory involves two general types of problems. One problem arises when a function is given explicitly, but we wish a simpler type of function, such as a polynomial, to approximate the values of the given function. The other problem in approximation theory is concerned with fitting functions to given (tabular) data and finding the best function in a certain class to represent the data. An approximate non-mathematical relationship between two variables can be established by a diagram called a scatter diagram. A scatter diagram shows the relationship between a set of paired observations x and y: we plot their corresponding values (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) on a graph, taking one of the variables along the x axis and the other along the y axis. The resulting diagram, showing a collection of dots, is the scatter diagram. A smooth curve that approximates the above set of data points is known as an approximating curve. An exact mathematical relationship between the two variables, given by a simple algebraic expression, is obtained by curve fitting. In this chapter we will see best fits of different types of curves, such as straight lines, quadratics, polynomials, exponentials, e.t.c., using different techniques.

2.1 Discrete Least-Square Approximation

Objectives

At the end of this section students must be able to:

? Define Approximation Theory.

? List approximation techniques/methods.

? Approximate a function using linear least square method.

? Approximate a function using nonlinear least square method.

? Fitting a polynomial function.

? Fitting exponential function .

Brainstorming Question 2.1. .

1. Define curve fitting under approximation theory.

2. Define curve fitting.

3. What is a scatter diagram ?

4. What is Approximation in numerical analysis? define it briefly.

5. When is approximation important?

6. Do you know some approximation techniques from your Numerical Analysis I course?

In many natural sciences, social sciences or behavioral sciences, an experiment or survey often produces a mass of discrete data. To interpret these data, the investigator reports using graphical methods. For instance, an experiment in physics might produce a numerical table of the form:

x x0 x1 x2 ··· xn
f (x) f (x0 ) f (x1 ) f (x2 ) ··· f (xn )

A straight line is the simplest and one of the most important curves for fitting, to approximate a function that best represents such data. The least-square method is the method used to find the best-fitting straight line.

Definition 2.1 (Discrete Least-Square Approximation). Given a set of m + 1 discrete data points (x_i, y_i), i = 0, 1, 2, \ldots, m, find an algebraic polynomial

p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \quad (n < m)

such that the error E(a_0, a_1, \ldots, a_n) in the least square sense is minimized; that is,

E(a_0, a_1, a_2, \ldots, a_n) = \sum_{k=0}^{m} \left( y_k - (a_0 + a_1 x_k + a_2 x_k^2 + \cdots + a_n x_k^n) \right)^2 \text{ is minimum.}

Here E(a_0, a_1, a_2, \ldots, a_n) is a function of (n + 1) variables: a_0, a_1, a_2, \ldots, a_n.

2.1.1 Linear Least Square Method

In most cases, a straight line can be represented by the equation

y = ax + b \qquad (2.1)

Now, best fitting the data means finding the best values of a and b in (2.1), using a suitable technique, so that the line most closely fits the given data. For example, nine points are represented in the following figure (2.1).

Figure 2.1: Scattered data diagram

As we observe from the figure, the data points need not fall on the line y = ax + b. If by chance the k-th datum falls on the line, then

(ax_k + b) - y_k = 0

If it does not, then there is an error of magnitude

|(ax_k + b) - y_k| \neq 0

The total absolute error E for all n + 1 data points is given by:

E(a, b) = \sum_{k=0}^{n} |(ax_k + b) - y_k|

Since the absolute value is awkward to minimize directly, we work instead with the sum of squares:

E(a, b) = \sum_{k=0}^{n} \left( (ax_k + b) - y_k \right)^2 \qquad (2.2)

With the help of derivatives, let us minimize the total error: at the minimum,

\frac{\partial}{\partial a}E(a, b) = 0 \quad \text{and} \quad \frac{\partial}{\partial b}E(a, b) = 0

(Since E is a function of the two variables a and b, the partial derivatives of E with respect to a and b must vanish at the minimum.)
Computing the partial derivatives of (2.2), we obtain the following system:

\sum_{k=0}^{n} 2\left( (ax_k + b) - y_k \right)x_k = 0
\sum_{k=0}^{n} 2\left( (ax_k + b) - y_k \right) = 0

This is a pair of simultaneous linear equations in the unknowns a and b. They are called the normal equations and are rearranged as:

\left( \sum_{k=0}^{n} x_k^2 \right)a + \left( \sum_{k=0}^{n} x_k \right)b = \sum_{k=0}^{n} x_k y_k
\left( \sum_{k=0}^{n} x_k \right)a + (n + 1)b = \sum_{k=0}^{n} y_k \qquad (2.3)

Therefore, use any appropriate direct method, like Gaussian elimination, Cramer's rule, e.t.c., to solve the system (2.3) for a and b; then substitute the resulting values of a and b in (2.1) and take that line as the "best fit" straight line.

Example 2.1. Find the best fit value of a and b using the linear least square
method for the following tabular data.

x 4 7 11 13 17
y = f (x) 2 0 2 6 7

Solution 2.1. Plotting the original data gives the scatter diagram; let the straight line y = ax + b best fit the given data. To find the values of a and b from the normal equations, we compute the required sums in a table, which gives:

644a + 52b = 227
52a + 5b = 17

Solving this system, we obtain a = 0.4864 and b = -1.6589. Therefore, the line y = 0.4864x - 1.6589 best fits the given data points.
Again, the total absolute square error is:

E(a, b) = \sum_{k=0}^{4} \left( y_k - (0.4864x_k - 1.6589) \right)^2
= 2.9354 + 3.0482 + 2.8612 + 1.7841 + 0.1522 = 10.7813
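For two unknowns the normal equations (2.3) can be solved in closed form. A Python sketch reproducing the fit of Example 2.1 (least_squares_line is our own name):

```python
def least_squares_line(xs, ys):
    # Solve the normal equations (2.3) for the line y = a*x + b.
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Closed form solution of the 2x2 system.
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

xs = [4, 7, 11, 13, 17]
ys = [2, 0, 2, 6, 7]
a, b = least_squares_line(xs, ys)
print(a, b)  # ≈ 0.4864, -1.6589
```

The sums here (Σx = 52, Σx² = 644, Σxy = 227, Σy = 17) are exactly the entries of the normal equations worked above.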

Example 2.2. Find the best fit value of a and b using the linear least square
method for the following tabular data.

x 0 1 2 3 4
y = f (x) 1 1.8 3.3 4.5 6.3

Solution 2.2. Exercise! (Answer: y = 0.72 + 1.33x.)

2.1.2 Non-Linear Least Square Method

a. Polynomial Least Square Method

The idea of approximating a set of m discrete data points (x_i, y_i), i = 1, 2, \ldots, m, with an algebraic polynomial

p_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n \quad \text{of degree } n < m - 1,

using the linear least square procedure, is handled similarly: we choose the constants a_0, a_1, \ldots, a_n to minimize the error E(a_0, a_1, \ldots, a_n), where

E(a_0, a_1, a_2, \ldots, a_n) = \sum_{i=1}^{m} (y_i - p_n(x_i))^2
= \sum_{i=1}^{m} y_i^2 - 2\sum_{i=1}^{m} y_i p_n(x_i) + \sum_{i=1}^{m} (p_n(x_i))^2
= \sum_{i=1}^{m} y_i^2 - 2\sum_{j=0}^{n} a_j \left( \sum_{i=1}^{m} y_i x_i^j \right) + \sum_{j=0}^{n}\sum_{k=0}^{n} a_j a_k \left( \sum_{i=1}^{m} x_i^{j+k} \right)

As in the linear case, for E to be minimized it is necessary that

\frac{\partial E}{\partial a_j} = 0, \quad \text{for } j = 0, 1, 2, \ldots, n

Thus, for each j we have:

0 = \frac{\partial E}{\partial a_j} = -2\left( \sum_{i=1}^{m} y_i x_i^j \right) + 2\sum_{k=0}^{n} a_k \left( \sum_{i=1}^{m} x_i^{j+k} \right)

This gives (n + 1) normal equations in the (n + 1) unknowns a_j. These are:

\sum_{k=0}^{n} a_k \left( \sum_{i=1}^{m} x_i^{j+k} \right) = \sum_{i=1}^{m} y_i x_i^j, \quad j = 0, 1, 2, \ldots, n \qquad (2.4)

Thus, it is helpful to write equation (2.4) as the following system to determine the coefficients a_i:

a_0 \sum_{i=1}^{m} x_i^0 + a_1 \sum_{i=1}^{m} x_i^1 + \cdots + a_n \sum_{i=1}^{m} x_i^n = \sum_{i=1}^{m} y_i x_i^0
a_0 \sum_{i=1}^{m} x_i^1 + a_1 \sum_{i=1}^{m} x_i^2 + \cdots + a_n \sum_{i=1}^{m} x_i^{n+1} = \sum_{i=1}^{m} y_i x_i^1
a_0 \sum_{i=1}^{m} x_i^2 + a_1 \sum_{i=1}^{m} x_i^3 + \cdots + a_n \sum_{i=1}^{m} x_i^{n+2} = \sum_{i=1}^{m} y_i x_i^2 \qquad (2.5)
\vdots
a_0 \sum_{i=1}^{m} x_i^n + a_1 \sum_{i=1}^{m} x_i^{n+1} + \cdots + a_n \sum_{i=1}^{m} x_i^{2n} = \sum_{i=1}^{m} y_i x_i^n

Here, equation (2.5) is also called the normal equations, and it has a unique solution provided that the x_i are distinct.

Example 2.3. Fit the data below with the discrete least square polynomial
of degree at most 2.

Solution 2.3. For this problem n = 2 and m = 5. The three normal equations are constructed from the data, giving

a_0 = 0.9268, \quad a_1 = 1.1383, \quad a_2 = 0.6401

Thus, the least square polynomial of degree 2 fitting the data is:

p_2(x) = 0.9268 + 1.1383x + 0.6401x^2

Then

E = \sum_{i=1}^{5} (y_i - p_2(x_i))^2 = 0.00720286

is the least that can be obtained using a polynomial of degree 2.
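The system (2.5) can be assembled and solved mechanically. The Python sketch below is our own illustration: since the data table of Example 2.3 is not reproduced here, the sketch uses made-up data sampled from an exact parabola, so the degree 2 fit must recover the generating coefficients.

```python
def poly_least_squares(xs, ys, n):
    # Assemble the (n+1)x(n+1) normal equations (2.5) and solve them
    # by Gaussian elimination with partial pivoting.
    m = n + 1
    A = [[sum(x ** (j + k) for x in xs) for k in range(m)] for j in range(m)]
    rhs = [sum(y * x ** j for x, y in zip(xs, ys)) for j in range(m)]
    for col in range(m):
        p = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        rhs[col], rhs[p] = rhs[p], rhs[col]
        for r in range(col + 1, m):
            fac = A[r][col] / A[col][col]
            for k in range(col, m):
                A[r][k] -= fac * A[col][k]
            rhs[r] -= fac * rhs[col]
    a = [0.0] * m
    for j in reversed(range(m)):
        a[j] = (rhs[j] - sum(A[j][k] * a[k] for k in range(j + 1, m))) / A[j][j]
    return a

# Hypothetical data sampled from y = 1 + 2x + 3x^2.
xs = [0, 0.5, 1, 1.5, 2]
ys = [1 + 2 * x + 3 * x * x for x in xs]
print(poly_least_squares(xs, ys, 2))  # ≈ [1, 2, 3]
```

When the data lie exactly on a polynomial of the fitted degree, the least square error E is zero and the fit reproduces the polynomial, which makes this a convenient self-check.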

b. Fitting Exponential Function

Suppose that the exponential curve be in the form of

y = aebx (2.6)

Now, taking the logarithm of both sides, we get:

\log y = \log a + bx\log e \qquad (2.7)

Now, let Y = \log y, A = \log a, B = b\log e; then (2.7) can be rewritten as:

Y = A + Bx (2.8)

Now, we use the linear least square method to fit equation (2.8), and so the normal equations are:

\sum_{i=1}^{n} Y_i = nA + B\sum_{i=1}^{n} x_i
\sum_{i=1}^{n} x_i Y_i = A\sum_{i=1}^{n} x_i + B\sum_{i=1}^{n} x_i^2 \qquad (2.9)

Solving the above equations for A and B, and then solving for a and b, we get a = \text{antilog}\,A and b = \frac{B}{\log e}.

Example 2.4. Fit the curve y = abx for the following data.

x 2 3 4 5 6
y = f (x) 144 172.8 207.4 248.8 298.5

Solution 2.4. We have y = ab^x \Rightarrow \log y = \log a + x\log b, which reduces to Y = A + Bx, where A = \log a and B = \log b.
The normal equations are:

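The linearization of Example 2.4 can be sketched in Python (fit_abx is our own name). The tabulated data are, up to rounding, y = 100(1.2)^x, so the recovered constants should come out close to a = 100 and b = 1.2:

```python
import math

def fit_abx(xs, ys):
    # Linearize y = a*b^x as log10(y) = A + B*x, fit the straight line,
    # then recover a = 10^A and b = 10^B (the "antilog" step in the text).
    n = len(xs)
    Ys = [math.log10(y) for y in ys]
    sx, sY = sum(xs), sum(Ys)
    sxx = sum(x * x for x in xs)
    sxY = sum(x * Y for x, Y in zip(xs, Ys))
    B = (n * sxY - sx * sY) / (n * sxx - sx * sx)
    A = (sY - B * sx) / n
    return 10 ** A, 10 ** B

xs = [2, 3, 4, 5, 6]
ys = [144, 172.8, 207.4, 248.8, 298.5]
a, b = fit_abx(xs, ys)
print(a, b)  # ≈ 100, 1.2
```

Note that the least square fit is applied to log y, not to y itself, so this is the best fit of the linearized model rather than of the original exponential.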
Self check exercise 2.1. .

1. Find the least square line y = ax + b for the data points, (-1, 10), (0, 9),
(1, 7), (2, 5), (3, 4), (4, 3), (5, 0) and (6, -1).

2. Find the linear least square approximation for the data

x −2 −1 0 1 2
y = f (x) 1 2 3 3 4

3. Fit a second degree parabola to the following data:

x 0 1 2 3 4
y = f (x) 0 1.8 1.3 2.5 6.3

4. Using a method of least square fit a curve y = abx to the following data:

x 2 3 4 5 6
y = f (x) 8.3 15.4 33.1 65.2 127.4

2.2 Continuous Least-Square Approximation

Objectives

At the end of this section students must be able to:

? Define Approximation Theory.

? List approximation techniques/methods.

? Approximate a function using orthogonal functions.

? Approximate a function using Chebyshev polynomials.

? Approximate a function using Legendre polynomials

? Approximate a function using Fourier series.

Brainstorming Question 2.2. .

• Define curve fitting under approximation theory.

This approximation concerns the approximation of a function. Suppose the function f \in C[a, b]; we seek a polynomial P_n(x) of degree at most n that will minimize the error

\int_a^b [f(x) - p_n(x)]^2\,dx \qquad (2.10)

2.2.1 Least Square Approximation of a Function Using


Monomial Polynomial

Given f(x) continuous on [a, b], find a polynomial P_n(x) of degree at most n,

P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_{n-1}x^{n-1} + a_n x^n \qquad (2.11)

such that the integral of the square of the error is minimized. That is,

E = \int_a^b [f(x) - p_n(x)]^2\,dx \text{ is minimized.} \qquad (2.12)

Now, to set up the normal equations, let

P_n(x) = a_0 + a_1 x + a_2 x^2 + \cdots + a_n x^n = \sum_{i=0}^{n} a_i x^i \quad \text{and} \quad E = E(a_0, a_1, \ldots, a_n) = \int_a^b [f(x) - p_n(x)]^2\,dx

The problem is to find the real coefficients a_0, a_1, \ldots, a_n that will minimize E. A necessary condition on the coefficients that minimize E is \frac{\partial E}{\partial a_j} = 0, j = 0, 1, 2, \ldots, n. Now,

E = \int_a^b [f(x)]^2\,dx - 2\sum_{i=0}^{n} a_i \int_a^b x^i f(x)\,dx + \int_a^b \left( \sum_{i=0}^{n} a_i x^i \right)^2 dx

\frac{\partial E}{\partial a_j} = -2\int_a^b x^j f(x)\,dx + 2\sum_{i=0}^{n} a_i \int_a^b x^{i+j}\,dx

Thus, to find P_n(x), the (n + 1) normal equations are

\sum_{i=0}^{n} a_i \int_a^b x^{i+j}\,dx = \int_a^b x^j f(x)\,dx, \quad j = 0, 1, 2, \ldots, n \qquad (2.13)

which are to be solved for the n + 1 unknowns a_j. The difficulty in obtaining the least square polynomial approximation is that the (n + 1) \times (n + 1) linear system for the unknowns a_0, a_1, \ldots, a_n must be solved, and the coefficients in the linear system are of the form

\int_a^b x^{i+j}\,dx = \frac{b^{i+j+1} - a^{i+j+1}}{i + j + 1}

With a = 0 and b = 1, the matrix of the system is the well-known Hilbert matrix.

Example 2.5. Find the least square approximating polynomial of degree 2 for the function f(x) = \sin x on [0, 1].

Solution 2.5. Let the degree 2 polynomial be P_2(x) = a_0 + a_1 x + a_2 x^2; then the normal equations are:
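On [0, 1] the coefficient matrix of the normal equations (2.13) is the 3×3 Hilbert matrix, and the right-hand sides ∫₀¹ xʲ sin x dx can be found by integration by parts. The following Python sketch of the resulting system is our own illustration, not the textbook's worked table:

```python
import math

# Normal equations (2.13) on [0, 1]: H[j][i] = 1/(i+j+1) (Hilbert matrix),
# right-hand sides b_j = integral of x^j * sin(x) over [0, 1],
# computed analytically by integration by parts.
H = [[1 / (i + j + 1) for i in range(3)] for j in range(3)]
b = [1 - math.cos(1),
     math.sin(1) - math.cos(1),
     2 * math.sin(1) + math.cos(1) - 2]

# Solve the 3x3 system by Gaussian elimination (no pivoting needed here).
for col in range(3):
    for r in range(col + 1, 3):
        fac = H[r][col] / H[col][col]
        for k in range(col, 3):
            H[r][k] -= fac * H[col][k]
        b[r] -= fac * b[col]
a = [0.0] * 3
for j in reversed(range(3)):
    a[j] = (b[j] - sum(H[j][k] * a[k] for k in range(j + 1, 3))) / H[j][j]

p = lambda x: a[0] + a[1] * x + a[2] * x * x
err = max(abs(p(x / 100) - math.sin(x / 100)) for x in range(101))
print(a, err)  # the quadratic stays close to sin(x) everywhere on [0, 1]
```

The fitted quadratic deviates from sin x by less than about 0.01 over the whole interval, which is the sense in which the continuous least square fit is "best".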

2.2.2 Least Square Approximation of a Function Using Orthogonal Polynomials

Express ak as Weight function w(x)

Algorithm 2.1 (10.6). Least Square Approximation of a Function Using Orthogonal Polynomials
2.2.3 Least Square Approximation of a Function Using Legendre's Polynomials

Example 2.6. Find the linear and quadratic least square approximations of f(x) = e^x using Legendre's polynomials.

Self check exercise 2.2. .

1. Find the least square approximating polynomial of degree 1 for the


function
f (x) = sinx on [0, 1].

2. Find the least square approximating polynomial of degree 2 for the


function
f (x) = cosπx on [0, 1].

3. Find the least square approximating polynomial of degree 3 for the


function
f (x) = sinπx on [0, 1]

Unit Summary

• Numerical analysis is a branch of mathematics which arrives at approximate solutions by repeated application of the four basic operations of algebra.

• An approximate non-mathematical relationship between two variables can be established by a diagram called a scatter diagram.

• A scatter diagram shows the relationship between a set of paired observations x and y: we plot their corresponding values (x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n) on a graph, taking one of the variables along the x axis and the other along the y axis.

• Curve fitting is the process of constructing a curve, or mathematical


function, that has the best fit to a series of data points, possibly
subject to constraints.

• Curve fitting can involve either interpolation, where an exact fit to


the data is required, or smoothing, in which a "smooth" function is
constructed that approximately fits the data.

• There are two types of approximation; discrete data approximation


and continuous function approximations.

• We can use least square approximation method(linear or non-linear)


method for discrete data.

• We can use orthogonal polynomials, Legendre polynomials, Fourier


series for continuous function.

Review Exercise 2

1. Fit a straight line to the following data, regarding x as the independent variable.

x 0 1 2 3 4
y = f (x) 0 1.8 1.3 2.5 6.3

2. Find the least square line y = a + bx:

x −2 0 2 4 6
y = f (x) 1 3 6 8 13

3. Find the least square line y = a + bx:

x −4 −2 0 2 4
y = f (x) 1.2 2.8 6.2 7.8 13.2

4. Find the least square parabola y = a + bx + cx^2:

x −3 −1 1 3
y = f (x) 15 5 1 5

5. Find the least square parabola for the points (-3, 3), (0, 1), (2, 1), (4,
3).

6. Find the linear least square line y = a0 + a1 x:

x 1 2 3 4
y = f(x) 0.5 1 1 2

7. Find the least square geometric curve y = axb for the following data:

x 1 2 3 4 5
y = f (x) 0.2 2 4.5 8 12.5

8. Using the method of least square fit a relation of the form y = abx :

x 2 3 4 5 6
y = f (x) 144 172.8 207.4 248.8 298.5

9. Using the method of least square fit a relation of the form y = abx :

x 1 2 3 4
y = f (x) 4 11 35 100

10. Find a least square curve of the form y = ab^x for the following data, for constants a and b:

x 61 26 7 2.6
y = f (x) 350 400 500 600

11. The pressure and volume of a gas are related by the equation P V^\lambda = k, where \lambda and k are constants. Fit this equation to the data given below:

x 0.5 1 1.5 2 2.5 3


y = f (x) 1.62 1 0.75 0.62 0.52 0.46

References

1. Richard L. Burden, Numerical Analysis, ninth edition.

2. Rao V. Dukkipati, Numerical methods, 2010.

3. G. Shanker Rao, Numerical Analysis, revised third edition, 2006.

4. Peter V. O'Neil, Advanced Engineering Mathematics.

3 Numerical Methods For Ordinary Differential Equations (16 Hrs)

Objective

At the end of this unit students will be able to:

• Solve first order ordinary differential equations using Euler's method and the Runge-Kutta 4th order method.

• Write a MATLAB program for Euler’s method, Runge-Kutta 4th order


method.

• Solve linear and non-linear initial value problems using Euler’s method,
Runge-Kutta 4th order method.

• Solve linear and non-linear Boundary value problems using shooting


method, Finite difference method.

Introduction

In our real world, many problems in different disciplines appear in the form of differential equations. Although differential equations can be solved analytically using different techniques, numerical methods are applied to solve differential equations when an analytical solution is difficult to obtain. In this chapter we will see some numerical techniques applied to solve differential equations, such as Taylor's series method, Euler's method, the Runge-Kutta method, and multi-step methods.
Differential equations are used to model problems in science and engineering that involve the rate of change of one variable with respect to another. Most of these problems require the solution of an initial value problem (IVP), that is, the solution to a differential equation (DE) that satisfies a given condition. In common real-life situations, the differential equation that models the problem is too complicated to solve exactly, and one uses numerical methods to approximate the solution.

3.1 Initial Value Problems

Objectives

At the end of this section students must be able to:

? Define Initial value problem (IVP).

? List the techniques/methods used to solve initial value problems.

? Solve IVPs using Euler’s method.

? Solve IVPs using modified Euler’s method.

? Solve IVPs using Taylor’s method.

? Solve IVPs using Runge-Kutta method.

? Solve IVPs using multistep method.

? Solve higher order IVPs using Runge-Kutta method.

Brainstorming Question 3.1. .


1. Define Differential Equation.
2. What is Initial value problem ?
3. What is Boundary value problem.
4. Solve

dy x+y
= , y(0) = 1.
dx x−y

5. Solve

\frac{d^2y}{dx^2} = 2y' - xy, \quad y(0) = 1, \quad y(2) = 4, \quad \text{for } 1 \le x \le 2

6. Solve

y'' = y' + 2y + e^x, \quad y(0) = 1, \quad y'(0) = 2, \quad h = 0.1, \quad 0 \le x \le 1

Definition 3.1. An equation that can be written in the form of

dy
= f (x, y), y(x0 ) = y0 (3.1)
dx

is called first order initial value differential equation of independent variable


x and dependent variable y.

Definition 3.2. A problem given in the form:

\frac{d^2y}{dx^2} = f\left( x, y, \frac{dy}{dx} \right), \quad a \le x \le b, \quad y(a) = \alpha, \quad y'(a) = \beta \qquad (3.2)

is called a second order initial value ordinary differential equation, with the two dependent quantities y and \frac{dy}{dx} and one independent variable x. We say that the problem in (3.2) is an initial value problem, since the conditions are given at the single initial point a.

Example 3.1.

dy
a. = f (x, y), a ≤ x ≤ b, y(a) = α,
dx
b. y 0 = 1 + xsin(xy), 1 ≤ x ≤ 2, y(0) = 1

c. y 00 + p(x)y 0 + q(x)y + r(x) = 0, a ≤ x ≤ b, y(a) = α, y(b) = β

d. y 00 + p(x)(y 0 )2 + q(x)y + r(x) = 0, a ≤ x ≤ b, y(a) = α, y(b) = β

Note. In the example above, a and b are first order initial value problems, c is a linear second order boundary value problem, and d is a non-linear second order boundary value problem.

Definition 3.3. : A differential equation involving one or more derivatives of


a dependent variable with respect to only one variable is called an ordinary
differential equation (ODE). But, if the number of independent variables is
two or more then it is called a partial differential equation(PDE).

64
dy
Example 3.2. a. dx
= x − 3,

b. y 00 + xy 0 + y = 2, e.t.c. are ordinary differential equations.


whereas,

c. \frac{\partial u}{\partial x} = y,

d. u_{xx} + u_{yy} = 0, e.t.c. are partial differential equations.

Definition 3.4. Solutions to differential equations may be required to satisfy certain conditions; such conditions are called initial conditions if they are given at only one point of the independent variable.

 Conditions given at more than one point of the independent variable


are called boundary conditions.

 The problem of solving an nth order ordinary differential equation to-


gether with an initial conditions is called an initial value problem(IVP).

 The problem of solving an nth order ordinary differential equation to-


gether with a boundary conditions is called a boundary value prob-
lem(BVP).

Example 3.3.

y'' + xy' = cos x,  y(0) = 1 and y'(0) = −1  is an IVP,

and

y'' + xy' = cos x,  y(0) = 1 and y(1) = −1  is a BVP.

3.1.1 Taylor’s Series method for nth order

Consider the first order differential equation

dy/dx = f(x, y),  y(x0) = y0    (3.3)

⇒ d²y/dx² = ∂f/∂x + ∂f/∂y · dy/dx = f_x + f_y y'    (3.4)
Then, differentiating successively, we obtain y''', y^(4), y^(5), ...
The Taylor series expansion of y(x) about x = x0 gives:

y(x) = y(x0) + (x − x0) y'(x0) + (x − x0)²/2! y''(x0) + (x − x0)³/3! y'''(x0) + ...

y(x) = y0 + (x − x0) y0' + (x − x0)²/2! y0'' + (x − x0)³/3! y0''' + ...    (3.5)

Substituting the values of y0 , y00 , y000 , y0000 , ..., we obtain y(x) for all values of x
for which (3.5) converge.
Putting x1 = x0 + h, and y(x0 + h) = y(x1 )

y(x1) = y1 = y0 + h y0' + h²/2! y0'' + h³/3! y0''' + ...    (3.6)

Continuing in this way we compute y2 , y3 , y4 , ... for the solution of y(x).

y(x2) = y2 = y1 + h y1' + h²/2! y1'' + h³/3! y1''' + O(h⁴)    (3.7)

where O(h⁴) represents all the terms of fourth and higher order.

Example 3.4. Given the IVP, y 0 = x − y 2 and y(0) = 1. Then find the approxi-
mate solution by Taylor’s method of order 3 at x0 = 0.1 and x1 = 0.2.

Solution 3.1. The formula is given by

y(x) = y0 + (x − x0) y0' + (x − x0)²/2! y0'' + (x − x0)³/3! y0''' + ...

But y'(x) = x − y², so

y0' = y'(x0) = 0 − 1² = −1

y0'' = y''(x0) = (1 − 2yy')(x0) = 1 − 2(1)(−1) = 3

y0''' = y'''(x0) = −2(y'² + yy'')(x0) = −2((−1)² + 1(3)) = −8

Now, substituting the values of y0, y0', y0'' and y0''' in the formula for y(x)
above we get:

y(x) = 1 + (x − x0)(−1) + (x − x0)²/2! (3) + (x − x0)³/3! (−8)

     = 1 − x + 3x²/2 − 8x³/6    (expanding about x0 = 0)

Then, y(0.1) = 1 − 0.1 + 3(0.1)²/2 − 8(0.1)³/6

             = 1 − 0.1 + 0.015 − 0.0013333

             = 0.9136667

Therefore, y(0.1) ≈ 0.9136667

y(0.2) is Exercise!!!
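For illustration only, the hand computation above can be written as a short program. This is a minimal sketch (the helper name `taylor3_step` is ours, not the text's); the derivative formulas y'' = 1 − 2yy' and y''' = −2(y'² + yy'') are the ones worked out by hand in the example:

```python
# Taylor's method of order 3 for y' = x - y^2, y(0) = 1.
# Higher derivatives found by hand, as in the example:
#   y'' = 1 - 2*y*y',   y''' = -2*(y'^2 + y*y'')
def taylor3_step(x, y, h):
    d1 = x - y**2                        # y'
    d2 = 1 - 2*y*d1                      # y''
    d3 = -2*(d1**2 + y*d2)               # y'''
    return y + h*d1 + (h**2/2)*d2 + (h**3/6)*d3

y_01 = taylor3_step(0.0, 1.0, 0.1)       # approximation of y(0.1)
```

A second call, `taylor3_step(0.1, y_01, 0.1)`, re-expands about the new point and gives the y(0.2) value asked for in the exercise.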

3.1.2 Euler’s Method

Let the differential equation be given by:

y' = dy/dx = f(x, y),  with y(x0) = y0    (3.8)

To solve equation (3.8) for the values of y at
x = xi = x0 + ih, i = 1, 2, ..., n, integrate (3.8) over one step:

∫ from x0 to x1 of dy = ∫ from x0 to x1 of f(x, y) dx    (3.9)

Hence, y1 = y0 + ∫ from x0 to x1 of f(x, y) dx    (3.10)

Take f (x, y) = f (x0 , y0 ) in the range x0 to x1 , then we get;

y1 = y0 + (x1 − x0 )f (x0 , y0 )

⇒ y1 = y0 + hf (x0 , y0 ) (3.11)

Similarly, Take f (x, y) = f (x1 , y1 ) in the range x1 to x2 , then we get;

y2 = y1 + (x2 − x1 )f (x1 , y1 )

⇒ y2 = y1 + hf (x1 , y1 ) (3.12)

Generally, we take the following general formula.

• Euler’s Formula

yn+1 = yn + h f(xn, yn),

where h = xn − xn−1, xn = x0 + nh. Thus,

F or n = 0, y1 = y0 + hf (x0 , y0 )

F or n = 1, y2 = y1 + hf (x1 , y1 )

F or n = 2, y3 = y2 + hf (x2 , y2 )

F or n = 3, y4 = y3 + hf (x3 , y3 )

F or n = 4, y5 = y4 + hf (x4 , y4 )
..
.

Example 3.5. Using Euler's method, solve for x = 0.1 of
dy/dx = (x − y)/(x + y), given that y(x0) = y(0) = y0 = 1 and h = 0.02.

Solution 3.2. The values of x with the given step size h are
x0 = 0, x1 = 0.02, x2 = 0.04, x3 = 0.06, x4 = 0.08 and x5 = 0.1. Then,

y1 = y0 + hf (x0 , y0 ), but f (x0 , y0 ) = f (0, 1) = (0 − 1)/(0 + 1) = −1

= 1 + 0.02(−1)

= 1 − 0.02

= 0.98

y2 = y1 + hf (x1 , y1 ), but f (x1 , y1 ) = f (0.02, 0.98) = (0.02 − 0.98)/(0.02 + 0.98) = −0.96

= 0.98 + 0.02(−0.96) = 0.9608

y3 = y2 + hf (x2 , y2 ), but f (x2 , y2 ) = f (0.04, 0.9608) = (0.04 − 0.9608)/(0.04 + 0.9608)

= −0.92006

= 0.9608 + 0.02(−0.92006)

= 0.942398.

y4 = y3 + hf (x3 , y3 ), but f (x3 , y3 ) = f (0.06, 0.942398) = (0.06 − 0.942398)/(0.06 + 0.942398)

= −0.880287

= 0.942398 + 0.02(−0.880287)

= 0.92479.

y5 = y4 + hf (x4 , y4 ), but f (x4 , y4 ) = f (0.08, 0.92479) = (0.08 − 0.92479)/(0.08 + 0.92479)

= −0.84076

= 0.92479 + 0.02(−0.84076)

= 0.907975.

⇒ y(0.1) = y(x5) = y5 = 0.907975.

(Note that y5 already corresponds to x5 = 0.1, so no further Euler step is
needed.)
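The repeated steps above can be collected into a loop. This is a minimal sketch (the function name `euler` is ours):

```python
# Euler's method: y_{n+1} = y_n + h*f(x_n, y_n)
def euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)
        x += h
    return y

f = lambda x, y: (x - y) / (x + y)
y_01 = euler(f, 0.0, 1.0, 0.02, 5)   # five steps of h = 0.02 reach x = 0.1
```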

Exercise 3.1. Compute y3 , y6 and y8 the following using Euler’s method.

a. dy/dx = x + y²,  y(0) = y0 = 1, h = 0.1

b. dy/dx + y = 1,  y(1) = y0 = 2, h = 0.1

c. dy/dx = 1 + xy³,  y(0) = 1, h = 0.1

3.1.3 Modified Euler’s method

In the case of Euler's method we approximate f(x, y) by f(x0, y0) in the
range x0 to x1, but in this case we approximate the integral of f(x, y) by
means of the Trapezoidal rule, i.e.

y1 = y0 + h/2 [f(x0, y0) + f(x1, y1^(0))]

y2 = y1 + h/2 [f(x1, y1) + f(x2, y2^(0))]

Then, we get the iteration formula as:

yn+1 = yn + h/2 [f(xn, yn) + f(xn+1, yn+1^(0))],  n = 0, 1, 2, ..., N − 1    (3.13)

Here yn+1^(0) is the predicted value at xn+1, obtained by Euler's formula
yn+1^(0) = yn + h f(xn, yn); in particular, the method starts from
y1^(0) = y0 + h f(x0, y0).

Example 3.6. Using the Modified Euler's method, solve for x = 0.06 of
dy/dx = (x − y)/(x + y), given that x0 = 0, y0 = 1 and h = 0.02.

Solution 3.3.

y1^(0) = y0 + hf(x0, y0), but f(x0, y0) = f(0, 1) = (0 − 1)/(0 + 1) = −1.

= 1 + 0.02(−1)

= 0.98

Then,

y1 = y0 + h/2 (f(x0, y0) + f(x1, y1^(0))), but f(x0, y0) = −1 and f(x1, y1^(0)) = f(0.02, 0.98) = −0.96

= 1 + 0.02/2(−1 − 0.96)

= 0.9804

y2^(0) = y1 + hf(x1, y1), but f(x1, y1) = f(0.02, 0.9804) = (0.02 − 0.9804)/(0.02 + 0.9804) = −0.9600

= 0.9804 + 0.02(−0.9600).

= 0.9612.

Then,

y2 = y1 + h/2 (f(x1, y1) + f(x2, y2^(0))), but f(x1, y1) = −0.9600 and f(x2, y2^(0)) = f(0.04, 0.9612)

= −0.9201

= 0.9804 + 0.02/2(−0.9600 − 0.9201)

= 0.961599.

y3^(0) = y2 + hf(x2, y2), but f(x2, y2) = f(0.04, 0.961599) = (0.04 − 0.961599)/(0.04 + 0.961599)

= −0.9201

= 0.961599 + 0.02(−0.9201)

= 0.943197

Then,

y3 = y2 + h/2 (f(x2, y2) + f(x3, y3^(0))), but f(x2, y2) = −0.9201 and f(x3, y3^(0)) = f(0.06, 0.943197)

= −0.88038

= 0.961599 + 0.02/2(−0.9201 − 0.88038)

= 0.9435942

⇒ y(0.06) = y3 = 0.9435942 i.e. the solution at x = 0.06 is 0.9435942.

Exercise 3.2. Using the Modified Euler’s method solve y 0 = x + y with y(0) =
1 and h = 0.02.
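The predictor-corrector pattern of the Modified Euler method can be sketched as follows (names are ours; one correction per step, as in Example 3.6):

```python
# Modified Euler: Euler predictor, then one trapezoidal correction per step
def modified_euler(f, x0, y0, h, n):
    x, y = x0, y0
    for _ in range(n):
        y_pred = y + h * f(x, y)                       # Euler predictor
        y = y + h/2 * (f(x, y) + f(x + h, y_pred))     # trapezoidal corrector
        x += h
    return y

f = lambda x, y: (x - y) / (x + y)
y_006 = modified_euler(f, 0.0, 1.0, 0.02, 3)   # three steps reach x = 0.06
```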

3.1.4 Runge-Kutta Method

Consider the IVP:

dy/dx = f(x, y),  y(x0) = y0.

Then a Runge-Kutta method is applied to calculate y at xn (i.e. y(xn) ≈ yn).

a. First order Runge-Kutta method: Euler's method is the first order
Runge-Kutta method, i.e.

yn+1 = yn + hf(xn, yn)    (3.14)

b. Second order Runge-Kutta method: The Modified Euler's method is
called the second order Runge-Kutta method, i.e.

y1 = y0 + h/2 [f(x0, y0) + f(x1, y1^(0))]    (3.15)

y1 = y0 + h/2 (k1 + k2),    (3.16)

where

k1 = f(x0, y0),

k2 = f(x1, y1^(0)),  with  y1^(0) = y0 + h k1

c. Third order Runge-Kutta method: It is given by the following formula

y1 = y0 + 1/6 (k1 + 4k2 + k3),    (3.17)

where,

k1 = hf(x0, y0)

k2 = hf(x0 + h/2, y0 + k1/2)

k3 = hf(x0 + h, y0 + k'),  where  k' = hf(x0 + h, y0 + k1)

d. Fourth order Runge-Kutta method: This is given by the following
formula

y1 = y0 + 1/6 (k1 + 2k2 + 2k3 + k4),    (3.18)

where,

k1 = hf(x0, y0)

k2 = hf(x0 + h/2, y0 + k1/2)

k3 = hf(x0 + h/2, y0 + k2/2)

k4 = hf(x0 + h, y0 + k3)

The fourth order Runge-Kutta method is the most commonly applied of
these methods, and it is given by the following general formula:

Runge-Kutta 4th order method General Formula

yn+1 = yn + 1/6 (k1 + 2k2 + 2k3 + k4)

k1 = hf(xn, yn)

k2 = hf(xn + h/2, yn + k1/2)

k3 = hf(xn + h/2, yn + k2/2)

k4 = hf(xn + h, yn + k3)

h = xn − xn−1,  xn = x0 + nh

Example. Using the Runge-Kutta 4th order method, find the value of y for
x = 0.06 in steps of size 0.02 for the equation dy/dx = (x − y)/(x + y) with
y(0) = 1.

Solution.

The Runge-Kutta 4th order method is yn+1 = yn + 1/6 (k1 + 2k2 + 2k3 + k4)

⇒ y1 = y0 + 1/6 (k1 + 2k2 + 2k3 + k4),
Where;

k1 = hf(x0, y0) = 0.02(−1) = −0.02,  since f(x0, y0) = −1

k2 = hf(x0 + h/2, y0 + k1/2) = 0.02 f(0 + 0.02/2, 1 − 0.02/2) = 0.02 f(0.01, 0.99) = −0.0196

k3 = hf(x0 + h/2, y0 + k2/2) = 0.02 f(0 + 0.02/2, 1 − 0.0196/2) = −0.0196

k4 = hf(x0 + h, y0 + k3) = 0.02 f(0 + 0.02, 1 − 0.0196) = −0.0192

⇒ y1 = 1 + 1/6 (−0.02 + 2(−0.0196) + 2(−0.0196) + (−0.0192))

     = 1 + 1/6 (−0.02 − 0.0392 − 0.0392 − 0.0192)

     = 0.9804

y2 = y1 + 1/6 (k1 + 2k2 + 2k3 + k4),

where;

k1 = hf(x1, y1) = 0.02(−0.9600) = −0.0192,  since f(x1, y1) = −0.9600

k2 = hf(x1 + h/2, y1 + k1/2) = 0.02 f(0.02 + 0.02/2, 0.9804 − 0.0192/2) = −0.0188

k3 = hf(x1 + h/2, y1 + k2/2) = 0.02 f(0.02 + 0.02/2, 0.9804 − 0.0188/2) = −0.0188

k4 = hf(x1 + h, y1 + k3) = 0.02 f(0.02 + 0.02, 0.9804 − 0.0188) = −0.0184

⇒ y2 = 0.9804 + 1/6 (−0.0192 + 2(−0.0188) + 2(−0.0188) + (−0.0184))

      = 0.9804 + 1/6 (−0.0192 − 0.0376 − 0.0376 − 0.0184)

      = 0.9616

y3 = y2 + 1/6 (k1 + 2k2 + 2k3 + k4),

where;

k1 = hf(x2, y2) = 0.02(−0.9201) = −0.0184

k2 = hf(x2 + h/2, y2 + k1/2) = 0.02 f(0.04 + 0.02/2, 0.9616 − 0.0184/2) = −0.018

k3 = hf(x2 + h/2, y2 + k2/2) = 0.02 f(0.04 + 0.02/2, 0.9616 − 0.018/2) = −0.0178

k4 = hf(x2 + h, y2 + k3) = 0.02 f(0.04 + 0.02, 0.9616 − 0.0178) = −0.0176

⇒ y3 = 0.9616 + 1/6 (−0.0184 + 2(−0.018) + 2(−0.0178) + (−0.0176))

      = 0.9616 + 1/6 (−0.0184 − 0.036 − 0.0356 − 0.0176)

      = 0.9437

Therefore, y(0.06) = y3 = 0.9437, i.e. the solution at x = 0.06 is 0.9437.
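The same computation can be sketched in code (a minimal illustration; since no intermediate rounding is done, the result differs from the rounded hand value in the fourth decimal place):

```python
# One step of the classical fourth order Runge-Kutta method
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h/2, y + k1/2)
    k3 = h * f(x + h/2, y + k2/2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

f = lambda x, y: (x - y) / (x + y)
x, y = 0.0, 1.0
for _ in range(3):                 # three steps of h = 0.02 reach x = 0.06
    y = rk4_step(f, x, y, 0.02)
    x += 0.02
```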

Exercise 3.3. Compute y3, y6 and y8 for each of the following using the
Runge-Kutta 4th order method.

a. dy/dx = x + y²,  y(0) = y0 = 1, h = 0.1

b. dy/dx + y = 1,  y(1) = y0 = 2, h = 0.1

c. dy/dx = 1 + xy³,  y(0) = 1, h = 0.1

3.1.5 Multi-step Methods

The methods we discussed before are called one-step methods, because the
approximation at the mesh point xi+1 involves information from only one of
the previous mesh points, xi. Although these methods may use function
evaluations at points between xi and xi+1, they do not retain this infor-
mation for direct use in future approximations. All the information used
by these methods is obtained within the sub-interval over which the solution
is being approximated. Methods that use approximations at more than one
previous mesh point to determine the approximation at the next point are
called multi-step methods.

Definition 3.5.
An m-step multi-step method for solving the initial value problem

y' = f(x, y)    (3.19)

y(a) = α,  a ≤ x ≤ b    (3.20)

has a difference equation for finding the approximation yi+1 at the mesh point
xi+1, represented by the following equation, where m > 1 is an integer:

yi+1 = am−1 yi + am−2 yi−1 + ... + a0 yi+1−m + h[bm f(xi+1, yi+1) + bm−1 f(xi, yi) + ...
       + b0 f(xi+1−m, yi+1−m)]    (3.21)

for i = m − 1, m, ..., N − 1, where h = (b − a)/N, the a0, a1, ..., am−1 and
b0, b1, ..., bm are constants, and the starting values
y0 = α0, y1 = α1, ..., ym−1 = αm−1 are specified.

Note:

i. When bm = 0 in (3.21), the method is called explicit or open, because
yi+1 is determined explicitly in terms of previously computed values.

ii. When bm ≠ 0 in (3.21), the method is called implicit or closed, because
yi+1 occurs on both sides of the equation, so yi+1 is specified only implicitly.

For example, the equation

yi+1 = yi + h/24 [55f(xi, yi) − 59f(xi−1, yi−1) + 37f(xi−2, yi−2) − 9f(xi−3, yi−3)]    (3.22)

y0 = α, y1 = α1, y2 = α2 and y3 = α3,  i = 3, 4, 5, ..., N − 1    (3.23)

defines an explicit four-step method known as the fourth order Adams-
Bashforth method, whereas

yi+1 = yi + h/24 [9f(xi+1, yi+1) + 19f(xi, yi) − 5f(xi−1, yi−1) + f(xi−2, yi−2)]    (3.24)

y0 = α, y1 = α1, and y2 = α2,  i = 2, 3, 4, 5, ..., N − 1    (3.25)

defines an implicit three-step method known as the fourth order Adams-
Moulton method.
NB: The starting values in (3.23) and (3.25) above must be specified, gener-
ally by assuming y0 = α and generating the remaining values by Euler's
method, Taylor's series method or a Runge-Kutta method.
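In practice the explicit and implicit formulas are often paired: an Adams-Bashforth formula predicts yi+1 and one evaluation of an Adams-Moulton formula corrects it. The following is a minimal sketch of one such step (the function name `abm_step` is ours), checked on the IVP y' = y − x² + 1, y(0) = 0.5 that is solved in Example 3.7 below, with the Runge-Kutta starting values quoted there:

```python
# One predictor-corrector step: AB four-step predictor, AM three-step corrector
def abm_step(f, xs, ys, h):
    fp = [f(x, y) for x, y in zip(xs, ys)]   # f at the last four mesh points
    y_pred = ys[-1] + h/24 * (55*fp[-1] - 59*fp[-2] + 37*fp[-3] - 9*fp[-4])
    x_next = xs[-1] + h
    # correct once with the implicit Adams-Moulton three-step formula
    return ys[-1] + h/24 * (9*f(x_next, y_pred) + 19*fp[-1] - 5*fp[-2] + fp[-3])

f = lambda x, y: y - x**2 + 1
xs = [0.0, 0.2, 0.4, 0.6]
ys = [0.5, 0.8292933, 1.2140762, 1.6489220]  # RK4 starting values
y4 = abm_step(f, xs, ys, 0.2)                # approximates y(0.8)
```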

Adams-Bashforth Explicit Method

i. Adams-Bashforth Two-Step Explicit Method

yi+1 = yi + h/2 [3f(xi, yi) − f(xi−1, yi−1)]    (3.26)

y0 = α, y1 = α1,  i = 1, 2, 3, ..., N − 1    (3.27)

• Local truncation error: τi+1(h) = (5/12) y'''(µi) h², for some µi ∈ (xi−1, xi+1)

ii. Adams-Bashforth Three-Step Explicit Method

yi+1 = yi + h/12 [23f(xi, yi) − 16f(xi−1, yi−1) + 5f(xi−2, yi−2)]    (3.28)

y0 = α, y1 = α1, y2 = α2,  i = 2, 3, ..., N − 1    (3.29)

• Local truncation error: τi+1(h) = (3/8) y^(4)(µi) h³, for some µi ∈ (xi−1, xi+1)

iii. Adams-Bashforth Four-Step Explicit Method

yi+1 = yi + h/24 [55f(xi, yi) − 59f(xi−1, yi−1) + 37f(xi−2, yi−2) − 9f(xi−3, yi−3)]    (3.30)

y0 = α, y1 = α1, y2 = α2, y3 = α3,  i = 3, 4, ..., N − 1    (3.31)

• Local truncation error: τi+1(h) = (251/720) y^(5)(µi) h⁴, for some µi ∈ (xi−1, xi+1)

iv. Adams-Bashforth Five-Step Explicit Method

yi+1 = yi + h/720 [1901f(xi, yi) − 2774f(xi−1, yi−1) + 2616f(xi−2, yi−2) − 1274f(xi−3, yi−3)
       + 251f(xi−4, yi−4)]    (3.32)

y0 = α, y1 = α1, y2 = α2, y3 = α3, y4 = α4,  i = 4, 5, 6, ..., N − 1    (3.33)

• Local truncation error: τi+1(h) = (95/288) y^(6)(µi) h⁵, for some µi ∈ (xi−1, xi+1)

Adams-Moulton Implicit Method

i. Adams-Moulton Two-Step Implicit Method

yi+1 = yi + h/12 [5f(xi+1, yi+1) + 8f(xi, yi) − f(xi−1, yi−1)]    (3.34)

y0 = α, y1 = α1,  i = 1, 2, 3, ..., N − 1    (3.35)

• Local truncation error: τi+1(h) = −(1/24) y^(4)(µi) h³, for some µi ∈ (xi−1, xi+1)

ii. Adams-Moulton Three-Step Implicit Method

yi+1 = yi + h/24 [9f(xi+1, yi+1) + 19f(xi, yi) − 5f(xi−1, yi−1) + f(xi−2, yi−2)]    (3.36)

y0 = α, y1 = α1, y2 = α2,  i = 2, 3, ..., N − 1    (3.37)

• Local truncation error: τi+1(h) = −(19/720) y^(5)(µi) h⁴, for some µi ∈ (xi−1, xi+1)

iii. Adams-Moulton Four-Step Implicit Method

yi+1 = yi + h/720 [251f(xi+1, yi+1) + 646f(xi, yi) − 264f(xi−1, yi−1) + 106f(xi−2, yi−2) − 19f(xi−3, yi−3)]

y0 = α, y1 = α1, y2 = α2, y3 = α3,  i = 3, 4, ..., N − 1

• Local truncation error: τi+1(h) = −(3/160) y^(6)(µi) h⁵, for some µi ∈ (xi−1, xi+1)

Note: Generally, the implicit method is more accurate than the explicit
method, but to apply the implicit method, we must solve the implicit equa-
tion for yi+1 . This is not always possible, and even when it can be done the
solution for yi+1 may not be unique.

Example 3.7.
Solve the IVP

y' = y − x² + 1,  0 ≤ x ≤ 1,  y(0) = 0.5

using the Adams-Bashforth four-step explicit method and the Adams-Moulton
three-step implicit method with h = 0.2, and compare with the exact solution
y(x) = (x + 1)² − 0.5e^x.

Solution 3.4.
i. For the explicit four-step Adams-Bashforth method:
we are given y0 = 0.5; to obtain the next three approximations we use the
Runge-Kutta fourth order method, which gives y1 = y(0.2) ≈ 0.8292933,
y2 = y(0.4) ≈ 1.2140762, y3 = y(0.6) ≈ 1.6489220. We use these as starting
values for the Adams-Bashforth four-step explicit method to compute the next
approximations y4 = y(0.8) and y5 = y(1.0).

So,

y(0.8) = y4 = y3 + h/24 [55f(x3, y3) − 59f(x2, y2) + 37f(x1, y1) − 9f(x0, y0)]

where f(x, y) = y − x² + 1 and
y0 = 0.5, y1 = 0.8292933, y2 = 1.2140762, y3 = 1.6489220

= 1.6489220 + (0.2/24) [55f(0.6, 1.6489220) − 59f(0.4, 1.2140762) + 37f(0.2, 0.8292933) − 9f(0, 0.5)]

= 1.6489220 + 0.0083333 (55(2.2889220) − 59(2.0540762) + 37(1.7892933) − 9(1.5))

= 2.1272892

and

y(1.0) = y5 = y4 + h/24 [55f(x4, y4) − 59f(x3, y3) + 37f(x2, y2) − 9f(x1, y1)]

y1 = 0.8292933, y2 = 1.2140762, y3 = 1.6489220, y4 = 2.1272892

= 2.1272892 + (0.2/24) [55f(0.8, 2.1272892) − 59f(0.6, 1.6489220) + 37f(0.4, 1.2140762) − 9f(0.2, 0.8292933)]

= 2.1272892 + 0.0083333 (55(2.4872892) − 59(2.2889220) + 37(2.0540762) − 9(1.7892933))

= 2.6410533

The errors of these approximations at x = 0.8 and x = 1.0, respectively, are

|2.1272892 − 2.1272295| = 5.97 × 10^−5

|2.6410533 − 2.6408591| = 1.94 × 10^−4

The errors of the corresponding Runge-Kutta approximations at x = 0.8 and
x = 1.0, respectively, are

|2.1272027 − 2.1272295| = 2.69 × 10^−5

|2.6408227 − 2.6408591| = 3.64 × 10^−5

ii. Adams-Moulton three-step implicit method: we are given y0 = 0.5,
and to obtain the next two approximations we use the Runge-Kutta fourth
order method, which gives y1 = y(0.2) ≈ 0.8292933, y2 = y(0.4) ≈ 1.2140762.
We use these as starting values for the Adams-Moulton three-step implicit
method to compute the next approximations y3 = y(0.6), y4 = y(0.8) and
y5 = y(1.0).
So,

yi+1 = yi + h/24 [9f(xi+1, yi+1) + 19f(xi, yi) − 5f(xi−1, yi−1) + f(xi−2, yi−2)],
y0 = α, y1 = α1, y2 = α2,  i = 2, 3, ..., N − 1

Now, we simplify this using f(x, y) = y − x² + 1, h = 0.2 and xi = 0.2i, and we
get

yi+1 = 1/24 [1.8yi+1 + 27.8yi − yi−1 + 0.2yi−2 − 0.192i² − 0.192i + 4.736]

To use this method explicitly, we solve this implicit equation for yi+1, which
gives

yi+1 = 1/22.2 [27.8yi − yi−1 + 0.2yi−2 − 0.192i² − 0.192i + 4.736],  i = 2, 3, ...

⇒ y3 = 1/22.2 [27.8y2 − y1 + 0.2y0 − 0.192(2)² − 0.192(2) + 4.736]

y(0.6) ≈ y3 = 1/22.2 [27.8(1.2140762) − 0.8292933 + 0.2(0.5) − 0.192(2)² − 0.192(2) + 4.736]
            = 1.6489341

Similarly, we compute

y4 ≈ y(0.8) = 2.1272136,  error = |2.1272136 − 2.1272295| = 0.0000160

y5 ≈ y(1.0) = 2.6408298,  error = |2.6408298 − 2.6408591| = 0.0000293

Comparing the errors of these approximations with those of the explicit
method, the implicit method gives consistently better results.
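Part (i) of the example can be reproduced in a few lines; a minimal sketch (function names are ours):

```python
import math

# Example 3.7(i): RK4 starting values, then Adams-Bashforth four-step steps
# for y' = y - x^2 + 1, y(0) = 0.5, h = 0.2
def f(x, y):
    return y - x**2 + 1

def rk4_step(x, y, h):
    k1 = h*f(x, y); k2 = h*f(x + h/2, y + k1/2)
    k3 = h*f(x + h/2, y + k2/2); k4 = h*f(x + h, y + k3)
    return y + (k1 + 2*k2 + 2*k3 + k4) / 6

h = 0.2
xs = [i*h for i in range(6)]
ys = [0.5]
for i in range(3):                   # RK4 supplies y1, y2, y3
    ys.append(rk4_step(xs[i], ys[i], h))
for i in range(3, 5):                # AB4 explicit steps give y4, y5
    ys.append(ys[i] + h/24*(55*f(xs[i], ys[i]) - 59*f(xs[i-1], ys[i-1])
                            + 37*f(xs[i-2], ys[i-2]) - 9*f(xs[i-3], ys[i-3])))

exact = lambda x: (x + 1)**2 - 0.5*math.exp(x)
errors = [abs(y - exact(x)) for x, y in zip(xs, ys)]
```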

3.1.6 Higher Order Equations and Systems

Definition 3.6. Differential equations of order higher than one are called
higher order differential equations.

Many physical problems, for example electrical circuits and vibrating
systems, involve IVPs whose equations have order higher than one. In
this case, no new technique is required to solve such problems: we simply
reduce the higher order DE to a system of first order DEs, and then use any
appropriate method, such as Euler's method, Taylor's series method or a
Runge-Kutta method, that we have already discussed. In this section we
introduce the numerical solution of higher order initial value problems
(IVPs); the technique works by transforming the higher order equation into
a system of first order equations.

Definition 3.7.
An mth order system of first order initial value problems has the form:

dy1/dx = f1(x, y1, y2, ..., ym)    (3.38)

dy2/dx = f2(x, y1, y2, ..., ym)    (3.39)

dy3/dx = f3(x, y1, y2, ..., ym)    (3.40)

...    (3.41)

dym/dx = fm(x, y1, y2, ..., ym)    (3.42)

for a ≤ x ≤ b, with initial conditions

y1(a) = α1, y2(a) = α2, ..., ym(a) = αm    (3.43)

The objective is to find m functions y1(x), y2(x), ..., ym(x) that satisfy each of
the differential equations together with the initial conditions.

Definition 3.8.
A general mth order IVP

y^(m)(x) = f(x, y, y', y'', ..., y^(m−1)),  a ≤ x ≤ b    (3.44)

y(a) = α1, y'(a) = α2, ..., y^(m−1)(a) = αm    (3.45)

can be converted into a system of first order IVPs in the following way:
let u1(x) = y(x), u2(x) = y'(x), ..., um(x) = y^(m−1)(x). This produces the first
order system

du1/dx = dy/dx = u2,  du2/dx = dy'/dx = u3,  ...,  du(m−1)/dx = dy^(m−2)/dx = um

and

dum/dx = dy^(m−1)/dx = y^(m) = f(x, y, y', y'', ..., y^(m−1)) = f(x, u1, u2, ..., um)

with initial conditions

u1(a) = y(a) = α1, u2(a) = y'(a) = α2, ..., um(a) = y^(m−1)(a) = αm
Example 3.8. Solve

y'' − 2y' + 2y = e^(2x) sin x,  for 0 ≤ x ≤ 1    (3.46)

with y(0) = −0.4, y'(0) = −0.6, h = 0.1    (3.47)

Solution 3.5.
Since the given IVP is of second order, we use two function representations
u(x) and v(x) to reduce it to a system of first order IVPs.
So, let

u(x) = y(x),  v(x) = y'(x)

⇒ du/dx = y'(x) = v(x),  dv/dx = y''(x) = e^(2x) sin x + 2v(x) − 2u(x)

with initial conditions u(0) = −0.4 = u0 and v(0) = −0.6 = v0, and

x0 = 0, x1 = 0.1, x2 = 0.2

Even if many methods are appropriate for solving the above system, for sim-
plicity let us use Euler's method for each equation separately; we have:

u1 = u0 + h v0 = −0.4 + 0.1(−0.6) = −0.46
v1 = v0 + h(e^(2x0) sin x0 + 2v0 − 2u0) = −0.6 + 0.1(e^0 sin 0 + 2(−0.6) − 2(−0.4)) = −0.64

u2 = u1 + h v1 = −0.46 + 0.1(−0.64) = −0.524
v2 = v1 + h(e^(2x1) sin x1 + 2v1 − 2u1) = −0.64 + 0.1(e^(0.2) sin 0.1 + 2(−0.64) − 2(−0.46)) ≈ −0.6638

Here, y(0.1) ≈ u1 = −0.46, y(0.2) ≈ u2 = −0.524, and so on.

Exercise: Approximate this using the Runge-Kutta fourth order method and
compare the result with the exact solution!
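The two Euler steps above can be sketched in code (a minimal illustration; v2 is kept at full precision here rather than rounded):

```python
import math

# Example 3.8 as a first order system: u = y, v = y'
#   u' = v,   v' = e^{2x} sin(x) + 2v - 2u
def euler_step(x, u, v, h):
    du = v
    dv = math.exp(2*x)*math.sin(x) + 2*v - 2*u
    return u + h*du, v + h*dv            # one Euler step for both components

u, v = -0.4, -0.6                        # u(0) = y(0), v(0) = y'(0)
x, h = 0.0, 0.1
for _ in range(2):                       # two steps reach x = 0.2
    u, v = euler_step(x, u, v, h)
    x += h
```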

Self check exercise 3.1.

1. Solve

dy/dx = x + y,  y(1) = 0

using Euler's method at x = 1.2 and x = 1.5 with h = 0.1.

2. Solve

dy/dx = 1 − y,  y(0) = 0

at x = 0.3 with h = 0.1, using

i. Euler's method
ii. the Runge-Kutta fourth order method.

3. Solve the second order IVP

y'' − 2y' + y = sin x,  y(0) = 1, y'(0) = 2

3.2 Boundary Value Problems

Objectives

At the end of this section students must be able to:

• Define a boundary value problem (BVP).

• List techniques/methods used to solve boundary value problems.

• Solve BVPs using the shooting method.

• Solve BVPs using the finite difference method.

Brainstorming Question 3.2.

• Solve

y'' = y' + 2y + e^x,  y(0) = 1, y(1) = 2,  h = 0.1,  0 ≤ x ≤ 1

Definition 3.9. A problem given by an equation of the form

d²y/dx² = f(x, y, dy/dx),  a ≤ x ≤ b,  y(a) = α, y(b) = β    (3.48)

is called a linear second order boundary value ordinary differential equa-
tion/problem with two dependent variables, y and dy/dx, and one independent
variable x. We say it is a boundary value problem since the function is
specified at both boundaries of [a, b]. Here the value is specified at the two
points a and b, so it is called a two point boundary value problem.

In the equation given above (3.48), y(a) = α and y(b) = β are called boundary
conditions.

Existence and Uniqueness of Solutions

Theorem 3.1. Suppose the function f in the boundary value problem

d²y/dx² = f(x, y, dy/dx),  a ≤ x ≤ b,  y(a) = α, y(b) = β

is continuous on the set

D = {(x, y, y') : a ≤ x ≤ b, −∞ < y < ∞, −∞ < y' < ∞}

and the partial derivatives f_y and f_y' are continuous on D.
If

i. f_y(x, y, y') > 0 for all (x, y, y') ∈ D, and

ii. a constant M exists with

|f_y'(x, y, y')| ≤ M for all (x, y, y') ∈ D,

then the boundary value problem has a unique solution.

Example 3.9. Consider the following Boundary Value Problem (BVP):

y'' + e^(−xy) + sin(y') = 0,  for 1 ≤ x ≤ 2, with y(1) = y(2) = 0    (3.49)

Show whether the given BVP has a unique solution or not.

Proof 3.1.
Rewrite the given differential equation as

y'' = −e^(−xy) − sin(y'),  so  f(x, y, y') = −e^(−xy) − sin(y')

f_y(x, y, y') = xe^(−xy),  and  f_y'(x, y, y') = −cos(y')

Thus, clearly f_y and f_y' are continuous on

D = {(x, y, y') : 1 ≤ x ≤ 2, −∞ < y < ∞, −∞ < y' < ∞}

Additionally, we check the conditions:

f_y(x, y, y') = xe^(−xy) > 0 on D

|f_y'(x, y, y')| = |−cos(y')| ≤ 1 = M

So, by Theorem 3.1 we conclude that the given BVP in (3.49) has a unique
solution.

Example 3.10. Given the linear BVP:

y'' = p(x)y' + q(x)y + r(x),  a ≤ x ≤ b,  y(a) = α, y(b) = β    (3.50)

Under what condition(s) does the given linear BVP have a unique solution?

Solution 3.6.

f(x, y, y') = p(x)y' + q(x)y + r(x),  f_y(x, y, y') = q(x),  f_y'(x, y, y') = p(x)

Now, f_y and f_y' are continuous on D if p(x), q(x) and r(x) are continuous on
a ≤ x ≤ b. Again,

i. f_y(x, y, y') = q(x) > 0 for a ≤ x ≤ b,

ii. |f_y'(x, y, y')| = |p(x)| ≤ M

(since f_y' is continuous on [a, b], it is bounded). So, if p(x), q(x) and r(x)
are continuous for a ≤ x ≤ b and q(x) > 0 for a ≤ x ≤ b, then by Theorem 3.1
the given linear BVP in (3.50) has a unique solution.

In this lecture note we will see the shooting method and the finite difference
method for solving BVPs.
Shooting Method

The shooting method uses the methods for solving initial value prob-
lems. This is done by assuming the initial values that would have been given
if the ordinary differential equation were an initial value problem. The bound-
ary value obtained is then compared with the actual boundary value. Using
trial and error or some systematic approach, one tries to get as close to the
boundary value as possible. As the name suggests, shooting means we shoot
to reach the target point as closely as possible.

3.2.1 Linear Shooting method

The linear shooting method for linear equations is based on the replace-
ment of the linear BVP by IVPs. To formulate this, consider the linear BVP
of the form:

y'' = p(x)y' + q(x)y + r(x),  a ≤ x ≤ b,  y(a) = α, y(b) = β    (3.51)

where p(x), q(x) and r(x) are continuous for a ≤ x ≤ b and q(x) > 0 for
a ≤ x ≤ b.
Now, to solve this BVP, we convert (3.51) to an IVP by imposing initial
conditions on y and y':

y'' = p(x)y' + q(x)y + r(x),  a ≤ x ≤ b,  y(a) = α, y'(a) = δ    (3.52)

Now, our target is: "Does the IVP in (3.52) satisfy y(b) = β?"
So, the BVP is: y'' = p(x)y' + q(x)y + r(x), a ≤ x ≤ b, y(a) = α, y(b) = β.
Writing it as a system by letting y' = z, we get the

IVP:  y' = z,                    y(a) = α
      z' = p z + q y + r,        z(a) = y'(a) = ? say δ

Here, this system of IVPs can be solved using any one of the IVP techniques,
like Euler's method, a Runge-Kutta method, a multi-step method, Taylor's
series method, ...
To check our target, let us compare y(b; δ) with β.
If |y(b; δ) − β| < ε, then our target is satisfied and we have solved the BVP.
To choose an appropriate value of δ that satisfies our target, let us consider
the function

φ(δ) = y(b; δ) − β = 0    (3.53)

Now, the question is how to find the root of φ(δ) = 0.

To answer this question we guess two values, say δ0 and δ1, and apply an
appropriate root-finding technique, like the secant method, Newton-Raphson
method, interpolation, etc., solving the equivalent IVP in (3.52) separately
for each guess:

IVP1:  y'' = p(x)y' + q(x)y + r(x),  y(a) = α,  z(a) = y'(a) = δ0
IVP2:  y'' = p(x)y' + q(x)y + r(x),  y(a) = α,  z(a) = y'(a) = δ1

After solving IVP1 and IVP2 to get y(b; δ0) and y(b; δ1), we check whether
|y(b; δ0) − β| < ε or |y(b; δ1) − β| < ε, where φ(δ0) = y(b; δ0) − β and
φ(δ1) = y(b; δ1) − β.
In particular, let us use the secant method with the two initial guesses δ0
and δ1 for the function φ(δ):

δn+1 = δn − φ(δn) (δn − δn−1) / (φ(δn) − φ(δn−1))    (3.54)

So,

δ2 = δ1 − φ(δ1) (δ1 − δ0) / (φ(δ1) − φ(δ0))    (3.55)

Again, we formulate

IVP3:  y'' = p(x)y' + q(x)y + r(x),  y(a) = α,  z(a) = y'(a) = δ2

For y(b; δ2), we check if |y(b; δ2) − β| < ε. If yes, then the BVP is solved. If
no, then refine δ by computing δ3, δ4, δ5, etc. with the secant method.

Example 3.11. Given the BVP:

y'' = 6y² − x,  y(0) = 1, y(1) = 5,  h = 1/3    (3.56)

Solution 3.7.
We formulate two IVPs by choosing two initial guesses δ0 = 1.2 and δ1 = 1.5:

IVP1: y'' = 6y² − x,  y(0) = 1, y'(0) = 1.2,  h = 1/3

IVP2: y'' = 6y² − x,  y(0) = 1, y'(0) = 1.5,  h = 1/3

NB: To solve these two IVPs, we can use Euler's method for simplicity of
calculation.
So for IVP1, letting y' = z:

y' = z,          y(0) = 1
z' = 6y² − x,    z(0) = y'(0) = 1.2

Here, x0 = 0, x1 = 1/3, x2 = 2/3, x3 = 1.
Now, by Euler's method the formulas are:

yn+1 = yn + h zn
zn+1 = zn + h(6yn² − xn)

y1 = y0 + h z0 = 1 + (1/3)(1.2) = 1.4
z1 = z0 + h(6y0² − x0) = 1.2 + (1/3)(6(1) − 0) = 3.2

y2 = y1 + h z1 = 1.4 + (1/3)(3.2) = 2.466
z2 = z1 + h(6y1² − x1) = 3.2 + (1/3)(6(1.4)² − 1/3) = 7.01

y3 = y2 + h z2 = 2.466 + (1/3)(7.01) = 4.7966 = y(1; 1.2) = y(b; δ0)

Again, for IVP2 we have:

y' = z,          y(0) = 1
z' = 6y² − x,    z(0) = y'(0) = 1.5

Similar computations as for IVP1 give the following:

y1 = 1.5,  z1 = 3.5

y2 = 2.666,  z2 = 7.89

y3 = 5.29 = y(1; 1.5) = y(b; δ1)

So, φ(δ0) = y(b; δ0) − β = 4.7966 − 5 = −0.2034 and
φ(δ1) = y(b; δ1) − β = 5.29 − 5 = 0.29; neither is within the tolerance ε.

Now, applying the secant method:

δn+1 = δn − φ(δn) (δn − δn−1) / (φ(δn) − φ(δn−1)),  n = 1, 2, 3, 4, ..., N

δ2 = δ1 − φ(δ1) (δ1 − δ0) / (φ(δ1) − φ(δ0))

   = 1.5 − (0.29)(1.5 − 1.2) / (5.29 − 4.7966) = 1.32

Now, using δ2, we formulate IVP3 and solve:

IVP3:  y' = z,          y(0) = 1
       z' = 6y² − x,    z(0) = y'(0) = 1.32 = δ2

y1 = 1.44,  z1 = 3.32

y2 = 2.57,  z2 = 7.3672

y3 = 4.965 = y(1; 1.32) = y(b; δ2)

So, φ(δ2) = y(b; δ2) − β = 4.965 − 5 = −0.035.

Again, compute δ3 using

δ3 = δ2 − φ(δ2) (δ2 − δ1) / (φ(δ2) − φ(δ1))

   = 1.32 − (0.035)(1.32 − 1.5) / (4.965 − 5.29) ≈ 1.300

y1 = 1.426,  z1 = 3.28

y2 = 2.519,  z2 = 7.2369

y3 = 4.933 = y(1; 1.3) = y(b; δ3)

So, φ(δ3) = y(b; δ3) − β = 4.933 − 5 = −0.067.

Now, one can compute δ4, δ5, δ6, etc. to obtain a better approximation of the
BVP solution.
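The whole procedure of Example 3.11 (Euler on the first order system plus the secant update on δ) can be sketched as follows; function names are ours, and since no intermediate rounding is done the numbers differ slightly from the hand computation:

```python
# Shooting for y'' = 6y^2 - x, y(0) = 1, y(1) = 5, h = 1/3 (Example 3.11)
def y_at_b(delta, h=1/3, n=3):
    # Euler for the system y' = z, z' = 6y^2 - x with y(0) = 1, z(0) = delta
    x, y, z = 0.0, 1.0, delta
    for _ in range(n):
        y, z = y + h*z, z + h*(6*y**2 - x)   # old y, z used on the right
        x += h
    return y                                  # approximation of y(1; delta)

def shoot(d0, d1, beta=5.0, tol=1e-3, max_iter=20):
    phi0, phi1 = y_at_b(d0) - beta, y_at_b(d1) - beta
    for _ in range(max_iter):
        d2 = d1 - phi1*(d1 - d0)/(phi1 - phi0)   # secant step on phi(delta)
        phi2 = y_at_b(d2) - beta
        if abs(phi2) < tol:
            return d2
        d0, phi0, d1, phi1 = d1, phi1, d2, phi2
    return d1

delta = shoot(1.2, 1.5)    # refined initial slope
```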

3.2.2 Shooting Method for Non-Linear Boundary Value


Problems

Definition 3.10. A given BVP which is not linear in the dependent variable
of the given DE is called a non-linear BVP.

Example 3.12. The following are some examples of non-linear BVPs.

a. y'' + p(x)(y')² + q(x)y = r(x),  y(a) = α, y(b) = β
b. (y'')³ + p(x)y' + q(x)y² = r(x),  y(a) = α, y(b) = β

Note: To solve a non-linear BVP by the shooting method, we follow the same
procedures and techniques as in the linear case!

Example 3.13. Given the non-linear BVP:

y'' = 2(y')² + 8x²,  y(0) = 0, y(9) = 0.

Solve the BVP using the shooting method.

Solution 3.8. Exercise!!!

3.2.3 Finite Difference Method for Linear Problems

The Finite Difference Method (FDM) is another numerical technique for
solving a Differential Equation (DE), obtained by replacing the derivatives in
the given DE with approximations. These approximations convert the DE
into algebraic equations, from which an approximate solution of the BVP is
obtained (i.e. Differential Equation → Difference Equation, by finite
differences).
In general, the DE may contain y', y'', y''', etc. So by finite differences we
find approximations for the y', y'', y''', etc. that appear in the DE, using the
best known approximation, the Taylor series expansion, as follows:

y(x + h) = y(x) + h y'(x) + h²/2! y''(x) + ...    (3.57)

⇒ y'(x) ≈ (y(x + h) − y(x))/h − (h/2) y''(ξ)    (3.58)

Now, let y'(x) ≈ y'(xi) ≈ yi', y(x) ≈ y(xi) ≈ yi and y(x + h) ≈ y(xi+1) ≈ yi+1.
Then (3.58) becomes

yi' = (yi+1 − yi)/h + O(h),  i = 1, 2, 3, ..., n    (3.59)

Thus, (3.59) is called Euler's forward difference approximation for the first
derivative.


Again,

y(x − h) = y(x) − h y'(x) + h²/2! y''(x) − ...    (3.60)

⇒ y'(x) ≈ (y(x) − y(x − h))/h + (h/2) y''(ξ)    (3.61)

Now, let y'(x) ≈ y'(xi) ≈ yi', y(x) ≈ y(xi) ≈ yi and y(x − h) ≈ y(xi−1) ≈ yi−1.
Then (3.61) becomes

yi' = (yi − yi−1)/h + O(h),  i = 1, 2, 3, ..., n    (3.62)

Thus, (3.62) is called Euler's backward difference approximation for the first
derivative.
Again,

y(x + h) = y(x) + h y'(x) + h²/2! y''(x) + h³/3! y'''(x) + ...    (3.63)

y(x − h) = y(x) − h y'(x) + h²/2! y''(x) − h³/3! y'''(x) + ...    (3.64)

Now, (3.63) − (3.64) gives

y'(x) ≈ (y(x + h) − y(x − h))/(2h) + O(h²)    (3.65)

yi' = (yi+1 − yi−1)/(2h) + O(h²),  i = 1, 2, 3, ..., n    (3.66)

Thus, (3.66) is called the central difference approximation for the first
derivative.
Again, (3.63) + (3.64) gives

y''(x) ≈ (y(x + h) − 2y(x) + y(x − h))/h² + O(h²)    (3.67)

yi'' = (yi+1 − 2yi + yi−1)/h² + O(h²),  i = 1, 2, 3, ..., n    (3.68)

Thus, (3.68) is called the central difference approximation for the second
derivative. In general, (3.59), (3.62), (3.66) and (3.68) are called finite
difference approximations.
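The orders of accuracy stated above are easy to check numerically; a minimal sketch on f(x) = sin x, whose derivatives are known exactly:

```python
import math

# Forward difference is O(h); central and second differences are O(h^2)
f, x, h = math.sin, 1.0, 1e-3
forward = (f(x + h) - f(x)) / h                     # ~ cos(1), error O(h)
central = (f(x + h) - f(x - h)) / (2*h)             # ~ cos(1), error O(h^2)
second  = (f(x + h) - 2*f(x) + f(x - h)) / h**2     # ~ -sin(1), error O(h^2)
```

With h = 10^−3 the forward error is about 4 × 10^−4, while the two central formulas are accurate to better than 10^−6.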

Finite Difference Method for Linear BVPs

Consider the linear second order BVP of the form:

y'' = p(x)y' + q(x)y + r(x),  a ≤ x ≤ b,  y(a) = α, y(b) = β    (3.69)

Now, to solve (3.69) using FDM, we implement three steps:

1. Discretizing the grid/interval (DG): In this step we discretize the
given interval using the grid/mesh points a = x0, x1, x2, ..., xn−1, xn = b
of equal spacing h = (b − a)/(N − 1), where N is the number of grid points.

2. Discretizing the boundary conditions (DB): In this step we need to

discretize the boundary conditions as y(x_0) = y_0 = α and y(x_{N+1}) = y_{N+1} = β.

3. Discretizing the differential equation (DD): In this step we dis-

cretize the differential equation by substituting the finite difference approximations
for y', y'', y''', etc., and rewrite the given DE as the finite difference equation:

(y_{i+1} − 2y_i + y_{i−1})/h² + p(x_i)(y_{i+1} − y_i)/h + q(x_i)y_i = r(x_i)

⇒ y_{i+1} − 2y_i + y_{i−1} + hp(x_i)(y_{i+1} − y_i) + h²q(x_i)y_i = h²r(x_i)

⇒ (1 + hp(x_i))y_{i+1} + (h²q(x_i) − hp(x_i) − 2)y_i + y_{i−1} = h²r(x_i),  i = 1, 2, 3, ..., n

If we take:

i = 1:  (1 + hp(x_1))y_2 + (h²q(x_1) − hp(x_1) − 2)y_1 + y_0 = h²r(x_1)

i = 2:  (1 + hp(x_2))y_3 + (h²q(x_2) − hp(x_2) − 2)y_2 + y_1 = h²r(x_2)

i = 3:  (1 + hp(x_3))y_4 + (h²q(x_3) − hp(x_3) − 2)y_3 + y_2 = h²r(x_3)

...

i = n:  (1 + hp(x_n))y_{n+1} + (h²q(x_n) − hp(x_n) − 2)y_n + y_{n−1} = h²r(x_n)

Since y_0 = α and y_{n+1} = β are known, the known boundary terms move to the right-hand side, and this recursion can be written in matrix form as

AY = b    (3.70)

where A is the n × n tridiagonal matrix

A = [ h²q(x_1) − hp(x_1) − 2        1 + hp(x_1)              0          ...     0
            1             h²q(x_2) − hp(x_2) − 2       1 + hp(x_2)      ...     0
           ...                     ...                      ...         ...  1 + hp(x_{n−1})
            0                       0                 ...      1     h²q(x_n) − hp(x_n) − 2 ]

Y = [y_1, y_2, ..., y_n]^T, and

b = [h²r(x_1) − y_0,  h²r(x_2),  ...,  h²r(x_{n−1}),  h²r(x_n) − (1 + hp(x_n))y_{n+1}]^T
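As a sketch of how the system AY = b can be assembled and solved, the following Python code (function names are our own; the general direct method is replaced by the tridiagonal "Thomas" elimination, which is valid here because A is tridiagonal) follows the difference equation (1 + hp(x_i))y_{i+1} + (h²q(x_i) − hp(x_i) − 2)y_i + y_{i−1} = h²r(x_i) for a BVP written in the form y'' + p(x)y' + q(x)y = r(x):

```python
def solve_linear_bvp(p, q, r, a, b, alpha, beta, n):
    """Solve y'' + p(x) y' + q(x) y = r(x), y(a)=alpha, y(b)=beta,
    by the finite difference method with n interior points."""
    h = (b - a) / (n + 1)
    x = [a + (i + 1) * h for i in range(n)]
    # Tridiagonal coefficients of
    # (1 + h p_i) y_{i+1} + (h^2 q_i - h p_i - 2) y_i + y_{i-1} = h^2 r_i
    lower = [1.0] * n                              # coefficient of y_{i-1}
    diag = [h * h * q(xi) - h * p(xi) - 2 for xi in x]
    upper = [1 + h * p(xi) for xi in x]            # coefficient of y_{i+1}
    rhs = [h * h * r(xi) for xi in x]
    rhs[0] -= alpha                # known y_0 moves to the right-hand side
    rhs[-1] -= upper[-1] * beta    # known y_{n+1} moves to the right-hand side
    # Thomas algorithm (forward elimination + back substitution)
    for i in range(1, n):
        m = lower[i] / diag[i - 1]
        diag[i] -= m * upper[i - 1]
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * n
    y[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        y[i] = (rhs[i] - upper[i] * y[i + 1]) / diag[i]
    return x, y

# y'' - y = 0 (p = 0, q = -1, r = 0), y(0) = 0, y(3) = 3, with h = 1:
x, y = solve_linear_bvp(lambda t: 0.0, lambda t: -1.0, lambda t: 0.0, 0, 3, 0, 3, 2)
```

For the BVP y'' − y = 0 with y(0) = 0, y(3) = 3 and h = 1 (Example 3.14), this returns y_1 = 0.375 and y_2 = 1.125.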

Example 3.14. Solve the following BVP for h = 1.

y'' − y = 0    (3.71)

y(0) = 0,  y(3) = 3    (3.72)

Solution 3.9. Given the interval [a, b] = [0, 3] and x_i = x_0 + ih, we have
x_0 = 0, x_1 = 1, x_2 = 2, x_3 = 3, so that y_0 = y(x_0) = y(0) = 0 and
y_3 = y(x_3) = y(3) = 3 are known, while y_1 = y(x_1) and y_2 = y(x_2) are
unknown. To find the unknown values y_1 and y_2 using the finite difference
method, we approximate the given DE, y'' − y = 0, and get:

y'' ≈ (y_{i+1} − 2y_i + y_{i−1})/h²  and  y ≈ y_i    (3.73)

with  y_0 = 0,  y_3 = 3    (3.74)

Now, substituting (3.73) into (3.71) we get:

(y_{i+1} − 2y_i + y_{i−1})/h² − y_i = 0

⇒ y_{i+1} − 2y_i + y_{i−1} − h²y_i = 0

⇒ y_{i+1} − 2y_i + y_{i−1} = h²y_i,  i = 1, 2    (3.75)

Now, for the two values of i in (3.75), we get the following system:

i = 1:  y_2 − 2y_1 + y_0 = h²y_1

i = 2:  y_3 − 2y_2 + y_1 = h²y_2

But we are given that h = 1, y_0 = 0 and y_3 = 3. Substituting these values
into the above and rearranging, we get:

y_2 − 3y_1 = 0
y_1 − 3y_2 = −3

Now, solving this system using any appropriate direct method, we
get y_1 = 0.375 and y_2 = 1.125.

Note: if we take the step size h very small in the finite difference approximation, the approximation converges rapidly to the exact solution.
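This note can be illustrated on Example 3.14, whose exact solution is y(x) = 3 sinh(x)/sinh(3). A small self-contained Python sketch (assumed names; a Thomas-algorithm tridiagonal solve) repeats the computation for h = 1, 0.5, 0.25 and 0.01:

```python
import math

def fd_solve(n):
    # Solve y'' = y on [0, 3], y(0)=0, y(3)=3 with n interior points (h = 3/(n+1)),
    # i.e. y_{i+1} - (2 + h^2) y_i + y_{i-1} = 0, via the Thomas algorithm
    # (the sub/super-diagonal entries are all 1).
    h = 3.0 / (n + 1)
    diag = [-(2 + h * h)] * n
    rhs = [0.0] * n
    rhs[-1] = -3.0          # boundary value y_{n+1} = 3 moved to the RHS
    for i in range(1, n):   # forward elimination
        m = 1.0 / diag[i - 1]
        diag[i] -= m
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * n
    y[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):   # back substitution
        y[i] = (rhs[i] - y[i + 1]) / diag[i]
    return h, y

exact = lambda x: 3 * math.sinh(x) / math.sinh(3)   # exact solution of the BVP
for n in (2, 5, 11, 299):                           # h = 1, 0.5, 0.25, 0.01
    h, y = fd_solve(n)
    err = max(abs(yi - exact(h * (i + 1))) for i, yi in enumerate(y))
    print(h, err)   # the maximum error shrinks like O(h^2)
```

Each halving of h divides the maximum error by roughly 4, consistent with the O(h²) accuracy of the central difference approximation.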

Exercise 3.4. Solve

y'' − (3 + x/2)y = x,  y(1) = −2,  y(3) = 1

with h = 1, 0.5, 0.25 and 0.01 and compare the result.

3.2.4 Finite Difference Method for Nonlinear Boundary Value Problems

Given the nonlinear second order BVP:

y'' = f(x, y),  a ≤ x ≤ b,  y(a) = α,  y(b) = β    (3.76)

we rewrite equation (3.76) in discretized form as:

y_{i+1} − 2y_i + y_{i−1} = h²f(x_i, y_i),  y_0 = α,  y_{N+1} = β    (3.77)

y_{i+1} − 2y_i + y_{i−1} − h²f(x_i, y_i) = 0    (3.78)

Now, to solve (3.78), we can use any appropriate approach, such as the fixed point
iteration method, the Newton-Raphson method, etc.

Example 3.15. Solve the non-linear BVP:

y 00 = xy 2 + x, y(−1) = 2, y(3) = −1, h=1 (3.79)

Solution 3.10. Given that h = 1 and a ≤ x ≤ b, we have x_0 = −1, x_1 = 0, x_2 = 1, x_3 = 2, x_4 = 3.
Here, y(x_0) = y(−1) = y_0 = 2 and y(x_4) = y(3) = y_4 = −1 are given as
boundary values, so we need to find y_1, y_2 and y_3.
The discretized form of (3.79) is (with h = 1):

y_{i+1} − 2y_i + y_{i−1} = x_i y_i² + x_i

i = 1:  y_2 − 2y_1 + y_0 = x_1 y_1² + x_1 = 0
        ⇒ y_2 − 2y_1 + 2 = 0

i = 2:  y_3 − 2y_2 + y_1 = x_2 y_2² + x_2 = y_2² + 1
        ⇒ y_3 − 2y_2 − y_2² + y_1 − 1 = 0

i = 3:  y_4 − 2y_3 + y_2 = x_3 y_3² + x_3 = 2y_3² + 2
        ⇒ −2y_3 + y_2 − 2y_3² − 3 = 0




Nonlinear system:

y_2 − 2y_1 + 2 = 0
y_3 − 2y_2 − y_2² + y_1 − 1 = 0
−2y_3 + y_2 − 2y_3² − 3 = 0

Although we could use other techniques to solve this nonlinear system, let us
use the Newton-Raphson method.
Now, consider:
F (Y) = F (y1 , y2 , y3 ) = [f1 (y1 , y2 , y3 ), f2 (y1 , y2 , y3 ), f3 (y1 , y2 , y3 )] Where,

f1 (y1 , y2 , y3 ) = y2 − 2y1 + 2

f2 (y1 , y2 , y3 ) = y3 − 2y2 − y22 + y1 − 1

f3 (y1 , y2 , y3 ) = −2y3 + y2 − 2y32 − 3

Now, we form the Jacobian matrix J(Y) of the system F(y_1, y_2, y_3), whose (i, j) entry is ∂f_i/∂y_j:

J(Y) = J(y_1, y_2, y_3) = [ −2        1           0
                             1    −2 − 2y_2       1
                             0        1       −2 − 4y_3 ]    (3.80)
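A quick way to guard against mistakes in a hand-derived Jacobian is to compare it with a finite difference approximation of F. A minimal Python sketch for this system (function names are our own):

```python
def F(y):
    # residuals of the nonlinear system above
    y1, y2, y3 = y
    return [y2 - 2 * y1 + 2,
            y3 - 2 * y2 - y2 ** 2 + y1 - 1,
            -2 * y3 + y2 - 2 * y3 ** 2 - 3]

def J(y):
    # analytic Jacobian, entry (i, j) = dF_i/dy_j
    y1, y2, y3 = y
    return [[-2.0, 1.0, 0.0],
            [1.0, -2.0 - 2 * y2, 1.0],
            [0.0, 1.0, -2.0 - 4 * y3]]

def J_numeric(y, eps=1e-6):
    # forward-difference approximation of each column dF/dy_j
    f0 = F(y)
    cols = []
    for j in range(3):
        yp = list(y)
        yp[j] += eps
        fj = F(yp)
        cols.append([(fj[i] - f0[i]) / eps for i in range(3)])
    # transpose: cols[j][i] is dF_i/dy_j
    return [[cols[j][i] for j in range(3)] for i in range(3)]

Y = [0.25, -1.5, -2.25]
assert all(abs(J(Y)[i][j] - J_numeric(Y)[i][j]) < 1e-4
           for i in range(3) for j in range(3))
```

At Y = (0.25, −1.5, −2.25) both versions agree with the matrix computed in the example below, with J(Y)[1][1] = 1 and J(Y)[2][2] = 7.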

Now, let us take Y^(0) = [0, 0, 0]^t. Then F(Y^(0)) = [2, −1, −3]^t and

J(Y^(0)) = [ −2  1  0
              1 −2  1
              0  1 −2 ]

Now, we use

J(Y^(0))X^(0) = −F(Y^(0))    (3.81)

and obtain the vector X^(0) = J(Y^(0))^(−1)(−F(Y^(0))) by solving

[ −2  1  0 ] [x_1^(0)]     [  2 ]
[  1 −2  1 ] [x_2^(0)] = − [ −1 ]
[  0  1 −2 ] [x_3^(0)]     [ −3 ]

Solving this system using an appropriate direct/indirect method, we get

X^(0) = [0.2500, −1.5000, −2.2500]^t

Again, we use

Y^(k+1) = Y^(k) + X^(k)    (3.82)

to update the unknown vector Y^(k+1), k = 0, 1, 2, 3, .... So,

Y^(1) = Y^(0) + X^(0) = [0, 0, 0]^t + [0.2500, −1.5000, −2.2500]^t = [0.2500, −1.5000, −2.2500]^t

Again,

F(Y^(1)) = [ y_2 − 2y_1 + 2,  y_3 − 2y_2 − y_2² + y_1 − 1,  −2y_3 + y_2 − 2y_3² − 3 ]^t evaluated at Y^(1)

         = [ −1.5 − 2(0.25) + 2,  −2.25 − 2(−1.5) − (−1.5)² + 0.25 − 1,  −2(−2.25) + (−1.5) − 2(−2.25)² − 3 ]^t

         = [ 0, −2.25, −10.125 ]^t

and

J(Y^(1)) = [ −2  1  0 ;  1  −2 − 2(−1.5)  1 ;  0  1  −2 − 4(−2.25) ] = [ −2  1  0 ;  1  1  1 ;  0  1  7 ]
Again,

J(Y^(1))X^(1) = −F(Y^(1))    (3.83)

and we obtain the vector X^(1) = J(Y^(1))^(−1)(−F(Y^(1))) by solving

[ −2  1  0 ] [x_1^(1)]     [    0    ]
[  1  1  1 ] [x_2^(1)] = − [  −2.25  ]
[  0  1  7 ] [x_3^(1)]     [ −10.125 ]

Solving this system using an appropriate direct/indirect method, we get

X^(1) = [0.2961, 0.5921, 1.3618]^t

Again, using

Y^(2) = Y^(1) + X^(1)    (3.84)

we get

Y^(2) = [0.2500, −1.5000, −2.2500]^t + [0.2961, 0.5921, 1.3618]^t = [0.5461, −0.9079, −0.8882]^t

We continue computing Y^(3), Y^(4), Y^(5), ... in the same way until successive iterates agree to the desired accuracy.
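The whole procedure, discretize, form the nonlinear system, iterate Newton, can be packaged as in the following Python sketch (names are our own; the Jacobian of the discrete system is tridiagonal, so each Newton step is a tridiagonal solve). Because the h = 1 grid of Example 3.15 is very coarse, the sketch is demonstrated instead on the BVP of Example 3.14, y'' = y, where f is linear in y, so a single Newton step reproduces y_1 = 0.375, y_2 = 1.125:

```python
def newton_fdm(f, dfdy, a, b, alpha, beta, n, tol=1e-10, itmax=25):
    # Solve y'' = f(x, y), y(a)=alpha, y(b)=beta on n interior points by Newton's
    # method on G_i(Y) = y_{i+1} - 2 y_i + y_{i-1} - h^2 f(x_i, y_i) = 0.
    h = (b - a) / (n + 1)
    x = [a + (i + 1) * h for i in range(n)]
    y = [alpha + (beta - alpha) * (xi - a) / (b - a) for xi in x]  # linear initial guess
    for _ in range(itmax):
        G = []
        for i in range(n):
            left = y[i - 1] if i > 0 else alpha
            right = y[i + 1] if i < n - 1 else beta
            G.append(right - 2 * y[i] + left - h * h * f(x[i], y[i]))
        if max(abs(g) for g in G) < tol:
            break
        # Jacobian is tridiagonal: diagonal -2 - h^2 df/dy, off-diagonals 1.
        # Solve J d = -G by the Thomas algorithm, then update y <- y + d.
        diag = [-2 - h * h * dfdy(x[i], y[i]) for i in range(n)]
        rhs = [-g for g in G]
        for i in range(1, n):
            m = 1.0 / diag[i - 1]
            diag[i] -= m
            rhs[i] -= m * rhs[i - 1]
        d = [0.0] * n
        d[-1] = rhs[-1] / diag[-1]
        for i in range(n - 2, -1, -1):
            d[i] = (rhs[i] - d[i + 1]) / diag[i]
        y = [yi + di for yi, di in zip(y, d)]
    return x, y

# y'' = y, y(0) = 0, y(3) = 3, h = 1 (Example 3.14)
x, y = newton_fdm(lambda t, u: u, lambda t, u: 1.0, 0, 3, 0, 3, 2)
```

For a genuinely nonlinear f the same loop simply takes more iterations, and convergence depends on the initial guess and step size, as in any Newton iteration.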

Self check exercise 3.2. 1. Solve

y'' − (3 + x/2)y² = x,  y(1) = −2,  y(3) = 1

with h = 1, 0.5, 0.25 and 0.01 and compare the results.

2. Given the BVP

y'' = 4(y − x),  0 ≤ x ≤ 1,  y(0) = 1,  y(1) = 2,  h = 0.2,

then:

i. show that this BVP has a unique solution;

ii. solve it using the shooting method;

iii. solve it using the finite difference method.

Unit Summary

• An equation that can be written in the form

dy/dx = f(x, y),  y(x_0) = y_0

is called a first order initial value differential equation with independent
variable x and dependent variable y.

• A differential equation involving one or more derivatives of a depen-


dent variable with respect to only one variable is called an ordi-
nary differential equation (ODE). But, if the number of independent
variables is two or more then it is called a partial differential equa-
tion(PDE).

• Solutions to differential equations may be required to satisfy certain
conditions; such conditions are called initial conditions if
they are given at only one point of the independent variable.

 Conditions given at more than one point of the independent vari-


able are called boundary conditions.

 The problem of solving an nth order ordinary differential equation
together with initial conditions is called an initial value problem (IVP).

 The problem of solving an nth order ordinary differential equation
together with boundary conditions is called a boundary value problem (BVP).

• Differential equation problems can be classified into two kinds: initial value
problems and boundary value problems.

• We can use Euler's method, the Taylor series method, the Runge-Kutta
method, etc. to solve initial value problems, and the shooting method,
the finite difference method, etc. to solve boundary value problems.

Review Exercise 3

1. Find y(4.2) for

dy/dx = 1/(x² + 1),  y(4) = 4,

using Taylor's series method with h = 0.1.

2. Find y(1) by using Euler's method for the differential equation

dy/dx = −y/(1 + x),

when y(0.3) = 2 and h = 0.1.

3. Given dy/dx = x² + y, y(0) = 1, evaluate y(0.02) and y(0.04) using Euler's method with h = 0.01.

4. Solve the differential equation

dy/dx = x − y²,  y(0) = 1,

at x = 0.2 and x = 0.4 correct to 3 d.p. by the Runge-Kutta fourth order
method with h = 0.1.

5. Using the R-K4 method, approximate y at x = 0.1 and x = 0.2 for

dy/dx = x + y,  y(0) = 1.

6. Solve

y'' − 2y' + 2y = e^(2x) sin x,  0 ≤ x ≤ 1,  with y(0) = −0.4, y'(0) = −0.6, h = 0.1,

and compute y_2.

7. Using the non-linear shooting method, solve

y'' = y³,  −1 ≤ x ≤ 0,  y(−1) = 1/2,  y(0) = 1/3,

and compare the result to the exact solution y(x) = 1/(x + 3).

8. Consider the boundary value problem

y'' = y' + 2y + cos x,  0 ≤ x ≤ π/2,  y(0) = −0.3,  y(π/2) = −0.1.

Approximate the solution using the finite difference method with:

i. h = 0.5

ii. h = 0.25

iii. Which value of h leads to the more accurate approximation? Give your justification.

References

1. Richard L. Burden, Numerical Analysis, ninth edition.

2. Rao V. Dukkipati, Numerical methods, 2010.

3. G. Shanker Rao, Numerical Analysis, revised third edition, 2006.

4. PETER V. O’NEIL, Advanced Engineering Mathematics.

4 Eigenvalue Problems (8 Hrs)

Objective

At the end of this chapter students will be able to:

• Define an eigenvalue and eigenvector.

• Compute the eigenvalues and eigenvectors of a given matrix using the

characteristic polynomial.

• Find the dominant eigenvalue and eigenvector of a given matrix using


the power method.

• Obtain the eigenvalue and eigenvector of a given matrix using QR


method.
• Find the eigenvalue and eigenvector of a given matrix using House-
holder method.

Introduction

Many problems in different areas of study, and many research results, lead
to eigenvalue problems. To solve such problems we can form the characteristic
polynomial and find its roots, which are the eigenvalues. But we know that
the characteristic polynomial involves a determinant computation, and it is
difficult to compute the determinant of a large matrix. In this case it is
preferable to use numerical techniques to compute the eigenvalues of the
matrix. In this chapter we will discuss some numerical methods that are
applied to find eigenvalues, such as the power method, the QR method, the
Householder method, etc.

4.1 Eigenvalues and Eigenvectors

Objectives

At the end of this section students must be able to:

? Define an eigenvalue for a given matrix.

? Define an eigenvector for a given matrix.

? Define the dominant eigenvalue.

? Define the dominant eigenvector.

? Find the dominant eigenvalue.

? Find the dominant eigenvector using power method.

? Find all eigenvalues using QR method.

Brainstorming Question 4.1. .

1. Define eigenvalues and eigenvectors.

2. What is the dominant eigenvalue?

3. What is the dominant eigenvector?


 
4. Find the dominant eigenvalue of the matrix A = [−1 2 4; 5 1 0; 2 1 3].

5. Find all eigenvalues and corresponding eigenvectors of the matrix A = [0 2 4; 1 1 0; 2 1 3].

6. Given the matrix A = [1 2 4 5; 4 3 1 0; 1 4 5 7; 6 2 −1 −2], then:

i. find all eigenvalues;

ii. find all eigenvectors;

iii. find the dominant eigenvalue;

iv. find the dominant eigenvector.

Definition 4.1. Let A be an n × n matrix. A scalar λ is called an eigenvalue
of A if there is a non-zero vector X such that

AX = λX    (4.1)

Such a vector X is called an eigenvector of the matrix A corresponding to λ.
   
Example 4.1. Show that X = [2; 1] is an eigenvector of A = [3 2; 3 −2] corresponding to λ = 4.

Solution 4.1. To show this, we check the equality AX = λX:

AX = [3 2; 3 −2][2; 1] = [8; 4] = 4[2; 1] = λX

Therefore, X = [2; 1] is an eigenvector of the matrix [3 2; 3 −2] corresponding to λ = 4.
   
1 3 −2 0
   
Exercise 4.1. Show that X = 1 is an eigen vector of A= −2 3 0
   
   
0 0 0 5
corresponding to λ = 1.

 Question: How do we find the eigenvalues and eigenvectors of a given square matrix?

To answer this question, use equation (4.1) above and rewrite it as follows:

AX − λX = 0    (4.2)

⇒ (A − λI)X = 0    (4.3)

where I is the identity matrix. Equation (4.3) is called the eigenvalue
equation. Since we require a non-zero vector X, equation (4.3) has a
non-trivial solution exactly when the matrix A − λI is singular, that is, when

det(A − λI) = 0    (4.4)
 Thus, equation (4.4) is called the characteristic equation. Conversely, if
det(A − λI) = 0, then AX − λX = 0 has a non-trivial solution X. The
polynomial det(A − λI) is called the characteristic polynomial.

 Hence, the eigenvalues of A are the roots of the characteristic
polynomial of A, and once the eigenvalues of a matrix A have been found,
we can find the eigenvectors.
Example: Find the eigenvalues and eigenvectors of the matrix A = [2 1; 1 2].
1 2

Exercise 4.2. Find the eigenvalues and corresponding eigenvectors of the matrix A = [1 −3 3; 3 −5 3; 6 −6 4].
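For a 2 × 2 matrix the characteristic polynomial is λ² − (trace)λ + det, so the eigenvalues follow from the quadratic formula. A small Python check for the example matrix A = [2 1; 1 2] above (plain standard library, no linear algebra package):

```python
import math

# Characteristic polynomial of a 2x2 matrix [[a, b], [c, d]]:
# det(A - λI) = λ^2 - (a + d) λ + (ad - bc)
a, b, c, d = 2.0, 1.0, 1.0, 2.0
tr, det = a + d, a * d - b * c
disc = math.sqrt(tr * tr - 4 * det)      # discriminant (real for symmetric A)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
print(lam1, lam2)                        # the two eigenvalues of [[2, 1], [1, 2]]

# Check the eigenvector equation AX = λX for X = (1, 1) and the larger eigenvalue:
X = (1.0, 1.0)
AX = (a * X[0] + b * X[1], c * X[0] + d * X[1])
assert all(abs(AX[i] - lam1 * X[i]) < 1e-12 for i in range(2))
```

Here the characteristic polynomial is λ² − 4λ + 3 = (λ − 3)(λ − 1), giving λ = 3 with eigenvector (1, 1) and λ = 1 with eigenvector (1, −1).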

Theorem 4.1. If A is an n × n matrix, then the following are equivalent.

a. λ is an eigenvalue of A.

b. The system of equation (A − λI)X = 0 has non trivial solutions.

c. There is a non-zero vector X such that AX = λX.

d. λ is the solution of the characteristics equation det(A − λI) = 0.

Note:

• We can determine the eigenvalues of an n × n matrix by constructing

the characteristic polynomial p(λ) = det(A − λI) and then determining
the zeros of p(λ); these roots are the eigenvalues of the given matrix.

• Finding the determinant while computing eigenvalues is computationally
expensive for a large matrix. In this case we are able to use iterative
methods like the power method, the inverse power method, the Householder
method and the QR method.

Note: If A is a diagonal matrix, an upper triangular matrix or a lower triangular
matrix, then its eigenvalues are its diagonal elements.

At the end of this section students must be able to:

? Define the dominant eigenvalue.

? Define the dominant eigenvector.

? Find the dominant eigenvalue using power method.

? Find the dominant eigenvector using power method.

? Find all eigenvalues using QR method.

Brainstorming Question 4.2. Given the matrix A = [1 2 4 5; 4 3 1 0; 1 4 5 7; 6 2 −1 −2], find all its eigenvalues using the QR method.

4.2 Iterative Methods for Eigenvalues

• To determine the eigenvalues of an n × n matrix A, we construct the

characteristic polynomial p(λ) = det(A − λI) and then determine the
zeros of p(λ); these roots are the eigenvalues.

• But finding the determinant of an n × n matrix A is computationally


expensive, if the matrix A is very large.

• In this case iterative methods for finding eigenvalues and correspond-


ing eigenvector are needed.

• Some such methods are the power method, the QR method, the Householder

method, etc.

4.2.1 Power Method

The power method is an iterative method that we use to determine the

dominant eigenvalue (i.e. the eigenvalue with the largest magnitude). It
also produces a dominant eigenvector corresponding to that eigenvalue.

Definition 4.2 (Dominant eigenvalue and dominant eigenvector). Let λ_1,
λ_2, λ_3, ..., λ_n be the eigenvalues of an n × n matrix A. λ_1 is called the dominant
eigenvalue if

|λ_1| > |λ_i|,  i = 2, 3, ..., n,

and an eigenvector corresponding to λ_1 is called a dominant eigenvector.

Note: Not every square matrix has a dominant eigenvalue.

Procedure

1. Assume a trial vector X^(0) for the eigenvector X and designate one
component of X as the unity component.

2. Perform the matrix multiplication AX^(0) = Y^(1).

3. Scale Y^(1) so that the unity component remains unity: Y^(1) = λ^(1)X^(1).

4. Repeat steps 2 and 3 until the iteration converges.

General Algorithm

AX^(k) = Y^(k+1) = λ^(k+1)X^(k+1)    (4.5)

Stopping Criterion

|λ^(k+1) − λ^(k)| < ε    (4.6)

Example4.2. Find the dominant eigenvalue and dominant eigenvector for:


−2 −3
a. A =   use X (0) = [1, 1, 1]t
6 7
   
Solution 4.2. Given A = [−2 −3; 6 7] and X^(0) = [1, 1]^t, apply the power
method, scaling so that the second (unity) component stays 1.

First iteration:
AX^(0) = [−5, 13]^t = 13[−0.3846, 1]^t, so λ^(1) = 13 and X^(1) = [−0.3846, 1]^t.

Second iteration:
AX^(1) = [−2.2308, 4.6923]^t = 4.6923[−0.4754, 1]^t, so λ^(2) = 4.6923 and X^(2) = [−0.4754, 1]^t.

Third iteration:
AX^(2) = [−2.0492, 4.1475]^t = 4.1475[−0.4941, 1]^t, so λ^(3) = 4.1475 and X^(3) = [−0.4941, 1]^t.

Fourth iteration:
AX^(3) = [−2.0119, 4.0356]^t = 4.0356[−0.4985, 1]^t, so λ^(4) = 4.0356 and X^(4) = [−0.4985, 1]^t.

Fifth iteration:
AX^(4) = [−2.0029, 4.0088]^t = 4.0088[−0.4996, 1]^t, so λ^(5) = 4.0088 and X^(5) = [−0.4996, 1]^t.

Now |λ^(5) − λ^(4)| = 0.0268 < ε = 10^(−1), so we stop. Hence we obtain the
approximate dominant eigenvalue at the fifth iteration: λ ≈ 4.0088 (the exact
value is 4), with corresponding dominant eigenvector X ≈ [−0.4996, 1]^t.
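The procedure of Solution 4.2 can be sketched directly in code. The following Python version (our own function names) scales on the last component and uses the stopping criterion |λ^(k+1) − λ^(k)| < ε:

```python
def power_method(A, x0, tol=1e-1, itmax=50):
    # Power method with scaling on the last ("unity") component:
    # A X^(k) = Y^(k+1) = λ^(k+1) X^(k+1).
    x = list(x0)
    lam_old = 0.0
    for _ in range(itmax):
        y = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]
        lam = y[-1]               # scale so the last component stays 1
        x = [yi / lam for yi in y]
        if abs(lam - lam_old) < tol:
            return lam, x
        lam_old = lam
    return lam, x

A = [[-2.0, -3.0], [6.0, 7.0]]
lam, x = power_method(A, [1.0, 1.0])
print(lam, x)   # approaches the dominant eigenvalue 4 and eigenvector [-0.5, 1]
```

With ε = 10⁻¹ this stops at the fifth iteration with λ ≈ 4.0088 and X ≈ [−0.4996, 1]^t, exactly as in the worked solution above.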

4.3 QR Factorization Method and Householder


Method

QR method finds all eigenvalues of the given square matrix A. The QR


method for calculating the eigenvalues of a square matrix A is the fact
that A can be written as A = QR, where Q is an orthogonal matrix and R
is an upper triangular matrix.

• Because the matrix A is written in the factorized form A = QR, the method is called the QR method.
Procedures
Generate sequences of matrices A_1, A_2, A_3, ..., A_m; Q_1, Q_2, Q_3, ..., Q_m; R_1, R_2, R_3, ..., R_m such that:

1. Set A_1 = A and factorize A_1 = Q_1R_1.

2. Set A_2 = R_1Q_1 and factorize A_2 = Q_2R_2.

3. Set A_3 = R_2Q_2 and factorize A_3 = Q_3R_3.

4. Repeat step 3 up to the k-th step, so that A_k = R_{k−1}Q_{k−1}, and take
the diagonal entries a_ii of A_k as the approximate eigenvalues λ_i^(k).

• A_k approaches an upper triangular matrix.

 Stopping Criterion:

|λ^(k) − λ^(k−1)|_∞ / |λ^(k)|_∞ < ε

 Note: To find the matrices Q and R in the QR factorization we form
symmetric orthogonal matrices P = I − 2VV^T with ||V|| = 1, using a
method called the Householder method.
Procedures of Householder method

• The Householder method begins by determining a transformation P_1 and
multiplying it on the left side of the matrix A, so that the first column of
P_1A has zeros below the diagonal:

P_1A = [ ã_11 ã_12 ... ã_1n ; 0 ã_22 ... ã_2n ; ... ; 0 ã_n2 ... ã_nn ]

• After this is done we find P_2, which produces zeros below the diagonal in
the second column of P_2P_1A as well.

• This process continues until we have

P_{n−1}P_{n−2}···P_1A = R = Q^TA  ⇒  Q = (P_{n−1}P_{n−2}···P_1)^T

where R is upper triangular.

• So the factorized form of the matrix A is A = QR.

 Question: How do we find P_k, k = 1, 2, 3, ..., n − 1?

• To construct P_k, we follow these steps:

1. Pull column k out of the current (partially reduced) matrix P_{k−1}···P_1A: (a_1k, a_2k, ..., a_nk)^t.

2. Normalize this column vector:

(1/(Σ_{i=1}^{n} a_ik²)^(1/2)) (a_1k, a_2k, ..., a_nk)^t = (d_1, d_2, ..., d_n)^t

3. Set D = −sign(d_k) (Σ_{i=k}^{n} d_i²)^(1/2).

4. Set V_1 = V_2 = ... = V_{k−1} = 0, evaluate V_k = ((1 − d_k/D)/2)^(1/2) and
P = −DV_k, then compute V_j = d_j/(2P) for j = k + 1, ..., n.

5. Write V = (0, ..., 0, V_k, V_{k+1}, ..., V_n)^t.

6. Evaluate P_k = I − 2VV^T.
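Steps 1-6 above can be sketched as follows (Python, our own function names; for a full QR factorization the same routine would be applied to each column of the partially reduced matrix in turn):

```python
import math

def householder_P(col, k):
    # Build P_k = I - 2 V V^T from column k (0-based) of the partially
    # reduced matrix, following steps 1-6 above.
    n = len(col)
    norm = math.sqrt(sum(a * a for a in col))
    d = [a / norm for a in col]                       # step 2: normalize
    D = -math.copysign(1.0, d[k]) * math.sqrt(sum(di * di for di in d[k:]))  # step 3
    V = [0.0] * n                                     # step 4: V_1 = ... = V_{k-1} = 0
    V[k] = math.sqrt(0.5 * (1 - d[k] / D))
    P = -D * V[k]
    for j in range(k + 1, n):
        V[j] = d[j] / (2 * P)
    # step 6: P_k = I - 2 V V^T
    return [[(1.0 if i == j else 0.0) - 2 * V[i] * V[j] for j in range(n)]
            for i in range(n)]

# First column of A = [2 -1; -1 2]:
P1 = householder_P([2.0, -1.0], 0)
print(P1)   # ≈ [[-0.8944, 0.4472], [0.4472, 0.8944]]
```

The resulting P_1 is symmetric and orthogonal, and applying it to the column (2, −1)^t zeros the entry below the diagonal, matching the worked example that follows.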

Example  4.3. Find


 all eigenvalues
  of
 the following
 matrices using QR meth-
2 −1 3 7 5 −2
ods. a.   b.   c.  
−1 2 4 4 −2 8
   
−1 2 3 2 1 1
   
d.  2 3 −2 e. 1 2 1 (U se  = 10−2 )
   
   
3 1 −1 1 1 2
 
2 −1
Solution 4.3. Here A=  . To find the eigenvalues of A using QR
−1 2
factorization method first factorize A=QR using householder transfarmation.
• To generate P_1 from A we consider the first column of A, i.e. (2, −1)^t, and normalize it:

(1/√(2² + 1²)) (2, −1)^t = (0.8944, −0.4472)^t = (d_1, d_2)^t

D = −sign(d_1)√(d_1² + d_2²) = −√((0.8944)² + (−0.4472)²) = −1

V_1 = √((1 − d_1/D)/2) = √((1 − 0.8944/(−1))/2) = 0.9732

P = −DV_1 = 0.9732

V_2 = d_2/(2P) = −0.4472/(2(0.9732)) = −0.2298

Now, V = (V_1, V_2)^t = (0.9732, −0.2298)^t and

P_1 = I − 2VV^T = [−0.8944 0.4472; 0.4472 0.8944]
   
Then P_1A = [−2.2360 1.7888; 0 1.3416] = R_1, and P_1 = Q_1^T ⇒ Q_1 = P_1^T = [−0.8944 0.4472; 0.4472 0.8944].

So the QR factorization of A is

A = Q_1R_1 = [−0.8944 0.4472; 0.4472 0.8944][−2.2360 1.7888; 0 1.3416]  (verify!)

Now, after factorizing A = QR by the Householder transformation, we use the QR
method to find all eigenvalues. Taking A_1 = A, Q = Q_1, R = R_1, we have A_1 = Q_1R_1 and

A_2 = R_1Q_1 = [2.7999 0.6000; 0.6000 1.2000]

1st iteration: λ^(1) = (2.7999, 1.2000)^t. Next, factorize A_2 = Q_2R_2 by a
Householder transformation. Consider the first column of A_2, i.e. (2.7999, 0.6000)^t, and normalize it:

(1/√((2.7999)² + (0.6000)²)) (2.7999, 0.6000)^t = (0.9777, 0.2095)^t = (d_1, d_2)^t

D = −sign(d_1)√(d_1² + d_2²) = −1

V_1 = √((1 − d_1/D)/2) = √((1 + 0.9777)/2) = 0.9944,  P = −DV_1 = 0.9944,  V_2 = d_2/(2P) = 0.1054

Now, V = (0.9944, 0.1054)^t and

P_1 = I − 2VV^T = [−0.9777 −0.2095; −0.2095 0.9778]

P_1A_2 = [−2.8633 −0.8380; 0 1.0477] = R_2,  Q_2 = P_1^T = [−0.9777 −0.2095; −0.2095 0.9778]

So the QR factorization of A_2 is A_2 = Q_2R_2, and

A_3 = R_2Q_2 = [2.9748 −0.2195; −0.2195 1.0243]  (verify!)

2nd iteration: λ^(2) = (2.9748, 1.0243)^t. Factorize A_3 = Q_3R_3 in the same way.
Its first column (2.9748, −0.2195)^t normalizes to (d_1, d_2)^t = (0.9973, −0.0736)^t, so

D = −1,  V_1 = √((1 + 0.9973)/2) = 0.9993,  P = −DV_1 = 0.9993,  V_2 = d_2/(2P) = −0.0368

Now, V = (0.9993, −0.0368)^t and

P_1 = I − 2VV^T = [−0.9973 0.0736; 0.0736 0.9973]

P_1A_3 = [−2.9829 0.2944; 0 1.0044] = R_3,  Q_3 = P_1^T = [−0.9973 0.0736; 0.0736 0.9973]

So the QR factorization of A_3 is A_3 = Q_3R_3, and

A_4 = R_3Q_3 = [2.9965 0.0741; 0.0741 1.0027]  (verify!)

3rd iteration: λ^(3) = (2.9965, 1.0027)^t. Now

|λ^(3) − λ^(2)|_∞ / |λ^(3)|_∞ = 0.0217/2.9965 ≈ 0.0072 < ε = 10^(−2)

Therefore we reach the required tolerance at the 3rd iteration, and the
approximate eigenvalues are λ_1 ≈ 2.9965 and λ_2 ≈ 1.0027 (the exact
eigenvalues of A are 3 and 1).
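As a sketch of the whole iteration, the following Python code (our own names) uses a Gram-Schmidt QR factorization, which agrees with the Householder construction above up to signs, and those signs cancel on the diagonal of A_{k+1} = R_kQ_k, together with the relative stopping criterion:

```python
import math

def qr_decompose(A):
    # Gram-Schmidt QR factorization: A = Q R, Q orthogonal, R upper triangular.
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = math.sqrt(sum(vi * vi for vi in v))
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def qr_eigenvalues(A, eps=1e-2, itmax=100):
    # QR iteration: A_{k+1} = R_k Q_k; the diagonal approaches the eigenvalues.
    lam_old = [A[i][i] for i in range(len(A))]
    for _ in range(itmax):
        Q, R = qr_decompose(A)
        A = [[sum(R[i][k] * Q[k][j] for k in range(len(A))) for j in range(len(A))]
             for i in range(len(A))]
        lam = [A[i][i] for i in range(len(A))]
        if max(abs(a - b) for a, b in zip(lam, lam_old)) / max(abs(a) for a in lam) < eps:
            return lam
        lam_old = lam
    return lam

print(qr_eigenvalues([[2.0, -1.0], [-1.0, 2.0]]))   # approaches [3, 1]
```

For A = [2 −1; −1 2] with ε = 10⁻² this stops at the third iteration with the diagonal near (3, 1), matching the hand computation above.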

Unit Summary

• In this chapter we discussed numerical methods that are applied to
compute eigenvalues and eigenvectors numerically.

• The dominant eigenvalue is the eigenvalue of largest magnitude among all
the eigenvalues of the matrix.

• A dominant eigenvector is an eigenvector corresponding to the dominant
eigenvalue.

• The power method produces the dominant eigenvalue and a dominant
eigenvector corresponding to it.

• A matrix can be written in the form A = QR, where Q is an orthogonal
matrix and R is an upper triangular matrix; the eigenvalue algorithm
built on this factorization is called the QR method.

• QR method produces all eigenvalues of the given matrix.

Review Exercises 4

 
1. Find the eigenvalues and eigenvectors of the matrix A = [1 −3 3; 3 −5 3; 6 −6 4] using the characteristic polynomial.

2. Calculate seven iterations of the power method and find the dominant eigenvalue and dominant eigenvector for the matrix A = [1 2 0; −2 1 2; 1 3 1].

3. Let A = [4 5; 6 5], then find the dominant eigenvalue and dominant eigenvector using the power method by performing four iterations.

4. Find the QR factorization of A = [−1 2 3; 2 3 −2; 3 1 −1].

5. Find all eigenvalues using the QR method for A = [4 −5; 2 −3].

6. Find all eigenvalues of the matrix A = [−1 −6 0; 2 7 0; 1 2 −1] using the Householder method.
References

1. S.R. Otto and J.P. Denier, An Introduction to Programming and Numerical Analysis in MATLAB.

2. John H. Mathews and Kurtis D. Fink, Numerical Methods Using MATLAB, third edition, 1999.

3. Richard L. Burden, Numerical Analysis, ninth edition.

4. Rao V. Dukkipati, Numerical Methods, 2010.

5. G. Shanker Rao, Numerical Analysis, revised third edition, 2006.

6. Peter V. O'Neil, Advanced Engineering Mathematics.

7. Jaan Kiusalaas, Numerical Methods in Engineering with MATLAB, 2005.

8. D. Houcque, Applications of MATLAB: Ordinary Differential Equations. Internal communication, Northwestern University, pages 1-12, 2009.

9. J. W. Demmel, Applied Numerical Linear Algebra. SIAM, 1997.

10. C. F. Van Loan, Introduction to Scientific Computing. Prentice Hall, 1997.

11. D. J. Higham and N. J. Higham, MATLAB Guide. SIAM, second edition, 2005.

12. K. R. Coombes, B. R. Hunt, R. L. Lipsman, J. E. Osborn, and G. J. Stuck, Differential Equations with MATLAB. John Wiley and Sons, 2004.

13. A. Gilat, MATLAB: An Introduction with Applications. John Wiley and Sons, 2004.

14. J. Cooper, A MATLAB Companion for Multivariable Calculus. Academic Press, 2001.

15. J. C. Polking and D. Arnold, ODE Using MATLAB. Prentice Hall, 2008.

16. D. Kahaner, C. Moler, and S. Nash, Numerical Methods and Software. Prentice-Hall, 1989.

17. S. J. Chapman, MATLAB Programming for Engineers. Thomson, 2019.

Answer for Review Exercises

Answer for self check exercise 1.1

1. f (x) = −0.4833x5 + 0.7917x4 + 5x3 − 0.7917x2 − 3.5167x + 17


f (−2.2) = 11.1195
f (−2.67) = 31.3932

2. f (0.005) = 1.0097

3. y(0.35) = 2.0218

4. f (x) = −0.4833x5 + 0.7917x4 + 5x3 − 0.7917x2 − 3.5167x + 17

5. f (2.5) = 65.1094 and y(2.98) = 80.6531

6. f (x) = −0.4833x5 + 0.7917x4 + 5x3 − 0.7917x2 − 3.5167x + 17

7. f (1.5) = 27.1563 and y(1.98) = 43.2042

8. Yes.

Answer for self check exercise

1. i. 0.8747
ii. 0.8761
iii. 1.0415
Answer for Review Exercise 1

1. sin500 = 0, 76604

2. f (3) = 20.267296

3. sin850 = 0, 966

4. P3 (x) = x3 − 2x2 + 1 and y(2.5) = 4.125

5. 22.0948

6. f (5)=75

7. 1.751.

8. 0.82212.

9. 0.0911

10. 1.8279

Answer for self check exercise 2.1

1. y = −1.6071x + 8.6429

2. y = 0.7x + 2.6

3. y = 0.4071x2 − 0.2986x + 0.5343

4. y = (6.309573445 × 10^28) × (4.786300932 × 10^(−6))^x

Answer for self check exercise 2.2

1. p(x) = 0.0316 + 0.8562x

2. p(x) = 0.9984x

3. p(x) = 0.0006 − 0.0046x + 0.0642x2 − 0.0056x3

Answer for Review Exercise 2

1. y = 0.72 + 1.33x

2. y = 6.24 + 1.45x

3. y = 3.3 + 1.45x

4. y = 2.125 − 1.70x + 0.875x2

5. y = 0.851 − 0.192x + 0.178x2

6. y = −1/2 + (3/5)x

7. y = 0.5012x1.9977

8. y = 9986x1.2

9. y = (1.3268).(2.9485x )

10. y = (701.94).x−0.1708

11. P V 1.425 = 0.997

Answer for self check Exercise 3.1

1. y(1.2) = 0.22 and y(1.5) = 0.7210

2. i. Euler’s method solution, y(0.3) = 0.0290


ii. R-K4 method solution, y(0.3) = 0.0424
iii. As we compared to the exact solution the Runge-Kutta fourth
order method is more accurate.

Answer for Review Exercise 3

1. 4.0098

2. 1.2632

3. 1.0202, 1.0408, 1.0619

4. 0.851, 0.780

5. 1.1103, 1.2428

6. -0.524

7. Left for the reader.

8. left for the reader.

Answer for Review Exercise 4

1. λ1 = 4, λ2 = −2,
 λ3 = −2 and eigenvector corresponding to
 λ1 = 4 is
−0.4082 −0.8103
   
X = −0.4082 and corresponding to λ2 = λ3 = −2 is X = −0.3185
   
   
−0.8165 0.4918

2. The dominant eigenvalue is λ ≈ 3 (λ = 2.9999 after seven iterations), with corresponding eigenvector X = [0.5; 0.5; 1.00].

3. The dominant eigenvalue is λ ≈ 10, with corresponding eigenvector X = [0.833; 1.00].
 
4. A = QR, where Q = [−0.2673 −0.7715 0.5774; 0.5344 −0.6173 −0.5774; 0.8018 0.1543 0.5773] and R = [3.7417 1.8706 −2.6726; 0 −3.2405 −1.2342; 0 0 2.3095].

5. λ1 = 2, λ2 = −1,.

6. λ1 = 5, λ2 = 1, λ3 = −1.
