UG B.sc. Mathematics 113 53 BSc-Mathematics Numerical Analysis CRC 2329
B.Sc. [Mathematics]
V - Semester
113 53
NUMERICAL ANALYSIS
Author:
Dr. N Datta, Retired Senior Professor, Department of Mathematics, Indian Institute of Technology, Kharagpur, West Bengal
Units: (1, 2.0 - 2.2, 3, 4.0 - 4.6, 5, 6.0 - 6.2, 7.0 - 7.2, 7.3 - 7.8, 9, 10.3 - 10.8, 11-13, 14.0 - 14.3)
Dr. Kalika Patrai, Associate Professor, Institute of Innovation in Technology & Management (IITM), Janakpuri, New Delhi
Units: (6.3, 7.2.1, 8.0 - 8.6, 10.0 - 10.2)
Vikas Publishing House, Units: (2.3 - 2.8, 4.7 - 4.12, 6.4 - 6.9, 8.7 - 8.13, 14.4 - 14.9)
All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.
Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has been obtained by its Authors from sources believed to be reliable and correct to the best of their knowledge. However, the Alagappa University, Publisher and its Authors shall in no event be liable for any errors, omissions or damages arising out of use of this information and specifically disclaim any implied warranties of merchantability or fitness for any particular use.
Work Order No.AU/DDE/DE12-27/Preparation and Printing of Course Materials/2020 Dated 12.08.2020 Copies - 500
SYLLABI-BOOK MAPPING TABLE
Numerical Analysis
Syllabi Mapping in Book
INTRODUCTION
Numerical analysis is the study of algorithms that find solutions to problems of continuous mathematics. It helps in obtaining approximate solutions while maintaining reasonable bounds on errors. Although numerical analysis has traditionally found application in engineering and the physical sciences, in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Ordinary differential equations are used for calculating the motion of heavenly bodies, i.e., planets, stars and galaxies; optimization methods arise in portfolio management; and stochastic differential equations are used to solve problems in medicine and biology. Airlines use sophisticated optimization algorithms to decide ticket prices, airplane and crew assignments and fuel needs. The basic aim of numerical analysis is to design and analyse techniques for computing approximate yet accurate solutions to hard problems.
In numerical analysis, two methods are involved, namely direct and iterative
methods. Direct methods compute the solution to a problem in a finite number of
steps whereas iterative methods start from an initial guess to form successive
approximations that converge to the exact solution only in the limit. Iterative methods
are more common than direct methods in numerical analysis. The study of errors
is an important part of numerical analysis.
This book, Numerical Analysis, is divided into four blocks, which are
further subdivided into fourteen units. This book provides a basic understanding
of the subject and helps to grasp its fundamentals. In a nutshell, it explains various
aspects, such as algebraic, transcendental and polynomial equations, the Newton-Raphson method, systems of linear equations, the Gauss-Jordan elimination method, the triangularisation method, solutions of linear systems, the Jacobi and Gauss-Seidel iterative methods, interpolation, finite differences, forward and backward differences, central differences, interpolating polynomials using finite differences, Lagrange and Newton interpolations, divided differences and their properties, central difference interpolation formulae (Gauss, Stirling, Bessel, Everett and Hermite), numerical differentiation, numerical integration, the trapezoidal rule, Simpson's 1/3 and 3/8 rules, Weddle's rule, Cotes' method, numerical solutions of ordinary differential equations (Taylor's series, Picard, Euler and Runge-Kutta methods), numerical solutions of ordinary differential equations using Runge-Kutta 2nd and 4th order methods, and predictor-corrector methods (Milne's and Adams' methods).
The book follows the Self-Instructional Mode (SIM) wherein each unit
begins with an ‘Introduction’ to the topic. The ‘Objectives’ are then outlined before
going on to the presentation of the detailed content in a simple and structured
format. ‘Check Your Progress’ questions are provided at regular intervals to test
the student’s understanding of the subject. ‘Answers to Check Your Progress
Questions’, a ‘Summary’, a list of ‘Key Words’, and a set of ‘Self-Assessment
Questions and Exercises’ are provided at the end of each unit for effective
recapitulation. This book provides a good learning platform to people who need to be skilled in the area of numerical analysis. Logically arranged
topics, relevant examples and illustrations have been included for better
understanding of the topics and for effective recapitulation.
Self-Instructional
8 Material
BLOCK - I
UNIT 1 ALGEBRAIC, TRANSCENDENTAL, AND POLYNOMIAL EQUATIONS
Structure
1.0 Introduction
1.1 Objectives
1.2 Root Finding
1.2.1 Methods for Finding Location of Real Roots
1.2.2 Methods for Finding the Roots—Bisection and Simple Iteration Methods
1.2.3 Newton-Raphson Methods
1.2.4 Secant Method
1.2.5 Regula-Falsi Methods
1.2.6 Roots of Polynomial Equations
1.2.7 Descartes' Rule
1.3 Curve Fitting
1.3.1 Method of Least Squares
1.4 Answers to Check Your Progress Questions
1.5 Summary
1.6 Key Words
1.7 Self Assessment Questions and Exercises
1.8 Further Readings
1.0 INTRODUCTION
In this unit, you will study about algebraic equations, transcendental equations, polynomial equations, the bisection method, the iteration method, the method of false position, and the Newton-Raphson method.
1.1 OBJECTIVES
For example, for the equation x³ = 15.2x + 13.2, it is easy to draw the graphs of y = x³ and y = 15.2x + 13.2. Then, the abscissa of the point(s) of intersection can be taken as the crude approximation(s) of the root(s).
Similarly, for the equation 1/x = log₁₀ x, we draw the graphs of y = 1/x and y = log₁₀ x. The point of intersection of the curves has its x-coordinate approximately equal to 2.5. Thus, the location of the root is 2.5.
Tabulation Method: In the tabulation method, a table of values of f(x) is made for values of x in a particular range. Then, we look for a change in sign in the values of f(x) for two consecutive values of x, and conclude that a real root lies between those values of x. This is justified by the following theorem on continuous functions.
Theorem 1.1: If f (x) is continuous in an interval (a, b), and f (a) and f(b) are of
opposite signs, then there exists at least one real root of f (x) = 0, between a and b.
Consider, for example, the equation f(x) = x³ − 8x + 5 = 0. Constructing the following table of x and f(x),

x       −4    −3    −2    −1    0     1     2     3
f(x)    −27    2    13    12    5    −2    −3     8

we see that f(x) changes sign in the intervals (−4, −3), (0, 1) and (2, 3), so a real root lies in each of these intervals.
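The tabulation step is easy to sketch in code. The following short Python sketch tabulates the same f(x) over integer values of x and reports the consecutive pairs where the sign changes (the function name `bracket_roots` is illustrative, not from the text):

```python
# Tabulation method: locate sign changes of f(x) = x^3 - 8x + 5
# over a table of integer values of x.
def f(x):
    return x**3 - 8*x + 5

def bracket_roots(f, xs):
    """Return pairs (a, b) of consecutive tabulated points where f changes sign."""
    return [(a, b) for a, b in zip(xs, xs[1:]) if f(a) * f(b) < 0]

print(bracket_roots(f, list(range(-4, 4))))  # [(-4, -3), (0, 1), (2, 3)]
```

Each reported pair is an interval containing at least one real root, by Theorem 1.1.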
1.2.2 Methods for Finding the Roots—Bisection and Simple Iteration Methods
Bisection Method: The bisection method is a root finding method which repeatedly bisects an interval and then selects a subinterval in which a root must lie for further processing. It is an extremely simple and robust method, but it is relatively slow. It is normally used for obtaining a rough approximation to a solution, which is then used as a starting point for more rapidly converging methods. When an interval contains more than one root, the bisection method can find one of them. When an interval contains a singularity, the bisection method converges to that singularity. The notion of the bisection method is based on the fact that a continuous function changes sign when it passes through zero. By evaluating the function at the middle of an interval and replacing whichever limit has the same sign, the bisection method halves the size of the interval in each iteration, closing in on the root.
Thus, the bisection method is the simplest method for finding a root of an equation. It needs two initial estimates xa and xb which bracket the root. Let fa = f(xa) and fb = f(xb), such that fa · fb ≤ 0. Evidently, if fa · fb = 0 then one or both of xa and xb must be a root of f(x) = 0. Figure 1.3 is a graphical representation of the bisection method showing two initial guesses xa and xb bracketing the root.

Fig. 1.3 Graph of the Bisection Method showing Two Initial Estimates xa and xb Bracketing the Root

The method is applicable when we wish to solve the equation f(x) = 0 for the real variable x, where f is a continuous function defined on an interval [a, b] and f(a) and f(b) have opposite signs.
The bisection method involves successive reduction of the interval in which an isolated root of an equation lies. This method is based upon an important theorem on continuous functions as stated below.

Theorem 1.2: If a function f(x) is continuous in the closed interval [a, b], and f(a) and f(b) are of opposite signs, i.e., f(a) f(b) < 0, then there exists at least one real root of f(x) = 0 between a and b.
The bisection method starts with two guess values x0 and x1 with f(x0) . f(x1) < 0. The interval [x0, x1] is bisected by the point x2 = (x0 + x1)/2, and we compute f(x2). If f(x2) = 0, then x2 is a root. Otherwise, we check the sign of f(x2): if f(x0) . f(x2) < 0, then the root lies in the interval (x0, x2); otherwise f(x1) . f(x2) < 0, and the root lies in the interval (x2, x1).
The sub-interval in which the root lies is again bisected and the above process is
repeated until the length of the sub-interval is less than the desired accuracy.
The bisection method is also termed a bracketing method, since it successively reduces the gap between the two ends of an interval surrounding the real root, i.e., it brackets the real root.
The algorithm given below clearly shows the steps to be followed in finding a
real root of an equation, by bisection method to the desired accuracy.
Algorithm: Finding a root using the bisection method.
Step 1: Define the equation, f(x) = 0
Step 2: Read epsilon, the desired accuracy
Step 3: Read two initial values x0 and x1 which bracket the desired root
Step 4: Compute y0 = f(x0)
Step 5: Compute y1 = f(x1)
Step 6: Check if y0 y1 < 0, then go to Step 7
        else go to Step 3
Step 7: Compute x2 = (x0 + x1)/2
Step 8: Compute y2 = f(x2)
Step 9: Check if y0 y2 > 0, then set x0 = x2
        else set x1 = x2
Step 10: Check if |(x1 − x0)/x1| > epsilon, then go to Step 7
Step 11: Write x2, y2
Step 12: End
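As a sketch, the steps above translate into Python as follows; the relative-error test mirrors Step 10 (and assumes x1 ≠ 0, which holds for the bracket used below):

```python
def bisect(f, x0, x1, eps=1e-6, max_iter=100):
    """Bisection method: x0 and x1 must bracket a root, i.e., f(x0)*f(x1) < 0."""
    y0, y1 = f(x0), f(x1)
    if y0 * y1 > 0:
        raise ValueError("initial estimates do not bracket a root")
    x2 = (x0 + x1) / 2
    for _ in range(max_iter):
        x2 = (x0 + x1) / 2
        y2 = f(x2)
        if y0 * y2 > 0:        # root lies in (x2, x1)
            x0, y0 = x2, y2
        else:                  # root lies in (x0, x2)
            x1, y1 = x2, y2
        if abs((x1 - x0) / x1) <= eps:
            break
    return x2

# Example 1.2: smallest positive root of x^3 - 9x + 1 = 0
root = bisect(lambda x: x**3 - 9*x + 1, 0, 1)
print(round(root, 2))  # 0.11
```

The loop bound `max_iter` guards against a bad tolerance; each pass halves the bracket, so 100 iterations are far more than needed here.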
Next, we give the flowchart representation of the above algorithm to get a better understanding of the method. The flowchart also helps in easy implementation of the method in a computer program.
Example 1.2: Find the location of the smallest positive root of the equation x³ − 9x + 1 = 0 and compute it by the bisection method, correct to two decimal places.

Solution: To find the location of the smallest positive root we tabulate the function f(x) = x³ − 9x + 1 below.

x       0     1     2     3
f(x)    1    −7    −9     1

We observe that the smallest positive root lies in the interval [0, 1]. The computed values for the successive steps of the bisection method are given in the table.

n    x0         x1       x2           f(x2)
1    0          1        0.5          −3.37
2    0          0.5      0.25         −1.23
3    0          0.25     0.125        −0.123
4    0          0.125    0.0625        0.437
5    0.0625     0.125    0.09375       0.155
6    0.09375    0.125    0.109375      0.016933
7    0.109375   0.125    0.1171875    −0.053

From the above results, we conclude that the smallest root correct to two decimal places is 0.11.
Simple Iteration Method: A root of an equation f(x) = 0 is determined by the method of simple iteration by successively computing better and better approximations of the root. We first rewrite the equation in the form,

x = g(x)    (1.4)

Then, we form the sequence {xn} starting from the guess value x0 of the root and computing successively,

xn+1 = g(xn), for n = 0, 1, 2,...
Evidently, since l < 1, the right hand side tends to zero, and thus it follows that the sequence {xn} converges to the root. This completes the proof.
Order of Convergence: The order of convergence of an iterative process is determined in terms of the errors en and en+1 in successive iterations. An iterative process is said to have kth order of convergence if

lim (n → ∞) |en+1| / |en|^k = M

where M is a finite number. Roughly speaking, the error in any iteration is proportional to the kth power of the error in the previous iteration.
Evidently, the simple iteration discussed in this section has its order of
convergence 1.
The above iteration is also termed fixed point iteration, since it determines the root as the fixed point of the mapping defined by x = g(x).
Algorithm: Computation of a root of f(x) = 0 by linear iteration.
Step 1: Define g(x), where f(x) = 0 is rewritten as x = g(x)
Step 2: Input x0, epsilon, maxit, where x0 is the initial guess of the root, epsilon is the accuracy desired and maxit is the maximum number of iterations allowed
Step 3: Set i = 0
Step 4: Set x1 = g(x0)
Step 5: Set i = i + 1
Step 6: Check, if |(x1 − x0)/x1| < epsilon, then print 'root is', x1 and go to Step 9
Step 7: Check, if i < maxit, then set x0 = x1 and go to Step 4
Step 8: Write 'No convergence after', maxit, 'iterations'
Step 9: End
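The algorithm above can be sketched directly in Python (the name `fixed_point` is illustrative); the usage line reproduces Example 1.4 below, where the convergent form is x = 1/√(1 + x):

```python
def fixed_point(g, x0, eps=1e-6, maxit=100):
    """Simple (fixed-point) iteration: x_{n+1} = g(x_n)."""
    for _ in range(maxit):
        x1 = g(x0)
        if abs((x1 - x0) / x1) < eps:   # relative-error stopping test (Step 6)
            return x1
        x0 = x1
    raise RuntimeError("no convergence after %d iterations" % maxit)

# Root of x^3 + x^2 - 1 = 0 via the convergent form x = 1/sqrt(1 + x)
root = fixed_point(lambda x: 1.0 / (1.0 + x) ** 0.5, 1.0)
print(round(root, 5))  # 0.75488
```

Raising an exception when `maxit` is exhausted plays the role of Step 8.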
Example 1.3: In order to compute a real root of the equation x³ − x − 1 = 0, near x = 1, by iteration, determine which of the following iterative functions can be used to give a convergent sequence.

Solution: For the form x = √(1 + 1/x), we find |g′(1)| = 1/(2√2) < 1. Hence, the form x = √(1 + 1/x) would give a convergent sequence of iterations.
Example 1.4: Compute the real root of the equation x³ + x² − 1 = 0, correct to five significant digits, by the iteration method.

Solution: The equation has a real root between 0 and 1, since f(x) = x³ + x² − 1 has opposite signs at 0 and 1. For using iteration, we first rewrite the equation in different forms x = g(x) and examine |g′(x)| in (0, 1). For the first two forms, |g′(x)| > 1 somewhere in (0, 1), so they cannot be used. Finally, consider the form

x = 1/√(1 + x)

Here |g′(x)| < 1 for all x in (0, 1). Thus, this form can be used to generate a convergent sequence for finding the root.

We start the iteration xn+1 = 1/√(1 + xn) with x0 = 1. The results of successive iterations are,

x1 = 0.70711, x2 = 0.76537, x3 = 0.75236, x4 = 0.75541,
x5 = 0.75476, x6 = 0.75490, x7 = 0.75488, x8 = 0.75488

Hence, the root correct to five significant digits is 0.75488.
Next, consider computing the root of the equation x³ − 9x + 1 = 0 which lies in [2, 3]. The equation may be rewritten in the iterative forms,

(i) x = (x³ + 1)/9    (ii) x = 9/x − 1/x²    (iii) x = √(9 − 1/x)

In case of (i), g′(x) = x²/3 and |g′(x)| > 1 for x in [2, 3]. Hence it will not give rise to a convergent sequence.

In cases (ii) and (iii), |g′(x)| < 1 in the neighbourhood of the root. Thus, the forms (ii) and (iii) would give convergent sequences for finding the root in [2, 3].
We start the iterations taking x0 = 2 in the iteration scheme (iii). The results of successive iterations are,

x1 = 2.91548, x2 = 2.94228, x3 = 2.94281, x4 = 2.94282

Thus, the root can be taken as 2.9428, correct to four decimal places.
The derivative or slope f′(xn) can be approximated numerically as follows:

f′(xn) = [f(xn + Δx) − f(xn)] / Δx
To derive the formula for this method, we consider a Taylor's series expansion of f(x0 + h), x0 being an initial guess of a root of f(x) = 0 and h a small correction to the root:

f(x0 + h) = f(x0) + h f′(x0) + (h²/2) f″(x0) + ...

Neglecting the terms in h² and higher powers, f(x0 + h) = 0 gives,

h = − f(x0)/f′(x0)

i.e., the improved approximation is x1 = x0 − f(x0)/f′(x0). Repeating the process,

x2 = x1 − f(x1)/f′(x1)
x3 = x2 − f(x2)/f′(x2)
... ... ...
xn+1 = xn − f(xn)/f′(xn)    (1.13)
For computing √a, the equation is x² − a = 0 and we have,

xn+1 = xn − (xn² − a)/(2xn)

i.e., xn+1 = (1/2)(xn + a/xn), for n = 0, 1, 2,...
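The square-root scheme is easy to test in code. This sketch iterates xn+1 = (xn + a/xn)/2 until successive iterates agree to a relative tolerance (the function name is illustrative):

```python
def newton_sqrt(a, x0=1.0, eps=1e-12, maxit=100):
    """Newton-Raphson for x^2 - a = 0: x_{n+1} = (x_n + a/x_n)/2."""
    x = x0
    for _ in range(maxit):
        x_new = 0.5 * (x + a / x)
        if abs(x_new - x) <= eps * abs(x_new):   # successive iterates agree
            return x_new
        x = x_new
    return x

print(newton_sqrt(2.0))  # ≈ 1.41421356
```

Because the convergence is quadratic, the number of correct digits roughly doubles at every step, so a dozen iterations are ample even from a poor starting value.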
Solution: The value a^(1/k) is the positive root of x^k − a = 0. Thus, the iterative scheme for evaluating a^(1/k) is,

xn+1 = xn − (xn^k − a)/(k xn^(k−1))
We have,

x1 = (1/3)[2 × 1.25 + 2/(1.25)²] = 1.26
x2 = 1.259921, x3 = 1.259921

Hence, ∛2 = 1.2599, correct to five significant digits.
Example 1.11: Find by the Newton-Raphson method the real root of 3x − cos x − 1 = 0, correct to three significant figures.

Solution: The location of the real root of f(x) = 3x − cos x − 1 = 0 is [0, 1], since f(0) = −2 and f(1) > 0. We choose x0 = 0 and use the Newton-Raphson scheme of iteration,

xn+1 = xn − (3xn − cos xn − 1)/(3 + sin xn)

The results for successive iterations are,

x1 = 0.667, x2 = 0.6075, x3 = 0.6071

Thus, the root is 0.607 correct to three significant figures.
Example 1.12: Find a real root of the equation x^x + 2x − 6 = 0, correct to four significant digits.

Solution: Taking f(x) = x^x + 2x − 6, we have f(1) = −3 < 0 and f(2) = 2 > 0. Thus, a root lies in [1, 2]. Choosing x0 = 2, we use the Newton-Raphson iterative scheme,

xn+1 = xn − (xn^xn + 2xn − 6)/(xn^xn (1 + ln xn) + 2)

Computing successive iterations, the root is found to be 1.723, correct to four significant figures.
Order of Convergence: We consider the order of convergence of the Newton-Raphson method given by the formula,

xn+1 = xn − f(xn)/f′(xn)

Let us assume that the sequence of iterations {xn} converges to the root ξ. Writing en = xn − ξ and expanding the relation f(ξ) = 0 by Taylor's series about xn, we obtain

en+1 = [f″(ξ)/(2 f′(ξ))] en² + ...

Hence, using the condition for convergence of the linear iteration method, we can write

|en+1| ≈ M |en|², where M = |f″(ξ)/(2 f′(ξ))|    (1.15)

Thus, the Newton-Raphson method has second order, i.e., quadratic, convergence.
This can be rewritten as,

x2 = [x0 f(x1) − x1 f(x0)] / [f(x1) − f(x0)]

The secant formula in general form is,

xn+1 = xn − f(xn) (xn − xn−1) / [f(xn) − f(xn−1)]
The iterative formula is equivalent to the one for the Regula-Falsi method. The distinction between the secant method and the Regula-Falsi method lies in the fact that, unlike in the Regula-Falsi method, the two initial guess values need not bracket a root, and the bracketing of the root is not checked during successive iterations in the secant method. Thus, the secant method may not always give rise to a convergent sequence for finding the root. The geometrical interpretation of the method is shown in Figure 1.5.
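The general secant formula can be sketched as follows (an illustrative implementation; note that, as the text warns, nothing forces the iterates to stay near a root):

```python
import math

def secant(f, x0, x1, eps=1e-10, maxit=50):
    """Secant method; the initial guesses need not bracket the root."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        if f1 == f0:            # flat chord: cannot proceed
            break
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < eps * max(1.0, abs(x2)):
            return x2
        x0, f0 = x1, f1         # discard the older point, keep the newer two
        x1, f1 = x2, f(x2)
    return x1

# Root of 3x - cos x - 1 = 0 (compare Example 1.11)
root = secant(lambda x: 3*x - math.cos(x) - 1, 0.0, 1.0)
print(round(root, 3))  # 0.607
```

Only the two most recent points are retained at each step, which is what distinguishes this from the bracketing update of Regula-Falsi.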
Step 12: Set x1 = x2
Step 13: Go to Step 6
Step 14: Print 'Root =', x2
Step 15: Go to Step 17
Step 16: Print 'Iterations do not converge'
Step 17: Stop
Next, we compute f(x2) and determine the interval in which the root lies in the following manner. If (a) f(x2) and f(x1) are of opposite signs, then the root lies in (x2, x1). Otherwise, if (b) f(x0) and f(x2) are of opposite signs, then the root lies in (x0, x2). The next approximate root is determined by replacing x0 by x2 in the first case and x1 by x2 in the second case. The aforesaid process is repeated until the root is computed to the desired accuracy ε, i.e., until successive approximations differ by less than ε.
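The interval-update rule just described can be sketched in Python (an illustrative implementation; here the stopping test is on |f(x2)| rather than on the step size):

```python
import math

def regula_falsi(f, x0, x1, eps=1e-8, maxit=200):
    """Regula-Falsi: like the secant method, but the root stays bracketed."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 > 0:
        raise ValueError("f(x0) and f(x1) must have opposite signs")
    x2 = x0
    for _ in range(maxit):
        # Intersection of the chord through (x0, f0), (x1, f1) with the x-axis
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        f2 = f(x2)
        if abs(f2) < eps:
            break
        if f0 * f2 < 0:         # root lies in (x0, x2)
            x1, f1 = x2, f2
        else:                   # root lies in (x2, x1)
            x0, f0 = x2, f2
    return x2

# Root of x*log10(x) - 1.2 = 0 in [2, 3] (see the exercises)
root = regula_falsi(lambda x: x * math.log10(x) - 1.2, 2.0, 3.0)
print(round(root, 4))  # 2.7406
```

Unlike the secant sketch above, one endpoint of the bracket is replaced at every step, so the root can never escape the interval.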
Regula-Falsi methods can be geometrically interpreted by the following Figure
1.6.
x5 = 2.2748, f(x5) = −0.0529
x6 = 2.2773, f(x6) = −0.0316
x7 = 2.2788, f(x7) = −0.0028
x8 = 2.2792, f(x8) = −0.0022

The root correct to four significant figures is 2.279.
(v) pn(x) has a quadratic factor for each pair of complex conjugate roots. If α + iβ and α − iβ are the roots, then (x − α)² + β² is the corresponding quadratic factor.
(vi) There is a special method, known as Horner’s method of synthetic
substitution, for evaluating the values of a polynomial and its derivatives for
a given x.
Clearly, there are three changes of sign and hence the number of positive real roots is three or one. Thus, it must have a real root; in fact, every polynomial equation of odd degree has a real root.

We can also use Descartes' rule to determine the number of negative roots by finding the number of changes of sign in pn(−x). For the above equation, pn(−x) has two changes of sign. Thus, it has either two negative real roots or none.
Check Your Progress
1. The roots of an equation are computed in how many stages?
2. Define tabulation method.
3. State the procedure of bisection method.
4. How is the order of convergence of an iterative process determined?
5. State a property of the Newton-Raphson method.
For example, g(x) may be a polynomial of some degree or an exponential or logarithmic function. Here α, β, γ,... are parameters which are to be evaluated so that the curve y = g(x) fits the data well. A measure of how well the curve fits is called the goodness of fit.

In the case of least square fit, the parameters are evaluated by solving a system of normal equations, derived from the conditions to be satisfied so that the sum of the squared deviations of the estimated values from the observed values is minimum,

i.e., S = Σ (i = 1 to n) [fi − g(xi)]²    (1.18)

The conditions ∂S/∂α = 0, ∂S/∂β = 0, ...    (1.19)

These equations are called normal equations, solving which we get the parameters for the best approximate function g(x).

For fitting a straight line, we take g(x) = α + βx    (1.20)

We now employ the method of least squares to determine α and β so that S will be minimum. The normal equations are,

Σ fi = nα + β Σ xi    (1.21)
Self-Instructional
Material 23
And, Σ xi fi = α Σ xi + β Σ xi²    (1.22)

Solving these two equations,

β = [n Σ xi fi − (Σ xi)(Σ fi)] / [n Σ xi² − (Σ xi)²]    (1.25)

α = [Σ fi − β Σ xi] / n    (1.26)
It is clear that the normal equations form a system of linear equations in the unknown parameters a, b, c. The computation of the coefficients of the normal equations (1.27) can be made in a tabular form for desk computations, as shown below.
Example 1.14: Find the straight line fitting the following data:

xi    4       6       8       10      12
yi    13.72   12.90   12.01   11.14   10.31

Solution: Let y = a + bx be the straight line which fits the data. We have to minimise

S = Σ (i = 1 to 5) (yi − a − bxi)²

The normal equations are,

Σ yi = na + b Σ xi
Σ xi yi = a Σ xi + b Σ xi²

The sums are computed in the following table.

xi     yi      xi²    xi yi
4      13.72   16     54.88
6      12.90   36     77.40
8      12.01   64     96.08
10     11.14   100    111.40
12     10.31   144    123.72
Sum:   40  60.08  360  463.48

Thus, 60.08 = 5a + 40b and 463.48 = 40a + 360b, giving b = −0.429 and a = 15.448, so the fitted line is y = 15.448 − 0.429x.
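As a check on the arithmetic, the normal equations can be solved directly in a short Python sketch (the name `fit_line` is illustrative):

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a + b*x via the normal equations."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    # Solve: sy = n*a + b*sx  and  sxy = a*sx + b*sxx
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

a, b = fit_line([4, 6, 8, 10, 12], [13.72, 12.90, 12.01, 11.14, 10.31])
print(round(a, 3), round(b, 3))  # 15.448 -0.429
```

The closed-form expressions for a and b are just Equations (1.25) and (1.26) written for this data set.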
Next, fit a straight line to the data:

xi    60    61    62    63    64
yi    40    42    48    52    55

Solution: Let the straight line fitting the data be y = a + bx. The data values being large, we can use a change of variable by substituting u = x − 62 and v = y − 48.
Let v = A + Bu be a straight line fitting the transformed data, where the normal equations for A and B are,

Σ vi = 5A + B Σ ui
Σ ui vi = A Σ ui + B Σ ui²

(sums taken over i = 1 to 5).
The computation of the various sums is given in the table below,

xi    yi    ui    vi    ui vi    ui²
60    40    −2    −8    16       4
61    42    −1    −6    6        1
62    48    0     0     0        0
63    52    1     4     4        1
64    55    2     7     14       4
Sum:        0     −3    40       10
Thus, proceeding as in linear curve fitting, and noting that Σ ui = 0, the normal equations give A = Σ vi/5 = −0.6 and B = Σ ui vi / Σ ui² = 4, so that v = −0.6 + 4u, i.e., y = 4x − 200.6.    (1.32)
Step 2: Read (xi, yi) for i = 1, 2,..., n, the values of the data points
Step 3: Initialize the sums to be computed for the normal equations,
        i.e., sx = 0, sx2 = 0, sx3 = 0, sx4 = 0, sy = 0, sxy = 0, sx2y = 0
Step 4: Compute the sums, i.e., For i = 1 to n do
        Begin
            sx = sx + xi
            x2 = xi × xi
            sx2 = sx2 + x2
            sx3 = sx3 + xi × x2
            sx4 = sx4 + x2 × x2
            sy = sy + yi
            sxy = sxy + xi × yi
            sx2y = sx2y + x2 × yi
        End
Step 5: Form the coefficients {aij} matrix of the normal equations, i.e.,
Step 8: Print values of a, b, c (the coefficients of the parabola)
Step 9: Print the table of values of xk, yk and ypk, where ypk = a + bxk + cxk²
Step 10: Stop.
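Steps 3–10 can be collected into one Python sketch; the 3×3 normal system is solved here by a small Gaussian elimination, the same idea developed in Unit 2 (names are illustrative):

```python
def fit_parabola(xs, ys):
    """Least-squares parabola y = a + b*x + c*x^2 via the normal equations."""
    n = len(xs)
    sx   = sum(xs)
    sx2  = sum(x**2 for x in xs)
    sx3  = sum(x**3 for x in xs)
    sx4  = sum(x**4 for x in xs)
    sy   = sum(ys)
    sxy  = sum(x * y for x, y in zip(xs, ys))
    sx2y = sum(x * x * y for x, y in zip(xs, ys))
    # Normal equations as an augmented 3x4 matrix in the unknowns a, b, c
    m = [[n,   sx,  sx2, sy],
         [sx,  sx2, sx3, sxy],
         [sx2, sx3, sx4, sx2y]]
    # Forward elimination (assumes non-zero pivots, true for distinct x values)
    for k in range(3):
        for i in range(k + 1, 3):
            r = m[i][k] / m[k][k]
            for j in range(k, 4):
                m[i][j] -= r * m[k][j]
    # Back substitution
    coeffs = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coeffs[i] = (m[i][3] - sum(m[i][j] * coeffs[j]
                                   for j in range(i + 1, 3))) / m[i][i]
    return coeffs  # [a, b, c]

xs = [0, 1, 2, 3, 4]
ys = [3 - 2*x + x*x for x in xs]   # exact data from y = 3 - 2x + x^2
print(fit_parabola(xs, ys))        # ≈ [3.0, -2.0, 1.0]
```

Fitting data generated from an exact parabola, as in the usage line, is a convenient self-check: the routine should recover the generating coefficients.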
7. The Regula-Falsi method is also a bracketing method. As in the bisection method, we start the computation by first finding an interval (a, b) within which a real root lies. Writing a = x0 and b = x1, we compute f(x0) and f(x1) and check that f(x0) and f(x1) are of opposite signs. For determining the approximate root x2, we find the point of intersection of the chord joining the points (x0, f(x0)) and (x1, f(x1)) with the x-axis, i.e., the curve y = f(x) is replaced by this chord.
8. There are situations where interpolation for approximating a function may not be an efficacious procedure. Errors will arise when the function values f(xi), i = 1, 2, …, n are observed data and not exact.
9. Let (x1, f1), (x2, f2), ..., (xn, fn) be a set of observed values and g(x) be the approximating function. We form the sum of the squares of the deviations of the observed values fi from the estimated values g(xi), i.e., S = Σ [fi − g(xi)]². Setting the partial derivatives of S with respect to the parameters of g equal to zero gives equations called normal equations, solving which we get the parameters for the best approximate function g(x).
1.5 SUMMARY
• The Newton-Raphson method is a widely used numerical method for finding a root of an equation f(x) = 0 to the desired accuracy.
• The secant method can be considered as a discretized form of the Newton-Raphson method. The iterative formula for this method is obtained from the Newton-Raphson formula on replacing the derivative by the gradient of the chord joining two neighbouring points x0 and x1 on the curve y = f(x).
• The Regula-Falsi method is also a bracketing method.
• Horner's method of synthetic substitution is used for evaluating the values of a polynomial and its derivatives for a given x.
• Descartes' rule is used to determine the number of negative roots by finding the number of changes of sign in pn(−x).
• By using the method of least squares, noisy function values are used to generate a smooth approximation. This smooth approximation can then be used to approximate the derivative more accurately than with exact polynomial interpolation.
Short-Answer Questions
1. What are isolated roots?
2. What is crude approximation in graphical method?
3. Why is bisection method also termed as bracketing method?
4. What is the order of convergence of the Newton-Raphson method?
5. State the similarity between the secant method and the Regula-Falsi method.
6. How many roots are there in a polynomial equation of degree n?
7. How many positive real roots are there in a polynomial equation?
Long-Answer Questions
1. Use graphical method to find the location of a real root of the equation
x3 + 10x – 15 = 0.
2. Draw the graph of the function f(x) = cos x − x in the range [0, π/2] and find the location of the root of the equation f(x) = 0.
3. Compute the root of the equation x3 – 9x + 1 = 0 which lies between 2 and
3 correct upto three significant digits using bisection method.
4. Compute the root of the equation x3 + x2 – 1 = 0, near 1, by the iterative
method correct upto two significant digits.
5. Compute, using the Newton-Raphson method, the root of the equation e^x = 4x, near 2, correct upto four significant digits.
6. Find the real root of x log10 x − 1.2 = 0 correct upto four decimal places using the Regula-Falsi method.
7. Use the method of least squares to fit a straight line for the following data
points:
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition. Prentice Hall of India Pvt. Ltd.
UNIT 2 SYSTEM OF LINEAR EQUATIONS
Structure
2.0 Introduction
2.1 Objectives
2.2 System of Linear Equations
2.2.1 Classical Methods
2.2.2 Elimination Methods
2.2.3 Iterative Methods
2.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination
Method
2.3 Triangularisation Method
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
2.0 INTRODUCTION
2.1 OBJECTIVES
The solution is given by xi = Di/D, for i = 1, 2, 3.    (2.4)

Consider, for example, the system

2x1 + 3x2 − x3 = 1
3x1 − x2 + x3 = 2
x1 + x2 + x3 = 1
Here,

D = | 2   3  −1 |
    | 3  −1   1 | = 2(−1 − 1) − 3(3 − 1) − (3 + 1) = −14
    | 1   1   1 |

D1 = | 1   3  −1 |
     | 2  −1   1 | = (−1 − 1) − 3(2 − 1) − (2 + 1) = −8
     | 1   1   1 |

D2 = | 2   1  −1 |
     | 3   2   1 | = 2(2 − 1) − (3 − 1) − (3 − 2) = −1
     | 1   1   1 |

D3 = | 2   3   1 |
     | 3  −1   2 | = 2(−1 − 2) − 3(3 − 2) + (3 + 1) = −5
     | 1   1   1 |

Hence by Cramer's rule, we get

x1 = D1/D = 4/7,  x2 = D2/D = 1/14,  x3 = D3/D = 5/14
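The computation is easy to verify with a short script (a sketch; `det3` and `cramer3` are illustrative names):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1] * (m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2] * (m[1][0]*m[2][1] - m[1][1]*m[2][0]))

def cramer3(a, b):
    """Solve a 3x3 system by Cramer's rule: x_i = D_i / D."""
    d = det3(a)
    xs = []
    for i in range(3):
        ai = [row[:] for row in a]     # copy A ...
        for r in range(3):
            ai[r][i] = b[r]            # ... and replace column i by b
        xs.append(det3(ai) / d)
    return xs

a = [[2, 3, -1], [3, -1, 1], [1, 1, 1]]
b = [1, 2, 1]
print(cramer3(a, b))  # ≈ [4/7, 1/14, 5/14]
```

Cramer's rule is convenient for small hand computations like this; for larger systems the elimination methods below are far more economical.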
The inverse is given by A⁻¹ = Adj A / |A|.    (2.6)
Consider the system

| 1   1   1 | | x1 |   | 4 |
| 2  −1   3 | | x2 | = | 1 |
| 3   2  −1 | | x3 |   | 1 |

Solution: For solving the system of equations by the matrix inversion method we first compute the determinant of the coefficient matrix,

      | 1   1   1 |
|A| = | 2  −1   3 | = 13
      | 3   2  −1 |

Since |A| ≠ 0, the matrix A is non-singular and A⁻¹ exists. We now compute the adjoint matrix,
Again, for eliminating x2 from the last of the above two equations, we multiply the first Equation (2.9(a)) by m4 = −a32⁽¹⁾/a22⁽¹⁾ and add it to the second Equation (2.9(b)), which gives the equation,

a33⁽²⁾ x3 = b3⁽²⁾    (2.10)

where a33⁽²⁾ = a33⁽¹⁾ + m4 a23⁽¹⁾ and b3⁽²⁾ = b3⁽¹⁾ + m4 b2⁽¹⁾.
(ii) Then we write the transformed 2nd and 3rd rows after the elimination of x1 by the row operations [(m2 × 1st row + 2nd row) and (m3 × 1st row + 3rd row)] as new 2nd and 3rd rows, along with the multipliers on the left.
(iii) Finally, we get the upper triangular transformed augmented matrix as given below.
Notes:
1. The above procedure can be easily extended to a system of n unknowns, in
which case, we have to perform a total of (n–1) steps for the systematic
elimination to get the final upper triangular matrix.
2. The condition to be satisfied for using this elimination is that the leading diagonal element at each step must not be zero. These diagonal elements [a11, a22⁽¹⁾, a33⁽²⁾, etc.] are called pivots. If the pivot is zero at any stage, the method fails. However, we can rearrange the rows so that none of the pivots is zero at any stage.
Example 2.3: Solve the following system by Gauss elimination method:

x1 − 2x2 + x3 = 0
2x1 − 2x2 + 3x3 = 3
−x1 + 3x2 = 2

Step 1: For elimination of x1 from the 2nd and 3rd equations we multiply the first equation by −2 and 1, successively, and add to the 2nd and 3rd equations. The result is shown in the augmented matrix below, with the multipliers on the left.

     | 1  −2   1 : 0 |
−2   | 0   2   1 : 3 |
 1   | 0   1   1 : 2 |
Step 2: For elimination of x2 from the third equation we multiply the second equation by −1/2 and add it to the third equation. The result is shown in the augmented matrix below.

      | 1  −2   1   : 0   |
−1/2  | 0   2   1   : 3   |
      | 0   0   1/2 : 1/2 |

Back substitution now gives x3 = 1, x2 = (3 − 1)/2 = 1 and x1 = 2x2 − x3 = 1.
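Example 2.3 can be reproduced with a general elimination routine. This sketch adds partial pivoting (row swaps), which the note on zero pivots above motivates; names are illustrative:

```python
def gauss_solve(a, b):
    """Gauss elimination with partial pivoting and back substitution."""
    n = len(a)
    m = [row[:] + [bv] for row, bv in zip(a, b)]   # augmented matrix [A : b]
    for k in range(n):
        # Partial pivoting: bring the largest pivot candidate into row k
        p = max(range(k, n), key=lambda i: abs(m[i][k]))
        m[k], m[p] = m[p], m[k]
        for i in range(k + 1, n):
            r = m[i][k] / m[k][k]
            for j in range(k, n + 1):
                m[i][j] -= r * m[k][j]
    # Back substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

# The system of Example 2.3
x = gauss_solve([[1, -2, 1], [2, -2, 3], [-1, 3, 0]], [0, 3, 2])
print(x)  # ≈ [1.0, 1.0, 1.0]
```

Choosing the largest available pivot at each stage is the row rearrangement mentioned in Note 2, done automatically.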
We assume that a11 is non-zero. If, however, a11 is zero, we can interchange rows so that a11 is non-zero in the resulting system. The first step is to divide the first row by a11 and then eliminate x1 from the 2nd and 3rd equations by the row operations of multiplying the reduced first row by a21 and subtracting it from the second row, and next multiplying the reduced first row by a31 and subtracting it from the third row. This is shown in the matrix transformations given below.
where,

a′12 = a12/a11,  a′13 = a13/a11,  b′1 = b1/a11
a′22 = a22 − a21 a′12,  a′23 = a23 − a21 a′13,  b′2 = b2 − a21 b′1
a′32 = a32 − a31 a′12,  a′33 = a33 − a31 a′13,  b′3 = b3 − a31 b′1
System of Linear
Equations
Now considering ac22 as the non-zero pivot, we first divide the second row by
c and subtract it from the first
c and then multiply the reduced second row by a12
a22
row and also multiply the reduced second row by a32 c and subtracting it from the
NOTES third row. The operations are shown below in matrix notation.
ª1 a12c c : b1c º
a13 ª1 a12c c : b1c º
a13 ª1 0 a13cc : b1ccº
« » c «
R2 / a22 » c and R3R2 a32
R1R2 a12 c «
c c : b2c o 0 1 acc23 : b2cc» cc : b2cc»
o «0 1 a23
«0 a22 a23 » « »
¬«0 a32
c c : b3c »¼
a33 ¬«0 a32
c c : b3c »¼
a33 ¬«0 0 a33
cc : b3cc¼»
Where
cc
a13 c a12
a13 c a23
cc , b1cc1 b1c a12
c b2cc
cc
a 23 c / a 22
a 23 c , b2cc b2c / a 22
c
cc
a33 c a 23
a33 cc a32
c , b3cc b3c a32
c b2cc
As an illustration, consider the system 2x1 + 2x2 + 4x3 = 18, x1 + 3x2 + 2x3 = 13, 3x1 + x2 + 3x3 = 14, with augmented matrix,
[ 2  2  4 | 18 ]
[ 1  3  2 | 13 ]
[ 3  1  3 | 14 ]
First, we divide the first row by 2, then subtract the reduced first row from the 2nd row, and also multiply the reduced first row by 3 and subtract it from the third row (R1/2, then R2 - R1 and R3 - 3R1):
[ 1   1   2 |   9 ]
[ 0   2   0 |   4 ]
[ 0  -2  -3 | -13 ]
Next, considering the 2nd row, we reduce the second column to [0, 1, 0]^T by the row operations R2/2, then R1 - R2 and R3 + 2R2:
[ 1  0   2 |  7 ]
[ 0  1   0 |  2 ]
[ 0  0  -3 | -9 ]
Finally, dividing the third row by -3 and then subtracting from the first row the elements of the third row multiplied by 2 (R3/(-3), then R1 - 2R3):
[ 1  0  0 | 1 ]
[ 0  1  0 | 2 ]
[ 0  0  1 | 3 ]
Thus the solution is x1 = 1, x2 = 2, x3 = 3.
Example 2.4: Solve the following matrix equation by the Gauss-Jordan elimination method:
[ 3  18  9 ] [x1]   [  18 ]
[ 2   3  3 ] [x2] = [ 117 ]
[ 4   1  2 ] [x3]   [ 283 ]
Solution: We consider the augmented matrix and solve the system by the Gauss-Jordan elimination method. The computations are shown in compact matrix notation below. The augmented matrix is,
[ 3  18  9 |  18 ]
[ 2   3  3 | 117 ]
[ 4   1  2 | 283 ]
Step 1: The pivot is 3 in the first column. The first column is transformed into [1, 0, 0]^T by the row operations R1/3, then R2 - 2R1 and R3 - 4R1:
[ 1    6    3 |   6 ]
[ 0   -9   -3 | 105 ]
[ 0  -23  -10 | 259 ]
Step 2: The second column is transformed into [0, 1, 0]^T by R2/(-9), then R1 - 6R2 and R3 + 23R2:
[ 1  0    1  |   76  ]
[ 0  1   1/3 | -35/3 ]
[ 0  0  -7/3 | -28/3 ]
Step 3: The third column is transformed into [0, 0, 1]^T by R3/(-7/3), then R1 - R3 and R2 - (1/3)R3:
[ 1  0  0 |  72 ]
[ 0  1  0 | -13 ]
[ 0  0  1 |   4 ]
Thus the solution is x1 = 72, x2 = -13, x3 = 4.
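The column-by-column reduction of the augmented matrix to [I | x] can be written compactly in code. The sketch below (my own, not the book's) reproduces the Gauss-Jordan computation for the system with coefficient matrix [3 18 9; 2 3 3; 4 1 2] and right-hand side [18, 117, 283]; it assumes every pivot is non-zero.

```python
def gauss_jordan_solve(A, b):
    n = len(b)
    M = [row[:] + [bv] for row, bv in zip(A, b)]   # augmented matrix [A | b]
    for k in range(n):
        p = M[k][k]                       # pivot (assumed non-zero)
        M[k] = [v / p for v in M[k]]      # normalize pivot row
        for i in range(n):
            if i != k:                    # clear column k in every other row
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]          # last column is the solution

x = gauss_jordan_solve([[3, 18, 9], [2, 3, 3], [4, 1, 2]], [18, 117, 283])
print(x)   # solution close to [72, -13, 4]
```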
Σ_{j=1, j≠i}^{n} |aij| < |aii|,  for i = 1, 2, ..., n   (2.15)
Σ_{i=1, i≠j}^{n} |aij| < |ajj|,  for j = 1, 2, ..., n   (2.16)
There are two forms of iteration methods, termed the Jacobi iteration method and the Gauss-Seidel iteration method.
Jacobi Iteration Method: Consider a system of n linear equations,
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
..................................................
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
The diagonal elements aii, i = 1, 2, ..., n, are non-zero and satisfy the set of sufficient conditions stated earlier. When the system of equations does not satisfy these conditions, we may rearrange the system in such a way that the conditions hold.
In order to apply the iteration we rewrite the equations in the following form:
x1^(k+1) = (b1 - a12 x2^(k) - a13 x3^(k) - ... - a1n xn^(k))/a11
x2^(k+1) = (b2 - a21 x1^(k) - a23 x3^(k) - ... - a2n xn^(k))/a22
x3^(k+1) = (b3 - a31 x1^(k) - a32 x2^(k) - ... - a3n xn^(k))/a33   (2.17)
.................................................................
xn^(k+1) = (bn - an1 x1^(k) - an2 x2^(k) - ... - a(n,n-1) x(n-1)^(k))/ann
where k = 0, 1, 2, ...
The iterations are continued till the desired accuracy is achieved. This is checked by the relations,
|xi^(k+1) - xi^(k)| < ε,  for i = 1, 2, ..., n   (2.18)
Gauss-Seidel Iteration Method: In this method each equation uses the most recently computed components, i.e.,
x1^(k+1) = (b1 - a12 x2^(k) - a13 x3^(k) - ... - a1n xn^(k))/a11
x2^(k+1) = (b2 - a21 x1^(k+1) - a23 x3^(k) - ... - a2n xn^(k))/a22
x3^(k+1) = (b3 - a31 x1^(k+1) - a32 x2^(k+1) - ... - a3n xn^(k))/a33
.................................................................
xn^(k+1) = (bn - an1 x1^(k+1) - an2 x2^(k+1) - ... - a(n,n-1) x(n-1)^(k+1))/ann
It is clear from the above that for computing x2^(k+1), the improved value x1^(k+1) is used instead of x1^(k); and for computing x3^(k+1), the improved values x1^(k+1) and x2^(k+1) are used. Finally, for computing xn^(k+1), the improved values of all the components x1^(k+1), x2^(k+1), ..., x(n-1)^(k+1) are used. Further, as in the Jacobi iteration, the iterations are continued till the desired accuracy is achieved.
Example 2.5: Solve the following system by the Gauss-Seidel iterative method correct up to four significant digits.
10x1 - 2x2 - x3 - x4 = 3
-2x1 + 10x2 - x3 - x4 = 15
-x1 - x2 + 10x3 - 2x4 = 27
-x1 - x2 - 2x3 + 10x4 = -9
Solution: The given system clearly has a diagonally dominant coefficient matrix, i.e.,
|aii| ≥ Σ_{j=1, j≠i}^{n} |aij|,  i = 1, 2, ..., n
Hence, we can employ the Gauss-Seidel iteration method, for which we rewrite the system as,
x1^(k+1) = 0.3 + 0.2 x2^(k) + 0.1 x3^(k) + 0.1 x4^(k)
x2^(k+1) = 1.5 + 0.2 x1^(k+1) + 0.1 x3^(k) + 0.1 x4^(k)
x3^(k+1) = 2.7 + 0.1 x1^(k+1) + 0.1 x2^(k+1) + 0.2 x4^(k)
x4^(k+1) = -0.9 + 0.1 x1^(k+1) + 0.1 x2^(k+1) + 0.2 x3^(k+1)
We start the iteration with,
x1^(0) = 0.3, x2^(0) = 1.5, x3^(0) = 2.7, x4^(0) = -0.9
k     x1       x2       x3       x4
1    0.72     1.824    2.774   -0.0196
2    0.9403   1.9635   2.9864  -0.0125
3    0.9901   1.9954   2.9960  -0.0023
4    0.9984   1.9990   2.9993  -0.0004
5    0.9997   1.9998   2.9998  -0.0003
6    0.9998   1.9998   2.9998  -0.0003
7    1.0000   2.0000   3.0000   0.0000
Thus, correct to four significant digits, x1 = 1.000, x2 = 2.000, x3 = 3.000, x4 = 0.000.
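The Gauss-Seidel sweep of Example 2.5 can be checked numerically. A minimal sketch (names and stopping test my own, not the book's) for the diagonally dominant system above:

```python
def gauss_seidel(A, b, x0, tol=1e-6, itmax=100):
    n = len(b)
    x = x0[:]
    for _ in range(itmax):
        x_old = x[:]
        for i in range(n):
            # uses already-updated components x[0..i-1] of the current sweep
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
        if max(abs(x[i] - x_old[i]) for i in range(n)) < tol:
            break
    return x

A = [[10, -2, -1, -1],
     [-2, 10, -1, -1],
     [-1, -1, 10, -2],
     [-1, -1, -2, 10]]
b = [3, 15, 27, -9]
x = gauss_seidel(A, b, [0.3, 1.5, 2.7, -0.9])
print(x)   # converges to (1, 2, 3, 0)
```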
The third iteration gives,
x1^(3) = (1/20)(30 - 2 × 2.127 - 2.986) = 1.138
x2^(3) = (1/40)(75 + 1.138 + 3 × 2.986) = 2.127
x3^(3) = (1/10)(30 - 2 × 1.138 + 2.127) = 2.985
Thus the solution correct to three significant digits can be written as x1 = 1.14, x2 = 2.13, x3 = 2.98.
Example 2.7: Solve the following system correct to three significant digits, using the Jacobi iteration method.
10x1 + 8x2 - 3x3 + x4 = 16
3x1 - 4x2 + 10x3 + x4 = 10
2x1 + 10x2 + x3 - 4x4 = 9
2x1 + 2x2 - 3x3 + 10x4 = 11
Solution: The system is first rearranged (by interchanging the second and third equations) so that the coefficient matrix is diagonally dominant. The equations are rewritten for starting the Jacobi iteration as,
x1^(k+1) = (16 - 8x2^(k) + 3x3^(k) - x4^(k))/10
x2^(k+1) = (9 - 2x1^(k) - x3^(k) + 4x4^(k))/10
x3^(k+1) = (10 - 3x1^(k) + 4x2^(k) - x4^(k))/10
x4^(k+1) = (11 - 2x1^(k) - 2x2^(k) + 3x3^(k))/10
Starting with xi^(0) = bi/aii, the successive iterates are,
k     x1       x2       x3       x4
1    1.07     0.92     0.77     0.90
2    1.005    0.969    0.957    0.933
3    1.0186   0.9765   0.9928   0.9923
4    1.0174   0.9939   0.9858   0.9989
5    0.9997   0.9975   0.9925   0.9974
6    1.0001   0.9997   0.9994   0.9984
7    1.0002   0.9998   1.0001   0.9999
Thus the solution correct to three significant digits is x1 = x2 = x3 = x4 = 1.00.
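Unlike Gauss-Seidel, the Jacobi sweep updates every component from the previous iterate only. A minimal sketch (my own, not the book's), run on the rearranged system of Example 2.7 with the signs as reconstructed above:

```python
def jacobi(A, b, x0, tol=1e-6, itmax=200):
    n = len(b)
    x = x0[:]
    for _ in range(itmax):
        # every component is computed from the previous iterate x
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    return x

# rearranged system of Example 2.7
A = [[10, 8, -3, 1],
     [2, 10, 1, -4],
     [3, -4, 10, 1],
     [2, 2, -3, 10]]
b = [16, 9, 10, 11]
x = jacobi(A, b, [bi / A[i][i] for i, bi in enumerate(b)])
print(x)   # converges toward (1, 1, 1, 1)
```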
By the definition of matrix multiplication, the relation AB = I is equivalent to the following three systems of linear equations:
[ a11 a12 a13 ] [b11]   [1]    [ a11 a12 a13 ] [b12]   [0]    [ a11 a12 a13 ] [b13]   [0]
[ a21 a22 a23 ] [b21] = [0] ,  [ a21 a22 a23 ] [b22] = [1] ,  [ a21 a22 a23 ] [b23] = [0]
[ a31 a32 a33 ] [b31]   [0]    [ a31 a32 a33 ] [b32]   [0]    [ a31 a32 a33 ] [b33]   [1]
Thus, by solving each of the above systems we get the three columns of the inverse matrix B = A^(-1). Since the coefficient matrix is the same for each of the three systems, we can apply Gauss elimination to all three systems simultaneously. For this we consider the following augmented matrix:
[ a11 a12 a13 | 1 0 0 ]
[ a21 a22 a23 | 0 1 0 ]
[ a31 a32 a33 | 0 0 1 ]
We employ Gauss elimination on this augmented matrix. At the end of the 1st stage (R2 - (a21/a11)R1 and R3 - (a31/a11)R1) we get,
[ a11  a12      a13     |     1     0  0 ]
[  0   a22^(1)  a23^(1) | -a21/a11  1  0 ]
[  0   a32^(1)  a33^(1) | -a31/a11  0  1 ]
Where,
a22^(1) = a22 - (a21/a11)a12,  a23^(1) = a23 - (a21/a11)a13
a32^(1) = a32 - (a31/a11)a12,  a33^(1) = a33 - (a31/a11)a13
Similarly, at the end of the second stage (R3 - (a32^(1)/a22^(1))R2) we have,
[ a11  a12      a13     |  1    0   0 ]
[  0   a22^(1)  a23^(1) | c21   1   0 ]
[  0    0       a33^(2) | c31  c32  1 ]
Where,
a33^(2) = a33^(1) - (a32^(1)/a22^(1)) a23^(1),  c21 = -a21/a11,
c32 = -a32^(1)/a22^(1),  c31 = -a31/a11 - c32 (a21/a11)
Example 2.8: Compute the inverse of the matrix
A = [ 2   3   1 ]
    [ 4   4   3 ]
    [ 2  -3  -1 ]
by Gauss elimination.
Solution: We apply Gauss elimination to the augmented matrix [A : I]. At the end of the 1st step (R2 - 2R1 and R3 - R1) we get,
[ 2   3   1 |  1  0  0 ]
[ 0  -2   1 | -2  1  0 ]
[ 0  -6  -2 | -1  0  1 ]
Similarly, at the end of the 2nd step (R3 - 3R2) we get,
[ 2   3   1 |  1   0  0 ]
[ 0  -2   1 | -2   1  0 ]
[ 0   0  -5 |  5  -3  1 ]
Thus, we get the three columns of the inverse matrix by solving the following three systems:
[ 2  3  1 |  1 ]   [ 2  3  1 |  0 ]   [ 2  3  1 | 0 ]
[ 0 -2  1 | -2 ] , [ 0 -2  1 |  1 ] , [ 0 -2  1 | 0 ]
[ 0  0 -5 |  5 ]   [ 0  0 -5 | -3 ]   [ 0  0 -5 | 1 ]
The solutions of the three are easily derived by back-substitution, and they give the three columns of the inverse matrix:
A^(-1) = [ 1/4    0     1/4  ]
         [ 1/2  -1/5  -1/10  ]
         [ -1    3/5   -1/5  ]
We can also employ Gauss-Jordan elimination to compute the inverse matrix. This is illustrated by the following example:
Example 2.9: Compute the inverse of the following matrix by Gauss-Jordan elimination.
A = [ 2   3   1 ]
    [ 4   4   3 ]
    [ 2  -3  -1 ]
Solution: We consider the augmented matrix [A : I],
[ 2   3   1 | 1 0 0 ]
[ 4   4   3 | 0 1 0 ]
[ 2  -3  -1 | 0 0 1 ]
R1/2:
[ 1  3/2  1/2 | 1/2 0 0 ]
[ 4   4    3  |  0  1 0 ]
[ 2  -3   -1  |  0  0 1 ]
R2 - 4R1, R3 - 2R1:
[ 1  3/2  1/2 | 1/2 0 0 ]
[ 0  -2    1  | -2  1 0 ]
[ 0  -6   -2  | -1  0 1 ]
R2/(-2):
[ 1  3/2   1/2 | 1/2   0   0 ]
[ 0   1   -1/2 |  1  -1/2  0 ]
[ 0  -6    -2  | -1    0   1 ]
R1 - (3/2)R2, R3 + 6R2:
[ 1  0   5/4 | -1   3/4  0 ]
[ 0  1  -1/2 |  1  -1/2  0 ]
[ 0  0   -5  |  5   -3   1 ]
R3/(-5):
[ 1  0   5/4 | -1   3/4    0  ]
[ 0  1  -1/2 |  1  -1/2    0  ]
[ 0  0    1  | -1   3/5  -1/5 ]
R1 - (5/4)R3, R2 + (1/2)R3:
[ 1  0  0 | 1/4    0    1/4  ]
[ 0  1  0 | 1/2  -1/5  -1/10 ]
[ 0  0  1 |  -1   3/5   -1/5 ]
which gives
A^(-1) = [ 1/4    0     1/4  ]
         [ 1/2  -1/5  -1/10  ]
         [ -1    3/5   -1/5  ]
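Running Gauss-Jordan elimination on [A : I] can be automated; doing the arithmetic with exact rationals reproduces the fractions of Example 2.9 exactly. A minimal sketch (my own, not from the text):

```python
from fractions import Fraction

def invert(A):
    n = len(A)
    # augment A with the identity, using exact rational arithmetic
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = M[k][k]                       # pivot (assumed non-zero)
        M[k] = [v / p for v in M[k]]
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n:] for row in M]         # right half is A^(-1)

B = invert([[2, 3, 1], [4, 4, 3], [2, -3, -1]])
# B equals [[1/4, 0, 1/4], [1/2, -1/5, -1/10], [-1, 3/5, -1/5]]
```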
In the lower triangular matrix all elements above the diagonal are zero; in the upper triangular matrix, all the elements below the diagonal are zero. For example, for a 3 × 3 matrix A, its LU decomposition A = LU looks like this:
[ a11 a12 a13 ]   [ l11   0    0  ] [ u11 u12 u13 ]
[ a21 a22 a23 ] = [ l21  l22   0  ] [  0  u22 u23 ]
[ a31 a32 a33 ]   [ l31  l32  l33 ] [  0   0  u33 ]
When row interchanges are needed one uses the LUP decomposition, PA = LU, where L and U are again lower and upper triangular matrices, and P is a permutation matrix, which, when left-multiplied to A, reorders the rows of A. It turns out that all square matrices can be factorized in this form, and the factorization is numerically stable in practice. This makes LUP decomposition a useful technique in practice.
LU Factorization with Full Pivoting
An LU factorization with full pivoting involves both row and column permutations, PAQ = LU, where P and Q are permutation matrices that reorder the rows and columns of A, respectively.
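The section does not fix a particular construction for L and U; a common choice is Doolittle's scheme (unit diagonal in L). The sketch below (an assumption, not the book's algorithm) computes it without pivoting, so it requires that no zero pivot arises, as for the matrix of the examples above:

```python
def lu_doolittle(A):
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    U = [[0.0] * n for _ in range(n)]
    for i in range(n):
        L[i][i] = 1.0                       # unit diagonal in L
        for j in range(i, n):               # row i of U
            U[i][j] = A[i][j] - sum(L[i][k] * U[k][j] for k in range(i))
        for j in range(i + 1, n):           # column i of L
            L[j][i] = (A[j][i] - sum(L[j][k] * U[k][i] for k in range(i))) / U[i][i]
    return L, U

A = [[2, 3, 1], [4, 4, 3], [2, -3, -1]]
L, U = lu_doolittle(A)
# multiplying L and U recovers A
```

Note that the U produced here is exactly the upper triangular matrix reached by Gauss elimination, and the entries of L are the multipliers used in the elimination.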
Check Your Progress
1. When is the system of equations homogenous and when is it non-
homogenous?
2. What is the Gauss elimination method?
3. What happens in Gauss-Jordan elimination method?
4. When are iteration methods used?
5. Define the process of Gauss-Seidel iteration method.
6. Elaborate on the triangularisation method.
2.5 SUMMARY
• Many engineering and scientific problems require the solution of a system of linear equations.
• The system of equations is termed a homogeneous one if all the elements in the column vector b of the equation Ax = b are zero.
• Cramer's rule and the matrix inversion method are two classical methods to solve a system of equations.
• If D = |A| is the determinant of the coefficient matrix A and Di is the determinant obtained by replacing the ith column of D by the column vector b, then Cramer's rule gives the solution vector x by the equations,
xi = Di/D, for i = 1, 2, ..., n.
• The Gaussian elimination method consists in the systematic elimination of the unknowns so as to reduce the coefficient matrix to an upper triangular system, which is then solved by the procedure of back-substitution.
• In Gauss-Jordan elimination, the augmented matrix is transformed by row operations such that the coefficient matrix reduces to the identity matrix.
• We can use iteration methods to solve a system of linear equations when the coefficient matrix is diagonally dominant.
• There are two forms of iteration methods, termed the Jacobi iteration method and the Gauss-Seidel iteration method.
• Gaussian elimination can be used to compute the inverse of a matrix.
• In numerical analysis, the triangularisation method is also known as the decomposition method or the factorization method. As a direct method, it is among the most useful methods for solving linear simultaneous equations. The inverse of a matrix can also be determined by this method.
Short-Answer Questions
1. Define the system of linear equations.
2. How many determinants do we have to compute in Cramer's rule?
3. What is the basic difference between Gaussian elimination and Gauss-Jordan
elimination method?
4. What are iterative methods?
5. State an application of Gaussian elimination method.
Long-Answer Questions
1. Use Cramer’s rule to solve the following systems of equations:
(i) x1 – x2 – x3 = 1 (ii) x1 + x2 + x3 = 6
2x1 – 3x2 + x3 = 1 x1 + 2x2 + 3x3 = 14
3x1 + x2 – x3 = 2 x1 – 2x2 + x3 = 2
2. Use the matrix inversion method to solve the following systems of equations:
(i) 4x1 – x2 + 2x3 = 15 (ii) x1 + 4x2 + 9x3 = 16
x1 – 2x2 – 3x3 = –5 2x1 + x2 + x3 = 10
5x1 – 7x2 + 9x3 = 8 3x1 + 2x2 + 3x3 = 18
3. Solve the following systems of equations using the Gaussian elimination method:
(i) 2x + 2y + 4z = 18 (ii) x1 + 2x2 + x3 + 4x4 = 13
x + 3y + 2z = 13 x1 + 4x3 + 3x4 = 28
3x + y + 3z = 14 4x1 + 2x2 + 2x3 + x4 = 20
–3x1 + x2 + 3x3 + 2x4 = 6
4. Apply Gauss-Jordan elimination method to solve the following systems:
(i) x1 + 2x2 + 3x3 = 4 (ii) 5x1 + 3x2 + x3 = 2
x1 + x2 + x3 = 3 4x1 + 10x2 + 4x3 = –4
2x1 + 2x2 + x3 = 1 2x1 + 3x2 + 5x3 = 11
5. Compute the solution of the following systems correct to three significant
digits using the Gauss-Seidel iteration method:
(i) 9x1 – 3x2 + 2x3 = 23 (ii) x1 + 2x2 + 3x3 + 4x4 = 30
6x1 + 3x2 + 14x3 = 38 4x1 + x2 + 2x3 + 3x4 = 24
4x1 + 2x2 – 3x3 = 35 3x1 + 4x2 + x3 + 2x4 = 22
2x1 + 3x2 + 4x3 + x4 = 24
UNIT 3 SOLUTION OF LINEAR SYSTEMS
Structure
3.0 Introduction
3.1 Objectives
3.2 Solution of Linear Systems
3.3 Jacobi and Gauss-Seidel Iterative Methods
3.4 Answers to Check Your Progress Questions
3.5 Summary
3.6 Key Words
3.7 Self Assessment Questions and Exercises
3.8 Further Readings
3.0 INTRODUCTION
3.1 OBJECTIVES
• Explain the Jacobi iterative method
• Define the Gauss-Seidel iteration method
Σ_{j=1, j≠i}^{n} |aij| < |aii|,  for i = 1, 2, ..., n   (3.4)
Σ_{i=1, i≠j}^{n} |aij| < |ajj|,  for j = 1, 2, ..., n   (3.5)
There are two forms of iteration methods, termed the Jacobi iteration method and the Gauss-Seidel iteration method.
Jacobi Iteration Method: Consider a system of n linear equations,
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
..................................................
an1 x1 + an2 x2 + an3 x3 + ... + ann xn = bn
The diagonal elements aii, i = 1, 2, ..., n, are non-zero and satisfy the set of sufficient conditions stated earlier. When the system of equations does not satisfy these conditions, we may rearrange the system in such a way that the conditions hold.
In order to apply the iteration we rewrite the equations in the following form:
x1^(k+1) = (b1 - a12 x2^(k) - a13 x3^(k) - ... - a1n xn^(k))/a11
x2^(k+1) = (b2 - a21 x1^(k) - a23 x3^(k) - ... - a2n xn^(k))/a22   (3.6)
.................................................................
xn^(k+1) = (bn - an1 x1^(k) - an2 x2^(k) - ... - a(n,n-1) x(n-1)^(k))/ann
where k = 0, 1, 2, ...
The iterations are continued till the desired accuracy is achieved. This is checked by the relations,
|xi^(k+1) - xi^(k)| < ε,  for i = 1, 2, ..., n   (3.7)
Gauss-Seidel Iteration Method: Here each equation uses the most recently computed components:
x1^(k+1) = (b1 - a12 x2^(k) - a13 x3^(k) - ... - a1n xn^(k))/a11
x2^(k+1) = (b2 - a21 x1^(k+1) - a23 x3^(k) - ... - a2n xn^(k))/a22
x3^(k+1) = (b3 - a31 x1^(k+1) - a32 x2^(k+1) - ... - a3n xn^(k))/a33   (3.8)
.................................................................
xn^(k+1) = (bn - an1 x1^(k+1) - an2 x2^(k+1) - ... - a(n,n-1) x(n-1)^(k+1))/ann
It is clear from the above that for computing x2^(k+1), the improved value x1^(k+1) is used instead of x1^(k); and for computing x3^(k+1), the improved values x1^(k+1) and x2^(k+1) are used. Finally, for computing xn^(k+1), the improved values of all the components x1^(k+1), x2^(k+1), ..., x(n-1)^(k+1) are used. Further, as in the Jacobi iteration, the iterations are continued till the desired accuracy is achieved.
Example 3.1: Solve the following system by the Gauss-Seidel iterative method correct up to four significant digits.
10x1 - 2x2 - x3 - x4 = 3
-2x1 + 10x2 - x3 - x4 = 15
-x1 - x2 + 10x3 - 2x4 = 27
-x1 - x2 - 2x3 + 10x4 = -9
Solution: The given system clearly has a diagonally dominant coefficient matrix, i.e.,
|aii| ≥ Σ_{j=1, j≠i}^{n} |aij|,  i = 1, 2, ..., n
Hence, we can employ the Gauss-Seidel iteration method, for which we rewrite the system as,
x1^(k+1) = 0.3 + 0.2 x2^(k) + 0.1 x3^(k) + 0.1 x4^(k)
x2^(k+1) = 1.5 + 0.2 x1^(k+1) + 0.1 x3^(k) + 0.1 x4^(k)
x3^(k+1) = 2.7 + 0.1 x1^(k+1) + 0.1 x2^(k+1) + 0.2 x4^(k)
x4^(k+1) = -0.9 + 0.1 x1^(k+1) + 0.1 x2^(k+1) + 0.2 x3^(k+1)
For starting the iterations, we rewrite the equations as,
x1 = (30 - 2x2 - x3)/20
x2 = (75 + x1 + 3x3)/40
x3 = (30 - 2x1 + x2)/10
Starting with x2^(0) = 2.0, x3^(0) = 3.0, the Gauss-Seidel iterates are,
x1^(1) = (1/20)(30 - 2 × 2.0 - 3.0) = 1.15
x2^(1) = (1/40)(75 + 1.15 + 3 × 3.0) = 2.14
x3^(1) = (1/10)(30 - 2 × 1.15 + 2.14) = 2.98
x1^(2) = (1/20)(30 - 2 × 2.14 - 2.98) = 1.137
x2^(2) = (1/40)(75 + 1.137 + 3 × 2.98) = 2.127
x3^(2) = (1/10)(30 - 2 × 1.137 + 2.127) = 2.986
x1^(3) = (1/20)(30 - 2 × 2.127 - 2.986) = 1.138
x2^(3) = (1/40)(75 + 1.138 + 3 × 2.986) = 2.127
x3^(3) = (1/10)(30 - 2 × 1.138 + 2.127) = 2.985
Example: Solve the following system correct to three significant digits, using the Jacobi iteration method.
10x1 + 8x2 - 3x3 + x4 = 16
3x1 - 4x2 + 10x3 + x4 = 10
2x1 + 10x2 + x3 - 4x4 = 9
2x1 + 2x2 - 3x3 + 10x4 = 11
Solution: The system is first rearranged (interchanging the second and third equations) so that the coefficient matrix is diagonally dominant. The equations are rewritten for starting the Jacobi iteration as,
x1^(k+1) = (16 - 8x2^(k) + 3x3^(k) - x4^(k))/10
x2^(k+1) = (9 - 2x1^(k) - x3^(k) + 4x4^(k))/10
x3^(k+1) = (10 - 3x1^(k) + 4x2^(k) - x4^(k))/10
x4^(k+1) = (11 - 2x1^(k) - 2x2^(k) + 3x3^(k))/10
The successive iterates are,
k     x1       x2       x3       x4
1    1.07     0.92     0.77     0.90
2    1.005    0.969    0.957    0.933
3    1.0186   0.9765   0.9928   0.9923
4    1.0174   0.9939   0.9858   0.9989
5    0.9997   0.9975   0.9925   0.9974
6    1.0001   0.9997   0.9994   0.9984
7    1.0002   0.9998   1.0001   0.9999
Thus the solution correct to three significant digits is x1 = x2 = x3 = x4 = 1.00.
Σ_{j=1, j≠i}^{n} |aij| < |aii|,  for i = 1, 2, ..., n
3.5 SUMMARY
• Many engineering and scientific problems require the solution of a system of linear equations. We consider a system of m linear equations in n unknowns written as,
a11 x1 + a12 x2 + a13 x3 + ... + a1n xn = b1
a21 x1 + a22 x2 + a23 x3 + ... + a2n xn = b2
a31 x1 + a32 x2 + a33 x3 + ... + a3n xn = b3
... ... ...
am1 x1 + am2 x2 + am3 x3 + ... + amn xn = bm
• The iteration methods are applicable when the coefficient matrix is diagonally dominant, i.e.,
Σ_{j=1, j≠i}^{n} |aij| < |aii|,  for i = 1, 2, ..., n
Σ_{i=1, i≠j}^{n} |aij| < |ajj|,  for j = 1, 2, ..., n
Short-Answer Questions
1. Explain the solution of linear systems.
2. Elaborate on the Jacobi iterative method.
3. State the Gauss-Seidel iteration method.
Long-Answer Questions
1. Explain the solution of linear systems with the help of example.
2. Discuss briefly the Jacobi iterative method.
3. Analyse the Gauss-Seidel iteration method. Give an appropriate example.
UNIT 4 INTERPOLATION
Structure
4.0 Introduction
4.1 Objectives
4.2 Graphical Method of Interpolation
4.3 Finite Difference
4.4 Forward Difference
4.5 Backward Difference
4.6 Central Difference
4.7 Fundamental Theorem of Finite Differences
4.8 Answers to Check Your Progress Questions
4.9 Summary
4.10 Key Words
4.11 Self Assessment Questions and Exercises
4.12 Further Readings
4.0 INTRODUCTION
4.1 OBJECTIVES
small quantity. Geometrically, it may be interpreted that the graph of the interpolating polynomial passes through the given tabulated points.
Example 4.1: The weight of a baby is recorded at the following ages. Estimate its weight at the ages of 2 and 10 years.
Age x        3   5   7   9
Weight y (kg) 5   8  12  17
Solution: Since the values of x are equidistant, we form the finite difference table for using Newton's forward difference interpolation formula to compute the weight of the baby at the required ages.
x    y    Δy   Δ²y
3    5
          3
5    8         1
          4
7   12         1
          5
9   17
Taking x = 2, u = (x - x0)/h = (2 - 3)/2 = -0.5, Newton's forward difference interpolation gives,
y(2) = 5 + (-0.5) × 3 + [(-0.5)(-1.5)/2] × 1
     = 5 - 1.5 + 0.38 = 3.88 ≈ 3.9 kg
Similarly, for computing the weight of the baby at the age of ten years, we use Newton's backward difference interpolation with,
v = (x - xn)/h = (10 - 9)/2 = 0.5
y(10) = 17 + 0.5 × 5 + [(0.5 × 1.5)/2] × 1
      = 17 + 2.5 + 0.38 = 19.88 ≈ 19.9 kg
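The forward-difference computation of Example 4.1 can be carried out mechanically. A minimal Python sketch (names my own, not the book's); since the data has only four points, summing all available difference terms reproduces both estimates:

```python
def forward_diff_table(y):
    # table[k] holds the k-th order forward differences
    table = [y[:]]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return table

def newton_forward(xs, ys, x):
    h = xs[1] - xs[0]
    u = (x - xs[0]) / h
    diffs = forward_diff_table(ys)
    term, total = 1.0, 0.0
    for k in range(len(xs)):
        total += term * diffs[k][0]       # leading difference of order k
        term *= (u - k) / (k + 1)
    return total

xs, ys = [3, 5, 7, 9], [5, 8, 12, 17]
print(newton_forward(xs, ys, 2), newton_forward(xs, ys, 10))   # -> 3.875 19.875
```

The values 3.875 and 19.875 round to the 3.88 and 19.88 obtained by hand above.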
Let us assume that values of a function y = f (x) are known for a set of equally
spaced values of x given by {x0, x1,..., xn}, such that the spacing between any
two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 + h,..., xn = xn–1 + h,
so that xi = x0 + ih for i = 1, 2, ...,n. We consider two types of differences known
as forward differences and backward differences of various orders. These
differences can be tabulated in a finite difference table as explained in the subsequent
sections.
Δf(x) = f(x + h) - f(x)   (4.4)
Thus, Δyi = yi+1 - yi, for i = 0, 1, 2, ..., n - 1, are the first order forward differences at xi.
The differences of these first order forward differences are called the second order forward differences.
Thus, Δ²yi = Δ(Δyi) = Δyi+1 - Δyi = yi+2 - 2yi+1 + yi.
The entries in any column of differences are computed as the differences of the entries of the previous column and are placed in between them. The upper entry in a column is subtracted from the lower entry to compute the forward differences. We notice that the forward differences of various orders with respect to yi lie along the forward diagonal through it. Thus, Δy0, Δ²y0, Δ³y0, Δ⁴y0 and Δ⁵y0 lie along the top forward diagonal through y0. Consider the following example.
Example 4.2: Given the table of values of y = f(x),
x  1   3   5   7   9
y  8  12  21  36  62
form the diagonal difference table and find the values of Δf(5), Δ²f(3), Δ³f(1).
Solution: The diagonal difference table is,
i   xi   yi   Δyi   Δ²yi   Δ³yi   Δ⁴yi
0    1    8
              4
1    3   12          5
              9             1
2    5   21          6             4
             15             5
3    7   36         11
             26
4    9   62
From the table, we find that Δf(5) = 15, the entry along the diagonal through the entry 21 of f(5). Similarly, Δ²f(3) = 6, the entry along the diagonal through f(3). Finally, Δ³f(1) = 1.
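Reading differences off the diagonal table corresponds to simple indexing once the columns are built. A short sketch (my own, not the book's) for the table of Example 4.2:

```python
def forward_differences(y):
    # cols[k] is the column of k-th order forward differences
    cols = [y[:]]
    while len(cols[-1]) > 1:
        c = cols[-1]
        cols.append([c[i + 1] - c[i] for i in range(len(c) - 1)])
    return cols

x = [1, 3, 5, 7, 9]
y = [8, 12, 21, 36, 62]
cols = forward_differences(y)
print(cols[1][2])   # Delta f(5): first difference at index of x = 5 -> 15
print(cols[2][1])   # Delta^2 f(3): second difference at index of x = 3 -> 6
print(cols[3][0])   # Delta^3 f(1): third difference at index of x = 1 -> 1
```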
Hence,
∇²y2 = y2 - 2y1 + y0, and ∇²yn = yn - 2yn-1 + yn-2   (4.10)
Higher order backward differences can be defined in a similar manner.
Thus, ∇³yn = yn - 3yn-1 + 3yn-2 - yn-3, etc.   (4.11)
Finally,
∇^k yi = ∇^(k-1) yi - ∇^(k-1) yi-1   (4.12)
The backward differences can be arranged in a diagonal table:
i   xi   yi   ∇yi    ∇²yi    ∇³yi    ∇⁴yi    ∇⁵yi
0   x0   y0
              ∇y1
1   x1   y1          ∇²y2
              ∇y2             ∇³y3
2   x2   y2          ∇²y3              ∇⁴y4
              ∇y3             ∇³y4              ∇⁵y5
3   x3   y3          ∇²y4              ∇⁴y5
              ∇y4             ∇³y5
4   x4   y4          ∇²y5
              ∇y5
5   x5   y5
The entries along a column in the table are computed (as discussed in the previous example) as the differences of the entries in the previous column and are placed in between. We notice that the backward differences of various orders with respect to yi lie along the backward diagonal through it. Thus, ∇y5, ∇²y5, ∇³y5, ∇⁴y5 and ∇⁵y5 lie along the lowest backward diagonal through y5.
We may note that the data entries of the backward difference table in any column are the same as those of the forward difference table, but the differences are for different reference points. Specifically, if we compare the columns of first order differences we can see that,
Δy0 = ∇y1, Δy1 = ∇y2, ..., Δyn-1 = ∇yn
Similarly, Δ²y0 = ∇²y2, Δ²y1 = ∇²y3, ..., Δ²yn-2 = ∇²yn
Conversely, ∇^k yi = Δ^k yi-k.
Example 4.3: Given the following table of values of y = f(x):
x  1   3   5   7   9
y  8  12  21  36  62
find the values of ∇y(7), ∇²y(9), ∇³y(9).
Solution: We form the diagonal difference table,
xi   yi   ∇yi   ∇²yi   ∇³yi   ∇⁴yi
 1    8
            4
 3   12           5
            9             1
 5   21           6              4
           15             5
 7   36          11
           26
 9   62
From the table, we can easily find ∇y(7) = 15, ∇²y(9) = 11, ∇³y(9) = 5.
Further, the central difference operator δ is defined by,
δf(x) = f(x + h/2) - f(x - h/2)   (4.13)
Even though the central difference operator uses fractional arguments, it is still widely used. It is related to the averaging operator μ, defined by,
μf(x) = (1/2)[f(x + h/2) + f(x - h/2)]   (4.14)
Squaring,
μ² = 1 + δ²/4   (4.15)
∴ μ = (1 + δ²/4)^(1/2)   (4.16)
Further, in terms of the shift operator E,
∇ ≡ 1 - E^(-1) ≡ (E - 1)/E
Thus, E∇ ≡ E - 1 ≡ Δ.
Hence proved.
(ii) From Equation (1), we have E ≡ 1 + Δ   (3)
And from Equation (2) we get E^(-1) ≡ 1 - ∇   (4)
Combining Equations (3) and (4), we get (1 + Δ)(1 - ∇) ≡ 1.
Example 4.5: If fi is the value of f(x) at xi where xi = x0 + ih, for i = 1, 2, ..., prove that,
fi = E^i f0 = Σ_{j=0}^{i} C(i, j) Δ^j f0
Solution: We can write Ef(x) = f(x + h). Using the Taylor series expansion, we have
Ef(x) = f(x) + h f'(x) + (h²/2!) f"(x) + ... = e^(hD) f(x)
∴ E ≡ 1 + Δ ≡ e^(hD)
Now, fi = f(xi) = f(x0 + ih) = E^i f(x0) = (1 + Δ)^i f0 = Σ_{j=0}^{i} C(i, j) Δ^j f0
Hence proved.
Example 4.6: Compute the following differences:
(i) Δⁿ eˣ  (ii) Δⁿ xⁿ
Solution:
(i) We have, Δeˣ = e^(x+h) - eˣ = eˣ(e^h - 1).
Hence, by repeated application, Δⁿ eˣ = (e^h - 1)ⁿ eˣ.
(ii) We have,
Δ(xⁿ) = (x + h)ⁿ - xⁿ = n h x^(n-1) + [n(n-1)/2!] h² x^(n-2) + ... + hⁿ,
which is a polynomial of degree n - 1. Since each application of Δ lowers the degree by one, Δⁿ xⁿ = n! hⁿ, a constant.
Example 4.7: Show that,
(i) Δ{f(x)/g(x)} = [g(x)Δf(x) - f(x)Δg(x)] / [g(x)g(x + h)]
(ii) Δ{log f(x)} = log{1 + Δf(x)/f(x)}
Solution:
(i) We have,
Δ{f(x)/g(x)} = f(x + h)/g(x + h) - f(x)/g(x)
= [f(x + h)g(x) - f(x)g(x + h)] / [g(x + h)g(x)]
= [f(x + h)g(x) - f(x)g(x) + f(x)g(x) - f(x)g(x + h)] / [g(x + h)g(x)]
= [g(x){f(x + h) - f(x)} - f(x){g(x + h) - g(x)}] / [g(x)g(x + h)]
= [g(x)Δf(x) - f(x)Δg(x)] / [g(x)g(x + h)]
(ii) We have,
Δ{log f(x)} = log{f(x + h)} - log{f(x)}
= log[f(x + h)/f(x)] = log[{f(x + h) - f(x) + f(x)}/f(x)]
= log{1 + Δf(x)/f(x)}
4.7 FUNDAMENTAL THEOREM OF FINITE DIFFERENCES
The concept of finite differences was introduced by Brook Taylor in 1715, and finite differences have also been studied as abstract mathematical objects in works by George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan (1939). Finite differences trace their origins back to one of Jost Bürgi's algorithms (c. 1592) and work by others including Isaac Newton.
A finite difference is a mathematical expression of the form f(x + b) - f(x + a). If a finite difference is divided by b - a, we get a difference quotient. The approximation of derivatives by finite differences plays a significant role in finite difference methods, particularly in the numerical solution of differential equations and boundary value problems. Some recurrence relations can be written as difference equations by replacing the iteration notation with finite differences.
At present, the term ‘Finite Difference’ is often taken as synonymous
with finite difference approximations of derivatives, particularly in the context of
numerical methods. Fundamentally, the finite difference approximations are finite
difference quotients.
The anti-difference operator: F is said to be an anti-difference of the real-valued function f if ΔF = f; using the anti-difference operator Δ^(-1) we write F = Δ^(-1)f.
Theorem: Fundamental Theorem of the Calculus of Finite Differences
Let f be a real-valued function and let a and b be integers such that a ≤ b. If F = Δ^(-1)f, then
Σ_{k=a}^{b} f(k) = F(b + 1) - F(a).
Proof: Since f(k) = ΔF(k) = F(k + 1) - F(k), the sum telescopes:
Σ_{k=a}^{b} f(k) = (F(a + 1) - F(a)) + (F(a + 2) - F(a + 1)) + ... + (F(b) - F(b - 1)) + (F(b + 1) - F(b))
= F(b + 1) - F(a)
Solution: We can evaluate the sum by applying the above 'Fundamental Theorem of the Calculus of Finite Differences': if F = Δ^(-1)f, then Σ_{k=a}^{b} f(k) = F(b + 1) - F(a).
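The telescoping identity is easy to verify numerically. A small sketch (example data my own): take f(k) = k, whose anti-difference is F(k) = k(k - 1)/2, since F(k + 1) - F(k) = k.

```python
def F(k):
    # anti-difference of f(k) = k, because F(k+1) - F(k) = k
    return k * (k - 1) // 2

a, b = 3, 10
lhs = sum(k for k in range(a, b + 1))   # direct sum of f(k) for k = a..b
rhs = F(b + 1) - F(a)                   # fundamental theorem
print(lhs, rhs)   # -> 52 52
```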
4.9 SUMMARY
• Interpolation formulas are generally used for estimating the value of the tabulated function y = f(x) for a value of x within the table. But they can also be used in some cases for finding values of f(x) for values of x near to the end points x0 or xn outside the interval [x0, xn]. This process of finding values of f(x) at points beyond the interval is termed extrapolation.
• Let us assume that the values of a function y = f(x) are known for a set of equally spaced values of x given by {x0, x1, ..., xn}, such that the spacing between any two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 + h, ..., xn = xn-1 + h, so that xi = x0 + ih for i = 1, 2, ..., n. We consider two types of differences known as forward differences and backward differences of various orders.
• Let y0, y1, ..., yn be the values of a function y = f(x) at the equally spaced values x = x0, x1, ..., xn. The differences between two consecutive y, given by y1 - y0, y2 - y1, ..., yn - yn-1, are called the first order forward differences of the function y = f(x) at the points x0, x1, ..., xn-1. These differences are denoted by,
Δy0 = y1 - y0, Δy1 = y2 - y1, ..., Δyn-1 = yn - yn-1
• The backward differences of various orders for a table of values of a function y = f(x) are defined in a manner similar to the forward differences. The backward difference operator ∇ is defined by ∇f(x) = f(x) - f(x - h).
• The central difference operator, denoted by δ, is defined by δf(x) = f(x + h/2) - f(x - h/2).
• By the fundamental theorem of the calculus of finite differences, if F = Δ^(-1)f, then Σ_{k=a}^{b} f(k) = F(b + 1) - F(a).
Short-Answer Questions
1. Illustrate the graphical method of interpolation.
2. Define the finite difference.
3. State the forward difference.
4. Elaborate on the backward difference.
5. Explain the central difference.
6. Analyse the fundamental theorem of finite differences.
Long-Answer Questions
1. Discuss the graphical method of interpolation.
2. Explain the finite difference with the help of example.
3. Define the forward difference.
4. Analyse the backward difference. Give an appropriate example.
5. Briefly define the central difference.
6. State the fundamental theorem of finite differences.
BLOCK - II
INTERPOLATIONS
UNIT 5 INTERPOLATING POLYNOMIALS AND OPERATORS
Structure
5.0 Introduction
5.1 Objectives
5.2 Interpolating Polynomials Using Finite Difference
5.3 Other Difference Operators
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings
5.0 INTRODUCTION
In this unit, you will study about interpolating polynomials using finite differences, and other difference operators, such as the shift operator and the central difference operator.
5.1 OBJECTIVES
The successive linear interpolations are defined by,

            1      | f0   x0 - x |
p0j(x) = -------- ·|             | ,  for j = 1, 2, ..., n   (5.2)
          xj - x0  | fj   xj - x |

Now, consider the polynomial denoted by p01j(x) and defined by,

            1      | p01(x)   x1 - x |
p01j(x) = -------- |                 | ,  for j = 2, 3, ..., n   (5.3)
           xj - x1 | p0j(x)   xj - x |

The polynomial p01j(x) interpolates f(x) at the points x0, x1, xj (j > 1) and is a polynomial of degree 2, as can be easily verified.
Similarly, the polynomial p012j(x) can be constructed by replacing p01(x) by p012(x) and p0j(x) by p01j(x). Thus,

             1      | p012(x)   x2 - x |
p012j(x) = -------- |                  | ,  for j = 3, 4, ..., n   (5.4)
            xj - x2 | p01j(x)   xj - x |

The computations can be arranged in the following scheme:
xk   fk   p0j    p01j   ...   xj - x
x0   f0                       x0 - x
x1   f1   p01                 x1 - x
x2   f2   p02    p012         x2 - x
x3   f3   p03    p013         x3 - x
...  ...  ...    ...    ...   ...
xj   fj   p0j    p01j         xj - x
...  ...  ...    ...    ...   ...
xn   fn   p0n    p01n         xn - x
Example 5.1: Using iterated linear interpolation, compute the value of s(2.12) from the following table of a function s(x).
xj      2.0     2.1     2.2     2.3
s(xj)  0.7909  0.7875  0.7796  0.7673
Solution: Here, x = 2.12. The following table gives the successive iterative linear interpolation results. The details of the calculations are shown below the table.
xj    s(xj)    p0j      p01j     p012j    xj - x
2.0   0.7909                              -0.12
2.1   0.7875   0.78682                    -0.02
2.2   0.7796   0.78412   0.78628           0.08
2.3   0.7673   0.78146   0.78628  0.78628  0.18

p01 = [1/(2.1 - 2.0)] [0.7909 × (-0.02) - 0.7875 × (-0.12)] = 0.78682
p02 = [1/(2.2 - 2.0)] [0.7909 × 0.08 - 0.7796 × (-0.12)] = 0.78412
p03 = [1/(2.3 - 2.0)] [0.7909 × 0.18 - 0.7673 × (-0.12)] = 0.78146
p012 = [1/(2.2 - 2.1)] [0.78682 × 0.08 - 0.78412 × (-0.02)] = 0.78628
p013 = [1/(2.3 - 2.1)] [0.78682 × 0.18 - 0.78146 × (-0.02)] = 0.78628
p0123 = [1/(2.3 - 2.2)] [0.78628 × 0.18 - 0.78628 × 0.08] = 0.78628
The final results in the table give the value of the interpolation at x = 2.12. The result 0.78682 is the value obtained by linear interpolation. The result 0.78628 is obtained by quadratic as well as by cubic interpolation. We conclude that there is no improvement in the third degree polynomial over that of the second degree.
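The column-by-column scheme above lends itself to a short program. A sketch of Aitken's iterated linear interpolation (my own, not from the text), checked on the data of Example 5.1:

```python
def aitken(xs, fs, x):
    n = len(xs)
    p = fs[:]                       # p[j] initially holds f_j
    for m in range(n - 1):
        # after this pass, p[j] interpolates at the nodes x_0..x_m and x_j
        for j in range(m + 1, n):
            p[j] = (p[m] * (xs[j] - x) - p[j] * (xs[m] - x)) / (xs[j] - xs[m])
    return p[n - 1]                 # p_{012...n-1}(x)

xs = [2.0, 2.1, 2.2, 2.3]
fs = [0.7909, 0.7875, 0.7796, 0.7673]
val = aitken(xs, fs, 2.12)
print(round(val, 5))   # -> 0.78628
```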
Notes: 1. Unlike Lagrange's method, it is not necessary to fix in advance the degree of the interpolating polynomial to be used.
2. The approximation by a higher degree interpolating polynomial may not always lead to a better result. In fact, it may be even worse in some cases.
Consider the function f(x) = 4^x. We form the finite difference table with values for x = 0 to 4:
x   0   1    2    3     4
y   1   4   16   64   256
Now, consider the values obtained at x = 0.5 by taking successively higher and higher degree polynomials. With u = 0.5, Δy0 = 3, Δ²y0 = 9, Δ³y0 = 27, Δ⁴y0 = 81, the successive approximations are,
degree 1: 1 + 0.5 × 3 = 2.5
degree 2: 2.5 + [(0.5)(-0.5)/2] × 9 = 1.375
degree 3: 1.375 + [(0.5)(-0.5)(-1.5)/6] × 27 = 3.0625
degree 4: 3.0625 + [(0.5)(-0.5)(-1.5)(-2.5)/24] × 81 = -0.1016
We note that the actual value 4^0.5 = 2 is not obtainable by interpolation; the results for the higher degree interpolating polynomials become worse.
Note: Lagrange’s interpolation formula and iterative linear interpolation can easily
be implemented for computations by a digital computer.
Example 5.2: Determine the interpolating polynomial for the following table of data:
x  1   2    3    4
y  1   1   -1   -5
Solution: The data is equally spaced. We thus form the finite difference table.
x    y    Δy   Δ²y
1    1
          0
2    1        -2
         -2
3   -1        -2
         -4
4   -5
Since the differences of second order are constant, the interpolating polynomial is of degree two. Using Newton's forward difference interpolation with s = x - 1, we get
y = 1 + s × 0 + [s(s - 1)/2](-2) = 1 - (x - 1)(x - 2),
i.e., y = -x² + 3x - 1.
Example 5.3: Compute the value of f(7.5) by using suitable interpolation on the
following table of data.
x 3 4 5 6 7 8
f ( x) 28 65 126 217 344 513
Solution: The data is equally spaced. Thus, for computing f(7.5), we use Newton's backward difference interpolation. For this, we first form the finite difference table as shown below.
x    f(x)    Δf(x)    Δ²f(x)    Δ³f(x)
3     28
              37
4     65       24
              61        6
5    126       30
              91        6
6    217       36
             127        6
7    344       42
             169
8    513
The differences of order three are constant and hence we use Newton’s backward
difference interpolating polynomial of degree three.
f(x) = yn + v∇yn + (v(v + 1)/2!)∇²yn + (v(v + 1)(v + 2)/3!)∇³yn,
where v = (x − xn)/h. For x = 7.5, xn = 8 and h = 1,
v = (7.5 − 8)/1 = −0.5
∴ f(7.5) = 513 + (−0.5) × 169 + ((−0.5)(0.5)/2) × 42 + ((−0.5)(0.5)(1.5)/6) × 6
= 513 − 84.5 − 5.25 − 0.375
= 422.875
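The backward difference computation above can be scripted directly. The following is an illustrative sketch (the helper names are my own): it rebuilds the difference table column by column and accumulates the series, using the bottom entry of each column as the backward difference at xn:

```python
def newton_backward(xs, ys, x):
    """Newton's backward difference interpolation on equally spaced xs."""
    h = xs[1] - xs[0]
    v = (x - xs[-1]) / h
    col = list(ys)
    result, term, k = col[-1], 1.0, 0
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]   # next difference column
        k += 1
        term *= (v + k - 1) / k                        # v(v+1)...(v+k-1)/k!
        result += term * col[-1]                       # bottom entry = backward difference
    return result

xs = [3, 4, 5, 6, 7, 8]
ys = [28, 65, 126, 217, 344, 513]
print(newton_backward(xs, ys, 7.5))   # 422.875
```

Here the third differences are constant, so the fourth and fifth difference columns are zero and contribute nothing.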
Example 5.4: Determine the interpolating polynomial for the following data:
x 2 4 6 8 10
f ( x) 5 10 17 29 50
Here, h = 0.1. Choosing x0 = 0.0, we have s = (x − x0)/h = x/0.1 = 10x. Newton's forward difference interpolation formula is,
y = y0 + sΔy0 + (s(s − 1)/2!)Δ²y0 + (s(s − 1)(s − 2)/3!)Δ³y0
Substituting the tabulated differences and simplifying gives the interpolating polynomial in x, from which
y(0.05) = 1.0002
Example 5.6: Compute f(0.23) and f(0.29) by using suitable interpolation formula
with the table of data given below.
x 0.20 0.22 0.24 0.26 0.28 0.30
f ( x) 1.6596 1.6698 1.6804 1.6912 1.7024 1.7139
Solution: The data being equally spaced, we use Newton’s forward difference
interpolation for computing f(0.23), and for computing f(0.29), we use Newton’s
backward difference interpolation. We first form the finite difference table,
x      f(x)     Δf(x)    Δ²f(x)
0.20   1.6596
                0.0102
0.22   1.6698             0.0004
                0.0106
0.24   1.6804             0.0002
                0.0108
0.26   1.6912             0.0004
                0.0112
0.28   1.7024             0.0003
                0.0115
0.30   1.7139
We observe that differences of order higher than two would be irregular. Hence, we use a second degree interpolating polynomial. For computing f(0.23), we take x0 = 0.22, so that
u = (x − x0)/h = (0.23 − 0.22)/0.02 = 0.5
Using Newton's forward difference interpolation, we compute
f(0.23) = 1.6698 + 0.5 × 0.0106 + ((0.5)(0.5 − 1)/2) × 0.0002
= 1.6698 + 0.0053 − 0.000025
= 1.675075 ≈ 1.6751
Again, for computing f(0.29), we take xn = 0.30, so that
v = (x − xn)/h = (0.29 − 0.30)/0.02 = −0.5
Using Newton's backward difference interpolation, we evaluate
f(0.29) = 1.7139 + (−0.5) × 0.0115 + ((−0.5)(−0.5 + 1)/2) × 0.0003
= 1.7139 − 0.00575 − 0.00004
= 1.70811 ≈ 1.7081
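A forward-difference companion to the earlier backward routine reproduces f(0.23); this is an illustrative sketch (the `start`/`degree` parameters are my own device for picking x0 = 0.22 and truncating at degree two):

```python
def newton_forward(xs, ys, x, start=0, degree=None):
    """Newton forward difference interpolation beginning at xs[start]."""
    h = xs[1] - xs[0]
    u = (x - xs[start]) / h
    col = ys[start:] if degree is None else ys[start:start + degree + 1]
    result, term, k = col[0], 1.0, 0
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]   # next difference column
        k += 1
        term *= (u - (k - 1)) / k                      # u(u-1)...(u-k+1)/k!
        result += term * col[0]                        # top entry = forward difference
    return result

xs = [0.20, 0.22, 0.24, 0.26, 0.28, 0.30]
ys = [1.6596, 1.6698, 1.6804, 1.6912, 1.7024, 1.7139]
print(round(newton_forward(xs, ys, 0.23, start=1, degree=2), 4))   # 1.6751
```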
Example 5.7: Compute values of ex at x = 0.02 and at x = 0.38 using suitable
interpolation formula on the table of data given below.
x 0.0 0.1 0.2 0.3 0.4
e^x 1.0000 1.1052 1.2214 1.3499 1.4918
Solution: The data is equally spaced. We have to use Newton’s forward difference
interpolation formula for computing ex at x = 0.02, and for computing ex at x = 0.38,
we have to use Newton’s backward difference interpolation formula. We first form
the finite difference table.
x     y = e^x    Δy       Δ²y      Δ³y      Δ⁴y
0.0   1.0000
                 0.1052
0.1   1.1052              0.0110
                 0.1162            0.0013
0.2   1.2214              0.0123            −0.0002
                 0.1285            0.0011
0.3   1.3499              0.0134
                 0.1419
0.4   1.4918
∴ u = (x − x0)/h = (0.02 − 0.0)/0.1 = 0.2
By Newton's forward difference interpolation formula, we have
e^0.02 = 1.0000 + 0.2 × 0.1052 + ((0.2)(0.2 − 1)/2!) × 0.0110 + ((0.2)(0.2 − 1)(0.2 − 2)/3!) × 0.0013 + ((0.2)(0.2 − 1)(0.2 − 2)(0.2 − 3)/4!) × (−0.0002)
= 1.0000 + 0.02104 − 0.00088 + 0.00006 + 0.00001
= 1.02023 ≈ 1.0202
For computing e^0.38 we take xn = 0.4. Thus, v = (0.38 − 0.4)/0.1 = −0.2
By Newton's backward difference interpolation formula, we have
e^0.38 = 1.4918 + (−0.2) × 0.1419 + ((−0.2)(−0.2 + 1)/2!) × 0.0134 + ((−0.2)(−0.2 + 1)(−0.2 + 2)/3!) × 0.0011 + ((−0.2)(−0.2 + 1)(−0.2 + 2)(−0.2 + 3)/4!) × (−0.0002)
= 1.4918 − 0.02838 − 0.00107 − 0.00005 + 0.00001
= 1.46231 ≈ 1.4623
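The backward-difference value can be checked against the true exponential. An illustrative sketch (variable names mine) that builds the full difference table for this data and sums the series with v = −0.2:

```python
import math

xs = [0.0, 0.1, 0.2, 0.3, 0.4]
ys = [1.0000, 1.1052, 1.2214, 1.3499, 1.4918]

cols, col = [ys], ys
while len(col) > 1:
    col = [b - a for a, b in zip(col, col[1:])]
    cols.append(col)

v = (0.38 - xs[-1]) / 0.1        # v = -0.2
result, term = 0.0, 1.0
for k, c in enumerate(cols):
    if k:
        term *= (v + k - 1) / k
    result += term * c[-1]       # bottom entry of each column = backward difference
print(round(result, 4))          # 1.4623, close to math.exp(0.38) = 1.46228...
```

The agreement with e^0.38 to about four decimals confirms the corrected sum above.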
Δf(x) = f(x + h) − f(x)    (5.5)
At a tabulated point xi, we have
Δf(xi) = f(xi + h) − f(xi) = f(xi+1) − f(xi)    (5.6)
We also denote Δf(xi) by Δyi, given by
Δyi = yi+1 − yi    (5.7)
We also define an operator E, called the shift operator, which is given by,
E f(x) = f(x + h)    (5.8)
∴ Δf(x) = f(x + h) − f(x) = E f(x) − f(x) = (E − 1) f(x)
Thus, Δ ≡ E − 1 is an operator relation.    (5.9)
While Equation (5.5) defines the first order forward difference, we can define the second order forward difference by,
Δ²f(x) = Δ(Δf(x)) = Δf(x + h) − Δf(x)
∴ Δ²f(x) = f(x + 2h) − 2f(x + h) + f(x)    (5.10)
Shift Operator
The shift operator is denoted by E and is defined by E f(x) = f(x + h). Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E²f(x) = Ef(x + h) = f(x + 2h).
E²yk = E(Eyk) = E(yk+1) = yk+2
In general, E^m f(x) = f(x + mh) and E^m yk = yk+m
Or, E ≡ 1 + Δ    (5.11)
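These operator relations are easy to verify numerically. A minimal sketch (the test function and step size are arbitrary choices of mine), treating each operator as a function-to-function map:

```python
h = 0.1
f = lambda x: x ** 3                          # any smooth test function

E = lambda g: (lambda x: g(x + h))            # shift operator E
D = lambda g: (lambda x: g(x + h) - g(x))     # forward difference Δ

x = 0.7
# Δ ≡ E − 1:  Δf(x) equals Ef(x) − f(x)
assert abs(D(f)(x) - (E(f)(x) - f(x))) < 1e-12
# Δ²f(x) = f(x + 2h) − 2f(x + h) + f(x), Equation (5.10)
assert abs(D(D(f))(x) - (f(x + 2*h) - 2*f(x + h) + f(x))) < 1e-12
print("operator identities hold")
```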
The first order backward difference is defined by ∇f(x) = f(x) − f(x − h) = (1 − E⁻¹)f(x), so that ∇ ≡ 1 − E⁻¹. Similarly, for the second order backward difference, we have
∇²f(x) = ∇f(x) − ∇f(x − h)
= [f(x) − f(x − h)] − [f(x − h) − f(x − 2h)]
= f(x) − 2f(x − h) + f(x − 2h)
= f(x) − 2E⁻¹f(x) + E⁻²f(x)
= (1 − 2E⁻¹ + E⁻²) f(x)
= (1 − E⁻¹)² f(x)
In general, ∇^m ≡ (1 − E⁻¹)^m    (5.14)
Relations between the operators E, D and Δ
We have by Taylor's theorem,
f(x + h) = f(x) + h f′(x) + (h²/2!) f″(x) + ...
Thus, E f(x) = f(x) + hD f(x) + (h²D²/2!) f(x) + ...
Or, (1 + Δ) f(x) = (1 + hD + h²D²/2! + ...) f(x) = e^(hD) f(x)
Thus, E ≡ 1 + Δ ≡ e^(hD)    (5.15)
Also, hD ≡ log(1 + Δ)
Thus, D ≡ (1/h) log(1 + Δ)
i.e., D ≡ (1/h)(Δ − Δ²/2 + Δ³/3 − ...)
Further, E⁻¹ ≡ e^(−hD), so that ∇ ≡ 1 − E⁻¹ ≡ 1 − e^(−hD)    (5.16)
Even though the central difference operator uses fractional arguments, it is still widely used. The central difference operator δ is defined by,
δf(x) = f(x + h/2) − f(x − h/2), i.e., δ ≡ E^(1/2) − E^(−1/2)
It is related to the averaging operator μ, which is defined by,
μf(x) = (1/2)[f(x + h/2) + f(x − h/2)], i.e., μ ≡ (1/2)(E^(1/2) + E^(−1/2))    (5.17)
Squaring, μ² ≡ (1/4)(E + 2 + E⁻¹)
∴ μ² ≡ 1 + δ²/4    (5.18)
It may be noted that,
δ ≡ E^(1/2) − E^(−1/2) ≡ ΔE^(−1/2) ≡ ∇E^(1/2)
Also, Δ∇ ≡ ∇Δ ≡ δ²    (5.19)
Further,
∇ ≡ 1 − E⁻¹ ≡ (E − 1)/E
Thus, E∇ ≡ E − 1 ≡ Δ
Hence proved.
(ii) From Equation (1), we have E ≡ 1 + Δ    (3)
and from Equation (2), we get E⁻¹ ≡ 1 − ∇    (4)
Combining Equations (3) and (4), we get (1 + Δ)(1 − ∇) ≡ 1.
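The identity (1 + Δ)(1 − ∇) ≡ 1 simply says that E and E⁻¹ undo each other, and it can be confirmed on any concrete function. A minimal sketch (function, step and point are arbitrary choices of mine):

```python
h = 0.5
f = lambda x: 2.0 ** x                           # arbitrary smooth test function

fwd = lambda g: (lambda x: g(x + h) - g(x))      # forward difference Δ
bwd = lambda g: (lambda x: g(x) - g(x - h))      # backward difference ∇

x = 1.25
# apply (1 − ∇) first, then (1 + Δ); the result must be f(x) itself
g = lambda t: f(t) - bwd(f)(t)                   # (1 − ∇) f, i.e. f shifted back
lhs = g(x) + fwd(g)(x)                           # (1 + Δ) applied to g
assert abs(lhs - f(x)) < 1e-12

# the related identity Δ∇ ≡ δ² of Equation (5.19)
delta = lambda g: (lambda x: g(x + h/2) - g(x - h/2))
assert abs(delta(delta(f))(x) - fwd(bwd(f))(x)) < 1e-12
print("identities verified")
```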
Example 5.9: If fi is the value of f(x) at xi, where xi = x0 + ih for i = 1, 2, ..., prove that,
fi = E^i f0 = Σ (j = 0 to i) (i choose j) Δ^j f0
Solution: We can write Ef(x) = f(x + h), so that fi = f(x0 + ih) = E^i f0.
Since E ≡ 1 + Δ by Equation (5.15), the binomial expansion gives
fi = (1 + Δ)^i f0 = Σ (j = 0 to i) (i choose j) Δ^j f0
Hence proved.
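The binomial identity just proved can be exercised on arbitrary data: every tabulated value is recoverable from f0 and the leading forward differences. An illustrative sketch (sample values are my own):

```python
from math import comb

ys = [3.0, 1.5, 4.0, 9.5, 2.5]          # arbitrary sample values f0 ... f4

# leading forward differences Δ^j f0, j = 0..4
diffs, col = [ys[0]], list(ys)
while len(col) > 1:
    col = [b - a for a, b in zip(col, col[1:])]
    diffs.append(col[0])

for i in range(len(ys)):
    recon = sum(comb(i, j) * diffs[j] for j in range(i + 1))
    assert abs(recon - ys[i]) < 1e-9     # f_i = sum of C(i, j) Δ^j f0
print("binomial identity verified")
```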
Example 5.10: Compute the following differences:
(i) Δⁿeˣ (ii) Δⁿxⁿ
Solution:
(i) We have, Δeˣ = e^(x+h) − eˣ = eˣ(e^h − 1)
Since e^h − 1 is a constant, each further application of Δ multiplies by the same factor, so
Δⁿeˣ = eˣ(e^h − 1)ⁿ
(ii) Δ(xⁿ) = (x + h)ⁿ − xⁿ = nhx^(n−1) + (n(n − 1)/2!)h²x^(n−2) + ... + hⁿ,
a polynomial of degree n − 1 with leading coefficient nh. Each application of Δ lowers the degree by one and multiplies the leading coefficient by the current degree times h, so
Δⁿxⁿ = n! hⁿ
Example 5.11: Compute (i) Δ{f(x)/g(x)} and (ii) Δ{log f(x)}.
Solution:
(i) We have,
Δ{f(x)/g(x)} = f(x + h)/g(x + h) − f(x)/g(x)
= [f(x + h) g(x) − f(x) g(x + h)] / [g(x + h) g(x)]
= [f(x + h) g(x) − f(x) g(x) + f(x) g(x) − f(x) g(x + h)] / [g(x + h) g(x)]
= [g(x){f(x + h) − f(x)} − f(x){g(x + h) − g(x)}] / [g(x) g(x + h)]
= [g(x) Δf(x) − f(x) Δg(x)] / [g(x) g(x + h)]
(ii) We have,
Δ{log f(x)} = log{f(x + h)} − log{f(x)}
= log [f(x + h)/f(x)]
= log [{f(x) + Δf(x)}/f(x)]
= log {1 + Δf(x)/f(x)}
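The quotient rule for Δ derived in (i) can be spot-checked numerically. A minimal sketch (the functions f, g and the point are arbitrary choices of mine):

```python
h = 0.25
f = lambda x: x ** 2 + 1.0
g = lambda x: 3.0 * x + 2.0

d = lambda fn: (lambda x: fn(x + h) - fn(x))     # forward difference Δ

x = 2.0
lhs = d(lambda t: f(t) / g(t))(x)                            # Δ{f/g}
rhs = (g(x) * d(f)(x) - f(x) * d(g)(x)) / (g(x) * g(x + h))  # the derived rule
assert abs(lhs - rhs) < 1e-12
print("quotient rule for the forward difference verified")
```

Note the denominator g(x)g(x + h), not g(x)², which is what distinguishes the difference rule from the derivative quotient rule it resembles.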
2. Let a function y = f(x) have a set of values y0, y1, y2, ..., corresponding to points x0, x1, x2, ..., where x1 = x0 + h, x2 = x0 + 2h, ..., are equally spaced with spacing h. We define different types of finite differences such as forward differences, backward differences and central differences, and express them in terms of operators.
3. The shift operator is denoted by E and is defined by E f (x) = f (x + h).
Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E²f(x) = Ef(x + h) = f(x + 2h).
E²yk = E(Eyk) = E(yk+1) = yk+2
In general, E^m f(x) = f(x + mh) and E^m yk = yk+m
4. The backward difference operator ∇ is related to the shift operator by ∇ ≡ 1 − E⁻¹, so that E∇ ≡ E − 1 ≡ Δ.
5.5 SUMMARY
• Let p01(x) denote the linear interpolating polynomial for the tabulated values at x0 and x1. Thus, we can write,
p01(x) = [(x1 − x)f0 − (x0 − x)f1] / (x1 − x0)
• Let a function y = f(x) have a set of values y0, y1, y2, ..., corresponding to points x0, x1, x2, ..., where x1 = x0 + h, x2 = x0 + 2h, ..., are equally spaced with spacing h. We define different types of finite differences such as forward differences, backward differences and central differences, and express them in terms of operators.
• The shift operator is denoted by E and is defined by E f(x) = f(x + h). Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E²f(x) = Ef(x + h) = f(x + 2h).
E²yk = E(Eyk) = E(yk+1) = yk+2
In general, E^m f(x) = f(x + mh) and E^m yk = yk+m
• The central difference operator, denoted by δ, is defined by δf(x) = f(x + h/2) − f(x − h/2).
Short-Answer Questions
1. Explain the interpolating polynomials using finite difference.
2. Define the shift operator.
3. Analyse the relation between forward difference operator and shift operator.
4. Elaborate on the relation between the backward difference operator and the shift operator.
5. State the central difference operator.
Long-Answer Questions
1. Discuss briefly the interpolating polynomials using finite difference.
2. Explain the relation between forward difference operator and shift operator.
3. Analyse the relation between the backward difference operator and the shift operator.
4. Explain the relation between the operators E, D, and Δ.
5. Define the central difference operator.
UNIT 6 LAGRANGE AND NEWTON
INTERPOLATIONS
Structure
6.0 Introduction
6.1 Objectives
6.2 Lagrange Interpolations
6.3 Newton Interpolations
6.4 Applications of Lagrange and Newton Interpolations
6.5 Answers to Check Your Progress Questions
6.6 Summary
6.7 Key Words
6.8 Self Assessment Questions and Exercises
6.9 Further Readings
6.0 INTRODUCTION
The Lagrange form of the interpolation polynomial shows the linear character of
polynomial interpolation and the uniqueness of the interpolation polynomial.
Therefore, it is preferred in proofs and theoretical arguments. The Lagrange basis
polynomials can be used in numerical integration to derive the Newton–Cotes
formulas. In numerical analysis, Lagrange polynomials are used for polynomial
interpolation. For a given set of points (xj, yj) with no two xj values equal, the
Lagrange polynomial is the polynomial of lowest degree that assumes at each
value xj the corresponding value yj, so that the functions coincide at each point.
A Newton polynomial, named after its inventor Isaac Newton, is an
interpolation polynomial for a given set of data points. The Newton polynomial is
sometimes called Newton’s divided differences interpolation polynomial because
the coefficients of the polynomial are calculated using Newton’s divided differences
method. Newton’s formula is of interest because it is the straightforward and natural
differences-version of Taylor’s polynomial. Taylor’s polynomial tells where a
function will go, based on its y value, and its derivatives (its rate of change, and the
rate of change of its rate of change, etc.) at one particular x value. Newton’s
formula is Taylor’s polynomial based on finite differences instead of instantaneous
rates of change.
In this unit, you will study about the Lagrange interpolations, Newton
interpolations, and applications of Lagrange and Newton interpolations.
6.1 OBJECTIVES

6.2 LAGRANGE INTERPOLATIONS

The interpolating polynomial may be written as,
f(x) = Σ (i = 0 to n) li(x) fi    (6.2)
where the Lagrange coefficient polynomials li(x) satisfy
li(xj) = 1 for j = i, and li(xj) = 0 for j ≠ i    (6.3)
Equation (6.3) suggests that li(x) vanishes at the n points x0, x1, ..., xi−1, xi+1, ..., xn. Thus, we can write,
li(x) = ci(x − x0)(x − x1) ... (x − xi−1)(x − xi+1) ... (x − xn)
where ci is a constant determined by the condition li(xi) = 1. Thus,
li(x) = [(x − x0)(x − x1) ... (x − xi−1)(x − xi+1) ... (x − xn)] / [(xi − x0)(xi − x1) ... (xi − xi−1)(xi − xi+1) ... (xi − xn)], for i = 0, 1, 2, ..., n    (6.4)
Equations (6.2) and (6.4) together give Lagrange’s interpolating polynomial.
Algorithm: To compute f (x) by Lagrange’s interpolation.
Step 1: Read n [n being the number of values]
Step 2: Read values of xi, fi for i = 1, 2,..., n.
Step 3: Set sum = 0, i = 1
Step 4: Read x [x being the interpolating point]
Step 5: Set j = 1, product = 1
Step 6: If j ≠ i, then set product = product × (x − xj)/(xi − xj); else go to Step 7
Step 7: Set j = j + 1
Step 8: Check if j > n, then go to Step 9, else go to Step 6
Step 9: Compute sum = sum + product × fi
Step 10: Set i = i + 1
Step 11: Check if i > n, then go to Step 12
else go to Step 5
Step 12: Write x, sum
Example 6.1: Compute f (0.4) for the table below by Lagrange’s interpolation.
x 0.3 0.5 0.6
f ( x) 0.61 0.69 0.72
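The algorithm above translates almost line for line into code. An illustrative sketch (names mine), applied to this table, evaluates the quadratic through the three given points:

```python
def lagrange(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial of (xs, fs) at x."""
    total = 0.0
    for i, (xi, fi) in enumerate(zip(xs, fs)):
        product = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                product *= (x - xj) / (xi - xj)   # Lagrange coefficient l_i(x)
        total += product * fi
    return total

print(round(lagrange([0.3, 0.5, 0.6], [0.61, 0.69, 0.72], 0.4), 4))   # 0.6533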
where
l0(x) = (x − 0)(x − 1)(x − 2) / [(−1 − 0)(−1 − 1)(−1 − 2)] = −(1/6) x(x − 1)(x − 2)
l1(x) = (x + 1)(x − 1)(x − 2) / [(0 + 1)(0 − 1)(0 − 2)] = (1/2)(x + 1)(x − 1)(x − 2)
l2(x) = (x + 1)(x − 0)(x − 2) / [(1 + 1)(1 − 0)(1 − 2)] = −(1/2)(x + 1)x(x − 2)
l3(x) = (x + 1)(x − 0)(x − 1) / [(2 + 1)(2 − 0)(2 − 1)] = (1/6)(x + 1)x(x − 1)
Hence,
f(x) = −(1/6)x(x − 1)(x − 2) × 1 + (1/2)(x + 1)(x − 1)(x − 2) × 1 − (1/2)(x + 1)x(x − 2) × 1 + (1/6)(x + 1)x(x − 1) × (−3)
= −(1/6)(4x³ − 4x − 6)
= −(1/3)(2x³ − 2x − 3)
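The basis polynomials above imply the data points (−1, 1), (0, 1), (1, 1) and (2, −3); the cubic just obtained should pass through all four, which a quick check confirms (a minimal sketch, not from the book):

```python
p = lambda x: -(2 * x**3 - 2 * x - 3) / 3   # the cubic -(1/3)(2x^3 - 2x - 3)

data = [(-1, 1), (0, 1), (1, 1), (2, -3)]
assert all(abs(p(x) - y) < 1e-12 for x, y in data)
print("cubic passes through all four tabulated points")
```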
Example 6.4: Evaluate the values of f (2) and f (6.3) using Lagrange’s interpolation
formula for the table of values given below.
Since the computed result cannot be more accurate than the data, the final result is rounded off to the same number of decimals as the data. In some cases, a higher degree interpolating polynomial may not lead to better results.
f(a + hk) = f(a) + kΔf(a) + (k(k − 1)/2!)Δ²f(a) + ... + (k(k − 1)(k − 2) ... (k − n + 1)/n!)Δⁿf(a)
This is the required formula.
This formula is particularly useful for interpolating the values of f(x) near the
beginning of the set of values given. h is called the interval of difference, while ' is
the forward difference operator.
Example 6.5: From the following table, estimate the number of students who weigh between 45 and 50.
∴ y50 = 20 + 0.5 × 45 + ((0.5)(0.5 − 1)/2!) × (−10) + ...
Here, x0 = 4 and k = (x − 4)/2.
So, y(x) = 1 + ((x − 4)/2) × 2 + [((x − 4)/2)((x − 4)/2 − 1)/2!] × 12
= 1 + (x − 4) + ((x − 4)(x − 6)/2) × 3
= 1 + (x − 4)[1 + 3(x − 6)/2]
= 1 + ((x − 4)/2)(3x − 16)
= 1 + (1/2)(3x² − 28x + 64)
y(x) = (3/2)x² − 14x + 33
So, y(5) = (3/2)(5)² − 14(5) + 33 = 0.5
Newton–Gregory Backward Interpolation Formula
Let y = f(x) be a function of x which assumes the value f(a), f(a + h), f(a + 2h),
..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a + nh of the
independent variable x.
Newton–Gregory Backward Interpolation Formula
Let y = f(x) be a function of x which assumes the value f(a), f(a + h), f(a + 2h),
..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a + nh of the
independent variable x.
Let f(x) be a polynomial of the nth degree, so that
f(x) = A0 + A1(x − a − nh) + A2(x − a − nh)(x − a − (n − 1)h) + ... + An(x − a − nh)(x − a − (n − 1)h) ... (x − a − h)
where A0, A1, A2, A3, ..., An are to be determined.    (6.8)
Put x = a + nh; then every product term vanishes and
f(a + nh) = A0    (6.9)
Put x = a + (n − 1)h; then,
f(a + (n − 1)h) = A0 − hA1 = f(a + nh) − hA1    [By Equation (6.9)]
∴ A1 = [f(a + nh) − f(a + (n − 1)h)]/h = ∇f(a + nh)/h    (6.10)
Put x = a + (n − 2)h; then,
f(a + (n − 2)h) = A0 − 2hA1 + (−2h)(−h)A2
∴ A2 = ∇²f(a + nh)/(2! h²)    (6.11)
Proceeding, you get,
An = ∇ⁿf(a + nh)/(n! hⁿ)    (6.12)
Substituting these values, you get,
f(x) = f(a + nh) + (x − a − nh) ∇f(a + nh)/h + ... + (x − a − nh)(x − a − (n − 1)h) ... (x − a − h) ∇ⁿf(a + nh)/(n! hⁿ)    (6.13)
Put x = a + nh + kh; then,
x − a − nh = kh
and x − a − (n − 1)h = (k + 1)h
⋮
x − a − h = (k + n − 1)h
∴ Equation (6.13) becomes,
f(x) = f(a + nh) + k ∇f(a + nh) + (k(k + 1)/2!) ∇²f(a + nh) + ... + (k(k + 1)(k + 2) ... (k + n − 1)/n!) ∇ⁿf(a + nh)
which is the required formula. Equivalently,
f(xn + kh) = f(xn) + k ∇f(xn) + (k(k + 1)/2!) ∇²f(xn) + ... + (k(k + 1)(k + 2) ... (k + n − 1)/n!) ∇ⁿf(xn)
This formula is useful when the value of f(x) is required near the end of the
table.
where xn = x0 + nh and a = x0, so that f(a + nh) = f(xn).
Example 6.7: Using Newton’s backward interpolation formula, obtain the value
of tan 22°, given that:
θ°:    0      4       8       12      16      20      24
tan θ: 0   0.0699  0.1405  0.2126  0.2867  0.3640  0.4452
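The solution can be sketched in code; this is an illustrative check (routine name mine, and the asserted tolerance is my own choice), summing the full backward difference series with k = (22 − 24)/4 = −0.5:

```python
import math

def newton_backward(xs, ys, x):
    """Newton-Gregory backward difference interpolation (equally spaced xs)."""
    h = xs[1] - xs[0]
    k = (x - xs[-1]) / h
    col = list(ys)
    result, term, j = col[-1], 1.0, 0
    while len(col) > 1:
        col = [b - a for a, b in zip(col, col[1:])]
        j += 1
        term *= (k + j - 1) / j          # k(k+1)...(k+j-1)/j!
        result += term * col[-1]
    return result

deg = [0, 4, 8, 12, 16, 20, 24]
tan = [0, 0.0699, 0.1405, 0.2126, 0.2867, 0.3640, 0.4452]
approx = newton_backward(deg, tan, 22)
print(approx)   # close to tan 22 degrees = 0.40403...
```

Because the data carry only four decimals, the highest differences are noisy; the result agrees with the true tangent to about three decimal places.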
6.6 SUMMARY
• A primary use of Newton polynomials is to furnish mathematical tools for developing methods in the areas of approximation theory, numerical integration, and the numerical solution of differential equations.
• Fundamentally, the Newton polynomials are considered for enhanced accuracy for a given polynomial degree as compared to other polynomial interpolations.
Short-Answer Questions
Short-Answer Questions
1. Explain the Lagrange interpolations.
2. Define the Newton interpolations.
3. State Newton-Gregory forward interpolation formula.
4. Elaborate on the Newton-Gregory backward interpolation formula.
5. Analyse the applications of Lagrange interpolations.
6. Explain the applications of Newton interpolations.
Long-Answer Questions
1. Briefly discuss the Lagrange interpolation.
2. Explain the Newton interpolations.
3. Define Newton-Gregory forward interpolation formula.
4. Discuss the Newton-Gregory backward interpolation formula.
5. Write down the applications of Lagrange and Newton interpolations.
6.9 FURTHER READINGS

Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan. 2020. Calculus of Finite Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.
UNIT 7 DIVIDED DIFFERENCES
7.0 INTRODUCTION
7.1 OBJECTIVES
7.2 DIVIDED DIFFERENCES AND THEIR
PROPERTIES
We now look at the differences of various orders of a polynomial of degree n, given by
y = f(x) = an xⁿ + an−1 xⁿ⁻¹ + an−2 xⁿ⁻² + ... + a1 x + a0
The first order forward difference is defined by Δf(x) = f(x + h) − f(x).
Example 7.1: Form the horizontal difference table for the following data and read off ∇f(4), ∇²f(3) and ∇³f(5):
x      1    2    3     4     5
f(x)   3   18   83   258   627
Solution: The horizontal difference table for the given data is as follows:
x    f(x)   ∇f(x)   ∇²f(x)   ∇³f(x)   ∇⁴f(x)
1      3
2     18     15
3     83     65      50
4    258    175     110       60
5    627    369     194       84       24
From the table we read the required values and get the following result:
∇f(4) = 175, ∇²f(3) = 50, ∇³f(5) = 84
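The horizontal table (each row holding the backward differences that end at that x) can be generated programmatically. An illustrative sketch (function name and layout are mine):

```python
def horizontal_diff_table(xs, fs):
    """Row i holds f(x_i) followed by the backward differences ending at x_i."""
    rows = [[f] for f in fs]
    cols = [list(fs)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        nxt = [b - a for a, b in zip(prev, prev[1:])]
        cols.append(nxt)
        for offset, val in enumerate(nxt):
            rows[offset + len(cols) - 1].append(val)
    return rows

rows = horizontal_diff_table([1, 2, 3, 4, 5], [3, 18, 83, 258, 627])
assert rows[3][1] == 175    # row of x = 4: [258, 175, 110, 60]
assert rows[2][2] == 50     # second difference ending at x = 3
assert rows[4][3] == 84     # third difference ending at x = 5
print(rows)
```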
Example 7.2: Form the difference table of f(x) on the basis of the following table and show that the third differences are constant. Hence, conclude about the degree of the interpolating polynomial.
x 0 1 2 3 4
f ( x) 5 6 13 32 69
Solution: The difference table is as follows:

x    f(x)   Δf(x)   Δ²f(x)   Δ³f(x)
0     5
              1
1     6               6
              7
2    13              12        6
             19
3    32              18        6
             37
4    69

It is clear from the above table that the third differences are constant and hence, the degree of the interpolating polynomial is three.
so that, y = y0 + (x − x0)[x, x0]    ...(7.1)
Again, [x, x0, x1] = ([x, x0] − [x0, x1]) / (x − x1)
which gives, [x, x0] = [x0, x1] + (x − x1)[x, x0, x1]    (7.2)
From Equations (7.1) and (7.2),
y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x, x0, x1]    ...(7.3)
Also, [x, x0, x1, x2] = ([x, x0, x1] − [x0, x1, x2]) / (x − x2)
which gives, [x, x0, x1] = [x0, x1, x2] + (x − x2)[x, x0, x1, x2]    (7.4)
Continuing in this way, we get
y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x0, x1, x2] + (x − x0)(x − x1)(x − x2)[x0, x1, x2, x3] + ... + (x − x0)(x − x1) ... (x − xn)[x, x0, x1, x2, ..., xn]
This is called Newton's General Interpolation formula with divided differences, the last term being the remainder term after (n + 1) terms.
Example 7.3: Referring to the following table, find the value of f(x) at point
x = 4:
x: 1.5 3 6
f(x): –0.25 2 20.
Solution: The divided difference table is shown below:
Divided Difference Table for Finding the Value of f(x)
x      f(x)    First d.d.   Second d.d.
1.5   −0.25
                 1.5
3       2                       1
                 6
6      20
Applying Newton’s divided difference formula,
f(x) = –0.25 + (x – 1.5) (1.5) + (x – 1.5) (x – 3) (1)
Putting x = 4, you get
f(4) = 6.
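The divided-difference coefficients and the nested evaluation can be coded in a few lines. An illustrative sketch (routine names mine) reproducing this example:

```python
def divided_coeffs(xs, ys):
    """Newton divided-difference coefficients [y0], [x0,x1], [x0,x1,x2], ..."""
    coeffs = list(ys)
    for k in range(1, len(xs)):
        for i in range(len(xs) - 1, k - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (xs[i] - xs[i - k])
    return coeffs

def newton_divided(xs, ys, x):
    """Evaluate Newton's divided difference interpolating polynomial at x."""
    result, prod = 0.0, 1.0
    for c, xi in zip(divided_coeffs(xs, ys), xs):
        result += c * prod
        prod *= (x - xi)
    return result

xs, ys = [1.5, 3, 6], [-0.25, 2, 20]
print(newton_divided(xs, ys, 4))   # 6.0
```

The coefficients come out as −0.25, 1.5 and 1, matching the table above.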
Example 7.4: Using Newton's divided difference formula prove that,
f(x) = f(0) + xΔf(−1) + ((x + 1)x/2!)Δ²f(−1) + ((x + 1)x(x − 1)/3!)Δ³f(−2) + ...
Solution: Taking the arguments 0, −1, 1, −2, ... (with h = 1), Newton's divided difference formula gives,
f(x) = f(0) + x[0, −1] + x(x + 1)[0, −1, 1] + x(x + 1)(x − 1)[0, −1, 1, −2] + ...
Now, [0, −1] = (f(0) − f(−1))/(0 − (−1)) = Δf(−1)
[0, −1, 1] = ([0, 1] − [−1, 0])/(1 − (−1)) = (Δf(0) − Δf(−1))/2 = (1/2!)Δ²f(−1)
[0, −1, 1, −2] = ([−1, 0, 1] − [−2, −1, 0])/(1 − (−2)) = ((1/2)Δ²f(−1) − (1/2)Δ²f(−2))/3 = (1/3!)Δ³f(−2)
Substituting these in the formula above establishes the result.
We seek a polynomial of degree n satisfying the conditions
p(xi) = yi, for i = 0, 1, 2, ..., n    (7.5)
in the form,
p(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + ... + an(x − x0)(x − x1) ... (x − xn−1)    (7.6)
The coefficients ai in Equation (7.6) are determined by satisfying the conditions in Equation (7.5) successively for i = 0, 1, 2, ..., n. Thus, we get
a0 = y0, a1 = Δy0/h, a2 = Δ²y0/(2! h²), ..., an = Δⁿy0/(n! hⁿ)
Using these values of the coefficients, we get Newton's forward difference interpolation in the form,
p(x) = y0 + ((x − x0)/h)Δy0 + ((x − x0)(x − x1)/(2! h²))Δ²y0 + ... + ((x − x0)(x − x1) ... (x − xn−1)/(n! hⁿ))Δⁿy0
This formula can be expressed in a more convenient form by taking u = (x − x0)/h.
We have,
(x − x1)/h = (x − (x0 + h))/h = (x − x0)/h − 1 = u − 1
(x − x2)/h = (x − (x0 + 2h))/h = u − 2
⋮
(x − xn−1)/h = (x − {x0 + (n − 1)h})/h = u − (n − 1)
Thus,
p(x) = y0 + uΔy0 + (u(u − 1)/2!)Δ²y0 + ... + (u(u − 1) ... (u − n + 1)/n!)Δⁿy0    (7.7)
This formula is generally used for interpolating near the beginning of the table. For a given x, we choose a tabulated point as x0 for which 0 < u = (x − x0)/h < 1. For better results, we should have
|u| = |(x − x0)/h| ≤ 0.5
known as Newton’s backward difference interpolation formula. Let a table of values Divided Differences
{xi, yi}, for i = 0, 1, 2, ..., n for equally spaced values of xi be given. Thus, xi = x0 +
ih, yi = f(xi), for i = 0, 1, 2, ..., n are known.
We construct an interpolating polynomial of degree n of the form,
p(x) = b0 + b1(x − xn) + b2(x − xn)(x − xn−1) + ... + bn(x − xn)(x − xn−1) ... (x − x1)    (7.8)
We have to determine the coefficients b0, b1, ..., bn by satisfying the relations,
p(xi) = yi, for i = n, n − 1, ..., 1, 0    (7.9)
Putting x = xn, we get b0 = yn    (7.10)
Similarly, putting x = xn−1,
yn−1 = b0 + b1(xn−1 − xn) = yn − h b1
Or, b1 = (yn − yn−1)/h = ∇yn/h    (7.11)
Again, putting x = xn−2,
yn−2 = yn − 2h(∇yn/h) + b2(−2h)(−h)
∴ b2 = ∇²yn/(2! h²)    (7.12)
Proceeding similarly,
b3 = ∇³yn/(3! h³), b4 = ∇⁴yn/(4! h⁴), ..., bn = ∇ⁿyn/(n! hⁿ)    (7.13)
Substituting the expressions for bi in Equation (7.8), we get
p(x) = yn + ((x − xn)/h)∇yn + ((x − xn)(x − xn−1)/(2! h²))∇²yn + ... + ((x − xn)(x − xn−1) ... (x − x1)/(n! hⁿ))∇ⁿyn    (7.14)
This formula is known as Newton's backward difference interpolation formula. It uses the backward differences along the backward diagonal in the difference table.
Introducing a new variable v = (x − xn)/h, we have
(x − xn−1)/h = (x − (xn − h))/h = v + 1, and so on.
Thus, the interpolating polynomial in Equation (7.14) may be rewritten as,
p(x) = yn + v∇yn + (v(v + 1)/2!)∇²yn + ... + (v(v + 1) ... (v + n − 1)/n!)∇ⁿyn    (7.15)
This formula is generally used for interpolation at a point near the end of a table. The error in this interpolation formula may be written as,
E(x) = (v(v + 1)(v + 2) ... (v + n)/(n + 1)!) h^(n+1) f^(n+1)(ξ), where ξ lies between x0 and xn.
2. Let y0, y1, ..., yn be the values of y = f(x) corresponding to the arguments
x0, x1, ..., xn, then from the definition of divided differences, you have,
[x, x0] = (y − y0)/(x − x0)
3. Newton’s forward difference interpolation formula is a polynomial of degree
less than or equal to n. This is used to find the value of the tabulated function
at a non-tabular point. Consider a function y = f (x) whose values y0, y1,..., yn
at a set of equidistant points x0 , x1 ,..., xn are known.
4. Newton’s forward difference interpolation formula cannot be used for Divided Differences
interpolating at a point near the end of a table, since we do not have the
required forward differences for interpolating at such points. However, we
can use a separate formula known as Newton’s backward difference
interpolation formula. Let a table of values {xi, yi}, for i = 0, 1, 2, ..., n for NOTES
equally spaced values of xi be given. Thus, xi = x0 + ih, yi = f(xi), for i = 0, 1,
2, ..., n are known.
7.5 SUMMARY
• The differences of various orders of a polynomial of degree n, given by y = f(x) = an xⁿ + an−1 xⁿ⁻¹ + an−2 xⁿ⁻² + ... + a1 x + a0, can be formed by repeated application of the difference operators.
• The first order forward difference is defined by Δf(x) = f(x + h) − f(x).
7.6 KEY WORDS
Short-Answer Questions
1. Explain the divided differences and their properties.
2. Define the Newton’s divided difference interpolation formula.
3. Elaborate on the Newton’s forward difference interpolation formula.
4. Illustrate the Newton’s backward difference interpolation formula.
Long-Answer Questions
1. Discuss briefly the divided differences and their properties.
2. Analyse the applications of Newton’s general interpolation formula.
3. Explain the Newton’s backward difference interpolation formula.
UNIT 8 CENTRAL DIFFERENCES INTERPOLATIONS FORMULAE
Structure
8.0 Introduction
8.1 Objectives
8.2 Central Differences Interpolations Formulae
8.3 Gauss’s Formula
8.4 Stirling’s Formula
8.5 Bessel’s Formula
8.6 Lagrange’s Interpolation Formula
8.7 Everett’s Formula
8.8 Hermite's Formula
8.9 Answers to Check Your Progress Questions
8.10 Summary
8.11 Key Words
8.12 Self Assessment Questions and Exercises
8.13 Further Readings
8.0 INTRODUCTION
The central difference is an average of the forward and backward differences for equally spaced values of data. The truncation error of the central difference approximation is of order O(h²), where h is the step size.
Gauss’s formula alternately adds new points at the left and right ends, thereby
keeping the set of points centered near the same place (near the evaluated point).
When so doing, it uses terms from Newton’s formula, with data points and x
values renamed in keeping with one’s choice of what data point is designated as
the x0 data point.
Stirling’s formula remains centered about a particular data point, for use
when the evaluated point is nearer to a data point than to a middle of two data
points.
Bessel’s formula remains centered about a particular middle between two
data points, for use when the evaluated point is nearer to a middle than to a data
point. Bessel and Stirling achieve that by sometimes using the average of two
differences, and sometimes using the average of two products of binomials in x,
where Newton’s or Gauss’s would use just one difference or product. Stirling’s
uses an average difference in odd-degree terms (whose difference uses an even
number of data points); Bessel’s uses an average difference in even-degree terms
(whose difference uses an odd number of data points).
The Lagrange formula is at its best when all the interpolation will be done at
one x value, with only the data points’ y values varying from one problem to
another, and when it is known, from past experience, how many terms are needed
for sufficient accuracy.
In numerical analysis, Hermite interpolation, named after Charles Hermite,
is a method of interpolating data points as a polynomial function. The generated
Hermite interpolating polynomial is closely related to the Newton polynomial, in
that both are derived from the calculation of divided differences. However, the
Hermite interpolating polynomial may also be computed without using divided
differences.
In this unit, you will study about the central differences interpolations formulae,
Gauss’s formula, Stirling’s formula, Bessel’s formula, Everett’s formula, and
Hermite’s formula.
8.1 OBJECTIVES

8.2 CENTRAL DIFFERENCES INTERPOLATIONS FORMULAE

The central difference is an average of the forward and backward differences for equally spaced values of data. The truncation error of the central difference approximation is of order O(h²), where h is the step size.
You shall now study the central difference formulae most suited for
Interpolation near the middle of a tabulated set.
... + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁴f(a) + ...    (8.1)
Given a = 0, h = 1, you get
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(0) + (k(k − 1)(k − 2)/3!)Δ³f(0) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁴f(0) + ...    (8.2)
Now,
Δ³f(−1) = Δ²f(0) − Δ²f(−1), so that Δ²f(0) = Δ²f(−1) + Δ³f(−1)
Also,
Δ⁴f(−1) = Δ³f(0) − Δ³f(−1), so that Δ³f(0) = Δ³f(−1) + Δ⁴f(−1)
And Δ⁵f(−1) = Δ⁴f(0) − Δ⁴f(−1), so that Δ⁴f(0) = Δ⁴f(−1) + Δ⁵f(−1)
And so on.
∴ From Equation (8.2),
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!){Δ²f(−1) + Δ³f(−1)} + (k(k − 1)(k − 2)/3!){Δ³f(−1) + Δ⁴f(−1)} + (k(k − 1)(k − 2)(k − 3)/4!){Δ⁴f(−1) + Δ⁵f(−1)} + ...
= f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + (k(k − 1)/2){1 + (k − 2)/3}Δ³f(−1) + (k(k − 1)(k − 2)/6){1 + (k − 3)/4}Δ⁴f(−1) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁵f(−1) + ...
= f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−1) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁵f(−1) + ...    (8.3)
But, Δ⁵f(−2) = Δ⁴f(−1) − Δ⁴f(−2), so that Δ⁴f(−1) = Δ⁴f(−2) + Δ⁵f(−2). Substituting,
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−2) + ...
This is called Gauss's forward difference formula.
Note: This formula is applicable when k lies between 0 and 1/2.
Example 8.1: Find the value of f(41) using Gauss’s forward formula from the
following data:
x: 30 35 40 45 50
f ( x) : 3678.2 2995.1 2400.1 1876.2 1416.3
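The value f(41) can be cross-checked independently: any interpolation formula built on the same five nodes represents the same quartic polynomial, so a direct Lagrange evaluation must agree with the Gauss-forward result. A small illustrative sketch (not from the book):

```python
def lagrange(xs, fs, x):
    """Evaluate the interpolating polynomial of (xs, fs) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = fs[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs = [30, 35, 40, 45, 50]
ys = [3678.2, 2995.1, 2400.1, 1876.2, 1416.3]
val = lagrange(xs, ys, 41)
print(round(val, 1))   # 2290.0
```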
( k 1) k ( k 1) ( k 2) 4
+ ' y2 ...
4!
Self-Instructional
132 Material
The central difference table is shown below: Central Differences
Interpolations Formulae
Central Difference Table
0.2 (0.2 1)
Now, f(41) = 2400.1 + (0.2) × (–523.9) + u (71.1)
2!
k (k 1) ( k 2) 3
' f (a ) ... (8.4)
3!
Putting a = 0, h = 1, you get
k ( k 1) 2 k ( k 1) ( k 2) 3
f(k) = f (0) k 'f (0) ' f (0) ' f (0)
2! 3!
k (k 1) (k 2) (k 3) 4
' f (0) ... (8.5)
4!
Self-Instructional
Material 133
Now, Δf(0) = Δf(−1) + Δ²f(−1), Δ²f(0) = Δ²f(−1) + Δ³f(−1), and so on. Substituting these in Equation (8.5) and collecting terms, you get
f(k) = f(0) + kΔf(−1) + k{1 + (k − 1)/2}Δ²f(−1) + ...
= f(0) + kΔf(−1) + ((k + 1)k/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−1) + ...    (8.7)
Again, substituting Δ³f(−1) = Δ³f(−2) + Δ⁴f(−2) and Δ⁴f(−1) = Δ⁴f(−2) + Δ⁵f(−2), and collecting terms,
Thus, Gauss's backward formula is,
y = y0 + kΔy₋₁ + ((k + 1)k/2!)Δ²y₋₁ + ((k + 1)k(k − 1)/3!)Δ³y₋₂ + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴y₋₂ + ((k + 2)(k + 1)k(k − 1)(k − 2)/5!)Δ⁵y₋₃ + ...
Example 8.2: Apply Gauss’s backward formula to compute sin 45° from the
following table:
θ°:     20      30      40      50      60      70      80
sin θ: 0.34202  0.502  0.64279 0.76604 0.86603 0.93969 0.98481
Here, x0 = 40, x = 45, and k = (45 − 40)/10 = 0.5.
Thus, by Gauss’s backward formula, you will have
y(x) = y0 + kΔy₋₁ + ((k + 1)k/2!)Δ²y₋₁ + ((k + 1)k(k − 1)/3!)Δ³y₋₂ + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴y₋₂ + ((k + 2)(k + 1)k(k − 1)(k − 2)/5!)Δ⁵y₋₃ + ...
You have,
y(45) = 0.64279 + 0.5 × 0.14079 + ((1.5 × 0.5)/2!) × (−0.01754) + ((1.5)(0.5)(−0.5)/3!) × 0.00165 + ((2.5)(1.5)(0.5)(−0.5)/4!) × (−0.00737)
= 0.64279 + 0.070395 − 0.0065775 − 0.000103125 + 0.00028789
= 0.70679
y = y0 + k(Δy₋₁ + Δy0)/2 + (k²/2!)Δ²y₋₁ + (k(k² − 1²)/3!)(Δ³y₋₂ + Δ³y₋₁)/2 + (k²(k² − 1²)/4!)Δ⁴y₋₂ + ...
This is called Stirling's formula. It is useful when |k| ≤ 1/2, i.e., −1/2 ≤ k ≤ 1/2. It gives the best estimate when −1/4 < k < 1/4.
Example 8.3: Use Stirling's formula to evaluate f(1.22), given the tabulated values around x = 1.2.
Taking x0 = 1.2 and h = 0.1, k = (x − 1.2)/0.1 = (1.22 − 1.2)/0.1 = 0.2
Using Stirling's formula,
y(1.22) = 0.932 + (0.2) × [(0.031 + 0.041)/2] + ((0.2)²/2!) × (−0.01) + ((0.2)[(0.2)² − 1²]/3!) × [(−0.001 + 0.001)/2] + ((0.2)²[(0.2)² − 1²]/4!) × (0.002)
= 0.932 + 0.0072 − 0.0002 + 0 − 0.0000032
= 0.9390 (approx.)
... + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−2) + ...    (8.11)
Gauss's backward formula is,
f(k) = f(0) + kΔf(−1) + ((k + 1)k/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−2) + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴f(−2) + ...    (8.12)
In Equation (8.12), shift the origin to 1 by replacing k by k − 1 and adding 1 to each argument 0, −1, −2, ...; you get
f(k) = f(1) + (k − 1)Δf(0) + (k(k − 1)/2!)Δ²f(0) + (k(k − 1)(k − 2)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−1) + ...    (8.13)
By taking the mean of Equations (8.11) and (8.13), you get Bessel's formula:
f(k) = {f(0) + f(1)}/2 + (k − 1/2)Δf(0) + (k(k − 1)/2!) × {Δ²f(−1) + Δ²f(0)}/2 + ((k − 1/2)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!) × {Δ⁴f(−2) + Δ⁴f(−1)}/2 + ...    (8.14)
Example 8.4: Using Bessel’s formula obtain y26. Given that y20 = 2854,
y24 = 3162, y28 = 3544 and y32 = 3992.
Solution: With x0 = 24 and k = (x − 24)/4, the central difference table is shown below:
Central Difference Table for Obtaining y26

x     k     y      Δy    Δ²y   Δ³y
20   −1   2854
                  308
24    0   3162           74
                  382           −8
28    1   3544           66
                  448
32    2   3992

Using Bessel's formula,
y26 = 3162 + 0.5 × 382 + ((0.5)(0.5 − 1)/2!) × ((74 + 66)/2) + 0 × (−8)
= 3162 + 191 + (−8.75)
= 3344.25
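Bessel's value can be cross-checked against the unique cubic through the four points, since every interpolation formula on the same nodes yields the same polynomial (at k = 0.5 the third-difference term of Bessel's formula vanishes anyway). A small illustrative sketch:

```python
def lagrange(xs, ys, x):
    """Evaluate the interpolating polynomial of (xs, ys) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

val = lagrange([20, 24, 28, 32], [2854, 3162, 3544, 3992], 26)
print(val)   # 3344.25
```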
f(x) = A0(x − x1)(x − x2) ... (x − xn) + A1(x − x0)(x − x2) ... (x − xn) + ... + An(x − x0)(x − x1) ... (x − xn−1)    (8.15)
Putting x = x0, f(x0) = A0(x0 − x1)(x0 − x2) ... (x0 − xn), so that
A0 = f(x0) / [(x0 − x1)(x0 − x2) ... (x0 − xn)]    (8.16)
Putting x = x1, f(x1) = A1(x1 − x0)(x1 − x2) ... (x1 − xn)
∴ A1 = f(x1) / [(x1 − x0)(x1 − x2) ... (x1 − xn)]    (8.17)
⋮
Similarly, An = f(xn) / [(xn − x0)(xn − x1) ... (xn − xn−1)]    (8.18)
NOTES
Substituting the values of A0, A1, ..., An in Equation (8.15), you will get,
( x x1 ) ( x x2 ) ... ( x xn )
f ( x) f ( x0 )
( x0 x1 ) ( x0 x2 ) ... ( x0 xn )
( x x0 ) ( x x2 ) ... ( x xn )
f ( x1 ) ..(8.19)
( x1 x0 ) ( x1 x2 ) ... ( x1 xn )
( x x0 ) ( x x1 ) ... ( x xn 1 )
... f ( xn )
( xn x0 ) ( xn x1 ) ... ( xn xn 1 )
n
Where, I(x) = ( x xr )
r 0
ªd º
And Ic( xr ) = « [I( x)]»
¬ dx ¼x xr
Proof:
You have the Lagrange’s formula,
n ( x x0 ) ( x x1) ... ( x xr 1) ( x xr 1 ) ... ( x xn )
Pn(x) = ¦ (x x0 ) ( xr x1) ... ( xr xr 1 ) ( xr xr 1 ) ... ( xr xn )
f ( xr )
Self-Instructional r 0 r
140 Material
Central Differences
n I( x) ½ ° f ( xr ) °½
= ¦®
Interpolations Formulae
¾® ¾
r 0 ¯ x xr ¿°¯ ( xr x0 ) ( xr x1 ) ... ( xr xr 1) ( xr xr 1) ... ( xr xn ) °¿
...(8.21)
NOTES
Now,
n
I( x ) = ( x xr )
r 0
= ( x x0 ) ( x x1 ) ... ( x xr 1 ) ( x xr ) ( x xr 1 ) ... ( x xn )
? Ic( x ) = ( x x1 ) ( x x2 ) ... ( x xr ) ... ( x xn )
+ ( x x0 ) ( x x2 ) ... ( x xr ) ... ( x xn ) ...
+ ( x x0 ) ( x x1 ) ... ( x xr 1 ) ... ( x xr 1 ) ... ( x xn ) ...
+ ( x x0 ) ( x x1 ) ... ( x xr ) ... ( x xn 1 )
Ic( xr ) = Ic( x) x xr
Hence proved.
Example 8.5: Find the unique polynomial P(x) of degree 2 such that,
P(1) = 1, P(3) = 27, P(4) = 64
Use the Lagrange's method of interpolation.

Solution: Here, x0 = 1, x1 = 3, x2 = 4 and f(x0) = 1, f(x1) = 27, f(x2) = 64.

Lagrange's interpolation formula is,

$$P(x) = \frac{(x-x_1)(x-x_2)}{(x_0-x_1)(x_0-x_2)}f(x_0) + \frac{(x-x_0)(x-x_2)}{(x_1-x_0)(x_1-x_2)}f(x_1) + \frac{(x-x_0)(x-x_1)}{(x_2-x_0)(x_2-x_1)}f(x_2)$$

$$= \frac{(x-3)(x-4)}{(1-3)(1-4)}(1) + \frac{(x-1)(x-4)}{(3-1)(3-4)}(27) + \frac{(x-1)(x-3)}{(4-1)(4-3)}(64)$$

$$= \frac16(x^2-7x+12) - \frac{27}{2}(x^2-5x+4) + \frac{64}{3}(x^2-4x+3) = 8x^2 - 19x + 12$$

Hence, the required unique polynomial is,
P(x) = 8x² − 19x + 12
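The result of Example 8.5 can be verified with a small Python sketch of Lagrange's formula; the function name is illustrative.

```python
# Lagrange interpolation through the nodes of Example 8.5; the result
# must agree with P(x) = 8x^2 - 19x + 12 at every x.
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs, ys = [1, 3, 4], [1, 27, 64]
print(lagrange(xs, ys, 2))  # 8*4 - 19*2 + 12 = 6
```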
Hugh Everett III was an American physicist who first proposed the Many-Worlds Interpretation (MWI) of quantum physics, which he termed his 'Relative State' formulation. In contrast to the then-dominant Copenhagen interpretation, the MWI postulates that the wave function governed by the Schrödinger equation never collapses and that all possibilities of a quantum superposition are objectively real. (The interpolation formula discussed below, however, is named after an earlier physicist, Joseph David Everett.)
Everett’s interpolation formula is a formula for estimating the value of a
function at an intermediate value of the independent variable, when its value is
known at a series of equally spaced points (such as those that appear in a table),
in terms of the central differences of the function of even order only and coefficients
which are polynomial functions of the independent variable.
Everett Interpolation Formula: Everett interpolation formula is a method
of writing the interpolation polynomial obtained from the Gauss interpolation formula
for forward interpolation at x = x0 + th with respect to the nodes x0, x0 + h, x0 –
h … x0 + nh, x0 – nh, x0 + (n + 1)h.
That is,

$$y_t = u\,y_0 + \frac{u(u^2-1^2)}{3!}\Delta^2 y_{-1} + \frac{u(u^2-1^2)(u^2-2^2)}{5!}\Delta^4 y_{-2} + \cdots + t\,y_1 + \frac{t(t^2-1^2)}{3!}\Delta^2 y_0 + \frac{t(t^2-1^2)(t^2-2^2)}{5!}\Delta^4 y_{-1} + \cdots$$

Where u = 1 − t.
Everett's Formula for Numerical Analysis

Example 8.6: Find the solution for the given data using Everett's formula at x = 1.15.

x     f(x)
1     1
1.1   1.049
1.2   1.096
1.3   1.140

Solution: The table of values for x and y is

x  1     1.1     1.2     1.3
y  1     1.049   1.096   1.14

Using Everett's method to find the solution, we have h = 1.1 − 1 = 0.1. Taking x0 = 1.1, then for x = 1.15,

t = (1.15 − 1.1)/0.1 = 0.5 and u = 1 − t = 0.5.

The second differences of the tabulated values are Δ²y−1 = −0.002 and Δ²y0 = −0.003, so, retaining terms up to second differences,

$$y(1.15) = 0.5\times1.049 + \frac{0.5(0.25-1)}{3!}(-0.002) + 0.5\times1.096 + \frac{0.5(0.25-1)}{3!}(-0.003)$$

= 0.5245 + 0.000125 + 0.548 + 0.0001875 = 1.0728 (approx.)
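A Python sketch of Everett's formula, truncated after second differences, reproduces the value of Example 8.6; the function name and argument layout are illustrative.

```python
# Everett's formula up to second differences, applied to Example 8.6
# (h = 0.1, x0 = 1.1, t = 0.5, u = 1 - t).
def everett(y0, y1, d2y_m1, d2y_0, t):
    """Even-order central difference interpolation between y0 and y1."""
    u = 1 - t
    return (u * y0 + u * (u**2 - 1) / 6.0 * d2y_m1
            + t * y1 + t * (t**2 - 1) / 6.0 * d2y_0)

# Second differences of y = 1, 1.049, 1.096, 1.140 are -0.002 and -0.003.
y_115 = everett(1.049, 1.096, -0.002, -0.003, t=0.5)
print(round(y_115, 4))  # 1.0728
```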
The Hermite polynomials arise in:

• Combinatorics, as an example of an Appell sequence, obeying the umbral calculus;
• Numerical analysis, as Gaussian quadrature;
• Physics, where they give rise to the eigenstates of the quantum harmonic oscillator;
• Systems theory, in connection with nonlinear operations on Gaussian noise;
• Random matrix theory, in Gaussian ensembles.
Hermite Interpolation Formula

Hermite interpolation formula is a form of writing the polynomial Hm of degree m that solves the problem of interpolating a function f and its derivatives at points x0, ..., xn, that is, satisfying the conditions,

$$H_m^{(k)}(x_j) = f^{(k)}(x_j), \quad k = 0, 1, \ldots, \alpha_j - 1,\ j = 0, 1, \ldots, n,$$

Where $m = \alpha_0 + \alpha_1 + \cdots + \alpha_n - 1$ and $\alpha_j$ is the number of conditions prescribed at the node $x_j$.
8.9 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. $$f(k) = f(0) + k\,\Delta f(0) + \frac{k(k-1)}{2!}\Delta^2 f(-1) + \frac{(k+1)k(k-1)}{3!}\Delta^3 f(-1) + \frac{(k+1)k(k-1)(k-2)}{4!}\Delta^4 f(-2) + \cdots$$

2. Gauss's backward formula is,

$$y = y_0 + k\,\Delta y_{-1} + \frac{k(k+1)}{2!}\Delta^2 y_{-1} + \frac{(k+1)k(k-1)}{3!}\Delta^3 y_{-2} + \frac{(k+2)(k+1)k(k-1)}{4!}\Delta^4 y_{-2} + \frac{(k+2)(k+1)k(k-1)(k-2)}{5!}\Delta^5 y_{-3} + \cdots$$

3. Stirling's formula is,

$$y = y_0 + k\,\frac{\Delta y_{-1}+\Delta y_0}{2} + \frac{k^2}{2!}\Delta^2 y_{-1} + \frac{k(k^2-1)}{3!}\,\frac{\Delta^3 y_{-2}+\Delta^3 y_{-1}}{2} + \frac{k^2(k^2-1)}{4!}\Delta^4 y_{-2} + \cdots$$

4. $$f(k) = \frac{f(0)+f(1)}{2} + \left(k-\frac12\right)\Delta f(0) + \frac{k(k-1)}{2!}\,\frac{\Delta^2 f(-1)+\Delta^2 f(0)}{2} + \frac{k(k-1)\left(k-\frac12\right)}{3!}\Delta^3 f(-1) + \frac{(k+1)k(k-1)(k-2)}{4!}\,\frac{\Delta^4 f(-2)+\Delta^4 f(-1)}{2} + \cdots$$

5. Lagrange's interpolation formula is,

$$f(x) = \sum_{r=0}^{n} \frac{(x-x_0)\cdots(x-x_{r-1})(x-x_{r+1})\cdots(x-x_n)}{(x_r-x_0)\cdots(x_r-x_{r-1})(x_r-x_{r+1})\cdots(x_r-x_n)}\,f(x_r)$$

6. The nth order Hermite polynomial is a polynomial of degree n. The probabilist's version Heₙ has leading coefficient 1, while the physicist's version Hₙ has leading coefficient 2ⁿ. In mathematics, the Hermite polynomials are defined as a classical orthogonal polynomial sequence.
8.10 SUMMARY

• Gauss's forward formula is

$$f(k) = f(0) + k\,\Delta f(0) + \frac{k(k-1)}{2!}\Delta^2 f(-1) + \frac{(k+1)k(k-1)}{3!}\Delta^3 f(-1) + \frac{(k+1)k(k-1)(k-2)}{4!}\Delta^4 f(-2) + \cdots$$

• Gauss's backward formula is

$$y = y_0 + k\,\Delta y_{-1} + \frac{k(k+1)}{2!}\Delta^2 y_{-1} + \frac{(k+1)k(k-1)}{3!}\Delta^3 y_{-2} + \frac{(k+2)(k+1)k(k-1)}{4!}\Delta^4 y_{-2} + \frac{(k+2)(k+1)k(k-1)(k-2)}{5!}\Delta^5 y_{-3} + \cdots$$
8.11 KEY WORDS

• Gauss's formula: Gauss's formula alternately adds new points at the left and right ends, thereby keeping the set of points centered near the same place (near the evaluated point).
• Stirling's formula: Stirling's formula remains centered about a particular data point, for use when the evaluated point is nearer to a data point than to a middle of two data points.
• Bessel's formula: Bessel's formula remains centered about a particular middle between two data points, for use when the evaluated point is nearer to a middle than to a data point.
• Lagrange formula: The Lagrange formula is at its best when all the interpolation will be done at one x value, with only the data points' y values varying from one problem to another, and when it is known, from past experience, how many terms are needed for sufficient accuracy.
8.12 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define Gauss's formula.
2. State Gauss's backward difference formula.
3. Explain Stirling's formula.
4. Elaborate on Bessel's formula.
5. Interpret Lagrange's interpolation formula.
6. Analyse Everett's formula.
7. Define Hermite's formula.

Long-Answer Questions
1. Briefly discuss Gauss's forward difference formula.
2. Explain Gauss's backward difference formula.
3. Analyse Stirling's formula.
4. Discuss Bessel's formula.
5. What is Lagrange's interpolation formula?
6. Define Everett's formula.
7. Explain Hermite's formula.
8.13 FURTHER READINGS
BLOCK - III
NUMERICAL DIFFERENTIATION AND INTEGRATION

UNIT 9 NUMERICAL DIFFERENTIATION
Structure
9.0 Introduction
9.1 Objectives
9.2 Numerical Differentiation
9.2.1 Differentiation Using Newton’s Forward Difference Interpolation Formula
9.2.2 Differentiation Using Newton’s Backward Difference Interpolation
Formula
9.3 Answers to Check Your Progress Questions
9.4 Summary
9.5 Key Words
9.6 Self Assessment Questions and Exercises
9.7 Further Readings
9.0 INTRODUCTION
9.1 OBJECTIVES
Where $u = \dfrac{x-x_0}{h}$.

The derivative $\dfrac{dy}{dx}$ can be evaluated as,

(9.1)

(9.2)

Where $v = \dfrac{x-x_n}{h}$.

The derivatives $\dfrac{dy}{dx}$ and $\dfrac{d^2y}{dx^2}$, obtained by differentiating the above formula, are given by,

$$\frac{dy}{dx} = \frac1h\left[\nabla y_n + \frac{2v+1}{2}\nabla^2 y_n + \frac{3v^2+6v+2}{6}\nabla^3 y_n + \frac{2v^3+9v^2+11v+3}{12}\nabla^4 y_n + \cdots\right] \qquad (9.5)$$

$$\frac{d^2y}{dx^2} = \frac1{h^2}\left[\nabla^2 y_n + (v+1)\nabla^3 y_n + \frac{6v^2+18v+11}{12}\nabla^4 y_n + \cdots\right] \qquad (9.6)$$

For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − xn)/h and using the above formulae. At the tabulated point xn, the derivatives are given by,

$$y'(x_n) = \frac1h\left[\nabla y_n + \frac12\nabla^2 y_n + \frac13\nabla^3 y_n + \frac14\nabla^4 y_n + \cdots\right] \qquad (9.7)$$

$$y''(x_n) = \frac1{h^2}\left[\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\nabla^4 y_n + \frac56\nabla^5 y_n + \cdots\right] \qquad (9.8)$$
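Formulas (9.7) and (9.8) can be sketched in Python as follows; the helper builds the backward differences of the last tabulated entry, and the test data y = x² make the differences terminate so the results are exact. The function name is illustrative.

```python
# Derivatives at the last tabulated point xn via formulas (9.7)-(9.8).
def derivatives_at_xn(y, h):
    """Return (y'(xn), y''(xn)) from equally spaced values y."""
    diffs = []
    row = list(y)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[-1])          # backward difference at x_n
    d = diffs + [0.0] * 5              # pad missing higher differences
    d1 = (d[0] + d[1] / 2 + d[2] / 3 + d[3] / 4) / h           # (9.7)
    d2 = (d[1] + d[2] + 11 * d[3] / 12 + 5 * d[4] / 6) / h**2  # (9.8)
    return d1, d2

d1, d2 = derivatives_at_xn([0, 1, 4, 9, 16], h=1.0)
print(d1, d2)  # 8.0 2.0  (y = x^2 at x = 4: y' = 8, y'' = 2)
```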
Example 9.1: Compute the values of f′(2.1), f″(2.1), f′(2.0) and f″(2.0) when f(x) is not known explicitly, but the following table of values is given:

x     f(x)
2.0   0.69315
2.2   0.78846
2.4   0.87547

Solution: Since the points are equally spaced, we form the finite difference table.

x     f(x)      Δf(x)     Δ²f(x)
2.0   0.69315
                0.09531
2.2   0.78846             −0.00830
                0.08701
2.4   0.87547

$$f'(2.0) = \frac{1}{0.2}\left[\Delta f_0 - \frac12\Delta^2 f_0\right] = \frac{1}{0.2}\left[0.09531 - \frac12(-0.00830)\right] = \frac{0.09946}{0.2} = 0.4973$$

$$f''(2.0) = \frac{1}{(0.2)^2}\times(-0.00830) = -0.2075 \approx -0.21$$

For x = 2.1, u = (2.1 − 2.0)/0.2 = 0.5, so the term in Δ²f0 drops out of the derivative formula:

$$f'(2.1) = \frac{1}{0.2}\left[\Delta f_0 + \frac{2u-1}{2}\Delta^2 f_0\right] = \frac{0.09531}{0.2} = 0.4766$$

$$f''(2.1) = \frac{1}{(0.2)^2}\times(-0.00830) = -0.2075$$
Example 9.2: For the function f(x) whose values are given in the table below, compute the values of f′(1), f″(1), f′(5.0), f″(5.0).

x     1       2       3       4       5       6
f(x)  7.4036  7.7815  8.1291  8.4510  8.7506  9.0309

Solution: Since f(x) is known at equally spaced points, we form the finite difference table to be used in the differentiation formulae based on Newton's interpolating polynomial.

x   f(x)     Δf(x)    Δ²f(x)    Δ³f(x)   Δ⁴f(x)   Δ⁵f(x)
1   7.4036
             0.3779
2   7.7815            −0.0303
             0.3476              0.0046
3   8.1291            −0.0257             −0.0012
             0.3219              0.0034             0.0008
4   8.4510            −0.0223             −0.0004
             0.2996              0.0030
5   8.7506            −0.0193
             0.2803
6   9.0309

To calculate f′(1) and f″(1), we use the derivative formulae based on Newton's forward difference interpolation at the tabulated point, given by,

$$f'(x_0) = \frac1h\left[\Delta f_0 - \frac12\Delta^2 f_0 + \frac13\Delta^3 f_0 - \frac14\Delta^4 f_0 + \frac15\Delta^5 f_0\right]$$

$$f''(x_0) = \frac1{h^2}\left[\Delta^2 f_0 - \Delta^3 f_0 + \frac{11}{12}\Delta^4 f_0 - \frac56\Delta^5 f_0\right]$$

$$\therefore\ f'(1) = \frac11\left[0.3779 - \frac12(-0.0303) + \frac13(0.0046) - \frac14(-0.0012) + \frac15(0.0008)\right] = 0.3950$$

$$f''(1) = \left[-0.0303 - 0.0046 + \frac{11}{12}(-0.0012) - \frac56(0.0008)\right] = -0.0367$$

Similarly, for evaluating f′(5.0) and f″(5.0), we use the following formulae based on backward differences:

$$f'(x_n) = \frac1h\left[\nabla f_n + \frac12\nabla^2 f_n + \frac13\nabla^3 f_n + \frac14\nabla^4 f_n + \frac15\nabla^5 f_n\right]$$

$$f''(x_n) = \frac1{h^2}\left[\nabla^2 f_n + \nabla^3 f_n + \frac{11}{12}\nabla^4 f_n + \frac56\nabla^5 f_n\right]$$

With the backward differences at x = 5, namely ∇f5 = 0.2996, ∇²f5 = −0.0223, ∇³f5 = 0.0034, ∇⁴f5 = −0.0012,

$$f'(5) = \left[0.2996 + \frac12(-0.0223) + \frac13(0.0034) + \frac14(-0.0012)\right] = 0.2893$$

$$f''(5) = \left[-0.0223 + 0.0034 + \frac{11}{12}(-0.0012)\right] = -0.0200$$
Example 9.3: Compute the values of y′(0), y″(0.0), y′(0.02) and y″(0.02) for the function y = f(x) given by the following tabular values:

$$y'(0.02) = \frac1h\left[\Delta y_0 + \frac{2u-1}{2}\Delta^2 y_0 + \frac{3u^2-6u+2}{6}\Delta^3 y_0 + \frac{2u^3-9u^2+11u-3}{12}\Delta^4 y_0\right]$$

$$y''(0.02) = \frac1{h^2}\left[\Delta^2 y_0 + (u-1)\Delta^3 y_0 + \frac{6u^2-18u+11}{12}\Delta^4 y_0\right]$$

With h = 0.05 and u = 0.4,

$$\therefore\ y'(0.02) = \frac1{0.05}\left[0.10017 + \frac{2\times0.4-1}{2}\times0.00100 + \frac{3\times(0.4)^2-6\times0.4+2}{6}\times0.00101 + \frac{2\times(0.4)^3-9\times(0.4)^2+11\times0.4-3}{12}\times0.00003\right] = 4.00028$$

$$y''(0.02) = \frac1{(0.05)^2}\left[0.00100 + (0.4-1)\times0.00101 + \frac{6\times(0.4)^2-18\times0.4+11}{12}\times0.00003\right] = 0.800$$
For evaluating f′(6.0) we use the derivative formula based on Newton's forward difference interpolation at the tabulated point,

$$f'(x_0) = \frac1h\left[\Delta f_0 - \frac12\Delta^2 f_0 + \frac13\Delta^3 f_0\right]$$

$$\therefore\ f'(6.0) = \frac1{0.1}\left[0.0248 - \frac12(-0.0023) + \frac13(-0.0003)\right] = 10[0.0248 + 0.00115 - 0.0001] = 0.2585$$

For evaluating f″(6.3), we use the formula obtained by differentiating Newton's backward difference interpolation formula. It is given by,

$$f''(x_n) = \frac1{h^2}\left[\nabla^2 f_n + \nabla^3 f_n\right]$$

$$\therefore\ f''(6.3) = \frac1{(0.1)^2}[0.0026 + 0.0003] = 0.29$$
Example 9.5: Compute the values of y′(1.00) and y″(1.00) using suitable numerical differentiation formulae on the following table of values of x and y:

x     1.00     1.05     1.10     1.15     1.20
y     1.00000  1.02470  1.04881  1.07238  1.09544

Solution: For computing the derivatives, we use the formulae derived on differentiating Newton's forward difference interpolation formula, given by

$$y'(x_0) = \frac1h\left[\Delta y_0 - \frac12\Delta^2 y_0 + \frac13\Delta^3 y_0 - \frac14\Delta^4 y_0 + \cdots\right]$$

$$y''(x_0) = \frac1{h^2}\left[\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \cdots\right]$$

Now, we form the finite difference table.

x     y        Δy       Δ²y       Δ³y      Δ⁴y
1.00  1.00000
               0.02470
1.05  1.02470           −0.00059
               0.02411             0.00005
1.10  1.04881           −0.00054            −0.00002
               0.02357             0.00003
1.15  1.07238           −0.00051
               0.02306
1.20  1.09544

With h = 0.05,

$$y'(1.00) = \frac1{0.05}\left[0.02470 - \frac12(-0.00059) + \frac13(0.00005) - \frac14(-0.00002)\right] = 0.5003 \text{ (approx.)}$$

$$y''(1.00) = \frac1{(0.05)^2}\left[-0.00059 - 0.00005 + \frac{11}{12}(-0.00002)\right] = -0.2633 \text{ (approx.)}$$
Example 9.6: Find f′(x) and hence f′(0.5) for the function f(x) given by the following data:

x     0  1  2   3
f(x)  1  3  15  40

Solution: Since the values of x are equally spaced, we use Newton's forward difference interpolating polynomial for finding f′(x) and f′(0.5). We first form the finite difference table as given below:

x   f(x)   Δf(x)   Δ²f(x)   Δ³f(x)
0   1
           2
1   3              10
           12                3
2   15             13
           25
3   40

Taking $x_0 = 0$, we have $u = \dfrac{x-x_0}{h} = x$. Thus the Newton's forward difference interpolation gives,

$$f \approx f_0 + u\,\Delta f_0 + \frac{u(u-1)}{2!}\Delta^2 f_0 + \frac{u(u-1)(u-2)}{3!}\Delta^3 f_0$$

$$\text{i.e., } f(x) \approx 1 + 2x + \frac{x(x-1)}{2}\times 10 + \frac{x(x-1)(x-2)}{6}\times 3$$

$$\text{Or, } f(x) = 1 - 2x + \frac72 x^2 + \frac12 x^3$$

$$\therefore\ f'(x) = -2 + 7x + \frac32 x^2$$

$$\text{And, } f'(0.5) = -2 + 7\times0.5 + \frac32\times(0.5)^2 = 1.875$$
Example 9.7: The population of a city is given in the following table. Find the rate of growth of the population in the year 2001 and in 1995.

Solution: Since the rate of growth of the population is dy/dx, we have to compute dy/dx at x = 2001 and at x = 1995. For this we consider the formula for the derivative on approximating y by the Newton's backward difference interpolation, given by,

$$\frac{dy}{dx} = \frac1h\left[\nabla y_n + \frac{2u+1}{2}\nabla^2 y_n + \frac{3u^2+6u+2}{6}\nabla^3 y_n + \frac{2u^3+9u^2+11u+3}{12}\nabla^4 y_n + \cdots\right]$$

Where $u = \dfrac{x-x_n}{h}$.

For this we construct the finite difference table as given below:

For x = 2001, $u = \dfrac{x-x_n}{h} = 0$

$$\therefore\ \left(\frac{dy}{dx}\right)_{2001} = \frac1{10}\left[29.09 + \frac12\times5.48 + \frac13\times1.02 + \frac14\times(-4.47)\right] = 3.105$$

For x = 1995, $u = \dfrac{1995-1991}{10} = 0.4$
2. Newton's forward difference interpolation formula is,

$$y = y_0 + u\,\Delta y_0 + \frac{u(u-1)}{2!}\Delta^2 y_0 + \frac{u(u-1)(u-2)}{3!}\Delta^3 y_0 + \cdots$$

Where $u = \dfrac{x-x_0}{h}$.

3. Newton's backward difference interpolation formula is,

$$y = y_n + v\,\nabla y_n + \frac{v(v+1)}{2!}\nabla^2 y_n + \frac{v(v+1)(v+2)}{3!}\nabla^3 y_n + \cdots$$

Where $v = \dfrac{x-x_n}{h}$.

9.4 SUMMARY

• At the tabulated point x0, the value of u is zero and the formulae for the derivatives are given by,

$$y'(x_0) = \frac1h\left[\Delta y_0 - \frac12\Delta^2 y_0 + \frac13\Delta^3 y_0 - \cdots\right], \qquad y''(x_0) = \frac1{h^2}\left[\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \cdots\right]$$

• For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − xn)/h and using the backward difference formulae.
• For computing the derivatives at a point near the middle of the table, the derivatives of the central difference interpolation formulae are used.
• If the arguments of the table are unequally spaced, then the derivatives of the Lagrange's interpolating polynomial are used for computing the derivatives of the function.
Short-Answer Questions
1. Define the term numerical differentiation.
2. Give the differentiation formula for Newton’s forward difference interpolation.
x     0    1    2    3
f(x)  1.6  3.8  8.2  15.4

3. Use suitable formulae to compute y′(1.4) and y″(1.4) for the function y = f(x), given by the following tabular values:

x  1.4     1.8     2.2     2.6     3.0
y  0.9854  0.9738  0.8085  0.5155  0.1411

4. Compute dy/dx and d²y/dx² for x = 1 where the function y = f(x) is given by the following table:

x  1  2  3   4   5    6
y  1  8  27  64  125  216
5. A rod is rotating in a plane about one of its ends. The following table gives the angle θ (in radians) through which the rod has turned for different values of time t seconds. Find its angular velocity dθ/dt and angular acceleration d²θ/dt² at t = 1.0.
6. Find dy/dx and d²y/dx² at x = 1 and at x = 3 for the function y = f(x), whose values in [1, 6] are given in the following table:

x  1       2       3       4       5       6
y  2.7183  3.3210  4.0552  4.9530  6.0496  7.3891

7. Find dy/dx and d²y/dx² at x = 0.96 and at x = 1.04 for the function y = f(x) given in the following table:

x  0.96    0.98    1.0     1.02    1.04
y  0.7825  0.7739  0.7651  0.7563  0.7473
UNIT 10 NUMERICAL DIFFERENTIATION METHODS BASED ON FINITE DIFFERENCES
Structure
10.0 Introduction
10.1 Objectives
10.2 Numerical Differentiation
10.3 Methods Based on Finite Differences
10.4 Answers to Check Your Progress Questions
10.5 Summary
10.6 Key Words
10.7 Self Assessment Questions and Exercises
10.8 Further Readings
10.0 INTRODUCTION

Sir Isaac Newton proposed interpolation formulae for forward and backward interpolation. These are used for numerical differentiation. Such tools are widely used in the fields of engineering, statistics and other branches of mathematics. Computer science also uses these concepts to find nearly accurate solutions for differentiation.

Forward interpolations

Sir Isaac Newton proposed a formula for forward interpolation that bears his name. It is expressed as a finite difference identity in which an interpolated value between tabulated points is written in terms of the first value y0 and powers of the forward difference. The forward difference is denoted by Δ, which is known as the forward difference operator. A forward difference is the value obtained by subtracting the present value from the next value. If the initial value is y0 and the next value is y1, then Δy0 = y1 − y0. In a similar way Δ² is used: Δ²y0 = Δy1 − Δy0. Proceeding this way, you may write the first forward difference, second forward difference and likewise the nth forward difference as follows:

Δy0 = y1 − y0,  Δ²y0 = Δy1 − Δy0,  ...,  Δⁿy0 = Δⁿ⁻¹y1 − Δⁿ⁻¹y0

Taking this difference you may denote the next term(s), and thus,

y1 = y0 + Δy0, i.e., y1 = (1 + Δ)y0

Here, 1 + Δ represents a forward shift, and a separate operator E, known as the forward shift operator, is used: E = 1 + Δ. Now, in the light of this fact, you may write y1 = Ey0 and y2 = Ey1 = E(Ey0) = E²y0. Proceeding this way, you may write yn = Eⁿy0.
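The operators above can be demonstrated with a short Python sketch (the sample values y = x³ are illustrative): the difference table gives Δy0, Δ²y0, ..., and applying E = 1 + Δ to y0 reproduces the next tabulated value.

```python
# Forward difference table and the shift relation E y0 = (1 + Δ) y0 = y1.
def forward_differences(y):
    """Return [y, Δy, Δ²y, ...] as successive difference rows."""
    rows = [list(y)]
    while len(rows[-1]) > 1:
        prev = rows[-1]
        rows.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return rows

y = [1, 8, 27, 64]                 # y = x^3 at x = 1, 2, 3, 4
rows = forward_differences(y)
dy0 = rows[1][0]                   # Δy0 = 7
# E y0 = (1 + Δ) y0 reproduces the next tabulated value:
print(y[0] + dy0)  # 8
```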
Backward interpolations

Just as there are forward difference operators for forward interpolation, there is a backward difference operator for backward difference interpolation. This is also credited to Newton. In forward differences you look to the next term, but in backward differences you look at the preceding term, i.e., the one earlier to it. Backward differences are denoted by the backward difference operator ∇ and are given as:

∇yn = yn − yn−1 and yn−1 = (1 − ∇)yn

Just as it was y0 in the forward difference, in the backward difference operator it is yn. Thus, ∇y1 = y1 − y0, ∇y2 = y2 − y1, ∇²y2 = ∇y2 − ∇y1, and proceeding this way you get, ∇ⁿyn = ∇ⁿ⁻¹yn − ∇ⁿ⁻¹yn−1.

Relation between difference operators

E = 1 + Δ and ∇ = 1 − E⁻¹

∇(Ey0) = ∇(y1) = y1 − y0

Thus, (1 − ∇)Ey0 = Ey0 − ∇(Ey0) = y1 − ∇(y1) = y1 − (y1 − y0) = y0

Or, (1 − ∇)(1 + Δ)y0 = y0, which is true for all the terms of y, i.e., y0, y1, y2, ..., yn.

Thus, (1 − ∇)(1 + Δ) = 1 and (1 − ∇)⁻¹ = (1 + Δ) = E

And also, Δ = (1 − ∇)⁻¹ − 1.
Central difference operator

The forward shift operator, if applied to a term, gives the next term. Let any term corresponding to the value of x be denoted as f(x) instead of y; with a very small increment h, when the value of x becomes x + h, it is denoted by f(x + h), the next term of y. Using the forward shift operator, the same can also be written as Ef(x) = f(x + h). You can also view the same as Eyn−1 = yn. If f(x) is the first term, then f(x + h) is the next term.

The central difference operator is defined as δf(x) = f(x + h/2) − f(x − h/2). This is known as the first central difference. Higher central differences can also be given as:

δ²f(x) = f(x + h) − 2f(x) + f(x − h)
And δⁿf(x) = δⁿ⁻¹f(x + h/2) − δⁿ⁻¹f(x − h/2)

In the following paragraphs, Newton's formulae for forward and backward interpolation and Stirling's and Bessel's central difference formulae are explained.
(1) Newton's forward difference interpolation formula

$$y = y_0 + k\,\Delta y_0 + \frac{k(k-1)}{2!}\Delta^2 y_0 + \frac{k(k-1)(k-2)}{3!}\Delta^3 y_0 + \cdots \qquad (10.1)$$

Where, $k = \dfrac{x-a}{h}$ ...(10.2)

Differentiating Equation (10.1) with respect to k, you get,

$$\frac{dy}{dk} = \Delta y_0 + \frac{2k-1}{2}\Delta^2 y_0 + \frac{3k^2-6k+2}{6}\Delta^3 y_0 + \cdots \qquad (10.3)$$

Differentiating Equation (10.2) with respect to x, you get,

$$\frac{dk}{dx} = \frac1h \qquad (10.4)$$

You know that,

$$\frac{dy}{dx} = \frac{dy}{dk}\cdot\frac{dk}{dx} = \frac1h\left[\Delta y_0 + \left(\frac{2k-1}{2}\right)\Delta^2 y_0 + \left(\frac{3k^2-6k+2}{6}\right)\Delta^3 y_0 + \cdots\right] \qquad (10.5)$$

Equation (10.5) provides the value of dy/dx at any x which is not tabulated. Equation (10.5) becomes simple for tabulated values of x, in particular when x = a and k = 0.

Putting k = 0 in Equation (10.5), you get,

$$\left(\frac{dy}{dx}\right)_{x=a} = \frac1h\left[\Delta y_0 - \frac12\Delta^2 y_0 + \frac13\Delta^3 y_0 - \frac14\Delta^4 y_0 + \frac15\Delta^5 y_0 - \cdots\right] \qquad (10.6)$$

Differentiating Equation (10.5) with respect to x, you get

$$\frac{d^2y}{dx^2} = \frac{d}{dx}\left(\frac{dy}{dx}\right) = \frac{d}{dk}\left(\frac{dy}{dx}\right)\frac{dk}{dx} = \frac1{h^2}\left[\Delta^2 y_0 + (k-1)\Delta^3 y_0 + \left(\frac{6k^2-18k+11}{12}\right)\Delta^4 y_0 + \cdots\right] \qquad (10.7)$$

Putting k = 0 in Equation (10.7), you get

$$\left(\frac{d^2y}{dx^2}\right)_{x=a} = \frac1{h^2}\left(\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \cdots\right) \qquad (10.8)$$

Similarly, you get

$$\left(\frac{d^3y}{dx^3}\right)_{x=a} = \frac1{h^3}\left(\Delta^3 y_0 - \frac32\Delta^4 y_0 + \cdots\right) \qquad (10.9)$$

And so on.

Aliter: You know that,

$$E = e^{hD},\quad\text{so}\quad 1 + \Delta = e^{hD}$$

$$\therefore\ hD = \log(1+\Delta) = \Delta - \frac{\Delta^2}{2} + \frac{\Delta^3}{3} - \frac{\Delta^4}{4} + \cdots$$

$$D = \frac1h\left[\Delta - \frac12\Delta^2 + \frac13\Delta^3 - \frac14\Delta^4 + \cdots\right]$$

Similarly,

$$D^2 = \frac1{h^2}\left(\Delta - \frac12\Delta^2 + \frac13\Delta^3 - \frac14\Delta^4 + \cdots\right)^2 = \frac1{h^2}\left(\Delta^2 - \Delta^3 + \frac{11}{12}\Delta^4 - \frac56\Delta^5 + \cdots\right)$$

And

$$D^3 = \frac1{h^3}\left(\Delta^3 - \frac32\Delta^4 + \cdots\right)$$
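The operator series $D = \frac1h(\Delta - \frac12\Delta^2 + \frac13\Delta^3 - \cdots)$ can be checked numerically in Python; the test function $f(x) = e^x$ at $x = 0$ (where $f'(0) = 1$) is an illustrative choice, not from the text.

```python
# Numerical check of D = (1/h)(Δ - Δ²/2 + Δ³/3 - Δ⁴/4 + Δ⁵/5 - ...)
# on f(x) = e^x at x = 0, where the exact derivative is 1.
import math

h = 0.1
f = [math.exp(i * h) for i in range(6)]   # f at x0, x0+h, ..., x0+5h

# Leading forward differences Δ^k f0
diffs = []
row = f[:]
for _ in range(5):
    row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    diffs.append(row[0])

# Truncated series for D f0, as in Equation (10.6)
d1 = (diffs[0] - diffs[1] / 2 + diffs[2] / 3
      - diffs[3] / 4 + diffs[4] / 5) / h
print(abs(d1 - 1.0) < 1e-4)  # True
```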
(2) Newton's backward difference interpolation formula

$$y = y_n + k\,\nabla y_n + \frac{k(k+1)}{2!}\nabla^2 y_n + \frac{k(k+1)(k+2)}{3!}\nabla^3 y_n + \cdots \qquad (10.10)$$

Where, $k = \dfrac{x-x_n}{h}$ ...(10.11)

Differentiating Equation (10.10) with respect to k, you get,

$$\frac{dy}{dk} = \nabla y_n + \left(\frac{2k+1}{2}\right)\nabla^2 y_n + \left(\frac{3k^2+6k+2}{6}\right)\nabla^3 y_n + \cdots \qquad (10.12)$$

Differentiating Equation (10.11) with respect to x, you get,

$$\frac{dk}{dx} = \frac1h \qquad (10.13)$$

Now,

$$\frac{dy}{dx} = \frac{dy}{dk}\cdot\frac{dk}{dx} = \frac1h\left[\nabla y_n + \left(\frac{2k+1}{2}\right)\nabla^2 y_n + \left(\frac{3k^2+6k+2}{6}\right)\nabla^3 y_n + \cdots\right] \qquad (10.14)$$

Equation (10.14) provides the value of dy/dx at any x which is not tabulated. At x = xn, you have k = 0.

∴ Putting k = 0 in Equation (10.14), you get,

$$\left(\frac{dy}{dx}\right)_{x=x_n} = \frac1h\left(\nabla y_n + \frac12\nabla^2 y_n + \frac13\nabla^3 y_n + \frac14\nabla^4 y_n + \cdots\right) \qquad (10.15)$$

Differentiating Equation (10.14) with respect to x, you get,

$$\frac{d^2y}{dx^2} = \frac{d}{dk}\left(\frac{dy}{dx}\right)\frac{dk}{dx} = \frac1{h^2}\left[\nabla^2 y_n + (k+1)\nabla^3 y_n + \left(\frac{6k^2+18k+11}{12}\right)\nabla^4 y_n + \cdots\right] \qquad (10.16)$$

Putting k = 0 in Equation (10.16), you get,

$$\left(\frac{d^2y}{dx^2}\right)_{x=x_n} = \frac1{h^2}\left(\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\nabla^4 y_n + \cdots\right) \qquad (10.17)$$

Similarly, you get,

$$\left(\frac{d^3y}{dx^3}\right)_{x=x_n} = \frac1{h^3}\left(\nabla^3 y_n + \frac32\nabla^4 y_n + \cdots\right) \qquad (10.18)$$

And so on.
Formulae for computing higher derivatives may be obtained by successive differentiation.

Aliter: You know that,

$$E^{-1} = 1 - \nabla,\quad\text{so}\quad e^{-hD} = 1 - \nabla$$

$$\therefore\ hD = -\log(1-\nabla) = \nabla + \frac{\nabla^2}{2} + \frac{\nabla^3}{3} + \cdots$$

$$\cdots + \frac{(k+1)k(k-1)(k-2)\left(k-\frac12\right)}{5!}\Delta^5 y_{-2} + \frac{(k+2)(k+1)k(k-1)(k-2)(k-3)}{6!}\left(\frac{\Delta^6 y_{-3}+\Delta^6 y_{-2}}{2}\right) + \cdots \qquad (10.27)$$

Where, $k = \dfrac{x-a}{h}$ ...(10.28)
Differentiating Equation (10.27) with respect to k, you get,

$$\frac{dy}{dk} = \Delta y_0 + \left(\frac{2k-1}{2!}\right)\left(\frac{\Delta^2 y_{-1}+\Delta^2 y_0}{2}\right) + \left(\frac{3k^2-3k+\frac12}{3!}\right)\Delta^3 y_{-1} + \left(\frac{4k^3-6k^2-2k+2}{4!}\right)\left(\frac{\Delta^4 y_{-2}+\Delta^4 y_{-1}}{2}\right) + \left(\frac{5k^4-10k^3+5k-1}{5!}\right)\Delta^5 y_{-2} + \left(\frac{6k^5-15k^4-20k^3+45k^2+8k-12}{6!}\right)\left(\frac{\Delta^6 y_{-3}+\Delta^6 y_{-2}}{2}\right) + \cdots \qquad (10.29)$$

Differentiating Equation (10.28) with respect to x, you get,

$$\frac{dk}{dx} = \frac1h$$

Now,

$$\frac{dy}{dx} = \frac{dy}{dk}\cdot\frac{dk}{dx} = \frac1h\left[\Delta y_0 + \left(\frac{2k-1}{2!}\right)\left(\frac{\Delta^2 y_{-1}+\Delta^2 y_0}{2}\right) + \left(\frac{3k^2-3k+\frac12}{3!}\right)\Delta^3 y_{-1} + \cdots\right]$$
$$y''(x_n) \approx \frac{y_{n+1}-2y_n+y_{n-1}}{h^2} \qquad (10.39)$$

Substituting these in the differential equation, we have

$$2(y_{n+1}-2y_n+y_{n-1}) + p_n h(y_{n+1}-y_{n-1}) + 2h^2 q_n y_n = 2r_n h^2,$$

Where pn = p(xn), qn = q(xn), rn = r(xn) (10.40)

Rewriting the equation by regrouping we get,

$$(2-hp_n)y_{n-1} + (-4+2h^2q_n)y_n + (2+hp_n)y_{n+1} = 2r_n h^2 \qquad (10.41)$$

This equation is to be considered at each of the interior points, i.e., it is true for n = 1, 2, ..., N−1.

The boundary conditions of the problem are given by,

$$y_0 = \alpha,\quad y_N = \beta \qquad (10.42)$$

Introducing these conditions in the relevant equations and arranging them, we have the following system of linear equations in the (N−1) unknowns y1, y2, ..., y_{N−1}, with coefficient matrix

$$A = \begin{bmatrix} B_1 & C_1 & 0 & \cdots & 0 & 0\\ A_2 & B_2 & C_2 & \cdots & 0 & 0\\ 0 & A_3 & B_3 & \cdots & 0 & 0\\ \vdots & & & \ddots & & \vdots\\ 0 & 0 & 0 & \cdots & B_{N-2} & C_{N-2}\\ 0 & 0 & 0 & \cdots & A_{N-1} & B_{N-1}\end{bmatrix} \qquad (10.45)$$

Where $B_i = -4 + 2h^2 q_i$, i = 1, 2, ..., N−1
$C_i = 2 + hp_i$, i = 1, 2, ..., N−2 (10.46)
$A_i = 2 - hp_i$, i = 2, 3, ..., N−1

$b_1 = 2r_1 h^2 - (2 - hp_1)\alpha$
$b_i = 2r_i h^2$, for i = 2, 3, ..., N−2 (10.47)
$b_{N-1} = 2r_{N-1} h^2 - (2 + hp_{N-1})\beta$

The system of linear equations can be directly solved using suitable methods.
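The tridiagonal system (10.41)–(10.47) can be sketched in Python as follows; the solver uses the Thomas algorithm, and the test problem y″ = 2, y(0) = 0, y(1) = 1 (exact solution y = x²) is an illustrative choice for which the second-order scheme is exact.

```python
# Finite-difference solution of y'' + p(x) y' + q(x) y = r(x),
# y(a) = alpha, y(b) = beta, via the tridiagonal system (10.41).
def solve_bvp(p, q, r, a, b, alpha, beta, N):
    h = (b - a) / N
    xs = [a + i * h for i in range(N + 1)]
    # Coefficients from Equations (10.46)-(10.47)
    A = [2 - h * p(xs[i]) for i in range(1, N)]
    B = [-4 + 2 * h**2 * q(xs[i]) for i in range(1, N)]
    C = [2 + h * p(xs[i]) for i in range(1, N)]
    d = [2 * r(xs[i]) * h**2 for i in range(1, N)]
    d[0] -= A[0] * alpha
    d[-1] -= C[-1] * beta
    # Thomas algorithm (forward elimination, back substitution)
    for i in range(1, N - 1):
        m = A[i] / B[i - 1]
        B[i] -= m * C[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * (N - 1)
    y[-1] = d[-1] / B[-1]
    for i in range(N - 3, -1, -1):
        y[i] = (d[i] - C[i] * y[i + 1]) / B[i]
    return [alpha] + y + [beta]

ys = solve_bvp(lambda x: 0, lambda x: 0, lambda x: 2, 0.0, 1.0, 0.0, 1.0, 4)
print([round(v, 6) for v in ys])  # [0.0, 0.0625, 0.25, 0.5625, 1.0]
```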
Example 10.1: Compute the values of y(1.1) and y(1.2) on solving the following initial value problem, correct to three decimal places, using the Runge–Kutta method of order 4.

$$y'' + \frac{y'}{x} + y = 0,\quad\text{with } y(1) = 0.77,\ y'(1) = -0.44$$

Solution: We first rewrite the initial value problem in the form of a pair of first order equations,

$$y' = z,\quad z' = -\left(\frac{z}{x} + y\right)$$

With y(1) = 0.77 and z(1) = −0.44.

We now employ the Runge–Kutta method of order 4 with h = 0.1:

$$y(1.1) = y(1) + \frac16(k_1 + 2k_2 + 2k_3 + k_4),\qquad y'(1.1) = z(1.1) = z(1) + \frac16(l_1 + 2l_2 + 2l_3 + l_4)$$

k1 = 0.1 × (−0.44) = −0.044
l1 = 0.1 × [−(0.77 − 0.44/1)] = −0.033
k2 = 0.1 × (−0.44 − 0.033/2) = −0.04565
l2 = 0.1 × [−(0.748 − 0.4565/1.05)] = −0.031324
k3 = 0.1 × (−0.44 − 0.031324/2) = −0.045566
l3 = 0.1 × [−(0.747175 − 0.455662/1.05)] = −0.031321
k4 = 0.1 × (−0.44 − 0.031321) = −0.047132
l4 = 0.1 × [−(0.724434 − 0.471321/1.1)] = −0.029596

∴ y(1.1) = 0.77 + (1/6)[−0.044 + 2(−0.04565) + 2(−0.045566) − 0.047132] = 0.77 − 0.045594 = 0.724406 ≈ 0.724

y′(1.1) = −0.44 + (1/6)[−0.033 + 2(−0.031324) + 2(−0.031321) − 0.029596] = −0.44 − 0.031314 = −0.471314 ≈ −0.471
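The hand computation of Example 10.1 can be sketched as one step of a generic RK4 routine for a pair of first order equations; the function name is illustrative.

```python
# One RK4 step for the pair y' = z, z' = -(y + z/x) of Example 10.1.
def rk4_step(f, g, x, y, z, h):
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2 = h * f(x + h/2, y + k1/2, z + l1/2)
    l2 = h * g(x + h/2, y + k1/2, z + l1/2)
    k3 = h * f(x + h/2, y + k2/2, z + l2/2)
    l3 = h * g(x + h/2, y + k2/2, z + l2/2)
    k4 = h * f(x + h, y + k3, z + l3)
    l4 = h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6)

f = lambda x, y, z: z
g = lambda x, y, z: -(y + z / x)
y11, z11 = rk4_step(f, g, 1.0, 0.77, -0.44, 0.1)
print(round(y11, 4), round(z11, 4))  # 0.7244 -0.4713
```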
Example 10.2: Compute the solution of the following initial value problem for x = 0.2, using the Taylor series solution method of order 4.

$$\frac{d^2y}{dx^2} = y + x\frac{dy}{dx},\quad y(0) = 1,\ y'(0) = 0.$$

Solution: Given y″ = y + xy′, we put z = y′, so that

z′ = y + xz, y′ = z and y(0) = 1, z(0) = 0.

We solve for y and z by the Taylor series method of order 4. For this we first compute y″(0), y‴(0), y⁗(0), ...

We have, y″(0) = z′(0) = y(0) + 0 × z(0) = 1
y‴(0) = z″(0) = y′(0) + z(0) + 0 × z′(0) = 0
y⁗(0) = z‴(0) = y″(0) + 2z′(0) + 0 × z″(0) = 3
z⁗(0) = y‴(0) + 3z″(0) + 0 × z‴(0) = 0

By Taylor series of order 4, we have

$$y(0+x) = y(0) + x\,y'(0) + \frac{x^2}{2!}y''(0) + \frac{x^3}{3!}y'''(0) + \frac{x^4}{4!}y^{iv}(0)$$

$$\text{Or, } y(x) = 1 + \frac{x^2}{2!} + \frac{x^4}{4!}\times3$$

$$\therefore\ y(0.2) = 1 + \frac{(0.2)^2}{2!} + \frac{(0.2)^4}{8} = 1.0202$$

$$\text{Similarly, } y'(0.2) = z(0.2) = 0.2 + \frac{(0.2)^3}{3!}\times3 = 0.204$$
Example 10.3: Compute the solution of the following initial value problem for x = 0.2 by the fourth order Runge–Kutta method: $\dfrac{d^2y}{dx^2} = xy$, y(0) = 1, y′(0) = 1.

Solution: Given y″ = xy, we put y′ = z and obtain the simultaneous first order problem

y′ = z = f(x, y, z), z′ = xy = g(x, y, z), with y(0) = 1, z(0) = 1.

We use the Runge–Kutta 4th order formulae, with h = 0.2, to compute y(0.2) and y′(0.2), as given below.

k1 = h f(x0, y0, z0) = 0.2 × 1 = 0.2
l1 = h g(x0, y0, z0) = 0.2 × 0 = 0
k2 = h f(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × (1 + 0) = 0.2
l2 = h g(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × (0.2/2) × (1 + 0.2/2) = 0.022
k3 = h f(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 1.011 = 0.2022
l3 = h g(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 0.1 × 1.1 = 0.022
k4 = h f(x0 + h, y0 + k3, z0 + l3) = 0.2 × 1.022 = 0.2044
l4 = h g(x0 + h, y0 + k3, z0 + l3) = 0.2 × 0.2 × 1.2022 = 0.048088

$$y(0.2) = 1 + \frac16\big(0.2 + 2(0.2 + 0.2022) + 0.2044\big) = 1.2015$$

$$y'(0.2) = 1 + \frac16\big(0 + 2(0.022 + 0.022) + 0.048088\big) = 1.02268$$
Check Your Progress

1. What is forward interpolation in numerical differentiation?
2. Define backward interpolation.
3. Explain the relation between difference operators.
4. Illustrate the central difference operator.
5. Elaborate on the methods based on finite differences.
10.5 SUMMARY

• Sir Isaac Newton proposed interpolation formulae for forward and backward interpolation. These are used for numerical differentiation. Such tools are widely used in the fields of engineering, statistics and other branches of mathematics. Computer science also uses these concepts to find nearly accurate solutions for differentiation.
• Newton's forward interpolation formula is expressed as a finite difference identity in which an interpolated value between tabulated points is written using the first value y0 and powers of the forward difference. The forward difference is denoted by Δ, which is known as the forward difference operator.
• Just as there are forward difference operators for forward interpolation, there is a backward difference operator for backward difference interpolation. This is also credited to Newton.
• The central difference operator is defined as δf(x) = f(x + h/2) − f(x − h/2). This is known as the first central difference. Higher central differences can also be given as:
δ²f(x) = f(x + h) − 2f(x) + f(x − h)
And δⁿf(x) = δⁿ⁻¹f(x + h/2) − δⁿ⁻¹f(x − h/2)
• In the finite difference method of solving a boundary value problem, the derivatives appearing in the differential equation and boundary conditions, if necessary, are replaced by appropriate difference quotients.
10.7 SELF ASSESSMENT QUESTIONS AND EXERCISES
UNIT 11 NUMERICAL INTEGRATION
Structure
11.0 Introduction
11.1 Objectives
11.2 Numerical Integration
11.3 Trapezoidal Rule
11.4 Simpson’s 1/3 Rule
11.5 Simpson’s 3/8 Rule
11.6 Weddle’s Rule
11.7 Cotes Method
11.8 Answers to Check Your Progress Questions
11.9 Summary
11.10 Key Words
11.11 Self Assessment Questions and Exercises
11.12 Further Readings
11.0 INTRODUCTION
In numerical integration, Simpson's rules are several approximations for definite integrals, named after Thomas Simpson (1710–1761). The most basic of these rules is called Simpson's 1/3 rule, or just Simpson's rule. Simpson's 3/8 rule, also called Simpson's second rule, requires one more function evaluation inside the integration range, and is exact if f is a polynomial up to cubic degree. Simpson's 1/3 and 3/8 rules are two special cases of the closed Newton–Cotes formulas.

Weddle's rule is a method of integration, the Newton–Cotes formula with N = 6.

In this unit, you will study about numerical integration, the trapezoidal rule, Simpson's 1/3 rule, Simpson's 3/8 rule, Weddle's rule, and the Cotes method.
11.1 OBJECTIVES

11.2 NUMERICAL INTEGRATION

Consider the definite integral

$$\int_a^b f(x)\,dx \qquad (11.1)$$
11.3 TRAPEZOIDAL RULE

For evaluating the integral $\int_{x_0}^{x_n} f(x)\,dx$, we have to sum the integrals for each of the sub-intervals:

$$\int_{x_0}^{x_n} f(x)\,dx = \frac{h}{2}\left[f_0 + 2(f_1 + f_2 + \cdots + f_{n-1}) + f_n\right] \qquad (11.2)$$

Where $f_i = f(x_i)$, $x_i = x_0 + ih$.

Thus, we can write

(11.3)

Algorithm: Evaluation of $\int_a^b f(x)\,dx$ by trapezoidal rule.

Step 8: Compute I = h (S + (f(a) + f(b))/2)
Step 9: Output I, n
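The algorithm above (with S the sum of the interior ordinates accumulated in the earlier steps) can be sketched in Python; the example integrand is that of Example 11.2, which follows below in this unit.

```python
# Composite trapezoidal rule, Equation (11.2).
def trapezoidal(f, a, b, n):
    h = (b - a) / n
    s = sum(f(a + i * h) for i in range(1, n))      # interior ordinates S
    return h * (s + (f(a) + f(b)) / 2)              # Steps 8-9

T = trapezoidal(lambda x: 4 * x - 3 * x**2, 0.0, 1.0, 10)
print(round(T, 3))  # 0.995
```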
11.4 SIMPSON'S 1/3 RULE

$$\int_{x_0}^{x_2} f(x)\,dx = \frac{h}{3}(f_0 + 4f_1 + f_2) \qquad (11.4)$$

The error in this formula is $-\dfrac{h^5}{90}f^{iv}(\xi)$, $x_0 < \xi < x_2$. (11.5)

The geometrical interpretation of Simpson's one-third formula is that the integral represented by the area under the curve is approximated by the area under the parabola through the points (x0, f0), (x1, f1) and (x2, f2) shown in Figure 11.1.

Adding the contributions of the successive pairs of sub-intervals,

$$\int_a^b f(x)\,dx = \frac{h}{3}\left[(f_0+4f_1+f_2) + (f_2+4f_3+f_4) + (f_4+4f_5+f_6) + \cdots + (f_{2m-2}+4f_{2m-1}+f_{2m})\right]$$

$$\int_a^b f(x)\,dx = \frac{h}{3}\left[f_0 + 4(f_1+f_3+f_5+\cdots+f_{2m-1}) + 2(f_2+f_4+f_6+\cdots+f_{2m-2}) + f_{2m}\right] \qquad (11.6)$$

This is known as Simpson's one-third rule of numerical integration.

The error in this formula is given by the sum of the errors in each pair of intervals as,

$$E = -\frac{mh^5}{90}f^{iv}(\xi) = -\frac{(b-a)h^4}{180}f^{iv}(\xi) \qquad (11.7)$$
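Equation (11.6) translates into Python as follows; since Simpson's one-third rule is exact for polynomials up to degree three, it integrates the quadratic of Example 11.2 exactly.

```python
# Composite Simpson's one-third rule, Equation (11.6); n must be even.
def simpson_13(f, a, b, n):
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    odd = sum(f(a + i * h) for i in range(1, n, 2))
    even = sum(f(a + i * h) for i in range(2, n, 2))
    return h / 3 * (f(a) + 4 * odd + 2 * even + f(b))

S = simpson_13(lambda x: 4 * x - 3 * x**2, 0.0, 1.0, 10)
print(round(S, 6))  # 1.0
```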
3
ª u
2
1§u
3
u · 2
2
1§u
4
2· 3
º
'f 0 ¨ ¸' f 0 ¨ u u ¸' f 0 »
3
h «uf 0
«¬ 2 2 ¨© 3 2 ¸¹ 6 ¨© 4 ¸
¹ »¼
0
ª 9 9 2 3 3 º
h «3 y0 'y 0 ' y0 ' y0 »
¬ 2 4 8 ¼
ª 9 9 3 º (11.8)
h «3 y0 ( y1 y0 ) ( y 2 2 y1 y0 ) ( y3 3 y 2 3 y1 y 0 )»
¬ 2 4 8 ¼
x3
3h
³ f ( x) dx
x0
( y 3 y1 y3 )
8 0
5
The truncation error in this formula is .
(11.9)
Where h = (b–a)/(3m); for m = 1, 2,...
i.e., the interval (b–a) is divided into 3m number of sub-intervals.
The rule in Equation (11.9) can be rewritten for the whole interval as,

∫_a^b f(x) dx = (3h/8)[y_0 + y_{3m} + 3(y_1 + y_2 + y_4 + y_5 + ... + y_{3m−2} + y_{3m−1}) + 2(y_3 + y_6 + ... + y_{3m−3})]   (11.10)

The truncation error in Simpson's three-eighth rule is −((b − a)h⁴/80) f⁽⁴⁾(ξ), a < ξ < b.
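A hedged sketch of the composite three-eighth rule in Python (function and variable names are my own, not from the text):

```python
import math

def simpson38(f, a, b, m):
    """Composite Simpson's 3/8 rule over 3m sub-intervals of width h = (b-a)/(3m)."""
    n = 3 * m
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    s3 = sum(y[i] for i in range(3, n, 3))          # y3, y6, ..., y_{3m-3}: weight 2
    rest = sum(y[i] for i in range(1, n) if i % 3)  # remaining interior ordinates: weight 3
    return 3 * h / 8 * (y[0] + y[n] + 3 * rest + 2 * s3)

# check against Example 11.6's integral of 1/(1+x) over [0, 1]
val = simpson38(lambda x: 1 / (1 + x), 0.0, 1.0, 2)
print(round(val, 6))  # 0.693195, against log_e 2 = 0.693147
```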
In the Newton-Cotes formula with n = 6,

∫_{x_0}^{x_6} y dx = h[6y_0 + 18Δy_0 + 27Δ²y_0 + 24Δ³y_0 + (123/10)Δ⁴y_0 + (33/10)Δ⁵y_0 + (41/140)Δ⁶y_0]

This formula takes a very simple form if the last term (41/140) Δ⁶y_0 is replaced by

(42/140) Δ⁶y_0 = (3/10) Δ⁶y_0. Then the error in the formula will have an additional term

−(1/140) Δ⁶y_0. The above formula then becomes,

∫_{x_0}^{x_6} y dx = h[6y_0 + 18Δy_0 + 27Δ²y_0 + 24Δ³y_0 + (123/10)Δ⁴y_0 + (33/10)Δ⁵y_0 + (3/10)Δ⁶y_0]

∴ ∫_{x_0}^{x_6} y dx = (3h/10)[y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + y_6]   (11.11)
For the whole interval, with b − a = 6mh, adding the results for the successive groups of six sub-intervals,

∫_a^b f(x) dx = (3h/10)[y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + 2y_6 + 5y_7 + y_8 + 6y_9 + ... + 2y_{6m−6} + 5y_{6m−5} + y_{6m−4} + 6y_{6m−3} + y_{6m−2} + 5y_{6m−1} + y_{6m}]   (11.13)

i.e., ∫_a^b f(x) dx = (3h/10)[y_0 + y_{6m} + 5(y_1 + y_5 + y_7 + y_{11} + ... + y_{6m−5} + y_{6m−1}) + (y_2 + y_4 + y_8 + y_{10} + ... + y_{6m−4} + y_{6m−2}) + 6(y_3 + y_9 + ... + y_{6m−3}) + 2(y_6 + y_{12} + ... + y_{6m−6})]
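The weight pattern 1, 5, 1, 6, 1, 5 (with weight 2 at interior multiples of six) can be coded directly. A sketch, with illustrative names:

```python
import math

def weddle(f, a, b, m):
    """Weddle's rule over 6m sub-intervals of width h = (b-a)/(6m)."""
    n = 6 * m
    h = (b - a) / n
    y = [f(a + i * h) for i in range(n + 1)]
    w = [1, 5, 1, 6, 1, 5]  # weight pattern within each block of six
    total = y[0] + y[n]
    for i in range(1, n):
        total += (2 if i % 6 == 0 else w[i % 6]) * y[i]
    return 3 * h / 10 * total

# integral of 1/(1+x) over [0, 1] with six sub-intervals
val = weddle(lambda x: 1 / (1 + x), 0.0, 1.0, 1)
print(round(val, 6))  # 0.693149, against log_e 2 = 0.693147
```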
Exact value = 32/5 = 6.4

Error in the result by trapezoidal rule = 6.4 − 7.0672 = −0.6672

Error in the result by Simpson's one-third rule = 6.4 − 6.4230 = −0.0230
Example 11.2: Evaluate the integral ∫_0^1 (4x − 3x²) dx by taking n = 10 and using the following rules:
(i) Trapezoidal rule and (ii) Simpson’s one-third rule. Also compare them with
the exact value and find the error in each case.
Solution: We tabulate f(x) = 4x − 3x², for x = 0, 0.1, 0.2, ..., 1.0.
x 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
f ( x) 0.0 0.37 0.68 0.93 1.12 1.25 1.32 1.33 1.28 1.17 1.0
(i) Using trapezoidal rule, we have
∫_0^1 (4x − 3x²) dx ≈ (0.1/2)[0 + 2 (0.37 + 0.68 + 0.93 + 1.12 + 1.25 + 1.32 + 1.33 + 1.28 + 1.17) + 1.0]

= (0.1/2) × (18.90 + 1.0) = 0.995
Example 11.3: Evaluate ∫_0^1 e^(−x²) dx by (i) Simpson's one-third rule taking 10 sub-intervals and (ii) Trapezoidal rule.

Solution: We tabulate values of e^(−x²) for the 11 points x = 0, 0.1, 0.2, 0.3, ..., 1.0 as given below.
x      e^(−x²)
0.0 1.00000
0.1 0.990050
0.2 0.960789
0.3 0.913931
0.4 0.852144
0.5 0.778801
0.6 0.697676
0.7 0.612626
0.8 0.527292
0.9 0.444854
1.0 0.367879
Sums: f_0 + f_{10} = 1.367879; f_1 + f_3 + f_5 + f_7 + f_9 = 3.740262; f_2 + f_4 + f_6 + f_8 = 3.037901
Hence, by Simpson's one-third rule we have,

∫_0^1 e^(−x²) dx = (h/3)[f_0 + f_{10} + 4(f_1 + f_3 + f_5 + f_7 + f_9) + 2(f_2 + f_4 + f_6 + f_8)]

= (0.1/3)[1.367879 + 4 × 3.740262 + 2 × 3.037901]

= (0.1/3)[1.367879 + 14.961048 + 6.075802]

= 2.2404729/3 = 0.7468243 ≈ 0.746824

Using the trapezoidal rule, we get

∫_0^1 e^(−x²) dx = (h/2)[f_0 + f_{10} + 2(f_1 + f_2 + ... + f_9)]

= (0.1/2)[1.367879 + 2 × 6.778163] = (0.1/2) × 14.924205 = 0.746210
Example 11.4: Compute the integral I = ∫_0^4 (x³ − 2x² + 1) dx, using Simpson's one-third rule taking h = 1 and show that the computed value agrees with the exact value. Give reasons for this.
Solution: The values of f (x) = x3–2x2+1 are tabulated for x = 0, 1, 2, 3, 4 as
x 0 1 2 3 4
f ( x) 1 0 1 10 33
By Simpson's one-third rule, I = (1/3)[1 + 4(0 + 10) + 2 × 1 + 33] = 76/3 = 25.3333, while the exact value is [x⁴/4 − 2x³/3 + x] from 0 to 4 = 64 − 128/3 + 4 = 76/3. Thus, the computed value by Simpson's one-third rule is equal to the exact value. This is because the error in Simpson's one-third rule contains the fourth order derivative, and so this rule gives the exact result when the integrand is a polynomial of degree less than or equal to three.
Example 11.5: Compute ∫_{0.1}^{0.5} e^x dx by (i) Trapezoidal rule and (ii) Simpson's one-third rule and compare the results with the exact value, by taking h = 0.1.
Solution: We tabulate the values of f(x) = e^x for x = 0.1 to 0.5 with spacing h = 0.1.

I_S = (0.1/3)[1.1052 + 4 (1.2214 + 1.4918) + 2 × 1.3498 + 1.6487]

= (0.1/3)[2.7539 + 4 × 2.7132 + 2.6996] = (0.1/3) × 16.3063 = 0.5435
Example 11.6: Evaluate ∫_0^1 dx/(1 + x) by (i) Trapezoidal rule and (ii) Simpson's one-third rule taking 10 sub-intervals. Hence, find log_e 2 and compare it with the exact value up to six decimal places.

Solution: We tabulate the values of f(x) = 1/(1 + x) for x = 0, 0.1, 0.2, ..., 1.0 as given below:
x      y = f(x) = 1/(1 + x)
0.0 y0 1.000000
0.1 y1 0.9090909
0.2 y2 0.8333333
0.3 y3 0.7692307
0.4 y4 0.7142857
0.5 y5 0.6666667
0.6 y6 0.6250000
0.7 y7 0.5882352
0.8 y8 0.5555556
0.9 y9 0.5263157
1.0 y10 0.500000
Sums: f_0 + f_{10} = 1.500000; f_1 + f_3 + ... + f_9 = 3.4595391; f_2 + f_4 + ... + f_8 = 2.7281746
(i) Using trapezoidal rule, we have

∫_0^1 dx/(1 + x) = (h/2)[f_0 + f_{10} + 2(f_1 + f_2 + f_3 + f_4 + ... + f_9)]

= (0.1/2)[1.500000 + 2 × (3.4595391 + 2.7281745)]

= (0.1/2)[1.500000 + 12.3754272] = 0.6937714.
(ii) Using Simpson's one-third rule, we get

∫_0^1 dx/(1 + x) = (h/3)[f_0 + f_{10} + 4(f_1 + f_3 + ... + f_9) + 2(f_2 + f_4 + ... + f_8)]

= (0.1/3)[1.500000 + 4 × 3.4595391 + 2 × 2.7281745]

= (0.1/3)[1.5 + 13.838156 + 5.456349] = (0.1/3) × 20.794505 = 0.6931501
(iii) Exact value:

∫_0^1 dx/(1 + x) = log_e 2 = 0.6931472

The trapezoidal rule gives the value of the integral having an error 0.6931472 − 0.6937714 = −0.0006242, while the error in the value by Simpson's one-third rule is −0.0000029.
Example 11.7: Compute ∫_0^{π/2} √(cos θ) dθ by (i) Simpson's rule and (ii) Weddle's formula taking six sub-intervals.

Solution: With six sub-intervals, h = (π/2)/6 = π/12 ≈ 0.26179. For applying the integration rules we tabulate f(θ) = √(cos θ).

(i) The value of the integral by Simpson's one-third rule is given by,

I_S = (0.26179/3)[1 + 4 × (0.98281 + 0.84089 + 0.50874) + 2 × (0.93061 + 0.70711) + 0]

= (0.26179/3)[1 + 4 × 2.33244 + 2 × 1.63772]

= (0.26179/3) × 13.60520 = 1.18723
(ii) The value of the integral by Weddle's formula is, with 5(f_1 + f_5) = 7.45775, f_2 + f_4 = 1.63772 and 6f_3 = 5.04534,

I_W = (3/10) × 0.26179 × [1 + 7.45775 + 1.63772 + 5.04534 + 0]

= (3/10) × 0.26179 × 15.14081 = 1.18912 ≈ 1.1891
Solution: On dividing the interval into six sub-intervals, the length of each sub-interval will be h = (b − a)/6. For computing the integral by Weddle's formula, we tabulate f. The value of the integral by Weddle's formula is given by,

I_W = (3h/10)[f_0 + 5f_1 + f_2 + 6f_3 + f_4 + 5f_5 + f_6]
For equally spaced points x_0, x_1, ..., x_n we put

s = (x − x_0)/h   (11.16)

Replacing f(x) by the interpolating polynomial in s, we get

∫_{x_0}^{x_n} f(x) dx = h ∫_0^n [f_0 + s Δf_0 + (s(s − 1)/2!) Δ²f_0 + ...] ds
Performing the integration on the RHS we have,

∫_{x_0}^{x_n} f(x) dx = h[n f_0 + (n²/2) Δf_0 + (1/2)(n³/3 − n²/2) Δ²f_0 + (1/6)(n⁴/4 − n³ + n²) Δ³f_0 + (1/24)(n⁵/5 − 3n⁴/2 + 11n³/3 − 3n²) Δ⁴f_0 + ...]   (11.17)
We can derive different integration formulae by taking particular values of n =
1, 2, 3, .... Again, on replacing the differences, the Newton-Cotes formula can be
expressed in terms of the function values at x0, x1,..., xn, as
∫_{x_0}^{x_n} f(x) dx = h Σ_{k=0}^{n} c_k f(x_k)   (11.18)

where the Cotes coefficients c_k depend only on n; since the formula integrates f ≡ 1 exactly, they satisfy Σ_{k=0}^{n} c_k = n.   (11.19)
Thus,

∫_{x_0}^{x_n} f(x) dx = (h/2)[(f_0 + f_1) + (f_1 + f_2) + ... + (f_{n−1} + f_n)]

Or, ∫_{x_0}^{x_n} f(x) dx = (h/2)[f_0 + 2(f_1 + f_2 + ... + f_{n−1}) + f_n]   (11.2)
For Simpson's three-eighth rule,

∫_{x_0}^{x_3} f(x) dx = h ∫_0^3 [f_0 + u Δf_0 + (u(u−1)/2!) Δ²f_0 + (u(u−1)(u−2)/3!) Δ³f_0] du

= h [u f_0 + (u²/2) Δf_0 + (1/2)(u³/3 − u²/2) Δ²f_0 + (1/6)(u⁴/4 − u³ + u²) Δ³f_0] evaluated from 0 to 3

= h [3y_0 + (9/2) Δy_0 + (9/4) Δ²y_0 + (3/8) Δ³y_0]

= h [3y_0 + (9/2)(y_1 − y_0) + (9/4)(y_2 − 2y_1 + y_0) + (3/8)(y_3 − 3y_2 + 3y_1 − y_0)]

i.e., ∫_{x_0}^{x_3} f(x) dx = (3h/8)(y_0 + 3y_1 + 3y_2 + y_3)

For Weddle's rule over the whole interval,

∫_a^b f(x) dx = (3h/10)[y_0 + 5y_1 + y_2 + 6y_3 + y_4 + 5y_5 + 2y_6 + ... + 2y_{6m−6} + 5y_{6m−5} + y_{6m−4} + 6y_{6m−3} + y_{6m−2} + 5y_{6m−1} + y_{6m}]

Where b − a = 6mh
6. We start with Newton's forward difference interpolation formula which uses
a table of values of f (x) at equally spaced points in the interval [a, b]. Let the
interval [a, b] be divided into n equal sub-intervals such that,
a = x_0, x_i = x_0 + ih, for i = 1, 2, ..., n − 1, x_n = b
So that, nh = b–a
11.9 SUMMARY

• The evaluation of a definite integral cannot be carried out when the integrand f(x) is not integrable, as well as when the function is not explicitly known but only the function values are known at a finite number of values of x. However, the value of the integral can be determined numerically by applying numerical methods.
• Geometrical interpretation of Simpson's one-third formula is that the integral represented by the area under the curve is approximated by the area under the parabola through the points (x_0, f_0), (x_1, f_1) and (x_2, f_2).
• The truncation error in Simpson's one-third formula is −(h⁵/90) f⁽⁴⁾(ξ) for each pair of sub-intervals.
• The truncation error in Simpson's three-eighth rule is −(3h⁵/80) f⁽⁴⁾(ξ) for each group of three sub-intervals.
• In the Newton-Cotes formula with n = 6, some minor modifications give Weddle's formula.
Short-Answer Questions
1. Explain numerical integration.
2. State the trapezoidal rule.
3. Define Simpson's 1/3 rule.
BLOCK - IV
NUMERICAL SOLUTIONS OF ODE

UNIT 12 NUMERICAL SOLUTIONS OF ORDINARY DIFFERENTIAL EQUATIONS
Structure
12.0 Introduction
12.1 Objectives
12.2 Ordinary Differential Equations
12.3 Taylor’s Series Method
12.4 Picard’s Method of Successive Approximations
12.5 Euler’s Method
12.5.1 Modified Euler’s Method
12.5.2 Euler’s Method for a Pair of Differential Equations
12.6 Runge-Kutta Methods
12.7 Multistep Methods
12.8 Predictor-Correction Methods
12.8.1 Euler’s Predictor-Corrector Formula
12.8.2 Milne’s Predictor-Corrector Formula
12.9 Numerical Solution of Boundary Value Problems
12.9.1 Reduction to a Pair of Initial Value Problem
12.9.2 Finite Difference Method
12.10 Answers to Check Your Progress Questions
12.11 Summary
12.12 Key Words
12.13 Self Assessment Questions and Exercises
12.14 Further Readings
12.0 INTRODUCTION

12.1 OBJECTIVES

12.2 ORDINARY DIFFERENTIAL EQUATIONS
Even though there are many methods to find an analytical solution of ordinary
differential equations, for many differential equations solutions in closed form cannot
be obtained. There are many methods available for finding a numerical solution for
differential equations. We consider the solution of an initial value problem associated
with a first order differential equation given by,
dy/dx = f(x, y)   (12.1)

With y(x_0) = y_0   (12.2)

In general, the solution of the differential equation may not always exist. For the existence of a unique solution of the differential Equation (12.1), the following conditions, known as Lipschitz conditions, must be satisfied:

(i) The function f(x, y) is defined and continuous in the strip x_0 ≤ x ≤ b, −∞ < y < ∞.
(ii) There exists a constant L such that for any x in (x_0, b) and any two numbers y and y_1,

|f(x, y) − f(x, y_1)| ≤ L |y − y_1|   (12.3)
The numerical solution of initial value problems consists of finding the approximate
numerical solution of y at successive steps x1, x2,..., xn of x. A number of good
methods are available for computing the numerical solution of differential equations.
y‴(x_0) = f_xx(x_0, y_0) + 2 f_xy(x_0, y_0) y′(x_0) + f_yy(x_0, y_0) {y′(x_0)}² + f_y(x_0, y_0) y″(x_0)   (12.5)

For the initial value problem y′ = xy + 1, y(0) = 1, so that y′(0) = 1, successive differentiation gives

y″(x) = xy′ + y, ∴ y″(0) = 1
y‴(x) = xy″ + 2y′, ∴ y‴(0) = 2
y⁽⁴⁾(x) = xy‴ + 3y″, ∴ y⁽⁴⁾(0) = 3
y⁽⁵⁾(x) = xy⁽⁴⁾ + 4y‴, ∴ y⁽⁵⁾(0) = 8
Hence, the Taylor series solution y(x) is given by,

y(x) ≈ y(0) + x y′(0) + (x²/2) y″(0) + (x³/3!) y‴(0) + (x⁴/4!) y⁽⁴⁾(0) + (x⁵/5!) y⁽⁵⁾(0)

≈ 1 + x + x²/2 + (x³/6) × 2 + (x⁴/24) × 3 + (x⁵/120) × 8 = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15

∴ y(0.1) ≈ 1 + 0.1 + 0.01/2 + 0.001/3 + 0.0001/8 + 0.00001/15 ≈ 1.1053
Similarly, for the initial value problem y′ = x² + y², y(0) = 0, we have y′(0) = 0 and

y″ = 2x + 2yy′, ∴ y″(0) = 0
y‴ = 2 + 2[yy″ + (y′)²], ∴ y‴(0) = 2
y⁽⁴⁾ = 2(yy‴ + 3y′y″), ∴ y⁽⁴⁾(0) = 0
y⁽⁵⁾ = 2[yy⁽⁴⁾ + 4y′y‴ + 3(y″)²], ∴ y⁽⁵⁾(0) = 0
y⁽⁶⁾ = 2[yy⁽⁵⁾ + 5y′y⁽⁴⁾ + 10y″y‴], ∴ y⁽⁶⁾(0) = 0
y⁽⁷⁾ = 2[yy⁽⁶⁾ + 6y′y⁽⁵⁾ + 15y″y⁽⁴⁾ + 10(y‴)²], ∴ y⁽⁷⁾(0) = 80

The Taylor series up to two terms is y(x) = (x³/3!) × 2 + (x⁷/7!) × 80 = x³/3 + x⁷/63
Example 12.3: Given x y′ = x − y², y(2) = 1, evaluate y(2.1), y(2.2) and y(2.3) correct to four decimal places using Taylor series method.

Solution: To compute y(2.1) by Taylor series method, we first find the derivatives of y at x = 2.

y′ = 1 − y²/x, ∴ y′(2) = 1 − 1/2 = 0.5

Differentiating, x y″ + y′ = 1 − 2yy′

∴ 2 y″(2) = 1 − 2 × 1 × (1/2) − 1/2 = −1/2, ∴ y″(2) = −1/4 = −0.25

Differentiating again, x y‴ + 2y″ = −2(y′)² − 2yy″

∴ 2 y‴(2) = −2 (1/2)² − 2 × 1 × (−1/4) − 2 × (−1/4) = 1/2, ∴ y‴(2) = 1/4 = 0.25

Differentiating once more, x y⁽⁴⁾ + 3y‴ = −4y′y″ − 2y′y″ − 2yy‴ = −6y′y″ − 2yy‴
12.4 PICARD'S METHOD OF SUCCESSIVE APPROXIMATIONS

Integrating the differential equation y′ = f(x, y) with the initial condition y(x_0) = y_0, we have

y(x) = y_0 + ∫_{x_0}^{x} f(x, y(x)) dx

The integral contains the unknown function y(x) and it is not possible to integrate it directly. In Picard's method, the first approximate solution y⁽¹⁾(x) is obtained by replacing y(x) by y_0,

y⁽¹⁾(x) = y_0 + ∫_{x_0}^{x} f(x, y_0) dx
The second approximate solution is derived on replacing y by y⁽¹⁾(x). Thus,

y⁽²⁾(x) = y_0 + ∫_{x_0}^{x} f(x, y⁽¹⁾(x)) dx   (12.8)

The process can be continued, so that we have the general approximate solution given by,

y⁽ⁿ⁾(x) = y_0 + ∫_{x_0}^{x} f(x, y⁽ⁿ⁻¹⁾(x)) dx, for n = 2, 3, ...   (12.9)
This iteration formula is known as Picard’s iteration for finding solution of a first
order differential equation, when an initial condition is given. The iterations are
continued until two successive approximate solutions y(k) and y(k+1) give approximately
the same result for the desired values of x up to a desired accuracy.
Note: Due to practical difficulties in evaluating the necessary integration, this method
cannot be always used. However, if f (x, y) is a polynomial in x and y, the successive
approximate solutions will be obtained as a power series of x.
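When the integrations are awkward, the Picard iteration can still be carried out numerically: each new iterate y_new(x) = y_0 + ∫ f(t, y_old(t)) dt is evaluated on a grid by the cumulative trapezoidal rule. A sketch under that assumption (names are illustrative, not from the text):

```python
def picard(f, x0, y0, x_end, iterations, n=1000):
    """Numerical Picard iteration for y' = f(x, y), y(x0) = y0.
    Returns the grid points and the last iterate's values on them."""
    h = (x_end - x0) / n
    xs = [x0 + i * h for i in range(n + 1)]
    ys = [y0] * (n + 1)                      # zeroth approximation: y(x) = y0
    for _ in range(iterations):
        g = [f(x, y) for x, y in zip(xs, ys)]
        new = [y0]
        for i in range(1, n + 1):            # cumulative trapezoidal integral
            new.append(new[-1] + h * (g[i - 1] + g[i]) / 2)
        ys = new
    return xs, ys

# Example 12.4 of the text: y' = x + y, y(0) = 1; exact y(0.1) = 1.1103...
xs, ys = picard(lambda x, y: x + y, 0.0, 1.0, 0.1, iterations=6)
print(round(ys[-1], 4))  # 1.1103
```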
Example 12.4: Find four successive approximate solutions for the following initial value problem: y′ = x + y, with y(0) = 1, by Picard's method. Hence compute y(0.1) and y(0.2) correct to five significant digits.

Solution: We have y′ = x + y, with y(0) = 1.

The first approximation by Picard's method is,

y⁽¹⁾(x) = 1 + ∫_0^x (x + 1) dx = 1 + x + x²/2

It is clear that successive approximations are easily determined as power series of x having one degree more than the previous one. The fourth approximation is y⁽⁴⁾(x) = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120, and the value of y(0.1) is given by,

y(0.1) = 1 + 0.1 + (0.1)² + (0.1)³/3 + (0.1)⁴/12 + ... ≈ 1.1103, correct to five significant digits.

Similarly, y(0.2) ≈ 1.2428.
Example 12.5: Find the successive approximate solutions of the initial value problem y′ = xy + 1, with y(0) = 1, by Picard's method.

Solution: The first approximate solution is given by,

y⁽¹⁾(x) = 1 + ∫_0^x (x + 1) dx = 1 + x + x²/2

y⁽²⁾(x) = 1 + ∫_0^x [x(1 + x + x²/2) + 1] dx = 1 + x + x²/2 + x³/3 + x⁴/8

y⁽³⁾(x) = 1 + ∫_0^x [x(1 + x + x²/2 + x³/3 + x⁴/8) + 1] dx = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15 + x⁶/48
Example 12.6: Compute y(0.25) and y(0.5) correct to three decimal places by solving the following initial value problem by Picard's method:

dy/dx = x²/(1 + y²), y(0) = 0

Solution: We have dy/dx = x²/(1 + y²), y(0) = 0. The first approximation is

y⁽¹⁾(x) = ∫_0^x x² dx = x³/3

and the second approximation is

y⁽²⁾(x) = ∫_0^x x²/(1 + x⁶/9) dx = tan⁻¹(x³/3)
For x = 0.25, y⁽¹⁾(0.25) = (0.25)³/3 = 0.00521 and y⁽²⁾(0.25) = tan⁻¹(0.00521) = 0.00521, so that y(0.25) = 0.005 correct to three decimal places.

Again, for x = 0.5, y⁽¹⁾(0.5) = (0.5)³/3 = 0.041667

y⁽²⁾(0.5) = tan⁻¹((0.5)³/3) = 0.041643 ≈ 0.0416

Thus, correct to three decimal places, y(0.5) = 0.042.
Note: For this problem we observe that the integral for getting the third and higher approximate solutions is either difficult or impossible to evaluate, since

y⁽³⁾(x) = ∫_0^x x²/[1 + (tan⁻¹(x³/3))²] dx is not integrable in closed form.
Example 12.7: Use Picard's method to find two successive approximate solutions of the initial value problem,

dy/dx = (y − x)/(y + x), y(0) = 1

Solution:

y⁽¹⁾(x) = y_0 + ∫_0^x f(x, y_0) dx

∴ y⁽¹⁾(x) = 1 + ∫_0^x (1 − x)/(1 + x) dx = 1 + ∫_0^x [2/(1 + x) − 1] dx

∴ y⁽¹⁾(x) = 1 + 2 log_e|1 + x| − x

y⁽²⁾(x) = y_0 + ∫_0^x f(x, y⁽¹⁾(x)) dx

= 1 + ∫_0^x (1 + 2 log_e|1 + x| − 2x)/(1 + 2 log_e|1 + x|) dx

We observe that it is not possible to obtain the integral for getting y⁽²⁾(x) in closed form. Thus Picard's method is not applicable for getting successive approximate solutions.
12.5 EULER'S METHOD
This is a crude but simple method of solving a first order initial value problem:

dy/dx = f(x, y), y(x_0) = y_0

This is derived by integrating f(x_0, y_0) instead of f(x, y) for a small interval,

∴ y(x_0 + h) = y(x_0) + h f(x_0, y_0)

In general, y_{k+1} = y_k + h f(x_k, y_k), with error at each step

e_k = y(x_k + h) − {y_k + h f(x_k, y_k)}

= y_k + h y′(x_k) + (h²/2) y″(x_k + θh) − y_k − h y′(x_k), 0 < θ < 1

∴ e_k = (h²/2) y″(x_k + θh), 0 < θ < 1
Note: Euler's method finds a sequence of values {y_k} of y for the sequence of values {x_k} of x, step by step. But to get the solution up to a desired accuracy, we have to take the step size h to be very small. Again, the method should not be used for a larger range of x about x_0, since the propagated error grows as integration proceeds.
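The stepping rule is a one-line loop in code. A sketch in Python (names are illustrative; the test equation is that of Example 12.8):

```python
def euler(f, x0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    history = [(x, y)]
    for _ in range(steps):
        y += h * f(x, y)
        x += h
        history.append((x, y))
    return history

# Example 12.8 of the text: dy/dx = x^2 - y, y(0) = 1, h = 0.1
hist = euler(lambda x, y: x * x - y, 0.0, 1.0, 0.1, 3)
for x, y in hist:
    print(f"x = {x:.1f}  y = {y:.4f}")
```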
Example 12.8: Solve the following differential equation by Euler's method for x = 0.1, 0.2, 0.3, taking h = 0.1: dy/dx = x² − y, y(0) = 1. Compare the results with the exact solution.

Solution: Given dy/dx = x² − y, with y(0) = 1.
In Euler’s method one computes in successive steps, values of y1, y2, y3,... at
x1 = x0+ h, x2 = x0 + 2h, x3 = x0 + 3h, using the formula,
y_{n+1} = y_n + h f(x_n, y_n), for n = 0, 1, 2, ...

∴ y_{n+1} = y_n + h (x_n² − y_n)

n    x_n    y_n      f(x_n, y_n) = x_n² − y_n    y_{n+1} = y_n + h f(x_n, y_n)
0    0.0    1.0000   −1.0000                     0.9000
1    0.1    0.9000   −0.8900                     0.8110
2    0.2    0.8110   −0.7710                     0.7339
3    0.3    0.7339   −0.6439                     0.6695
The analytical solution of the differential equation, written as dy/dx + y = x², is

y e^x = ∫ x² e^x dx + c

Or, y e^x = x² e^x − 2x e^x + 2e^x + c. With y(0) = 1, c = −1.

∴ y = x² − 2x + 2 − e^{−x}.

The following table compares the exact solution with the approximate solution by Euler's method.
n xn Approximate Solution Exact Solution % Error
1 0.1 0.9000 0.9052 0.57
2 0.2 0.8110 0.8213 1.25
3 0.3 0.7339 0.7492 2.04
Example 12.9: Compute the solution of the following initial value problem by Euler's method, for x = 0.1 correct to four decimal places, taking h = 0.02,

dy/dx = (y − x)/(y + x), y(0) = 1.
Solution: Euler's method for solving the initial value problem dy/dx = f(x, y), y(x_0) = y_0, is y_{n+1} = y_n + h f(x_n, y_n).
Taking h = 0.02, we have x1 = 0.02, x2 = 0.04, x3 = 0.06, x4 = 0.08, x5 = 0.1.
Using Euler's method, we have, since y(0) = 1,

y(0.02) = y_1 = y_0 + h f(x_0, y_0) = 1 + 0.02 × (1 − 0)/(1 + 0) = 1.0200

y(0.04) = y_2 = y_1 + h f(x_1, y_1) = 1.0200 + 0.02 × (1.0200 − 0.02)/(1.0200 + 0.02) = 1.0392

y(0.06) = y_3 = y_2 + h f(x_2, y_2) = 1.0392 + 0.02 × (1.0392 − 0.04)/(1.0392 + 0.04) = 1.0577

y(0.08) = y_4 = y_3 + h f(x_3, y_3) = 1.0577 + 0.02 × (1.0577 − 0.06)/(1.0577 + 0.06) = 1.0756

y(0.1) = y_5 = y_4 + h f(x_4, y_4) = 1.0756 + 0.02 × (1.0756 − 0.08)/(1.0756 + 0.08) = 1.0928

Hence, y(0.1) = 1.0928.
12.5.1 Modified Euler's Method

In the modified Euler's method each step consists of a predictor followed by an iterated corrector,

y⁽⁰⁾_{n+1} = y_n + h f(x_n, y_n)   (12.13)

y⁽ᵏ⁾_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y⁽ᵏ⁻¹⁾_{n+1})], k = 1, 2, ...   (12.14)

The iterations are continued until two successive approximations y⁽ᵏ⁾_{n+1} and y⁽ᵏ⁺¹⁾_{n+1} coincide to the desired accuracy. As a rule, the iterations converge rapidly for a sufficiently small h. If, however, after three or four iterations the iterations still do not give the necessary accuracy in the solution, the spacing h is decreased and the iterations are performed again.
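The predictor-corrector cycle can be sketched as follows (names, tolerance and iteration cap are illustrative choices, not from the text):

```python
def modified_euler(f, x0, y0, h, steps, tol=1e-6, max_iter=20):
    """Modified Euler's method with the corrector iterated to tolerance."""
    x, y = x0, y0
    for _ in range(steps):
        y_pred = y + h * f(x, y)                 # predictor, Eq. (12.13)
        for _ in range(max_iter):                # corrector, Eq. (12.14)
            y_corr = y + h / 2 * (f(x, y) + f(x + h, y_pred))
            if abs(y_corr - y_pred) < tol:
                break
            y_pred = y_corr
        x, y = x + h, y_corr
    return y

# Example 12.10 of the text: dy/dx = x^2 + y, y(0) = 1, h = 0.01, two steps
val = modified_euler(lambda x, y: x * x + y, 0.0, 1.0, 0.01, 2)
print(round(val, 4))  # 1.0202
```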
Example 12.10: Use modified Euler's method to compute y(0.02) for the initial value problem dy/dx = x² + y, with y(0) = 1, taking h = 0.01. Compare the result with the exact solution.
Solution: Modified Euler's method consists of obtaining the solution at successive points, x_1 = x_0 + h, x_2 = x_0 + 2h, ..., x_n = x_0 + nh, by the two stage computations given by,

y⁽⁰⁾_{n+1} = y_n + h f(x_n, y_n)

y⁽¹⁾_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y⁽⁰⁾_{n+1})].

For the given problem, f(x, y) = x² + y and h = 0.01

y⁽⁰⁾_1 = y_0 + h[x_0² + y_0] = 1 + 0.01 × 1 = 1.01

y⁽¹⁾_1 = 1 + (0.01/2)[1.0 + 1.01 + (0.01)²] = 1.01005

i.e., y_1 = y(0.01) = 1.01005

Next, y⁽⁰⁾_2 = y_1 + h[x_1² + y_1] = 1.01005 + 0.01[(0.01)² + 1.01005] = 1.01005 + 0.0101015 = 1.02015

y⁽¹⁾_2 = 1.01005 + (0.01/2)[(0.01)² + 1.01005 + (0.02)² + 1.02015]

= 1.01005 + 0.005 × 2.03070 = 1.01005 + 0.01015 = 1.02020

∴ y_2 = y(0.02) = 1.02020. The exact solution is y = 3e^x − x² − 2x − 2, which gives y(0.02) = 1.020204.
12.5.2 Euler's Method for a Pair of Differential Equations

For a second order equation y″ = g(x, y, y′), we write z = dy/dx, so that dz/dx = g(x, y, z), with y(x_0) = y_0 and z(x_0) = y′_0.
Example 12.11: Compute y(1.1) and y(1.2) by solving the initial value problem,

y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44

Solution: We can rewrite the problem as y′ = z, z′ = −z/x − y; with y(1) = 0.77 and z(1) = −0.44.

Taking h = 0.1, we use Euler's method for the problem in the form,

y_{i+1} = y_i + h z_i

z_{i+1} = z_i + h[−z_i/x_i − y_i], i = 0, 1, 2, ...

Thus, y_1 = y(1.1) and z_1 = z(1.1) are given by,

y_1 = y_0 + h z_0 = 0.77 + 0.1 × (−0.44) = 0.726

z_1 = z_0 + h[−z_0/x_0 − y_0] = −0.44 + 0.1 × (0.44 − 0.77) = −0.44 − 0.033 = −0.473
Example 12.12: Using Euler's method, compute y(0.1) and y(0.2) for the initial value problem,

y″ + xy′ + y = 0, y(0) = 0, y′(0) = 1

Taking h = 0.1, we have by Euler's method, with z = y′,

y_1 = y(0.1) = y_0 + h z_0 = 0 + 0.1 × 1 = 0.1

z_1 = z_0 + h(−x_0 z_0 − y_0) = 1, so that y_2 = y(0.2) = y_1 + h z_1 = 0.2

For comparison, successive differentiation of the equation gives

y⁽⁴⁾(x) = −x y‴ − 3y″, ∴ y⁽⁴⁾(0) = 0

y⁽⁵⁾(x) = −x y⁽⁴⁾ − 4y‴, ∴ y⁽⁵⁾(0) = 8

And in general, y⁽²ⁿ⁾(0) = 0, y⁽²ⁿ⁺¹⁾(0) = −2n y⁽²ⁿ⁻¹⁾(0) = (−1)ⁿ 2ⁿ n!

Thus, y(x) = x − x³/3 + x⁵/15 − ... + (−1)ⁿ (2ⁿ n!/(2n + 1)!) x^{2n+1} + ...

This is an alternating series whose terms decrease. Using this, we form the solution for y up to 0.2 as given below:
In a Runge-Kutta method of order 2 we write

y_{n+1} = y_n + a k_1 + b k_2, with k_1 = h f(x_n, y_n), k_2 = h f(x_n + αh, y_n + βk_1)

The unknown parameters a, b, α, and β are determined by expanding in Taylor series and forming equations by equating coefficients of like powers of h. We have,

y_{n+1} = y(x_n + h) = y_n + h y′(x_n) + (h²/2) y″(x_n) + (h³/6) y‴(x_n) + O(h⁴)

= y_n + h f(x_n, y_n) + (h²/2)[f_x + f f_y]_n + (h³/6)[f_xx + 2f f_xy + f² f_yy + f_x f_y + f f_y²]_n + O(h⁴)   (12.20)

The subscript n indicates that the functions within brackets are to be evaluated at (x_n, y_n).

Again, expanding k_2 by Taylor series in two variables, we have

k_2 = h[f + αh f_x + βk_1 f_y + O(h²)]_n   (12.21)

Thus, on substituting the expansion of k_2 and comparing coefficients, we get

a + b = 1, bα = 1/2, bβ = 1/2

There are three equations for the determination of four unknown parameters. Thus, there are many solutions. However, usually a symmetric solution is taken by setting a = b = 1/2, α = β = 1.

Thus, we can write the Runge-Kutta method of order 2 in the form,

y_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_n + h, y_n + h f(x_n, y_n))], for n = 0, 1, 2, ...   (12.22)
Proceeding as in the second order method, Runge-Kutta methods of order 4 can be formulated. Omitting the derivation, we give below the commonly used Runge-Kutta method of order 4.
y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4) + O(h⁵)

k_1 = h f(x_n, y_n)

k_2 = h f(x_n + h/2, y_n + k_1/2)

k_3 = h f(x_n + h/2, y_n + k_2/2)

k_4 = h f(x_n + h, y_n + k_3)   (12.23)
The Runge-Kutta method of order 4 requires the evaluation of the first order derivative f(x, y) at four points. The method is self-starting. The error estimate with this method can be roughly given by,

|y(x_n) − y_n| ≈ (y_n* − y_n)/15   (12.24)

Where y_n* and y_n are the approximate values computed with h/2 and h, respectively, as step size, and y(x_n) is the exact solution.

Note: In particular, for the special form of differential equation y′ = F(x), a function of x alone, the Runge-Kutta method reduces to Simpson's one-third formula of numerical integration from x_n to x_{n+1}. Then,

y_{n+1} = y_n + (h/6)[F(x_n) + 4F(x_n + h/2) + F(x_n + h)]

Runge-Kutta methods are widely used particularly for finding starting values at steps x_1, x_2, x_3, ..., since they do not require evaluation of higher order derivatives. It is also easy to implement the method in a computer program.
Example 12.14: Compute values of y(0.1) and y(0.2) by the 4th order Runge-Kutta method, correct to five significant figures, for the initial value problem,

dy/dx = x + y, y(0) = 1

Solution: We have dy/dx = x + y, y(0) = 1.

∴ f(x, y) = x + y, h = 0.1, x_0 = 0, y_0 = 1

By the Runge-Kutta method,

y(0.1) = y(0) + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

Where, k_1 = 0.1 × (0 + 1) = 0.1
k_2 = 0.1 × (0.05 + 1.05) = 0.11
k_3 = 0.1 × (0.05 + 1.055) = 0.1105
k_4 = 0.1 × (0.1 + 1.1105) = 0.12105

∴ y(0.1) = 1 + (1/6) × 0.66205 = 1.11034

Proceeding in the same way from (0.1, 1.11034), y(0.2) = 1.24281.
Example 12.15: Use the Runge-Kutta method of order 4 to evaluate y(1.1) and y(1.2), by taking step length h = 0.1, for the initial value problem,

dy/dx = x² + y², y(1) = 0

Solution: For the initial value problem dy/dx = f(x, y), the Runge-Kutta method of order 4 is given as,

y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)

For y(1.1):

k_1 = 0.1 × (1² + 0²) = 0.1
k_2 = 0.1 × [(1.05)² + (0.05)²] = 0.110500
k_3 = 0.1 × [(1.05)² + (0.055250)²] = 0.110555
k_4 = 0.1 × [(1.1)² + (0.110555)²] = 0.122222

∴ y(1.1) = 0 + (1/6)(0.1 + 0.221000 + 0.221110 + 0.122222) = 0.110722

For y(1.2):

k_1 = 0.122226
k_2 = 0.135203
k_3 = 0.135430
k_4 = 0.150059

∴ y(1.2) = 0.110722 + (1/6)(0.122226 + 0.270406 + 0.270860 + 0.150059) = 0.246314
Algorithm: Solution of first order differential equation by Runge-Kutta method of order 2: y′ = f(x, y) with y(x_0) = y_0.
Step 1: Define f (x, y)
Step 2: Read x0, y0, h, xf [h is step size, xf is final x]
Step 3: Repeat Steps 4 to 11 until x1 > xf
Step 4: Compute k1 = f (x0, y0)
Step 5: Compute y1 = y0+ hk1
Step 6: Compute x1 = x0+ h
Step 7: Compute k2 = f (x1, y1)
Step 8: Compute y1 = y0 + h × (k1 + k2)/2
Step 9: Write x1, y1
Step 10: Set x0 = x1
Step 11: Set y0 = y1
Step 12: Stop
Algorithm: Solution of y′ = f(x, y), y(x_0) = y_0 by Runge-Kutta method of order 4.
Step 1: Define f (x, y)
Step 2: Read x0, y0, h, xf
Step 3: Repeat Step 4 to Step 16 until x1 > xf
Step 4: Compute k1 = h f (x0, y0)
Step 5: Compute x = x0 + h/2
Step 6: Compute y = y0 + k1/2
Step 7: Compute k2 = h f(x, y)
Step 8: Compute y = y0 + k2/2
Step 9: Compute k3 = h f(x, y)
Step 10: Compute x1 = x0+ h
Step 11: Compute y = y0+ k3
Step 12: Compute k4 = h f (x1, y)
Step 13: Compute y1 = y0+ (k1+ 2 (k2+ k3) + k4)/6
Step 14: Write x1, y1
Step 15: Set x0 = x1
Step 16: Set y0 = y1
Step 17: Stop
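The order-4 algorithm above maps directly onto a short function. A sketch in Python (names are illustrative; the test problem is that of Example 12.14):

```python
def rk4(f, x0, y0, h, steps):
    """Classical 4th-order Runge-Kutta for y' = f(x, y), y(x0) = y0."""
    x, y = x0, y0
    for _ in range(steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Example 12.14 of the text: y' = x + y, y(0) = 1; exact solution 2e^x - x - 1
val = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 1)
print(round(val, 5))  # 1.11034
```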
Runge-Kutta Method for a Pair of Equations

Consider an initial value problem associated with a system of two first order ordinary differential equations in the form,

dy/dx = f(x, y, z), dz/dx = g(x, y, z), with y(x_0) = y_0, z(x_0) = z_0

The fourth order Runge-Kutta formulas are

y_{i+1} = y_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4), z_{i+1} = z_i + (1/6)(l_1 + 2l_2 + 2l_3 + l_4)   (12.25)

Where k_1 = h f(x_i, y_i, z_i), l_1 = h g(x_i, y_i, z_i)

k_2 = h f(x_i + h/2, y_i + k_1/2, z_i + l_1/2), l_2 = h g(x_i + h/2, y_i + k_1/2, z_i + l_1/2)

k_3 = h f(x_i + h/2, y_i + k_2/2, z_i + l_2/2), l_3 = h g(x_i + h/2, y_i + k_2/2, z_i + l_2/2)

k_4 = h f(x_i + h, y_i + k_3, z_i + l_3), l_4 = h g(x_i + h, y_i + k_3, z_i + l_3)

y_i = y(x_i), z_i = z(x_i), i = 0, 1, 2, ...
Runge-Kutta Method for a Second Order Differential Equation

Consider the initial value problem associated with a second order differential equation,

d²y/dx² = g(x, y, y′)

With y(x_0) = y_0 and y′(x_0) = y′_0.

On substituting z = y′, the above problem is reduced to the problem,

dy/dx = z, dz/dx = g(x, y, z)

With y(x_0) = y_0 and z(x_0) = y′(x_0) = y′_0,

which is an initial value problem associated with a system of two first order differential equations. Thus we can write the Runge-Kutta method for a second order differential equation as,

y_{i+1} = y_i + (1/6)(k_1 + 2k_2 + 2k_3 + k_4), z_{i+1} = z_i + (1/6)(l_1 + 2l_2 + 2l_3 + l_4)   (12.26)

Where k_1 = h z_i, l_1 = h g(x_i, y_i, z_i)

k_2 = h(z_i + l_1/2), l_2 = h g(x_i + h/2, y_i + k_1/2, z_i + l_1/2)

k_3 = h(z_i + l_2/2), l_3 = h g(x_i + h/2, y_i + k_2/2, z_i + l_2/2)

k_4 = h(z_i + l_3), l_4 = h g(x_i + h, y_i + k_3, z_i + l_3)
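The formulas for a second order equation can be sketched as follows (names are illustrative; the test problem is that of Examples 12.11 and 12.17, y″ + y′/x + y = 0):

```python
def rk4_second_order(g, x0, y0, z0, h, steps):
    """RK4 for y'' = g(x, y, y'), rewritten as y' = z, z' = g(x, y, z)."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        k1 = h * z;            l1 = h * g(x, y, z)
        k2 = h * (z + l1 / 2); l2 = h * g(x + h / 2, y + k1 / 2, z + l1 / 2)
        k3 = h * (z + l2 / 2); l3 = h * g(x + h / 2, y + k2 / 2, z + l2 / 2)
        k4 = h * (z + l3);     l4 = h * g(x + h, y + k3, z + l3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += (l1 + 2 * l2 + 2 * l3 + l4) / 6
        x += h
    return y, z

# y'' + y'/x + y = 0, y(1) = 0.77, y'(1) = -0.44, one step of h = 0.1
y, z = rk4_second_order(lambda x, y, z: -z / x - y, 1.0, 0.77, -0.44, 0.1, 1)
print(round(y, 5), round(z, 5))
```

One step reproduces y(1.1) ≈ 0.72441 and y′(1.1) ≈ −0.47131, matching the hand computation of Example 12.17.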
12.7 MULTISTEP METHODS

Integrating the differential equation dy/dx = f(x, y) from x_n to x_{n+1}, we get

y_{n+1} = y_n + ∫_{x_n}^{x_{n+1}} f(x, y) dx

To evaluate the integral on the right hand side, we consider f(x, y) as a function of x and replace it by an interpolating polynomial, i.e., a Newton's backward difference interpolation using the (m + 1) points x_n, x_{n−1}, x_{n−2}, ..., x_{n−m}.
The coefficients γ_k can be easily computed to give,

γ_0 = 1, γ_1 = 1/2, γ_2 = 5/12, γ_3 = 3/8, etc.

With m = 3 this gives

y_{n+1} = y_n + h[f_n + (1/2)Δf_{n−1} + (5/12)Δ²f_{n−2} + (3/8)Δ³f_{n−3}]

Substituting the expressions of the differences in terms of function values given by,

Δf_{n−1} = f_n − f_{n−1}, Δ²f_{n−2} = f_n − 2f_{n−1} + f_{n−2}

Δ³f_{n−3} = f_n − 3f_{n−1} + 3f_{n−2} − f_{n−3}
We get on rearranging,

y_{n+1} = y_n + (h/24)[55 f_n − 59 f_{n−1} + 37 f_{n−2} − 9 f_{n−3}]   (12.27)
This is known as the Adams-Bashforth formula of order 4. The local error of this formula is,

E = (251/720) h⁵ y⁽⁵⁾(ξ)   (12.28)

Or, E = (251/720) h⁵ f⁽⁴⁾(ξ, y(ξ))   (12.29)

The fourth order Adams-Bashforth formula requires four starting values, i.e., the derivatives f_3, f_2, f_1 and f_0. This is a multistep method.
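Formula (12.27) can be sketched in code; the four starting values are supplied from outside (here from the exact solution of y′ = x + y, for illustration only):

```python
import math

def adams_bashforth4(f, xs, ys, h, steps):
    """4th-order Adams-Bashforth; xs, ys hold four starting values
    (obtained, e.g., by a Runge-Kutta method)."""
    xs, ys = list(xs), list(ys)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for _ in range(steps):
        y_next = ys[-1] + h / 24 * (55 * fs[-1] - 59 * fs[-2]
                                    + 37 * fs[-3] - 9 * fs[-4])
        xs.append(xs[-1] + h)
        ys.append(y_next)
        fs.append(f(xs[-1], y_next))
    return xs, ys

# y' = x + y, y(0) = 1; starting values from the exact solution y = 2e^x - x - 1
h = 0.1
start_x = [0.0, 0.1, 0.2, 0.3]
start_y = [2 * math.exp(x) - x - 1 for x in start_x]
xs, ys = adams_bashforth4(lambda x, y: x + y, start_x, start_y, h, steps=1)
print(round(ys[4], 4))  # 1.5836; exact value 2e^0.4 - 1.4 = 1.583649
```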
12.8 PREDICTOR-CORRECTOR METHODS

12.8.1 Euler's Predictor-Corrector Formula

The Euler predictor is

y⁽ᵖ⁾_{n+1} = y_n + h f(x_n, y_n)   (12.30)

and the corrector is

y⁽ᶜ⁾_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y⁽ᵖ⁾_{n+1})]   (12.31)

In order to determine the solution of the problem up to a desired accuracy, the corrector formula can be employed in an iterative manner as shown below:

Step 1: Compute y⁽⁰⁾_{n+1}, using Equation (12.30)

Step 2: Compute y⁽ᵏ⁾_{n+1} = y_n + (h/2)[f(x_n, y_n) + f(x_{n+1}, y⁽ᵏ⁻¹⁾_{n+1})], for k = 1, 2, 3, ...

The computation is continued till the condition given below is satisfied,

|y⁽ᵏ⁾_{n+1} − y⁽ᵏ⁻¹⁾_{n+1}| < ε   (12.32)

where ε is the desired accuracy.

12.8.2 Milne's Predictor-Corrector Formula

Milne's predictor and corrector formulas are

y⁽ᵖ⁾_{n+1} = y_{n−3} + (4h/3)(2f_{n−2} − f_{n−1} + 2f_n)   (12.33)

y⁽ᶜ⁾_{n+1} = y_{n−1} + (h/3)(f_{n−1} + 4f_n + f_{n+1})   (12.34)
Example 12.16: Compute the Taylor series solution of the problem dy/dx = xy + 1, y(0) = 1, up to x⁵ terms and hence compute values of y(0.1), y(0.2) and y(0.3). Use Milne's Predictor-Corrector method to compute y(0.4) and y(0.5).
Solution: We have y′ = xy + 1, with y(0) = 1, ∴ y′(0) = 1

Differentiating successively, we get

y″(x) = xy′ + y, ∴ y″(0) = 1
y‴(x) = xy″ + 2y′, ∴ y‴(0) = 2
y⁽⁴⁾(x) = xy‴ + 3y″, ∴ y⁽⁴⁾(0) = 3
y⁽⁵⁾(x) = xy⁽⁴⁾ + 4y‴, ∴ y⁽⁵⁾(0) = 8
Thus, the Taylor series solution is given by,

y(x) = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15

∴ y(0.1) = 1.10535, y(0.2) = 1.22289, y(0.3) = 1.35517, with corresponding derivatives y′_1 = 1.11053, y′_2 = 1.24458, y′_3 = 1.40658.

The Predictor formula gives, y_4 = y(0.4) = y_0 + (4h/3)(2y′_1 − y′_2 + 2y′_3).

∴ y⁽⁰⁾_4 = 1 + (4 × 0.1/3)(2 × 1.11053 − 1.24458 + 2 × 1.40658) = 1.50528

∴ y′_4 = 1 + 0.4 × 1.50528 = 1.602112

The Corrector formula gives, y⁽¹⁾_4 = y_2 + (h/3)(y′_2 + 4y′_3 + y′_4) = 1.22289 + (0.1/3)(1.24458 + 4 × 1.40658 + 1.602112) = 1.50532.
12.9 NUMERICAL SOLUTION OF BOUNDARY VALUE PROBLEMS

The following two methods reduce the boundary value problem y″ + p(x)y′ + q(x)y = r(x), y(a) = A, y(b) = B, into initial value problems which are then solved by any of the methods for solving such problems.

12.9.1 Reduction to a Pair of Initial Value Problems

This method is applicable to linear differential equations only. In this method, the solution is assumed to be a linear combination of two solutions in the form,

y(x) = u(x) + λ v(x)   (12.37)

Where λ is a suitable constant determined by using the boundary condition, and u(x) and v(x) are the solutions of the following two initial value problems:

(i) u″ + p(x) u′ + q(x) u = r(x)

u(a) = A, u′(a) = α_1, (say).   (12.38)

(ii) v″ + p(x) v′ + q(x) v = 0

v(a) = 0 and v′(a) = α_2, (say)   (12.39)

Where α_1 and α_2 are arbitrarily assumed constants. After solving the two initial value problems, the constant λ is determined by satisfying the boundary condition at x = b. Thus,

u(b) + λ v(b) = B

Or, λ = (B − u(b))/v(b)   (12.40)
12.9.2 Finite Difference Method

The values of y at the mesh points are denoted by y_n, given by,

y_n = y(x_0 + nh), n = 0, 1, 2, ..., N   (12.44)
The following central difference approximations are usually used in the finite difference method of solving boundary value problems,
y′(x_n) ≈ (y_{n+1} − y_{n−1})/(2h)   (12.45)

y″(x_n) ≈ (y_{n+1} − 2y_n + y_{n−1})/h²   (12.46)
Substituting these in the differential equation, we have

2(y_{n+1} − 2y_n + y_{n−1}) + p_n h(y_{n+1} − y_{n−1}) + 2h² q_n y_n = 2 r_n h²,

Where p_n = p(x_n), q_n = q(x_n), r_n = r(x_n)   (12.47)

Rewriting the equation by regrouping we get,

(2 − h p_n) y_{n−1} + (−4 + 2h² q_n) y_n + (2 + h p_n) y_{n+1} = 2 r_n h²   (12.48)
This equation is to be considered at each of the interior points, i.e., it is true for
n = 1, 2, ..., N–1.
The boundary conditions of the problem are given by,

y_0 = A, y_N = B   (12.49)

Introducing these conditions in the relevant equations and arranging them, we have the following system of linear equations in the (N − 1) unknowns y_1, y_2, ..., y_{N−1}.   (12.50)

The above system of N − 1 equations can be expressed in matrix notation in the form

Ay = b   (12.51)
Where the coefficient matrix A is a tridiagonal one, of the form

A = | B_1  C_1  0    0    ...  0        0        0       |
    | A_2  B_2  C_2  0    ...  0        0        0       |
    | 0    A_3  B_3  C_3  ...  0        0        0       |
    | ...  ...  ...  ...  ...  ...      ...      ...     |
    | 0    0    0    0    ...  A_{N−2}  B_{N−2}  C_{N−2} |
    | 0    0    0    0    ...  0        A_{N−1}  B_{N−1} |   (12.52)
Where B_i = −4 + 2h² q_i, i = 1, 2, ..., N − 1

C_i = 2 + h p_i, i = 1, 2, ..., N − 2   (12.53)

A_i = 2 − h p_i, i = 2, 3, ..., N − 1
The vector b has components,

b_1 = 2r_1h² − (2 − hp_1)A, b_i = 2r_ih² for i = 2, ..., N − 2, and b_{N−1} = 2r_{N−1}h² − (2 + hp_{N−1})B   (12.54)
The system of linear equations can be directly solved using suitable methods.
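Because the system is tridiagonal, the Thomas algorithm solves it in O(N) operations. A sketch (names and the test problem y″ + y = 0, y(0) = 0, y(π/2) = 1, whose solution is sin x, are my own illustrations):

```python
import math

def solve_bvp(p, q, r, a, b, A, B, N):
    """Finite-difference solution of y'' + p(x)y' + q(x)y = r(x), y(a)=A, y(b)=B,
    using scheme (12.48) and the Thomas algorithm for the tridiagonal system."""
    h = (b - a) / N
    xs = [a + i * h for i in range(N + 1)]
    lo, di, up, rhs = [], [], [], []          # rows for interior unknowns y_1..y_{N-1}
    for n in range(1, N):
        x = xs[n]
        lo.append(2 - h * p(x))
        di.append(-4 + 2 * h * h * q(x))
        up.append(2 + h * p(x))
        rhs.append(2 * h * h * r(x))
    rhs[0] -= lo[0] * A                       # fold boundary values into the RHS
    rhs[-1] -= up[-1] * B
    for i in range(1, N - 1):                 # forward elimination
        m = lo[i] / di[i - 1]
        di[i] -= m * up[i - 1]
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * (N - 1)                       # back substitution
    y[-1] = rhs[-1] / di[-1]
    for i in range(N - 3, -1, -1):
        y[i] = (rhs[i] - up[i] * y[i + 1]) / di[i]
    return xs, [A] + y + [B]

xs, ys = solve_bvp(lambda x: 0.0, lambda x: 1.0, lambda x: 0.0,
                   0.0, math.pi / 2, 0.0, 1.0, 20)
print(round(ys[10], 4))  # approximates sin(pi/4) = 0.7071
```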
Example 12.17: Compute values of y(1.1) and y(1.2) on solving the following initial value problem, using the Runge-Kutta method of order 4:

y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44
Solution: We first rewrite the initial value problem in the form of a pair of first order equations,

y′ = z, z′ = −z/x − y

With y(1) = 0.77 and z(1) = −0.44.
We now employ the Runge-Kutta method of order 4 with h = 0.1,
        y(1.1) = y(1) + (1/6)(k1 + 2k2 + 2k3 + k4)
        y′(1.1) = z(1.1) = z(1) + (1/6)(l1 + 2l2 + 2l3 + l4)
        k1 = 0.1 × (−0.44) = −0.044
        l1 = 0.1 × (0.44/1 − 0.77) = −0.033
        k2 = 0.1 × (−0.44 − 0.033/2) = −0.04565
        l2 = 0.1 × (0.4565/1.05 − 0.748) = −0.031324
        k3 = 0.1 × (−0.44 − 0.031324/2) = −0.045566
        l3 = 0.1 × (0.455662/1.05 − 0.747175) = −0.031321
        k4 = 0.1 × (−0.44 − 0.031321) = −0.047132
        l4 = 0.1 × (0.471321/1.1 − 0.724434) = −0.029596
∴       y(1.1) = 0.77 + (1/6)[−0.044 + 2(−0.04565) + 2(−0.045566) + (−0.047132)]
               = 0.77 − 0.045594 = 0.724406
        y′(1.1) = −0.44 + (1/6)[−0.033 + 2(−0.031324) + 2(−0.031321) + (−0.029596)]
                = −0.44 − 0.031314 = −0.471314
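Arithmetic of this kind is easy to verify mechanically. A Python sketch (the helper name is ours), applying the classical RK4 formulas to the pair y′ = z, z′ = −z/x − y from Example 12.17:

```python
def rk4_system(f, g, x, y, z, h, steps):
    """Classical 4th order Runge-Kutta for the pair y' = f(x,y,z), z' = g(x,y,z)."""
    for _ in range(steps):
        k1, l1 = h * f(x, y, z), h * g(x, y, z)
        k2 = h * f(x + h/2, y + k1/2, z + l1/2)
        l2 = h * g(x + h/2, y + k1/2, z + l1/2)
        k3 = h * f(x + h/2, y + k2/2, z + l2/2)
        l3 = h * g(x + h/2, y + k2/2, z + l2/2)
        k4 = h * f(x + h, y + k3, z + l3)
        l4 = h * g(x + h, y + k3, z + l3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        z += (l1 + 2*l2 + 2*l3 + l4) / 6
        x += h
    return y, z

# Example 12.17: y'' + y'/x + y = 0, y(1) = 0.77, y'(1) = -0.44
f = lambda x, y, z: z
g = lambda x, y, z: -z / x - y
y11, z11 = rk4_system(f, g, 1.0, 0.77, -0.44, 0.1, 1)   # y(1.1), y'(1.1)
```

A second call with steps=2 would carry the solution on to x = 1.2 in the same way.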
Example 12.18: Compute the solution of the following initial value problem for
x = 0.2, using the Taylor series method of order 4: n.l.
We solve for y and z by the Taylor series method of order 4. For this we first
compute y″(0), y‴(0), y⁽⁴⁾(0), ...
        y(0 + x) = y(0) + x y′(0) + (x²/2!) y″(0) + (x³/3!) y‴(0) + (x⁴/4!) y⁽⁴⁾(0)
or,     y(x) = 1 + x²/2! + 3x⁴/4!
∴       y(0.2) = 1 + (0.2)²/2! + (0.2)⁴/8 = 1.0202
Similarly,
Example 12.19: Compute the solution of the following initial value problem for
x = 0.2 by the fourth order Runge-Kutta method:
        d²y/dx² = xy,  y(0) = 1,  y′(0) = 1
Solution: Given y″ = xy, we put y′ = z and obtain the simultaneous first order
problem,
        y′ = z,  z′ = xy,  with y(0) = 1, z(0) = 1.
We use the Runge-Kutta 4th order formulae, with h = 0.2, to compute y(0.2) and
y′(0.2).
This iteration formula is known as Picard's iteration for finding the solution of a
first order differential equation, when an initial condition is given. The iterations
are continued until two successive approximate solutions yk and yk+1 give
approximately the same result for the desired values of x up to the desired
accuracy.
3. The method should not be used for a larger range of x about x0, since the
propagated error grows as integration proceeds.
4. Runge-Kutta methods are very useful when the method of Taylor series is
not easy to apply because of the complexity of finding higher order
derivatives.
5. A predictor formula is an open-type explicit formula derived by using, in the
integral, an interpolation formula which interpolates at the points xn, xn – 1,
..., xn – m.
12.11 SUMMARY

• There are many methods available for finding a numerical solution for
  differential equations.
• Picard's iteration is a method of finding solutions of a first order differential
  equation when an initial condition is given.
• Euler's method is a crude but simple method for solving a first order initial
  value problem.
• Euler's method is a particular case of Taylor's series method.
• Runge-Kutta methods are useful when the method of Taylor series is not
  easy to apply because of the complexity of finding higher order derivatives.
• For finding the solution at each step, the Taylor series method and Runge-
  Kutta methods require evaluation of several derivatives.
• The multistep methods require only one derivative evaluation per step; but
  unlike the self-starting Taylor series or Runge-Kutta methods, the multistep
  methods make use of the solution at more than one previous step point.
• These methods use a pair of multistep numerical integration formulas. The
  first is the predictor formula, which is an open-type explicit formula derived
  by using, in the integral, an interpolation formula which interpolates at the
  points xn, xn−1, ..., xn−m. The second is the corrector formula, which is
  obtained by using an interpolation formula that interpolates at the points
  xn+1, xn, ..., xn−p in the integral.
• A boundary value problem arises for an ordinary differential equation of
  order 2 or more when values of the dependent variable are given at more
  than one point, usually at the two ends of an interval in which the solution is
  required.
• The methods used to reduce the boundary value problem into initial value
  problems are reduction to a pair of initial value problems and the finite
  difference method.
Short-Answer Questions
1. What are ordinary differential equations?
2. Name the methods for computing the numerical solution of differential
equations.
3. What is the significance of Runge-Kutta methods of different orders?
4. When is multistep method used?
5. Name the predictor-corrector methods.
6. How will you find the numerical solution of boundary value problems?
Long-Answer Questions
1. Use Picard’s method to compute values of y(0.1), y(0.2) and y(0.3) correct
to four decimal places, for the problem, yc = x + y, y(0) = 1.
2. Compute values of y at x = 0.02, by Euler’s method taking h = 0.01, given
y is the solution of the following initial value problem: = x3 + y, y(0) = 1.
3. Evaluate y(0.02) by modified Euler’s method, given yc = x2 + y, y(0) = 1,
correct to four decimal places.
5. Using the Runge-Kutta method of order 4, compute y(0.1) for each of the
   following problems:
   (a)
(b)
6. Compute the solution of the following initial value problem by the Runge-Kutta
   method of order 4 taking h = 0.2 up to x = 1; y′ = x − y, y(0) = 1.5.
UNIT 13 ORDINARY DIFFERENTIAL EQUATIONS
Structure
13.0 Introduction
13.1 Objectives
13.2 Runge-Kutta Methods
13.3 Euler’s Method
13.4 Taylor Series Method
13.5 Multistep Methods
13.6 Euler’s Method for a Pair of Differential Equations
13.7 Runge-Kutta Methods for a Pair of Equations
13.8 Runge-Kutta Methods for a Second Order Differential Equation
13.9 Numerical Solutions of Boundary Value Problems
13.10 Answers to Check Your Progress Questions
13.11 Summary
13.12 Key Words
13.13 Self Assessment Questions and Exercises
13.14 Further Readings
13.0 INTRODUCTION

13.1 OBJECTIVES

13.2 RUNGE-KUTTA METHODS
        yn+1 = y(xn + h) = yn + h y′(xn) + (h²/2) y″(xn) + (h³/6) y‴(xn) + O(h⁴)
             = yn + h f(xn, yn) + (h²/2)[fx + f fy]n
               + (h³/6)[fxx + 2f fxy + f²fyy + fx fy + f fy²]n + O(h⁴)        (13.3)
The subscript n indicates that the functions within brackets are to be evaluated
at (xn, yn).
Again, expanding k2 by the Taylor series in two variables, we have
        k2 = h[fn + αh(fx)n + βk1(fy)n + (α²h²/2)(fxx)n + αβhk1(fxy)n
               + (β²k1²/2)(fyy)n + O(h³)]                                     (13.4)
Thus, on substituting the expansion of k2, we get from Equation (13.2)
On comparing with the expansion of yn+1 and equating coefficients of h and h²,
we get the following relations,
        a + b = 1,  bα = bβ = 1/2
There are three equations for the determination of four unknown parameters.
Thus, there are many solutions. However, usually a symmetric solution is taken by
setting
or,     yn+1 = yn + (h/6)[F(xn) + 4F(xn + h/2) + F(xn + h)]
Runge-Kutta methods are widely used, particularly for finding starting values at
the steps x1, x2, x3, ..., since they do not require evaluation of higher order
derivatives. The methods are also easy to implement in a computer program.
Example 13.1. Compute values of y(0.1) and y(0.2) by the 4th order Runge-Kutta
method, correct to five significant figures, for the initial value problem
        dy/dx = x + y,  y(0) = 1
Solution. We have dy/dx = x + y, y(0) = 1.
∴       f(x, y) = x + y, h = 0.1, x0 = 0, y0 = 1
By the Runge-Kutta method,
        y(0.1) = y(0) + (1/6)(k1 + 2k2 + 2k3 + k4)
where,  k1 = h f(x0, y0) = 0.1 × (0 + 1) = 0.1
        k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × (0.05 + 1.05) = 0.11
        k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × (0.05 + 1.055) = 0.1105
        k4 = h f(x0 + h, y0 + k3) = 0.1 × (0.1 + 1.1105) = 0.12105
∴       y(0.1) = 1 + (1/6)[0.1 + 2 × (0.11 + 0.1105) + 0.12105] = 1.110342
Thus, x1 = 0.1, y1 = 1.110342
        y(0.2) = y(0.1) + (1/6)(k1 + 2k2 + 2k3 + k4)
        k1 = h f(x1, y1) = 0.1 × (0.1 + 1.11034) = 0.121034
        k2 = h f(x1 + h/2, y1 + k1/2) = 0.1 × (0.15 + 1.17086) = 0.132086
        k3 = h f(x1 + h/2, y1 + k2/2) = 0.1 × (0.15 + 1.17638) = 0.132638
        k4 = h f(x1 + h, y1 + k3) = 0.1 × (0.2 + 1.24298) = 0.144298
        y2 = y(0.2) = 1.11034 + (1/6)[0.121034 + 2 × (0.132086 + 0.132638) + 0.144298]
           = 1.2428
Example 13.2. Use the Runge-Kutta method of order 4 to evaluate y(1.1) and
y(1.2), taking step length h = 0.1, for the initial value problem:
        dy/dx = x² + y²,  y(1) = 0
Solution. For the initial value problem,
dy/dx = f(x, y), y(x0) = y0; the Runge-Kutta method of order 4 is given as,
        yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
where,  k1 = h f(xn, yn)
        k2 = h f(xn + h/2, yn + k1/2)
        k3 = h f(xn + h/2, yn + k2/2)
        k4 = h f(xn + h, yn + k3);  for n = 0, 1, 2, ...
For the given problem, f(x, y) = x² + y², x0 = 1, y0 = 0, h = 0.1.
Thus,
        k1 = h f(x0, y0) = 0.1 × (1² + 0²) = 0.1
        k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × [(1.05)² + (0.05)²] = 0.11050
        k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × [(1.05)² + (0.05525)²] = 0.11056
        k4 = h f(x0 + h, y0 + k3) = 0.1 × [(1.1)² + (0.11056)²] = 0.12222
∴       y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
           = (1/6)(0.1 + 0.22100 + 0.22111 + 0.12222)
           = 0.11072
For y(1.2):
        k1 = 0.1 [(1.1)² + (0.11072)²] = 0.122260
        k2 = 0.1 [(1.15)² + (0.17183)²] = 0.135203
        k3 = 0.1 [(1.15)² + (0.17832)²] = 0.135430
        k4 = 0.1 [(1.2)² + (0.24615)²] = 0.150059
∴       y2 = y(1.2) = 0.11072 + (1/6)(0.122260 + 0.270406 + 0.270860 + 0.150059)
           = 0.24631
Algorithm. Solution of a first order differential equation by the Runge-Kutta
method of order 2: y′ = f(x, y) with y(x0) = y0.
Step 1. Define f (x, y)
Step 2. Read x0, y0, h, xf [h is step size, xf is final x]
Step 3. Repeat steps 4 to 11 until x1 > xf
Step 4. Compute k1 = f (x0, y0)
Step 5. Compute y1 = y0+ hk1
Step 6. Compute x1 = x0+ h
Step 7. Compute k2 = f (x1, y1)
Step 8. Compute y1 = y0 + h × (k1 + k2)/2
Step 9. Write x1, y1
Step 10. Set x0 = x1
Step 11. Set y0 = y1
Step 12. Stop
Algorithm. Solution of y′ = f(x, y), y(x0) = y0 by the Runge-Kutta method of
order 4.
Step 1. Define f (x, y)
Step 2. Read x0, y0, h, xf
Step 3. Repeat step 4 to step 16 until x1 > xf
Step 4. Compute k1 = h f (x0, y0)
Step 5. Compute x = x0 + h/2
Step 6. Compute y = y0 + k1/2
Step 7. Compute k2 = h f(x, y)
Step 8. Compute y = y0 + k2/2
Step 9. Compute k3 = h f(x, y)
Step 10. Compute x1 = x0+ h
Step 11. Compute y = y0+ k3
Step 12. Compute k4 = h f (x1, y)
Step 13. Compute y1 = y0+ (k1+ 2 (k2+ k3) + k4)/6
Step 14. Write x1, y1
Step 15. Set x0 = x1
Step 16. Set y0 = y1
Step 17. Stop
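The algorithm above translates almost line for line into Python. This sketch (the function name is ours) repeats steps 4–16 until x reaches xf, and the usage lines check it against Example 13.1, where y′ = x + y with y(0) = 1 has exact solution y = 2eˣ − x − 1:

```python
import math

def rk4(f, x0, y0, h, xf):
    """Steps 3-16 of the order-4 Runge-Kutta algorithm."""
    x, y = x0, y0
    while x < xf - 1e-12:              # repeat until the final abscissa is reached
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2*(k2 + k3) + k4) / 6
        x = x + h
    return y

y01 = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 0.1)   # Example 13.1, one step
exact = 2 * math.exp(0.1) - 0.1 - 1                 # y = 2e^x - x - 1
```

With h = 0.1 a single RK4 step already agrees with the exact value to about six decimal places, illustrating the O(h⁵) local error.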
13.3 EULER'S METHOD

This is a crude but simple method of solving a first order initial value problem:
        dy/dx = f(x, y),  y(x0) = y0
This is derived by integrating f(x0, y0) instead of f(x, y) over a small interval,
        ∫[x0, x0+h] dy = ∫[x0, x0+h] f(x0, y0) dx.
Self-Instructional
232 Material
∴       y(x0 + h) = y(x0) + h f(x0, y0)
The error of this approximation at a step is
        ek = y(xk + h) − {yk + h f(xk, yk)}
           = yk + h y′(xk) + (h²/2) y″(xk + θh) − yk − h y′(xk),  0 < θ < 1
∴       ek = (h²/2) y″(xk + θh),  0 < θ < 1
Note. The Euler’s method finds a sequence of values {yk} of y for the sequence of
values {xk}of x, step by step. But to get the solution up to a desired accuracy, we
have to take the step size h to be very small. Again, the method should not be used
for a larger range of x about x0, since the propagated error grows as integration
proceeds.
Example 13.3. Solve the following differential equation by Euler's method for x =
0.1, 0.2, 0.3; taking h = 0.1; dy/dx = x² − y, y(0) = 1. Compare the results with the
exact solution.
Solution. Given dy/dx = x² − y, with y(0) = 1.
In Euler's method one computes, in successive steps, the values y1, y2, y3, ... at x1
= x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, using the formula,
        yn+1 = yn + h f(xn, yn), for n = 0, 1, 2, ...
∴       yn+1 = yn + h(xn² − yn)

        n    xn     yn        f(xn, yn) = xn² − yn    yn+1 = yn + h f(xn, yn)
        0    0.0    1.0000    −1.0000                 0.9000
        1    0.1    0.9000    −0.8900                 0.8110
        2    0.2    0.8110    −0.7710                 0.7339
        3    0.3    0.7339    −0.6439                 0.6695
The analytical solution of the differential equation, written as dy/dx + y = x², is
        y eˣ = ∫ x² eˣ dx + c
or,     y eˣ = x² eˣ − 2x eˣ + 2eˣ + c.
∴       y = x² − 2x + 2 − e⁻ˣ.
The following table compares the exact solution with the approximate solution
by Euler’s method.
        n    xn     approx. sol.    exact sol.    % error
        1    0.1    0.9000          0.9052        0.57
        2    0.2    0.8110          0.8213        1.25
        3    0.3    0.7339          0.7492        2.04
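The comparison table can be regenerated in a few lines. A Python sketch (the helper name is ours):

```python
import math

def euler(f, x0, y0, h, steps):
    """Euler's method: y_{n+1} = y_n + h f(x_n, y_n)."""
    y = y0
    for n in range(steps):
        y += h * f(x0 + n * h, y)
    return y

f = lambda x, y: x * x - y                      # Example 13.3
approx = euler(f, 0.0, 1.0, 0.1, 3)             # Euler estimate of y(0.3)
exact = 0.3**2 - 2*0.3 + 2 - math.exp(-0.3)     # y = x^2 - 2x + 2 - e^{-x}
```

Printing `approx` and `exact` for each step reproduces the 0.57%, 1.25%, 2.04% error column, showing how the error accumulates as the integration proceeds.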
Example 13.4. Compute the solution of the following initial value problem by Euler's
method, for x = 0.1 correct to four decimal places, taking h = 0.02,
        dy/dx = (y − x)/(y + x),  y(0) = 1.
Taking h = 0.02, we have x1 = 0.02, x2 = 0.04, x3 = 0.06, x4 = 0.08, x5 = 0.1.
Using Euler's method, we have, since y(0) = 1,
        y(0.02) = y1 = y0 + h f(x0, y0) = 1 + 0.02 × (1 − 0)/(1 + 0) = 1.0200
        y(0.04) = y2 = y1 + h f(x1, y1)
                = 1.0200 + 0.02 × (1.0200 − 0.02)/(1.0200 + 0.02) = 1.0392
        y(0.06) = y3 = y2 + h f(x2, y2)
                = 1.0392 + 0.02 × (1.0392 − 0.04)/(1.0392 + 0.04) = 1.0577
        y(0.08) = y4 = y3 + h f(x3, y3)
                = 1.0577 + 0.02 × (1.0577 − 0.06)/(1.0577 + 0.06) = 1.0756
        y(0.1)  = y5 = y4 + h f(x4, y4)
                = 1.0756 + 0.02 × (1.0756 − 0.08)/(1.0756 + 0.08) = 1.0928
Hence, y(0.1) = 1.0928
        y(0)n+1 = yn + h f(xn, yn),                                   (13.11)
        y(1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(0)n+1)].           (13.12)
The iterations are continued until two successive approximations
y(k)n+1 and y(k+1)n+1 coincide to the desired accuracy. As a rule, the iterations
converge rapidly for a sufficiently small h. If, however, after three or four iterations
they still do not give the necessary accuracy in the solution, the spacing h is
decreased and the iterations are performed again.
Example 13.5. Use the modified Euler's method to compute y(0.02) for the initial
value problem, dy/dx = x² + y, with y(0) = 1, taking h = 0.01. Compare the result
with the exact solution.
Solution. The modified Euler's method consists of obtaining the solution at successive
points, x1 = x0 + h, x2 = x0 + 2h, ..., xn = x0 + nh, by the two-stage computation,
        y(0)n+1 = yn + h f(xn, yn)
        y(1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(0)n+1)].
For the given problem, f(x, y) = x² + y and h = 0.01.
        y(0)1 = 1 + 0.01 × (0 + 1) = 1.01
        y(1)1 = 1 + (0.01/2)[(0 + 1) + ((0.01)² + 1.01)] = 1.01005
Next,   y(0)2 = y1 + h[x1² + y1]
              = 1.01005 + 0.01[(0.01)² + 1.01005]
              = 1.01005 + 0.010102 = 1.02015
        y(1)2 = 1.01005 + (0.01/2)[(0.01)² + 1.01005 + (0.02)² + 1.02015]
              = 1.01005 + (0.01/2) × (2.03070)
              = 1.01005 + 0.010154
              = 1.02020
∴       y2 = y(0.02) = 1.0202
The exact solution is y = 3eˣ − x² − 2x − 2, which gives y(0.02) = 1.020204, in
agreement to four decimal places.
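The two-stage computation is mechanical enough to automate. A Python sketch (names are ours), applying the predictor and one corrector pass per step as in Equations (13.11)–(13.12):

```python
def modified_euler(f, x0, y0, h, steps, iters=1):
    """Modified Euler: predict with (13.11), apply corrector (13.12) `iters` times."""
    x, y = x0, y0
    for _ in range(steps):
        y_pred = y + h * f(x, y)                            # Equation (13.11)
        for _ in range(iters):                              # Equation (13.12)
            y_pred = y + (h/2) * (f(x, y) + f(x + h, y_pred))
        x, y = x + h, y_pred
    return y

y002 = modified_euler(lambda x, y: x*x + y, 0.0, 1.0, 0.01, 2)   # Example 13.5
```

Raising `iters` repeats the corrector until consecutive iterates agree, the iterative use described in the text.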
13.4 TAYLOR SERIES METHOD

        y‴(x0) = fxx(x0, y0) + 2fxy(x0, y0) y′(x0) + fyy(x0, y0){y′(x0)}²
                 + fy(x0, y0) y″(x0)
Thus, the value of y1 = y(x0 + h) can be computed by taking the Taylor series
expansion shown above. Usually, because of the difficulty of obtaining higher order
derivatives, a fourth order method is commonly used. The solution at x2 = x1 + h can
be found by evaluating the derivatives at (x1, y1) and using the expansion; otherwise,
writing x2 = x0 + 2h, we can use the same expansion. This process can be continued
for determining yn+1 from the known values xn, yn.
Note. If we take k = 1, we get Euler's method, y1 = y0 + h f(x0, y0).
Thus, Euler's method is a particular case of the Taylor series method.
Example 13.6. Form the Taylor series solution of the initial value problem,
dy/dx = xy + 1, y(0) = 1, up to five terms and hence compute y(0.1) and y(0.2),
correct to four decimal places.
Solution. We have y′ = xy + 1, y(0) = 1.
Differentiating successively we get,
        y″(x) = xy′ + y,         ∴ y″(0) = 1
        y‴(x) = xy″ + 2y′,       ∴ y‴(0) = 2
        y⁽⁴⁾(x) = xy‴ + 3y″,     ∴ y⁽⁴⁾(0) = 3
        y⁽⁵⁾(x) = xy⁽⁴⁾ + 4y‴,   ∴ y⁽⁵⁾(0) = 8
        y(x) ≈ y(0) + xy′(0) + (x²/2!)y″(0) + (x³/3!)y‴(0) + (x⁴/4!)y⁽⁴⁾(0) + (x⁵/5!)y⁽⁵⁾(0)
             = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15
∴       y(0.1) ≈ 1 + 0.1 + 0.01/2 + 0.001/3 + 0.0001/8 + 0.00001/15 = 1.1053
Similarly, y(0.2) ≈ 1 + 0.2 + 0.04/2 + 0.008/3 + 0.0016/8 + 0.00032/15 = 1.2229
Example 13.7. Find the first two non-vanishing terms in the Taylor series solution of
the initial value problem y′ = x² + y², y(0) = 0. Hence compute y(0.1), y(0.2),
y(0.3) and comment on the accuracy of the solution.
Solution. We have y′ = x² + y², y(0) = 0.
Differentiating successively we have,
        y″ = 2x + 2yy′,                                  ∴ y″(0) = 0
        y‴ = 2 + 2[yy″ + (y′)²],                         ∴ y‴(0) = 2
        y⁽⁴⁾ = 2(yy‴ + 3y′y″),                           ∴ y⁽⁴⁾(0) = 0
        y⁽⁵⁾ = 2[yy⁽⁴⁾ + 4y′y‴ + 3(y″)²],                ∴ y⁽⁵⁾(0) = 0
        y⁽⁶⁾ = 2[yy⁽⁵⁾ + 5y′y⁽⁴⁾ + 10y″y‴],              ∴ y⁽⁶⁾(0) = 0
        y⁽⁷⁾ = 2[yy⁽⁶⁾ + 6y′y⁽⁵⁾ + 15y″y⁽⁴⁾ + 10(y‴)²],  ∴ y⁽⁷⁾(0) = 80
The Taylor series up to two terms is
        y(x) = (x³/3!) × 2 + (x⁷/7!) × 80 = x³/3 + x⁷/63
Example 13.8. Given xy′ = x − y², y(2) = 1, evaluate y(2.1), y(2.2) and y(2.3)
correct to four decimal places using the Taylor series method.
Solution. Given xy′ = x − y², i.e., y′ = 1 − y²/x, and y = 1 for x = 2. To compute
y(2.1) by the Taylor series method, we first find the derivatives of y at x = 2.
        y′ = 1 − y²/x                    ∴ y′(2) = 1 − 1/2 = 0.5
        xy″ + y′ = 1 − 2yy′
        2y″(2) + 1/2 = 1 − 2 × 1 × 1/2 = 0    ∴ y″(2) = −1/4 = −0.25
        xy‴ + 2y″ = −2(y′)² − 2yy″
        2y‴(2) − 1/2 = −2(1/4) + 2(1/4) = 0   ∴ y‴(2) = 1/4 = 0.25
        xy⁽⁴⁾ + 3y‴ = −6y′y″ − 2yy‴
        2y⁽⁴⁾(2) + 3/4 = −6 × (1/2) × (−1/4) − 2 × (1/4) = 1/4   ∴ y⁽⁴⁾(2) = −0.25
        y(2.1) = y(2) + 0.1 y′(2) + ((0.1)²/2) y″(2) + ((0.1)³/3!) y‴(2) + ((0.1)⁴/4!) y⁽⁴⁾(2)
               = 1 + 0.1 × 0.5 + (0.01/2) × (−0.25) + (0.001/6) × 0.25 + (0.0001/24) × (−0.25)
               = 1 + 0.05 − 0.00125 + 0.00004 − 0.000001
               = 1.0488
        y(2.2) = 1 + 0.2 × 0.5 + (0.04/2) × (−0.25) + (0.008/6) × 0.25 + (0.0016/24) × (−0.25)
               = 1 + 0.1 − 0.005 + 0.00033 − 0.00002
               = 1.0953
        y(2.3) = 1 + 0.3 × 0.5 + (0.09/2) × (−0.25) + (0.027/6) × 0.25 + (0.0081/24) × (−0.25)
               = 1 + 0.15 − 0.01125 + 0.001125 − 0.000084
               = 1.1398
13.5 MULTISTEP METHODS

We have seen that for finding the solution at each step, the Taylor series method and
Runge-Kutta methods require evaluation of several derivatives. We shall now develop
multistep methods, which require only one derivative evaluation per step; but
unlike the self-starting Taylor series or Runge-Kutta methods, the multistep methods
make use of the solution at more than one previous step point.
Let the values of y and y′ already have been evaluated by self-starting methods
at a number of equally spaced points x0, x1, ..., xn. We now integrate the differential
equation,
        dy/dx = f(x, y), from xn to xn+1
i.e.,   ∫[xn, xn+1] dy = ∫[xn, xn+1] f(x, y) dx
        yn+1 = yn + ∫[xn, xn+1] f(x, y(x)) dx
To evaluate the integral on the right hand side, we consider f(x, y) as a function
of x and replace it by an interpolating polynomial, i.e., a Newton's backward difference
interpolation using the (m+1) points xn, xn−1, xn−2, ..., xn−m,

        pm(x) = Σ (k = 0 to m) (−1)ᵏ C(−s, k) Δᵏfn−k,  where s = (x − xn)/h

        (−1)ᵏ C(−s, k) = s(s + 1)(s + 2)...(s + k − 1)/k!

Carrying out the integration term by term for m = 3, we get

        yn+1 = yn + h[fn + (1/2)Δfn−1 + (5/12)Δ²fn−2 + (3/8)Δ³fn−3]
Substituting the expressions of the differences in terms of function values, given
by,
        Δfn−1 = fn − fn−1,  Δ²fn−2 = fn − 2fn−1 + fn−2
        Δ³fn−3 = fn − 3fn−1 + 3fn−2 − fn−3
we get on rearranging,
        yn+1 = yn + (h/24)[55fn − 59fn−1 + 37fn−2 − 9fn−3]            (13.15)
This is known as the Adams-Bashforth formula of order 4. The local error of this
formula is,
        E = h⁵ f⁽⁴⁾(ξ) ∫[0, 1] C(s + 3, 4) ds = (251/720) h⁵ f⁽⁴⁾(ξ)  (13.16)
The fourth order Adams-Bashforth formula requires four starting values, i.e.,
the derivatives f3, f2, f1 and f0. This is a multistep method.
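Since formula (13.15) needs four starting values, a practical implementation pairs it with a self-starting method such as RK4 for the first three steps. A Python sketch (names are ours), tested on y′ = x + y, y(0) = 1, whose exact solution is y = 2eˣ − x − 1:

```python
import math

def ab4(f, x0, y0, h, steps):
    """Adams-Bashforth order 4 (Equation (13.15)), started with three RK4 steps."""
    def rk4_step(x, y):
        k1 = h * f(x, y)
        k2 = h * f(x + h/2, y + k1/2)
        k3 = h * f(x + h/2, y + k2/2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2*k2 + 2*k3 + k4) / 6

    xs, ys = [x0], [y0]
    for _ in range(3):                       # self-starting phase
        ys.append(rk4_step(xs[-1], ys[-1]))
        xs.append(xs[-1] + h)
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for _ in range(steps - 3):               # multistep phase: one new f per step
        y_next = ys[-1] + h/24 * (55*fs[-1] - 59*fs[-2] + 37*fs[-3] - 9*fs[-4])
        xs.append(xs[-1] + h)
        ys.append(y_next)
        fs.append(f(xs[-1], y_next))
    return ys[-1]

y04 = ab4(lambda x, y: x + y, 0.0, 1.0, 0.1, 4)   # y(0.4)
exact = 2 * math.exp(0.4) - 0.4 - 1
```

Note how the multistep phase evaluates f only once per step, the economy the text emphasizes.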
Starting with i = 0 and continuing step by step for i = 1, 2, 3, ... Evidently, we can
also extend Euler's method to an initial value problem associated with a second
order differential equation by rewriting it as a pair of first order equations.
Consider the initial value problem,
        d²y/dx² = g(x, y, dy/dx), with y(x0) = y0, y′(x0) = y0′
We write dy/dx = z, so that dz/dx = g(x, y, z) with y(x0) = y0 and z(x0) = y0′.
dx dx
Example 13.9. Compute y(1.1) and y(1.2) by solving the initial value problem,
        y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44
Solution. We can rewrite the problem as y′ = z, z′ = −z/x − y; with y(1) = 0.77
and z(1) = −0.44.
Taking h = 0.1, we use Euler's method for the problem in the form,
        yi+1 = yi + h zi
        zi+1 = zi + h[−zi/xi − yi],  i = 0, 1, 2, ...
Thus y1 = y(1.1) and z1 = z(1.1) are given by,
        y1 = y0 + h z0 = 0.77 + 0.1 × (−0.44) = 0.726
        z1 = z0 + h[−z0/x0 − y0] = −0.44 + 0.1 × (0.44 − 0.77)
           = −0.44 − 0.033 = −0.473
Example 13.10. Using Euler's method, compute y(0.1) and y(0.2) for the initial
value problem,
        y″ + y = 0,  y(0) = 0,  y′(0) = 1
Solution. We rewrite the initial value problem as y′ = z, z′ = −y, with y(0) = 0,
z(0) = 1.
Taking h = 0.1, we have by Euler's method,
        y1 = y(0.1) = y0 + h z0 = 0 + 0.1 × 1 = 0.1
        z1 = z(0.1) = z0 + h(−y0) = 1 − 0.1 × 0 = 1.0
        y2 = y(0.2) = y1 + h z1 = 0.1 + 0.1 × 1.0 = 0.2
        z2 = z(0.2) = z1 − h y1 = 1.0 − 0.1 × 0.1 = 0.99
And in general, y⁽²ⁿ⁾(0) = 0, y⁽²ⁿ⁺¹⁾(0) = −2n y⁽²ⁿ⁻¹⁾(0) = (−1)ⁿ 2ⁿ n!
Thus,   y(x) = x − x³/3 + x⁵/15 − ... + (−1)ⁿ 2ⁿ n! x²ⁿ⁺¹/(2n + 1)! + ...
This is an alternating series whose terms decrease. Using this, the solution for y up
to 0.2 is given below:
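The Euler iteration for a pair of equations is a direct loop. A Python sketch (names are ours), reproducing the two steps of Example 13.10:

```python
def euler_pair(g, x0, y0, z0, h, steps):
    """Euler's method for the pair y' = z, z' = g(x, y, z)."""
    x, y, z = x0, y0, z0
    for _ in range(steps):
        # Tuple assignment evaluates both right-hand sides with the OLD y, z,
        # matching the simultaneous update of the pair
        y, z = y + h * z, z + h * g(x, y, z)
        x += h
    return y, z

# Example 13.10: y'' + y = 0, y(0) = 0, y'(0) = 1, i.e. g = -y
y2, z2 = euler_pair(lambda x, y, z: -y, 0.0, 0.0, 1.0, 0.1, 2)
```

The simultaneous update matters: advancing y first and then using the new y in z′ = −y would give a slightly different (semi-implicit) scheme.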
        yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
        zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4),  i = 0, 1, 2, ...       (13.21)
where,  k1 = h f(xi, yi, zi),                      l1 = h g(xi, yi, zi)
        k2 = h f(xi + h/2, yi + k1/2, zi + l1/2),  l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
        k3 = h f(xi + h/2, yi + k2/2, zi + l2/2),  l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
        k4 = h f(xi + h, yi + k3, zi + l3),        l4 = h g(xi + h, yi + k3, zi + l3)
        yi = y(xi), zi = z(xi), i = 0, 1, 2, ...
The solutions for y(x) and z(x) are determined at the successive step points
x1 = x0 + h, x2 = x1 + h = x0 + 2h, ..., xN = x0 + Nh.
        dy/dx = z,  dz/dx = g(x, y, z)
with y(x0) = y0 and z(x0) = y′(x0) = α0,
which is an initial value problem associated with a system of two first order differential
equations. Thus we can write the Runge-Kutta method for a second order differential
equation as,
        yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4),
        zi+1 = y′i+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4),  i = 0, 1, 2, ...   (13.22)
where,  k1 = h zi,             l1 = h g(xi, yi, zi)
        k2 = h(zi + l1/2),     l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
        k3 = h(zi + l2/2),     l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
        k4 = h(zi + l3),       l4 = h g(xi + h, yi + k3, zi + l3)
13.9 NUMERICAL SOLUTIONS OF BOUNDARY VALUE PROBLEMS
We consider the solution of an ordinary differential equation of order 2 or more, when
values of the dependent variable are given at more than one point, usually at the two
ends of an interval in which the solution is required. For example, the simplest
boundary value problem associated with a second order differential equation is,
        y″ + p(x)y′ + q(x)y = r(x)                                    (13.23)
with boundary conditions, y(a) = A, y(b) = B.                          (13.24)
The following two methods reduce the boundary value problem into initial value
problems which are then solved by any of the methods for solving such problems.
Reduction to a Pair of Initial Value Problems
This method is applicable to linear differential equations only. In this method, the
solution is assumed to be a linear combination of two solutions in the form,
        y(x) = u(x) + λv(x)                                            (13.25)
where λ is a suitable constant determined by using the boundary condition, and
u(x) and v(x) are the solutions of the following two initial value problems.
        (i)  u″ + p(x)u′ + q(x)u = r(x)
             u(a) = A, u′(a) = α1, (say).                              (13.26)
        (ii) v″ + p(x)v′ + q(x)v = 0
             v(a) = 0 and v′(a) = α2, (say)                            (13.27)
where α1 and α2 are arbitrarily assumed constants. After solving the two initial
value problems, the constant λ is determined by satisfying the boundary condition
at x = b. Thus,
        u(b) + λv(b) = B
or,     λ = [B − u(b)]/v(b)                                            (13.28)
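As a sketch of the reduction in code (the helper names and the particular choices α1 = 0, α2 = 1 are ours), the two initial value problems can be integrated by any marching method, here RK4, and λ recovered from the boundary condition at x = b:

```python
def integrate(p, q, r, a, b, y0, dy0, n):
    """RK4 integration of y'' + p y' + q y = r from a to b; returns y(b)."""
    h = (b - a) / n
    g = lambda x, y, z: r(x) - p(x) * z - q(x) * y   # z' = g(x, y, z)
    x, y, z = a, y0, dy0
    for _ in range(n):
        k1, l1 = h * z, h * g(x, y, z)
        k2, l2 = h * (z + l1/2), h * g(x + h/2, y + k1/2, z + l1/2)
        k3, l3 = h * (z + l2/2), h * g(x + h/2, y + k2/2, z + l2/2)
        k4, l4 = h * (z + l3), h * g(x + h, y + k3, z + l3)
        y += (k1 + 2*k2 + 2*k3 + k4) / 6
        z += (l1 + 2*l2 + 2*l3 + l4) / 6
        x += h
    return y

def linear_shooting_lambda(p, q, r, a, b, A, B, n=10):
    """lambda from Equations (13.25)-(13.28), taking alpha1 = 0, alpha2 = 1."""
    ub = integrate(p, q, r, a, b, A, 0.0, n)                 # IVP (13.26)
    vb = integrate(p, q, lambda x: 0.0, a, b, 0.0, 1.0, n)   # IVP (13.27)
    return (B - ub) / vb                                     # Equation (13.28)

# y'' = 6x, y(0) = 0, y(1) = 1 has exact solution y = x^3, so u = x^3
# already hits the right boundary and lambda should come out 0
lam = linear_shooting_lambda(lambda x: 0.0, lambda x: 0.0, lambda x: 6.0 * x,
                             0.0, 1.0, 0.0, 1.0)
```

The combined solution at any x is then u(x) + λv(x); since the problem is linear, no further iteration over the initial slope is needed.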
13.10 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Runge-Kutta methods can be of different orders. They are very useful when
the method of Taylor series is not easy to apply because of the complexity of
finding higher order derivatives. Runge-Kutta methods attempt to get better
accuracy and at the same time obviate the need for computing higher order
derivatives. These methods, however, require the evaluation of the first order
derivatives at several off-step points.
2. This is a crude but simple method of solving a first order initial value prob-
lem:
        dy/dx = f(x, y),  y(x0) = y0
This is derived by integrating f(x0, y0) instead of f(x, y) over a small interval,
        ∫[x0, x0+h] dy = ∫[x0, x0+h] f(x0, y0) dx.
∴       y(x0 + h) = y(x0) + h f(x0, y0)
        y(0)n+1 = yn + h f(xn, yn),
        y(1)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(0)n+1)].
8. Consider the initial value problem associated with a second order differential
equation,
        d²y/dx² = g(x, y, y′)
   with y(x0) = y0 and y′(x0) = α0
9. We consider the solution of an ordinary differential equation of order 2 or more,
   when values of the dependent variable are given at more than one point, usually
   at the two ends of an interval in which the solution is required. For example,
   the simplest boundary value problem associated with a second order differential
   equation is,
        y″ + p(x)y′ + q(x)y = r(x)
   with boundary conditions, y(a) = A, y(b) = B.
13.11 SUMMARY

• Runge-Kutta methods can be of different orders. They are very useful when
  the method of Taylor series is not easy to apply because of the complexity of
  finding higher order derivatives. Runge-Kutta methods attempt to get better
  accuracy and at the same time obviate the need for computing higher order
  derivatives. These methods, however, require the evaluation of the first order
  derivative at several off-step points.
• Runge-Kutta methods are widely used, particularly for finding starting values
  at the steps x1, x2, x3, ..., since they do not require evaluation of higher order
  derivatives. The methods are also easy to implement in a computer program.
• Euler's method finds a sequence of values {yk} of y for the sequence of
  values {xk} of x, step by step. But to get the solution up to a desired accuracy,
  we have to take the step size h to be very small. Again, the method should not
  be used for a larger range of x about x0, since the propagated error grows as
  integration proceeds.
• We have seen that for finding the solution at each step, the Taylor series
  method and Runge-Kutta methods require evaluation of several derivatives.
  The multistep methods require only one derivative evaluation per step; but
  unlike the self-starting Taylor series or Runge-Kutta methods, the multistep
  methods make use of the solution at more than one previous step point.
• Another method which is commonly used for solving boundary value problems
  is the finite difference method.
Short-Answer Questions
1. State the Runge-Kutta methods.
2. Explain the Euler’s method.
3. Elaborate on the modified Euler’s method.
4. Analyse the Taylor series method.
5. Interpret the multistep methods.
6. Explain Euler's method for a pair of differential equations.
7. Illustrate the Runge-Kutta methods for a pair of equations.
8. Define the Runge-Kutta methods for a second order differential equation.
9. Analyse the numerical solutions of boundary value problems.
Long-Answer Questions
1. Explain the Taylor series method with the help of example.
2. Illustrate the multistep methods. Give an appropriate example.
3. Define Euler's method for a pair of differential equations.
4. Discuss briefly the Runge-Kutta methods for a pair of equations.
5. Explain the Runge-Kutta methods for a second order differential equation.
6. Analyse the numerical solutions of boundary value problems.
13.14 FURTHER READINGS
UNIT 14 PREDICTOR-CORRECTOR METHODS
Structure
14.0 Introduction
14.1 Objectives
14.2 Predictor-Corrector Method
14.3 Milne’s Predictor-Corrector Method
14.4 Adam’s Predictor-Corrector Method
14.5 Answers to Check Your Progress Questions
14.6 Summary
14.7 Key Words
14.8 Self Assessment Questions and Exercises
14.9 Further Readings
14.0 INTRODUCTION
14.1 OBJECTIVES
14.2 PREDICTOR-CORRECTOR METHOD
These methods use a pair of multistep numerical integration formulas. The first is
the predictor formula, which is an open-type explicit formula derived by using, in
the integral, an interpolation formula which interpolates at the points xn, xn−1, ..., xn−m.
The second is the corrector formula, which is obtained by using an interpolation
formula that interpolates at the points xn+1, xn, ..., xn−p in the integral.
        y(p)n+1 = yn + h f(xn, yn)                                     (14.1)
        y(c)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(p)n+1)]             (14.2)
In order to determine the solution of the problem up to a desired accuracy, the
corrector formula can be employed in an iterative manner as shown below.
Step 1. Compute y(0)n+1, using Equation (14.1).
Step 2. Iterate the corrector,
        y(k)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(k−1)n+1)], for k = 1, 2, 3, ...
The computation is continued till the condition given below is satisfied,
        |y(k)n+1 − y(k−1)n+1| / |y(k)n+1| < ε                           (14.5)
The truncation errors of the predictor and the corrector formulas are of the form
(h²/2) y″(η1) and −(h³/12) y‴(η2), respectively.
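A sketch of the iteration in Python (names are ours), using Equations (14.1)–(14.2) with the relative stopping test (14.5):

```python
def predictor_corrector(f, x0, y0, h, steps, eps=1e-10):
    """Euler predictor (14.1) + iterated trapezoidal corrector (14.2)/(14.5)."""
    x, y = x0, y0
    for _ in range(steps):
        yp = y + h * f(x, y)                             # predictor, (14.1)
        while True:
            yc = y + (h/2) * (f(x, y) + f(x + h, yp))    # corrector iterate, (14.2)
            if abs(yc - yp) / abs(yc) < eps:             # stopping test, (14.5)
                break
            yp = yc
        x, y = x + h, yc
    return y

y01 = predictor_corrector(lambda x, y: x + y, 0.0, 1.0, 0.1, 1)
```

For y′ = x + y the converged corrector satisfies y1 = 1 + (h/2)[1 + (0.1 + y1)], i.e. y1 = 1.055/0.95 ≈ 1.110526; since the iteration contracts by a factor of about h/2 per pass, it converges in a handful of iterations for small h.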
2 12
Single-step methods (such as Euler’s method) refer to only one previous point
Self-Instructional
and its derivative to determine the current value. Methods such as Runge–Kutta
250 Material
Predictor-Corrector
take some intermediate steps (for example, a half-step) to obtain a higher order Methods
method, but then discard all previous information before taking a second step.
Multistep methods attempt to gain efficiency by keeping and using the information
from previous steps rather than discarding it. Consequently, multistep methods NOTES
refer to several previous points and derivative values.
There are some more useful linear multistep methods, such as the two-step
Adams–Bashforth method and the Adams–Moulton methods. The two-step
Adams–Bashforth method is more accurate than Euler's method. This is always
the case if the step size is small enough.
Adams–Bashforth methods
instead. This equation can be solved exactly; the solution is simply the
integral of p. This suggests taking
The Adams–Moulton methods are solely due to John Couch Adams, like
the Adams–Bashforth methods. The name of Forest Ray Moulton became
associated with these methods because he realized that they could be used in
tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton
1926); Milne (1926) had the same idea. Adams used Newton’s method to solve
the implicit equation.
        y(p)n+1 = yn + h f(xn, yn)
        y(c)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(p)n+1)]
In order to determine the solution of the problem up to a desired accuracy, the
corrector formula can be employed in an iterative manner.
3. The Adams–Bashforth methods are explicit methods. The coefficients are
and while the are chosen such
that the methods have order s (this determines the methods uniquely).
4. The Adams–Moulton methods are similar to the Adams–Bashforth methods
in that they also have and Again,
the b coefficients are chosen to obtain the highest order possible. However,
   the Adams–Moulton methods are implicit methods. By removing the
   restriction that an s-step Adams–Moulton method can reach
   order s + 1, while an s-step Adams–Bashforth method has only order s.
14.6 SUMMARY

• These methods use a pair of multistep numerical integration formulas. The
  first is the predictor formula, which is an open-type explicit formula derived
  by using, in the integral, an interpolation formula which interpolates at the
  points xn, xn−1, ..., xn−m. The second is the corrector formula, which is
  obtained by using an interpolation formula that interpolates at the points
  xn+1, xn, ..., xn−p in the integral.
• The simplest formula of the type is a pair of formulas given by,
        y(p)n+1 = yn + h f(xn, yn)
        y(c)n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y(p)n+1)]
  In order to determine the solution of the problem up to a desired accuracy,
  the corrector formula can be employed in an iterative manner.
• The Adams–Bashforth methods are explicit methods. The coefficients are
  and while the are chosen such
  that the methods have order s (this determines the methods uniquely).
• The Adams–Moulton methods are similar to the Adams–Bashforth methods
  in that they also have and Again,
  the b coefficients are chosen to obtain the highest order possible. However,
  the Adams–Moulton methods are implicit methods. By removing the
  restriction that an s-step Adams–Moulton method can reach
  order s + 1, while an s-step Adams–Bashforth method has only order s.
14.7 KEY WORDS
Short-Answer Questions
1. Explain the predictor-corrector method.
2. State the Milne’s predictor-corrector method.
3. Define the Adam’s predictor-corrector method.
4. Elaborate on the Adams-Bashforth methods.
5. Interpret the Adams-Moulton methods.
Long-Answer Questions
1. Describe the predictor-corrector method. Give appropriate example.
2. Briefly discuss the Milne’s predictor-corrector method.
3. Define the Adams-Bashforth methods.
4. Explain the Adams-Moulton methods.
14.9 FURTHER READINGS

Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An
        Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
        Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
        Prentice Hall of India Pvt. Ltd.