
ALAGAPPA UNIVERSITY

[Accredited with ‘A+’ Grade by NAAC (CGPA: 3.64) in the Third Cycle and Graded as Category–I University by MHRD-UGC]
(A State University Established by the Government of Tamil Nadu)
KARAIKUDI – 630 003

Directorate of Distance Education

B.Sc. [Mathematics]
V - Semester
113 53

NUMERICAL ANALYSIS
Author:
Dr. N Datta, Retired Senior Professor, Department of Mathematics, Indian Institute of Technology, Kharagpur, West Bengal
Units: (1, 2.0 - 2.2, 3, 4.0 - 4.6, 5, 6.0 - 6.2, 7.0 - 7.2, 7.3 - 7.8, 9, 10.3 - 10.8, 11-13, 14.0 - 14.3)
Dr. Kalika Patrai, Associate Professor, Institute of Innovation in Technology & Management (IITM), Janakpuri, New Delhi
Units: (6.3, 7.2.1, 8.0 - 8.6, 10.0 - 10.2)
Vikas Publishing House, Units: (2.3 - 2.8, 4.7 - 4.12, 6.4 - 6.9, 8.7 - 8.13, 14.4 - 14.9)

"The copyright shall be vested with Alagappa University"

All rights reserved. No part of this publication which is material protected by this copyright notice
may be reproduced or transmitted or utilized or stored in any form or by any means now known or
hereinafter invented, electronic, digital or mechanical, including photocopying, scanning, recording
or by any information storage or retrieval system, without prior written permission from the Alagappa
University, Karaikudi, Tamil Nadu.

Information contained in this book has been published by VIKAS® Publishing House Pvt. Ltd. and has
been obtained by its Authors from sources believed to be reliable and is correct to the best of their
knowledge. However, the Alagappa University, the Publisher and its Authors shall in no event be liable for
any errors, omissions or damages arising out of use of this information and specifically disclaim any
implied warranties of merchantability or fitness for any particular use.

Vikas® is the registered trademark of Vikas® Publishing House Pvt. Ltd.


VIKAS® PUBLISHING HOUSE PVT. LTD.
E-28, Sector-8, Noida - 201301 (UP)
Phone: 0120-4078900 • Fax: 0120-4078999
Regd. Office: A-27, 2nd Floor, Mohan Co-operative Industrial Estate, New Delhi – 110 044
• Website: www.vikaspublishing.com • Email: [email protected]

Work Order No.AU/DDE/DE12-27/Preparation and Printing of Course Materials/2020 Dated 12.08.2020 Copies - 500
SYLLABI-BOOK MAPPING TABLE
Numerical Analysis

Syllabi → Mapping in Book

BLOCK I: POLYNOMIAL EQUATIONS AND SYSTEM OF LINEAR EQUATIONS
UNIT 1: Algebraic & Transcendental and Polynomial Equations: Bisection Method, Iteration Method, Method of False Position, Newton-Raphson Method.
→ Unit 1: Algebraic, Transcendental, and Polynomial Equations (Pages 1-34)
UNIT 2: System of Linear Equations: Matrix Inversion Method, Cramer’s Rule, Gauss Elimination Method, Gauss-Jordan Elimination Method, Triangularisation Method.
→ Unit 2: System of Linear Equations (Pages 35-56)
UNIT 3: Solutions to Linear Systems - Jacobi & Gauss-Seidel Iterative Methods - Theory & Problems.
→ Unit 3: Solution of Linear Systems (Pages 57-66)
UNIT 4: Interpolation: Graphic Method - Finite Differences - Forward and Backward Differences - Central Differences - Fundamental Theorem of Finite Differences.
→ Unit 4: Interpolation (Pages 67-83)

BLOCK II: INTERPOLATIONS
UNIT 5: Interpolating Polynomials using Finite Differences - Other Difference Operators.
→ Unit 5: Interpolating Polynomials and Operators (Pages 84-101)
UNIT 6: Lagrange and Newton Interpolations - Applications.
→ Unit 6: Lagrange and Newton Interpolations (Pages 102-117)
UNIT 7: Divided Differences and their Properties - Application of Newton’s General Interpolating Formula.
→ Unit 7: Divided Differences (Pages 118-128)
UNIT 8: Central Differences Interpolation Formulae - Gauss Formulae, Stirling’s Formula, Bessel’s Formula, Everett’s Formula, Hermite’s Formula.
→ Unit 8: Central Differences Interpolations Formulae (Pages 129-149)

BLOCK III: NUMERICAL DIFFERENTIATION AND INTEGRATION
UNIT 9: Numerical Differentiation - Methods based on Interpolation - Problems.
→ Unit 9: Numerical Differentiation (Pages 150-163)
UNIT 10: Numerical Differentiation - Methods based on Finite Differences - Problems.
→ Unit 10: Numerical Differentiation Methods Based on Finite Differences (Pages 164-178)
UNIT 11: Numerical Integration, Trapezoidal Rule, Simpson’s 1/3 Rule, Simpson’s 3/8 Rule, Weddle’s Rule, Cotes Method.
→ Unit 11: Numerical Integration (Pages 179-195)

BLOCK IV: NUMERICAL SOLUTIONS OF ODE
UNIT 12: Numerical Solutions of Ordinary Differential Equations: Taylor’s Series Method, Picard’s Method, Euler’s Method, Runge-Kutta Method.
→ Unit 12: Numerical Solutions of Ordinary Differential Equations (Pages 196-226)
UNIT 13: Numerical Solutions of Ordinary Differential Equations using Runge-Kutta 2nd and 4th Order Methods (Derivation of the Formula not Needed) - Theory & Problems.
→ Unit 13: Ordinary Differential Equations (Pages 227-248)
UNIT 14: Predictor-Corrector Methods - Milne’s Predictor-Corrector Method - Adams’ Predictor-Corrector Method.
→ Unit 14: Predictor-Corrector Methods (Pages 249-256)
CONTENTS
BLOCK I: POLYNOMIAL EQUATIONS AND SYSTEM OF LINEAR EQUATIONS
UNIT 1 ALGEBRAIC, TRANSCENDENTAL, AND
POLYNOMIAL EQUATIONS 1-34
1.0 Introduction
1.1 Objectives
1.2 Root Finding
1.2.1 Methods for Finding Location of Real Roots
1.2.2 Methods for Finding the Roots—Bisection and Simple Iteration Methods
1.2.3 Newton-Raphson Methods
1.2.4 Secant Method
1.2.5 Regula-Falsi Methods
1.2.6 Roots of Polynomial Equations
1.2.7 Descarte’s Rule
1.3 Curve Fitting
1.3.1 Method of Least Squares
1.4 Answers to Check Your Progress Questions
1.5 Summary
1.6 Key Words
1.7 Self Assessment Questions and Exercises
1.8 Further Readings
UNIT 2 SYSTEM OF LINEAR EQUATIONS 35-56
2.0 Introduction
2.1 Objectives
2.2 System of Linear Equations
2.2.1 Classical Methods
2.2.2 Elimination Methods
2.2.3 Iterative Methods
2.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination Method
2.3 Triangularisation Method
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings
UNIT 3 SOLUTION OF LINEAR SYSTEMS 57-66
3.0 Introduction
3.1 Objectives
3.2 Solution of Linear Systems
3.3 Jacobi and Gauss-Seidel Iterative Methods
3.4 Answers to Check Your Progress Questions
3.5 Summary
3.6 Key Words
3.7 Self Assessment Questions and Exercises
3.8 Further Readings
UNIT 4 INTERPOLATION 67-83
4.0 Introduction
4.1 Objectives
4.2 Graphical Method of Interpolation
4.3 Finite Difference
4.4 Forward Difference
4.5 Backward Difference
4.6 Central Difference
4.7 Fundamental Theorem of Finite Differences
4.8 Answers to Check Your Progress Questions
4.9 Summary
4.10 Key Words
4.11 Self Assessment Questions and Exercises
4.12 Further Readings

BLOCK II: INTERPOLATIONS


UNIT 5 INTERPOLATING POLYNOMIALS AND OPERATORS 84-101
5.0 Introduction
5.1 Objectives
5.2 Interpolating Polynomials Using Finite Difference
5.3 Other Difference Operators
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings
UNIT 6 LAGRANGE AND NEWTON INTERPOLATIONS 102-117
6.0 Introduction
6.1 Objectives
6.2 Lagrange Interpolations
6.3 Newton Interpolations
6.4 Applications of Lagrange and Newton Interpolations
6.5 Answers to Check Your Progress Questions
6.6 Summary
6.7 Key Words
6.8 Self Assessment Questions and Exercises
6.9 Further Readings
UNIT 7 DIVIDED DIFFERENCES 118-128
7.0 Introduction
7.1 Objectives
7.2 Divided Differences and Their Properties
7.2.1 Newton’s Divided Difference Interpolation Formula
7.3 Application of Newton's General Interpolating Formula
7.4 Answers to Check Your Progress Questions
7.5 Summary
7.6 Key Words
7.7 Self Assessment Questions and Exercises
7.8 Further Readings
UNIT 8 CENTRAL DIFFERENCES INTERPOLATIONS FORMULAE 129-149
8.0 Introduction
8.1 Objectives
8.2 Central Differences Interpolations Formulae
8.3 Gauss’s Formula
8.4 Stirling’s Formula
8.5 Bessel’s Formula
8.6 Lagrange’s Interpolation Formula
8.7 Everett’s Formula
8.8 Hermite's Formula
8.9 Answers to Check Your Progress Questions
8.10 Summary
8.11 Key Words
8.12 Self Assessment Questions and Exercises
8.13 Further Readings

BLOCK III: NUMERICAL DIFFERENTIATION AND INTEGRATION


UNIT 9 NUMERICAL DIFFERENTIATION 150-163
9.0 Introduction
9.1 Objectives
9.2 Numerical Differentiation
9.2.1 Differentiation Using Newton’s Forward Difference Interpolation Formula
9.2.2 Differentiation Using Newton’s Backward Difference Interpolation Formula
9.3 Answers to Check Your Progress Questions
9.4 Summary
9.5 Key Words
9.6 Self Assessment Questions and Exercises
9.7 Further Readings
UNIT 10 NUMERICAL DIFFERENTIATION METHODS
BASED ON FINITE DIFFERENCES 164-178
10.0 Introduction
10.1 Objectives
10.2 Numerical Differentiation
10.3 Methods Based on Finite Differences
10.4 Answers to Check Your Progress Questions
10.5 Summary
10.6 Key Words
10.7 Self Assessment Questions and Exercises
10.8 Further Readings
UNIT 11 NUMERICAL INTEGRATION 179-195
11.0 Introduction
11.1 Objectives
11.2 Numerical Integration
11.3 Trapezoidal Rule
11.4 Simpson’s 1/3 Rule
11.5 Simpson’s 3/8 Rule
11.6 Weddle’s Rule
11.7 Cotes Method
11.8 Answers to Check Your Progress Questions
11.9 Summary
11.10 Key Words
11.11 Self Assessment Questions and Exercises
11.12 Further Readings
BLOCK IV: NUMERICAL SOLUTIONS OF ODE
UNIT 12 NUMERICAL SOLUTIONS OF ORDINARY
DIFFERENTIAL EQUATIONS 196-226
12.0 Introduction
12.1 Objectives
12.2 Ordinary Differential Equations
12.3 Taylor’s Series Method
12.4 Picard’s Method of Successive Approximations
12.5 Euler’s Method
12.5.1 Modified Euler’s Method
12.5.2 Euler’s Method for a Pair of Differential Equations
12.6 Runge-Kutta Methods
12.7 Multistep Methods
12.8 Predictor-Corrector Methods
12.8.1 Euler’s Predictor-Corrector Formula
12.8.2 Milne’s Predictor-Corrector Formula
12.9 Numerical Solution of Boundary Value Problems
12.9.1 Reduction to a Pair of Initial Value Problems
12.9.2 Finite Difference Method
12.10 Answers to Check Your Progress Questions
12.11 Summary
12.12 Key Words
12.13 Self Assessment Questions and Exercises
12.14 Further Readings
UNIT 13 ORDINARY DIFFERENTIAL EQUATIONS 227-248
13.0 Introduction
13.1 Objectives
13.2 Runge-Kutta Methods
13.3 Euler’s Method
13.4 Taylor Series Method
13.5 Multistep Methods
13.6 Euler’s Method for a Pair of Differential Equations
13.7 Runge-Kutta Methods for a Pair of Equations
13.8 Runge-Kutta Methods for a Second Order Differential Equation
13.9 Numerical Solutions of Boundary Value Problems
13.10 Answers to Check Your Progress Questions
13.11 Summary
13.12 Key Words
13.13 Self Assessment Questions and Exercises
13.14 Further Readings
UNIT 14 PREDICTOR-CORRECTOR METHODS 249-256
14.0 Introduction
14.1 Objectives
14.2 Predictor-Corrector Method
14.3 Milne’s Predictor-Corrector Method
14.4 Adams’ Predictor-Corrector Method
14.5 Answers to Check Your Progress Questions
14.6 Summary
14.7 Key Words
14.8 Self Assessment Questions and Exercises
14.9 Further Readings

INTRODUCTION
Numerical analysis is the study of algorithms that find solutions for problems of
continuous mathematics. It helps in obtaining approximate solutions while
maintaining reasonable bounds on errors. Although numerical analysis has
traditionally found applications in all fields of engineering and the physical sciences,
in the 21st century the life sciences and even the arts have also adopted elements of
scientific computation. Ordinary differential equations are used for calculating the movement
of heavenly bodies, i.e., planets, stars and galaxies. Numerical methods are also used in
optimization problems arising in portfolio management, and in solving stochastic differential
equations for problems in medicine and biology. Airlines use
sophisticated optimization algorithms to finalize ticket prices, airplane and crew
assignments and fuel needs. The basic aim of numerical analysis is to design and
analyse techniques to compute approximate but accurate solutions to difficult
problems.
In numerical analysis, two methods are involved, namely direct and iterative
methods. Direct methods compute the solution to a problem in a finite number of
steps whereas iterative methods start from an initial guess to form successive
approximations that converge to the exact solution only in the limit. Iterative methods
are more common than direct methods in numerical analysis. The study of errors
is an important part of numerical analysis.
This book, Numerical Analysis, is divided into four blocks, which are
further subdivided into fourteen units. This book provides a basic understanding
of the subject and helps to grasp its fundamentals. In a nutshell, it explains various
aspects, such as algebraic, transcendental and polynomial equations, the Newton-
Raphson method, systems of linear equations, the Gauss-Jordan elimination method,
the triangularisation method, solutions of linear systems, the Jacobi and Gauss-Seidel
iterative methods, interpolation, finite differences, forward and backward
differences, central differences, interpolating polynomials using finite differences,
Lagrange and Newton interpolations, divided differences and their properties,
central difference interpolation formulae (Gauss, Stirling, Bessel, Everett and
Hermite), numerical differentiation, numerical integration, the trapezoidal rule,
Simpson’s 1/3 and 3/8 rules, Weddle’s rule, the Cotes method, numerical solutions of
ordinary differential equations (Taylor’s series, Picard, Euler and Runge-Kutta
methods), numerical solutions of ordinary differential equations using Runge-Kutta
2nd and 4th order methods, and predictor-corrector methods (Milne’s and Adams’
methods).
The book follows the Self-Instructional Mode (SIM) wherein each unit
begins with an ‘Introduction’ to the topic. The ‘Objectives’ are then outlined before
going on to the presentation of the detailed content in a simple and structured
format. ‘Check Your Progress’ questions are provided at regular intervals to test
the student’s understanding of the subject. ‘Answers to Check Your Progress
Questions’, a ‘Summary’, a list of ‘Key Words’, and a set of ‘Self-Assessment
Questions and Exercises’ are provided at the end of each unit for effective
recapitulation. This book provides a good learning platform to those who
need to be skilled in the area of numerical analysis. Logically arranged
topics, relevant examples and illustrations have been included for better
understanding of the topics and for effective recapitulation.
BLOCK - I
POLYNOMIAL EQUATIONS AND SYSTEM OF LINEAR EQUATIONS

UNIT 1 ALGEBRAIC, TRANSCENDENTAL, AND POLYNOMIAL EQUATIONS
Structure
1.0 Introduction
1.1 Objectives
1.2 Root Finding
1.2.1 Methods for Finding Location of Real Roots
1.2.2 Methods for Finding the Roots—Bisection and Simple Iteration Methods
1.2.3 Newton-Raphson Methods
1.2.4 Secant Method
1.2.5 Regula-Falsi Methods
1.2.6 Roots of Polynomial Equations
1.2.7 Descarte’s Rule
1.3 Curve Fitting
1.3.1 Method of Least Squares
1.4 Answers to Check Your Progress Questions
1.5 Summary
1.6 Key Words
1.7 Self Assessment Questions and Exercises
1.8 Further Readings

1.0 INTRODUCTION

A root-finding algorithm is a numerical method, or algorithm, for finding a


value x such that f(x) = 0, for a given function f. Such an x is called a root of the
function f. Generally speaking, algorithms for solving problems numerically can be
divided into two main groups: direct methods and iterative methods. Direct methods
are those which can be completed in a predetermined finite number of steps.
Iterative methods are methods which converge to the solution over time. These
algorithms run until some convergence criterion is met. When choosing which
method to use one important consideration is how quickly the algorithm converges
to the solution or the method’s convergence rate. Curve fitting is the process of
constructing a curve, or mathematical function, which has the best fit to a series of
data points, possibly subject to constraints.

In this unit, you will study about algebraic equations, transcendental
equations, polynomial equations, the bisection method, the iteration method, the
method of false position, and the Newton-Raphson method.

1.1 OBJECTIVES

After going through this unit, you will be able to:


x Compute the real roots of an equation
x Define graphical and tabulation method of finding real roots
x Use Newton-Raphson method for finding a root of an equation
x Explain Regula-Falsi method
x Understand the concept of curve fitting
x Explain the method of least squares

1.2 ROOT FINDING


In this section, we consider numerical methods for computing the roots of an equation
of the form,
f (x) = 0 (1.1)
Where f (x) is a reasonably well-behaved function of a real variable x. The function
may be in algebraic form or polynomial form given by,
f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + ... + a₁x + a₀ (1.2)
It may also be an expression containing transcendental functions such as cos x,
sin x, ex, etc. First, we would discuss methods to find the isolated real roots of a
single equation. Later, we would discuss methods to find the isolated roots of a
system of equations, particularly of two real variables x and y, given by
f (x, y) = 0 , g (x, y) = 0 (1.3)
A root of an equation is usually computed in two stages. First, we find the
location of a root in the form of a crude approximation of the root. Next we use an
iterative technique for computing a better value of the root to a desired accuracy
in successive approximations/computations. This is done by using an iterative
function.

1.2.1 Methods for Finding Location of Real Roots


The location or crude approximation of a real root is determined by the use of any
one of the two methods, (a) Graphical and (b) Tabulation.
Graphical Method: In the graphical method, we draw the graph of the function
y = f (x), for a certain range of values of x. The abscissae of the points where the
graph intersects the x-axis are crude approximations for the roots of the Equation
(1.1). For example, consider the equation,
f(x) = x² + 2x – 1 = 0
From the graph of the function y = f(x) shown in Figure 1.1, we find that it cuts
the x-axis between 0 and 1. We may take any point in [0, 1] as the crude approximation
for one root. Thus, we may take 0.5 as the location of a root. The other root lies
between –2 and –3. We can take –2.5 as the crude approximation of the other
root.

Fig. 1.1 Graph of y = x² + 2x – 1


In some cases, where it is complicated to draw the graph of y = f (x), we may
rewrite the equation f (x) = 0, as f1(x) = f2(x), where the graphs of y = f1 (x) and y =
f2(x) are standard curves. Then we find the x-coordinate(s) of the point(s) of intersection
of the curves y = f1(x) and y = f2(x), which is the crude approximation of the root (s).
For example, consider the equation
x³ – 15.2x – 13.2 = 0
This can be rewritten as,
x³ = 15.2x + 13.2
where it is easy to draw the graphs of y = x³ and y = 15.2x + 13.2. Then, the
abscissa of the point(s) of intersection can be taken as the crude approximation(s)
of the root(s).

Fig. 1.2 Graph of y = x³ and y = 15.2x + 13.2


Example 1.1: Find the location of the root of the equation x log₁₀ x = 1.
Solution: The equation can be rewritten as log₁₀ x = 1/x. Now, the curves
y = 1/x and y = log₁₀ x can be easily drawn and are shown in the figure below.

Graph of y = 1/x and y = log₁₀ x

The point of intersection of the curves has its x-coordinate approximately equal
to 2.5. Thus, the location of the root is 2.5.
Tabulation Method: In the tabulation method, a table of values of f(x) is made
for values of x in a particular range. Then, we look for a change in sign in the
values of f(x) for two consecutive values of x, and conclude that a real root lies
between these values of x. This conclusion rests on the following theorem on
continuous functions.
Theorem 1.1: If f(x) is continuous in an interval (a, b), and f(a) and f(b) are of
opposite signs, then there exists at least one real root of f(x) = 0 between a and b.
Consider for example, the equation f(x) = x³ – 8x + 5 = 0.
Constructing the following table of x and f(x),

x       –4   –3   –2   –1    0    1    2    3
f(x)   –27    2   13   12    5   –2   –3    8

we observe that there is a change in sign of f(x) in each of the sub-intervals
(–4, –3), (0, 1) and (2, 3). Thus, we can take the crude approximations for the three
real roots as –3.2, 0.2 and 2.2.
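The tabulation step is easy to mechanize. The following Python sketch (the function name is our own, not from the text) scans a range of integer values for sign changes of f(x) = x³ – 8x + 5, recovering the same three bracketing sub-intervals:

```python
def sign_change_intervals(f, a, b, step=1):
    """Return the sub-intervals [x, x + step] on which f changes sign."""
    intervals = []
    x = a
    while x < b:
        if f(x) * f(x + step) < 0:
            intervals.append((x, x + step))
        x += step
    return intervals

f = lambda x: x**3 - 8*x + 5
brackets = sign_change_intervals(f, -4, 3)
print(brackets)  # [(-4, -3), (0, 1), (2, 3)]
```

Each returned interval brackets exactly one real root and can be handed to an iterative method for refinement.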

1.2.2 Methods for Finding the Roots—Bisection and Simple Iteration Methods
Bisection Method: The bisection method is a root finding method which repeatedly
bisects an interval and then selects a subinterval in which a root must lie for further
processing. It is an extremely simple and robust method, but it is relatively slow. It
is normally used for obtaining a rough approximation to a solution which is then
used as a starting point for more rapidly converging methods. When an interval
contains more than one root, the bisection method can find one of them. When an
interval contains a singularity, the bisection method converges to that singularity.
The notion of the bisection method is based on the fact that a function will change
sign when it passes through zero. By evaluating the function at the middle of an
interval and replacing whichever limit has the same sign, the bisection method can
halve the size of the interval in each iteration to find the root.
Thus, the bisection method is the simplest method for finding a root to an
equation. It needs two initial estimates xa and xb which bracket the root. Let
fa = f(xa) and fb = f(xb), such that fa·fb ≤ 0. Evidently, if fa·fb = 0 then one or both
of xa and xb must be a root of f(x) = 0. Figure 1.3 is a graphical representation of
the bisection method showing two initial guesses xa and xb bracketing the root.

Fig. 1.3 Graph of the Bisection Method showing Two Initial Estimates
xa and xb Bracketing the Root

The method is applicable when we wish to solve the equation f(x) = 0 for the real
variable x, where f is a continuous function defined on an interval [a, b] and f(a)
and f(b) have opposite signs.

The bisection method involves successive reduction of the interval in which
an isolated root of an equation lies. This method is based upon an important theorem
on continuous functions as stated below.
Theorem 1.2: If a function f(x) is continuous in the closed interval [a, b], and
f(a) and f(b) are of opposite signs, i.e., f(a) f(b) < 0, then there exists at least one
real root of f(x) = 0 between a and b.
The bisection method starts with two guess values x0 and x1 such that
f(x0) · f(x1) < 0. The interval [x0, x1] is bisected by the point x2 = (x0 + x1)/2. We compute
f(x2). If f(x2) = 0, then x2 is a root. Otherwise, we check whether f(x0) · f(x2) < 0 or
f(x1) · f(x2) < 0. If f(x0) · f(x2) < 0, then the root lies in the interval (x0, x2). Otherwise,
if f(x1) · f(x2) < 0, then the root lies in the interval (x2, x1).
The sub-interval in which the root lies is again bisected and the above process is
repeated until the length of the sub-interval is less than the desired accuracy.
The bisection method is also termed as bracketing method, since the method
successively reduces the gap between the two ends of an interval surrounding the
real root, i.e., brackets the real root.
The algorithm given below clearly shows the steps to be followed in finding a
real root of an equation, by bisection method to the desired accuracy.
Algorithm: Finding root using bisection method.
Step 1: Define the equation, f(x) = 0
Step 2: Read epsilon, the desired accuracy
Step 3: Read two initial values x0 and x1 which bracket the desired root
Step 4: Compute y0 = f(x0)
Step 5: Compute y1 = f(x1)
Step 6: Check if y0 y1 < 0, then go to Step 7
        else go to Step 3
Step 7: Compute x2 = (x0 + x1)/2
Step 8: Compute y2 = f(x2)
Step 9: Check if y0 y2 > 0, then set x0 = x2
        else set x1 = x2
Step 10: Check if |(x1 – x0)/x1| > epsilon, then go to Step 7
Step 11: Write x2, y2
Step 12: End
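The algorithm above translates directly into a short program. The following Python sketch is our own rendering of these steps (the name `bisect` is illustrative, not from the text); it is applied to the equation of Example 1.2 below:

```python
def bisect(f, x0, x1, epsilon=1e-6, maxit=100):
    """Bisection method: x0 and x1 must bracket a root, i.e. f(x0)*f(x1) < 0."""
    y0, y1 = f(x0), f(x1)
    if y0 * y1 > 0:
        raise ValueError("initial values do not bracket a root")
    for _ in range(maxit):
        x2 = (x0 + x1) / 2          # bisect the current interval
        y2 = f(x2)
        if y2 == 0 or abs((x1 - x0) / x1) < epsilon:
            return x2
        if y0 * y2 > 0:             # root lies in (x2, x1)
            x0, y0 = x2, y2
        else:                       # root lies in (x0, x2)
            x1, y1 = x2, y2
    return x2

# Smallest positive root of x^3 - 9x + 1 = 0 (cf. Example 1.2)
root = bisect(lambda x: x**3 - 9*x + 1, 0, 1)
```

Each pass halves the bracketing interval, so after n steps the error is at most (x1 – x0)/2ⁿ, which is why the method is robust but slow.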

Next, we give the flowchart representation of the above algorithm to get a
better understanding of the method. The flowchart also helps in easy implementation
of the method in a computer program.

Flowchart for Bisection Algorithm
Algebraic, Transcendental, Example 1.2: Find the location of the smallest positive root of the equation
and Polynomial Equations
x3 – 9x + 1 = 0 and compute it by bisection method, correct to two decimal places.
Solution: To find the location of the smallest positive root we tabulate the function
f (x) = x3 – 9x + 1 below.
NOTES
x 0 1 2 3
f ( x) 1  2  9 1

We observe that the smallest positive root lies in the interval [0, 1]. The computed
values for the successive steps of the bisection method are given in the table.

n x0 x1 x2 f ( x2 )
1 0 1 0 .5  3.37
2 0 0.5 0.25  1.23
3 0 0.25 0.125  0.123
4 0 0.125 0.0625 0.437
5 0.0625 0.125 0.09375 0.155
6 0.09375 0.125 0.109375 0.016933
7 0.109375 0.125 0.11718  0.053

From the above results, we conclude that the smallest root correct to two decimal
places is 0.11.
Simple Iteration Method: A root of an equation f (x) = 0, is determined using
the method of simple iteration by successively computing better and better
approximation of the root, by first rewriting the equation in the form,
x = g(x) (1.4)
Then, we form the sequence {xn} starting from the guess value x0 of the root
and computing successively,
x1 = g(x0), x2 = g(x1), ..., xn+1 = g(xn)
In general, the above sequence may converge to the root or it may
diverge. If the sequence diverges, we shall discard it and consider another form
x = h(x), by rewriting f (x) = 0. It is always possible to get a convergent sequence
since there are different ways of rewriting f (x) = 0 in the form x = g(x). However,
instead of starting computation of the sequence, we shall first test whether the form
of g(x) can give a convergent sequence or not. We give below a theorem which can
be used to test for convergence.
Theorem 1.3: If the function g(x) is continuous in the interval [a, b] which contains
a root ξ of the equation f(x) = 0, rewritten as x = g(x), and |g′(x)| ≤ l < 1 in
this interval, then for any choice of x0 ∈ [a, b], the sequence {xn} determined by the
iterations,
xn+1 = g(xn), n = 0, 1, 2, ... (1.5)
converges to the root ξ of f(x) = 0.
Proof: Since x = ξ is a root of the equation x = g(x), we have
ξ = g(ξ) (1.6)
The first iteration gives x1 = g(x0) (1.7)
Subtracting Equation (1.7) from Equation (1.6), we get
ξ – x1 = g(ξ) – g(x0)
Applying the mean value theorem, we can write
ξ – x1 = (ξ – x0) g′(c0), where c0 lies between x0 and ξ (1.8)
Similarly, we can derive
ξ – x2 = (ξ – x1) g′(c1) (1.9)
....
ξ – xn = (ξ – xn–1) g′(cn–1) (1.10)

From Equations (1.8), (1.9) and (1.10), we get
ξ – xn = (ξ – x0) g′(c0) g′(c1) ... g′(cn–1) (1.11)
Since |g′(ci)| ≤ l for each ci, the above Equation (1.11) becomes,
|ξ – xn| ≤ lⁿ |ξ – x0| (1.12)

Evidently, since l < 1, the right hand side tends to zero as n → ∞, and
thus it follows that the sequence {xn} converges to the root ξ. This
completes the proof.
Order of Convergence: The order of convergence of an iterative process is
determined in terms of the errors en and en+1 in successive iterations. An iterative
process is said to have kth order of convergence if
lim (n → ∞) |eₙ₊₁| / |eₙ|ᵏ = M
where M is a finite number.
Roughly speaking, the error in any iteration is proportional to the kth power of
the error in the previous iteration.
Evidently, the simple iteration discussed in this section has its order of
convergence 1.
The above iteration is also termed as fixed point iteration since it determines
the root as the fixed point of the mapping defined by x = g(x).
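This first-order behaviour can be observed numerically: for a convergent fixed-point iteration, the error ratio eₙ₊₁/eₙ settles down to the constant |g′(ξ)| < 1. The Python sketch below is our own illustration (not from the text), using the iteration x = √(x + 0.1) for the equation x² – x – 0.1 = 0, which appears later in this unit:

```python
import math

g = lambda x: math.sqrt(x + 0.1)    # convergent form of x^2 - x - 0.1 = 0
root = (1 + math.sqrt(1.4)) / 2     # exact root, approx. 1.0916080

x, errors = 1.0, []
for _ in range(10):
    x = g(x)
    errors.append(abs(x - root))

# Successive error ratios approach |g'(root)| = 1/(2*root), approx. 0.458,
# confirming first-order (k = 1) convergence.
ratios = [errors[i + 1] / errors[i] for i in range(len(errors) - 1)]
```

Because each iteration multiplies the error by roughly 0.458, about 5 iterations are needed per additional correct decimal digit, which matches the slow convergence seen in the worked examples.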
Algorithm: Computation of a root of f(x) = 0 by linear iteration.
Step 1: Define g(x), where f(x) = 0 is rewritten as x = g(x)
Step 2: Input x0, epsilon, maxit, where x0 is the initial guess of the root, epsilon is
        the accuracy desired and maxit is the maximum number of iterations allowed
Step 3: Set i = 0
Step 4: Set x1 = g(x0)
Step 5: Set i = i + 1
Step 6: Check, if |(x1 – x0)/x1| < epsilon, then print ‘root is’, x1 and stop
        else go to Step 7
Step 7: Check, if i < maxit, then set x0 = x1 and go to Step 4
Step 8: Write ‘No convergence after’, maxit, ‘iterations’
Step 9: End
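The algorithm can be sketched in Python as follows (our own rendering; the name `fixed_point` is illustrative, not from the text). It is applied here to the convergent form x = 1/√(1 + x) of the equation x³ + x² – 1 = 0, which is worked out in Example 1.4:

```python
def fixed_point(g, x0, epsilon=1e-6, maxit=50):
    """Linear (fixed-point) iteration x_{n+1} = g(x_n).

    Converges when |g'(x)| <= l < 1 near the root (Theorem 1.3)."""
    for _ in range(maxit):
        x1 = g(x0)
        if abs((x1 - x0) / x1) < epsilon:   # relative accuracy test
            return x1
        x0 = x1
    raise RuntimeError(f"no convergence after {maxit} iterations")

# Real root of x^3 + x^2 - 1 = 0 via the convergent form x = 1/sqrt(1 + x)
root = fixed_point(lambda x: (1 + x) ** -0.5, 1.0)
print(round(root, 5))  # 0.75488
```

Choosing a form x = g(x) with a small |g′| near the root is the key design decision; the examples that follow show how to test candidate forms before iterating.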
Example 1.3: In order to compute a real root of the equation x³ – x – 1 = 0, near
x = 1, by iteration, determine which of the following iterative functions can be used
to give a convergent sequence.

(i) x = x³ – 1  (ii) x = 1/(x² – 1)  (iii) x = √((x + 1)/x)

Solution:
(i) For the form g(x) = x³ – 1, g′(x) = 3x² and |g′(x)| > 1 for x near 1. Hence,
this form would not give a convergent sequence of iterations.
(ii) For the form g(x) = 1/(x² – 1), g′(x) = –2x/(x² – 1)², and |g′(x)| > 1 for
x near 1. Hence, this form also would not give a convergent
sequence of iterations.
(iii) For the form g(x) = √((x + 1)/x), g′(x) = (1/2) ((x + 1)/x)^(–1/2) · (–1/x²).

∴ |g′(1)| = 1/(2√2) < 1. Hence, the form x = √((x + 1)/x) would give a convergent
sequence of iterations.
Example 1.4: Compute the real root of the equation x³ + x² – 1 = 0, correct to five
significant digits, by the iteration method.
Solution: The equation has a real root between 0 and 1 since f(x) = x³ + x² – 1 has
opposite signs at 0 and 1. For using iteration, we first rewrite the equation in the
following different forms:

(i) x = 1/x² – 1  (ii) x = √(1/x – x)  (iii) x = 1/√(1 + x)

For the form (i), g(x) = 1/x² – 1, g′(x) = –2/x³ and for x in (0, 1), |g′(x)| > 1. So,
this form is not suitable. For the form
(ii), g′(x) = (1/2) (1/x – x)^(–1/2) · (–1/x² – 1) and |g′(x)| > 1 for all x in (0, 1). Finally, for
the form
(iii), g(x) = (1 + x)^(–1/2), g′(x) = –(1/2)(1 + x)^(–3/2) and |g′(x)| < 1 for x in (0, 1). Thus, this form can be
used to form a convergent sequence for finding the root.

We start the iteration xₙ₊₁ = 1/√(1 + xₙ) with x0 = 1. The results of successive iterations
are,
x1 = 0.70711  x2 = 0.76537  x3 = 0.75236  x4 = 0.75541
x5 = 0.75476  x6 = 0.75490  x7 = 0.75488  x8 = 0.75488

Thus, the root is 0.75488, correct to five significant digits.


Example 1.5: Compute the root of the equation x² – x – 0.1 = 0, which lies in (1, 2),
correct to five significant figures.
Solution: The equation is rewritten in the following form for computing the root by
iteration:
x = √(x + 0.1). Here, g(x) = √(x + 0.1), g′(x) = 1/(2√(x + 0.1)) and |g′(x)| < 1, for x in (1, 2).
The results for successive iterations, taking x0 = 1, are
x1 = 1.0488  x2 = 1.0718  x3 = 1.0825
x4 = 1.0874  x5 = 1.0897
Thus, after five iterations the root is 1.09, correct to three significant figures;
continuing the iterations gives the root 1.0916, correct to five significant figures.
Example 1.6: Solve the following equation for the root lying in (2, 4) by using the
method of linear iteration: x3 – 9x + 1 = 0. Show that there are various ways of
rewriting the equation in the form, x = g (x) and choose the one which gives a
convergent sequence for the root.
Solution: We can rewrite the equation in the following different forms:

(i) x = (x³ + 1)/9    (ii) x = (9x − 1)^(1/3)    (iii) x = √(9 − 1/x)

In case of (i), g′(x) = x²/3 and for x in [2, 4], |g′(x)| ≥ 4/3 > 1. Hence it will not give
rise to a convergent sequence.

In case of (ii), g′(x) = 3/(9x − 1)^(2/3) and for x in [2, 4], |g′(x)| < 1.

In case of (iii), g′(x) = 1/[2x²√(9 − 1/x)] and for x in [2, 4], |g′(x)| < 1.

Thus, the forms (ii) and (iii) would give convergent sequences for finding the
root in [2, 4].
We start the iterations taking x0 = 2 in the iteration scheme (iii). The results for
successive iterations are,
x0 = 2.0    x1 = 2.91548    x2 = 2.94228    x3 = 2.94281    x4 = 2.94282
Thus, the root can be taken as 2.94281, correct to four decimal places.

1.2.3 Newton-Raphson Methods


The Newton-Raphson method is a widely used numerical method for finding a root of
an equation f (x) = 0, to the desired accuracy. It is an iterative method which has a
faster rate of convergence and is very useful when the expression for the derivative
f ′(x) is not complicated. The Newton-Raphson method, also called Newton's method,
is a root finding algorithm that uses the first few terms of the Taylor series of a
function f (x) in the neighborhood of a suspected root. To find the root, we start with an
initial guess x1 at the root; the next guess x2 is the point where the tangent at
[x1, f (x1)] crosses the x-axis. The next guess x3 is similarly obtained from the tangent
at [x2, f (x2)], as shown in Figure 1.4.

Fig. 1.4 Graph of the Newton-Raphson Methods


The Newton-Raphson method can be derived from the definition of a slope as
follows:
f ′(x1) = [f (x1) − 0]/(x1 − x2)  ⇒  x2 = x1 − f (x1)/f ′(x1)
As a general rule, from the point [xn, f (xn)], the next guess is calculated as
follows:
xn+1 = xn − f (xn)/f ′(xn)

The derivative or slope f ′(xn) can be approximated numerically as follows:
f ′(xn) ≈ [f (xn + Δx) − f (xn)]/Δx
To derive the formula for this method, we consider a Taylor's series expansion of
f (x0 + h), x0 being an initial guess of a root of f (x) = 0 and h a small correction to the
root:
f (x0 + h) = f (x0) + h f ′(x0) + (h²/2) f ″(x0) + ...

Assuming h to be small, we equate f (x0 + h) to 0 by neglecting square and
higher powers of h.
∴ f (x0) + h f ′(x0) = 0

Or, h = −f (x0)/f ′(x0)

Thus, we can write an improved value of the root as,
x1 = x0 + h, i.e., x1 = x0 − f (x0)/f ′(x0)

Successive approximations x2, x3, ..., xn+1 can thus be written as,

x2 = x1 − f (x1)/f ′(x1)
x3 = x2 − f (x2)/f ′(x2)
... ... ...
xn+1 = xn − f (xn)/f ′(xn)                    (1.13)

If the sequence {xn} converges, we get the root.


Algorithm: Computation of a root of f (x) = 0 by Newton-Raphson methods.
Step 1: Define f (x), f ′(x)
Step 2: Input x0, epsilon, maxit
[x0 is the initial guess of root, epsilon is the desired accuracy of the root
and maxit is the maximum number of iterations allowed]
Step 3: Set i = 0
Step 4: Set f0 = f (x0)
Step 5: Compute df0 = f ′(x0)
Step 6: Set x1 = x0 – f0/df0
Step 7: Set i = i + 1
Step 8: Check if |x1 – x0| / |x1| < epsilon, then print 'root is', x1 and stop
else if i < maxit, then set x0 = x1 and go to Step 4
Step 9: Write ‘Iterations do not converge’
Step 10: End
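The algorithm above translates almost line for line into Python. A sketch (function name ours), applied to the equation x³ − 8x − 4 = 0 with initial guess 3:

```python
def newton_raphson(f, df, x0, epsilon=1e-6, maxit=50):
    """Newton-Raphson iteration; stops when the relative change drops below epsilon."""
    x = x0
    for _ in range(maxit):
        x_new = x - f(x) / df(x)
        if abs(x_new - x) < epsilon * abs(x_new):
            return x_new
        x = x_new
    raise RuntimeError("iterations do not converge")

root = newton_raphson(lambda x: x**3 - 8*x - 4,   # f(x)
                      lambda x: 3*x**2 - 8,       # f'(x)
                      3.0)
print(round(root, 4))  # 3.0514
```

Only three or four iterations are needed here, reflecting the quadratic convergence discussed later in this section.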
Example 1.7: Use Newton-Raphson method to compute the positive root of the
equation x3 – 8x – 4 = 0, correct to five significant digits.
Solution: Newton-Raphson iterative scheme is given by,
xn+1 = xn − f (xn)/f ′(xn)
For the given equation, f (x) = x3 – 8x – 4.


First we find the location of the root by the method of tabulation. The table for
f (x) is,
x       0    1    2    3   4
f (x)  −4  −11  −12  −1  28
Evidently, the positive root is near x = 3. We take x0 = 3 in Newton-Raphson
iterative scheme.
xn3  8 xn  4
xn 1 xn 
3xn2  8
We get, x1 = 3 − (27 − 24 − 4)/(27 − 8) = 3 + 1/19 = 3.0526
Similarly, x2 = 3.05138 and x3 = 3.05138.
Thus, the positive root is 3.0514, correct to five significant digits.
Example 1.8: Find a real root of the equation x³ + 7x² + 9 = 0, correct to five
significant digits.
Solution: First we find the location of the real root by tabulation. We observe that
the real root is negative and since f (–7) = 9 > 0 and f (–8) = – 55 < 0, a root lies
between –7 and – 8.
For computing the root to the desired accuracy, we take x0 = –8 and use the Newton-
Raphson iterative formula,
xn+1 = xn − (xn³ + 7xn² + 9)/(3xn² + 14xn)
The successive iterations give,
x1 = –7.3125
x2 = –7.17966
x3 = –7.17484
x4 = –7.17483
Hence, the desired root is –7.1748, correct to five significant digits.
Example 1.9: For evaluating √a, deduce the iterative formula xn+1 = (1/2)(xn + a/xn),
by using Newton-Raphson scheme of iteration. Hence, evaluate √2 using this,
correct to four significant digits.
Solution: We observe that √a is the solution of the equation x² – a = 0.
Now, using f (x) = x² – a in the Newton-Raphson iterative scheme,

xn+1 = xn − (xn² − a)/(2xn)

i.e., xn+1 = (1/2)(xn + a/xn), for n = 0, 1, 2, ...

Now, for computing √2, we assume x0 = 1.4. The successive iterations give,

x1 = (1/2)(1.4 + 2/1.4) = 3.96/2.8 = 1.414
x2 = (1/2)(1.414 + 2/1.414) = 1.41421

Hence, the value of √2 is 1.414, correct to four significant digits.


Example 1.10: Prove that a^(1/k) can be computed by the iterative scheme,
xn+1 = (1/k)[(k − 1)xn + a/xn^(k−1)]. Hence evaluate ∛2, correct to five significant digits.

Solution: The value a^(1/k) is the positive root of x^k – a = 0. Thus, the iterative scheme
for evaluating a^(1/k) is,

xn+1 = xn − (xn^k − a)/(k xn^(k−1)) = (1/k)[(k − 1)xn + a/xn^(k−1)]

Now, for evaluating ∛2, we take x0 = 1.25 and use the iterative formula,
xn+1 = (1/3)[2xn + 2/xn²].

We have, x1 = (1/3)[2 × 1.25 + 2/(1.25)²] = 1.26

x2 = 1.259921,  x3 = 1.259921

Hence, ∛2 = 1.2599, correct to five significant digits.
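The scheme is easy to check numerically. A brief Python sketch (function name ours):

```python
def kth_root(a, k, x0, tol=1e-10, maxit=100):
    """Iterate x_{n+1} = ((k - 1) x_n + a / x_n^(k-1)) / k, which converges to a^(1/k)."""
    x = x0
    for _ in range(maxit):
        x_new = ((k - 1) * x + a / x**(k - 1)) / k
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("iterations do not converge")

print(round(kth_root(2, 2, 1.4), 5))   # 1.41421 (square root of 2, cf. Example 1.9)
print(round(kth_root(2, 3, 1.25), 5))  # 1.25992 (cube root of 2)
```

Setting k = 2 recovers the square-root formula of Example 1.9 as a special case.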
Example 1.11: Find, by the Newton-Raphson method, the real root of 3x – cos x – 1
= 0, correct to three significant figures.
Solution: The location of the real root of f (x) = 3x – cos x – 1 = 0, is [0, 1] since
f (0) = – 2 and f (1) > 0.
We choose x0 = 0 and use the Newton-Raphson scheme of iteration,
xn+1 = xn − (3xn − cos xn − 1)/(3 + sin xn)
The results for successive iterations are,
x1 = 0.667, x2 = 0.6075, x3 = 0.6071
Thus, the root is 0.607 correct to three significant figures.
Example 1.12: Find a real root of the equation x^x + 2x – 6 = 0, correct to four
significant digits.
Solution: Taking f (x) = x^x + 2x – 6, we have f (1) = –3 < 0 and f (2) = 2 > 0. Thus,
a root lies in [1, 2]. Choosing x0 = 2, we use the Newton-Raphson iterative scheme
given by,
xn+1 = xn − [xn^xn + 2xn − 6]/[xn^xn (1 + ln xn) + 2], where xn^xn = e^(xn ln xn)
The computed results for successive iterations are,
x1 = 1.7720
x2 = 1.7247
x3 = 1.7231
Hence, the root is 1.723 correct to four significant figures.
Order of Convergence: We consider the order of convergence of the Newton-
Raphson method given by the formula,
xn+1 = xn − f (xn)/f ′(xn)
Let us assume that the sequence of iterations {xn} converges to the root ξ.
Then, expanding by Taylor's series about xn, the relation f (ξ) = 0 gives
0 = f (xn) + (ξ − xn) f ′(xn) + [(ξ − xn)²/2] f ″(xn) + ...

Taking εn as the error in the nth iteration and writing εn = xn − ξ, we have,

εn+1 = (εn²/2) · f ″(ξ)/f ′(ξ), approximately.                    (1.14)

Thus, |εn+1| ≈ k |εn|², where k is a constant.

This shows that the order of convergence of the Newton-Raphson method is 2. In
other words, the Newton-Raphson method has a quadratic rate of convergence.
The condition for convergence of the Newton-Raphson method can easily be
derived by rewriting the iterative scheme as xn+1 = g(xn), with
g(x) = x − f (x)/f ′(x), so that g ′(x) = f (x) f ″(x)/[f ′(x)]².

Hence, using the condition for convergence of the linear iteration method, we
can write |g ′(x)| < 1.

Thus, the sufficient condition for the convergence of the Newton-Raphson method
is,
| f (x) f ″(x) | < [f ′(x)]²                    (1.15)
in the interval near the root.
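The quadratic rate can be observed numerically: each step roughly squares the error. A small Python check, using f(x) = x² − 2 whose root √2 is known exactly (the setup is ours, not from the text):

```python
import math

# Newton-Raphson on f(x) = x^2 - 2; the error should roughly square at each step.
f = lambda x: x * x - 2.0
df = lambda x: 2.0 * x
exact = math.sqrt(2.0)

x = 1.0
errors = []
for _ in range(4):
    x = x - f(x) / df(x)
    errors.append(abs(x - exact))

# Each error is below the square of the previous one, since here
# k = f''(root) / (2 f'(root)) = 1 / (2 sqrt(2)) < 1.
for e_prev, e_next in zip(errors, errors[1:]):
    assert e_next < e_prev ** 2
```

Printing `errors` shows the number of correct digits roughly doubling per iteration, which is exactly what Equation (1.14) predicts.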

1.2.4 Secant Method


The secant method can be considered as a discretized form of the Newton-Raphson
method. The iterative formula for this method is obtained from the Newton-Raphson
formula on replacing the derivative f ′(x0) by the gradient of the chord joining two
neighbouring points x0 and x1 on the curve y = f (x).
Thus, we have
f ′(x1) ≈ [f (x1) − f (x0)]/(x1 − x0)
The iterative formula is then given by,
x2 = x1 − f (x1)(x1 − x0)/[f (x1) − f (x0)]
This can be rewritten as,
x2 = [x0 f (x1) − x1 f (x0)] / [f (x1) − f (x0)]
The secant formula in general form is,
xn = xn−1 − f (xn−1)(xn−1 − xn−2) / [f (xn−1) − f (xn−2)]

The iterative formula is equivalent to the one for Regula-Falsi methods. The
distinction between the secant method and Regula-Falsi methods lies in the fact that,
unlike in Regula-Falsi methods, the two initial guess values need not bracket a root,
and the bracketing of the root is not checked during successive iterations in the
secant method. Thus, the secant method may not always give rise to a convergent sequence
to find the root. The geometrical interpretation of the method is shown in Figure
1.5.


Fig. 1.5 Secant Method


Algorithm: To find a root of f (x) = 0, by secant method.
Step 1: Define f (x).
Step 2: Input x0, x1, error, maxit [x0, x1, are initial guess values, error is the
prescribed precision and maxit is the maximum number of iterations
allowed].
Step 3: Set i = 1
Step 4: Compute f0 = f (x0)
Step 5: Compute f1 = f (x1)
Step 6: Compute x2 = (x0 f1 – x1 f0)/(f1 – f0)
Step 7: Set i = i + 1
Step 8: Compute accy = |x2 – x1| / |x1|
Step 9: Check if accy < error, then go to Step 14
Step 10: Check if i ≥ maxit, then go to Step 16
Step 11: Set x0 = x1

Step 12: Set x1 = x2
Step 13: Go to Step 6
Step 14: Print “Root =”, x2
Step 15: Go to Step 17
Step 16: Print ‘iterations do not converge’
Step 17: Stop
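The secant algorithm above can be sketched in Python as follows (function name ours); here it recomputes the root of Example 1.7:

```python
def secant(f, x0, x1, error=1e-6, maxit=50):
    """Secant iteration using x2 = (x0 f1 - x1 f0) / (f1 - f0)."""
    f0, f1 = f(x0), f(x1)
    for _ in range(maxit):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        if abs(x2 - x1) < error * abs(x1):
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    raise RuntimeError("iterations do not converge")

# Root of x^3 - 8x - 4 = 0, starting from the guesses 2 and 3
root = secant(lambda x: x**3 - 8*x - 4, 2.0, 3.0)
print(round(root, 4))  # 3.0514
```

Note that, true to the text, the two guesses are never checked for bracketing; a poor pair of starting values can make the iteration diverge.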

1.2.5 Regula-Falsi Methods


Regula-Falsi methods is also a bracketing method. As in bisection method, we start
the computation by first finding an interval (a, b) within which a real root lies. Writing
a = x0 and b = x1, we compute f (x0) and f (x1), and check if f (x0) and f (x1) are of
opposite signs. For determining the approximate root x2, we find the point of
intersection of the chord joining the points (x0, f (x0)) and (x1, f (x1)) with the x-axis,
i.e., the curve y = f (x) is replaced by the chord given by,
y − f (x0) = {[f (x1) − f (x0)]/(x1 − x0)} (x − x0)                    (1.16)

Thus, by putting y = 0 and x = x2 in Equation (1.16), we get

x2 = x0 − f (x0)(x1 − x0)/[f (x1) − f (x0)]                    (1.17)

Next, we compute f (x2) and determine the interval in which the root lies in the
following manner. If (a) f (x2) and f (x1) are of opposite signs, then the root lies in
(x2, x1). Otherwise if (b) f (x0) and f (x2) are of opposite signs, then the root lies in
(x0, x2). The next approximate root is determined by changing x0 by x2 in the first
case and x1 by x2 in the second case.
The aforesaid process is repeated until the root is computed to the desired accuracy
ε, i.e., the condition
| f (x2) | < ε
should be satisfied.
Regula-Falsi methods can be geometrically interpreted by the following Figure
1.6.

Fig. 1.6 Regula-Falsi Methods


Algorithm: Computing root of an equation by Regula-Falsi methods.
Step 1: Define f (x)
Step 2: Read epsilon, the desired accuracy
Step 3: Read maxit, the maximum number of iterations
Step 4: Read x0, x1, two initial guess values of root
Step 5: Compute f0 = f (x0)
Step 6: Compute f1 = f (x1)
Step 7: Check if f0 f1 < 0, then go to the next step
else go to Step 4
Step 8: Compute x2 = (x0 f1 – x1 f0) / (f1 – f0)
Step 9: Compute f2 = f (x2)
Step 10: Check if |f2| < epsilon, then go to Step 18
Step 11: Check if f2 f0 < 0 then go to the next step
else go to Step 15
Step 12: Set x1 = x2
Step 13: Set f1 = f2
Step 14: Go to Step 7
Step 15: Set x0 = x2
Step 16: Set f0 = f2
Step 17: Go to Step 7
Step 18: Write 'root =', x2, f2
Step 19: End
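The Regula-Falsi algorithm above, sketched in Python (function name ours):

```python
def regula_falsi(f, x0, x1, epsilon=1e-4, maxit=100):
    """False-position method; the guesses (x0, x1) must bracket a root."""
    f0, f1 = f(x0), f(x1)
    if f0 * f1 >= 0:
        raise ValueError("initial guesses do not bracket a root")
    for _ in range(maxit):
        x2 = (x0 * f1 - x1 * f0) / (f1 - f0)
        f2 = f(x2)
        if abs(f2) < epsilon:
            return x2
        if f2 * f0 < 0:
            x1, f1 = x2, f2   # root lies in (x0, x2)
        else:
            x0, f0 = x2, f2   # root lies in (x2, x1)
    raise RuntimeError("iterations do not converge")

# Positive root of x^3 - 3x - 5 = 0 in [2, 3] (Example 1.13)
root = regula_falsi(lambda x: x**3 - 3*x - 5, 2.0, 3.0)
print(round(root, 3))  # 2.279
```

Unlike the secant sketch, the bracket is maintained at every step, so convergence is guaranteed, though it is only linear.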
Example 1.13: Use Regula-Falsi methods to compute the positive root of x3 – 3x
– 5 = 0, correct to four significant figures.
Solution: First we find the interval in which the root lies. We observe that f (2) =
–3 and f (3) = 13. Thus, the root lies in [2, 3]. For using the Regula-Falsi methods,
we use the formula,
f ( x0 )
x2 x0  ( x1  x0 )
f ( x1 )  f ( x0 )
With x0 = 2, and x1 = 3, we have
x2 = 2 − (−3)/[13 − (−3)] × (3 − 2) = 2 + 3/16 = 2.1875
Again, since f (x2) = f (2.1875) = –1.095, we consider the interval [2.1875, 3].
The next approximation is x3 = 2.2461. Also, f (x3) = – 0.4128. Hence, the root lies in
[2.2461, 3].
Repeating the iterations, we get
x4 = 2.2684, f (x4) = – 0.1328

x5 = 2.2748, f (x5) = – 0.0529
x6 = 2.2773, f (x6) = – 0.0316
x7 = 2.2788, f (x7) = – 0.0028
x8 = 2.2792, f (x8) = – 0.0022
The root correct to four significant figures is 2.279.

1.2.6 Roots of Polynomial Equations


Polynomial equations with real coefficients have some important characteristics
regarding their roots. A polynomial equation of degree n is of the form pn(x) = an xⁿ
+ an−1 xⁿ⁻¹ + an−2 xⁿ⁻² + ... + a2 x² + a1 x + a0 = 0.
(i) A polynomial equation of degree n has exactly n roots.
(ii) Complex roots occur in pairs, i.e., if α + iβ is a root of pn(x) = 0, then
α − iβ is also a root.
(iii) Descartes' rule of signs can be used to determine the number of possible
real roots (positive or negative).
(iv) If x1, x2, ..., xn are all real roots of the polynomial equation, then we can
express pn(x) uniquely as,
pn(x) = an (x − x1)(x − x2) ... (x − xn)
(v) pn(x) has a quadratic factor for each pair of complex conjugate roots. Let
α + iβ and α − iβ be the roots; then x² − 2αx + (α² + β²) is the
quadratic factor.
(vi) There is a special method, known as Horner’s method of synthetic
substitution, for evaluating the values of a polynomial and its derivatives for
a given x.

1.2.7 Descartes' Rule


The number of positive real roots of a polynomial equation is equal to the number of
changes of sign in pn(x), written with descending powers of x, or less by an even
number.
Consider for example, the polynomial equation,
x⁵ − 3x⁴ + 2x² + x − 1 = 0
Clearly there are three changes of sign and hence the number of positive real
roots is three or one. Thus, it must have a real root. In fact, every polynomial equation
of odd degree has a real root.
We can also use Descartes' rule to determine the number of negative roots by
finding the number of changes of signs in pn(–x). For the above equation,
pn(−x) = −x⁵ − 3x⁴ + 2x² − x − 1, and it has two changes of sign. Thus,
it has either two negative real roots or none.
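Counting sign changes is easy to automate. A small Python sketch (the helper names are ours); coefficients are listed from the highest power down:

```python
def sign_changes(coeffs):
    """Number of sign changes in a coefficient sequence, ignoring zero coefficients."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

def descartes_bounds(coeffs):
    """(max positive roots, max negative roots) by Descartes' rule of signs."""
    n = len(coeffs) - 1
    # p(-x): flip the sign of every odd-power coefficient
    neg = [c if (n - i) % 2 == 0 else -c for i, c in enumerate(coeffs)]
    return sign_changes(coeffs), sign_changes(neg)

# For p(x) = x^3 - 3x - 5: one sign change in p(x), two in p(-x),
# so at most one positive and at most two negative real roots.
print(descartes_bounds([1, 0, -3, -5]))  # (1, 2)
```

The rule gives only an upper bound of the stated parity; it cannot by itself decide how many of the possible roots are actually real.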

Check Your Progress
1. The roots of an equation are computed in how many stages?
2. Define tabulation method.
3. State the procedure of bisection method.
4. How is the order of convergence of an iterative process determined?
5. State a property of Newton-Raphson methods.

1.3 CURVE FITTING


In this section, we consider the problem of approximating an unknown function
whose values, at a set of points, are generally known only empirically and are, thus
subject to inherent errors, which may sometimes be appreciably larger in many
engineering and scientific problems. In these cases, it is required to derive a functional
relationship using certain experimentally observed data. Here, the observed data
may have inherent or round-off errors, which are serious, making polynomial
interpolation for approximating the function inappropriate. In polynomial interpolation
the truncation error in the approximation is considered to be important. But when
the data contains round-off errors or inherent errors, interpolation is not appropriate.
The subject of this section is curve fitting by least square approximation. Here,
we consider a technique by which noisy function values are used to generate a
smooth approximation. This smooth approximation can then be used to approximate
the derivative more accurately than with exact polynomial interpolation.
There are situations where interpolation for approximating function may not be
efficacious procedure. Errors will arise when the function values f (xi), i = 1, 2, …,
n are observed data and not exact. In this case, if we use the polynomial interpolation,
then it would reproduce all the errors of observation. In such situations one may
take a large number of observed data, so that statistical laws in effect cancel the
errors introduced by inaccuracies in the measuring equipment. The approximating
function is then derived, such that the sum of the squared deviation between the
observed values and the estimated values are made as small as possible.
Mathematically, the problem of curve fitting or function approximation may be
stated as follows:
To find a functional relationship y = g(x), that relates the set of observed data
values Pi(xi, yi), i = 1, 2,..., n as closely as possible, so that the graph of y = g(x)
goes near the data points Pi’s though not necessarily through all of them.
The first task in curve fitting is to select a proper form of an approximating
function g(x), containing some parameters, which are then determined by minimizing
the total squared deviation.

For example, g(x) may be a polynomial of some degree or an exponential or
logarithmic function. Thus g(x) may be any of the following:
(i) g(x) = α + βx            (ii) g(x) = α + βx + γx²
(iii) g(x) = α e^(βx)        (iv) g(x) = α x^β
(v) g(x) = α + β log x
Here α, β, γ are parameters which are to be evaluated so that the curve y =
g(x) fits the data well. A measure of how well the curve fits is called the goodness
of fit.
In the case of least square fit, the parameters are evaluated by solving a system
of normal equations, derived from the conditions to be satisfied so that the sum of
the squared deviations of the estimated values from the observed values, is minimum.

1.3.1 Method of Least Squares


Let (x1, f1), (x2, f2),..., (xn, fn) be a set of observed values and g(x) be the approximating
function. We form the sums of the squares of the deviations of the observed values
fi from the estimated values g (xi),
i.e., S = Σ [fi − g(xi)]², the sum taken over i = 1, 2, ..., n.                    (1.18)

The function g(x) may have some parameters α, β, γ. In order to determine
these parameters we have to form the necessary conditions for S to be minimum,
which are
∂S/∂α = 0,  ∂S/∂β = 0,  ∂S/∂γ = 0                    (1.19)

These equations are called normal equations, solving which we get the parameters
for the best approximate function g(x).

Curve Fitting by a Straight Line: Let g(x) = α + βx be the straight line which
fits a set of observed data points (xi, yi), i = 1, 2, ..., n.
Let S be the sum of the squares of the deviations g(xi) – yi, i = 1, 2, ..., n, given
by,

S = Σ (α + βxi − yi)²                    (1.20)

We now employ the method of least squares to determine α and β so that S will
be minimum. The normal equations are,

∂S/∂α = 2 Σ (α + βxi − yi) = 0                    (1.21)

∂S/∂β = 2 Σ (α + βxi − yi) xi = 0                    (1.22)

These conditions give,
nα + β Σ xi = Σ yi
α Σ xi + β Σ xi² = Σ xi yi

Where, x̄ = (1/n) Σ xi and ȳ = (1/n) Σ yi.

Solving,
β = [n Σ xi yi − Σ xi Σ yi] / [n Σ xi² − (Σ xi)²],  α = ȳ − β x̄
Algorithm: Fitting a straight line y = a + bx.


Step 1: Read n [n being the number of data points]
Step 2: Initialize : sum x = 0, sum x2 = 0, sum y = 0, sum xy = 0
Step 3: For j = 1 to n compute
Begin
Read data xj, yj
Compute sum x = sum x + xj
Compute sum x2 = sum x2 + xj × xj
Compute sum y = sum y + yj
Compute sum xy = sum xy + xj × yj
End
Step 4: Compute b = (n × sum xy – sum x × sum y)/ (n × sum x2 – (sum x)2)
Step 5: Compute x bar = sum x / n
Step 6: Compute y bar = sum y / n
Step 7: Compute a = y bar – b × x bar
Step 8: Write a, b
Step 9: For j = 1 to n
Begin
Compute y estimate = a + b × xj
write xj, yj, y estimate
End
Step 10: Stop
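The algorithm above, sketched in Python (function name ours) and checked on a small data set:

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a + b*x through the points (xs[i], ys[i])."""
    n = len(xs)
    sx = sum(xs)
    sy = sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = sy / n - b * sx / n
    return a, b

xs = [4, 6, 8, 10, 12]
ys = [13.72, 12.90, 12.01, 11.14, 10.31]
a, b = fit_line(xs, ys)
print(round(a, 3), round(b, 3))  # 15.448 -0.429
```

These are the observations used again in Example 1.14 below, and the fitted line y = 15.448 − 0.429x agrees with the hand computation there.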
Curve Fitting by a Quadratic (A Parabola): Let g(x) = a + bx + cx2, be the
approximating quadratic to fit a set of data (xi, yi), i = 1, 2, ..., n. Here the
parameters are to be determined by the method of least squares, i.e., by minimizing
the sum of the squares of the deviations given by,
S = Σ (a + bxi + cxi² − yi)²                    (1.23)

Thus the normal equations,
∂S/∂a = 0,  ∂S/∂b = 0,  ∂S/∂c = 0                    (1.24)
are as follows:
Σ yi = na + b Σ xi + c Σ xi²
Σ xi yi = a Σ xi + b Σ xi² + c Σ xi³                    (1.25)
Σ xi² yi = a Σ xi² + b Σ xi³ + c Σ xi⁴

These equations can be rewritten as,
s01 = na + b s1 + c s2
s11 = a s1 + b s2 + c s3                    (1.26)
s21 = a s2 + b s3 + c s4

Where, s1 = Σ xi, s2 = Σ xi², s3 = Σ xi³, s4 = Σ xi⁴,
s01 = Σ yi, s11 = Σ xi yi, s21 = Σ xi² yi                    (1.27)

It is clear that the normal equations form a system of linear equations in the
unknown parameters a, b, c. The computation of the coefficients of the normal
equations can be made in a tabular form for desk computations as shown below.

 i     xi    yi    xi²    xi³    xi⁴    xi yi    xi² yi
 1     x1    y1    x1²    x1³    x1⁴    x1 y1    x1² y1
 2     x2    y2    x2²    x2³    x2⁴    x2 y2    x2² y2
 ...   ...   ...   ...    ...    ...    ...      ...
 n     xn    yn    xn²    xn³    xn⁴    xn yn    xn² yn
Sum    s1    s01   s2     s3     s4     s11      s21

The system of linear equations can be solved by the Gaussian elimination method. It
may be noted that the number of normal equations is equal to the number of unknown
parameters.

Example 1.14: Find the straight line fitting the following data:

xi    4      6      8      10     12
yi    13.72  12.90  12.01  11.14  10.31

Solution: Let y = a + bx, be the straight line which fits the data. We have the
normal equations for determining a and b, where

S = Σ (yi − a − bxi)², the sum running over i = 1, ..., 5.

Thus, Σ yi − 5a − b Σ xi = 0

And, Σ xi yi − a Σ xi − b Σ xi² = 0

The coefficients are computed in the table below.

xi yi xi2 xi yi
4 13.72 16 54.88
6 12.90 36 77.40
8 12.01 64 96.08
10 11.14 100 111.40
12 10.31 144 123.72
Sum 40 60.08 360 463.48

Thus the normal equations are,

5a + 40b − 60.08 = 0
40a + 360b − 463.48 = 0
Solving these two equations we obtain,
a = 15.448, b = −0.429
Thus, y = g(x) = 15.448 – 0.429 x, is the straight line fitting the data.
Example 1.15: Use the method of least square approximation to fit a straight line
to the following observed data:

xi    60    61    62    63    64
yi    40    42    48    52    55

Solution: Let the straight line fitting the data be y = a + bx. The data values being
large, we can use a change in variable by substituting u = x – 62 and v = y – 48.

Let v = A + Bu be a straight line fitting the transformed data, where the normal
equations for A and B are,

Σ vi = 5A + B Σ ui

Σ ui vi = A Σ ui + B Σ ui²
The computation of the various sums is given in the table below.
xi    yi    ui    vi    ui vi    ui²
60    40    −2    −8    16       4
61    42    −1    −6    6        1
62    48     0     0    0        0
63    52     1     4    4        1
64    55     2     7    14       4
Sum          0    −3    40       10

Thus the normal equations are,
5A = −3,  10B = 40
This gives A = −0.6, B = 4, i.e., the line v = −0.6 + 4u.

Or, 4u − v − 0.6 = 0
Transforming we get the line,
20 (x – 62) – 5 (y – 48) – 3 = 0
Or, 20x – 5y – 1003 = 0
Curve Fitting with an Exponential Curve: We consider a two parameter
exponential curve as,
y = a e^(−bx)                    (1.28)
For determining the parameters, we can apply the principle of least squares by
first using a transformation,
z = log y                    (1.29)
so that Equation (1.28) is rewritten as,
z = log a − bx                    (1.30)
Thus, we have to fit a linear curve of the form z = α + βx with z–x variables
and then get the parameters a and b as,
a = e^α,  b = −β                    (1.31)

Thus proceeding as in linear curve fitting,

β = [n Σ xi zi − Σ xi Σ zi] / [n Σ xi² − (Σ xi)²]                    (1.32)

And, α = z̄ − β x̄, where z̄ = (1/n) Σ zi and x̄ = (1/n) Σ xi                    (1.33)

After computing α and β, we can determine a and b from Equation (1.31).
Finally, the exponential curve fitting the data set is given by Equation (1.28).
Algorithm: To fit a straight line for a given set of data points by least square error
method.
Step 1: Read the number of data points, i.e., n
Step 2: Read values of data-points, i.e., Read (xi, yi) for i = 1, 2,..., n
Step 3: Initialize the sums to be computed for the normal equations,
i.e., sx = 0, sx2 = 0, sy = 0, syx = 0
Step 4: Compute the sums, i.e., For i = 1 to n do
Begin
sx sx  xi
sx 2
sx2  xi2
sy sy  yi
syx syx  xi yi
End
Step 5: Solve the normal equations, i.e., solve for a and b of the line y = a + bx
Compute d n sx 2  sx sx
b (n sxy  sy sx) / d
xbar sx / n
ybar sy / n
a ybar  b x bar
Step 6: Print values of a and b
Step 7: Print a table of values of
Step 8: Stop
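The exponential fit of Equations (1.28) to (1.31) reduces to exactly this straight-line fit, applied in the (x, log y) plane. A minimal Python sketch (function name ours):

```python
import math

def fit_exponential(xs, ys):
    """Fit y = a * exp(-b * x) via the least-squares line z = log(a) - b*x, z = log y."""
    zs = [math.log(y) for y in ys]
    n = len(xs)
    sx, sz = sum(xs), sum(zs)
    sxx = sum(x * x for x in xs)
    sxz = sum(x * z for x, z in zip(xs, zs))
    beta = (n * sxz - sx * sz) / (n * sxx - sx * sx)
    alpha = sz / n - beta * sx / n
    return math.exp(alpha), -beta   # a = e^alpha, b = -beta

# Exact data from y = 3 e^(-0.5 x): the fit should recover a = 3, b = 0.5
xs = [0, 1, 2, 3, 4]
ys = [3 * math.exp(-0.5 * x) for x in xs]
a, b = fit_exponential(xs, ys)
print(round(a, 6), round(b, 6))  # 3.0 0.5
```

Note that minimizing the squared error in z = log y is not identical to minimizing it in y itself; the transformation trades a nonlinear problem for a linear one at the cost of reweighting the residuals.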
Algorithm: To fit a parabola y a  bx  cx 2 , for a given set of data points by least
square error method.
Step 1: Read n, the number of data points

Step 2: Read (xi, yi) for i = 1, 2,..., n, the values of data points
Step 3: Initialize the sums to be computed for the normal equations,
i.e., sx = 0, sx2 = 0, sx3 = 0, sx4 = 0, sy = 0, sxy = 0, sx2y = 0.
Step 4: Compute the sums, i.e., For i = 1 to n do
Begin
sx = sx + xi
x2 = xi × xi
sx2 = sx2 + x2
sx3 = sx3 + xi × x2
sx4 = sx4 + x2 × x2
sy = sy + yi
sxy = sxy + xi × yi
sx2y = sx2y + x2 × yi
End
Step 5: Form the coefficients {aij } matrix of the normal equations, i.e.,

a11 n, a21 sx, a31 sx 2


a12 sx, a22 sx 2 , a32 sx3
a13 sx 2 , a23 sx 3 , a33 sx 4

Step 6: Form the constant vector of the normal equations.


b1 = sy, b2 = sxy, b3 = sx2y
Step 7: Solve the normal equations by the Gauss-Jordan method
a12 a12 / a11 , a13 a13 / a11 , b1 b1 / a11
a 22 a 22  a 21a12 , a23 a 23  a21a13
b2 b2  b1a21

a32 a32  a31a12


a33 a33  a31a13
b3 b3  b1a31
a 23 a23 / a 22
b2 b2 / a 22
a33 a33  a23 a32
b3 b3  a32 b2
c b3 / a33
b b2  c a 23
a b1  b a12  c a13

Step 8: Print values of a, b, c (the coefficients of the parabola)
Step 9: Print the table of values of xk, yk and ypk, where ypk = a + b xk + c xk²
Step 10: Stop.
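The parabola-fitting procedure can be sketched compactly in Python; here the 3×3 normal equations are solved by Gaussian elimination (function name ours, no pivoting, which is adequate for small well-scaled data):

```python
def fit_parabola(xs, ys):
    """Least-squares parabola y = a + b*x + c*x^2 via the 3x3 normal equations."""
    n = len(xs)
    s1 = sum(xs)
    s2 = sum(x**2 for x in xs)
    s3 = sum(x**3 for x in xs)
    s4 = sum(x**4 for x in xs)
    s01 = sum(ys)
    s11 = sum(x * y for x, y in zip(xs, ys))
    s21 = sum(x * x * y for x, y in zip(xs, ys))
    # Augmented matrix of the normal equations (1.26)
    A = [[n, s1, s2, s01],
         [s1, s2, s3, s11],
         [s2, s3, s4, s21]]
    # Forward elimination
    for i in range(2):
        for j in range(i + 1, 3):
            m = A[j][i] / A[i][i]
            A[j] = [aj - m * ai for aj, ai in zip(A[j], A[i])]
    # Back substitution
    c = A[2][3] / A[2][2]
    b = (A[1][3] - A[1][2] * c) / A[1][1]
    a = (A[0][3] - A[0][1] * b - A[0][2] * c) / A[0][0]
    return a, b, c

# Exact data from y = 1 + 2x + 3x^2: the fit should recover (1, 2, 3)
coeffs = fit_parabola([0, 1, 2, 3, 4], [1, 6, 17, 34, 57])
print([round(v, 6) for v in coeffs])  # [1.0, 2.0, 3.0]
```

Fed with data that lie exactly on a parabola, the least-squares fit reproduces the generating coefficients, which is a convenient sanity check for the normal equations.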

Check Your Progress


6. Define secant method.
7. Give the procedure of Regula-Falsi methods.
8. When does an error arise in function interpolation?
9. How is approximating function found in the method of least squares?

1.4 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. A root of an equation is usually computed in two stages. First, we find the


location of a root in the form of a crude approximation of the root. Next we
use an iterative technique for computing a better value of the root to a
desired accuracy in successive approximations/computations.
2. In the tabulation method, a table of values of f (x) is made for values of x in
a particular range. Then, we look for the change in sign in the values of f (x)
for two consecutive values of x. We conclude that a real root lies between
these values of x.
3. The bisection method involves successive reduction of the interval in which
an isolated root of an equation lies. The sub-interval in which the root lies is
again bisected and the above process is repeated until the length of the sub-
interval is less than the desired accuracy.
4. The order of convergence of an iterative process is determined in terms of
the errors en and en + 1 in successive iterations.
5. The Newton-Raphson method is a widely used numerical method for finding a
root of an equation f (x) = 0, to the desired accuracy. It is an iterative
method which has a faster rate of convergence and is very useful when the
expression for the derivative f ′(x) is not complicated.
6. Secant method can be considered as a discretized form of Newton-Raphson
methods. The iterative formula for this method is obtained from formula of
Newton-Raphson methods on replacing the derivative by the gradient of
the chord joining two neighbouring points x0 and x1 on the curve y = f (x).

7. Regula-Falsi methods is also a bracketing method. As in bisection method,
we start the computation by first finding an interval (a, b) within which a real
root lies. Writing a = x0 and b = x1, we compute f(x0) and f(x1) and check
if f(x0) and f(x1) are of opposite signs. For determining the approximate
root x2, we find the point of intersection of the chord joining the points (x0,
f(x0)) and (x1, f(x1)) with the x-axis, i.e., the curve y = f(x) is replaced by
the chord given by,
y − f(x0) = {[f(x1) − f(x0)]/(x1 − x0)} (x − x0)
8. There are situations where interpolation for approximating function may not
be efficacious procedure. Errors will arise when the function values
f(xi), i = 1, 2, …, n are observed data and not exact.
9. Let (x1, f1), (x2, f2), ..., (xn, fn) be a set of observed values and g(x) be the
approximating function. We form the sums of the squares of the deviations
of the observed values fi from the estimated values g(xi),

i.e., S = Σ [fi − g(xi)]²

The function g(x) may have some parameters α, β, γ. In order to determine
these parameters we have to form the necessary conditions for S to be
minimum, which are
∂S/∂α = 0,  ∂S/∂β = 0,  ∂S/∂γ = 0
These equations are called normal equations, solving which we get the
parameters for the best approximate function g(x).

1.5 SUMMARY

• Numerical methods can be employed for computing the roots of an equation
of the form f(x) = 0, where f(x) is a reasonably well-behaved function of a
real variable x.
• The location or crude approximation of a real root is determined by the use
of any one of the following methods, (a) graphical and (b) tabulation.
• In general, the roots of an equation can be computed using bisection and
simple iteration methods.
• The bisection method is also termed as bracketing method.

• The Newton-Raphson method is a widely used numerical method for finding a
root of an equation f (x) = 0, to the desired accuracy.
• The secant method can be considered as a discretized form of the Newton-Raphson
method. The iterative formula for this method is obtained from the Newton-Raphson
formula on replacing the derivative by the gradient of the chord joining two
neighbouring points x0 and x1 on the curve y = f (x).
• The Regula-Falsi method is also a bracketing method.
• Horner's method of synthetic substitution is used for evaluating the values
of a polynomial and its derivatives for a given x.
• Descartes' rule is used to determine the number of negative roots by finding
the number of changes of signs in pn(–x).
• By using the method of least squares, noisy function values are used to
generate a smooth approximation. This smooth approximation can then be
used to approximate the derivative more accurately than with exact
polynomial interpolation.

1.6 KEY WORDS

• Tabulation method: In the tabulation method, a table of values of f (x) is
made for values of x in a particular range.
• Bisection method: It involves successive reduction of the interval in which
an isolated root of an equation lies.
• Order of convergence: An iterative process is said to have kth order
convergence if lim (n → ∞) |εn+1| / |εn|^k = M, where M is a finite number.
• Newton-Raphson method: It is a widely used numerical method for
finding a root of an equation f (x) = 0, to the desired accuracy.

1.7 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. What are isolated roots?
2. What is crude approximation in graphical method?
3. Why is bisection method also termed as bracketing method?
4. What is the order of convergence of the Newton-Raphson methods?

5. State the similarity between secant method and Regula-Falsi methods.
6. How many roots are there in a polynomial equation of degree n?
7. How many positive real roots are there in a polynomial equation?
Long-Answer Questions

1. Use graphical method to find the location of a real root of the equation
x3 + 10x – 15 = 0.
2. Draw the graph of the function f (x) = cos x – x, in the range [0, π/2] and
find the location of the root of the equation f (x) = 0.
3. Compute the root of the equation x3 – 9x + 1 = 0 which lies between 2 and
3 correct upto three significant digits using bisection method.
4. Compute the root of the equation x3 + x2 – 1 = 0, near 1, by the iterative
method correct upto two significant digits.
5. Compute using Newton-Raphson methods the root of the equation ex = 4x,
near 2, correct upto four significant digits.
6. Find the real root of x log10x – 1.2 = 0 correct upto four decimal places
using Regula-Falsi methods.
7. Use the method of least squares to fit a straight line for the following data
points:

1.8 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical


Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.

Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
NOTES
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

UNIT 2 SYSTEM OF LINEAR EQUATIONS
Structure
2.0 Introduction
2.1 Objectives
2.2 System of Linear Equations
2.2.1 Classical Methods
2.2.2 Elimination Methods
2.2.3 Iterative Methods
2.2.4 Computation of the Inverse of a Matrix by using Gaussian Elimination
Method
2.3 Triangularisation Method
2.4 Answers to Check Your Progress Questions
2.5 Summary
2.6 Key Words
2.7 Self Assessment Questions and Exercises
2.8 Further Readings

2.0 INTRODUCTION

In mathematics, a system of linear equations is a collection of linear equations


involving the same set of variables. When you try to calculate the solution of
a system of linear equations, you will arrive at one of three distinct cases: the
system has a unique solution, the system has no solution, or the system has infinitely
many solutions. There are various methods to obtain the solution of the linear equations.
The two methods discussed in this unit are elimination methods and iterative
methods.
In this unit, you will study about the system of linear equations, matrix inversion
method, Cramer's rule, Gauss elimination method, Gauss-Jordan elimination
method, and triangularisation method.

2.1 OBJECTIVES

After going through this unit, you will be able to:


• Explain the system of linear equations
• Understand Cramer's rule
• Explain the Gaussian elimination and Gauss-Jordan elimination methods
• Describe the Jacobi and Gauss-Seidel iteration methods
• Compute the inverse of a matrix using Gaussian elimination method
2.2 SYSTEM OF LINEAR EQUATIONS

Many engineering and scientific problems require the solution of a system of linear
equations. We consider a system of m linear equations in n unknowns written as,
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + ... + a_{2n} x_n = b_2
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + ... + a_{3n} x_n = b_3
...          ...          ...
a_{m1} x_1 + a_{m2} x_2 + a_{m3} x_3 + ... + a_{mn} x_n = b_m    (2.1)
Using matrix notation, we can write the above system of equations in the form,
Ax = b (2.2)
where A is an m × n matrix and x, b are column vectors with n and m components
respectively, given by,

    [ a_{11}  a_{12}  a_{13}  ...  a_{1n} ]        [ x_1 ]        [ b_1 ]
    [ a_{21}  a_{22}  a_{23}  ...  a_{2n} ]        [ x_2 ]        [ b_2 ]
A = [ a_{31}  a_{32}  a_{33}  ...  a_{3n} ] ,  x = [ x_3 ] ,  b = [ b_3 ]    (2.3)
    [  ...     ...     ...    ...   ...   ]        [ ... ]        [ ... ]
    [ a_{m1}  a_{m2}  a_{m3}  ...  a_{mn} ]        [ x_n ]        [ b_m ]

The system of equations is termed as a homogeneous one, if all the elements in


the column vector b are zero. Otherwise, the system is termed as a non-homogeneous
one.
The homogeneous system has a non-trivial solution, if A is a square matrix, i.e., m
= n, and the determinant of the coefficient matrix, i.e., |A| is equal to zero.
The solution of the non-homogeneous system exists, if the rank of the coefficient
matrix A is equal to the rank of the augmented matrix [A : b] given by,
          [ a_{11}  a_{12}  a_{13}  ...  a_{1n}  b_1 ]
          [ a_{21}  a_{22}  a_{23}  ...  a_{2n}  b_2 ]
[A : b] = [ a_{31}  a_{32}  a_{33}  ...  a_{3n}  b_3 ]
          [  ...     ...     ...    ...   ...    ... ]
          [ a_{m1}  a_{m2}  a_{m3}  ...  a_{mn}  b_m ]
Further, a unique non-trivial solution of the system given by Equation (2.1) exists
when m = n and the determinant |A| ≠ 0, i.e., the coefficient matrix is a square
non-singular matrix. The computation of the solution of a system of n linear equations in
n unknowns can be made by any one of the two classical methods known as the
Cramer’s rule and the matrix inversion method. But these two methods are not
suitable for numerical computation, since both the methods require the evaluation of
determinants. There are two types of efficient numerical methods for computing
solution of systems of equations. Some are direct methods and others are iterative
in nature. Among the direct methods, Gaussian elimination method is most commonly
used. Among the iterative methods, Gauss-Seidel iteration method is very commonly
used.

2.2.1 Classical Methods


Cramer’s Rule: Let D = |A| be the determinant of the coefficient matrix A and Di
be the determinant obtained by replacing the ith column of D by the column vector
b. The Cramer’s rule gives the solution vector x by the equations,

x_i = D_i / D,  i = 1, 2, ..., n    (2.4)

Thus, we have to compute (n + 1) determinants of order n.


Example 2.1: Use Cramer’s rule to solve the following system:

ª 2 3 1 º ª x1 º ª1 º
« 3 1 1» « x » «2»
« »« 2» « »
«¬1 1 1»¼ «¬ x3 »¼ «¬1 »¼

Solution: The determinant D of the coefficient matrix is,

ª2  3 1 º
D «3 1  1» 2(1  1)  3(1  3)  (3  1) 14
« »
«¬1  1  1»¼

The determinants D1, D2 and D3 are,

1 3 1
D1 2 1 1 ( 1  1)  3(1  2)  (2  1) 8
1 1 1
2 1 1
D2 3 2 1 2(2  1)  (1  3)  (3  2) 1
1 1 1
2 3 1
D3 3 1 2 2(1  2)  3(2  3)  (3  1) 5
1 1 1
Hence by Cramer’s rule, we get
D1 8 4 D2 1 D3 5
x1 , x2 , x3 
D 14 7 D 14 D 14
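The rule can be sketched in Python. The helper names below (det3, cramer3) are our own, and exact rational arithmetic via fractions is a convenience, not something the text prescribes:

```python
from fractions import Fraction

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def cramer3(A, b):
    """Cramer's rule for a 3x3 system: x_i = D_i / D."""
    D = det3(A)
    if D == 0:
        raise ValueError("coefficient matrix is singular")
    xs = []
    for i in range(3):
        Ai = [row[:] for row in A]      # D_i: replace the i-th column by b
        for r in range(3):
            Ai[r][i] = b[r]
        xs.append(Fraction(det3(Ai), D))
    return xs

A = [[2, -3, 1], [3, 1, -1], [1, -1, -1]]
b = [1, 2, 1]
print(cramer3(A, b))  # → [Fraction(4, 7), Fraction(-1, 14), Fraction(-5, 14)]
```

Note that the rule evaluates n + 1 determinants of order n, which is why it is not used for large systems.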

2.2.2 Elimination Methods


Matrix Inversion Method: Let A^{-1} be the inverse of the matrix A defined by,

A^{-1} = (Adj A) / |A|    (2.5)
where Adj A is the adjoint matrix obtained by transposing the matrix of the cofactors
of the elements a_{ij} of the determinant of the coefficient matrix A.
Thus,

Adj A = [A_{ij}]^T    (2.6)

A_{ij} being the cofactor of a_{ij}.


Then the solution of the system is given by,
x = A^{-1} b    (2.7)
Note: If the rank of the coefficient matrix of a system of linear equations in n
unknowns is less than n, then there are more unknowns than the number of
independent equations. In such a case, the system has an infinite set of solutions.
Example 2.2: Solve the given system of equations by matrix inversion method:

[ 1   1   1 ] [ x_1 ]   [ 4 ]
[ 2  -1   3 ] [ x_2 ] = [ 1 ]
[ 3   2  -1 ] [ x_3 ]   [ 1 ]

Solution: For solving the system of equations by matrix inversion method we first
compute the determinant of the coefficient matrix,

      | 1   1   1 |
|A| = | 2  -1   3 | = 13
      | 3   2  -1 |

Since |A| ≠ 0, the matrix A is non-singular and A^{-1} exists. We now compute the
adjoint matrix,

        [ -5   3   4 ]
Adj A = [ 11  -4  -1 ]
        [  7   1  -3 ]

Hence, the solution by matrix inversion method gives,

x = A^{-1} b = (1/13)(Adj A) b,  i.e.,  x_1 = -1, x_2 = 3, x_3 = 2
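A short Python sketch reproduces this computation. The function name adjoint_solve and the use of exact fractions are our own choices; the cofactor/adjoint construction follows Equations (2.5)-(2.7):

```python
from fractions import Fraction

def adjoint_solve(A, b):
    """Matrix inversion method for a 3x3 system: x = (Adj A) b / |A|."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    D = det3(A)
    if D == 0:
        raise ValueError("matrix is singular")
    # cofactor C_ij = (-1)^(i+j) * 2x2 minor; Adj A is the transpose of [C_ij]
    adj = [[0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            minor = [[A[r][c] for c in range(3) if c != j]
                     for r in range(3) if r != i]
            cof = (-1) ** (i + j) * (minor[0][0] * minor[1][1]
                                     - minor[0][1] * minor[1][0])
            adj[j][i] = cof          # store transposed
    return [Fraction(sum(adj[i][k] * b[k] for k in range(3)), D) for i in range(3)]

A = [[1, 1, 1], [2, -1, 3], [3, 2, -1]]
print(adjoint_solve(A, [4, 1, 1]))  # → [Fraction(-1, 1), Fraction(3, 1), Fraction(2, 1)]
```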

Gaussian Elimination Method: This method consists in systematic elimination


of the unknowns so as to reduce the coefficient matrix into an upper triangular
system, which is then solved by the procedure of back-substitution. To understand
the procedure for a system of three equations in three unknowns, consider the
following system of equations:
a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1    (2.8(a))
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2    (2.8(b))
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3    (2.8(c))
We have to first eliminate x1 from the last two equations and then eliminate x2
from the last equation.
In order to eliminate x_1 from the second equation we multiply Equation
(2.8(a)) by m_2 = -a_{21}/a_{11} and add it to the second equation. Similarly, for elimination
of x_1 from the third Equation (2.8(c)) we multiply the first Equation (2.8(a))
by m_3 = -a_{31}/a_{11} and add it to the last Equation (2.8(c)). We would then have the
following two equations from them:

a_{22}^{(1)} x_2 + a_{23}^{(1)} x_3 = b_2^{(1)}    (2.9(a))
a_{32}^{(1)} x_2 + a_{33}^{(1)} x_3 = b_3^{(1)}    (2.9(b))

Again, for eliminating x_2 from the last of the above two equations, we multiply
the first Equation (2.9(a)) by m_4 = -a_{32}^{(1)}/a_{22}^{(1)} and add it to the second Equation (2.9(b)),
which would give the equation,

a_{33}^{(2)} x_3 = b_3^{(2)}    (2.10)

where  a_{33}^{(2)} = a_{33}^{(1)} + m_4 a_{23}^{(1)},   b_3^{(2)} = b_3^{(1)} + m_4 b_2^{(1)}

Thus by systematic elimination we get the triangular system given below,


a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1    (2.11(a))
a_{22}^{(1)} x_2 + a_{23}^{(1)} x_3 = b_2^{(1)}    (2.11(b))
a_{33}^{(2)} x_3 = b_3^{(2)}    (2.11(c))
It is now easy to solve the unknowns by back-substitution as stated below:
We solve for x3 from Equation (2.11(c)), then solve for x2 from Equation (2.11(b))
and finally solve for x1 from Equation (2.11(a)). This systematic Gaussian elimination
procedure can be written in matrix notation in a compact form, as shown below.
(i) We write the augmented matrix, [A : b] and the multipliers on the left.

System of Linear (ii) Then we write the transformed 2nd and 3rd rows after the elimination of x1 by
Equations
row operations [(m2 × 1st row + 2nd row) and (m3 × 1st row + 3rd row)] as new
2nd and 3rd rows along with the multiplier on the left.

        [ a_{11}  a_{12}        a_{13}        :  b_1       ]
        [  0      a_{22}^{(1)}  a_{23}^{(1)}  :  b_2^{(1)} ]
m_4 →   [  0      a_{32}^{(1)}  a_{33}^{(1)}  :  b_3^{(1)} ]    Perform R_3 + m_4 R_2,  m_4 = -a_{32}^{(1)}/a_{22}^{(1)}
(iii) Finally, we get the upper triangular transformed augmented matrix as given
below.

[ a_{11}  a_{12}        a_{13}        :  b_1       ]
[  0      a_{22}^{(1)}  a_{23}^{(1)}  :  b_2^{(1)} ]    (2.12)
[  0      0             a_{33}^{(2)}  :  b_3^{(2)} ]

Notes:
1. The above procedure can be easily extended to a system of n unknowns, in
which case, we have to perform a total of (n–1) steps for the systematic
elimination to get the final upper triangular matrix.
2. The condition to be satisfied for using this elimination is that the first diagonal
element at each step must not be zero. These diagonal elements
(a_{11}, a_{22}^{(1)}, a_{33}^{(2)}, etc.) are called pivots. If the pivot is zero at any stage, the method
fails. However, we can rearrange the rows so that none of the pivots is zero at
any stage.
Example 2.3: Solve the following system by Gauss elimination method:

x_1 + 2x_2 + x_3 = 0
2x_1 + 2x_2 + 3x_3 = 3
-x_1 - 3x_2 = 2

Show the computations by augmented matrix representation.
Solution: The augmented matrix of the system is,

[  1   2   1  :  0 ]
[  2   2   3  :  3 ]
[ -1  -3   0  :  2 ]

Step 1: For elimination of x_1 from the 2nd and 3rd equations we multiply the first
equation by -2 and 1, successively, and add to the 2nd and 3rd equations. The
result is shown in the augmented matrix below.

      [ 1   2   1  :  0 ]
-2    [ 0  -2   1  :  3 ]
 1    [ 0  -1   1  :  2 ]

Step 2: For elimination of x_2 from the third equation we multiply the second equation
by -1/2 and add it to the third equation. The result is shown in the augmented matrix
below.

        [ 1   2    1   :   0  ]
-1/2    [ 0  -2    1   :   3  ]
        [ 0   0   1/2  :  1/2 ]

Step 3: The upper triangular system is now solved by back-substitution, giving
x_1 = 1, x_2 = -1, x_3 = 1
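The elimination-plus-back-substitution procedure can be condensed into a short Python sketch (a minimal version with a simple row swap when a pivot is zero; gauss_eliminate is our own name, not from the text):

```python
def gauss_eliminate(A, b):
    """Solve Ax = b by Gaussian elimination with back-substitution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix [A : b]
    # forward elimination: reduce to an upper triangular system
    for k in range(n - 1):
        if M[k][k] == 0:                  # zero pivot: swap with a row below
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
        for i in range(k + 1, n):
            m = -M[i][k] / M[k][k]        # multiplier, as in m_2, m_3, m_4 above
            for j in range(k, n + 1):
                M[i][j] += m * M[k][j]
    # back-substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

print(gauss_eliminate([[1, 2, 1], [2, 2, 3], [-1, -3, 0]], [0, 3, 2]))
# → [1.0, -1.0, 1.0]
```

Running it on the system of Example 2.3 reproduces the hand computation above.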
Gauss-Jordan Elimination Method: The Gauss-Jordan elimination method is
a variation of the Gaussian elimination method. In this method, the augmented
coefficient matrix is transformed by row operations such that the coefficient matrix
reduces to the identity matrix. The solution of the system is then directly obtained
as the reduced augmented column of the transformed augmented matrix. We explain
the method with a system of three equations given by,
[ a_{11}  a_{12}  a_{13} ] [ x_1 ]   [ b_1 ]
[ a_{21}  a_{22}  a_{23} ] [ x_2 ] = [ b_2 ]    (2.13)
[ a_{31}  a_{32}  a_{33} ] [ x_3 ]   [ b_3 ]

The augmented matrix is,

[ a_{11}  a_{12}  a_{13}  :  b_1 ]
[ a_{21}  a_{22}  a_{23}  :  b_2 ]    (2.14)
[ a_{31}  a_{32}  a_{33}  :  b_3 ]

We assume that a11 is non-zero. If, however, a11 is zero, we can interchange
rows so that a11 is non-zero in the resulting system.
The first step is to divide the first row by a11 and then eliminating x1 from 2nd
and 3rd equations by row operations of multiplying the reduced first row by a21 and
subtracting from the second row and next multiplying the reduced first row by a31
and subtracting from the third row. This is shown in matrix transformations given
below.

[ a_{11}  a_{12}  a_{13}  :  b_1 ]                                            [ 1  a'_{12}  a'_{13}  :  b'_1 ]
[ a_{21}  a_{22}  a_{23}  :  b_2 ]  --(R_1/a_{11}; R_2 - a_{21}R_1; R_3 - a_{31}R_1)-->  [ 0  a'_{22}  a'_{23}  :  b'_2 ]
[ a_{31}  a_{32}  a_{33}  :  b_3 ]                                            [ 0  a'_{32}  a'_{33}  :  b'_3 ]

where,
a'_{12} = a_{12}/a_{11},  a'_{13} = a_{13}/a_{11},  b'_1 = b_1/a_{11}
a'_{22} = a_{22} - a_{21} a'_{12},  a'_{23} = a_{23} - a_{21} a'_{13},  b'_2 = b_2 - a_{21} b'_1
a'_{32} = a_{32} - a_{31} a'_{12},  a'_{33} = a_{33} - a_{31} a'_{13},  b'_3 = b_3 - a_{31} b'_1
Now considering a'_{22} as the non-zero pivot, we first divide the second row by
a'_{22}, then multiply the reduced second row by a'_{12} and subtract it from the first
row, and also multiply the reduced second row by a'_{32} and subtract it from the
third row. The operations (R_2/a'_{22}, then R_1 - a'_{12}R_2 and R_3 - a'_{32}R_2) are shown below in matrix notation.

[ 1  a'_{12}  a'_{13}  :  b'_1 ]       [ 1  0  a''_{13}  :  b''_1 ]
[ 0  a'_{22}  a'_{23}  :  b'_2 ]  -->  [ 0  1  a''_{23}  :  b''_2 ]
[ 0  a'_{32}  a'_{33}  :  b'_3 ]       [ 0  0  a''_{33}  :  b''_3 ]

where
a''_{23} = a'_{23}/a'_{22},  b''_2 = b'_2/a'_{22}
a''_{13} = a'_{13} - a'_{12} a''_{23},  b''_1 = b'_1 - a'_{12} b''_2
a''_{33} = a'_{33} - a'_{32} a''_{23},  b''_3 = b'_3 - a'_{32} b''_2

Finally, the third row elements are divided by a''_{33}; then the reduced third
row is multiplied by a''_{13} and subtracted from the first row, and also the reduced third
row is multiplied by a''_{23} and subtracted from the second row. This is again shown in
matrix notation below (R_3/a''_{33}, then R_1 - a''_{13}R_3 and R_2 - a''_{23}R_3).

[ 1  0  a''_{13}  :  b''_1 ]       [ 1  0  0  :  b'''_1 ]
[ 0  1  a''_{23}  :  b''_2 ]  -->  [ 0  1  0  :  b'''_2 ]
[ 0  0  a''_{33}  :  b''_3 ]       [ 0  0  1  :  b'''_3 ]

where  b'''_3 = b''_3/a''_{33},  b'''_1 = b''_1 - a''_{13} b'''_3,  b'''_2 = b''_2 - a''_{23} b'''_3

Finally, the solution of the system is given by the reduced augmented column,
i.e., x_1 = b'''_1, x_2 = b'''_2, x_3 = b'''_3.
We illustrate the elimination procedure with an example using the augmented matrix,

[ 2  2  4  :  18 ]
[ 1  3  2  :  13 ]
[ 3  1  3  :  14 ]

First, we divide the first row by 2, then subtract the reduced first row from the 2nd
row, and also multiply the reduced first row by 3 and subtract it from the third row. The results
are shown below:

[ 2  2  4 : 18 ]          [ 1  1  2 :  9 ]  R_2 - R_1 →    [ 1   1   2  :   9  ]
[ 1  3  2 : 13 ]  R_1/2 → [ 1  3  2 : 13 ]  R_3 - 3R_1     [ 0   2   0  :   4  ]
[ 3  1  3 : 14 ]          [ 3  1  3 : 14 ]                 [ 0  -2  -3  : -13  ]

Next, considering the 2nd row, we reduce the second column to [0, 1, 0]^T by the row
operations shown below:

[ 1   1   2 :   9 ]          [ 1   1   2 :   9 ]  R_1 - R_2 →    [ 1  0   2 :  7 ]
[ 0   2   0 :   4 ]  R_2/2 → [ 0   1   0 :   2 ]  R_3 + 2R_2     [ 0  1   0 :  2 ]
[ 0  -2  -3 : -13 ]          [ 0  -2  -3 : -13 ]                 [ 0  0  -3 : -9 ]

Finally, dividing the third row by -3 and then subtracting from the first row the
elements of the reduced third row multiplied by 2, the result is shown below:

[ 1  0   2 :  7 ]             [ 1  0  2 : 7 ]                 [ 1  0  0 : 1 ]
[ 0  1   0 :  2 ]  R_3/(-3) → [ 0  1  0 : 2 ]  R_1 - 2R_3 →   [ 0  1  0 : 2 ]
[ 0  0  -3 : -9 ]             [ 0  0  1 : 3 ]                 [ 0  0  1 : 3 ]

Hence the solution of the system is x_1 = 1, x_2 = 2, x_3 = 3.


Example 2.4: Solve the following system by Gauss-Jordan elimination method:

[ 3  18  9 ] [ x_1 ]   [  18 ]
[ 2   3  3 ] [ x_2 ] = [ 117 ]
[ 4   1  2 ] [ x_3 ]   [ 283 ]

Solution: We consider the augmented matrix and solve the system by Gauss-Jordan
elimination method. The computations are shown in compact matrix notation as
given below. The augmented matrix is,

[ 3  18  9  :  18  ]
[ 2   3  3  : 117  ]
[ 4   1  2  : 283  ]

Step 1: The pivot is 3 in the first column. The first column is transformed into
[1, 0, 0]^T by the row operations shown below:

[ 3  18  9 :  18 ]          [ 1  6  3 :   6 ]  R_2 - 2R_1 →   [ 1    6    3  :   6 ]
[ 2   3  3 : 117 ]  R_1/3 → [ 2  3  3 : 117 ]  R_3 - 4R_1     [ 0   -9   -3  : 105 ]
[ 4   1  2 : 283 ]          [ 4  1  2 : 283 ]                 [ 0  -23  -10  : 259 ]

Step 2: The second column is transformed into [0, 1, 0]^T by the row operations shown
below:

[ 1    6    3 :   6 ]             [ 1    6    3  :    6   ]  R_1 - 6R_2 →    [ 1  0    1   :   76  ]
[ 0   -9   -3 : 105 ]  R_2/(-9) → [ 0    1   1/3 : -35/3  ]  R_3 + 23R_2     [ 0  1   1/3  : -35/3 ]
[ 0  -23  -10 : 259 ]             [ 0  -23  -10  :  259   ]                  [ 0  0  -7/3  : -28/3 ]

Step 3: The third column is transformed into [0, 0, 1]^T by the row operations shown
below:

[ 1  0    1   :   76  ]                [ 1  0   1  :   76  ]  R_1 - R_3 →     [ 1  0  0 :  72 ]
[ 0  1   1/3  : -35/3 ]  R_3/(-7/3) →  [ 0  1  1/3 : -35/3 ]  R_2 - R_3/3     [ 0  1  0 : -13 ]
[ 0  0  -7/3  : -28/3 ]                [ 0  0   1  :   4   ]                  [ 0  0  1 :   4 ]

Hence, the solution of the system is x_1 = 72, x_2 = -13, x_3 = 4.
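The Gauss-Jordan sweep can be sketched in Python (gauss_jordan is our own name; exact fractions avoid the rounding that floats would introduce in the -7/3 pivot step):

```python
from fractions import Fraction

def gauss_jordan(A, b):
    """Solve Ax = b by Gauss-Jordan elimination: reduce the coefficient
    part of the augmented matrix to the identity; the last column is x."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(b[i])] for i, row in enumerate(A)]
    for k in range(n):
        if M[k][k] == 0:                  # swap in a row with a non-zero pivot
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]    # normalise the pivot row
        for i in range(n):                # clear column k in every other row
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [M[i][j] - f * M[k][j] for j in range(n + 1)]
    return [M[i][n] for i in range(n)]

print(gauss_jordan([[3, 18, 9], [2, 3, 3], [4, 1, 2]], [18, 117, 283]))
# → [Fraction(72, 1), Fraction(-13, 1), Fraction(4, 1)]
```

This reproduces the result of Example 2.4 directly, with no back-substitution step.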


2.2.3 Iterative Methods
We can use iteration methods to solve a system of linear equations when the
coefficient matrix is diagonally dominant. This is ensured by the set of sufficient
conditions given as follows,
Σ_{j=1, j≠i}^{n} |a_{ij}| < |a_{ii}|,  for i = 1, 2, ..., n    (2.15)

An alternative set of sufficient conditions is,

Σ_{i=1, i≠j}^{n} |a_{ij}| < |a_{jj}|,  for j = 1, 2, ..., n    (2.16)

There are two forms of iteration methods termed as Jacobi iteration method
and Gauss-Seidel iteration method.
Jacobi Iteration Method: Consider a system of n linear equations,

a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + ... + a_{1n} x_n = b_1
a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + ... + a_{2n} x_n = b_2
a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + ... + a_{3n} x_n = b_3
..................................................................
a_{n1} x_1 + a_{n2} x_2 + a_{n3} x_3 + ... + a_{nn} x_n = b_n

The diagonal elements a_{ii}, i = 1, 2, ..., n are non-zero and satisfy the set of
sufficient conditions stated earlier. When the system of equations does not satisfy
these conditions, we may rearrange the system in such a way that the conditions
hold.
In order to apply the iteration we rewrite the equations in the following form:

x_1 = (b_1 - a_{12} x_2 - a_{13} x_3 - ... - a_{1n} x_n) / a_{11}
x_2 = (b_2 - a_{21} x_1 - a_{23} x_3 - ... - a_{2n} x_n) / a_{22}
x_3 = (b_3 - a_{31} x_1 - a_{32} x_2 - ... - a_{3n} x_n) / a_{33}
............................................................
x_n = (b_n - a_{n1} x_1 - a_{n2} x_2 - ... - a_{n,n-1} x_{n-1}) / a_{nn}

To start the iteration we make an initial guess of the unknowns as
x_1^{(0)}, x_2^{(0)}, x_3^{(0)}, ..., x_n^{(0)} (the initial guess may be taken to be zero).
Successive approximations are computed using the equations,

x_1^{(k+1)} = (b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - ... - a_{1n} x_n^{(k)}) / a_{11}
x_2^{(k+1)} = (b_2 - a_{21} x_1^{(k)} - a_{23} x_3^{(k)} - ... - a_{2n} x_n^{(k)}) / a_{22}
x_3^{(k+1)} = (b_3 - a_{31} x_1^{(k)} - a_{32} x_2^{(k)} - ... - a_{3n} x_n^{(k)}) / a_{33}    (2.17)
.................................................................
x_n^{(k+1)} = (b_n - a_{n1} x_1^{(k)} - a_{n2} x_2^{(k)} - ... - a_{n,n-1} x_{n-1}^{(k)}) / a_{nn}
where k = 0, 1, 2, ...

The iterations are continued till the desired accuracy is achieved. This is checked
by the relations,

|x_i^{(k+1)} - x_i^{(k)}| < ε,  for i = 1, 2, ..., n    (2.18)
Jacobi Iterative Algorithm

Choose an initial guess x^{(0)} to the solution x.
for k = 1, 2, …
    for i = 1, 2, …, n
        x_i = 0
        for j = 1, 2, …, i – 1, i + 1, …, n
            x_i = x_i + a_{ij} x_j^{(k–1)}
        end
        x_i = (b_i – x_i) / a_{ii}
    end
    check convergence; continue if necessary
end
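The algorithm above translates into a short Python sketch (jacobi is our own name; the 4 × 4 diagonally dominant test system is our illustration, with exact solution (1, 2, 3, 0)):

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=200):
    """Jacobi iteration for a diagonally dominant system Ax = b.
    All components of x^(k+1) are computed from the previous iterate x^(k)."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        # stopping test of Equation (2.18)
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("Jacobi iteration did not converge")

A = [[10, -2, -1, -1], [-2, 10, -1, -1], [-1, -1, 10, -2], [-1, -1, -2, 10]]
b = [3, 15, 27, -9]
print(jacobi(A, b))  # converges to approximately (1, 2, 3, 0)
```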
Gauss-Seidel Iteration Method: This is a simple modification of the Jacobi
iteration. In this method, at any stage of iteration of the system, the improved
values of the unknowns are used for computing the components of the unknown
vector. The iteration equations given below are used in this method,
x_1^{(k+1)} = (b_1 - a_{12} x_2^{(k)} - a_{13} x_3^{(k)} - ... - a_{1n} x_n^{(k)}) / a_{11}
x_2^{(k+1)} = (b_2 - a_{21} x_1^{(k+1)} - a_{23} x_3^{(k)} - ... - a_{2n} x_n^{(k)}) / a_{22}
x_3^{(k+1)} = (b_3 - a_{31} x_1^{(k+1)} - a_{32} x_2^{(k+1)} - ... - a_{3n} x_n^{(k)}) / a_{33}    (2.19)
.................................................................
x_n^{(k+1)} = (b_n - a_{n1} x_1^{(k+1)} - a_{n2} x_2^{(k+1)} - ... - a_{n,n-1} x_{n-1}^{(k+1)}) / a_{nn}

It is clear from above that for computing x_2^{(k+1)}, the improved value x_1^{(k+1)} is
used instead of x_1^{(k)}; and for computing x_3^{(k+1)}, the improved values x_1^{(k+1)} and
x_2^{(k+1)} are used. Finally, for computing x_n^{(k+1)}, improved values of all the components
x_1^{(k+1)}, x_2^{(k+1)}, ..., x_{n-1}^{(k+1)} are used. Further, as in the Jacobi iteration, the iterations are
continued till the desired accuracy is achieved.
Example 2.5: Solve the following system by Gauss-Seidel iterative method correct
upto four significant digits.

10x_1 - 2x_2 - x_3 - x_4 = 3
-2x_1 + 10x_2 - x_3 - x_4 = 15
-x_1 - x_2 + 10x_3 - 2x_4 = 27
-x_1 - x_2 - 2x_3 + 10x_4 = -9

Solution: The given system clearly has a diagonally dominant coefficient matrix,
i.e.,

|a_{ii}| ≥ Σ_{j=1, j≠i}^{n} |a_{ij}|,  i = 1, 2, ..., n

Hence, we can employ Gauss-Seidel iteration method, for which we rewrite the
system as,

x_1^{(k+1)} = 0.3 + 0.2 x_2^{(k)} + 0.1 x_3^{(k)} + 0.1 x_4^{(k)}
x_2^{(k+1)} = 1.5 + 0.2 x_1^{(k+1)} + 0.1 x_3^{(k)} + 0.1 x_4^{(k)}
x_3^{(k+1)} = 2.7 + 0.1 x_1^{(k+1)} + 0.1 x_2^{(k+1)} + 0.2 x_4^{(k)}
x_4^{(k+1)} = -0.9 + 0.1 x_1^{(k+1)} + 0.1 x_2^{(k+1)} + 0.2 x_3^{(k+1)}

We start the iteration with,

x_1^{(0)} = 0.3,  x_2^{(0)} = 1.5,  x_3^{(0)} = 2.7,  x_4^{(0)} = -0.9

The results of successive iterations are given in the table below.

k     x1       x2       x3       x4
1     0.72     1.824    2.774    -0.0196
2     0.9403   1.9635   2.9864   -0.0125
3     0.9901   1.9954   2.9960   -0.0023
4     0.9984   1.9990   2.9993   -0.0004
5     0.9997   1.9998   2.9998   -0.0003
6     0.9998   1.9998   2.9998   -0.0003
7     1.0000   2.0000   3.0000    0.0000

Hence the solution correct to four significant figures is x_1 = 1.000, x_2 = 2.000,
x_3 = 3.000, x_4 = 0.000.
Example 2.6: Solve the following system by Gauss-Seidel iteration method.

20x_1 + 2x_2 + x_3 = 30
-x_1 + 40x_2 - 3x_3 = 75
2x_1 - x_2 + 10x_3 = 30

Give the solution correct upto three significant figures.

Solution: It is evident that the coefficient matrix is diagonally dominant and the
sufficient conditions for convergence of the Gauss-Seidel iterations are satisfied, since

|a_{11}| = 20 ≥ |a_{12}| + |a_{13}| = 3
|a_{22}| = 40 ≥ |a_{21}| + |a_{23}| = 4
|a_{33}| = 10 ≥ |a_{31}| + |a_{32}| = 3

For starting the iterations, we rewrite the equations as,

x_1 = (30 - 2x_2 - x_3)/20
x_2 = (75 + x_1 + 3x_3)/40
x_3 = (30 - 2x_1 + x_2)/10

The initial approximate solution is taken as,

x_1^{(0)} = 1.5,  x_2^{(0)} = 2.0,  x_3^{(0)} = 3.0

The first iteration gives,

x_1^{(1)} = (30 - 2 × 2.0 - 3.0)/20 = 1.15
x_2^{(1)} = (75 + 1.15 + 3 × 3.0)/40 = 2.14
x_3^{(1)} = (30 - 2 × 1.15 + 2.14)/10 = 2.98

The second iteration gives,

x_1^{(2)} = (30 - 2 × 2.14 - 2.98)/20 = 1.137
x_2^{(2)} = (75 + 1.137 + 3 × 2.98)/40 = 2.127
x_3^{(2)} = (30 - 2 × 1.137 + 2.127)/10 = 2.986

The third iteration gives,

x_1^{(3)} = (30 - 2 × 2.127 - 2.986)/20 = 1.138
x_2^{(3)} = (75 + 1.138 + 3 × 2.986)/40 = 2.127
x_3^{(3)} = (30 - 2 × 1.138 + 2.127)/10 = 2.985

Thus the solution correct to three significant digits can be written as x_1 = 1.14,
x_2 = 2.13, x_3 = 2.98.
Example 2.7: Solve the following system correct to three significant digits, using
Jacobi iteration method.

10x_1 + 8x_2 - 3x_3 + x_4 = 16
3x_1 - 4x_2 + 10x_3 + x_4 = 10
2x_1 + 10x_2 + x_3 - 4x_4 = 9
2x_1 + 2x_2 - 3x_3 + 10x_4 = 11

Solution: The system is first rearranged (by interchanging the second and third
equations) so that the coefficient matrix is diagonally dominant. The equations are
rewritten for starting Jacobi iteration as,

x_1^{(k+1)} = 1.6 - 0.8 x_2^{(k)} + 0.3 x_3^{(k)} - 0.1 x_4^{(k)}
x_2^{(k+1)} = 0.9 - 0.2 x_1^{(k)} - 0.1 x_3^{(k)} + 0.4 x_4^{(k)}
x_3^{(k+1)} = 1.0 - 0.3 x_1^{(k)} + 0.4 x_2^{(k)} - 0.1 x_4^{(k)}
x_4^{(k+1)} = 1.1 - 0.2 x_1^{(k)} - 0.2 x_2^{(k)} + 0.3 x_3^{(k)},  where k = 0, 1, 2, ...

The initial guess of the solution is taken as,

x_1^{(0)} = 1.6,  x_2^{(0)} = 0.9,  x_3^{(0)} = 1.0,  x_4^{(0)} = 1.1

The results of successive iterations computed by Jacobi iteration are given in
the following table:

k     x1       x2       x3       x4
1     1.07     0.92     0.77     0.90
2     1.050    0.969    0.957    0.933
3     1.0186   0.9765   0.9928   0.9923
4     1.0174   0.9939   0.9858   0.9989
5     0.9997   0.9975   0.9925   0.9974
6     1.0001   0.9997   0.9994   0.9984
7     1.0002   0.9998   1.0001   0.9999

Thus the solution correct to three significant digits is x_1 = 1.000, x_2 = 1.000,
x_3 = 1.000, x_4 = 1.000.
Algorithm: Solution of a system of equations by Gauss-Seidel iteration method.
Step 1: Input elements aij of the augmented matrix for i = 1 to n, j = 1 to n + 1
Step 2: Input epsilon, maxit [epsilon is the desired accuracy, maxit is the maximum
        number of iterations]
Step 3: Set xi = 0, for i = 1 to n
Step 4: Set big = 0, sum = 0, j = 1, k = 1, iter = 0
Step 5: Check if k ≠ j, set sum = sum + ajk xk
Step 6: Check if k < n, set k = k + 1, go to Step 5 else go to next step
Step 7: Compute temp = (aj,n+1 – sum) / ajj
Step 8: Compute relerr = abs (xj – temp) / temp
Step 9: Check if big < relerr then big = relerr
Step 10: Set xj = temp
Step 11: Set j = j + 1, k = 1
Step 12: Check if j ≤ n go to Step 5 else go to next step
Step 13: Check if relerr < epsilon then {write 'iterations converge', write xj
         for j = 1 to n, go to Step 15} else if iter < maxit set iter = iter + 1 and go
         to Step 5
Step 14: Write 'iterations do not converge in', maxit, 'iterations'
Step 15: Write xj for j = 1 to n
Step 16: End
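The stepwise algorithm condenses into a Python sketch (a minimal version; for simplicity the convergence check uses the largest absolute change in a sweep rather than the per-component relerr of the steps above, and the system of Example 2.6 is used to exercise it):

```python
def gauss_seidel(A, b, x0=None, eps=1e-10, maxit=100):
    """Gauss-Seidel iteration: each new component x_i immediately
    replaces the old one, so later rows use the improved values."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(maxit):
        big = 0.0                         # largest change in this sweep
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            temp = (b[i] - s) / A[i][i]
            big = max(big, abs(x[i] - temp))
            x[i] = temp                   # use the improved value at once
        if big < eps:
            return x
    raise RuntimeError("iterations do not converge in %d iterations" % maxit)

# the diagonally dominant system of Example 2.6
A = [[20, 2, 1], [-1, 40, -3], [2, -1, 10]]
b = [30, 75, 30]
x = gauss_seidel(A, b, x0=[1.5, 2.0, 3.0])
print([round(v, 3) for v in x])  # → [1.138, 2.127, 2.985]
```

The only difference from the Jacobi sketch is that x is updated in place inside the sweep, which is exactly what Equation (2.19) expresses.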

2.2.4 Computation of the Inverse of a Matrix by Using Gaussian Elimination Method

The inverse matrix B of a given square matrix A satisfies the relation,
A.B=I
Where I is the unit matrix of the same order as that of A. In order to determine the
elements bij of the matrix B, we can employ row operations as in Gaussian elimination.
We explain the method for a 3 × 3 matrix as given below. We can write the above
relation in detail as,
[ a_{11}  a_{12}  a_{13} ] [ b_{11}  b_{12}  b_{13} ]   [ 1  0  0 ]
[ a_{21}  a_{22}  a_{23} ] [ b_{21}  b_{22}  b_{23} ] = [ 0  1  0 ]
[ a_{31}  a_{32}  a_{33} ] [ b_{31}  b_{32}  b_{33} ]   [ 0  0  1 ]

By using the definition of matrix multiplication we can write the above
relation as equivalent to the following three systems of linear equations.

[ a_{11} a_{12} a_{13} ] [ b_{11} ]   [ 1 ]   [ a_{11} a_{12} a_{13} ] [ b_{12} ]   [ 0 ]   [ a_{11} a_{12} a_{13} ] [ b_{13} ]   [ 0 ]
[ a_{21} a_{22} a_{23} ] [ b_{21} ] = [ 0 ], [ a_{21} a_{22} a_{23} ] [ b_{22} ] = [ 1 ], [ a_{21} a_{22} a_{23} ] [ b_{23} ] = [ 0 ]
[ a_{31} a_{32} a_{33} ] [ b_{31} ]   [ 0 ]   [ a_{31} a_{32} a_{33} ] [ b_{32} ]   [ 0 ]   [ a_{31} a_{32} a_{33} ] [ b_{33} ]   [ 1 ]

Thus, by solving each of the above systems we shall get the three columns of
the inverse matrix B = A–1. Since, the coefficient matrix is the same for each of the
three systems, we can apply Gauss elimination to all the three systems simultaneously.
We consider for this the following augmented matrix:
[ a_{11}  a_{12}  a_{13}  :  1  0  0 ]
[ a_{21}  a_{22}  a_{23}  :  0  1  0 ]
[ a_{31}  a_{32}  a_{33}  :  0  0  1 ]
We employ Gauss elimination to this augmented matrix. At the end of 1st stage
we get,
R_2 - (a_{21}/a_{11})R_1    [ a_{11}  a_{12}        a_{13}        :  1               0  0 ]
                →           [  0      a_{22}^{(1)}  a_{23}^{(1)}  :  -a_{21}/a_{11}  1  0 ]
R_3 - (a_{31}/a_{11})R_1    [  0      a_{32}^{(1)}  a_{33}^{(1)}  :  -a_{31}/a_{11}  0  1 ]

where
a_{22}^{(1)} = a_{22} - (a_{21}/a_{11}) a_{12},  a_{23}^{(1)} = a_{23} - (a_{21}/a_{11}) a_{13}
a_{32}^{(1)} = a_{32} - (a_{31}/a_{11}) a_{12},  a_{33}^{(1)} = a_{33} - (a_{31}/a_{11}) a_{13}
Similarly, at the end of the second stage, we have

                                       [ a_{11}  a_{12}        a_{13}        :  1       0       0 ]
R_3 - (a_{32}^{(1)}/a_{22}^{(1)})R_2 → [  0      a_{22}^{(1)}  a_{23}^{(1)}  :  c_{21}  1       0 ]
                                       [  0      0             a_{33}^{(2)}  :  c_{31}  c_{32}  1 ]

where
a_{33}^{(2)} = a_{33}^{(1)} - (a_{32}^{(1)}/a_{22}^{(1)}) a_{23}^{(1)},   c_{21} = -a_{21}/a_{11}
c_{31} = a_{21} a_{32}^{(1)}/(a_{11} a_{22}^{(1)}) - (a_{31}/a_{11}),   c_{32} = -a_{32}^{(1)}/a_{22}^{(1)}
By back-substitution process, we get the elements of the inverse matrix, by
solving the three systems corresponding to the three columns of the reduced
augmented part, i.e.,
[ 1       0       0 ]
[ c_{21}  1       0 ]
[ c_{31}  c_{32}  1 ]

We illustrate the method by an example given below.


Example 2.8: Find the inverse of the following matrix A by Gaussian elimination
method.

      [ 2   3  -1 ]
A  =  [ 4   4  -3 ]
      [ 2  -3   1 ]

Solution: We consider the following augmented matrix:

          [ 2   3  -1  :  1  0  0 ]
[A : I] = [ 4   4  -3  :  0  1  0 ]
          [ 2  -3   1  :  0  0  1 ]

Using Gaussian elimination on this augmented matrix, we get the following at
the end of the first step:

R_2 - 2R_1   [ 2   3  -1  :  1  0  0 ]
       →     [ 0  -2  -1  : -2  1  0 ]
R_3 - R_1    [ 0  -6   2  : -1  0  1 ]

Similarly, at the end of the 2nd step we get,

             [ 2   3  -1  :  1   0  0 ]
R_3 - 3R_2 → [ 0  -2  -1  : -2   1  0 ]
             [ 0   0   5  :  5  -3  1 ]

Thus, we get the three columns of the inverse matrix by solving the following three
systems:

[ 2   3  -1  :  1 ]   [ 2   3  -1  :  0 ]   [ 2   3  -1  :  0 ]
[ 0  -2  -1  : -2 ]   [ 0  -2  -1  :  1 ]   [ 0  -2  -1  :  0 ]
[ 0   0   5  :  5 ]   [ 0   0   5  : -3 ]   [ 0   0   5  :  1 ]

The solutions of the three are easily derived by back-substitution, which give the
three columns of the inverse matrix shown below:

          [ 1/4    0     1/4  ]
A^{-1} =  [ 1/2  -1/5  -1/10  ]
          [  1   -3/5   1/5   ]
We can also employ Gauss-Jordan elimination to compute the inverse matrix.
This is illustrated by the following example:
Example 2.9: Compute the inverse of the following matrix by Gauss-Jordan
elimination.

      [ 2   3  -1 ]
A  =  [ 4   4  -3 ]
      [ 2  -3   1 ]

Solution: We consider the augmented matrix [A : I],

          [ 2   3  -1  :  1  0  0 ]          [ 1  3/2  -1/2  :  1/2  0  0 ]
[A : I] = [ 4   4  -3  :  0  1  0 ]  R_1/2 → [ 4   4   -3    :   0   1  0 ]
          [ 2  -3   1  :  0  0  1 ]          [ 2  -3    1    :   0   0  1 ]

R_2 - 4R_1   [ 1  3/2  -1/2  :  1/2  0  0 ]             [ 1  3/2  -1/2  :  1/2    0    0 ]
       →     [ 0  -2   -1    :  -2   1  0 ]  R_2/(-2) →  [ 0   1   1/2   :   1   -1/2   0 ]
R_3 - 2R_1   [ 0  -6    2    :  -1   0  1 ]             [ 0  -6    2    :  -1     0    1 ]

R_1 - 3R_2/2   [ 1  0  -5/4  :  -1    3/4   0 ]          [ 1  0  -5/4  :  -1    3/4    0  ]
        →      [ 0  1   1/2  :   1   -1/2   0 ]  R_3/5 → [ 0  1   1/2  :   1   -1/2    0  ]
R_3 + 6R_2     [ 0  0    5   :   5   -3     1 ]          [ 0  0    1   :   1   -3/5  1/5  ]

R_1 + 5R_3/4   [ 1  0  0  :  1/4    0     1/4  ]
        →      [ 0  1  0  :  1/2  -1/5  -1/10  ]
R_2 - R_3/2    [ 0  0  1  :   1   -3/5   1/5   ]

which gives

          [ 1/4    0     1/4  ]
A^{-1} =  [ 1/2  -1/5  -1/10  ]
          [  1   -3/5   1/5   ]
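The same sweep over the augmented matrix [A : I] can be coded directly (invert is our own name; exact fractions keep the entries readable):

```python
from fractions import Fraction

def invert(A):
    """Invert a square matrix by Gauss-Jordan elimination on [A : I]."""
    n = len(A)
    M = [[Fraction(v) for v in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        if M[k][k] == 0:                  # pivoting: swap in a usable row
            for r in range(k + 1, n):
                if M[r][k] != 0:
                    M[k], M[r] = M[r], M[k]
                    break
            else:
                raise ValueError("matrix is singular")
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]    # normalise the pivot row
        for i in range(n):                # clear column k in every other row
            if i != k and M[i][k] != 0:
                f = M[i][k]
                M[i] = [M[i][j] - f * M[k][j] for j in range(2 * n)]
    return [row[n:] for row in M]         # right half is A^{-1}

B = invert([[2, 3, -1], [4, 4, -3], [2, -3, 1]])
for row in B:
    print(row)
# first row: [Fraction(1, 4), Fraction(0, 1), Fraction(1, 4)]
```

Run on the matrix of Examples 2.8 and 2.9, it reproduces the inverse obtained by hand above.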

2.3 TRIANGULARISATION METHOD

In numerical analysis, the triangularisation method is also known as the decomposition
method or the factorization method. As a direct method, it is a very useful method
for solving linear simultaneous equations. The inverse of a matrix can also be
determined by this method.
Lower–upper (LU) decomposition or factorization factors a matrix as the
product of a lower triangular matrix and an upper triangular matrix. The product
sometimes includes a permutation matrix as well. LU decomposition can be viewed
as the matrix form of Gaussian elimination. Computers usually solve square systems
of linear equations using LU decomposition, and it is also a key step when inverting
a matrix or computing the determinant of a matrix. The LU decomposition was
introduced by the Polish mathematician Tadeusz Banachiewicz in 1938.
Let A be a square matrix. An LU factorization refers to the factorization of
A, with proper row and/or column orderings or permutations, into two factors – a
lower triangular matrix L and an upper triangular matrix U:

A = LU

In the lower triangular matrix all elements above the diagonal are zero; in
the upper triangular matrix, all the elements below the diagonal are zero. For
example, for a 3 × 3 matrix A, its LU decomposition looks like this:

[ a_{11}  a_{12}  a_{13} ]   [ l_{11}    0       0    ] [ u_{11}  u_{12}  u_{13} ]
[ a_{21}  a_{22}  a_{23} ] = [ l_{21}  l_{22}    0    ] [   0     u_{22}  u_{23} ]
[ a_{31}  a_{32}  a_{33} ]   [ l_{31}  l_{32}  l_{33} ] [   0       0     u_{33} ]
Without a proper ordering or permutations in the matrix, the factorization
may fail to materialize. For example, it is easy to verify (by expanding the matrix
multiplication) that a_{11} = l_{11} u_{11}. If a_{11} = 0, then at least one of l_{11} and u_{11}
has to be zero, which implies that either L or U is singular. This is impossible if A is
nonsingular (invertible). This is a procedural problem. It can be removed by simply
reordering the rows of A so that the first element of the permuted matrix is nonzero.
The same problem in subsequent factorization steps can be removed the same
way; see the basic procedure below.
LU Factorization with Partial Pivoting
It turns out that a proper permutation in rows (or columns) is sufficient for LU
factorization. LU factorization with partial pivoting (LUP) often refers to LU
factorization with row permutations only:

PA = LU

where L and U are again lower and upper triangular matrices, and P is a
permutation matrix, which, when left-multiplied to A, reorders the rows of A. It
turns out that all square matrices can be factorized in this form, and the factorization
is numerically stable in practice. This makes LUP decomposition a useful technique
in practice.
LU Factorization with Full Pivoting

An LU factorization with full pivoting involves both row and column permutations:

PAQ = LU

where L, U and P are defined as before, and Q is a permutation matrix
that reorders the columns of A.
Lower-Diagonal-Upper (LDU) Decomposition
A lower-diagonal-upper (LDU) decomposition is a decomposition of the form

A = LDU

where D is a diagonal matrix, and L and U are unitriangular matrices, meaning
that all the entries on the diagonals of L and U are one.
Above we required that A be a square matrix, but these decompositions
can all be generalized to rectangular matrices as well. In that case, L and D are
square matrices both of which have the same number of rows as A, and U has
exactly the same dimensions as A. 'Upper triangular' should be interpreted as having
only zero entries below the main diagonal, which starts at the upper left corner.
Example 2.10: Solve the given 2-by-2 matrix by the triangularisation method.

Solution: We factorize the following 2-by-2 matrix:

One way to find the LU decomposition of this simple matrix would be to
simply solve the linear equations by inspection. Expanding the matrix multiplication
gives

This system of equations is underdetermined. In this case any two non-zero
elements of the L and U matrices are parameters of the solution and can be set
arbitrarily to any non-zero value. Therefore, to find the unique LU decomposition,
it is necessary to put some restriction on the L and U matrices. For example, we
can conveniently require the lower triangular matrix L to be a unit triangular matrix
(i.e., set all the entries of its main diagonal to ones). Then the system of equations
has the following solution:

Substituting these values into the LU decomposition above yields
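Since the matrices of this example are not reproduced above, the same steps can be illustrated on a hypothetical 2-by-2 matrix (the numbers below are illustrative only, not the book's):

```python
# hypothetical matrix A = [[4, 3], [6, 3]]
a11, a12 = 4.0, 3.0
a21, a22 = 6.0, 3.0

# require L to be unit lower triangular: A = [[1, 0], [l21, 1]] @ [[u11, u12], [0, u22]]
u11, u12 = a11, a12        # the first row of U equals the first row of A
l21 = a21 / u11            # from a21 = l21 * u11
u22 = a22 - l21 * u12      # from a22 = l21 * u12 + u22

print(l21, u11, u12, u22)  # 1.5 4.0 3.0 -1.5
```

Multiplying the resulting L and U back together reproduces A, confirming the factorization.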

Check Your Progress
1. When is the system of equations homogenous and when is it non-
homogenous?
2. What is the Gauss elimination method?
3. What happens in Gauss-Jordan elimination method?
4. When are iteration methods used?
5. Define the process of Gauss-Seidel iteration method.
6. Elaborate on the triangularisation method.

2.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The system of equations Ax = b is termed as a homogeneous one if all the
elements in the column vector b are zero. Otherwise, the system is termed
as a non-homogeneous one.
2. This method consists in systematic elimination of the unknowns so as to
reduce the coefficient matrix into an upper triangular system, which is then
solved by the procedure of back-substitution.
3. The Gauss-Jordan elimination method is a variation of the Gaussian elimination
method. In this method, the augmented coefficient matrix is transformed by
row operations such that the coefficient matrix reduces to the Identity matrix.
The solution of the system is then directly obtained as the reduced augmented
column of the transformed augmented matrix.
4. We can use iteration methods to solve a system of linear equations when
the coefficient matrix is diagonally dominant.
5. This is a simple modification of the Jacobi iteration. In this method, at any
stage of iteration of the system, the improved values of the unknowns are
used for computing the components of the unknown vector.
6. In numerical analysis, the triangularisation method is also known as the
decomposition method or the factorization method. As a direct method, it is
a most useful method for solving linear simultaneous equations. The inverse
of a matrix can also be determined by this method.

2.5 SUMMARY
• Many engineering and scientific problems require the solution of a system
of linear equations.
• The system of equations is termed as a homogeneous one if all the elements
in the column vector b of the equation Ax = b are zero.
• Cramer’s rule and the matrix inversion method are two classical methods to
solve the system of equations.
• If D = |A| is the determinant of the coefficient matrix A and Di is the
determinant obtained by replacing the ith column of D by the column vector
b, then Cramer’s rule gives the solution vector x by the equations,
xi = Di / D, for i = 1, 2, …, n.
• Gaussian elimination method consists in systematic elimination of the
unknowns so as to reduce the coefficient matrix into an upper triangular
system, which is then solved by the procedure of back-substitution.
• In Gauss-Jordan elimination, the augmented matrix is transformed by row
operations such that the coefficient matrix reduces to the identity matrix.
• We can use iteration methods to solve a system of linear equations when
the coefficient matrix is diagonally dominant.
• There are two forms of iteration methods termed as the Jacobi iteration
method and the Gauss-Seidel iteration method.
• Gaussian elimination can be used to compute the inverse of a matrix.
• In numerical analysis, the triangularisation method is also known as the
decomposition method or the factorization method. As a direct method,
this is a most useful method for solving linear simultaneous equations. The
inverse of a matrix can also be determined by this method.

2.6 KEY WORDS

• Homogenous equation: In this system of equations, all the elements in the
column vector b of the equation Ax = b are zero.
• Gaussian elimination: It is the systematic elimination of the unknowns so
as to reduce the coefficient matrix into an upper triangular system, which is
then solved by the procedure of back-substitution.
• Gauss-Seidel iteration: In this method, at any stage of iteration of the
system, the improved values of the unknowns are used for computing the
components of the unknown vector.
• Triangularisation method: In numerical analysis, the triangularisation
method is also known as the decomposition method or the factorization
method. As a direct method, it is a most useful method for solving linear
simultaneous equations. The inverse of a matrix can also be determined by
this method.

2.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define the system of linear equations.
2. How many determinants do we have to compute in Cramer’s rule?
3. What is the basic difference between Gaussian elimination and Gauss-Jordan
elimination method?
4. What are iterative methods?
5. State an application of Gaussian elimination method.

Long-Answer Questions
1. Use Cramer’s rule to solve the following systems of equations:
(i) x1 – x2 – x3 = 1 (ii) x1 + x2 + x3 = 6
2x1 – 3x2 + x3 = 1 x1 + 2x2 + 3x3 = 14
3x1 + x2 – x3 = 2 x1 – 2x2 + x3 = 2
2. Using the matrix inversion method to solve the following systems of equation:
(i) 4x1 – x2 + 2x3 = 15 (ii) x1 + 4x2 + 9x3 = 16
x1 – 2x2 – 3x3 = –5 2x1 + x2 + x3 = 10
5x1 – 7x2 + 9x3 = 8 3x1 + 2x2 + 3x3 = 18
3. Solve the following systems of equation using Gaussian elimination method:
(i) 2x + 2y + 4z = 18 (ii) x1 + 2x2 + x3 + 4x4 = 13
x + 3y + 2z = 13 x1 + 4x3 + 3x4 = 28
3x + y + 3z = 14 4x1 + 2x2 + 2x3 + x4 = 20
–3x1 + x2 + 3x3 + 2x4 = 6
4. Apply Gauss-Jordan elimination method to solve the following systems:
(i) x1 + 2x2 + 3x3 = 4 (ii) 5x1 + 3x2 + x3 = 2
x1 + x2 + x3 = 3 4x1 + 10x2 + 4x3 = –4
2x1 + 2x2 + x3 = 1 2x1 + 3x2 + 5x3 = 11
5. Compute the solution of the following systems correct to three significant
digits using the Gauss-Seidel iteration method:
(i) 9x1 – 3x2 + 2x3 = 23 (ii) x1 + 2x2 + 3x3 + 4x4 = 30
6x1 + 3x2 + 14x3 = 38 4x1 + x2 + 2x3 + 3x4 = 24
4x1 + 2x2 – 3x3 = 35 3x1 + 4x2 + x3 + 2x4 = 22
2x1 + 3x2 + 4x3 + x4 = 24

2.8 FURTHER READINGS


Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical
Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.
UNIT 3 SOLUTION OF LINEAR SYSTEMS
Structure
3.0 Introduction
3.1 Objectives
3.2 Solution of Linear Systems
3.3 Jacobi and Gauss-Seidal Iterative Methods
3.4 Answers to Check Your Progress Questions
3.5 Summary
3.6 Key Words
3.7 Self Assessment Questions and Exercises
3.8 Further Readings

3.0 INTRODUCTION

A solution of a linear system is an assignment of values to the variables x1, x2,
..., xn such that each of the equations is satisfied. The set of all possible solutions
is called the solution set. A linear system may behave in any one of three possible
ways: the system has infinitely many solutions, the system has a single unique
solution, or the system has no solution.
In numerical linear algebra, the Jacobi method is an iterative algorithm for
determining the solutions of a strictly diagonally dominant system of linear equations.
Each diagonal element is solved for, and an approximate value is plugged in. The
process is then iterated until it converges. This algorithm is a stripped-down version
of the Jacobi transformation method of matrix diagonalization. The method is named
after Carl Gustav Jacob Jacobi.
In numerical linear algebra, the Gauss–Seidel method, also known as the
Liebmann method or the method of successive displacement, is an iterative method
used to solve a system of linear equations. It is named after the German
mathematicians Carl Friedrich Gauss and Philipp Ludwig von Seidel, and is similar
to the Jacobi method.
In this unit, you will study about the solution of linear systems, Jacobi iterative
method, and Gauss-Seidel iterative methods.

3.1 OBJECTIVES

After going through this unit, you will be able to:


• Analyse the solution of linear systems
• Explain the Jacobi iterative method
• Define the Gauss-Seidel iteration method

3.2 SOLUTION OF LINEAR SYSTEMS


Many engineering and scientific problems require the solution of a system of linear
equations. We consider a system of m linear equations in n unknowns written as,
a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
a31x1 + a32x2 + a33x3 + ... + a3nxn = b3
... ... ...
am1x1 + am2x2 + am3x3 + ... + amnxn = bm    (3.1)
Using matrix notation, we can write the above system of equations in the
form,
Ax = b (3.2)
Where A is an m × n matrix and x, b are respectively an n-component and an
m-component column vector given by,

A = (aij), i = 1, ..., m, j = 1, ..., n;  x = (x1, x2, x3, ..., xn)^T;  b = (b1, b2, b3, ..., bm)^T    (3.3)

The system of equations is termed as a homogeneous one if all the elements in
the column vector b are zero. Otherwise, the system is termed as a non-homogeneous
one.
The homogeneous system has a non-trivial solution if A is a square matrix, i.e.,
m = n, and the determinant of the coefficient matrix, i.e., |A| is equal to zero.
The solution of the non-homogeneous system exists if the rank of the coefficient
matrix A is equal to the rank of the augmented matrix [A : b] given by,
[A : b] = [ a11  a12  a13  ...  a1n  b1 ]
          [ a21  a22  a23  ...  a2n  b2 ]
          [ a31  a32  a33  ...  a3n  b3 ]
          [ ...  ...  ...  ...  ...  ... ]
          [ an1  an2  an3  ...  ann  bn ]
Further, a unique non-trivial solution of the system (3.1) exists when m = n
and the determinant |A| ≠ 0, i.e., the coefficient matrix is a square non-singular
matrix. The computation of the solution of a system of n linear equations in n
unknowns can be made by any one of the two classical methods known as the
Cramer’s rule and the matrix inversion method. But these two methods are not
suitable for numerical computation, since both the methods require the evaluation
of determinants. There are two types of efficient numerical methods for computing
the solution of systems of equations. Some are direct methods and others are iterative
in nature. Among the direct methods, the Gaussian elimination method is most
commonly used. Among the iterative methods, the Gauss-Seidel iteration method
is very commonly used.
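As a sketch of the direct approach, a short Python routine (illustrative names; not the book's program) can reduce the augmented matrix to an upper triangular system and back-substitute:

```python
def gauss_solve(A, b):
    """Solve Ax = b by Gaussian elimination with back-substitution.
    A is a list of n rows, b a list of n values; a minimal sketch whose only
    safeguard is a simple row swap when a zero pivot is met."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A : b]
    for k in range(n):
        if M[k][k] == 0.0:                         # swap in a usable pivot row
            for r in range(k + 1, n):
                if M[r][k] != 0.0:
                    M[k], M[r] = M[r], M[k]
                    break
        for i in range(k + 1, n):                  # eliminate below the pivot
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):                 # back-substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x
```

For example, gauss_solve([[2.0, 1.0], [1.0, 3.0]], [3.0, 4.0]) returns [1.0, 1.0].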

3.3 JACOBI AND GAUSS-SEIDEL ITERATIVE METHODS
We can use iteration methods to solve a system of linear equations when the
coefficient matrix is diagonally dominant. This is ensured by the set of sufficient
conditions given below,
∑ |aij| < |aii| (sum over j = 1, ..., n with j ≠ i), for i = 1, 2, ..., n    (3.4)

An alternative set of sufficient conditions is,

∑ |aij| < |ajj| (sum over i = 1, ..., n with i ≠ j), for j = 1, 2, ..., n    (3.5)

There are two forms of iteration methods termed as Jacobi iteration method
and Gauss-Seidel iteration method.
Jacobi Iteration Method: Consider a system of n linear equations,
a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
a31x1 + a32x2 + a33x3 + ... + a3nxn = b3
..................................................................
an1x1 + an2x2 + an3x3 + ... + annxn = bn

The diagonal elements aii, i = 1, 2, ..., n are non-zero and satisfy the set of
sufficient conditions stated earlier. When the system of equations does not satisfy
these conditions, we may rearrange the system in such a way that the conditions
hold.
In order to apply the iteration we rewrite the equations in the following form:

x1 = (b1 − a12x2 − a13x3 − ... − a1nxn) / a11
x2 = (b2 − a21x1 − a23x3 − ... − a2nxn) / a22
x3 = (b3 − a31x1 − a32x2 − ... − a3nxn) / a33
............................................................
xn = (bn − an1x1 − an2x2 − ... − an,n−1xn−1) / ann
To start the iteration we make an initial guess of the unknowns as
x1(0), x2(0), x3(0), ..., xn(0) (the initial guess may be taken to be zero).
Successive approximations are computed using the equations,

x1(k+1) = (b1 − a12x2(k) − a13x3(k) − ... − a1nxn(k)) / a11
x2(k+1) = (b2 − a21x1(k) − a23x3(k) − ... − a2nxn(k)) / a22
x3(k+1) = (b3 − a31x1(k) − a32x2(k) − ... − a3nxn(k)) / a33
.................................................................
xn(k+1) = (bn − an1x1(k) − an2x2(k) − ... − an,n−1xn−1(k)) / ann    (3.6)

Where k = 0, 1, 2, ...
The iterations are continued till the desired accuracy is achieved. This is checked
by the relations,
|xi(k+1) − xi(k)| < ε, for i = 1, 2, ..., n    (3.7)
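The Jacobi scheme just described can be sketched in Python (the function name and interface are assumptions); every component of the new iterate is computed from the previous iterate only:

```python
def jacobi(A, b, eps=1e-6, maxit=100):
    """Jacobi iteration for Ax = b; intended for diagonally dominant systems."""
    n = len(b)
    x = [0.0] * n                        # initial guess taken as zero
    for _ in range(maxit):
        x_new = [
            (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            for i in range(n)
        ]
        # stop when every component changes by less than eps
        if max(abs(x_new[i] - x[i]) for i in range(n)) < eps:
            return x_new
        x = x_new
    raise RuntimeError("iterations did not converge within maxit sweeps")
```

Applied to a diagonally dominant system such as that of Example 3.1 below, the iterates settle to the solution within a few dozen sweeps.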

Gauss-Seidel Iteration Method: This is a simple modification of the Jacobi
iteration. In this method, at any stage of iteration of the system, the improved
values of the unknowns are used for computing the components of the unknown
vector. The iteration equations given below are used in this method,

x1(k+1) = (b1 − a12x2(k) − a13x3(k) − ... − a1nxn(k)) / a11
x2(k+1) = (b2 − a21x1(k+1) − a23x3(k) − ... − a2nxn(k)) / a22
x3(k+1) = (b3 − a31x1(k+1) − a32x2(k+1) − ... − a3nxn(k)) / a33
.................................................................
xn(k+1) = (bn − an1x1(k+1) − an2x2(k+1) − ... − an,n−1xn−1(k+1)) / ann    (3.8)

It is clear from above that for computing x2(k+1), the improved value x1(k+1)
is used instead of x1(k); and for computing x3(k+1), the improved values x1(k+1) and
x2(k+1) are used. Finally, for computing xn(k+1), the improved values of all the
components x1(k+1), x2(k+1), ..., xn−1(k+1) are used. Further, as in the Jacobi iteration,
the iterations are continued till the desired accuracy is achieved.
Example 3.1: Solve the following system by the Gauss-Seidel iterative method
correct up to four significant digits.
10x1 − 2x2 − x3 − x4 = 3
−2x1 + 10x2 − x3 − x4 = 15
−x1 − x2 + 10x3 − 2x4 = 27
−x1 − x2 − 2x3 + 10x4 = −9

Solution: The given system clearly has a diagonally dominant coefficient matrix,
i.e.,
|aii| ≥ ∑ |aij| (sum over j ≠ i), i = 1, 2, ..., n
Hence, we can employ the Gauss-Seidel iteration method, for which we rewrite
the system as,

x1(k+1) = 0.3 + 0.2x2(k) + 0.1x3(k) + 0.1x4(k)
x2(k+1) = 1.5 + 0.2x1(k+1) + 0.1x3(k) + 0.1x4(k)
x3(k+1) = 2.7 + 0.1x1(k+1) + 0.1x2(k+1) + 0.2x4(k)
x4(k+1) = −0.9 + 0.1x1(k+1) + 0.1x2(k+1) + 0.2x3(k+1)

We start the iteration with,
x1(0) = 0.3, x2(0) = 1.5, x3(0) = 2.7, x4(0) = −0.9

The results of successive iterations are given in the table below.


k      x1        x2        x3        x4
1      0.72      1.824     2.774     –0.0196
2      0.9403    1.9635    2.9864    –0.0125
3      0.9901    1.9954    2.9960    –0.0023
4      0.9984    1.9990    2.9993    –0.0004
5      0.9997    1.9998    2.9998    –0.0003
6      0.9998    1.9998    2.9998    –0.0003
7      1.0000    2.0000    3.0000    0.0000

Hence, the solution correct to four significant figures is x1 = 1.000, x2 = 2.000,
x3 = 3.000, x4 = 0.000.
Example 3.2: Solve the following system by the Gauss-Seidel iteration method.
20x1 + 2x2 + x3 = 30
−x1 + 40x2 − 3x3 = 75
2x1 − x2 + 10x3 = 30

Give the solution correct up to three significant figures.


Solution: It is evident that the coefficient matrix is diagonally dominant and the
sufficient conditions for convergence of the Gauss-Seidel iterations are satisfied,
since
|a11| = 20 ≥ |a12| + |a13| = 3
|a22| = 40 ≥ |a21| + |a23| = 4
|a33| = 10 ≥ |a31| + |a32| = 3

For starting the iterations, we rewrite the equations as,

x1 = (1/20)(30 − 2x2 − x3)
x2 = (1/40)(75 + x1 + 3x3)
x3 = (1/10)(30 − 2x1 + x2)

The initial approximate solution is taken as,
x1(0) = 1.5, x2(0) = 2.0, x3(0) = 3.0

The first iteration gives,

x1(1) = (1/20)(30 − 2 × 2.0 − 3.0) = 1.15
x2(1) = (1/40)(75 + 1.15 + 3 × 3.0) = 2.14
x3(1) = (1/10)(30 − 2 × 1.15 + 2.14) = 2.98

The second iteration gives,

x1(2) = (1/20)(30 − 2 × 2.14 − 2.98) = 1.137
x2(2) = (1/40)(75 + 1.137 + 3 × 2.98) = 2.127
x3(2) = (1/10)(30 − 2 × 1.137 + 2.127) = 2.986

The third iteration gives,

x1(3) = (1/20)(30 − 2 × 2.127 − 2.986) = 1.138
x2(3) = (1/40)(75 + 1.138 + 3 × 2.986) = 2.127
x3(3) = (1/10)(30 − 2 × 1.138 + 2.127) = 2.985

Thus, the solution correct to three significant digits can be written as x1 = 1.14,
x2 = 2.13, x3 = 2.98.
Example 3.3: Solve the following system correct to three significant digits, using
the Jacobi iteration method.

10x1 + 8x2 − 3x3 + x4 = 16
3x1 − 4x2 + 10x3 + x4 = 10
2x1 + 10x2 + x3 − 4x4 = 9
2x1 + 2x2 − 3x3 + 10x4 = 11

Solution: The system is first rearranged so that the coefficient matrix is diagonally
dominant. The equations are rewritten for starting the Jacobi iteration as,

x1(k+1) = 1.6 − 0.8x2(k) + 0.3x3(k) − 0.1x4(k)
x2(k+1) = 0.9 − 0.2x1(k) − 0.1x3(k) + 0.4x4(k)
x3(k+1) = 1.0 − 0.3x1(k) + 0.4x2(k) − 0.1x4(k)
x4(k+1) = 1.1 − 0.2x1(k) − 0.2x2(k) + 0.3x3(k), where k = 0, 1, 2, ...

The initial guess of the solution is taken as,
x1(0) = 1.6, x2(0) = 0.9, x3(0) = 1.0, x4(0) = 1.1
The results of successive iterations computed by Jacobi iterations are given in
the following table:

k x1 x2 x3 x4
1 1.07 0.92 0.77 0.90
2 1.050 0.969 0.957 0.933
3 1.0186 0.9765 0.9928 0.9923
4 1.0174 0.9939 0.9858 0.9989
5 0.9997 0.9975 0.9925 0.9974
6 1.0001 0.9997 0.9994 0.9984
7 1.0002 0.9998 1.0001 0.9999

Thus the solution correct to three significant digits is x1 = 1.000, x2 = 1.000,
x3 = 1.000, x4 = 1.000.
Algorithm. Solution of a system of equations by the Gauss-Seidel iteration method.
Step 1. Input elements aij of the augmented matrix for i = 1 to n, j = 1 to n + 1
Step 2. Input epsilon, maxit [epsilon is the desired accuracy, maxit is the
maximum number of iterations]
Step 3. Set xi = 0 for i = 1 to n, and set iter = 0
Step 4. Set big = 0, sum = 0, j = 1, k = 1
Step 5. Check if k ≠ j, set sum = sum + ajk xk
Step 6. Check if k < n, set k = k + 1, go to step 5 else go to next step
Step 7. Compute temp = (aj,n+1 – sum) / ajj
Step 8. Compute relerr = abs (xj – temp) / temp
Step 9. Check if big < relerr then set big = relerr
Step 10. Set xj = temp
Step 11. Set j = j + 1, k = 1, sum = 0
Step 12. Check if j ≤ n go to step 5 else go to next step
Step 13. Check if big < epsilon then {write ‘iterations converge’, write
xj for j = 1 to n and go to step 16} else if iter < maxit set
iter = iter + 1 and go to step 4
Step 14. Write ‘iterations do not converge in’, maxit, ‘iterations’
Step 15. Write xj for j = 1 to n
Step 16. End
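The steps above can be rendered as a short Python function (a minimal sketch; the relative-error bookkeeping mirrors Steps 8–13):

```python
def gauss_seidel(A, b, eps=1e-6, maxit=100):
    """Gauss-Seidel iteration: each improved component is used immediately."""
    n = len(b)
    x = [0.0] * n
    for _ in range(maxit):
        big = 0.0                              # largest relative change in a sweep
        for j in range(n):
            s = sum(A[j][k] * x[k] for k in range(n) if k != j)
            temp = (b[j] - s) / A[j][j]
            if temp != 0.0:                    # guard the division in relerr
                big = max(big, abs(x[j] - temp) / abs(temp))
            x[j] = temp
        if big < eps:
            return x
    raise RuntimeError("iterations do not converge in maxit iterations")
```

Applied to the system of Example 3.2, it converges to roughly (1.138, 2.127, 2.985).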

Check Your Progress


1. Define the solution of linear systems.
2. Explain the homogeneous system of equations.
3. How can we solve a system of linear equations?
4. State the Gauss-Seidel iteration methods.

3.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. Many engineering and scientific problems require the solution of a system
of linear equations. We consider a system of m linear equations in n
unknowns written as,
a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
a31x1 + a32x2 + a33x3 + ... + a3nxn = b3
... ... ...
am1x1 + am2x2 + am3x3 + ... + amnxn = bm

2. The system of equations is termed as a homogeneous one if all the elements
in the column vector b are zero. Otherwise, the system is termed as a
non-homogeneous one.
The homogeneous system has a non-trivial solution if A is a square matrix,
i.e., m = n, and the determinant of the coefficient matrix, i.e., |A|, is equal
to zero.
3. We can use iteration methods to solve a system of linear equations when
the coefficient matrix is diagonally dominant. This is ensured by the set of
sufficient conditions given below,
∑ |aij| < |aii| (sum over j = 1, ..., n with j ≠ i), for i = 1, 2, ..., n

4. Gauss-Seidel iteration method: This is a simple modification of the Jacobi
iteration. In this method, at any stage of iteration of the system, the improved
values of the unknowns are used for computing the components of the
unknown vector.

3.5 SUMMARY
• Many engineering and scientific problems require the solution of a system
of linear equations. We consider a system of m linear equations in n
unknowns written as,
a11x1 + a12x2 + a13x3 + ... + a1nxn = b1
a21x1 + a22x2 + a23x3 + ... + a2nxn = b2
a31x1 + a32x2 + a33x3 + ... + a3nxn = b3
... ... ...
am1x1 + am2x2 + am3x3 + ... + amnxn = bm
• The system of equations is termed as a homogeneous one if all the elements
in the column vector b are zero. Otherwise, the system is termed as a
non-homogeneous one. The homogeneous system has a non-trivial solution
if A is a square matrix, i.e., m = n, and the determinant of the coefficient
matrix, i.e., |A|, is equal to zero.
• The computation of the solution of a system of n linear equations in n
unknowns can be made by any one of the two classical methods known as
Cramer’s rule and the matrix inversion method.
• We can use iteration methods to solve a system of linear equations when
the coefficient matrix is diagonally dominant. This is ensured by the set of
sufficient conditions given below,
∑ |aij| < |aii| (sum over j = 1, ..., n with j ≠ i), for i = 1, 2, ..., n
• An alternative set of sufficient conditions is,
∑ |aij| < |ajj| (sum over i = 1, ..., n with i ≠ j), for j = 1, 2, ..., n
• Gauss-Seidel iteration method: This is a simple modification of the Jacobi
iteration. In this method, at any stage of iteration of the system, the improved
values of the unknowns are used for computing the components of the
unknown vector.
3.6 KEY WORDS

• Solution of linear systems: A solution of a linear system is an assignment
of values to the variables x1, x2, ..., xn such that each of the equations is
satisfied. The set of all possible solutions is called the solution set.
• Jacobi iterative method: The Jacobi method is an iterative algorithm for
determining the solutions of a strictly diagonally dominant system of linear
equations.
• Gauss-Seidel iteration method: The Gauss-Seidel method, also known
as the Liebmann method or the method of successive displacement, is an
iterative method used to solve a system of linear equations.

3.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain the solution of linear systems.
2. Elaborate on the Jacobi iterative method.
3. State the Gauss-Seidel iteration method.
Long-Answer Questions
1. Explain the solution of linear systems with the help of an example.
2. Discuss briefly the Jacobi iterative method.
3. Analyse the Gauss-Seidel iteration method. Give an appropriate example.

3.8 FURTHER READINGS


Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical
Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.
UNIT 4 INTERPOLATION

Structure
4.0 Introduction
4.1 Objectives
4.2 Graphical Method of Interpolation
4.3 Finite Difference
4.4 Forward Difference
4.5 Backward Difference
4.6 Central Difference
4.7 Fundamental Theorem of Finite Differences
4.8 Answers to Check Your Progress Questions
4.9 Summary
4.10 Key Words
4.11 Self Assessment Questions and Exercises
4.12 Further Readings

4.0 INTRODUCTION

In numerical analysis, interpolation is a type of estimation, a method of constructing


new data points within the range of a discrete set of known data points. In
engineering and science, one often has a number of data points, obtained by sampling
or experimentation, which represent the values of a function for a limited number
of values of the independent variable. It is often required to interpolate, i.e., estimate
the value of that function for an intermediate value of the independent variable.
A finite difference is a mathematical expression of the form f (x + b) – f
(x + a). If a finite difference is divided by b – a, one gets a difference quotient. The
approximation of derivatives by finite differences plays a central role in finite
difference methods for the numerical solution of differential equations, especially
boundary value problems.
Forward differences play a most important role in solving ordinary
differential equations using single-step predictor-corrector methods, such as the
Euler and Runge-Kutta methods. Backward differences are very convenient for
the approximation of derivatives. The central difference is an average of the forward
and backward differences for equally spaced values of the data. The truncation
error of the central difference approximation is of order O(h²), where h is the step
size.
In this unit, you will study about the interpolation, graphical method of
interpolation, finite difference, forward difference, backward difference, central
difference, and the fundamental theorem of finite differences.

4.1 OBJECTIVES

After going through this unit, you will be able to:
• Define interpolation
• Explain the graphical method of interpolation
• Understand finite differences
• Elaborate on the forward difference
• Comprehend the central difference
• Explain the fundamental theorem of finite differences

4.2 GRAPHICAL METHOD OF INTERPOLATION

The problem of interpolation is a very fundamental problem in numerical analysis.
The term interpolation literally means reading between the lines. In numerical analysis,
interpolation means computing the value of a function f (x) in between values of x
in a table of values. It can be stated explicitly as ‘given a set of (n + 1) values y0,
y1, y2, ..., yn for x = x0, x1, x2, ..., xn, respectively, the problem of interpolation is
to compute the value of the function y = f (x) for some non-tabular value of x.’
The computation is often made by finding a polynomial, called the interpolating
polynomial, of degree less than or equal to n such that the value of the polynomial
is equal to the value of the function at each of the tabulated points. Thus if,
P(x) = a0 + a1x + a2x^2 + ... + anx^n    (4.1)
is the interpolating polynomial of degree ≤ n, then
P(xi) = yi, for i = 0, 1, 2, ..., n    (4.2)

It is true that, in general, it is difficult to guess the type of function to
approximate f (x). In case of periodic functions, the approximation can be made
by a finite series of trigonometric functions. Polynomial interpolation is a very
useful method for functional approximation. The interpolating polynomial is also
useful as a basis to develop methods for other problems such as numerical
differentiation, numerical integration and solution of initial and boundary value
problems associated with differential equations.
The following theorem, developed by Weierstrass, gives the justification for
approximation of the unknown function by a polynomial.
Theorem 4.1: Every function which is continuous in an interval (a, b) can be
represented in that interval by a polynomial to any desired accuracy. In other
words, it is possible to determine a polynomial P(x) such that |f (x) − P(x)| < ε
for every x in the interval (a, b), where ε is any prescribed small quantity.
Geometrically, it may be interpreted that the graph of the polynomial y = P(x)
is confined to the region bounded by the curves y = f (x) − ε and y = f (x) + ε
for all values of x within (a, b), however small ε may be.

Fig. 4.1 Interpolation

The following theorem is regarding the uniqueness of the interpolating polynomial.
Theorem 4.2: For a real-valued function f (x) defined at (n + 1) distinct points
x0, x1, ..., xn, there exists exactly one polynomial of degree ≤ n which interpolates
f (x) at x0, x1, ..., xn.
We know that a polynomial P(x) which has (n + 1) distinct roots x0, x1, ...,
xn can be written as,
P(x) = (x – x0) (x – x1) ..... (x – xn) q(x)
Where q(x) is a polynomial whose degree is (n + 1) less than the degree of P(x).
Suppose that P1(x) and P2(x) are two polynomials of degree ≤ n and that
both interpolate f (x), and let P(x) = P1(x) − P2(x). Then P(x) is of degree ≤ n
and vanishes at the n + 1 points x0, x1, ..., xn. Thus P(x) = 0 and P1(x) = P2(x).


Extrapolation
The interpolating polynomials are usually used for finding values of the tabulated
function y = f(x) for a value of x within the table. But, they can also be used in some
cases for finding values of f(x) for values of x near to the end points x0 or xn outside
the interval [x0, xn]. This process of finding values of f(x) at points beyond the
interval is termed as extrapolation. We can use Newton’s forward difference
interpolation for points near the beginning value x0. Similarly, for points near the end
value xn, we use Newton’s backward difference interpolation formula.
Example 4.1: With the help of appropriate interpolation formulae, find from the
following data the weight of a baby at the age of two years and of ten years:

Age x 3 5 7 9
Weight y (kg ) 5 8 12 17
Solution: Since the values of x are equidistant, we form the finite difference table
for using Newton's forward difference interpolation formula to compute the weight
of the baby at the required ages.

x    y    Δy    Δ²y
3    5
           3
5    8           1
           4
7    12          1
           5
9    17

Taking x = 2, u = (x − x0)/h = (2 − 3)/2 = −0.5, Newton's forward difference
interpolation gives,
y(2) = 5 + (−0.5) × 3 + ((−0.5)(−1.5)/2) × 1
     = 5 − 1.5 + 0.38 = 3.88 ≈ 3.9 kg.
Similarly, for computing the weight of the baby at the age of ten years, we use
Newton's backward difference interpolation given by,
v = (x − xn)/h = (10 − 9)/2 = 0.5
y(10) = 17 + 0.5 × 5 + ((0.5)(1.5)/2) × 1
      = 17 + 2.5 + 0.38 = 19.88 kg.
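The two evaluations of this example can be checked with a short Python sketch (the helper names are assumptions). It builds the forward difference table once and evaluates Newton's forward formula near the beginning of the table and the backward formula near the end:

```python
xs = [3, 5, 7, 9]
ys = [5.0, 8.0, 12.0, 17.0]
h = 2.0

# difference table: diff[k][i] holds the k-th order forward difference at x_i
diff = [ys[:]]
while len(diff[-1]) > 1:
    prev = diff[-1]
    diff.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])

def newton_forward(x):
    """Newton's forward formula anchored at x0 = xs[0]."""
    u, term, total = (x - xs[0]) / h, 1.0, 0.0
    for k in range(len(xs)):
        total += term * diff[k][0]
        term *= (u - k) / (k + 1)
    return total

def newton_backward(x):
    """Newton's backward formula anchored at xn = xs[-1]."""
    v, term, total = (x - xs[-1]) / h, 1.0, 0.0
    for k in range(len(xs)):
        total += term * diff[k][-1]
        term *= (v + k) / (k + 1)
    return total
```

Here newton_forward(2) evaluates to 3.875 (≈ 3.9 kg) and newton_backward(10) to 19.875 (≈ 19.88 kg), matching the hand computation up to rounding; both formulas also reproduce the tabulated weights exactly at the nodes.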

4.3 FINITE DIFFERENCE

Let us assume that values of a function y = f (x) are known for a set of equally
spaced values of x given by {x0, x1,..., xn}, such that the spacing between any
two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 + h,..., xn = xn–1 + h,
so that xi = x0 + ih for i = 1, 2, ...,n. We consider two types of differences known
as forward differences and backward differences of various orders. These
differences can be tabulated in a finite difference table as explained in the subsequent
sections.

4.4 FORWARD DIFFERENCE


Let y0, y1, ..., yn be the values of a function y = f (x) at the equally spaced values of
x = x0, x1, ..., xn. The differences between consecutive values of y, given by y1 − y0,
y2 − y1, ..., yn − yn−1, are called the first order forward differences of the function
y = f (x) at the points x0, x1, ..., xn−1. These differences are denoted by,

Δy0 = y1 − y0, Δy1 = y2 − y1, ..., Δyn−1 = yn − yn−1    (4.3)

where Δ is termed the forward difference operator, defined by,

Δf (x) = f (x + h) − f (x)    (4.4)

Thus, Δyi = yi+1 − yi, for i = 0, 1, 2, ..., n − 1, are the first order forward
differences at xi.

The differences of these first order forward differences are called the second
order forward differences.

Thus, Δ²yi = Δ(Δyi) = Δyi+1 − Δyi, for i = 0, 1, 2, ..., n − 2    (4.5)

Evidently,

Δ²y0 = Δy1 − Δy0 = y2 − y1 − (y1 − y0) = y2 − 2y1 + y0

And, Δ²yi = yi+2 − yi+1 − (yi+1 − yi)

i.e., Δ²yi = yi+2 − 2yi+1 + yi, for i = 0, 1, 2, ..., n − 2    (4.6)

Similarly, the third order forward differences are given by,

Δ³yi = Δ²yi+1 − Δ²yi, for i = 0, 1, 2, ..., n − 3

i.e., Δ³yi = yi+3 − 3yi+2 + 3yi+1 − yi    (4.7)

Finally, we can define the nth order forward difference by,

Δⁿy0 = yn − n yn−1 + (n(n − 1)/2!) yn−2 − ... + (−1)ⁿ y0    (4.8)

The coefficients in the above equation are the binomial coefficients appearing in the
expansion of (1 − x)ⁿ.
The forward differences of various orders for a table of values of a function
y = f (x) are usually computed and represented in a diagonal difference table. A
diagonal difference table for a table of values of y = f (x) at six points x0, x1, x2, x3,
x4, x5 is shown here.

Diagonal difference table for y = f (x):

i   xi   yi    Δyi    Δ²yi    Δ³yi    Δ⁴yi    Δ⁵yi
0   x0   y0
              Δy0
1   x1   y1           Δ²y0
              Δy1             Δ³y0
2   x2   y2           Δ²y1            Δ⁴y0
              Δy2             Δ³y1            Δ⁵y0
3   x3   y3           Δ²y2            Δ⁴y1
              Δy3             Δ³y2
4   x4   y4           Δ²y3
              Δy4
5   x5   y5

The entries in any column of differences are computed as the differences of the
entries of the previous column and are placed in between them. The upper entry
in a column is subtracted from the lower entry to compute the forward differences.
We notice that the forward differences of various orders with respect to yi lie along
the forward diagonal through it. Thus, Δy0, Δ²y0, Δ³y0, Δ⁴y0 and Δ⁵y0 lie along the top
forward diagonal through y0. Consider the following example.
Example 4.2: Given the table of values of y = f (x),

x    1    3    5    7    9
y    8   12   21   36   62

form the diagonal difference table and find the values of Δf (5), Δ²f (3), Δ³f (1).
Solution: The diagonal difference table is,

i   xi   yi    Δyi    Δ²yi    Δ³yi    Δ⁴yi
0    1    8
               4
1    3   12            5
               9                1
2    5   21            6                4
              15                5
3    7   36           11
              26
4    9   62

From the table, we find that Δf (5) = 15, the entry along the diagonal through the
entry 21 of f (5).
Similarly, Δ²f (3) = 6, the entry along the diagonal through f (3). Finally,
Δ³f (1) = 1.
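A diagonal table like the one above is easy to generate mechanically. A small Python sketch (my own helper, not from the text) that returns the successive forward difference columns for this data:

```python
def difference_columns(ys):
    """Return [y, dy, d2y, ...]: the forward difference columns,
    each one entry shorter than the previous."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return cols

cols = difference_columns([8, 12, 21, 36, 62])
# Df(5) is the third entry of the first-difference column, D2f(3) the
# second entry of the second column, D3f(1) the first of the third:
print(cols[1][2], cols[2][1], cols[3][0])   # 15 6 1
```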

4.5 BACKWARD DIFFERENCE

The backward differences of various orders for a table of values of a function
y = f (x) are defined in a manner similar to the forward differences. The backward
difference operator ∇ is defined by ∇f (x) = f (x) − f (x − h).

Thus, ∇yk = yk − yk−1, for k = 1, 2, ..., n

i.e., ∇y1 = y1 − y0, ∇y2 = y2 − y1, ..., ∇yn = yn − yn−1    (4.9)

The backward differences of second order are defined by,

∇²yk = ∇yk − ∇yk−1 = yk − 2yk−1 + yk−2

Hence,

∇²y2 = y2 − 2y1 + y0, and ∇²yn = yn − 2yn−1 + yn−2    (4.10)

Higher order backward differences can be defined in a similar manner.

Thus, ∇³yn = yn − 3yn−1 + 3yn−2 − yn−3, etc.    (4.11)

Finally,

∇ⁿyn = yn − n yn−1 + (n(n − 1)/2!) yn−2 − ... + (−1)ⁿ y0    (4.12)

The backward differences of various orders can be computed and placed in a
diagonal difference table. The backward differences at a point are then found along
the backward diagonal through that point. The following table shows the backward
difference entries.

Diagonal difference table of backward differences:

i   xi   yi    ∇yi    ∇²yi    ∇³yi    ∇⁴yi    ∇⁵yi
0   x0   y0
              ∇y1
1   x1   y1           ∇²y2
              ∇y2             ∇³y3
2   x2   y2           ∇²y3            ∇⁴y4
              ∇y3             ∇³y4            ∇⁵y5
3   x3   y3           ∇²y4            ∇⁴y5
              ∇y4             ∇³y5
4   x4   y4           ∇²y5
              ∇y5
5   x5   y5

The entries along a column in the table are computed (as discussed for the previous
table) as the differences of the entries in the previous column and are placed in
between them. We notice that the backward differences of various orders with respect
to yi lie along the backward diagonal through it. Thus, ∇y5, ∇²y5, ∇³y5, ∇⁴y5 and
∇⁵y5 all lie along the lowest backward diagonal through y5.

We may note that the entries of the backward difference table in any
column are the same as those of the forward difference table, but the differences
are taken with respect to different reference points.

Specifically, if we compare the columns of first order differences we can see
that,

Δy0 = ∇y1, Δy1 = ∇y2, ..., Δyn−1 = ∇yn

Hence, Δyi = ∇yi+1, for i = 0, 1, 2, ..., n − 1

Similarly, Δ²y0 = ∇²y2, Δ²y1 = ∇²y3, ..., Δ²yn−2 = ∇²yn

Thus, Δ²yi = ∇²yi+2, for i = 0, 1, 2, ..., n − 2

In general, Δᵏyi = ∇ᵏyi+k.

Conversely, ∇ᵏyi = Δᵏyi−k.
Example 4.3: Given the following table of values of y = f (x):

x    1    3    5    7    9
y    8   12   21   36   62

find the values of ∇y(7), ∇²y(9), ∇³y(9).
Solution: We form the diagonal difference table,

xi   yi    ∇yi    ∇²yi    ∇³yi    ∇⁴yi
 1    8
            4
 3   12            5
            9                1
 5   21            6                4
           15                5
 7   36           11
           26
 9   62

From the table, we can easily find ∇y(7) = 15, ∇²y(9) = 11, ∇³y(9) = 5.

4.6 CENTRAL DIFFERENCE

The central difference operator, denoted by δ, is defined by,

δf (x) = f (x + h/2) − f (x − h/2)

Thus, δyi = yi+1/2 − yi−1/2 = (E^(1/2) − E^(−1/2)) yi,

giving the operator relation δ ≡ E^(1/2) − E^(−1/2), or δ ≡ ΔE^(−1/2) ≡ ∇E^(1/2)

Also, δ²yi = (E^(1/2) − E^(−1/2))² yi = (E − 2 + E⁻¹) yi

i.e., δ²yi = yi+1 − 2yi + yi−1

Further, higher order central differences follow by repeated application of δ; the
even order central differences at xi involve only tabulated values.    (4.13)

Even though the central difference operator uses fractional arguments, it is still
widely used. Related to it is the averaging operator μ, defined by,

μ ≡ (1/2)(E^(1/2) + E^(−1/2))    (4.14)

Squaring,

∴ μ² ≡ (1/4)(E + 2 + E⁻¹) ≡ 1 + δ²/4    (4.15)

It may be noted that,

μδ ≡ (1/2)(E^(1/2) + E^(−1/2))(E^(1/2) − E^(−1/2)) ≡ (1/2)(E − E⁻¹)

∴ μδ ≡ (1/2)(Δ + ∇)    (4.16)

Further, E^(1/2) ≡ μ + δ/2 and E^(−1/2) ≡ μ − δ/2.
Example 4.4: Prove the following operator relations:

(i) Δ ≡ ∇E    (ii) (1 + Δ)(1 − ∇) ≡ 1

Solution:
(i) Since Δf (x) = f (x + h) − f (x) = Ef (x) − f (x), we have Δ ≡ E − 1    (1)

And since ∇f (x) = f (x) − f (x − h) = (1 − E⁻¹) f (x), we have ∇ ≡ 1 − E⁻¹    (2)

Thus, ∇ ≡ (E − 1)/E, or ∇E ≡ E − 1 ≡ Δ

Hence proved.
(ii) From Equation (1), we have E ≡ 1 + Δ    (3)
And from Equation (2), we get E⁻¹ ≡ 1 − ∇    (4)
Multiplying Equations (3) and (4), we get (1 + Δ)(1 − ∇) ≡ E · E⁻¹ ≡ 1.
Example 4.5: If fi is the value of f (x) at xi, where xi = x0 + ih, for i = 1, 2, ..., prove
that,

fi = E^i f0 = Σ_{j=0}^{i} C(i, j) Δ^j f0

where C(i, j) denotes the binomial coefficient.
Solution: We can write Ef (x) = f (x + h).
Using the Taylor series expansion, we have

f (x + h) = f (x) + h f ′(x) + (h²/2!) f ″(x) + ... = e^(hD) f (x), where D ≡ d/dx

∴ E ≡ 1 + Δ ≡ e^(hD)

Hence, E^i ≡ e^(ihD) ≡ (1 + Δ)^i

Now, fi = f (xi) = f (x0 + ih) = E^i f (x0) = (1 + Δ)^i f0 = Σ_{j=0}^{i} C(i, j) Δ^j f0

Hence proved.
Example 4.6: Compute the following differences:
(i) Δⁿeˣ    (ii) Δⁿxⁿ
Solution:
(i) We have, Δeˣ = e^(x+h) − eˣ = eˣ(e^h − 1)

Again, Δ²eˣ = Δ(Δeˣ) = (e^h − 1) Δeˣ = (e^h − 1)² eˣ

Thus by induction, Δⁿeˣ = (e^h − 1)ⁿ eˣ.

(ii) We have,

Δ(xⁿ) = (x + h)ⁿ − xⁿ = nh x^(n−1) + (n(n − 1)/2!) h² x^(n−2) + ... + hⁿ

Thus, Δ(xⁿ) is a polynomial of degree (n − 1) with leading term nh x^(n−1).

Also, Δ(hⁿ) = 0. Hence, we can say that Δ²(xⁿ) is a polynomial of degree
(n − 2) with leading term n(n − 1) h² x^(n−2).

Proceeding n times, we get Δⁿ(xⁿ) = n! hⁿ.
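Both results can be checked numerically with the binomial expansion Δⁿf(x) = Σₖ (−1)^(n−k) C(n, k) f(x + kh) of (E − 1)ⁿ. A short sketch (helper name my own):

```python
from math import comb, exp, factorial

def nth_forward_diff(f, x, h, n):
    """n-th forward difference of f at x via the expansion of (E - 1)^n."""
    return sum((-1) ** (n - k) * comb(n, k) * f(x + k * h)
               for k in range(n + 1))

h, n = 0.5, 4
d = nth_forward_diff(lambda t: t ** n, 1.0, h, n)
print(d, factorial(n) * h ** n)          # both equal n! h^n

d_exp = nth_forward_diff(exp, 0.3, 0.1, 3)
print(d_exp, (exp(0.1) - 1) ** 3 * exp(0.3))   # both equal (e^h - 1)^n e^x
```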

Example 4.7: Prove that,

(i) Δ{f (x)/g(x)} = [g(x) Δf (x) − f (x) Δg(x)] / [g(x) g(x + h)]

(ii) Δ{log f (x)} = log{1 + Δf (x)/f (x)}

Solution:
(i) We have,

Δ{f (x)/g(x)} = f (x + h)/g(x + h) − f (x)/g(x)
             = [f (x + h) g(x) − f (x) g(x + h)] / [g(x + h) g(x)]
             = [f (x + h) g(x) − f (x) g(x) + f (x) g(x) − f (x) g(x + h)] / [g(x + h) g(x)]
             = [g(x){f (x + h) − f (x)} − f (x){g(x + h) − g(x)}] / [g(x) g(x + h)]
             = [g(x) Δf (x) − f (x) Δg(x)] / [g(x) g(x + h)]

(ii) We have,

Δ{log f (x)} = log{f (x + h)} − log{f (x)}
             = log[f (x + h)/f (x)] = log[{f (x + h) − f (x) + f (x)}/f (x)]
             = log{Δf (x)/f (x) + 1}

4.7 FUNDAMENTAL THEOREM OF FINITE
DIFFERENCES

The concept of finite differences was introduced by Brook Taylor in 1715, and
finite differences have also been studied as abstract mathematical objects in the
works of George Boole (1860), L. M. Milne-Thomson (1933), and Károly Jordan
(1939). Finite differences trace their origins back to one of Jost Bürgi's algorithms
(c. 1592) and work by others including Isaac Newton.
A finite difference is a mathematical expression of the form f (x + b) – f
(x + a). If a finite difference is divided by b – a, then we get a difference quotient.
In many contexts, 'finite differences' refers to finite difference approximations.
The approximation of derivatives by finite differences plays a significant role in
finite difference methods for the numerical solution of differential equations,
particularly boundary value problems. Some recurrence relations can be written
in the form of difference equations by replacing the iteration notation with finite
differences.
At present, the term ‘Finite Difference’ is often taken as synonymous
with finite difference approximations of derivatives, particularly in the context of
numerical methods. Fundamentally, the finite difference approximations are finite
difference quotients.
A function F is said to be an anti-difference of the real-valued function f if
ΔF = f; using the anti-difference operator Δ⁻¹, we write F = Δ⁻¹f.
Theorem: Fundamental Theorem of the Calculus of Finite Differences
Let f be a real-valued function and let a and b be integers such that a ≤ b.

If F = Δ⁻¹f, then Σ_{x=a}^{b} f (x) = F(b + 1) − F(a).

Proof: Let f be a real-valued function and let a, b ∈ ℤ, where a ≤ b.

Since F = Δ⁻¹f, we have ΔF = f,

i.e., ΔF(x) = F(x + 1) − F(x) = f (x)

Hence,

Σ_{x=a}^{b} f (x) = (F(a + 1) − F(a)) + (F(a + 2) − F(a + 1)) + ... +
(F(b) − F(b − 1)) + (F(b + 1) − F(b))

= F(b + 1) − F(a)

Numerical examples can be solved by applying the Fundamental Theorem of
the Calculus of Finite Differences.
Example 4.8: Consider the function f (x) = 2x + 1. An anti-difference of f is the
function F(x) = x². Evaluate the sum Σ_{x=1}^{6} f (x).

Solution: We can evaluate the sum by applying the Fundamental Theorem of
the Calculus of Finite Differences derived above:

Σ_{x=1}^{6} f (x) = F(6 + 1) − F(1) = F(7) − F(1) = (7)² − (1)² = 48

Finite differences are often used as approximations of the derivative, typically
in numerical differentiation.
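The theorem is just a telescoping sum, which a few lines of Python (names of my own choosing) can confirm for the example above:

```python
def delta(F):
    """Unit-step forward difference operator: (delta F)(x) = F(x + 1) - F(x)."""
    return lambda x: F(x + 1) - F(x)

F = lambda x: x ** 2      # an anti-difference of f(x) = 2x + 1
f = delta(F)              # delta F recovers f

a, b = 1, 6
lhs = sum(f(x) for x in range(a, b + 1))   # sum of f(x) for x = a..b
rhs = F(b + 1) - F(a)                      # F(b+1) - F(a)
print(lhs, rhs)   # 48 48
```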

Check Your Progress


1. Explain the graphical method of interpolation.
2. What do you understand by the extrapolation?
3. Define the finite difference.
4. Elaborate on the forward difference.
5. State the backward difference.
6. Explain the central difference.
7. Analyse the fundamental theorem of finite difference.

4.8 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS

1. The problem of interpolation is a fundamental problem in numerical


analysis. The term interpolation literally means reading between the lines. In
numerical analysis, interpolation means computing the value of a function f
(x) in between values of x in a table of values. It can be stated explicitly as
'given a set of (n + 1) values y0, y1, y2, ..., yn for x = x0, x1, x2, ..., xn,
respectively, the problem of interpolation is to compute the value of the
function y = f (x) for some non-tabular value of x.'
2. The interpolating polynomials are usually used for finding values of the
tabulated function y = f(x) for a value of x within the table. But, they can
also be used in some cases for finding values of f(x) for values of x near to
the end points x0 or xn outside the interval [x0, xn]. This process of finding
values of f(x) at points beyond the interval is termed as extrapolation.
3. Let us assume that values of a function y = f (x) are known for a set of
equally spaced values of x given by {x0, x1,..., xn}, such that the spacing
between any two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 +
h,..., xn = xn–1 + h, so that xi = x0 + ih for i = 1, 2, ...,n. We consider two
types of differences known as forward differences and backward differences
of various orders.
4. Let y0, y1,..., yn be the values of a function y = f (x) at the equally spaced
values of x = x0, x1, ..., xn. The differences between two consecutive y
given by y1 – y0, y2 – y1,..., yn – yn–1 are called the first order forward
differences of the function y = f (x) at the points x0, x1,..., xn–1. These
differences are denoted by,
Δy0 = y1 − y0, Δy1 = y2 − y1, ..., Δyn−1 = yn − yn−1
5. The backward differences of various orders for a table of values of a
function y = f (x) are defined in a manner similar to the forward
differences. The backward difference operator ’ (inverted triangle) is
defined by ∇f (x) = f (x) − f (x − h).
6. The central difference operator, denoted by δ, is defined by,
   δf (x) = f (x + h/2) − f (x − h/2)
   Thus, δyi = yi+1/2 − yi−1/2.
7. Let f be a real-valued function and let a and b be integers such that a ≤ b.
   If F = Δ⁻¹f, then Σ_{x=a}^{b} f (x) = F(b + 1) − F(a).

4.9 SUMMARY

• The problem of interpolation is a fundamental problem in numerical


analysis. The term interpolation literally means reading between the lines. In
numerical analysis, interpolation means computing the value of a function f
(x) in between values of x in a table of values. It can be stated explicitly as
‘given a set of (n + 1) values y0, y1, y2,..., yn for x = x0, x1, x2, ..., xn,
respectively. The problem of interpolation is to compute the value of the
function y = f (x) for some non-tabular value of x.’
• The interpolating polynomials are usually used for finding values of the

tabulated function y = f(x) for a value of x within the table. But, they can
also be used in some cases for finding values of f(x) for values of x near to
the end points x0 or xn outside the interval [x0, xn]. This process of finding
values of f(x) at points beyond the interval is termed as extrapolation.
• Let us assume that values of a function y = f (x) are known for a set of
equally spaced values of x given by {x0, x1,..., xn}, such that the spacing
between any two consecutive values is equal. Thus, x1 = x0 + h, x2 = x1 +
h,..., xn = xn–1 + h, so that xi = x0 + ih for i = 1, 2, ...,n. We consider two
types of differences known as forward differences and backward differences
of various orders.
• Let y0, y1,..., yn be the values of a function y = f (x) at the equally spaced
values of x = x0, x1, ..., xn. The differences between two consecutive y given
by y1 – y0, y2 – y1,..., yn – yn–1 are called the first order forward differences
of the function y = f (x) at the points x0, x1,..., xn–1. These differences are
denoted by,
Δy0 = y1 − y0, Δy1 = y2 − y1, ..., Δyn−1 = yn − yn−1
• The backward differences of various orders for a table of values of a
function y = f (x) are defined in a manner similar to the forward
differences. The backward difference operator ’ (inverted triangle) is
defined by ∇f (x) = f (x) − f (x − h).
• The central difference operator, denoted by δ, is defined by,
  δf (x) = f (x + h/2) − f (x − h/2)
  Thus, δyi = yi+1/2 − yi−1/2.
• Let f be a real-valued function and let a and b be integers such that a ≤ b.
  If F = Δ⁻¹f, then Σ_{x=a}^{b} f (x) = F(b + 1) − F(a).

4.10 KEY WORDS

• Interpolation: The term interpolation literally means reading between the


lines. In numerical analysis, interpolation means computing the value of a
function f(x) in between values of x in a table of values.
• Extrapolation: The process of finding values of f(x) at points beyond the
interval [x0,xn] is termed as extrapolation.
• Finite difference: A finite difference is a mathematical expression of the
form f (x + b) – f (x + a). If a finite difference is divided by b – a, one
gets a difference quotient.
Self-Instructional
Material 81
• Forward difference: Forward differences play an important role in solving
  ordinary differential equations with single-step and predictor-corrector
  methods, such as the Euler and Runge-Kutta methods.
• Backward difference: The backward differences of various orders for a
  table of values of a function y = f(x) are defined in a manner similar to the
  forward differences. The backward difference operator ∇ (inverted triangle)
  is defined by ∇f(x) = f(x) − f(x − h).
• Central difference: The central difference is the average of the forward
  and backward differences for equally spaced data. The truncation error of
  the central difference approximation is of order O(h²), where h is the step
  size.

4.11 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. Illustrate the graphical method of interpolation.
2. Define the finite difference.
3. State the forward difference.
4. Elaborate on the backward difference.
5. Explain the central difference.
6. Analyse the fundamental theorem of finite differences.
Long-Answer Questions
1. Discuss the graphical method of interpolation.
2. Explain the finite difference with the help of example.
3. Define the forward difference.
4. Analyse the backward difference. Give an appropriate example.
5. Briefly define the central difference.
6. State the fundamental theorem of finite differences.

4.12 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical


Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and

Engineering. Chennai (Tamil Nadu): The National Publishing Company.


Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

BLOCK - II
INTERPOLATIONS

UNIT 5 INTERPOLATING POLYNOMIALS
AND OPERATORS
Structure
5.0 Introduction
5.1 Objectives
5.2 Interpolating Polynomials Using Finite Difference
5.3 Other Difference Operators
5.4 Answers to Check Your Progress Questions
5.5 Summary
5.6 Key Words
5.7 Self Assessment Questions and Exercises
5.8 Further Readings

5.0 INTRODUCTION

In numerical analysis, polynomial interpolation is the interpolation of a given data


set by the polynomial of lowest possible degree that passes through the points of
the dataset. Polynomials can be used to approximate complicated curves, for
example, the shapes of letters in typography, given a few points. A relevant
application is the evaluation of the natural logarithm and trigonometric functions:
pick a few known data points, create a lookup table, and interpolate between
those data points. This results in significantly faster computations. Polynomial
interpolation also forms the basis for algorithms in numerical quadrature and
numerical ordinary differential equations, as well as for Secure Multi-Party
Computation and Secret Sharing schemes.
Polynomial interpolation is also essential to perform sub-quadratic
multiplication and squaring such as Karatsuba multiplication and Toom–Cook
multiplication, where an interpolation through points on a polynomial which defines
the product yields the product itself.
According to the method of finite differences, for any polynomial of degree
n or less, any sequence of n+2 values at equally spaced positions has a (n+1)th
difference exactly equal to 0. The element Sn+1 of the binomial transform is such a
(n+1)th difference.

Self-Instructional
84 Material
In this unit, you will study about the interpolating polynomials using finite
difference and other difference operators, such as the shift operator and the
central difference operator.
5.1 OBJECTIVES

After going through this unit, you will be able to:


• Explain the interpolating polynomials using finite difference
• Elaborate on the shift operator
• Understand the central difference operator

5.2 INTERPOLATING POLYNOMIALS USING


FINITE DIFFERENCE
In this method, we successively generate interpolating polynomials, of any degree,
by iteratively using linear interpolating functions.
Let p01(x) denote the linear interpolating polynomial for the tabulated values at
x0 and x1. Thus, we can write,

p01(x) = [f0 (x1 − x) − f1 (x0 − x)] / (x1 − x0)

This can be written in determinant notation as,

p01(x) = (1/(x1 − x0)) · | f0   x0 − x |
                         | f1   x1 − x |    (5.1)

This form of p01(x) is easy to visualize and is convenient for desk computation.
Thus, the linear interpolating polynomial through the pair of points (x0, f0) and
(xj, fj) can be easily written as,

p0j(x) = (1/(xj − x0)) · | f0   x0 − x |
                         | fj   xj − x |,  for j = 1, 2, ..., n    (5.2)

Now, consider the polynomial denoted by p01j(x) and defined by,

p01j(x) = (1/(xj − x1)) · | p01(x)   x1 − x |
                          | p0j(x)   xj − x |,  for j = 2, 3, ..., n    (5.3)

The polynomial p01j(x) interpolates f (x) at the points x0, x1, xj (j > 1) and is a
polynomial of degree 2, for which it can be easily verified that,

p01j(x0) = f0, p01j(x1) = f1 and p01j(xj) = fj

Similarly, the polynomial p012j(x) can be constructed by replacing p01(x) by
p012(x) and p0j(x) by p01j(x).
Thus,

p012j(x) = (1/(xj − x2)) · | p012(x)   x2 − x |
                           | p01j(x)   xj − x |,  for j = 3, 4, ..., n    (5.4)

Evidently, p012j(x) is a polynomial of degree 3 and it interpolates the function at
x0, x1, x2 and xj,

i.e., p012j(x0) = f0; p012j(x1) = f1; p012j(x2) = f2 and p012j(xj) = fj
This process can be continued to generate higher and higher degree interpolating
polynomials.
The results of the iterated linear interpolation can be conveniently represented
as given in the following table.

xk    fk    p0j    p01j   ...   xj − x
x0    f0                        x0 − x
x1    f1    p01                 x1 − x
x2    f2    p02    p012         x2 − x
x3    f3    p03    p013         x3 − x
...   ...   ...    ...    ...   ...
xj    fj    p0j    p01j         xj − x
...   ...   ...    ...    ...   ...
xn    fn    p0n    p01n         xn − x

The successive columns of interpolation results can be conveniently filled in by
computing the values of the determinants written using the previous column and the
corresponding entries in the last column, xj − x. Thus, for computing the p01j's for
j = 2, 3, ..., n, we evaluate the determinant whose elements are the boldface quantities
and divide the determinant's value by the difference (xj − x) − (x1 − x).
Example 5.1: Find s(2.12) using the following table by iterative linear interpolation:

x      2.0      2.1      2.2      2.3
s(x)   0.7909   0.7875   0.7796   0.7673
Solution: Here, x = 2.12. The following table gives the successive iterative linear
interpolation results. The details of the calculations are shown below the table.

xj     s(xj)     p0j       p01j      p012j     xj − x
2.0    0.7909                                  −0.12
2.1    0.7875    0.78682                       −0.02
2.2    0.7796    0.78412   0.78628             0.08
2.3    0.7673    0.78146   0.78628   0.78628   0.18

p01 = (1/(2.1 − 2.0)) det[0.7909, −0.12; 0.7875, −0.02] = 0.78682
p02 = (1/(2.2 − 2.0)) det[0.7909, −0.12; 0.7796, 0.08] = 0.78412
p03 = (1/(2.3 − 2.0)) det[0.7909, −0.12; 0.7673, 0.18] = 0.78146
p012 = (1/(2.2 − 2.1)) det[0.78682, −0.02; 0.78412, 0.08] = 0.78628
p013 = (1/(2.3 − 2.1)) det[0.78682, −0.02; 0.78146, 0.18] = 0.78628
p0123 = (1/(2.3 − 2.2)) det[0.78628, 0.08; 0.78628, 0.18] = 0.78628

The boldfaced results in the table give the value of the interpolation at x = 2.12.
The result 0.78682 is the value obtained by linear interpolation. The result 0.78628 is
obtained by quadratic as well as by cubic interpolation. We conclude that there is no
improvement of the third degree polynomial over the second degree one.
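The iteration in Example 5.1 can be automated. Below is a sketch (an Aitken-style loop of my own design, following the 2×2 determinant rule of Equations (5.2)-(5.4)) that reproduces 0.78628 for s(2.12):

```python
def iterated_linear(xs, fs, x):
    """Iterated (Aitken-style) linear interpolation.

    After pass k, slot j holds p_{01..k,j}(x); slot k itself stays fixed
    as the current best value built on x_0..x_k."""
    p = list(fs)
    for k in range(len(xs) - 1):
        for j in range(k + 1, len(xs)):
            # 2x2 determinant rule: combine p[k] and p[j]
            p[j] = (p[k] * (xs[j] - x) - p[j] * (xs[k] - x)) / (xs[j] - xs[k])
    return p[-1]

xs = [2.0, 2.1, 2.2, 2.3]
fs = [0.7909, 0.7875, 0.7796, 0.7673]
print(round(iterated_linear(xs, fs, 2.12), 5))   # 0.78628
```

Because the update at pass k only reads p[k] (finalized on the previous pass) and the old p[j], the column can be overwritten in place.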
Notes: 1. Unlike Lagrange's method, it is not necessary to fix in advance the degree
of the interpolating polynomial to be used.
2. The approximation by a higher degree interpolating polynomial may not
always lead to a better result. In fact, it may even be worse in some cases.

Consider the function f (x) = 4^x.
We form the finite difference table with values for x = 0 to 4.

x    f(x)    Δf(x)    Δ²f(x)    Δ³f(x)    Δ⁴f(x)
0      1
               3
1      4                9
              12                  27
2     16               36                   81
              48                 108
3     64              144
             192
4    256

Newton's forward difference interpolating polynomial, taking x0 = 0 and h = 1
(so that u = x), is

f (x) ≈ 1 + 3u + (u(u − 1)/2!) · 9 + (u(u − 1)(u − 2)/3!) · 27 + (u(u − 1)(u − 2)(u − 3)/4!) · 81

Now consider the values obtained at x = 0.5 by taking successively higher and higher
degree polynomials:

degree 1:  1 + (0.5)(3) = 2.5
degree 2:  2.5 + ((0.5)(−0.5)/2)(9) = 2.5 − 1.125 = 1.375
degree 3:  1.375 + ((0.5)(−0.5)(−1.5)/6)(27) = 1.375 + 1.6875 = 3.0625
degree 4:  3.0625 + ((0.5)(−0.5)(−1.5)(−2.5)/24)(81) = 3.0625 − 3.1641 ≈ −0.1016

We note that the actual value 4^0.5 = 2 is not obtained by interpolation, and the
results for higher degree interpolating polynomials become worse.
Note: Lagrange’s interpolation formula and iterative linear interpolation can easily
be implemented for computations by a digital computer.
Example 5.2: Determine the interpolating polynomial for the following table of
data:

x   1   2   3   4
y   1   1   1   5

Solution: The data is equally spaced. We thus form the finite difference table.

x    y    Δy    Δ²y    Δ³y
1    1
           0
2    1           0
           0            4
3    1           4
           4
4    5

Since the first and second order differences are not constant, the interpolating
polynomial is of degree three. Using Newton's forward difference interpolation
with x0 = 1, h = 1 and u = x − 1, we get

y = 1 + (u(u − 1)(u − 2)/3!) · 4 = 1 + (2/3)(x − 1)(x − 2)(x − 3)
  = (2x³ − 12x² + 22x − 9)/3

Example 5.3: Compute the value of f (7.5) by using suitable interpolation on the
following table of data.

x       3    4     5     6     7     8
f(x)   28   65   126   217   344   513

Solution: The data is equally spaced. Thus for computing f (7.5), we use Newton's
backward difference interpolation. For this, we first form the finite difference table
as shown below.

x    f(x)    Δf(x)    Δ²f(x)    Δ³f(x)
3     28
              37
4     65               24
              61                   6
5    126               30
              91                   6
6    217               36
             127                   6
7    344               42
             169
8    513

The differences of order three are constant and hence we use Newton's backward
difference interpolating polynomial of degree three,

f (x) = yn + v ∇yn + (v(v + 1)/2!) ∇²yn + (v(v + 1)(v + 2)/3!) ∇³yn,

where v = (x − xn)/h, for x = 7.5, xn = 8:

∴ v = (7.5 − 8)/1 = −0.5

f (7.5) = 513 + (−0.5) × 169 + ((−0.5)(0.5)/2) × 42 + ((−0.5)(0.5)(1.5)/6) × 6
        = 513 − 84.5 − 5.25 − 0.375
        = 422.875
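As a cross-check, here is a minimal backward difference interpolation routine (my own sketch, not from the text) applied to this table:

```python
def newton_backward(xs, ys, x):
    """Newton's backward difference interpolation on equally spaced xs:
    y_n + v*Dy_n + v(v+1)/2! * D2y_n + ..., with v = (x - x_n)/h."""
    h = xs[1] - xs[0]
    v = (x - xs[-1]) / h
    diffs = list(ys)          # trailing entries give the backward differences
    total, coeff = 0.0, 1.0   # coeff runs through v(v+1)...(v+k-1)/k!
    for k in range(len(ys)):
        total += coeff * diffs[-1]
        coeff *= (v + k) / (k + 1)
        diffs = [b - a for a, b in zip(diffs, diffs[1:])]
        if not diffs:
            break
    return total

xs = [3, 4, 5, 6, 7, 8]
ys = [28, 65, 126, 217, 344, 513]    # in fact f(x) = x**3 + 1 here
print(newton_backward(xs, ys, 7.5))  # 422.875
```

Since the tabulated data happens to satisfy f(x) = x³ + 1, the interpolated value agrees exactly with 7.5³ + 1.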
Example 5.4: Determine the interpolating polynomial for the following data:

x       2    4    6    8   10
f(x)    5   10   17   29   50

Solution: The data is equally spaced. We construct Newton's forward
difference interpolating polynomial. The finite difference table is,

x    f(x)    Δf(x)    Δ²f(x)    Δ³f(x)    Δ⁴f(x)
2      5
              5
4     10               2
              7                    3
6     17               5                    1
             12                    4
8     29               9
             21
10    50

Here, x0 = 2, u = (x − x0)/h = (x − 2)/2.
The interpolating polynomial is,

f (x) = f (x0) + u Δf (x0) + (u(u − 1)/2!) Δ²f (x0) + ...

     = 5 + u · 5 + (u(u − 1)/2!) · 2 + (u(u − 1)(u − 2)/3!) · 3
       + (u(u − 1)(u − 2)(u − 3)/4!) · 1,  with u = (x − 2)/2

     = (1/384)(x⁴ + 4x³ − 52x² + 1040x)
Example 5.5: Find the interpolating polynomial which takes the following values:
y(0) = 1, y(0.1) = 0.9975, y(0.2) = 0.9900, y(0.3) = 0.9800. Hence, compute y(0.05).
Solution: The data values of x are equally spaced; we form the finite difference
table (differences in units of 10⁻⁴),

x      y        Δy      Δ²y     Δ³y
0.0    1.0000
                −25
0.1    0.9975           −50
                −75              25
0.2    0.9900           −25
               −100
0.3    0.9800

Here, h = 0.1. Choosing x0 = 0.0, we have s = x/0.1 = 10x. Newton's forward
difference interpolation formula is,

y = y0 + s Δy0 + (s(s − 1)/2!) Δ²y0 + (s(s − 1)(s − 2)/3!) Δ³y0

  = 1 − 0.0025 s − 0.0025 s(s − 1) + (0.0025/6) s(s − 1)(s − 2)

For x = 0.05, s = 0.5:

y(0.05) = 1 − 0.00125 + 0.000625 + 0.00015625 ≈ 0.9995
Example 5.6: Compute f (0.23) and f (0.29) by using suitable interpolation formulas
with the table of data given below.

x      0.20     0.22     0.24     0.26     0.28     0.30
f(x)   1.6596   1.6698   1.6804   1.6912   1.7024   1.7139

Solution: The data being equally spaced, we use Newton's forward difference
interpolation for computing f (0.23), and for computing f (0.29) we use Newton's
backward difference interpolation. We first form the finite difference table
(differences in units of 10⁻⁴),

x      f(x)     Δf(x)    Δ²f(x)
0.20   1.6596
                102
0.22   1.6698              4
                106
0.24   1.6804              2
                108
0.26   1.6912              4
                112
0.28   1.7024              3
                115
0.30   1.7139

We observe that differences of order higher than two would be irregular. Hence,
we use a second degree interpolating polynomial. For computing f (0.23), we take
x0 = 0.22, so that u = (x − x0)/h = (0.23 − 0.22)/0.02 = 0.5.

Using Newton's forward difference interpolation, we compute

f (0.23) = 1.6698 + 0.5 × 0.0106 + ((0.5)(0.5 − 1.0)/2) × 0.0002
         = 1.6698 + 0.0053 − 0.000025
         = 1.675075 ≈ 1.6751

Again, for computing f (0.29), we take xn = 0.30, so that
v = (x − xn)/h = (0.29 − 0.30)/0.02 = −0.5.

Using Newton's backward difference interpolation, we evaluate

f (0.29) = 1.7139 + (−0.5) × 0.0115 + ((−0.5)(−0.5 + 1.0)/2) × 0.0003
         = 1.7139 − 0.00575 − 0.00004
         = 1.70811 ≈ 1.7081
Example 5.7: Compute the values of eˣ at x = 0.02 and at x = 0.38 using suitable
interpolation formulas on the table of data given below.

x     0.0      0.1      0.2      0.3      0.4
eˣ    1.0000   1.1052   1.2214   1.3499   1.4918

Solution: The data is equally spaced. We use Newton's forward difference
interpolation formula for computing eˣ at x = 0.02, and Newton's backward
difference interpolation formula for computing eˣ at x = 0.38. We first form the
finite difference table (differences in units of 10⁻⁴).

x     y = eˣ    Δy      Δ²y     Δ³y     Δ⁴y
0.0   1.0000
               1052
0.1   1.1052            110
               1162              13
0.2   1.2214            123             −2
               1285              11
0.3   1.3499            134
               1419
0.4   1.4918

For computing e^0.02, we take x0 = 0,

∴ u = (x − x0)/h = (0.02 − 0.0)/0.1 = 0.2

By Newton's forward difference interpolation formula, we have

e^0.02 = 1.0 + 0.2 × 0.1052 + (0.2(0.2 − 1)/2) × 0.0110 + (0.2(0.2 − 1)(0.2 − 2)/6) × 0.0013
         + (0.2(0.2 − 1)(0.2 − 2)(0.2 − 3)/24) × (−0.0002)
       = 1.0 + 0.02104 − 0.00088 + 0.00006 + 0.00001
       = 1.02023 ≈ 1.0202

For computing e^0.38, we take xn = 0.4. Thus, v = (0.38 − 0.4)/0.1 = −0.2.

By Newton's backward difference interpolation formula, we have

e^0.38 = 1.4918 + (−0.2) × 0.1419 + ((−0.2)(−0.2 + 1)/2) × 0.0134
         + ((−0.2)(−0.2 + 1)(−0.2 + 2)/6) × 0.0011
         + ((−0.2)(−0.2 + 1)(−0.2 + 2)(−0.2 + 3)/24) × (−0.0002)
       = 1.4918 − 0.02838 − 0.00107 − 0.00005 + 0.00001
       = 1.46231 ≈ 1.4623

5.3 OTHER DIFFERENCE OPERATORS

We consider the finite differences of equally spaced tabular data for developing
numerical methods. Let a function y = f (x) have a set of values y0, y1, y2, ...,
corresponding to points x0, x1, x2, ..., where x1 = x0 + h, x2 = x0 + 2h, ..., are equally
spaced with spacing h. We define different types of finite differences, such as forward
differences, backward differences and central differences, and express them in
terms of operators.

The forward difference of a function f (x) is defined by the operator Δ, called
the forward difference operator, given by,

Δf (x) = f (x + h) − f (x)    (5.5)

At a tabulated point xi, we have

Δf (xi) = f (xi + h) − f (xi)    (5.6)

We also denote Δf (xi) by Δyi, given by

Δyi = yi+1 − yi    (5.7)

We also define an operator E, called the shift operator, which is given by,

E f (x) = f (x + h)    (5.8)

∴ Δf (x) = Ef (x) − f (x) = (E − 1) f (x)

Thus, Δ ≡ E − 1 is an operator relation.    (5.9)

While Equation (5.5) defines the first order forward difference, we can define the
second order forward difference by,

Δ²f (x) = Δ(Δf (x)) = Δf (x + h) − Δf (x)

∴ Δ²f (x) = f (x + 2h) − 2 f (x + h) + f (x)    (5.10)

Shift Operator
The shift operator is denoted by E and is defined by E f (x) = f (x + h). Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E2f (x) = Ef (x + h) = f (x + 2h).
E2yk = E(Eyk) = E(yk + 1) = yk + 2
In general, Emf (x) = f (x + mh)
Emyk = yk+m
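These operator definitions translate directly into higher-order functions. A small sketch (illustrative names of my own) checking Δ ≡ E − 1 and E²f(x) = f(x + 2h) on a sample function:

```python
def shift(f, h):
    """Shift operator E: (Ef)(x) = f(x + h)."""
    return lambda x: f(x + h)

def forward_delta(f, h):
    """Forward difference operator: (Df)(x) = f(x + h) - f(x)."""
    return lambda x: f(x + h) - f(x)

f = lambda x: x ** 3
h, x0 = 0.1, 2.0

Ef = shift(f, h)
# Df(x) agrees with (E - 1)f(x) = Ef(x) - f(x):
print(forward_delta(f, h)(x0), Ef(x0) - f(x0))
# Applying E twice shifts twice: E^2 f(x) = f(x + 2h)
E2f = shift(Ef, h)
print(abs(E2f(x0) - f(x0 + 2 * h)) < 1e-9)   # True
```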

Relation between forward difference operator and shift operator

From the definition of the forward difference operator, we have

Δf (x) = f (x + h) − f (x) = Ef (x) − f (x) = (E − 1) f (x)

This leads to the operator relation,

Δ ≡ E − 1, or E ≡ 1 + Δ    (5.11)

Similarly, for the second order forward difference, we have

Δ²f (x) = f (x + 2h) − 2 f (x + h) + f (x) = (E² − 2E + 1) f (x) = (E − 1)² f (x)

This gives the operator relation Δ² ≡ (E − 1)².

Finally, we have Δᵐ ≡ (E − 1)ᵐ    (5.12)


Relation between the backward difference operator and the shift operator

From the definition of the backward difference operator, we have

∇f (x) = f (x) − f (x − h) = f (x) − E⁻¹f (x) = (1 − E⁻¹) f (x)

This leads to the operator relation, ∇ ≡ 1 − E⁻¹    (5.13)

Similarly, the second order backward difference is defined by,

∇²f (x) = ∇f (x) − ∇f (x − h)
        = f (x) − f (x − h) − [f (x − h) − f (x − 2h)]
        = f (x) − 2 f (x − h) + f (x − 2h)
        = f (x) − 2E⁻¹f (x) + E⁻²f (x)
        = (1 − 2E⁻¹ + E⁻²) f (x)
        = (1 − E⁻¹)² f (x)

This gives the operator relation ∇² ≡ (1 − E⁻¹)², and in general,

∇ᵐ ≡ (1 − E⁻¹)ᵐ    (5.14)
Relations between the operators E, D and Δ
We have by Taylor's theorem,
f (x + h) = f (x) + h f ′(x) + (h²/2!) f ″(x) + ...
Thus, E f (x) = f (x + h) = (1 + hD + h²D²/2! + ...) f (x), where D denotes the
differential operator d/dx.
Or, (1 + Δ) f (x) = (1 + hD + h²D²/2! + ...) f (x)
               = e^{hD} f (x)
Thus, E ≡ 1 + Δ ≡ e^{hD} (5.15)
Also, hD ≡ log (1 + Δ)
Or, hD ≡ Δ − Δ²/2 + Δ³/3 − Δ⁴/4 + ...
∴ D ≡ (1/h) (Δ − Δ²/2 + Δ³/3 − Δ⁴/4 + ...)
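The truncated series for D gives a finite-difference estimate of f ′(x) that can be verified numerically. A sketch (the helper function is our own, assuming equally spaced points):

```python
import math

# D ≈ (1/h)(Δ − Δ²/2 + Δ³/3 − Δ⁴/4 + ...) applied to f at the point x
def derivative_from_differences(f, x, h, terms=4):
    ys = [f(x + i * h) for i in range(terms + 1)]
    result, cur = 0.0, ys
    for k in range(terms):
        cur = [cur[i + 1] - cur[i] for i in range(len(cur) - 1)]  # next Δ row
        result += (-1) ** k * cur[0] / (k + 1)                    # ±Δ^(k+1)f(x)/(k+1)
    return result / h

approx = derivative_from_differences(math.sin, 0.5, 0.01)
print(abs(approx - math.cos(0.5)) < 1e-6)  # True: agrees with the exact derivative
```

With four terms of the series the truncation error is of order h⁴, so even a modest spacing gives several correct digits.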

Central Difference Operator
The central difference operator, denoted by δ, is defined by,
δf (x) = f (x + h/2) − f (x − h/2)
Thus, δf (x) = E^{1/2} f (x) − E^{−1/2} f (x)
Giving the operator relation, δ ≡ E^{1/2} − E^{−1/2}, or δ ≡ E^{−1/2}(E − 1) ≡ ΔE^{−1/2}
Also, δ²f (x) = f (x + h) − 2 f (x) + f (x − h)
i.e., δ² ≡ E − 2 + E⁻¹
Further, δᵐ ≡ (E^{1/2} − E^{−1/2})ᵐ (5.16)
Even though the central difference operator uses fractional arguments, it is still
widely used. It is related to the averaging operator μ, defined by,
μf (x) = ½ [f (x + h/2) + f (x − h/2)], i.e., μ ≡ ½ (E^{1/2} + E^{−1/2}) (5.17)
Squaring, μ² ≡ ¼ (E + 2 + E⁻¹)
∴ μ² ≡ 1 + δ²/4 (5.18)
It may be noted that,
μδ ≡ ½ (E^{1/2} + E^{−1/2})(E^{1/2} − E^{−1/2}) ≡ ½ (E − E⁻¹) ≡ ½ (Δ + ∇) (5.19)
Further, δ ≡ ΔE^{−1/2} ≡ ∇E^{1/2}.
Example 5.8: Prove the following operator relations:
(i) Δ ≡ ∇E (ii) (1 + Δ)(1 − ∇) ≡ 1
Solution:
(i) Since Δf (x) = f (x + h) − f (x) = E f (x) − f (x), Δ ≡ E − 1 (1)
And since ∇f (x) = f (x) − f (x − h) = (1 − E⁻¹) f (x), ∇ ≡ 1 − E⁻¹ (2)
Thus, ∇ ≡ (E − 1)/E, or ∇E ≡ E − 1 ≡ Δ
Hence proved.
(ii) From Equation (1), we have E ≡ Δ + 1 (3)
and from Equation (2) we get E⁻¹ ≡ 1 − ∇ (4)
Combining Equations (3) and (4), we get (1 + Δ)(1 − ∇) ≡ 1.
Example 5.9: If fi is the value of f (x) at xi where xi = x0 + ih, for i = 1, 2, ..., prove
that,
fi = Eⁱ f0 = Σ_{j=0}^{i} C(i, j) Δʲ f0, where C(i, j) denotes the binomial coefficient.
Solution: We can write E f (x) = f (x + h)
Using the Taylor series expansion, we have
E f (x) = f (x) + h f ′(x) + (h²/2!) f ″(x) + ... = e^{hD} f (x)
∴ E ≡ 1 + Δ ≡ e^{hD}
Hence, Eⁱ ≡ e^{ihD} ≡ (1 + Δ)ⁱ
Now, fi = f (xi) = f (x0 + ih) = Eⁱ f (x0) = (1 + Δ)ⁱ f0 = Σ_{j=0}^{i} C(i, j) Δʲ f0
Hence proved.
Example 5.10: Compute the following differences:
(i) Δⁿ eˣ (ii) Δⁿ xⁿ
Solution:
(i) We have, Δeˣ = e^{x+h} − eˣ = eˣ(e^h − 1)
Again, Δ²eˣ = Δ(Δeˣ) = (e^h − 1) Δeˣ = (e^h − 1)² eˣ
Thus by induction, Δⁿ eˣ = (e^h − 1)ⁿ eˣ.
(ii) We have,
Δ(xⁿ) = (x + h)ⁿ − xⁿ
      = n h x^{n−1} + [n(n − 1)/2!] h² x^{n−2} + ... + hⁿ
Thus, Δ(xⁿ) is a polynomial of degree (n − 1) with leading term n h x^{n−1}.
Also, Δ(hⁿ) = 0, since hⁿ is a constant. Hence, we can say that Δ²(xⁿ) is a polynomial of degree
(n − 2) with leading term n(n − 1) h² x^{n−2}.
Proceeding n times, we get Δⁿ(xⁿ) = n! hⁿ.
Example 5.11: Prove that,
(i) Δ{f (x)/g (x)} = [g (x) Δf (x) − f (x) Δg (x)] / [g (x) g (x + h)]
(ii) Δ{log f (x)} = log{1 + Δf (x)/f (x)}
Solution:
(i) We have,
Δ{f (x)/g (x)} = f (x + h)/g (x + h) − f (x)/g (x)
= [f (x + h) g (x) − f (x) g (x + h)] / [g (x + h) g (x)]
= [f (x + h) g (x) − f (x) g (x) + f (x) g (x) − f (x) g (x + h)] / [g (x + h) g (x)]
= [g (x){f (x + h) − f (x)} − f (x){g (x + h) − g (x)}] / [g (x) g (x + h)]
= [g (x) Δf (x) − f (x) Δg (x)] / [g (x) g (x + h)]
(ii) We have,
Δ{log f (x)} = log{f (x + h)} − log{f (x)}
= log [f (x + h)/f (x)] = log [{f (x + h) − f (x) + f (x)}/f (x)]
= log {Δf (x)/f (x) + 1}

Check Your Progress


1. Define interpolating polynomials using finite differences.
2. Explain the finite difference operators.
3. Elaborate on the shift operator.
4. Analyse the relation between the forward difference operator and the shift operator.
5. Illustrate the relation between the backward difference operator and the shift
operator.
6. Explain the central difference operator.

5.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. In this method, we successively generate interpolating polynomials, of any
degree, by iteratively using linear interpolating functions.
Let p01(x) denote the linear interpolating polynomial for the tabulated values
at x0 and x1. Thus, we can write as,
p01(x) = [(x1 − x) f0 − (x0 − x) f1] / (x1 − x0)

2. Let a function y = f (x) have a set of values y0, y1, y2,..., corresponding to
points x0, x1, x2,..., where x1 = x0 + h, x2 = x0 + 2h,..., are equally spaced with
spacing h. We define different types of finite differences such as forward
differences, backward differences and central differences, and express them
in terms of operators.
3. The shift operator is denoted by E and is defined by E f (x) = f (x + h).
Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E²f (x) = E f (x + h) = f (x + 2h).
E²yk = E(Eyk) = E(yk+1) = yk+2
In general, Eᵐf (x) = f (x + mh) and
Eᵐyk = yk+m

4. From Δf (x) = (E − 1) f (x), we get the operator relation Δ ≡ E − 1, and for the
second order, Δ² ≡ (E − 1)².
Finally, we have Δᵐ ≡ (E − 1)ᵐ.
5. The central difference operator, denoted by δ, is defined by,
δf (x) = f (x + h/2) − f (x − h/2)
Thus, δ ≡ E^{1/2} − E^{−1/2}.

5.5 SUMMARY
• Let p01(x) denote the linear interpolating polynomial for the tabulated values
at x0 and x1. Thus, we can write,
p01(x) = [(x1 − x) f0 − (x0 − x) f1] / (x1 − x0)

• Let a function y = f (x) have a set of values y0, y1, y2,..., corresponding to
points x0, x1, x2,..., where x1 = x0 + h, x2 = x0 + 2h,..., are equally spaced with
spacing h. We define different types of finite differences such as forward
differences, backward differences and central differences, and express them
in terms of operators.
• The shift operator is denoted by E and is defined by E f (x) = f (x + h).
Thus,
Eyk = yk+1
Higher order shift operators can be defined by, E²f (x) = E f (x + h) = f (x + 2h).
E²yk = E(Eyk) = E(yk+1) = yk+2
In general, Eᵐf (x) = f (x + mh) and
Eᵐyk = yk+m
• The central difference operator, denoted by δ, is defined by,
δf (x) = f (x + h/2) − f (x − h/2)
5.6 KEY WORDS

• Shift operator: The shift operator is denoted by E and is defined by
E f (x) = f (x + h). Thus, Eyk = yk+1.
• Central difference operator: The central difference operator is denoted by
δ and is defined by δf (x) = f (x + h/2) − f (x − h/2).
5.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain the interpolating polynomials using finite difference.
2. Define the shift operator.
3. Analyse the relation between forward difference operator and shift operator.
4. Elaborate on the relation between the backward difference operator and the
shift operator.
5. State the central difference operator.

Long-Answer Questions Interpolating
Polynomials
and Operators
1. Discuss briefly the interpolating polynomials using finite difference.
2. Explain the relation between forward difference operator and shift operator.
3. Analyse the relation between the backward difference operator and the shift
operator.
4. Explain the relation between the operators E, D and Δ.
5. Define the central difference operator.

5.8 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical
Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

UNIT 6 LAGRANGE AND NEWTON
INTERPOLATIONS
Structure
6.0 Introduction
6.1 Objectives
6.2 Lagrange Interpolations
6.3 Newton Interpolations
6.4 Applications of Lagrange and Newton Interpolations
6.5 Answers to Check Your Progress Questions
6.6 Summary
6.7 Key Words
6.8 Self Assessment Questions and Exercises
6.9 Further Readings

6.0 INTRODUCTION

The Lagrange form of the interpolation polynomial shows the linear character of
polynomial interpolation and the uniqueness of the interpolation polynomial.
Therefore, it is preferred in proofs and theoretical arguments. The Lagrange basis
polynomials can be used in numerical integration to derive the Newton–Cotes
formulas. In numerical analysis, Lagrange polynomials are used for polynomial
interpolation. For a given set of points (xj, yj) with no two xj values equal, the
Lagrange polynomial is the polynomial of lowest degree that assumes at each
value xj the corresponding value yj, so that the functions coincide at each point.
A Newton polynomial, named after its inventor Isaac Newton, is an
interpolation polynomial for a given set of data points. The Newton polynomial is
sometimes called Newton’s divided differences interpolation polynomial because
the coefficients of the polynomial are calculated using Newton’s divided differences
method. Newton’s formula is of interest because it is the straightforward and natural
differences-version of Taylor’s polynomial. Taylor’s polynomial tells where a
function will go, based on its y value, and its derivatives (its rate of change, and the
rate of change of its rate of change, etc.) at one particular x value. Newton’s
formula is Taylor’s polynomial based on finite differences instead of instantaneous
rates of change.
In this unit, you will study about the Lagrange interpolations, Newton
interpolations, and applications of Lagrange and Newton interpolations.

6.1 OBJECTIVES

After going through this unit, you will be able to:
• Understand Lagrange interpolation
• Explain Newton interpolation
• Describe the applications of Lagrange and Newton interpolations

6.2 LAGRANGE INTERPOLATIONS

Lagrange's interpolation is useful for unequally spaced tabulated values. Let y = f
(x) be a real valued function defined in an interval (a, b) and let y0, y1,..., yn be the
(n + 1) known values of y at x0, x1,...,xn, respectively. The polynomial p(x)
which interpolates f (x) is of degree less than or equal to n. Thus,
p(xi) = yi for i = 0, 1, 2, ..., n (6.1)
The polynomial is assumed to be of the form,
p(x) = Σ_{i=0}^{n} li(x) yi (6.2)
Where each li(x) is a polynomial of degree ≤ n in x, called a Lagrangian function.
Now, p(x) satisfies Equation (6.1) if each li(x) satisfies,
li(xj) = 1 if j = i, and li(xj) = 0 if j ≠ i (6.3)
Equation (6.3) suggests that li(x) vanishes at the n points x0, x1, ..., xi−1,
xi+1,..., xn. Thus, we can write,
li(x) = ci (x − x0) (x − x1) ... (x − xi−1) (x − xi+1)...(x − xn)
Where ci is a constant determined by li(xi) = 1,
Thus, li(x) = [(x − x0)(x − x1)...(x − xi−1)(x − xi+1)...(x − xn)] /
[(xi − x0)(xi − x1)...(xi − xi−1)(xi − xi+1)...(xi − xn)] for i = 0, 1, 2, ..., n
(6.4)
Equations (6.2) and (6.4) together give Lagrange’s interpolating polynomial.
Algorithm: To compute f (x) by Lagrange’s interpolation.
Step 1: Read n [n being the number of values]
Step 2: Read values of xi, fi for i = 1, 2,..., n.
Step 3: Set sum = 0, i = 1
Step 4: Read x [x being the interpolating point]
Step 5: Set j = 1, product = 1
Step 6: Check if j ≠ i, then product = product × (x − xj)/(xi − xj), else go to Step 7
Step 7: Set j = j + 1
Step 8: Check if j > n, then go to Step 9, else go to Step 6
Step 9: Compute sum = sum + product × fi
Step 10: Set i = i + 1
Step 11: Check if i > n, then go to Step 12
else go to Step 5
Step 12: Write x, sum
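The steps above translate directly into Python (using 0-based indexing instead of the 1-based steps of the algorithm); the result can be checked against Example 6.1 below:

```python
def lagrange_interpolate(xs, fs, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, fs) at x."""
    total = 0.0
    for i in range(len(xs)):
        product = 1.0
        for j in range(len(xs)):
            if j != i:
                product *= (x - xs[j]) / (xs[i] - xs[j])
        total += product * fs[i]
    return total

# Data of Example 6.1: f(0.4) ≈ 0.65
print(round(lagrange_interpolate([0.3, 0.5, 0.6], [0.61, 0.69, 0.72], 0.4), 2))  # 0.65
```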
Example 6.1: Compute f (0.4) for the table below by Lagrange's interpolation.
x      0.3   0.5   0.6
f (x)  0.61  0.69  0.72
Solution: Lagrange's interpolation formula gives,
f (0.4) = [(0.4 − 0.5)(0.4 − 0.6)]/[(0.3 − 0.5)(0.3 − 0.6)] × 0.61
        + [(0.4 − 0.3)(0.4 − 0.6)]/[(0.5 − 0.3)(0.5 − 0.6)] × 0.69
        + [(0.4 − 0.3)(0.4 − 0.5)]/[(0.6 − 0.3)(0.6 − 0.5)] × 0.72
      = 0.203 + 0.69 − 0.24 = 0.653 ≈ 0.65
Thus, f (0.4) = 0.65.


Example 6.2: Using Lagrange's formula, find the value of f (0) from the table
given below.
x      −1   −2   2    4
f (x)  −1   −9   11   69
Solution: Using Lagrange's interpolation formula, we find
f (0) = [(0 + 2)(0 − 2)(0 − 4)]/[(−1 + 2)(−1 − 2)(−1 − 4)] × (−1)
      + [(0 + 1)(0 − 2)(0 − 4)]/[(−2 + 1)(−2 − 2)(−2 − 4)] × (−9)
      + [(0 + 1)(0 + 2)(0 − 4)]/[(2 + 1)(2 + 2)(2 − 4)] × 11
      + [(0 + 1)(0 + 2)(0 − 2)]/[(4 + 1)(4 + 2)(4 − 2)] × 69
      = −16/15 + 9/3 + 11/3 − 69/15
      = 20/3 − 85/15 = 20/3 − 17/3 = 1
Example 6.3: Determine the interpolating polynomial of degree three for the table
given below.
x       −1   0   1    2
f (x)    1   1   1   −3
Solution: We have Lagrange's third degree interpolating polynomial as,
f (x) = Σ_{i=0}^{3} li(x) f (xi)
Where
l0(x) = [(x − 0)(x − 1)(x − 2)] / [(−1 − 0)(−1 − 1)(−1 − 2)] = −(1/6) x(x − 1)(x − 2)
l1(x) = [(x + 1)(x − 1)(x − 2)] / [(0 + 1)(0 − 1)(0 − 2)] = (1/2) (x + 1)(x − 1)(x − 2)
l2(x) = [(x + 1)(x − 0)(x − 2)] / [(1 + 1)(1 − 0)(1 − 2)] = −(1/2) (x + 1) x (x − 2)
l3(x) = [(x + 1)(x − 0)(x − 1)] / [(2 + 1)(2 − 0)(2 − 1)] = (1/6) (x + 1) x (x − 1)
f (x) = −(1/6) x(x − 1)(x − 2) × 1 + (1/2) (x + 1)(x − 1)(x − 2) × 1
      − (1/2) (x + 1) x (x − 2) × 1 + (1/6) (x + 1) x (x − 1) × (−3)
      = −(1/6)(4x³ − 4x − 6)
      = (1/3)(−2x³ + 2x + 3)
Example 6.4: Evaluate the values of f (2) and f (6.3) using Lagrange's interpolation
formula for the table of values given below.
x      1.2   2.5    4    5.1    6    6.5
f (x)  6.84  14.25  27   39.21  51   58.25
Solution: It is not advisable to use a higher degree interpolating polynomial. For
evaluation of f (2) we take a second degree polynomial using the values of f (x) at
the points x0 = 1.2, x1 = 2.5 and x2 = 4.
Thus,
f (2) = l0(2) × 6.84 + l1(2) × 14.25 + l2(2) × 27
Where
l0(2) = [(2 − 2.5)(2 − 4)] / [(1.2 − 2.5)(1.2 − 4)] = 0.275
l1(2) = [(2 − 1.2)(2 − 4)] / [(2.5 − 1.2)(2.5 − 4)] = 0.821
l2(2) = [(2 − 1.2)(2 − 2.5)] / [(4 − 1.2)(4 − 2.5)] = −0.095
∴ f (2) = 0.275 × 6.84 + 0.821 × 14.25 − 0.095 × 27 = 11.015 ≈ 11.02
For evaluation of f (6.3), we consider the values of f (x) at x0 = 5.1, x1 = 6.0,
x2 = 6.5.
Thus, f (6.3) = l0(6.3) × 39.21 + l1(6.3) × 51 + l2(6.3) × 58.25
Where
l0(6.3) = [(6.3 − 6.0)(6.3 − 6.5)] / [(5.1 − 6.0)(5.1 − 6.5)] = −0.048
l1(6.3) = [(6.3 − 5.1)(6.3 − 6.5)] / [(6.0 − 5.1)(6.0 − 6.5)] = 0.533
l2(6.3) = [(6.3 − 5.1)(6.3 − 6.0)] / [(6.5 − 5.1)(6.5 − 6.0)] = 0.514
∴ f (6.3) = −0.048 × 39.21 + 0.533 × 51 + 0.514 × 58.25 = 55.241 ≈ 55.24
Since the computed result cannot be more accurate than the data, the final
result is rounded off to the same number of decimals as the data. In some cases,
a higher degree interpolating polynomial may not lead to better results.

6.3 NEWTON INTERPOLATIONS

Newton’s formula is used for constructing the interpolation polynomial. It makes


use of divided differences. This result was first discovered by the Scottish
mathematician James Gregory (1638–1675), a contemporary of Newton.
Gregory and Newton did extensive work on methods of interpolation but
now the formula is referred to as Newton’s interpolation formula. Newton derived
a general forward and backward difference interpolation formulae.
Newton–Gregory Forward Interpolation Formula
Let y = f(x) be a function of x which assumes the values f(a), f(a + h), f(a + 2h),
..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a + nh of the
independent variable x. Let f(x) be a polynomial of nth degree.
Let f(x) = A0 + A1(x − a) + A2(x − a)(x − a − h)
+ A3(x − a)(x − a − h)(x − a − 2h) + ...
+ An(x − a) ... (x − a − (n − 1)h) (6.5)
Where A0, A1, A2, ..., An are to be determined.
Put x = a, a + h, a + 2h, ..., a + nh in Equation (6.5) successively.
For x = a: f(a) = A0 (6.6)
For x = a + h: f(a + h) = A0 + A1h
⇒ f(a + h) = f(a) + A1h  [By (6.6)]
⇒ A1 = Δf(a)/h (6.7)
For x = a + 2h,
f(a + 2h) = A0 + A1(2h) + A2(2h)(h)
          = f(a) + 2h {Δf(a)/h} + 2h²A2
⇒ 2h²A2 = f(a + 2h) − 2f(a + h) + f(a) = Δ²f(a)
⇒ A2 = Δ²f(a)/(2! h²)
Similarly, A3 = Δ³f(a)/(3! h³), and so on.
Thus, An = Δⁿf(a)/(n! hⁿ).
From Equation (6.5),
f(x) = f(a) + [Δf(a)/h](x − a) + [Δ²f(a)/(2! h²)](x − a)(x − a − h) + ...
+ [Δⁿf(a)/(n! hⁿ)](x − a) ... (x − a − (n − 1)h)
Put x = a + hk, so that k = (x − a)/h; we then have
f(a + hk) = f(a) + hk [Δf(a)/h] + [(hk)(hk − h)/(2! h²)] Δ²f(a) + ...
+ [(hk)(hk − h)(hk − 2h) ... (hk − (n − 1)h)/(n! hⁿ)] Δⁿf(a)
⇒ f(a + hk) = f(a) + k Δf(a) + [k(k − 1)/2!] Δ²f(a) + ...
+ [k(k − 1)(k − 2) ... (k − n + 1)/n!] Δⁿf(a)
This is the required formula.
This formula is particularly useful for interpolating the values of f(x) near the
beginning of the given set of values. Here h is called the interval of differencing, while Δ is
the forward difference operator.
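The formula can be sketched in Python (the function name is ours); it reproduces the result of Example 6.5 below:

```python
def newton_forward(xs, ys, x):
    """Newton–Gregory forward interpolation on equally spaced points xs."""
    h = xs[1] - xs[0]
    k = (x - xs[0]) / h
    # leading forward differences y0, Δy0, Δ²y0, ...
    diffs, cur = [ys[0]], list(ys)
    while len(cur) > 1:
        cur = [cur[i + 1] - cur[i] for i in range(len(cur) - 1)]
        diffs.append(cur[0])
    total, coeff = 0.0, 1.0
    for n, d in enumerate(diffs):
        total += coeff * d                # coeff = k(k−1)...(k−n+1)/n!
        coeff *= (k - n) / (n + 1)
    return total

# Data of Example 6.5 below: cumulative number of students below 50 kg
print(newton_forward([45, 55, 65, 75, 85], [20, 65, 100, 112, 122], 50))  # 41.609375
```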
Example 6.5: From the following table, estimate the number of students who
weigh between 45 and 50 kg.
Weight (in kg):   35–45  45–55  55–65  65–75  75–85
No. of Students:  20     45     35     12     10
Solution: The cumulative frequency table is shown below:
Weight less than (x) (in kg):  45  55  65   75   85
No. of Students (yx):          20  65  100  112  122
Thus, the difference table is shown below:
Difference Table for the Number of Students Who Weigh between 45–50 kg
x     yx     Δy    Δ²y    Δ³y    Δ⁴y
45    20
             45
55    65           −10
             35           −13
65    100          −23            34
             12           21
75    112          −2
             10
85    122
By taking x0 = 45, we shall find the number of students with weight less
than 50.
So, k = (50 − 45)/10 = 0.5
Using Newton's forward interpolation formula, we get
y50 = y45 + k Δy45 + [k(k − 1)/2!] Δ²y45 + ...
∴ y50 = 20 + 0.5 × 45 + [0.5(0.5 − 1)/2!] × (−10)
+ [0.5(0.5 − 1)(0.5 − 2)/3!] × (−13) + [0.5(0.5 − 1)(0.5 − 2)(0.5 − 3)/4!] × 34
= 20 + 22.5 + 1.25 − 0.8125 − 1.32813
= 41.60937 ≈ 42 students.
But the number of students with weight less than 45 is 20.
Hence, the number of students with weight between 45 kg and 50 kg is
42 − 20 = 22.
Example 6.6: Using Newton's forward interpolation formula, obtain a polynomial
in x which takes the following values.
x: 4 6 8 10
y: 1 3 8 16
Hence, evaluate y for x = 5.
Solution: Newton's forward formula is:
y(x) = y0 + k Δy0 + [k(k − 1)/2!] Δ²y0 + [k(k − 1)(k − 2)/3!] Δ³y0 + ...
Where k = (x − x0)/h.
The difference table for Newton's forward interpolation formula is shown below:
x     y     Δy    Δ²y    Δ³y
4     1
            2
6     3           3
            5            0
8     8           3
            8
10    16
Here, x0 = 4, h = 2 and k = (x − 4)/2.
So, y(x) = 1 + [(x − 4)/2] × 2 + {[(x − 4)/2][(x − 4)/2 − 1]/2!} × 3 + 0
= 1 + (x − 4) + [(x − 4)(x − 6)/8] × 3
= 1 + (x − 4)[1 + 3(x − 6)/8]
= 1 + (x − 4)(3x − 10)/8
y(x) = (3x² − 22x + 48)/8 = (3/8)x² − (11/4)x + 6
So, y(5) = (3/8)(25) − (11/4)(5) + 6 = 13/8 = 1.625
Check: with k = 0.5 directly in the formula, y(5) = 1 + 0.5 × 2 + [0.5(0.5 − 1)/2!] × 3
= 1 + 1 − 0.375 = 1.625.
Newton–Gregory Backward Interpolation Formula
Let y = f(x) be a function of x which assumes the values f(a), f(a + h), f(a + 2h),
..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a + nh of the
independent variable x.
Let f(x) be a polynomial of the nth degree.
Let f(x) = A0 + A1(x − a − nh) + A2(x − a − nh)(x − a − (n − 1)h) + ...
+ An(x − a − nh)(x − a − (n − 1)h) ... (x − a − h) (6.8)
Where A0, A1, A2, A3, ..., An are to be determined.
Put x = a + nh, a + (n − 1)h, ..., a in Equation (6.8), respectively.
Put x = a + nh; then f(a + nh) = A0 (6.9)
Put x = a + (n − 1)h; then,
f(a + (n − 1)h) = A0 − hA1 = f(a + nh) − hA1  [By Equation (6.9)]
⇒ A1 = ∇f(a + nh)/h (6.10)
Put x = a + (n − 2)h; then,
f(a + (n − 2)h) = A0 − 2hA1 + (−2h)(−h)A2
⇒ 2! h²A2 = f(a + (n − 2)h) − f(a + nh) + 2∇f(a + nh)
           = ∇²f(a + nh)
⇒ A2 = ∇²f(a + nh)/(2! h²) (6.11)
Proceeding, we get,
An = ∇ⁿf(a + nh)/(n! hⁿ) (6.12)
Substituting these values in Equation (6.8), we get,
f(x) = f(a + nh) + [∇f(a + nh)/h](x − a − nh) + ...
+ [∇ⁿf(a + nh)/(n! hⁿ)](x − a − nh)(x − a − (n − 1)h) ... (x − a − h) (6.13)
Put x = a + nh + kh; then,
x − a − nh = kh
And x − a − (n − 1)h = (k + 1)h
⋮
x − a − h = (k + n − 1)h
∴ Equation (6.13) becomes,
f(x) = f(a + nh) + k ∇f(a + nh) + [k(k + 1)/2!] ∇²f(a + nh)
+ ... + [k(k + 1) ... (k + n − 1)/n!] ∇ⁿf(a + nh)
Which is the required formula.
Or, f(xn + kh) = f(xn) + k ∇f(xn) + [k(k + 1)/2!] ∇²f(xn)
+ ... + [k(k + 1)(k + 2) ... (k + n − 1)/n!] ∇ⁿf(xn)
This formula is useful when the value of f(x) is required near the end of the
table. Here xn = x0 + nh and a = x0, so that f(a + nh) = f(xn).
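A Python sketch of this formula (the function name and the optional `max_order` cut-off are ours) reproduces the tan 22° result of Example 6.7 below, whose table stops at the fifth differences:

```python
def newton_backward(xs, ys, x, max_order=None):
    """Newton–Gregory backward interpolation on equally spaced points xs."""
    h = xs[1] - xs[0]
    k = (x - xs[-1]) / h
    # trailing backward differences yn, ∇yn, ∇²yn, ...
    diffs, cur = [ys[-1]], list(ys)
    while len(cur) > 1:
        cur = [cur[i + 1] - cur[i] for i in range(len(cur) - 1)]
        diffs.append(cur[-1])
    if max_order is not None:
        diffs = diffs[:max_order + 1]
    total, coeff = 0.0, 1.0
    for n, d in enumerate(diffs):
        total += coeff * d                # coeff = k(k+1)...(k+n−1)/n!
        coeff *= (k + n) / (n + 1)
    return total

# Data of Example 6.7 below: y = 10⁴ · tan θ, stopping at fifth differences as in its table
xs = [0, 4, 8, 12, 16, 20, 24]
ys = [0, 699, 1405, 2126, 2867, 3640, 4452]
print(round(newton_backward(xs, ys, 22, max_order=5) / 1e4, 4))  # 0.4041
```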
Example 6.7: Using Newton's backward interpolation formula, obtain the value
of tan 22°, given that:
θ°:     0   4       8       12      16      20      24
tan θ:  0   0.0699  0.1405  0.2126  0.2867  0.3640  0.4452
Solution: The difference table is shown below:
Difference Table for Obtaining the Value of tan 22°
x     10⁴f(x) = y   Δy    Δ²y   Δ³y   Δ⁴y   Δ⁵y
0     0
                    699
4     699                 7
                    706         8
8     1405                15          −3
                    721         5           10
12    2126                20          7
                    741         12          −12
16    2867                32          −5
                    773         7
20    3640                39
                    812
24    4452
Here, xn = 24, x = 22
And k = (x − xn)/h = −0.5.
Thus, by Newton's backward interpolation formula, we have
y(x) = yn + k ∇yn + [k(k + 1)/2!] ∇²yn + [k(k + 1)(k + 2)/3!] ∇³yn + ...
y(22) = 4452 + (−0.5) × 812
+ [(−0.5)(0.5)/2!] × 39 + [(−0.5)(0.5)(1.5)/3!] × 7
+ [(−0.5)(0.5)(1.5)(2.5)/4!] × (−5) + [(−0.5)(0.5)(1.5)(2.5)(3.5)/5!] × (−12)
= 4452 − 406 − 4.875 − 0.4375 + 0.1953 + 0.32813
y(22) = 4041.21, i.e., tan 22° = 4041.21 × 10⁻⁴
Thus, tan 22° = 0.4041.

6.4 APPLICATIONS OF LAGRANGE AND NEWTON INTERPOLATIONS

Applications of Lagrange Interpolation


In numerical analysis, Lagrange polynomials are used for polynomial interpolation.
Interpolation is often used to handle equations that are difficult to treat directly,
especially high degree ones, where calculations would otherwise have to be repeated
many times to reach the final result. The idea of interpolation is to replace a known
or unknown function by a simpler one, given the values of the function at certain
values of the independent variable. In many domains of science, measurements are made. If these
measurements are drawn in a graph, they will be shown as simple unconnected
points. Such data are called discrete data points. Management and analysis of
the data is easier if it can be described using a continuous function, or a line connecting
the discrete data point together. For this, the line between two measurements or
data points has to be created. This process of connecting the data points together
with a line is termed interpolation. There are different ways to interpolate the data;
each method has its benefits and drawbacks. The simplest method is drawing a
straight line between two data points, which is not very accurate. Very often,
polynomials are used to create a more accurate representation. In many cases, the
data may be wrong and hence the results of this interpolation are usable either as
a replacement for missing data points or as a tool to evaluate more complicated
data.
Lagrange interpolation tries to find the values between two known points of
data. The primary use of Lagrange interpolation is to help users, be they scientists,
photographers, engineers or mathematicians, determine what data might exist outside
of their collected data. Outside the domain of mathematics, interpolation is frequently
used to scale images and to convert the sampling rate of digital signals.
In the domain of science, a scientist may use a computer to calculate a
complicated function. However, if that function takes a very long time to compute,
it may make the experiments difficult or impossible to run properly. So, interpolation
is used to compute the result.
Lagrange interpolation is also used in the field of imaging in which pixels are
used. When an image is made larger, the existing pixels cannot simply be stretched.
Instead, new pixels have to be created. In order to ‘Presume or Assume’ how
these pixels should look like, the software is used that enlarges the pixels in the
image using the interpolation method. The software first spreads out the existing
pixels into the new image size, leaving many gaps and spaces. Then, it examines
the existing data (the original pixels) and then creates a function that describes the
missing data (the new pixels) to create a new data set (the enlarged image).
Sometimes, the output of the program may be poor, i.e., the resulting image
may be blurred, hazy or distorted in places. These are called Lagrange interpolation
errors.
The Lagrange method of interpolation can also be used in neural network
learning, by embedding the interpolation calculation in back propagation. This
approach has been reported to decrease learning time while improving classification
results. Moreover, the Lagrange interpolation polynomial has been used to
process image pixels and to remove noise in an image. This interpolation
gives effective processing in removing noise and error in the image layers.
One of the advantages of this method is that it reduces the noise to a minimum by
replacing the noisy pixels (detected by a Lagrange Back Propagation Neural Network,
or LBPNN) with values calculated by Lagrange interpolation, at high
processing speed.
The Lagrange polynomial can also be computed in finite fields. This has
applications in cryptography, such as in Shamir’s Secret Sharing scheme. Shamir’s
Secret Sharing is an algorithm in cryptography created by Adi Shamir. It is a form
of secret sharing, where a secret is divided into parts, giving each participant its
own unique part. To reconstruct the original secret, a minimum number of parts is
required. In the threshold scheme this number is less than the total number of
parts. Otherwise all participants are needed to reconstruct the original secret.
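A toy sketch of this reconstruction step in Python: the secret is the polynomial's value at x = 0, recovered by Lagrange interpolation over GF(p). The prime, polynomial and shares below are illustrative only; a real implementation would use a large prime and random coefficients.

```python
P = 2087  # a small illustrative prime, far too small for real use

def recover_secret(shares, p=P):
    """Lagrange interpolation at x = 0 over GF(p): secret = Σ yi · li(0) mod p."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p  # modular inverse of den
    return secret

# Three shares of the secret 1234 under f(x) = 1234 + 166x + 94x² (mod 2087)
shares = [(1, 1494), (2, 1942), (3, 491)]
print(recover_secret(shares))  # 1234
```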
Applications of Newton Interpolation
In the mathematical field of numerical analysis, a Newton polynomial, named after
its inventor Isaac Newton, is an interpolation polynomial for a given set of data
points. The Newton polynomial is sometimes called Newton’s divided differences
interpolation polynomial because the coefficients of the polynomial are calculated
using Newton’s divided differences method.
Because of the way divided differences are defined, new data points can be added to
the data set to create a new interpolation polynomial without recalculating the old
coefficients. And when a data point changes, we usually do not have to recalculate
all coefficients. Furthermore, if the xi are distributed equidistantly, the calculation
of the divided differences becomes significantly easier. Therefore, the divided
difference formulas are usually preferred over the Lagrange form for practical
purposes.
Interpolation is the technique of estimating the value of a function for any
intermediate value of the independent variable, while the process of computing the
value of the function outside the given range is called extrapolation.
The Newton polynomials generate the Newton series. These are in turn a
special case of the general difference polynomials which allow the representation
of analytic functions through generalized difference equations. For these reasons,
polynomials are often used for approximating continuous functions, the various
interpolating polynomials used include the concepts of forward, backward and
central differences.
The Newton polynomial's primary use is to furnish some mathematical tools
that are used in developing methods in the areas of approximation theory, numerical
integration, and the numerical solution of differential equations.
Fundamentally, the Newton polynomials are specifically considered for
enhanced accuracy for a given polynomial degree as compared to other polynomial
interpolations.

6.5 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Lagrange's interpolation is useful for unequally spaced tabulated values.
Let y = f (x) be a real valued function defined in an interval (a, b) and let y0,
y1,..., yn be the (n + 1) known values of y at x0, x1,...,xn, respectively. The
polynomial p(x) which interpolates f (x) is of degree less than or equal to
n, so that p(xi) = yi for i = 0, 1, ..., n.
2. Newton’s formula is used for constructing the interpolation polynomial. It


makes use of divided differences. This result was first discovered by the
Scottish mathematician James Gregory (1638–1675), a contemporary of
Newton.
3. Let y = f(x) be a function of x which assumes the values f(a), f(a + h), f(a
+ 2h), ..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a +
nh of the independent variable x. Let f(x) be a polynomial of nth degree.
4. Let y = f(x) be a function of x which assumes the value f(a), f(a + h), f(a +
2h), ..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a + nh
of the independent variable x.
5. Lagrange interpolation tries to find the values between two known points of
data. The primary use of Lagrange interpolation is to help users, be they
scientists, photographers, engineers or mathematicians, determine what data
might exist outside of their collected data. Outside the domain of mathematics,
interpolation is frequently used to scale images and to convert the sampling
rate of digital signals.
6. The Newton polynomial's primary use is to furnish some mathematical tools
that are used in developing methods in the areas of approximation theory,
numerical integration, and the numerical solution of differential equations.
Fundamentally, the Newton polynomials are specifically considered for
enhanced accuracy for a given polynomial degree as compared to other
polynomial interpolations.

6.6 SUMMARY

• Lagrange's interpolation is useful for unequally spaced tabulated values.
Let y = f (x) be a real valued function defined in an interval (a, b) and let y0,
y1,..., yn be the (n + 1) known values of y at x0, x1,...,xn, respectively. The
polynomial p(x) which interpolates f (x) is of degree less than or equal to
n, so that p(xi) = yi for i = 0, 1, ..., n.
• Newton's formula is used for constructing the interpolation polynomial. It
makes use of divided differences. This result was first discovered by the
Scottish mathematician James Gregory (1638–1675), a contemporary of
Newton.
• Let y = f(x) be a function of x which assumes the values f(a), f(a + h), f(a
+ 2h), ..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a +
nh of the independent variable x. Let f(x) be a polynomial of nth degree.
• Let y = f(x) be a function of x which assumes the values f(a), f(a + h), f(a +
2h), ..., f(a + nh) for (n + 1) equidistant values a, a + h, a + 2h, ..., a + nh
of the independent variable x.
• Lagrange interpolation tries to find the values between two known points of
data. The primary use of Lagrange interpolation is to help users, be they
scientists, photographers, engineers or mathematicians, determine what data
might exist outside of their collected data. Outside the domain of mathematics,
interpolation is frequently used to scale images and to convert the sampling
rate of digital signals.
• Lagrange interpolation is also used in the field of imaging in which pixels are
used. When an image is made larger, the existing pixels cannot simply be
stretched. Instead, new pixels have to be created. In order to presume or
assume how these pixels should look, software is used that enlarges
the pixels in the image using the interpolation method.

• The Newton polynomial's primary use is to furnish some mathematical tools
that are used in developing methods in the areas of approximation theory,
numerical integration, and the numerical solution of differential equations.
• Fundamentally, the Newton polynomials are specifically considered for
enhanced accuracy for a given polynomial degree as compared to other
polynomial interpolations.

6.7 KEY WORDS

• Lagrange interpolation: Lagrange's interpolation is useful for unequally
spaced tabulated values.
• Newton interpolation: Newton's formula is used for constructing the
interpolation polynomial. It makes use of divided differences.

6.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain the Lagrange interpolations.
2. Define the Newton interpolations.
3. State Newton-Gregory forward interpolation formula.
4. Elaborate on the Newton-Gregory backward interpolation formula.
5. Analyse the applications of Lagrange interpolations.
6. Explain the applications of Newton interpolations.
Long-Answer Questions
1. Briefly discuss the Lagrange interpolation.
2. Explain the Newton interpolations.
3. Define Newton-Gregory forward interpolation formula.
4. Discuss the Newton-Gregory backward interpolation formula.
5. Write down the applications of Lagrange and Newton interpolations.

6.9 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.

Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan. 2020. Calculus of Finite Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert D. and Jerry B. Keiper. 1993. Elementary Numerical Computing with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.


UNIT 7 DIVIDED DIFFERENCES


Structure
7.0 Introduction
7.1 Objectives
7.2 Divided Differences and Their Properties
7.2.1 Newton’s Divided Difference Interpolation Formula
7.3 Application of Newton's General Interpolating Formula
7.4 Answers to Check Your Progress Questions
7.5 Summary
7.6 Key Words
7.7 Self Assessment Questions and Exercises
7.8 Further Readings

7.0 INTRODUCTION

In mathematics, divided differences is an algorithm, historically used for computing


tables of logarithms and trigonometric functions. Charles Babbage’s difference
engine, an early mechanical calculator, was designed to use this algorithm in its
operation.
Divided differences is a recursive division process. The method can be
used to calculate the coefficients in the interpolation polynomial in the Newton
form. When the data points are equidistantly distributed we get the special case
called forward differences. They are easier to calculate than the more general
divided differences.
Note that the 'divided' portion of the forward divided difference must still be computed in order to recover the forward divided difference from the forward difference.
The Newton polynomial is sometimes called Newton’s divided differences
interpolation polynomial because the coefficients of the polynomial are calculated
using Newton’s divided differences method.
In this unit, you will study about the divided differences and their properties,
and the applications of Newton’s general interpolating formula.

7.1 OBJECTIVES

After going through this unit, you will be able to:


• Explain the divided differences and their properties
• Understand the applications of Newton's general interpolating formula

7.2 DIVIDED DIFFERENCES AND THEIR PROPERTIES

We now look at the differences of various orders of a polynomial of degree n, given by
y = f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + ... + a₁x + a₀
The first order forward difference is defined by,
Δf(x) = f(x + h) − f(x)
and is given by,
Δf(x) = aₙ[(x + h)ⁿ − xⁿ] + aₙ₋₁[(x + h)ⁿ⁻¹ − xⁿ⁻¹] + ... + a₁h = bₙ₋₁xⁿ⁻¹ + bₙ₋₂xⁿ⁻² + ... + b₀
where the coefficients of the various powers of x are collected separately.
Thus, the first order difference of a polynomial of degree n is a polynomial of degree n − 1, with bₙ₋₁ = aₙ · nh.
Proceeding as above, we can state that the second order forward difference of a polynomial of degree n is a polynomial of degree n − 2, with the coefficient of xⁿ⁻² being n(n − 1)h²aₙ.
Continuing successively, we finally get Δⁿy = aₙ · n! · hⁿ, a constant.
We can conclude that for a polynomial of degree n, all differences of order higher than n are zero.
It may be noted that the converse of the above result is partially true: if the tabulated values of a function are found to be such that the differences of the kth order are approximately constant, then the highest degree of the interpolating polynomial that should be used is k. Since the tabulated data may have round-off errors, the actual function may not be a polynomial.
Example 7.1: Compute the horizontal difference table for the following data and hence, write down the values of ∇f(4), ∇²f(3) and ∇³f(5).

x:      1    2    3    4    5
f(x):   3   18   83  258  627

Solution: The horizontal difference table for the given data is as follows:

x    f(x)   ∇f(x)   ∇²f(x)   ∇³f(x)   ∇⁴f(x)
1      3      —       —        —        —
2     18     15       —        —        —
3     83     65      50        —        —
4    258    175     110       60        —
5    627    369     194       84       24
From the table we read the required values and get the following result:
∇f(4) = 175, ∇²f(3) = 50, ∇³f(5) = 84
Example 7.2: Form the difference table of f(x) on the basis of the following table and show that the third differences are constant. Hence, conclude about the degree of the interpolating polynomial.

x:      0   1    2    3    4
f(x):   5   6   13   32   69

Solution: The difference table is given below:

x    f(x)   Δf(x)   Δ²f(x)   Δ³f(x)
0     5
             1
1     6               6
             7                  6
2    13              12
            19                  6
3    32              18
            37
4    69

It is clear from the above table that the third differences are constant and hence, the degree of the interpolating polynomial is three.
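The construction above is mechanical, so it is easy to verify in code. Below is a minimal Python sketch (the helper name `difference_table` is ours, not the book's) that rebuilds the columns for the data of Example 7.2 and confirms that the third differences are the constant 6:

```python
def difference_table(y):
    """Return the forward-difference columns [y, Δy, Δ²y, ...]."""
    columns = [list(y)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        # Each entry of the next column is the difference of consecutive entries.
        columns.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    return columns

cols = difference_table([5, 6, 13, 32, 69])   # data of Example 7.2
print(cols[1])  # first differences:  [1, 7, 19, 37]
print(cols[3])  # third differences:  [6, 6] -- constant, so a cubic fits
```

Because the third column is constant (and the fourth would be zero), an interpolating polynomial of degree three suffices, exactly as concluded above.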

7.2.1 Newton's Divided Difference Interpolation Formula

Let y0, y1, ..., yn be the values of y = f(x) corresponding to the arguments x0, x1, ..., xn. Then, from the definition of divided differences, you have,
[x, x0] = (y − y0)/(x − x0)
So that, y = y0 + (x − x0)[x, x0] ...(7.1)
Again, [x, x0, x1] = ([x, x0] − [x0, x1])/(x − x1)
which gives, [x, x0] = [x0, x1] + (x − x1)[x, x0, x1] ...(7.2)
From Equations (7.1) and (7.2),
y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x, x0, x1] ...(7.3)
Also, [x, x0, x1, x2] = ([x, x0, x1] − [x0, x1, x2])/(x − x2)
which gives, [x, x0, x1] = [x0, x1, x2] + (x − x2)[x, x0, x1, x2] ...(7.4)
From Equations (7.3) and (7.4),
y = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x0, x1, x2] + (x − x0)(x − x1)(x − x2)[x, x0, x1, x2]
Proceeding in this manner, you get,
y = f(x) = y0 + (x − x0)[x0, x1] + (x − x0)(x − x1)[x0, x1, x2] + (x − x0)(x − x1)(x − x2)[x0, x1, x2, x3] + ... + (x − x0)(x − x1)(x − x2) ... (x − xₙ₋₁)[x0, x1, x2, x3, ..., xn] + (x − x0)(x − x1)(x − x2) ... (x − xn)[x, x0, x1, x2, ..., xn]
This is called Newton's General Interpolation formula with divided differences, the last term being the remainder term after (n + 1) terms.
Example 7.3: Referring to the following table, find the value of f(x) at the point x = 4:
x:      1.5    3     6
f(x): −0.25    2    20
Solution: The divided difference table is shown below:

x      f(x)    First divided    Second divided
               differences      differences
1.5   −0.25
                    1.5
3       2                            1
                    6
6      20

Applying Newton's divided difference formula,
f(x) = −0.25 + (x − 1.5)(1.5) + (x − 1.5)(x − 3)(1)
Putting x = 4, you get
f(4) = 6.
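The two steps of Example 7.3, tabulating the divided differences and then evaluating the Newton form, can be sketched in Python as follows (the function names are ours; the evaluation uses a standard Horner-style nesting of the Newton form):

```python
def divided_differences(xs, ys):
    """Top row of the divided-difference table: f[x0], f[x0,x1], ..."""
    table = list(ys)
    coeffs = [table[0]]
    for order in range(1, len(xs)):
        table = [(table[i + 1] - table[i]) / (xs[i + order] - xs[i])
                 for i in range(len(table) - 1)]
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    """Evaluate the Newton form by nested multiplication."""
    result = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        result = result * (x - xs[k]) + coeffs[k]
    return result

xs, ys = [1.5, 3.0, 6.0], [-0.25, 2.0, 20.0]
c = divided_differences(xs, ys)
print(c)                      # [-0.25, 1.5, 1.0], as in the table above
print(newton_eval(xs, c, 4))  # 6.0
```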

Example 7.4: Using Newton's divided difference formula prove that,
f(x) = f(0) + xΔf(−1) + ((x + 1)x/2!)Δ²f(−1) + ((x + 1)x(x − 1)/3!)Δ³f(−2) + ...
Solution: Taking the arguments 0, −1, 1, −2, ..., Newton's divided difference formula is,
f(x) = f(0) + x[0, −1] + x(x + 1)[0, −1, 1] + x(x + 1)(x − 1)[0, −1, 1, −2] + ...
where [0, −1], [0, −1, 1], ... denote the divided differences of f on the indicated arguments. Since divided differences are symmetric in their arguments and the arguments here are unit-spaced,
[0, −1] = (f(0) − f(−1))/(0 − (−1)) = Δf(−1)
[0, −1, 1] = [−1, 0, 1] = ([0, 1] − [−1, 0])/(1 − (−1)) = (Δf(0) − Δf(−1))/2 = Δ²f(−1)/2!
[0, −1, 1, −2] = [−2, −1, 0, 1] = ([−1, 0, 1] − [−2, −1, 0])/(1 − (−2)) = (Δ²f(−1)/2 − Δ²f(−2)/2)/3 = Δ³f(−2)/3!
and so on. Substituting these values, you get,
f(x) = f(0) + xΔf(−1) + ((x + 1)x/2!)Δ²f(−1) + ((x + 1)x(x − 1)/3!)Δ³f(−2) + ...

7.3 APPLICATION OF NEWTON'S GENERAL INTERPOLATING FORMULA

Newton's forward difference interpolation formula is a polynomial of degree less than or equal to n. It is used to find the value of the tabulated function at a non-tabular point. Consider a function y = f(x) whose values y0, y1, ..., yn at a set of equidistant points x0, x1, ..., xn are known, with xi = x0 + ih.
Let Pn(x) be the interpolating polynomial, such that
Pn(xi) = yi, for i = 0, 1, 2, ..., n (7.5)
We assume the polynomial to be of the form,
Pn(x) = a0 + a1(x − x0) + a2(x − x0)(x − x1) + ... + an(x − x0)(x − x1) ... (x − xₙ₋₁) (7.6)
The coefficients ai in Equation (7.6) are determined by satisfying the conditions in Equation (7.5) successively for i = 0, 1, 2, ..., n.
Thus, we get
a0 = y0
y1 = a0 + a1(x1 − x0), or a1 = (y1 − y0)/h = Δy0/h
y2 = a0 + a1(x2 − x0) + a2(x2 − x0)(x2 − x1), or
a2 = (y2 − 2y1 + y0)/(2h²) = Δ²y0/(2! h²)
Proceeding further, we get successively,
a3 = Δ³y0/(3! h³), a4 = Δ⁴y0/(4! h⁴), ..., an = Δⁿy0/(n! hⁿ)
Using these values of the coefficients, we get Newton's forward difference interpolation in the form,
Pn(x) = y0 + ((x − x0)/h)Δy0 + ((x − x0)(x − x1)/(2! h²))Δ²y0 + ... + ((x − x0)(x − x1) ... (x − xₙ₋₁)/(n! hⁿ))Δⁿy0
This formula can be expressed in a more convenient form by taking u = (x − x0)/h. We have,
(x − x1)/h = (x − (x0 + h))/h = u − 1
(x − x2)/h = (x − (x0 + 2h))/h = u − 2
...
(x − xₙ₋₁)/h = (x − {x0 + (n − 1)h})/h = u − n + 1
Thus, the interpolating polynomial reduces to:
Pn(x0 + uh) = y0 + uΔy0 + (u(u − 1)/2!)Δ²y0 + ... + (u(u − 1)(u − 2) ... (u − n + 1)/n!)Δⁿy0 (7.7)
This formula is generally used for interpolating near the beginning of the table. For a given x, we choose a tabulated point x0 close to x; for better results, we should have
|u| = |x − x0|/h ≤ 0.5
The degree of the interpolating polynomial to be used is less than or equal to n, and is determined by the order at which the differences become nearly constant; differences of higher orders turn irregular due to the propagated round-off error in the data.
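Equation (7.7) translates directly into code. The following minimal Python sketch (our own function name, assuming equally spaced ordinates) accumulates the terms uΔy0, u(u − 1)/2! Δ²y0, and so on:

```python
from math import factorial

def newton_forward(x0, h, ys, x):
    """Newton's forward difference interpolation at x (ys equally spaced)."""
    u = (x - x0) / h
    diffs = list(ys)
    result = diffs[0]
    u_product = 1.0
    for k in range(1, len(ys)):
        # Advance to the next difference column; diffs[0] is then Δ^k y0.
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        u_product *= (u - (k - 1))          # u(u-1)...(u-k+1)
        result += u_product * diffs[0] / factorial(k)
    return result

# The data of Example 7.2 come from f(x) = x^3 + 5 at x = 0, 1, 2, 3, 4,
# so the interpolation is exact for any x:
ys = [5, 6, 13, 32, 69]
print(newton_forward(0, 1, ys, 1.5))   # 1.5^3 + 5 = 8.375
```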

Newton's Backward Difference Interpolation Formula

Newton's forward difference interpolation formula cannot be used for interpolating at a point near the end of a table, since we do not have the required forward differences at such points. However, we can use a separate formula known as Newton's backward difference interpolation formula. Let a table of values {xi, yi}, for i = 0, 1, 2, ..., n, for equally spaced values of xi be given. Thus, xi = x0 + ih, yi = f(xi), for i = 0, 1, 2, ..., n are known.
We construct an interpolating polynomial of degree n of the form,
Pn(x) = b0 + b1(x − xn) + b2(x − xn)(x − xₙ₋₁) + ... + bn(x − xn)(x − xₙ₋₁) ... (x − x1) (7.8)
We have to determine the coefficients b0, b1, ..., bn by satisfying the relations,
Pn(xi) = yi, for i = n, n − 1, ..., 0 (7.9)
Thus, b0 = yn (7.10)
Similarly, yₙ₋₁ = b0 + b1(xₙ₋₁ − xn),
or, b1 = (yn − yₙ₋₁)/h = ∇yn/h (7.11)
Again, yₙ₋₂ = b0 + b1(xₙ₋₂ − xn) + b2(xₙ₋₂ − xn)(xₙ₋₂ − xₙ₋₁)
or, yₙ₋₂ = yn − ((yn − yₙ₋₁)/h)(2h) + b2(−2h)(−h)
∴ b2 = (yn − 2yₙ₋₁ + yₙ₋₂)/(2h²) = ∇²yn/(2! h²) (7.12)
By induction, or by proceeding as mentioned earlier, we have
b3 = ∇³yn/(3! h³), b4 = ∇⁴yn/(4! h⁴), ..., bn = ∇ⁿyn/(n! hⁿ) (7.13)
Substituting the expressions for the bi in Equation (7.8), we get
Pn(x) = yn + ((x − xn)/h)∇yn + ((x − xn)(x − xₙ₋₁)/(2! h²))∇²yn + ... + ((x − xn)(x − xₙ₋₁) ... (x − x1)/(n! hⁿ))∇ⁿyn (7.14)
This formula is known as Newton's backward difference interpolation formula. It uses the backward differences along the backward diagonal in the difference table.
Introducing a new variable v = (x − xn)/h, we have
(x − xₙ₋₁)/h = (x − (xn − h))/h = v + 1, and so on.
Thus, the interpolating polynomial in Equation (7.14) may be rewritten as,
Pn(xn + vh) = yn + v∇yn + (v(v + 1)/2!)∇²yn + ... + (v(v + 1)(v + 2) ... (v + n − 1)/n!)∇ⁿyn (7.15)
This formula is generally used for interpolation at a point near the end of a table.
The error in the interpolation formula may be written as,
En(x) = (v(v + 1)(v + 2) ... (v + n)/(n + 1)!) hⁿ⁺¹ f⁽ⁿ⁺¹⁾(ξ)
where ξ lies in the interval containing x0, xn and x.
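Newton's backward difference interpolation can be sketched the same way (the function name is ours): the last entry of the kth forward-difference column equals ∇ᵏyn, so the terms v(v + 1)...(v + k − 1)/k! ∇ᵏyn accumulate as:

```python
def newton_backward(xn, h, ys, x):
    """Newton's backward difference interpolation at x (xn is the last node)."""
    v = (x - xn) / h
    diffs = list(ys)
    result = diffs[-1]
    coeff = 1.0
    fact = 1
    for k in range(1, len(ys)):
        diffs = [diffs[i + 1] - diffs[i] for i in range(len(diffs) - 1)]
        coeff *= (v + (k - 1))              # v(v+1)...(v+k-1)
        fact *= k
        result += coeff * diffs[-1] / fact  # last entry is ∇^k y_n
    return result

ys = [5, 6, 13, 32, 69]                     # data of Example 7.2: f(x) = x^3 + 5
print(newton_backward(4, 1, ys, 3.5))       # 3.5^3 + 5 = 47.875
```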

Check Your Progress


1. Define the first order forward difference.
2. Explain the Newton's divided difference interpolation formula.
3. State the application of Newton's forward difference interpolation formula.
4. Analyse the uses of Newton's backward difference interpolation formula.

7.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The first order forward difference is defined by Δf(x) = f(x + h) − f(x).
2. Let y0, y1, ..., yn be the values of y = f(x) corresponding to the arguments x0, x1, ..., xn; then, from the definition of divided differences, [x, x0] = (y − y0)/(x − x0).
3. Newton’s forward difference interpolation formula is a polynomial of degree
less than or equal to n. This is used to find the value of the tabulated function
at a non-tabular point. Consider a function y = f (x) whose values y0, y1,..., yn
at a set of equidistant points x0 , x1 ,..., xn are known.
4. Newton's forward difference interpolation formula cannot be used for interpolating at a point near the end of a table, since we do not have the required forward differences for interpolating at such points. However, we can use a separate formula known as Newton's backward difference interpolation formula. Let a table of values {xi, yi}, for i = 0, 1, 2, ..., n, for equally spaced values of xi be given. Thus, xi = x0 + ih, yi = f(xi), for i = 0, 1, 2, ..., n are known.

7.5 SUMMARY

• The differences of various orders of a polynomial of degree n are obtained from y = f(x) = aₙxⁿ + aₙ₋₁xⁿ⁻¹ + aₙ₋₂xⁿ⁻² + ... + a₁x + a₀.
• The first order forward difference is defined by Δf(x) = f(x + h) − f(x).
• The first order difference of a polynomial of degree n is a polynomial of degree n − 1, with bₙ₋₁ = aₙ · nh.
• The second order forward difference of a polynomial of degree n is a polynomial of degree n − 2, with the coefficient of xⁿ⁻² being n(n − 1)h²aₙ.
• Let y0, y1, ..., yn be the values of y = f(x) corresponding to the arguments x0, x1, ..., xn; then, from the definition of divided differences, [x, x0] = (y − y0)/(x − x0).
• Newton's forward difference interpolation formula is a polynomial of degree less than or equal to n. It is used to find the value of the tabulated function at a non-tabular point, for a function y = f(x) whose values y0, y1, ..., yn at a set of equidistant points x0, x1, ..., xn are known.
• Newton's forward difference interpolation formula cannot be used for interpolating at a point near the end of a table, since we do not have the required forward differences for interpolating at such points.

7.6 KEY WORDS

• First order difference: The first order difference of a polynomial of degree n is a polynomial of degree n − 1, with bₙ₋₁ = aₙ · nh.
• Second order forward difference: The second order forward difference of a polynomial of degree n is a polynomial of degree n − 2, with the coefficient of xⁿ⁻² being n(n − 1)h²aₙ.

7.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain the divided differences and their properties.
2. Define the Newton’s divided difference interpolation formula.
3. Elaborate on the Newton’s forward difference interpolation formula.
4. Illustrate the Newton’s backward difference interpolation formula.
Long-Answer Questions
1. Discuss briefly the divided differences and their properties.
2. Analyse the applications of Newton’s general interpolation formula.
3. Explain the Newton’s backward difference interpolation formula.

7.8 FURTHER READINGS


Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical
Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.
UNIT 8 CENTRAL DIFFERENCES INTERPOLATIONS FORMULAE
Structure
8.0 Introduction
8.1 Objectives
8.2 Central Differences Interpolations Formulae
8.3 Gauss’s Formula
8.4 Stirling’s Formula
8.5 Bessel’s Formula
8.6 Lagrange’s Interpolation Formula
8.7 Everett’s Formula
8.8 Hermite's Formula
8.9 Answers to Check Your Progress Questions
8.10 Summary
8.11 Key Words
8.12 Self Assessment Questions and Exercises
8.13 Further Readings

8.0 INTRODUCTION

The central difference is an average of the forward and backward differences for equally spaced values of data. The truncation error of the central difference approximation is of order O(n²), where n is the step size.
Gauss’s formula alternately adds new points at the left and right ends, thereby
keeping the set of points centered near the same place (near the evaluated point).
When so doing, it uses terms from Newton’s formula, with data points and x
values renamed in keeping with one’s choice of what data point is designated as
the x0 data point.
Stirling’s formula remains centered about a particular data point, for use
when the evaluated point is nearer to a data point than to a middle of two data
points.
Bessel’s formula remains centered about a particular middle between two
data points, for use when the evaluated point is nearer to a middle than to a data
point. Bessel and Stirling achieve that by sometimes using the average of two
differences, and sometimes using the average of two products of binomials in x,
where Newton’s or Gauss’s would use just one difference or product. Stirling’s
uses an average difference in odd-degree terms (whose difference uses an even
number of data points); Bessel’s uses an average difference in even-degree terms
(whose difference uses an odd number of data points).
The Lagrange formula is at its best when all the interpolation will be done at one x value, with only the data points' y values varying from one problem to another, and when it is known, from past experience, how many terms are needed for sufficient accuracy.
In numerical analysis, Hermite interpolation, named after Charles Hermite,
is a method of interpolating data points as a polynomial function. The generated
Hermite interpolating polynomial is closely related to the Newton polynomial, in
that both are derived from the calculation of divided differences. However, the
Hermite interpolating polynomial may also be computed without using divided
differences.
In this unit, you will study about the central differences interpolations formulae,
Gauss’s formula, Stirling’s formula, Bessel’s formula, Everett’s formula, and
Hermite’s formula.

8.1 OBJECTIVES

After going through this unit, you will be able to:
• Understand the central differences interpolations formulae
• Explain Gauss's formulae
• Elaborate on Stirling's formula
• Interpret Bessel's formula
• Comprehend Everett's formula
• Illustrate Hermite's formula

8.2 CENTRAL DIFFERENCES INTERPOLATIONS FORMULAE

The central difference is an average of the forward and backward differences for equally spaced values of data. The truncation error of the central difference approximation is of order O(n²), where n is the step size.
You shall now study the central difference formulae most suited for interpolation near the middle of a tabulated set.

8.3 GAUSS'S FORMULAE

Newton–Gregory forward difference formula is,
f(a + hk) = f(a) + kΔf(a) + (k(k − 1)/2!)Δ²f(a) + (k(k − 1)(k − 2)/3!)Δ³f(a) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁴f(a) + ... (8.1)
Given a = 0, h = 1, you get
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(0) + (k(k − 1)(k − 2)/3!)Δ³f(0) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁴f(0) + ... (8.2)
Now,
Δ³f(−1) = Δ²f(0) − Δ²f(−1) ⇒ Δ²f(0) = Δ³f(−1) + Δ²f(−1)
Also,
Δ⁴f(−1) = Δ³f(0) − Δ³f(−1) ⇒ Δ³f(0) = Δ⁴f(−1) + Δ³f(−1)
And
Δ⁵f(−1) = Δ⁴f(0) − Δ⁴f(−1) ⇒ Δ⁴f(0) = Δ⁵f(−1) + Δ⁴f(−1)
and so on.
∴ From Equation (8.2),
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!){Δ²f(−1) + Δ³f(−1)} + (k(k − 1)(k − 2)/3!){Δ³f(−1) + Δ⁴f(−1)} + (k(k − 1)(k − 2)(k − 3)/4!){Δ⁴f(−1) + Δ⁵f(−1)} + ...
= f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + (k(k − 1)/2){1 + (k − 2)/3}Δ³f(−1) + (k(k − 1)(k − 2)/6){1 + (k − 3)/4}Δ⁴f(−1) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁵f(−1) + ...
= f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−1) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁵f(−1) + ... (8.3)
But, Δ⁵f(−2) = Δ⁴f(−1) − Δ⁴f(−2)
∴ Δ⁴f(−1) = Δ⁴f(−2) + Δ⁵f(−2)
Then Equation (8.3) becomes,
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!){Δ⁴f(−2) + Δ⁵f(−2)} + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁵f(−1) + ...
Retaining differences up to the fourth order,
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−2) + ...
This is called Gauss's forward difference formula.
Note: This formula is applicable when k lies between 0 and 1/2.
Example 8.1: Find the value of f(41) using Gauss's forward formula from the following data:

x:       30       35       40       45       50
f(x):  3678.2   2995.1   2400.1   1876.2   1416.3

Solution: Gauss's forward formula is,
f(x) = y0 + kΔy0 + (k(k − 1)/2!)Δ²y₋₁ + ((k + 1)k(k − 1)/3!)Δ³y₋₁ + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴y₋₂ + ...
The central difference table is shown below:

x     k     f(x)       Δy       Δ²y     Δ³y     Δ⁴y
30   −2    3678.2
                 −683.1
35   −1    2995.1               88.1
                 −595.0                 −17.0
40    0    2400.1               71.1             9.9
                 −523.9                  −7.1
45    1    1876.2               64.0
                 −459.9
50    2    1416.3

Here h = 5 and x0 = 40, so
k = (41 − 40)/5 = 0.2
Now,
f(41) = 2400.1 + (0.2)(−523.9) + (0.2(0.2 − 1)/2!)(71.1) + ((0.2 + 1)(0.2)(0.2 − 1)/3!)(−7.1) + ((0.2 + 1)(0.2)(0.2 − 1)(0.2 − 2)/4!)(9.9)
= 2400.1 − 104.78 − 5.688 + 0.2272 + 0.14256
= 2290.00176
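As a cross-check on Example 8.1, the following Python sketch (the function name is ours) builds the difference columns and accumulates the Gauss forward terms; the index and coefficient patterns are a direct transcription of the formula above:

```python
from math import factorial

def gauss_forward(ys, m, k):
    """Gauss forward formula about index m; k = (x - x0)/h.

    Term j uses the difference Δ^j y at offset -floor(j/2) from the origin,
    with a product of j descending factors starting at k + floor((j-1)/2).
    """
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    result = cols[0][m]
    for j in range(1, len(cols)):
        idx = m - j // 2                      # Δ^j y_{-floor(j/2)}
        if idx < 0 or idx >= len(cols[j]):
            break
        coeff = 1.0
        for i in range(j):                    # j descending factors
            coeff *= (k + (j - 1) // 2 - i)
        result += coeff * cols[j][idx] / factorial(j)
    return result

ys = [3678.2, 2995.1, 2400.1, 1876.2, 1416.3]  # Example 8.1: h = 5, x0 = 40
print(gauss_forward(ys, m=2, k=0.2))           # ≈ 2290.00176
```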
Gauss's Backward Difference Formula

Newton–Gregory forward difference formula, with a = 0 and h = 1, is
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(0) + (k(k − 1)(k − 2)/3!)Δ³f(0) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁴f(0) + ... (8.5)
Now,
Δf(0) = Δf(−1) + Δ²f(−1)
Δ²f(0) = Δ²f(−1) + Δ³f(−1)
Δ³f(0) = Δ³f(−1) + Δ⁴f(−1)
Δ⁴f(0) = Δ⁴f(−1) + Δ⁵f(−1), and so on.
∴ From Equation (8.5),
f(k) = f(0) + k{Δf(−1) + Δ²f(−1)} + (k(k − 1)/2!){Δ²f(−1) + Δ³f(−1)} + (k(k − 1)(k − 2)/3!){Δ³f(−1) + Δ⁴f(−1)} + (k(k − 1)(k − 2)(k − 3)/4!){Δ⁴f(−1) + Δ⁵f(−1)} + ... (8.6)
= f(0) + kΔf(−1) + k{1 + (k − 1)/2}Δ²f(−1) + (k(k − 1)/2){1 + (k − 2)/3}Δ³f(−1) + (k(k − 1)(k − 2)/6){1 + (k − 3)/4}Δ⁴f(−1) + (k(k − 1)(k − 2)(k − 3)/4!)Δ⁵f(−1) + ...
= f(0) + kΔf(−1) + ((k + 1)k/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−1) + ... (8.7)
Again, Δ³f(−1) = Δ³f(−2) + Δ⁴f(−2) and Δ⁴f(−1) = Δ⁴f(−2) + Δ⁵f(−2), and so on.
∴ Equation (8.7) gives
f(k) = f(0) + kΔf(−1) + ((k + 1)k/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!){Δ³f(−2) + Δ⁴f(−2)} + ((k + 1)k(k − 1)(k − 2)/4!){Δ⁴f(−2) + Δ⁵f(−2)} + ...
Thus, Gauss's backward formula is,
y = y0 + kΔy₋₁ + ((k + 1)k/2!)Δ²y₋₁ + ((k + 1)k(k − 1)/3!)Δ³y₋₂ + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴y₋₂ + ((k + 2)(k + 1)k(k − 1)(k − 2)/5!)Δ⁵y₋₃ + ...
Example 8.2: Apply Gauss's backward formula to compute sin 45° from the following table:

θ°:      20       30       40       50       60       70       80
sin θ: 0.34202  0.502    0.64279  0.76604  0.86603  0.93969  0.98481

Solution: The difference table is shown below:

x     k    y = sin θ     Δy        Δ²y       Δ³y       Δ⁴y      Δ⁵y       Δ⁶y
20   −2    0.34202
                 0.15998
30   −1    0.502        −0.01919
                 0.14079            0.00165
40    0    0.64279      −0.01754            −0.00737
                 0.12325           −0.00572            0.01002
50    1    0.76604      −0.02326             0.00265            −0.01181
                 0.09999           −0.00307           −0.00179
60    2    0.86603      −0.02633             0.00086
                 0.07366           −0.00221
70    3    0.93969      −0.02854
                 0.04512
80    4    0.98481

Here, x0 = 40, x = 45 and k = (45 − 40)/10 = 0.5.
Thus, by Gauss's backward formula, you will have
y(x) = y0 + kΔy₋₁ + ((k + 1)k/2!)Δ²y₋₁ + ((k + 1)k(k − 1)/3!)Δ³y₋₂ + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴y₋₂ + ...
You have,
y(45) = 0.64279 + 0.5 × 0.14079 + ((1.5 × 0.5)/2!) × (−0.01754) + ((1.5 × 0.5 × (−0.5))/3!) × (0.00165) + ((2.5 × 1.5 × 0.5 × (−0.5))/4!) × (−0.00737)
= 0.64279 + 0.070395 − 0.0065775 − 0.000103125 + 0.00028789
= 0.70679
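Gauss's backward formula can be checked numerically in the same spirit; this Python sketch (the function name is ours) rebuilds the difference table of Example 8.2 and accumulates the terms of the formula above:

```python
from math import factorial

def gauss_backward(ys, m, k):
    """Gauss backward formula about index m; k = (x - x0)/h.

    Term j uses Δ^j y at offset -ceil(j/2), with a product of j
    descending factors starting at k + floor(j/2).
    """
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    result = cols[0][m]
    for j in range(1, len(cols)):
        idx = m - (j + 1) // 2               # Δ^j y_{-ceil(j/2)}
        if idx < 0 or idx >= len(cols[j]):
            break
        coeff = 1.0
        for i in range(j):                   # j descending factors
            coeff *= (k + j // 2 - i)
        result += coeff * cols[j][idx] / factorial(j)
    return result

ys = [0.34202, 0.502, 0.64279, 0.76604, 0.86603, 0.93969, 0.98481]
print(round(gauss_backward(ys, m=2, k=0.5), 5))   # ≈ 0.70679 (sin 45° = 0.70711)
```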

8.4 STIRLING'S FORMULA

Gauss's forward and backward formulae are used to derive Stirling's formula.
Gauss's forward formula is,
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−2) + ... (8.8)
Gauss's backward formula is,
f(k) = f(0) + kΔf(−1) + ((k + 1)k/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−2) + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴f(−2) + ... (8.9)
Taking the mean of Equations (8.8) and (8.9),
f(k) = f(0) + k · {Δf(0) + Δf(−1)}/2 + (k²/2!)Δ²f(−1) + (k(k² − 1)/3!) · {Δ³f(−1) + Δ³f(−2)}/2 + (k²(k² − 1)/4!)Δ⁴f(−2) + ... (8.10)
This is called Stirling's formula. It is useful when |k| < 1/2, i.e., −1/2 < k < 1/2. It gives the best estimate when −1/4 < k < 1/4.
Example 8.3: Use Stirling's formula to evaluate f(1.22), given:

x:      1.0    1.1    1.2    1.3    1.4
f(x):  0.841  0.891  0.932  0.963  0.985

Solution: The difference table to evaluate f(1.22) is given below:

x     k     y       Δy      Δ²y      Δ³y      Δ⁴y
1.0  −2   0.841
                0.05
1.1  −1   0.891         −0.009
                0.041            −0.001
1.2   0   0.932         −0.01             0.002
                0.031             0.001
1.3   1   0.963         −0.009
                0.022
1.4   2   0.985

Taking x0 = 1.2 and h = 0.1, k = (x − 1.2)/0.1 = (1.22 − 1.2)/0.1 = 0.2.
Using Stirling's formula,
y(1.22) = 0.932 + (0.2) × (0.031 + 0.041)/2 + ((0.2)²/2!) × (−0.01) + ((0.2)[(0.2)² − 1²]/3!) × (−0.001 + 0.001)/2 + ((0.2)²[(0.2)² − 1²]/4!) × (0.002)
= 0.932 + 0.0072 − 0.0002 + 0 − 0.0000032
= 0.9389968 ≈ 0.9390
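Since Stirling's formula averages the two Gauss formulas, its odd-order terms use the mean of two adjacent differences while its even-order terms use a single one. A Python sketch (the function name is ours) reproducing Example 8.3:

```python
from math import factorial

def stirling(ys, m, k):
    """Stirling interpolation about index m; k = (x - x0)/h."""
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    result = cols[0][m]
    for j in range(1, len(cols)):
        if j % 2 == 1:
            # Odd order: mean of the two central differences, coeff k(k²-1)...(k²-i²)
            lo, hi = m - (j + 1) // 2, m - j // 2
            if lo < 0 or hi >= len(cols[j]):
                break
            d = (cols[j][lo] + cols[j][hi]) / 2
            coeff = k
            for i in range(1, (j + 1) // 2):
                coeff *= (k * k - i * i)
        else:
            # Even order: single difference, coeff k²(k²-1)...(k²-i²)
            idx = m - j // 2
            if idx < 0 or idx >= len(cols[j]):
                break
            d = cols[j][idx]
            coeff = k * k
            for i in range(1, j // 2):
                coeff *= (k * k - i * i)
        result += coeff * d / factorial(j)
    return result

ys = [0.841, 0.891, 0.932, 0.963, 0.985]   # Example 8.3: x = 1.0..1.4, h = 0.1
print(round(stirling(ys, m=2, k=0.2), 4))  # 0.939
```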

8.5 BESSEL'S FORMULA

Gauss's forward formula is,
f(k) = f(0) + kΔf(0) + (k(k − 1)/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−2) + ... (8.11)
Gauss's backward formula is,
f(k) = f(0) + kΔf(−1) + ((k + 1)k/2!)Δ²f(−1) + ((k + 1)k(k − 1)/3!)Δ³f(−2) + ((k + 2)(k + 1)k(k − 1)/4!)Δ⁴f(−2) + ... (8.12)
In Equation (8.12), shift the origin to 1 by replacing k by k − 1 and adding 1 to each argument 0, −1, −2, ...; you get
f(k) = f(1) + (k − 1)Δf(0) + (k(k − 1)/2!)Δ²f(0) + (k(k − 1)(k − 2)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!)Δ⁴f(−1) + ... (8.13)
By taking the mean of Equations (8.11) and (8.13), you get
f(k) = {f(0) + f(1)}/2 + {k + (k − 1)}/2 · Δf(0) + (k(k − 1)/2!) · {Δ²f(−1) + Δ²f(0)}/2 + (k(k − 1)/3!) · {(k + 1) + (k − 2)}/2 · Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!) · {Δ⁴f(−2) + Δ⁴f(−1)}/2 + ...
Finally, you get
f(k) = {f(0) + f(1)}/2 + (k − 1/2)Δf(0) + (k(k − 1)/2!) · {Δ²f(−1) + Δ²f(0)}/2 + (k(k − 1)(k − 1/2)/3!)Δ³f(−1) + ((k + 1)k(k − 1)(k − 2)/4!) · {Δ⁴f(−2) + Δ⁴f(−1)}/2 + ... (8.14)
This is Bessel's formula.
Example 8.4: Using Bessel's formula obtain y26, given that y20 = 2854, y24 = 3162, y28 = 3544 and y32 = 3992.
Solution: With x0 = 24, h = 4 and k = (x − 24)/4, so that k = 0.5 for x = 26, the central difference table is shown below:

x     k     y       Δy     Δ²y    Δ³y
20   −1   2854
                308
24    0   3162           74
                382             −8
28    1   3544           66
                448
32    2   3992

Using Bessel's formula,
y26 = 3162 + 0.5 × 382 + (0.5(0.5 − 1)/2!) × ((74 + 66)/2) + 0 × (−8)
= 3162 + 191 − 8.75
= 3344.25
The third-difference term vanishes because its coefficient k(k − 1)(k − 1/2)/3! is zero at k = 1/2.
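Bessel's formula up to third differences is short enough to transcribe directly; this Python sketch (the function name is ours) reproduces Example 8.4:

```python
def bessel(ys, m, k):
    """Bessel interpolation between indices m and m+1, up to third differences."""
    cols = [list(ys)]
    for _ in range(3):
        prev = cols[-1]
        cols.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    d1 = cols[1][m]                               # Δy0
    d2_mean = (cols[2][m - 1] + cols[2][m]) / 2   # mean of Δ²y at -1 and 0
    d3 = cols[3][m - 1]                           # Δ³y at -1
    return ((ys[m] + ys[m + 1]) / 2
            + (k - 0.5) * d1
            + k * (k - 1) / 2 * d2_mean
            + k * (k - 1) * (k - 0.5) / 6 * d3)

ys = [2854, 3162, 3544, 3992]                     # Example 8.4: y20, y24, y28, y32
print(bessel(ys, m=1, k=0.5))                     # 3344.25
```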

8.6 LAGRANGE'S INTERPOLATION FORMULA

Let f(x0), f(x1), ..., f(xn) be (n + 1) entries of a function y = f(x), where f(x) is assumed to be a polynomial corresponding to the arguments x0, x1, x2, ..., xn.
The polynomial f(x) may be written as,
f(x) = A0(x − x1)(x − x2) ... (x − xn) + A1(x − x0)(x − x2) ... (x − xn) + ... + An(x − x0)(x − x1) ... (x − xₙ₋₁) (8.15)
where A0, A1, ..., An are constants to be determined.
By putting x = x0, x1, ..., xn in Equation (8.15), you will get
f(x0) = A0(x0 − x1)(x0 − x2) ... (x0 − xn)
∴ A0 = f(x0)/[(x0 − x1)(x0 − x2) ... (x0 − xn)] ...(8.16)
f(x1) = A1(x1 − x0)(x1 − x2) ... (x1 − xn)
∴ A1 = f(x1)/[(x1 − x0)(x1 − x2) ... (x1 − xn)] ...(8.17)
Similarly, An = f(xn)/[(xn − x0)(xn − x1) ... (xn − xₙ₋₁)] ...(8.18)
Substituting the values of A0, A1, ..., An in Equation (8.15), you will get,
f(x) = [(x − x1)(x − x2) ... (x − xn)]/[(x0 − x1)(x0 − x2) ... (x0 − xn)] · f(x0)
+ [(x − x0)(x − x2) ... (x − xn)]/[(x1 − x0)(x1 − x2) ... (x1 − xn)] · f(x1)
+ ... + [(x − x0)(x − x1) ... (x − xₙ₋₁)]/[(xn − x0)(xn − x1) ... (xn − xₙ₋₁)] · f(xn) ...(8.19)
This is called Lagrange's Interpolation Formula. In Equation (8.19), dividing both sides by (x − x0)(x − x1) ... (x − xn), Lagrange's formula may also be written as,
f(x)/[(x − x0)(x − x1) ... (x − xn)] = [f(x0)/((x0 − x1)(x0 − x2) ... (x0 − xn))] · 1/(x − x0)
+ [f(x1)/((x1 − x0)(x1 − x2) ... (x1 − xn))] · 1/(x − x1) + ...
+ [f(xn)/((xn − x0)(xn − x1) ... (xn − xₙ₋₁))] · 1/(x − xn) ...(8.20)

Another form of Lagrange's Formula

Prove that Lagrange's formula can be put in the form,
Pn(x) = Σ (r = 0 to n) φ(x) f(xr) / [(x − xr) φ′(xr)]
where φ(x) = Π (r = 0 to n) (x − xr)
and φ′(xr) = [d/dx φ(x)] evaluated at x = xr.
Proof:
You have Lagrange's formula,
Pn(x) = Σ (r = 0 to n) [(x − x0)(x − x1) ... (x − xᵣ₋₁)(x − xᵣ₊₁) ... (x − xn)] / [(xr − x0)(xr − x1) ... (xr − xᵣ₋₁)(xr − xᵣ₊₁) ... (xr − xn)] · f(xr)
= Σ (r = 0 to n) {φ(x)/(x − xr)} · {f(xr) / [(xr − x0)(xr − x1) ... (xr − xᵣ₋₁)(xr − xᵣ₊₁) ... (xr − xn)]} ...(8.21)
Now,
φ(x) = Π (r = 0 to n) (x − xr) = (x − x0)(x − x1) ... (x − xᵣ₋₁)(x − xr)(x − xᵣ₊₁) ... (x − xn)
∴ φ′(x) = (x − x1)(x − x2) ... (x − xn) + (x − x0)(x − x2) ... (x − xn) + ... + (x − x0)(x − x1) ... (x − xᵣ₋₁)(x − xᵣ₊₁) ... (x − xn) + ... + (x − x0)(x − x1) ... (x − xₙ₋₁)
Putting x = xr, every term containing the factor (xr − xr) vanishes, leaving only the term in which (x − xr) was omitted:
φ′(xr) = (xr − x0)(xr − x1) ... (xr − xᵣ₋₁)(xr − xᵣ₊₁) ... (xr − xn) ...(8.22)
Hence, from Equations (8.21) and (8.22),
Pn(x) = Σ (r = 0 to n) φ(x) f(xr) / [(x − xr) φ′(xr)]
Hence proved.
Example 8.5: Find the unique polynomial P(x) of degree 2 such that,
P(1) = 1, P(3) = 27, P(4) = 64
Use Lagrange's method of interpolation.
Solution: Here, x0 = 1, x1 = 3, x2 = 4 and f(x0) = 1, f(x1) = 27, f(x2) = 64.
Lagrange's interpolation formula is,
P(x) = [(x − x1)(x − x2)]/[(x0 − x1)(x0 − x2)] f(x0) + [(x − x0)(x − x2)]/[(x1 − x0)(x1 − x2)] f(x1) + [(x − x0)(x − x1)]/[(x2 − x0)(x2 − x1)] f(x2)
= [(x − 3)(x − 4)]/[(1 − 3)(1 − 4)] (1) + [(x − 1)(x − 4)]/[(3 − 1)(3 − 4)] (27) + [(x − 1)(x − 3)]/[(4 − 1)(4 − 3)] (64)
= (1/6)(x² − 7x + 12) − (27/2)(x² − 5x + 4) + (64/3)(x² − 4x + 3)
= 8x² − 19x + 12
Hence, the required unique polynomial is,
P(x) = 8x² − 19x + 12
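Equation (8.19) maps almost line for line onto code. A minimal Python sketch (the function name is ours) that reproduces the polynomial of Example 8.5 at sample points:

```python
def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolation polynomial at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                # Multiply in one factor of the basis polynomial L_i(x).
                term *= (x - xj) / (xi - xj)
        total += term
    return total

xs, ys = [1, 3, 4], [1, 27, 64]
print(lagrange(xs, ys, 4))       # 64.0 (reproduces a node exactly)
print(lagrange(xs, ys, 2))       # 8(4) - 19(2) + 12 = 6.0
```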

8.7 EVERETT’S FORMULA

The interpolation formula discussed here is generally attributed to Joseph David Everett, a British physicist and mathematician; it should not be confused with the work of Hugh Everett III, the American quantum physicist known for the Many-Worlds Interpretation.
Everett’s interpolation formula is a formula for estimating the value of a
function at an intermediate value of the independent variable, when its value is
known at a series of equally spaced points (such as, those that appear in a table),
in terms of the central differences of the function of even order only and coefficients
which are polynomial functions of the independent variable.
Everett Interpolation Formula: Everett's interpolation formula is a method of writing the interpolation polynomial obtained from the Gauss interpolation formula for forward interpolation at x = x0 + th with respect to the nodes x0, x0 + h, x0 − h, …, x0 + nh, x0 − nh, x0 + (n + 1)h, without finite differences of odd order. The odd-order differences are eliminated by means of the relation,
\[
\delta^{2k+1} y_{1/2} = \delta^{2k} y_1 - \delta^{2k} y_0
\]
Adding like terms yields Everett's interpolation formula; in central difference notation its standard form is,
\[
y_t = u\,y_0 + \frac{u(u^2-1)}{3!}\,\delta^2 y_0 + \frac{u(u^2-1)(u^2-4)}{5!}\,\delta^4 y_0 + \cdots + t\,y_1 + \frac{t(t^2-1)}{3!}\,\delta^2 y_1 + \frac{t(t^2-1)(t^2-4)}{5!}\,\delta^4 y_1 + \cdots
\]
Where u = 1 − t and the δ²ⁱ are the even-order central differences at the two nodes x0 and x1 = x0 + h bracketing the interpolation point.
Example 8.6: Find the value of the tabulated function at x = 1.15 using Everett's formula.
x f(x)
1 1
1.1 1.049
1.2 1.096
1.3 1.140
Solution: The table of values for x and y is,
x 1 1.1 1.2 1.3
y 1 1.049 1.096 1.14
Using Everett's method, we have h = 1.1 − 1 = 0.1.
Taking x0 = 1.1 and x1 = 1.2, then
t = (x − x0)/h = (1.15 − 1.1)/0.1 = 0.5 and u = 1 − t = 0.5
Now the central difference table is as follows,
x      y        δy       δ²y
1.0    1.000
                0.049
1.1    1.049            −0.002
                0.047
1.2    1.096            −0.003
                0.044
1.3    1.140
Everett's interpolation formula, truncated at the second differences, gives
y(1.15) = u y0 + (u(u² − 1)/3!) δ²y0 + t y1 + (t(t² − 1)/3!) δ²y1
= 0.5 × 1.049 + (0.5 × (0.25 − 1)/6) × (−0.002) + 0.5 × 1.096 + (0.5 × (0.25 − 1)/6) × (−0.003)
= 0.5245 + 0.000125 + 0.5480 + 0.0001875
= 1.0728125
The solution of the Everett interpolation is y(1.15) = 1.07281.
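The arithmetic of Example 8.6 can be reproduced in a few lines. The sketch below is ours (the name everett2 is an assumption); it applies Everett's formula truncated at the second central differences:

```python
def everett2(y0, y1, d2y0, d2y1, t):
    """Everett interpolation from y0, y1 and the second central
    differences at the two bracketing nodes; u = 1 - t."""
    u = 1.0 - t
    return (u * y0 + u * (u**2 - 1) / 6.0 * d2y0
            + t * y1 + t * (t**2 - 1) / 6.0 * d2y1)

y = {1.0: 1.000, 1.1: 1.049, 1.2: 1.096, 1.3: 1.140}
d2_11 = y[1.2] - 2 * y[1.1] + y[1.0]   # delta^2 y at x = 1.1 -> -0.002
d2_12 = y[1.3] - 2 * y[1.2] + y[1.1]   # delta^2 y at x = 1.2 -> -0.003
t = (1.15 - 1.1) / 0.1
print(round(everett2(y[1.1], y[1.2], d2_11, d2_12, t), 5))   # 1.07281
```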

8.8 HERMITE'S FORMULA

In numerical analysis, Hermite interpolation, named after Charles Hermite, is a method of interpolating data points as a polynomial function. The generated Hermite interpolating polynomial is closely related to the Newton polynomial, in that both are derived from the calculation of divided differences. However, the Hermite interpolating polynomial may also be computed without using divided differences.
Hermite polynomials were defined by Pierre-Simon Laplace in 1810, though in scarcely recognizable form, and studied in detail by Pafnuty Chebyshev in 1859. Chebyshev's work was overlooked, and the polynomials were later named after Charles Hermite, who wrote on them in 1864, describing them as new; they were consequently not new, although Hermite was the first to define the multidimensional polynomials, in his later 1865 publications.
The nth order Hermite polynomial is a polynomial of degree n. The probabilist's version Heₙ has leading coefficient 1, while the physicist's version Hₙ has leading coefficient 2ⁿ.
In mathematics, the Hermite polynomials are defined as a classical orthogonal polynomial sequence. The polynomials arise in:
• Signal processing, as Hermitian wavelets for wavelet transform analysis
• Probability, such as the Edgeworth series, as well as in connection with Brownian motion
• Combinatorics, as an example of an Appell sequence obeying the umbral calculus
• Numerical analysis, as Gaussian quadrature
• Physics, where they give rise to the eigenstates of the quantum harmonic oscillator
• Systems theory, in connection with nonlinear operations on Gaussian noise
• Random matrix theory, in Gaussian ensembles
Hermite Interpolation Formula
Hermite's interpolation formula is a form of writing the polynomial Hm of degree m that solves the problem of interpolating a function f and its derivatives at points x0, …, xn. In the classic case, where the value and the first derivative are prescribed at each node (so that m = 2n + 1), the conditions are,
\[
H_m(x_i) = f(x_i), \qquad H_m'(x_i) = f'(x_i), \qquad i = 0, 1, \ldots, n
\]
The Hermite interpolation formula can then be written in the form,
\[
H_m(x) = \sum_{i=0}^{n} f(x_i)\,h_i(x) + \sum_{i=0}^{n} f'(x_i)\,\bar h_i(x)
\]
Where, in terms of the Lagrange fundamental polynomials \(l_i(x)\),
\[
h_i(x) = \left[1 - 2(x - x_i)\,l_i'(x_i)\right] l_i^2(x), \qquad \bar h_i(x) = (x - x_i)\,l_i^2(x)
\]

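The basis functions above are straightforward to code. The following sketch is ours (hermite_eval is an assumed name) and implements the classic first-derivative form; since a cubic is matched exactly by Hermite data at two nodes, f(x) = x³ makes a clean test:

```python
def hermite_eval(xs, fs, dfs, x):
    """Evaluate the Hermite interpolant matching values fs and slopes dfs at xs."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        li = 1.0          # l_i(x), the Lagrange fundamental polynomial
        dli = 0.0         # l_i'(x_i) = sum of 1/(x_i - x_j)
        for j in range(n):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
                dli += 1.0 / (xs[i] - xs[j])
        hi = (1.0 - 2.0 * (x - xs[i]) * dli) * li**2   # value basis h_i
        hbari = (x - xs[i]) * li**2                    # slope basis hbar_i
        total += fs[i] * hi + dfs[i] * hbari
    return total

# f(x) = x^3, f'(x) = 3x^2 at nodes 0 and 1; degree 3 data -> exact interpolant
print(hermite_eval([0.0, 1.0], [0.0, 1.0], [0.0, 3.0], 0.5))   # 0.125
```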
Check Your Progress


1. Write the Gauss's forward difference formula.
2. Explain the Gauss's backward formula.
3. State the Stirling's formula.
4. Elaborate on the Bessel's formula.
5. Define the Everett's formula.
6. Analyse the Hermite's formula.

Self-Instructional
Material 145
8.9 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. Gauss's forward difference formula is,
\[
f(k) = f(0) + k\,\Delta f(0) + \frac{k(k-1)}{2!}\Delta^2 f(-1) + \frac{(k+1)k(k-1)}{3!}\Delta^3 f(-1) + \frac{(k+1)k(k-1)(k-2)}{4!}\Delta^4 f(-2) + \cdots
\]
2. Gauss's backward formula is,
\[
y = y_0 + k\,\Delta y_{-1} + \frac{(k+1)k}{2!}\Delta^2 y_{-1} + \frac{(k+1)k(k-1)}{3!}\Delta^3 y_{-2} + \frac{(k+2)(k+1)k(k-1)}{4!}\Delta^4 y_{-2} + \frac{(k+2)(k+1)k(k-1)(k-2)}{5!}\Delta^5 y_{-3} + \cdots
\]
3. Stirling's formula is,
\[
y = y_0 + k\,\frac{\Delta y_0 + \Delta y_{-1}}{2} + \frac{k^2}{2!}\Delta^2 y_{-1} + \frac{k(k^2-1)}{3!}\,\frac{\Delta^3 y_{-1} + \Delta^3 y_{-2}}{2} + \frac{k^2(k^2-1)}{4!}\Delta^4 y_{-2} + \cdots
\]

4. Bessel's formula is,
\[
f(k) = \frac{f(0)+f(1)}{2} + \left(k-\frac{1}{2}\right)\Delta f(0) + \frac{k(k-1)}{2!}\,\frac{\Delta^2 f(-1)+\Delta^2 f(0)}{2} + \frac{k(k-1)\left(k-\frac{1}{2}\right)}{3!}\Delta^3 f(-1) + \frac{(k+1)k(k-1)(k-2)}{4!}\,\frac{\Delta^4 f(-2)+\Delta^4 f(-1)}{2} + \cdots
\]

5. Everett's formula is,
\[
y_t = u\,y_0 + \frac{u(u^2-1)}{3!}\delta^2 y_0 + \frac{u(u^2-1)(u^2-4)}{5!}\delta^4 y_0 + \cdots + t\,y_1 + \frac{t(t^2-1)}{3!}\delta^2 y_1 + \frac{t(t^2-1)(t^2-4)}{5!}\delta^4 y_1 + \cdots, \qquad u = 1-t
\]

6. The nth order Hermite polynomial is a polynomial of degree n. The probabilist's version Heₙ has leading coefficient 1, while the physicist's version Hₙ has leading coefficient 2ⁿ. In mathematics, the Hermite polynomials are defined as a classical orthogonal polynomial sequence.

8.10 SUMMARY

• Gauss's forward difference formula is,
\[
f(k) = f(0) + k\,\Delta f(0) + \frac{k(k-1)}{2!}\Delta^2 f(-1) + \frac{(k+1)k(k-1)}{3!}\Delta^3 f(-1) + \frac{(k+1)k(k-1)(k-2)}{4!}\Delta^4 f(-2) + \cdots
\]
• Gauss's backward formula is,
\[
y = y_0 + k\,\Delta y_{-1} + \frac{(k+1)k}{2!}\Delta^2 y_{-1} + \frac{(k+1)k(k-1)}{3!}\Delta^3 y_{-2} + \frac{(k+2)(k+1)k(k-1)}{4!}\Delta^4 y_{-2} + \frac{(k+2)(k+1)k(k-1)(k-2)}{5!}\Delta^5 y_{-3} + \cdots
\]
• Stirling's formula is,
\[
y = y_0 + k\,\frac{\Delta y_0 + \Delta y_{-1}}{2} + \frac{k^2}{2!}\Delta^2 y_{-1} + \frac{k(k^2-1)}{3!}\,\frac{\Delta^3 y_{-1} + \Delta^3 y_{-2}}{2} + \frac{k^2(k^2-1)}{4!}\Delta^4 y_{-2} + \cdots
\]

• Everett's interpolation formula is a formula for estimating the value of a function at an intermediate value of the independent variable, when its value is known at a series of equally spaced points (such as those that appear in a table), in terms of the central differences of the function of even order only and coefficients which are polynomial functions of the independent variable.
• The nth order Hermite polynomial is a polynomial of degree n. The probabilist's version Heₙ has leading coefficient 1, while the physicist's version Hₙ has leading coefficient 2ⁿ. In mathematics, the Hermite polynomials are defined as a classical orthogonal polynomial sequence.

8.11 KEY WORDS

• Gauss's formula: Gauss's formula alternately adds new points at the left and right ends, thereby keeping the set of points centered near the same place (near the evaluated point).
• Stirling's formula: Stirling's formula remains centered about a particular data point, for use when the evaluated point is nearer to a data point than to a middle of two data points.
• Bessel's formula: Bessel's formula remains centered about a particular middle between two data points, for use when the evaluated point is nearer to a middle than to a data point.
• Lagrange formula: The Lagrange formula is at its best when all the interpolation will be done at one x value, with only the data points' y values varying from one problem to another, and when it is known, from past experience, how many terms are needed for sufficient accuracy.

8.12 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define the Gauss’s formula.
2. State the Gauss’s backward difference formula.
3. Explain the Stirling’s formula.
4. Elaborate on the Bessel’s formula.
5. Interpret the Lagrange’s interpolation formula.
6. Analyse the Everett’s formula.
7. Define the Hermite’s formula.
Long-Answer Questions
1. Briefly discuss the Gauss’s forward differences formula.
2. Explain the Gauss’s backward differences formula.
3. Analyse the Stirling’s formula.
4. Discuss the Bessel’s formula.
5. What is Lagrange’s interpolation formula?
6. Define the Everett’s formula.
7. Explain the Hermite’s formula.

8.13 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

BLOCK - III
NUMERICAL DIFFERENTIATION AND INTEGRATION

UNIT 9 NUMERICAL
DIFFERENTIATION
Structure
9.0 Introduction
9.1 Objectives
9.2 Numerical Differentiation
9.2.1 Differentiation Using Newton’s Forward Difference Interpolation Formula
9.2.2 Differentiation Using Newton’s Backward Difference Interpolation
Formula
9.3 Answers to Check Your Progress Questions
9.4 Summary
9.5 Key Words
9.6 Self Assessment Questions and Exercises
9.7 Further Readings

9.0 INTRODUCTION

In numerical analysis, numerical differentiation is the process of finding the numerical


value of a derivative of a given function at a given point. It is the process of
computing the derivatives of a function f(x) when the function is not explicitly
known, but the values of the function are known only at a given set of arguments
x = x0, x1, x2,..., xn. For finding the derivatives, a suitable interpolating polynomial
is used and then its derivatives are used as the formulae for the derivatives of the
function. Thus, for computing the derivatives at a point near the beginning of an
equally spaced table, Newton’s forward difference interpolation formula is used,
whereas Newton’s backward difference interpolation formula is used for computing
the derivatives at a point near the end of the table.
In this unit, you will study about numerical differentiation, differentiation using Newton's forward difference interpolation formula, and differentiation using Newton's backward difference interpolation formula.

9.1 OBJECTIVES

After going through this unit, you will be able to:


• Describe numerical differentiation
• Differentiate using Newton's forward difference interpolation formula
• Differentiate using Newton's backward difference interpolation formula

9.2 NUMERICAL DIFFERENTIATION


Numerical differentiation is the process of computing the derivatives of a function
f(x) when the function is not explicitly known, but the values of the function are
known only at a given set of arguments x = x0, x1, x2,..., xn. For finding the derivatives,
we use a suitable interpolating polynomial and then its derivatives are used as the
formulae for the derivatives of the function. Thus, for computing the derivatives at a
point near the beginning of an equally spaced table, Newton’s forward difference
interpolation formula is used, whereas Newton’s backward difference interpolation
formula is used for computing the derivatives at a point near the end of the table.
Again, for computing the derivatives at a point near the middle of the table, the
derivatives of the central difference interpolation formula is used. If, however, the
arguments of the table are unequally spaced, the derivatives of the Lagrange’s
interpolating polynomial are used for computing the derivatives of the function.

9.2.1 Differentiation Using Newton's Forward Difference Interpolation Formula
Let the values of an unknown function y = f(x) be known for a set of equally spaced values x0, x1, …, xn of x, where xr = x0 + rh. Newton's forward difference interpolation formula is,
\[
y(x) \approx y_0 + u\,\Delta y_0 + \frac{u(u-1)}{2!}\Delta^2 y_0 + \frac{u(u-1)(u-2)}{3!}\Delta^3 y_0 + \cdots + \frac{u(u-1)\cdots(u-n+1)}{n!}\Delta^n y_0
\]
Where \(u = \dfrac{x - x_0}{h}\).
The derivative \(\dfrac{dy}{dx}\) can be evaluated by differentiating this polynomial with respect to x:
\[
y'(x) = \frac{1}{h}\left[\Delta y_0 + \frac{2u-1}{2}\Delta^2 y_0 + \frac{3u^2-6u+2}{6}\Delta^3 y_0 + \frac{2u^3-9u^2+11u-3}{12}\Delta^4 y_0 + \cdots\right] \qquad (9.1)
\]
\[
y''(x) = \frac{1}{h^2}\left[\Delta^2 y_0 + (u-1)\Delta^3 y_0 + \frac{6u^2-18u+11}{12}\Delta^4 y_0 + \cdots\right] \qquad (9.2)
\]
For a value of x near the beginning of a table, u = (x − x0)/h is computed first and then Equations (9.1) and (9.2) can be used to compute f′(x) and f″(x). At the tabulated point x0, the value of u is zero and the formulae for the derivatives are given by,
\[
y'(x_0) = \frac{1}{h}\left[\Delta y_0 - \frac{1}{2}\Delta^2 y_0 + \frac{1}{3}\Delta^3 y_0 - \frac{1}{4}\Delta^4 y_0 + \frac{1}{5}\Delta^5 y_0 - \cdots\right] \qquad (9.3)
\]
\[
y''(x_0) = \frac{1}{h^2}\left[\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \frac{5}{6}\Delta^5 y_0 + \cdots\right] \qquad (9.4)
\]
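The truncated series (9.3) is exact for a polynomial whose degree does not exceed the highest difference retained, which gives a simple self-test. The helper names below are ours, a minimal sketch:

```python
def forward_diff_table(ys):
    """Return [ys, first differences, second differences, ...]."""
    table = [list(ys)]
    while len(table[-1]) > 1:
        prev = table[-1]
        table.append([b - a for a, b in zip(prev, prev[1:])])
    return table

def dy_dx_at_x0(ys, h):
    """Formula (9.3) truncated to the differences available in the table."""
    d = forward_diff_table(ys)
    signs = [1, -1, 1, -1, 1]
    return sum(s * d[k][0] / k for k, s in zip(range(1, len(d)), signs)) / h

ys = [1, 8, 27, 64]            # y = x^3 tabulated at x = 1, 2, 3, 4, h = 1
print(dy_dx_at_x0(ys, 1.0))    # 3.0  (exact: y'(1) = 3x^2 = 3)
```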

9.2.2 Differentiation Using Newton's Backward Difference Interpolation Formula
For an equally spaced table of a function, Newton's backward difference interpolation formula is,
\[
y(x) \approx y_n + v\,\nabla y_n + \frac{v(v+1)}{2!}\nabla^2 y_n + \frac{v(v+1)(v+2)}{3!}\nabla^3 y_n + \cdots
\]
Where \(v = \dfrac{x - x_n}{h}\).
The derivatives \(\dfrac{dy}{dx}\) and \(\dfrac{d^2y}{dx^2}\), obtained by differentiating the above formula, are given by,
\[
\frac{dy}{dx} = \frac{1}{h}\left[\nabla y_n + \frac{2v+1}{2}\nabla^2 y_n + \frac{3v^2+6v+2}{6}\nabla^3 y_n + \frac{2v^3+9v^2+11v+3}{12}\nabla^4 y_n + \cdots\right] \qquad (9.5)
\]
\[
\frac{d^2y}{dx^2} = \frac{1}{h^2}\left[\nabla^2 y_n + (v+1)\nabla^3 y_n + \frac{6v^2+18v+11}{12}\nabla^4 y_n + \cdots\right] \qquad (9.6)
\]
For a given x near the end of the table, the values of \(dy/dx\) and \(d^2y/dx^2\) are computed by first computing v = (x − xn)/h and using the above formulae. At the tabulated point xn, where v = 0, the derivatives are given by,
\[
y'(x_n) = \frac{1}{h}\left[\nabla y_n + \frac{1}{2}\nabla^2 y_n + \frac{1}{3}\nabla^3 y_n + \frac{1}{4}\nabla^4 y_n + \cdots\right] \qquad (9.7)
\]
\[
y''(x_n) = \frac{1}{h^2}\left[\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\nabla^4 y_n + \frac{5}{6}\nabla^5 y_n + \cdots\right] \qquad (9.8)
\]
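Formulas (9.7) and (9.8) mirror (9.3) and (9.4) with all signs positive, using the backward differences that sit at the bottom of each column of the difference table. A small sketch (helper name ours), again on a cubic where the truncated series is exact:

```python
def backward_diffs_at_end(ys):
    """Return [∇y_n, ∇²y_n, ∇³y_n, ...] read off the ends of the columns."""
    diffs, prev = [], list(ys)
    while len(prev) > 1:
        prev = [b - a for a, b in zip(prev, prev[1:])]
        diffs.append(prev[-1])
    return diffs

ys = [1, 8, 27, 64]                       # y = x^3 at x = 1..4, h = 1
nab = backward_diffs_at_end(ys)           # [37, 18, 6]
d1 = sum(nab[k] / (k + 1) for k in range(len(nab)))   # (9.7): 37 + 9 + 2
d2 = nab[1] + nab[2]                                  # (9.8) truncated
print(d1, d2)   # 48.0 24  (exact: y'(4) = 3*16, y''(4) = 6*4)
```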
Example 9.1: Compute the values of f′(2.1), f″(2.1), f′(2.0) and f″(2.0) when f(x) is not known explicitly, but the following table of values is given:
x f(x)
2.0 0.69315
2.2 0.78846
2.4 0.87547
Solution: Since the points are equally spaced (h = 0.2), we form the finite difference table (differences in units of 10⁻⁵).
x      f(x)       Δf(x)    Δ²f(x)
2.0    0.69315
                  9531
2.2    0.78846             −830
                  8701
2.4    0.87547
For computing the derivatives at x = 2.1, we have u = (2.1 − 2.0)/0.2 = 0.5, so the coefficient (2u − 1)/2 of Δ²f₀ in Equation (9.1) vanishes:
f′(2.1) = (1/0.2) × 0.09531 = 0.4766
f″(2.1) = (1/(0.2)²) × (−0.00830) = −0.2075
The value of f′(2.0) is given by Equation (9.3),
f′(2.0) = (1/0.2)[Δf₀ − (1/2)Δ²f₀]
= (1/0.2)[0.09531 + (1/2) × 0.00830]
= 0.09946/0.2
= 0.4973
f″(2.0) = (1/(0.2)²) × (−0.00830) = −0.2075
(The tabulated values are those of ln x, for which the exact derivatives are f′(2.0) = 0.5 and f″(2.0) = −0.25.)
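The computation of Example 9.1 can be reproduced with a throwaway check (not from the text):

```python
# Values of ln x at spacing h = 0.2
f = [0.69315, 0.78846, 0.87547]
h = 0.2
d1 = f[1] - f[0]                # first forward difference, 0.09531
d2 = f[2] - 2 * f[1] + f[0]     # second forward difference, -0.00830
fp_20 = (d1 - d2 / 2) / h       # formula (9.3) at x0 = 2.0
fpp = d2 / h**2                 # formula (9.4) at x0 = 2.0
print(round(fp_20, 4), round(fpp, 4))
```

Both estimates land close to the exact values 1/2 and −1/4 for ln x at x = 2.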
Example 9.2: For the function f(x) whose values are given in the table below, compute the values of f′(1), f″(1), f′(5.0) and f″(5.0).
x 1 2 3 4 5 6
f(x) 7.4036 7.7815 8.1291 8.4510 8.7506 9.0309
Solution: Since f(x) is known at equally spaced points, we form the finite difference table to be used in the differentiation formulae based on Newton's interpolating polynomials (differences in units of 10⁻⁴).
x    f(x)     Δf      Δ²f    Δ³f    Δ⁴f    Δ⁵f
1    7.4036
              3779
2    7.7815           −303
              3476            46
3    8.1291           −257           −12
              3219            34             8
4    8.4510           −223           −4
              2996            30
5    8.7506           −193
              2803
6    9.0309
To calculate f′(1) and f″(1), we use the derivative formulae based on Newton's forward difference interpolation at the tabulated point, given by
f′(x₀) = (1/h)[Δf₀ − (1/2)Δ²f₀ + (1/3)Δ³f₀ − (1/4)Δ⁴f₀ + (1/5)Δ⁵f₀]
f″(x₀) = (1/h²)[Δ²f₀ − Δ³f₀ + (11/12)Δ⁴f₀ − (5/6)Δ⁵f₀]
∴ f′(1) = (1/1)[0.3779 − (1/2) × (−0.0303) + (1/3) × 0.0046 − (1/4) × (−0.0012) + (1/5) × 0.0008]
= 0.3950
f″(1) = [−0.0303 − 0.0046 + (11/12) × (−0.0012) − (5/6) × 0.0008]
= −0.0367
Similarly, for evaluating f′(5.0) and f″(5.0), we take x = 5 as the end point and use the backward difference formulae
f′(xₙ) = (1/h)[∇fₙ + (1/2)∇²fₙ + (1/3)∇³fₙ + (1/4)∇⁴fₙ + ...]
f″(xₙ) = (1/h²)[∇²fₙ + ∇³fₙ + (11/12)∇⁴fₙ + ...]
f′(5) = [0.2996 + (1/2) × (−0.0223) + (1/3) × 0.0034 + (1/4) × (−0.0012)]
= 0.2893
f″(5) = [−0.0223 + 0.0034 + (11/12) × (−0.0012)]
= −0.0200

Example 9.3: Compute the values of y′(0.0), y″(0.0), y′(0.02) and y″(0.02) for the function y = f(x) given by the following tabular values:
x 0.0 0.05 0.10 0.15 0.20 0.25
y 0.00000 0.10017 0.20134 0.30452 0.41075 0.52110
Solution: Since the values of x for which the derivatives are to be computed lie near the beginning of the equally spaced table, we use the differentiation formulae based on Newton's forward difference interpolation formula. We first form the finite difference table (differences in units of 10⁻⁵).
x      y         Δy       Δ²y    Δ³y    Δ⁴y
0.0    0.00000
                 10017
0.05   0.10017            100
                 10117            101
0.10   0.20134            201            3
                 10318            104
0.15   0.30452            305            3
                 10623            107
0.20   0.41075            412
                 11035
0.25   0.52110
For evaluating y′(0.0), we use the formula
y′(x₀) = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀]
∴ y′(0.0) = (1/0.05)(0.10017 − (1/2) × 0.00100 + (1/3) × 0.00101 − (1/4) × 0.00003)
= 2.00000
For evaluating y″(0.0), we use the formula
y″(x₀) = (1/h²)(Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀)
= (1/(0.05)²)(0.00100 − 0.00101 + (11/12) × 0.00003)
= 0.007
For evaluating y′(0.02) and y″(0.02), we use the following formulae, with u = (0.02 − 0.00)/0.05 = 0.4:
y′(0.02) = (1/h)[Δy₀ + ((2u − 1)/2)Δ²y₀ + ((3u² − 6u + 2)/6)Δ³y₀ + ((2u³ − 9u² + 11u − 3)/12)Δ⁴y₀]
y″(0.02) = (1/h²)[Δ²y₀ + (u − 1)Δ³y₀ + ((6u² − 18u + 11)/12)Δ⁴y₀]
∴ y′(0.02) = (1/0.05)[0.10017 + ((2 × 0.4 − 1)/2) × 0.00100 + ((3 × 0.16 − 6 × 0.4 + 2)/6) × 0.00101 + ((2 × 0.4³ − 9 × 0.4² + 11 × 0.4 − 3)/12) × 0.00003]
= (1/0.05)(0.10017 − 0.00010 + 0.0000135 + 0.0000002)
= 2.00167
y″(0.02) = (1/(0.05)²)[0.00100 + (0.4 − 1) × 0.00101 + ((6 × 0.16 − 18 × 0.4 + 11)/12) × 0.00003]
= (1/0.0025)(0.00100 − 0.000606 + 0.0000119)
= 0.162
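The arithmetic at the off-table point x = 0.02 in Example 9.3 can be replayed directly from formulas (9.1) and (9.2); the snippet below is a quick check of ours, not part of the text:

```python
h, u = 0.05, 0.4
d1, d2, d3, d4 = 0.10017, 0.00100, 0.00101, 0.00003   # leading differences
yp = (d1 + (2*u - 1)/2 * d2 + (3*u**2 - 6*u + 2)/6 * d3
      + (2*u**3 - 9*u**2 + 11*u - 3)/12 * d4) / h     # formula (9.1)
ypp = (d2 + (u - 1) * d3 + (6*u**2 - 18*u + 11)/12 * d4) / h**2   # formula (9.2)
print(round(yp, 4), round(ypp, 3))
```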

Example 9.4: Compute f′(6.0) and f″(6.3) by numerical differentiation formulae for the function f(x) given in the following table.
x 6.0 6.1 6.2 6.3 6.4
f(x) −0.1750 −0.1998 −0.2223 −0.2422 −0.2596
Solution: We first form the finite difference table (differences in units of 10⁻⁴).
x      f(x)      Δf     Δ²f    Δ³f
6.0    −0.1750
                 −248
6.1    −0.1998          23
                 −225           3
6.2    −0.2223          26
                 −199          −1
6.3    −0.2422          25
                 −174
6.4    −0.2596
For evaluating f′(6.0), we use the formula derived by differentiating Newton's forward difference interpolation formula:
f′(x₀) = (1/h)[Δf₀ − (1/2)Δ²f₀ + (1/3)Δ³f₀]
∴ f′(6.0) = (1/0.1)[−0.0248 − (1/2) × 0.0023 + (1/3) × 0.0003]
= 10[−0.0248 − 0.00115 + 0.0001]
= −0.2585
For evaluating f″(6.3), we use the formula obtained by differentiating Newton's backward difference interpolation formula, taking xₙ = 6.3. It is given by,
f″(xₙ) = (1/h²)[∇²fₙ + ∇³fₙ]
∴ f″(6.3) = (1/(0.1)²)[0.0026 + 0.0003] = 0.29
Example 9.5: Compute the values of y′(1.00) and y″(1.00) using suitable numerical differentiation formulae on the following table of values of x and y:
x 1.00 1.05 1.10 1.15 1.20
y 1.00000 1.02470 1.04881 1.07238 1.09544
Solution: For computing the derivatives, we use the formulae derived on differentiating Newton's forward difference interpolation formula, given by
y′(x₀) = (1/h)[Δy₀ − (1/2)Δ²y₀ + (1/3)Δ³y₀ − (1/4)Δ⁴y₀ − ...]
y″(x₀) = (1/h²)[Δ²y₀ − Δ³y₀ + (11/12)Δ⁴y₀ − ...]
Now, we form the finite difference table (differences in units of 10⁻⁵).
x      y         Δy      Δ²y    Δ³y    Δ⁴y
1.00   1.00000
                 2470
1.05   1.02470           −59
                 2411            5
1.10   1.04881           −54           −2
                 2357            3
1.15   1.07238           −51
                 2306
1.20   1.09544
Thus with x₀ = 1.00, we have
y′(1.00) = (1/0.05)(0.02470 + (1/2) × 0.00059 + (1/3) × 0.00005 + (1/4) × 0.00002)
= 0.5003
y″(1.00) = (1/(0.05)²)(−0.00059 − 0.00005 − (11/12) × 0.00002)
= −0.263
(The tabulated values are those of √x, for which the exact derivatives at x = 1 are 0.5 and −0.25.)
Example 9.6: Using the following table of values, find a polynomial representation of f′(x) and then compute f′(0.5).
x 0 1 2 3
f(x) 1 3 15 40
Solution: Since the values of x are equally spaced, we use Newton's forward difference interpolating polynomial for finding f′(x) and f′(0.5). We first form the finite difference table as given below:
x    f(x)    Δf(x)    Δ²f(x)    Δ³f(x)
0    1
             2
1    3                10
             12                  3
2    15               13
             25
3    40
Taking x₀ = 0, we have u = (x − x₀)/h = x. Thus Newton's forward difference interpolation gives,
f = f₀ + uΔf₀ + (u(u − 1)/2!)Δ²f₀ + (u(u − 1)(u − 2)/3!)Δ³f₀
i.e., f(x) ≈ 1 + 2x + (x(x − 1)/2) × 10 + (x(x − 1)(x − 2)/6) × 3
Or, f(x) = 1 − 2x + (7/2)x² + (1/2)x³
∴ f′(x) = −2 + 7x + (3/2)x²
And, f′(0.5) = −2 + 7 × 0.5 + (3/2) × (0.5)² = 1.875
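The expansion in Example 9.6 is easy to verify: the Newton form must reproduce every tabulated value, and its derivative can then be read off. A quick check of ours:

```python
def f(x):
    # Newton forward form for the data (0,1), (1,3), (2,15), (3,40), h = 1
    return 1 + 2*x + x*(x - 1)/2 * 10 + x*(x - 1)*(x - 2)/6 * 3

# the expanded polynomial is f(x) = 1 - 2x + (7/2)x^2 + (1/2)x^3
for xv, yv in [(0, 1), (1, 3), (2, 15), (3, 40)]:
    assert f(xv) == yv        # interpolant matches the table exactly

def fp(x):
    return -2 + 7*x + 1.5*x**2   # f'(x) from the expansion

print(fp(0.5))   # 1.875
```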
Example 9.7: The population of a city is given in the following table. Find the rate of growth in population in the year 2001 and in 1995.
Year x 1961 1971 1981 1991 2001
Population y 40.62 60.80 79.95 103.56 132.65
Solution: Since the rate of growth of the population is dy/dx, we have to compute dy/dx at x = 2001 and at x = 1995. For this we consider the formula for the derivative on approximating y by Newton's backward difference interpolation, given by,
\[
\frac{dy}{dx} = \frac{1}{h}\left[\nabla y_n + \frac{2v+1}{2}\nabla^2 y_n + \frac{3v^2+6v+2}{6}\nabla^3 y_n + \frac{2v^3+9v^2+11v+3}{12}\nabla^4 y_n + \cdots\right]
\]
Where \(v = \dfrac{x - x_n}{h}\).
For this we construct the finite difference table as given below:
x      y        ∇y      ∇²y     ∇³y     ∇⁴y
1961   40.62
                20.18
1971   60.80            −1.03
                19.15            5.49
1981   79.95             4.46           −4.47
                23.61            1.02
1991   103.56            5.48
                29.09
2001   132.65
For x = 2001, taking xₙ = 2001 gives v = (x − xₙ)/h = 0, so
(dy/dx)₂₀₀₁ = (1/10)[29.09 + (1/2) × 5.48 + (1/3) × 1.02 + (1/4) × (−4.47)]
= 3.105
For x = 1995, we take xₙ = 1991, so v = (1995 − 1991)/10 = 0.4, and
(dy/dx)₁₉₉₅ = (1/10)[23.61 + (1.8/2) × 4.46 + ((3 × 0.16 + 6 × 0.4 + 2)/6) × 5.49]
= 3.21
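The growth rate at the last tabulated year (v = 0) reduces to formula (9.7), which makes it a one-liner to confirm; the snippet is our own check on the table's backward differences:

```python
h = 10.0
nab = [29.09, 5.48, 1.02, -4.47]   # ∇y, ∇²y, ∇³y, ∇⁴y at x = 2001
# (9.7): divide the k-th backward difference by k and sum
rate = sum(d / (k + 1) for k, d in enumerate(nab)) / h
print(round(rate, 3))   # 3.105
```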

Check Your Progress


1. Define the process of numerical differentiation.
2. Write Newton’s forward difference interpolation formula.
3. Write Newton’s backward difference interpolation formula.

9.3 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. Numerical differentiation is the process of computing the derivatives of a


function f(x) when the function is not explicitly known, but the values of the
function are known for a given set of arguments x = x0, x1, x2, ..., xn. To
find the derivatives, we use a suitable interpolating polynomial and then its
derivatives are used as the formulae for the derivatives of the function.

Numerical Differentiation 2. Newton’s forward difference interpolation formula is,

NOTES
Where
3. Newton’s backward difference interpolation formula is,

9.4 SUMMARY

• Numerical differentiation is the process of computing the derivatives of a function f(x) when the function is not explicitly known, but the values of the function are known only at a given set of arguments x = x0, x1, x2, ..., xn.
• For finding the derivatives, we use a suitable interpolating polynomial and then its derivatives are used as the formulae for the derivatives of the function.
• For computing the derivatives at a point near the beginning of an equally spaced table, Newton's forward difference interpolation formula is used, whereas Newton's backward difference interpolation formula is used for computing the derivatives at a point near the end of the table.
• Let the values of an unknown function y = f(x) be known for a set of equally spaced values x0, x1, …, xn of x, where xr = x0 + rh. Newton's forward difference interpolation formula is,
\[
y \approx y_0 + u\,\Delta y_0 + \frac{u(u-1)}{2!}\Delta^2 y_0 + \cdots
\]
Where u = (x − x0)/h.
• At the tabulated point x0, the value of u is zero and the formulae for the derivatives are given by,
\[
y'(x_0) = \frac{1}{h}\Big[\Delta y_0 - \frac{1}{2}\Delta^2 y_0 + \frac{1}{3}\Delta^3 y_0 - \frac{1}{4}\Delta^4 y_0 + \cdots\Big], \qquad y''(x_0) = \frac{1}{h^2}\Big[\Delta^2 y_0 - \Delta^3 y_0 + \frac{11}{12}\Delta^4 y_0 - \cdots\Big]
\]
• For a given x near the end of the table, the values of dy/dx and d²y/dx² are computed by first computing v = (x − xn)/h and using the backward difference formulae. At the tabulated point xn, the derivatives are given by,
\[
y'(x_n) = \frac{1}{h}\Big[\nabla y_n + \frac{1}{2}\nabla^2 y_n + \frac{1}{3}\nabla^3 y_n + \frac{1}{4}\nabla^4 y_n + \cdots\Big], \qquad y''(x_n) = \frac{1}{h^2}\Big[\nabla^2 y_n + \nabla^3 y_n + \frac{11}{12}\nabla^4 y_n + \cdots\Big]
\]
• For computing the derivatives at a point near the middle of the table, the derivatives of the central difference interpolation formula are used.
• If the arguments of the table are unequally spaced, then the derivatives of the Lagrange's interpolating polynomial are used for computing the derivatives of the function.

9.5 KEY WORDS

• Numerical differentiation: It is the process of computing the derivatives of a function f(x) when the function is not explicitly known, but the values of the function are known for a given set of arguments x = x0, x1, x2, ..., xn.
• Newton's forward difference interpolation formula: The Newton's forward difference interpolation formula is used for computing the derivatives at a point near the beginning of an equally spaced table.
• Newton's backward difference interpolation formula: Newton's backward difference interpolation formula is used for computing the derivatives at a point near the end of the table.
• Central difference interpolation formula: For computing the derivatives at a point near the middle of the table, the derivatives of the central difference interpolation formula are used.

9.6 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Define the term numerical differentiation.
2. Give the differentiation formula for Newton’s forward difference interpolation.

3. How can the derivative be evaluated?


Numerical Differentiation 4. Give the formulae for the derivatives at the tabulated point x0 where the
value of u is zero.
5. Give the differentiation formula for Newton’s backward difference
interpolation.
NOTES
6. Give the Newton’s backward difference interpolation formula for an equally
spaced table of a function.
Long-Answer Questions
1. Discuss numerical differentiation using Newton’s forward difference
interpolation formula and Newton’s backward difference interpolation
formula.

2. Use the following table of values to compute

x 0 1 2 3
f ( x) 1.6 3.8 8.2 15.4

3. Use suitable formulae to compute y′(1.4) and y″(1.4) for the function y = f(x), given by the following tabular values:
x 1.4 1.8 2.2 2.6 3.0
y 0.9854 0.9738 0.8085 0.5155 0.1411

4. Compute dy/dx and d²y/dx² for x = 1, where the function y = f(x) is given by the following table:
x 1 2 3 4 5 6
y 1 8 27 64 125 216

5. A rod is rotating in a plane about one of its ends. The following table gives the angle θ (in radians) through which the rod has turned for different values of the time t seconds. Find its angular velocity dθ/dt and angular acceleration d²θ/dt² at t = 1.0.

t secs 0.0 0.2 0.4 0.6 0.8 1.0

6. Find dy/dx and d²y/dx² at x = 1 and at x = 3 for the function y = f(x), whose values in [1, 6] are given in the following table:

x 1 2 3 4 5 6
y 2.7183 3.3210 4.0552 4.9530 6.0496 7.3891

7. Find dy/dx and d²y/dx² at x = 0.96 and at x = 1.04 for the function y = f(x) given in the following table:
x 0.96 0.98 1.0 1.02 1.04
y 0.7825 0.7739 0.7651 0.7563 0.7473

9.7 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

UNIT 10 NUMERICAL DIFFERENTIATION METHODS BASED ON FINITE DIFFERENCES
Structure
10.0 Introduction
10.1 Objectives
10.2 Numerical Differentiation
10.3 Methods Based on Finite Differences
10.4 Answers to Check Your Progress Questions
10.5 Summary
10.6 Key Words
10.7 Self Assessment Questions and Exercises
10.8 Further Readings

10.0 INTRODUCTION

In numerical analysis, numerical differentiation describes algorithms for estimating


the derivative of a mathematical function or function subroutine using values of the
function and perhaps other knowledge about the function.
A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient.
approximation of derivatives by finite differences plays a central role in finite
difference methods for the numerical solution of differential equations, especially
boundary value problems. Certain recurrence relations can be written as difference
equations by replacing iteration notation with finite differences. Today, the term
“Finite Difference” is often taken as synonymous with finite difference
approximations of derivatives, especially in the context of numerical methods.
Finite difference approximations are finite difference quotients in the terminology
employed above.
In numerical analysis, finite-difference methods (FDM) are a class of
numerical techniques for solving differential equations by approximating derivatives
with finite differences. Both the spatial domain and time interval (if applicable) are
discretized, or broken into a finite number of steps, and the value of the solution at
these discrete points is approximated by solving algebraic equations containing
finite differences and values from nearby points.
In this unit, you will study about the numerical differentiation, and methods
based on finite differences.
10.1 OBJECTIVES

After going through this unit, you will be able to:


• Explain numerical differentiation
• Define forward and backward interpolation
• Analyse the methods based on finite differences

10.2 NUMERICAL DIFFERENTIATION

Sir Isaac Newton proposed interpolation formulae for forward and backward interpolation. These are used for numerical differentiation. Such tools are widely used in the fields of engineering, statistics and other branches of mathematics. Computer science also uses these concepts to find nearly accurate solutions for differentiation.
Forward interpolations
Sir Isaac Newton proposed a formula for forward interpolation that bears his name. It is expressed as a finite difference identity in which an interpolated value between tabulated points is built from the first value y0 and powers of the forward difference. The forward difference is denoted by Δ, which is known as the forward difference operator. The forward difference is defined as the value obtained by subtracting the present value from the next value: if the initial value is y0 and the next value is y1, then Δy0 = y1 − y0. In a similar way Δ² is used: Δ²y0 = Δy1 − Δy0. Proceeding this way, you may write the first, second and nth forward differences as follows:
Δy0 = y1 − y0,  Δ²y0 = Δy1 − Δy0,  ...,  Δⁿy0 = Δⁿ⁻¹y1 − Δⁿ⁻¹y0
Taking this difference you may denote the next term(s), and thus,
y1 = y0 + Δy0 ⇒ y1 = (1 + Δ)y0
Here, 1 + Δ effects a forward shift, and a separate operator E, known as the forward shift operator, is defined by E = 1 + Δ. Now in the light of this fact, you may write y1 = Ey0 and y2 = Ey1 = E(Ey0) = E²y0. Proceeding this way, you may write yn = Eⁿy0.
Backward interpolations
Just as there is a forward difference operator for forward interpolation, there is a backward difference operator for backward interpolation. This is also credited to Newton. In the forward case you look to the next term, but in the backward case you look to the preceding term, i.e., the one earlier to it. Backward differences are denoted by the backward difference operator ∇ and are given as:
yn−1 = yn − ∇yn ⇒ ∇yn = yn − yn−1 and yn−1 = (1 − ∇)yn
Just as it was y0 for the forward difference, for the backward difference operator it is yn.
Thus, ∇y1 = y1 − y0, ∇y2 = y2 − y1 ⇒ ∇²y2 = ∇y2 − ∇y1, and proceeding this way you get, ∇ⁿyn = ∇ⁿ⁻¹yn − ∇ⁿ⁻¹yn−1
Relation between difference operators
E = 1 + Δ ⇒ Δ = E − 1
∇(Ey0) = ∇(y1) = y1 − y0
Thus, (1 − ∇)Ey0 = Ey0 − ∇(Ey0) = y1 − ∇(y1) = y1 − (y1 − y0) = y0
Or, (1 − ∇)(1 + Δ)y0 = y0, which is true for all the terms of y, i.e., y0, y1, y2, ..., yn.
Thus, (1 − ∇)(1 + Δ) = 1 and (1 − ∇)⁻¹ = (1 + Δ) = E
And also, Δ = (1 − ∇)⁻¹ − 1.
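The operator identities above can be illustrated numerically on any sequence; the sketch below is ours and simply checks E = 1 + Δ and that 1 − ∇ undoes the shift:

```python
y = [2, 5, 11, 23, 47]   # an arbitrary sample sequence

# Numerically, Δy_i and ∇y_{i+1} are the same consecutive differences,
# attached to the left and right endpoint respectively.
diffs = [b - a for a, b in zip(y, y[1:])]

# E y_i = y_{i+1} = y_i + Δy_i  (E = 1 + Δ)
assert all(y[i] + diffs[i] == y[i + 1] for i in range(len(diffs)))
# (1 - ∇) y_{i+1} = y_{i+1} - ∇y_{i+1} = y_i, i.e. (1 - ∇)E = 1
assert all(y[i + 1] - diffs[i] == y[i] for i in range(len(diffs)))
print("operator identities hold")
```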
Central difference operator
The forward shift operator, if applied to a term, gives the next term. Let any term corresponding to the value of x be denoted f(x) instead of y; with a very small increment h, when the value of x becomes x + h, it is denoted by f(x + h), the next term of y. Using the forward shift operator, the same can also be written as Ef(x) = f(x + h). You can also view the same as Eyn−1 = yn. If f(x) is the first term, then f(x + h) is the next term.
The central difference operator δ is defined as δf(x) = f(x + h/2) − f(x − h/2). This is known as the first central difference. Higher central differences can also be given as:
δ²f(x) = f(x + h) − 2f(x) + f(x − h)
And δⁿf(x) = δⁿ⁻¹f(x + h/2) − δⁿ⁻¹f(x − h/2)
In following paragraphs, Newton’s formulae for forward and backward
interpolation, Stirling ’s and Bessel’s central difference formulae is explained.
(1) Newton’s forward difference interpolation formula

y = y0 + kΔy0 + [k(k – 1)/2!]Δ²y0 + [k(k – 1)(k – 2)/3!]Δ³y0 + ... ...(10.1)

Where, k = (x – a)/h ...(10.2)

Differentiating Equation (10.1) with respect to k, you get,

dy/dk = Δy0 + [(2k – 1)/2]Δ²y0 + [(3k² – 6k + 2)/6]Δ³y0 + ... ...(10.3)

Differentiating Equation (10.2) with respect to x, you get,

dk/dx = 1/h ...(10.4)

You know that,

dy/dx = (dy/dk)·(dk/dx) = (1/h)[Δy0 + ((2k – 1)/2)Δ²y0 + ((3k² – 6k + 2)/6)Δ³y0 + ...] ...(10.5)

Equation (10.5) provides the value of dy/dx at any x which is not tabulated.
Equation (10.5) becomes simple for tabulated values of x, in particular when x = a
and k = 0.
Putting k = 0 in Equation (10.5), you get,

(dy/dx)_{x=a} = (1/h)[Δy0 – (1/2)Δ²y0 + (1/3)Δ³y0 – (1/4)Δ⁴y0 + (1/5)Δ⁵y0 – ...] ...(10.6)

Differentiating Equation (10.5) with respect to x, you get

d²y/dx² = (d/dx)(dy/dx) = (d/dk)(dy/dx)·(dk/dx)
= (1/h²)[Δ²y0 + (k – 1)Δ³y0 + ((6k² – 18k + 11)/12)Δ⁴y0 + ...] ...(10.7)

Putting k = 0 in Equation (10.7), you get

(d²y/dx²)_{x=a} = (1/h²)[Δ²y0 – Δ³y0 + (11/12)Δ⁴y0 – ...] ...(10.8)

Similarly, you get

(d³y/dx³)_{x=a} = (1/h³)[Δ³y0 – (3/2)Δ⁴y0 + ...] ...(10.9)

And so on.
Aliter: You know that,

E = e^{hD} ⇒ 1 + Δ = e^{hD}
∴ hD = log (1 + Δ) = Δ – Δ²/2 + Δ³/3 – Δ⁴/4 + ...
⇒ D = (1/h)[Δ – (1/2)Δ² + (1/3)Δ³ – (1/4)Δ⁴ + ...]

Similarly,

D² = (1/h²)[Δ – (1/2)Δ² + (1/3)Δ³ – (1/4)Δ⁴ + ...]²
= (1/h²)[Δ² – Δ³ + (11/12)Δ⁴ – (5/6)Δ⁵ + ...]

And D³ = (1/h³)[Δ³ – (3/2)Δ⁴ + ...]
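Formula (10.6) applies directly to a table of equally spaced ordinates. A hedged sketch in Python (function names and sample data are my own; the data is y = x² at x = 0, 1, 2, 3, for which dy/dx at x = 0 is exactly 0):

```python
def forward_diffs(y):
    """Return [Δy0, Δ²y0, Δ³y0, ...] for equally spaced ordinates y."""
    diffs, row = [], list(y)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    return diffs

def dydx_at_x0(y, h):
    # Equation (10.6): (1/h)[Δy0 - Δ²y0/2 + Δ³y0/3 - Δ⁴y0/4 + ...]
    d = forward_diffs(y)
    return sum((-1) ** k * d[k] / (k + 1) for k in range(len(d))) / h

y = [0.0, 1.0, 4.0, 9.0]     # y = x² at x = 0, 1, 2, 3 (h = 1)
print(dydx_at_x0(y, 1.0))    # exact derivative at x = 0 is 0
```

Because the data is a quadratic, the series terminates after the second difference and the formula reproduces the derivative exactly.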
(2) Newton’s backward difference interpolation formula

y = yn + k∇yn + [k(k + 1)/2!]∇²yn + [k(k + 1)(k + 2)/3!]∇³yn + ... ...(10.10)

Where, k = (x – xn)/h ...(10.11)

Differentiating Equation (10.10) with respect to k, you get,

dy/dk = ∇yn + ((2k + 1)/2)∇²yn + ((3k² + 6k + 2)/6)∇³yn + ... ...(10.12)

Differentiating Equation (10.11) with respect to x, you get,

dk/dx = 1/h ...(10.13)

Now, dy/dx = (dy/dk)·(dk/dx)
= (1/h)[∇yn + ((2k + 1)/2)∇²yn + ((3k² + 6k + 2)/6)∇³yn + ...] ...(10.14)

Equation (10.14) provides the value of dy/dx at any x which is not tabulated.
At x = xn, you have k = 0.
∴ Putting k = 0 in Equation (10.14), you get,

(dy/dx)_{x=xn} = (1/h)[∇yn + (1/2)∇²yn + (1/3)∇³yn + (1/4)∇⁴yn + ...] ...(10.15)

Differentiating Equation (10.14) with respect to x, you get,

d²y/dx² = (d/dk)(dy/dx)·(dk/dx)
= (1/h²)[∇²yn + (k + 1)∇³yn + ((6k² + 18k + 11)/12)∇⁴yn + ...] ...(10.16)

Putting k = 0 in Equation (10.16), you get,

(d²y/dx²)_{x=xn} = (1/h²)[∇²yn + ∇³yn + (11/12)∇⁴yn + ...] ...(10.17)

Similarly, you get, (d³y/dx³)_{x=xn} = (1/h³)[∇³yn + (3/2)∇⁴yn + ...] ...(10.18)

And so on.
Formulae for computing higher derivatives may be obtained by successive
differentiation.
Aliter: You know that,

E⁻¹ = 1 – ∇
e^{–hD} = 1 – ∇
∴ –hD = log (1 – ∇) = –(∇ + (1/2)∇² + (1/3)∇³ + (1/4)∇⁴ + ...)
⇒ D = (1/h)(∇ + (1/2)∇² + (1/3)∇³ + (1/4)∇⁴ + ...)

Also, D² = (1/h²)(∇ + (1/2)∇² + (1/3)∇³ + ...)²
= (1/h²)(∇² + ∇³ + (11/12)∇⁴ + ...)

Similarly, D³ = (1/h³)(∇³ + (3/2)∇⁴ + ...) and so on.
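Formula (10.15) gives the derivative at the last tabulated point. A small sketch (names and data my own; y = x² at x = 0, 1, 2, 3, so y′(3) = 6):

```python
def backward_diffs(y):
    """Return [∇yn, ∇²yn, ∇³yn, ...] built from the last ordinate."""
    diffs, row = [], list(y)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[-1])
    return diffs

def dydx_at_xn(y, h):
    # Equation (10.15): (1/h)[∇yn + ∇²yn/2 + ∇³yn/3 + ...]
    d = backward_diffs(y)
    return sum(d[k] / (k + 1) for k in range(len(d))) / h

y = [0.0, 1.0, 4.0, 9.0]     # y = x² at x = 0, 1, 2, 3 (h = 1)
print(dydx_at_xn(y, 1.0))    # exact derivative at x = 3 is 6
```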
(3) Stirling’s central difference interpolation formula

y = y0 + (k/1!)·[(Δy0 + Δy₋₁)/2] + (k²/2!)Δ²y₋₁ + [k(k² – 1²)/3!]·[(Δ³y₋₁ + Δ³y₋₂)/2]
+ [k²(k² – 1²)/4!]Δ⁴y₋₂ + [k(k² – 1²)(k² – 2²)/5!]·[(Δ⁵y₋₂ + Δ⁵y₋₃)/2] + ... ...(10.19)

Where, k = (x – a)/h ...(10.20)

Differentiating Equation (10.19) with respect to k, you get,

dy/dk = (Δy0 + Δy₋₁)/2 + kΔ²y₋₁ + ((3k² – 1)/6)·[(Δ³y₋₁ + Δ³y₋₂)/2]
+ ((4k³ – 2k)/4!)Δ⁴y₋₂ + ((5k⁴ – 15k² + 4)/5!)·[(Δ⁵y₋₂ + Δ⁵y₋₃)/2] + ... ...(10.21)

Differentiating Equation (10.20) with respect to x, you get,

dk/dx = 1/h ...(10.22)

Now,

dy/dx = (dy/dk)·(dk/dx)
= (1/h)[(Δy0 + Δy₋₁)/2 + kΔ²y₋₁ + ((3k² – 1)/6)·((Δ³y₋₁ + Δ³y₋₂)/2)
+ ((4k³ – 2k)/4!)Δ⁴y₋₂ + ((5k⁴ – 15k² + 4)/5!)·((Δ⁵y₋₂ + Δ⁵y₋₃)/2) + ...] ...(10.23)

Equation (10.23) provides the value of dy/dx at any x which is not tabulated.
At x = a, you have k = 0.
∴ Putting k = 0 in Equation (10.23), you get,

(dy/dx)_{x=a} = (1/h)[(Δy0 + Δy₋₁)/2 – (1/6)·((Δ³y₋₁ + Δ³y₋₂)/2)
+ (1/30)·((Δ⁵y₋₂ + Δ⁵y₋₃)/2) – ...] ...(10.24)

Differentiating Equation (10.23) with respect to x, you get,

d²y/dx² = (d/dk)(dy/dx)·(dk/dx)
= (1/h²)[Δ²y₋₁ + k·((Δ³y₋₁ + Δ³y₋₂)/2) + ((6k² – 1)/12)Δ⁴y₋₂
+ ((2k³ – 3k)/12)·((Δ⁵y₋₂ + Δ⁵y₋₃)/2) + ...] ...(10.25)

Putting k = 0 in Equation (10.25), you get,

(d²y/dx²)_{x=a} = (1/h²)[Δ²y₋₁ – (1/12)Δ⁴y₋₂ + (1/90)Δ⁶y₋₃ – ...] ...(10.26)

And so on.
Formulae for computing higher derivatives may be obtained by successive
differentiation.
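The leading terms of formula (10.24) give the derivative at the middle tabulated point. A sketch under illustrative assumptions (names and data are my own; y = sin x at five points, so the exact derivative at x = 0.5 is cos 0.5):

```python
import math

def stirling_dydx(ys, h):
    """dy/dx at the middle point of 5 equally spaced ordinates,
    using the first two non-vanishing terms of Equation (10.24)."""
    t = [list(ys)]
    for _ in range(3):
        prev = t[-1]
        t.append([prev[i + 1] - prev[i] for i in range(len(prev) - 1)])
    d1 = (t[1][2] + t[1][1]) / 2        # (Δy0 + Δy-1)/2 about the middle point
    d3 = (t[3][1] + t[3][0]) / 2        # (Δ³y-1 + Δ³y-2)/2
    return (d1 - d3 / 6) / h

h = 0.25
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(x) for x in xs]
print(stirling_dydx(ys, h), math.cos(0.5))   # both ≈ 0.8776
```

Truncating after the third differences leaves an error of order h⁴, so even with h = 0.25 the approximation agrees with cos 0.5 to about three decimal places.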
(4) Bessel’s central difference interpolation formula

y = (y0 + y1)/2 + (k – 1/2)Δy0 + [k(k – 1)/2!]·[(Δ²y₋₁ + Δ²y0)/2]
+ [k(k – 1)(k – 1/2)/3!]Δ³y₋₁ + [(k + 1)k(k – 1)(k – 2)/4!]·[(Δ⁴y₋₂ + Δ⁴y₋₁)/2]
+ [(k + 1)k(k – 1)(k – 2)(k – 1/2)/5!]Δ⁵y₋₂
+ [(k + 2)(k + 1)k(k – 1)(k – 2)(k – 3)/6!]·[(Δ⁶y₋₃ + Δ⁶y₋₂)/2] + ... ...(10.27)

Where, k = (x – a)/h ...(10.28)

Differentiating Equation (10.27) with respect to k, you get,

dy/dk = Δy0 + ((2k – 1)/2!)·((Δ²y₋₁ + Δ²y0)/2) + ((3k² – 3k + 1/2)/3!)Δ³y₋₁
+ ((4k³ – 6k² – 2k + 2)/4!)·((Δ⁴y₋₂ + Δ⁴y₋₁)/2) + ((5k⁴ – 10k³ + 5k – 1)/5!)Δ⁵y₋₂
+ ((6k⁵ – 15k⁴ – 20k³ + 45k² + 8k – 12)/6!)·((Δ⁶y₋₃ + Δ⁶y₋₂)/2) + ... ...(10.29)

Differentiating Equation (10.28) with respect to x, you get,

dk/dx = 1/h

Now, dy/dx = (dy/dk)·(dk/dx)
= (1/h)[Δy0 + ((2k – 1)/2!)·((Δ²y₋₁ + Δ²y0)/2) + ((3k² – 3k + 1/2)/3!)Δ³y₋₁
+ ((4k³ – 6k² – 2k + 2)/4!)·((Δ⁴y₋₂ + Δ⁴y₋₁)/2) + ((5k⁴ – 10k³ + 5k – 1)/5!)Δ⁵y₋₂
+ ((6k⁵ – 15k⁴ – 20k³ + 45k² + 8k – 12)/6!)·((Δ⁶y₋₃ + Δ⁶y₋₂)/2) + ...] ...(10.30)

Equation (10.30) provides us the value of dy/dx at any x which is not tabulated.
At x = a, you have k = 0.
∴ Putting k = 0 in Equation (10.30), you get,

(dy/dx)_{x=a} = (1/h)[Δy0 – (1/2)·((Δ²y₋₁ + Δ²y0)/2) + (1/12)Δ³y₋₁
+ (1/12)·((Δ⁴y₋₂ + Δ⁴y₋₁)/2) – (1/120)Δ⁵y₋₂ – (1/60)·((Δ⁶y₋₃ + Δ⁶y₋₂)/2) + ...]
...(10.31)

Differentiating Equation (10.30) with respect to x, you get,

d²y/dx² = (d/dk)(dy/dx)·(dk/dx)
= (1/h²)[(Δ²y₋₁ + Δ²y0)/2 + ((2k – 1)/2)Δ³y₋₁
+ ((6k² – 6k – 1)/12)·((Δ⁴y₋₂ + Δ⁴y₋₁)/2) + ((4k³ – 6k² + 1)/24)Δ⁵y₋₂
+ ((15k⁴ – 30k³ – 30k² + 45k + 4)/360)·((Δ⁶y₋₃ + Δ⁶y₋₂)/2) + ...] ...(10.32)

Putting k = 0 in Equation (10.32), you get,

(d²y/dx²)_{x=a} = (1/h²)[(Δ²y₋₁ + Δ²y0)/2 – (1/2)Δ³y₋₁ – (1/12)·((Δ⁴y₋₂ + Δ⁴y₋₁)/2)
+ (1/24)Δ⁵y₋₂ + (1/90)·((Δ⁶y₋₃ + Δ⁶y₋₂)/2) + ...] ...(10.33)

And so on.

10.3 METHODS BASED ON FINITE DIFFERENCES

In this method of solving a boundary value problem, the derivatives appearing in the
differential equation and, if necessary, in the boundary conditions are replaced by
appropriate difference approximations.
Consider the differential equation, y″ + p(x)y′ + q(x)y = r(x) (10.34)
With the boundary conditions, y(a) = α and y(b) = β (10.35)
The interval [a, b] is divided into N equal parts each of width h, so that h =
(b – a)/N, and the end points are x0 = a and xN = b. The interior mesh points xn at
which the solution values y(xn) are to be determined are,
xn = x0 + nh, n = 1, 2, ..., N – 1 (10.36)
The values of y at the mesh points are denoted by yn, given by,
yn = y(x0 + nh), n = 0, 1, 2, ..., N (10.37)
The following central difference approximations are usually used in the finite
difference method of solving a boundary value problem,
y′(xn) ≈ (yn+1 – yn–1)/(2h) (10.38)
y″(xn) ≈ (yn+1 – 2yn + yn–1)/h² (10.39)
Substituting these in the differential equation, we have
2(yn+1 – 2yn + yn–1) + pnh(yn+1 – yn–1) + 2h²qnyn = 2rnh²,
Where pn = p(xn), qn = q(xn), rn = r(xn) (10.40)
Rewriting the equation by regrouping we get,
(2 – hpn)yn–1 + (–4 + 2h²qn)yn + (2 + hpn)yn+1 = 2rnh² (10.41)
This equation is to be considered at each of the interior points, i.e., it is true for
n = 1, 2, ..., N – 1.
The boundary conditions of the problem are given by,
y0 = α, yN = β (10.42)
Introducing these conditions in the relevant equations and arranging them, we
have the following system of linear equations in (N – 1) unknowns y1, y2, ..., yN–1.

(–4 + 2h²q1)y1 + (2 + hp1)y2 = 2r1h² – (2 – hp1)α
(2 – hp2)y1 + (–4 + 2h²q2)y2 + (2 + hp2)y3 = 2r2h²
(2 – hp3)y2 + (–4 + 2h²q3)y3 + (2 + hp3)y4 = 2r3h²
... ... ... ... ...
(2 – hpN–2)yN–3 + (–4 + 2h²qN–2)yN–2 + (2 + hpN–2)yN–1 = 2rN–2h²
(2 – hpN–1)yN–2 + (–4 + 2h²qN–1)yN–1 = 2rN–1h² – (2 + hpN–1)β (10.43)

The above system of N – 1 equations can be expressed in matrix notation in the
form
Ay = b (10.44)
Where the coefficient matrix A is a tri-diagonal one, of the form

A = | B1  C1  0   0   ...  0     0     0    |
    | A2  B2  C2  0   ...  0     0     0    |
    | 0   A3  B3  C3  ...  0     0     0    |
    | ... ... ... ...      ...   ...   ...  |
    | 0   0   0   0   ...  AN–2  BN–2  CN–2 |
    | 0   0   0   0   ...  0     AN–1  BN–1 |     (10.45)

Where Bi = –4 + 2h²qi, i = 1, 2, ..., N – 1
Ci = 2 + hpi, i = 1, 2, ..., N – 2 (10.46)
Ai = 2 – hpi, i = 2, 3, ..., N – 1

The vector b has components,
b1 = 2r1h² – (2 – hp1)α
bi = 2rih², for i = 2, 3, ..., N – 2 (10.47)
bN–1 = 2rN–1h² – (2 + hpN–1)β

The system of linear equations can be directly solved using suitable methods.
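The tridiagonal system (10.44) can be assembled from Equations (10.41)–(10.47) and solved with forward elimination and back substitution (the Thomas algorithm). A sketch under illustrative assumptions: the test problem y″ = y with y(0) = 0, y(1) = sinh 1 (so p = 0, q = –1, r = 0 in the notation above, exact solution y = sinh x); all names are my own:

```python
import math

def solve_bvp(p, q, r, a, b, alpha, beta, N):
    """Finite-difference solution of y'' + p(x)y' + q(x)y = r(x),
    y(a)=alpha, y(b)=beta, following Equations (10.41)-(10.47)."""
    h = (b - a) / N
    xs = [a + n * h for n in range(N + 1)]
    # Tridiagonal rows A_i y_{i-1} + B_i y_i + C_i y_{i+1} = d_i for the unknowns y1..y_{N-1}
    A = [2 - h * p(xs[i]) for i in range(1, N)]
    B = [-4 + 2 * h * h * q(xs[i]) for i in range(1, N)]
    C = [2 + h * p(xs[i]) for i in range(1, N)]
    d = [2 * r(xs[i]) * h * h for i in range(1, N)]
    d[0] -= A[0] * alpha          # move known boundary values to the right side
    d[-1] -= C[-1] * beta
    # Thomas algorithm: forward elimination, then back substitution
    for i in range(1, N - 1):
        m = A[i] / B[i - 1]
        B[i] -= m * C[i - 1]
        d[i] -= m * d[i - 1]
    y = [0.0] * (N - 1)
    y[-1] = d[-1] / B[-1]
    for i in range(N - 3, -1, -1):
        y[i] = (d[i] - C[i] * y[i + 1]) / B[i]
    return xs, [alpha] + y + [beta]

# Test problem: y'' = y, y(0) = 0, y(1) = sinh(1); exact y = sinh(x)
xs, ys = solve_bvp(lambda x: 0.0, lambda x: -1.0, lambda x: 0.0,
                   0.0, 1.0, 0.0, math.sinh(1.0), 20)
err = max(abs(yi - math.sinh(xi)) for xi, yi in zip(xs, ys))
print(err)   # O(h²) accuracy: well below 1e-3 for N = 20
```

Halving h should reduce the maximum error by roughly a factor of four, reflecting the second-order accuracy of the central difference approximations (10.38) and (10.39).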
Example 10.1: Compute the values of y(1.1) and y′(1.1) by solving the following initial
value problem using the Runge-Kutta method of order 4:
y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = –0.44
Solution: We first rewrite the initial value problem in the form of a pair of first order
equations,
y′ = z, z′ = –z/x – y
With y(1) = 0.77 and z(1) = –0.44.
We now employ the Runge-Kutta method of order 4 with h = 0.1:
y(1.1) = y(1) + (1/6)(k1 + 2k2 + 2k3 + k4)
y′(1.1) = z(1.1) = z(1) + (1/6)(l1 + 2l2 + 2l3 + l4)
k1 = 0.1 × (–0.44) = –0.044
l1 = 0.1 × (0.44/1 – 0.77) = –0.033
k2 = 0.1 × (–0.44 – 0.033/2) = –0.04565
l2 = 0.1 × (0.4565/1.05 – 0.748) = –0.031323809
k3 = 0.1 × (–0.44 – 0.031323809/2) = –0.045566190
l3 = 0.1 × (0.455661904/1.05 – 0.747175) = –0.031321128
k4 = 0.1 × (–0.44 – 0.031321128) = –0.047132112
l4 = 0.1 × (0.471321128/1.1 – 0.724433810) = –0.029596005
∴ y(1.1) = 0.77 + (1/6)[–0.044 + 2(–0.04565) + 2(–0.045566190) + (–0.047132112)]
= 0.77 – (1/6)(0.273564492) = 0.724406
y′(1.1) = –0.44 + (1/6)[–0.033 + 2(–0.031323809) + 2(–0.031321128) + (–0.029596005)]
= –0.44 – (1/6)(0.187885879) = –0.471314
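The arithmetic above is easy to mechanize. A sketch of one classical RK4 step for the pair y′ = z, z′ = –z/x – y, with the step size and initial data of Example 10.1 (the function names are my own):

```python
def rk4_step(f, g, x, y, z, h):
    """One classical RK4 step for the system y' = f(x,y,z), z' = g(x,y,z)."""
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2, l2 = (h * f(x + h/2, y + k1/2, z + l1/2),
              h * g(x + h/2, y + k1/2, z + l1/2))
    k3, l3 = (h * f(x + h/2, y + k2/2, z + l2/2),
              h * g(x + h/2, y + k2/2, z + l2/2))
    k4, l4 = h * f(x + h, y + k3, z + l3), h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6)

f = lambda x, y, z: z                 # y' = z
g = lambda x, y, z: -z / x - y        # z' = -y'/x - y
y11, z11 = rk4_step(f, g, 1.0, 0.77, -0.44, 0.1)
print(y11, z11)   # ≈ 0.724406 and ≈ -0.471314
```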
Example 10.2: Compute the solution of the following initial value problem
for x = 0.2, using the Taylor series method of order 4.
d²y/dx² = y + x·(dy/dx), y(0) = 1, y′(0) = 0.
Solution: Given y″ = y + xy′, we put z = y′, so that
z′ = y + xz, y′ = z and y(0) = 1, z(0) = 0.
We solve for y and z by the Taylor series method of order 4. For this we first
compute y″(0), y‴(0), y⁗(0), ...
We have, y″(0) = z′(0) = y(0) + 0 × z(0) = 1
y‴(0) = z″(0) = y′(0) + z(0) + 0 × z′(0) = 0
y⁗(0) = z‴(0) = y″(0) + 2z′(0) + 0 × z″(0) = 3
z⁗(0) = y‴(0) + 3z″(0) + 0 × z‴(0) = 0
By the Taylor series of order 4, we have

y(0 + x) = y(0) + xy′(0) + (x²/2!)y″(0) + (x³/3!)y‴(0) + (x⁴/4!)y⁗(0)

Or, y(x) = 1 + x²/2! + (x⁴/4!) × 3

∴ y(0.2) = 1 + (0.2)²/2 + (0.2)⁴/8 = 1.0202

Similarly, y′(0.2) = z(0.2) = 0.2 + ((0.2)³/3!) × 3 = 0.204
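The truncated series y(x) ≈ 1 + x²/2 + x⁴/8 can be checked against a fine-step numerical solution. A sketch (using RK4 with many small steps as the reference is my choice, not the text's; all names are my own):

```python
def taylor_y(x):
    # Fourth-order Taylor polynomial from Example 10.2: y ≈ 1 + x²/2! + 3x⁴/4!
    return 1 + x**2 / 2 + x**4 / 8

def reference_y(x_end, n=1000):
    """Integrate y'' = y + x y', y(0)=1, y'(0)=0 with tiny RK4 steps."""
    h, x, y, z = x_end / n, 0.0, 1.0, 0.0
    f = lambda x, y, z: z
    g = lambda x, y, z: y + x * z
    for _ in range(n):
        k1, l1 = h*f(x, y, z), h*g(x, y, z)
        k2, l2 = h*f(x+h/2, y+k1/2, z+l1/2), h*g(x+h/2, y+k1/2, z+l1/2)
        k3, l3 = h*f(x+h/2, y+k2/2, z+l2/2), h*g(x+h/2, y+k2/2, z+l2/2)
        k4, l4 = h*f(x+h, y+k3, z+l3), h*g(x+h, y+k3, z+l3)
        y, z = y + (k1+2*k2+2*k3+k4)/6, z + (l1+2*l2+2*l3+l4)/6
        x += h
    return y

print(round(taylor_y(0.2), 4), round(reference_y(0.2), 4))   # both 1.0202
```

The agreement to four decimal places reflects that the first neglected Taylor term is of order x⁶, which is negligible at x = 0.2.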
Example 10.3: Compute the solution of the following initial value problem for x =
0.2 by the fourth order Runge-Kutta method: d²y/dx² = xy, y(0) = 1, y′(0) = 1.
Solution: Given y″ = xy, we put y′ = z = f(x, y, z) and z′ = xy = g(x, y, z), which
gives a simultaneous first order system with y(0) = 1, z(0) = 1.
We use the Runge-Kutta 4th order formulae, with h = 0.2, to compute y(0.2) and
y′(0.2), as given below.
k1 = h f(x0, y0, z0) = 0.2 × 1 = 0.2
l1 = h g(x0, y0, z0) = 0.2 × 0 = 0
k2 = h f(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × (1 + 0) = 0.2
l2 = h g(x0 + h/2, y0 + k1/2, z0 + l1/2) = 0.2 × 0.1 × 1.1 = 0.022
k3 = h f(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 1.011 = 0.2022
l3 = h g(x0 + h/2, y0 + k2/2, z0 + l2/2) = 0.2 × 0.1 × 1.1 = 0.022
k4 = h f(x0 + h, y0 + k3, z0 + l3) = 0.2 × 1.022 = 0.2044
l4 = h g(x0 + h, y0 + k3, z0 + l3) = 0.2 × 0.2 × 1.2022 = 0.048088
y(0.2) = 1 + (1/6)(0.2 + 2(0.2 + 0.2022) + 0.2044) = 1.2015
y′(0.2) = 1 + (1/6)(0 + 2(0.022 + 0.022) + 0.048088) = 1.02268

Check Your Progress
1. What is forward interpolation in numerical differentiation?
2. Define the backward interpolation.
3. Explain the relation between difference operators.
4. Illustrate the central difference operator.
5. Elaborate on the methods based on finite differences.

10.4 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. It is expressed as a finite difference identity from which an interpolated
value in between tabulated points is obtained, using the first value y0 with powers
of the forward difference. The forward difference is shown using Δ, which is known
as the forward difference operator.
2. Just as there are forward difference operators for forward interpolation, there
is a backward difference operator for backward difference interpolation.
This is also credited to Newton.
3. E = 1 + Δ ⇒ Δ = E – 1
∇(Ey0) = ∇(y1) = y1 – y0
Thus, (1 – ∇)Ey0 = Ey0 – ∇(Ey0) = y1 – ∇(y1) = y1 – (y1 – y0) = y0
Or, (1 – ∇)(1 + Δ)y0 = y0, which is true for all the terms of y, i.e.,
y0, y1, y2, ..., yn.
Thus, (1 – ∇)(1 + Δ) = 1 and (1 – ∇)⁻¹ = (1 + Δ) = E
And also, Δ = (1 – ∇)⁻¹ – 1.
4. The central difference operator is defined as δf(x) = f(x + h/2) – f(x – h/2).
This is known as the first central difference. Higher differences can
also be given as:
δ²f(x) = f(x + h) – 2f(x) + f(x – h)
And δⁿf(x) = δⁿ⁻¹f(x + h/2) – δⁿ⁻¹f(x – h/2)
5. In this method of solving a boundary value problem, the derivatives appearing
in the differential equation and, if necessary, in the boundary conditions are
replaced by appropriate difference approximations.

10.5 SUMMARY

• Sir Isaac Newton proposed interpolation formulae for forward and
backward interpolation. These are used for numerical differentiation. Such
tools are widely used in the fields of engineering, statistics and other branches
of mathematics. Computer science also uses these concepts to find nearly
accurate solutions for differentiation.
• Forward interpolation is expressed as a finite difference identity from which
an interpolated value in between tabulated points is obtained, using the first
value y0 with powers of the forward difference. The forward difference is
shown using Δ, which is known as the forward difference operator.
• Just as there are forward difference operators for forward interpolation, there
is a backward difference operator for backward difference interpolation.
This is also credited to Newton.
• The central difference operator is defined as δf(x) = f(x + h/2) – f(x – h/2).
This is known as the first central difference. Higher differences can
also be given as:
δ²f(x) = f(x + h) – 2f(x) + f(x – h)
And δⁿf(x) = δⁿ⁻¹f(x + h/2) – δⁿ⁻¹f(x – h/2)
• In the finite difference method of solving a boundary value problem, the
derivatives appearing in the differential equation and, if necessary, in the
boundary conditions are replaced by appropriate difference approximations.

10.6 KEY WORDS

• Forward interpolation: It is expressed as a finite difference identity from
which an interpolated value in between tabulated points is obtained, using
the first value y0 with powers of the forward difference.
• Backward interpolation: Just as there are forward difference operators for
forward interpolation, there is a backward difference operator for backward
difference interpolation.
• Central difference operator: The central difference operator is defined as
δf(x) = f(x + h/2) – f(x – h/2). This is known as the first central difference
operator.
• Finite difference methods: In this method of solving a boundary value
problem, the derivatives appearing in the differential equation and boundary
conditions, if necessary, are replaced by appropriate difference approximations.

10.7 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions


1. Define the forward interpolation.
2. Explain the backward interpolation.
3. Interpret the relation between difference operators.
4. Elaborate on the central difference operator.
5. Analyse the methods based on finite differences.
Long-Answer Questions
1. Discuss briefly the forward interpolations of numerical differentiation.
2. Explain the backward interpolation.
3. Briefly define the central difference operator.
4. Elaborate on the methods based on finite differences.

10.8 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical
Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan. 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

UNIT 11 NUMERICAL INTEGRATION
Structure
11.0 Introduction
11.1 Objectives
11.2 Numerical Integration
11.3 Trapezoidal Rule
11.4 Simpson’s 1/3 Rule
11.5 Simpson’s 3/8 Rule
11.6 Weddle’s Rule
11.7 Cotes Method
11.8 Answers to Check Your Progress Questions
11.9 Summary
11.10 Key Words
11.11 Self Assessment Questions and Exercises
11.12 Further Readings

11.0 INTRODUCTION

In numerical analysis, numerical integration comprises a broad family of algorithms


for calculating the numerical value of a definite integral, and by extension, the term
is also sometimes used to describe the numerical solution of differential equations.
The term numerical quadrature (often abbreviated to quadrature) is more or less a
synonym for numerical integration, especially as applied to one-dimensional
integrals. Some authors refer to numerical integration over more than one dimension
as cubature; others take quadrature to include higher-dimensional integration. The
term “Numerical Integration” first appears in 1915 in the publication A Course in
Interpolation and Numerical Integration for the Mathematical Laboratory by David
Gibb.
Numerical integration methods can generally be described as combining
evaluations of the integrand to get an approximation to the integral. The integrand
is evaluated at a finite set of points called integration points and a weighted sum of
these values is used to approximate the integral. The integration points and weights
depend on the specific method used and the accuracy required from the
approximation.
In numerical analysis, the trapezoidal rule (also known as the trapezoid rule
or trapezium rule) is a technique for approximating the definite integral. The
trapezoidal rule may be viewed as the result obtained by averaging the left and
right Riemann sums, and is sometimes defined this way.

In numerical integration, Simpson’s rules are several approximations for
definite integrals, named after Thomas Simpson (1710–1761). The most basic of
these rules is called Simpson’s 1/3 rule, or just Simpson’s rule. Simpson’s 3/8 rule,
also called Simpson’s second rule, requires one more function evaluation inside
the integration range, and is exact if f is a polynomial up to cubic degree. Simpson’s
1/3 and 3/8 rules are two special cases of the closed Newton–Cotes formulas.
Weddle’s rule is a method of numerical integration derived from the Newton–Cotes
formula with n = 6.
In this unit, you will study about the numerical integration, trapezoidal rule,
Simpson’s 1/3 rule, Simpson’s 3/8 rule, Weddle’s rule, and Cotes method.

11.1 OBJECTIVES

After going through this unit, you will be able to:


• Analyse numerical integration
• Explain the trapezoidal rule
• Define Simpson’s 1/3 rule
• Elaborate on Simpson’s 3/8 rule
• Understand Weddle’s rule
• Comprehend the Cotes method

11.2 NUMERICAL INTEGRATION


The evaluation of a definite integral cannot be carried out when the integrand f (x)
is not integrable, as well as when the function is not explicitly known but only the
function values are known at a finite number of values of x. However, the value of
the integral can be determined numerically by applying numerical methods. There
are two types of numerical methods for evaluating a definite integral based on the
following formula.
∫_{a}^{b} f(x) dx ...(11.1)
They are termed as Newton-Cotes quadrature and Gaussian quadrature. We


first confine our attention to Newton-Cotes quadrature which is based on integrating
polynomial interpolation formulae. This quadrature requires a table of values of the
integrand at equally spaced values of the independent variable x.

11.3 TRAPEZOIDAL RULE

For evaluating the integral ∫_{x0}^{xn} f(x) dx, we have to sum the integrals for each of the
subintervals (x0, x1), (x1, x2), ..., (xn–1, xn). Thus,

∫_{x0}^{xn} f(x) dx = (h/2)[(f0 + f1) + (f1 + f2) + ... + (fn–1 + fn)]

Or ∫_{x0}^{xn} f(x) dx = (h/2)[f0 + 2(f1 + f2 + ... + fn–1) + fn] ...(11.2)

This is known as the trapezoidal rule of numerical integration.


The error in the trapezoidal rule is the sum of the errors in the individual
subintervals,

E_T = –(h³/12)[f″(ξ1) + f″(ξ2) + ... + f″(ξn)]

Where xi–1 < ξi < xi.
Thus, we can write

E_T = –((b – a)h²/12) f″(ξ) ...(11.3)

Where a < ξ < b, i.e., the error in the trapezoidal rule is of order h².
Algorithm: Evaluation of ∫_{a}^{b} f(x) dx by trapezoidal rule.
Step 1: Define function f(x)
Step 2: Initialize a, b, n
Step 3: Compute h = (b – a)/n
Step 4: Set x = a, S = 0
Step 5: Compute x = x + h
Step 6: Check if x ≥ b, then go to Step 8, else go to the next step
Step 7: Compute S = S + f(x) and go to Step 5
Step 8: Compute I = h(S + (f(a) + f(b))/2)
Step 9: Output I, n
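The steps above translate directly into code. A minimal sketch (function name and test integrand are my own):

```python
def trapezoidal(f, a, b, n):
    """Composite trapezoidal rule, Equation (11.2), with n sub-intervals."""
    h = (b - a) / n
    s = sum(f(a + i * h) for i in range(1, n))   # interior ordinates f1..f(n-1)
    return h * (s + (f(a) + f(b)) / 2)

print(trapezoidal(lambda x: x**4, 0.0, 2.0, 4))   # 7.0625
```

For f(x) = x⁴ on [0, 2] the exact value is 32/5 = 6.4, so the rule overestimates here, consistent with the f″-dependent error term (11.3) for a convex integrand.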

NOTES 11.4 SIMPSON’S 1/3 RULE


Taking n = 2 in the Newton-Cotes formula, we get Simpson’s one-third formula of
numerical integration given by,
x

x0

(11.4)
x

x0

This is known as Simpson’s one-third formula of numerical integration.


The error in Simpson’s one-third formula is defined as,
x2
h
ES ³ f ( x) dx  3 ( f
x0
0  4 f1  f 2 )

Assuming F c( x) f ( x), we obtain:


h
ES F ( x 2 )  F ( x0 )  ( f  4 f1  f 2 )
3 0
Expanding F(x2) = F(x0+2h), f1 = f (x0+h) and f2 = f (x0+2h) in powers of h,
we have:

..]
5

(11.5)
Geometrical interpretation of Simpson’s one-third formula is that the integral
represented by the area under the curve is approximated by the area under the
parabola through the points (x0, f0), (x1, f1) and (x2, f2) shown in Figure 11.1.

Self-Instructional
182 Material
Numerical Integration

NOTES

Fig. 11.1 Simpson’s One-Third Integration

Simpson’s One-Third Rule


On dividing the interval [a, b] into 2m sub-intervals by points x0 = a, x1 = a + h, x2 =
a + 2h, ..., x2m = a+2mh, where b = x2m and h = (b–a)/(2m), and using Simpson’s
one-third formula in each pair of consecutive sub-intervals, we have
b x2 x4 x2 m

³ f ( x)dx ³ f ( x)dx  ³ f ( x)dx  ...  ³ f ( x)dx


a x0 x2 x 2 m 2

h
( f 0  4 f1  f 2 )  ( f 2  4 f 3  f 4 )  ( f 4  4 f 5  f 6 )  ...  ( f 2 m  2  4 f 2 m 1  f 2 m )
3
b
h
³ f ( x)dx
a
f  4 ( f1  f 3  f 5  ...  f 2 m 1 )  2 ( f 2  f 4  f 6  ...  f 2 m  2 )  f 2 m .
3 0
(11.6)
This is known as Simpson’s one-third rule of numerical integration.
The error in this formula is given by the sum of the errors in each pair of intervals
as,
5

Which can be rewritten as,


5

4
(11.7)

Algorithm: Evaluation of ∫_{a}^{b} f(x) dx by Simpson’s one-third rule.
Step 1: Define f(x)
Step 2: Input a, b, n (even)
Step 3: Compute h = (b – a)/n
Step 4: Compute S1 = f(a) + f(b)
Step 5: Set S2 = 0, S4 = 0, x = a + 2h
Step 6: Check if x ≥ b, then go to Step 9, else go to the next step
Step 7: Compute S2 = S2 + f(x)
Step 8: Compute x = x + 2h and go to Step 6
Step 9: Set x = a + h
Step 10: Compute S4 = S4 + f(x)
Step 11: Compute x = x + 2h
Step 12: Check if x < b, then go to Step 10, else go to the next step
Step 13: Compute I = (S1 + 4S4 + 2S2)h/3
Step 14: Write I, n
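The two accumulation loops above can be written compactly. A sketch (function name and test integrand are my own):

```python
def simpson(f, a, b, n):
    """Composite Simpson's one-third rule, Equation (11.6); n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s4 = sum(f(a + i * h) for i in range(1, n, 2))   # odd-indexed ordinates
    s2 = sum(f(a + i * h) for i in range(2, n, 2))   # even-indexed interior ordinates
    return h * (f(a) + f(b) + 4 * s4 + 2 * s2) / 3

print(simpson(lambda x: 4 * x - 3 * x**2, 0.0, 1.0, 10))   # exact value: 1.0
```

Since the error term (11.7) involves the fourth derivative, the rule is exact for any polynomial of degree three or less, which is why the quadratic test integrand is integrated exactly.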

11.5 SIMPSON’S 3/8 RULE

Taking n = 3, the Newton-Cotes formula can be written as,

∫_{x0}^{x3} f(x) dx = h ∫_{0}^{3} [f0 + uΔf0 + (u(u – 1)/2!)Δ²f0 + (u(u – 1)(u – 2)/3!)Δ³f0] du

= h[uf0 + (u²/2)Δf0 + (1/2)(u³/3 – u²/2)Δ²f0 + (1/6)(u⁴/4 – u³ + u²)Δ³f0] from 0 to 3

= h[3y0 + (9/2)Δy0 + (9/4)Δ²y0 + (3/8)Δ³y0]

= h[3y0 + (9/2)(y1 – y0) + (9/4)(y2 – 2y1 + y0) + (3/8)(y3 – 3y2 + 3y1 – y0)] ...(11.8)

∫_{x0}^{x3} f(x) dx = (3h/8)(y0 + 3y1 + 3y2 + y3)

The truncation error in this formula is –(3h⁵/80) f⁗(ξ), x0 < ξ < x3.
This formula is known as Simpson’s three-eighth formula of numerical
integration.
As in the case of Simpson’s one-third rule, we can write Simpson’s three-eighth
rule of numerical integration as,

∫_{a}^{b} f(x) dx = (3h/8)[y0 + 3y1 + 3y2 + 2y3 + 3y4 + 3y5 + 2y6 + ... + 2y3m–3 + 3y3m–2 + 3y3m–1 + y3m]
...(11.9)

Where h = (b – a)/(3m); for m = 1, 2, ...
i.e., the interval (b – a) is divided into 3m sub-intervals.
The rule in Equation (11.9) can be rewritten as,

∫_{a}^{b} f(x) dx = (3h/8)[y0 + y3m + 3(y1 + y2 + y4 + y5 + ... + y3m–2 + y3m–1) + 2(y3 + y6 + ... + y3m–3)]
...(11.10)

The truncation error in Simpson’s three-eighth rule is –((b – a)h⁴/80) f⁗(ξ), a < ξ < b.
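Rule (11.10) amounts to a repeating weight pattern 1, 3, 3, 2, 3, 3, 2, ..., 3, 3, 1. A sketch (function name and test integrand are my own):

```python
def simpson_38(f, a, b, n):
    """Composite Simpson's three-eighth rule, Equation (11.10);
    n must be a multiple of 3."""
    if n % 3:
        raise ValueError("n must be a multiple of 3")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # interior ordinates: weight 2 at panel junctions (i divisible by 3), else 3
        total += (2 if i % 3 == 0 else 3) * f(a + i * h)
    return 3 * h * total / 8

print(simpson_38(lambda x: x**3, 0.0, 1.0, 6))   # exact for cubics: 0.25
```

Like the one-third rule, the error involves f⁗, so cubics are integrated exactly.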

11.6 WEDDLE’S RULE

In the Newton-Cotes formula with n = 6, some minor modifications give Weddle’s
formula. The Newton-Cotes formula with n = 6 gives

∫_{x0}^{x6} y dx = h[6y0 + 18Δy0 + 27Δ²y0 + 24Δ³y0 + (123/10)Δ⁴y0 + (33/10)Δ⁵y0 + (41/140)Δ⁶y0]

This formula takes a very simple form if the last term (41/140)Δ⁶y0 is replaced by
(42/140)Δ⁶y0 = (3/10)Δ⁶y0. Then the error in the formula will have an additional term
(1/140)Δ⁶y0. The above formula then becomes,

∫_{x0}^{x6} y dx = h[6y0 + 18Δy0 + 27Δ²y0 + 24Δ³y0 + (123/10)Δ⁴y0 + (33/10)Δ⁵y0 + (3/10)Δ⁶y0]

∴ ∫_{x0}^{x6} y dx = (3h/10)[y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + y6] ...(11.11)

On replacing the differences in terms of the yi’s, this formula is known as Weddle’s
formula.

The error in Weddle’s formula is –(h⁷/140) y⁽⁶⁾(ξ), x0 < ξ < x6. ...(11.12)

Weddle’s rule is a composite Weddle’s formula, used when the number of subintervals
is a multiple of 6. One can use Weddle’s rule of numerical integration by sub-
dividing the interval (b – a) into 6m sub-intervals, m being a positive
integer. Weddle’s rule is,

∫_{a}^{b} f(x) dx = (3h/10)[y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + 2y6 + 5y7 + y8 + 6y9 + y10 + 5y11 + ...
+ 2y6m–6 + 5y6m–5 + y6m–4 + 6y6m–3 + y6m–2 + 5y6m–1 + y6m] ...(11.13)

Where b – a = 6mh

i.e., ∫_{a}^{b} f(x) dx = (3h/10)[y0 + y6m + 5(y1 + y5 + y7 + y11 + ... + y6m–5 + y6m–1) + (y2 + y4 + y8 + y10 + ...
+ y6m–4 + y6m–2) + 6(y3 + y9 + ... + y6m–3) + 2(y6 + y12 + ... + y6m–6)]

The error in Weddle’s rule is given by –((b – a)h⁶/840) y⁽⁶⁾(ξ), a < ξ < b. ...(11.14)
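Rule (11.13) uses the repeating weight pattern 1, 5, 1, 6, 1, 5 with weight 2 at interior panel junctions. A sketch (function name and test integrand are my own):

```python
import math

def weddle(f, a, b, n):
    """Composite Weddle's rule, Equation (11.13); n must be a multiple of 6."""
    if n % 6:
        raise ValueError("n must be a multiple of 6")
    h = (b - a) / n
    w = [1, 5, 1, 6, 1, 5]            # repeating weight pattern within each panel
    total = 0.0
    for i in range(n + 1):
        weight = w[i % 6]
        if i % 6 == 0 and 0 < i < n:
            weight = 2                # interior panel junctions get 1 + 1 = 2
        total += weight * f(a + i * h)
    return 3 * h * total / 10

print(weddle(math.sin, 0.0, math.pi, 12))   # ∫ sin x dx over [0, π] = 2
```

With only two panels the result already agrees with the exact value 2 to several decimal places, reflecting the h⁶ error term (11.14).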


Example 11.1: Compute the approximate value of ∫_{0}^{2} x⁴ dx by taking four sub-intervals
and compare it with the exact value.
Solution: For four sub-intervals of [0, 2], we have h = 2/4 = 0.5. We tabulate f(x) = x⁴.

x      0    0.5      1.0    1.5      2.0
f(x)   0    0.0625   1.0    5.0625   16.0

By trapezoidal rule, we get
∫_{0}^{2} x⁴ dx ≈ (0.5/2)[0 + 2 × (0.0625 + 1.0 + 5.0625) + 16.0]
= (1/4)[12.2500 + 16.0] = 28.2500/4 = 7.0625
By Simpson’s one-third rule, we get
∫_{0}^{2} x⁴ dx ≈ (0.5/3)[0 + 4 × (0.0625 + 5.0625) + 2 × 1.0 + 16.0]
= (1/6)[4 × 5.125 + 18.0] = 38.5000/6 = 6.4167
Exact value = 2⁵/5 = 32/5 = 6.4
Error in the result by trapezoidal rule = 6.4 – 7.0625 = –0.6625
Error in the result by Simpson’s one-third rule = 6.4 – 6.4167 = –0.0167
Example 11.2: Evaluate the following integral:
∫_{0}^{1} (4x – 3x²) dx by taking n = 10 and using the following rules:
(i) Trapezoidal rule and (ii) Simpson’s one-third rule. Also compare them with
the exact value and find the error in each case.
Solution: We tabulate f(x) = 4x – 3x², for x = 0, 0.1, 0.2, ..., 1.0.

x      0.0   0.1    0.2    0.3    0.4    0.5    0.6    0.7    0.8    0.9    1.0
f(x)   0.0   0.37   0.68   0.93   1.12   1.25   1.32   1.33   1.28   1.17   1.0

(i) Using the trapezoidal rule, we have
∫_{0}^{1} (4x – 3x²) dx = (0.1/2)[0 + 2(0.37 + 0.68 + 0.93 + 1.12 + 1.25 + 1.32 + 1.33 + 1.28 + 1.17) + 1.0]
= (0.1/2) × (18.90 + 1.0) = 0.995
(ii) Using Simpson’s one-third rule, we have
∫_{0}^{1} (4x – 3x²) dx = (0.1/3)[0 + 4(0.37 + 0.93 + 1.25 + 1.33 + 1.17) + 2(0.68 + 1.12 + 1.32 + 1.28) + 1.0]
= (0.1/3)[4 × 5.05 + 2 × 4.40 + 1.0]
= (0.1/3) × [30.0] = 1.00
(iii) Exact value = 1.0
The error in the result by the trapezoidal rule is 0.005 and there is no error in the
result by Simpson’s one-third rule.
Example 11.3: Evaluate ∫_{0}^{1} e^(–x²) dx, using (i) Simpson’s one-third rule with 10 sub-
intervals and (ii) Trapezoidal rule.
Solution: We tabulate the values of e^(–x²) for the 11 points x = 0, 0.1, 0.2, 0.3, ..., 1.0 as
given below.

x      e^(–x²)
0.0    1.000000
0.1    0.990050
0.2    0.960789
0.3    0.913931
0.4    0.852144
0.5    0.778801
0.6    0.697676
0.7    0.612626
0.8    0.527292
0.9    0.444854
1.0    0.367879

Here f0 + f10 = 1.367879, the sum of the odd-indexed values is
f1 + f3 + f5 + f7 + f9 = 3.740262, and the sum of the even-indexed interior values is
f2 + f4 + f6 + f8 = 3.037901.
Hence, by Simpson’s one-third rule we have,
∫_{0}^{1} e^(–x²) dx = (h/3)[f0 + f10 + 4(f1 + f3 + f5 + f7 + f9) + 2(f2 + f4 + f6 + f8)]
= (0.1/3)[1.367879 + 4 × 3.740262 + 2 × 3.037901]
= (0.1/3)[1.367879 + 14.961048 + 6.075802]
= 2.2404729/3 = 0.7468243 ≈ 0.746824
Using the trapezoidal rule, we get
∫_{0}^{1} e^(–x²) dx = (h/2)[f0 + f10 + 2(f1 + f2 + ... + f9)]
= (0.1/2)[1.367879 + 2 × 6.778163]
= (0.1/2)[1.367879 + 13.556326]
= 0.7462103
Example 11.4: Compute the integral I = ∫_{0}^{4} (x³ – 2x² + 1) dx, using Simpson’s one-
third rule taking h = 1 and show that the computed value agrees with the exact
value. Give reasons for this.
Solution: The values of f(x) = x³ – 2x² + 1 are tabulated for x = 0, 1, 2, 3, 4 as

x      0   1   2   3    4
f(x)   1   0   1   10   33

The value of the integral by Simpson’s one-third rule is,
I = (1/3)[1 + 4 × (0 + 10) + 2 × 1 + 33] = 76/3
The exact value = [x⁴/4 – 2x³/3 + x] from 0 to 4 = 64 – 128/3 + 4 = 76/3
Thus, the computed value by Simpson’s one-third rule is equal to the exact
value. This is because the error in Simpson’s one-third rule contains the fourth order
derivative and so this rule gives the exact result when the integrand is a polynomial
Example 11.5: Compute ∫_{0.1}^{0.5} eˣ dx by (i) the trapezoidal rule and (ii) Simpson's one-third rule, and compare the results with the exact value, taking h = 0.1.
Solution: We tabulate the values of f(x) = eˣ for x = 0.1 to 0.5 with spacing h = 0.1.

x            0.1       0.2       0.3       0.4       0.5
f(x) = eˣ    1.1052    1.2214    1.3498    1.4918    1.6487
The value of the integral by the trapezoidal rule is,
I_T = (0.1/2)[1.1052 + 2(1.2214 + 1.3498 + 1.4918) + 1.6487]
= (0.1/2)[2.7539 + 2 × 4.0630] = 0.5439
The value computed by Simpson's one-third rule is,
I_S = (0.1/3)[1.1052 + 4(1.2214 + 1.4918) + 2 × 1.3498 + 1.6487]
= (0.1/3)[2.7539 + 4 × 2.7132 + 2.6996] = (0.1/3) × 16.3063 = 0.5435

Exact value = e^0.5 − e^0.1 = 1.6487 − 1.1052 = 0.5435
The trapezoidal rule gives the value of the integral with an error of −0.0004, but Simpson's one-third rule gives the exact value to the figures shown.
Example 11.6: Compute ∫₀¹ dx/(1 + x) using (i) the trapezoidal rule and (ii) Simpson's one-third rule, taking 10 sub-intervals. Hence, find log_e 2 and compare it with the exact value up to six decimal places.
Solution: We tabulate the values of f(x) = 1/(1 + x) for x = 0, 0.1, 0.2, ..., 1.0 as given below:

x      y      f(x) = 1/(1 + x)
0.0    y0     1.0000000
0.1    y1     0.9090909
0.2    y2     0.8333333
0.3    y3     0.7692307
0.4    y4     0.7142857
0.5    y5     0.6666667
0.6    y6     0.6250000
0.7    y7     0.5882352
0.8    y8     0.5555556
0.9    y9     0.5263157
1.0    y10    0.5000000
Sums:  y0 + y10 = 1.500000;  y1 + y3 + y5 + y7 + y9 = 3.4595391;  y2 + y4 + y6 + y8 = 2.7281746
(i) Using the trapezoidal rule, we have
∫₀¹ dx/(1 + x) ≈ (h/2)[f0 + f10 + 2(f1 + f2 + f3 + ... + f9)]
= (0.1/2)[1.500000 + 2 × (3.4595391 + 2.7281746)]
= (0.1/2)[1.500000 + 12.3754274] = 0.6937714
(ii) Using Simpson's one-third rule, we get
∫₀¹ dx/(1 + x) ≈ (h/3)[f0 + f10 + 4(f1 + f3 + ... + f9) + 2(f2 + f4 + ... + f8)]
= (0.1/3)[1.500000 + 4 × 3.4595391 + 2 × 2.7281746]
= (0.1/3)[1.5 + 13.838156 + 5.456349] = (0.1/3) × 20.794505 = 0.6931501
(iii) Exact value:
∫₀¹ dx/(1 + x) = log_e 2 = 0.6931472

The trapezoidal rule gives the value of the integral with an error 0.6931472 − 0.6937714 = −0.0006242, while the error in the value by Simpson's one-third rule is only −0.0000029.

Example 11.7: Compute by (i) Simpson’s rule and (ii) Weddle’s formula
taking six sub-intervals.

Solution: Sub-division of [0, ] into six sub-intervals will have


2

h D
For applying the integration rules we tabulate cos .

(i) The value of the integral by Simpson's one-third rule is given by,
I_S = (0.26179/3)[1 + 4 × (0.98281 + 0.84089 + 0.50874) + 2 × (0.93061 + 0.70711) + 0]
= (0.26179/3)[1 + 4 × 2.33244 + 2 × 1.63772]
= (0.26179/3) × 13.6052 = 1.18723
(ii) The value of the integral by Weddle's formula is,
I_W = (3 × 0.26179/10)[1 + 5 × 0.98281 + 0.93061 + 6 × 0.84089 + 0.70711 + 5 × 0.50874 + 0]
= 0.078537 × [1 + 4.91405 + 0.93061 + 5.04534 + 0.70711 + 2.54370]
= 0.078537 × 15.14081 = 1.18911

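Weddle's single-panel formula can likewise be checked in code. The sketch below assumes the integrand √(cos θ) inferred from the tabulated values above (function name is ours):

```python
import math

def weddle(f, a, b):
    # Weddle's formula over one panel of six equal sub-intervals.
    h = (b - a) / 6
    y = [f(a + i * h) for i in range(7)]
    return 3 * h / 10 * (y[0] + 5 * y[1] + y[2] + 6 * y[3] + y[4] + 5 * y[5] + y[6])

f = lambda t: math.sqrt(math.cos(t))
print(weddle(f, 0.0, math.pi / 2))  # ~ 1.1892 with unrounded h and ordinates
```

With unrounded values this gives about 1.18916, slightly closer than the Simpson estimate 1.18723 to the true value of roughly 1.198; the infinite slope of √(cos θ) at θ = π/2 limits the accuracy of both rules here.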
Example 11.8: Evaluate the given integral by Weddle's formula.
Solution: On dividing the interval of integration into six sub-intervals, the length of each sub-interval will be h = (b − a)/6. For computing the integral by Weddle's formula, we tabulate the values of f at the seven points of sub-division. The value of the integral by Weddle's formula is then given by,
I_W = (3h/10)(f0 + 5f1 + f2 + 6f3 + f4 + 5f5 + f6)

11.7 COTES METHOD

We start with Newton's forward difference interpolation formula, which uses a table of values of f(x) at equally spaced points in the interval [a, b]. Let the interval [a, b] be divided into n equal sub-intervals such that,
a = x0,  xi = x0 + ih for i = 1, 2, ..., n − 1,  xn = b    (11.15)
so that nh = b − a.
Newton's forward difference interpolation formula is,
f(x) ≈ φ(s) = f0 + s Δf0 + [s(s − 1)/2!] Δ²f0 + ...,  where s = (x − x0)/h    (11.16)
Replacing f(x) by φ(s), we get
∫_{x0}^{xn} f(x) dx ≈ h ∫₀ⁿ [f0 + s Δf0 + (s(s − 1)/2!) Δ²f0 + ...] ds
since when x = x0, s = 0; when x = xn, s = n; and dx = h ds.

Performing the integration on the RHS we have,
∫_{x0}^{xn} f(x) dx ≈ h[n f0 + (n²/2) Δf0 + (1/2)(n³/3 − n²/2) Δ²f0 + (1/6)(n⁴/4 − n³ + n²) Δ³f0 + (1/24)(n⁵/5 − 3n⁴/2 + 11n³/3 − 3n²) Δ⁴f0 + ...]    (11.17)
We can derive different integration formulae by taking particular values of n = 1, 2, 3, .... Again, on replacing the differences by function values, the Newton-Cotes formula can be expressed in terms of the function values at x0, x1, ..., xn as
∫_{x0}^{xn} f(x) dx ≈ h Σ_{k=0}^{n} c_k f(x_k)    (11.18)
The error in the Newton-Cotes formula, given by Equation (11.19), involves a higher order derivative of the integrand.

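The Cotes coefficients c_k of Equation (11.18) can be generated for any n by integrating the Lagrange basis polynomials exactly. The sketch below (function names ours) does this with rational arithmetic and recovers the trapezoidal, one-third and three-eighth weights:

```python
from fractions import Fraction

def mul_linear(p, j):
    # Multiply the polynomial p (coefficients low-to-high in s) by (s - j).
    q = [Fraction(0)] * (len(p) + 1)
    for i, c in enumerate(p):
        q[i + 1] += c
        q[i] -= j * c
    return q

def cotes_weights(n):
    # c_k such that the integral of f from x0 to xn equals h * sum_k c_k f(x_k)
    # exactly for every polynomial f of degree <= n.
    weights = []
    for k in range(n + 1):
        p = [Fraction(1)]              # numerator of the Lagrange basis l_k(s)
        denom = Fraction(1)
        for j in range(n + 1):
            if j != k:
                p = mul_linear(p, j)
                denom *= k - j
        # Integrate l_k(s) for s from 0 to n, term by term.
        integral = sum(c * Fraction(n) ** (i + 1) / (i + 1) for i, c in enumerate(p))
        weights.append(integral / denom)
    return weights

print(cotes_weights(1))  # weights 1/2, 1/2           -> trapezoidal rule
print(cotes_weights(2))  # weights 1/3, 4/3, 1/3      -> Simpson's one-third rule
print(cotes_weights(3))  # weights 3/8, 9/8, 9/8, 3/8 -> Simpson's three-eighth rule
```

Multiplying each weight list by h reproduces the rules derived from Equation (11.17) for n = 1, 2, 3.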
Check Your Progress


1. What do you understand by the numerical integration?
2. Explain the trapezoidal rule.
3. Analyse the Simpson's 1/3 rule.
4. Elaborate on the Simpson's 3/8 rule.
5. Define the Weddle's rule.
6. State the Cotes method.

11.8 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS
1. The evaluation of a definite integral cannot be carried out when the integrand
f (x) is not integrable, as well as when the function is not explicitly known
but only the function values are known at a finite number of values of x.
However, the value of the integral can be determined numerically by applying
numerical methods.
2. For evaluating the integral ∫_{x0}^{xn} f(x) dx, we have to sum the integrals for each of the sub-intervals (x0, x1), (x1, x2), ..., (xn−1, xn). Thus,
∫_{x0}^{xn} f(x) dx = (h/2)[(f0 + f1) + (f1 + f2) + ... + (fn−1 + fn)]
or ∫_{x0}^{xn} f(x) dx = (h/2)[f0 + 2(f1 + f2 + ... + fn−1) + fn]    (11.2)
This is known as the trapezoidal rule of numerical integration.

3. Taking n = 2 in the Newton-Cotes formula we get Simpson's one-third formula of numerical integration, given by
∫_{x0}^{x2} f(x) dx = (h/3)(f0 + 4f1 + f2)    (11.4)
Applied over successive pairs of sub-intervals, this gives the composite form
∫_{x0}^{xn} f(x) dx = (h/3)[f0 + 4(f1 + f3 + ...) + 2(f2 + f4 + ...) + fn], n even
This is known as Simpson's one-third formula of numerical integration.


4. Taking n = 3, the Newton-Cotes formula can be written as,
∫_{x0}^{x3} f(x) dx = h ∫₀³ [f0 + u Δf0 + (u(u − 1)/2!) Δ²f0 + (u(u − 1)(u − 2)/3!) Δ³f0] du
= h[3f0 + (9/2) Δf0 + (9/4) Δ²f0 + (3/8) Δ³f0]
= h[3y0 + (9/2)(y1 − y0) + (9/4)(y2 − 2y1 + y0) + (3/8)(y3 − 3y2 + 3y1 − y0)]
∴ ∫_{x0}^{x3} f(x) dx = (3h/8)(y0 + 3y1 + 3y2 + y3)
This is known as Simpson's three-eighth rule.

5. Weddle's rule is a composite Weddle's formula, used when the number of sub-intervals is a multiple of 6. One can use Weddle's rule of numerical integration by subdividing the interval (b − a) into 6m sub-intervals, m being a positive integer. Weddle's rule is,
∫_a^b f(x) dx = (3h/10)[y0 + 5y1 + y2 + 6y3 + y4 + 5y5 + 2y6 + 5y7 + y8 + 6y9 + y10 + 5y11 + ... + 2y_{6m−6} + 5y_{6m−5} + y_{6m−4} + 6y_{6m−3} + y_{6m−2} + 5y_{6m−1} + y_{6m}]
where b − a = 6mh.
6. We start with Newton's forward difference interpolation formula, which uses a table of values of f(x) at equally spaced points in the interval [a, b]. Let the interval [a, b] be divided into n equal sub-intervals such that,
a = x0,  xi = x0 + ih for i = 1, 2, ..., n − 1,  xn = b
so that nh = b − a.

11.9 SUMMARY

• The evaluation of a definite integral cannot be carried out when the integrand f(x) is not integrable, as well as when the function is not explicitly known but only the function values are known at a finite number of values of x. However, the value of the integral can be determined numerically by applying numerical methods.
• The geometrical interpretation of Simpson's one-third formula is that the integral, represented by the area under the curve, is approximated by the area under the parabola through the points (x0, f0), (x1, f1) and (x2, f2).
• The truncation error in this formula is of order h⁵, namely −(h⁵/90) f⁗(ξ) over each pair of sub-intervals.
• The truncation error in Simpson's three-eighth rule is −(3h⁵/80) f⁗(ξ).
• In the Newton-Cotes formula with n = 6, some minor modifications give Weddle's formula.

11.10 KEY WORDS

• Numerical integration: There are two types of numerical methods for evaluating a definite integral, namely the Newton-Cotes quadrature and Gaussian quadrature methods.
• Weddle's rule: Weddle's rule is a composite Weddle's formula, used when the number of sub-intervals is a multiple of 6.
• Cotes method: We start with Newton's forward difference interpolation formula, which uses a table of values of f(x) at equally spaced points in the interval [a, b].

11.11 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain the numerical integration.
2. State the trapezoidal rule.
3. Define the Simpson’s 1/3 rule. Numerical Integration

4. Elaborate on the Simpson’s 3/8 rule.


5. Analyse the Weddle’s rule.
6. Interpret the Cotes method.
Long-Answer Questions
1. Explain the numerical integration with the help of example.
2. Discuss briefly the trapezoidal rule.
3. Define in brief the Simpson’s 1/3 rule.
4. Analyse the Simpson’s 3/8 rule.
5. Elaborate on the Weddle’s rule.
6. Illustrate the Cotes method.

11.12 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan. 2020. Calculus of Finite Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert D. and Jerry B. Keiper. 1993. Elementary Numerical Computing with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis: An Algorithmic Approach. New York: McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition. Prentice Hall of India Pvt. Ltd.

BLOCK - IV
NUMERICAL SOLUTIONS OF ODE

UNIT 12 NUMERICAL SOLUTIONS OF ORDINARY DIFFERENTIAL EQUATIONS
Structure
12.0 Introduction
12.1 Objectives
12.2 Ordinary Differential Equations
12.3 Taylor’s Series Method
12.4 Picard’s Method of Successive Approximations
12.5 Euler’s Method
12.5.1 Modified Euler’s Method
12.5.2 Euler’s Method for a Pair of Differential Equations
12.6 Runge-Kutta Methods
12.7 Multistep Methods
12.8 Predictor-Correction Methods
12.8.1 Euler’s Predictor-Corrector Formula
12.8.2 Milne’s Predictor-Corrector Formula
12.9 Numerical Solution of Boundary Value Problems
12.9.1 Reduction to a Pair of Initial Value Problem
12.9.2 Finite Difference Method
12.10 Answers to Check Your Progress Questions
12.11 Summary
12.12 Key Words
12.13 Self Assessment Questions and Exercises
12.14 Further Readings

12.0 INTRODUCTION

In mathematics, an ordinary differential equation is a relation that contains functions


of only one independent variable and one or more of their derivatives with respect
to that variable. Ordinary differential equations are distinguished from partial
differential equations, which involve partial derivatives of functions of several
variables. Ordinary differential equations arise in many different contexts including
geometry, mechanics, astronomy and population modelling. The Picard–Lindelöf
theorem, Picard’s existence theorem or Cauchy–Lipschitz theorem is an important
theorem on existence and uniqueness of solutions to first-order equations with
given initial conditions. The Picard method is a way of approximating solutions of
ordinary differential equations. Originally, it was a way of proving the existence of
solutions. It is only by advanced symbolic computing that it has become a practical
way of approximating solutions. Euler’s method is a first-order numerical procedure
for solving ordinary differential equations with a given initial value. It is the most
basic kind of explicit method for numerical integration of ordinary differential
equations and is the simplest kind of Runge-Kutta method. Other methods like
multistep method, predictor-correction method, Euler’s method for a pair of
differential equations, Runge-Kutta method for a pair of equations and for a second
order differential equations are also very useful.
In this unit, you will study about the numerical solutions of ordinary differential equations: Taylor's series method, Picard's method, Euler's method, and Runge-Kutta methods.

12.1 OBJECTIVES

After going through this unit, you will be able to:


x Define Picard’s method of successive approximation
x Describe Euler’s method and Taylor series method
x Explain Runge-Kutta and multistep methods
x Understand predictor-corrector methods
x Find numerical solution of boundary value problems
x Define finite difference method

12.2 ORDINARY DIFFERENTIAL EQUATIONS

Even though there are many methods to find an analytical solution of ordinary
differential equations, for many differential equations solutions in closed form cannot
be obtained. There are many methods available for finding a numerical solution for
differential equations. We consider the solution of an initial value problem associated
with a first order differential equation given by,
dy/dx = f(x, y)    (12.1)
With y (x0) = y0 (12.2)
In general, the solution of the differential equation may not always exist. For the
existence of a unique solution of the differential Equation (12.1), the following
conditions, known as Lipshitz conditions must be satisfied,
(i) The function f(x, y) is defined and continuous in the strip

Self-Instructional
Material 197
(ii) There exists a constant L such that, for any x in (x0, b) and any two numbers y and y1,
|f(x, y) − f(x, y1)| ≤ L |y − y1|    (12.3)
The numerical solution of initial value problems consists of finding the approximate
numerical solution of y at successive steps x1, x2,..., xn of x. A number of good
methods are available for computing the numerical solution of differential equations.

12.3 TAYLOR’S SERIES METHOD


Consider the solution of the first order differential equation,
dy/dx = f(x, y) with y(x0) = y0    (12.4)
where f(x, y) is sufficiently differentiable with respect to x and y. The solution y(x) of the problem can be expanded about the point x0 by a Taylor series in the form,
y(x) = y(x0) + (x − x0) y′(x0) + [(x − x0)²/2!] y″(x0) + [(x − x0)³/3!] y‴(x0) + ...    (12.5)

The derivatives in the above expansion can be determined as follows,


y′(x0) = f(x0, y0)
y″(x0) = f_x(x0, y0) + f_y(x0, y0) y′(x0)
y‴(x0) = f_xx(x0, y0) + 2 f_xy(x0, y0) y′(x0) + f_yy(x0, y0){y′(x0)}² + f_y(x0, y0) y″(x0)
where a suffix x or y denotes a partial derivative with respect to x or y.


Thus, the value of y1 = y (x0+h), can be computed by taking the Taylor series
expansion shown above. Usually, because of difficulties in obtaining higher order
derivatives, commonly a fourth order method is used. The solution at x2 = x1+h, can
be found by evaluating the derivatives at (x1, y1) and using the expansion; otherwise,
writing x2 = x0+2h, we can use the same expansion. This process can be continued
for determining yn+1 with known values xn, yn.
Note: If we take k = 1, we get the Euler’s method, y1 = y0+h f (x0, y0).
Thus, Euler’s method is a particular case of Taylor series method.
Example 12.1: Form the Taylor series solution of the initial value problem,
dy/dx = xy + 1, y(0) = 1
up to five terms and hence compute y(0.1) and y(0.2), correct to four decimal places.
Solution: We have y′ = xy + 1, y(0) = 1.
Differentiating successively we get,
y″(x) = xy′ + y,  ∴ y″(0) = 1
y‴(x) = xy″ + 2y′,  ∴ y‴(0) = 2
y⁽⁴⁾(x) = xy‴ + 3y″,  ∴ y⁽⁴⁾(0) = 3
y⁽⁵⁾(x) = xy⁽⁴⁾ + 4y‴,  ∴ y⁽⁵⁾(0) = 8
Hence, the Taylor series solution y(x) is given by,
y(x) ≈ y(0) + x y′(0) + (x²/2!) y″(0) + (x³/3!) y‴(0) + (x⁴/4!) y⁽⁴⁾(0) + (x⁵/5!) y⁽⁵⁾(0)
= 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15
∴ y(0.1) ≈ 1 + 0.1 + 0.01/2 + 0.001/3 + 0.0001/8 + 0.00001/15 = 1.1053
Similarly, y(0.2) ≈ 1 + 0.2 + 0.04/2 + 0.008/3 + 0.0016/8 + 0.00032/15 = 1.2229
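The derivative recursion above lends itself directly to code. The sketch below (function name is ours) rebuilds the Taylor polynomial for y′ = xy + 1 from the recursion y⁽ᵏ⁾ = x y⁽ᵏ⁻¹⁾ + (k − 1) y⁽ᵏ⁻²⁾ and evaluates it:

```python
def taylor_xy1(x0, y0, x, terms=10):
    # Successive derivatives of y' = x*y + 1 at x0, using
    # y^(k) = x * y^(k-1) + (k - 1) * y^(k-2) for k >= 2.
    d = [y0, x0 * y0 + 1.0]
    for k in range(2, terms):
        d.append(x0 * d[k - 1] + (k - 1) * d[k - 2])
    # Evaluate the Taylor polynomial about x0.
    h, total, fact = x - x0, 0.0, 1.0
    for k, dk in enumerate(d):
        total += dk * h**k / fact
        fact *= k + 1
    return total

print(round(taylor_xy1(0.0, 1.0, 0.1), 4))  # 1.1053
print(round(taylor_xy1(0.0, 1.0, 0.2), 4))  # 1.2229
```

Taking more terms changes neither value at this precision, confirming the five-term series suffices here.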
Example 12.2: Find the first two non-vanishing terms in the Taylor series solution of the initial value problem y′ = x² + y², y(0) = 0. Hence compute y(0.1), y(0.2), y(0.3) and comment on the accuracy of the solution.
Solution: We have y′ = x² + y², y(0) = 0.
Differentiating successively we have,
y″ = 2x + 2yy′,  ∴ y″(0) = 0
y‴ = 2 + 2[yy″ + (y′)²],  ∴ y‴(0) = 2
y⁽⁴⁾ = 2(yy‴ + 3y′y″),  ∴ y⁽⁴⁾(0) = 0
y⁽⁵⁾ = 2[yy⁽⁴⁾ + 4y′y‴ + 3(y″)²],  ∴ y⁽⁵⁾(0) = 0
y⁽⁶⁾ = 2[yy⁽⁵⁾ + 5y′y⁽⁴⁾ + 10y″y‴],  ∴ y⁽⁶⁾(0) = 0
y⁽⁷⁾ = 2[yy⁽⁶⁾ + 6y′y⁽⁵⁾ + 15y″y⁽⁴⁾ + 10(y‴)²],  ∴ y⁽⁷⁾(0) = 80
The Taylor series up to two terms is y(x) = (2/3!)x³ + (80/7!)x⁷ = x³/3 + x⁷/63
Hence y(0.1) ≈ 0.00033, y(0.2) ≈ 0.00267, y(0.3) ≈ 0.00900; since the first neglected term is of order x¹¹, these values are accurate to the places shown.
Example 12.3: Given xy′ = x − y², y(2) = 1, evaluate y(2.1), y(2.2) and y(2.3) correct to four decimal places using the Taylor series method.
Solution: Given xy′ = x − y², y(2) = 1. To compute y(2.1) by the Taylor series method, we first find the derivatives of y at x = 2.
y′ = 1 − y²/x,  ∴ y′(2) = 1 − 1/2 = 0.5
Differentiating xy′ = x − y²:  xy″ + y′ = 1 − 2yy′
∴ 2y″(2) + 1/2 = 1 − 2 × 1 × (1/2) = 0,  so y″(2) = −1/4 = −0.25
Differentiating again:  xy‴ + 2y″ = −2(y′)² − 2yy″
∴ 2y‴(2) + 2 × (−1/4) = −2 × (1/2)² − 2 × 1 × (−1/4) = 0,  so 2y‴(2) = 1/2 and y‴(2) = 1/4 = 0.25
Differentiating once more:  xy⁽⁴⁾ + 3y‴ = −4y′y″ − 2y′y″ − 2yy‴ = −6y′y″ − 2yy‴
∴ 2y⁽⁴⁾(2) + 3/4 = −6 × (1/2) × (−1/4) − 2 × 1 × (1/4) = 1/4,  so y⁽⁴⁾(2) = −1/4 = −0.25
With h = x − 2, the Taylor series gives
y(x) ≈ 1 + 0.5h − 0.125h² + (0.25/6)h³ − (0.25/24)h⁴
∴ y(2.1) ≈ 1 + 0.05 − 0.00125 + 0.0000417 − 0.0000010 = 1.0488
y(2.2) ≈ 1 + 0.1 − 0.005 + 0.0003333 − 0.0000167 = 1.0953
y(2.3) ≈ 1 + 0.15 − 0.01125 + 0.0011250 − 0.0000844 = 1.1398

12.4 PICARD'S METHOD OF SUCCESSIVE APPROXIMATIONS

Consider the solution of the initial value problem,
dy/dx = f(x, y) with y(x0) = y0
Taking y = y(x) as a function of x, we can integrate the differential equation with respect to x from x = x0 to x, in the form
y = y0 + ∫_{x0}^{x} f(x, y(x)) dx    (12.6)

The integral contains the unknown function y(x) and it is not possible to integrate it directly. In Picard's method, the first approximate solution y⁽¹⁾(x) is obtained by replacing y(x) by y0.
Thus, y⁽¹⁾(x) = y0 + ∫_{x0}^{x} f(x, y0) dx    (12.7)
The second approximate solution is derived on replacing y by y⁽¹⁾(x). Thus,
y⁽²⁾(x) = y0 + ∫_{x0}^{x} f(x, y⁽¹⁾(x)) dx    (12.8)
The process can be continued, so that we have the general approximate solution given by,
y⁽ⁿ⁾(x) = y0 + ∫_{x0}^{x} f(x, y⁽ⁿ⁻¹⁾(x)) dx, for n = 2, 3, ...    (12.9)
This iteration formula is known as Picard's iteration for finding the solution of a first order differential equation, when an initial condition is given. The iterations are continued until two successive approximate solutions y⁽ᵏ⁾ and y⁽ᵏ⁺¹⁾ give approximately the same result for the desired values of x, up to a desired accuracy.
Note: Due to practical difficulties in evaluating the necessary integrals, this method cannot always be used. However, if f(x, y) is a polynomial in x and y, the successive approximate solutions will be obtained as a power series in x.
Example 12.4: Find four successive approximate solutions for the following initial value problem: y′ = x + y, with y(0) = 1, by Picard's method. Hence compute y(0.1) and y(0.2) correct to five significant digits.
Solution: We have y′ = x + y, with y(0) = 1.
The first approximation by Picard's method is,
y⁽¹⁾(x) = 1 + ∫₀ˣ (x + 1) dx = 1 + x + x²/2
The second approximation is,
y⁽²⁾(x) = 1 + ∫₀ˣ (x + 1 + x + x²/2) dx = 1 + x + x² + x³/6
Similarly, the third approximation is,
y⁽³⁾(x) = 1 + ∫₀ˣ (x + y⁽²⁾(x)) dx = 1 + x + x² + x³/3 + x⁴/24
The fourth approximation is,
y⁽⁴⁾(x) = 1 + ∫₀ˣ (x + y⁽³⁾(x)) dx = 1 + x + x² + x³/3 + x⁴/12 + x⁵/120
It is clear that the successive approximations are easily determined as power series in x, each having one degree more than the previous one. The value of y(0.1) is given by,
y(0.1) = 1 + 0.1 + (0.1)² + (0.1)³/3 + (0.1)⁴/12 + (0.1)⁵/120 ≈ 1.1103, correct to five significant digits.
Similarly, y(0.2) = 1 + 0.2 + (0.2)² + (0.2)³/3 + (0.2)⁴/12 + (0.2)⁵/120 ≈ 1.2428

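Because f(x, y) = x + y is a polynomial, each Picard iterate is itself a polynomial, and the iteration can be carried out exactly on coefficient lists. A sketch (function names ours) of this idea:

```python
from fractions import Fraction

def picard_step(poly):
    # One Picard iteration for y' = x + y, y(0) = 1:
    #   y_new(x) = 1 + integral from 0 to x of (t + y_old(t)) dt.
    # poly holds the coefficients of y_old, lowest degree first.
    integrand = list(poly)
    while len(integrand) < 2:
        integrand.append(Fraction(0))
    integrand[1] += 1                  # add the term t of f(t, y) = t + y
    new = [Fraction(1)]                # constant of integration, y(0) = 1
    for i, c in enumerate(integrand):
        new.append(c / (i + 1))        # integrate c*t^i -> c*t^(i+1)/(i+1)
    return new

y = [Fraction(1)]                      # y^(0)(x) = 1
for _ in range(4):
    y = picard_step(y)
print(y)   # coefficients 1, 1, 1, 1/3, 1/12, 1/120
value = sum(float(c) * 0.1 ** i for i, c in enumerate(y))
print(round(value, 4))  # 1.1103
```

The fourth iterate matches 1 + x + x² + x³/3 + x⁴/12 + x⁵/120 from the hand computation above.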
Example 12.5: Find the successive approximate solutions of the initial value problem y′ = xy + 1, with y(0) = 1, by Picard's method.
Solution: The first approximate solution is given by,
y⁽¹⁾(x) = 1 + ∫₀ˣ (x + 1) dx = 1 + x + x²/2
The second and third approximate solutions are,
y⁽²⁾(x) = 1 + ∫₀ˣ [x(1 + x + x²/2) + 1] dx = 1 + x + x²/2 + x³/3 + x⁴/8
y⁽³⁾(x) = 1 + ∫₀ˣ [x(1 + x + x²/2 + x³/3 + x⁴/8) + 1] dx = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15 + x⁶/48

Example 12.6: Compute y(0.25) and y(0.5) correct to three decimal places by solving the following initial value problem by Picard's method:
dy/dx = x²/(1 + y²), y(0) = 0
Solution: We have dy/dx = x²/(1 + y²), y(0) = 0.
By Picard's method, the first approximation is,
y⁽¹⁾(x) = 0 + ∫₀ˣ x²/(1 + 0) dx = x³/3
The second approximate solution is,
y⁽²⁾(x) = ∫₀ˣ x²/(1 + [y⁽¹⁾(x)]²) dx = ∫₀ˣ x²/(1 + x⁶/9) dx = tan⁻¹(x³/3)
For x = 0.25,
y⁽¹⁾(0.25) = (0.25)³/3 = 0.0052083
y⁽²⁾(0.25) = tan⁻¹(0.0052083) = 0.0052083
Thus y(0.25) = 0.005, correct to three decimal places.
Again, for x = 0.5,
y⁽¹⁾(0.5) = (0.5)³/3 = 0.0416667
y⁽²⁾(0.5) = tan⁻¹((0.5)³/3) = 0.0416426
Thus, correct to three decimal places, y(0.5) = 0.042.
Note: For this problem we observe that the integral for the third and higher approximate solutions is either difficult or impossible to evaluate, since
y⁽³⁾(x) = ∫₀ˣ x²/[1 + (tan⁻¹(x³/3))²] dx is not integrable in closed form.
Example 12.7: Use Picard's method to find two successive approximate solutions of the initial value problem,
dy/dx = (y − x)/(y + x), y(0) = 1
Solution: The first approximate solution by Picard's method is given by,
y⁽¹⁾(x) = y0 + ∫₀ˣ f(x, y0) dx
∴ y⁽¹⁾(x) = 1 + ∫₀ˣ (1 − x)/(1 + x) dx = 1 + ∫₀ˣ [2 − (1 + x)]/(1 + x) dx
∴ y⁽¹⁾(x) = 1 + 2 log_e |1 + x| − x
The second approximate solution is given by,
y⁽²⁾(x) = y0 + ∫₀ˣ f(x, y⁽¹⁾(x)) dx
= 1 + ∫₀ˣ [1 + 2 log_e |1 + x| − 2x] / [1 + 2 log_e |1 + x|] dx = 1 + x − 2 ∫₀ˣ x dx / [1 + 2 log_e |1 + x|]
We observe that it is not possible to evaluate the integral for y⁽²⁾(x) in closed form. Thus Picard's method is not applicable for obtaining further successive approximate solutions.

12.5 EULER'S METHOD

This is a crude but simple method of solving a first order initial value problem:
dy/dx = f(x, y), y(x0) = y0
It is derived by integrating f(x0, y0) instead of f(x, y) over a small interval:
∫_{x0}^{x0+h} (dy/dx) dx ≈ ∫_{x0}^{x0+h} f(x0, y0) dx
∴ y(x0 + h) − y(x0) = h f(x0, y0)
Writing y1 = y(x0 + h), we have
y1 = y0 + h f(x0, y0)    (12.10)
Similarly, we can write
y2 = y(x1 + h) = y1 + h f(x1, y1)    (12.11)
where x1 = x0 + h.
Proceeding successively, we can get the solution at any xn = x0 + nh, as
yn = yn−1 + h f(xn−1, yn−1)    (12.12)
This method, known as Euler's method, can be geometrically interpreted, as shown in Figure 12.1.

Fig. 12.1 Euler's Method

For a small step size h, the solution curve y = y(x) is approximated by the tangent line.
The local error at any xk, i.e., the truncation error of Euler's method, is given by,
ek = y(xk+1) − yk+1
where yk+1 is the solution by Euler's method.
∴ ek = y(xk + h) − {yk + h f(xk, yk)}
= yk + h y′(xk) + (h²/2) y″(xk + θh) − yk − h y′(xk), 0 < θ < 1
∴ ek = (h²/2) y″(xk + θh), 0 < θ < 1
Note: Euler's method finds a sequence of values {yk} of y for the sequence of values {xk} of x, step by step. But to get the solution up to a desired accuracy, we have to take the step size h to be very small. Again, the method should not be used over a large range of x about x0, since the propagated error grows as the integration proceeds.
Example 12.8: Solve the following differential equation by Euler's method for x = 0.1, 0.2, 0.3, taking h = 0.1:
dy/dx = x² − y, y(0) = 1.
Compare the results with the exact solution.
Solution: Given dy/dx = x² − y, with y(0) = 1.
In Euler's method one computes, in successive steps, the values y1, y2, y3, ... at x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, using the formula,
yn+1 = yn + h f(xn, yn), for n = 0, 1, 2, ...
∴ yn+1 = yn + h(xn² − yn)
With h = 0.1 and starting with x0 = 0, y0 = 1, we present the successive computations in the table given below.

n    xn     yn        f(xn, yn) = xn² − yn    yn+1 = yn + h f(xn, yn)
0    0.0    1.0000    −1.0000                 0.9000
1    0.1    0.9000    −0.8900                 0.8110
2    0.2    0.8110    −0.7710                 0.7339
3    0.3    0.7339    −0.6439                 0.6695

The analytical solution of the differential equation, written as dy/dx + y = x², is
y eˣ = ∫ x² eˣ dx + c
or y eˣ = x² eˣ − 2x eˣ + 2eˣ + c.
Since y = 1 for x = 0, ∴ c = −1.
∴ y = x² − 2x + 2 − e⁻ˣ.
The following table compares the exact solution with the approximate solution by Euler's method.

n    xn     Approximate Solution    Exact Solution    % Error
1    0.1    0.9000                  0.9052            0.57
2    0.2    0.8110                  0.8213            1.25
3    0.3    0.7339                  0.7492            2.04
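A few lines of code reproduce the Euler table and the exact-solution comparison of Example 12.8 (function names ours):

```python
import math

def euler(f, x0, y0, h, steps):
    # Basic Euler method: y_{n+1} = y_n + h f(x_n, y_n).
    xs, ys = [x0], [y0]
    for _ in range(steps):
        y0 = y0 + h * f(x0, y0)
        x0 = x0 + h
        xs.append(x0)
        ys.append(y0)
    return xs, ys

f = lambda x, y: x * x - y
exact = lambda x: x * x - 2 * x + 2 - math.exp(-x)
xs, ys = euler(f, 0.0, 1.0, 0.1, 3)
for x, y in zip(xs, ys):
    print(f"x = {x:.1f}  euler = {y:.4f}  exact = {exact(x):.4f}")
```

Halving h roughly halves the percentage error, consistent with the O(h) accuracy of the method.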
Example 12.9: Compute the solution of the following initial value problem by Euler's method for x = 0.1, correct to four decimal places, taking h = 0.02:
dy/dx = (y − x)/(y + x), y(0) = 1.
Solution: Euler's method for solving an initial value problem is yn+1 = yn + h f(xn, yn), here with f(x, y) = (y − x)/(y + x).
Taking h = 0.02, we have x1 = 0.02, x2 = 0.04, x3 = 0.06, x4 = 0.08, x5 = 0.1.
Using Euler's method, since y(0) = 1,
y(0.02) = y1 = y0 + h f(x0, y0) = 1 + 0.02 × (1 − 0)/(1 + 0) = 1.0200
y(0.04) = y2 = y1 + h f(x1, y1) = 1.0200 + 0.02 × (1.0200 − 0.02)/(1.0200 + 0.02) = 1.0392
y(0.06) = y3 = y2 + h f(x2, y2) = 1.0392 + 0.02 × (1.0392 − 0.04)/(1.0392 + 0.04) = 1.0577
y(0.08) = y4 = y3 + h f(x3, y3) = 1.0577 + 0.02 × (1.0577 − 0.06)/(1.0577 + 0.06) = 1.0756
y(0.1) = y5 = y4 + h f(x4, y4) = 1.0756 + 0.02 × (1.0756 − 0.08)/(1.0756 + 0.08) = 1.0928
Hence, y(0.1) = 1.0928.

12.5.1 Modified Euler's Method

In order to get somewhat better accuracy, Euler's method is modified by computing the derivative y′ = f(x, y) at a point xn as the mean of f(xn, yn) and f(xn+1, y⁽⁰⁾n+1), where
y⁽⁰⁾n+1 = yn + h f(xn, yn)
yn+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽⁰⁾n+1)]    (12.13)
This modified method is known as the Euler-Cauchy method. The local truncation error of the modified Euler's method is of order O(h³).
Note: Modified Euler's method can be used to compute the solution up to a desired accuracy by applying it in the iterative scheme stated below.
y⁽ᵏ⁺¹⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽ᵏ⁾n+1)], for k = 0, 1, 2, ...    (12.14)
The iterations are continued until two successive approximations y⁽ᵏ⁾n+1 and y⁽ᵏ⁺¹⁾n+1 coincide to the desired accuracy. As a rule, the iterations converge rapidly for a sufficiently small h. If, however, after three or four iterations the iterations still do not give the necessary accuracy in the solution, the spacing h is decreased and the iterations are performed again.
Example 12.10: Use modified Euler's method to compute y(0.02) for the initial value problem dy/dx = x² + y, with y(0) = 1, taking h = 0.01. Compare the result with the exact solution.
Solution: Modified Euler's method consists of obtaining the solution at successive points x1 = x0 + h, x2 = x0 + 2h, ..., xn = x0 + nh, by the two-stage computation
y⁽⁰⁾n+1 = yn + h f(xn, yn)
y⁽¹⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽⁰⁾n+1)]
For the given problem, f(x, y) = x² + y and h = 0.01.
y1⁽⁰⁾ = y0 + h[x0² + y0] = 1 + 0.01 × 1 = 1.01
y1⁽¹⁾ = 1 + (0.01/2)[1.0 + (0.01)² + 1.01] = 1.01005
i.e., y1 = y(0.01) = 1.01005
Next, y2⁽⁰⁾ = y1 + h[x1² + y1] = 1.01005 + 0.01[(0.01)² + 1.01005] = 1.01005 + 0.0101015 = 1.0201515
y2⁽¹⁾ = 1.01005 + (0.01/2)[(0.01)² + 1.01005 + (0.02)² + 1.0201515]
= 1.01005 + 0.005 × 2.0307015
= 1.01005 + 0.0101535 = 1.0202035
∴ y2 = y(0.02) ≈ 1.02020
The exact solution y = 3eˣ − x² − 2x − 2 also gives y(0.02) = 1.02020 to five decimal places.
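Recomputing the two-stage scheme numerically is a useful check here; the true solution of y′ = x² + y, y(0) = 1 is y = 3eˣ − x² − 2x − 2. A sketch with a function name of our own:

```python
import math

def modified_euler(f, x0, y0, h, steps):
    # Euler predictor followed by one trapezoidal correction per step.
    for _ in range(steps):
        y_pred = y0 + h * f(x0, y0)                        # y(0) stage
        y0 = y0 + h / 2 * (f(x0, y0) + f(x0 + h, y_pred))  # y(1) stage
        x0 += h
    return y0

f = lambda x, y: x * x + y
y = modified_euler(f, 0.0, 1.0, 0.01, 2)
exact = 3 * math.exp(0.02) - 0.02**2 - 2 * 0.02 - 2
print(round(y, 5), round(exact, 5))  # both ~ 1.02020
```

The numerical and exact values agree to about six decimal places at this tiny step size, as expected from the O(h³) local error.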

12.5.2 Euler's Method for a Pair of Differential Equations

Consider an initial value problem associated with a pair of first order differential equations given by,
dy/dx = f(x, y, z),  dz/dx = g(x, y, z)    (12.15)
with y(x0) = y0, z(x0) = z0    (12.16)
Euler's method can be extended to compute approximate values yi and zi of y(xi) and z(xi) respectively, given by,
yi+1 = yi + h f(xi, yi, zi)
zi+1 = zi + h g(xi, yi, zi)    (12.17)
starting with i = 0 and continuing step by step for i = 1, 2, 3, .... Evidently, we can also extend Euler's method to an initial value problem associated with a second order differential equation by rewriting it as a pair of first order equations.
Consider the initial value problem,
d²y/dx² = g(x, y, dy/dx), with y(x0) = y0, y′(x0) = y0′
We write z = dy/dx, so that dz/dx = g(x, y, z) with y(x0) = y0 and z(x0) = y0′.
dx dx
Example 12.11: Compute y(1.1) and y(1.2) by solving the initial value problem,
y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44
Solution: We can rewrite the problem as y′ = z, z′ = −z/x − y, with y(1) = 0.77 and z(1) = −0.44.
Taking h = 0.1, we use Euler's method for the problem in the form,
yi+1 = yi + h zi
zi+1 = zi + h[−zi/xi − yi], i = 0, 1, 2, ...
Thus, y1 = y(1.1) and z1 = z(1.1) are given by,
y1 = y0 + h z0 = 0.77 + 0.1 × (−0.44) = 0.726
z1 = z0 + h[−z0/x0 − y0] = −0.44 + 0.1 × (0.44 − 0.77) = −0.44 − 0.033 = −0.473
Similarly, y2 = y(1.2) = y1 + h z1 = 0.726 + 0.1 × (−0.473) = 0.679

Example 12.12: Using Euler's method, compute y(0.1) and y(0.2) for the initial value problem,
y″ + y = 0, y(0) = 0, y′(0) = 1
Solution: We rewrite the initial value problem as y′ = z, z′ = −y, with y(0) = 0, z(0) = 1.
Taking h = 0.1, we have by Euler's method,
y1 = y(0.1) = y0 + h z0 = 0 + 0.1 × 1 = 0.1
z1 = z(0.1) = z0 + h(−y0) = 1 − 0.1 × 0 = 1.0
y2 = y(0.2) = y1 + h z1 = 0.1 + 0.1 × 1.0 = 0.2
z2 = z(0.2) = z1 − h y1 = 1.0 − 0.1 × 0.1 = 0.99

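The paired update of Section 12.5.2 takes only a few lines. The sketch below (names ours) reruns Example 12.12, whose exact solution is y = sin x:

```python
import math

def euler_pair(h, steps):
    # y' = z, z' = -y with y(0) = 0, z(0) = 1 (i.e. y'' + y = 0).
    y, z = 0.0, 1.0
    out = [(0.0, y)]
    for i in range(1, steps + 1):
        y, z = y + h * z, z - h * y   # old y and z used on the right-hand side
        out.append((i * h, y))
    return out

for x, y in euler_pair(0.1, 2):
    print(f"x = {x:.1f}  y = {y:.2f}  sin x = {math.sin(x):.4f}")
```

Note the simultaneous tuple assignment: both updates must use the values from the previous step, exactly as in Equation (12.17).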
Example 12.13: For the initial value problem y″ + xy′ + y = 0, y(0) = 0, y′(0) = 1, compute the values of y for x = 0.05, 0.10, 0.15 and 0.20, with error not exceeding 0.5 × 10⁻⁴.
Solution: We form the Taylor series expansion using y(0) = 0, y′(0) = 1 and, from the differential equation,
y″ = −xy′ − y, so y″(0) = 0
y‴(x) = −xy″ − 2y′, ∴ y‴(0) = −2
y⁽⁴⁾(x) = −xy‴ − 3y″, ∴ y⁽⁴⁾(0) = 0
y⁽⁵⁾(x) = −xy⁽⁴⁾ − 4y‴, ∴ y⁽⁵⁾(0) = 8
and, in general, y⁽²ⁿ⁾(0) = 0, y⁽²ⁿ⁺¹⁾(0) = −2n y⁽²ⁿ⁻¹⁾(0) = (−1)ⁿ 2ⁿ n!
Thus, y(x) = x − x³/3 + x⁵/15 − ... + (−1)ⁿ 2ⁿ n! x²ⁿ⁺¹/(2n + 1)! + ...
This is an alternating series whose terms decrease. Using this, we form the solution for y up to 0.2 as given below:

x       0    0.05     0.10     0.15     0.20
y(x)    0    0.0500   0.0997   0.1489   0.1973
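The alternating series can be summed directly; the sketch below (names ours) reproduces the table values, with y(0.20) = 0.19735 lying right on the boundary of the stated 0.5 × 10⁻⁴ tolerance:

```python
from math import factorial

def y_series(x, terms=8):
    # y(x) = sum over n >= 0 of (-1)^n 2^n n! x^(2n+1) / (2n+1)!
    return sum((-1)**n * 2**n * factorial(n) * x**(2*n + 1) / factorial(2*n + 1)
               for n in range(terms))

for x in (0.05, 0.10, 0.15, 0.20):
    print(f"y({x:.2f}) = {y_series(x):.5f}")
```

Eight terms are far more than needed here; for x ≤ 0.2 the third term is already below 10⁻⁶.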

12.6 RUNGE-KUTTA METHODS


Runge-Kutta methods can be of different orders. They are very useful when the
method of Taylor series is not easy to apply because of the complexity of finding
higher order derivatives. Runge-Kutta methods attempt to get better accuracy and
at the same time obviate the need for computing higher order derivatives. These
methods, however, require the evaluation of the first order derivatives at several
off-step points.
Here, we consider the derivation of Runge-Kutta methods of order 2.
The solution at the (n + 1)th step is assumed in the form,
yn+1 = yn + a k1 + b k2    (12.18)
where k1 = h f(xn, yn) and
k2 = h f(xn + αh, yn + βk1), for n = 0, 1, 2, ...    (12.19)
The unknown parameters a, b, α, and β are determined by expanding in a Taylor series and forming equations by equating coefficients of like powers of h. We have,
yn+1 = y(xn + h) = yn + h y′(xn) + (h²/2) y″(xn) + (h³/6) y‴(xn) + O(h⁴)
= yn + h f(xn, yn) + (h²/2)[f_x + f f_y]n + (h³/6)[f_xx + 2f f_xy + f² f_yy + f_x f_y + f f_y²]n + O(h⁴)    (12.20)
The subscript n indicates that the functions within brackets are to be evaluated at (xn, yn).
Again, expanding k2 in a Taylor series in two variables, we have
k2 = h f(xn + αh, yn + βk1) = h[f + αh f_x + βk1 f_y + O(h²)]n    (12.21)
Thus, on substituting the expansion of k2 into Equation (12.18), we get
yn+1 = yn + (a + b) h f(xn, yn) + h²[bα f_x + bβ f f_y]n + O(h³)
On comparing with the expansion of yn+1 and equating coefficients of h and h², we get the relations,
a + b = 1,  bα = 1/2,  bβ = 1/2
There are three equations for the determination of four unknown parameters. Thus, there are many solutions. However, usually a symmetric solution is taken by setting a = b = 1/2, α = β = 1.
Thus, we can write a Runge-Kutta method of order 2 in the form,
yn+1 = yn + (h/2)[f(xn, yn) + f(xn + h, yn + h f(xn, yn))], for n = 0, 1, 2, ...    (12.22)
Proceeding as in the second order method, Runge-Kutta methods of order 4 can be formulated. Omitting the derivation, we give below the commonly used Runge-Kutta method of order 4:
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4) + O(h⁵)
k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3)    (12.23)
The Runge-Kutta method of order 4 requires the evaluation of the first order derivative f(x, y) at four points. The method is self-starting. The error estimate with this method can be roughly given by,
|y(xn) − yn| ≈ |yn* − yn| / 15    (12.24)
where yn* and yn are the approximate values computed with step sizes h/2 and h, respectively, and y(xn) is the exact solution.
Note: In particular, for the special form of differential equation y′ = F(x), a function
of x alone, the Runge-Kutta method reduces to the Simpson’s one-third formula of
numerical integration from xn to xn+1. Then,

yn+1 = yn + ∫ F(x) dx, the integral being taken from xn to xn+1

Or, yn+1 = yn + (h/6)[F(xn) + 4F(xn + h/2) + F(xn + h)]
Runge-Kutta methods are widely used, particularly for finding starting values at the
steps x1, x2, x3,..., since they do not require evaluation of higher order derivatives. It
is also easy to implement the method in a computer program.
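As a sketch of such a computer program, formula (12.23) can be coded as a single step function in Python. The names below are our own; the sample call uses the equation dy/dx = x + y, y(0) = 1 of Example 12.14:

```python
def rk4_step(f, x, y, h):
    """One step of the fourth order Runge-Kutta formulae (12.23)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

# y(0.1) for dy/dx = x + y, y(0) = 1, h = 0.1
y1 = rk4_step(lambda x, y: x + y, 0.0, 1.0, 0.1)
```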
Example 12.14: Compute values of y(0.1) and y(0.2) by 4th order Runge-Kutta
methods, correct to five significant figures, for the initial value problem,
dy/dx = x + y, y(0) = 1
Solution: We have dy/dx = x + y, y(0) = 1
∴ f(x, y) = x + y, h = 0.1, x0 = 0, y0 = 1
By Runge-Kutta methods,
y(0.1) = y(0) + (1/6)(k1 + 2k2 + 2k3 + k4)
Where, k1 = h f(x0, y0) = 0.1 × (0 + 1) = 0.1
k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × (0.05 + 1.05) = 0.11
k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × (0.05 + 1.055) = 0.1105
k4 = h f(x0 + h, y0 + k3) = 0.1 × (0.1 + 1.1105) = 0.12105
∴ y(0.1) = 1 + (1/6)[0.1 + 2 × (0.11 + 0.1105) + 0.12105] = 1.11034
Thus, x1 = 0.1, y1 = 1.11034
Similarly, y(0.2) = y(0.1) + (1/6)(k1 + 2k2 + 2k3 + k4), with
k1 = h f(x1, y1) = 0.1 × (0.1 + 1.11034) = 0.121034
k2 = h f(x1 + h/2, y1 + k1/2) = 0.1 × (0.15 + 1.17086) = 0.132086
k3 = h f(x1 + h/2, y1 + k2/2) = 0.1 × (0.15 + 1.17638) = 0.132638
k4 = h f(x1 + h, y1 + k3) = 0.1 × (0.2 + 1.24298) = 0.144298
∴ y2 = y(0.2) = 1.11034 + (1/6)[0.121034 + 2 × (0.132086 + 0.132638) + 0.144298] = 1.2428
Example 12.15: Use Runge-Kutta methods of order 4 to evaluate y(1.1) and
y(1.2), by taking step length h = 0.1, for the initial value problem,
dy/dx = x² + y², y(1) = 0
Solution: For the initial value problem dy/dx = f(x, y), y(x0) = y0, the Runge-Kutta
method of order 4 is given as,
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
Where k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3); for n = 0, 1, 2,...
For the given problem, f(x, y) = x² + y², x0 = 1, y0 = 0, h = 0.1.
Thus,
k1 = h f(x0, y0) = 0.1 × (1² + 0²) = 0.1
k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × [(1.05)² + (0.05)²] = 0.11050
k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × [(1.05)² + (0.05525)²] = 0.11056
k4 = h f(x0 + h, y0 + k3) = 0.1 × [(1.1)² + (0.11056)²] = 0.12222
∴ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
= (1/6)(0.1 + 0.22100 + 0.22112 + 0.12222) = (1/6) × 0.66434
= 0.11072
For y(1.2):
k1 = 0.1 × [(1.1)² + (0.11072)²] = 0.122226
k2 = 0.1 × [(1.15)² + (0.17183)²] = 0.135203
k3 = 0.1 × [(1.15)² + (0.17832)²] = 0.135430
k4 = 0.1 × [(1.2)² + (0.24615)²] = 0.150059
∴ y2 = y(1.2) = 0.11072 + (1/6)(0.122226 + 0.270406 + 0.270860 + 0.150059)
= 0.24631
Algorithm: Solution of first order differential equation by Runge-Kutta method of
order 2: y′ = f(x, y) with y(x0) = y0.
Step 1: Define f (x, y)
Step 2: Read x0, y0, h, xf [h is step size, xf is final x]
Step 3: Repeat Steps 4 to 11 until x1 > xf
Step 4: Compute k1 = f (x0, y0)
Step 5: Compute y1 = y0+ hk1
Step 6: Compute x1 = x0+ h
Step 7: Compute k2 = f (x1, y1)
Step 8: Compute y1 = y0 + h × (k1 + k2)/2
Step 9: Write x1, y1
Step 10: Set x0 = x1
Step 11: Set y0 = y1
Step 12: Stop
Algorithm: Solution of y′ = f(x, y), y(x0) = y0 by Runge-Kutta method of
order 4.
Step 1: Define f (x, y)
Step 2: Read x0, y0, h, xf
Step 3: Repeat Step 4 to Step 16 until x1 > xf
Step 4: Compute k1 = h f (x0, y0)
Step 5: Compute x = x0 + h/2
Step 6: Compute y = y0 + k1/2
Step 7: Compute k2 = h f (x, y)
Step 8: Compute y = y0 + k2/2
Step 9: Compute k3 = h f(x, y)
Step 10: Compute x1 = x0+ h
Step 11: Compute y = y0+ k3
Step 12: Compute k4 = h f (x1, y)
Step 13: Compute y1 = y0+ (k1+ 2 (k2+ k3) + k4)/6
Step 14: Write x1, y1
Step 15: Set x0 = x1
Step 16: Set y0 = y1
Step 17: Stop
Runge-Kutta Method for a Pair of Equations
Consider an initial value problem associated with a system of two first order ordinary
differential equations in the form,

dy/dx = f(x, y, z), dz/dx = g(x, y, z), with y(x0) = y0 and z(x0) = z0

The Runge-Kutta method of order 4 can be easily extended in the following
form,

yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4) (12.25)

Where k1 = h f(xi, yi, zi), l1 = h g(xi, yi, zi)
k2 = h f(xi + h/2, yi + k1/2, zi + l1/2), l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
k3 = h f(xi + h/2, yi + k2/2, zi + l2/2), l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
k4 = h f(xi + h, yi + k3, zi + l3), l4 = h g(xi + h, yi + k3, zi + l3)
yi = y(xi), zi = z(xi), i = 0, 1, 2,...

The solutions for y(x) and z(x) are determined at successive step points x1 = x0 + h,
x2 = x1 + h = x0 + 2h,..., xN = x0 + Nh.
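The extension (12.25) can be sketched in Python as below; the sample call uses the illustrative pair y′ = z, z′ = xy with y(0) = z(0) = 1 and h = 0.2 (the same pair that arises in Example 12.19):

```python
def rk4_system_step(f, g, x, y, z, h):
    """One step of formulae (12.25) for dy/dx = f(x,y,z), dz/dx = g(x,y,z)."""
    k1, l1 = h * f(x, y, z), h * g(x, y, z)
    k2 = h * f(x + h/2, y + k1/2, z + l1/2)
    l2 = h * g(x + h/2, y + k1/2, z + l1/2)
    k3 = h * f(x + h/2, y + k2/2, z + l2/2)
    l3 = h * g(x + h/2, y + k2/2, z + l2/2)
    k4 = h * f(x + h, y + k3, z + l3)
    l4 = h * g(x + h, y + k3, z + l3)
    return (y + (k1 + 2*k2 + 2*k3 + k4) / 6,
            z + (l1 + 2*l2 + 2*l3 + l4) / 6)

# y' = z, z' = xy with y(0) = 1, z(0) = 1, one step of h = 0.2
y1, z1 = rk4_system_step(lambda x, y, z: z,
                         lambda x, y, z: x * y,
                         0.0, 1.0, 1.0, 0.2)
```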
Runge-Kutta Methods for a Second Order Differential Equation
Consider the initial value problem associated with a second order differential equation,

d²y/dx² = g(x, y, y′), with y(x0) = y0 and y′(x0) = y′0

On substituting z = y′, the above problem is reduced to the problem,

dy/dx = z, dz/dx = g(x, y, z), with y(x0) = y0 and z(x0) = y′(x0) = y′0

Which is an initial value problem associated with a system of two first order differential
equations. Thus, we can write the Runge-Kutta method for a second order differential
equation as,

yi+1 = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
zi+1 = zi + (1/6)(l1 + 2l2 + 2l3 + l4) (12.26)

Where k1 = h zi, l1 = h g(xi, yi, zi)
k2 = h(zi + l1/2), l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
k3 = h(zi + l2/2), l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
k4 = h(zi + l3), l4 = h g(xi + h, yi + k3, zi + l3)

12.7 MULTISTEP METHODS

We have seen that for finding the solution at each step, the Taylor series method and
the Runge-Kutta methods require evaluation of several derivatives. We shall now
develop multistep methods, which require only one derivative evaluation per step;
but unlike the self-starting Taylor series or Runge-Kutta methods, the multistep
methods make use of the solution at more than one previous step point.
Let the values of y and y′ already have been evaluated by self-starting methods
at a number of equally spaced points x0, x1,..., xn. We now integrate the differential
equation,

dy/dx = f(x, y)

i.e., ∫ dy = ∫ f(x, y) dx, both integrals being taken from xn to xn+1

∴ yn+1 = yn + ∫ f(x, y) dx, the integral being taken from xn to xn+1

To evaluate the integral on the right hand side, we consider f(x, y) as a function
of x and replace it by an interpolating polynomial pm(x), i.e., a Newton’s backward
difference interpolation using the (m + 1) points xn, xn–1, xn–2,..., xn–m.
Substituting pm(x) in place of f(x, y), we obtain

yn+1 = yn + h [γ0 fn + γ1 Δfn–1 + γ2 Δ²fn–2 + ... + γm Δᵐfn–m]
The coefficients γk can be easily computed to give,
γ0 = 1, γ1 = 1/2, γ2 = 5/12, γ3 = 3/8, etc.
Taking m = 3, the above formula gives,

yn+1 = yn + h[fn + (1/2)Δfn–1 + (5/12)Δ²fn–2 + (3/8)Δ³fn–3]
Substituting the expression of the differences in terms of function values given
by,
Δfn–1 = fn – fn–1, Δ²fn–2 = fn – 2fn–1 + fn–2
Δ³fn–3 = fn – 3fn–1 + 3fn–2 – fn–3
We get on arranging,

yn+1 = yn + (h/24)[55fn – 59fn–1 + 37fn–2 – 9fn–3] (12.27)
This is known as the Adams-Bashforth formula of order 4. By using the mean
value theorem of integral calculus, the local error of this formula can be shown to
be,

(251/720) h⁵ y(v)(ξ), xn–3 < ξ < xn+1 (12.29)

The fourth order Adams-Bashforth formula requires four starting values, i.e.,
the derivatives f3, f2, f1 and f0. This is a multistep method.

12.8 PREDICTOR-CORRECTOR METHODS

These methods use a pair of multistep numerical integration formulae. The first is the
Predictor formula, which is an open-type explicit formula derived by using, in the
integral, an interpolation formula which interpolates at the points xn, xn–1,..., xn–m.
The second is the Corrector formula, which is obtained by using an interpolation
formula that interpolates at the points xn+1, xn, ..., xn–p in the integral.

12.8.1 Euler’s Predictor-Corrector Formula


The simplest formula of the type is the pair of formulae given by,

yn+1(p) = yn + h f(xn, yn) (12.30)

yn+1(c) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1(p))] (12.31)

In order to determine the solution of the problem up to a desired accuracy, the
corrector formula can be employed in an iterative manner as shown below:

Step 1: Compute yn+1(0), using Equation (12.30)
i.e., yn+1(0) = yn + h f(xn, yn)

Step 2: Compute yn+1(k) using Equation (12.31)
i.e., yn+1(k) = yn + (h/2)[f(xn, yn) + f(xn+1, yn+1(k–1))], for k = 1, 2, 3,...

The computation is continued till the condition given below is satisfied,

|yn+1(k) – yn+1(k–1)| < ε (12.32)

Where ε is the prescribed accuracy.
It may be noted that the accuracy achieved will depend on the step size h and on
the local error. The local errors in the predictor and corrector formulae are
(h²/2) y″(ξ1) and –(h³/12) y‴(ξ2), respectively.
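The predictor-corrector iteration of Steps 1 and 2 can be sketched in Python as follows; the function name, the test equation y′ = x + y and the tolerance are illustrative assumptions:

```python
def euler_pc_step(f, x, y, h, eps=1e-6, max_iter=50):
    """One Euler predictor-corrector step: predict by (12.30), then
    iterate the corrector (12.31) until successive values agree within eps."""
    y_next = y + h * f(x, y)                                  # predictor
    for _ in range(max_iter):
        y_new = y + h / 2 * (f(x, y) + f(x + h, y_next))      # corrector
        if abs(y_new - y_next) < eps:                         # condition (12.32)
            return y_new
        y_next = y_new
    return y_next

# y(0.1) for dy/dx = x + y, y(0) = 1, h = 0.1
y1 = euler_pc_step(lambda x, y: x + y, 0.0, 1.0, 0.1)
```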

12.8.2 Milne’s Predictor-Corrector Formula

A commonly used Predictor-Corrector system is the fourth order Milne’s Predictor-
Corrector formula. It uses the following as Predictor and Corrector,

yn+1(p) = yn–3 + (4h/3)(2fn – fn–1 + 2fn–2)

yn+1(c) = yn–1 + (h/3)[fn–1 + 4fn + f(xn+1, yn+1(p))] (12.33)

The local errors in these formulae are, respectively,

(28/90) h⁵ y(v)(ξ1) and –(1/90) h⁵ y(v)(ξ2) (12.34)
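A Python sketch of one step of Milne's formulae (12.33); as an illustrative assumption, the four starting values are taken from the exact solution of the test equation y′ = y:

```python
import math

def milne_step(f, xs, ys, h):
    """One Milne step (12.33): the predictor reaches back to y_{n-3},
    the corrector applies Simpson's rule once with the predicted value."""
    fn  = f(xs[-1], ys[-1])
    fn1 = f(xs[-2], ys[-2])
    fn2 = f(xs[-3], ys[-3])
    y_pred = ys[-4] + 4 * h / 3 * (2 * fn - fn1 + 2 * fn2)
    f_pred = f(xs[-1] + h, y_pred)
    return ys[-2] + h / 3 * (fn1 + 4 * fn + f_pred)

h = 0.1
xs = [0.0, 0.1, 0.2, 0.3]
ys = [math.exp(x) for x in xs]    # exact starting values for y' = y, y(0) = 1
y4 = milne_step(lambda x, y: y, xs, ys, h)   # estimate of y(0.4)
```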
Example 12.16: Compute the Taylor series solution of the problem
dy/dx = xy + 1, y(0) = 1, up to x⁵ terms and hence compute the values of y(0.1),
y(0.2) and y(0.3). Use Milne’s Predictor-Corrector method to compute y(0.4) and
y(0.5).
Solution: We have y′ = xy + 1, with y(0) = 1, ∴ y′(0) = 1
Differentiating successively, we get
y″(x) = xy′ + y ∴ y″(0) = 1
y‴(x) = xy″ + 2y′ ∴ y‴(0) = 2
y(iv)(x) = xy‴ + 3y″ ∴ y(iv)(0) = 3
y(v)(x) = xy(iv) + 4y‴ ∴ y(v)(0) = 8
Thus, the Taylor series solution is given by,

y(x) = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15

∴ y(0.1) = 1.1053, y(0.2) = 1.22288, y(0.3) = 1.35526

For application of Milne’s Predictor-Corrector method, we compute y′(0.1),
y′(0.2) and y′(0.3).

y′(0.1) = 0.1 × 1.1053 + 1 = 1.11053
y′(0.2) = 0.2 × 1.22288 + 1 = 1.24458
y′(0.3) = 0.3 × 1.35526 + 1 = 1.40658

The Predictor formula gives, y4 = y(0.4) = y0 + (4h/3)(2y′1 – y′2 + 2y′3).

∴ y4(0) = 1 + (4 × 0.1/3)(2 × 1.11053 – 1.24458 + 2 × 1.40658)
= 1.50528 ∴ y′4 = 1 + 0.4 × 1.50528 = 1.602112

The Corrector formula gives, y4(1) = y2 + (h/3)(y′2 + 4y′3 + y′4)
= 1.22288 + (0.1/3)(1.24458 + 4 × 1.40658 + 1.602112) = 1.50531

12.9 NUMERICAL SOLUTION OF BOUNDARY VALUE PROBLEMS
We consider the solution of an ordinary differential equation of order 2 or more,
when values of the dependent variable are given at more than one point, usually at
the two ends of an interval in which the solution is required. For example, the simplest
boundary value problem associated with a second order differential equation is,
y″ + p(x) y′ + q(x) y = r(x) (12.35)
With boundary conditions, y(a) = A, y(b) = B. (12.36)

The following two methods reduce the boundary value problem to initial value
problems, which are then solved by any of the methods for solving such problems.
12.9.1 Reduction to a Pair of Initial Value Problems
This method is applicable to linear differential equations only. In this method, the
solution is assumed to be a linear combination of two solutions in the form,
y(x) = u(x) + λv(x) (12.37)
Where λ is a suitable constant determined by using the boundary condition, and u(x)
and v(x) are the solutions of the following two initial value problems:
(i) u″ + p(x) u′ + q(x) u = r(x)
u(a) = A, u′(a) = α1, (say). (12.38)
(ii) v″ + p(x) v′ + q(x) v = 0
v(a) = 0 and v′(a) = α2, (say) (12.39)
Where α1 and α2 are arbitrarily assumed constants. After solving the two initial
value problems, the constant λ is determined by satisfying the boundary condition at
x = b. Thus,
u(b) + λv(b) = B
Or, λ = (B – u(b))/v(b) (12.40)
Evidently, y(a) = A is already satisfied.
If v(b) = 0, then we solve the initial value problem for v again by choosing
v′(a) = α3, for some other value for which v(b) will be non-zero.
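The whole procedure can be sketched in Python: both initial value problems are integrated by the fourth order Runge-Kutta method and λ is then fixed by Equation (12.40). The test problem y″ = 6x, y(0) = 0, y(1) = 1, with exact solution y = x³, is our own illustration:

```python
def solve_ivp_rk4(g, x0, y0, dy0, h, n):
    """Integrate y'' = g(x, y, y') as a first order pair by RK4,
    returning the list of y values at the n+1 grid points."""
    x, y, z = x0, y0, dy0
    ys = [y]
    for _ in range(n):
        k1, l1 = h * z, h * g(x, y, z)
        k2 = h * (z + l1 / 2)
        l2 = h * g(x + h / 2, y + k1 / 2, z + l1 / 2)
        k3 = h * (z + l2 / 2)
        l3 = h * g(x + h / 2, y + k2 / 2, z + l2 / 2)
        k4 = h * (z + l3)
        l4 = h * g(x + h, y + k3, z + l3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += (l1 + 2 * l2 + 2 * l3 + l4) / 6
        x += h
        ys.append(y)
    return ys

# y'' = 6x with y(0) = 0, y(1) = 1; here p(x) = q(x) = 0 and r(x) = 6x
h, n = 0.05, 20
u = solve_ivp_rk4(lambda x, y, z: 6 * x, 0.0, 0.0, 0.0, h, n)  # u(a)=A, u'(a) arbitrary
v = solve_ivp_rk4(lambda x, y, z: 0.0,   0.0, 0.0, 1.0, h, n)  # homogeneous, v(a)=0
lam = (1.0 - u[-1]) / v[-1]                                    # Equation (12.40)
y_mid = u[10] + lam * v[10]                                    # approximation to y(0.5)
```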
Another method which is commonly used for solving boundary value problems is
the finite difference method discussed below.
12.9.2 Finite Difference Method
In this method of solving a boundary value problem, the derivatives appearing in the
differential equation and boundary conditions, if necessary, are replaced by appropriate
difference quotients.
Consider the differential equation, y″ + p(x) y′ + q(x) y = r(x) (12.41)
With the boundary conditions, y(a) = α and y(b) = β (12.42)
The interval [a, b] is divided into N equal parts each of width h, so that h =
(b – a)/N, and the end points are x0 = a and xN = b. The interior mesh points xn at
which solution values y(xn) are to be determined are,
xn = x0 + nh, n = 1, 2, ..., N – 1 (12.43)
The values of y at the mesh points are denoted by yn, given by,
yn = y(x0 + nh), n = 0, 1, 2, ..., N (12.44)
The following central difference approximations are usually used in the finite
difference method of solving a boundary value problem,

y′(xn) ≈ (yn+1 – yn–1)/(2h) (12.45)

y″(xn) ≈ (yn+1 – 2yn + yn–1)/h² (12.46)
Substituting these in the differential equation, we have
2(yn+1 – 2yn + yn–1) + pn h(yn+1 – yn–1) + 2h²qn yn = 2rn h²,
Where pn = p(xn), qn = q(xn), rn = r(xn) (12.47)
Rewriting the equation by regrouping we get,
(2 – hpn)yn–1 + (–4 + 2h²qn)yn + (2 + hpn)yn+1 = 2rn h² (12.48)
This equation is to be considered at each of the interior points, i.e., it is true for
n = 1, 2, ..., N – 1.
The boundary conditions of the problem are given by,
y0 = α, yN = β (12.49)
Introducing these conditions in the relevant equations and arranging them, we
have the following system of linear equations in the (N – 1) unknowns y1, y2, ..., yN–1:

(–4 + 2h²q1)y1 + (2 + hp1)y2 = 2r1h² – (2 – hp1)α
(2 – hp2)y1 + (–4 + 2h²q2)y2 + (2 + hp2)y3 = 2r2h²
... ... ... ... ...
(2 – hpN–1)yN–2 + (–4 + 2h²qN–1)yN–1 = 2rN–1h² – (2 + hpN–1)β (12.50)
The above system of N – 1 equations can be expressed in matrix notation in the
form,
Ay = b (12.51)
Where the coefficient matrix A is a tridiagonal one, of the form,

A = | B1   C1   0    0   ...  0     0     0    |
    | A2   B2   C2   0   ...  0     0     0    |
    | 0    A3   B3   C3  ...  0     0     0    |
    | ...  ...  ...  ... ...  ...   ...   ...  |
    | 0    0    0    0   ...  AN–2  BN–2  CN–2 |
    | 0    0    0    0   ...  0     AN–1  BN–1 | (12.52)

Where Bi = –4 + 2h²qi, i = 1, 2,..., N – 1
Ci = 2 + hpi, i = 1, 2,..., N – 2 (12.53)
Ai = 2 – hpi, i = 2, 3,..., N – 1

The vector b has components,

b1 = 2r1h² – (2 – hp1)α, bi = 2ri h² for i = 2, ..., N – 2,
bN–1 = 2rN–1h² – (2 + hpN–1)β (12.54)

The system of linear equations can be directly solved using suitable methods.
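As an illustration, the following Python sketch assembles the scheme (12.48) and solves the tridiagonal system (12.51) by forward elimination and back substitution (the Thomas algorithm, one common choice of 'suitable method'). The test problem y″ = 6x, y(0) = 0, y(1) = 1 has exact solution y = x³:

```python
def fd_bvp(p, q, r, a, b, alpha, beta, N):
    """Solve y'' + p(x)y' + q(x)y = r(x), y(a)=alpha, y(b)=beta, on N
    subintervals by scheme (12.48) and a tridiagonal (Thomas) solve."""
    h = (b - a) / N
    xs = [a + n * h for n in range(1, N)]
    A = [2 - h * p(x) for x in xs]           # sub-diagonal, Equation (12.53)
    B = [-4 + 2 * h**2 * q(x) for x in xs]   # diagonal
    C = [2 + h * p(x) for x in xs]           # super-diagonal
    d = [2 * r(x) * h**2 for x in xs]
    d[0]  -= A[0] * alpha                    # fold boundary values into b
    d[-1] -= C[-1] * beta
    # Forward elimination
    for i in range(1, N - 1):
        m = A[i] / B[i - 1]
        B[i] -= m * C[i - 1]
        d[i] -= m * d[i - 1]
    # Back substitution
    y = [0.0] * (N - 1)
    y[-1] = d[-1] / B[-1]
    for i in range(N - 3, -1, -1):
        y[i] = (d[i] - C[i] * y[i + 1]) / B[i]
    return [alpha] + y + [beta]

# y'' = 6x, y(0) = 0, y(1) = 1, with N = 10 (so x5 = 0.5)
ys = fd_bvp(lambda x: 0.0, lambda x: 0.0, lambda x: 6 * x,
            0.0, 1.0, 0.0, 1.0, 10)
```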
Example 12.17: Compute values of y(1.1) and y(1.2) on solving the following
initial value problem, using Runge-Kutta methods of order 4:

y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = –0.44

Solution: We first rewrite the initial value problem in the form of a pair of first order
equations,

y′ = z, z′ = –z/x – y

With y(1) = 0.77 and z(1) = –0.44.
We now employ the Runge-Kutta method of order 4 with h = 0.1,

y(1.1) = y(1) + (1/6)(k1 + 2k2 + 2k3 + k4)
y′(1.1) = z(1.1) = –0.44 + (1/6)(l1 + 2l2 + 2l3 + l4)

k1 = 0.1 × (–0.44) = –0.044
l1 = 0.1 × (0.44/1 – 0.77) = –0.033
k2 = 0.1 × (–0.44 – 0.033/2) = –0.04565
l2 = 0.1 × (0.4565/1.05 – 0.748) = –0.031323809
k3 = 0.1 × (–0.44 – 0.031323809/2) = –0.045566190
l3 = 0.1 × (0.455661904/1.05 – 0.747175) = –0.031321128
k4 = 0.1 × (–0.44 – 0.031321128) = –0.047132112
l4 = 0.1 × (0.471321128/1.1 – 0.72443381) = –0.029596005

∴ y(1.1) = 0.77 + (1/6)[–0.044 – 2 × 0.04565 – 2 × 0.045566190 – 0.047132112]
= 0.724406
y′(1.1) = –0.44 + (1/6)[–0.033 – 2 × 0.031323809 – 2 × 0.031321128 – 0.029596005]
= –0.471314

Example 12.18: Compute the solution of the following initial value problem for
x = 0.2, using the Taylor series solution method of order 4:

y″ = y + xy′, with y(0) = 1, y′(0) = 0

Solution: Given y″ = y + xy′, we put z = y′ so that z′ = y + xz, with y(0) = 1, z(0) = 0.
We solve for y and z by the Taylor series method of order 4. For this we first
compute y″(0), y‴(0), y(iv)(0),...

We have, y″(0) = y(0) + 0 × y′(0) = 1, i.e., z′(0) = 1
y‴(0) = z″(0) = y′(0) + z(0) + 0 × z′(0) = 0
y(iv)(0) = z‴(0) = y″(0) + 2z′(0) + 0 × z″(0) = 3
z(iv)(0) = 4z″(0) + 0 × z‴(0) = 0

By the Taylor series of order 4, we have

y(0 + x) = y(0) + x y′(0) + (x²/2!) y″(0) + (x³/3!) y‴(0) + (x⁴/4!) y(iv)(0)

Or, y(x) = 1 + x²/2! + (x⁴/4!) × 3

∴ y(0.2) = 1 + (0.2)²/2 + (0.2)⁴/8 = 1.0202

Similarly, y′(0.2) = z(0.2) = 0.2 + (0.2)³/2 = 0.204

Example 12.19: Compute the solution of the following initial value problem for
x = 0.2 by the fourth order Runge-Kutta method:

d²y/dx² = xy, y(0) = 1, y′(0) = 1

Solution: Given y″ = xy, we put y′ = z and get the simultaneous first order problem,

y′ = z, z′ = xy, with y(0) = 1, z(0) = 1

We use the Runge-Kutta 4th order formulae, with h = 0.2, to compute y(0.2) and
y′(0.2), as given below:

k1 = 0.2 × 1 = 0.2, l1 = 0.2 × (0 × 1) = 0
k2 = 0.2 × (1 + 0/2) = 0.2, l2 = 0.2 × (0.1 × 1.1) = 0.022
k3 = 0.2 × (1 + 0.011) = 0.2022, l3 = 0.2 × (0.1 × 1.1) = 0.022
k4 = 0.2 × (1 + 0.022) = 0.2044, l4 = 0.2 × (0.2 × 1.2022) = 0.048088

∴ y(0.2) = 1 + (1/6)(0.2 + 2 × 0.2 + 2 × 0.2022 + 0.2044) = 1.20147
y′(0.2) = 1 + (1/6)(0 + 2 × 0.022 + 2 × 0.022 + 0.048088) = 1.02268

Check Your Progress


1. How are Euler's method and Taylor’s method related?
2. Define Picard’s method of successive approximation.
3. Why should we not use Euler’s method for a larger range of x?
4. When are Runge-Kutta methods applied?
5. What is a predictor formula?
6. What are local errors in Milne's predictor-corrector formulae?
7. Where can the method of reduction to a pair of initial value problem be
applied?

12.10 ANSWERS TO CHECK YOUR PROGRESS QUESTIONS

1. If we take k = 1, we get the Euler’s method, y1 = y0 + h f(x0, y0).


2. In Picard’s method the first approximate solution is obtained by replacing
y(x) by y0. Thus, y(1)(x) = y0 + ∫ f(t, y0) dt, the integral being taken from x0
to x. The second approximate solution is derived on replacing y by y(1)(x).
Thus, y(2)(x) = y0 + ∫ f(t, y(1)(t)) dt, taken from x0 to x.
This iteration formula is known as Picard’s iteration for finding the solution of a
first order differential equation, when an initial condition is given. The iterations
are continued until two successive approximate solutions yk and yk + 1 give
approximately the same result for the desired values of x up to a desired
accuracy.
3. The method should not be used for a larger range of x about x0, since the
propagated error grows as integration proceeds.
4. Runge-Kutta methods are very useful when the method of Taylor series is
not easy to apply because of the complexity of finding higher order
derivatives.
5. A predictor formula is an open-type explicit formula derived by using, in the
integral, an interpolation formula which interpolates at the points xn, xn – 1,
..., xn – m.

6. The local errors in these formulae are (28/90)h⁵y(v)(ξ1) for the predictor
and –(1/90)h⁵y(v)(ξ2) for the corrector.

7. This method is applicable to linear differential equations only.

12.11 SUMMARY

• There are many methods available for finding a numerical solution for
differential equations.
• Picard’s iteration is a method of finding solutions of a first order differential
equation when an initial condition is given.
• Euler’s method is a crude but simple method for solving a first order initial
value problem.
• Euler’s method is a particular case of Taylor’s series method.
• Runge-Kutta methods are useful when the method of Taylor series is not
easy to apply because of the complexity of finding higher order derivatives.
• For finding the solution at each step, the Taylor series method and Runge-
Kutta methods require evaluation of several derivatives.
• The multistep method requires only one derivative evaluation per step; but
unlike the self-starting Taylor series or Runge-Kutta methods, the multistep
methods make use of the solution at more than one previous step point.
• Predictor-corrector methods use a pair of multistep numerical integration
formulae. The first is the predictor formula, which is an open-type explicit
formula derived by using, in the integral, an interpolation formula which
interpolates at the points xn, xn – 1, ..., xn – m. The second is the corrector
formula, which is obtained by using an interpolation formula that interpolates
at the points xn + 1, xn, ..., xn – p in the integral.
• Boundary value problems arise for an ordinary differential equation of
order 2 or more, when values of the dependent variable are given at more
than one point, usually at the two ends of an interval in which the solution
is required.
• The methods used to reduce a boundary value problem to initial value
problems are reduction to a pair of initial value problems and the finite
difference method.

12.12 KEY WORDS

• Predictor formula: It is an open-type explicit formula derived by using, in
the integral, an interpolation formula which interpolates at the points xn,
xn – 1, ..., xn – m.
• Corrector formula: It is obtained by using an interpolation formula that
interpolates at the points xn + 1, xn, ..., xn – p in the integral.

12.13 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. What are ordinary differential equations?
2. Name the methods for computing the numerical solution of differential
equations.
3. What is the significance of Runge-Kutta methods of different orders?
4. When is multistep method used?
5. Name the predictor-corrector methods.
6. How will you find the numerical solution of boundary value problems?
Long-Answer Questions
1. Use Picard’s method to compute values of y(0.1), y(0.2) and y(0.3) correct
to four decimal places, for the problem, y′ = x + y, y(0) = 1.
2. Compute values of y at x = 0.02, by Euler’s method taking h = 0.01, given
y is the solution of the following initial value problem: dy/dx = x³ + y, y(0) = 1.
3. Evaluate y(0.02) by modified Euler’s method, given y′ = x² + y, y(0) = 1,
correct to four decimal places.

4. Given y′ = , y(4) = 4, compute y(4.2) by Taylor series method,


taking h = 0.1.

5. Using Runge-Kutta method of order 4, compute y(0.1) for each of the
following problems:

(a)
(b)
6. Compute the solution of the following initial value problem by Runge-Kutta
method of order 4 taking h = 0.2 up to x = 1: y′ = x – y, y(0) = 1.5.

7. Given and y(0) = 1, y(0.1) = 1.06, y(0.2) = 1.12, y(0.3)


= 1.21. Compute y(0.4) by Milne’s predictor-corrector method.

12.14 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical


Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

UNIT 13 ORDINARY DIFFERENTIAL EQUATIONS
Structure
13.0 Introduction
13.1 Objectives
13.2 Runge-Kutta Methods
13.3 Euler’s Method
13.4 Taylor Series Method
13.5 Multistep Methods
13.6 Euler’s Method for a Pair of Differential Equations
13.7 Runge-Kutta Methods for a Pair of Equations
13.8 Runge-Kutta Methods for a Second Order Differential Equation
13.9 Numerical Solutions of Boundary Value Problems
13.10 Answers to Check Your Progress Questions
13.11 Summary
13.12 Key Words
13.13 Self Assessment Questions and Exercises
13.14 Further Readings

13.0 INTRODUCTION

In numerical analysis, an ordinary differential equation (ODE) is a differential


equation containing one or more functions of one independent variable and the
derivatives of those functions. The term ordinary is used in contrast with the term
partial differential equation which may be with respect to more than one independent
variable.
Ordinary differential equations (ODEs) arise in many contexts of mathematics
and social and natural sciences. Mathematical descriptions of change use differentials
and derivatives. Various differentials, derivatives, and functions become related
via equations, such that a differential equation is a result that describes dynamically
changing phenomena, evolution, and variation. Often, quantities are defined as the
rate of change of other quantities (for example, derivatives of displacement with
respect to time), or gradients of quantities, which is how they enter differential
equations.
In this unit, you will study about the numerical solutions of ordinary differential
equations using Runge-Kutta second order and fourth order methods.

13.1 OBJECTIVES

After going through this unit, you will be able to:


• Analyse the numerical solutions of ordinary differential equations using
Runge-Kutta second order methods
• Find the numerical solutions of ordinary differential equations using
Runge-Kutta fourth order methods

13.2 RUNGE-KUTTA METHODS


Runge-Kutta methods can be of different orders. They are very useful when the
method of Taylor series is not easy to apply because of the complexity of finding
higher order derivatives. Runge-Kutta methods attempt to get better accuracy and
at the same time obviate the need for computing higher order derivatives. These
methods, however, require the evaluation of the first order derivatives at several
off-step points.
Here we consider the derivation of Runge-Kutta methods of order 2.
The solution of the (n + 1)th step is assumed in the form,
yn+1 = yn+ ak1+ bk2 (13.1)
Where k1 = h f (xn, yn), and
k2 = h f(xn+ D h, yn+ E k1), for n = 0, 1, 2,... (13.2)
The unknown parameters a, b, α and β are determined by expanding in Taylor
series and forming equations by equating coefficients of like powers of h. We have,

yn+1 = y(xn + h) = yn + h y′(xn) + (h²/2) y″(xn) + (h³/6) y‴(xn) + O(h⁴)

= yn + h f(xn, yn) + (h²/2)[fx + f fy]n + (h³/6)[fxx + 2f fxy + f² fyy + fx fy + f fy²]n + O(h⁴) (13.3)

The subscript n indicates that the functions within brackets are to be evaluated
at (xn, yn).
Again, expanding k2 by Taylor series in two variables, we have

k2 = h[fn + αh (fx)n + βk1 (fy)n + (α²h²/2)(fxx)n + αβh k1 (fxy)n + (β²k1²/2)(fyy)n + O(h³)]
(13.4)

Thus, on substituting this expansion of k2 in Equation (13.1), comparing with the
expansion of yn+1 and equating coefficients of h and h², we get the following relations,

a + b = 1, bα = bβ = 1/2

There are three equations for the determination of four unknown parameters.
Thus, there are many solutions. However, usually a symmetric solution is taken by
setting α = β = 1 and a = b = 1/2.
Thus we can write the Runge-Kutta method of order 2 in the form,

yn+1 = yn + (h/2)[f(xn, yn) + f(xn + h, yn + h f(xn, yn))], for n = 0, 1, 2,... (13.5)
Proceeding as in the second order method, Runge-Kutta methods of order 4 can be
formulated. Omitting the derivation, we give below the commonly used Runge-Kutta
method of order 4:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4) + O(h⁵)
k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3) (13.6)
The Runge-Kutta method of order 4 requires the evaluation of the first order
derivative f (x, y) at four points. The method is self-starting. The error estimate
with this method can be roughly given by,

|y(xn) – yn| ≈ (yn* – yn)/15 (13.7)

Where yn* and yn are the approximate values computed with h/2 and h, respectively,
as step size and y(xn) is the exact solution.
Note. In particular, for the special form of differential equation y′ = F(x), a function
of x alone, the Runge-Kutta method reduces to the Simpson’s one-third formula of
numerical integration from xn to xn+1. Then,

yn+1 = yn + ∫ F(x) dx, the integral being taken from xn to xn+1

Or, yn+1 = yn + (h/6)[F(xn) + 4F(xn + h/2) + F(xn + h)]

Runge-Kutta methods are widely used, particularly for finding starting values at the
steps x1, x2, x3,..., since they do not require evaluation of higher order derivatives. It
is also easy to implement the method in a computer program.
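As a sketch of such a program, the following Python driver repeats the step formulae (13.6) over several steps; the sample call reproduces y(0.2) for the problem y′ = x + y, y(0) = 1 worked out in Example 13.1 below (the function name is our own):

```python
def rk4(f, x0, y0, h, n_steps):
    """Apply the fourth order formulae (13.6) repeatedly from (x0, y0)."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Two steps of h = 0.1 give y(0.2) for dy/dx = x + y, y(0) = 1
y_02 = rk4(lambda x, y: x + y, 0.0, 1.0, 0.1, 2)
```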
Example 13.1. Compute values of y(0.1) and y(0.2) by 4th order Runge-Kutta
methods, correct to five significant figures, for the initial value problem,
dy/dx = x + y, y(0) = 1
Solution. We have dy/dx = x + y, y(0) = 1
∴ f(x, y) = x + y, h = 0.1, x0 = 0, y0 = 1
By Runge-Kutta methods,
y(0.1) = y(0) + (1/6)(k1 + 2k2 + 2k3 + k4)
Where, k1 = h f(x0, y0) = 0.1 × (0 + 1) = 0.1
k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × (0.05 + 1.05) = 0.11
k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × (0.05 + 1.055) = 0.1105
k4 = h f(x0 + h, y0 + k3) = 0.1 × (0.1 + 1.1105) = 0.12105
∴ y(0.1) = 1 + (1/6)[0.1 + 2 × (0.11 + 0.1105) + 0.12105] = 1.11034
Thus, x1 = 0.1, y1 = 1.11034

y(0.2) = y(0.1) + (1/6)(k1 + 2k2 + 2k3 + k4)
k1 = h f(x1, y1) = 0.1 × (0.1 + 1.11034) = 0.121034
k2 = h f(x1 + h/2, y1 + k1/2) = 0.1 × (0.15 + 1.17086) = 0.132086
k3 = h f(x1 + h/2, y1 + k2/2) = 0.1 × (0.15 + 1.17638) = 0.132638
k4 = h f(x1 + h, y1 + k3) = 0.1 × (0.2 + 1.24298) = 0.144298
y2 = y(0.2) = 1.11034 + (1/6)[0.121034 + 2 × (0.132086 + 0.132638) + 0.144298] = 1.2428
Example 13.2. Use Runge-Kutta methods of order 4 to evaluate y(1.1) and
y(1.2), by taking step length h = 0.1 for the initial value problem:
dy/dx = x² + y², y(1) = 0
Solution. For the initial value problem,
dy/dx = f(x, y), y(x0) = y0; the Runge-Kutta method of order 4 is given as,
yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)
Where, k1 = h f(xn, yn)
k2 = h f(xn + h/2, yn + k1/2)
k3 = h f(xn + h/2, yn + k2/2)
k4 = h f(xn + h, yn + k3); for n = 0, 1, 2,...
For the given problem, f(x, y) = x² + y², x0 = 1, y0 = 0, h = 0.1.
Thus,
k1 = h f(x0, y0) = 0.1 × (1² + 0²) = 0.1
k2 = h f(x0 + h/2, y0 + k1/2) = 0.1 × [(1.05)² + (0.05)²] = 0.11050
k3 = h f(x0 + h/2, y0 + k2/2) = 0.1 × [(1.05)² + (0.05525)²] = 0.11056
k4 = h f(x0 + h, y0 + k3) = 0.1 × [(1.1)² + (0.11056)²] = 0.12222
∴ y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)
= (1/6)(0.1 + 0.22100 + 0.22112 + 0.12222) = (1/6) × 0.66434
= 0.11072
For y(1.2):
k1 = 0.1 × [(1.1)² + (0.11072)²] = 0.122226
k2 = 0.1 × [(1.15)² + (0.17183)²] = 0.135203
k3 = 0.1 × [(1.15)² + (0.17832)²] = 0.135430
k4 = 0.1 × [(1.2)² + (0.24615)²] = 0.150059
∴ y2 = y(1.2) = 0.11072 + (1/6)(0.122226 + 0.270406 + 0.270860 + 0.150059)
= 0.24631
Algorithm. Solution of first order differential equation by Runge-Kutta method of
order 2: y′ = f(x, y) with y(x0) = y0.
Step 1. Define f (x, y)
Step 2. Read x0, y0, h, xf [h is step size, xf is final x]
Step 3. Repeat steps 4 to 11 until x1 > xf
Step 4. Compute k1 = f (x0, y0)
Step 5. Compute y1 = y0+ hk1
Step 6. Compute x1 = x0+ h
Step 7. Compute k2 = f (x1, y1)
Ordinary Differential
Equations
Step 8. Compute y1 = y0 + h × (k1 + k2)/2
Step 9. Write x1, y1
Step 10. Set x0 = x1
NOTES Step 11. Set y0 = y1
Step 12. Stop
Algorithm. Solution of y′ = f(x, y), y(x0) = y0 by Runge-Kutta method of
order 4.
Step 1. Define f (x, y)
Step 2. Read x0, y0, h, xf
Step 3. Repeat step 4 to step 16 until x1 > xf
Step 4. Compute k1 = h f (x0, y0)
Step 5. Compute x = x0 + h/2
Step 6. Compute y = y0 + k1/2
Step 7. Compute k2 = h f (x, y)
Step 8. Compute y = y0 + k2/2
Step 9. Compute k3 = h f(x, y)
Step 10. Compute x1 = x0+ h
Step 11. Compute y = y0+ k3
Step 12. Compute k4 = h f (x1, y)
Step 13. Compute y1 = y0+ (k1+ 2 (k2+ k3) + k4)/6
Step 14. Write x1, y1
Step 15. Set x0 = x1
Step 16. Set y0 = y1
Step 17. Stop

13.3 EULER’S METHOD

This is a crude but simple method of solving a first order initial value problem:

dy/dx = f(x, y), y(x0) = y0

This is derived by integrating f(x0, y0) instead of f(x, y) for a small interval,

∫ dy = ∫ f(x0, y0) dx, both integrals being taken from x0 to x0 + h

∴ y(x0 + h) – y(x0) = h f(x0, y0)

Writing y1 = y (x0+ h), we have


y1 = y0+h f (x0, y0) (13.8)
Similarly, we can write
y2 = y (x1+ h) = y1+ h f (x1, y1) (13.9)
Where x1 = x0+ h.
Proceeding successively, we can get the solution at any xn = x0+ nh, as
yn = yn–1+ h f (xn–1, yn–1) (13.10)
This method, known as Euler’s method, can be geometrically interpreted, as
shown in figure below.

Fig. 5.3 Euler’s Method


For small step size h, the solution curve y = y (x), is approximated by the tangential
line.
The local error at any xk, i.e., the truncation error of the Euler’s method, is given
by,
ek = y(xk+1) – yk+1
Where yk+1 is the solution by Euler’s method.

∴ ek = y(xk + h) – {yk + h f(xk, yk)}
= yk + h y′(xk) + (h²/2) y″(xk + θh) – yk – h y′(xk), 0 < θ < 1

∴ ek = (h²/2) y″(xk + θh), 0 < θ < 1

Note. The Euler’s method finds a sequence of values {yk} of y for the sequence of
values {xk}of x, step by step. But to get the solution up to a desired accuracy, we
have to take the step size h to be very small. Again, the method should not be used
for a larger range of x about x0, since the propagated error grows as integration
proceeds.
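A minimal Python sketch of the method; the sample call uses the equation y′ = x² – y, y(0) = 1 of Example 13.3 below, with the same step size h = 0.1 (the function name is our own):

```python
def euler(f, x0, y0, h, n_steps):
    """Euler's method (13.10): y_{n+1} = y_n + h f(x_n, y_n)."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += h * f(x, y)
        x += h
    return y

# Three steps of h = 0.1 give y(0.3) for dy/dx = x^2 - y, y(0) = 1
y_03 = euler(lambda x, y: x * x - y, 0.0, 1.0, 0.1, 3)
```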
Example 13.3. Solve the following differential equation by Euler's method for x =
0.1, 0.2, 0.3, taking h = 0.1:
    dy/dx = x² − y,  y(0) = 1.
Compare the results with the exact solution.
Solution. Given dy/dx = x² − y, with y(0) = 1.
In Euler's method one computes, in successive steps, the values of y1, y2, y3, ... at
x1 = x0 + h, x2 = x0 + 2h, x3 = x0 + 3h, using the formula,
    yn+1 = yn + h f(xn, yn), for n = 0, 1, 2, ...
∴ yn+1 = yn + h(xn² − yn)
With h = 0.1, and starting with x0 = 0, y0 = 1, we present the successive
computations in the table given below.

    n    xn     yn       f(xn, yn) = xn² − yn    yn+1 = yn + h f(xn, yn)
    0    0.0    1.0000   −1.0000                 0.9000
    1    0.1    0.9000   −0.8900                 0.8110
    2    0.2    0.8110   −0.7710                 0.7339
    3    0.3    0.7339   −0.6439                 0.6695

The analytical solution of the differential equation, written as dy/dx + y = x², is
    y eˣ = ∫ x² eˣ dx + c
or, y eˣ = x² eˣ − 2x eˣ + 2eˣ + c.
Since y = 1 for x = 0, c = −1.
∴ y = x² − 2x + 2 − e⁻ˣ.
The following table compares the exact solution with the approximate solution
by Euler’s method.
    n    xn     approx. sol.    exact sol.    % error
    1    0.1    0.9000          0.9052        0.57
    2    0.2    0.8110          0.8213        1.25
    3    0.3    0.7339          0.7492        2.04
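The computation of Example 13.3 is easy to verify by machine. A short Python sketch (function names are illustrative choices):

```python
import math

def euler(f, x0, y0, h, n):
    """n steps of Euler's method: yn+1 = yn + h f(xn, yn)."""
    xs, ys = [x0], [y0]
    for _ in range(n):
        y0 = y0 + h * f(x0, y0)
        x0 = round(x0 + h, 10)
        xs.append(x0)
        ys.append(y0)
    return xs, ys

# y' = x^2 - y, y(0) = 1, h = 0.1, as in Example 13.3
xs, ys = euler(lambda x, y: x * x - y, 0.0, 1.0, 0.1, 3)
# exact solution y = x^2 - 2x + 2 - e^{-x}
exact = [x * x - 2 * x + 2 - math.exp(-x) for x in xs]
```

The list `ys` reproduces the table entries 0.9000, 0.8110, 0.7339, and comparing against `exact` reproduces the percentage errors.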

Example 13.4. Compute the solution of the following initial value problem by Euler's
method, for x = 0.1 correct to four decimal places, taking h = 0.02,
    dy/dx = (y − x)/(y + x),  y(0) = 1.
Solution. Euler's method for solving an initial value problem,
    dy/dx = f(x, y), y(x0) = y0, is yn+1 = yn + h f(xn, yn), for n = 0, 1, 2, ...

Taking h = 0.02, we have x1 = 0.02, x2 = 0.04, x3 = 0.06, x4 = 0.08, x5 = 0.1.
Using Euler's method, we have, since y(0) = 1,
    y(0.02) = y1 = y0 + h f(x0, y0) = 1 + 0.02 × (1 − 0)/(1 + 0) = 1.0200
    y(0.04) = y2 = y1 + h f(x1, y1) = 1.0200 + 0.02 × (1.0200 − 0.02)/(1.0200 + 0.02) = 1.0392
    y(0.06) = y3 = y2 + h f(x2, y2) = 1.0392 + 0.02 × (1.0392 − 0.04)/(1.0392 + 0.04) = 1.0577
    y(0.08) = y4 = y3 + h f(x3, y3) = 1.0577 + 0.02 × (1.0577 − 0.06)/(1.0577 + 0.06) = 1.0756
    y(0.1)  = y5 = y4 + h f(x4, y4) = 1.0756 + 0.02 × (1.0756 − 0.08)/(1.0756 + 0.08) = 1.0928
Hence, y(0.1) = 1.0928

Modified Euler’s method


In order to get somewhat moderate accuracy, Euler's method is modified by computing
the derivative y′ = f(x, y) at a point xn as the mean of f(xn, yn) and f(xn+1, y⁽⁰⁾n+1),
where,
    y⁽⁰⁾n+1 = yn + h f(xn, yn),
    y⁽¹⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽⁰⁾n+1)].   (13.11)

This modified method is known as the Euler-Cauchy method. The local truncation
error of the modified Euler's method is of the order O(h³).
Note. Modified Euler's method can be used to compute the solution up to a desired
accuracy by applying it in an iterative scheme as stated below,
    y⁽ᵏ⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽ᵏ⁻¹⁾n+1)], k = 1, 2, 3, ...   (13.12)
The iterations are continued until two successive approximations
y⁽ᵏ⁾n+1 and y⁽ᵏ⁺¹⁾n+1 coincide to the desired accuracy. As a rule, the iterations converge
rapidly for a sufficiently small h. If, however, after three or four iterations the iterations
still do not give the necessary accuracy in the solution, the spacing h is decreased
and iterations are performed again.
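The iterative scheme can be sketched in Python as follows (the tolerance, the iteration cap and the function names are illustrative assumptions):

```python
def modified_euler(f, x0, y0, h, n, tol=1e-6, kmax=10):
    """Euler predictor + iterated trapezoidal corrector (modified Euler's method)."""
    for _ in range(n):
        x1 = x0 + h
        y_est = y0 + h * f(x0, y0)           # predictor y(0)
        for _ in range(kmax):                # corrector iterations y(k)
            y_next = y0 + h / 2 * (f(x0, y0) + f(x1, y_est))
            if abs(y_next - y_est) < tol * abs(y_next):
                y_est = y_next
                break
            y_est = y_next
        x0, y0 = x1, y_est
    return y0

# y' = x^2 + y, y(0) = 1, h = 0.01: two steps give y(0.02)
y_002 = modified_euler(lambda x, y: x * x + y, 0.0, 1.0, 0.01, 2)
```

For this problem the iterated corrector gives y(0.02) ≈ 1.0202, in agreement with the exact solution y = 3eˣ − x² − 2x − 2.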
Example 13.5. Use modified Euler's method to compute y(0.02) for the initial
value problem,
    dy/dx = x² + y, with y(0) = 1,
taking h = 0.01. Compare the result with the exact solution.
Solution. Modified Euler's method consists of obtaining the solution at successive
points, x1 = x0 + h, x2 = x0 + 2h, ..., xn = x0 + nh, by the two-stage computation
given by,
    y⁽⁰⁾n+1 = yn + h f(xn, yn)
    y⁽¹⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽⁰⁾n+1)].
For the given problem, f(x, y) = x² + y and h = 0.01.
    y1⁽⁰⁾ = y0 + h[x0² + y0] = 1 + 0.01 × 1 = 1.01
    y1⁽¹⁾ = 1 + (0.01/2)[1.0 + 1.01 + (0.01)²] = 1.01005
i.e., y1 = y(0.01) = 1.01005
Next, y2⁽⁰⁾ = y1 + h[x1² + y1]
          = 1.01005 + 0.01[(0.01)² + 1.01005]
          = 1.01005 + 0.0101015 = 1.02015
    y2⁽¹⁾ = 1.01005 + (0.01/2)[(0.01)² + 1.01005 + (0.02)² + 1.02015]
          = 1.01005 + (0.01/2) × (2.03070)
          = 1.01005 + 0.01015
          = 1.02020
∴ y2 = y(0.02) = 1.02020

13.4 TAYLOR SERIES METHOD


Consider the solution of the first order differential equation,
    dy/dx = f(x, y) with y(x0) = y0   (13.13)
where f(x, y) is sufficiently differentiable with respect to x and y. The solution y(x)
of the problem can be expanded about the point x0 by a Taylor series in the form,
    y(x0 + h) = y(x0) + h y′(x0) + (h²/2!) y″(x0) + ... + (hᵏ/k!) y⁽ᵏ⁾(x0) + (h^(k+1)/(k+1)!) y⁽ᵏ⁺¹⁾(ξ)   (13.14)
The derivatives in the above expansion can be determined as follows,
    y′(x0) = f(x0, y0)
    y″(x0) = fx(x0, y0) + fy(x0, y0) y′(x0)
    y‴(x0) = fxx(x0, y0) + 2fxy(x0, y0) y′(x0) + fyy(x0, y0){y′(x0)}² + fy(x0, y0) y″(x0)
where a suffix x or y denotes partial differentiation with respect to x or y.

Thus, the value of y1 = y(x0 + h) can be computed by taking the Taylor series
expansion shown above. Usually, because of the difficulty of obtaining higher order
derivatives, a fourth order method is commonly used. The solution at x2 = x1 + h can
be found by evaluating the derivatives at (x1, y1) and using the expansion; otherwise,
writing x2 = x0 + 2h, we can use the same expansion. This process can be continued
for determining yn+1 from the known values xn, yn.
Note. If we take k = 1, we get Euler's method, y1 = y0 + h f(x0, y0).
Thus, Euler's method is a particular case of the Taylor series method.
Example 13.6. Form the Taylor series solution of the initial value problem,
    dy/dx = xy + 1, y(0) = 1
up to five terms and hence compute y(0.1) and y(0.2), correct to four decimal places.
Solution. We have y′ = xy + 1, y(0) = 1.
Differentiating successively we get,
    y″(x) = xy′ + y,          ∴ y″(0) = 1
    y‴(x) = xy″ + 2y′,        ∴ y‴(0) = 2
    y⁽ⁱᵛ⁾(x) = xy‴ + 3y″,     ∴ y⁽ⁱᵛ⁾(0) = 3
    y⁽ᵛ⁾(x) = xy⁽ⁱᵛ⁾ + 4y‴,   ∴ y⁽ᵛ⁾(0) = 8
Hence, the Taylor series solution y(x) is given by,
    y(x) ≈ y(0) + x y′(0) + (x²/2) y″(0) + (x³/3!) y‴(0) + (x⁴/4!) y⁽ⁱᵛ⁾(0) + (x⁵/5!) y⁽ᵛ⁾(0)
         = 1 + x + x²/2 + x³/3 + x⁴/8 + x⁵/15
∴ y(0.1) ≈ 1 + 0.1 + 0.01/2 + 0.001/3 + 0.0001/8 + 0.00001/15 = 1.1053
Similarly, y(0.2) ≈ 1 + 0.2 + 0.04/2 + 0.008/3 + 0.0016/8 + 0.00032/15 = 1.2229
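For this particular equation the Taylor coefficients obey a simple recurrence: substituting y = Σ aₙxⁿ into y′ = xy + 1 and matching powers of x gives a₀ = a₁ = 1 and n·aₙ = a₍ₙ₋₂₎ for n ≥ 2. A short Python sketch built on this recurrence (derived here, not stated in the text):

```python
def taylor_y(x, n_terms=12):
    """Partial sum of the series solution of y' = x*y + 1, y(0) = 1.

    Coefficients satisfy a0 = a1 = 1 and n*a_n = a_(n-2) for n >= 2,
    obtained by matching powers of x in y' = x*y + 1.
    """
    a = [1.0, 1.0]
    for n in range(2, n_terms):
        a.append(a[n - 2] / n)
    return sum(c * x ** k for k, c in enumerate(a))

y01 = taylor_y(0.1)
y02 = taylor_y(0.2)
```

Taking more terms than the five of the example confirms that y(0.1) ≈ 1.1053 and y(0.2) ≈ 1.2229 are correct to four decimal places.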
Example 13.7. Find the first two non-vanishing terms in the Taylor series solution of
the initial value problem y′ = x² + y², y(0) = 0. Hence, compute y(0.1), y(0.2),
y(0.3) and comment on the accuracy of the solution.
Solution. We have y′ = x² + y², y(0) = 0.
Differentiating successively we have,
    y″ = 2x + 2yy′,                                       ∴ y″(0) = 0
    y‴ = 2 + 2[yy″ + (y′)²],                              ∴ y‴(0) = 2
    y⁽ⁱᵛ⁾ = 2(yy‴ + 3y′y″),                               ∴ y⁽ⁱᵛ⁾(0) = 0
    y⁽ᵛ⁾ = 2[yy⁽ⁱᵛ⁾ + 4y′y‴ + 3(y″)²],                    ∴ y⁽ᵛ⁾(0) = 0
    y⁽ᵛⁱ⁾ = 2[yy⁽ᵛ⁾ + 5y′y⁽ⁱᵛ⁾ + 10y″y‴],                 ∴ y⁽ᵛⁱ⁾(0) = 0
    y⁽ᵛⁱⁱ⁾ = 2[yy⁽ᵛⁱ⁾ + 6y′y⁽ᵛ⁾ + 15y″y⁽ⁱᵛ⁾ + 10(y‴)²],   ∴ y⁽ᵛⁱⁱ⁾(0) = 80
The Taylor series up to two non-vanishing terms is
    y(x) ≈ (x³/3!) × 2 + (x⁷/7!) × 80 = x³/3 + x⁷/63
Hence y(0.1) ≈ 0.0003, y(0.2) ≈ 0.0027 and y(0.3) ≈ 0.0090. Since the x⁷ term does
not exceed 3.5 × 10⁻⁶ on this range and the series terms decrease rapidly, these
values are correct to at least four decimal places.
Example 13.8. Given xy′ = x − y², y(2) = 1, evaluate y(2.1), y(2.2) and y(2.3)
correct to four decimal places using the Taylor series method.
Solution. Given xy′ = x − y², i.e., y′ = 1 − y²/x, and y = 1 for x = 2. To compute y(2.1)
by the Taylor series method, we first find the derivatives of y at x = 2.
    y′ = 1 − y²/x,  ∴ y′(2) = 1 − 1/2 = 0.5
Differentiating xy′ = x − y²:  xy″ + y′ = 1 − 2yy′
    ∴ 2y″(2) + 0.5 = 1 − 2 × 1 × 0.5 = 0,  ∴ y″(2) = −0.25
Differentiating again:  xy‴ + 2y″ = −2(y′)² − 2yy″
    ∴ 2y‴(2) + 2 × (−0.25) = −2 × (0.5)² − 2 × 1 × (−0.25) = 0,  ∴ y‴(2) = 0.25
Differentiating once more:  xy⁽ⁱᵛ⁾ + 3y‴ = −6y′y″ − 2yy‴
    ∴ 2y⁽ⁱᵛ⁾(2) + 3 × 0.25 = −6 × 0.5 × (−0.25) − 2 × 1 × 0.25 = 0.25,  ∴ y⁽ⁱᵛ⁾(2) = −0.25
Hence,
    y(2.1) = y(2) + 0.1 y′(2) + ((0.1)²/2) y″(2) + ((0.1)³/6) y‴(2) + ((0.1)⁴/24) y⁽ⁱᵛ⁾(2)
           = 1 + 0.05 − 0.00125 + 0.00004 − 0.000001 = 1.0488
    y(2.2) = 1 + 0.2 × 0.5 + (0.04/2)(−0.25) + (0.008/6)(0.25) + (0.0016/24)(−0.25)
           = 1 + 0.1 − 0.005 + 0.00033 − 0.00002 = 1.0953
    y(2.3) = 1 + 0.3 × 0.5 + (0.09/2)(−0.25) + (0.027/6)(0.25) + (0.0081/24)(−0.25)
           = 1 + 0.15 − 0.01125 + 0.00113 − 0.00008 = 1.1398
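The hand computation can be cross-checked by integrating y′ = 1 − y²/x from x = 2 with a small RK4 step (the step size is an arbitrary choice); it gives y(2.1) ≈ 1.0488, y(2.2) ≈ 1.0953 and y(2.3) ≈ 1.1398:

```python
def rk4_step(f, x, y, h):
    """One classical RK4 step for y' = f(x, y)."""
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * (k2 + k3) + k4) / 6

f = lambda x, y: 1 - y * y / x       # x y' = x - y^2
y, x, h = 1.0, 2.0, 0.001
vals = {}
for i in range(300):                 # integrate from x = 2 to x = 2.3
    y = rk4_step(f, x, y, h)
    x = round(2.0 + (i + 1) * h, 10)
    if x in (2.1, 2.2, 2.3):
        vals[x] = y
```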
13.5 MULTIPLE METHODS
We have seen that for finding the solution at each step, the Taylor series method and
Runge-Kutta methods require evaluation of several derivatives. We shall now develop
the multistep methods, which require only one derivative evaluation per step; but
unlike the self-starting Taylor series or Runge-Kutta methods, the multistep methods
make use of the solution at more than one previous step point.
Let the values of y and y′ already have been evaluated by self-starting methods
at a number of equally spaced points x0, x1, ..., xn. We now integrate the differential
equation,
    dy/dx = f(x, y), from xn to xn+1
i.e.,
    ∫[xn to xn+1] dy = ∫[xn to xn+1] f(x, y) dx
∴ yn+1 = yn + ∫[xn to xn+1] f(x, y(x)) dx

To evaluate the integral on the right hand side, we consider f(x, y) as a function
of x and replace it by an interpolating polynomial, i.e., a Newton's backward difference
interpolation using the (m + 1) points xn, xn−1, xn−2, ..., xn−m,
    pm(x) = Σ (k = 0 to m) (−1)ᵏ (−s over k) Δᵏ f(n−k),  where s = (x − xn)/h
and
    (−1)ᵏ (−s over k) = s(s + 1)(s + 2)...(s + k − 1)/k!

Substituting pm(x) in place of f(x, y), we obtain
    yn+1 = yn + h Σ (k = 0 to m) γk Δᵏ f(n−k),  where γk = (−1)ᵏ ∫₀¹ (−s over k) ds
The coefficients γk can be easily computed to give,
    γ0 = 1, γ1 = 1/2, γ2 = 5/12, γ3 = 3/8, γ4 = 251/720, etc.
Taking m = 3, the above formula gives,
    yn+1 = yn + h[fn + (1/2)Δf(n−1) + (5/12)Δ²f(n−2) + (3/8)Δ³f(n−3)]
Substituting the expressions of the differences in terms of function values, given
by,
    Δf(n−1) = fn − f(n−1),  Δ²f(n−2) = fn − 2f(n−1) + f(n−2),
    Δ³f(n−3) = fn − 3f(n−1) + 3f(n−2) − f(n−3)
we get, on rearranging,
    yn+1 = yn + (h/24)[55fn − 59f(n−1) + 37f(n−2) − 9f(n−3)]   (13.15)
This is known as the Adams-Bashforth formula of order 4. The local error of this
formula is,
    E = h⁵ ∫₀¹ (s + 3 over 4) f⁽ⁱᵛ⁾(ξ) ds   (13.16)
By using the mean value theorem of integral calculus,
    E = h⁵ f⁽ⁱᵛ⁾(η) ∫₀¹ (s + 3 over 4) ds
or,
    E = (251/720) h⁵ f⁽ⁱᵛ⁾(η).   (13.17)

The fourth order Adams-Bashforth formula requires four starting values, i.e.,
the derivatives f3, f2, f1 and f0. This is a multistep method.
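A sketch of Formula (13.15) in Python, using RK4 to generate the three extra starting values it needs (the function names and the test problem are illustrative):

```python
import math

def ab4(f, x0, y0, h, n):
    """Fourth-order Adams-Bashforth for y' = f(x, y); RK4 supplies starting values."""
    def rk4_step(x, y):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        return y + (k1 + 2 * (k2 + k3) + k4) / 6

    xs = [x0 + i * h for i in range(n + 1)]
    ys = [y0]
    for i in range(3):                      # starting values y1, y2, y3
        ys.append(rk4_step(xs[i], ys[i]))
    fs = [f(x, y) for x, y in zip(xs, ys)]
    for i in range(3, n):                   # formula (13.15)
        y_next = ys[i] + h / 24 * (55 * fs[i] - 59 * fs[i - 1]
                                   + 37 * fs[i - 2] - 9 * fs[i - 3])
        ys.append(y_next)
        fs.append(f(xs[i + 1], y_next))
    return ys

ys = ab4(lambda x, y: -y, 0.0, 1.0, 0.05, 20)   # y' = -y, exact y = e^{-x}
```

Note that each multistep step costs only one new evaluation of f, versus four for an RK4 step.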

Check Your Progress


1. Explain the Runge-Kutta methods.
2. Define the Euler's method.
3. Analyse the modified Euler's method.
4. What is Taylor series method?
5. Elaborate on the multiple methods.

13.6 EULER'S METHOD FOR A PAIR OF
DIFFERENTIAL EQUATIONS
Consider an initial value problem associated with a pair of first order differential
equation given by,
    dy/dx = f(x, y, z),  dz/dx = g(x, y, z)   (13.18)
With y (x0) = y0, z (x0) = z0 (13.19)
Euler’s method can be extended to compute approximate values yi and zi of y
(xi) and z (xi), respectively, given by,
yi+1 = yi+h f (xi, yi, zi)
zi+1 = zi+h g (xi, yi, zi) (13.20)

starting with i = 0 and continuing step by step for i = 1, 2, 3, .... Evidently, we can
also extend Euler's method to an initial value problem associated with a second
order differential equation by rewriting it as a pair of first order equations.
Consider the initial value problem,
    d²y/dx² = g(x, y, dy/dx), with y(x0) = y0, y′(x0) = y0′
We write dy/dx = z, so that dz/dx = g(x, y, z) with y(x0) = y0 and z(x0) = y0′.
Example 13.9. Compute y(1.1) and y(1.2) by solving the initial value problem,
    y″ + y′/x + y = 0, with y(1) = 0.77, y′(1) = −0.44
Solution. We can rewrite the problem as y′ = z, z′ = −z/x − y; with y(1) = 0.77 and
z(1) = −0.44.
Taking h = 0.1, we use Euler's method for the problem in the form,
    y(i+1) = yi + h zi
    z(i+1) = zi + h[−zi/xi − yi],  i = 0, 1, 2, ...
Thus y1 = y(1.1) and z1 = z(1.1) are given by,
    y1 = y0 + h z0 = 0.77 + 0.1 × (−0.44) = 0.726
    z1 = z0 + h[−z0/x0 − y0] = −0.44 + 0.1 × (0.44 − 0.77) = −0.44 − 0.033 = −0.473
Similarly, y2 = y(1.2) = y1 + h z1 = 0.726 − 0.1 × 0.473 = 0.679

Example 13.10. Using Euler's method, compute y(0.1) and y(0.2) for the initial
value problem,
    y″ + y = 0, y(0) = 0, y′(0) = 1
Solution. We rewrite the initial value problem as y′ = z, z′ = −y, with y(0) = 0,
z(0) = 1.
Taking h = 0.1, we have by Euler's method,
    y1 = y(0.1) = y0 + h z0 = 0 + 0.1 × 1 = 0.1
    z1 = z(0.1) = z0 + h(−y0) = 1 − 0.1 × 0 = 1.0
    y2 = y(0.2) = y1 + h z1 = 0.1 + 0.1 × 1.0 = 0.2
    z2 = z(0.2) = z1 + h(−y1) = 1.0 − 0.1 × 0.1 = 0.99
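Scheme (13.20) takes only a few lines of Python; checked here against the hand computation of Example 13.10 (function names are illustrative):

```python
def euler_system(f, g, x0, y0, z0, h, n):
    """Euler's method for the pair y' = f(x,y,z), z' = g(x,y,z)."""
    for _ in range(n):
        # tuple assignment updates y and z simultaneously from old values
        y0, z0 = y0 + h * f(x0, y0, z0), z0 + h * g(x0, y0, z0)
        x0 += h
    return y0, z0

# y'' + y = 0 rewritten as y' = z, z' = -y, with y(0) = 0, z(0) = 1
y2, z2 = euler_system(lambda x, y, z: z, lambda x, y, z: -y,
                      0.0, 0.0, 1.0, 0.1, 2)
```

Two steps reproduce y(0.2) = 0.2 and z(0.2) = 0.99 exactly as in the worked example.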

Example 13.11. For the initial value problem y″ + xy′ + y = 0, y(0) = 0, y′(0) = 1,
compute the values of y for x = 0.05, 0.10, 0.15 and 0.20, with error not exceeding
0.5 × 10⁻⁴.
Solution. We form a Taylor series expansion of suitable order. We have y(0) = 0,
y′(0) = 1, and from the differential equation,
    y″ = −xy′ − y, we get y″(0) = 0
    y‴(x) = −xy″ − 2y′,       ∴ y‴(0) = −2
    y⁽ⁱᵛ⁾(x) = −xy‴ − 3y″,    ∴ y⁽ⁱᵛ⁾(0) = 0
    y⁽ᵛ⁾(x) = −xy⁽ⁱᵛ⁾ − 4y‴,  ∴ y⁽ᵛ⁾(0) = 8
And in general, y⁽²ⁿ⁾(0) = 0, y⁽²ⁿ⁺¹⁾(0) = −2n y⁽²ⁿ⁻¹⁾(0) = (−1)ⁿ 2ⁿ n!
Thus,
    y(x) = x − x³/3 + x⁵/15 − ... + (−1)ⁿ (2ⁿ n!/(2n + 1)!) x^(2n+1) + ...
This is an alternating series whose terms decrease. Using it, the solution for y up
to x = 0.2 is given below:
    x:    0      0.05     0.10     0.15     0.20
    y(x): 0      0.0500   0.0997   0.1489   0.1973

13.7 RUNGE-KUTTA METHODS FOR A PAIR OF


EQUATIONS
Consider an initial value problem associated with a system of two first order ordinary
differential equations in the form,
    dy/dx = f(x, y, z),  dz/dx = g(x, y, z), with y(x0) = y0, z(x0) = z0.
The Runge-Kutta method of order 4 can be easily extended in the following
form,

    y(i+1) = yi + (1/6)(k1 + 2k2 + 2k3 + k4)
    z(i+1) = zi + (1/6)(l1 + 2l2 + 2l3 + l4),  i = 0, 1, 2, ...   (13.21)
where,
    k1 = h f(xi, yi, zi),                        l1 = h g(xi, yi, zi)
    k2 = h f(xi + h/2, yi + k1/2, zi + l1/2),    l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
    k3 = h f(xi + h/2, yi + k2/2, zi + l2/2),    l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
    k4 = h f(xi + h, yi + k3, zi + l3),          l4 = h g(xi + h, yi + k3, zi + l3)
    yi = y(xi), zi = z(xi), i = 0, 1, 2, ...
The solutions for y(x) and z(x) are determined at successive step points x1 = x0 + h,
x2 = x1 + h = x0 + 2h, ..., xN = x0 + Nh.
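Scheme (13.21) can be sketched in Python; the test problem y′ = z, z′ = −y (so that y = sin x, z = cos x) is an illustrative choice:

```python
import math

def rk4_pair(f, g, x, y, z, h, n):
    """n RK4 steps for the pair y' = f(x,y,z), z' = g(x,y,z)."""
    for _ in range(n):
        k1, l1 = h * f(x, y, z),                         h * g(x, y, z)
        k2, l2 = h * f(x + h/2, y + k1/2, z + l1/2),     h * g(x + h/2, y + k1/2, z + l1/2)
        k3, l3 = h * f(x + h/2, y + k2/2, z + l2/2),     h * g(x + h/2, y + k2/2, z + l2/2)
        k4, l4 = h * f(x + h, y + k3, z + l3),           h * g(x + h, y + k3, z + l3)
        y, z = y + (k1 + 2*k2 + 2*k3 + k4) / 6, z + (l1 + 2*l2 + 2*l3 + l4) / 6
        x += h
    return y, z

y, z = rk4_pair(lambda x, y, z: z, lambda x, y, z: -y, 0.0, 0.0, 1.0, 0.1, 2)
```

Two steps of size h = 0.1 already agree with sin(0.2) and cos(0.2) to about seven decimal places.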

13.8 RUNGE-KUTTA METHODS FOR A SECOND
ORDER DIFFERENTIAL EQUATION
Consider the initial value problem associated with a second order differential equation,
    d²y/dx² = g(x, y, y′)
with y(x0) = y0 and y′(x0) = α0.
On substituting z = y′, the above problem is reduced to the following problem,
    dy/dx = z,  dz/dx = g(x, y, z)
with y(x0) = y0 and z(x0) = y′(x0) = α0,
which is an initial value problem associated with a system of two first order differential
equations. Thus we can write the Runge-Kutta method for a second order differential
equation as,

    y(i+1) = yi + (1/6)(k1 + 2k2 + 2k3 + k4),
    z(i+1) = y′(i+1) = zi + (1/6)(l1 + 2l2 + 2l3 + l4),  i = 0, 1, 2, ...   (13.22)
where,
    k1 = h zi,               l1 = h g(xi, yi, zi)
    k2 = h(zi + l1/2),       l2 = h g(xi + h/2, yi + k1/2, zi + l1/2)
    k3 = h(zi + l2/2),       l3 = h g(xi + h/2, yi + k2/2, zi + l2/2)
    k4 = h(zi + l3),         l4 = h g(xi + h, yi + k3, zi + l3)
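Scheme (13.22) in Python. For the illustrative problem y″ = 2 with y(0) = 0, y′(0) = 0, the exact solution y = x² is reproduced exactly, since the right-hand side is constant:

```python
def rk4_second_order(g, x, y, z, h, n):
    """n RK4 steps for y'' = g(x, y, y'), taking z = y'."""
    for _ in range(n):
        k1 = h * z
        l1 = h * g(x, y, z)
        k2 = h * (z + l1 / 2)
        l2 = h * g(x + h / 2, y + k1 / 2, z + l1 / 2)
        k3 = h * (z + l2 / 2)
        l3 = h * g(x + h / 2, y + k2 / 2, z + l2 / 2)
        k4 = h * (z + l3)
        l4 = h * g(x + h, y + k3, z + l3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += (l1 + 2 * l2 + 2 * l3 + l4) / 6
        x += h
    return y, z

# y'' = 2, y(0) = 0, y'(0) = 0: one step of h = 0.5 gives y = 0.25 = (0.5)^2
y1, z1 = rk4_second_order(lambda x, y, z: 2.0, 0.0, 0.0, 0.0, 0.5, 1)
```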
Ordinary Differential
Equations 13.9 NUMERICAL SOLUTIONS OF BOUNDARY
VALUE PROBLEMS

NOTES We consider the solution of ordinary differential equation of order 2 or more, when
values of the dependent variable is given at more than one point, usually at the two
ends of an interval in which the solution is required. For example, the simplest
boundary value probelm associated with a second order differential equation is,
ycc +p (x) y c +q (x)y = r (x) (13.23)
With boundary conditions, y (a) = A, y (b) = B. (13.24)
The following two methods reduce the boundary value problem into initial value
problems which are then solved by any of the methods for solving such problems.
Reduction to Pair of Initial Value Problems
This method is applicable to linear differential equations only. In this method, the
solution is assumed to be a linear combination of two solutions in the form,
    y(x) = u(x) + λ v(x)   (13.25)
where λ is a suitable constant determined by using the boundary condition, and
u(x) and v(x) are the solutions of the following two initial value problems.
(i) u″ + p(x) u′ + q(x) u = r(x),
    u(a) = A, u′(a) = α1, (say).   (13.26)
(ii) v″ + p(x) v′ + q(x) v = 0,
    v(a) = 0 and v′(a) = α2, (say)   (13.27)
where α1 and α2 are arbitrarily assumed constants. Note that v satisfies the
homogeneous equation, so that y = u + λv still satisfies Equation (13.23). After
solving the two initial value problems, the constant λ is determined by satisfying the
boundary condition at x = b. Thus,
    u(b) + λ v(b) = B
or, λ = (B − u(b))/v(b)   (13.28)
Evidently, y(a) = A is already satisfied.
If v(b) = 0, then we solve the initial value problem for v again by choosing
v′(a) = α3, some other value for which v(b) will be non-zero.
Another method which is commonly used for solving boundary value problems is the
finite difference method discussed below.
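This superposition approach can be sketched for the illustrative linear boundary value problem y″ = y, y(0) = 1, y(1) = 2 (so p = 0, q = −1, r = 0); the chosen slopes α1 = 0 and α2 = 1 are arbitrary, and at x = 0.5 the exact value is 3 sinh(0.5)/sinh(1) ≈ 1.3302:

```python
import math

def rk4_second(g, y, z, h, n):
    """RK4 on y'' = g(x, y, y') from x = 0; returns a dict of x -> y."""
    x, out = 0.0, {}
    for i in range(n):
        k1 = h * z;            l1 = h * g(x, y, z)
        k2 = h * (z + l1 / 2); l2 = h * g(x + h / 2, y + k1 / 2, z + l1 / 2)
        k3 = h * (z + l2 / 2); l3 = h * g(x + h / 2, y + k2 / 2, z + l2 / 2)
        k4 = h * (z + l3);     l4 = h * g(x + h, y + k3, z + l3)
        y += (k1 + 2 * k2 + 2 * k3 + k4) / 6
        z += (l1 + 2 * l2 + 2 * l3 + l4) / 6
        x = round((i + 1) * h, 12)
        out[x] = y
    return out

g = lambda x, y, z: y                    # y'' = y
u = rk4_second(g, 1.0, 0.0, 0.01, 100)   # u(0) = A = 1, u'(0) = alpha1 = 0
v = rk4_second(g, 0.0, 1.0, 0.01, 100)   # v(0) = 0,     v'(0) = alpha2 = 1
lam = (2.0 - u[1.0]) / v[1.0]            # (13.28): enforce y(1) = B = 2
y_mid = u[0.5] + lam * v[0.5]            # (13.25) evaluated at x = 0.5
```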

Check Your Progress


6. Interpret the Euler's method for a pair of differential equations.
7. State the Runge-Kutta methods for a pair of equations.
8. Define the Runge-Kutta methods for a second order differential equation.
9. Explain the numerical solutions of boundary value problems.
13.10 ANSWERS TO CHECK YOUR PROGRESS
QUESTIONS
1. Runge-Kutta methods can be of different orders. They are very useful when
the method of Taylor series is not easy to apply because of the complexity of
finding higher order derivatives. Runge-Kutta methods attempt to get better
accuracy and at the same time obviate the need for computing higher order
derivatives. These methods, however, require the evaluation of the first order
derivatives at several off-step points.
2. This is a crude but simple method of solving a first order initial value problem:
       dy/dx = f(x, y),  y(x0) = y0
   This is derived by integrating f(x0, y0) instead of f(x, y) over a small interval,
       ∫[x0 to x0+h] dy = ∫[x0 to x0+h] f(x0, y0) dx.
   ∴ y(x0 + h) − y(x0) = h f(x0, y0)
3. In order to get somewhat moderate accuracy, Euler's method is modified
   by computing the derivative y′ = f(x, y) at a point xn as the mean of f(xn, yn)
   and f(xn+1, y⁽⁰⁾n+1), where,
       y⁽⁰⁾n+1 = yn + h f(xn, yn),
       y⁽¹⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽⁰⁾n+1)].
   This modified method is known as the Euler-Cauchy method.


4. Consider the solution of the first order differential equation,
       dy/dx = f(x, y) with y(x0) = y0
   where f(x, y) is sufficiently differentiable with respect to x and y. The solution
   y(x) of the problem can be expanded about the point x0 by a Taylor series in
   the form,
       y(x0 + h) = y(x0) + h y′(x0) + (h²/2!) y″(x0) + ... + (hᵏ/k!) y⁽ᵏ⁾(x0) + (h^(k+1)/(k+1)!) y⁽ᵏ⁺¹⁾(ξ)
5. We have seen that for finding the solution at each step, the Taylor series
   method and Runge-Kutta methods require evaluation of several derivatives.
   We shall now develop the multistep methods, which require only one derivative
   evaluation per step; but unlike the self-starting Taylor series or Runge-Kutta
   methods, the multistep methods make use of the solution at more than one
   previous step point.
Ordinary Differential 6. Consider an initial value problem associated with a pair of first order differential
Equations equation given by,
dy dz
f ( x, y, z ), g ( x, y , z )
dx dx
NOTES With y (x0) = y0, z (x0) = z0
7. Consider an initial value problem associated with a system of two first order
   ordinary differential equations in the form,
       dy/dx = f(x, y, z),  dz/dx = g(x, y, z), with y(x0) = y0, z(x0) = z0.
8. Consider the initial value problem associated with a second order differential
   equation,
       d²y/dx² = g(x, y, y′)
   with y(x0) = y0 and y′(x0) = α0.
9. We consider the solution of an ordinary differential equation of order 2 or more,
   when values of the dependent variable are given at more than one point, usually
   at the two ends of an interval in which the solution is required. For example,
   the simplest boundary value problem associated with a second order differential
   equation is,
       y″ + p(x) y′ + q(x) y = r(x)
   with boundary conditions, y(a) = A, y(b) = B.

13.11 SUMMARY
x Runge-Kutta methods can be of different orders. They are very useful when
the method of Taylor series is not easy to apply because of the complexity of
finding higher order derivatives. Runge-Kutta methods attempt to get better
accuracy and at the same time obviate the need for computing higher order
derivatives. These methods, however, require the evaluation of the first order
derivatives at several off-step points.
x Runge-Kutta methods are widely used, particularly for finding starting values
at steps x1, x2, x3, ..., since they do not require evaluation of higher order
derivatives. It is also easy to implement the method in a computer program.
x Euler's method finds a sequence of values {yk} of y for the sequence of
values {xk} of x, step by step. But to get the solution up to a desired accuracy,
we have to take the step size h to be very small. Again, the method should not
be used over a large range of x about x0, since the propagated error grows as
the integration proceeds.
x We have seen that for finding the solution at each step, the Taylor series
method and Runge-Kutta methods require evaluation of several derivatives.
We shall now develop the multistep methods, which require only one derivative
evaluation per step; but unlike the self-starting Taylor series or Runge-Kutta
methods, the multistep methods make use of the solution at more than one
previous step point.
x Another method which is commonly used for solving boundary value problems
is the finite difference method.

13.12 KEY WORDS

x Runge-Kutta methods: Runge-Kutta methods can be of different orders.


They are very useful when the method of Taylor series is not easy to apply
because of the complexity of finding higher order derivatives.
x Modified Euler’s method: In order to get somewhat moderate accuracy,
Euler’s method is modified by computing the derivatives.
x Multiple methods: Multistep methods require only one derivative evaluation
per step; but unlike the self-starting Taylor series or Runge-Kutta methods,
they make use of the solution at more than one previous step point.

13.13 SELF ASSESSMENT QUESTIONS AND


EXERCISES

Short-Answer Questions
1. State the Runge-Kutta methods.
2. Explain the Euler’s method.
3. Elaborate on the modified Euler’s method.
4. Analyse the Taylor series method.
5. Interpret the multiple methods.
6. Explain the Euler’s method for a pair of differential equation.
7. Illustrate the Runge-Kutta methods for a pair of equations.
8. Define the Runge-Kutta methods for a second order differential equation.
9. Analyse the numerical solutions of boundary value problems.
Long-Answer Questions
1. Explain the Taylor series method with the help of example.
2. Illustrate the multiple methods. Give appropriate example.
3. Define the Euler’s method for a pair of differential equation.

4. Discuss briefly the Runge-Kutta methods for a pair of equations.
5. Explain the Runge-Kutta methods for a second order differential equation.
6. Analyse the numerical solutions of boundary value problems.
13.14 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical


Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan, 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.
Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.

UNIT 14 PREDICTOR-CORRECTOR
METHODS
Structure
14.0 Introduction
14.1 Objectives
14.2 Predictor-Corrector Method
14.3 Milne’s Predictor-Corrector Method
14.4 Adam’s Predictor-Corrector Method
14.5 Answers to Check Your Progress Questions
14.6 Summary
14.7 Key Words
14.8 Self Assessment Questions and Exercises
14.9 Further Readings

14.0 INTRODUCTION

In numerical analysis, predictor–corrector methods belong to a class of algorithms


designed to integrate ordinary differential equations, and to find an unknown function
that satisfies a given differential equation. All such algorithms proceed in two steps.
The initial “prediction” step starts from a function fitted to the function-values
and derivative-values at a preceding set of points and extrapolates (anticipates) this
function’s value at a subsequent, new point. The next, “corrector” step refines the
initial approximation by using the predicted value of the function and another method
to interpolate that unknown function’s value at the same subsequent point. When
considering the numerical solution of ordinary differential equations (ODEs), a
predictor–corrector method typically uses an explicit method for the predictor
step and an implicit method for the corrector step.
In this unit, you will study about the predictor-corrector methods, Milne’s
predictor-corrector methods, and Adam’s predictor-corrector methods.

14.1 OBJECTIVES

After going through this unit, you will be able to:


x Understand the predictor-corrector methods
x Explain the Milne’s predictor-corrector method
x Analyse Adam’s predictor-corrector methods

14.2 PREDICTOR-CORRECTOR METHOD
These methods use a pair of multistep numerical integration formulas. The first is the
Predictor formula, which is an open-type explicit formula derived by using, in the
integral, an interpolation formula which interpolates at the points xn, xn−1, ..., xn−m.
The second is the Corrector formula, which is obtained by using an interpolation
formula that interpolates at the points xn+1, xn, ..., xn−p in the integral.

14.3 MILNE’S PREDICTOR-CORRECTOR METHOD


The simplest formula of the type is a pair of formulas given by,
    y⁽ᵖ⁾n+1 = yn + h f(xn, yn)   (14.1)
    y⁽ᶜ⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽ᵖ⁾n+1)]   (14.2)
In order to determine the solution of the problem up to a desired accuracy, the
corrector formula can be employed in an iterative manner as shown below.
Step 1. Compute y⁽⁰⁾n+1 using the predictor formula (14.1),
    i.e., y⁽⁰⁾n+1 = yn + h f(xn, yn)
Step 2. Compute y⁽ᵏ⁾n+1 using the corrector formula (14.2) iteratively,
    i.e., y⁽ᵏ⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽ᵏ⁻¹⁾n+1)], for k = 1, 2, 3, ...
The computation is continued till the condition given below is satisfied,
    |y⁽ᵏ⁾n+1 − y⁽ᵏ⁻¹⁾n+1| / |y⁽ᵏ⁾n+1| < ε   (14.5)
where ε is the prescribed accuracy.


It may be noted that the accuracy achieved will depend on step size h and, by
the local error. The local error in the predictor and corrector formula are,

h2 h3
y cc(K1 ) and  y ccc(K2 ), respectively..
2 12
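The pair (14.1)-(14.2) with the stopping test (14.5) can be sketched in Python (the test problem y′ = −y, the tolerance and the iteration cap are illustrative assumptions):

```python
import math

def pc_solve(f, x0, y0, h, n, eps=1e-8, kmax=20):
    """Euler predictor (14.1) with the corrector (14.2) iterated until test (14.5)."""
    for _ in range(n):
        fn = f(x0, y0)
        y_new = y0 + h * fn                                   # predictor
        for _ in range(kmax):
            y_old = y_new
            y_new = y0 + h / 2 * (fn + f(x0 + h, y_old))      # corrector
            if abs(y_new - y_old) <= eps * abs(y_new):
                break
        x0, y0 = x0 + h, y_new
    return y0

y_end = pc_solve(lambda x, y: -y, 0.0, 1.0, 0.05, 20)   # y' = -y; exact e^{-1} at x = 1
```

With h = 0.05 the corrector converges in a handful of iterations per step (the contraction factor is roughly h/2) and the result agrees with e⁻¹ to about four decimal places, consistent with the O(h³) local error of the corrector.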

14.4 ADAM’S PREDICTOR-CORRECTOR METHOD

Single-step methods (such as Euler's method) refer to only one previous point
and its derivative to determine the current value. Methods such as Runge-Kutta
take some intermediate steps (for example, a half-step) to obtain a higher order
method, but then discard all previous information before taking a second step.
Multistep methods attempt to gain efficiency by keeping and using the information
from previous steps rather than discarding it. Consequently, multistep methods
refer to several previous points and derivative values.
There are further useful linear multistep methods, such as the two-step
Adams–Bashforth method and the Adams–Moulton methods. The two-step Adams–
Bashforth method is more accurate than Euler's method. This is always the case
if the step size is small enough.
Adams–Bashforth methods

The Adams–Bashforth methods are explicit methods: the new value y(n+s) is
computed from the single previous value y(n+s−1), while the coefficients
b0, ..., b(s−1) multiplying the derivative values are chosen such that the method
has order s (this determines the method uniquely).

The Adams–Bashforth methods with s = 1, 2, 3, 4, 5 are (writing fk for f(tk, yk)):
    y(n+1) = yn + h fn    (this is the Euler method)
    y(n+2) = y(n+1) + h[(3/2) f(n+1) − (1/2) fn]
    y(n+3) = y(n+2) + h[(23/12) f(n+2) − (16/12) f(n+1) + (5/12) fn]
    y(n+4) = y(n+3) + h[(55/24) f(n+3) − (59/24) f(n+2) + (37/24) f(n+1) − (9/24) fn]
    y(n+5) = y(n+4) + h[(1901/720) f(n+4) − (2774/720) f(n+3) + (2616/720) f(n+2) − (1274/720) f(n+1) + (251/720) fn]
The coefficients can be determined as follows. Use polynomial
interpolation to find the polynomial p of degree s − 1 such that
    p(t(n+i)) = f(t(n+i), y(n+i)),  for i = 0, ..., s − 1.
The Lagrange formula for polynomial interpolation expresses p as a combination of
the values f(t(n+i), y(n+i)) weighted by the Lagrange basis polynomials.
The polynomial p is locally a good approximation of the right-hand side of
the differential equation y′ = f(t, y) that is to be solved, so consider the equation
y′ = p(t) instead. This equation can be solved exactly; the solution is simply the
integral of p. This suggests taking
    y(n+s) = y(n+s−1) + ∫[t(n+s−1) to t(n+s)] p(t) dt.
The Adams–Bashforth method arises when the formula for p is substituted.
The coefficients turn out to be given by
    b(s−j−1) = ((−1)ʲ / (j!(s−j−1)!)) ∫₀¹ ∏(i = 0 to s−1, i ≠ j) (u + i) du,  j = 0, ..., s − 1.
Replacing f(t, y) by its interpolant p incurs an error of order hˢ, and it
follows that the s-step Adams–Bashforth method has indeed order s.
The Adams–Bashforth methods were designed by John Couch Adams to
solve a differential equation modelling capillary action due to Francis Bashforth.
Bashforth (1883) published his theory and Adams’ numerical method (Goldstine
1977).
Adams–Moulton methods
The Adams–Moulton methods are similar to the Adams–Bashforth methods in
that the new value is again obtained from the single previous value, and again the b
coefficients are chosen to obtain the highest order possible. However, the Adams–
Moulton methods are implicit methods. By removing the restriction that the
coefficient of the newest derivative value f(t(n+s), y(n+s)) be zero, an s-step
Adams–Moulton method can reach order s + 1, while an s-step
Adams–Bashforth method has only order s.
The Adams–Moulton methods with s = 0, 1, 2, 3, 4 are (writing fk for f(tk, yk)):
    yn = y(n−1) + h fn    (this is the backward Euler method)
    y(n+1) = yn + (h/2)[f(n+1) + fn]    (this is the trapezoidal rule)
    y(n+2) = y(n+1) + h[(5/12) f(n+2) + (8/12) f(n+1) − (1/12) fn]
    y(n+3) = y(n+2) + h[(9/24) f(n+3) + (19/24) f(n+2) − (5/24) f(n+1) + (1/24) fn]
    y(n+4) = y(n+3) + h[(251/720) f(n+4) + (646/720) f(n+3) − (264/720) f(n+2) + (106/720) f(n+1) − (19/720) fn]
The derivation of the Adams–Moulton methods is similar to that of the
Adams–Bashforth methods; however, the interpolating polynomial uses not only
the points t n, ..., t(n+s−1), as above, but also t(n+s). The coefficients are given by
    b(s−j) = ((−1)ʲ / (j!(s−j)!)) ∫₀¹ ∏(i = 0 to s, i ≠ j) (u + i − 1) du,  j = 0, ..., s.
The Adams–Moulton methods are solely due to John Couch Adams, like
the Adams–Bashforth methods. The name of Forest Ray Moulton became
associated with these methods because he realized that they could be used in
tandem with the Adams–Bashforth methods as a predictor-corrector pair (Moulton
1926); Milne (1926) had the same idea. Adams used Newton’s method to solve
the implicit equation.
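The predictor-corrector pairing described above can be sketched with the two-step Adams–Bashforth formula as predictor and the trapezoidal rule (the s = 1 Adams–Moulton formula) as a single corrector pass; the test problem and the Euler start-up step are illustrative assumptions:

```python
import math

def ab2_trapezoid(f, x0, y0, h, n):
    """Two-step Adams-Bashforth predictor + one trapezoidal (Adams-Moulton) correction."""
    ys = [y0, y0 + h * f(x0, y0)]            # one Euler step supplies y1
    fs = [f(x0, y0), f(x0 + h, ys[1])]
    for i in range(1, n):
        x_next = x0 + (i + 1) * h
        y_pred = ys[i] + h / 2 * (3 * fs[i] - fs[i - 1])      # AB2 predictor
        y_corr = ys[i] + h / 2 * (fs[i] + f(x_next, y_pred))  # trapezoidal corrector
        ys.append(y_corr)
        fs.append(f(x_next, y_corr))
    return ys[-1]

y_end = ab2_trapezoid(lambda x, y: -y, 0.0, 1.0, 0.05, 20)   # exact e^{-1} at x = 1
```

The corrector is applied once per step rather than iterated to convergence, which is the usual "PEC" arrangement; it avoids solving the implicit equation while retaining second-order accuracy.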

Check Your Progress


1. What do you understand by the predictor-corrector method?
2. Explain the Milne's predictor-corrector method.
3. Elaborate on the Adams-Bashforth methods.
4. Analyse the Adams-Moulton methods.

14.5 ANSWERS TO CHECK YOUR PROGRESS


QUESTIONS
1. These methods use a pair of multistep numerical integration formulas. The first
   is the Predictor formula, which is an open-type explicit formula derived by using,
   in the integral, an interpolation formula which interpolates at the points xn,
   xn−1, ..., xn−m. The second is the Corrector formula, which is obtained by using
   an interpolation formula that interpolates at the points xn+1, xn, ..., xn−p in the
   integral.
2. The simplest formula of the type is a pair of formulas given by,
       y⁽ᵖ⁾n+1 = yn + h f(xn, yn)
       y⁽ᶜ⁾n+1 = yn + (h/2)[f(xn, yn) + f(xn+1, y⁽ᵖ⁾n+1)]
   In order to determine the solution of the problem up to a desired accuracy, the
   corrector formula can be employed in an iterative manner.

3. The Adams–Bashforth methods are explicit methods. The coefficients are
   a_{s−1} = −1 and a_{s−2} = ... = a_0 = 0, while the b_j are chosen such
   that the methods have order s (this determines the methods uniquely).
4. The Adams–Moulton methods are similar to the Adams–Bashforth methods
   in that they also have a_{s−1} = −1 and a_{s−2} = ... = a_0 = 0. Again,
   the b coefficients are chosen to obtain the highest order possible. However,
   the Adams–Moulton methods are implicit methods. By removing the
   restriction that b_s = 0, an s-step Adams–Moulton method can reach
   order s + 1, while an s-step Adams–Bashforth method has only order s.
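The iterative use of the corrector described in answer 2 can be sketched in Python. This is an illustrative fragment of ours, not code from the text (the names euler_trapezoid, tol and max_iter are ours): the Euler formula supplies the predicted value, and the trapezoidal corrector is re-applied until two successive iterates agree to within a tolerance:

```python
def euler_trapezoid(f, x0, y0, h, n_steps, tol=1e-10, max_iter=50):
    """Euler predictor with the trapezoidal corrector applied iteratively:
    the corrector is repeated until successive iterates agree to tol."""
    x, y = x0, y0
    for _ in range(n_steps):
        fn = f(x, y)
        y_next = y + h * fn                                # predictor
        for _ in range(max_iter):
            y_new = y + h * (fn + f(x + h, y_next)) / 2    # corrector
            converged = abs(y_new - y_next) < tol
            y_next = y_new
            if converged:
                break
        x, y = x + h, y_next
    return y

# y' = -y, y(0) = 1 integrated to x = 1; compare with exp(-1) = 0.367879...
print(euler_trapezoid(lambda x, y: -y, 0.0, 1.0, 0.1, 10))
```

Because the corrector is iterated to convergence, each step effectively solves the implicit trapezoidal equation, so the method is second-order accurate in h.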

14.6 SUMMARY
• These methods use a pair of multistep numerical integration formulae. The
  first is the Predictor formula, an open-type explicit formula derived by using,
  in the integral, an interpolation formula which interpolates at the points
  xn, xn−1, ..., xn−m. The second is the Corrector formula, which is obtained
  by using an interpolation formula that interpolates at the points
  xn+1, xn, ..., xn−p in the integral.
• The simplest pair of formulae of this type is given by,

  y_{n+1}^{(p)} = y_n + h f(x_n, y_n)

  y_{n+1}^{(c)} = y_n + (h/2) [f(x_n, y_n) + f(x_{n+1}, y_{n+1}^{(p)})]

  In order to determine the solution of the problem up to a desired accuracy,
  the corrector formula can be employed in an iterative manner.
• The Adams–Bashforth methods are explicit methods. The coefficients are
  a_{s−1} = −1 and a_{s−2} = ... = a_0 = 0, while the b_j are chosen such
  that the methods have order s (this determines the methods uniquely).
• The Adams–Moulton methods are similar to the Adams–Bashforth methods
  in that they also have a_{s−1} = −1 and a_{s−2} = ... = a_0 = 0. Again,
  the b coefficients are chosen to obtain the highest order possible. However,
  the Adams–Moulton methods are implicit methods. By removing the
  restriction that b_s = 0, an s-step Adams–Moulton method can reach
  order s + 1, while an s-step Adams–Bashforth method has only order s.

14.7 KEY WORDS

• Predictor-corrector method: These methods use a pair of multistep
  numerical integration formulae. The first is the predictor formula, which is
  an open-type explicit formula derived by using, in the integral, an
  interpolation formula which interpolates at the points xn, xn−1, ..., xn−m.

14.8 SELF ASSESSMENT QUESTIONS AND EXERCISES

Short-Answer Questions
1. Explain the predictor-corrector method.
2. State Milne's predictor-corrector method.
3. Define the Adams predictor-corrector method.
4. Elaborate on the Adams-Bashforth methods.
5. Interpret the Adams-Moulton methods.
Long-Answer Questions
1. Describe the predictor-corrector method. Give appropriate example.
2. Briefly discuss Milne's predictor-corrector method.
3. Define the Adams-Bashforth methods.
4. Explain the Adams-Moulton methods.

14.9 FURTHER READINGS

Arumugam, S., A. Thangapandi Isaac and A. Somasundaram. 2012. Numerical
Methods. Chennai (Tamil Nadu): Scitech Publications Pvt. Ltd.
Gupta, Dr. P. P., Dr. G. S. Malik and J. P. Chauhan. 2020. Calculus of Finite
Differences & Numerical Analysis, 45th Edition. Meerut (UP): Krishna
Prakashan Media Pvt. Ltd.
Venkataraman, Dr. M. K. 1999. Numerical Methods in Science and
Engineering. Chennai (Tamil Nadu): The National Publishing Company.
Skeel, Robert. D and Jerry B. Keiper. 1993. Elementary Numerical Computing
with Mathematica. New York: McGraw-Hill.
Balaguruswamy, E. 1999. Numerical Methods. New Delhi: Tata McGraw-Hill.

Conte, Samuel D. and Carl de Boor. 1980. Elementary Numerical Analysis:
An Algorithmic Approach. New York: McGraw-Hill.
Datta, N. 2007. Computer Oriented Numerical Methods. New Delhi: Vikas
Publishing House Pvt. Ltd.
Sastry, S. S. 2012. Introductory Methods of Numerical Analysis, 5th Edition.
Prentice Hall of India Pvt. Ltd.
