
Chapter 8

Numerical Solution of
Ordinary Differential Equations

8.1 INTRODUCTION

Many problems in science and engineering can be reduced to the problem of solving differential equations satisfying certain given conditions. The analytical methods of solution, with which the reader is assumed to be familiar, can be applied to only a selected class of differential equations. The equations which govern physical systems do not, in general, possess closed-form solutions, and hence recourse must be made to numerical methods for solving such differential equations.
To describe various numerical methods for the solution of ordinary
differential equations, we consider the general first order differential equation
dy/dx = f(x, y)    (8.1a)
with the initial condition,
y(x0) = y0    (8.1b)
and illustrate the theory with respect to this equation. The methods so developed
can, in general, be applied to the solution of systems of first-order equations,
and will yield the solution in one of the two forms:
(i) A series for y in terms of powers of x, from which the value of y
can be obtained by direct substitution.
(ii) A set of tabulated values of x and y.
The methods of Taylor and Picard belong to class (i), whereas those of
Euler, Runge–Kutta, Adams–Bashforth, etc., belong to class (ii). These latter


methods are called step-by-step methods or marching methods because the


values of y are computed by short steps ahead for equal intervals h of the
independent variable. In the methods of Euler and Runge–Kutta, the interval
length h should be kept small and hence these methods can be applied for
tabulating y over a limited range only. If, however, the function values are
desired over a wider range, the methods due to Adams–Bashforth, Adams–
Moulton, Milne, etc., may be used. These methods use finite-differences and
require ‘starting values’ which are usually obtained by Taylor’s series or
Runge–Kutta methods.
It is well-known that a differential equation of the nth order will have
n arbitrary constants in its general solution. In order to compute the numerical
solution of such an equation, we therefore need n conditions. Problems in
which all the initial conditions are specified at the initial point only are called
initial value problems. For example, the problem defined by Eqs. (8.1) is an
initial value problem. On the other hand, in problems involving second-and
higher-order differential equations, we may prescribe the conditions at two
or more points. Such problems are called boundary value problems.
We shall first describe methods for solving initial value problems of the
type (8.1), and at the end of the chapter we will outline methods for solving
boundary value problems for second-order differential equations.

8.2 SOLUTION BY TAYLOR’S SERIES


We consider the differential equation
y' = f(x, y)    (8.1a)
with the initial condition
y(x0) = y0.    (8.1b)
If y (x) is the exact solution of Eq. (8.1), then the Taylor’s series for y (x)
around x = x0 is given by
y(x) = y0 + (x − x0) y0' + ((x − x0)^2/2!) y0'' + …    (8.2)
If the values of y0', y0'', … are known, then Eq. (8.2) gives a power series for y. Using the formula for total derivatives, we can write
y'' = f' = f_x + y' f_y = f_x + f f_y,
where the suffixes denote partial derivatives with respect to the variable
concerned. Similarly, we obtain
y''' = f'' = f_{xx} + f_{xy} f + f (f_{yx} + f_{yy} f) + f_y (f_x + f_y f)
      = f_{xx} + 2 f f_{xy} + f^2 f_{yy} + f_x f_y + f f_y^2,
and other higher derivatives of y. The method can easily be extended to
simultaneous and higher-order differential equations.

Example 8.1 From the Taylor series for y (x), find y (0.1) correct to four
decimal places if y (x) satisfies
y' = x − y^2  and  y(0) = 1.
The Taylor series for y (x) is given by
y(x) = 1 + x y0' + (x^2/2) y0'' + (x^3/6) y0''' + (x^4/24) y0^iv + (x^5/120) y0^v + …
The derivatives y0', y0'', etc. are obtained thus:

y'(x) = x − y^2,                       y0' = −1
y''(x) = 1 − 2yy',                     y0'' = 3
y'''(x) = −2yy'' − 2y'^2,              y0''' = −8
y^iv(x) = −2yy''' − 6y'y'',            y0^iv = 34
y^v(x) = −2yy^iv − 8y'y''' − 6y''^2,   y0^v = −186

Using these values, the Taylor series becomes


y(x) = 1 − x + (3/2) x^2 − (4/3) x^3 + (17/12) x^4 − (31/20) x^5 + …
To obtain the value of y (0.1) correct to four decimal places, it is found that
the terms up to x4 should be considered, and we have y (0.1) = 0.9138.
Suppose that we wish to find the range of values of x for which the
above series, truncated after the term containing x4, can be used to compute
the values of y correct to four decimal places. We need only to write
(31/20) x^5 ≤ 0.00005,  or  x ≤ 0.126.
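The derivative recurrences obtained above can be checked numerically. The Python sketch below (the helper names are ours, not the book's) evaluates the derivatives at x = 0 and sums the truncated Taylor series:

```python
# Taylor-series solution of y' = x - y^2, y(0) = 1 (Example 8.1).
from math import factorial

def taylor_coefficients():
    """Return y(0), y'(0), ..., y^(v)(0) from the recurrences derived above."""
    x, y = 0.0, 1.0
    d1 = x - y**2                       # y'   = x - y^2
    d2 = 1 - 2*y*d1                     # y''  = 1 - 2yy'
    d3 = -2*y*d2 - 2*d1**2              # y''' = -2yy'' - 2y'^2
    d4 = -2*y*d3 - 6*d1*d2              # y^iv = -2yy''' - 6y'y''
    d5 = -2*y*d4 - 8*d1*d3 - 6*d2**2    # y^v  = -2yy^iv - 8y'y''' - 6y''^2
    return [y, d1, d2, d3, d4, d5]

def taylor_sum(x):
    """Evaluate the truncated Taylor series about x = 0."""
    return sum(d * x**k / factorial(k)
               for k, d in enumerate(taylor_coefficients()))

print(round(taylor_sum(0.1), 4))   # agrees with y(0.1) = 0.9138 above
```

The recurrences reproduce the derivative values −1, 3, −8, 34, −186 computed in the text.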
Example 8.2 Given the differential equation
y ′′ − xy ′ − y = 0
with the conditions y(0) = 1 and y'(0) = 0, use Taylor's series method to
determine the value of y (0.1).
We have y (x) = 1 and y¢(x) = 0 when x = 0. The given differential
equation is
y ′′( x) = xy ′( x) + y ( x) (i)
Hence y''(0) = y(0) = 1. Successive differentiation of (i) gives
y'''(x) = xy''(x) + y'(x) + y'(x) = xy''(x) + 2y'(x),    (ii)
y^iv(x) = xy'''(x) + y''(x) + 2y''(x) = xy'''(x) + 3y''(x),    (iii)
y^v(x) = xy^iv(x) + y'''(x) + 3y'''(x) = xy^iv(x) + 4y'''(x),    (iv)
y^vi(x) = xy^v(x) + y^iv(x) + 4y^iv(x) = xy^v(x) + 5y^iv(x),    (v)

and similarly for higher derivatives. Putting x = 0 in (ii) to (v), we obtain


y'''(0) = 2y'(0) = 0,  y^iv(0) = 3y''(0) = 3,  y^v(0) = 0,  y^vi(0) = 5.
By Taylor’s series, we have

y(x) = y(0) + x y'(0) + (x^2/2) y''(0) + (x^3/6) y'''(0) + (x^4/24) y^iv(0) + (x^5/120) y^v(0) + (x^6/720) y^vi(0) + …
Hence

y(0.1) = 1 + (0.1)^2/2 + ((0.1)^4/24)(3) + ((0.1)^6/720)(5) + …
       = 1 + 0.005 + 0.0000125, neglecting the last term
       = 1.0050125, correct to seven decimal places.
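As a quick numerical check of the series just obtained (a small sketch; the list of derivative values is taken from (i)–(v) above):

```python
# Series check for y'' = xy' + y, y(0) = 1, y'(0) = 0 (Example 8.2):
# only the even-order derivatives survive at x = 0.
from math import factorial

derivs = [1, 0, 1, 0, 3, 0, 5]     # y(0), y'(0), ..., y^vi(0)
y_01 = sum(d * 0.1**k / factorial(k) for k, d in enumerate(derivs))
print(round(y_01, 7))              # matches 1.0050125 above
```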

8.3 PICARD’S METHOD OF SUCCESSIVE APPROXIMATIONS

Integrating the differential equation given in Eq. (8.1), we obtain


y = y0 + ∫_{x0}^{x} f(x, y) dx.    (8.3)

Equation (8.3), in which the unknown function y appears under the integral
sign, is called an integral equation. Such an equation can be solved by the
method of successive approximations in which the first approximation to y
is obtained by putting y0 for y on the right side of Eq. (8.3), and we write
y^(1) = y0 + ∫_{x0}^{x} f(x, y0) dx.
The integral on the right can now be evaluated, and the resulting y^(1) is substituted for y in the integrand of Eq. (8.3) to obtain the second approximation y^(2):
y^(2) = y0 + ∫_{x0}^{x} f(x, y^(1)) dx.

Proceeding in this way, we obtain y^(3), y^(4), …, y^(n−1) and y^(n), where

y^(n) = y0 + ∫_{x0}^{x} f(x, y^(n−1)) dx,  with y^(0) = y0.    (8.4)

Hence this method yields a sequence of approximations y^(1), y^(2), …, y^(n), and


it can be proved (see, for example, the book by Levy and Baggot) that if
the function f (x, y) is bounded in some region about the point ( x0 , y0 ) and
if f ( x, y ) satisfies the Lipschitz condition, viz.,
|f(x, y) − f(x, ȳ)| ≤ K |y − ȳ|,  K being a constant,    (8.5)

then the sequence y^(1), y^(2), … converges to the solution of Eq. (8.1).
Example 8.3 Solve the equation y' = x + y^2, subject to the condition y = 1
when x = 0.
We start with y^(0) = 1 and obtain

y^(1) = 1 + ∫_0^x (x + 1) dx = 1 + x + (1/2) x^2.
Then the second approximation is
y^(2) = 1 + ∫_0^x [x + (1 + x + (1/2) x^2)^2] dx

      = 1 + x + (3/2) x^2 + (2/3) x^3 + (1/4) x^4 + (1/20) x^5.
It is obvious that the integrations might become more and more difficult as
we proceed to higher approximations.
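Since each Picard iterate here is a polynomial, the integrations can be automated with exact rational coefficient arithmetic. A minimal Python sketch (the polynomial helpers are ours, not the book's), which reproduces the two approximations above:

```python
# Picard iteration for y' = x + y^2, y(0) = 1 (Example 8.3).
# A polynomial is a list of coefficients [a0, a1, ...] meaning a0 + a1*x + ...
from fractions import Fraction

def poly_mul(p, q):
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def poly_integrate(p):
    """Integrate term by term from 0 to x."""
    return [Fraction(0)] + [a / (k + 1) for k, a in enumerate(p)]

y = [Fraction(1)]                        # y^(0) = 1
for _ in range(2):
    integrand = poly_mul(y, y)           # y^2
    integrand += [Fraction(0)] * max(0, 2 - len(integrand))
    integrand[1] += 1                    # + x
    y = poly_integrate(integrand)
    y[0] += 1                            # + y(0)

print(y)   # coefficients 1, 1, 3/2, 2/3, 1/4, 1/20 of the second approximation
```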
Example 8.4 Given the differential equation
dy/dx = x^2/(y^2 + 1)
with the initial condition y = 0 when x = 0, use Picard’s method to obtain y
for x = 0.25, 0.5 and 1.0 correct to three decimal places.
We have

y = ∫_0^x x^2/(y^2 + 1) dx.
Setting y^(0) = 0, we obtain

y^(1) = ∫_0^x x^2 dx = (1/3) x^3,
and

y^(2) = ∫_0^x x^2/((1/9) x^6 + 1) dx = tan^{−1}((1/3) x^3) = (1/3) x^3 − (1/81) x^9 + …

so that y^(1) and y^(2) agree in the first term, viz., (1/3)x^3. To find the range of values of x for which the series with the term (1/3)x^3 alone will give the result correct to three decimal places, we put
(1/81) x^9 ≤ 0.0005,
which yields
x ≤ 0.7
Hence

y(0.25) = (1/3)(0.25)^3 = 0.005,
y(0.5) = (1/3)(0.5)^3 = 0.042,
y(1.0) = 1/3 − 1/81 = 0.321.

8.4 EULER’S METHOD

We have so far discussed the methods which yield the solution of a differential
equation in the form of a power series. We will now describe the methods
which give the solution in the form of a set of tabulated values.
Suppose that we wish to solve Eqs. (8.1) for values of y at x = x_r = x0 + rh (r = 1, 2, …). Integrating Eq. (8.1), we obtain
y1 = y0 + ∫_{x0}^{x1} f(x, y) dx.    (8.6)

Assuming that f(x, y) = f(x0, y0) in x0 ≤ x ≤ x1, this gives Euler's formula

y1 ≈ y0 + h f(x0, y0).    (8.7a)
Similarly for the range x1 ≤ x ≤ x2 , we have
y2 = y1 + ∫_{x1}^{x2} f(x, y) dx.
Substituting f(x1, y1) for f(x, y) in x1 ≤ x ≤ x2, we obtain

y2 ≈ y1 + h f(x1, y1).    (8.7b)
Proceeding in this way, we obtain the general formula
y_{n+1} = y_n + h f(x_n, y_n),  n = 0, 1, 2, …    (8.8)
The process is very slow, and to obtain reasonable accuracy with Euler's method, we need to take a small value for h. Because of this restriction

on h, the method is unsuitable for practical use and a modification of it,


known as the modified Euler method, which gives more accurate results, will
be described in Section 8.4.2.

Example 8.5 To illustrate Euler’s method, we consider the differential


equation y' = −y with the condition y(0) = 1.
Successive application of Eq. (8.8) with h = 0.01 gives

y(0.01) = 1 + 0.01(−1) = 0.99

y (0.02) = 0.99 + 0.01 (−0.99) = 0.9801

y (0.03) = 0.9801 + 0.01 (−0.9801) = 0.9703

y (0.04) = 0.9703 + 0.01 (−0.9703) = 0.9606.

The exact solution is y = e^{−x}, and from this the value at x = 0.04 is 0.9608.
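Formula (8.8) is a one-line loop. A minimal Python sketch reproducing the computation above (the function name is ours):

```python
# Euler's method (8.8) applied to y' = -y, y(0) = 1 with h = 0.01 (Example 8.5).
def euler(f, x0, y0, h, n_steps):
    x, y = x0, y0
    for _ in range(n_steps):
        y = y + h * f(x, y)    # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h
    return y

y_004 = euler(lambda x, y: -y, 0.0, 1.0, 0.01, 4)
print(round(y_004, 4))         # 0.9606, against the exact e^{-0.04} = 0.9608
```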

8.4.1 Error Estimates for the Euler Method


Let the true solution of the differential equation at x = xn be y(xn) and also
let the approximate solution be yn. Now, expanding y(xn+1) by Taylor’s
series, we get

y(x_{n+1}) = y(x_n) + h y'(x_n) + (h^2/2) y''(x_n) + …
           = y(x_n) + h y'(x_n) + (h^2/2) y''(ξ_n),  where x_n ≤ ξ_n ≤ x_{n+1}.    (8.9)
2
We usually encounter two types of errors in the solution of differential
equations. These are (i) local errors, and (ii) rounding errors. The local
error is the result of replacing the given differential equation by means of
the equation
y_{n+1} = y_n + h y_n'.
This error is given by
L_{n+1} = −(1/2) h^2 y''(ξ_n).    (8.10)
The total error is then defined by
e_n = y_n − y(x_n).    (8.11)
Since y0 is exact, it follows that e0 = 0.
Neglecting the rounding error, we write the total solution error as
e_{n+1} = y_{n+1} − y(x_{n+1})
        = y_n + h y_n' − [y(x_n) + h y'(x_n) − L_{n+1}]
        = e_n + h [f(x_n, y_n) − y'(x_n)] + L_{n+1}

⇒ e_{n+1} = e_n + h [f(x_n, y_n) − f(x_n, y(x_n))] + L_{n+1}.
By the mean value theorem, we write

f(x_n, y_n) − f(x_n, y(x_n)) = [y_n − y(x_n)] ∂f/∂y (x_n, ȳ_n),  y(x_n) ≤ ȳ_n ≤ y_n.
Hence, we have
e_{n+1} = e_n [1 + h f_y(x_n, ȳ_n)] + L_{n+1}.    (8.12)
Since e0 = 0, we obtain successively:

e1 = L1;  e2 = [1 + h f_y(x1, ȳ1)] L1 + L2;
e3 = [1 + h f_y(x2, ȳ2)] {[1 + h f_y(x1, ȳ1)] L1 + L2} + L3;  etc.

See the book by Isaacson and Keller [1966] for more details.
Example 8.6 We consider, again, the differential equation y ′ = − y with the
condition y (0) = 1, which we have solved by Euler’s method in Example 8.5.
Choosing h = 0.01, we have
1 + h f_y(x_n, ȳ_n) = 1 + 0.01(−1) = 0.99,
and
L_{n+1} = −(1/2) h^2 y''(ξ_n) = −0.00005 y(ξ_n).
In this problem, y(ξ_n) ≤ y(x_n), since y' is negative. Hence we successively
obtain

| L1 | ≤ 0.00005 = 5 × 10−5,

| L2 | ≤ (0.00005) (0.99) < 5 × 10−5,

| L3 | ≤ (0.00005) (0.9801) < 5 × 10−5,


and so on. For computing the total solution error, we need an estimate of
the rounding error. If we neglect the rounding error, i.e., if we set
Rn +1 = 0,
then using the above bounds, we obtain from Eq. (8.12) the estimates
e0 = 0,
|e1| ≤ 5 × 10^−5,
|e2| ≤ 0.99 |e1| + 5 × 10^−5 < 10^−4,
|e3| ≤ 0.99 |e2| + 5 × 10^−5 < 10^−4 + 5 × 10^−5,
|e4| ≤ 0.99 |e3| + 5 × 10^−5 < 10^−4 + 10^−4 = 2 × 10^−4 = 0.0002,

and so on.

It can be verified that the estimate for e4 agrees with the actual error in the
value of y(0.04) obtained in Example 8.5.
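The bound can indeed be checked against the actual errors of Example 8.5. A short Python sketch (assuming the same h = 0.01 and the exact solution e^{−x}):

```python
# Checking the error estimates of Example 8.6: the actual Euler errors for
# y' = -y, y(0) = 1, h = 0.01 stay within the bounds |e_n| derived above.
import math

h, y = 0.01, 1.0
errors = []
for n in range(1, 5):
    y = y + h * (-y)                      # one Euler step
    errors.append(abs(y - math.exp(-n * h)))

print(errors[-1] <= 2e-4)   # |e_4| does not exceed the estimate 0.0002
```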

8.4.2 Modified Euler’s Method


Instead of approximating f ( x, y ) by f ( x0 , y0 ) in Eq. (8.6), we now
approximate the integral given in Eq. (8.6) by means of trapezoidal rule to
obtain
y1 = y0 + (h/2) [f(x0, y0) + f(x1, y1)].    (8.13)
We thus obtain the iteration formula
y1^(n+1) = y0 + (h/2) [f(x0, y0) + f(x1, y1^(n))],  n = 0, 1, 2, …    (8.14)
where y1^(n) is the nth approximation to y1. The iteration formula (8.14) can be started by choosing y1^(0) from Euler's formula:

y1^(0) = y0 + h f(x0, y0).

Example 8.7 Determine the value of y when x = 0.1 given that


y(0) = 1 and y' = x^2 + y.
We take h = 0.05. With x0 = 0 and y0 =1.0, we have f ( x0 , y0 ) = 1.0. Hence
Euler’s formula gives
y1^(0) = 1 + 0.05(1) = 1.05.

Further, x1 = 0.05 and f(x1, y1^(0)) = 1.0525. The average of f(x0, y0) and f(x1, y1^(0)) is 1.0262. The value of y1^(1) can therefore be computed by using Eq. (8.14), and we obtain

y1^(1) = 1.0513.

Repeating the procedure, we obtain y1^(2) = 1.0513. Hence we take y1 = 1.0513, which is correct to four decimal places.
Next, with x1 = 0.05, y1 = 1.0513 and h = 0.05, we continue the procedure
to obtain y2, i.e., the value of y when x = 0.1. The results are
y2^(0) = 1.1040,  y2^(1) = 1.1055,  y2^(2) = 1.1055.

Hence we conclude that the value of y when x = 0.1 is 1.1055.
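The iteration (8.14) is easy to automate. The sketch below (our function names) iterates the trapezoidal corrector to a tolerance; carried in full precision it gives y(0.1) ≈ 1.1056, the text's 1.1055 reflecting four-decimal rounding of the intermediate values:

```python
# Modified Euler method (8.14) for y' = x^2 + y, y(0) = 1, h = 0.05 (Example 8.7).
def modified_euler_step(f, x0, y0, h, tol=1e-6):
    x1 = x0 + h
    y1 = y0 + h * f(x0, y0)                       # Euler predictor y1^(0)
    while True:
        y1_new = y0 + h / 2 * (f(x0, y0) + f(x1, y1))
        if abs(y1_new - y1) < tol:
            return y1_new
        y1 = y1_new

f = lambda x, y: x**2 + y
y1 = modified_euler_step(f, 0.0, 1.0, 0.05)       # y(0.05), about 1.0513
y2 = modified_euler_step(f, 0.05, y1, 0.05)       # y(0.1)
print(round(y1, 4), round(y2, 4))
```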

8.5 RUNGE–KUTTA METHODS

As already mentioned, Euler’s method is less efficient in practical problems


since it requires h to be small for obtaining reasonable accuracy. The

Runge–Kutta methods are designed to give greater accuracy and they possess
the advantage of requiring only the function values at some selected points
on the subinterval.
If we substitute y1 = y0 + h f(x0, y0) on the right side of Eq. (8.13), we obtain

y1 = y0 + (h/2) [f0 + f(x0 + h, y0 + h f0)],
where f0 = f(x0, y0). If we now set

k1 = h f0  and  k2 = h f(x0 + h, y0 + k1),
then the above equation becomes
y1 = y0 + (1/2)(k1 + k2),    (8.15)
which is the second-order Runge–Kutta formula. The error in this formula can be shown to be of order h^3 by expanding both sides by Taylor's series. Thus, the left side gives
y0 + h y0' + (h^2/2) y0'' + (h^3/6) y0''' + …
and on the right side
k2 = h f(x0 + h, y0 + h f0) = h [f0 + h (∂f/∂x)_0 + h f0 (∂f/∂y)_0 + O(h^2)].

Since

df(x, y)/dx = ∂f/∂x + f ∂f/∂y,
we obtain
k2 = h [f0 + h f0' + O(h^2)] = h f0 + h^2 f0' + O(h^3),

so that the right side of Eq. (8.15) gives


y0 + (1/2) [h f0 + h f0 + h^2 f0' + O(h^3)] = y0 + h f0 + (1/2) h^2 f0' + O(h^3)
                                            = y0 + h y0' + (h^2/2) y0'' + O(h^3).
It therefore follows that the Taylor series expansions of both sides of Eq. (8.15) agree up to terms of order h^2, which means that the error in this formula is of order h^3.
More generally, if we set
y1 = y0 + W1 k1 + W2 k2    (8.16a)

where

k1 = h f0,
k2 = h f(x0 + α0 h, y0 + β0 k1),    (8.16b)

then the Taylor series expansions of both sides of Eq. (8.16a) give the identity

y0 + h f0 + (h^2/2)(∂f/∂x + f0 ∂f/∂y) + O(h^3) = y0 + (W1 + W2) h f0 + W2 h^2 (α0 ∂f/∂x + β0 f0 ∂f/∂y) + O(h^3).
Equating the coefficients of f (x, y) and its derivatives on both sides, we
obtain the relations
W1 + W2 = 1,  W2 α0 = 1/2,  W2 β0 = 1/2.    (8.17)

Clearly, α0 = β0, and if α0 is assigned any value arbitrarily, then the remaining parameters can be determined uniquely. If we set, for example, α0 = β0 = 1, then we immediately obtain W1 = W2 = 1/2, which gives formula (8.15).
It follows, therefore, that there are several second-order Runge–Kutta formulae
and that formulae (8.16) and (8.17) constitute just one of several such
formulae.
Higher-order Runge–Kutta formulae exist, of which we mention only the
fourth-order formula defined by
y1 = y0 + W1 k1 + W2 k2 + W3 k3 + W4 k4    (8.18a)

where

k1 = h f(x0, y0),
k2 = h f(x0 + α0 h, y0 + β0 k1),
k3 = h f(x0 + α1 h, y0 + β1 k1 + γ1 k2),
k4 = h f(x0 + α2 h, y0 + β2 k1 + γ2 k2 + δ1 k3),    (8.18b)
where the parameters have to be determined by expanding both sides of Eq. (8.18a) by Taylor's series and securing agreement of terms up to and including those containing h^4. The choice of the parameters is,
again, arbitrary and we have therefore several fourth-order Runge–Kutta
formulae. If, for example, we set
α0 = β0 = 1/2,  α1 = 1/2,  α2 = 1,
β1 = (1/2)(√2 − 1),  β2 = 0,
γ1 = 1 − 1/√2,  γ2 = −1/√2,  δ1 = 1 + 1/√2,    (8.19)
W1 = W4 = 1/6,  W2 = (1/3)(1 − 1/√2),  W3 = (1/3)(1 + 1/√2),

we obtain the method of Gill, whereas the choice

α0 = α1 = 1/2,  β0 = γ1 = 1/2,
β1 = β2 = γ2 = 0,
α2 = δ1 = 1,    (8.20)
W1 = W4 = 1/6,  W2 = W3 = 2/6,
leads to the fourth-order Runge–Kutta formula, the most commonly used
one in practice:
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)    (8.21a)

where

k1 = h f(x0, y0),
k2 = h f(x0 + (1/2) h, y0 + (1/2) k1),
k3 = h f(x0 + (1/2) h, y0 + (1/2) k2),
k4 = h f(x0 + h, y0 + k3),    (8.21b)
in which the error is of order h^5. Complete derivation of the formula is exceedingly complicated, and the interested reader is referred to the book by Levy and Baggot. We illustrate here the use of the fourth-order formula by means of examples.
Example 8.8 Given dy/dx = y − x where y (0) = 2, find y (0.1) and y (0.2)
correct to four decimal places.
(i) Runge–Kutta second-order formula: With h = 0.1, we find k1 = 0.2
and k 2 = 0.21. Hence
y1 = y(0.1) = 2 + (1/2)(0.41) = 2.2050.
To determine y2 = y (0.2), we note that x0 = 0.1 and y0 = 2.2050. Hence,
k1 = 0.1(2.105) = 0.2105 and k2 = 0.1(2.4155 − 0.2) = 0.22155.
It follows that
y2 = 2.2050 + (1/2)(0.2105 + 0.22155) = 2.4210.
Proceeding in a similar way, we obtain
y3 = y (0.3) = 2.6492 and y4 = y (0.4) = 2.8909

We next choose h = 0.2 and compute y(0.2) and y(0.4) directly. With h = 0.2, x0 = 0 and y0 = 2, we obtain k1 = 0.4 and k2 = 0.44, and hence y(0.2) = 2.4200. Similarly, we obtain y(0.4) = 2.8880.
From the analytical solution y = x + 1 + e^x, the exact values of y(0.2) and y(0.4) are respectively 2.4214 and 2.8918. To study the order of convergence of this method, we tabulate the values as follows:
x      Computed y        Exact y   Difference   Ratio
0.2    h = 0.1: 2.4210   2.4214    0.0004
       h = 0.2: 2.4200             0.0014       3.5
0.4    h = 0.1: 2.8909   2.8918    0.0009
       h = 0.2: 2.8880             0.0038       4.2

It follows that the method has an h^2-order of convergence.
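Formula (8.15) in code, reproducing the h = 0.1 entry of the table (a sketch; the function names are ours):

```python
# Second-order Runge-Kutta (8.15) for dy/dx = y - x, y(0) = 2 (Example 8.8).
def rk2_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h, y + k1)
    return y + (k1 + k2) / 2

f = lambda x, y: y - x
y = 2.0
for i in range(2):                 # two steps of h = 0.1 to reach x = 0.2
    y = rk2_step(f, i * 0.1, y, 0.1)
print(round(y, 4))                 # the 2.4210 of the table
```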


(ii) Runge–Kutta fourth-order formula: To determine y(0.1), we have
x0 = 0, y0 = 2 and h = 0.1. We then obtain
k1 = 0.2,
k2 = 0.205
k3 = 0.20525
k4 = 0.21053.
Hence
y(0.1) = 2 + (1/6)(k1 + 2k2 + 2k3 + k4) = 2.2052.
Proceeding similarly, we obtain y(0.2) = 2.4214.
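The fourth-order formula (8.21) in code, checked against the analytical solution y = x + 1 + e^x (a sketch under the same data):

```python
# Classical fourth-order Runge-Kutta (8.21) for dy/dx = y - x, y(0) = 2.
import math

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: y - x
y, h = 2.0, 0.1
for i in range(2):                       # two steps to reach x = 0.2
    y = rk4_step(f, i * h, y, h)

exact = 0.2 + 1 + math.exp(0.2)          # analytical solution at x = 0.2
print(round(y, 4), round(exact, 4))      # both round to 2.4214
```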
Example 8.9 Given dy/dx = 1 + y^2, where y = 0 when x = 0, find y(0.2),
y(0.4) and y(0.6).
We take h = 0.2. With x0 = y0 = 0, we obtain from (8.21a) and (8.21b),
k1 = 0.2,
k2 = 0.2 (1.01) = 0.202,
k3 = 0.2 (1 + 0.010201) = 0.20204,
k4 = 0.2 (1 + 0.040820) = 0.20816,
and
y(0.2) = 0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 0.2027,
which is correct to four decimal places.
To compute y(0.4), we take x0 = 0.2, y0 = 0.2027 and h = 0.2. With
these values, Eqs. (8.21a) and (8.21b) give

k1 = 0.2 [1 + (0.2027)^2] = 0.2082,
k2 = 0.2 [1 + (0.3068)^2] = 0.2188,
k3 = 0.2 [1 + (0.3121)^2] = 0.2195,
k4 = 0.2 [1 + (0.4222)^2] = 0.2356,


and
y (0.4) = 0.2027 + 0.2201 = 0.4228,
correct to four decimal places.
Finally, taking x0 = 0.4, y0 = 0.4228 and h = 0.2, and proceeding as
above, we obtain y(0.6) = 0.6841.

Example 8.10 We consider the initial value problem y ′ = 3x + y/2 with the
condition y(0) = 1.
The following table gives the values of y(0.2) by different methods, the
exact value being 1.16722193. It is seen that the fourth-order Runge–Kutta
method gives the accurate value for h = 0.05.
Method                      h      Computed value
Euler                       0.2    1.100 000 00
                            0.1    1.132 500 00
                            0.05   1.149 567 58
Modified Euler              0.2    1.100 000 00
                            0.1    1.150 000 00
                            0.05   1.162 862 42
Fourth-order Runge–Kutta    0.2    1.167 220 83
                            0.1    1.167 221 86
                            0.05   1.167 221 93

8.6 PREDICTOR–CORRECTOR METHODS

In the methods described so far, to solve a differential equation over a single


interval, say from x = xn to x = xn+1, we required information only at the
beginning of the interval, i.e. at x = xn. Predictor–corrector methods are the
ones which require function values at xn, xn–1, xn–2, … for the computation
of the function value at x_{n+1}. A predictor formula is used to predict the value of y at x_{n+1}, and then a corrector formula is used to improve the value of y_{n+1}.
In Section 8.6.1 we derive predictor–corrector formulae which use backward differences, and in Section 8.6.2 we describe Milne's method, which uses forward differences.

8.6.1 Adams–Moulton Method


Newton’s backward difference interpolation formula can be written as

f(x, y) = f0 + n ∇f0 + (n(n+1)/2) ∇^2 f0 + (n(n+1)(n+2)/6) ∇^3 f0 + …    (8.22)

where

n = (x − x0)/h  and  f0 = f(x0, y0).
h
If this formula is substituted in
y1 = y0 + ∫_{x0}^{x1} f(x, y) dx,    (8.23)

we get

y1 = y0 + ∫_{x0}^{x1} [f0 + n ∇f0 + (n(n+1)/2) ∇^2 f0 + …] dx

   = y0 + h ∫_0^1 [f0 + n ∇f0 + (n(n+1)/2) ∇^2 f0 + …] dn

   = y0 + h [1 + (1/2)∇ + (5/12)∇^2 + (3/8)∇^3 + (251/720)∇^4 + …] f0.

It can be seen that the right side of the above relation depends only on y0,
y–1, y–2, …, all of which are known. Hence this formula can be used to
compute y1. We therefore write it as

y1^p = y0 + h [1 + (1/2)∇ + (5/12)∇^2 + (3/8)∇^3 + (251/720)∇^4 + …] f0    (8.24)

This is called the Adams–Bashforth formula and is used as a predictor formula


(the superscript p indicating that it is a predicted value).
A corrector formula can be derived in a similar manner by using Newton’s
backward difference formula at f1:

f(x, y) = f1 + n ∇f1 + (n(n+1)/2) ∇^2 f1 + (n(n+1)(n+2)/6) ∇^3 f1 + …    (8.25)

Substituting Eq. (8.25) in Eq. (8.23), we obtain


y1 = y0 + ∫_{x0}^{x1} [f1 + n ∇f1 + (n(n+1)/2) ∇^2 f1 + …] dx

   = y0 + h ∫_{−1}^{0} [f1 + n ∇f1 + (n(n+1)/2) ∇^2 f1 + …] dn

   = y0 + h [1 − (1/2)∇ − (1/12)∇^2 − (1/24)∇^3 − (19/720)∇^4 − …] f1.    (8.26)
The right side of Eq. (8.26) depends on y1, y0, y−1, …, where for y1 we use y1^p, the predicted value obtained from (8.24). The new value of y1 thus obtained from Eq. (8.26) is called the corrected value, and hence we rewrite the formula as
y1^c = y0 + h [1 − (1/2)∇ − (1/12)∇^2 − (1/24)∇^3 − (19/720)∇^4 − …] f1^p.    (8.27)
This is called the Adams–Moulton corrector formula; the superscript c indicates that the value obtained is the corrected value, and the superscript p on the right indicates that the predicted value of y1 should be used for computing the value of f(x1, y1).
In practice, however, it will be convenient to use formulae (8.24) and
(8.27) by ignoring the higher-order differences and expressing the lower-
order differences in terms of function values. Thus, by neglecting the fourth
and higher-order differences, formulae (8.24) and (8.27) can be written as
y1^p = y0 + (h/24)(55 f0 − 59 f−1 + 37 f−2 − 9 f−3)    (8.28)
and
y1^c = y0 + (h/24)(9 f1^p + 19 f0 − 5 f−1 + f−2),    (8.29)
in which the errors are approximately (251/720) h^5 f0^(4) and −(19/720) h^5 f0^(4), respectively.
The general forms of formulae (8.28) and (8.29) are given by
y_{n+1}^p = y_n + (h/24)(55 f_n − 59 f_{n−1} + 37 f_{n−2} − 9 f_{n−3})    (8.28a)

and

y_{n+1}^c = y_n + (h/24)(9 f_{n+1}^p + 19 f_n − 5 f_{n−1} + f_{n−2}).    (8.29a)
Such formulae, expressed in ordinate form, are often called explicit predictor–
corrector formulae.

The values y–1, y–2 and y–3, which are required on the right side of
Eq. (8.28) are obtained by means of the Taylor’s series, or Euler’s method,
or the Runge–Kutta method. For this reason, these methods are called starter methods. For practical problems, the fourth-order Runge–Kutta formula together with formulae (8.28) and (8.29) has been found to be the most successful combination. The following example will illustrate the application of this method.

Example 8.11 We consider once again the differential equation given in


Example 8.9 with the same condition, and we wish to compute y(0.8).
For this example, the starter values are y (0.6), y (0.4) and y (0.2),
which are already computed in Example 8.9 by the fourth-order
Runge–Kutta method. Using now Eq. (8.28) with y0 = 0.6841, y–1 = 0.4228,
y–2 = 0.2027 and y–3 = 0, we obtain

y^p(0.8) = 0.6841 + (0.2/24) {55 [1 + (0.6841)^2] − 59 [1 + (0.4228)^2] + 37 [1 + (0.2027)^2] − 9}
         = 1.0233, on simplification.

Using this predicted value on the right side of Eq. (8.29), we obtain

y^c(0.8) = 0.6841 + (0.2/24) {9 [1 + (1.0233)^2] + 19 [1 + (0.6841)^2] − 5 [1 + (0.4228)^2] + [1 + (0.2027)^2]}
         = 1.0296, which is correct to four decimal places.

The importance of the method lies in the fact that once y1^p is computed from formula (8.28), formula (8.29) can be used iteratively to obtain the value of y1 to the accuracy required.
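The predictor–corrector pair (8.28a)/(8.29a) with Runge–Kutta starters can be sketched as follows. Carried in full precision, the predictor comes out near 1.0234 (the text's 1.0233 uses four-decimal intermediates) and the corrector near 1.0296:

```python
# Adams-Bashforth-Moulton pair (8.28a)/(8.29a) for y' = 1 + y^2, y(0) = 0,
# h = 0.2, with RK4 starter values as in Example 8.11.
def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: 1 + y * y
h = 0.2
ys = [0.0]
for i in range(3):                        # starters y(0.2), y(0.4), y(0.6)
    ys.append(rk4_step(f, i * h, ys[-1], h))

fs = [f(i * h, y) for i, y in enumerate(ys)]
y_pred = ys[3] + h / 24 * (55 * fs[3] - 59 * fs[2] + 37 * fs[1] - 9 * fs[0])
f_pred = f(0.8, y_pred)
y_corr = ys[3] + h / 24 * (9 * f_pred + 19 * fs[3] - 5 * fs[2] + fs[1])
print(round(y_pred, 4), round(y_corr, 4))
```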

8.6.2 Milne’s Method


This method uses Newton’s forward difference formula in the form

f(x, y) = f0 + n Δf0 + (n(n−1)/2) Δ^2 f0 + (n(n−1)(n−2)/6) Δ^3 f0 + …    (8.30)
Substituting Eq. (8.30) in the relation
y4 = y0 + ∫_{x0}^{x4} f(x, y) dx,    (8.31)

we obtain
y4 = y0 + ∫_{x0}^{x4} [f0 + n Δf0 + (n(n−1)/2) Δ^2 f0 + …] dx

   = y0 + h ∫_0^4 [f0 + n Δf0 + (n(n−1)/2) Δ^2 f0 + …] dn

   = y0 + h [4 f0 + 8 Δf0 + (20/3) Δ^2 f0 + (8/3) Δ^3 f0 + …]

   = y0 + (4h/3)(2 f1 − f2 + 2 f3),    (8.32)
after neglecting fourth- and higher-order differences and expressing the differences Δf0, Δ^2 f0 and Δ^3 f0 in terms of the function values.
This formula can be used to ‘predict’ the value of y4 when those of y0,
y1, y2 and y3 are known. To obtain a ‘corrector’ formula, we substitute
Newton’s formula from (8.30) in the relation
y2 = y0 + ∫_{x0}^{x2} f(x, y) dx,    (8.33)

and get
y2 = y0 + h ∫_0^2 [f0 + n Δf0 + (n(n−1)/2) Δ^2 f0 + …] dn

   = y0 + h [2 f0 + 2 Δf0 + (1/3) Δ^2 f0 + …]

   = y0 + (h/3)(f0 + 4 f1 + f2).    (8.34)
The value of y4 obtained from Eq. (8.32) can therefore be checked by
using Eq. (8.34).
The general form of Eqs. (8.32) and (8.34) are:

y_{n+1}^p = y_{n−3} + (4h/3)(2 f_{n−2} − f_{n−1} + 2 f_n)    (8.32a)

and

y_{n+1}^c = y_{n−1} + (h/3)(f_{n−1} + 4 f_n + f_{n+1}).    (8.34a)

The application of this method is illustrated by the following example.


Example 8.12 We consider again the differential equation discussed in Examples 8.9 and 8.11, viz., y' = 1 + y^2 with y(0) = 0, and we wish to compute y(0.8) and y(1.0).
With h = 0.2, the values of y (0.2), y (0.4) and y (0.6) are computed in
Example 8.9 and these values are given in the table below:
x      y        y' = 1 + y^2
0      0        1.0
0.2    0.2027   1.0411
0.4    0.4228   1.1787
0.6    0.6841   1.4681

To obtain y (0.8), we use Eq. (8.32) and obtain


y(0.8) = 0 + (0.8/3) [2(1.0411) − 1.1787 + 2(1.4681)] = 1.0239.
This gives y'(0.8) = 1 + (1.0239)^2 = 2.0484.
To correct this value of y (0.8), we use formula (8.34) and obtain
y(0.8) = 0.4228 + (0.2/3) [1.1787 + 4(1.4681) + 2.0484] = 1.0294.
Proceeding similarly, we obtain y (1.0) = 1.5549. The accuracy in the values
of y(0.8) and y (1.0) can, of course, be improved by repeatedly using formula
(8.34).
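Milne's pair (8.32a)/(8.34a) applied to the tabulated values can be sketched as follows; small last-digit differences from the text arise from its four-decimal rounding of intermediate quantities:

```python
# Milne's predictor (8.32a) and corrector (8.34a) for y' = 1 + y^2,
# y(0) = 0, h = 0.2, using the starter values tabulated in Example 8.12.
f = lambda x, y: 1 + y * y
h = 0.2
ys = [0.0, 0.2027, 0.4228, 0.6841]            # y(0), y(0.2), y(0.4), y(0.6)
fs = [f(i * h, y) for i, y in enumerate(ys)]

y_pred = ys[0] + 4 * h / 3 * (2 * fs[1] - fs[2] + 2 * fs[3])
f_pred = f(0.8, y_pred)
y_corr = ys[2] + h / 3 * (fs[2] + 4 * fs[3] + f_pred)
print(round(y_pred, 4), round(y_corr, 4))     # close to 1.0239 and 1.0294
```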
Example 8.13 The differential equation y' = x^2 + y^2 − 2 satisfies the following data:
x       y
−0.1    1.0900
0       1.0000
0.1     0.8900
0.2     0.7605

Use Milne’s method to obtain the value of y (0.3).


We first form the following table:
x       y        y' = x^2 + y^2 − 2
−0.1    1.0900   −0.80190
0       1.0000   −1.00000
0.1     0.8900   −1.19790
0.2     0.7605   −1.38164

Using Eq. (8.32), we obtain

y(0.3) = 1.09 + (4(0.1)/3) [2(−1) − (−1.19790) + 2(−1.38164)] = 0.614616.
In order to apply Eq. (8.34), we need to compute y ′(0.3). We have

y'(0.3) = (0.3)^2 + (0.614616)^2 − 2 = −1.532247.


Now, Eq. (8.34) gives the corrected value of y (0.3):

y(0.3) = 0.89 + (0.1/3) [−1.19790 + 4(−1.38164) + (−1.532247)] = 0.614776.

8.7 CUBIC SPLINE METHOD

The governing equations of a cubic spline have been discussed in detail in


Section 5.2, where the cubic spline function has been obtained in terms of
its second derivatives, Mi. In certain applications, e.g. the solution of initial-
value problems, it would be convenient to use the governing equations in
terms of its first derivatives, i.e., mi. Using Hermite’s interpolation formula
(see Section 3.9.3), it would not be difficult to derive the following formula for
the cubic spline s(x) in xi −1 ≤ x ≤ xi in terms of its first derivatives s′( xi ) = mi :

s(x) = [(x_i − x)^2 (x − x_{i−1})/h^2] m_{i−1} − [(x − x_{i−1})^2 (x_i − x)/h^2] m_i
     + [(x_i − x)^2 {2(x − x_{i−1}) + h}/h^3] y_{i−1} + [(x − x_{i−1})^2 {2(x_i − x) + h}/h^3] y_i,    (8.35)

where h = xi − xi −1. Differentiating Eq. (8.35) with respect to x and simplifying,


we obtain
s'(x) = (m_{i−1}/h^2)(x_i − x)(2x_{i−1} + x_i − 3x) − (m_i/h^2)(x − x_{i−1})(x_{i−1} + 2x_i − 3x)
      + [6(y_i − y_{i−1})/h^3] (x − x_{i−1})(x_i − x).    (8.36)
Again,

s''(x) = −(2m_{i−1}/h^2)(x_{i−1} + 2x_i − 3x) − (2m_i/h^2)(2x_{i−1} + x_i − 3x)
       + [6(y_i − y_{i−1})/h^3] (x_{i−1} + x_i − 2x),    (8.37)
h3

which gives
s''(x_i) = 2m_{i−1}/h + 4m_i/h − (6/h^2)(y_i − y_{i−1})
         = 2m_{i−1}/h + 4m_i/h − (6/h^2)(s_i − s_{i−1}).    (8.38)
If we now consider the initial-value problem
dy/dx = f(x, y)    (8.39a)

and

y(x0) = y0,    (8.39b)
then from Eq. (8.39a), we obtain

d^2 y/dx^2 = ∂f/∂x + (∂f/∂y)(dy/dx),

or

y''(x_i) = f_x(x_i, y_i) + f_y(x_i, y_i) f(x_i, y_i)
         = f_x(x_i, s_i) + f_y(x_i, s_i) f(x_i, s_i).    (8.40)
Equating Eqs. (8.38) and (8.40), we obtain
2m_{i−1}/h + 4m_i/h − (6/h^2)(s_i − s_{i−1}) = f_x(x_i, s_i) + f_y(x_i, s_i) f(x_i, s_i),    (8.41)
from which si can be computed. Substitution in Eq. (8.35) gives the required
solution.
The following example demonstrates the usefulness of the spline method.

Example 8.14 We consider again the initial-value problem defined by


y′ = 3x + (1/2)y,  y(0) = 1,   (i)
whose exact solution is given by
y = 13e^(x/2) − 6x − 12.   (ii)
We take, for simplicity, n = 2, i.e. h = 0.5, and compute the values of y(0.5) and y(1.0).
Here f ( x, y ) = 3x + y/2 and therefore we have fx = 3 and fy = 1/2. Also,
f(xi, si) = 3xi + (1/2)si.

Hence, Eq. (8.41) gives


4m0 + 8m1 − 24(s1 − s0) = 3 + (1/2)[3/2 + (1/2)s1]
and
4m1 + 8m2 − 24(s2 − s1) = 3 + (1/2)[3 + (1/2)s2].
Since m0 = 1/2, m1 = 3/2 + s1/2 and m2 = 3 + s2 /2, the above equations give
on simplification
s1 = 1.691358 and s2 = 3.430879.
The errors in these solutions are given by 0.000972 and 0.002497, respectively.
It can be shown that, under certain conditions, the spline method gives
O(h4) convergence and compares well with the multi-step Milne’s method.
For details, the reader is referred to Patricio [1978].
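As a check on Example 8.14, Eq. (8.41) reduces by hand to two linear equations in s1 and s2, which can be solved directly. The short sketch below (variable names are our own) reproduces the values quoted above.

```python
# Spline method of Eq. (8.41) applied to y' = 3x + y/2, y(0) = 1, h = 0.5.
# Slopes: m0 = 1/2, m1 = 3/2 + s1/2, m2 = 3 + s2/2, with s0 = 1.
import math

# i = 1:  4*m0 + 8*m1 - 24*(s1 - s0) = 3 + (1/2)*(3/2 + s1/2)
# collecting terms:  38 - 20*s1 = 3.75 + 0.25*s1
s1 = 34.25 / 20.25
# i = 2:  4*m1 + 8*m2 - 24*(s2 - s1) = 3 + (1/2)*(3 + s2/2)
# collecting terms:  30 + 26*s1 - 20*s2 = 4.5 + 0.25*s2
s2 = (25.5 + 26 * s1) / 20.25

exact = lambda x: 13 * math.exp(x / 2) - 6 * x - 12
err1, err2 = exact(0.5) - s1, exact(1.0) - s2   # the errors quoted in the text
```

The two residuals come out as 0.000972 and 0.002497, matching the errors stated above.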

8.8 SIMULTANEOUS AND HIGHER-ORDER EQUATIONS

We consider the two equations


dx/dt = f(t, x, y)  and  dy/dt = φ(t, x, y)   (8.42)
with the initial conditions x = x0 and y = y0, when t = t0. Assuming that Δt = h,
Δx = k , and Δy = l , the fourth-order Runge–Kutta method gives
k1 = hf(t0, x0, y0);
l1 = hφ(t0, x0, y0);
k2 = hf(t0 + h/2, x0 + k1/2, y0 + l1/2);
l2 = hφ(t0 + h/2, x0 + k1/2, y0 + l1/2);
k3 = hf(t0 + h/2, x0 + k2/2, y0 + l2/2);
l3 = hφ(t0 + h/2, x0 + k2/2, y0 + l2/2);
k4 = hf(t0 + h, x0 + k3, y0 + l3);
l4 = hφ(t0 + h, x0 + k3, y0 + l3);
x1 = x0 + (1/6)(k1 + 2k2 + 2k3 + k4),
y1 = y0 + (1/6)(l1 + 2l2 + 2l3 + l4).   (8.43)

In a similar manner, one can extend the Taylor series method or Picard’s
method to the system (8.42). The extension of the Runge–Kutta method to
a system of n equations is quite straightforward.
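A direct transcription of (8.43) into code is straightforward; the sketch below (the function name is our own) performs one step for a pair of equations.

```python
# One step of the fourth-order Runge-Kutta formulas (8.43) for the pair
# dx/dt = f(t, x, y), dy/dt = phi(t, x, y).
def rk4_pair_step(f, phi, t0, x0, y0, h):
    k1 = h * f(t0, x0, y0)
    l1 = h * phi(t0, x0, y0)
    k2 = h * f(t0 + h/2, x0 + k1/2, y0 + l1/2)
    l2 = h * phi(t0 + h/2, x0 + k1/2, y0 + l1/2)
    k3 = h * f(t0 + h/2, x0 + k2/2, y0 + l2/2)
    l3 = h * phi(t0 + h/2, x0 + k2/2, y0 + l2/2)
    k4 = h * f(t0 + h, x0 + k3, y0 + l3)
    l4 = h * phi(t0 + h, x0 + k3, y0 + l3)
    x1 = x0 + (k1 + 2*k2 + 2*k3 + k4) / 6
    y1 = y0 + (l1 + 2*l2 + 2*l3 + l4) / 6
    return x1, y1
```

For example, on dx/dt = y, dy/dt = −x with x(0) = 0, y(0) = 1 (exact solution x = sin t, y = cos t), ten steps of size h = 0.1 reproduce sin 1 and cos 1 to about six decimal places.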
We now consider the second-order differential equation
y ′′ = F ( x, y, y ′) (8.44a)
with the initial conditions
y ( x0 ) = y0 and y ′( x0 ) = y0′ . (8.45a)

By setting z = y ′, the problem given in Eqs. (8.44a) and (8.45a) can be


reduced to the problem of solving the system
y′ = z and z ′ = F ( x, y , z ) (8.44b)
with the conditions
y ( x0 ) = y0 and z ( x0 ) = y0′ (8.45b)
which can be solved by the method described above. Similarly, any higher-
order differential equation, in which we can solve for the highest derivative,
can be reduced to a system of first-order differential equations.

8.9 SOME GENERAL REMARKS

In the preceding sections, we have given a brief discussion of some well-


known methods for the numerical solution of an ordinary differential equation
satisfying certain given initial conditions. If the solution is required over a
wider range, it is important to get the starting values as accurately as
possible by one of the methods described.
It is outside the scope of this book to present a comprehensive review
of the different methods described in this text for the numerical solution of
differential equations, but the following points are relevant to the methods
discussed.
The Taylor series method suffers from the serious disadvantage that
all the higher derivatives of f(x, y) [see Eq. (8.1)] must exist and that h
should be small enough that successive terms of the series diminish quite
rapidly. Likewise, in the modified Euler method, the value of h should be so
small that one or two applications of the iteration formula (8.14) will give
the final result for that value of h. The Picard method has probably little
practical value because of the difficulty in performing the successive integrations.
Although laborious, the Runge–Kutta method is the most widely used
one since it gives reliable starting values and is particularly suitable when the
computation of higher derivatives is complicated. When the starting values
have been found, the computations for the rest of the interval can be continued
by means of the predictor–corrector methods.

The cubic spline method is a one-step method and at the same time a
global one. The step-size can be changed during computations and, under
certain conditions, gives O(h4) convergence. The method can also be extended
to systems of ordinary differential equations.

8.10 BOUNDARY-VALUE PROBLEMS

Some simple examples of two-point linear boundary-value problems are:


(a) y ′′( x) + f ( x) y ′( x) + g ( x) y ( x) = r ( x) (8.46)
with the boundary conditions
y ( x0 ) = a and y ( xn ) = b (8.47)
(b) y⁽ⁱᵛ⁾(x) + p(x) y(x) = q(x)   (8.48)
with
y ( x0 ) = y ′( x0 ) = A and y ( xn ) = y ′( xn ) = B. (8.49)
Problems of the type (b), which involve the fourth-order differential equation,
are much involved and will not be discussed here. There exist many methods of
solving second-order boundary-value problems of type (a). Of these, the finite
difference method is a popular one and will be described in Section 8.10.1.
Finally, in Sections 8.10.2 and 8.10.3 we discuss methods based on the
application of cubic splines and weighted residuals.

8.10.1 Finite-difference Method


The finite-difference method for the solution of a two-point boundary value
problem consists in replacing the derivatives occurring in the differential
equation (and in the boundary conditions as well) by means of their finite-
difference approximations and then solving the resulting linear system of
equations by a standard procedure.
To obtain the appropriate finite-difference approximations to the derivatives,
we proceed as follows.
Expanding y ( x + h) in Taylor’s series, we have

y(x + h) = y(x) + h y′(x) + (h²/2) y″(x) + (h³/6) y‴(x) + ···   (8.50)
from which we obtain
y′(x) = [y(x + h) − y(x)]/h − (h/2) y″(x) − ···
Thus we have
y′(x) = [y(x + h) − y(x)]/h + O(h)   (8.51)

which is the forward difference approximation for y ′( x). Similarly, expansion


of y ( x − h) in Taylor’s series gives

y(x − h) = y(x) − h y′(x) + (h²/2) y″(x) − (h³/6) y‴(x) + ···   (8.52)
from which we obtain
y′(x) = [y(x) − y(x − h)]/h + O(h)   (8.53)
which is the backward difference approximation for y ′( x).
A central difference approximation for y ′( x) can be obtained by subtracting
Eq. (8.52) from Eq. (8.50). We thus have
y′(x) = [y(x + h) − y(x − h)]/(2h) + O(h²).   (8.54)
It is clear that Eq. (8.54) is a better approximation to y ′( x) than either
Eq. (8.51) or Eq. (8.53). Again, adding Eqs. (8.50) and (8.52), we get an
approximation for y ′′( x)
y″(x) = [y(x − h) − 2y(x) + y(x + h)]/h² + O(h²).   (8.55)
In a similar manner, it is possible to derive finite-difference approximations
to higher derivatives.
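These orders are easy to observe numerically; in the sketch below (helper names are our own), halving h roughly halves the forward-difference error but quarters the central-difference error.

```python
# Orders of the difference approximations (8.51), (8.54) and (8.55),
# checked on f(x) = exp(x) at x = 0 (so every exact derivative equals 1).
import math

def forward(f, x, h):  return (f(x + h) - f(x)) / h            # O(h)
def central(f, x, h):  return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2)
def second(f, x, h):   return (f(x - h) - 2*f(x) + f(x + h)) / h**2  # O(h^2)

err_fwd = [abs(forward(math.exp, 0.0, h) - 1.0) for h in (0.1, 0.05)]
err_ctr = [abs(central(math.exp, 0.0, h) - 1.0) for h in (0.1, 0.05)]
# Halving h roughly halves the forward error but quarters the central error.
ratio_fwd = err_fwd[0] / err_fwd[1]
ratio_ctr = err_ctr[0] / err_ctr[1]
```

The observed ratios come out close to 2 and 4, respectively, in agreement with the error terms above.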
To solve the boundary-value problem defined by Eqs. (8.46) and (8.47),
we divide the range [x0, xn] into n equal subintervals of width h so that
xi = x0 + ih, i = 1, 2, …, n.
The corresponding values of y at these points are denoted by
y ( xi ) = yi = y ( x0 + ih), i = 0, 1, 2, …, n.
From Eqs. (8.54) and (8.55), values of y ′( x) and y ′′( x) at the point x = xi
can now be written as
yi′ = (yi+1 − yi−1)/(2h) + O(h²)
and
yi″ = (yi−1 − 2yi + yi+1)/h² + O(h²).
Satisfying the differential equation at the point x = xi , we get
yi″ + fi yi′ + gi yi = ri.
Substituting the expressions for yi′ and yi″, this gives
(yi−1 − 2yi + yi+1)/h² + fi (yi+1 − yi−1)/(2h) + gi yi = ri,  i = 1, 2, …, n − 1,
where yi = y ( xi ), gi = g ( xi ), etc.

Multiplying through by h2 and simplifying, we obtain


[1 − (h/2) fi] yi−1 + (−2 + gi h²) yi + [1 + (h/2) fi] yi+1 = ri h²,   (8.56)
i = 1, 2, …, n − 1
with
y0 = a and yn = b (8.57)
Equation (8.56) with the conditions (8.57) comprise a tridiagonal system
which can be solved by the method outlined in Section 7.5.9 of Chapter 7.
The solution of this tridiagonal system constitutes an approximate solution
of the boundary value problem defined by Eqs. (8.46) and (8.47).
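The assembly and solution of this tridiagonal system can be sketched as follows; the solver is the standard Thomas algorithm, and the function names are our own.

```python
# Finite-difference scheme (8.56) for y'' + f(x) y' + g(x) y = r(x),
# y(x0) = a, y(xn) = b, solved with the Thomas algorithm.
def fd_bvp(f, g, r, x0, xn, a, b, n):
    h = (xn - x0) / n
    xs = [x0 + i * h for i in range(n + 1)]
    # Coefficients of (8.56) at the interior points i = 1..n-1.
    lo  = [1 - h/2 * f(xs[i]) for i in range(1, n)]   # multiplies y_{i-1}
    dia = [-2 + g(xs[i]) * h**2 for i in range(1, n)]
    up  = [1 + h/2 * f(xs[i]) for i in range(1, n)]   # multiplies y_{i+1}
    rhs = [r(xs[i]) * h**2 for i in range(1, n)]
    rhs[0]  -= lo[0] * a          # fold in the boundary values
    rhs[-1] -= up[-1] * b
    # Thomas algorithm: forward elimination, then back substitution.
    for i in range(1, n - 1):
        m = lo[i] / dia[i - 1]
        dia[i] -= m * up[i - 1]
        rhs[i] -= m * rhs[i - 1]
    y = [0.0] * (n - 1)
    y[-1] = rhs[-1] / dia[-1]
    for i in range(n - 3, -1, -1):
        y[i] = (rhs[i] - up[i] * y[i + 1]) / dia[i]
    return [a] + y + [b]
```

On a problem whose fourth derivative vanishes, e.g. y″ = 6x with y(0) = 0, y(1) = 1 (exact solution y = x³), the truncation term of the scheme is zero and the computed values agree with the exact solution to rounding error.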
To estimate the error in the numerical solution, we define the local
truncation error, τ , by
τ = [(yi−1 − 2yi + yi+1)/h² − yi″] + fi [(yi+1 − yi−1)/(2h) − yi′].
Expanding yi−1 and yi+1 by Taylor's series and simplifying, the above gives
τ = (h²/12)(yi⁽ⁱᵛ⁾ + 2 fi yi‴) + O(h⁴).   (8.58)
Thus, the finite difference approximation defined by Eq. (8.56) has second-
order accuracy for functions with continuous fourth derivatives on [x0, xn].
Further, it follows that τ → 0 as h → 0, implying that greater accuracy in
the result can be achieved by using a smaller value of h. In such a case, of
course, more computational effort would be required since the number of
equations becomes larger.
An easier way to improve accuracy is to employ Richardson’s deferred
approach to the limit, assuming that the O(h2) error is proportional to h2.
This means that the error has the form
y(xi) − yi = h² e(xi) + O(h⁴).   (8.59)
For extrapolation to the limit, we solve Eq. (8.56) twice, with the
interval lengths h and h/2 respectively. Let the corresponding solutions of
Eq. (8.56) be denoted by yi(h) and yi(h/2). For a point xi common to both,
we therefore have
y(xi) − yi(h) = h² e(xi) + O(h⁴)   (8.60a)
and
y(xi) − yi(h/2) = (h²/4) e(xi) + O(h⁴),   (8.60b)
from which we obtain
y(xi) = [4 yi(h/2) − yi(h)]/3.   (8.61)
3

We have explained the method with simple boundary conditions (8.47) where
the function values on the boundary are prescribed. In many applied problems,
however, derivative boundary conditions may be prescribed, and this requires
a modification of the procedures described above. The following examples
illustrate the application of the finite-difference method.
Example 8.15 A boundary-value problem is defined by
y ′′ + y + 1 = 0, 0 ≤ x ≤1
where
y (0) = 0 and y (1) = 0.
With h = 0.5, use the finite-difference method to determine the value of y(0.5).
This example was considered by Bickley [1968]. Its exact solution is
given by
y(x) = cos x + [(1 − cos 1)/sin 1] sin x − 1,
from which we obtain
y(0.5) = 0.139493927.
Here nh = 1. The differential equation is approximated as
(yi−1 − 2yi + yi+1)/h² + yi + 1 = 0,
and this gives after simplification
yi −1 − (2 − h 2 ) yi + yi +1 = − h2 , i = 1, 2, …, n − 1
which together with the boundary conditions y0 = 0 and yn = 0, comprises
a system of (n + 1) equations for the (n + 1) unknowns y0 , y1 , …, yn .
Choosing h = 1/2 (i.e. n = 2), the above system becomes
y0 − (2 − 1/4) y1 + y2 = −1/4.
With y0 = y2 = 0, this gives
y1 = y(0.5) = 1/7 = 0.142857142…
Comparison with the exact solution given above shows that the error in the
computed solution is 0.00336.
On the other hand, if we choose h = 1/4 (i.e. n = 4), we obtain the three
equations:
y0 − (31/16) y1 + y2 = −1/16
y1 − (31/16) y2 + y3 = −1/16
y2 − (31/16) y3 + y4 = −1/16,

where y0 = y4 = 0. Solving the system we obtain


y2 = y(0.5) = 63/449 = 0.140311804,
the error in which is 0.00082. Since the ratio of the two errors is about 4,
it follows that the order of convergence is h2.
These results show that the accuracy obtained by the finite-difference
method depends upon the width of the subinterval chosen and also on the
order of the approximations. As h is reduced, the accuracy increases but the
number of equations to be solved also increases.
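A quick check of these numbers in exact rational arithmetic (a sketch, using Python's fractions module):

```python
# Finite-difference check of Example 8.15:
#   y_{i-1} - (2 - h^2) y_i + y_{i+1} = -h^2, with y(0) = y(1) = 0.
from fractions import Fraction

# h = 1/2: one interior unknown, -(2 - 1/4) y1 = -1/4.
y_h2 = Fraction(1, 4) / (2 - Fraction(1, 4))
# h = 1/4: by symmetry y1 = y3, leaving the 2x2 system
#   -d*y1 + y2 = -1/16  and  2*y1 - d*y2 = -1/16, where d = 2 - 1/16.
d = 2 - Fraction(1, 16)
y_h4 = (Fraction(-1, 16) * (2 + d)) / (2 - d * d)   # value of y2 = y(0.5)
```

The ratio of the two errors against the exact value 0.139494 comes out near 4, confirming the O(h²) behaviour noted above.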
Example 8.16 Solve the boundary-value problem
d²y/dx² − y = 0
with
y (0) = 0 and y (2) = 3.62686.
The exact solution of this problem is y = sinh x. The finite-difference
approximation is given by
(1/h²)(yi−1 − 2yi + yi+1) = yi.   (i)
We subdivide the interval [0, 2] into four equal parts so that h = 0.5. Let the
values of y at the five points be y0 , y1 , y2 , y3 and y4. We are given that
y0 = 0 and y4 = 3.62686.
Writing the difference equations at the three interval points (which are the
unknowns), we obtain
4(y0 − 2y1 + y2) = y1
4(y1 − 2y2 + y3) = y2   (ii)
4(y2 − 2y3 + y4) = y3,
respectively. Substituting for y0 and y4 and rearranging, we get the system
−9y1 + 4y2 = 0
4y1 − 9y2 + 4y3 = 0   (iii)
4y2 − 9y3 = −14.50744.
The solution of (iii) is given in the table below.

x      Computed value of y    Exact value y = sinh x    Error
0.5    0.52635                0.52110                   0.00525
1.0    1.18428                1.17520                   0.00908
1.5    2.13829                2.12928                   0.00901

It is possible to obtain a better approximation for the value of y(1.0) by


extrapolation to the limit. For this we divide the interval [0, 2] into two
subintervals with h = 1.0. The difference equation at the single unknown
point y1 is given by
y0 − 2 y1 + y2 = y1
Using the values of y0 and y2, we obtain
y1 = 1.20895.
Hence Eq. (8.61) gives
y(1.0) = [4(1.18428) − 1.20895]/3 = 1.17606,
3
which is a better approximation since the error is now reduced to 0.00086.
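Example 8.16 and the extrapolation step can be verified in a few lines (a sketch; the elimination follows the text):

```python
# Solve system (iii) of Example 8.16 by elimination, then apply Eq. (8.61).
import math

# y1 = (4/9) y2 and y3 = (4 y2 + 14.50744)/9 reduce (iii) to 49 y2 = 4*14.50744.
y2 = 4 * 14.50744 / 49
y1 = 4 * y2 / 9
y3 = (4 * y2 + 14.50744) / 9
# Coarse grid (h = 1): y0 - 2 y1c + y4 = y1c with y0 = 0, y4 = 3.62686.
y1_coarse = 3.62686 / 3
# Richardson extrapolation (8.61) at x = 1.0:
y_extrap = (4 * y2 - y1_coarse) / 3
```

The extrapolated value agrees with sinh 1 = 1.17520 to within about 0.0009, as stated above.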

8.10.2 Cubic Spline Method


We consider again the boundary-value problem defined by Eqs. (8.46) and
(8.47). Let s(x) be the cubic spline approximating the function y(x) and let
s′′( xi ) = M i . Then, at x = xi the differential equation given in Eq. (8.46)
gives
M i + fi s ′( xi ) + gi yi = ri (8.62)
But
s′(xi−) = (h/6)(2Mi + Mi−1) + (1/h)(yi − yi−1)   (8.63)
and
s′(xi+) = −(h/6)(2Mi + Mi+1) + (1/h)(yi+1 − yi).   (8.64)
Substituting Eqs. (8.63) and (8.64) successively in Eq. (8.62), we obtain the
equations
Mi + fi [(h/6)(2Mi + Mi−1) + (1/h)(yi − yi−1)] + gi yi = ri   (8.65)
and
Mi + fi [−(h/6)(2Mi + Mi+1) + (1/h)(yi+1 − yi)] + gi yi = ri.   (8.66)
Since y0 and yn are known, Eqs. (8.65) and (8.66) constitute a system of
2n equations in 2n unknowns, viz., M 0 , M 1 , …, M n , y1 , y2 , …, yn −1. It is,
however, possible to eliminate the Mi and obtain a tridiagonal system for yi
(see, Albasiny and Hoskins [1969]). The following examples illustrate the use
of the spline method.
Example 8.17 We first consider the problem discussed in Example 8.15, viz.,
y ′′ + y + 1 = 0, y (0) = y (1) = 0 (i)

If we divide the interval [0, 1] into two equal subintervals, then from Eq. (i)
and the recurrence relations for Mi, we obtain
y(0.5) = 3/22 = 0.13636,   (ii)
and
M0 = −1,  M1 = −25/22,  M2 = −1.
Hence we obtain
s′(0) = 47/88,  s′(1) = −47/88,  s′(0.5) = 0.
From the analytical solution of the problem (i), we observe that
y(0.5) = 0.13949 and hence the cubic spline solution of the boundary-value
problem has an error of 2.24% (see Bickley [1968]).
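For this constant-coefficient case the working reduces to one linear equation; a sketch in exact arithmetic (using Mi = −(1 + yi) from the differential equation together with the spline recurrence of Section 5.2):

```python
# Spline check of Example 8.17: y'' + y + 1 = 0, y(0) = y(1) = 0, h = 1/2.
# From the ODE, M_i = -(1 + y_i); the recurrence
#   M0 + 4 M1 + M2 = (6/h^2)(y0 - 2 y1 + y2)
# gives  -6 - 4 y1 = -48 y1,  i.e.  44 y1 = 6.
from fractions import Fraction

y1 = Fraction(6, 44)
M1 = -1 - y1          # second derivative of the spline at x = 0.5
```

This reproduces y(0.5) = 3/22 and M1 = −25/22 exactly.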
Example 8.18 Given the boundary-value problem
x²y″ + xy′ − y = 0;  y(1) = 1,  y(2) = 0.5,
apply the cubic spline method to determine the value of y(1.5).
The given differential equation is
y″ = −(1/x) y′ + (1/x²) y.   (i)
Setting x = xi and y″(xi) = Mi, Eq. (i) becomes
Mi = −(1/xi) yi′ + (1/xi²) yi.   (ii)
Using the expressions given in Eqs. (8.63) and (8.64), we obtain

Mi = −(1/xi)[−(h/3) Mi − (h/6) Mi+1 + (yi+1 − yi)/h] + (1/xi²) yi,  i = 0, 1, 2, …, n − 1,   (iii)
and
Mi = −(1/xi)[(h/3) Mi + (h/6) Mi−1 + (yi − yi−1)/h] + (1/xi²) yi,  i = 1, 2, …, n.   (iv)

If we divide [1, 2] into two subintervals, we have h = 1/2 and n = 2. Then


Eqs. (iii) and (iv) give
10 M 0 − M1 + 24 y1 = 36
16 M1 − M 2 − 32 y1 = −12
M 0 + 20 M1 + 16 y1 = 24
M1 + 26 M 2 − 24 y1 = −9

Eliminating M0, M1 and M2 from this system of equations, we obtain


y1 = 0.65599.
Since the exact value is y1 = y(1.5) = 2/3, the error in the computed value
of y1 is about 0.011, i.e. about 1.6%.
Example 8.19 Consider a boundary-value problem in which the boundary
conditions involve derivatives
d²y/dx² = y   (i)
with
y ′(0) = 0 and y (1) = 1 (ii)
The analytical solution of this problem is given by
y = cosh x / cosh 1.   (iii)
In order to compare the finite-difference and spline methods, we solve this
problem by both the methods. For the finite-difference solution, we write
(yi−1 − 2yi + yi+1)/h² = yi   (iv)
We divide the interval [0, 1] into two equal parts such that h = 1/2. Setting
i = 0 and i = 1, Eq. (iv) gives
y−1 − 2y0 + y1 = (1/4) y0   (v)
and
y0 − 2y1 + y2 = (1/4) y1   (vi)
From formula (8.54), we have
y0′ = (y1 − y−1)/(2h),  or  y1 − y−1 = 2h y0′   (vii)
Using the boundary conditions y0′ = 0 and y2 = 1, Eqs. (v), (vi) and (vii) yield
y1 = 36/49 = 0.734694.
The exact value of y(0.5) is 0.730766, so the finite-difference solution has
an error of 0.0039.
For the spline solution, we have
yi−1 + 4yi + yi+1 = (6/h²)(yi−1 − 2yi + yi+1)   (viii)
With h = 1/2, we obtain
y0 + 4y1 + y2 = 24(y0 − 2y1 + y2).
Since y2 = 1, the above equation becomes
y0 + 4y1 = 24(y0 − 2y1) + 23
or, equivalently,
52y1 = 23y0 + 23.   (ix)
For the derivative boundary condition, we use Eq. (8.64) and obtain
y0′ = 0 = −(1/6) M0 − (1/12) M1 + 2(y1 − y0).
Since M0 = y0 and M1 = y1, the above equation gives
2y0 + y1 = 24(y1 − y0).   (x)
Equations (ix) and (x) yield
y1 = y(0.5) = 598/823 = 0.726610.
Thus the error in the cubic spline solution is 0.0042, which is comparable to
the error of the finite-difference solution for this problem with a derivative
boundary condition.

8.10.3 Galerkin’s Method


This method, also called the weighted residual method, uses trial functions
(or approximating functions) which satisfy the boundary conditions of the
problem. The trial function is substituted in the given differential equation
and the result is called the residual. The integral of the product of this
residual and a weighted function, taken over the domain, is then set to zero
which yields a system of equations for the unknown parameters in the trial
functions.
Let the boundary value problem be defined by
y″ + p(x) y′ + q(x) y = f(x),   a ≤ x ≤ b   (8.67)
with the boundary conditions
p0 y(a) + q0 y′(a) = r0
p1 y(b) + q1 y′(b) = r1.   (8.68)
Let the approximate solution be given by

t(x) = Σ_{i=1}^{n} αi φi(x),   (8.69)

where φi(x) are called base functions. Substituting for t(x) in Eq. (8.67), we
obtain a residual. Denoting this residual by R(t),
we obtain
R(t ) = t ′′ + p( x)t ′ + q( x)t − f ( x) (8.70)
Usually the base functions φi(x) are chosen as weight functions. We, therefore,
have
I = ∫ₐᵇ φi(x) R(t) dx = 0,   (8.71)
which yields a system of equations for the parameters αi. When the αi are
known, t(x) can be calculated from Eq. (8.69).

Example 8.20 Solve the boundary value problem defined by


y ′′ + y + x = 0, 0 < x <1
with the conditions
y(0) = y(1) = 0.
Let
t(x) = α1 φ1(x).
Since both the boundary conditions must be satisfied by t(x), we choose
φ1(x) = x(1 − x).
Substituting for t(x) in the given differential equation, we obtain
R(t) = t″ + t + x.
Hence we have
I = ∫₀¹ (t″ + t + x) α1 x(1 − x) dx = 0
⇒ ∫₀¹ (t″ + t + x) x(1 − x) dx = 0   (i)
Now,
∫₀¹ t″ x(1 − x) dx = [t′ x(1 − x)]₀¹ − ∫₀¹ t′(1 − 2x) dx, on integrating by parts,
= −∫₀¹ t′(1 − 2x) dx, since the first term vanishes,
= −{[t(1 − 2x)]₀¹ − ∫₀¹ t(−2) dx}
= −2 ∫₀¹ t dx, since t = 0 at x = 0 and x = 1.

Hence (i) simplifies to


−2 ∫₀¹ t dx + ∫₀¹ t x(1 − x) dx + ∫₀¹ x²(1 − x) dx = 0
⇒ −2 ∫₀¹ α1 x(1 − x) dx + ∫₀¹ α1 x²(1 − x)² dx + [x³/3 − x⁴/4]₀¹ = 0
⇒ α1 = 5/18 = 0.2778, on simplification.
Then a first approximation to the solution is
y(0.5) ≈ t(0.5) = (5/18)(0.5)(0.5) = 0.06944.
The exact solution of the given boundary value problem is
y(x) = (sin x / sin 1) − x,
which means that our solution has an error of 0.0003.
The above approximation can be improved by assuming that
t(x) = α1 x(1 − x) + α2 x²(1 − x).
Proceeding as above, we obtain
α1 = 0.1924 and α2 = 0.1707.
It is clear that by adding more terms to t(x), we can obtain the result to the
desired accuracy.
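Since all the integrals here are polynomial, α1 can be obtained in exact arithmetic; a sketch (the helper name is ours):

```python
# Galerkin computation of Example 8.20 with exact rational arithmetic.
# With t(x) = a1*x*(1 - x), condition (i) reduces to
#   a1*(-2*I1 + I2) + I3 = 0,
# where I1 = ∫ x(1-x) dx, I2 = ∫ x^2 (1-x)^2 dx, I3 = ∫ x^2 (1-x) dx on [0, 1].
from fractions import Fraction

def integrate_poly(coeffs):
    """Integrate a polynomial (coeffs[k] multiplies x^k) over [0, 1]."""
    return sum(Fraction(c, 1) / (k + 1) for k, c in enumerate(coeffs))

I1 = integrate_poly([0, 1, -1])            # x - x^2
I2 = integrate_poly([0, 0, 1, -2, 1])      # x^2 - 2x^3 + x^4
I3 = integrate_poly([0, 0, 1, -1])         # x^2 - x^3
a1 = I3 / (2 * I1 - I2)                    # = 5/18
y_approx = a1 * Fraction(1, 2) * Fraction(1, 2)   # t(0.5)
```

This reproduces α1 = 5/18 and t(0.5) = 5/72 = 0.06944 exactly.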

EXERCISES

8.1. Given
dy/dx = 1 + xy,  y(0) = 1,
obtain the Taylor series for y(x) and compute y(0.1), correct to four
decimal places.
8.2 Show that the differential equation

d²y/dx² = −xy,  y(0) = 1 and y′(0) = 0,
has the series solution
y = 1 − x³/3! + (1·4/6!) x⁶ − (1·4·7/9!) x⁹ + ···

8.3 If
dy/dx = 1/(x² + y)  with  y(4) = 4,
compute the values of y (4.1) and y (4.2) by Taylor’s series method.
8.4 Use Picard’s method to obtain a series solution of the problem given
in Problem 8.1 above.
8.5 Use Picard’s method to obtain y (0.1) and y (0.2) of the problem defined
by
dy/dx = x + yx⁴,  y(0) = 3.
8.6 Using Euler’s method, solve the following problems:
(a) dy/dx = (x³ + y³)/5,  y(0) = 1    (b) dy/dx = 1 + y²,  y(0) = 0
8.7 Compute the values of y (1.1) and y (1.2) using Taylor’s series method
for the solution of the problem
y″ + y² y′ = x³,  y(1) = 1 and y′(1) = 1.
8.8 Find, by Taylor’s series method, the value of y (0.1) given that
y″ − xy′ − y = 0,  y(0) = 1 and y′(0) = 0.
8.9 Using Picard’s method, find y (0.1), given that
dy/dx = (y − x)/(y + x)  and  y(0) = 1.
8.10 Using Taylor’s series, find y (0.1), y (0.2) and y (0.3) given that
dy/dx = xy + y²,  y(0) = 1.
8.11 Given the differential equation
dy/dx = x² + y
with y (0) = 1, compute y (0.02) using Euler’s modified method.

8.12 Solve, by Euler’s modified method, the problem


dy/dx = x + y,  y(0) = 0.
Choose h = 0.2 and compute y (0.2) and y (0.4).
8.13 Given the problem
dy/dx = f(x, y)  and  y(x0) = y0,

an approximate solution at x = x0 + h is given by the third order Runge–


Kutta formula
y(x0 + h) = y0 + (1/6)(k1 + 4k2 + k3) + R4,
where
k1 = hf(x0, y0),  k2 = hf(x0 + (1/2)h, y0 + (1/2)k1)
and k3 = hf(x0 + h, y0 + 2k2 − k1).
Show that R4 is of order h4.
8.14 Write an algorithm to implement Runge–Kutta fourth order formula for
solving an initial value problem.
Find y(0.1), y(0.2) and y(0.3) given that
y′ = 1 + 2xy/(1 + x²),  y(0) = 0.
8.15 Use Runge–Kutta fourth order formula to find y (0.2) and y (0.4) given
that
y′ = (y² − x²)/(y² + x²),  y(0) = 1.
8.16 Solve the initial value problem defined by
dy/dx = (3x + y)/(x + 2y),  y(1) = 1,
and find y (1.2) and y (1.4) by the Runge–Kutta fourth order formula.
8.17 State Adams' predictor-corrector formulae for the solution of the equation
y′ = f(x, y),  y(x0) = y0.
Given the problem
y′ + y = 0,  y(0) = 1,
find y(0.1), y(0.2), and y(0.3) by the Runge–Kutta fourth order formula
and hence obtain y(0.4) by Adams' formulae.
8.18 Given the initial value problem defined by
dy/dx = y(1 + x²),  y(0) = 1
find the values of y for x = 0.2, 0.4, 0.6, 0.8 and 1.0 using the Euler, the
modified Euler and the fourth order Runge–Kutta methods. Compare
the computed values with the exact values.
8.19 State Milne's predictor-corrector formulae for the solution of the problem
y′ = f(x, y),  y(x0) = y0.
Given the initial value problem defined by
y′ = y² + xy,  y(0) = 1,
find, by Taylor’s series, the values of y (0.1), y (0.2) and y (0.3). Use
these values to compute y (0.4) by Milne’s formulae.

8.20 Using Milne’s formulae, find y (0.8) given that


dy/dx = x − y²,  y(0) = 0,  y(0.2) = 0.02,
y (0.4) = 0.0795 and y (0.6) = 0.1762.
8.21 Explain what is meant by a fourth order formula. Discuss this with
reference to the solution of the problem
dy/dx = 3x + (1/2)y,  y(0) = 1
by Runge–Kutta fourth order formula.
8.22 Use Taylor’s series method to solve the system of differential equations
dx/dt = y − t,  dy/dt = x + t
with x = 1, y = 1 when t = 0, taking Δx = Δt = 0.1.
8.23 Using fourth order Runge–Kutta method, compute the value of y (0.2)
given that
d²y/dx² + y = 0
with y(0) = 1 and y′(0) = 0.
8.24 Given that
y″ − xy′ + 4y = 0,  y(0) = 3,  y′(0) = 0,
compute the value of y(0.2) using the Runge–Kutta fourth order formula.
8.25 Solve the boundary value problem defined by
y″ − y = 0,  y(0) = 0,  y(1) = 1,
by finite difference and cubic spline methods. Compare the solutions obtained
at y(0.5) with the exact value. In each case, take h = 0.5 and
h = 0.25.
8.26 Shooting method This is a popular method for the solution of two-point
boundary value problems. If the problem is defined by
y″ = f(x, y),  y(x0) = 0 and y(x1) = A,
then it is first transformed into the initial value problem
y′(x) = z,  z′(x) = f(x, y),
with y(x0) = 0 and z(x0) = m0, where m0 is a guess for the value of y′(x0).
Let the solution corresponding to x = x1 be Y0. If Y1 is the value obtained
by another guess m1 for y′(x0), then Y0 and Y1 are related linearly.
Thus, linear interpolation can be carried out between the values (m0, Y0)
and (m1, Y1).
Obviously, the process can be repeated till we obtain a value for y(x1) which
is close to A.

Apply the shooting method to solve the boundary value problem


y″ = y(x),  y(0) = 0 and y(1) = 1.
8.27 Fyfe [1969] discussed the solution of the boundary value problem defined
by
y″ + [4x/(1 + x²)] y′ + [2/(1 + x²)] y = 0,  y(0) = 1 and y(2) = 0.2.
Solve this problem by cubic spline method first with h = 1 and then with
h = 1/2 to determine the value of y (1). Compare your results with the exact
values of y (1) obtained from the analytical solution y = 1/(1 + x2).
8.28 Method of Linear Interpolation Let the boundary value problem be
defined by
y″ + f(x) y′ + g(x) y = p(x),
y(x0) = y0 and y(xn) = yn.
Set up the finite difference approximation of the differential equation and
solve the algebraic equations using the initial value y0 and assuming a
value, say Y0, for y (x1). Again, we assume another value for y(x1), say Y1
and then compute the values of y2, y3, …, yn–1 and yn. We, thus, have two
sets of values of y (x1) and y (xn). Now we use linear interpolation formula
to compute the value of y (x1) for which y (xn) = yn. The process is repeated
until we obtain the value of y(xn) close to the given boundary condition (see
Problem 8.26).
Solve the boundary value problem defined by
y″ + xy′ − 2y = 0,  y(0) = 1 and y(1) = 2
using the method of linear interpolation.
8.29 Using Galerkin’s method, compute the value of y (0.5) given that
y″ + y = x²,  0 < x < 1,  y(0) = 0 and y(1) = 0.
8.30 Solve Poisson’s equation

∂²u/∂x² + ∂²u/∂y² = 2,  0 ≤ x, y ≤ 1,
with u = 0 on the boundary C of the square region 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.

Answers to Exercises

8.1 1.1053

8.2 1 − x³/3! + (1·4/6!) x⁶ − (1·4·7/9!) x⁹ + ···
3! 6! 9!
8.3 4.005, 4.0098

8.4 1 + x + x²/2 + x³/3 + x⁴/8 + ···
2 3 8
8.5 3.005, 3.0202

8.6 (a) 1.0006, (b) y1 = 0.1000, y2 = 0.201, y3 = 0.3020

8.7 1.1002, 1.2015

8.8 1.005012

8.9 1.0906

8.10 1.11686, 1.27730, 1.5023

8.11 1.0202

8.12 0.0222, 0.0938

8.14 0.1006, 0.2052, 0.3176

8.15 0.19598, 1.3751

8.16 1.2636, 1.532

8.17 y (0.1) = 0.90484, y (0.2) = 0.81873, y (0.3) = 0.7408


y (0.4) = 0.6806 (Exact value = 0.6703).

8.18 x = 0.2 0.024664 (Euler)


0.003014 (Modified Euler)
0.000003 (Runge–Kutta)
x = 1.0 0.776885 (Euler)
0.12157 (Modified Euler)
0.000273 (Runge–Kutta)
8.19 y(0.1) = 1.1169, y(0.2) = 1.2773, y(0.3) = 1.5023,
y(0.4) = 1.8376.

8.20 y p(0.8) = 0.3049, yc(0.8) = 0.30460

8.21 h = 0.2, error = 0.00000110


h = 0.1, error = 0.0000007

8.22 x1 = 1.1003, y(0.1) = y1 = 1.1100

8.23 0.980067 (Exact value = 0.980066)



8.24 y(0.2) = 2.762239 (Exact value = 2.7616)


z(0.2) = –2.360566 (Exact value = –2.368)

8.25 Exact value of y(0.5) = 0.443409


(a) 0.443674, (b) 0.443140

8.26 y′(0) = 2.8

8.27 (a) h = 1, y1 = 0.4
h = 1/2, y2 = 0.485714
(b) h = 1, y1 = 0.542373
h = 1/2, y2 = 0.5228

8.28 y1 = 1.0828, y2 = 1.2918, y3 = 1.6282, y4 = 1.99997

8.29 y(0.5) = –0.041665 (Exact value = –0.04592)


8.30 u(x, y) = −(5/2) xy(x − 1)(y − 1)
