Numerical Solution of
Ordinary Differential Equations
8.1 INTRODUCTION
SECTION 8.2: Solution by Taylor's Series
Example 8.1 From the Taylor series for y(x), find y(0.1) correct to four decimal places if y(x) satisfies
y′ = x − y²  and  y(0) = 1.
The Taylor series for y (x) is given by
y(x) = 1 + x y0′ + (x²/2) y0″ + (x³/6) y0‴ + (x⁴/24) y0⁽ⁱᵛ⁾ + (x⁵/120) y0⁽ᵛ⁾ + ⋯
The derivatives y0′, y0″, … are obtained as follows:
y′(x) = x − y²,   so y0′ = −1
y″(x) = 1 − 2yy′,   so y0″ = 3
y‴(x) = −2yy″ − 2y′²,   so y0‴ = −8
y⁽ⁱᵛ⁾(x) = −2yy‴ − 6y′y″,   so y0⁽ⁱᵛ⁾ = 34
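The arithmetic of this example is easy to check with a few lines of code; a sketch (Python, with the derivative values obtained above hard-coded) that sums the truncated series:

```python
# Taylor-series evaluation of y(0.1) for y' = x - y^2, y(0) = 1,
# using the derivative values y(0), y'(0), ..., y''''(0) computed above.
derivs = [1, -1, 3, -8, 34]

def taylor_eval(derivs, x):
    """Sum the truncated Taylor series  y(x) = sum derivs[k] * x^k / k!."""
    total, fact = 0.0, 1
    for k, d in enumerate(derivs):
        if k > 0:
            fact *= k
        total += d * x**k / fact
    return total

y01 = taylor_eval(derivs, 0.1)
print(round(y01, 4))  # y(0.1) to four decimal places
```

The omitted terms start at x⁵, so for x = 0.1 the truncation affects at most the seventh decimal place.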
y⁽ᵛⁱ⁾(x) = x y⁽ᵛ⁾(x) + y⁽ⁱᵛ⁾(x) + 4 y⁽ⁱᵛ⁾(x) = x y⁽ᵛ⁾(x) + 5 y⁽ⁱᵛ⁾(x),    (v)
SECTION 8.3: Picard's Method of Successive Approximations
y(x) = y(0) + x y′(0) + (x²/2) y″(0) + (x³/6) y‴(0) + (x⁴/24) y⁽ⁱᵛ⁾(0) + (x⁵/120) y⁽ᵛ⁾(0) + (x⁶/720) y⁽ᵛⁱ⁾(0) + ⋯
Hence
Equation (8.3), in which the unknown function y appears under the integral
sign, is called an integral equation. Such an equation can be solved by the
method of successive approximations in which the first approximation to y
is obtained by putting y0 for y on the right side of Eq. (8.3), and we write
y(1) = y0 + ∫_{x0}^{x} f(x, y0) dx
The integral on the right can now be evaluated, and the resulting y(1) is substituted for y in the integrand of Eq. (8.3) to obtain the second approximation y(2):
y(2) = y0 + ∫_{x0}^{x} f(x, y(1)) dx
Proceeding in this way, we obtain y (3) , y (4) ,…, y ( n −1) and y ( n ) , where
y(n) = y0 + ∫_{x0}^{x} f(x, y(n−1)) dx,   with y(0) = y0    (8.4)
then the sequence y(1), y(2), … converges to the solution of Eq. (8.1).
Example 8.3 Solve the equation y′ = x + y², subject to the condition y = 1 when x = 0.
We start with y(0) = 1 and obtain
y(1) = 1 + ∫₀ˣ (x + 1) dx = 1 + x + (1/2) x².
Then the second approximation is
y(2) = 1 + ∫₀ˣ [x + (1 + x + (1/2) x²)²] dx
     = 1 + x + (3/2) x² + (2/3) x³ + (1/4) x⁴ + (1/20) x⁵.
It is clear that the integrations become progressively more difficult as we proceed to higher approximations.
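Since each Picard iterate here is a polynomial, the successive approximations can be generated mechanically with exact rational arithmetic. A sketch (the polynomial helpers are ad hoc, written for this illustration):

```python
# Picard iteration for y' = x + y^2, y(0) = 1, with polynomials stored as
# coefficient lists [c0, c1, ...] meaning c0 + c1*x + c2*x^2 + ...
from fractions import Fraction

def poly_mul(a, b):
    r = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            r[i + j] += ai * bj
    return r

def poly_add(a, b):
    n = max(len(a), len(b))
    a = a + [Fraction(0)] * (n - len(a))
    b = b + [Fraction(0)] * (n - len(b))
    return [u + v for u, v in zip(a, b)]

def poly_int(a):
    """Integral from 0 to x: shifts each coefficient up and divides by k+1."""
    return [Fraction(0)] + [c / (k + 1) for k, c in enumerate(a)]

y = [Fraction(1)]                     # y(0) = 1
for _ in range(2):                    # two Picard iterations
    rhs = poly_add([Fraction(0), Fraction(1)], poly_mul(y, y))  # x + y^2
    y = poly_add([Fraction(1)], poly_int(rhs))
print(y)  # coefficients of 1, x, x^2, ...
```

Two iterations reproduce the coefficients 1, 1, 3/2, 2/3, 1/4, 1/20 obtained above.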
Example 8.4 Given the differential equation
dy/dx = x²/(y² + 1)
with the initial condition y = 0 when x = 0, use Picard's method to obtain y for x = 0.25, 0.5 and 1.0 correct to three decimal places.
We have
y = ∫₀ˣ x²/(y² + 1) dx.
Setting y(0) = 0, we obtain
y(1) = ∫₀ˣ x² dx = (1/3) x³
and
y(2) = ∫₀ˣ x²/((1/9) x⁶ + 1) dx = tan⁻¹((1/3) x³) = (1/3) x³ − (1/81) x⁹ + ⋯
SECTION 8.4: Euler's Method
so that y(1) and y(2) agree up to the first term, viz. (1/3)x³. To find the range of values of x for which the series with the term (1/3)x³ alone gives the result correct to three decimal places, we put
(1/81) x⁹ ≤ 0.0005
which yields
x ≤ 0.7
Hence
y(0.25) = (1/3)(0.25)³ = 0.005
y(0.5) = (1/3)(0.5)³ = 0.042
y(1.0) = 1/3 − 1/81 = 0.321
We have so far discussed the methods which yield the solution of a differential
equation in the form of a power series. We will now describe the methods
which give the solution in the form of a set of tabulated values.
Suppose that we wish to solve Eq. (8.1) for values of y at x = x_r = x0 + rh (r = 1, 2, …). Integrating Eq. (8.1), we obtain
y1 = y0 + ∫_{x0}^{x1} f(x, y) dx.    (8.6)
The exact solution is y = e⁻ˣ, and from this the value at x = 0.04 is 0.9608.
y(x_{n+1}) = y(x_n) + h y′(x_n) + (h²/2) y″(x_n) + ⋯
           = y(x_n) + h y′(x_n) + (h²/2) y″(ξ_n),   where x_n ≤ ξ_n ≤ x_{n+1}.    (8.9)
We usually encounter two types of errors in the solution of differential equations: (i) local errors, and (ii) rounding errors. The local error results from replacing the given differential equation by the relation
y_{n+1} = y_n + h y_n′.
This error is given by
L_{n+1} = −(1/2) h² y″(ξ_n)    (8.10)
The total error is then defined by
e_n = y_n − y(x_n)    (8.11)
Since y0 is exact, it follows that e0 = 0.
Neglecting the rounding error, we write the total solution error as
e_{n+1} = y_{n+1} − y(x_{n+1})
        = y_n + h y_n′ − [y(x_n) + h y′(x_n) − L_{n+1}]
        = e_n + h [f(x_n, y_n) − y′(x_n)] + L_{n+1}.
⇒ e_{n+1} = e_n + h [f(x_n, y_n) − f(x_n, y(x_n))] + L_{n+1}.
By the mean value theorem, we write
f(x_n, y_n) − f(x_n, y(x_n)) = [y_n − y(x_n)] (∂f/∂y)(x_n, ξ̄_n),   y(x_n) ≤ ξ̄_n ≤ y_n.
Hence, we have
e_{n+1} = e_n [1 + h f_y(x_n, ξ̄_n)] + L_{n+1}    (8.12)
Since e0 = 0, we obtain successively
e1 = L1;   e2 = [1 + h f_y(x1, ξ̄1)] L1 + L2;   …
See the book by Isaacson and Keller [1966] for more details.
Example 8.6 We consider again the differential equation y′ = −y with the condition y(0) = 1, which we solved by Euler's method in Example 8.5. Choosing h = 0.01, we have
1 + h f_y(x_n, ξ̄_n) = 1 + 0.01(−1) = 0.99
and
L_{n+1} = −(1/2) h² y″(ξ_n) = −0.00005 y(ξ_n),
since y″ = −y′ = y for this equation.
In this problem, y(ξ_n) ≤ y(x_n), since y′ is negative. Hence we successively obtain
|L1| ≤ 0.00005 = 5 × 10⁻⁵,
It can be verified that the estimate for e4 agrees with the actual error in the
value of y(0.04) obtained in Example 8.5.
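The error behaviour described above is easy to observe directly. A sketch of Euler's method for y′ = −y, y(0) = 1 with h = 0.01, printing the actual error at each step:

```python
# Euler's method for y' = -y, y(0) = 1 with h = 0.01 (as in Examples 8.5
# and 8.6), tracking the total error e_n = y_n - y(x_n) at each step.
import math

h, y = 0.01, 1.0
for n in range(1, 5):
    y = y + h * (-y)            # y_{n+1} = y_n + h f(x_n, y_n) = 0.99 y_n
    err = y - math.exp(-n * h)  # total error e_n against the exact solution
    print(n, round(y, 6), round(err, 6))
```

After four steps y = (0.99)⁴ ≈ 0.9606, and the accumulated error is of the size the bound above suggests (a few times 10⁻⁵).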
Runge–Kutta methods are designed to give greater accuracy, and they possess the advantage of requiring only function values at some selected points on the subinterval.
If we substitute y1 = y0 + hf ( x0 , y0 ) on the right side of Eq. (8.13), we
obtain
y1 = y0 + (h/2)[f0 + f(x0 + h, y0 + h f0)],
where f0 = f(x0, y0). If we now set
k1 = h f0   and   k2 = h f(x0 + h, y0 + k1)
then the above equation becomes
y1 = y0 + (1/2)(k1 + k2),    (8.15)
which is the second-order Runge–Kutta formula. The error in this formula can be shown to be of order h³ by expanding both sides in Taylor series.
Thus, the left side gives
y0 + h y0′ + (h²/2) y0″ + (h³/6) y0‴ + ⋯
and on the right side
k2 = h f(x0 + h, y0 + h f0) = h [f0 + h (∂f/∂x)0 + h f0 (∂f/∂y)0 + O(h²)].
Since
df(x, y)/dx = ∂f/∂x + f ∂f/∂y,
we obtain
k2 = h [f0 + h f0′ + O(h²)] = h f0 + h² f0′ + O(h³),
so that the right side of Eq. (8.15) becomes
y0 + (1/2)(k1 + k2) = y0 + h y0′ + (h²/2) y0″ + O(h³).
It therefore follows that the Taylor series expansions of both sides of Eq. (8.15)
agree up to terms of order h2, which means that the error in this formula
is of order h3.
More generally, if we set
y1 = y0 + W1 k1 + W2 k2    (8.16a)
where
k1 = h f0
k2 = h f(x0 + α0 h, y0 + β0 k1)    (8.16b)
then the Taylor series expansions of both sides of Eq. (8.16a) give the identity
y0 + h f0 + (h²/2)(∂f/∂x + f0 ∂f/∂y)0 + O(h³)
   = y0 + (W1 + W2) h f0 + W2 h² (α0 ∂f/∂x + β0 f0 ∂f/∂y)0 + O(h³).
Equating the coefficients of f (x, y) and its derivatives on both sides, we
obtain the relations
W1 + W2 = 1,   W2 α0 = 1/2,   W2 β0 = 1/2.    (8.17)
Clearly, α0 = β0, and if α0 is assigned any value arbitrarily, then the remaining parameters can be determined uniquely. If we set, for example, α0 = β0 = 1, then we immediately obtain W1 = W2 = 1/2, which gives formula (8.15).
It follows, therefore, that there are several second-order Runge–Kutta formulae, and formula (8.15) is just one member of the family defined by (8.16) and (8.17).
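The two-parameter family (8.16)–(8.17) can be exercised in code. The sketch below implements a generic second-order step with α0 free (and β0 = α0), and applies both α0 = 1, which gives formula (8.15), and α0 = 1/2 (the midpoint choice W1 = 0, W2 = 1) to y′ = −y; for a linear f the two members happen to produce identical steps:

```python
# Generic member of the second-order Runge-Kutta family (8.16)-(8.17):
# given alpha0, the constraints W2*alpha0 = 1/2 and W1 + W2 = 1 fix the
# remaining parameters (beta0 = alpha0).
import math

def rk2(f, x0, y0, h, alpha):
    w2 = 1.0 / (2.0 * alpha)                     # from W2 * alpha0 = 1/2
    w1 = 1.0 - w2                                # from W1 + W2 = 1
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + alpha * h, y0 + alpha * k1)  # beta0 = alpha0
    return y0 + w1 * k1 + w2 * k2

f = lambda x, y: -y
heun = rk2(f, 0.0, 1.0, 0.1, 1.0)        # alpha0 = 1  -> formula (8.15)
midpoint = rk2(f, 0.0, 1.0, 0.1, 0.5)    # alpha0 = 1/2 -> midpoint form
exact = math.exp(-0.1)
print(heun, midpoint, exact)
```

Both members give 0.905 for one step, against the exact value e⁻⁰·¹ ≈ 0.90484, an O(h³) discrepancy.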
Higher-order Runge–Kutta formulae exist, of which we mention only the fourth-order formula defined by
y1 = y0 + W1 k1 + W2 k2 + W3 k3 + W4 k4    (8.18a)
where
k1 = h f(x0, y0)
k2 = h f(x0 + α0 h, y0 + β0 k1)
k3 = h f(x0 + α1 h, y0 + β1 k1 + γ1 k2)
k4 = h f(x0 + α2 h, y0 + β2 k1 + γ2 k2 + δ1 k3),    (8.18b)
where the parameters have to be determined by expanding both sides of Eq. (8.18a) in Taylor series and securing agreement of terms up to and including those containing h⁴. The choice of the parameters is, again, arbitrary, and we therefore have several fourth-order Runge–Kutta formulae. If, for example, we set
α0 = β0 = 1/2,   α1 = 1/2,   α2 = 1,
β1 = (1/2)(√2 − 1),   β2 = 0,
γ1 = 1 − 1/√2,   γ2 = −1/√2,   δ1 = 1 + 1/√2,    (8.19)
W1 = W4 = 1/6,   W2 = (1/3)(1 − 1/√2),   W3 = (1/3)(1 + 1/√2),
SECTION 8.5: Runge–Kutta Method
while the choice
α0 = α1 = 1/2,   β0 = γ1 = 1/2,
β1 = β2 = γ2 = 0,
α2 = δ1 = 1,    (8.20)
W1 = W4 = 1/6,   W2 = W3 = 2/6
leads to the fourth-order Runge–Kutta formula most commonly used in practice:
y1 = y0 + (1/6)(k1 + 2k2 + 2k3 + k4)    (8.21a)
where
k1 = h f(x0, y0)
k2 = h f(x0 + (1/2)h, y0 + (1/2)k1)
k3 = h f(x0 + (1/2)h, y0 + (1/2)k2)
k4 = h f(x0 + h, y0 + k3)    (8.21b)
in which the error is of order h⁵. The complete derivation of the formula is exceedingly complicated; the interested reader is referred to the book by Levy and Baggot. We illustrate here the use of the fourth-order formula by means of examples.
Example 8.8 Given dy/dx = y − x with y(0) = 2, find y(0.1) and y(0.2) correct to four decimal places.
(i) Runge–Kutta second-order formula: With h = 0.1, we find k1 = 0.2 and k2 = 0.21. Hence
y1 = y(0.1) = 2 + (1/2)(0.41) = 2.2050.
To determine y2 = y(0.2), we note that x0 = 0.1 and y0 = 2.2050. Hence,
k1 = 0.1(2.105) = 0.2105   and   k2 = 0.1(2.4155 − 0.2) = 0.22155.
It follows that
y2 = 2.2050 + (1/2)(0.2105 + 0.22155) = 2.4210.
Proceeding in a similar way, we obtain
y3 = y(0.3) = 2.6492   and   y4 = y(0.4) = 2.8909
We next choose h = 0.2 and compute y(0.2) and y(0.4) directly. With h = 0.2, x0 = 0 and y0 = 2, we obtain k1 = 0.4 and k2 = 0.44, and hence y(0.2) = 2.4200. Similarly, we obtain y(0.4) = 2.8880.
From the analytical solution y = x + 1 + eˣ, the exact values of y(0.2) and y(0.4) are respectively 2.4214 and 2.8918. To study the order of convergence of this method, we tabulate the values as follows:
x Computed y Exact y Difference Ratio
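The hand computation above (and its continuation to y(0.3) and y(0.4)) can be reproduced with a short script; a sketch:

```python
# Second-order Runge-Kutta steps for dy/dx = y - x, y(0) = 2,
# reproducing the hand computation of Example 8.8 with h = 0.1.
f = lambda x, y: y - x

def rk2_step(x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h, y + k1)
    return y + 0.5 * (k1 + k2)

x, y, h = 0.0, 2.0, 0.1
vals = []
for _ in range(4):
    y = rk2_step(x, y, h)
    x += h
    vals.append(round(y, 4))
print(vals)  # y(0.1), y(0.2), y(0.3), y(0.4)
```

The printed values match the four-decimal results 2.2050, 2.4210, 2.6492, 2.8909 worked out above.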
Example 8.10 We consider the initial value problem y′ = 3x + y/2 with the condition y(0) = 1.
The following table gives the values of y(0.2) obtained by different methods; the exact value is 1.16722193. It is seen that the fourth-order Runge–Kutta method gives the accurate value for h = 0.05.
Method h Computed value
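A sketch of the classical fourth-order formula (8.21) applied to this problem with h = 0.05, compared against the exact solution y = 13e^{x/2} − 6x − 12:

```python
# Classical fourth-order Runge-Kutta (8.21) for y' = 3x + y/2, y(0) = 1,
# stepping to x = 0.2 with h = 0.05 (Example 8.10).
import math

def rk4_step(f, x, y, h):
    k1 = h * f(x, y)
    k2 = h * f(x + h / 2, y + k1 / 2)
    k3 = h * f(x + h / 2, y + k2 / 2)
    k4 = h * f(x + h, y + k3)
    return y + (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda x, y: 3 * x + y / 2
x, y, h = 0.0, 1.0, 0.05
while x < 0.2 - 1e-12:
    y = rk4_step(f, x, y, h)
    x += h

exact = 13 * math.exp(0.1) - 6 * 0.2 - 12   # 13 e^{x/2} - 6x - 12 at x = 0.2
print(round(y, 8), round(exact, 8))
```

With h = 0.05 the computed value agrees with 1.16722193 to the eight places printed, consistent with the O(h⁵) per-step error.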
f(x, y) = f0 + n ∇f0 + (n(n + 1)/2) ∇²f0 + (n(n + 1)(n + 2)/6) ∇³f0 + ⋯    (8.22)
where
n = (x − x0)/h   and   f0 = f(x0, y0).
If this formula is substituted in
y1 = y0 + ∫_{x0}^{x1} f(x, y) dx,    (8.23)
we get
y1 = y0 + ∫_{x0}^{x1} [f0 + n ∇f0 + (n(n + 1)/2) ∇²f0 + ⋯] dx
   = y0 + h ∫₀¹ [f0 + n ∇f0 + (n(n + 1)/2) ∇²f0 + ⋯] dn
   = y0 + h (1 + (1/2)∇ + (5/12)∇² + (3/8)∇³ + (251/720)∇⁴ + ⋯) f0.
It can be seen that the right side of the above relation depends only on y0, y−1, y−2, …, all of which are known. Hence this formula can be used to compute y1, and we therefore write it as
y1^p = y0 + h (1 + (1/2)∇ + (5/12)∇² + (3/8)∇³ + (251/720)∇⁴ + ⋯) f0    (8.24)
f(x, y) = f1 + n ∇f1 + (n(n + 1)/2) ∇²f1 + (n(n + 1)(n + 2)/6) ∇³f1 + ⋯    (8.25)
SECTION 8.6: Predictor–Corrector Methods
y1 = y0 + h ∫₋₁⁰ [f1 + n ∇f1 + (n(n + 1)/2) ∇²f1 + ⋯] dn
   = y0 + h (1 − (1/2)∇ − (1/12)∇² − (1/24)∇³ − (19/720)∇⁴ − ⋯) f1    (8.26)
The right side of Eq. (8.26) depends on y1, y0, y−1, …, where for y1 we use y1^p, the predicted value obtained from (8.24). The new value of y1 thus obtained from Eq. (8.26) is called the corrected value, and hence we rewrite the formula as
y1^c = y0 + h (1 − (1/2)∇ − (1/12)∇² − (1/24)∇³ − (19/720)∇⁴ − ⋯) f1^p    (8.27)
This is called the Adams–Moulton corrector formula; the superscript c indicates that the value obtained is the corrected value, and the superscript p on the right indicates that the predicted value of y1 should be used in computing the value of f(x1, y1).
In practice, however, it is convenient to use formulae (8.24) and (8.27) with the higher-order differences ignored and the lower-order differences expressed in terms of function values. Neglecting the fourth- and higher-order differences, formulae (8.24) and (8.27) can be written as
y1^p = y0 + (h/24)(55 f0 − 59 f−1 + 37 f−2 − 9 f−3)    (8.28)
and
y1^c = y0 + (h/24)(9 f1^p + 19 f0 − 5 f−1 + f−2)    (8.29)
in which the errors are approximately
(251/720) h⁵ f0⁽⁴⁾   and   −(19/720) h⁵ f0⁽⁴⁾,
respectively.
The general forms of formulae (8.28) and (8.29) are
y_{n+1}^p = y_n + (h/24)(55 f_n − 59 f_{n−1} + 37 f_{n−2} − 9 f_{n−3})    (8.28a)
and
y_{n+1}^c = y_n + (h/24)(9 f_{n+1}^p + 19 f_n − 5 f_{n−1} + f_{n−2})    (8.29a)
Such formulae, expressed in ordinate form, are often called explicit predictor–corrector formulae.
The values y1, y2 and y3, which are required on the right side of Eq. (8.28), are obtained by means of Taylor's series, Euler's method, or the Runge–Kutta method. For this reason, these methods are called starter methods. For practical problems, the fourth-order Runge–Kutta formula together with formulae (8.28) and (8.29) has been found to be the most successful combination. The following example illustrates the application of this method.
y^p(0.8) = 0.6841 + (0.2/24){55[1 + (0.6841)²] − 59[1 + (0.4228)²] + 37[1 + (0.2027)²] − 9}
         = 1.0233, on simplification.
Using this predicted value on the right side of Eq. (8.29), we obtain
y^c(0.8) = 0.6841 + (0.2/24){9[1 + (1.0233)²] + 19[1 + (0.6841)²] − 5[1 + (0.4228)²] + [1 + (0.2027)²]}
         = 1.0296, which is correct to four decimal places.
The importance of the method lies in the fact that once y1^p is computed from formula (8.28), formula (8.29) can be used iteratively to obtain the value of y1 to the accuracy required.
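The predictor–corrector pair for this example can be sketched as follows; exact starter values tan(x) are used here (the equation being y′ = 1 + y², y(0) = 0, whose exact solution is y = tan x) rather than the four-decimal table values, so the last digits differ slightly from the hand computation:

```python
# Adams-Bashforth predictor (8.28a) and Adams-Moulton corrector (8.29a)
# for y' = 1 + y^2, y(0) = 0 (exact solution y = tan x), using exact
# starter values at x = 0, 0.2, 0.4, 0.6.
import math

f = lambda x, y: 1 + y * y
h = 0.2
xs = [0.0, 0.2, 0.4, 0.6]
ys = [math.tan(x) for x in xs]               # starter values
fs = [f(x, y) for x, y in zip(xs, ys)]

# predictor
yp = ys[3] + h / 24 * (55 * fs[3] - 59 * fs[2] + 37 * fs[1] - 9 * fs[0])
# one application of the corrector
yc = ys[3] + h / 24 * (9 * f(0.8, yp) + 19 * fs[3] - 5 * fs[2] + fs[1])
print(round(yp, 4), round(yc, 4), round(math.tan(0.8), 4))
```

A single corrector application already reduces the predictor's error by about two orders of magnitude here.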
f(x, y) = f0 + n Δf0 + (n(n − 1)/2) Δ²f0 + (n(n − 1)(n − 2)/6) Δ³f0 + ⋯    (8.30)
Substituting Eq. (8.30) in the relation
y4 = y0 + ∫_{x0}^{x4} f(x, y) dx    (8.31)
we obtain
y4 = y0 + ∫_{x0}^{x4} [f0 + n Δf0 + (n(n − 1)/2) Δ²f0 + ⋯] dx
   = y0 + h ∫₀⁴ [f0 + n Δf0 + (n(n − 1)/2) Δ²f0 + ⋯] dn
   = y0 + h (4 f0 + 8 Δf0 + (20/3) Δ²f0 + (8/3) Δ³f0 + ⋯)
   = y0 + (4h/3)(2 f1 − f2 + 2 f3)    (8.32)
after neglecting fourth- and higher-order differences and expressing the differences Δf0, Δ²f0 and Δ³f0 in terms of the function values.
This formula can be used to predict the value of y4 when those of y0, y1, y2 and y3 are known. To obtain a corrector formula, we substitute Newton's formula (8.30) in the relation
y2 = y0 + ∫_{x0}^{x2} f(x, y) dx    (8.33)
and get
y2 = y0 + h ∫₀² [f0 + n Δf0 + (n(n − 1)/2) Δ²f0 + ⋯] dn
   = y0 + h (2 f0 + 2 Δf0 + (1/3) Δ²f0 + ⋯)
   = y0 + (h/3)(f0 + 4 f1 + f2)    (8.34)
The value of y4 obtained from Eq. (8.32) can therefore be checked by using Eq. (8.34).
The general forms of Eqs. (8.32) and (8.34) are
y_{n+1}^p = y_{n−3} + (4h/3)(2 f_{n−2} − f_{n−1} + 2 f_n)    (8.32a)
and
y_{n+1}^c = y_{n−1} + (h/3)(f_{n−1} + 4 f_n + f_{n+1})    (8.34a)
x       y
−0.1    1.0900
0       1.0000
0.1     0.8900
0.2     0.7605
y^p(0.3) = 1.09 + (4(0.1)/3)[2(−1) − (−1.19790) + 2(−1.38164)] = 0.614616.
In order to apply Eq. (8.34), we need to compute y′(0.3). We have
y^c(0.3) = 0.89 + (0.1/3)[−1.197900 + 4(−1.38164) + (−1.532247)] = 0.614776.
s(x) = ((xi − x)²(x − x_{i−1})/h²) m_{i−1} − ((x − x_{i−1})²(xi − x)/h²) m_i
     + ((xi − x)²[2(x − x_{i−1}) + h]/h³) y_{i−1} + ((x − x_{i−1})²[2(xi − x) + h]/h³) y_i,    (8.35)
which gives
s″(xi) = 2 m_{i−1}/h + 4 m_i/h − (6/h²)(y_i − y_{i−1})
       = 2 m_{i−1}/h + 4 m_i/h − (6/h²)(s_i − s_{i−1}).    (8.38)
If we now consider the initial-value problem
dy/dx = f(x, y)    (8.39a)
and
y(x0) = y0    (8.39b)
then from Eq. (8.39a) we obtain
d²y/dx² = ∂f/∂x + (∂f/∂y)(dy/dx),
or
y″(xi) = f_x(xi, yi) + f_y(xi, yi) f(xi, yi)
       = f_x(xi, si) + f_y(xi, si) f(xi, si).    (8.40)
Equating Eqs. (8.38) and (8.40), we obtain
2 m_{i−1}/h + 4 m_i/h − (6/h²)(s_i − s_{i−1}) = f_x(xi, si) + f_y(xi, si) f(xi, si)    (8.41)
from which si can be computed. Substitution in Eq. (8.35) gives the required
solution.
The following example demonstrates the usefulness of the spline method.
y = 13 e^{x/2} − 6x − 12    (ii)
We take, for simplicity, n = 2, i.e. h = 0.5, and compute the value of y(0.5). Here f(x, y) = 3x + y/2, and therefore we have f_x = 3 and f_y = 1/2. Also,
f(xi, si) = 3 xi + (1/2) si.
SECTION 8.8: Simultaneous and Higher-Order Equations
In a similar manner, one can extend the Taylor series method or Picard's method to the system (8.42). The extension of the Runge–Kutta method to a system of n equations is quite straightforward.
We now consider the second-order differential equation
y″ = F(x, y, y′)    (8.44a)
with the initial conditions
y(x0) = y0 and y′(x0) = y0′.    (8.45a)
The cubic spline method is a one-step method and at the same time a global one. The step size can be changed during the computations and, under certain conditions, the method gives O(h⁴) convergence. The method can also be extended to systems of ordinary differential equations.
y(x + h) = y(x) + h y′(x) + (h²/2) y″(x) + (h³/6) y‴(x) + ⋯    (8.50)
from which we obtain
y′(x) = (y(x + h) − y(x))/h − (h/2) y″(x) − ⋯
Thus we have
y′(x) = (y(x + h) − y(x))/h + O(h)    (8.51)
y(x − h) = y(x) − h y′(x) + (h²/2) y″(x) − (h³/6) y‴(x) + ⋯    (8.52)
from which we obtain
y′(x) = (y(x) − y(x − h))/h + O(h)    (8.53)
which is the backward difference approximation for y ′( x).
A central difference approximation for y ′( x) can be obtained by subtracting
Eq. (8.52) from Eq. (8.50). We thus have
y′(x) = (y(x + h) − y(x − h))/(2h) + O(h²).    (8.54)
It is clear that Eq. (8.54) is a better approximation to y ′( x) than either
Eq. (8.51) or Eq. (8.53). Again, adding Eqs. (8.50) and (8.52), we get an
approximation for y ′′( x)
y″(x) = (y(x − h) − 2 y(x) + y(x + h))/h² + O(h²).    (8.55)
In a similar manner, it is possible to derive finite-difference approximations
to higher derivatives.
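The orders O(h) and O(h²) claimed in (8.51)–(8.55) can be observed numerically; a sketch using y = eˣ at x = 1:

```python
# Order-of-accuracy check: halving h should halve the forward-difference
# error (8.51) but quarter the central-difference error (8.54).
import math

def errs(h):
    x = 1.0
    fwd = (math.exp(x + h) - math.exp(x)) / h - math.exp(x)
    cen = (math.exp(x + h) - math.exp(x - h)) / (2 * h) - math.exp(x)
    return fwd, cen

f1, c1 = errs(0.1)
f2, c2 = errs(0.05)
print(f1 / f2, c1 / c2)  # ratios near 2 and 4
```

The error ratios confirm first-order behaviour for the one-sided formula and second-order behaviour for the central one.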
To solve the boundary-value problem defined by Eqs. (8.46) and (8.47),
we divide the range [x0, xn] into n equal subintervals of width h so that
xi = x0 + ih, i = 1, 2, …, n.
The corresponding values of y at these points are denoted by
y ( xi ) = yi = y ( x0 + ih), i = 0, 1, 2, …, n.
From Eqs. (8.54) and (8.55), values of y ′( x) and y ′′( x) at the point x = xi
can now be written as
yi′ = (y_{i+1} − y_{i−1})/(2h) + O(h²)
and
yi″ = (y_{i−1} − 2 yi + y_{i+1})/h² + O(h²).
Satisfying the differential equation at the point x = xi , we get
yi″ + fi yi′ + gi yi = ri
Substituting the expressions for yi′ and yi′′, this gives
(y_{i−1} − 2 yi + y_{i+1})/h² + fi (y_{i+1} − y_{i−1})/(2h) + gi yi = ri,   i = 1, 2, …, n − 1,
where yi = y ( xi ), gi = g ( xi ), etc.
SECTION 8.10: Boundary-Value Problems
τ = (h²/12)(yi⁽ⁱᵛ⁾ + 2 fi yi‴) + O(h⁴).    (8.58)
Thus, the finite difference approximation defined by Eq. (8.56) has second-order accuracy for functions with continuous fourth derivatives on [x0, xn].
Further, it follows that τ → 0 as h → 0, implying that greater accuracy in the result can be achieved by using a smaller value of h. In such a case, of course, more computational effort is required, since the number of equations becomes larger.
An easier way to improve accuracy is to employ Richardson's deferred approach to the limit, which assumes that the O(h²) error is proportional to h². This means that the error has the form
y(xi) − yi = h² e(xi) + O(h⁴)    (8.59)
For extrapolation to the limit, we solve Eq. (8.56) twice, with the
interval lengths h and h/2 respectively. Let the corresponding solutions of
Eq. (8.56) be denoted by yi(h) and yi(h/2). For a point xi common to both,
we therefore have
y(xi) − yi(h) = h² e(xi) + O(h⁴)    (8.60a)
and
y(xi) − yi(h/2) = (h²/4) e(xi) + O(h⁴)    (8.60b)
from which we obtain
y(xi) = (4 yi(h/2) − yi(h))/3.    (8.61)
We have explained the method with simple boundary conditions (8.47) where
the function values on the boundary are prescribed. In many applied problems,
however, derivative boundary conditions may be prescribed, and this requires
a modification of the procedures described above. The following examples
illustrate the application of the finite-difference method.
Example 8.15 A boundary-value problem is defined by
y″ + y + 1 = 0,   0 ≤ x ≤ 1
where
y(0) = 0 and y(1) = 0.
With h = 0.5, use the finite-difference method to determine the value of y(0.5).
This example was considered by Bickley [1968]. Its exact solution is given by
y(x) = cos x + ((1 − cos 1)/sin 1) sin x − 1,
from which we obtain
y(0.5) = 0.139493927.
Here nh = 1. The differential equation is approximated as
(y_{i−1} − 2 yi + y_{i+1})/h² + yi + 1 = 0
and this gives, after simplification,
y_{i−1} − (2 − h²) yi + y_{i+1} = −h²,   i = 1, 2, …, n − 1,
which together with the boundary conditions y0 = 0 and yn = 0 comprises a system of (n + 1) equations for the (n + 1) unknowns y0, y1, …, yn.
Choosing h = 1/2 (i.e. n = 2), the above system becomes
y0 − (2 − 1/4) y1 + y2 = −1/4.
With y0 = y2 = 0, this gives
y1 = y(0.5) = 1/7 = 0.142857142…
Comparison with the exact solution given above shows that the error in the computed solution is 0.00336.
On the other hand, if we choose h = 1/4 (i.e. n = 4), we obtain the three equations:
y0 − (31/16) y1 + y2 = −1/16
y1 − (31/16) y2 + y3 = −1/16
y2 − (31/16) y3 + y4 = −1/16,
If we divide the interval [0, 1] into two equal subintervals, then from Eq. (i) and the recurrence relations for Mi, we obtain
y(0.5) = 3/22 = 0.13636,    (ii)
and
M0 = −1,   M1 = −25/22,   M2 = −1.
Hence we obtain
s′(0) = 47/88,   s′(1) = −47/88,   s′(0.5) = 0.
From the analytical solution of the problem (i), we observe that y(0.5) = 0.13949, and hence the cubic spline solution of the boundary-value problem has an error of 2.24% (see Bickley [1968]).
Example 8.18 Given the boundary-value problem
x² y″ + x y′ − y = 0;   y(1) = 1,   y(2) = 0.5,
apply the cubic spline method to determine the value of y(1.5).
The given differential equation is
y″ = −(1/x) y′ + (1/x²) y.    (i)
Setting x = xi and y″(xi) = Mi, Eq. (i) becomes
Mi = −(1/xi) yi′ + (1/xi²) yi.    (ii)
Using the expressions given in Eqs. (8.63) and (8.64), we obtain
Mi = −(1/xi)(−(h/3) Mi − (h/6) M_{i+1} + (y_{i+1} − yi)/h) + (1/xi²) yi,   i = 0, 1, 2, …, n − 1,    (iii)
and
Mi = −(1/xi)((h/3) Mi + (h/6) M_{i−1} + (yi − y_{i−1})/h) + (1/xi²) yi,   i = 1, 2, …, n.    (iv)
2 y0 + y1 = 24 (y1 − y0)    (x)
Equations (ix) and (x) yield
y1 = y(0.5) = 598/823 = 0.7266.
Thus the error in the cubic spline solution is 0.0044. This example demonstrates the superiority of the cubic spline method over the finite difference method when the boundary-value problem contains derivative boundary conditions.
t(x) = Σ_{i=1}^{n} a_i φ_i(x),    (8.69)
where φ_i(x) are called base functions. Substituting for t(x) in Eq. (8.67), we obtain a residual. Denoting this residual by R(t), we obtain
R(t) = t″ + p(x) t′ + q(x) t − f(x)    (8.70)
Usually the base functions φ_i(x) are chosen as the weight functions. We therefore have
I = ∫_a^b φ_i(x) R(t) dx = 0,   i = 1, 2, …, n,    (8.71)
which yields a system of equations for the parameters a_i. When the a_i are known, t(x) can be calculated from Eq. (8.69).
on integrating by parts
= −∫₀¹ t′ (1 − 2x) dx,   since the first term vanishes
= −[{t (1 − 2x)}₀¹ − ∫₀¹ t (−2) dx]
= −2 ∫₀¹ t dx,   since t = 0 at x = 0 and x = 1.
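As a concrete illustration of (8.69)–(8.71) (not worked in the text above), consider the problem of Example 8.15, y″ + y + 1 = 0 with y(0) = y(1) = 0, and the single trial function φ1(x) = x(1 − x), so that t(x) = a1 x(1 − x). The Galerkin condition is then one linear equation for a1; a sketch in exact arithmetic:

```python
# Galerkin method with one base function for y'' + y + 1 = 0,
# y(0) = y(1) = 0.  Trial: t = a * x(1-x), so t'' = -2a and the residual
# is R = t'' + t + 1 = a*(x - x^2 - 2) + 1.
# Condition: integral_0^1 x(1-x) * R dx = 0.
from fractions import Fraction

I1 = Fraction(1, 6)    # integral of x(1-x)     over [0, 1]
I2 = Fraction(1, 30)   # integral of x^2(1-x)^2 over [0, 1]
# a*(I2 - 2*I1) + I1 = 0  =>  solve for a
a = -I1 / (I2 - 2 * I1)
t_mid = a * Fraction(1, 4)   # t(0.5) = a * (0.5)(0.5)
print(a, float(t_mid))
```

The single-parameter Galerkin value t(0.5) = 5/36 ≈ 0.1389 is already close to the exact 0.139494 from Example 8.15.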
EXERCISES
8.1 Given
dy/dx = 1 + xy,   y(0) = 1,
obtain the Taylor series for y(x) and compute y(0.1), correct to four decimal places.
8.2 Show that the differential equation
d²y/dx² = −xy,   y(0) = 1 and y′(0) = 0,
has the series solution
y = 1 − x³/3! + (1·4/6!) x⁶ − (1·4·7/9!) x⁹ + ⋯
8.3 If
dy/dx = 1/(x² + y)   with y(4) = 4,
compute the values of y(4.1) and y(4.2) by Taylor's series method.
8.4 Use Picard's method to obtain a series solution of the problem given in Problem 8.1 above.
8.5 Use Picard's method to obtain y(0.1) and y(0.2) for the problem defined by
dy/dx = x + y x⁴,   y(0) = 3.
8.6 Using Euler's method, solve the following problems:
(a) dy/dx = (x³ + y³)/5,   y(0) = 1      (b) dy/dx = 1 + y²,   y(0) = 0
8.7 Compute the values of y(1.1) and y(1.2) using the Taylor series method for the solution of the problem
y″ + y² y′ = x³,   y(1) = 1 and y′(1) = 1.
8.8 Find, by Taylor's series method, the value of y(0.1) given that
y″ − x y′ − y = 0,   y(0) = 1 and y′(0) = 0.
8.9 Using Picard's method, find y(0.1), given that
dy/dx = (y − x)/(y + x)   and   y(0) = 1.
8.10 Using Taylor's series, find y(0.1), y(0.2) and y(0.3), given that
dy/dx = xy + y²,   y(0) = 1.
8.11 Given the differential equation
dy/dx = x² + y
with y(0) = 1, compute y(0.02) using Euler's modified method.
∂²u/∂x² + ∂²u/∂y² = 2,   0 ≤ x, y ≤ 1
with u = 0 on the boundary C of the square region 0 ≤ x ≤ 1, 0 ≤ y ≤ 1.
Answers to Exercises
8.1 1.1053
8.2 1 − x³/3! + (1·4/6!) x⁶ − (1·4·7/9!) x⁹ + ⋯
8.3 4.005, 4.0098
8.4 1 + x + x²/2 + x³/3 + x⁴/8 + ⋯
8.5 3.005, 3.0202
8.8 1.005012
8.9 1.0906
8.11 1.0202