Finite Difference Method

This document is the final project report submitted by Dushyant Kumar Sengar for their M.Sc. project on the numerical solution of parabolic equations. The report introduces parabolic partial differential equations and describes two methods - the explicit method and Crank-Nicolson implicit method - for approximating the solution numerically. It also analyzes the convergence, stability, and consistency of these methods and concludes with their applicability for solving parabolic equations under suitable conditions.

project (MTH 598A)


Final Project Report

NUMERICAL SOLUTION OF PARABOLIC EQUATIONS

Supervised by:
Prof. S. Ghorai

Submitted by:
Dushyant Kumar Sengar
Roll No. 181048
M.Sc. Mathematics
Certificate
It is certified that the work embodied in this project entitled “NUMERICAL SOLUTION OF PARABOLIC

EQUATIONS” by Dushyant Kumar Sengar has been carried out under my supervision.

Prof. S. Ghorai

(Professor)

Department of Mathematics & Scientific Computing

Indian Institute of Technology, Kanpur


Acknowledgement

I am very grateful to all the people who have supported me throughout the course of this project, and I am thankful for their inspiring guidance and friendly advice during the project work. I express my sincere gratitude towards my guide Prof. S. Ghorai, whose consistent efforts, diligent support and regular encouragement helped me to overcome the challenges that I faced throughout this project and to acquire detailed knowledge of the topic concerned.


Abstract

In this project, we focus on the numerical solution of parabolic equations. We take the two independent variables to be time (t) and distance (x), with the dependent variable being the temperature (u). We approximate the solution of such equations using two methods, namely the explicit method and the Crank-Nicolson implicit method. We also analyze their convergence, stability and consistency. Finally, we conclude with the applicability of each of these methods to the solution of parabolic equations under suitable conditions.


Introduction

The mathematical formulation of most of the problems that we come across in science involves rates of change with respect to two or more independent variables, which leads to a partial differential equation or a set of such equations. A general two-dimensional second-order equation is

$$a \frac{\partial^2 \varphi}{\partial x^2} + b \frac{\partial^2 \varphi}{\partial x \partial y} + c \frac{\partial^2 \varphi}{\partial y^2} + d \frac{\partial \varphi}{\partial x} + e \frac{\partial \varphi}{\partial y} + f\varphi + g = 0,$$

where $x$ and $y$ are independent variables, $\varphi$ is the dependent variable, and $a, b, c, d, e, f$ and $g$ may be functions of the independent variables $x$ and $y$.

This equation is said to be elliptic when $b^2 - 4ac < 0$, parabolic when $b^2 - 4ac = 0$, and hyperbolic when $b^2 - 4ac > 0$. Here, we obtain an approximate solution of parabolic equations using the finite difference method.

Finite-difference approximations to derivatives

For a function F whose derivatives are finite and continuous, Taylor's theorem gives

$$F(x+h) = F(x) + hF'(x) + \frac{h^2}{2!}F''(x) + \frac{h^3}{3!}F'''(x) + \cdots, \qquad (1.1)$$

$$F(x-h) = F(x) - hF'(x) + \frac{h^2}{2!}F''(x) - \frac{h^3}{3!}F'''(x) + \cdots. \qquad (1.2)$$

Adding equations (1.1) and (1.2), we get

$$F(x+h) + F(x-h) = 2F(x) + h^2 F''(x) + O(h^4), \qquad (1.3)$$

where $O(h^4)$ denotes the terms of fourth and higher powers of h. Considering $O(h^4)$ to be negligible in comparison with the terms of lower powers of h, equation (1.3) gives

$$F''(x) = \left(\frac{d^2 F}{dx^2}\right) \approx \frac{1}{h^2}\{F(x+h) - 2F(x) + F(x-h)\}, \qquad (1.4)$$

with a leading error on the right-hand side of order $h^2$.

Similarly, subtracting equation (1.2) from (1.1) and neglecting the terms of order $h^3$, we get

$$F'(x) = \left(\frac{dF}{dx}\right) \approx \frac{1}{2h}\{F(x+h) - F(x-h)\}, \qquad (1.5)$$

with an error of order $h^2$.

Hence, from Fig. 1.1, we see that equation (1.5) approximates the slope of the tangent at P by the slope of the chord AB; this is called the central-difference approximation. We can also approximate the slope of the tangent at P by the slope of the chord PB, giving the forward-difference formula

$$U'(x) \approx \frac{1}{h}\{U(x+h) - U(x)\}, \qquad (1.6)$$

or by the slope of the chord AP, giving the backward-difference formula

$$U'(x) \approx \frac{1}{h}\{U(x) - U(x-h)\}. \qquad (1.7)$$

[Fig. 1.1: the curve u(x), with points A = (x−h, u(x−h)), P = (x, u(x)) and B = (x+h, u(x+h)); the chords AB, PB and AP approximate the tangent at P.]

The leading errors in forward and backward difference formula are both O(h).
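As an illustration (not part of the original report), the quoted error orders can be checked numerically; the choice F(x) = sin x and the step sizes below are our own.

```python
import math

# Check of the leading-error orders quoted above, using F(x) = sin(x), F'(x) = cos(x).
F, dF, x = math.sin, math.cos, 1.0

def errors(h):
    fwd = (F(x + h) - F(x)) / h            # forward difference (1.6), error O(h)
    ctr = (F(x + h) - F(x - h)) / (2 * h)  # central difference (1.5), error O(h^2)
    return abs(fwd - dF(x)), abs(ctr - dF(x))

e1, e2 = errors(0.1), errors(0.05)
# Halving h roughly halves the forward-difference error and quarters the central one.
print(e1[0] / e2[0], e1[1] / e2[1])
```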

Transformation to non-dimensional form:

Many physically different problems can be handled by a single mathematical equation when it is expressed in terms of non-dimensional variables; such problems are merely variations of the same type of problem.

For instance, consider a heat-insulated thin uniform rod, so that temperature changes occur through heat conduction along its length. It satisfies the parabolic equation

$$\frac{\partial U}{\partial T} = k \frac{\partial^2 U}{\partial X^2}, \quad k \text{ constant}, \qquad (2.1)$$

the solution of which gives the temperature U at a distance X from one end of the rod after a time T. The non-dimensionalizing process for this parabolic equation is shown below.

Let L be the length of the rod and U0 be some particular temperature such as the maximum or minimum

temperature at zero time. Put


$$x = \frac{X}{L} \quad \text{and} \quad u = \frac{U}{U_0}.$$

Then
$$\frac{\partial U}{\partial X} = \frac{\partial U}{\partial x}\frac{dx}{dX} = \frac{1}{L}\frac{\partial U}{\partial x}$$
and
$$\frac{\partial^2 U}{\partial X^2} = \frac{\partial}{\partial x}\left(\frac{\partial U}{\partial X}\right)\frac{dx}{dX} = \frac{\partial}{\partial x}\left(\frac{1}{L}\frac{\partial U}{\partial x}\right)\frac{1}{L} = \frac{1}{L^2}\frac{\partial^2 U}{\partial x^2}.$$

Hence, equation (2.1) transforms to
$$\frac{\partial (uU_0)}{\partial T} = \frac{k}{L^2}\frac{\partial^2 (uU_0)}{\partial x^2},$$
i.e.
$$\frac{1}{kL^{-2}}\frac{\partial u}{\partial T} = \frac{\partial^2 u}{\partial x^2}.$$

Writing $t = kT/L^2$ and applying the chain rule to the left side results in

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \qquad (2.2)$$
as the non-dimensional form of (2.1).

An explicit method of solution:

By equations (1.6) and (1.4), a finite-difference approximation to

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2} \qquad (2.3)$$

is

$$\frac{u_{i,j+1} - u_{i,j}}{k} = \frac{u_{i+1,j} - 2u_{i,j} + u_{i-1,j}}{h^2},$$

where $x_i = ih$ $(i = 0, 1, 2, \dots)$ and $t_j = jk$ $(j = 0, 1, 2, \dots)$. This can be written as

$$u_{i,j+1} = ru_{i-1,j} + (1 - 2r)u_{i,j} + ru_{i+1,j}, \qquad (2.4)$$

where $r = k/h^2$, and gives a formula for the unknown 'temperature' $u_{i,j+1}$ at the (i, j+1)th mesh point in terms of known 'temperatures' along the jth time-row (Fig. 2.1). A formula such as this, which expresses one unknown pivotal value directly in terms of known pivotal values, is called an explicit formula. Let us consider some examples and solve them using equation (2.4). The left side of equation (2.4) contains 1 unknown and the right side 3 known pivotal values of u (Fig. 2.1).

[Fig. 2.1: computational molecule of the explicit scheme — the unknown value u_{i,j+1} on time-row j+1 is computed from the known values u_{i−1,j}, u_{i,j}, u_{i+1,j} on time-row j; mesh widths h in x and k in t; u = 0 on the boundary.]
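A minimal sketch of the explicit scheme (2.4) in Python; the report does not list its code, so the grid sizes and names below are our own choices, applied to a triangular initial profile with u = 0 at both ends.

```python
# Explicit scheme (2.4): u[i,j+1] = r*u[i-1,j] + (1-2r)*u[i,j] + r*u[i+1,j].
def explicit_step(u, r):
    """Advance one time level; u[0] and u[-1] are boundary values, kept fixed."""
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1]
    return new

h, k = 0.1, 0.001                # mesh widths, so r = k/h^2 = 0.1
r = k / h ** 2
x = [i * h for i in range(11)]   # 0, 0.1, ..., 1.0
u = [2 * xi if xi <= 0.5 else 2 * (1 - xi) for xi in x]  # triangular initial data

for _ in range(5):               # march to t = 0.005
    u = explicit_step(u, r)
print(u[5], u[3])                # values at x = 0.5 and x = 0.3
```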

Example 2.1

Consider that the ends of the rod are kept in contact with blocks of melting ice. Find a numerical solution of $\partial U/\partial t = \partial^2 U/\partial x^2$ whose initial temperature distribution (initial condition at t = 0) in non-dimensional form is

(i) (a) $U = 2x$, $0 \le x \le \frac{1}{2}$,
    (b) $U = 2(1-x)$, $\frac{1}{2} \le x \le 1$;  (2.5)

(ii) $U = 0$ at $x = 0$ and $1$ for all $t > 0$ (the boundary condition).

Solution:
Notice that the problem is symmetric about $x = \frac{1}{2}$, so we need the solution only for $0 \le x \le \frac{1}{2}$.

CASE 1:

Take $h = \frac{1}{10}$, $\delta t = k = \frac{1}{1000}$, so $r = k/h^2 = \frac{1}{10}$. Equation (2.4) then becomes

$$u_{i,j+1} = \frac{1}{10}(u_{i-1,j} + 8u_{i,j} + u_{i+1,j}). \qquad (2.6)$$

Using the given conditions, we have the following table:

x= 0 0.1 0.2 0.3 0.4 0.5 0.6

j=0 0 0.2 0.4 0.6 0.8 1.0 0.8

j=1 0

j=2 0

j=3 0

j=4 0

TABLE 2.1

[Fig. 2.2: computational molecule of (2.6) — u_{i,j+1} is the weighted sum (1/10)u_{i−1,j} + (8/10)u_{i,j} + (1/10)u_{i+1,j}.]
The analytical solution of the partial differential equation satisfying these conditions is

$$U = \frac{8}{\pi^2} \sum_{n=1}^{\infty} \frac{1}{n^2} \sin\frac{n\pi}{2}\, \sin(n\pi x)\, e^{-n^2\pi^2 t}.$$
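The series can be evaluated directly; the following sketch (the truncation level is our choice) reproduces the analytical values used in the comparison tables.

```python
import math

# Truncated series solution; 200 terms is ample for t >= 0.005,
# since the terms decay like exp(-n^2 * pi^2 * t).
def U(x, t, nmax=200):
    s = 0.0
    for n in range(1, nmax + 1):
        s += (math.sin(n * math.pi / 2) / n ** 2
              * math.sin(n * math.pi * x) * math.exp(-n ** 2 * math.pi ** 2 * t))
    return 8 / math.pi ** 2 * s

print(round(U(0.3, 0.005), 6), round(U(0.5, 0.01), 6))
```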

On computation by the code using equation (2.6), we get the following data:

i=0      i=1      i=2      i=3      i=4      i=5      i=6
x=0      x=0.1    x=0.2    x=0.3    x=0.4    x=0.5    x=0.6
t=0.000 0.000000 0.200000 0.400000 0.600000 0.800000 1.000000 0.800000
0.001 0.000000 0.200000 0.400000 0.600000 0.800000 0.960000 0.800000
0.002 0.000000 0.200000 0.400000 0.600000 0.796000 0.928000 0.796000
0.003 0.000000 0.200000 0.400000 0.599600 0.789600 0.901600 0.789600
0.004 0.000000 0.200000 0.399960 0.598640 0.781800 0.879200 0.781800
0.005 0.000000 0.199996 0.399832 0.597088 0.773224 0.859720 0.773224
0.006 0.000000 0.199980 0.399574 0.594976 0.764260 0.842421 0.764260
0.007 0.000000 0.199941 0.399155 0.592364 0.755148 0.826789 0.755148
0.008 0.000000 0.199869 0.398554 0.589322 0.746033 0.812460 0.746033
0.009 0.000000 0.199750 0.397763 0.585916 0.737005 0.799175 0.737005
0.010 0.000000 0.199577 0.396777 0.582210 0.728113 0.786741 0.728113
0.011 0.000000 0.199339 0.395600 0.578257 0.719386 0.775015 0.719386
0.012 0.000000 0.199031 0.394240 0.574104 0.710836 0.763889 0.710836
0.013 0.000000 0.198649 0.392705 0.569791 0.702468 0.753279 0.702468
0.014 0.000000 0.198190 0.391008 0.565350 0.694281 0.743117 0.694281
0.015 0.000000 0.197652 0.389160 0.560809 0.686272 0.733349 0.686272
0.016 0.000000 0.197038 0.387174 0.556190 0.678433 0.723934 0.678433
0.017 0.000000 0.196348 0.385062 0.551513 0.670759 0.714834 0.670759
0.018 0.000000 0.195585 0.382836 0.546792 0.663242 0.706019 0.663242
0.019 0.000000 0.194751 0.380506 0.542042 0.655875 0.697463 0.655875

So, comparing the finite difference solution with the analytic solution:

(i) For 𝑥 = 0.3,

t          Finite-difference solution   Analytical solution   Difference   Percentage error
0.005      0.597088                     0.596604              0.000484     0.081173
0.010      0.582210                     0.579898              0.002311     0.398591
0.020      0.537271                     0.533353              0.003918     0.734624
0.100      0.247230                     0.244405              0.002825     1.155965
(ii) For 𝑥 = 0.5 ,

t          Finite-difference solution   Analytical solution   Difference   Percentage error
0.005      0.859720                     0.840423              0.019297     2.296095
0.010      0.786741                     0.774324              0.012417     1.603574
0.020      0.689146                     0.680846              0.008299     1.218972
0.100      0.305618                     0.302118              0.003499     1.158321

So, we notice that the finite-difference solution at $x = 0.3$ is reasonably accurate. But at $x = 0.5$ the solution is not as good, the percentage error being noticeably larger. This happens because of the discontinuity in the initial value of $\partial U/\partial x$, which jumps from +2 to −2 at $x = 0.5$. This error dies away as t increases.

CASE 2:
Take $h = \frac{1}{10}$, $\delta t = k = \frac{5}{1000}$, so $r = k/h^2 = 0.5$. Equation (2.4) then becomes

$$u_{i,j+1} = \frac{1}{2}(u_{i-1,j} + u_{i+1,j}). \qquad (2.7)$$

On computation by the code using equation (2.7), we get the following data:

i=0      i=1      i=2      i=3      i=4      i=5      i=6
x=0      x=0.1    x=0.2    x=0.3    x=0.4    x=0.5    x=0.6
t=0.000 0.000000 0.200000 0.400000 0.600000 0.800000 1.000000 0.800000
0.005 0.000000 0.200000 0.400000 0.600000 0.800000 0.800000 0.800000
0.010 0.000000 0.200000 0.400000 0.600000 0.700000 0.800000 0.700000
0.015 0.000000 0.200000 0.400000 0.550000 0.700000 0.700000 0.700000
0.020 0.000000 0.200000 0.375000 0.550000 0.625000 0.700000 0.625000
0.025 0.000000 0.187500 0.375000 0.500000 0.625000 0.625000 0.625000
0.030 0.000000 0.187500 0.343750 0.500000 0.562500 0.625000 0.562500
0.035 0.000000 0.171875 0.343750 0.453125 0.562500 0.562500 0.562500
0.040 0.000000 0.171875 0.312500 0.453125 0.507813 0.562500 0.507813
0.045 0.000000 0.156250 0.312500 0.410156 0.507813 0.507813 0.507813
0.050 0.000000 0.156250 0.283203 0.410156 0.458984 0.507813 0.458984
0.055 0.000000 0.141602 0.283203 0.371094 0.458984 0.458984 0.458984
0.060 0.000000 0.141602 0.256348 0.371094 0.415039 0.458984 0.415039
0.065 0.000000 0.128174 0.256348 0.335693 0.415039 0.415039 0.415039
0.070 0.000000 0.128174 0.231934 0.335693 0.375366 0.415039 0.375366
0.075 0.000000 0.115967 0.231934 0.303650 0.375366 0.375366 0.375366
0.080 0.000000 0.115967 0.209808 0.303650 0.339508 0.375366 0.339508
0.085 0.000000 0.104904 0.209808 0.274658 0.339508 0.339508 0.339508
0.090 0.000000 0.104904 0.189781 0.274658 0.307083 0.339508 0.307083
0.095 0.000000 0.094891 0.189781 0.248432 0.307083 0.307083 0.307083

So, comparing the finite difference solution with the analytic solution:

(i) For 𝑥 = 0.3,

t          Finite-difference solution   Analytical solution   Difference   Percentage error
0.005      0.600000                     0.596604              0.003396     0.569269
0.010      0.600000                     0.579898              0.020102     3.466439
0.020      0.550000                     0.533353              0.016647     3.121132
0.100      0.248432                     0.244405              0.004027     1.647866

Notice that this finite-difference solution is not as good an approximation as that of the previous Case 1.

CASE 3:
Take $h = \frac{1}{10}$, $\delta t = k = \frac{1}{100}$, so $r = k/h^2 = 1$. Equation (2.4) then becomes

$$u_{i,j+1} = u_{i-1,j} - u_{i,j} + u_{i+1,j}. \qquad (2.9)$$

On computation by the code using equation (2.9), we get the following data:

i=0      i=1      i=2      i=3      i=4      i=5      i=6
x=0      x=0.1    x=0.2    x=0.3    x=0.4    x=0.5    x=0.6
t=0.00 0.000000 0.200000 0.400000 0.600000 0.800000 1.000000 0.800000
0.01 0.000000 0.200000 0.400000 0.600000 0.800000 0.600000 0.800000
0.02 0.000000 0.200000 0.400000 0.600000 0.400000 1.000000 0.400000
0.03 0.000000 0.200000 0.400000 0.200000 1.200000 -0.200000 1.200000
0.04 0.000000 0.200000 -0.000000 1.400000 -1.200000 2.600000 -1.200000
0.05 0.000000 -0.200000 1.600000 -2.600000 5.200000 -5.000000 5.200000
0.06 0.000000 1.800000 -4.400000 9.400000 -12.800000 15.400000 -12.800000
0.07 0.000000 -6.200000 15.600000 -26.600000 37.600000 -41.000000 37.600000
0.08 0.000000 21.800000 -48.400000 79.800000 -105.200000 116.200000 -105.200000
0.09 0.000000 -70.200000 150.000000 -233.400000 301.200000 -326.600000 301.200000
0.10 0.000000 220.200000 -453.600000 684.600000 -861.200000 929.000000 -861.200000
0.11 0.000000 -673.800000 1358.400000 -1999.400000 2474.800000 -2651.400000 2474.800000

Here the percentage error is so large that regarding the result as a solution of the partial differential equation is meaningless.

For $x = 0.3$,

t          Finite-difference solution   Analytical solution   Difference    Percentage error
0.005      0.600000                     0.600000              0.000000      0.000000
0.010      0.600000                     0.579898              0.020102      3.466439
0.020      0.600000                     0.533353              0.066647      12.495780
0.100      684.600000                   0.244405              684.355595    280009.181621

These three cases clearly indicate that the value of r is important; it will be proved later that this explicit method works only for $0 \le r \le \frac{1}{2}$. The graphs below compare the analytical solution of the partial differential equation with the finite-difference solution for values of r just below or above ½, using the same number of time steps.


[Graphs: analytical vs. finite-difference solution at t = 0.05, 0.1 and 0.2, for r = 0.1, r = 0.5 and r = 0.51.]
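The dependence on r can also be reproduced in a few lines (an illustrative experiment of ours, not the report's code): with r = 0.1 the computed values stay bounded and decay, while with r = 1 they grow without bound, as in Case 3.

```python
# Run the explicit scheme (2.4) on the triangular initial data and report
# the largest |u| after a given number of steps.
def run(r, steps, n=10):
    h = 1.0 / n
    u = [2 * i * h if i * h <= 0.5 else 2 * (1 - i * h) for i in range(n + 1)]
    for _ in range(steps):
        u = ([0.0]
             + [r * u[i - 1] + (1 - 2 * r) * u[i] + r * u[i + 1] for i in range(1, n)]
             + [0.0])
    return max(abs(v) for v in u)

print(run(0.1, 100))   # bounded and decaying
print(run(1.0, 100))   # explodes, as in Case 3
```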
Crank-Nicolson implicit method:

Thus, we noticed that the explicit method is computationally simple, but it has drawbacks too. The time step $\delta t = k$ must be very small, because the process is valid only for $0 < k/h^2 \le \frac{1}{2}$, i.e. $k \le \frac{1}{2}h^2$, and $h = \delta x$ must be kept small in order to attain reasonable accuracy. Crank and Nicolson proposed a method that reduces the total computation and is valid for all values of r. They considered the partial differential equation as being satisfied at the mid-point $\{ih, (j+\frac{1}{2})k\}$ and replaced $\partial^2 U/\partial x^2$ by the mean of its finite-difference approximations at the jth and (j+1)th time-levels. So,

$$\left(\frac{\partial U}{\partial t}\right)_{i,j+\frac{1}{2}} = \left(\frac{\partial^2 U}{\partial x^2}\right)_{i,j+\frac{1}{2}}$$

is replaced by

$$\frac{u_{i,j+1} - u_{i,j}}{k} = \frac{1}{2}\left\{\frac{u_{i-1,j+1} - 2u_{i,j+1} + u_{i+1,j+1}}{h^2} + \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h^2}\right\},$$

giving

$$-ru_{i-1,j+1} + (2+2r)u_{i,j+1} - ru_{i+1,j+1} = ru_{i-1,j} + (2-2r)u_{i,j} + ru_{i+1,j}, \qquad (2.10)$$

where $r = k/h^2$. Hence, the left side of equation (2.10) contains 3 unknown and the right side 3 known pivotal values of u.

[Fig.: computational molecule of the Crank-Nicolson scheme — three unknown values of u on time-row j+1 (at i−1, i, i+1) coupled to three known values on time-row j.]
If there are N internal mesh points along each time-row, i = 1, 2, 3, ..., N, then for j = 0 equation (2.10) gives N simultaneous equations for the N unknown pivotal values along the first time-row in terms of known initial and boundary values. Similarly, j = 1 expresses the N unknown values of u along the second time-row in terms of the calculated values along the first row, etc. A method such as this, in which the calculation of unknown pivotal values necessitates the solution of a set of simultaneous equations, is known as an implicit method. Let us consider an example and solve it using equation (2.10).

Example:

Use the Crank-Nicolson method to calculate a numerical solution of the previous example, namely,

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}, \quad 0 < x < 1, \; t > 0,$$

with the following conditions:

(i) (a) $U = 2x$, $0 \le x \le \frac{1}{2}$, $t = 0$,
    (b) $U = 2(1-x)$, $\frac{1}{2} \le x \le 1$, $t = 0$;
(ii) $U = 0$ at $x = 0$ and $1$ for all $t > 0$ (the boundary condition).

Solution:
Take $h = \frac{1}{10}$. Although the method is valid for all finite values of $r = k/h^2$, a large value gives an inaccurate approximation of $\partial U/\partial t$. A suitable value is $r = 1$, which has the advantage of making the coefficient of $u_{i,j}$ zero in equation (2.10). Then $k = \frac{1}{100}$ and equation (2.10) becomes

$$-u_{i-1,j+1} + 4u_{i,j+1} - u_{i+1,j+1} = u_{i-1,j} + u_{i+1,j}. \qquad (2.11)$$

The computational molecule for equation (2.11) is shown in Fig. 2.5.

[Fig. 2.5: computational molecule of (2.11) — coefficients −1, 4, −1 at (i−1, j+1), (i, j+1), (i+1, j+1) on the unknown side, and 1, 1 at (i−1, j), (i+1, j) on the known side.]
Denote $u_{i,j+1}$ by $u_i$ (i = 1, 2, 3, ..., 9). For this problem, because of symmetry, $u_6 = u_4$, $u_7 = u_3$, etc. (Fig. 2.6). So the system of equations for the first time step is

−0 + 4𝑢1 − 𝑢2 = 0 + 0.4 ,

−𝑢1 + 4𝑢2 − 𝑢3 = 0.2 + 0.6 ,

−𝑢2 + 4𝑢3 − 𝑢4 = 0.4 + 0.8 ,

−𝑢3 + 4𝑢4 − 𝑢5 = 0.6 + 1.0 ,

−2𝑢4 + 4𝑢5 = 0.8 + 0.8 .

Solving these equations using Thomas Algorithm, we get

𝑢1 = 0.1989 , 𝑢2 = 0.3956 , 𝑢3 = 0.5834 , 𝑢4 = 0.7381 , 𝑢5 = 0.7691 .
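A sketch of the Thomas algorithm for such tridiagonal systems (our own implementation; the report does not list one), applied to the five equations above:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main-, c = super-diagonal, d = RHS."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                       # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# First time step; the last equation has sub-diagonal -2 from the symmetry u6 = u4.
u = thomas(a=[0, -1, -1, -1, -2], b=[4, 4, 4, 4, 4], c=[-1, -1, -1, -1, 0],
           d=[0.4, 0.8, 1.2, 1.6, 1.6])
print([round(v, 4) for v in u])
```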

So, now the equations for the pivotal values of u along the next time-row are

−0 + 4𝑢1 − 𝑢2 = 0 + 0.3956 ,

−𝑢1 + 4𝑢2 − 𝑢3 = 0.1989 + 0.5834 ,

−𝑢2 + 4𝑢3 − 𝑢4 = 0.3956 + 0.7381 ,

−𝑢3 + 4𝑢4 − 𝑢5 = 0.5834 + 0.7691 ,

−2𝑢4 + 4𝑢5 = 2 × 0.7381 .

On computation by the code using equation (2.11), we get the following data:

i=0      i=1      i=2      i=3      i=4      i=5      i=6
x=0      x=0.1    x=0.2    x=0.3    x=0.4    x=0.5    x=0.6
t=0.00 0.000000 0.200000 0.400000 0.600000 0.800000 1.000000 0.800000
0.01 0.000000 0.198893 0.395570 0.583389 0.737985 0.768551 0.736219
0.02 0.000000 0.193596 0.378813 0.539375 0.645132 0.689214 0.637521
0.03 0.000000 0.182488 0.351140 0.489101 0.581317 0.607577 0.566339
0.04 0.000000 0.167973 0.320754 0.443453 0.520601 0.542274 0.500839
0.05 0.000000 0.152958 0.291080 0.399934 0.467302 0.483546 0.445444
0.06 0.000000 0.138483 0.262851 0.360030 0.418886 0.432035 0.396507
0.07 0.000000 0.124892 0.236717 0.323465 0.375404 0.386085 0.353544
0.08 0.000000 0.112367 0.212751 0.290281 0.336250 0.345171 0.315486
0.09 0.000000 0.100928 0.190959 0.260262 0.301088 0.308637 0.281725
0.10 0.000000 0.090549 0.171235 0.233202 0.269527 0.276008 0.251692
0.11 0.000000 0.081170 0.153445 0.208859 0.241229 0.246847 0.224939
0.12 0.000000 0.072720 0.137436 0.186995 0.215870 0.220779 0.201079
0.13 0.000000 0.065123 0.123054 0.167379 0.193157 0.197472 0.179783
0.14 0.000000 0.058301 0.110149 0.149795 0.172819 0.176631 0.160764
0.15 0.000000 0.052182 0.098580 0.134041 0.154615 0.157992 0.143770
0.16 0.000000 0.046698 0.088213 0.119932 0.138322 0.141322 0.128582
0.17 0.000000 0.041786 0.078930 0.107302 0.123742 0.126413 0.115004
0.18 0.000000 0.037387 0.070618 0.095997 0.110697 0.113077 0.102864
0.19 0.000000 0.033449 0.063178 0.085879 0.099025 0.101148 0.092007

Now, we compare the finite difference solution with the analytic solution at 𝑥 = 0.5,

t          Finite-difference solution   Analytical solution   Difference   Percentage error
0.005      1.000000                     0.999595              0.000405     0.040545
0.010      0.768551                     0.774324              0.005773     0.745576
0.020      0.689214                     0.680846              0.008368     1.229050
0.100      0.276008                     0.302118              0.026110     8.642380

Notice that the numerical solution using the Crank-Nicolson method is clearly better than that of the explicit method. As mentioned earlier, the greatest difference between the two solutions occurs at $x = 0.5$ because of the discontinuity in the initial value of $\partial U/\partial x$ at this point. The Crank-Nicolson method is stable for all positive values of r, in the sense that the solution and all errors eventually tend to zero as j tends to infinity; but it will be explained later that large values of r, such as 50, can cause unwanted finite oscillations in the numerical solution.


The local truncation error and consistency

The local truncation error

Let $F_{i,j}(u) = 0$ represent the difference-equation approximation to the partial differential equation at the (i, j)th mesh point, with exact solution u. If u is replaced by U at the mesh points of the difference equation, where U is the exact solution of the partial differential equation, the value $T_{i,j} = F_{i,j}(U)$ is called the local truncation error at the (i, j)th mesh point. We can analyze these truncation errors using Taylor expansions to learn about the local accuracy. Consider the classical explicit difference approximation to $\partial U/\partial t - \partial^2 U/\partial x^2 = 0$ at the point (ih, jk):

$$F_{i,j}(u) = \frac{u_{i,j+1} - u_{i,j}}{k} - \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h^2} = 0.$$

Therefore,

$$T_{i,j} = F_{i,j}(U) = \frac{U_{i,j+1} - U_{i,j}}{k} - \frac{U_{i-1,j} - 2U_{i,j} + U_{i+1,j}}{h^2}.$$
By Taylor's expansion,

$$U_{i+1,j} = U\{(i+1)h, jk\} = U(x_i + h, t_j) = U_{i,j} + h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{1}{2}h^2\left(\frac{\partial^2 U}{\partial x^2}\right)_{i,j} + \frac{1}{6}h^3\left(\frac{\partial^3 U}{\partial x^3}\right)_{i,j} + \cdots,$$

$$U_{i-1,j} = U\{(i-1)h, jk\} = U(x_i - h, t_j) = U_{i,j} - h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{1}{2}h^2\left(\frac{\partial^2 U}{\partial x^2}\right)_{i,j} - \frac{1}{6}h^3\left(\frac{\partial^3 U}{\partial x^3}\right)_{i,j} + \cdots,$$

$$U_{i,j+1} = U(x_i, t_j + k) = U_{i,j} + k\left(\frac{\partial U}{\partial t}\right)_{i,j} + \frac{1}{2}k^2\left(\frac{\partial^2 U}{\partial t^2}\right)_{i,j} + \frac{1}{6}k^3\left(\frac{\partial^3 U}{\partial t^3}\right)_{i,j} + \cdots.$$

Substitution into the expression for $T_{i,j}$ gives

$$T_{i,j} = \left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{i,j} + \frac{1}{2}k\left(\frac{\partial^2 U}{\partial t^2}\right)_{i,j} - \frac{1}{12}h^2\left(\frac{\partial^4 U}{\partial x^4}\right)_{i,j} + \frac{1}{6}k^2\left(\frac{\partial^3 U}{\partial t^3}\right)_{i,j} - \frac{1}{360}h^4\left(\frac{\partial^6 U}{\partial x^6}\right)_{i,j} + \cdots.$$

But U is the solution of the differential equation, so

$$\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{i,j} = 0.$$

Therefore, the principal part of the local truncation error is

$$\left(\frac{1}{2}k\frac{\partial^2 U}{\partial t^2} - \frac{1}{12}h^2\frac{\partial^4 U}{\partial x^4}\right)_{i,j}.$$

Hence, $T_{i,j} = O(k) + O(h^2)$.


When $k = rh^2$, $0 \le r \le \frac{1}{2}$, $T_{i,j}$ is $O(k)$ or, equivalently, $O(h^2)$. This error may be reduced further by a special choice of $k/h^2$, because $T_{i,j}$ can be written as

$$T_{i,j} = \frac{1}{12}h^2\left(6\frac{k}{h^2}\frac{\partial^2 U}{\partial t^2} - \frac{\partial^4 U}{\partial x^4}\right)_{i,j} + O(k^2) + O(h^4).$$

By the differential equation,

$$\frac{\partial}{\partial t} = \frac{\partial^2}{\partial x^2},$$

so

$$\frac{\partial^2 U}{\partial t^2} = \frac{\partial}{\partial t}\left(\frac{\partial^2 U}{\partial x^2}\right) = \frac{\partial^2}{\partial x^2}\left(\frac{\partial^2 U}{\partial x^2}\right) = \frac{\partial^4 U}{\partial x^4},$$

assuming that these derivatives exist. So, if we put $6k/h^2 = 1$, the bracket vanishes and $T_{i,j}$ is $O(k^2) + O(h^4)$.
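These orders can be verified numerically. The sketch below is our own check, using the exact solution U = sin(πx)e^(−π²t) of U_t = U_xx: with r = 1/2 the truncation error falls by a factor of about 4 when h is halved (O(h²)), and with r = 1/6 by a factor of about 16 (O(h⁴)).

```python
import math

def U(x, t):
    """Exact solution of U_t = U_xx used for the check."""
    return math.sin(math.pi * x) * math.exp(-math.pi ** 2 * t)

def T(h, r, x=0.3, t=0.05):
    """Local truncation error of the explicit scheme at (x, t), with k = r*h^2."""
    k = r * h * h
    return ((U(x, t + k) - U(x, t)) / k
            - (U(x - h, t) - 2 * U(x, t) + U(x + h, t)) / h ** 2)

print(T(0.02, 0.5) / T(0.01, 0.5))      # about 4:  T = O(h^2)
print(T(0.02, 1 / 6) / T(0.01, 1 / 6))  # about 16: T = O(h^4)
```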

Consistency or compatibility

When approximating the solution of a parabolic equation by a finite-difference method, it is possible for the method to be stable and yet converge to the solution of a different equation than the one desired. Such a difference method is said to be inconsistent with the partial differential equation. Consistency is usually dealt with using the theorem given by Lax, which states that if a linear finite-difference equation is consistent with a properly posed linear initial-value problem, then stability guarantees the convergence of u to U as the mesh lengths tend to zero. Consistency is defined as follows:

Let L(U) = 0 represent the partial differential equation in the independent variables x and t, with exact solution U. Let F(u) = 0 represent the approximating finite-difference equation, with exact solution u. Let v be a continuous function of x and t with a sufficient number of derivatives to enable L(v) to be evaluated at the point (ih, jk). Then the truncation error $T_{i,j}(v)$ at the point (ih, jk) is defined by

$$T_{i,j}(v) = F_{i,j}(v) - L(v_{i,j}).$$

If we put v=U then L(U)=0, from which it follows that

𝑇𝑖,𝑗 (𝑈) = 𝐹𝑖,𝑗 (𝑈) ,

and the truncation error coincides with the local truncation error. The difference equation is then

consistent if the limiting value of the local truncation error is zero as ℎ → 0, 𝑘 → 0.

Convergence and stability

To get a reasonably accurate approximation to the solution of parabolic equation, there are two major

concerns. The first one is the convergence of the exact solution of the approximating finite-difference

equation to the solution of the differential equation and the second is the stability problem i.e.,

unbounded growth or boundedness of the exact solution of the finite-difference equations.

Convergence

Let U be the exact solution of the partial differential equation with x, t as independent variables, and u

be the exact solution of the difference equation used to approximate the solution of the partial

differential equation. Then the finite-difference equation is said to be convergent when u tends to U at a

fixed point as δx and δt tend to zero.

The conditions under which u converges to U for a general non-linear second-order partial differential equation are not known, except in a few particular cases; such situations are usually dealt with via Lax's equivalence theorem. The explicit scheme, however, can be analyzed directly by deriving a difference equation for the discretization error e = U − u, where U denotes the exact solution of the partial differential equation and u the exact solution of the finite-difference equation.
Consider the equation

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}, \quad 0 < x < 1, \; t > 0, \qquad (2.26)$$

where U is known for $0 \le x \le 1$ when t = 0, and at x = 0 and 1 when t > 0. Consider the explicit finite-difference approximation to equation (2.26),

$$\frac{u_{i,j+1} - u_{i,j}}{k} = \frac{u_{i-1,j} - 2u_{i,j} + u_{i+1,j}}{h^2}. \qquad (2.27)$$
At the mesh points, $u_{i,j} = U_{i,j} - e_{i,j}$, $u_{i,j+1} = U_{i,j+1} - e_{i,j+1}$, etc. Substitution into (2.27) leads to

$$e_{i,j+1} = re_{i-1,j} + (1-2r)e_{i,j} + re_{i+1,j} + U_{i,j+1} - U_{i,j} + r(2U_{i,j} - U_{i-1,j} - U_{i+1,j}). \qquad (2.28)$$

By Taylor's theorem,

$$U_{i+1,j} = U(x_i + h, t_j) = U_{i,j} + h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{h^2}{2!}\frac{\partial^2 U}{\partial x^2}(x_i + \theta_1 h, t_j),$$

$$U_{i-1,j} = U(x_i - h, t_j) = U_{i,j} - h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{h^2}{2!}\frac{\partial^2 U}{\partial x^2}(x_i - \theta_2 h, t_j),$$

$$U_{i,j+1} = U(x_i, t_j + k) = U_{i,j} + k\frac{\partial U}{\partial t}(x_i, t_j + \theta_3 k),$$

where $0 < \theta_1 < 1$, $0 < \theta_2 < 1$ and $0 < \theta_3 < 1$. Substitution into equation (2.28) gives

$$e_{i,j+1} = re_{i-1,j} + (1-2r)e_{i,j} + re_{i+1,j} + k\left\{\frac{\partial U}{\partial t}(x_i, t_j + \theta_3 k) - \frac{\partial^2 U}{\partial x^2}(x_i + \theta_4 h, t_j)\right\}, \qquad (2.29)$$

where $-1 < \theta_4 < 1$.

Let $E_j$ denote the maximum value of $|e_{i,j}|$ along the jth time-row, and M the maximum modulus of the expression in the braces for all i and j. When $r \le \frac{1}{2}$, all the coefficients of e in equation (2.29) are positive or zero, so

$$|e_{i,j+1}| \le r|e_{i-1,j}| + (1-2r)|e_{i,j}| + r|e_{i+1,j}| + kM \le rE_j + (1-2r)E_j + rE_j + kM = E_j + kM.$$

As this is true for all values of i, it is true for $\max_i |e_{i,j+1}|$. Hence

$$E_{j+1} \le E_j + kM \le (E_{j-1} + kM) + kM = E_{j-1} + 2kM,$$

and so on, from which it follows that

$$E_j \le E_0 + jkM = tM,$$

because the initial values of u and U are the same, i.e. $E_0 = 0$, and $jk = t$. As h tends to zero, $k = rh^2$ also tends to zero, and M tends to

$$\left|\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right|_{i,j}.$$

Since U is a solution of equation (2.26), the limiting value of M, and therefore of $E_j$, is zero. As $|U_{i,j} - u_{i,j}| \le E_j$, this proves that u converges to U as h tends to zero, when $r \le \frac{1}{2}$ and t is finite.

Stability

The main idea behind stability is that the numerical method being applied should limit the amplification of all components of the initial conditions.

For linear initial-value boundary-value problems, stability is related to convergence via Lax's equivalence theorem, by defining stability in terms of the boundedness of the solution of the finite-difference equation. Assume that the vector of solution values $u_{j+1} = [u_{1,j+1}, u_{2,j+1}, \dots, u_{N-1,j+1}]^T$ of the finite-difference equations at the (j+1)th time-level is related to the vector of solution values at the jth time-level by

$$u_{j+1} = Au_j + b_j,$$

where $b_j$ is a column vector of known boundary values and zeros, and A is an $(N-1) \times (N-1)$ matrix of known elements. It will be shown that, as a consequence of Lax's definition of stability, the condition

$$\|A\| \le 1,$$

when the solution of the partial differential equation does not increase as t increases, ensures the boundedness of the rounding errors. In actual computations the matrix A remains constant from step to step. The matrix method of analysis then shows that the equations are stable if the largest of the moduli of the eigenvalues of A, i.e. the spectral radius, satisfies

$$\rho(A) \le 1,$$

when the solution of the partial differential equation does not increase as t increases. Although this condition ensures the boundedness of the computed solution, it does not guarantee convergence unless $\rho(A) \le \|A\| \le 1$ as $N \to \infty$.

Vector and matrix norms

Vector Norms

The norm of a vector x is a positive real number giving a measure of the size of the vector, denoted by ‖x‖. It must satisfy the following axioms:

a. ‖x‖ > 0 if x ≠ 0, and ‖x‖ = 0 if x = 0.

b. ‖cx‖ = |c|‖x‖ for a real or complex scalar c.

c. ‖x + y‖ ≤ ‖x‖ + ‖y‖.

If the n × 1 vector x has components $x_1, x_2, \dots, x_n$, then:

• the 1-norm of x is defined as $\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n| = \sum_{i=1}^{n} |x_i|$;

• the 2-norm of x gives the vector's length and is defined as $\|x\|_2 = (|x_1|^2 + |x_2|^2 + \cdots + |x_n|^2)^{1/2} = \left(\sum_{i=1}^{n} |x_i|^2\right)^{1/2}$;

• the infinity-norm of x is defined as $\|x\|_\infty = \max_i |x_i|$.

Matrix Norms

The norm of a matrix A is a positive real number giving a measure of the size of the matrix. The following axioms must be satisfied:

a. ‖A‖ > 0 if A ≠ 0, and ‖A‖ = 0 if A = 0.

b. ‖cA‖ = |c|‖A‖ for a real or complex scalar c.

c. ‖A + B‖ ≤ ‖A‖ + ‖B‖.

d. ‖AB‖ ≤ ‖A‖‖B‖.

Vector and matrix norms are said to be compatible or consistent if

$$\|Ax\| \le \|A\|\|x\|, \quad x \ne 0.$$

The definitions of the 1, 2, and ∞ norms lead to the following results, which are proved in most linear algebra books:

• the 1-norm of a matrix A is the maximum column sum of the moduli of the elements of A;

• the infinity-norm of A is the maximum row sum of the moduli of the elements of A;

• the 2-norm of A is the square root of the spectral radius of $A^H A$, where $A^H = (\bar{A})^T$.

So, when A is real and symmetric, $A^H = A$, and

$$\|A\|_2 = [\rho(A^2)]^{1/2} = [\rho^2(A)]^{1/2} = \rho(A) = \max_i |\lambda_i|.$$
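These results are easy to confirm numerically; the matrix below is our own example, not one from the report.

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])   # real and symmetric

one_norm = np.abs(A).sum(axis=0).max()     # max column sum of moduli
inf_norm = np.abs(A).sum(axis=1).max()     # max row sum of moduli
two_norm = np.sqrt(np.linalg.eigvalsh(A.conj().T @ A).max())  # sqrt(rho(A^H A))
rho = np.abs(np.linalg.eigvalsh(A)).max()  # spectral radius

print(one_norm, inf_norm, two_norm, rho)   # for symmetric A, two_norm equals rho
```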
𝑖
A bound for the spectral radius

Let $\lambda_i$ be an eigenvalue of the n × n matrix A and $x_i$ the corresponding eigenvector. Then

$$Ax_i = \lambda_i x_i$$

and

$$\|Ax_i\| = \|\lambda_i x_i\| = |\lambda_i|\,\|x_i\|.$$

By the compatibility condition,

$$|\lambda_i|\,\|x_i\| = \|Ax_i\| \le \|A\|\,\|x_i\|.$$

Therefore $|\lambda_i| \le \|A\|$, $i = 1$ to n. Hence $\rho(A) \le \|A\|$.

A necessary and sufficient condition for stability (Constant coefficient)

Let the solution domain of the partial differential equation be the finite rectangle $0 \le x \le 1$, $0 \le t \le T$, and subdivide it into uniform rectangular meshes by the lines $x_i = ih$, $i = 0$ to N, with Nh = 1, and the lines $t_j = jk$, $j = 0$ to J, with Jk = T. It will be assumed that h is related to k by a relationship such as $k = rh^2$, with r > 0 and finite, so that $h \to 0$ as $k \to 0$.

Let the finite-difference equation relating the (j+1)th time-row to the jth time-row be

$$b_{i-1}u_{i-1,j+1} + b_i u_{i,j+1} + b_{i+1}u_{i+1,j+1} = c_{i-1}u_{i-1,j} + c_i u_{i,j} + c_{i+1}u_{i+1,j},$$

where the coefficients are constants.

If the boundary values at i = 0 and N, j > 0, are known, these (N−1) equations for i = 1 to N−1 can be written in matrix form as

$$\begin{bmatrix} b_1 & b_2 & & & \\ b_1 & b_2 & b_3 & & \\ & \ddots & \ddots & \ddots & \\ & & b_{N-3} & b_{N-2} & b_{N-1} \\ & & & b_{N-2} & b_{N-1} \end{bmatrix} \begin{bmatrix} u_{1,j+1} \\ u_{2,j+1} \\ \vdots \\ u_{N-2,j+1} \\ u_{N-1,j+1} \end{bmatrix} = \begin{bmatrix} c_1 & c_2 & & & \\ c_1 & c_2 & c_3 & & \\ & \ddots & \ddots & \ddots & \\ & & c_{N-3} & c_{N-2} & c_{N-1} \\ & & & c_{N-2} & c_{N-1} \end{bmatrix} \begin{bmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{N-2,j} \\ u_{N-1,j} \end{bmatrix} + \begin{bmatrix} c_0 u_{0,j} - b_0 u_{0,j+1} \\ 0 \\ \vdots \\ 0 \\ c_N u_{N,j} - b_N u_{N,j+1} \end{bmatrix},$$

i.e. as 𝐵𝑢𝑗+1 = 𝐶𝑢𝑗 + 𝑑𝑗 , where the matrices B and C of order (N-1) are as shown, 𝑢𝑗+1 denotes the

column vector with components 𝑢1,𝑗+1 , 𝑢2,𝑗+1 , … , 𝑢𝑁−1,𝑗+1 and 𝑑𝑗 denotes the column vector of known

boundary values and zeros.

Hence, 𝑢𝑗+1 = 𝐵 −1 𝐶𝑢𝑗 + 𝐵 −1 𝑑𝑗 ,

which may be expressed more conveniently as

𝑢𝑗+1 = 𝐴𝑢𝑗 + 𝑓𝑗 ,

where 𝐴 = 𝐵 −1 𝐶 𝑎𝑛𝑑 𝑓𝑗 = 𝐵 −1 𝑑𝑗 . Applied recursively, this leads to

𝑢𝑗 = 𝐴𝑢𝑗−1 + 𝑓𝑗−1 = 𝐴(𝐴𝑢𝑗−2 + 𝑓𝑗−2 ) + 𝑓𝑗−1

= 𝐴2 𝑢𝑗−2 + 𝐴𝑓𝑗−2 + 𝑓𝑗−1

= 𝐴𝑗 𝑢0 + 𝐴𝑗−1 𝑓0 + 𝐴𝑗−2 𝑓1 + ⋯ + 𝑓𝑗−1 , (2.30)

where 𝑢0 is the vector of initial values and 𝑓0 , 𝑓1 , … , 𝑓𝑗−1 are vectors of known boundary-values. Now,

the stability of the equations can be investigated by propagation of a perturbation.

Perturb the vector of initial values 𝑢0 𝑡𝑜 𝑢0∗ . The exact solution at the jth time-row will then be

𝑢𝑗∗ = 𝐴𝑗 𝑢0∗ + 𝐴𝑗−1 𝑓0 + 𝐴𝑗−2 𝑓1 + ⋯ + 𝑓𝑗−1 . (2.31)

If the perturbation vector ‘e’ is defined by


𝑒 = 𝑢∗ − 𝑢 ,

it follows by equations (2.30) and (2.31) that

𝑒𝑗 = 𝑢𝑗∗ − 𝑢𝑗 = 𝐴𝑗 (𝑢𝑜∗ − 𝑢0 ) = 𝐴𝑗 𝑒0 , 𝑗 = 1 𝑡𝑜 𝐽 (2.32)

In other words, a perturbation 𝑒0 of the initial values will be propagated according to the equation

𝑒𝑗 = 𝐴𝑒𝑗−1 = 𝐴2 𝑒𝑗−2 = ⋯ = 𝐴𝑗 𝑒0 , 𝑗 = 1 𝑡𝑜 𝐽

Hence, for compatible matrix and vector norms,

‖𝑒𝑗 ‖ ≤ ‖𝐴𝑗 ‖‖𝑒0 ‖ .

Lax defines the difference scheme to be stable when there exists a positive number M, independent of j,

h, and k, such that

‖𝐴𝑗 ‖ ≤ 𝑀, 𝑗 = 1 𝑡𝑜 𝐽.

This clearly limits the amplification of any initial perturbation, and therefore of any initial rounding

errors, because

‖𝑒𝑗 ‖ ≤ 𝑀‖𝑒0 ‖.

Since ‖𝐴𝑗 ‖ = ‖𝐴𝐴𝑗−1 ‖ ≤ ‖𝐴‖‖𝐴𝑗−1 ‖ ≤ ⋯ ≤ ‖𝐴‖𝑗 ,

it follows that Lax's definition of stability is satisfied by

‖𝐴‖ ≤ 1.

This is the necessary and sufficient condition for the difference equations to be stable when the solution

of the partial differential equation does not increase as t increases. When this condition is satisfied it

follows automatically that the spectral radius 𝜌(𝐴) ≤ 1 since 𝜌(𝐴) ≤ ‖𝐴‖. Now, let us analyze the

stability of classical explicit and Crank-Nicolson equations.

Stability of the classical explicit equations

$$u_{i,j+1} = ru_{i-1,j} + (1-2r)u_{i,j} + ru_{i+1,j}, \quad i = 1 \text{ to } N-1,$$

for which the $(N-1) \times (N-1)$ matrix A is

$$\begin{bmatrix} (1-2r) & r & & \\ r & (1-2r) & r & \\ & \ddots & \ddots & \ddots \\ & & r & (1-2r) \end{bmatrix},$$

where $r = k/h^2 > 0$, and it is assumed that the boundary values $u_{0,j}$ and $u_{N,j}$ are known for j = 1, 2, ....

When $1 - 2r \ge 0$, i.e. $r \le \frac{1}{2}$,

$$\|A\|_\infty = r + (1-2r) + r = 1.$$

When $1 - 2r < 0$, i.e. $r > \frac{1}{2}$, $|1-2r| = 2r - 1$ and

$$\|A\|_\infty = r + (2r-1) + r = 4r - 1 > 1.$$

Therefore the scheme is stable for $0 < r \le \frac{1}{2}$. It has been shown that these equations are also consistent; hence, by Lax's equivalence theorem, they are convergent for $0 < r \le \frac{1}{2}$.
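A quick numerical cross-check (our own, with N = 10): the spectral radius of A = tridiag(r, 1−2r, r) stays below 1 for r = 0.5 but exceeds 1 once r is well above ½ (here r = 0.6).

```python
import numpy as np

def rho(r, N=10):
    """Spectral radius of the (N-1)x(N-1) explicit-method matrix tridiag(r, 1-2r, r)."""
    A = (np.diag([1 - 2 * r] * (N - 1))
         + np.diag([r] * (N - 2), 1) + np.diag([r] * (N - 2), -1))
    return np.abs(np.linalg.eigvals(A)).max()

print(rho(0.5), rho(0.6))
```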

Stability of the Crank-Nicolson equations

$$-ru_{i-1,j+1} + (2+2r)u_{i,j+1} - ru_{i+1,j+1} = ru_{i-1,j} + (2-2r)u_{i,j} + ru_{i+1,j}, \quad i = 1 \text{ to } N-1.$$

In matrix form, for known boundary values, these give

$$\begin{bmatrix} (2+2r) & -r & & \\ -r & (2+2r) & -r & \\ & \ddots & \ddots & \ddots \\ & & -r & (2+2r) \end{bmatrix} \begin{bmatrix} u_{1,j+1} \\ u_{2,j+1} \\ \vdots \\ u_{N-1,j+1} \end{bmatrix} = \begin{bmatrix} (2-2r) & r & & \\ r & (2-2r) & r & \\ & \ddots & \ddots & \ddots \\ & & r & (2-2r) \end{bmatrix} \begin{bmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{N-1,j} \end{bmatrix} + b_j,$$

where $b_j$ is a vector of known boundary values and zeros. Writing $T_{N-1}$ for the symmetric tridiagonal matrix with $-2$ on the diagonal and $1$ on the sub- and super-diagonals, this can be written as

$$(2I_{N-1} - rT_{N-1})u_{j+1} = (2I_{N-1} + rT_{N-1})u_j + b_j,$$

from which it follows that the matrix A of equation (2.30) is

$$A = (2I_{N-1} - rT_{N-1})^{-1}(2I_{N-1} + rT_{N-1}).$$

It can easily be proved that if B and C are symmetric $n \times n$ matrices that commute, then $B^{-1}C$, $BC^{-1}$ and $B^{-1}C^{-1}$ are symmetric. The matrices $2I_{N-1} - rT_{N-1}$ and $2I_{N-1} + rT_{N-1}$ are symmetric and commute, so the matrix A is symmetric. Since the eigenvalues of $T_{N-1}$ (which are easily calculated) are

$$\lambda_s = -4\sin^2\frac{s\pi}{2N}, \quad s = 1 \text{ to } N-1,$$

it follows that the eigenvalues of A are

$$\left(2 + 4r\sin^2\frac{s\pi}{2N}\right)^{-1}\left(2 - 4r\sin^2\frac{s\pi}{2N}\right).$$

Therefore

$$\|A\|_2 = \rho(A) = \max_s \left|\frac{1 - 2r\sin^2(s\pi/2N)}{1 + 2r\sin^2(s\pi/2N)}\right| < 1 \quad \text{for all } r > 0,$$

proving that the Crank-Nicolson equations are unconditionally stable. They are also consistent, so they are also convergent.
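The eigenvalue formula and the unconditional bound ρ(A) < 1 can be checked directly (our own verification, with N = 10):

```python
import numpy as np

# T_{N-1}: tridiagonal with -2 on the diagonal, 1 on the off-diagonals.
N = 10
T = (np.diag([-2.0] * (N - 1)) + np.diag([1.0] * (N - 2), 1)
     + np.diag([1.0] * (N - 2), -1))
I = np.eye(N - 1)

for r in (0.5, 1.0, 10.0, 50.0):
    A = np.linalg.inv(2 * I - r * T) @ (2 * I + r * T)
    rho = np.abs(np.linalg.eigvals(A)).max()
    s = np.arange(1, N)
    formula = np.abs((1 - 2 * r * np.sin(s * np.pi / (2 * N)) ** 2)
                     / (1 + 2 * r * np.sin(s * np.pi / (2 * N)) ** 2)).max()
    print(r, rho, formula)  # rho matches the formula and is < 1 for every r
```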
