Finite Difference Method
Supervised by:
Prof. S. Ghorai
Submitted by:
Dushyant Kumar Sengar
Roll No. 181048
[Link]. Mathematics
Certificate
It is certified that the work embodied in this project entitled “NUMERICAL SOLUTION OF PARABOLIC
EQUATIONS” by Dushyant Kumar Sengar has been carried out under my supervision.
Prof. S. Ghorai
(Professor)
I am very much grateful to all the people who have supported me throughout the course of this project. I am thankful for their inspiring guidance and friendly advice during the project work. I express my sincere gratitude towards my guide, Prof. [Link], whose consistent efforts, diligent support and regular encouragement helped me to overcome the challenges that I faced throughout this project and to acquire detailed knowledge in the concerned field.
In this project, we will focus on the numerical solution of parabolic equations. Here, we consider the two independent variables to be time (t) and distance (x), with the dependent variable being the temperature (u). We approximate the solution of such an equation using two methods, namely the explicit method and the Crank-Nicolson implicit method. We also analyze their convergence, stability and consistency. Finally, we conclude with the applicability of these methods.
The mathematical formulation of most of the problems that we come across in science involves rates of change with respect to two or more independent variables, which leads to a partial differential equation of the general second-order form

$$a\frac{\partial^2\varphi}{\partial x^2} + b\frac{\partial^2\varphi}{\partial x\,\partial y} + c\frac{\partial^2\varphi}{\partial y^2} + d\frac{\partial\varphi}{\partial x} + e\frac{\partial\varphi}{\partial y} + f\varphi + g = 0,$$

where $x$ and $y$ are independent variables, $\varphi$ is the dependent variable, and $a, b, c, d, e, f$ and $g$ may be constants or functions of $x$ and $y$. This equation is said to be elliptic when $b^2 - 4ac < 0$, parabolic when $b^2 - 4ac = 0$, and hyperbolic when $b^2 - 4ac > 0$. Here, we obtain an approximate solution of parabolic equations using the finite difference method.
For a function $F$ whose derivatives are finite and continuous, Taylor's theorem gives

$$F(x+h) = F(x) + hF'(x) + \frac{h^2}{2!}F''(x) + \frac{h^3}{3!}F'''(x) + \cdots, \tag{1.1}$$

$$F(x-h) = F(x) - hF'(x) + \frac{h^2}{2!}F''(x) - \frac{h^3}{3!}F'''(x) + \cdots. \tag{1.2}$$
Adding equations (1.1) and (1.2), we get

$$F(x+h) + F(x-h) = 2F(x) + h^2 F''(x) + O(h^4), \tag{1.3}$$

where $O(h^4)$ denotes terms of fourth and higher powers of $h$. Considering $O(h^4)$ to be negligible in comparison with the terms of lower powers of $h$, equation (1.3) gives

$$F''(x) = \left(\frac{d^2F}{dx^2}\right)_{x} \approx \frac{1}{h^2}\{F(x+h) - 2F(x) + F(x-h)\}. \tag{1.4}$$

Similarly, subtracting (1.2) from (1.1) and neglecting terms of order $h^3$ gives

$$F'(x) = \left(\frac{dF}{dx}\right)_{x} \approx \frac{1}{2h}\{F(x+h) - F(x-h)\}. \tag{1.5}$$
Hence, from Fig. 1.1, we see that equation (1.5) approximates the slope of the tangent at P by the slope of the chord AB; this is called the central-difference approximation. We can also approximate the slope of the tangent at P by the slope of the chord PB, giving the forward-difference formula

$$U'(x) \approx \frac{1}{h}\{U(x+h) - U(x)\}, \tag{1.6}$$

or by the slope of the chord AP, giving the backward-difference formula

$$U'(x) \approx \frac{1}{h}\{U(x) - U(x-h)\}. \tag{1.7}$$
[Fig. 1.1: the curve u(x) with points A = (x−h, u(x−h)), P = (x, u(x)) and B = (x+h, u(x+h)); the tangent at P is approximated by the chords AB, PB and AP.]
The leading errors in forward and backward difference formula are both O(h).
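The four formulas above are easy to check numerically. The following is a minimal sketch (the function names are mine, not from the text), confirming that the central difference is markedly more accurate than the one-sided formulas:

```python
import math

def forward_diff(F, x, h):
    # forward-difference formula (1.6): slope of chord PB, error O(h)
    return (F(x + h) - F(x)) / h

def backward_diff(F, x, h):
    # backward-difference formula (1.7): slope of chord AP, error O(h)
    return (F(x) - F(x - h)) / h

def central_diff(F, x, h):
    # central-difference formula (1.5): slope of chord AB, error O(h^2)
    return (F(x + h) - F(x - h)) / (2 * h)

def second_diff(F, x, h):
    # second-derivative formula (1.4), error O(h^2)
    return (F(x + h) - 2 * F(x) + F(x - h)) / h**2

# Example: F = sin, exact F'(1) = cos(1), F''(1) = -sin(1)
x, h = 1.0, 1e-3
err_fwd = abs(forward_diff(math.sin, x, h) - math.cos(x))
err_cen = abs(central_diff(math.sin, x, h) - math.cos(x))
print(err_fwd, err_cen)  # the central error is far smaller
```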
Many physically different problems can be dealt with by a single mathematical equation when expressed in terms of non-dimensional variables. Such problems are not fundamentally different; they are merely variations of the same type of problem.
For instance, consider a heat-insulated thin uniform rod, so that temperature changes occur only through heat conduction along its length. The temperature satisfies the parabolic equation

$$\frac{\partial U}{\partial T} = k\frac{\partial^2 U}{\partial X^2}, \quad k \text{ constant}, \tag{2.1}$$

the solution of which gives the temperature $U$ at a distance $X$ from one end of the rod after a time $T$.
Let $L$ be the length of the rod and $U_0$ some particular temperature, such as the maximum or minimum initial temperature, and define the non-dimensional variables $x = X/L$ and $u = U/U_0$. Then

$$\frac{\partial U}{\partial X} = \frac{\partial U}{\partial x}\frac{dx}{dX} = \frac{1}{L}\frac{\partial U}{\partial x}$$

and

$$\frac{\partial^2 U}{\partial X^2} = \frac{\partial}{\partial x}\left(\frac{\partial U}{\partial X}\right)\frac{dx}{dX} = \frac{\partial}{\partial x}\left(\frac{1}{L}\frac{\partial U}{\partial x}\right)\frac{1}{L} = \frac{1}{L^2}\frac{\partial^2 U}{\partial x^2}.$$

Hence, equation (2.1) transforms to

$$\frac{\partial(uU_0)}{\partial T} = \frac{k}{L^2}\frac{\partial^2(uU_0)}{\partial x^2},$$

i.e.

$$\frac{1}{kL^{-2}}\frac{\partial u}{\partial T} = \frac{\partial^2 u}{\partial x^2}.$$

Writing $t = kT/L^2$ and applying the chain rule to the left side results in

$$\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} \tag{2.2}$$

as the non-dimensional form of (2.1).
Divide the region into a uniform mesh with $x_i = ih$, $(i = 0, 1, 2, \ldots)$ and $t_j = jk$, $(j = 0, 1, 2, \ldots)$. Approximating $\partial u/\partial t$ in (2.2) by a forward difference and $\partial^2 u/\partial x^2$ by a central difference gives the scheme

$$u_{i,j+1} = u_{i,j} + r(u_{i-1,j} - 2u_{i,j} + u_{i+1,j}), \quad r = k/h^2, \tag{2.4}$$

which gives the unknown 'temperature' at the $(j+1)$th mesh point in terms of the known 'temperatures' along the $j$th time-row (Fig. 2.1). A formula such as this, which expresses one unknown pivotal value directly in terms of known pivotal values, is called an explicit formula. Let us consider some examples and solve them using equation (2.4). The left side of equation (2.4) contains 1 unknown and the right side 3 known pivotal values of $u$ (Fig. 2.1).
[Fig. 2.1: the finite-difference mesh, spacing h in x and k in t, with u = 0 on the boundary; the unknown value of u at (i, j+1) is computed from the known values at (i−1, j), (i, j) and (i+1, j).]
Example 2.1
Consider that the ends of the rod are kept in contact with blocks of melting ice. Find a numerical solution of

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}, \quad 0 < x < 1, \; t > 0,$$

which has the initial temperature distribution (initial condition at $t = 0$), in non-dimensional form,

$$\text{(i)}\quad (a)\; U = 2x, \quad 0 \le x \le \tfrac{1}{2}, \qquad (b)\; U = 2(1-x), \quad \tfrac{1}{2} \le x \le 1, \tag{2.5}$$

$$\text{(ii)}\quad U = 0 \text{ at } x = 0 \text{ and } 1 \text{ for all } t > 0 \text{ (the boundary condition)}.$$
Solution:
Notice that the problem is symmetric about $x = \tfrac{1}{2}$, so we need the solution only for $0 \le x \le \tfrac{1}{2}$.
CASE 1:
Take $h = \tfrac{1}{10}$, $\delta t = k = \tfrac{1}{1000}$, so $r = k/h^2 = \tfrac{1}{10}$. Equation (2.4) then becomes

$$u_{i,j+1} = \tfrac{1}{10}(u_{i-1,j} + 8u_{i,j} + u_{i+1,j}). \tag{2.6}$$
[TABLE 2.1: boundary values u = 0 for time-rows j = 1 to 4; computational molecule for (2.6): u_{i,j+1} is formed from u_{i−1,j}, u_{i,j}, u_{i+1,j} with weights 1/10, 8/10, 1/10.]
Computing with equation (2.6), we obtain the following data. Comparing the finite-difference solution with the analytic solution, we notice that the solution at $x = 0.3$ is reasonably accurate, while at $x = 0.5$ it is less good, though the percentage error is still tolerable. This happens because of the discontinuity in the initial value of $\partial U/\partial x$, which jumps from $+2$ to $-2$ at $x = 0.5$; this error, however, diminishes as $t$ increases.
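The marching computation for this case can be sketched in a few lines; this is my own minimal script (not the author's code), with the inner loop implementing formula (2.4) for r = 0.1:

```python
h, k = 0.1, 0.001                 # mesh sizes of Case 1, so r = k/h**2 = 0.1
N = int(round(1 / h))
r = k / h**2

# initial condition (2.5): U = 2x for x <= 1/2, U = 2(1 - x) for x >= 1/2
u = [2*i*h if i*h <= 0.5 else 2*(1 - i*h) for i in range(N + 1)]

for _ in range(int(round(0.02 / k))):   # march to t = 0.02
    new = u[:]                          # boundaries u[0] = u[N] = 0 stay fixed
    for i in range(1, N):
        new[i] = u[i] + r*(u[i-1] - 2*u[i] + u[i+1])   # formula (2.6)
    u = new

print(round(u[5], 4))   # value at x = 0.5, t = 0.02; analytic value is about 0.681
```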
CASE 2:
Take $h = \tfrac{1}{10}$, $\delta t = k = \tfrac{5}{1000}$, so $r = k/h^2 = 0.5$. Equation (2.4) then becomes

$$u_{i,j+1} = \tfrac{1}{2}(u_{i-1,j} + u_{i+1,j}). \tag{2.7}$$
Computing with equation (2.7), we obtain the following data:
So, comparing the finite difference solution with the analytic solution:
Notice that this finite-difference solution is not as good an approximation as that of Case 1.
CASE 3:
Take $h = \tfrac{1}{10}$, $\delta t = k = \tfrac{1}{100}$, so $r = k/h^2 = 1$. Equation (2.4) then becomes

$$u_{i,j+1} = u_{i-1,j} - u_{i,j} + u_{i+1,j}. \tag{2.9}$$

Computing with equation (2.9), we obtain the following data. Here the percentage error, particularly at $x = 0.5$, is so large that considering the result a solution of the partial differential equation is meaningless.
These three cases clearly indicate that the value of r is important, and it will be proved later that this explicit method works only for $0 < r \le \tfrac{1}{2}$. The graphs below compare the analytical solution of the partial differential equation with the finite-difference solution for values of r just below and just above ½, at t = 0.05, 0.1 and 0.2.
[Graphs: analytical vs. finite-difference solutions at t = 0.05, 0.1 and 0.2, for r = 0.5 and r = 0.51.]
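The breakdown for r > 1/2 can also be demonstrated directly. The sketch below (my own code, not from the text) marches the explicit scheme with r = 0.5 and r = 0.51; a mesh of h = 0.05 is used so that the instability shows up clearly within t ≤ 1:

```python
def explicit_max(r, h=0.05, t_end=1.0):
    """March the explicit scheme (2.4) to t_end and return max |u|."""
    N = int(round(1 / h))
    k = r * h**2
    # triangular initial condition (2.5), u = 0 at both ends
    u = [2*i*h if i*h <= 0.5 else 2*(1 - i*h) for i in range(N + 1)]
    for _ in range(int(round(t_end / k))):
        new = u[:]
        for i in range(1, N):
            new[i] = u[i] + r*(u[i-1] - 2*u[i] + u[i+1])
        u = new
    return max(abs(v) for v in u)

print(explicit_max(0.50))   # decays towards zero, as the true solution does
print(explicit_max(0.51))   # grows enormously: unstable
```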
Crank-Nicolson implicit method:
Thus, we noticed that the explicit method is computationally simple, but it has some drawbacks too. The time step $\delta t = k$ must be very small, because the process is valid only for $0 < k/h^2 \le \tfrac{1}{2}$, i.e. $k \le \tfrac{1}{2}h^2$, and $h = \delta x$ must be kept small in order to attain reasonable accuracy. Crank and Nicolson proposed a method that reduces the total computation and is valid for all finite values of r. They considered the partial differential equation as being satisfied at the mid-point $\{ih, (j+\tfrac{1}{2})k\}$ and replaced $\partial^2 U/\partial x^2$ by the mean of its finite-difference representations on the jth and (j+1)th time-rows. That is,
$$\left(\frac{\partial U}{\partial t}\right)_{i,j+\frac{1}{2}} = \left(\frac{\partial^2 U}{\partial x^2}\right)_{i,j+\frac{1}{2}}$$

is replaced by

$$\frac{u_{i,j+1}-u_{i,j}}{k} = \frac{1}{2}\left\{\frac{u_{i+1,j+1}-2u_{i,j+1}+u_{i-1,j+1}}{h^2} + \frac{u_{i+1,j}-2u_{i,j}+u_{i-1,j}}{h^2}\right\},$$

which may be written

$$-r\,u_{i-1,j+1} + (2+2r)u_{i,j+1} - r\,u_{i+1,j+1} = r\,u_{i-1,j} + (2-2r)u_{i,j} + r\,u_{i+1,j}, \tag{2.10}$$

where $r = k/h^2$. Hence, the left side of equation (2.10) contains 3 unknown and the right side 3 known pivotal values of u (Fig. 2.2).
[Fig. 2.2: the Crank-Nicolson computational molecule; three unknown values of u on time-row j+1 and three known values on time-row j, at mesh points i−1, i, i+1.]
If there are N internal mesh points along each time-row, i = 1, 2, 3, …, N, then equation (2.10) for j = 0 gives N equations for the N unknown pivotal values along the first time-row in terms of known initial and boundary values. Similarly, j = 1 expresses N unknown values of u along the second time-row in terms of the calculated values along the first row, etc. A method such as this, where the calculation of unknown pivotal values necessitates the solution of a set of simultaneous equations, is known as an implicit method. Let us consider an example.
Example:
Use the Crank-Nicolson method to calculate a numerical solution of the example solved earlier, namely

$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}, \quad 0 < x < 1, \; t > 0,$$

with the following conditions:

$$\text{(i)}\quad (a)\; U = 2x, \quad 0 \le x \le \tfrac{1}{2}, \; t = 0, \qquad (b)\; U = 2(1-x), \quad \tfrac{1}{2} \le x \le 1, \; t = 0;$$

$$\text{(ii)}\quad U = 0 \text{ at } x = 0 \text{ and } 1 \text{ for all } t > 0 \text{ (the boundary condition)}.$$
Solution:
Take $h = \tfrac{1}{10}$. Although the method is valid for all finite values of $r = k/h^2$, a large value will result in an inaccurate approximation of $\partial U/\partial t$. A suitable value is r = 1, which has the advantage of making the coefficient of $u_{i,j}$ zero in equation (2.10). Then $k = \tfrac{1}{100}$ and equation (2.10) becomes

$$-u_{i-1,j+1} + 4u_{i,j+1} - u_{i+1,j+1} = u_{i-1,j} + u_{i+1,j}. \tag{2.11}$$

[Fig. 2.5: the computational molecule for (2.11); weights −1, 4, −1 on time-row j+1 and 1, 1 at the points (i−1, j) and (i+1, j).]
Denote $u_{i,j+1}$ by $u_i$ (i = 1, 2, 3, …, 9). For this problem, because of symmetry, $u_6 = u_4$, $u_7 = u_3$, etc. (Fig. 2.6). So the first of the system of equations for the first time-step is

$$-0 + 4u_1 - u_2 = 0 + 0.4,$$

and similarly for $u_2, \ldots, u_5$. After solving these, the equations for the pivotal values of u along the next time-row are obtained in the same way, the first being

$$-0 + 4u_1 - u_2 = 0 + 0.3956.$$
Computing with equation (2.11), we obtain the following data. Comparing the finite-difference solution with the analytic solution at $x = 0.5$, we notice that the numerical solution using the Crank-Nicolson method is clearly better than that of the explicit method.
As mentioned earlier, the greatest difference between the two solutions occurs at $x = 0.5$ because of the discontinuity in the initial value of $\partial U/\partial x$ at this point. The Crank-Nicolson method is stable for all positive values of r, in the sense that the solution and all errors eventually tend to zero as j tends to infinity; but it will be explained later that large values of r, such as 50, can cause unwanted finite oscillations in the computed solution.
Let $F_{i,j}(u) = 0$ represent the difference-equation approximation of the partial differential equation at the (i, j)th mesh point, with exact solution u. If u is replaced by U at the mesh points of the difference equation, where U is the exact solution of the partial differential equation, the value $F_{i,j}(U)$ is called the local truncation error $T_{i,j}$ at the (i, j)th mesh point. We can analyze these truncation errors using Taylor expansions to learn about the local accuracy. Let us consider the classical explicit difference approximation to $\partial U/\partial t - \partial^2 U/\partial x^2 = 0$ at the point (ih, jk), for which

$$T_{i,j} = F_{i,j}(U) = \frac{U_{i,j+1}-U_{i,j}}{k} - \frac{U_{i+1,j}-2U_{i,j}+U_{i-1,j}}{h^2}.$$

By Taylor's theorem,
$$U_{i+1,j} = U(x_i+h,\, t_j) = U_{i,j} + h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2}\right)_{i,j} + \frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3}\right)_{i,j} + \cdots,$$

$$U_{i-1,j} = U(x_i-h,\, t_j) = U_{i,j} - h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{h^2}{2}\left(\frac{\partial^2 U}{\partial x^2}\right)_{i,j} - \frac{h^3}{6}\left(\frac{\partial^3 U}{\partial x^3}\right)_{i,j} + \cdots,$$

$$U_{i,j+1} = U(x_i,\, t_j+k) = U_{i,j} + k\left(\frac{\partial U}{\partial t}\right)_{i,j} + \frac{k^2}{2}\left(\frac{\partial^2 U}{\partial t^2}\right)_{i,j} + \frac{k^3}{6}\left(\frac{\partial^3 U}{\partial t^3}\right)_{i,j} + \cdots.$$

Substitution then gives

$$T_{i,j} = \left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{i,j} + \frac{1}{2}k\left(\frac{\partial^2 U}{\partial t^2}\right)_{i,j} - \frac{1}{12}h^2\left(\frac{\partial^4 U}{\partial x^4}\right)_{i,j} + \frac{1}{6}k^2\left(\frac{\partial^3 U}{\partial t^3}\right)_{i,j} - \frac{1}{360}h^4\left(\frac{\partial^6 U}{\partial x^6}\right)_{i,j} + \cdots.$$
But U satisfies the differential equation at every mesh point, so

$$\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{i,j} = 0.$$
Therefore, the principal part of the local truncation error is

$$\left(\frac{1}{2}k\frac{\partial^2 U}{\partial t^2} - \frac{1}{12}h^2\frac{\partial^4 U}{\partial x^4}\right)_{i,j},$$

which may be written

$$T_{i,j} = \frac{1}{12}h^2\left(6\frac{k}{h^2}\frac{\partial^2 U}{\partial t^2} - \frac{\partial^4 U}{\partial x^4}\right)_{i,j} + O(k^2) + O(h^4).$$
Differentiating the equation $\partial U/\partial t = \partial^2 U/\partial x^2$ with respect to t gives

$$\frac{\partial^2 U}{\partial t^2} = \frac{\partial}{\partial t}\left(\frac{\partial^2 U}{\partial x^2}\right) = \frac{\partial^2}{\partial x^2}\left(\frac{\partial U}{\partial t}\right) = \frac{\partial^4 U}{\partial x^4},$$

assuming that these derivatives exist. So, if we put $6k/h^2 = 1$, then $T_{i,j}$ is $O(k^2) + O(h^4)$.
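This prediction can be verified numerically. In the sketch below (my own check, not from the text), the truncation error of the explicit formula is evaluated on the exact solution U = e^(−π²t) sin(πx) with 6k/h² = 1; halving h should reduce T by a factor of about 2⁴ = 16:

```python
import math

def truncation(h, x=0.3, t=0.1):
    """Truncation error of the explicit formula on an exact solution,
    with k chosen so that 6k/h**2 = 1."""
    k = h**2 / 6
    U = lambda x, t: math.exp(-math.pi**2 * t) * math.sin(math.pi * x)
    time_part = (U(x, t + k) - U(x, t)) / k
    space_part = (U(x + h, t) - 2*U(x, t) + U(x - h, t)) / h**2
    return time_part - space_part

ratio = truncation(0.1) / truncation(0.05)
print(round(ratio, 2))   # close to 16, confirming fourth-order behaviour
```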
Consistency or compatibility
It is sometimes possible that the method is stable but converges to the solution of a different equation than the one desired. Such a difference method is said to be inconsistent with the partial differential equation. This concept of consistency is usually dealt with using the theorem of Lax, which states that if a linear finite-difference equation is consistent with a properly posed linear initial-value problem, then stability guarantees convergence. Let L(U) = 0 represent the partial differential equation in the independent variables x and t, with exact solution U. Let F(u) = 0 represent the approximating finite-difference equation, with exact solution u. Let v be a continuous function of x and t with a sufficient number of derivatives to enable L(v) to be evaluated at the point (ih, jk). Then the truncation error $T_{i,j}(v)$ at the point (ih, jk) is defined by

$$T_{i,j}(v) = F_{i,j}(v) - L(v_{i,j}).$$

When v = U, $L(U_{i,j}) = 0$ and the truncation error coincides with the local truncation error. The difference equation is then said to be consistent with the partial differential equation if $T_{i,j}(v) \to 0$ as $h \to 0$ and $k \to 0$.
To get a reasonably accurate approximation to the solution of a parabolic equation, there are two major concerns. The first is the convergence of the exact solution of the approximating finite-difference equation to the solution of the differential equation, and the second is the stability problem, i.e., the growth of errors introduced at any stage of the computation.
Convergence
Let U be the exact solution of the partial differential equation with x, t as independent variables, and u be the exact solution of the difference equation used to approximate the solution of the partial differential equation. Then the finite-difference equation is said to be convergent when u tends to U at a fixed mesh point as h and k both tend to zero. The conditions under which a difference scheme is convergent for a non-linear second-order partial differential equation are not yet known, except in a few particular cases. Such situations are usually dealt with via Lax's equivalence theorem. The explicit scheme, however, can be analyzed directly by deriving a difference equation for the discretization error e = U − u, where U denotes the exact solution of the partial differential equation and u the exact solution of the finite-difference equation.
Consider the equation
$$\frac{\partial U}{\partial t} = \frac{\partial^2 U}{\partial x^2}, \quad 0 < x < 1, \; t > 0, \tag{2.26}$$

where U is known for $0 \le x \le 1$ when t = 0, and at x = 0 and 1 when t > 0.
By Taylor’s theorem,
$$U_{i+1,j} = U(x_i+h,\, t_j) = U_{i,j} + h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{h^2}{2!}\frac{\partial^2 U}{\partial x^2}(x_i+\theta_1 h,\, t_j),$$

$$U_{i-1,j} = U(x_i-h,\, t_j) = U_{i,j} - h\left(\frac{\partial U}{\partial x}\right)_{i,j} + \frac{h^2}{2!}\frac{\partial^2 U}{\partial x^2}(x_i-\theta_2 h,\, t_j),$$

$$U_{i,j+1} = U(x_i,\, t_j+k) = U_{i,j} + k\frac{\partial U}{\partial t}(x_i,\, t_j+\theta_3 k),$$

where $0 < \theta_1 < 1$, $0 < \theta_2 < 1$ and $0 < \theta_3 < 1$. Since the explicit difference equation gives $u_{i,j+1} = r\,u_{i-1,j} + (1-2r)u_{i,j} + r\,u_{i+1,j}$, substituting these expansions into $e_{i,j} = U_{i,j} - u_{i,j}$ gives

$$e_{i,j+1} = r\,e_{i-1,j} + (1-2r)e_{i,j} + r\,e_{i+1,j} + k\left\{\frac{\partial}{\partial t}U(x_i,\, t_j+\theta_3 k) - \frac{\partial^2}{\partial x^2}U(x_i+\theta_4 h,\, t_j)\right\}, \tag{2.29}$$

where $-1 < \theta_4 < 1$.
Let $E_j$ denote the maximum value of $|e_{i,j}|$ along the jth time-row, and M the maximum modulus of the expression in the braces for all i and j. When $r \le \tfrac{1}{2}$, all the coefficients of e in equation (2.29) are positive or zero, so

$$|e_{i,j+1}| \le r E_j + (1-2r)E_j + r E_j + kM = E_j + kM.$$

Hence $E_{j+1} \le E_j + kM$, and therefore

$$E_j \le E_0 + jkM = tM,$$
because the initial values of u and U are the same, i.e. $E_0 = 0$, and $jk = t$. As h tends to zero, $k = rh^2$ also tends to zero, and M tends to the maximum modulus of

$$\left(\frac{\partial U}{\partial t} - \frac{\partial^2 U}{\partial x^2}\right)_{i,j}.$$

Since U is a solution of equation (2.26), the limiting value of M, and therefore of $E_j$, is zero. As $|U_{i,j} - u_{i,j}| \le E_j$, this proves that u converges to U as h tends to zero, when $r \le \tfrac{1}{2}$ and t is finite.
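The convergence just proved can be observed in practice. The sketch below (my own, using the smooth exact solution U = e^(−π²t) sin(πx) rather than the triangular data) fixes r = 0.4 and halves h; the maximum error at t = 0.1 should then fall by roughly a factor of four, i.e. behave as O(h²):

```python
import math

def max_error(h, r=0.4, t_end=0.1):
    """Max |U - u| at t_end for the explicit scheme with k = r*h**2."""
    N = int(round(1 / h))
    k = r * h**2
    u = [math.sin(math.pi * i * h) for i in range(N + 1)]
    steps = int(round(t_end / k))
    for _ in range(steps):
        new = u[:]
        for i in range(1, N):
            new[i] = u[i] + r*(u[i-1] - 2*u[i] + u[i+1])
        u = new
    t = steps * k
    return max(abs(u[i] - math.exp(-math.pi**2 * t) * math.sin(math.pi * i * h))
               for i in range(N + 1))

print(max_error(0.1), max_error(0.05))   # error drops by about a factor of 4
```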
Stability
The main idea behind stability is that the numerical method being applied should limit the amplification of any errors introduced during the computation. For linear initial-value boundary-value problems, stability is related to convergence via Lax's equivalence theorem, by defining stability in terms of the boundedness of the solution of the finite-difference equation. Assume that the vector of solution values $\mathbf{u}_{j+1} = [u_{1,j+1}, u_{2,j+1}, \ldots, u_{N-1,j+1}]^T$ of the finite-difference equations at the (j+1)th time-level is related to the vector of solution values at the jth time-level by

$$\mathbf{u}_{j+1} = A\mathbf{u}_j + \mathbf{b}_j,$$
where $\mathbf{b}_j$ is a column vector of known boundary values and zeros, and A is an $(N-1)\times(N-1)$ matrix of known elements. It will be shown that, as a consequence of Lax's definition of stability, the condition

$$\|A\| \le 1,$$

when the solution of the partial differential equation does not increase as t increases, will ensure the boundedness of the rounding errors. In actual computations the matrix A remains constant, unlike the more general operators admitted in Lax's definition. The matrix method of analysis then shows that the equations are stable if the largest of the moduli of the eigenvalues of A, i.e. the spectral radius, satisfies

$$\rho(A) \le 1,$$

when the solution of the partial differential equation does not increase as t increases. Although this condition ensures the boundedness of the computed solution, it does not guarantee that errors cannot grow appreciably before they eventually decay.
Vector Norms
The norm of a vector x is a positive real number giving a measure of the size of the vector; it is denoted by ‖x‖ and satisfies:
a. ‖x‖ > 0 if x ≠ 0, and ‖x‖ = 0 if x = 0;
b. ‖cx‖ = |c| ‖x‖ for any scalar c;
c. ‖x + y‖ ≤ ‖x‖ + ‖y‖.
Matrix Norms
The norm of a matrix A is a positive real number giving a measure of the size of the matrix. The following properties hold:
a. ‖A‖ > 0 if A ≠ 0, and ‖A‖ = 0 if A = 0;
b. ‖cA‖ = |c| ‖A‖ for any scalar c;
c. ‖A + B‖ ≤ ‖A‖ + ‖B‖;
d. ‖AB‖ ≤ ‖A‖ ‖B‖.
The matrix and vector norms are said to be compatible when
‖Ax‖ ≤ ‖A‖ ‖x‖, x ≠ 0.
The definitions of the 1, 2 and ∞ norms lead to the following results, which are proved in most linear algebra books: the 1-norm of A is its maximum absolute column sum; the ∞-norm is its maximum absolute row sum; and the 2-norm of A is the square root of the spectral radius of $A^H A$, where $A^H = (\bar{A})^T$. Moreover, if $\mathbf{x}_i$ is an eigenvector of A with eigenvalue $\lambda_i$, so that $A\mathbf{x}_i = \lambda_i\mathbf{x}_i$, then $|\lambda_i|\,\|\mathbf{x}_i\| = \|A\mathbf{x}_i\| \le \|A\|\,\|\mathbf{x}_i\|$, which shows that $\rho(A) \le \|A\|$ for any compatible norm.
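These results are easy to check on a small example. Below is my own sketch using only the standard library; for the symmetric matrix chosen, the 2-norm equals the spectral radius, here found by power iteration:

```python
A = [[2.0, -1.0],
     [-1.0, 2.0]]        # symmetric, eigenvalues 1 and 3

# infinity-norm: maximum absolute row sum; 1-norm: maximum absolute column sum
norm_inf = max(sum(abs(x) for x in row) for row in A)
norm_1 = max(sum(abs(A[i][j]) for i in range(2)) for j in range(2))

# spectral radius by power iteration (A symmetric, so rho(A) = ||A||_2)
x = [1.0, 0.3]
s = 0.0
for _ in range(200):
    y = [A[0][0]*x[0] + A[0][1]*x[1], A[1][0]*x[0] + A[1][1]*x[1]]
    s = max(abs(v) for v in y)
    x = [v / s for v in y]

print(norm_inf, norm_1, round(s, 6))   # prints 3.0 3.0 3.0; rho(A) <= ||A|| holds
```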
Let the solution domain of the partial differential equation be the finite rectangle $0 \le x \le 1$, $0 \le t \le T$, and subdivide it into uniform rectangular meshes by the lines $x_i = ih$, i = 0 to N, such that Nh = 1, and the lines $t_j = jk$, j = 0 to J, such that Jk = T. It will be assumed that h is related to k by some relation such as $k = rh^2$, with r constant, so that k → 0 as h → 0.
Let the finite-difference equation relating the (j+1)th time-row to the jth time-row be

$$b_{i-1}u_{i-1,j+1} + b_i u_{i,j+1} + b_{i+1}u_{i+1,j+1} = c_{i-1}u_{i-1,j} + c_i u_{i,j} + c_{i+1}u_{i+1,j}.$$
If the boundary values at i = 0 and N, j > 0, are known, these (N−1) equations for i = 1 to N−1 can be written in matrix form as

$$\begin{bmatrix} b_1 & b_2 & & & \\ b_1 & b_2 & b_3 & & \\ & \ddots & \ddots & \ddots & \\ & & b_{N-3} & b_{N-2} & b_{N-1} \\ & & & b_{N-2} & b_{N-1} \end{bmatrix}\begin{bmatrix} u_{1,j+1} \\ u_{2,j+1} \\ \vdots \\ u_{N-2,j+1} \\ u_{N-1,j+1} \end{bmatrix} = C\mathbf{u}_j + \mathbf{d}_j,$$

i.e. as $B\mathbf{u}_{j+1} = C\mathbf{u}_j + \mathbf{d}_j$, where the matrices B and C of order (N−1) are as shown (C having the c's arranged as the b's are in B), $\mathbf{u}_{j+1}$ denotes the column vector with components $u_{1,j+1}, u_{2,j+1}, \ldots, u_{N-1,j+1}$, and $\mathbf{d}_j$ denotes the column vector of known boundary values and zeros. Pre-multiplication by $B^{-1}$ gives

$$\mathbf{u}_{j+1} = A\mathbf{u}_j + \mathbf{f}_j,$$

where $A = B^{-1}C$ and $\mathbf{f}_j = B^{-1}\mathbf{d}_j$. Applying this recursively,

$$\mathbf{u}_j = A^j\mathbf{u}_0 + A^{j-1}\mathbf{f}_0 + A^{j-2}\mathbf{f}_1 + \cdots + \mathbf{f}_{j-1},$$

where $\mathbf{u}_0$ is the vector of initial values and $\mathbf{f}_0, \mathbf{f}_1, \ldots, \mathbf{f}_{j-1}$ are vectors of known boundary values. Now perturb the vector of initial values $\mathbf{u}_0$ to $\mathbf{u}_0^*$. The exact solution at the jth time-row will then be

$$\mathbf{u}_j^* = A^j\mathbf{u}_0^* + A^{j-1}\mathbf{f}_0 + \cdots + \mathbf{f}_{j-1}.$$

In other words, a perturbation $\mathbf{e}_0 = \mathbf{u}_0^* - \mathbf{u}_0$ of the initial values will be propagated according to the equation

$$\mathbf{e}_j = \mathbf{u}_j^* - \mathbf{u}_j = A\mathbf{e}_{j-1} = A^2\mathbf{e}_{j-2} = \cdots = A^j\mathbf{e}_0, \quad j = 1 \text{ to } J.$$
Lax defines the difference scheme to be stable when there exists a positive number M, independent of j, h and k, such that

$$\|A^j\| \le M, \quad j = 1 \text{ to } J.$$

This clearly limits the amplification of any initial perturbation, and therefore of any initial rounding errors, because

$$\|\mathbf{e}_j\| = \|A^j\mathbf{e}_0\| \le \|A^j\|\,\|\mathbf{e}_0\| \le M\|\mathbf{e}_0\|.$$

Since $\|A^j\| \le \|A\|^j$, a sufficient condition for stability is

$$\|A\| \le 1.$$
This is the necessary and sufficient condition for the difference equations to be stable when the solution of the partial differential equation does not increase as t increases. When this condition is satisfied, it follows automatically that the spectral radius $\rho(A) \le 1$, since $\rho(A) \le \|A\|$. Now let us analyze the stability of the explicit scheme, for which A has r, 1 − 2r, r along each row, so that $\|A\|_\infty = 2r + |1-2r|$. When $1 - 2r \ge 0$, i.e. $r \le \tfrac{1}{2}$, $\|A\|_\infty = 2r + 1 - 2r = 1$ and the equations are stable. When $1 - 2r < 0$, i.e. $r > \tfrac{1}{2}$, $|1-2r| = 2r - 1$, so $\|A\|_\infty = 4r - 1 > 1$ and stability is lost. Hence, by Lax's equivalence theorem, these equations are convergent for $0 < r \le \tfrac{1}{2}$. Now consider the Crank-Nicolson equations, which in matrix form are
$$\begin{bmatrix} 2+2r & -r & & & \\ -r & 2+2r & -r & & \\ & \ddots & \ddots & \ddots & \\ & & -r & 2+2r & -r \\ & & & -r & 2+2r \end{bmatrix}\begin{bmatrix} u_{1,j+1} \\ u_{2,j+1} \\ \vdots \\ u_{N-2,j+1} \\ u_{N-1,j+1} \end{bmatrix} = \begin{bmatrix} 2-2r & r & & & \\ r & 2-2r & r & & \\ & \ddots & \ddots & \ddots & \\ & & r & 2-2r & r \\ & & & r & 2-2r \end{bmatrix}\begin{bmatrix} u_{1,j} \\ u_{2,j} \\ \vdots \\ u_{N-2,j} \\ u_{N-1,j} \end{bmatrix} + \mathbf{b}_j,$$
where $\mathbf{b}_j$ is a vector of known boundary values and zeros. This can be written as

$$(2I_{N-1} - rT_{N-1})\mathbf{u}_{j+1} = (2I_{N-1} + rT_{N-1})\mathbf{u}_j + \mathbf{b}_j,$$

where $T_{N-1}$ is the tridiagonal matrix with −2 along the diagonal and 1 on either side, so that $\mathbf{u}_{j+1} = A\mathbf{u}_j + \mathbf{c}_j$ with $A = (2I_{N-1} - rT_{N-1})^{-1}(2I_{N-1} + rT_{N-1})$. It can easily be proved that if B and C are $n \times n$ symmetric matrices that commute, then $B^{-1}C$, $BC^{-1}$ and $B^{-1}C^{-1}$ are symmetric. Matrix $T_{N-1}$ is symmetric, so $2I_{N-1} - rT_{N-1}$ and $2I_{N-1} + rT_{N-1}$ are symmetric and commute; hence matrix A is symmetric. Since the eigenvalues of $T_{N-1}$ (easily calculated) are $\lambda_s = -4\sin^2(s\pi/2N)$, s = 1 to N−1, it follows that the eigenvalues of A are $(2 + 4r\sin^2(s\pi/2N))^{-1}(2 - 4r\sin^2(s\pi/2N))$.
Therefore

$$\|A\|_2 = \rho(A) = \max_s \left|\frac{1 - 2r\sin^2(s\pi/2N)}{1 + 2r\sin^2(s\pi/2N)}\right| < 1 \quad \text{for all } r > 0,$$

proving that the Crank-Nicolson equations are unconditionally stable. They are also consistent, so by Lax's equivalence theorem they are convergent for all finite values of r.
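The eigenvalue bound just derived is easy to confirm numerically; this short check (my own sketch) evaluates the amplification factors for several values of r, including r = 50, where they approach −1 in modulus and account for the slowly decaying oscillations mentioned earlier:

```python
import math

def cn_spectral_radius(r, N=10):
    """Spectral radius of the Crank-Nicolson matrix A for given r and N."""
    lams = [(1 - 2*r*math.sin(s*math.pi/(2*N))**2) /
            (1 + 2*r*math.sin(s*math.pi/(2*N))**2) for s in range(1, N)]
    return max(abs(v) for v in lams)

for r in (0.5, 1.0, 10.0, 50.0):
    print(r, cn_spectral_radius(r))   # always strictly below 1
```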