
Numerical Methods for MAT 120

Name Email
Irfan Sadat Ameen [email protected]
Syed Emad Uddin Shubha [email protected]
Fatema Tuz Zohora [email protected]
Rakibul Alam Shamim [email protected]
Krity Haque Charu [email protected]
Sarah Zabeen [email protected]
Reaz Shafqat [email protected]
Omer Tahsin [email protected]

Date: November 4, 2024


Institution: BRAC University
Contents

1 Numerical Root-Finding Methods
1.1 Introduction
1.2 Bisection Method
1.2.1 Example
1.2.2 Python Implementation
1.2.3 Exercises
1.3 Newton-Raphson Method
1.3.1 Example
1.3.2 Python Implementation
1.3.3 Exercise
1.4 Fixed Point Iteration Method
1.4.1 Example 1
1.4.2 Example 2
1.4.3 Homework
1.4.4 Visualization
1.5 Banach Fixed-Point Theorem

2 Numerical Differentiation
2.1 Introduction
2.1.1 Examples When Numerical Differentiation is Needed
2.2 Key Aspects and Intuition on Taylor Expansion
2.2.1 Taylor Expansion
2.2.2 Intuition Behind Taylor Expansion
2.3 Deriving Finite Difference Formulas Using Taylor Expansion
2.3.1 First-Order Derivative Approximations
2.3.2 Second-Order Derivative Approximations
2.4 Summary of Formulas
2.5 Conclusion
2.6 References for Further Study

3 Numerical Methods for Integration
3.1 Trapezoidal Rule: Numerical Integration
3.1.1 Derivation of the Trapezoidal Rule
3.1.2 Steps for Applying the Trapezoidal Rule
3.1.3 Example 1
3.1.4 Example 2
3.1.5 Exercises
3.1.6 Conclusion
3.2 Monte Carlo Integration: Methods and Applications
3.2.1 Averaging Method
3.2.2 Sampling Method
3.2.3 Comparison of Methods
3.2.4 Conclusion

4 Numerical Methods for Solving Ordinary Differential Equations
4.1 Euler's Method
4.1.1 Example
4.1.2 Second-Order ODE
4.1.3 Error Analysis for Euler's Method
4.1.4 Algorithm and Code Template for Euler's Method
4.2 Runge-Kutta Method
4.2.1 Example
4.2.2 Algorithm and Code Template for RK4 Method


Chapter 1

Numerical Root-Finding Methods

1.1 Introduction

Roots of a function are the solutions of f(x) = 0. Except for a few specific functions, it is generally not feasible to obtain an exact analytical expression for the root, thus preventing the precise determination of the solution. We know, for example, that the solution of an equation of the form

ax² + bx + c = 0 is x = (−b ± √(b² − 4ac)) / (2a). (1.1)

An analytic formula like this does not exist for polynomials of every degree (none exists in general for degree five or higher). Such problems can be solved by approximating a solution using an iterative method; we can solve them numerically using a few algorithms.

1.2 Bisection Method

The bisection method is based on Bolzano's theorem, which states that

If a continuous function has values of opposite sign inside an interval, then it has a root in
that interval.

If a continuous function changes sign over an interval, we can narrow down the approximate
solution by using the midpoint of the interval and searching to the left and right of it (similar
to a binary search). This can be broken down into a few steps:

1. Confirm that a root lies within the interval by checking that f(a) f(b) < 0. Otherwise, choose a different interval.

2. Introduce a new variable that is defined as the midpoint of the interval,

x0 = ( a + b)/2. (1.2)

3. Check if the root lies left or right of the midpoint.

(a) If f ( a) f ( x0 ) < 0, the root lies to the left.

(b) If f (b) f ( x0 ) < 0, the root lies to the right.

4. Repeat the process from step 2 until the width, b − a, is less than a very small number,
for example ϵ = 0.001.

Figure 1.1: Bisection method applied to a function f(x) with initial guesses a and b.

1.2.1 Example

Problem

Find the approximate root of the function

f ( x ) = x3 + x2 + x − 8 (1.3)

within the interval [1, 2] using the bisection method with an accuracy of ε = 0.1.

Solution

We apply the bisection method step-by-step to narrow down the interval until it is smaller
than the required accuracy.

Check if f ( x ) has opposite signs at x = 1 and x = 2:

f(1) = 1³ + 1² + 1 − 8 = −5

f(2) = 2³ + 2² + 2 − 8 = 6

Since f(1) = −5 and f(2) = 6 have opposite signs, there is at least one root in [1, 2].

We will iteratively bisect the interval and evaluate f ( x ) at the midpoint until the interval width
is smaller than ε = 0.1.

Iteration 1

x0 = (1 + 2)/2 = 1.5
f(1.5) = 1.5³ + 1.5² + 1.5 − 8 = −0.875

Since f(1.5) = −0.875 and f(2) = 6 have opposite signs, the root is in [1.5, 2].

Iteration 2

x0 = (1.5 + 2)/2 = 1.75
f(1.75) = 1.75³ + 1.75² + 1.75 − 8 = 2.171875

Since f(1.5) = −0.875 and f(1.75) = 2.171875 have opposite signs, the root is in [1.5, 1.75].

Iteration 3

x0 = (1.5 + 1.75)/2 = 1.625
f(1.625) = 1.625³ + 1.625² + 1.625 − 8 = 0.556640625

Since f(1.5) = −0.875 and f(1.625) = 0.556640625 have opposite signs, the root is in [1.5, 1.625].

Iteration 4

x0 = (1.5 + 1.625)/2 = 1.5625
f(1.5625) = 1.5625³ + 1.5625² + 1.5625 − 8 = −0.181396484375

Since f(1.5625) = −0.181396484375 and f(1.625) = 0.556640625 have opposite signs, the root is in [1.5625, 1.625].

Iteration 5

x0 = (1.5625 + 1.625)/2 = 1.59375
f(1.59375) = 1.59375³ + 1.59375² + 1.59375 − 8 = 0.181976318359375

Since f(1.5625) = −0.181396484375 and f(1.59375) = 0.181976318359375 have opposite signs, the root is in [1.5625, 1.59375].

Approximate Root

The interval [1.5625, 1.59375] has a width of

1.59375 − 1.5625 = 0.03125,

which is less than the required accuracy of 0.1. Therefore, we stop here and take the midpoint as an approximate root:
Approximate root ≈ 1.578125.

1.2.2 Python Implementation

The implementation of this algorithm in Python is the following (using sympy, with x the symbolic variable):

import sympy as smp

x = smp.symbols('x')
eps = 1e-04 #0.0001

def bisection(expr, a, b):
    #Check for wrong input: the root must be bracketed
    if expr.subs(x,a)*expr.subs(x,b) >= 0:
        print("Invalid values.")
    else:
        while (b-a) >= eps:
            #Find the midpoint of a and b:
            x0 = (a+b)/2
            #Check whether the root lies left or right of the midpoint
            if expr.subs(x,a)*expr.subs(x,x0) < 0:
                b = x0
            elif expr.subs(x,b)*expr.subs(x,x0) < 0:
                a = x0
            else:
                break #f(x0) is exactly zero: x0 is the root
        print("The root is ", x0)
    return
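For a quick check without sympy, the same algorithm can be sketched with a plain Python callable (the helper name `bisect` and the returned final midpoint are our own choices, not part of the course code):

```python
def bisect(f, a, b, eps=1e-4):
    # Bisection for a callable f with f(a)*f(b) < 0
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while (b - a) >= eps:
        x0 = (a + b) / 2          # midpoint of the current bracket
        if f(x0) == 0:
            return x0             # landed exactly on the root
        if f(a) * f(x0) < 0:
            b = x0                # root lies in the left half
        else:
            a = x0                # root lies in the right half
    return (a + b) / 2

# The function from the example: f(x) = x^3 + x^2 + x - 8 on [1, 2]
root = bisect(lambda x: x**3 + x**2 + x - 8, 1, 2)
print(root)  # ≈ 1.5782
```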

1.2.3 Exercises

(a) Basic Check for Root (Easy)


Given f ( x ) = x2 − 4, use the Bisection Method to determine if there’s a root between
a = 1 and b = 3. Perform two iterations to find an approximate value for the root.

(b) Intermediate Iteration (Easy-Medium)


Apply the Bisection Method to the function f ( x ) = x3 − 6x + 2 within the interval [1, 2].
Find the root to within ϵ = 0.1, completing at least three iterations.

(c) Precision Increase (Medium)


Using the Bisection Method, approximate the root of f ( x ) = x2 − 5 in the interval [2, 3].
Continue until the interval width is less than ϵ = 0.01.

(d) Nonlinear Root Approximation (Medium-Hard)


For f ( x ) = cos( x ) − x, find an approximate root within the interval [0, 1] using the Bisec-
tion Method. Complete enough iterations to reach a precision of ϵ = 0.001.

(e) High Precision and Multiple Iterations (Hard)


Apply the Bisection Method to approximate the root of f ( x ) = x3 − x − 1 within the
interval [1, 2] until the interval width is less than ϵ = 0.0001. Determine the number of
iterations needed to achieve this precision, showing each step.

1.3 Newton-Raphson Method

One can find the root of a continuous function using another method known as the Newton-Raphson method. This method does not require that the root lie within an interval [a, b]; rather, an arbitrary point xa is chosen and a root (if there is one) that lies close to that point will be discovered. The method starts at the point xa, where the tangent line to the function is taken; the point where this tangent crosses the x-axis becomes the new value for xa.

Let xa be a good estimate of r and let r = xa + h. Since the true root is r, and h = r − xa, the number h measures how far the estimate xa is from the truth. Since h is 'small',

f(r) = f(xa + h) ≈ f(xa) + f′(xa)h + (f″(xa)/2!)h² + · · · = 0. (1.4)

Therefore, keeping only the terms linear in h (since h is small),

h ≈ −f(xa)/f′(xa). (1.5)

As r = xa + h, we can write

r = xa − f(xa)/f′(xa). (1.6)

We can turn this into an iterative formula,

xn+1 = xn − f(xn)/f′(xn). (1.7)

Here, f′ = df/dx. This iterative process continues until |f(xn+1)| < ϵ.

Figure 1.2: Demonstration of the Newton-Raphson method. xn moves closer to the actual root with each iteration.

1.3.1 Example

Problem

Find the approximate root of the function

f ( x ) = x2 − 4

starting from an initial guess of x0 = 5 using Newton’s method, with an accuracy of ε = 0.1.

Solution

We will apply Newton's method, which uses the formula:

xn+1 = xn − f(xn)/f′(xn) (1.8)

where f′(x) is the derivative of f(x). For the function f(x) = x² − 4, we have f′(x) = 2x.

Starting with x0 = 5, we will compute each successive approximation xn+1 until the difference
between consecutive approximations is less than ε = 0.1.

Iteration 1

x1 = x0 − f(x0)/f′(x0) = 5 − (5² − 4)/(2 · 5) = 5 − 21/10 = 2.9

Iteration 2

x2 = x1 − f(x1)/f′(x1) = 2.9 − (2.9² − 4)/(2 · 2.9) = 2.9 − 4.41/5.8 ≈ 2.1397

Iteration 3

x3 = x2 − f(x2)/f′(x2) = 2.1397 − (2.1397² − 4)/(2 · 2.1397) ≈ 2.0046

Iteration 4

x4 = x3 − f(x3)/f′(x3) = 2.0046 − (2.0046² − 4)/(2 · 2.0046) ≈ 2.00001

Since the difference between x3 ≈ 2.0046 and x4 ≈ 2.00001 is less than ε = 0.1, we stop here.

Final Answer

The approximate root is:


x ≈ 2.000

1.3.2 Python Implementation

The implementation of this is the following (f is a sympy expression, x is the sympy symbol, and eps is the tolerance defined earlier):

def NM(f, x0):
    df = smp.diff(f, x)
    while abs(f.subs(x,x0)) >= eps:
        r = f.subs(x,x0)/(df.subs(x,x0))
        #As r becomes really small, we eventually approach our solution.
        x0 -= r
        #print(x0)
        #Remove comment to see all the values explicitly
    return x0

Here, f.subs(x, obj) replaces the symbol x in the expression f with the variable obj. This variable can be of any type and is substituted accordingly; if it is an integer or float, the arithmetic operations in the expression are carried out to give a numerical value. One can use sympy.lambdify() or define a lambda function and pass it as the argument instead of an expression; in that case, .subs() is not required. The derivative of a symbolic expression is taken analytically using sympy.diff(), where the first argument is the expression and the second is the variable with respect to which the derivative is taken.
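As a sanity check with plain callables instead of symbolic expressions (the helper name `newton` and the hand-coded derivative are ours), the worked example f(x) = x² − 4 with x0 = 5 converges to the same root:

```python
def newton(f, df, x0, eps=1e-4):
    # Iterate x <- x - f(x)/f'(x) until |f(x)| < eps
    while abs(f(x0)) >= eps:
        x0 -= f(x0) / df(x0)
    return x0

root = newton(lambda x: x**2 - 4, lambda x: 2 * x, 5.0)
print(root)  # ≈ 2.0
```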

1.3.3 Exercise

(a) Simple Initial Guess (Easy)


For f ( x ) = x2 − 2, use Newton-Raphson Method with an initial guess x0 = 1.5. Perform
two iterations and calculate the approximate root.

(b) Polynomial Root with Calculated Derivative (Easy-Medium)


Given f ( x ) = x3 − 3x + 1 and an initial guess x0 = 1, use the Newton-Raphson Method
to approximate the root. Complete at least three iterations, showing all steps.

(c) Finding Root of Trigonometric Function (Medium)


Apply the Newton-Raphson Method to find the root of f ( x ) = sin( x ) − 0.5 with an initial
guess of x0 = 1. Perform at least three iterations.

(d) Higher Degree Polynomial (Medium-Hard)
Use the Newton-Raphson Method to approximate a root for f ( x ) = x4 − x − 10 with an
initial guess of x0 = 2. Perform four iterations and track the convergence of the solution.

(e) Precision and Convergence Check (Hard)


For f ( x ) = e x − 3x and initial guess x0 = 1, use the Newton-Raphson Method to find the
root. Continue until | f ( xn )| < 0.0001, and determine the number of iterations needed to
reach this precision.

1.4 Fixed Point Iteration Method

The Fixed Point Iteration method is an iterative process to find the solution of equations of the
form:

x = f (x)

Starting with an initial guess x0 , we then generate a sequence of values:

x n +1 = f ( x n )

This process is repeated until the sequence converges to a fixed point, i.e., when xn+1 ≈ xn .

1.4.1 Example 1

Consider the equation x³ − x + 1 = 0. We can rearrange this equation in two different ways:

• Using x = g(x) = ∛(x − 1): Starting with an initial guess x0, the iteration becomes:

xn+1 = g(xn) = ∛(xn − 1)

Now for x0 = −1, the sequence becomes:

x0 x1 x2 x3 x4 ...
-1 -1.259921 -1.3122938367 -1.32235382 -1.3242687445515782 ...

So this seems to converge to a point. If we iterate for a few more steps, we will reach x15 = −1.3247179572395291, which is accurate up to 10 decimal places!

• Using x = f(x) = x³ + 1: Starting with an initial guess x0, the iteration becomes:

xn+1 = f(xn) = xn³ + 1

Now for x0 = −1, the sequence becomes:

x0 x1 x2 x3 x4 ...
-1 0 1 2 9 ...

So this is diverging! Why? We can guess that, since |x³| > 1 for |x| > 1, once some iterate exceeds 1 in magnitude, each cube makes it larger still, and hence the sequence diverges.
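Both behaviors can be reproduced in a few lines of Python (a sketch; `fixed_point` and the real-cube-root helper `cbrt` are our own names, and the helper is needed because `(-2) ** (1/3)` in Python returns a complex number):

```python
import math

def cbrt(v):
    # Real cube root, valid for negative v as well
    return math.copysign(abs(v) ** (1 / 3), v)

def fixed_point(g, x0, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        x1 = g(x0)
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    return x0

root = fixed_point(lambda x: cbrt(x - 1), -1.0)
print(root)  # ≈ -1.3247179572, the convergent rearrangement above
```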

1.4.2 Example 2

Now say we want to find a real root of f(x) = x² − x − 1. From our previous experience, we may want to write it in the form x = √(x + 1). And if we start from any x0 ≥ −1, we will approach the root x = (1 + √5)/2.

However, there is another root x = (1 − √5)/2, which can never be obtained using this form (why?). So we proceed with the form x = −√(x + 1) to find the negative root of f(x), and the iteration is given by:

xn+1 = −√(xn + 1)

Now for x0 = 0, the sequence becomes:

x0 x1 x2 x3 x4 x5 ...
0 -1 0 -1 0 -1 ...

So this is neither diverging nor converging. The sequence is dancing between 0 and −1.
What if we start with x0 = 0.5? We will find x1, but x2, x3, ... will be undefined.
However, starting with x0 = −0.5 yields:

x0 x1 x2 x3 x4 x5
-0.5 -0.70710678 -0.5411961 -0.67735 -0.5680223 -0.65725

So the sequence seems to approach x = (1 − √5)/2 ≈ −0.618034.
Hence we conclude that xn+1 = f(xn) may generate a converging, diverging, or oscillating sequence (or undefined values) depending on the initial point x0 and the choice of f(x). We will find out the reason soon.
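The slow approach to the negative root can be verified numerically (a short sketch of the iteration xn+1 = −√(xn + 1) starting from x0 = −0.5):

```python
import math

x = -0.5
for _ in range(200):
    x = -math.sqrt(x + 1)  # the iteration for the negative root
print(x)  # ≈ -0.618034, i.e. (1 - sqrt(5))/2
```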

1.4.3 Homework

Using the fixed-point iteration method, we want to find the real solution of x³ + x² − 1 = 0. The actual solution is x ≈ 0.75487766624669276.

If we express it as x = f(x), we can have either f(x) = ∛(1 − x²) or f(x) = √(1 − x³). Find x0 for both cases to generate sequences that converge, diverge, or oscillate.

1.4.4 Visualization

Figure 1.3: (a) Converging (b) Diverging

1.5 Banach Fixed-Point Theorem

From the previous examples, it should be apparent that f(x) plays a crucial role in the convergence of the sequence. When f(x) = 1 + x³, the sequence diverges, but f(x) = ∛(x − 1) makes the sequence converge. Why is that so? The latter seems to "contract" the sequence, leading to a fixed point. We call ϕ : [a, b] → [a, b] a contraction map if

|ϕ(x) − ϕ(y)| ≤ k|x − y|, ∀ x, y ∈ [a, b]

where 0 ≤ k < 1 is called the Lipschitz constant. We call x = x∗ a fixed point of ϕ if ϕ(x∗) = x∗.

From the definitions, it should be clear that the fixed point is a solution to x = ϕ(x). Now the Banach fixed-point theorem states that every contraction mapping on a non-empty complete metric space has a unique fixed point, and for any x0, the sequence xn+1 = ϕ(xn) converges to that fixed point, i.e.,

x∗ = ϕ( lim_{n→∞} xn ) = ϕ(x∗)

I won't bore you with the proof, but let's define ϵk = xk − x∗, where x∗ is the fixed point. Now,

xk+1 = ϵk+1 + x∗ = ϕ(xk) = ϕ(ϵk + x∗)

Using the Taylor expansion, ϕ(ϵk + x∗) = ϕ(x∗) + ϕ′(x∗)ϵk + O(ϵk²), and since ϕ(x∗) = x∗, we have

ϵk+1 = ϕ′(x∗)ϵk + O(ϵk²) ≈ ϕ′(x∗)ϵk

Note that ϵk is the error in each step, which will approach 0 if |ϕ′(x∗)| < 1.
Newton's method is given by:

xk+1 = xk − f(xk)/f′(xk)

Let g(x) = x − f(x)/f′(x); then xk+1 = g(xk), which is just fixed-point iteration!
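The error relation ϵk+1 ≈ ϕ′(x∗)ϵk can be checked numerically for ϕ(x) = ∛(x − 1) from Example 1 (this check and its helper names are our own additions; the fixed point is x∗ ≈ −1.3247179572):

```python
import math

def phi(v):
    w = v - 1
    return math.copysign(abs(w) ** (1 / 3), w)  # real cube root of (v - 1)

x_star = -1.3247179572447460  # fixed point of phi
x = -1.0
for _ in range(5):
    e_k = x - x_star
    x = phi(x)
    # Each ratio approximates phi'(x_star) ≈ 0.19, well below 1
    print((x - x_star) / e_k)
```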

Chapter 2

Numerical Differentiation

2.1 Introduction

Numerical differentiation is a mathematical technique used to approximate the derivative of a function using discrete data points. It's particularly useful in situations where:

• The function is known only at specific data points (e.g., experimental measurements).

• An analytical expression of the derivative is difficult or impossible to obtain.

• The function is too complex to differentiate analytically.

2.1.1 Examples When Numerical Differentiation is Needed

1. Engineering Applications: Estimating the rate of heat transfer in a material when tem-
perature measurements are taken at discrete points.

2. Physics Experiments: Calculating acceleration from position-time data collected during an experiment.

3. Financial Modeling: Determining the rate of change of stock prices using historical data.

4. Biological Systems: Estimating population growth rates from periodic census data.

2.2 Key Aspects and Intuition on Taylor Expansion

2.2.1 Taylor Expansion

The Taylor expansion is a powerful mathematical tool that expresses a function as an infinite sum of terms calculated from the derivatives of the function at a single point. For a function f(x) that is infinitely differentiable at a point x = x0, the Taylor series around x0 is:

f(x) = f(x0) + f′(x0)(x − x0) + (f″(x0)/2!)(x − x0)² + (f‴(x0)/3!)(x − x0)³ + · · ·
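To make the "more terms, better approximation" idea concrete, here is a small check with f(x) = e^x around x0 = 0 (our illustrative choice; the series then reduces to the sum of x^k/k!):

```python
import math

def taylor_exp(x, n_terms):
    # Partial sum of the Taylor series of e^x around x0 = 0
    return sum(x**k / math.factorial(k) for k in range(n_terms))

for n in (2, 4, 8):
    print(n, abs(taylor_exp(0.5, n) - math.exp(0.5)))
# The error shrinks rapidly as more terms are included.
```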

2.2.2 Intuition Behind Taylor Expansion

• Local Approximation: The Taylor series provides a polynomial approximation of a function around a specific point. The more terms included, the closer the approximation is to the actual function.

• Foundation for Numerical Methods: Many numerical techniques, including finite dif-
ference methods for differentiation, are derived from the Taylor expansion.

• Error Estimation: By truncating the series after a finite number of terms, we introduce a
truncation error. The remainder term gives us an estimate of this error.

2.3 Deriving Finite Difference Formulas Using Taylor Expansion

Finite difference methods approximate derivatives by combining function values at specific points. The accuracy of these approximations depends on the order of the truncation error, commonly expressed using Big O notation (e.g., O(h), O(h²)).

2.3.1 First-Order Derivative Approximations

Forward Difference Scheme

First-Order Accuracy (O(h)). Using the Taylor expansion around x:

f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + · · ·

Rearranged to solve for f′(x):

f′(x) = (f(x + h) − f(x))/h − (f″(x)/2)h − (f‴(x)/6)h² − · · ·

Approximation:

f′(x) ≈ (f(x + h) − f(x))/h with an error of O(h)

Second-Order Accuracy (O(h²)). Consider the Taylor expansions of f(x + h) and f(x + 2h):

f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + · · ·
f(x + 2h) = f(x) + 2f′(x)h + 2f″(x)h² + (4f‴(x)/3)h³ + · · ·

Form a linear combination to eliminate f″(x):

4f(x + h) − 3f(x) − f(x + 2h) = 2f′(x)h + O(h³)

Solve for f′(x):

f′(x) = (4f(x + h) − 3f(x) − f(x + 2h))/(2h) + O(h²)

Approximation:

f′(x) ≈ (4f(x + h) − 3f(x) − f(x + 2h))/(2h) with an error of O(h²)
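A quick numerical check of the three-point formula f′(x) ≈ (4f(x + h) − 3f(x) − f(x + 2h))/(2h), using f(x) = e^x at x = 0 (an illustrative choice, where f′(0) = 1); shrinking h tenfold should shrink the error roughly a hundredfold:

```python
import math

f = math.exp
errors = {}
for h in (0.1, 0.01):
    approx = (4 * f(h) - 3 * f(0.0) - f(2 * h)) / (2 * h)
    errors[h] = abs(approx - 1.0)
    print(h, errors[h])  # the error behaves like O(h^2)
```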

Backward Difference Scheme

First-Order Accuracy (O(h)). Using the Taylor expansion around x:

f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + · · ·

Rearranged to solve for f′(x):

f′(x) = (f(x) − f(x − h))/h + (f″(x)/2)h − (f‴(x)/6)h² + · · ·

Approximation:

f′(x) ≈ (f(x) − f(x − h))/h with an error of O(h)

Second-Order Accuracy (O(h²)). Consider the Taylor expansions of f(x − h) and f(x − 2h):

f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + · · ·
f(x − 2h) = f(x) − 2f′(x)h + 2f″(x)h² − (4f‴(x)/3)h³ + · · ·

Form a linear combination to eliminate f″(x):

3f(x) − 4f(x − h) + f(x − 2h) = 2f′(x)h + O(h³)

Solve for f′(x):

f′(x) = (3f(x) − 4f(x − h) + f(x − 2h))/(2h) + O(h²)

Approximation:

f′(x) ≈ (3f(x) − 4f(x − h) + f(x − 2h))/(2h) with an error of O(h²)

Central Difference Scheme

Second-Order Accuracy (O(h²)). Using the Taylor expansions:

f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + · · ·
f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + · · ·

Subtract the second from the first:

f(x + h) − f(x − h) = 2f′(x)h + (f‴(x)/3)h³ + · · ·

Rearranged to solve for f′(x):

f′(x) = (f(x + h) − f(x − h))/(2h) − (f‴(x)/6)h² − · · ·

Approximation:

f′(x) ≈ (f(x + h) − f(x − h))/(2h) with an error of O(h²)

2.3.2 Second-Order Derivative Approximations

Forward Difference Scheme

First-Order Accuracy (O(h)). Using the Taylor expansions:

f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + · · ·
f(x + 2h) = f(x) + 2f′(x)h + 2f″(x)h² + (4f‴(x)/3)h³ + · · ·

Form a combination:

f(x + 2h) − 2f(x + h) + f(x) = f″(x)h² + O(h³)

Solve for f″(x):

f″(x) = (f(x + 2h) − 2f(x + h) + f(x))/h² − f‴(x)h − · · ·

Approximation:

f″(x) ≈ (f(x + 2h) − 2f(x + h) + f(x))/h² with an error of O(h)

Backward Difference Scheme

First-Order Accuracy (O(h)). Using the Taylor expansions:

f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + · · ·
f(x − 2h) = f(x) − 2f′(x)h + 2f″(x)h² − (4f‴(x)/3)h³ + · · ·

Form a combination:

f(x) − 2f(x − h) + f(x − 2h) = f″(x)h² + O(h³)

Solve for f″(x):

f″(x) = (f(x) − 2f(x − h) + f(x − 2h))/h² + f‴(x)h + · · ·

Approximation:

f″(x) ≈ (f(x) − 2f(x − h) + f(x − 2h))/h² with an error of O(h)

Central Difference Scheme

Second-Order Accuracy (O(h²)). Using the Taylor expansions:

f(x + h) = f(x) + f′(x)h + (f″(x)/2)h² + (f‴(x)/6)h³ + (f⁽⁴⁾(x)/24)h⁴ + · · ·
f(x − h) = f(x) − f′(x)h + (f″(x)/2)h² − (f‴(x)/6)h³ + (f⁽⁴⁾(x)/24)h⁴ − · · ·

Add the two equations:

f(x + h) + f(x − h) = 2f(x) + f″(x)h² + (f⁽⁴⁾(x)/12)h⁴ + · · ·

Rearranged to solve for f″(x):

f″(x) = (f(x + h) − 2f(x) + f(x − h))/h² − (f⁽⁴⁾(x)/12)h² − · · ·

Approximation:

f″(x) ≈ (f(x + h) − 2f(x) + f(x − h))/h² with an error of O(h²)

2.4 Summary of Formulas

Derivative    Scheme                Approximation Formula                            Error Order
First-order   Forward Difference    f′(x) ≈ (f(x + h) − f(x)) / h                    O(h)
First-order   Backward Difference   f′(x) ≈ (f(x) − f(x − h)) / h                    O(h)
First-order   Central Difference    f′(x) ≈ (f(x + h) − f(x − h)) / (2h)             O(h²)
Second-order  Forward Difference    f″(x) ≈ (f(x + 2h) − 2f(x + h) + f(x)) / h²      O(h)
Second-order  Backward Difference   f″(x) ≈ (f(x) − 2f(x − h) + f(x − 2h)) / h²      O(h)
Second-order  Central Difference    f″(x) ≈ (f(x + h) − 2f(x) + f(x − h)) / h²       O(h²)
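The error orders in the table can be observed directly. The sketch below differentiates f(x) = sin(x) at x = 1 (our illustrative choice): reducing h tenfold cuts the forward error about 10-fold (O(h)) and the central error about 100-fold (O(h²)):

```python
import math

f, x0 = math.sin, 1.0
exact = math.cos(x0)  # true value of f'(x0)

errs = {}
for h in (0.1, 0.01):
    fwd = (f(x0 + h) - f(x0)) / h            # forward difference, O(h)
    cen = (f(x0 + h) - f(x0 - h)) / (2 * h)  # central difference, O(h^2)
    errs[h] = (abs(fwd - exact), abs(cen - exact))
    print(h, errs[h])
```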

2.5 Conclusion

Numerical differentiation is an essential tool in computational mathematics, enabling the approximation of derivatives using discrete data points. The finite difference methods derived from Taylor expansions provide a systematic approach to approximating both first- and second-order derivatives with known error bounds.

Understanding the truncation errors associated with each scheme allows you to choose the
most appropriate method for your specific application. The central difference scheme gener-
ally offers higher accuracy (O(h2 )) compared to forward and backward schemes (O(h)) for the
same step size h.

By mastering these techniques, you can effectively tackle problems in engineering, physics,
finance, and other fields where analytical differentiation is impractical.

2.6 References for Further Study

1. “Numerical Methods for Engineers” by Steven C. Chapra and Raymond P. Canale:


This book provides a comprehensive introduction to numerical methods, including dif-
ferentiation and integration techniques.

2. “Applied Numerical Methods with MATLAB for Engineers and Scientists” by Steven
C. Chapra: This text offers practical insights into implementing numerical methods using
MATLAB.

3. Online Resources:

• https://ocw.mit.edu/courses/find-by-topic/cat=engineering&subcat=civilengineering&spec=computationalmethods (MIT OpenCourseWare: Numerical Methods)

• http://numerical.recipes/ (Numerical Recipes: Numerical Differentiation)

Feel free to delve into these resources to deepen your understanding of numerical differentia-
tion and its applications.

Chapter 3

Numerical Methods for Integration

3.1 Trapezoidal Rule: Numerical Integration

The trapezoidal rule is a method used to approximate the definite integral of a function by
summing the areas of trapezoids under the curve. This is particularly useful when the func-
tion is complex or its integral cannot be determined analytically. In such cases, numerical
integration methods like the trapezoidal rule provide a practical solution.

3.1.1 Derivation of the Trapezoidal Rule

Consider a continuous function f(x) defined over the interval [a, b]. To approximate the integral ∫_a^b f(x) dx, we divide the interval [a, b] into n subintervals of equal width h = (b − a)/n.

Let the points dividing the interval be x0 = a, x1 = a + h, x2 = a + 2h, ..., xn = b. For each subinterval, we approximate the area under the curve by the area of a trapezoid, where the height of the trapezoid is h and the two parallel sides are the function values f(xi) and f(xi+1).

The area of a trapezoid is given by:

Area of trapezoid = (h/2) [ f(xi) + f(xi+1) ] (3.1)

Figure 3.1: Approximating the area under a curve using trapezoids.

Summing the areas of all trapezoids gives us the approximation for the integral. We calculate the area of each trapezoid individually and sum them:

(h/2)(f(x0) + f(x1)) + (h/2)(f(x1) + f(x2)) + · · · + (h/2)(f(xn−1) + f(xn))

By grouping the terms, we obtain:

= (h/2)(f(x0) + 2f(x1) + 2f(x2) + · · · + 2f(xn−1) + f(xn))

This is the general form of the trapezoidal rule formula. The factor of 2 appears for the function values that are shared between adjacent trapezoids (i.e., f(x1) to f(xn−1)).

∫_a^b f(x) dx ≈ (h/2)(f(x0) + 2f(x1) + 2f(x2) + · · · + 2f(xn−1) + f(xn)) (3.2)

So, in compact form, we can write

∫_a^b f(x) dx ≈ (h/2) ( f(x0) + 2 ∑_{i=1}^{n−1} f(xi) + f(xn) ) (3.3)

where h = (b − a)/n and xi = a + i · h. The use of smaller sub-intervals improves the accuracy of the approximation.

3.1.2 Steps for Applying the Trapezoidal Rule

1. Determine the interval: Choose the interval [ a, b] over which the integration is to be
performed.

2. Choose the number of sub-intervals n: Increasing n improves accuracy.

3. Calculate the step size h: The width of each sub-interval is given by h = (b − a)/n.

4. Evaluate the function at the endpoints and intermediate points:

• Calculate f ( a) and f (b).

• Compute the function at intermediate points f ( xi ) for i = 1, 2, ..., n − 1.

5. Apply the composite trapezoidal formula: Plug in the values into the trapezoidal rule
formula.
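The five steps above can be collected into one short function (a sketch; the name `trapezoid` is ours):

```python
def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal sub-intervals
    h = (b - a) / n
    total = f(a) + f(b)            # endpoints get weight 1
    for i in range(1, n):
        total += 2 * f(a + i * h)  # interior points get weight 2
    return h / 2 * total

print(trapezoid(lambda x: x**2, 0, 1, 4))  # 0.34375
```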

3.1.3 Example 1

Consider the function f(x) = x², and we want to approximate the integral ∫_0^1 x² dx using 4 sub-intervals.

• a = 0, b = 1, n = 4

• h = (1 − 0)/4 = 0.25

• Points: x0 = 0, x1 = 0.25, x2 = 0.5, x3 = 0.75, x4 = 1

• Evaluate the function: f(0) = 0² = 0, f(0.25) = 0.0625, f(0.5) = 0.25, f(0.75) = 0.5625, f(1) = 1

• Apply formula:

∫_0^1 x² dx ≈ (0.25/2) (0 + 2(0.0625 + 0.25 + 0.5625) + 1) = 0.34375

3.1.4 Example 2

Let's consider the function f(x) = sin(x) over the interval [0, π]. Using 6 sub-intervals, the width h is given by:

h = (π − 0)/6 = π/6 (3.4)

Explanation of Sub-intervals: In this example, we are dividing the interval [0, π] into 6 equal sub-intervals. The width of each sub-interval is h = π/6. This means that the points where the function is evaluated (i.e., x0, x1, . . . , x6) are:

x0 = 0, x1 = π/6, x2 = 2π/6 = π/3, x3 = 3π/6 = π/2, ..., x6 = π

These points correspond to the function values at which we apply the trapezoidal rule. For example, f(x1) = sin(π/6), f(x2) = sin(π/3), and so on.

Applying the composite trapezoidal rule, we calculate:

∫_0^π sin(x) dx ≈ (π/6)/2 [ sin(0) + 2( sin(π/6) + sin(π/3) + · · · + sin(5π/6) ) + sin(π) ] ≈ 1.954, (3.5)

which is close to the exact value of 2.

Figure 3.2: Trapezoidal Rule Approximation of ∫_0^π sin(x) dx with 6 sub-intervals.

In the graph above, the blue curve represents the function f ( x ) = sin( x ), and the orange
trapezoids represent the areas approximated by the trapezoidal rule. By summing the areas of
the trapezoids, we obtain a numerical approximation of the integral.
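The sum in Example 2 can be evaluated in a few lines (a sketch with our own helper function); the result is close to, but slightly below, the exact integral value of 2:

```python
import math

def trapezoid(f, a, b, n):
    # Composite trapezoidal rule with n equal sub-intervals
    h = (b - a) / n
    return h / 2 * (f(a) + f(b) + 2 * sum(f(a + i * h) for i in range(1, n)))

approx = trapezoid(math.sin, 0.0, math.pi, 6)
print(approx)  # ≈ 1.9541
```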

3.1.5 Exercises

1. Use the trapezoidal rule to approximate the integral of f(x) = eˣ from 0 to 1 using 4
sub-intervals.

2. Approximate ∫₀² (x³ − 3x + 2) dx using 5 sub-intervals.

3. Approximate ∫₁⁴ ln(x) dx using 6 sub-intervals.

4. A car’s velocity v(t) (in m/s) is measured at several intervals over a 10-second period.
The values are provided below. Estimate the total distance traveled by the car over 10
seconds using the trapezoidal rule with 5 sub-intervals.

Time t (s) 0 2 4 6 8 10
Velocity v(t) (m/s) 0 4 8 7 3 0

5. The cross-sectional area A(d) (in m2 ) of a reservoir is recorded at different depths d. The
data is provided below. Approximate the total volume of water in the reservoir using the
trapezoidal rule with 5 sub-intervals.

Depth d (m) 0 2 4 6 8 10
Area A(d) (m2 ) 200 180 140 100 50 10

3.1.6 Conclusion

The trapezoidal rule provides a simple yet powerful tool for approximating definite integrals,
particularly when an analytical solution is difficult to obtain. By dividing the interval into
smaller sub-intervals and applying the trapezoidal formula, we can achieve higher precision
in our approximations.

3.2 Monte Carlo Integration: Methods and Applications

Monte Carlo integration is a technique for approximating the value of definite integrals using
random sampling. It is particularly useful when traditional numerical methods (like the
trapezoidal or Simpson’s rule) are difficult to apply, especially in higher dimensions.

Monte Carlo integration is based on the principle of simulating random variables to estimate
the area under a curve. It can be applied to a wide variety of problems, particularly in physics,
statistics, and computer science.

In this section, we will discuss two popular methods for Monte Carlo integration:

• The averaging method

• The sampling method

3.2.1 Averaging Method

In the averaging method, the idea is to randomly sample points within the interval of interest,
evaluate the function at those points, and then take the average of these function values to
estimate the integral.

Steps for the Averaging Method

Suppose we want to approximate the integral of a function f ( x ) over the interval [ a, b]:

I = ∫ₐᵇ f(x) dx

The steps for the averaging method are as follows:

1. Generate random points: Randomly generate N points x1 , x2 , . . . , x N uniformly over the


interval [ a, b].

2. Evaluate the function: Evaluate the function f ( x ) at each of the sampled points.

3. Compute the average: Compute the average of these values:

(1/N) ∑ᵢ₌₁ᴺ f(xᵢ)

4. Estimate the integral: Multiply the average by the length of the interval (b − a) to esti-
mate the integral:

I ≈ (b − a) · (1/N) ∑ᵢ₌₁ᴺ f(xᵢ)

Why It Works

The averaging method is based on the law of large numbers, which states that as the number
of random points increases, the average of the values approaches the expected value of the
function. Since the integral is essentially the expected value of the function over the interval,
this method provides an accurate estimate of the integral as the number of points, N, increases.

Example 1: Estimating the Integral of f ( x ) = e x

Let’s use the averaging method to estimate the integral of f ( x ) = e x over the interval [0, 1].
Follow the steps outlined above:

I = ∫₀¹ eˣ dx

1. Generate random points: - We randomly generate N = 1000 points uniformly in the interval
[0, 1]. These points are denoted as x1 , x2 , . . . , x N .

2. Evaluate the function: - For each of the sampled points xi , evaluate f ( xi ) = e xi .

3. Compute the average: - Compute the average value of f(xᵢ) for all the randomly sampled
points:

(1/N) ∑ᵢ₌₁ᴺ e^{xᵢ}

After computing this sum, we find that the average value of f ( x ) at these random points is
approximately 1.718.

4. Estimate the integral: - Multiply the average value by the length of the interval (b − a),
which is 1, to estimate the integral:

I ≈ (1 − 0) · 1.718 = 1.718

The exact value of the integral is:

∫₀¹ eˣ dx = e − 1 ≈ 1.718

The estimated value is very close to the true value, illustrating the accuracy of the Monte Carlo
averaging method.

Figure 3.3: Monte Carlo Integration for f ( x ) = e x using the averaging method.
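The steps above can be sketched in Python; the helper name `mc_average` and the fixed seed (used only so the run is reproducible) are our own choices:

```python
import math
import random

def mc_average(f, a, b, n=100_000, seed=0):
    """Monte Carlo averaging estimate of the integral of f over [a, b]."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n   # (length of interval) x (average value)

est = mc_average(math.exp, 0, 1)
print(est)   # close to e - 1 ≈ 1.71828
```

By the law of large numbers, the estimate tightens as n grows; the statistical error shrinks roughly like 1/√N.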

Example 2: Estimating the Integral of f ( x ) = sin( x ) over [0, π ]

Let’s estimate the integral of f ( x ) = sin( x ) over the interval [0, π ] using the same steps.

I = ∫₀^π sin(x) dx

Steps:

1. Generate random points: - Randomly generate N = 1000 points uniformly in the interval
[0, π ].

2. Evaluate the function: - Evaluate f ( xi ) = sin( xi ) at each of the randomly sampled points.

3. Compute the average: - Compute the average of f ( xi ) over all sampled points:

(1/N) ∑ᵢ₌₁ᴺ sin(xᵢ)

The average value of f ( x ) = sin( x ) at these random points is approximately 0.637.

4. Estimate the integral: - Multiply the average by the length of the interval (π − 0) to estimate
the integral:

I ≈ π · 0.637 ≈ 2.001

The exact value of the integral is 2, so the Monte Carlo method provides a very close approxi-
mation.

Figure 3.4: Monte Carlo Integration for f ( x ) = sin( x ) using the averaging method.

3.2.2 Sampling Method

In the sampling method, we take a different approach. Instead of just averaging function
values over an interval, we randomly sample points both in the x-axis and the y-axis. We
count how many points fall under the curve to estimate the integral.

Steps for the Sampling Method

Let’s say we want to approximate the same integral I = ∫ₐᵇ f(x) dx:

1. Randomly generate N points (xᵢ, yᵢ) such that xᵢ is uniformly distributed in [a, b], and yᵢ
is uniformly distributed in [0, max(f(x))].

2. Count how many points ( xi , yi ) fall under the curve, i.e., where yi ≤ f ( xi ).

3. The ratio of points under the curve to the total points approximates the area under the
curve:

Integral ≈ (Number of points under the curve / N) × (b − a) × max(f(x))

Example 1: Hypothetical Example for Sampling Method

Let’s consider estimating the integral of f ( x ) = sin( x ) over the interval [0, π ] using the sam-
pling method.

Suppose we generate 1000 random points in the 2D plane, where x is distributed uniformly in
[0, π] and y is distributed uniformly in [0, 1]. Out of these 1000 points, we find that 700 points
lie below the curve y = sin(x), and 300 points lie above it.

The total number of points under the curve is 700. Thus, the ratio of points under the curve to
the total points is:

700/1000 = 0.7

The length of the interval is π, and the maximum value of sin(x) is 1, so the integral is
approximated as:

I ≈ π × 1 × 0.7 ≈ 2.199

The exact value of the integral is 2, so this hypothetical example shows that we can closely
estimate the integral using the sampling method.

Figure 3.5: Monte Carlo Integration using the sampling method. Random points are sampled,
and the fraction of points under the curve is used to estimate the integral.
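A sketch of the sampling (hit-or-miss) method in Python — here using an actual random sample rather than the hypothetical 700/1000 count above; the function name and the fixed seed are our own choices:

```python
import math
import random

def mc_hit_or_miss(f, a, b, fmax, n=100_000, seed=0):
    """Estimate an integral from the fraction of random points below the curve."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(a, b)      # random point along the x-axis
        y = rng.uniform(0, fmax)   # random point along the y-axis
        if y <= f(x):              # the point lies under the curve
            hits += 1
    return (hits / n) * (b - a) * fmax

est = mc_hit_or_miss(math.sin, 0, math.pi, 1.0)
print(est)   # close to the exact value 2
```

Note that fmax must be an upper bound for f on [a, b]; a loose bound still works but wastes samples above the curve.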

3.2.3 Comparison of Methods

Both the averaging and sampling methods are powerful, especially in higher dimensions,
where other numerical methods struggle. However, they may converge more slowly than
other techniques, particularly when the function is highly irregular.

• Averaging method: Easy to implement and can handle smooth functions very well. It is
generally preferred when we can sample points along the x-axis efficiently.

• Sampling method: More versatile, as it can be used to approximate the area under a
curve by counting points, even if the function has complex shapes.

3.2.4 Conclusion

Monte Carlo integration provides a flexible, powerful approach to estimating definite integrals,
especially in higher dimensions where other methods fall short. While it may not always be the
fastest converging method, its simplicity and versatility make it an essential tool in numerical
analysis.

Chapter 4

Numerical Methods for Solving Ordinary Differential Equations

An Ordinary Differential Equation (ODE) is a differential equation that involves derivatives
with respect to a single independent variable. Many problems in physics, engineering, biology,
and economics are modeled using ODEs, such as wave propagation, circuit design, population
dynamics, and growth models.

Many ODEs cannot be solved analytically (i.e., with an exact solution). For such cases, we
use numerical methods like Euler’s method, Runge-Kutta methods, etc., to approximate the
solution.

4.1 Euler’s Method

Euler’s method is one of the simplest and most basic numerical methods used to solve
ordinary differential equations (ODEs). Despite its simplicity, it is a foundational tool in
numerical analysis. It uses a straightforward iterative process to approximate the solution of a
differential equation.

Let’s say we have a first-order ODE

dy(x)/dx = f(x, y)    (4.1)

with the initial condition y(x0) = y0, where f(x, y) is a given function. We have to solve for
y(x).

We have yi ≡ y(xi) at point xi. At point xi+1 we have yi+1 ≡ y(xi+1) = y(xi + h) after a small
change h = xi+1 − xi. The approximate forward difference formula for the first derivative is
given by,

dy/dx ≈ [y(xi + h) − y(xi)]/h = (yi+1 − yi)/h    (4.2)

With this, (4.1) becomes,

(yi+1 − yi)/h = f(xi, yi)    (4.3)

Hence, we find the formula for the subsequent yi+1 from the previous yi:

yi+1 = yi + h f(xi, yi)    (4.4)

This is Euler’s formula for solving first-order ODE.

To use this formula, we start with the given initial condition y(x0) = y0 at x0. We obtain
y1 ≡ y(x1) from the Euler formula, y1 = y0 + h f(x0, y0), and then use this to find
y2 = y1 + h f(x1, y1), and so on. We need to define the step size h = xi+1 − xi, which should be
a small number. The starting point is x0 and the end point is xn.

4.1.1 Example:

To illustrate Euler’s method, we consider the differential equation y′ = −y with the initial
condition y(0) = 1. Equation (4.4) with h = 0.01 gives,

y(0.01) = 1 + 0.01(−1) = 0.99

y(0.02) = 0.99 + 0.01(−0.99) = 0.9801

y(0.03) = 0.9801 + 0.01(−0.9801) = 0.9703

y(0.04) = 0.9703 + 0.01(−0.9703) = 0.9606

The exact solution is y = e⁻ˣ, and from this the value at x = 0.04 is y = 0.9608.

Problem 1: Decay Equation

Consider a sample of radioactive material with an initial N0 number of nuclei. As time passes,
some of the nuclei will decay into daughter nuclei. Our job is to predict the number of nuclei
remaining at a given time. The decay equation follows as

dN/dt = −0.1N    (4.5)

Let’s solve it for t ∈ (0, 50) with the initial condition N(0) = 500. Here, f(t, N) = −0.1N and
we choose h = 1.

Ni+1 = Ni + h f (ti , Ni )

= Ni − 0.1Ni

Or, Ni+1 = 0.9Ni

Also, remember that, ti+1 = ti + h.

i ti Ni Ni+1 = 0.9Ni
0 t0 = 0 N0 = 500 N1 = 0.9 × 500 = 450.00
1 1 450.00 405.00
2 2 405.00 364.50
3 3 364.50 328.05
... ... ... ...

Table 4.1: Euler’s Method

Figure 4.1: Euler’s Method
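The tabulated iteration can be reproduced with a few lines of Python (a minimal sketch of the update Ni+1 = 0.9 Ni):

```python
# Euler iteration for dN/dt = -0.1 N with h = 1 and N(0) = 500
N = 500.0
values = [N]
for _ in range(50):           # t = 1, 2, ..., 50
    N = N + 1 * (-0.1 * N)    # N_{i+1} = N_i + h f(t_i, N_i) = 0.9 N_i
    values.append(N)

print(values[:4])   # first rows of Table 4.1: 500, 450, 405, 364.5
```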

4.1.2 Second-Order ODE

The generic form of a second-order ODE can be written as:

d²y/dx² + a (dy/dx) + b y + c = 0    (4.6)

Now, let’s define dy/dx = v. With this, we have two first-order equations in hand:

dy/dx = v

and,

dv/dx = −(a v + b y + c)

Euler’s method for these two equations gives the following

yi+1 = yi + h vi    (4.7)

vi+1 = vi − h(a vi + b yi + c)    (4.8)

Problem 2:
Damped Harmonic Oscillator: F = −kx − 2mλv

d²x/dt² + 2λ (dx/dt) + ω²x = 0

Let ω = 2, λ = 0.5 and the initial conditions x(0) = 0 and v(0) = ẋ(0) = 10. The equation becomes

d²x/dt² + dx/dt + 4x = 0

Solution:

We choose h = 0.1. Given x0 = 0 and v0 = 10, the two first-order equations are

dx/dt = v

dv/dt = −(a v + b x + c)

with a = 1, b = 4, and c = 0 for this problem.

Euler’s method for these two equations gives the following

xi+1 = xi + 0.1vi

vi+1 = vi − 0.1(vi + 4xi )

ti     xi          xi+1 = xi + 0.1 vi             vi         vi+1 = vi − 0.1(vi + 4 xi)
0      x0 = 0      x1 = 0 + 0.1 · 10 = 1          v0 = 10    v1 = 10 − 0.1 · (10 + 4 · 0) = 9
0.1    x1 = 1      x2 = 1 + 0.1 · 9 = 1.9         v1 = 9     v2 = 9 − 0.1 · (9 + 4 · 1) = 7.7
0.2    x2 = 1.9    x3 = 1.9 + 0.1 · 7.7 = 2.67    v2 = 7.7   ...

Table 4.2: Euler’s Method for Second Order ODE

Figure 4.2: Euler’s Method for Second Order ODE
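The hand iteration in Table 4.2 can be sketched in Python; the function name and the argument defaults (a = 1, b = 4, c = 0, matching this problem) are our own choices:

```python
def euler_second_order(x0, v0, h, steps, a=1.0, b=4.0, c=0.0):
    """Euler's method for x'' + a x' + b x + c = 0, split into two first-order ODEs."""
    x, v = x0, v0
    history = [(0.0, x, v)]
    for i in range(1, steps + 1):
        x_new = x + h * v                    # x_{i+1} = x_i + h v_i
        v_new = v - h * (a * v + b * x + c)  # v_{i+1} = v_i - h (a v_i + b x_i + c)
        x, v = x_new, v_new
        history.append((i * h, x, v))
    return history

for t, x, v in euler_second_order(0.0, 10.0, 0.1, 3):
    print(t, x, v)   # reproduces x1 = 1, v1 = 9, x2 = 1.9, v2 = 7.7, ...
```

Note that both updates use the values from step i; x and v must be advanced together, not one after the other.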

4.1.3 Error Analysis for Euler’s Method

To derive the local error for the Euler method, we start with the Taylor series expansion of the
true solution y( x + h):

y(x + h) = y(x) + h y′(x) + (h²/2) y′′(x) + (h³/6) y′′′(x) + O(h⁴)    (4.9)

In the Euler method given by (4.4), we approximate y( x + h) using:

y( x + h) ≈ y( x ) + h f ( x, y( x )).

With f ( x, y) = y′ , we can express it as:

y( x + h) ≈ y( x ) + hy′ ( x ).

As we can see, this only involves terms up to the second term in (4.9), i.e., first order in h,
leaving out the second- and higher-order terms. This means the error is given by,

τ = (h²/2) y′′(x) + (h³/6) y′′′(x) + O(h⁴)

This shows that the local error in each step is proportional to h2 .

On the other hand, the global error is the cumulative effect of the local errors committed in
each step. But the number of steps n = (tn − t0)/h is proportional to 1/h, and the error
committed in each step is proportional to h². Thus, it is to be expected that the global error will
be proportional to h. The following figure depicts how the accuracy of this method improves as
we make h smaller.

Figure 4.3: Euler Method for different values of h.

4.1.4 Algorithm and Code Template for Euler’s Method

Algorithm:

• Initial values x0 , y0 , step size h and number of steps N (or the end value for x)

• Set x = x0 , y = y0

• Create a loop for each step n from 0 to N − 1, where

1. Calculate the slope at the current point: f ( x, y)

2. Update y using yn+1 = yn + h f ( xn , yn )

3. Update x using xn+1 = xn + h

• The result will be the final ( x, y) values or the entire sequence of ( x, y) pairs.

Python Template:
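One possible template matching the algorithm above (a minimal sketch; the function and variable names are our own choices):

```python
def euler(f, x0, y0, h, n):
    """Euler's method for dy/dx = f(x, y): returns the sequences of x and y values."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(n):
        y = y + h * f(x, y)   # y_{n+1} = y_n + h f(x_n, y_n)
        x = x + h             # x_{n+1} = x_n + h
        xs.append(x)
        ys.append(y)
    return xs, ys

# Reproduce the earlier example: y' = -y, y(0) = 1, h = 0.01, four steps
xs, ys = euler(lambda x, y: -y, 0.0, 1.0, 0.01, 4)
print(ys[-1])   # ≈ 0.9606 (exact: e^{-0.04} ≈ 0.9608)
```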

4.2 Runge-Kutta Method

The goal is to approximate the value of the unknown function y( x ) at discrete points x0 , x1 , x2 , ...,
given an initial condition y( x0 ) = y0 and the ODE:

dy(x)/dx = f(x, y)    (4.10)

The Runge-Kutta methods do this by estimating the slope (derivative) at multiple points
between xn and xn+1, and then combining them to give a weighted average slope, which
improves the accuracy over simpler methods like Euler’s.

The fourth order Runge-Kutta Method (RK4) is the most widely used form of Runge-Kutta
methods, because of its higher order of accuracy (O(h4 )).

The RK4 method uses four slopes k1 , k2 , k3 , k4 calculated as follows:

k1 = h f(xn, yn)    (4.11)

k2 = h f(xn + h/2, yn + k1/2)    (4.12)

k3 = h f(xn + h/2, yn + k2/2)    (4.13)

k4 = h f(xn + h, yn + k3)    (4.14)

The next value of y is then computed as:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)    (4.15)

Where,
k1 : The slope at the beginning of the interval (Euler’s method approximation).
k2 : The slope at the midpoint using k1 to estimate y
k3 : Another slope at the midpoint, but now using k2
k4 : The slope at the end of the interval using k3
The final estimate for yn+1 is a weighted average of these four slopes.

4.2.1 Example:

Given dy/dx = 1 + y², where y = 0 when x = 0. Find y(0.2), y(0.4), and y(0.6).

Sol: Considering h = 0.2, with x0 = y0 = 0, the coefficients become,

k1 = 0.2

k2 = 0.2(1.01) = 0.202

k3 = 0.2(1 + 0.010201) = 0.20204

k4 = 0.2(1 + 0.040820) = 0.20816

and,
y(0.2) = 0 + (1/6)(k1 + 2k2 + 2k3 + k4) = 0.2027

which is correct to 4 decimal places.

For calculating y(0.4), we can take x0 = 0.2 and y0 = 0.2027, and the result of that calculation
will then be used to find the value of y(0.6).

Problem 3: Investment Growth with Interest

Imagine you have an investment that grows continuously with a constant interest rate. The
equation for the growth of an investment over time is:

dy/dt = r · y

Where:

• y(t) is the amount of money at time t,

• r is the constant interest rate.

This differential equation models the exponential growth of money due to interest.

To predict the value of the investment at future points in time, we can apply the RK4 method.
Let’s say the initial amount of money, y0 , is $1000, and the interest rate r = 5% or 0.05. We
want to know how the money grows over time with a time step h (e.g., 1 year).

The RK4 method uses four slopes to calculate the next value yn+1 from the current value yn .
These slopes are:

k1 = h · f(yn) = h · (r · yn)

k2 = h · f(yn + k1/2) = h · r · (yn + k1/2)

k3 = h · f(yn + k2/2) = h · r · (yn + k2/2)

k4 = h · f(yn + k3) = h · r · (yn + k3)

Then, the next value yn+1 is calculated as:

yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

Let’s calculate the investment’s value after one year using RK4 with a time step h = 1 and an
initial investment of $1000.

• Initial y0 = 1000, r = 0.05

• k1 = 1 · (0.05 · 1000) = 50

• k2 = 1 · (0.05 · (1000 + 50/2)) = 51.25

• k3 = 1 · (0.05 · (1000 + 51.25/2)) = 51.28125

• k4 = 1 · (0.05 · (1000 + 51.28125)) = 52.5640625

Then:

y1 = 1000 + (1/6)(50 + 2(51.25) + 2(51.28125) + 52.5640625) ≈ 1051.271

After one year, the investment grows from $1000 to approximately $1051.27 using the RK4
method.

This example shows how RK4 can be used in a straightforward financial problem like
investment growth with interest!

4.2.2 Algorithm and Code Template for RK4 Method

Algorithm:

• Initial Values x0 , y0 , step size h and number of steps N.

• Set x = x0 , y = y0

• Create a loop for each step n from 0 to N − 1, where

1. Calculate the coefficients k1 , k2 , k3 , k4

2. Update y using yn+1 = yn + (1/6)(k1 + 2k2 + 2k3 + k4)

3. Update x using xn+1 = xn + h

• The result will be the final ( x, y) values or the entire sequence of ( x, y) pairs.

Python Template:
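One possible template for RK4 (a minimal sketch; the function name `rk4` is our own choice), which also reproduces the earlier example dy/dx = 1 + y²:

```python
def rk4(f, x0, y0, h, n):
    """Fourth-order Runge-Kutta for dy/dx = f(x, y); returns the final (x, y)."""
    x, y = x0, y0
    for _ in range(n):
        k1 = h * f(x, y)
        k2 = h * f(x + h / 2, y + k1 / 2)
        k3 = h * f(x + h / 2, y + k2 / 2)
        k4 = h * f(x + h, y + k3)
        y = y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x = x + h
    return x, y

# y' = 1 + y^2 with y(0) = 0: one step of h = 0.2 gives y(0.2) ≈ tan(0.2)
x, y = rk4(lambda x, y: 1 + y**2, 0.0, 0.0, 0.2, 1)
print(round(y, 4))   # 0.2027, matching the worked example
```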
