Numerical Analysis (10th ed)

R.L. Burden, J. D. Faires, A. M. Burden

Chapter 2
Solutions of Equations in One Variable

Chapter 2.1: The Bisection Method*


One of the most basic root-finding methods is the Bisection method. This method is based on the
Intermediate Value Theorem and generates a sequence of approximate solutions to f(x) = 0 that
converge to a root of f, provided f is continuous on the interval where we believe a root exists. The
main "take-aways" in this section are:

• How do we use the Bisection method to generate the sequence?


• How many steps do we need to ensure that the root is within a certain specified tolerance?

MOTIVATION: Given a function f that is continuous on [a,b] where a solution to f(x) = 0 is expected
to exist, the Bisection method begins by finding the functional values at the endpoints, namely, f(a)
and f(b). If these functional values have opposite signs, then we know that the graph will cross the x-axis at
least once on the interval [a,b]. That is, f(x) = 0 will have at least one solution on that interval.

We use this knowledge to approximate the solution by the Bisection method:

1. Find the midpoint, c, of the interval [a,b] and check for sign changes among f(a), f(b), and f(c).
2. Let's assume, for illustration, that f(a) and f(c) have opposite signs. We then
modify the interval from [a,b] to [a,c] and begin the process again on the modified interval.
3. We continue this process until we reach an approximation to the solution of f(x) = 0 that is
accurate to within a specified tolerance.
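
The loop described above can be written out in a few lines. The following is an illustrative Python sketch (not from the text); the name bisection, the stopping rule based on the half-length of the interval, and the sample function are choices made here, and it assumes f is continuous with f(a) and f(b) of opposite signs.

def bisection(f, a, b, tol, max_iter=100):
    # Approximate a root of f on [a, b]; assumes f(a) and f(b) have opposite signs.
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = a + (b - a) / 2               # midpoint of the current interval
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:  # interval is small enough
            return c
        if fa * fc < 0:                   # sign change on [a, c]
            b = c
        else:                             # sign change on [c, b]
            a, fa = c, fc
    return c

# Example: f(x) = x^3 + 4x^2 - 10 changes sign on [1, 2].
print(bisection(lambda x: x**3 + 4*x**2 - 10, 1, 2, 1e-6))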

The absolute error for the Bisection method satisfies |p_n − p| ≤ (b − a)/2^n, where p is a zero of f and n ≥ 1. If
we want to determine how many steps are needed to ensure that |p_n − p| ≤ (b − a)/2^n < Tol, then we use
logarithms to find the value of n that will give us the tolerance we are looking for. In other words,

    2^n > (b − a)/Tol,   or   n > log10((b − a)/Tol) / log10(2).

A really nice animation of the Bisection algorithm can be seen by clicking on the following Bisection example found at:
http://mathfaculty.fullerton.edu/mathews//a2001/animations/rootfinding/BisectionMethod/BisectionMethod.html
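
To make the bound concrete, the short computation below (a sketch, not from the text) finds the smallest n that guarantees a given tolerance; a = 1, b = 2 and Tol = 10^(-3) are just sample inputs.

import math

a, b, tol = 1.0, 2.0, 1e-3
n = math.ceil(math.log10((b - a) / tol) / math.log10(2))
print(n)   # n = 10, since 2^10 = 1024 > (b - a)/Tol = 1000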
Chapter 2.2: Fixed-point Iteration*
The questions we would like to answer in this section are:
• What is a fixed point of a function?
• How do we find a fixed point of a function?
• What are fixed points used for?

A fixed point is an invariant value, p, for which the functional value at p equals p. That is, f(p) = p.
Not all functions have fixed points. Suppose f(x) = x² − 2x. We can easily see that x = 0 and x = 3 are
fixed points of f since f(0) = 0 and f(3) = 3. However, f(x) = x² − 2x + 3 has no fixed points since
there are no real values, p, for which f(p) = p.

Example 1
If we solve x² − 2x = x for x, we see that x = 0 and x = 3. Plugging these values back into f will
show that f(0) = 0 and f(3) = 3.

solve(x^2 - 2*x = x);
                                   0, 3                                   (2.1.1)
2
Graphically, we see that the curve f(x) = x² − 2x and the straight line g(x) = x intersect at x = 0
and x = 3, indicating that these values are fixed points of f.

plot([x^2 - 2*x, x], x = -2 .. 4, axis = [gridlines = [colour = green, majorlines = 2]]);

[Plot: the parabola f(x) = x² − 2x and the line g(x) = x on −2 ≤ x ≤ 4, intersecting at x = 0 and x = 3.]

Example 2
If we solve x² − 2x + 3 = x for x, we see that there are no real solutions, and hence, no fixed
points.

solve(x^2 - 2*x + 3 = x);
                    3/2 + (1/2) I √3, 3/2 − (1/2) I √3                    (2.2.1)

Graphically, we see that the curve f(x) = x² − 2x + 3 and the straight line g(x) = x do not intersect, indicating that
there are no fixed points of f.

plot([x^2 - 2*x + 3, x], x = -2 .. 4, axis = [gridlines = [colour = green, majorlines = 2]]);

[Plot: the parabola f(x) = x² − 2x + 3 and the line g(x) = x on −2 ≤ x ≤ 4; the curves do not intersect.]

Theorem 2.3 in this section gives sufficient conditions for the existence and uniqueness of a fixed point.
In Example 1, because f(x) is continuous on the interval [−1/4, 1] and f maps every x in that interval
back into the interval, Theorem 2.3 guarantees at least one fixed point on the interval. We can also find a
similar interval about x = 3 for which this theorem applies. However, this clearly is not the case for the
second example.

Fixed points are important in many disciplines. In general, they describe points of equilibrium of a
process (function), if they exist, which can then be analyzed. The fixed point iteration generates a
sequence of values which, depending upon the starting value used, will converge to a fixed point if it
exists.

The following steps describe the fixed point iteration (a sketch of this loop is given after the list):

1. Identify a starting value for x, call it p0.
2. Compute p1 = f(p0).
3. Compute p2 = f(p1).
4. We continue this process until we reach an approximation to the fixed point p = f(p) that is
accurate to within a specified tolerance.
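
A minimal Python sketch of this loop (illustrative, not from the text) follows; the name fixed_point and the stopping test on successive iterates are choices made here.

def fixed_point(g, p0, tol=1e-8, max_iter=100):
    # Iterate p_{n+1} = g(p_n) until successive values agree to within tol.
    for _ in range(max_iter):
        p1 = g(p0)
        if abs(p1 - p0) < tol:
            return p1
        p0 = p1
    return p0

# Example: the iteration p_n = 3^(-p_{n-1}) with p_0 = 0.5 settles near 0.5478.
print(fixed_point(lambda p: 3.0**(-p), 0.5))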

Example 3
How can we use the fixed point iteration to find the roots of x² − 2x = 0? To do this, we solve
the equation for x² and proceed as before. Of course, we know that the roots are x = 0 and x = 2.
Graphically, we see that the curve f(x) = x² and the straight line g(x) = 2x intersect at x = 0 and x = 2,
the roots of the original equation.

plot([x^2, 2*x], x = -2 .. 3, axis = [gridlines = [colour = green, majorlines = 2]]);

[Plot: the parabola f(x) = x² and the line g(x) = 2x on −2 ≤ x ≤ 3, intersecting at x = 0 and x = 2.]

Several really nice animations of the fixed point method can be seen by clicking on the following URL:

http://mathfaculty.fullerton.edu/mathews//a2001/animations/rootfinding/FixedPoint/FixedPoint.html

Chapter 2.3: Newton's Method and Its Extensions*


The key questions we want to address in this section are:
• How do we use Newton's method to find the root of an equation?
• How do we use the Secant method to find the root of an equation?
• How do we use the method of False Position to find the root of an equation?

Newton's Method:
Newton's method is derived from the first Taylor polynomial. As long as the function f is twice
continuously differentiable on [a, b] and an approximation p0 ∈ [a, b] to the solution p is such that
f′(p0) ≠ 0, Taylor's polynomial of degree one expanded about p0 and evaluated at x = p is given by

    f(p) = f(p0) + (p − p0) f′(p0) + ((p − p0)² / 2) f″(ξ(p)),

where ξ(p) lies between p and p0. Recall that the last term is called the remainder and provides a
formula for the bound on the error for this method. Since p is a solution to the equation, f(p) = 0 and
we have

    0 = f(p0) + (p − p0) f′(p0) + ((p − p0)² / 2) f″(ξ(p)).

If we ignore the error term, solving for p, we then have that

    p ≈ p0 − f(p0) / f′(p0).

Thus, the initial approximation to the solution p is found by computing p1 = p0 − f(p0)/f′(p0), and this
represents the point at which the tangent line at the point (p0, f(p0)) crosses the x-axis. We can
compute a second approximation p2 to p if we replace p0 with p1 in our computation. We can continue
this process until we achieve our desired accuracy.

As is stated in the text, the major drawback for this method is the need to know the value of the
derivative at each approximation.
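
A compact Python sketch of the iteration (illustrative, not from the text) is shown below; note that the derivative fprime must be supplied explicitly, which is exactly the drawback just mentioned. The name newton and the stopping test are choices made here.

def newton(f, fprime, p0, tol=1e-10, max_iter=50):
    # Newton's method: p_{n+1} = p_n - f(p_n)/f'(p_n).
    for _ in range(max_iter):
        p1 = p0 - f(p0) / fprime(p0)   # x-intercept of the tangent line at p0
        if abs(p1 - p0) < tol:
            return p1
        p0 = p1
    return p0

# Example: the positive root of x^2 - 2 = 0, starting from p0 = 1.
print(newton(lambda x: x**2 - 2, lambda x: 2*x, 1.0))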

Several nice animations of Newton's method can be found at the following URL:
http://mathfaculty.fullerton.edu/mathews//a2001/animations/rootfinding/NewtonMethod/Newtonee.html

Secant Method:

The Secant method is a slight modification of Newton's method. The definition of the first derivative is used to
replace the derivative in Newton's method.
This leads to the formula

    p_n = p_{n−1} − f(p_{n−1}) (p_{n−1} − p_{n−2}) / (f(p_{n−1}) − f(p_{n−2})),

which is, in fact, the x-intercept (p_n, 0) of the secant line through (p_{n−1}, f(p_{n−1})) and (p_{n−2}, f(p_{n−2})).
Sequential application of the formula to successive approximations to p leads
to a sequence that hopefully converges to the root of f.
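
The same iteration without an explicit derivative looks like this in Python (an illustrative sketch, not from the text); the name secant and the stopping test are choices made here.

def secant(f, p0, p1, tol=1e-10, max_iter=50):
    # Secant method: the derivative in Newton's method is replaced by a difference quotient.
    for _ in range(max_iter):
        p2 = p1 - f(p1) * (p1 - p0) / (f(p1) - f(p0))   # x-intercept of the secant line
        if abs(p2 - p1) < tol:
            return p2
        p0, p1 = p1, p2
    return p1

# Example: the positive root of x^2 - 2 = 0, starting from p0 = 1 and p1 = 2.
print(secant(lambda x: x**2 - 2, 1.0, 2.0))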

Several nice animations of the Secant method can be found at the following URL:
http://mathfaculty.fullerton.edu/mathews//a2001/animations/rootfinding/SecantMethod/SecantMethod.html

Method of False Position (Regula Falsi):

This method differs from the Secant method only in the way the results are handled once we have them.
In the Secant method we simply compute successive approximations to the root using the formula until
we have reached the desired level of accuracy. In the method of False Position, we refine the interval
[p_{n−2}, p_{n−1}] so that it always brackets the root, as we do with the Bisection method. A short sketch follows.
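
A Python sketch of the bracketing version (illustrative, not from the text) is given below; the name false_position, the stopping test on |f(c)|, and the sample function are choices made here.

import math

def false_position(f, a, b, tol=1e-10, max_iter=100):
    # Regula Falsi: secant-style update, but the bracket [a, b] is always kept.
    fa, fb = f(a), f(b)
    for _ in range(max_iter):
        c = b - fb * (b - a) / (fb - fa)   # intersection of the secant line with the x-axis
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:                    # root lies in [a, c]
            b, fb = c, fc
        else:                              # root lies in [c, b]
            a, fa = c, fc
    return c

# Example: the root of cos(x) - x on [0, 1].
print(false_position(lambda x: math.cos(x) - x, 0.0, 1.0))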

A nice animation of the method of False Position can be found at the following URL:
http://mathfaculty.fullerton.edu/mathews//a2001/animations/rootfinding/RegulaFalsi/RegulaFalsi.html
Chapter 2.4: Error Analysis for Iterative Methods*
The key questions we want to address in this section are:
• How do we determine the rate of convergence of a method?
• How do we determine the number of iterations necessary to achieve the desired level of
convergence?

To determine the convergence rate of a sequence of approximations {p_n}, n = 0, 1, 2, ..., to p, we look at the
following limit and two cases:

    λ = lim_{n→∞} |p_{n+1} − p| / |p_n − p|^α

• if α = 1 (and λ < 1), the sequence is linearly convergent
• if α = 2, the sequence is quadratically convergent.

The Illustration in the text shows a good example as to how this computation is done.

To determine the number of iterations necessary to achieve the desired level of accuracy, given that
we know the formula for the error of the sequence in terms of N, we can solve the following inequality for N:

    |p_{N+1} − p| < TOL
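
As a rough illustration (not in the text), α can also be estimated numerically from three consecutive errors |p_n − p| when the limit p is known; estimate_order below is a hypothetical helper.

import math

def estimate_order(iterates, p):
    # Estimate alpha from the last three errors |p_n - p|.
    e = [abs(x - p) for x in iterates[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# Newton iterates for the root of x^2 - 2: the estimate is close to 2 (quadratic).
x, iterates = 1.0, []
for _ in range(4):
    x = x - (x*x - 2) / (2*x)
    iterates.append(x)
print(estimate_order(iterates, math.sqrt(2)))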

Chapter 2.5: Accelerating Convergence*


The key question in this section is:
• How do we accelerate convergence?

Theorem 2.8 from the last section implies that higher-order convergence for fixed-point methods of the
form g(p) = p can occur only when g′(p) = 0.

The method discussed in this section is Aitken's Δ² method. We begin by making an important
assumption, namely that the finite sequence {p_n}, n = 0, ..., N, or the infinite sequence {p_n}, n = 0, 1, 2, ...,
converges linearly to some limit p. We define a new sequence {p̂_n} using the following formula:

    p̂_n = p_n − (p_{n+1} − p_n)² / (p_{n+2} − 2 p_{n+1} + p_n),   for n = 0, 1, ..., N − 2,

or, in the infinite case,

    p̂_n = p_n − (p_{n+1} − p_n)² / (p_{n+2} − 2 p_{n+1} + p_n),   for n = 0, 1, 2, ....
For instance, suppose we want to generate the first few terms of the sequence {p̂_n} obtained by applying
Aitken's Δ² method to the sequence p_n = 3^(−p_{n−1}) with p_0 = 0.5.

    n = 0:  p_0 = 0.5
            p̂_0 = 0.5 − (0.5773502692 − 0.5)² / (0.5303150046 − 2(0.5773502692) + 0.5) = 0.5481009646
    n = 1:  p_1 = 3^(−0.5) = 0.5773502692
            p̂_1 = 0.5773502692 − (0.5303150046 − 0.5773502692)² / (0.5584386128 − 2(0.5303150046) + 0.5773502692) = 0.5479150738
    n = 2:  p_2 = 3^(−0.5773502692) = 0.5303150046
            p̂_2 = 0.5303150046 − (0.5584386128 − 0.5303150046)² / (0.5414483921 − 2(0.5584386128) + 0.5303150046) = 0.5478470419
    n = 3:  p_3 = 3^(−0.5303150046) = 0.5584386128
            p̂_3 = 0.5584386128 − (0.5414483921 − 0.5584386128)² / (0.5516497984 − 2(0.5414483921) + 0.5584386128) = 0.5478225655
    n = 4:  p_4 = 3^(−0.5584386128) = 0.5414483921
    n = 5:  p_5 = 3^(−0.5414483921) = 0.5516497984
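
The table above can be reproduced with a short Python sketch (illustrative, not from the text); aitken is a hypothetical helper name.

def aitken(p):
    # Apply Aitken's delta-squared formula to a list of iterates p_0, ..., p_N.
    return [p[n] - (p[n+1] - p[n])**2 / (p[n+2] - 2*p[n+1] + p[n])
            for n in range(len(p) - 2)]

# Build p_n = 3^(-p_{n-1}) starting from p_0 = 0.5, then accelerate the sequence.
p = [0.5]
for _ in range(5):
    p.append(3.0**(-p[-1]))
print(aitken(p))   # approximately [0.5481, 0.5479, 0.5478, 0.5478]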

Applying Aitken's Δ² method to a linearly convergent fixed point iteration yields Steffensen's method.
This method upgrades the convergence to quadratic. Recall that a fixed point iteration is one in which
p = g(p). Steffensen's method proceeds as follows (see the sketch below):

• begin with an initial approximation p0^(0) to p
• set p1^(0) = g(p0^(0)) and p2^(0) = g(p1^(0))
• determine the first Aitken's Δ² approximation p0^(1)
• set p1^(1) = g(p0^(1)) and p2^(1) = g(p1^(1))
• continue the process until the desired TOL is achieved

The text gives a nice example of this process.
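
A minimal Python sketch of Steffensen's method (illustrative, not from the text) is given below; the name steffensen and the stopping test are choices made here.

def steffensen(g, p0, tol=1e-10, max_iter=50):
    # Two fixed-point steps followed by one Aitken delta-squared step per pass.
    for _ in range(max_iter):
        p1 = g(p0)
        p2 = g(p1)
        denom = p2 - 2*p1 + p0
        if denom == 0:                       # no correction left to make
            return p2
        p_new = p0 - (p1 - p0)**2 / denom    # Aitken's delta-squared step
        if abs(p_new - p0) < tol:
            return p_new
        p0 = p_new
    return p0

# Example: the fixed point of g(p) = 3^(-p), roughly 0.5478.
print(steffensen(lambda p: 3.0**(-p), 0.5))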


Chapter 2.6: Zeros of Polynomials and Muller's Method
The main concepts in this section are:
• How do we find the zeros of a polynomial P(x)?
• How do we deal with a polynomial P x that has complex zeros?

Horner's method:

Horner's method uses synthetic division to re-write the polynomial

    P(x) = a_n x^n + a_{n−1} x^(n−1) + ... + a_1 x + a_0

as

    P(x) = (x − x_0) Q(x) + b_0,   where Q(x) = b_n x^(n−1) + b_{n−1} x^(n−2) + ... + b_2 x + b_1.

Taking b_n = a_n and b_k = a_k + b_{k+1} x_0, for k = n − 1, n − 2, ..., 1, 0, we see that
for an initial approximation, x_0, of the zero of our polynomial as our divisor, we have that

    P(x_0) = (x_0 − x_0) Q(x_0) + b_0 = b_0.

Using the product rule, we see that the derivative with respect to x of P(x) = (x − x_0) Q(x) + b_0 is
P′(x) = Q(x) + (x − x_0) Q′(x).

Therefore, applying synthetic division to Q(x) will give us P′(x_0) = Q(x_0).


With that information, we are able to apply Newton's method to find the first approximation to a zero of
P(x):

    x_1 = x_0 − P(x_0) / P′(x_0).

If we begin the entire process again, applying synthetic division to P(x) and Q(x) using the
approximation x_1 as our divisor,
we will find our next approximation, x_2, to the zero of P(x).

The procedure described above is called deflation, and unfortunately, inaccuracy results from the
computation of the zeros of the successive polynomials.
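
The sketch below (illustrative, not from the text) evaluates P(x0) and P'(x0) by synthetic division and feeds them to Newton's method; horner and horner_newton are hypothetical names, and the cubic used at the end is only a sample polynomial.

def horner(coeffs, x0):
    # Evaluate P(x0) and P'(x0) by synthetic division.
    # coeffs = [a_n, a_{n-1}, ..., a_1, a_0] in descending powers.
    b = coeffs[0]          # b_n = a_n
    q = coeffs[0]          # running value of Q(x0), which equals P'(x0)
    for a in coeffs[1:-1]:
        b = a + b * x0
        q = b + q * x0
    b = coeffs[-1] + b * x0
    return b, q            # P(x0), P'(x0)

def horner_newton(coeffs, x0, tol=1e-10, max_iter=50):
    # Newton's method with P and P' evaluated by Horner's method.
    for _ in range(max_iter):
        p, dp = horner(coeffs, x0)
        x1 = x0 - p / dp
        if abs(x1 - x0) < tol:
            return x1
        x0 = x1
    return x0

# Example: the real zero of P(x) = x^3 - 2x - 5 near x0 = 2 (about 2.0946).
print(horner_newton([1, 0, -2, -5], 2.0))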
Muller's method:

What do we do if the polynomial has complex zeros? Muller's method, which is an extension of the Secant
method, is useful in this case.
We want to consider the zeros of a quadratic polynomial

    P(x) = a (x − p_2)² + b (x − p_2) + c

that passes through the three points

    (p_0, f(p_0)), (p_1, f(p_1)) and (p_2, f(p_2)).

That is, we have the following system of equations that can be solved for a, b, and c:

    f(p_0) = a (p_0 − p_2)² + b (p_0 − p_2) + c
    f(p_1) = a (p_1 − p_2)² + b (p_1 − p_2) + c
    f(p_2) = a (p_2 − p_2)² + b (p_2 − p_2) + c = c

to obtain:

    a = [ (p_1 − p_2)(f(p_0) − f(p_2)) − (p_0 − p_2)(f(p_1) − f(p_2)) ] / [ (p_0 − p_2)(p_1 − p_2)(p_0 − p_1) ]

    b = [ (p_0 − p_2)²(f(p_1) − f(p_2)) − (p_1 − p_2)²(f(p_0) − f(p_2)) ] / [ (p_0 − p_2)(p_1 − p_2)(p_0 − p_1) ]

    c = f(p_2)

Substituting the formulas for a, b, and c into P(x) and using the quadratic formula to solve P(x) = 0
for p_3 yields:

    p_3 = p_2 − 2c / ( b + sgn(b) √(b² − 4ac) ),

where the sign is chosen to make the denominator largest in magnitude, which results in p_3 being
the zero of P closest to p_2. Table 2.12 of the text shows the values computed by the method for the
function below:

f := x -> x^4 - 3*x^3 + x^2 + x + 1;
                     x → x⁴ − 3x³ + x² + x + 1                     (6.1)

where p_0, p_1, p_2 are given and f(p_0), f(p_1), f(p_2) are computed as follows:

Given values of p_i and computations of f(p_i) for i = 0, 1, 2:

p0 := 0.5; f(p0);
                                   0.5
                                  1.4375                                  (6.1.1)
p1 := -0.5; f(p1);
                                  -0.5
                                  1.1875                                  (6.1.2)
p2 := 0; f(p2);
                                    0
                                    1                                     (6.1.3)

So that the student can better appreciate the steps within the algorithm, the computations for the first
row of the table as defined in the algorithm are shown below:

Computations for the first pass of the algorithm:

h1 := p1 - p0;
                                  -1.0                                    (6.2.1)
h2 := p2 - p1;
                                   0.5                                    (6.2.2)
del1 := (f(p1) - f(p0))/h1;
                              0.2500000000                                (6.2.3)
del2 := (f(p2) - f(p1))/h2;
                             -0.3750000000                                (6.2.4)
dd := (del2 - del1)/(h2 + h1);
                              1.250000000                                 (6.2.5)
b := del2 + h2*dd;
                              0.2500000000                                (6.2.6)
CapD := (b^2 - 4*f(p2)*dd)^(1/2);
                             2.222048604 I                                (6.2.7)
if abs(b - CapD) < abs(b + CapD) then E := b + CapD else E := b - CapD end if;
                     0.2500000000 - 2.222048604 I                         (6.2.8)
h := -2*f(p2)/E;
                  -0.1000000000 - 0.8888194418 I                          (6.2.9)

The computation below, then, is the first approximation, p_3, to the zero that appears in the first row of
the table.

p = p2 + h;
                 p = -0.1000000000 - 0.8888194418 I                       (6.2.10)

The values are then reset and the process repeats to obtain the next row of the table.
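
For completeness, here is a Python sketch of a single Muller step that mirrors the quantities computed above (illustrative, not from the text); complex square roots are handled with cmath, and muller_step is a hypothetical name.

import cmath

def muller_step(f, p0, p1, p2):
    # Fit a quadratic through (p0, f(p0)), (p1, f(p1)), (p2, f(p2)) and
    # return the root of that quadratic closest to p2.
    h1, h2 = p1 - p0, p2 - p1
    del1 = (f(p1) - f(p0)) / h1
    del2 = (f(p2) - f(p1)) / h2
    dd = (del2 - del1) / (h2 + h1)
    b = del2 + h2 * dd
    cap_d = cmath.sqrt(b*b - 4*f(p2)*dd)           # may be complex
    e = b + cap_d if abs(b - cap_d) < abs(b + cap_d) else b - cap_d
    return p2 - 2*f(p2) / e                        # p3

f = lambda x: x**4 - 3*x**3 + x**2 + x + 1
print(muller_step(f, 0.5, -0.5, 0.0))   # about -0.1 - 0.889i, matching (6.2.10)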
