False Position Method Explained

The document explains the False Position Method (Regula-Falsi Method) for finding real roots of equations. It details the iterative process of approximating roots using linear interpolation between two points where the function changes sign. Several examples illustrate the method's application to different equations, including step-by-step iterations and calculations to achieve desired accuracy.

False Position Method / Regula-Falsi Method

Fig. 01: Successive chord intercepts x1, x2, x3 approaching the root R from the left (endpoint b fixed).

Fig. 02: Successive chord intercepts x1, x2, x3 approaching the root from the right (endpoint a fixed).
Nahid Sultana Page 1 of 6


Suppose we want to find a real root of f(x) = 0.

Consider two points x = a and x = b, a < b, such that f(a) · f(b) < 0. Then there exists a real root of
f(x) = 0 in (a, b).

Draw the straight line joining (a, f(a)) and (b, f(b)), whose equation is given by

    (x − a) / (b − a) = (y − f(a)) / (f(b) − f(a))

⇒  x − a = [(y − f(a)) / (f(b) − f(a))] (b − a)          (1)

This line cuts the x-axis at (x1, 0), where x1 is the first approximation of the desired root. Setting y = 0, (1) takes the form

    x1 − a = [(0 − f(a)) / (f(b) − f(a))] (b − a)

⇒  x1 = (a f(a) − b f(a)) / (f(b) − f(a)) + a
       = (a f(a) − b f(a) + a f(b) − a f(a)) / (f(b) − f(a))
       = (a f(b) − b f(a)) / (f(b) − f(a))
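The intercept formula just derived is easy to check numerically. A minimal Python sketch (the function name, and the quadratic f with interval [1, 2], are illustrative assumptions, not from the document):

```python
def first_approximation(f, a, b):
    """x-intercept of the chord joining (a, f(a)) and (b, f(b)):
    x1 = (a*f(b) - b*f(a)) / (f(b) - f(a))."""
    return (a * f(b) - b * f(a)) / (f(b) - f(a))

# Illustrative check with f(x) = x^2 - 2, where f(1) = -1 and f(2) = 2:
x1 = first_approximation(lambda x: x * x - 2, 1.0, 2.0)
print(x1)  # (1*2 - 2*(-1)) / (2 - (-1)) = 4/3
```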

If f(x1) = 0, then x1 is the desired root.

Suppose f(x1) ≠ 0; then one of the following two cases arises.

Case-1: If f(x1) · f(b) < 0, then the root lies in (x1, b).

Draw the straight line joining (x1, f(x1)) and (b, f(b)), whose equation is given by

    (x − x1) / (b − x1) = (y − f(x1)) / (f(b) − f(x1))

⇒  x − x1 = [(y − f(x1)) / (f(b) − f(x1))] (b − x1)          (2)

This line cuts the x-axis at (x2, 0), where x2 is the second approximation of the desired root. Setting y = 0, (2) takes the form

    x2 − x1 = [(0 − f(x1)) / (f(b) − f(x1))] (b − x1)

⇒  x2 = (x1 f(x1) − b f(x1)) / (f(b) − f(x1)) + x1
       = (x1 f(x1) − b f(x1) + x1 f(b) − x1 f(x1)) / (f(b) − f(x1))
       = (x1 f(b) − b f(x1)) / (f(b) − f(x1))

Renaming x1 as the new a, this again takes the form x2 = (a f(b) − b f(a)) / (f(b) − f(a)).

Case-2: If f(a) · f(x1) < 0, then the root lies in (a, x1).

Draw the straight line joining (a, f(a)) and (x1, f(x1)), whose equation is given by

    (x − a) / (x1 − a) = (y − f(a)) / (f(x1) − f(a))

⇒  x − a = [(y − f(a)) / (f(x1) − f(a))] (x1 − a)          (3)

This line cuts the x-axis at (x2, 0), where x2 is the second approximation of the desired root. Setting y = 0, (3) takes the form

    x2 − a = [(0 − f(a)) / (f(x1) − f(a))] (x1 − a)

⇒  x2 = (a f(a) − x1 f(a)) / (f(x1) − f(a)) + a
       = (a f(a) − x1 f(a) + a f(x1) − a f(a)) / (f(x1) − f(a))
       = (a f(x1) − x1 f(a)) / (f(x1) − f(a))

Renaming x1 as the new b, this again takes the form x2 = (a f(b) − b f(a)) / (f(b) − f(a)).

Repeating this iterative process, we obtain the general approximation formula



    xk = (a f(b) − b f(a)) / (f(b) − f(a))

where, after each step, we set a = xk if f(xk) · f(b) < 0, or b = xk if f(a) · f(xk) < 0.

We continue this iterative process until two successive values of xk are approximately the same, i.e. xk−1 ≈ xk.

Find the real root of e^x − 4x^2 = 0 that lies in (0, 1), correct to four decimal places, by using the False Position Method.

Soln: Given, f(x) = e^x − 4x^2

f(0) = e^0 − 4 · 0^2 = 1

f(1) = e^1 − 4 · 1^2 = −1.2817

Here, f(0) · f(1) < 0, so there exists a real root of f(x) = 0 in (0, 1).

The iteration formula for the False Position Method is

    xk = (a f(b) − b f(a)) / (f(b) − f(a))

The iterations are tabulated below, with xk = (a f(b) − b f(a)) / (f(b) − f(a)):

Iteration    a        f(a)      b        f(b)      xk        f(xk)
    1        0         1        1        −1.2817   0.4383     0.7817
    2        0.4383    0.7817   1        −1.2817   0.6511     0.2220
    3        0.6511    0.2220   1        −1.2817   0.7026     0.0444
    4        0.7026    0.0444   1        −1.2817   0.7173     0.0478
    5        0.7173    0.0478   1        −1.2817   0.7274    −0.0467
    6        0.7173    0.0478   0.7274   −0.0467   0.7224    −0.0280
    7        0.7173    0.0478   0.7224   −0.0280   0.7205    −0.0210
    8        0.7173    0.0478   0.7205   −0.0210   0.7195    −0.0173
    9        0.7173    0.0478   0.7195   −0.0173   0.7118     0.0110
   10        0.7118    0.0110   0.7195   −0.0173   0.7147     0.0003

Since f(x10) = 0.0003 ≈ 0, the required root is 0.7147.
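The tabulated result can be re-checked in a few lines of Python; the loop below is my own paraphrase of the procedure, not the document's code:

```python
import math

# Re-run of the iteration for f(x) = e^x - 4x^2 on (0, 1).
f = lambda x: math.exp(x) - 4 * x * x
a, b = 0.0, 1.0            # f(0) = 1 > 0, f(1) = -1.2817 < 0
x_prev = a
for k in range(1, 50):
    x = (a * f(b) - b * f(a)) / (f(b) - f(a))
    if abs(x - x_prev) < 1e-4:
        break
    if f(x) * f(b) < 0:    # root now lies in (x, b)
        a = x
    else:                  # root now lies in (a, x)
        b = x
    x_prev = x
print(round(x, 4))         # close to the tabulated root 0.7147
```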



Find the real root of x^3 − 3x − 5 = 0 that lies in (2, 3), correct to four decimal places, by using the False Position Method.

Soln: Given, f(x) = x^3 − 3x − 5

f(2) = 2^3 − 3 · 2 − 5 = −3

f(3) = 3^3 − 3 · 3 − 5 = 13

Here, f(2) · f(3) < 0, so there exists a real root of f(x) = 0 in (2, 3).

The iteration formula for the False Position Method is

    xk = (a f(b) − b f(a)) / (f(b) − f(a))

The iterations are tabulated below, with xk = (a f(b) − b f(a)) / (f(b) − f(a)):

Iteration    a        f(a)      b    f(b)    xk        f(xk)
    1        2        −3        3    13      2.1875    −1.0949
    2        2.1875   −1.0949   3    13      2.2506    −0.3521
    3        2.2506   −0.3521   3    13      2.2704    −0.1079
    4        2.2704   −0.1079   3    13      2.2764    −0.0329
    5        2.2764   −0.0329   3    13      2.2782    −0.0103
    6        2.2782   −0.0103   3    13      2.2788    −0.0028
    7        2.2788   −0.0028   3    13      2.2790    −0.0002
    8        2.2790   −0.0002   3    13      2.2790    −0.0002

Since |x8 − x7| = 0, the required root is 2.2790.

Find the real root of 2x − log10 x − 7 = 0 that lies in (3, 4), correct to four decimal places, by using the False Position Method.

Soln: Given, f(x) = 2x − log10 x − 7

f(3) = 2 · 3 − log10 3 − 7 = −1.477121

f(4) = 2 · 4 − log10 4 − 7 = 0.39794

Here, f(3) · f(4) < 0, so there exists a real root of f(x) = 0 in (3, 4).



The iterations are tabulated below, with xk = (a f(b) − b f(a)) / (f(b) − f(a)):

Iteration    a        f(a)      b    f(b)     xk        f(xk)
    1        3        −1.4771   4    0.3979   3.7877    −0.0029
    2        3.7877   −0.0029   4    0.3979   3.7892    −0.0001
    3        3.7892   −0.0001   4    0.3979   3.7892    −0.0001

Since |x3 − x2| = 0, the required root is 3.7892.

Find the real root of x^2 + 4 sin x = 0 that lies in (−2.5, 1.5), correct to four decimal places, by using the False Position Method.

Soln: Given, f(x) = x^2 + 4 sin x

f(−2.5) = (−2.5)^2 + 4 sin(−2.5) = 6.0755

f(1.5) = (1.5)^2 + 4 sin(1.5) = 2.3547

(The numerical values above correspond to evaluating the sine in degrees.)

Here, f(−2.5) · f(1.5) > 0, so there is no guaranteed sign change, and the False Position Method cannot be applied to locate a root of f(x) = 0 in (−2.5, 1.5).

Algorithm for the False-Position Method:

Step-1: Define f(x)
Step-2: Read a, b
Step-3: k = 1
Step-4: xk = (a f(b) − b f(a)) / (f(b) − f(a))
Step-5: Print k, xk, f(xk)
Step-6: If |xk − xk−1| < 0.0001 then go to Step-9
        else if f(xk) · f(b) < 0 then a = xk
        else if f(a) · f(xk) < 0 then b = xk
        end if
Step-7: k = k + 1
Step-8: Go to Step-4
Step-9: Print 'The required root =', xk
Step-10: Stop
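The steps above translate almost line-for-line into Python. This sketch maps each step in a comment and, for illustration, uses the 2x − log10 x − 7 example from these notes as the test function (variable names are mine):

```python
import math

def f(x):                                        # Step-1: define f(x)
    return 2 * x - math.log10(x) - 7

a, b = 3.0, 4.0                                  # Step-2: read a, b
k = 1                                            # Step-3
x_old = a
while True:
    x = (a * f(b) - b * f(a)) / (f(b) - f(a))    # Step-4
    print(k, round(x, 4), round(f(x), 4))        # Step-5
    if abs(x - x_old) < 0.0001:                  # Step-6: converged
        break
    if f(x) * f(b) < 0:
        a = x
    else:
        b = x
    x_old = x
    k += 1                                       # Step-7; Step-8: loop
print("The required root =", round(x, 4))        # Step-9; Step-10: stop
```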

