Article
Iterative Methods with Memory for Solving Systems of
Nonlinear Equations Using a Second Order
Approximation
Alicia Cordero 1,†, Javier G. Maimó 2,*, Juan R. Torregrosa 1,† and María P. Vassileva 2,†
1 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, 46022 Valencia, Spain;
[email protected] (A.C.); [email protected] (J.R.T.)
2 Instituto Tecnológico de Santo Domingo (INTEC), Santo Domingo 10602, Dominican Republic;
[email protected]
* Correspondence: [email protected]
† These authors contributed equally to this work.
Received: 11 October 2019; Accepted: 31 October 2019; Published: 7 November 2019
Abstract: Iterative methods for solving nonlinear equations are said to have memory when the calculation
of the next iterate requires the use of more than one previous iterate. Methods with memory usually
show very stable behavior, in the sense that the set of initial estimations leading to convergence is wide.
With the right choice of parameters, iterative methods without memory can increase their order of
convergence significantly, becoming schemes with memory. In this work, starting from a simple method
without memory, we increase its order of convergence without adding new functional evaluations by
approximating the accelerating parameter with Newton interpolation polynomials of degree one and
two. Using this technique in the multidimensional case, we extend the proposed method to systems of
nonlinear equations. Numerical tests are presented to verify the theoretical results and a study of the
dynamics of the method is applied to different problems to show its stability.
Keywords: iterative methods; secant method; methods with memory; multidimensional Newton
polynomial interpolation; basin of attraction
1. Introduction
This paper deals with iterative methods for approximating the solutions of a nonlinear system of n
equations and n unknowns, F(x) = 0, where F : D ⊆ R^n → R^n is a nonlinear vectorial function defined
on a convex set D. The main aim of this work is to design iterative methods with memory for approximating
the solutions ξ of F(x) = 0. An iterative method is said to have memory when the fixed point function G
depends on more than one previous iterate, that is, the iterative expression is x_{k+1} = G(x_k, x_{k−1}, ...).
A classical iterative scheme with memory for solving scalar equations f(x) = 0 (n = 1) is the well-known
secant method, whose iterative expression is

$$x_{k+1} = x_k - \frac{f(x_k)(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}, \quad k = 1, 2, \ldots, \qquad (1)$$

with x_0 and x_1 as initial estimations. This method can be obtained from Newton's scheme by replacing the
derivative by the first-order divided difference f[x_k, x_{k−1}]. For n > 1, the secant method has the expression

$$x^{(k+1)} = x^{(k)} - [x^{(k)}, x^{(k-1)}; F]^{-1} F(x^{(k)}), \quad k = 1, 2, \ldots, \qquad (2)$$

where [·, ·; F] denotes the divided difference operator of first order.
Let (IM) be an iterative method with memory that generates a sequence {x_k} of approximations to
the root ξ, and let us also assume that this sequence converges to ξ. If there exist a nonzero constant η
and nonnegative numbers t_i, 0 ≤ i ≤ m, such that the inequality

$$|e_{k+1}| \leq \eta \prod_{i=0}^{m} |e_{k-i}|^{t_i} \qquad (5)$$

holds, then the R-order of convergence of (IM) satisfies

$$O_R((IM), \xi) \geq s^*, \qquad (6)$$

where s^* is the unique positive root of the equation

$$s^{m+1} - \sum_{i=0}^{m} t_i s^{m-i} = 0. \qquad (7)$$

In this case, the error satisfies

$$e_{k+1} \sim D_{k,r}\, e_k^r, \qquad (8)$$

where D_{k,r} tends to the asymptotic error constant of the iterative method when k → ∞. To avoid
higher-order terms in the Taylor series that do not influence the convergence order, we use the notation
introduced by Traub in [2]. If {f_k} and {g_k} are null sequences (that is, sequences convergent to zero) and

$$\frac{f_k}{g_k} \to C, \qquad (9)$$

where C is a nonzero constant, then we write

$$f_k = O(g_k) \quad \text{or} \quad f_k \sim C g_k. \qquad (10)$$
Let {x^{(k)}}_{k≥0} be a sequence of vectors generated by an iterative method converging to a zero ξ of F,
with R-order greater than or equal to r. Then, according to [4], we can write

$$e_{k+1} \sim D^{(k,r)} e_k^r, \qquad (11)$$

where {D^{(k,r)}} is a sequence that tends to the asymptotic error constant D_r of the iterative method when
k → ∞, so e_{k+1} ∼ e_k^r.
Accelerating Parameters
We illustrate the technique of the accelerating parameters, introduced by Traub in [2], starting from a
very simple method with a real parameter α,

$$x_{k+1} = x_k - \alpha f(x_k), \quad k = 1, 2, \ldots \qquad (12)$$

This scheme has linear convergence, with error equation

$$e_{k+1} \sim (1 - \alpha f'(\xi))\, e_k, \qquad (13)$$

where e_k = x_k − ξ, k = 0, 1, ...
As it is easy to observe, the order of convergence can increase up to 2 if α = 1/f'(ξ). Since ξ is unknown,
we estimate f'(ξ) by approximating the nonlinear function with a Newton interpolation polynomial of
degree 1 at the points (x_k, f(x_k)) and (x_{k−1}, f(x_{k−1})),

$$N_1(t) = f(x_{k-1}) + f[x_k, x_{k-1}](t - x_{k-1}) = f(x_{k-1}) + \frac{f(x_k) - f(x_{k-1})}{x_k - x_{k-1}}\,(t - x_{k-1}). \qquad (14)$$
To construct the polynomial N_1(t), it is necessary to evaluate the nonlinear function f at two points,
so two initial estimates, x_0 and x_1, are required. The derivative of the nonlinear function is approximated
by the derivative of the interpolating polynomial, that is, α = 1/f'(ξ) ≈ 1/N_1'(x_k), and the resulting scheme is

$$\alpha_k = \frac{x_k - x_{k-1}}{f(x_k) - f(x_{k-1})}, \qquad x_{k+1} = x_k - \alpha_k f(x_k). \qquad (15)$$

Replacing α_k in the iterative expression, that is,

$$x_{k+1} = x_k - \frac{f(x_k)(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}, \qquad (16)$$
we recover the secant method [7], with order of convergence $p = \frac{1+\sqrt{5}}{2} \approx 1.6180$.
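To make the construction above concrete, the following minimal Python sketch implements scheme (15); Python is our choice here (the experiments reported below use MATLAB), and the test function and starting points are illustrative.

```python
import math

# Sketch of scheme (15): the secant method recovered by approximating
# alpha_k = 1/N_1'(x_k) with a degree-1 Newton interpolation polynomial.
def secant(f, x0, x1, tol=1e-12, max_iter=100):
    fx0, fx1 = f(x0), f(x1)
    for _ in range(max_iter):
        alpha = (x1 - x0) / (fx1 - fx0)   # alpha_k = 1/f[x_k, x_{k-1}]
        x2 = x1 - alpha * fx1             # x_{k+1} = x_k - alpha_k f(x_k)
        if abs(x2 - x1) < tol:
            return x2
        x0, fx0 = x1, fx1
        x1, fx1 = x2, f(x2)
    return x1

# Illustrative run on f_1(x) = sin(x) - x^2 + 1, used in the tests below.
print(secant(lambda x: math.sin(x) - x**2 + 1, 1.0, 1.1))  # ~1.4096
```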
If we instead approximate f with a Newton interpolation polynomial of degree 2, at the points x_k, x_{k−1} and x_{k−2}, we have

$$N_2(t) = f(x_{k-1}) + f[x_k, x_{k-1}](t - x_{k-1}) + f[x_k, x_{k-1}, x_{k-2}](t - x_k)(t - x_{k-1}), \qquad (17)$$
being

$$f[x_k, x_{k-1}, x_{k-2}] = \frac{f[x_k, x_{k-1}] - f[x_{k-1}, x_{k-2}]}{x_k - x_{k-2}}. \qquad (18)$$
So,

$$N_2'(t) = f[x_k, x_{k-1}] + f[x_k, x_{k-1}, x_{k-2}](2t - x_k - x_{k-1}),$$

and replacing $N_2'(x_k)$ in $\alpha_k = 1/N_2'(x_k)$, an iterative method with memory is obtained:
$$\alpha_k = \frac{1}{f[x_k, x_{k-1}] + \dfrac{x_k(f(x_{k-1}) - f(x_{k-2})) + x_{k-1}(f(x_{k-2}) - f(x_k)) + x_{k-2}(f(x_k) - f(x_{k-1}))}{(x_k - x_{k-2})(x_{k-2} - x_{k-1})}}, \qquad x_{k+1} = x_k - \alpha_k f(x_k). \qquad (19)$$
Now, three points, x_k, x_{k−1} and x_{k−2}, are necessary to calculate the value of the parameter α_k,
so three initial estimations x_0, x_1 and x_2 are required.
In the following result, we present the order of convergence of the iterative scheme with memory (19).
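Before stating the convergence result, we sketch method (19) in Python; the code is a minimal illustration (no safeguards against coincident points), and the starting values are our own choices.

```python
import math

# Sketch of the modified secant method (19): alpha_k = 1/N_2'(x_k),
# built from divided differences of the last three iterates. Note that
# only one new functional evaluation is performed per iteration.
def modified_secant(f, x0, x1, x2, tol=1e-12, max_iter=100):
    f0, f1, f2 = f(x0), f(x1), f(x2)
    for _ in range(max_iter):
        dd1 = (f2 - f1) / (x2 - x1)            # f[x_k, x_{k-1}]
        dd0 = (f1 - f0) / (x1 - x0)            # f[x_{k-1}, x_{k-2}]
        dd2 = (dd1 - dd0) / (x2 - x0)          # f[x_k, x_{k-1}, x_{k-2}]
        alpha = 1.0 / (dd1 + dd2 * (x2 - x1))  # alpha_k = 1/N_2'(x_k)
        x3 = x2 - alpha * f2
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
        f0, f1, f2 = f1, f2, f(x3)             # one new evaluation
    return x2

# Illustrative run on f_3(x) = arctan(x), whose zero is 0.
print(modified_secant(math.atan, 1.4, 1.3, 1.2))
```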
Theorem 1 (Order of convergence of the modified secant method). Let ξ be a simple zero of a sufficiently
differentiable function f : D ⊆ R → R in an open interval D, and let x_0, x_1 and x_2 be initial guesses sufficiently
close to ξ. Then, the order of convergence of method (19) is at least 1.8393, with error equation
e_{k+1} ∼ c_3 e_{k−1} e_{k−2} e_k, where c_3 = f'''(ξ)/(3! f'(ξ)).
Proof. Substituting the approximation α_k = 1/N_2'(x_k) in the error equation (13), one obtains

$$(1 - \alpha_k f'(\xi)) \sim c_3 e_{k-1} e_{k-2},$$

so that

$$e_{k+1} \sim c_3 e_{k-1} e_{k-2} e_k. \qquad (20)$$

If p denotes the order of convergence of method (19), then e_k ∼ e_{k−1}^p ∼ e_{k−2}^{p^2}, and therefore

$$e_{k+1} \sim e_{k-2}^{p^2}\, e_{k-2}^{p}\, e_{k-2} = e_{k-2}^{p^2 + p + 1}. \qquad (21)$$

But, if p is the order of (19), then e_{k+1} ∼ e_{k−2}^{p^3}. Therefore, p must satisfy p^3 = p^2 + p + 1.
The unique positive root of this cubic polynomial is p ≈ 1.8393 and, by applying the result of Ortega and
Rheinboldt, this is the order of convergence of scheme (19).
The efficiency index, defined by Ostrowski in [8], depends on the number of functional evaluations per
iteration, d, and on the order of convergence p, in the way

$$I = p^{1/d}, \qquad (22)$$

so, as the modified secant method uses only one new functional evaluation per iteration, its efficiency
index is I = 1.8393^{1/1} = 1.8393.
Nonlinear Systems
Method (12) can be extended for approximating the roots of a nonlinear system F(x) = 0, where
F : D ⊆ R^n → R^n is a vectorial function defined on a convex set D:

$$x^{(k+1)} = x^{(k)} - \alpha F(x^{(k)}), \quad k = 0, 1, \ldots \qquad (23)$$

It is easy to prove that this method has linear convergence, with error equation

$$e_{k+1} \sim (I - \alpha F'(\xi))\, e_k, \qquad (24)$$

being e_k = x^{(k)} − ξ, k = 0, 1, ...

In the multidimensional case, we can approximate the parameter α = [F'(ξ)]^{−1} with a multivariate
Newton polynomial interpolation. If we use a polynomial of first degree,

$$N_1(t) = F(x^{(k-1)}) + [x^{(k)}, x^{(k-1)}; F](t - x^{(k-1)}), \qquad (25)$$

then N_1'(x^{(k)}) = [x^{(k)}, x^{(k−1)}; F] and α is approximated by α_k = [x^{(k)}, x^{(k−1)}; F]^{−1},
so the resulting iterative method is the multidimensional secant scheme

$$x^{(k+1)} = x^{(k)} - [x^{(k)}, x^{(k-1)}; F]^{-1} F(x^{(k)}), \quad k = 1, 2, \ldots \qquad (26)$$

If we use a polynomial of second degree,
$$N_2(t) = F(x^{(k-1)}) + [x^{(k)}, x^{(k-1)}; F](t - x^{(k-1)}) + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F](t - x^{(k)})(t - x^{(k-1)}), \qquad (27)$$

where the second-order divided difference is an operator

$$[\cdot, \cdot, \cdot; F] : R^n \times R^n \times R^n \to B(R^n \times R^n, R^n). \qquad (28)$$

Differentiating,

$$N_2'(t) = [x^{(k)}, x^{(k-1)}; F] + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F]\left((t - x^{(k)}) + (t - x^{(k-1)})\right), \qquad (29)$$

so that

$$N_2'(x^{(k)}) = [x^{(k)}, x^{(k-1)}; F] + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F](x^{(k)} - x^{(k-1)}). \qquad (30)$$

Approximating α by α_k = [N_2'(x^{(k)})]^{−1}, we obtain the modified secant method for systems:

$$x^{(k+1)} = x^{(k)} - \left[[x^{(k)}, x^{(k-1)}; F] + [x^{(k)}, x^{(k-1)}, x^{(k-2)}; F](x^{(k)} - x^{(k-1)})\right]^{-1} F(x^{(k)}), \quad k = 2, 3, \ldots \qquad (31)$$
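A Python sketch of method (31) is given below. We use the standard componentwise definition of the first-order divided difference matrix [x, y; F], and we replace the second-order term [x^{(k)}, x^{(k−1)}, x^{(k−2)}; F](x^{(k)} − x^{(k−1)}) by [x^{(k)}, x^{(k−2)}; F] − [x^{(k−1)}, x^{(k−2)}; F], an identity that is exact for scalar divided differences and avoids forming the bilinear operator; the test system and starting points are illustrative.

```python
import numpy as np

def dd(F, x, y):
    """First-order divided difference matrix [x, y; F]: column j compares
    F evaluated with the first j+1 components taken from x against F with
    the first j components taken from x (the rest from y), so that
    [x, y; F](x - y) = F(x) - F(y)."""
    n = len(x)
    M = np.empty((n, n))
    for j in range(n):
        zx = np.concatenate((x[:j + 1], y[j + 1:]))
        zy = np.concatenate((x[:j], y[j:]))
        M[:, j] = (F(zx) - F(zy)) / (x[j] - y[j])
    return M

def modified_secant_system(F, x0, x1, x2, tol=1e-12, max_iter=100):
    """Sketch of method (31) with the second-order term replaced by
    dd(F, x2, x0) - dd(F, x1, x0), an identity exact in the scalar case."""
    for _ in range(max_iter):
        A = dd(F, x2, x1) + dd(F, x2, x0) - dd(F, x1, x0)  # ~ N_2'(x^(k))
        x3 = x2 - np.linalg.solve(A, F(x2))
        if np.linalg.norm(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# Illustrative run on F_4(x1, x2) = (x1^2 - 1, x2^2 - 1), used below.
F4 = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 1.0])
print(modified_secant_system(F4, np.array([0.5, 0.7]),
                             np.array([0.6, 0.8]),
                             np.array([0.7, 0.9])))  # ~ (1, 1)
```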
The divided difference operator can be expressed in integral form by using the Genocchi–Hermite
formula for the divided difference of first order [9],

$$[x, x+h; F] = \int_0^1 F'(x + ht)\, dt. \qquad (32)$$

As

$$F'(x + ht) = F'(x) + F''(x)\,ht + \frac{1}{2} F'''(x)(ht)^2 + \cdots, \qquad (33)$$

then

$$[x, x+h; F] = F'(x) + \frac{h}{2} F''(x) + \frac{h^2}{6} F'''(x) + \cdots, \qquad (34)$$
and, in general, the divided difference of order k can be calculated by [10]

$$[x_0, x_1, \ldots, x_k; F] = \int_0^1 \cdots \int_0^1 t_1^{k-1} t_2^{k-2} \cdots t_{k-1}\, F^{(k)}(\mu)\, dt_1\, dt_2 \cdots dt_k, \qquad (35)$$

where

$$\mu = x_0 + t_1(x_1 - x_0) + t_1 t_2 (x_2 - x_1) + \cdots + t_1 t_2 \cdots t_k (x_k - x_{k-1}). \qquad (36)$$
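The first-order formula (32) is easy to verify numerically; the following scalar sketch compares the divided difference of f = sin with a midpoint-rule approximation of the integral (the function and the values of x and h are illustrative).

```python
import math

f, df = math.sin, math.cos      # F and F' for a scalar example
x, h = 0.3, 0.2

dd = (f(x + h) - f(x)) / h      # divided difference [x, x+h; f]

# Composite midpoint rule for the Genocchi-Hermite integral (32).
m = 1000
integral = sum(df(x + h * (i + 0.5) / m) for i in range(m)) / m

print(dd, integral)             # both ~0.91953
```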
Theorem 2 (Order of convergence of the modified secant method in the multidimensional case). Let ξ be a zero of
a sufficiently differentiable function F : D ⊆ R^n → R^n in a convex set D, such that F'(ξ) is nonsingular, and let
x^{(0)}, x^{(1)} and x^{(2)} be initial guesses sufficiently close to ξ. Then, the order of convergence of method (31) is at
least 1.8393, with error equation

$$e_{k+1} \sim c_3 e_{k-1} e_{k-2} e_k.$$
Proof. For the second-order divided difference, we write (35) for k = 2 in the following way:

$$[x, x+h, x+k; F] = \int_0^1 \int_0^1 t_1\, F''(x + ht_1 + kt_1 t_2)\, dt_1\, dt_2 = \frac{1}{2} F''(x) + \frac{1}{3} F'''(x)\left(h + \frac{k}{2}\right) + \frac{1}{8} F^{(iv)}(x)\left(h^2 + hk + \frac{k^2}{3}\right) + \cdots, \qquad (37)$$

where h = x_k − x_{k−1} and k = x_{k−2} − x_{k−1}. We can write h and k in terms of the errors as h = e_k − e_{k−1} and
k = e_{k−2} − e_{k−1}. Substituting the approximation of α, α_k, in the error equation (24), we obtain

$$e_{k+1} \sim c_3 e_{k-1} e_{k-2} e_k. \qquad (38)$$
Following the same steps as in the unidimensional case, the order of convergence p of method (31) must
satisfy the equation p^3 = p^2 + p + 1, whose unique positive root is p ≈ 1.8393.
The rational operators obtained when our method (31) is applied to the mentioned polynomials are:

$$Op_1(x_{k-2}, x_{k-1}, x_k) = -\frac{x_k\left(5x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2\right)}{-6x_k^2 + x_{k-1}^2 + x_{k-1}x_{k-2} + x_{k-2}^2 + 1},$$

$$Op_2(x_{k-2}, x_{k-1}, x_k) = -\frac{x_k\left(5x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2\right)}{-6x_k^2 + x_{k-1}^2 + x_{k-1}x_{k-2} + x_{k-2}^2 - 1},$$

$$Op_3(x_{k-2}, x_{k-1}, x_k) = -\frac{x_k\left(5x_k^2 - x_{k-1}^2 - x_{k-1}x_{k-2} - x_{k-2}^2\right)}{-6x_k^2 + x_{k-1}^2 + x_{k-1}x_{k-2} + x_{k-2}^2},$$

$$Op_4(x_{k-2}, x_{k-1}, x_k) = -\frac{5x_k^3 - x_k\left(x_{k-1}^2 + x_{k-1}x_{k-2} + x_{k-2}^2\right) - 1}{-\gamma - 6x_k^2 + x_{k-1}^2 + x_{k-1}x_{k-2} + x_{k-2}^2}.$$
In order to analyze the fixed points, we introduce the auxiliary operators G_i, i = 1, 2, 3, 4, defined as
follows:

$$G_i(x_{k-2}, x_{k-1}, x_k) = (x_{k-1},\, x_k,\, Op_i(x_{k-2}, x_{k-1}, x_k)), \quad i = 1, 2, 3, 4.$$

So, a point (x_{k−2}, x_{k−1}, x_k) is a fixed point of G_i if x_{k−2} = x_{k−1}, x_{k−1} = x_k and
x_k = Op_i(x_{k−2}, x_{k−1}, x_k). We can prove that there are no fixed points different from the roots of the
polynomials, so the method is very stable. On the other hand, critical points are also interesting because a
classical result of Fatou and Julia states that each basin of attraction of an attracting fixed point contains at
least one critical point ([11,12]), so it is important to determine to which basin of attraction each critical point
belongs. The critical points are determined by calculating the determinant of the Jacobian matrix G_i',
i = 1, 2, 3, 4 ([13]). For the polynomials p_1 to p_4, the critical points are the roots of the polynomials and the
points (x_{k−2}, x_{k−1}, x_k) such that x_{k−1} = −2x_{k−2}.
The bifurcation diagram ([13]) is a dynamical tool that shows the behavior of the sequence of iterates
depending on a parameter; in this case, we use the parameter γ from p_4(x). Starting from an initial estimation
close to zero, we use a mesh of 500 subintervals for γ on the x axis and we draw the last 100 iterates. In
Figure 1a, we plot the real roots of p_4 and, in Figure 1b, we draw the bifurcation diagram. We can see how
the bifurcation diagram always matches the solutions.
Figure 1. Comparison between the real roots of p_4 (a) and the bifurcation diagram of the iterative method (b), both as functions of γ.
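The figure can be reproduced along the following lines. Since the polynomial p_4 is not written out in this excerpt, the sketch below uses a hypothetical one-parameter family f_γ(x) = x³ − γx − 1 purely as a stand-in; the mesh of 500 subintervals, the initial estimations close to zero and the plotting of the last 100 iterates follow the description above.

```python
import numpy as np
import matplotlib.pyplot as plt

def ms_step(f, x0, x1, x2):
    """One step of the modified secant method (19)."""
    f0, f1, f2 = f(x0), f(x1), f(x2)
    dd1 = (f2 - f1) / (x2 - x1)
    dd0 = (f1 - f0) / (x1 - x0)
    dd2 = (dd1 - dd0) / (x2 - x0)
    return x2 - f2 / (dd1 + dd2 * (x2 - x1))

gs, xs = [], []
for g in np.linspace(-5, 5, 500):            # 500 subintervals for gamma
    f = lambda x, g=g: x**3 - g * x - 1      # hypothetical stand-in for p_4
    x0, x1, x2 = 0.01, 0.02, 0.03            # close to zero, as in the text
    orbit = []
    try:
        for _ in range(200):
            x0, x1, x2 = x1, x2, ms_step(f, x0, x1, x2)
            orbit.append(x2)
    except (ZeroDivisionError, OverflowError):  # stagnation or escape
        pass
    for x in orbit[-100:]:                   # keep the last 100 iterates
        if abs(x) < 10:
            gs.append(g)
            xs.append(x)
plt.plot(gs, xs, 'k,')
plt.xlabel('gamma')
plt.ylabel('x')
plt.show()
```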
Now, we compare Newton's method with the secant and modified secant schemes on the following
test functions, examining different numerical characteristics as well as the dynamical planes associated
with them.

(1) f_1(x) = sin(x) − x² + 1,
(2) f_2(x) = (x − 1)(x³ + x¹⁰ + 1) sin(x),
(3) f_3(x) = arctan(x),
(4) F_4(x_1, x_2) = (x_1² − 1, x_2² − 1),
(5) F_5(x_1, x_2) = (x_1² − x_1 − x_2² − 1, x_2 − sin(x_1)),
(6) F_6(x_1, x_2, x_3) = (x_1x_2 − 1, x_2x_3 − 1, x_1x_3 − 1).
Variable precision arithmetic has been used with 100 digits of mantissa, with the stopping criterion
|x_{k+1} − x_k| < 10^{−25} or |f(x_{k+1})| < 10^{−25} and a maximum of 100 iterations; that is, the iterative
method stops as soon as either of these conditions is met. These tests have been executed on a computer with
16 GB of RAM using MATLAB R2014a. As a numerical approximation of the order of convergence of the
method, we use the approximated computational order of convergence (ACOC), defined as [14]

$$ACOC \approx \frac{\ln\left(|x_{k+1} - x_k| \,/\, |x_k - x_{k-1}|\right)}{\ln\left(|x_k - x_{k-1}| \,/\, |x_{k-1} - x_{k-2}|\right)}.$$
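A minimal sketch of the ACOC computation follows (the driver producing the list of iterates is not shown; for systems, replace the absolute value by a norm, as remarked below):

```python
import math

def acoc(xs):
    """ACOC estimate from the last four iterates in the list xs."""
    x0, x1, x2, x3 = xs[-4], xs[-3], xs[-2], xs[-1]
    return (math.log(abs(x3 - x2) / abs(x2 - x1))
            / math.log(abs(x2 - x1) / abs(x1 - x0)))
```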
Let us remark that we use the same stopping criterion and the same calculation of the ACOC in the
multidimensional case, only replacing the absolute value by a norm. Since we want to compare the methods
with memory against Newton's method, which needs a single initial point, we generate the additional initial
points needed by the methods with memory from an initial estimation of α. We take α_1 = 0.01 in the
unidimensional case. In the multivariate case, we use the initial value α_1^{−1} = 5I for the secant method,
and two different values for the first two iterations, α_1^{−1} = 5I and α_2^{−1} = 3I, for the modified secant
method, where I denotes the identity matrix of size n × n. We have observed that taking two different
approximations of α leads to a more stable behavior of the modified secant method.
On the other hand, for each test function we determine the associated dynamical plane. This tool of
the dynamical analysis for methods with memory was introduced in [15]. The dynamical plane is a visual
representation of the basins of attraction of a specific problem. It is constructed by defining a mesh of
points, each of which is taken as an initial estimation for the iterative method. The complex plane is
represented by showing the real part of the initial estimate on the x axis and the imaginary part on the
y axis ([16]). In a similar way as before, we use an initial estimation of α to calculate the necessary initial
points. This approach makes it possible to draw the performance of iterative schemes with memory in the
complex plane, thus allowing us to compare the performance of methods with and without memory. To draw
the dynamical planes, we have used a mesh of 400 × 400 initial estimations, a maximum of 40 iterations and
a tolerance of 10^{−3}; we used α = 0.01 to calculate the initial estimations. Each point used as an initial
estimate is painted in a certain color, depending on the root to which the method converges; if it does not
converge to any root, it is painted black.
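The following Python sketch draws such a dynamical plane for the modified secant method on f_1(z) = sin(z) − z² + 1, with the 400 × 400 mesh, 40 iterations, tolerance 10⁻³ and seed α = 0.01 described above; only the two real zeros of f_1 are tracked, so any orbit approaching another attractor is painted black as well. The plotted region is an illustrative choice.

```python
import numpy as np
import matplotlib.pyplot as plt

np.seterr(all='ignore')                      # nan/inf mark non-convergence

f = lambda z: np.sin(z) - z * z + 1
roots = np.array([1.40962, -0.63673])        # approximate real zeros of f_1

def ms_step(z0, z1, z2):
    """One step of the modified secant method (19) for complex z."""
    dd1 = (f(z2) - f(z1)) / (z2 - z1)
    dd0 = (f(z1) - f(z0)) / (z1 - z0)
    dd2 = (dd1 - dd0) / (z2 - z0)
    return z2 - f(z2) / (dd1 + dd2 * (z2 - z1))

n = 400                                      # mesh size used in the paper
re, im = np.meshgrid(np.linspace(-2, 2, n), np.linspace(-2, 2, n))
img = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        z0 = re[i, j] + 1j * im[i, j]        # mesh point as initial estimate
        z1 = z0 - 0.01 * f(z0)               # extra seeds from alpha = 0.01
        z2 = z1 - 0.01 * f(z1)
        for _ in range(40):
            z0, z1, z2 = z1, z2, ms_step(z0, z1, z2)
            d = np.abs(z2 - roots)
            if d.min() < 1e-3:
                img[i, j] = 1 + d.argmin()   # color indexes the root reached
                break                        # img stays 0 (black) otherwise
plt.imshow(img, extent=(-2, 2, -2, 2), origin='lower')
plt.xlabel('Re(x)')
plt.ylabel('Im(x)')
plt.show()
```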
In Table 1, we show the results obtained by Newton's method and the secant and modified secant
schemes for the scalar functions f_1(x), f_2(x) and f_3(x). These tests confirm the theoretical results, with a
very stable ACOC when the method is convergent.
Table 1. Numerical results of Newton's, secant and modified secant (SecantM) methods on f_1, f_2 and f_3 (n.c.: not convergent).

f_1(x) = sin(x) − x² + 1,  x_0 = 1,  ξ ≈ 1.4096

Method    ACOC   Iter   |x_{k+1} − x_k|   |f(x_{k+1})|
Newton    2.00   6      1.6 × 10^{-17}    3.5 × 10^{-34}
Secant    1.62   9      2.4 × 10^{-18}    5.9 × 10^{-29}
SecantM   1.84   8      1.5 × 10^{-16}    5.3 × 10^{-30}

f_2(x) = (x − 1)(x³ + x¹⁰ + 1) sin(x),  x_0 = 0.75,  ξ = 1

Method    ACOC   Iter   |x_{k+1} − x_k|   |f(x_{k+1})|
Newton    2.00   12     2.7 × 10^{-22}    8.9 × 10^{-43}
Secant    n.c.   –      –                 –
SecantM   1.82   12     8.2 × 10^{-17}    2.0 × 10^{-29}

f_3(x) = arctan(x),  x_0 = 1.4,  ξ = 0

Method    ACOC   Iter   |x_{k+1} − x_k|   |f(x_{k+1})|
Newton    n.c.   –      –                 –
Secant    1.06   7      7.8 × 10^{-16}    5.9 × 10^{-34}
SecantM   1.82   11     7.6 × 10^{-21}    6.7 × 10^{-38}
On the other hand, in Figure 2 we see that the three methods behave well on function f_1(x), with no
significant differences between their basins of attraction. In Figure 3, we see that, in the case of function f_2(x),
the dynamical plane of the modified secant method is better than that of the secant method and very similar
to Newton's. The secant method presents black regions that, in this case, correspond to slow convergence.
In Figure 4, we see the basins of attraction of the methods on f_3(x); the wider basins of attraction correspond
to the modified secant method. In this case, the black regions are points where the methods diverge.
Figure 2. Dynamical planes of Newton's, secant and modified secant methods on f_1(x) (axes: Re(x), Im(x)).

Figure 3. Dynamical planes of Newton's, secant and modified secant methods on f_2(x) (axes: Re(x), Im(x)).

Figure 4. Dynamical planes of Newton's, secant and modified secant methods on f_3(x) (axes: Re(x), Im(x)).
4. Conclusions
The technique of introducing accelerating parameters allows us to generate new methods with a higher
order of convergence than the original one, with the same number of functional evaluations. The increase is
more significant if we start from a method with a high order of convergence. As far as we know, this is the
first time that an iterative method with memory for solving nonlinear systems has been designed by using
Newton polynomial interpolation of second degree in several variables. In addition to obtaining the expression
of what we called the modified secant method, a dynamical study was carried out, adapting tools from the
dynamics of methods without memory to methods with memory. Although numerically computing the
divided differences is expensive, methods with memory show a very stable dynamical behavior, even more
stable than that of other known methods without memory.
Author Contributions: Writing—original draft preparation, J.G.M.; writing—review and editing, A.C. and J.R.T.;
validation, M.P.V.
Funding: This research was supported by PGC2018-095896-B-C22 (MCIU/AEI/FEDER, UE), Generalitat Valenciana
PROMETEO/2016/089, and FONDOCYT 2016–2017-212 República Dominicana.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Ortega, J.M.; Rheinboldt, W.C. Iterative Solutions of Nonlinear Equations in Several Variables; Academic Press:
New York, NY, USA, 1970.
2. Traub, J.F. Iterative Methods for the Solution of Equations; Chelsea Publishing Company: New York, NY, USA, 1982.
3. Soleymani, F.; Lotfi, T.; Tavakoli, E.; Khaksar Haghani, F. Several iterative methods with memory using
self-accelerators. Appl. Math. Comput. 2015, 254, 452–458. [CrossRef]
4. Petković, M.S.; Neta, B.; Petković, L.D.; Džunić, J. Multipoint Methods for Solving Nonlinear Equations; Elsevier
Academic Press: New York, NY, USA, 2013.
5. Petković, M.S.; Sharma, J.R. On some efficient derivative-free iterative methods with memory for solving systems
of nonlinear equations. Numer. Algorithms 2016, 71, 457–474. [CrossRef]
6. Narang, M.; Bhatia, S.; Alshomrani, A.S.; Kanwar, V. General efficient class of Steffensen type methods with
memory for solving systems of nonlinear equations. Comput. Appl. Math. 2019, 352, 23–39. [CrossRef]
7. Potra, F.A. An error analysis for the secant method. Numer. Math. 1982, 38, 427–445. [CrossRef]
8. Ostrowski, A.M. Solution of Equations and Systems of Equations; Academic Press: New York, NY, USA, 1960.
9. Micchelli, C.A. On a Numerically Efficient Method for Computing Multivariate B-Splines. In Multivariate
Approximation Theory; Schempp, W., Zeller, K., Eds.; Birkhäuser: Basel, Switzerland, 1979; pp. 211–248.
10. Potra, F.-A.; Pták, V. Nondiscrete Induction and Iterative Processes; Pitman Publishing: Boston, MA, USA, 1984.
11. Fatou, P. Sur les équations fonctionnelles. Bull. Soc. Math. Fr. 1919, 47, 161–271. [CrossRef]
12. Julia, G. Mémoire sur l'itération des fonctions rationnelles. J. Math. Pures Appl. 1918, 8, 47–245.
13. Robinson, R.C. An Introduction to Dynamical Systems, Continuous and Discrete; American Mathematical Society:
Providence, RI, USA, 2012.
14. Cordero, A.; Torregrosa, J.R. Variants of Newton’s method using fifth-order quadrature formulas. Appl. Math.
Comput. 2007, 190, 686–698. [CrossRef]
15. Campos, B.; Cordero, A.; Torregrosa, J.R.; Vindel, P. A multidimensional dynamical approach to iterative methods
with memory. Appl. Math. Comput. 2015, 271, 701–715. [CrossRef]
16. Chicharro, F.I.; Cordero, A.; Torregrosa, J.R. Drawing Dynamical Parameters Planes of Iterative Families and
Methods. Sci. World J. 2013, 2013. [CrossRef] [PubMed]
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).