Two Simple Projection-Type Methods For Solving Variational Inequalities
Abstract
In this paper we study the classical monotone and Lipschitz continuous variational
inequality in real Hilbert spaces. Two projection-type methods, a Mann-type scheme
and its viscosity generalization, are introduced together with their strong convergence
theorems. Our methods generalize and extend some related results in the literature, and
their main advantages are strong convergence and an adaptive step-size rule which avoids
the need to know a priori the Lipschitz constant of the operator associated with the
variational inequality. Preliminary numerical experiments in finite and infinite
dimensional spaces compare and illustrate the behavior of the proposed schemes.
0 Page 2 of 23 A. Gibali et al.
1 Introduction
In this paper, we study the classical Variational Inequality (VI) of Fichera [14,15] in
real Hilbert spaces. The VI is formulated as follows: Find a point x* ∈ C such that

⟨Ax*, x − x*⟩ ≥ 0 ∀x ∈ C, (1)

where C ⊆ H is a nonempty, closed and convex subset of a real Hilbert space H and
A : H → H is a given mapping. We denote by VI(C, A) the solution set of the VI (1).
Variational inequalities are fundamental problems which stand at the core of diverse
applied fields such as economics, engineering mechanics and transportation; see, for
example, [2,3,20], just to name a few. In the last decades, many iterative methods
have been constructed for solving variational inequalities and their related
optimization problems; see, for example, the excellent book of Facchinei and Pang [13],
Konnov [20] and the many references therein.
The simplest method for solving VIs, derived from optimization theory, is
known as the gradient method (GM). The iterative step of this method requires the
calculation of the orthogonal projection onto the feasible set C of the VI
per each iteration. Given the current iterate x_n, the algorithm's iterative step has the
following form:

x_{n+1} = P_C(x_n − τ Ax_n), (2)

where τ > 0 is a fixed step size and P_C denotes the metric projection onto C. The
gradient method converges, however, only under rather restrictive assumptions, such as
strong monotonicity of A. In order to weaken these assumptions, Korpelevich [21] (see
also Antipin [1]) introduced the extragradient method:

y_n = P_C(x_n − τ Ax_n),
x_{n+1} = P_C(x_n − τ Ay_n), (3)

where τ ∈ (0, 1/L) and L is the Lipschitz constant of A. This method has been studied
intensively, and extended and improved in various ways; see, e.g., [6–10,25,26,31,34,35]
and the references therein.
Although the extragradient method converges under weaker monotonicity assumptions
than the gradient method, it requires the calculation of two projections onto C per
each iteration. So, in case the set C is not "easy" to project onto, a minimum distance
subproblem has to be solved twice per iteration in order to evaluate P_C, a fact
which might affect the applicability and computational complexity of the method.
In order to overcome this obstacle, Censor et al. [7–9] introduced the so-called
subgradient extragradient method (SEM). In this algorithm, the second projection onto
C is replaced by an easy and constructible projection onto some superset which contains
C. Given the current iterate x_n, the algorithm's iterative step has the following form:
y_n = P_C(x_n − τ Ax_n),
T_n = {x ∈ H | ⟨x_n − τ Ax_n − y_n, x − y_n⟩ ≤ 0}, (4)
x_{n+1} = P_{T_n}(x_n − τ Ay_n).
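Since T_n is a half-space, the second projection of the SEM admits a closed form, which is the computational point of the method. The following sketch (Python/NumPy, a finite dimensional illustration not taken from the paper) shows one SEM iteration; the operator `A`, the projector `proj_C` and the step size `tau` are assumed to be supplied by the user.

```python
import numpy as np

def project_halfspace(x, a, b):
    """Projection onto T = {z : <a, z> <= b}: closed form, no inner subproblem."""
    viol = a @ x - b
    if viol <= 0:                      # x already lies in the half-space
        return np.array(x, dtype=float)
    return x - (viol / (a @ a)) * a

def sem_step(x, A, proj_C, tau):
    """One subgradient extragradient iteration as in (4)."""
    y = proj_C(x - tau * A(x))
    a = x - tau * A(x) - y             # normal vector defining T_n
    b = a @ y                          # T_n = {z : <a, z - y> <= 0}
    return project_halfspace(x - tau * A(y), a, b)
```

For example, with A(x) = x and C the Euclidean unit ball, repeated calls of `sem_step` drive the iterates to the unique solution x* = 0, while only the ball projection and a closed-form half-space projection are evaluated.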
A related classical scheme is the projection and contraction (PC) method of He [17]
(see also [18,32]). Given the current iterate x_n, compute

y_n = P_C(x_n − τ_n Ax_n),

and then the next iterate x_{n+1} is generated via the following PC-algorithm:

x_{n+1} = x_n − γ η_n d(x_n, y_n),

where γ ∈ (0, 2), τ_n ∈ (0, 1/L) (or τ_n is updated by some self-adaptive rule),
d(x_n, y_n) := x_n − y_n − τ_n(Ax_n − Ay_n), and

η_n := ⟨x_n − y_n, d(x_n, y_n)⟩ / ‖d(x_n, y_n)‖².
Recently, projection and contraction type methods for solving VIs have received
great attention from many authors; see, e.g., [4,11,12], just to name a few.
Since the SEM and PC algorithms were originally introduced in Euclidean spaces, a
natural question which was studied is how to extend these methods to infinite dimensional
spaces and obtain strong convergence. In 2012, Censor et al. [8] proposed two subgradient
extragradient variants which converge strongly in real Hilbert spaces. One of
the SEM variants has the following form. Given the current iterate x_n, the next iterate
x_{n+1} is calculated via the following:
y_n = P_C(x_n − τ Ax_n),
T_n = {x ∈ H | ⟨x_n − τ Ax_n − y_n, x − y_n⟩ ≤ 0},
z_n = α_n x_n + (1 − α_n)P_{T_n}(x_n − τ Ay_n), (6)
C_n = {w ∈ H | ‖z_n − w‖ ≤ ‖x_n − w‖},
Q_n = {w ∈ H | ⟨x_n − w, x_0 − x_n⟩ ≥ 0},
x_{n+1} = P_{C_n ∩ Q_n} x_0, ∀n ≥ 0.
Inspired by the results in [8], Kraikaew and Saejung [22] combined the subgradient
extragradient method and the Halpern-type method and proposed the so-called Halpern
subgradient extragradient method. Given the current iterate x_n, the next iterate x_{n+1}
is calculated via the following:
y_n = P_C(x_n − τ Ax_n),
T_n = {x ∈ H | ⟨x_n − τ Ax_n − y_n, x − y_n⟩ ≤ 0},
z_n = P_{T_n}(x_n − τ Ay_n), (7)
x_{n+1} = α_n x_0 + (1 − α_n)z_n, ∀n ≥ 0,

where τ ∈ (0, 1/L), {α_n} ⊂ (0, 1), lim_{n→∞} α_n = 0, ∑_{n=1}^∞ α_n = +∞ and x_0 ∈ H.
Similarly to (6) of Censor et al. [8], (7) converges strongly to the specific point
p = P_{VI(C,A)} x_0.
Two other very recent and related viscosity-type methods, which are also used for
comparison with our methods in Sect. 4, are Shehu and Iyiola [30, Algorithm 3.1]
and Thong and Hieu [33, Algorithm 3].
The setting of Shehu and Iyiola [30, Algorithm 3.1] is as follows. Given ρ, μ ∈ (0, 1),
let {α_n}_{n=0}^∞ ⊂ (0, 1), let f be a contraction, and choose an arbitrary starting
point x_1 ∈ H. Given the current iterate x_n, calculate

y_n = P_C(x_n − λ_n Ax_n),

where λ_n is determined by a line search satisfying

λ_n ‖x_n − y_n‖ ≤ μ ‖r_{ρ l^n}(x_n)‖,

with r_{ρ l^n}(x_n) := x_n − P_C(x_n − ρ l^n Ax_n). Construct the set T_n as in (4) and
compute the next iterate via the viscosity step x_{n+1} = α_n f(x_n) + (1 − α_n)P_{T_n}(x_n − λ_n Ay_n).
The setting of Thong and Hieu [33, Algorithm 3] is as follows. Given ρ ∈ [0, 1),
μ, l ∈ (0, 1) and γ > 0, let {α_n}_{n=0}^∞ ⊂ (0, 1), let f be a contraction, and choose
an arbitrary starting point x_1 ∈ H. Given the current iterate x_n, calculate

y_n = P_C(x_n − λ_n Ax_n),

where λ_n is chosen, via a line search over {γ, γl, γl², …}, to satisfy

λ_n ‖Ax_n − Ay_n‖ ≤ μ ‖x_n − y_n‖.
2 Preliminaries
Let H be a real Hilbert space and C be a nonempty, closed and convex subset of
H. The weak convergence of {x_n}_{n=1}^∞ to x is denoted by x_n ⇀ x as n → ∞, while
the strong convergence of {x_n}_{n=1}^∞ to x is written as x_n → x as n → ∞. For each
x, y ∈ H and α ∈ ℝ, we have

‖x + y‖² ≤ ‖x‖² + 2⟨y, x + y⟩, (10)

‖αx + (1 − α)y‖² = α‖x‖² + (1 − α)‖y‖² − α(1 − α)‖x − y‖². (11)

Moreover, for all x, y, z ∈ H and α, β, γ ∈ [0, 1] with α + β + γ = 1,

‖αx + βy + γz‖² = α‖x‖² + β‖y‖² + γ‖z‖² − αβ‖x − y‖² − αγ‖x − z‖² − βγ‖y − z‖². (12)

Recall that a mapping T : H → H is called L-Lipschitz continuous (L > 0) if

‖T x − T y‖ ≤ L‖x − y‖ ∀x, y ∈ H, (13)

and monotone if

⟨T x − T y, x − y⟩ ≥ 0 ∀x, y ∈ H. (14)
Lemma 2.2 [16] Let C be a closed and convex subset in a real Hilbert space H and
let x ∈ H. Then

i) ‖P_C x − P_C y‖² ≤ ⟨P_C x − P_C y, x − y⟩ ∀y ∈ C;
ii) ‖P_C x − y‖² ≤ ‖x − y‖² − ‖x − P_C x‖² ∀y ∈ C;
iii) ⟨(I − P_C)x − (I − P_C)y, x − y⟩ ≥ ‖(I − P_C)x − (I − P_C)y‖² ∀y ∈ C.

For further properties of the metric projection, the interested reader is referred to
Section 3 of [16].
The following lemmas are useful for the convergence analysis of our proposed methods.
Lemma 2.3 [22] Let A : H → H be a monotone and L-Lipschitz continuous mapping
on C. Let S = P_C(I − τ A), where τ > 0. If {x_n} is a sequence in H satisfying x_n ⇀ q
and x_n − Sx_n → 0, then q ∈ VI(C, A) = Fix(S).
Lemma 2.4 [24] Let {a_n} be a sequence of nonnegative real numbers such that there
exists a subsequence {a_{n_j}} of {a_n} with a_{n_j} < a_{n_j+1} for all j ∈ ℕ. Then there
exists a nondecreasing sequence {m_k} ⊂ ℕ such that lim_{k→∞} m_k = ∞ and the following
properties are satisfied for all (sufficiently large) k ∈ ℕ:

a_{m_k} ≤ a_{m_k+1} and a_k ≤ a_{m_k+1}.

In fact, m_k is the largest number n in the set {1, 2, …, k} such that a_n < a_{n+1}.
The next technical lemma is very useful and used by many authors, for example
Liu [23] and Xu [36]. Furthermore, a variant of Lemma 2.5 has already been used by
Reich in [29].
Lemma 2.5 Let {a_n} be a sequence of nonnegative real numbers such that

a_{n+1} ≤ (1 − α_n)a_n + α_n b_n,

where {α_n} ⊂ (0, 1) with ∑_{n=1}^∞ α_n = ∞ and {b_n} is a sequence of real numbers
with lim sup_{n→∞} b_n ≤ 0. Then lim_{n→∞} a_n = 0.
3 Main results
In this section we introduce our two modified projection-type methods for solving VIs.
For the convergence analysis of the methods, we assume the following conditions.
Condition 3.1 The feasible set C is a nonempty, closed and convex subset of H, and the
mapping A : H → H is monotone and Lipschitz continuous on H.
Condition 3.2 The solution set of the VI (1) is nonempty, that is V I (C, A) = ∅.
Condition 3.3 Let {α_n} and {β_n} be two real sequences in (0, 1) such that {β_n} ⊂
(a, b) ⊂ (0, 1 − α_n) for some a > 0, b > 0 and

lim_{n→∞} α_n = 0, ∑_{n=1}^∞ α_n = ∞.
Algorithm 3.1
Initialization: Given λ > 0, l ∈ (0, 1), μ ∈ (0, 1), γ ∈ (0, 2). Let x_0 ∈ H be
arbitrary.
Iterative Steps: Given the current iterate x_n, calculate the next iterate x_{n+1} as
follows:
Step 1. Compute

y_n = P_C(x_n − τ_n Ax_n),

where τ_n := λ l^{m_n} and m_n is the smallest nonnegative integer m satisfying the
Armijo-like search rule

λ l^m ‖Ax_n − A P_C(x_n − λ l^m Ax_n)‖ ≤ μ ‖x_n − P_C(x_n − λ l^m Ax_n)‖. (16)

If x_n = y_n, then stop: y_n is a solution of the VI (1). Otherwise,
Step 2. Compute

z_n = x_n − γ η_n d_n,

where

d_n := x_n − y_n − τ_n(Ax_n − Ay_n)

and

η_n := (1 − μ) ‖x_n − y_n‖² / ‖d_n‖².

Step 3. Compute

x_{n+1} = (1 − α_n − β_n)x_n + β_n z_n.

Set n := n + 1 and go to Step 1.
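To make the scheme concrete, here is a finite dimensional sketch of Algorithm 3.1 in Python/NumPy. The parameter defaults, the identity test operator and the box feasible set used below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def algorithm_3_1(A, proj_C, x0, lam=1.0, l=0.5, mu=0.9, gamma=1.5,
                  alpha=lambda n: 1.0 / (n + 2), beta=lambda n: 0.4,
                  n_iter=3000):
    """Sketch of the Mann-type projection-and-contraction scheme.

    A      : the (monotone, Lipschitz) operator, a callable
    proj_C : metric projection onto the feasible set C
    """
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        # Step 1: Armijo-like backtracking (16) for tau_n; terminates for
        # Lipschitz A (cf. Lemma 3.1)
        tau = lam
        y = proj_C(x - tau * A(x))
        while tau * np.linalg.norm(A(x) - A(y)) > mu * np.linalg.norm(x - y):
            tau *= l
            y = proj_C(x - tau * A(x))
        if np.allclose(x, y):
            return y                    # y_n solves the VI (Remark 3.1)
        # Step 2: projection-and-contraction update direction
        d = x - y - tau * (A(x) - A(y))
        eta = (1 - mu) * np.dot(x - y, x - y) / np.dot(d, d)
        z = x - gamma * eta * d
        # Step 3: Mann-type combination
        x = (1 - alpha(n) - beta(n)) * x + beta(n) * z
    return x
```

As a quick sanity check, for A(x) = x on the box C = [1, 2]² the VI has the unique solution (1, 1), and the iterates approach it.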
We start the analysis of the algorithm’s convergence by proving the validity of the
stopping criterion.
Lemma 3.1 Assume that Conditions 3.1–3.2 hold. Then the Armijo-like search rule (16) is
well defined and

min{λ, μl/L} ≤ τ_n ≤ λ.
Lemma 3.2 Let {d_n} be a sequence generated by Algorithm 3.1. Then d_n = 0 if and
only if x_n = y_n. Moreover,

(1 − μ)‖x_n − y_n‖ ≤ ‖d_n‖ ≤ (1 + μ)‖x_n − y_n‖. (17)

Proof We have

‖d_n‖ = ‖x_n − y_n − τ_n(Ax_n − Ay_n)‖
≥ ‖x_n − y_n‖ − τ_n‖Ax_n − Ay_n‖
≥ ‖x_n − y_n‖ − μ‖x_n − y_n‖
= (1 − μ)‖x_n − y_n‖. (18)

Similarly, by the triangle inequality and (16),

‖d_n‖ ≤ ‖x_n − y_n‖ + τ_n‖Ax_n − Ay_n‖ ≤ (1 + μ)‖x_n − y_n‖. (19)

Combining (18) and (19) yields

(1 − μ)‖x_n − y_n‖ ≤ ‖d_n‖ ≤ (1 + μ)‖x_n − y_n‖,

and since μ ∈ (0, 1), d_n = 0 if and only if x_n = y_n. □

Remark 3.1 Lemma 3.2 shows that if d_n = 0, then x_n = y_n and hence, by the stopping
rule of Algorithm 3.1, y_n is a solution of VI(C, A).
Lemma 3.3 Assume that Conditions 3.1 and 3.2 hold. Let {z_n} be a sequence generated
by Algorithm 3.1. Then

‖z_n − p‖² ≤ ‖x_n − p‖² − ((2 − γ)/γ)‖x_n − z_n‖² ∀p ∈ VI(C, A). (20)

Proof Fix p ∈ VI(C, A). We first estimate ⟨x_n − p, d_n⟩:

⟨x_n − p, d_n⟩ = ⟨x_n − y_n, d_n⟩ + ⟨y_n − p, d_n⟩
= ‖x_n − y_n‖² − τ_n⟨x_n − y_n, Ax_n − Ay_n⟩ + ⟨y_n − p, x_n − y_n − τ_n(Ax_n − Ay_n)⟩
≥ ‖x_n − y_n‖² − τ_n‖x_n − y_n‖ ‖Ax_n − Ay_n‖ + ⟨y_n − p, x_n − y_n − τ_n(Ax_n − Ay_n)⟩
≥ ‖x_n − y_n‖² − μ‖x_n − y_n‖² + ⟨y_n − p, x_n − y_n − τ_n(Ax_n − Ay_n)⟩. (21)

Since y_n = P_C(x_n − τ_n Ax_n) and p ∈ C, the characterization of the metric projection
gives ⟨x_n − τ_n Ax_n − y_n, y_n − p⟩ ≥ 0. Moreover, by the monotonicity of A and
p ∈ VI(C, A), we have τ_n⟨Ay_n, y_n − p⟩ ≥ τ_n⟨Ap, y_n − p⟩ ≥ 0. Writing
x_n − y_n − τ_n(Ax_n − Ay_n) = (x_n − τ_n Ax_n − y_n) + τ_n Ay_n, the last inner product
in (21) is therefore nonnegative, and we obtain

⟨x_n − p, d_n⟩ ≥ (1 − μ)‖x_n − y_n‖². (25)

Now,

‖z_n − p‖² = ‖x_n − γ η_n d_n − p‖² = ‖x_n − p‖² − 2γ η_n⟨x_n − p, d_n⟩ + γ² η_n² ‖d_n‖². (26)

Combining (25) and (26),

‖z_n − p‖² ≤ ‖x_n − p‖² − 2γ η_n (1 − μ)‖x_n − y_n‖² + γ² η_n² ‖d_n‖².

Since η_n = (1 − μ)‖x_n − y_n‖²/‖d_n‖², it follows that (1 − μ)‖x_n − y_n‖² = η_n ‖d_n‖².
Thus,

‖z_n − p‖² ≤ ‖x_n − p‖² − 2γ η_n² ‖d_n‖² + γ² η_n² ‖d_n‖²
= ‖x_n − p‖² − γ(2 − γ)‖η_n d_n‖²
= ‖x_n − p‖² − ((2 − γ)/γ)‖γ η_n d_n‖²
= ‖x_n − p‖² − ((2 − γ)/γ)‖x_n − z_n‖². □
Lemma 3.4 Assume that Conditions 3.1–3.2 hold and let the sequence {x_n} be generated
by Algorithm 3.1. Then

‖x_n − y_n‖² ≤ ((1 + μ)²/[(1 − μ)γ]²)‖x_n − z_n‖². (27)

Proof We have

‖x_n − y_n‖² = (1/(1 − μ)) η_n ‖d_n‖² = (1/(η_n(1 − μ)γ²))‖x_n − z_n‖², (28)

where the second equality uses ‖x_n − z_n‖² = γ² η_n² ‖d_n‖². By (17),

η_n = (1 − μ)‖x_n − y_n‖²/‖d_n‖² ≥ (1 − μ)/(1 + μ)²,

thus

1/η_n ≤ (1 + μ)²/(1 − μ), (29)

and substituting (29) into (28) yields

‖x_n − y_n‖² ≤ ((1 + μ)²/[(1 − μ)γ]²)‖x_n − z_n‖². □
Theorem 3.1 Assume that Conditions 3.1–3.3 hold. Then any sequence {x_n} generated
by Algorithm 3.1 converges strongly to the minimum-norm solution p = P_{VI(C,A)}0,
that is, p ∈ VI(C, A) with ‖p‖ = min{‖z‖ : z ∈ VI(C, A)}.
Proof Thanks to Lemma 3.3 we get

‖z_n − p‖ ≤ ‖x_n − p‖ ∀n. (30)

Hence,

‖x_{n+1} − p‖ = ‖(1 − α_n − β_n)x_n + β_n z_n − p‖
= ‖(1 − α_n − β_n)(x_n − p) + β_n(z_n − p) − α_n p‖
≤ ‖(1 − α_n − β_n)(x_n − p) + β_n(z_n − p)‖ + α_n‖p‖. (31)

Furthermore, using (30),

‖(1 − α_n − β_n)(x_n − p) + β_n(z_n − p)‖²
= (1 − α_n − β_n)²‖x_n − p‖² + 2(1 − α_n − β_n)β_n⟨x_n − p, z_n − p⟩ + β_n²‖z_n − p‖²
≤ (1 − α_n − β_n)²‖x_n − p‖² + 2(1 − α_n − β_n)β_n‖z_n − p‖ ‖x_n − p‖ + β_n²‖z_n − p‖²
≤ (1 − α_n − β_n)²‖x_n − p‖² + 2(1 − α_n − β_n)β_n‖x_n − p‖² + β_n²‖x_n − p‖²
= (1 − α_n)²‖x_n − p‖².

Therefore,

‖x_{n+1} − p‖ ≤ (1 − α_n)‖x_n − p‖ + α_n‖p‖
≤ max{‖x_n − p‖, ‖p‖}
≤ ⋯ ≤ max{‖x_0 − p‖, ‖p‖},

which shows that the sequence {x_n} is bounded.
Next, applying identity (12) with the weights 1 − α_n − β_n, β_n, α_n and the points
x_n − p, z_n − p, −p, we obtain

‖x_{n+1} − p‖² = ‖(1 − α_n − β_n)x_n + β_n z_n − p‖²
= ‖(1 − α_n − β_n)(x_n − p) + β_n(z_n − p) + α_n(−p)‖²
= (1 − α_n − β_n)‖x_n − p‖² + β_n‖z_n − p‖² + α_n‖p‖²
− β_n(1 − α_n − β_n)‖x_n − z_n‖² − α_n(1 − α_n − β_n)‖x_n‖² − α_n β_n‖z_n‖²
≤ (1 − α_n − β_n)‖x_n − p‖² + β_n‖z_n − p‖² + α_n‖p‖². (34)

Combining (34) with Lemma 3.3,

‖x_{n+1} − p‖² ≤ (1 − α_n − β_n)‖x_n − p‖² + β_n‖x_n − p‖²
− β_n((2 − γ)/γ)‖x_n − z_n‖² + α_n‖p‖²
= (1 − α_n)‖x_n − p‖² − β_n((2 − γ)/γ)‖x_n − z_n‖² + α_n‖p‖²
≤ ‖x_n − p‖² − β_n((2 − γ)/γ)‖x_n − z_n‖² + α_n‖p‖². (35)

Therefore, we get

β_n((2 − γ)/γ)‖x_n − z_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n‖p‖².
We next claim that

‖x_{n+1} − p‖² ≤ (1 − α_n)‖x_n − p‖² + α_n[2β_n‖x_n − z_n‖ ‖x_{n+1} − p‖ + 2⟨p, p − x_{n+1}⟩]. (36)

Indeed, set t_n := (1 − β_n)x_n + β_n z_n. Then, by (30),

‖t_n − p‖ = ‖(1 − β_n)(x_n − p) + β_n(z_n − p)‖
≤ (1 − β_n)‖x_n − p‖ + β_n‖z_n − p‖
≤ (1 − β_n)‖x_n − p‖ + β_n‖x_n − p‖
= ‖x_n − p‖, (37)

and

‖t_n − x_n‖ = β_n‖x_n − z_n‖. (38)

Since (1 − α_n − β_n)x_n + β_n z_n = t_n − α_n x_n, we have, using (10),

‖x_{n+1} − p‖² = ‖(1 − α_n − β_n)x_n + β_n z_n − p‖²
= ‖(1 − α_n)(t_n − p) − α_n(x_n − t_n) − α_n p‖²
≤ (1 − α_n)²‖t_n − p‖² − 2⟨α_n(x_n − t_n) + α_n p, x_{n+1} − p⟩
= (1 − α_n)²‖t_n − p‖² + 2α_n⟨x_n − t_n, p − x_{n+1}⟩ + 2α_n⟨p, p − x_{n+1}⟩
≤ (1 − α_n)‖t_n − p‖² + 2α_n‖x_n − t_n‖ ‖x_{n+1} − p‖ + 2α_n⟨p, p − x_{n+1}⟩
≤ (1 − α_n)‖x_n − p‖² + α_n[2β_n‖x_n − z_n‖ ‖x_{n+1} − p‖ + 2⟨p, p − x_{n+1}⟩].
Now we consider two possible cases.
Case 1. There exists N ∈ ℕ such that ‖x_{n+1} − p‖ ≤ ‖x_n − p‖ for all n ≥ N. Then
lim_{n→∞}‖x_n − p‖ exists, and from (35) together with Condition 3.3 we deduce

lim_{n→∞}‖x_n − z_n‖ = 0,

and, by Lemma 3.4,

lim_{n→∞}‖x_n − y_n‖ = 0.

We also have

‖x_{n+1} − x_n‖ ≤ α_n‖x_n‖ + β_n‖x_n − z_n‖ → 0 as n → ∞.

Since {x_n} is bounded, we may assume that there exists a subsequence {x_{n_j}} of {x_n}
such that x_{n_j} ⇀ q and

lim sup_{n→∞}⟨p, p − x_n⟩ = lim_{j→∞}⟨p, p − x_{n_j}⟩.

Since x_{n_j} ⇀ q, min{λ, μl/L} ≤ τ_n ≤ λ and ‖x_n − y_n‖ = ‖x_n − P_C(x_n − τ_n Ax_n)‖ → 0,
by Lemma 2.3 we get q ∈ VI(C, A).
Since q ∈ VI(C, A) and p = P_{VI(C,A)}0, the characterization of the metric projection
gives

lim sup_{n→∞}⟨p, p − x_n⟩ = ⟨p, p − q⟩ ≤ 0.

Combining this with (36) and Lemma 2.5, we conclude that x_n → p.
In the remaining case there exists a subsequence {‖x_{n_j} − p‖} of {‖x_n − p‖} such
that ‖x_{n_j} − p‖ < ‖x_{n_j+1} − p‖ for all j ∈ ℕ. By Lemma 2.4, there exists a
nondecreasing sequence {m_k} ⊂ ℕ with lim_{k→∞} m_k = ∞ such that

‖x_{m_k} − p‖² ≤ ‖x_{m_k+1} − p‖² and ‖x_k − p‖² ≤ ‖x_{m_k+1} − p‖². (39)

From (35), (39) and Condition 3.3,

a((2 − γ)/γ)‖x_{m_k} − z_{m_k}‖² ≤ β_{m_k}((2 − γ)/γ)‖x_{m_k} − z_{m_k}‖²
≤ ‖x_{m_k} − p‖² − ‖x_{m_k+1} − p‖² + α_{m_k}‖p‖²
≤ α_{m_k}‖p‖².

Therefore, since α_{m_k} → 0, we get

lim_{k→∞}‖x_{m_k} − z_{m_k}‖ = 0. (40)

Arguing as above, we also obtain

‖x_{m_k+1} − x_{m_k}‖ → 0

and

lim sup_{k→∞}⟨p, p − x_{m_k+1}⟩ ≤ 0.

By (36) and (39),

‖x_{m_k+1} − p‖² ≤ (1 − α_{m_k})‖x_{m_k} − p‖² + α_{m_k}[2β_{m_k}‖x_{m_k} − z_{m_k}‖ ‖x_{m_k+1} − p‖ + 2⟨p, p − x_{m_k+1}⟩]
≤ (1 − α_{m_k})‖x_{m_k+1} − p‖² + α_{m_k}[2β_{m_k}‖x_{m_k} − z_{m_k}‖ ‖x_{m_k+1} − p‖ + 2⟨p, p − x_{m_k+1}⟩].

Hence,

‖x_k − p‖² ≤ ‖x_{m_k+1} − p‖² ≤ 2β_{m_k}‖x_{m_k} − z_{m_k}‖ ‖x_{m_k+1} − p‖ + 2⟨p, p − x_{m_k+1}⟩ → 0 as k → ∞,

and therefore x_k → p. This completes the proof. □
Next, we propose our viscosity projection-type algorithm for solving variational
inequalities, which uses a ρ-contraction f : H → H.
Algorithm 3.2
Initialization: Given λ > 0, l ∈ (0, 1), μ ∈ (0, 1), γ ∈ (0, 2). Let x_0 ∈ H be
arbitrary.
Iterative Steps: Given the current iterate x_n, calculate the next iterate x_{n+1} as
follows:
Step 1. Compute

y_n = P_C(x_n − τ_n Ax_n),

where τ_n is chosen by the Armijo-like search rule (16). If x_n = y_n, then stop: y_n is
a solution of the VI (1). Otherwise,
Step 2. Compute

z_n = x_n − γ η_n d_n,

where

d_n := x_n − y_n − τ_n(Ax_n − Ay_n) and η_n := (1 − μ)‖x_n − y_n‖²/‖d_n‖².

Step 3. Compute

x_{n+1} = α_n f(x_n) + (1 − α_n)z_n.

Set n := n + 1 and go to Step 1.
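Algorithm 3.2 differs from Algorithm 3.1 only in Step 3, where the Mann combination is replaced by a viscosity step driven by the contraction f. A self-contained Python/NumPy sketch (defaults and the test problem below are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def algorithm_3_2(A, proj_C, f, x0, lam=1.0, l=0.5, mu=0.9, gamma=1.5,
                  alpha=lambda n: 1.0 / (n + 2), n_iter=2000):
    """Viscosity variant: x_{n+1} = alpha_n f(x_n) + (1 - alpha_n) z_n."""
    x = np.asarray(x0, dtype=float)
    for n in range(n_iter):
        # Step 1: Armijo-like backtracking (16) for tau_n
        tau = lam
        y = proj_C(x - tau * A(x))
        while tau * np.linalg.norm(A(x) - A(y)) > mu * np.linalg.norm(x - y):
            tau *= l
            y = proj_C(x - tau * A(x))
        if np.allclose(x, y):
            return y                    # y_n solves the VI
        # Step 2: projection-and-contraction update direction
        d = x - y - tau * (A(x) - A(y))
        eta = (1 - mu) * np.dot(x - y, x - y) / np.dot(d, d)
        z = x - gamma * eta * d
        # Step 3: viscosity step with the rho-contraction f
        x = alpha(n) * f(x) + (1 - alpha(n)) * z
    return x
```

For A(x) = x on the box C = [1, 2]² and the contraction f(x) = 0.5x, the iterates approach the unique solution (1, 1).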
Theorem 3.2 Assume that Conditions 3.1–3.2 hold and let f : H → H be a given
ρ-contraction. Assume that {α_n} is a real sequence in (0, 1) such that

lim_{n→∞} α_n = 0, ∑_{n=1}^∞ α_n = ∞.

Then any sequence {x_n} generated by Algorithm 3.2 converges strongly to an element
p ∈ VI(C, A), where p = P_{VI(C,A)}(f(p)).
Proof Claim 1. We prove that {x_n} is bounded. Indeed, according to Lemma 3.3
we have

‖z_n − p‖ ≤ ‖x_n − p‖. (42)

Hence,

‖x_{n+1} − p‖ = ‖α_n f(x_n) + (1 − α_n)z_n − p‖
= ‖α_n(f(x_n) − p) + (1 − α_n)(z_n − p)‖
≤ α_n‖f(x_n) − p‖ + (1 − α_n)‖z_n − p‖
≤ α_n‖f(x_n) − f(p)‖ + α_n‖f(p) − p‖ + (1 − α_n)‖z_n − p‖
≤ α_n ρ‖x_n − p‖ + α_n‖f(p) − p‖ + (1 − α_n)‖x_n − p‖
= [1 − α_n(1 − ρ)]‖x_n − p‖ + α_n(1 − ρ)(‖f(p) − p‖/(1 − ρ))
≤ max{‖x_n − p‖, ‖f(p) − p‖/(1 − ρ)}
≤ ⋯ ≤ max{‖x_0 − p‖, ‖f(p) − p‖/(1 − ρ)}.

This implies that the sequence {x_n} is bounded. Consequently, {f(x_n)}, {y_n} and {z_n}
are bounded.
Claim 2. We show that

(1 − α_n)((2 − γ)/γ)‖x_n − z_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n‖f(x_n) − p‖².

Indeed, using (11) and Lemma 3.3,

‖x_{n+1} − p‖² = ‖α_n(f(x_n) − p) + (1 − α_n)(z_n − p)‖²
= α_n‖f(x_n) − p‖² + (1 − α_n)‖z_n − p‖² − α_n(1 − α_n)‖f(x_n) − z_n‖²
≤ α_n‖f(x_n) − p‖² + (1 − α_n)‖z_n − p‖²
≤ α_n‖f(x_n) − p‖² + (1 − α_n)‖x_n − p‖² − (1 − α_n)((2 − γ)/γ)‖x_n − z_n‖²
≤ α_n‖f(x_n) − p‖² + ‖x_n − p‖² − (1 − α_n)((2 − γ)/γ)‖x_n − z_n‖².

Therefore,

(1 − α_n)((2 − γ)/γ)‖x_n − z_n‖² ≤ ‖x_n − p‖² − ‖x_{n+1} − p‖² + α_n‖f(x_n) − p‖².
Claim 3. Using (10) and (11), we have

‖x_{n+1} − p‖² = ‖α_n(f(x_n) − f(p)) + (1 − α_n)(z_n − p) + α_n(f(p) − p)‖²
≤ ‖α_n(f(x_n) − f(p)) + (1 − α_n)(z_n − p)‖² + 2α_n⟨f(p) − p, x_{n+1} − p⟩
≤ α_n‖f(x_n) − f(p)‖² + (1 − α_n)‖z_n − p‖² + 2α_n⟨f(p) − p, x_{n+1} − p⟩
≤ α_n ρ‖x_n − p‖² + (1 − α_n)‖x_n − p‖² + 2α_n⟨f(p) − p, x_{n+1} − p⟩
= (1 − (1 − ρ)α_n)‖x_n − p‖² + (1 − ρ)α_n · (2/(1 − ρ))⟨f(p) − p, x_{n+1} − p⟩. (43)
Claim 4. We show that {x_n} converges strongly to p. As in the proof of Theorem 3.1,
we consider two possible cases.
Case 1. There exists N ∈ ℕ such that ‖x_{n+1} − p‖ ≤ ‖x_n − p‖ for all n ≥ N. Then
lim_{n→∞}‖x_n − p‖ exists, and Claim 2 together with lim_{n→∞} α_n = 0 yields

lim_{n→∞}‖x_n − z_n‖ = 0, (44)

and, by Lemma 3.4,

lim_{n→∞}‖x_n − y_n‖ = 0. (45)

We also have

‖x_{n+1} − x_n‖ = ‖α_n f(x_n) + (1 − α_n)z_n − x_n‖
≤ α_n‖f(x_n) − x_n‖ + (1 − α_n)‖z_n − x_n‖ → 0. (46)

Since the sequence {x_n} is bounded, there exists a subsequence {x_{n_k}} of {x_n}
converging weakly to some z ∈ H such that

lim sup_{n→∞}⟨f(p) − p, x_n − p⟩ = lim_{k→∞}⟨f(p) − p, x_{n_k} − p⟩.

By (45) and Lemma 2.3, z ∈ VI(C, A), and since p = P_{VI(C,A)}(f(p)), the
characterization of the metric projection gives

lim sup_{n→∞}⟨f(p) − p, x_n − p⟩ = ⟨f(p) − p, z − p⟩ ≤ 0.

Combining this with (43) and Lemma 2.5, we conclude that x_n → p.
Case 2. There exists a subsequence {‖x_{n_j} − p‖} of {‖x_n − p‖} such that
‖x_{n_j} − p‖ < ‖x_{n_j+1} − p‖ for all j ∈ ℕ. By Lemma 2.4, there exists a
nondecreasing sequence {m_k} ⊂ ℕ with lim_{k→∞} m_k = ∞ such that

‖x_{m_k} − p‖² ≤ ‖x_{m_k+1} − p‖², (50)

and

‖x_k − p‖² ≤ ‖x_{m_k+1} − p‖². (51)
By Claim 2 and (50),

(1 − α_{m_k})((2 − γ)/γ)‖x_{m_k} − z_{m_k}‖² ≤ ‖x_{m_k} − p‖² − ‖x_{m_k+1} − p‖² + α_{m_k}‖f(x_{m_k}) − p‖²
≤ α_{m_k}‖f(x_{m_k}) − p‖².

Since α_{m_k} → 0 and {f(x_{m_k})} is bounded, we obtain

lim_{k→∞}‖x_{m_k} − z_{m_k}‖ = 0, (52)

and, by Lemma 3.4,

lim_{k→∞}‖x_{m_k} − y_{m_k}‖ = 0. (53)

Arguing as in Case 1, we also get ‖x_{m_k+1} − x_{m_k}‖ → 0 and

lim sup_{k→∞}⟨f(p) − p, x_{m_k+1} − p⟩ ≤ 0.
By (43) and (50),

‖x_{m_k+1} − p‖² ≤ (1 − (1 − ρ)α_{m_k})‖x_{m_k} − p‖² + (1 − ρ)α_{m_k} · (2/(1 − ρ))⟨f(p) − p, x_{m_k+1} − p⟩ (55)
≤ (1 − (1 − ρ)α_{m_k})‖x_{m_k+1} − p‖² + (1 − ρ)α_{m_k} · (2/(1 − ρ))⟨f(p) − p, x_{m_k+1} − p⟩,

and hence

‖x_{m_k+1} − p‖² ≤ (2/(1 − ρ))⟨f(p) − p, x_{m_k+1} − p⟩ → 0 as k → ∞.

Together with (51), this implies x_k → p, which completes the proof. □
4 Numerical illustrations
In this section we present two numerical experiments which demonstrate the
performance of our Mann-type and viscosity-type projection algorithms (Algorithms 3.1
and 3.2) in finite and infinite dimensional spaces. In both experiments the parameters
are chosen as λ = 7.55, l = 0.5, μ = 0.85, γ = 1.99, α_k = 1/k and β_k = (k − 1)/2k.
Fig. 1 x_1(t) = (1/600)[sin(−3t) + cos(−10t)]
Example 1 Suppose that H = L²([0, 1]) with norm ‖x‖ := (∫₀¹ |x(t)|² dt)^{1/2} and
inner product ⟨x, y⟩ := ∫₀¹ x(t)y(t) dt, ∀x, y ∈ H. Let C := {x ∈ H | ‖x‖ ≤ 1} be
the unit ball. Define the operator A : C → H by (Ax)(t) = max(0, x(t)). Then it can be
easily verified that A is 2-Lipschitz continuous and monotone on C (see [19]). With
these given C and A, the solution set of the variational inequality is {0} ≠ ∅. It is
known, see for example [5], that

P_C(x) = x/‖x‖_{L²} if ‖x‖_{L²} > 1, and P_C(x) = x if ‖x‖_{L²} ≤ 1.

We implement our algorithms with different starting points x_1(t). We choose the
stopping criterion ‖x_{n+1} − x_n‖ < ε with ε = 10^{−30}. The results are presented in
Table 1 and Figs. 1 and 2.
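The ball-projection formula above is easy to exercise numerically. Below is a simple discretized sanity check of the setting of Example 1 (Python/NumPy); the grid size, the plain projected-gradient iteration and the tolerances are our own illustrative choices, not the paper's algorithm or settings.

```python
import numpy as np

# L2([0,1]) sampled on m grid points: <x, y> ~ (1/m) * sum(x * y)
m = 200
t = np.linspace(0.0, 1.0, m)
w = 1.0 / m                                   # quadrature weight

def norm_L2(x):
    return np.sqrt(w * np.dot(x, x))

def proj_ball(x):
    """P_C for the unit ball C = {x in H : ||x|| <= 1}."""
    nx = norm_L2(x)
    return x if nx <= 1.0 else x / nx

A = lambda x: np.maximum(0.0, x)              # (Ax)(t) = max(0, x(t))

x = (np.sin(-3.0 * t) + np.cos(-10.0 * t)) / 600.0   # starting point of Fig. 1
tau = 0.4                                     # tau in (0, 1/L) with L = 2
for _ in range(100):                          # plain projected-gradient steps
    x = proj_ball(x - tau * A(x))

residual = norm_L2(x - proj_ball(x - tau * A(x)))    # fixed-point residual
```

After a few dozen iterations the positive part of the iterate is annihilated and the fixed-point residual of the mapping P_C(I − τA) drops to machine precision.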
Fig. 2 x_1(t) = (1/525)(t² − e^{−t})
F(x) = arg min{ ‖y‖⁴/4 + (1/2)‖y − x‖² | y ∈ ℝ^m }.
5 Conclusions
In this paper we proposed two projection-type methods, a Mann-type and a viscosity-type
scheme [27,28], for solving variational inequalities in real Hilbert spaces. Both
algorithms converge strongly under monotonicity and Lipschitz continuity of the mapping
A associated with the VI. The algorithms require the calculation of only one projection
onto the VI's feasible set C per iteration, and by using the projection and contraction
technique there is no need to know the Lipschitz constant of A in advance. These two
properties emphasize the applicability of, and advantages over, several existing results
in the literature. Numerical experiments in finite and infinite dimensional spaces
compare and illustrate the performance of our new schemes.
Fig. 3 Comparison between Algorithm 3.2 and [30, Algorithm 3.1] and [33, Algorithm 3]
Fig. 4 The performance of Algorithm 3.2 for different choices of the contraction f(x) =
0.9x, 0.75x, 0.5x, 0.25x
References
1. Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange
function. Ekonomika i Mat. Metody. 12, 1164–1173 (1976)
2. Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)
3. Baiocchi, C., Capelo, A.: Variational and Quasivariational Inequalities, Applications to Free Boundary
Problems. Wiley, New York (1984)
4. Cai, X., Gu, G., He, B.: On the O(1/t) convergence rate of the projection and contraction methods
for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 57,
339–363 (2014)
5. Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Mathe-
matics, vol. 2057. Springer, Berlin (2012)
6. Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-
like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46,
635–646 (2010)
7. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequal-
ities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
8. Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the
variational inequality problem in Hilbert space. Optim. Meth. Softw. 26, 827–845 (2011)
9. Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational
inequality problem in Euclidean space. Optimization 61, 1119–1132 (2011)
10. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer.
Algorithms 56, 301–323 (2012)
11. Dong, Q.L., Gibali, A., Jiang, D., Ke, S.H.: Convergence of projection and contraction algorithms with
outer perturbations and their applications to sparse signals recovery. J. Fixed Point Theory Appl. 20,
16 (2018). [Link]
12. Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, M.Th.: Inertial projection and contraction algorithms for
variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
13. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems.
Springer Series in Operations Research, vols. I and II. Springer, New York (2003)
14. Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad.
Naz. Lincei, VIII Ser. Rend. Cl. Sci. Fis. Mat. Nat. 34, 138–142 (1963)
15. Fichera, G.: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue con-
dizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat. Sez. I, VIII. Ser. 7, 91–140
(1964)
16. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel
Dekker, New York (1984)
17. He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl.
Math. Optim. 35, 69–76 (1997)
18. He, B.S., Liao, L.Z.: Improvements of some projection methods for monotone nonlinear variational
inequalities. J. Optim. Theory Appl. 112, 111–128 (2002)
19. Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions
to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)
20. Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
21. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekonomika
i Mat. Metody. 12, 747–756 (1976)
22. Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for
solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
23. Liu, L.S.: Ishikawa and Mann iteration process with errors for nonlinear strongly accretive mappings
in Banach space. J. Math. Anal. Appl. 194, 114–125 (1995)
24. Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point prob-
lems. SIAM J. Control Optim. 47, 1499–1515 (2008)
25. Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J.
Optim. 25, 502–520 (2015)
26. Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational
inequality problems. J. Glob. Optim. 61, 193–202 (2015)
27. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
28. Moudafi, A.: Viscosity approximating methods for fixed point problems. J. Math. Anal. Appl. 241,
46–55 (2000)
29. Reich, S.: Constructive Techniques for Accretive and Monotone Operators. Applied Nonlinear Anal-
ysis, pp. 335–345. Academic Press, New York (1979)
30. Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algo-
rithms 76, 259–282 (2017)
31. Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J.
Control Optim. 37, 765–776 (1999)
32. Sun, D.F.: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory
Appl. 91, 123–140 (1996)
33. Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems.
Numer. Algorithms 78, 1045–1060 (2018)
34. Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality prob-
lems. Numer. Algorithms 79, 597–610 (2018)
35. Thong, D.V., Hieu, D.V.: Inertial extragradient algorithms for strongly pseudomonotone variational
inequalities. J. Comput. Appl. Math. 341, 80–98 (2018)
36. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)