Analysis and Mathematical Physics (2019) 9:0


Two simple projection-type methods for solving variational inequalities

Aviv Gibali1,2 · Duong Viet Thong3 · Pham Anh Tuan4

Received: 17 March 2019 / Revised: 17 March 2019 / Accepted: 21 May 2019


© Springer Nature Switzerland AG 2019

Abstract
In this paper we study a classical monotone and Lipschitz continuous variational
inequality in real Hilbert spaces. Two projection-type methods, a Mann type and its
viscosity generalization, are introduced together with their strong convergence
theorems. Our methods generalize and extend some related results in the literature,
and their main advantages are the strong convergence and the adaptive step-size
usage, which avoids the need to know a priori the Lipschitz constant of the operator
associated with the variational inequality. Preliminary numerical experiments in
finite and infinite dimensional spaces compare and illustrate the behaviors of the
proposed schemes.

Keywords Projection-type method · Variational inequality · Mann-type method ·
Viscosity method · Projection and contraction method

Mathematics Subject Classification 47H09 · 47J20 · 65K15 · 90C25

Dedicated to Professor Le Dung Muu on the Occasion of his 70th Birthday.

B Duong Viet Thong


duongvietthong@[Link]
Aviv Gibali
avivg@[Link]
Pham Anh Tuan
patuan.1963@[Link]

1 Department of Mathematics, ORT Braude College, 2161002 Karmiel, Israel


2 The Center for Mathematics and Scientific Computation, University of Haifa,
3498838 Mt. Carmel, Haifa, Israel
3 Applied Analysis Research Group, Faculty of Mathematics and Statistics, Ton Duc Thang
University, Ho Chi Minh City, Vietnam
4 Faculty of Economics Mathematics, National Economics University, Hanoi City, Vietnam

0 Page 2 of 23 A. Gibali et al.

1 Introduction

In this paper, we study the classical Variational Inequality (VI) of Fichera [14,15] in
real Hilbert spaces. The VI is formulated as follows: find a point x∗ ∈ C such that

⟨Ax∗ , x − x∗ ⟩ ≥ 0 ∀x ∈ C, (1)

where C ⊆ H is a nonempty, closed and convex set of a real Hilbert space H and
A : H → H is a given mapping. We denote by V I (C, A) the solution set of the
VI (1).
Variational inequalities are fundamental problems which stand at the core of diverse
applied fields such as economics, engineering mechanics, transportation, and many
more; see, for example, [2,3,20]. In the last decades, many iterative
methods have been constructed for solving variational inequalities and their related
optimization problems, see for example the excellent book of Facchinei and Pang [13],
Konnov [20] and the many references therein.
The simplest method for solving VIs, derived from optimization theory, is
known as the gradient method (GM). The iterative step of this method requires the
calculation of the orthogonal projection onto the feasible set of the VI, that is C,
at each iteration. Given the current iterate xn , the algorithm's iterative step has the
following form:

xn+1 = PC (xn − τ Axn ), (2)


where τ ∈ (0, 1/L), L is the Lipschitz constant of A and PC denotes the metric
projection onto C. It is shown that the gradient method (2) converges under Lipschitz
continuity together with a restrictive monotonicity assumption, such as strong
monotonicity or inverse strong monotonicity; see, for example, [18]. Korpelevich [21]
(and, independently, Antipin [1]) proposed a double-projection method, known as the
extragradient method (EM), which converges in Euclidean spaces under Lipschitz
continuity and plain monotonicity. Given the current iterate xn , the algorithm's
iterative step has the following form:

yn = PC (xn − τ Axn ),
(3)
xn+1 = PC (xn − τ Ayn ),

where τ and PC are as above. This method has been studied intensively, and extended
and improved in various ways; see, e.g., [6–10,25,26,31,34,35] and the references
therein.
Although the extragradient method converges under a weaker monotonicity assump-
tion than the gradient method, it requires two projections onto C per
iteration. So, in case the set C is not “easy” to project onto, a minimum distance
subproblem has to be solved twice per iteration in order to evaluate PC , a fact
which might affect the applicability and computational complexity of the method.
To overcome this obstacle, Censor et al. [7–9] introduced the so-called
subgradient extragradient method (SEM). In this algorithm, the second projection onto
C is replaced by an easy, constructible projection onto a superset containing C.
Given the current iterate xn , the algorithm's iterative step has the following
form:


yn = PC (xn − τ Axn ),
Tn = {x ∈ H | ⟨xn − τ Axn − yn , x − yn ⟩ ≤ 0}, (4)
xn+1 = PTn (xn − τ Ayn ),

where τ ∈ (0, 1/L).
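The computational gain here is that Tn is a half-space, so the projection PTn has a closed form. A sketch of one SEM step (our own illustration; the half-space projection formula is the standard one):

```python
import numpy as np

def project_halfspace(x, a, y):
    """Projection onto T = {u : <a, u - y> <= 0}: closed form, no subproblem."""
    viol = np.dot(a, x - y)
    if viol <= 0 or not np.any(a):
        return x                     # x already in T (T is all of H when a = 0)
    return x - (viol / np.dot(a, a)) * a

def sem_step(x, A, tau, proj_C):
    """One subgradient extragradient iteration (4): one projection onto C only."""
    y = proj_C(x - tau * A(x))
    a = x - tau * A(x) - y           # normal vector defining the half-space T_n
    return project_halfspace(x - tau * A(y), a, y)
```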


Another method which uses only one projection onto C is the projection and con-
traction method (PC) of He [17] (see also Sun [32]). In this method, the point yn is
calculated in the same spirit as in (3), but the next iterate xn+1 is calculated via an
adaptive step-size rule. Given the current iterate xn , the algorithm's iterative step has
the following form:

yn = PC (xn − τn Axn ),

and then the next iterate xn+1 is generated via the following PC update:

xn+1 = xn − γ ηn d(xn , yn ), (5)

where γ ∈ (0, 2), τn ∈ (0, 1/L) (or τn is updated by some self-adaptive rule),

d(xn , yn ) := xn − yn − τn (Axn − Ayn ),

and

ηn := ⟨xn − yn , d(xn , yn )⟩/‖d(xn , yn )‖2 .
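One projection-and-contraction update can be written compactly as below (a sketch under our own generic choices of A and P_C, not the authors' code):

```python
import numpy as np

def pc_step(x, A, tau, gamma, proj_C):
    """One projection-and-contraction iteration (5) of He [17]."""
    y = proj_C(x - tau * A(x))
    d = x - y - tau * (A(x) - A(y))      # d(x_n, y_n)
    dd = np.dot(d, d)
    if dd == 0.0:                        # then x_n = y_n, i.e. x_n solves the VI
        return y
    eta = np.dot(x - y, d) / dd          # adaptive step size eta_n
    return x - gamma * eta * d           # contraction step, gamma in (0, 2)
```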

Recently, projection and contraction type methods for solving VIs have received
great attention from many authors; see, e.g., [4,11,12].
Since the SEM and PC algorithms were originally introduced in Euclidean spaces, a nat-
ural question is how to extend them to infinite dimensional
spaces and obtain strong convergence. In 2012, Censor et al. [8] proposed two subgra-
dient extragradient variants which converge strongly in real Hilbert spaces. One of
the SEM variants has the following form. Given the current iterate xn , the next iterate
xn+1 is calculated via the following:


yn = PC (xn − τ Axn ),
Tn = {x ∈ H | ⟨xn − τ Axn − yn , x − yn ⟩ ≤ 0},
zn = αn xn + (1 − αn )PTn (xn − τ Ayn ),
Cn = {w ∈ H | ‖zn − w‖ ≤ ‖xn − w‖}, (6)
Qn = {w ∈ H | ⟨xn − w, x0 − xn ⟩ ≥ 0},
xn+1 = PCn ∩Qn x0 , ∀n ≥ 0.
Inspired by the results in [8], Kraikaew and Saejung [22] combined the subgradient
extragradient method with the Halpern method and proposed the so-called Halpern
subgradient extragradient method. Given the current iterate xn , the next iterate xn+1
is calculated via the following:


yn = PC (xn − τ Axn ),
Tn = {x ∈ H | ⟨xn − τ Axn − yn , x − yn ⟩ ≤ 0},
zn = PTn (xn − τ Ayn ), (7)
xn+1 = αn x0 + (1 − αn )zn , ∀n ≥ 0,

where τ ∈ (0, 1/L), {αn } ⊂ (0, 1), limn→∞ αn = 0, Σ∞n=1 αn = +∞ and x0 ∈ H .
Similar to (6) of Censor et al. [8], (7) converges strongly to the specific point p =
PV I (C,A) x0 .
Two other very recent and related viscosity-type methods, which are also used
for comparison with our methods in Sect. 4, are Shehu and Iyiola [30, Algorithm 3.1]
and Thong and Hieu [33, Algorithm 3].
The setting of Shehu and Iyiola [30, Algorithm 3.1] is as follows. Given ρ, μ ∈
(0, 1), let {αn }∞n=0 ⊂ (0, 1), let f be a contraction and choose an arbitrary starting point
x1 ∈ H . Given the current iterate xn , calculate

yn = PC (xn − λn Axn ),

where λn = ρ^ln and ln is the smallest nonnegative integer l such that

λn ‖Axn − Ayn ‖ ≤ μ‖rρ^ln (xn )‖,

where rρ^l (xn ) := xn − PC (xn − ρ^l Axn ). Construct the set Tn as in (4) and compute

z n = PTn (xn − λn Ayn ),

and calculate the next iterate as follows.

xn+1 = αn f (xn ) + (1 − αn )z n . (8)

The setting of Thong and Hieu [33, Algorithm 3] is as follows. Given ρ ∈ [0, 1),
μ, l ∈ (0, 1) and γ > 0, let {αn }∞n=0 ⊂ (0, 1), let f be a contraction and choose an
arbitrary starting point x1 ∈ H . Given the current iterate xn , calculate

yn = PC (xn − λn Axn ),

where λn is chosen to be the largest λ ∈ {γ , γ l, γ l 2 , · · · } satisfying

λ‖Axn − Ayn ‖ ≤ μ‖xn − yn ‖.

Calculate the next iterate as follows.

xn+1 = αn f (xn ) + (1 − αn )z n (9)

where z n = yn − λn (Ayn − Axn ).


Motivated and inspired by the above results and the ongoing research in these direc-
tions, we suggest two modified projection-type methods, a Mann type [27] and a
viscosity type [28], for solving monotone and Lipschitz continuous variational
inequalities, which converge strongly in real Hilbert spaces and do not require a priori
knowledge of the Lipschitz constant of A.
The paper is organized as follows. We first recall some basic definitions and results
in Sect. 2. Our algorithms are presented and analysed in Sect. 3. In Sect. 4 we present
some numerical experiments which demonstrate the algorithms' performances and
provide a preliminary computational comparison with some related algorithms.
Final remarks and conclusions are given in Sect. 5.

2 Preliminaries

Let H be a real Hilbert space and C be a nonempty, closed and convex subset of
H . The weak convergence of {xn }∞n=1 to x is denoted by xn ⇀ x as n → ∞, while
the strong convergence of {xn }∞n=1 to x is written as xn → x as n → ∞. For each
x, y ∈ H and α ∈ R, we have

‖x + y‖2 ≤ ‖x‖2 + 2⟨y, x + y⟩. (10)
‖αx + (1 − α)y‖2 = α‖x‖2 + (1 − α)‖y‖2 − α(1 − α)‖x − y‖2 . (11)
‖αx + βy + γ z‖2 = α‖x‖2 + β‖y‖2 + γ ‖z‖2 − αβ‖x − y‖2 − αγ ‖x − z‖2 − βγ ‖y − z‖2 (12)

for all x, y, z ∈ H and for all α, β, γ ∈ [0, 1] with α + β + γ = 1.


Definition 2.1 Let T : H → H be an operator. Then
1. T is called L-Lipschitz continuous with L > 0 if

‖T x − T y‖ ≤ L‖x − y‖ ∀x, y ∈ H . (13)

If L = 1 then T is called nonexpansive, and if L ∈ (0, 1), T is called a contraction.
2. T is called monotone if

⟨T x − T y, x − y⟩ ≥ 0 ∀x, y ∈ H . (14)

3. The fixed point set of T , denoted by Fix(T ), is defined as follows:

Fix(T ) := {x ∈ H | T x = x}. (15)


0 Page 6 of 23 A. Gibali et al.

For every point x ∈ H , there exists a unique nearest point in C, denoted by PC x,
such that ‖x − PC x‖ ≤ ‖x − y‖ ∀y ∈ C. PC is called the metric projection of H onto
C. It is known that PC is nonexpansive.
Lemma 2.1 [16] Let C be a nonempty closed convex subset of a real Hilbert space H .
Given x ∈ H and z ∈ C. Then z = PC x ⇐⇒ ⟨x − z, z − y⟩ ≥ 0 ∀y ∈ C.

Lemma 2.2 [16] Let C be a closed and convex subset of a real Hilbert space H and
x ∈ H . Then
i) ‖PC x − PC y‖2 ≤ ⟨PC x − PC y, x − y⟩ ∀y ∈ C;
ii) ‖PC x − y‖2 ≤ ‖x − y‖2 − ‖x − PC x‖2 ∀y ∈ C;
iii) ⟨(I − PC )x − (I − PC )y, x − y⟩ ≥ ‖(I − PC )x − (I − PC )y‖2 ∀y ∈ C.

For properties of the metric projection, the interested reader is referred to
Section 3 in [16].
The following lemmas are useful for the convergence analysis of our proposed methods.
Lemma 2.3 [22] Let A : H → H be a monotone and L-Lipschitz continuous mapping
on C. Let S = PC (I − τ A), where τ > 0. If {xn } is a sequence in H satisfying xn ⇀ q
and ‖xn − Sxn ‖ → 0, then q ∈ V I (C, A) = Fix(S).

Lemma 2.4 [24] Let {an } be a sequence of nonnegative real numbers such that there
exists a subsequence {anj } of {an } with anj < anj +1 for all j ∈ N. Then there exists
a nondecreasing sequence {mk } of N such that limk→∞ mk = ∞ and the following
properties are satisfied by all (sufficiently large) numbers k ∈ N:

amk ≤ amk +1 and ak ≤ amk +1 .

In fact, mk is the largest number n in the set {1, 2, . . . , k} such that an < an+1 .

The next technical lemma is very useful and has been used by many authors, for example
Liu [23] and Xu [36]. Furthermore, a variant of Lemma 2.5 was already used by
Reich in [29].

Lemma 2.5 Let {an } be a sequence of nonnegative real numbers such that

an+1 ≤ (1 − αn )an + αn bn ,

where {αn } ⊂ (0, 1) and {bn } is a sequence such that
a) Σ∞n=0 αn = ∞;
b) lim supn→∞ bn ≤ 0.
Then limn→∞ an = 0.
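The lemma can be checked on a toy recursion satisfying a) and b); the particular sequences αn = 1/(n + 1) and bn = 1/n below are our own illustrative choices, not taken from the paper:

```python
def xu_recursion(n_steps):
    """Iterate a_{n+1} = (1 - alpha_n) a_n + alpha_n b_n and return the last term."""
    a = 1.0
    for n in range(1, n_steps + 1):
        alpha = 1.0 / (n + 1)   # {alpha_n} in (0, 1) with divergent sum: condition a)
        b = 1.0 / n             # lim sup b_n = 0: condition b)
        a = (1 - alpha) * a + alpha * b
    return a
```

For these choices one can check that an behaves like (1 + log n)/n, so it tends to 0, although slowly.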

3 Main results

In this section we introduce our two modified projection-type methods for solving VIs.
For the convergence analysis of the methods, we assume the following conditions.
Condition 3.1 The operator A : H → H associated with the VI (1) is monotone and
L-Lipschitz continuous on H .

Condition 3.2 The solution set of the VI (1) is nonempty, that is, V I (C, A) ≠ ∅.

Condition 3.3 Let {αn } and {βn } be two real sequences in (0, 1) such that {βn } ⊂
(a, b) ⊂ (0, 1 − αn ) for some a > 0, b > 0 and

limn→∞ αn = 0, Σ∞n=1 αn = ∞.

3.1 Mann-type projection algorithm

Algorithm 3.1

Initialization: Given λ > 0, l ∈ (0, 1), μ ∈ (0, 1), γ ∈ (0, 2), let x0 ∈ H be
arbitrary.

Iterative Steps: Given the current iterate xn , calculate xn+1 as follows:

Step 1. Compute

yn = PC (xn − τn Axn ),

where τn is chosen to be the largest τ ∈ {λ, λl, λl 2 , ...} satisfying

τ ‖Axn − Ayn ‖ ≤ μ‖xn − yn ‖. (16)

If xn = yn then stop: yn is a solution of V I (C, A). Otherwise,


Step 2. Compute
z n = xn − γ ηn dn ,
where

dn := xn − yn − τn (Axn − Ayn ),

and
ηn := (1 − μ)‖xn − yn ‖2 /‖dn ‖2 .
Step 3. Compute
xn+1 = (1 − αn − βn )xn + βn z n .
Set n := n + 1 and go to Step 1.
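For concreteness, Algorithm 3.1 can be sketched as follows. This is our own illustration, not the authors' code; the sequences αn = 1/(n + 1) and βn = n/(2(n + 1)) are sample choices satisfying Condition 3.3, and a small tolerance replaces the exact test xn = yn:

```python
import numpy as np

def mann_pc_solve(A, proj_C, x0, lam=1.0, l=0.5, mu=0.85, gamma=1.5,
                  max_iter=500, tol=1e-10):
    """Sketch of Algorithm 3.1 (Mann-type projection method)."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, max_iter + 1):
        # Step 1: Armijo-type rule (16), largest tau in {lam, lam*l, lam*l^2, ...}
        tau = lam
        y = proj_C(x - tau * A(x))
        while tau * np.linalg.norm(A(x) - A(y)) > mu * np.linalg.norm(x - y):
            tau *= l
            y = proj_C(x - tau * A(x))
        if np.linalg.norm(x - y) < tol:       # x_n = y_n: y_n solves the VI
            return y
        # Step 2: projection-and-contraction update
        d = x - y - tau * (A(x) - A(y))
        eta = (1 - mu) * np.dot(x - y, x - y) / np.dot(d, d)
        z = x - gamma * eta * d
        # Step 3: Mann step with sample alpha_n, beta_n satisfying Condition 3.3
        alpha, beta = 1.0 / (n + 1), n / (2.0 * (n + 1))
        x = (1 - alpha - beta) * x + beta * z
    return x
```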

We start the analysis of the algorithm’s convergence by proving the validity of the
stopping criterion.
0 Page 8 of 23 A. Gibali et al.

Lemma 3.1 Assume that Conditions 3.1–3.2 hold. The Armijo-like search rule (16) is
well defined and

min{λ, μl/L} ≤ τn ≤ λ.

Proof See, e.g., Lemma 3.1 in [33]. □

Lemma 3.2 Let {dn } be a sequence generated by Algorithm 3.1. Then dn = 0 if and
only if xn = yn .

Proof Indeed, we will prove that

(1 − μ)‖xn − yn ‖ ≤ ‖dn ‖ ≤ (1 + μ)‖xn − yn ‖. (17)

We have

‖dn ‖ = ‖xn − yn − τn (Axn − Ayn )‖
≥ ‖xn − yn ‖ − τn ‖Axn − Ayn ‖
≥ ‖xn − yn ‖ − μ‖xn − yn ‖
= (1 − μ)‖xn − yn ‖. (18)

and it is also easy to see that

‖dn ‖ ≤ (1 + μ)‖xn − yn ‖. (19)

Combining (18) and (19) we obtain

(1 − μ)‖xn − yn ‖ ≤ ‖dn ‖ ≤ (1 + μ)‖xn − yn ‖.

It follows from (17) that dn = 0 if and only if xn = yn . □




Remark 3.1 By Lemma 3.2, if dn = 0 then the algorithm stops and yn is a solution of
V I (C, A).

Lemma 3.3 Assume that Conditions 3.1 and 3.2 hold. Let {zn } be a sequence generated
by Algorithm 3.1. Then

‖zn − p‖2 ≤ ‖xn − p‖2 − ((2 − γ )/γ )‖xn − zn ‖2 ∀ p ∈ V I (C, A). (20)

Proof Using (16) we have

⟨xn − p, dn ⟩ = ⟨xn − yn , dn ⟩ + ⟨yn − p, dn ⟩
= ⟨xn − yn , xn − yn − τn (Axn − Ayn )⟩ + ⟨yn − p, xn − yn − τn (Axn − Ayn )⟩
= ‖xn − yn ‖2 − τn ⟨xn − yn , Axn − Ayn ⟩ + ⟨yn − p, xn − yn − τn (Axn − Ayn )⟩
≥ ‖xn − yn ‖2 − τn ‖xn − yn ‖‖Axn − Ayn ‖ + ⟨yn − p, xn − yn − τn (Axn − Ayn )⟩
≥ ‖xn − yn ‖2 − μ‖xn − yn ‖2 + ⟨yn − p, xn − yn − τn (Axn − Ayn )⟩. (21)

On the other hand, since yn = PC (xn − τn Axn ) and p ∈ C, Lemma 2.1 gives

⟨xn − τn Axn − yn , yn − p⟩ ≥ 0. (22)

By the monotonicity of A and p ∈ V I (C, A) we have

⟨Ayn , yn − p⟩ ≥ ⟨Ap, yn − p⟩ ≥ 0. (23)

Adding (22) and τn times (23), we get

⟨yn − p, xn − yn − τn (Axn − Ayn )⟩ ≥ 0. (24)

Combining (21) and (24) we get

⟨xn − p, dn ⟩ ≥ (1 − μ)‖xn − yn ‖2 . (25)

On the other hand, we have

‖zn − p‖2 = ‖xn − γ ηn dn − p‖2
= ‖xn − p‖2 − 2γ ηn ⟨xn − p, dn ⟩ + γ 2 ηn2 ‖dn ‖2 . (26)

It follows from (25) and (26) that

‖zn − p‖2 ≤ ‖xn − p‖2 − 2γ ηn (1 − μ)‖xn − yn ‖2 + γ 2 ηn2 ‖dn ‖2 .

Since ηn = (1 − μ)‖xn − yn ‖2 /‖dn ‖2 , we have (1 − μ)‖xn − yn ‖2 = ηn ‖dn ‖2 . Thus,

‖zn − p‖2 ≤ ‖xn − p‖2 − 2γ ηn2 ‖dn ‖2 + γ 2 ηn2 ‖dn ‖2
= ‖xn − p‖2 − γ (2 − γ )‖ηn dn ‖2
= ‖xn − p‖2 − ((2 − γ )/γ )‖γ ηn dn ‖2
= ‖xn − p‖2 − ((2 − γ )/γ )‖xn − zn ‖2 . □



0 Page 10 of 23 A. Gibali et al.

Lemma 3.4 Assume that Conditions 3.1–3.2 hold and let the sequence {xn } be gener-
ated by Algorithm 3.1. Then

‖xn − yn ‖2 ≤ ((1 + μ)2 /[(1 − μ)γ ]2 )‖xn − zn ‖2 . (27)

Proof We have

‖xn − yn ‖2 = (1/(1 − μ))ηn ‖dn ‖2 = (1/(ηn (1 − μ)))‖ηn dn ‖2
= (1/(ηn (1 − μ)γ 2 ))‖xn − zn ‖2 . (28)

On the other hand, from (17) we get

ηn = (1 − μ)‖xn − yn ‖2 /‖dn ‖2 ≥ (1 − μ)/(1 + μ)2 ,

thus,

1/ηn ≤ (1 + μ)2 /(1 − μ). (29)

It follows from (28) and (29) that

‖xn − yn ‖2 ≤ ((1 + μ)2 /[(1 − μ)γ ]2 )‖xn − zn ‖2 . □



Theorem 3.1 Assume that Conditions 3.1–3.3 hold. Then any sequence {xn } generated
by Algorithm 3.1 converges strongly to p ∈ V I (C, A), where ‖ p‖ = min{‖z‖ : z ∈
V I (C, A)}.
Proof Thanks to Lemma 3.3 we get

‖zn − p‖ ≤ ‖xn − p‖ ∀n. (30)

Claim 1. We prove that the sequence {xn } is bounded. We have

‖xn+1 − p‖ = ‖(1 − αn − βn )xn + βn zn − p‖
= ‖(1 − αn − βn )(xn − p) + βn (zn − p) − αn p‖
≤ ‖(1 − αn − βn )(xn − p) + βn (zn − p)‖ + αn ‖ p‖. (31)

On the other hand, using (30) we get

‖(1 − αn − βn )(xn − p) + βn (zn − p)‖2
= (1 − αn − βn )2 ‖xn − p‖2 + 2(1 − αn − βn )βn ⟨xn − p, zn − p⟩ + βn2 ‖zn − p‖2
≤ (1 − αn − βn )2 ‖xn − p‖2 + 2(1 − αn − βn )βn ‖zn − p‖‖xn − p‖ + βn2 ‖zn − p‖2
≤ (1 − αn − βn )2 ‖xn − p‖2 + 2(1 − αn − βn )βn ‖xn − p‖2 + βn2 ‖xn − p‖2
= (1 − αn )2 ‖xn − p‖2 .

This implies that

‖(1 − αn − βn )(xn − p) + βn (zn − p)‖ ≤ (1 − αn )‖xn − p‖ ∀n. (32)

From (31) and (32) we get

‖xn+1 − p‖ ≤ (1 − αn )‖xn − p‖ + αn ‖ p‖
≤ max{‖xn − p‖, ‖ p‖}
≤ · · · ≤ max{‖x0 − p‖, ‖ p‖}.

That is, the sequence {xn } is bounded, and so is {zn }.


Claim 2. We show that

((2 − γ )/γ )βn ‖xn − zn ‖2 ≤ ‖xn − p‖2 − ‖xn+1 − p‖2 + αn ‖ p‖2 . (33)

Indeed, using (12) we have

‖xn+1 − p‖2 = ‖(1 − αn − βn )xn + βn zn − p‖2
= ‖(1 − αn − βn )(xn − p) + βn (zn − p) + αn (− p)‖2
= (1 − αn − βn )‖xn − p‖2 + βn ‖zn − p‖2 + αn ‖ p‖2 − βn (1 − αn − βn )‖xn − zn ‖2
− αn (1 − αn − βn )‖xn ‖2 − αn βn ‖zn ‖2
≤ (1 − αn − βn )‖xn − p‖2 + βn ‖zn − p‖2 + αn ‖ p‖2 , (34)

which, together with Lemma 3.3, gives

‖xn+1 − p‖2 ≤ (1 − αn − βn )‖xn − p‖2 + βn ‖xn − p‖2 − ((2 − γ )/γ )βn ‖xn − zn ‖2 + αn ‖ p‖2
= (1 − αn )‖xn − p‖2 − ((2 − γ )/γ )βn ‖xn − zn ‖2 + αn ‖ p‖2
≤ ‖xn − p‖2 − ((2 − γ )/γ )βn ‖xn − zn ‖2 + αn ‖ p‖2 . (35)
Therefore, we get

((2 − γ )/γ )βn ‖xn − zn ‖2 ≤ ‖xn − p‖2 − ‖xn+1 − p‖2 + αn ‖ p‖2 .

Claim 3. We show that

‖xn+1 − p‖2 ≤ (1 − αn )‖xn − p‖2 + αn [2βn ‖xn − zn ‖‖xn+1 − p‖ + 2⟨ p, p − xn+1 ⟩]. (36)

Indeed, set tn = (1 − βn )xn + βn zn . We have

‖tn − p‖ = ‖(1 − βn )(xn − p) + βn (zn − p)‖
≤ (1 − βn )‖xn − p‖ + βn ‖zn − p‖
≤ (1 − βn )‖xn − p‖ + βn ‖xn − p‖
= ‖xn − p‖, (37)

and

‖tn − xn ‖ = βn ‖xn − zn ‖. (38)

Using (37) and (38) we get

‖xn+1 − p‖2 = ‖(1 − αn − βn )xn + βn zn − p‖2
= ‖(1 − βn )xn + βn zn − αn xn − p‖2
= ‖(1 − αn )(tn − p) − αn (xn − tn ) − αn p‖2
≤ (1 − αn )2 ‖tn − p‖2 − 2⟨αn (xn − tn ) + αn p, xn+1 − p⟩
= (1 − αn )2 ‖tn − p‖2 + 2αn ⟨xn − tn , p − xn+1 ⟩ + 2αn ⟨ p, p − xn+1 ⟩
≤ (1 − αn )‖tn − p‖2 + 2αn ‖xn − tn ‖‖xn+1 − p‖ + 2αn ⟨ p, p − xn+1 ⟩
≤ (1 − αn )‖xn − p‖2 + αn [2βn ‖xn − zn ‖‖xn+1 − p‖ + 2⟨ p, p − xn+1 ⟩].

Claim 4. Now we show that the sequence {‖xn − p‖2 } converges to zero by
considering two possible cases.
Case 1: There exists an N ∈ N such that ‖xn+1 − p‖2 ≤ ‖xn − p‖2 for all n ≥ N .
This implies that limn→∞ ‖xn − p‖2 exists. It follows from Claim 2 that

limn→∞ ‖xn − zn ‖ = 0,

which, together with Lemma 3.4, gives

limn→∞ ‖xn − yn ‖ = 0.
Two simple projection-type methods for solving variational… Page 13 of 23 0

We also have

‖xn+1 − xn ‖ ≤ αn ‖xn ‖ + βn ‖xn − zn ‖ → 0 as n → ∞.

Since {xn } is bounded, we may assume that there exists a subsequence {xnj } of {xn }
such that xnj ⇀ q and

lim supn→∞ ⟨ p, p − xn ⟩ = limj→∞ ⟨ p, p − xnj ⟩ = ⟨ p, p − q⟩.

We have xnj ⇀ q, min{λ, μl/L} ≤ τn ≤ λ and ‖xn − yn ‖ = ‖xn − PC (xn − τn Axn )‖ → 0,
so by Lemma 2.3 we get q ∈ V I (C, A).
Since q ∈ V I (C, A) and ‖ p‖ = min{‖z‖ : z ∈ V I (C, A)}, that is, p = PV I (C,A) 0,
we obtain

lim supn→∞ ⟨ p, p − xn ⟩ = ⟨ p, p − q⟩ ≤ 0.

By ‖xn+1 − xn ‖ → 0 we get

lim supn→∞ ⟨ p, p − xn+1 ⟩ ≤ 0.

Therefore, by Claim 3 and Lemma 2.5 we get limn→∞ ‖xn − p‖2 = 0, that is, xn → p.


Case 2: There exists a subsequence {‖xnj − p‖2 } of {‖xn − p‖2 } such that ‖xnj −
p‖2 < ‖xnj +1 − p‖2 for all j ∈ N. In this case, it follows from Lemma 2.4 that
there exists a nondecreasing sequence {mk } of N such that limk→∞ mk = ∞ and the
following inequalities hold for all k ∈ N:

‖xmk − p‖2 ≤ ‖xmk +1 − p‖2 and ‖xk − p‖2 ≤ ‖xmk +1 − p‖2 . (39)

Since {βn } ⊂ (a, b), it follows from Claim 2 that

((2 − γ )/γ )a‖xmk − zmk ‖2 ≤ ((2 − γ )/γ )βmk ‖xmk − zmk ‖2
≤ ‖xmk − p‖2 − ‖xmk +1 − p‖2 + αmk ‖ p‖2
≤ αmk ‖ p‖2 .

Therefore, we get

limk→∞ ‖xmk − zmk ‖ = 0. (40)

As proved in the first case, we obtain

‖xmk +1 − xmk ‖ → 0
and

lim supk→∞ ⟨ p, p − xmk +1 ⟩ ≤ 0.

By Claim 3 we have

‖xmk +1 − p‖2 ≤ (1 − αmk )‖xmk − p‖2 + αmk [2βmk ‖xmk − zmk ‖‖xmk +1 − p‖ + 2⟨ p, p − xmk +1 ⟩]
≤ (1 − αmk )‖xmk +1 − p‖2 + αmk [2βmk ‖xmk − zmk ‖‖xmk +1 − p‖ + 2⟨ p, p − xmk +1 ⟩].

This implies that

‖xk − p‖2 ≤ ‖xmk +1 − p‖2 ≤ 2βmk ‖xmk − zmk ‖‖xmk +1 − p‖ + 2⟨ p, p − xmk +1 ⟩.

Therefore, we obtain lim supk→∞ ‖xk − p‖2 ≤ 0, that is, xk → p. The proof is
completed. □


3.2 Viscosity projection type algorithm

In this section, we propose our viscosity projection type algorithm for solving varia-
tional inequalities, which uses a ρ-contraction f : H → H .
Algorithm 3.2

Initialization: Given λ > 0, l ∈ (0, 1), μ ∈ (0, 1), γ ∈ (0, 2), let x0 ∈ H be
arbitrary.

Iterative Steps: Given the current iterate xn , calculate the next iterate xn+1 as follows:

Step 1. Compute

yn = PC (xn − τn Axn ),

where τn is chosen to be the largest τ ∈ {λ, λl, λl 2 , ...} satisfying

τ ‖Axn − Ayn ‖ ≤ μ‖xn − yn ‖. (41)

If xn = yn then stop: yn is a solution of V I (C, A). Otherwise,


Step 2. Compute
z n = xn − γ ηn dn ,
where

ηn := (1 − μ)‖xn − yn ‖2 /‖dn ‖2 ,

and

dn := xn − yn − τn (Axn − Ayn ).

Step 3. Compute
xn+1 = αn f (xn ) + (1 − αn )z n .
Set n := n + 1 and go to Step 1.
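Algorithm 3.2 differs from Algorithm 3.1 only in Step 3, which becomes a viscosity step driven by the contraction f. A sketch (again our own illustration, not the authors' code; αn = 1/(n + 1) is a sample sequence):

```python
import numpy as np

def viscosity_pc_solve(A, proj_C, f, x0, lam=1.0, l=0.5, mu=0.85, gamma=1.5,
                       max_iter=500, tol=1e-10):
    """Sketch of Algorithm 3.2: Steps 1-2 as in Algorithm 3.1, viscosity Step 3."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, max_iter + 1):
        tau = lam
        y = proj_C(x - tau * A(x))
        while tau * np.linalg.norm(A(x) - A(y)) > mu * np.linalg.norm(x - y):
            tau *= l                          # Armijo-type backtracking (41)
            y = proj_C(x - tau * A(x))
        if np.linalg.norm(x - y) < tol:       # x_n = y_n: y_n solves the VI
            return y
        d = x - y - tau * (A(x) - A(y))
        eta = (1 - mu) * np.dot(x - y, x - y) / np.dot(d, d)
        z = x - gamma * eta * d
        alpha = 1.0 / (n + 1)                 # sample alpha_n
        x = alpha * f(x) + (1 - alpha) * z    # viscosity step
    return x
```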

Theorem 3.2 Assume that Conditions 3.1–3.2 hold and let f : H → H be a ρ-
contraction. Assume that {αn } is a real sequence in (0, 1) such that

limn→∞ αn = 0, Σ∞n=1 αn = ∞.

Then any sequence {xn } generated by Algorithm 3.2 converges strongly to an element
p ∈ V I (C, A), where p = PV I (C,A) ( f ( p)).

Proof Claim 1. We prove that {xn } is bounded. Indeed, according to Lemma 3.3
we have

‖zn − p‖ ≤ ‖xn − p‖. (42)

Using (42) we obtain

‖xn+1 − p‖ = ‖αn f (xn ) + (1 − αn )zn − p‖
= ‖αn ( f (xn ) − p) + (1 − αn )(zn − p)‖
≤ αn ‖ f (xn ) − p‖ + (1 − αn )‖zn − p‖
≤ αn ‖ f (xn ) − f ( p)‖ + αn ‖ f ( p) − p‖ + (1 − αn )‖zn − p‖
≤ αn ρ‖xn − p‖ + αn ‖ f ( p) − p‖ + (1 − αn )‖xn − p‖
= [1 − αn (1 − ρ)]‖xn − p‖ + αn (1 − ρ) · ‖ f ( p) − p‖/(1 − ρ)
≤ max{‖xn − p‖, ‖ f ( p) − p‖/(1 − ρ)}
≤ · · · ≤ max{‖x0 − p‖, ‖ f ( p) − p‖/(1 − ρ)}.

This implies that the sequence {xn } is bounded. Consequently, { f (xn )}, {yn } and {zn }
are bounded.
Claim 2. We show that

(1 − αn ) · ((2 − γ )/γ )‖xn − zn ‖2 ≤ ‖xn − p‖2 − ‖xn+1 − p‖2 + αn ‖ f (xn ) − p‖2 .
Indeed, using (11) and Lemma 3.3 we have

‖xn+1 − p‖2 = ‖αn ( f (xn ) − p) + (1 − αn )(zn − p)‖2
= αn ‖ f (xn ) − p‖2 + (1 − αn )‖zn − p‖2 − αn (1 − αn )‖ f (xn ) − zn ‖2
≤ αn ‖ f (xn ) − p‖2 + (1 − αn )‖zn − p‖2
≤ αn ‖ f (xn ) − p‖2 + (1 − αn )‖xn − p‖2 − (1 − αn ) · ((2 − γ )/γ )‖xn − zn ‖2
≤ αn ‖ f (xn ) − p‖2 + ‖xn − p‖2 − (1 − αn ) · ((2 − γ )/γ )‖xn − zn ‖2 .

This implies that

(1 − αn ) · ((2 − γ )/γ )‖xn − zn ‖2 ≤ ‖xn − p‖2 − ‖xn+1 − p‖2 + αn ‖ f (xn ) − p‖2 .

Claim 3. We show that

‖xn+1 − p‖2 ≤ (1 − (1 − ρ)αn )‖xn − p‖2 + (1 − ρ)αn · (2/(1 − ρ))⟨ f ( p) − p, xn+1 − p⟩.

Indeed, using (10) and (42) we have

‖xn+1 − p‖2 = ‖αn f (xn ) + (1 − αn )zn − p‖2
= ‖αn ( f (xn ) − f ( p)) + (1 − αn )(zn − p) + αn ( f ( p) − p)‖2
≤ ‖αn ( f (xn ) − f ( p)) + (1 − αn )(zn − p)‖2 + 2αn ⟨ f ( p) − p, xn+1 − p⟩
≤ αn ‖ f (xn ) − f ( p)‖2 + (1 − αn )‖zn − p‖2 + 2αn ⟨ f ( p) − p, xn+1 − p⟩
≤ αn ρ‖xn − p‖2 + (1 − αn )‖xn − p‖2 + 2αn ⟨ f ( p) − p, xn+1 − p⟩
= (1 − (1 − ρ)αn )‖xn − p‖2 + (1 − ρ)αn · (2/(1 − ρ))⟨ f ( p) − p, xn+1 − p⟩. (43)

Claim 4. Now we show that the sequence {‖xn − p‖2 } converges to zero by
considering two possible cases.
Case 1: There exists an N ∈ N such that ‖xn+1 − p‖2 ≤ ‖xn − p‖2 for all n ≥ N .
This implies that limn→∞ ‖xn − p‖2 exists. From Claim 2 and limn→∞ αn = 0 we get

limn→∞ ‖xn − zn ‖ = 0, (44)
Two simple projection-type methods for solving variational… Page 17 of 23 0

and by Lemma 3.4

limn→∞ ‖xn − yn ‖ = 0. (45)

We also have

‖xn+1 − xn ‖ = ‖αn f (xn ) + (1 − αn )zn − xn ‖
≤ αn ‖ f (xn ) − xn ‖ + (1 − αn )‖zn − xn ‖ → 0. (46)

Since the sequence {xn } is bounded, there exists a subsequence {xnk } of {xn }
converging weakly to some z ∈ H such that

lim supn→∞ ⟨ f ( p) − p, xn − p⟩ = limk→∞ ⟨ f ( p) − p, xnk − p⟩ = ⟨ f ( p) − p, z − p⟩. (47)

From (45) and Lemma 2.3 we have z ∈ V I (C, A). By the definition of p and
z ∈ V I (C, A) we have

lim supn→∞ ⟨ f ( p) − p, xn − p⟩ = ⟨ f ( p) − p, z − p⟩ ≤ 0, (48)

which, together with (46) and (47), gives

lim supn→∞ ⟨ f ( p) − p, xn+1 − p⟩ ≤ lim supn→∞ ⟨ f ( p) − p, xn+1 − xn ⟩ + lim supn→∞ ⟨ f ( p) − p, xn − p⟩
= ⟨ f ( p) − p, z − p⟩ ≤ 0. (49)

Using Lemma 2.5, (49) and Claim 3 we obtain xn → p.


Case 2: There exists a subsequence {‖xnj − p‖2 } of {‖xn − p‖2 } such that ‖xnj −
p‖2 < ‖xnj +1 − p‖2 for all j ∈ N. In this case, it follows from Lemma 2.4 that
there exists a nondecreasing sequence {mk } of N such that limk→∞ mk = ∞ and the
following inequalities hold for all k ∈ N:

‖xmk − p‖2 ≤ ‖xmk +1 − p‖2 , (50)

and

‖xk − p‖2 ≤ ‖xmk − p‖2 . (51)

According to Claim 2 we get

(1 − αmk ) · ((2 − γ )/γ )‖xmk − zmk ‖2 ≤ ‖xmk − p‖2 − ‖xmk +1 − p‖2 + αmk ‖ f (xmk ) − p‖2
≤ αmk ‖ f (xmk ) − p‖2 .

We obtain

limk→∞ ‖xmk − zmk ‖ = 0, (52)

and by Lemma 3.4 we get

limk→∞ ‖xmk − ymk ‖ = 0. (53)

Using the same arguments as in the proof of Case 1, we obtain

lim supk→∞ ⟨ f ( p) − p, xmk +1 − p⟩ ≤ 0. (54)

Thanks to Claim 3, we have

‖xmk +1 − p‖2 ≤ (1 − (1 − ρ)αmk )‖xmk − p‖2 + (1 − ρ)αmk · (2/(1 − ρ))⟨ f ( p) − p, xmk +1 − p⟩, (55)

which, together with (50), yields

‖xmk +1 − p‖2 ≤ (1 − (1 − ρ)αmk )‖xmk +1 − p‖2 + (1 − ρ)αmk · (2/(1 − ρ))⟨ f ( p) − p, xmk +1 − p⟩.

It follows that

‖xmk +1 − p‖2 ≤ (2/(1 − ρ))⟨ f ( p) − p, xmk +1 − p⟩. (56)

Combining (51), (54) and (56) we get

lim supk→∞ ‖xk − p‖2 ≤ 0, (57)

that is, xk → p. The proof is completed. □




4 Numerical illustrations

In this section we present two numerical experiments which demonstrate the perfor-
mances of our Mann-type and viscosity-type projection algorithms (Algorithms 3.1 and
3.2) in finite and infinite dimensional spaces. In both experiments the parameters are
chosen as λ = 7.55, l = 0.5, μ = 0.85, γ = 1.99, αk = 1/k and βk = (k − 1)/(2k).
Table 1 Algorithm 3.1 with different starting points

x1 (t)                                No. of iterations    CPU time
(1/600)[sin(−3t) + cos(−10t)]         13                   0.0625
(1/525)(t2 − e−t )                    13                   0.078125

Fig. 1 x1 (t) = (1/600)[sin(−3t) + cos(−10t)]

Example 1 Suppose that H = L2 ([0, 1]) with norm ‖x‖ := (∫01 |x(t)|2 dt)1/2 and
inner product ⟨x, y⟩ := ∫01 x(t)y(t) dt, ∀x, y ∈ H . Let C := {x ∈ H | ‖x‖ ≤ 1} be
the unit ball. Define the operator A : C → H by (Ax)(t) = max(0, x(t)). Then it can be
easily verified that A is 2-Lipschitz continuous and monotone on C (see [19]). With
these C and A, the solution set of the variational inequality is {0} ≠ ∅. It is
known, see for example [5], that


PC (x) = x/‖x‖L2 if ‖x‖L2 > 1, and PC (x) = x if ‖x‖L2 ≤ 1.

We implement our algorithm with different starting points x1 (t). We choose the
stopping criterion ‖xn+1 − xn ‖ < ε with ε = 10−30 . The results are presented in
Table 1 and Figs. 1 and 2.
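Example 1 can be reproduced approximately by discretizing [0, 1]; the grid size and the Riemann-sum quadrature below are our own choices, not taken from the paper:

```python
import numpy as np

# Uniform grid on [0, 1]; an L2 function is represented by its grid values.
t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]

def l2_norm(x):
    """Riemann-sum approximation of the L2([0, 1]) norm."""
    return np.sqrt(h * np.dot(x, x))

def project_unit_ball(x):
    """P_C for C = {x : ||x||_{L2} <= 1}, following the projection formula above."""
    nrm = l2_norm(x)
    return x if nrm <= 1.0 else x / nrm

A = lambda x: np.maximum(0.0, x)                       # (Ax)(t) = max(0, x(t))
x1 = (np.sin(-3.0 * t) + np.cos(-10.0 * t)) / 600.0    # first starting point
```

Since the L2 norm of this starting point is far below 1, the projection leaves it unchanged.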

Example 2 In this example we consider a nonlinear variational inequality with A :
Rm → Rm defined as Ax = M x + F x + q, where M is an m × m symmetric
semi-definite matrix, q is a vector in Rm and F x is the proximal mapping of the
function g(x) = (1/4)‖x‖4 , i.e.,

Fig. 2 x1 (t) = (1/525)(t2 − e−t )

F x = arg min { (1/4)‖y‖4 + (1/2)‖y − x‖2 | y ∈ Rm }.

The feasible set is a polyhedral convex set, given by C = {x ∈ Rm | Qx ≤ b}, where
Q ∈ Rr×m and b ∈ Rr . In this case, A is monotone and Lipschitz continuous with
L = ‖M‖ + 1. All the entries of Q, M and q are generated randomly in (−2, 2), the
entries of b in (0, 1), m = 100, r = 10, and we choose the stopping criterion
‖xn − yn ‖ < ε with ε = 10−5 . The starting point is x0 = (1, 1, . . . , 1) ∈ Rm . The
projections onto C and the evaluation of F are computed using the MATLAB solver
fmincon. For comparison we choose two very recent viscosity-type methods, Shehu
and Iyiola [30, Algorithm 3.1] and Thong and Hieu [33, Algorithm 3]. In all
algorithms we take the contraction f (x) = x/2. The numerical results are shown in
Fig. 3 on a logarithmic scale. In Fig. 4 we illustrate the performances of Algorithm
3.2 for different choices of the contraction f (x) = 0.9x, 0.75x, 0.5x, 0.25x.
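The only nonstandard ingredient in Example 2 is the proximal mapping F. Writing the optimality condition ‖y‖2 y + y = x and setting t = ‖y‖ reduces it to the scalar cubic t3 + t = ‖x‖, so F needs no general-purpose solver (a sketch; the bisection loop and its iteration count are our own choices):

```python
import numpy as np

def prox_quartic(x):
    """Proximal mapping of g(y) = ||y||^4 / 4 (the map F of Example 2).

    The minimizer satisfies ||y||^2 y + y = x, hence y = (t / r) x where
    r = ||x|| and t >= 0 is the unique root of t^3 + t = r.
    """
    r = np.linalg.norm(x)
    if r == 0.0:
        return np.zeros_like(x)
    lo, hi = 0.0, max(1.0, r)      # t^3 + t - r changes sign on [lo, hi]
    for _ in range(200):           # bisection; t^3 + t is strictly increasing
        mid = 0.5 * (lo + hi)
        if mid ** 3 + mid < r:
            lo = mid
        else:
            hi = mid
    t = 0.5 * (lo + hi)
    return (t / r) * x
```

For example, x = (2, 0) gives the cubic t3 + t = 2 with root t = 1, so F x = (1, 0).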

5 Conclusions

In this paper we proposed two projection-type methods, a Mann scheme and a viscosity
scheme [27,28], for solving variational inequalities in real Hilbert spaces. Both algo-
rithms converge strongly under monotonicity and Lipschitz continuity of the mapping
A associated with the VI. The algorithms require the calculation of only one projec-
tion onto the VI's feasible set C per iteration, and thanks to the projection and
contraction technique there is no need to know the Lipschitz constant of A in advance.
These two properties emphasize the applicability of, and advantages over, several exist-
ing results in the literature. Numerical experiments in finite and infinite dimensional
spaces compare and illustrate the performance of our new schemes.

Fig. 3 Comparison between Algorithm 3.2 and [30, Algorithm 3.1] and [33, Algorithm 3]

Fig. 4 The performances of Algorithm 3.2 for different choices of the contraction f (x) =
0.9x, 0.75x, 0.5x, 0.25x

Compliance with ethical standards

Conflict of interest The authors declare no conflict of interest.

References
1. Antipin, A.S.: On a method for convex programs using a symmetrical modification of the Lagrange
function. Ekonomika i Mat. Metody. 12, 1164–1173 (1976)
2. Aubin, J.P., Ekeland, I.: Applied Nonlinear Analysis. Wiley, New York (1984)

3. Baiocchi, C., Capelo, A.: Variational and Quasivariational Inequalities, Applications to Free Boundary
Problems. Wiley, New York (1984)
4. Cai, X., Gu, G., He, B.: On the O(1/t) convergence rate of the projection and contraction methods
for variational inequalities with Lipschitz continuous monotone operators. Comput. Optim. Appl. 57,
339–363 (2014)
5. Cegielski, A.: Iterative Methods for Fixed Point Problems in Hilbert Spaces. Lecture Notes in Mathe-
matics, vol. 2057. Springer, Berlin (2012)
6. Ceng, L.C., Hadjisavvas, N., Wong, N.C.: Strong convergence theorem by a hybrid extragradient-
like approximation method for variational inequalities and fixed point problems. J. Glob. Optim. 46,
635–646 (2010)
7. Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequal-
ities in Hilbert space. J. Optim. Theory Appl. 148, 318–335 (2011)
8. Censor, Y., Gibali, A., Reich, S.: Strong convergence of subgradient extragradient methods for the
variational inequality problem in Hilbert space. Optim. Meth. Softw. 26, 827–845 (2011)
9. Censor, Y., Gibali, A., Reich, S.: Extensions of Korpelevich’s extragradient method for the variational
inequality problem in Euclidean space. Optimization 61, 1119–1132 (2011)
10. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer.
Algorithms 56, 301–323 (2012)
11. Dong, Q.L., Gibali, A., Jiang, D., Ke, S.H.: Convergence of projection and contraction algorithms with
outer perturbations and their applications to sparse signals recovery. J. Fixed Point Theory Appl. 20,
16 (2018). [Link]
12. Dong, Q.L., Cho, Y.J., Zhong, L.L., Rassias, Th.M.: Inertial projection and contraction algorithms for
variational inequalities. J. Glob. Optim. 70, 687–704 (2018)
13. Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems.
Springer Series in Operations Research, vols. I and II. Springer, New York (2003)
14. Fichera, G.: Sul problema elastostatico di Signorini con ambigue condizioni al contorno. Atti Accad.
Naz. Lincei, VIII Ser. Rend. Cl. Sci. Fis. Mat. Nat. 34, 138–142 (1963)
15. Fichera, G.: Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue con-
dizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat. Sez. I, VIII. Ser. 7, 91–140
(1964)
16. Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Marcel
Dekker, New York (1984)
17. He, B.S.: A class of projection and contraction methods for monotone variational inequalities. Appl.
Math. Optim. 35, 69–76 (1997)
18. He, B.S., Liao, L.Z.: Improvements of some projection methods for monotone nonlinear variational
inequalities. J. Optim. Theory Appl. 112, 111–128 (2002)
19. Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions
to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)
20. Konnov, I.V.: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
21. Korpelevich, G.M.: The extragradient method for finding saddle points and other problems. Ekonomika
i Mat. Metody. 12, 747–756 (1976)
22. Kraikaew, R., Saejung, S.: Strong convergence of the Halpern subgradient extragradient method for
solving variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 163, 399–412 (2014)
23. Liu, L.S.: Ishikawa and Mann iteration process with errors for nonlinear strongly accretive mappings
in Banach space. J. Math. Anal. Appl. 194, 114–125 (1995)
24. Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point prob-
lems. SIAM J. Control Optim. 47, 1499–1515 (2008)
25. Malitsky, Y.V.: Projected reflected gradient methods for monotone variational inequalities. SIAM J.
Optim. 25, 502–520 (2015)
26. Malitsky, Y.V., Semenov, V.V.: A hybrid method without extrapolation step for solving variational
inequality problems. J. Glob. Optim. 61, 193–202 (2015)
27. Mann, W.R.: Mean value methods in iteration. Proc. Am. Math. Soc. 4, 506–510 (1953)
28. Moudafi, A.: Viscosity approximating methods for fixed point problems. J. Math. Anal. Appl. 241,
46–55 (2000)
29. Reich, S.: Constructive Techniques for Accretive and Monotone Operators. Applied Nonlinear Anal-
ysis, pp. 335–345. Academic Press, New York (1979)

30. Shehu, Y., Iyiola, O.S.: Strong convergence result for monotone variational inequalities. Numer. Algo-
rithms 76, 259–282 (2017)
31. Solodov, M.V., Svaiter, B.F.: A new projection method for variational inequality problems. SIAM J.
Control Optim. 37, 765–776 (1999)
32. Sun, D.F.: A class of iterative methods for solving nonlinear projection equations. J. Optim. Theory
Appl. 91, 123–140 (1996)
33. Thong, D.V., Hieu, D.V.: Weak and strong convergence theorems for variational inequality problems.
Numer. Algorithms 78, 1045–1060 (2018)
34. Thong, D.V., Hieu, D.V.: Modified subgradient extragradient method for variational inequality prob-
lems. Numer. Algorithms 79, 597–610 (2018)
35. Thong, D.V., Hieu, D.V.: Inertial extragradient algorithms for strongly pseudomonotone variational
inequalities. J. Comput. Appl. Math. 341, 80–98 (2018)
36. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240–256 (2002)

Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.
