
ON EQUILIBRIUM IN CONTROL PROBLEMS WITH

APPLICATIONS TO EVOLUTION SYSTEMS

RADU PRECUP AND ANDREI STAN

Abstract.

1. Introduction
The study of systems of abstract or concrete equations has been the subject of research for a long time, especially from the perspective of the existence and
uniqueness of solutions. In the present paper, our objective goes beyond simply
establishing solution existence; we strive to identify solutions that exhibit a level of
control over one another.
Let D1 , D2 , C be sets with C ⊂ D1 × D2 and E : D1 × D2 → Z be any mapping,
where Z is a linear space. Consider the equation E (x, λ) = 0Z and the problem of
finding λ ∈ D2 for which the equation has a solution x ∈ D1 with (x, λ) ∈ C. We
say that x is the state variable, λ is the control variable and C is the controllability
domain. One way to solve the problem (see [12]) is to use the controllability condition to obtain the expression of λ as a function of x, λ = S(x), and then to find a solution of the equation E(x, S(x)) = 0Z. Alternatively, one may express x as a function of λ, x = S(λ), and then solve the equation E(S(λ), λ) = 0Z. In the first case, we solve the problem

λ = S (x)
E (x, S (x)) = 0Z ,
while in the second case, the problem

x = S (λ)
E (S (λ) , λ) = 0Z .
The aim of this paper is to discuss the solvability of systems of two equations with mutual controllability. More exactly, we consider four sets D1, D2, C1, C2 with C1 ⊂ D1 × D2, C2 ⊂ D2 × D1, two mappings E1, E2 : D1 × D2 → Z, and the problem
   E1(x, y) = 0Z
   E2(x, y) = 0Z
   (x, y) ∈ C1, (y, x) ∈ C2.

The interpretation of this problem is as follows: the variable y is the control of the state x governed by the first equation, while the variable x is the control of the state y governed by the second equation. The controllability conditions on x and y are expressed by the membership conditions (x, y) ∈ C1 and (y, x) ∈ C2, respectively. We call this problem

the mutual control problem. First, we may be interested in finding x and y, as state variables, in terms of the controls y and x, respectively, that is, x ∈ S1(y) and y ∈ S2(x).
Here S1 , S2 are the set-valued mappings
S1 : D2 → D1 , S1 (y) := {x : E1 (x, y) = 0Z and (x, y) ∈ C1 } ,
S2 : D1 → D2 , S2 (x) := {y : E2 (x, y) = 0Z and (y, x) ∈ C2 } .
Thus S1 shows how the state x is controlled by y, and S2 how the state y is controlled by x. Next, equilibrium is reached if there exists (x, y) ∈ D1 × D2 such that
   (x, y) ∈ (S1(y), S2(x)).
A solution (x, y) of this fixed point equation is said to be a solution of the mutual
control problem, while the problem is said to be mutually controllable if such a
solution exists.
It is interesting to note the similarity between the mutual control problem and the Nash equilibrium problem. In the case of the latter, we have two functionals J1(x, y) and J2(x, y). Minimizing J1(·, y) on D1 for each y ∈ D2 yields the set of minimum points denoted s1(y), while minimizing J2(x, ·) on D2 for each x ∈ D1 yields the set of minimum points denoted s2(x). A point (x, y) is a Nash equilibrium with respect to the two functionals if
   (x, y) ∈ (s1(y), s2(x)).
When the two functionals are differentiable, a Nash equilibrium (x, y) solves the system J11(x, y) = 0, J22(x, y) = 0, where J11, J22 are the derivatives of J1, J2 with respect to the first and second variable, respectively. These equations stand in place of the equations associated with E1, E2, while the controllability conditions are replaced by the requirements that x and y minimize J1(·, y) and J2(x, ·), respectively; hence
   C1 = {(x, y) ∈ D1 × D2 : x minimizes J1(·, y)},
and
   C2 = {(x, y) ∈ D1 × D2 : y minimizes J2(x, ·)}.
Therefore, in the differentiable case, the Nash equilibrium problem appears as a particular case of the mutual control problem in the sense specified above. For such kind of results we refer the reader to the papers [1, 8, 10, 11, 14, 15, 17, 18, 19].
In this paper, in order to illustrate the general mutual control problem, we study the abstract system of evolution equations
(1.1)   x′(t) = A x(t) + f(t, x(t), y(t)) + hf,
        y′(t) = A y(t) + g(t, x(t), y(t)) + hg,     t ∈ [0, T],
together with a controllability condition
(1.2)   x(T) = k y(T).
Here, k ∈ R is a given number, A is a linear operator generating a semigroup of operators, and f, g are continuous functions.
We note that the considered controllability condition is a non-standard one: instead of prescribing the final values of the two states, as in [2], it requires a proportional relationship between them.

The solvability of problem (1.1)-(1.2) is established by passing to an equivalent fixed point system and by using a vector approach based on fixed point theorems and on the method of Bielecki-type norms. We emphasize the advantages of each fixed point method with regard to the assumptions on f and g, the uniqueness, and the localization of solutions.

2. Preliminaries
Let (X, |·|X) be a Banach space, and let L(X) be the space of all linear and bounded operators from X to X. Endowed with the norm
   |U|L(X) = sup_{x ∈ X\{0}} |Ux|X / |x|X,
L(X) is a Banach space.

2.1. Abstract evolution equations. Let T > 0 and A : D(A) ⊂ X → X be the


generator of a C0 -semigroup {S(t) : t ≥ 0}.
A function u ∈ C([0, T]; X) is said to be a mild solution of the equation
(2.1)   u′(t) = Au(t) + f(t, u(t)) + h,   (h ∈ X),
if it satisfies
   u(t) = S(t) u(0) + ∫_0^t S(t − s) (f(s, u(s)) + h) ds,   for all t ∈ [0, T],
and is said to be a weakly mild solution if
   S(T − t) u(t) = S(T) u(0) + ∫_0^t S(T − s) (f(s, u(s)) + h) ds,   for all t ∈ [0, T].

Remark 2.1. If u is a weakly mild solution of (2.1), then at any time t ∈ [0, T], u(t) is close to a mild solution up to ker S(T − t), and at the final time t = T it coincides with a mild solution, i.e.,
   u(T) = S(T) u(0) + ∫_0^T S(T − s) (f(s, u(s)) + h) ds.

Remark 2.2. If A generates a group of operators, then any weakly mild solution is a mild solution.
Throughout this paper, the numbers CS and ω stand for upper bounds in L(X), uniform with respect to t, for the operators S(t) and I − S(t), respectively, that is,
(2.2)   |S(t)|L(X) ≤ CS,
(2.3)   |I − S(t)|L(X) ≤ ω,
for all t ∈ [0, T].
Remark 2.3. Since t takes values only in the compact interval [0, T ], following [20,
Theorem 2.3.1], both upper bounds CS and ω exist in R+ . Also, if A generates a
semigroup of contractions, then CS = 1.

Remark 2.4. The value ω can be chosen as small as desired, provided that T is small enough. This follows immediately since S(0) = I and the map t ↦ S(t) is continuous.
For details about semigroups of linear operators and abstract evolution equations
we refer to the books [6] and [20].

2.2. Bielecki-type norms. For each number θ ≥ 0, on the space C([0, T]; X), we define the Bielecki norm
   |u|θ := max_{t∈[0,T]} |u(t)|X e^{−θt}.
Note that it is equivalent to the usual Chebyshev norm, which corresponds to θ = 0. The role of the Bielecki-type norms is to relax, through a convenient choice of θ, the requirements on the various constants appearing in the Lipschitz or growth conditions of the fixed point theorems.
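To make the definition concrete, here is a small numerical sketch (a hypothetical illustration with X = R and a sampled function, not part of the paper) showing how the weight e^{−θt} discounts later times compared with the Chebyshev norm.

```python
import numpy as np

def bielecki_norm(u_vals, t_vals, theta):
    """Discrete approximation of |u|_theta = max over t of |u(t)| e^{-theta t}."""
    return float(np.max(np.abs(u_vals) * np.exp(-theta * t_vals)))

T = 1.0
t = np.linspace(0.0, T, 201)
u = np.exp(3.0 * t)                 # a function growing on [0, T]

print(bielecki_norm(u, t, 0.0))     # Chebyshev norm (theta = 0): e^3, about 20.09
print(bielecki_norm(u, t, 2.0))     # theta = 2: max of e^t, about 2.72
print(bielecki_norm(u, t, 5.0))     # theta = 5: maximum attained at t = 0, value 1.0
```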

2.3. Matrices convergent to zero. When dealing with systems of equations, it is convenient to use a vector approach based on matrices instead of constants.
A square matrix A ∈ Mn×n(R+) is said to be convergent to zero if its powers A^k tend to the zero matrix as k → ∞. The next lemma provides equivalent conditions for a square matrix to be convergent to zero (see, e.g., [9]).
Lemma 2.5. Let A ∈ Mn×n (R+ ) be a square matrix. The following statements
are equivalent:
(a): The matrix A is convergent to zero.
(b): The spectral radius of A is less than 1, i.e., ρ(A) < 1.
(c): The matrix I −A, where I is the unit matrix of the same size, is invertible
and its inverse has nonnegative entries, i.e., (I − A)−1 ∈ Mn×n (R+ ) .
In case n = 2, we have the following characterization.
Lemma 2.6. A square matrix A = [aij ]1≤i,j≤2 ∈ M2×2 (R+ ) is convergent to zero
if and only if a11 , a22 < 1 and
tr(A) < 1 + det(A), i.e., a11 + a22 < 1 + a11 a22 − a12 a21 .
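As a quick sanity check of this characterization, the following sketch (a hypothetical numerical illustration, not from the paper) compares the trace and determinant test of Lemma 2.6 with the spectral radius condition of Lemma 2.5.

```python
import numpy as np

def converges_to_zero_2x2(A):
    """Lemma 2.6 test for a nonnegative 2x2 matrix: a11, a22 < 1 and tr(A) < 1 + det(A)."""
    (a11, a12), (a21, a22) = A
    return a11 < 1 and a22 < 1 and (a11 + a22) < 1 + (a11 * a22 - a12 * a21)

def spectral_radius(A):
    return max(abs(np.linalg.eigvals(np.array(A, dtype=float))))

A = [[0.3, 0.5],
     [0.2, 0.4]]
print(converges_to_zero_2x2(A), spectral_radius(A) < 1)   # both True: rho(A) is about 0.67
```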
Lemma 2.7. Let M, N ∈ Mn×n(R) be two square matrices and A a matrix convergent to zero. If M < N, then (I − A)M < (I − A)N.
Proof. From Lemma 2.5, the matrix I − A is invertible and its inverse has positive
entries. Consequently,
(2.4) On < N − M = (I − A)−1 (I − A)(N − M ),
where On is the zero matrix. Since (I − A)−1 has positive entries, relation (2.4)
holds only if (I − A)(N − M ) > On . □
In the next Section 3, we deal with the matrix
   A(θ) = [ a11    a12 (e^{θT} − 1)/θ
            a21    a22 (1 − e^{−θT})/θ ],
where θ ≥ 0, and we aim to find θ such that A(θ) is convergent to zero. Here, aij (i, j = 1, 2) are nonnegative numbers with a11 < 1 and a22 < 1/T. Notice that the last inequality guarantees a22 (1 − e^{−θT})/θ < 1, since (1 − e^{−θT})/θ ≤ T for every θ ≥ 0.
From Lemma 2.6, the matrix A(θ) is convergent to zero if and only if h(θ) < 0, where
   h(θ) = tr(A(θ)) − 1 − det(A(θ))
        = a11 + a22 (1 − e^{−θT})/θ − 1 − a11 a22 (1 − e^{−θT})/θ + a12 a21 (e^{θT} − 1)/θ   (θ ≥ 0).
Note that
   h′(θ) = (1/θ²) [ a22 e^{−θT} (1 − a11)(1 + θT) + a12 a21 (θT − 1) e^{θT} − α ],
   h(0) = −(1 − a11)(1 − a22 T) + a12 a21 T,
   h′(0) = −a22 (1 − a11) (T²/2 + 1) < 0,   and
   lim_{θ→∞} h(θ) = { a11 − 1, if a12 a21 = 0;   +∞, if a12 a21 > 0 },
where α = a22 (1 − a11) − a12 a21. The next lemma deals with the existence of θ for which the matrix A(θ) is convergent to zero.
Lemma 2.8. Assume 0 ≤ a11 < 1 and 0 < a22 < 1/T.
(i) If h(0) < 0, then A(0) converges to zero.
(ii) If h(0) ≥ 0, then there exists θ1 > 0 with h′(θ1) = 0:
   (a) If h(θ1) < 0, then the matrix A(θ) converges to zero for every θ between the zeroes of h and does not converge to zero otherwise.
   (b) If h(θ1) ≥ 0, then there is no θ such that A(θ) converges to zero.
Proof. (i) is obvious. The next assertions are based on the convexity of the function h. To prove it, we compute the second derivative and find
   h′′(θ) = (1/θ³) ( 2 (a22 − a22 a11 − a12 a21) + a12 a21 φ(θ) − a22 (1 − a11) φ(−θ) ),
where
   φ(θ) = e^{θT} ((θT − 1)² + 1).
Note that
   φ(θ) ≥ 2 and φ(−θ) ≤ 2,
for all θ ≥ 0. Consequently,
   θ³ h′′(θ) ≥ 2 (a22 − a22 a11 − a12 a21) + 2 a12 a21 − 2 a22 (1 − a11) = 0,
which guarantees that h is convex on [0, ∞).
(ii) Assume that h(0) ≥ 0. Since h′(0) < 0 and lim_{θ→∞} h(θ) = +∞, the function h has a minimum at some θ1 ∈ (0, +∞), whence h′(θ1) = 0. Clearly, if h(θ1) < 0, then h has two positive zeroes, is negative between them and nonnegative otherwise. If h(θ1) ≥ 0, then h(θ) ≥ 0 for all θ ≥ 0, and thus there is no θ such that A(θ) converges to zero. □


Remark 2.9. In case a22 = 0, A(θ) is convergent to zero if and only if
   a11 < 1 and a12 a21 (e^{θT} − 1)/θ < 1 − a11.
Clearly, solving the equation h′(θ) = 0 analytically is challenging. Thus, instead of A(θ), we may consider an approximate variant, larger than A(θ), namely
   Ã(θ) = [ a11    a12 (e^{θT} − 1)/θ
            a21    a22 / θ ].
Since A(θ) ≤ Ã(θ) componentwise, if Ã(θ) is convergent to zero, then so is A(θ). Following a reasoning similar to the one above, the matrix Ã(θ) converges to zero if and only if h̃(θ) < 0, where
   h̃(θ) = a11 + a22/θ − 1 − a11 a22/θ + a12 a21 (e^{θT} − 1)/θ   (θ > 0).
Note that h̃ cannot be extended at zero by continuity and in general is not convex. However, under suitable conditions on aij, a result similar to the previous lemma holds true.
In the following lemma, W : R+ → R+ represents the Lambert function restricted to R+, i.e., the inverse of the function z e^z (z ∈ R+). For details we refer to [7].
Lemma 2.10. Assume that a11 < 1.
(i) If
(2.5)   α := a22 (1 − a11) − a12 a21 > 0,
then the function h̃ is strictly convex on (0, ∞).
(ii) If
   a12 a21 T e^{θ1 T} < 1 − a11,
where θ1 = (1/T) ( W(α / (e a12 a21)) + 1 ), then h̃(θ1) < 0 and there is a neighborhood V = (σ1, σ2) of θ1 with 0 < σ1 < σ2 < +∞ such that the matrix Ã(θ) is convergent to zero for every θ ∈ V and is not so for θ ∉ V.
Proof. (i) Simple computations yield
   h̃′(θ) = (1/θ²) [ a12 a21 e^{θT} (θT − 1) − α ].
Differentiating h̃′(θ) again with respect to θ ∈ (0, ∞), and using (2.5), we deduce
   h̃′′(θ) = (1/θ³) [ a12 a21 e^{θT} θ²T² − 2 ( −α + a12 a21 e^{θT} θT − a12 a21 e^{θT} ) ]
           = (a12 a21/θ³) e^{θT} (θ²T² − 2θT + 2) + (2/θ³) (a22 − a11 a22 − a12 a21)
           = (a12 a21/θ³) e^{θT} (1 + (θT − 1)²) + 2α/θ³ > 0.
(ii) Note that from a22 (1 − a11) > 0, we find lim_{θ→0} h̃(θ) = +∞, while from a12 a21 > 0, we have lim_{θ→∞} h̃(θ) = +∞. Therefore, h̃ has a minimum point θ1 ∈ (0, ∞). Since h̃ is convex, we have h̃′(θ1) = 0, which leads to
   a12 a21 e^{θ1 T} (θ1 T − 1) = α.
Letting z := θ1 T − 1, we obtain
   z e^z = α / (e a12 a21),
thus z = W(α / (e a12 a21)). Consequently,
   θ1 = (1/T) ( W(α / (e a12 a21)) + 1 ).
Next, evaluating h̃ at θ1, we find
   h̃(θ1) = a11 − 1 + a12 a21 T e^{θ1 T}.
Therefore, under our assumption, h̃(θ1) < 0. The conclusion follows from the convexity of h̃ and its limits as θ approaches 0 and infinity. □
Remark 2.11. The advantage of using Ã instead of A is that we have an analytical expression for the point where the minimum of h̃ is attained. However, in applications, thanks to numerical computing power, one may find an approximate solution of the equation h′(θ) = 0 and evaluate h at that point.
Since Ã is an approximation of A, there are cases where Ã(θ) does not converge to zero for any θ > 0, while there exists θ0 > 0 such that A(θ0) does converge to zero. The example below illustrates this situation.
Example 2.12. Let
   a11 = 0.3, a12 = 0.62, a21 = 0.45, a22 = 0.63, and T = 0.98.
Simple computations show that α = 0.162 > 0 and θ1 ≈ 1.1931, but condition (ii) of Lemma 2.10 is not satisfied, since
   0.88 ≈ a12 a21 T e^{θ1 T} > 1 − a11 = 0.7.
On the other hand, solving h′(θ) = 0 numerically gives an approximate solution θ0 ≈ 0.3505, and hence h(θ0) ≈ −0.00798 < 0.
Additionally, note that h(0) = 0.0056 > 0, i.e., A(0) is not convergent to zero.
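The figures in this example can be checked numerically. The following sketch (an illustration under the notation above, not part of the paper) recomputes α, the point θ1 of Lemma 2.10 via the Lambert W function, a numerical minimizer θ0 of h, and the values h(0) and h(θ0); small differences from the rounded values reported above are to be expected.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import lambertw

a11, a12, a21, a22, T = 0.3, 0.62, 0.45, 0.63, 0.98
alpha = a22 * (1 - a11) - a12 * a21                  # 0.162

def h(theta):
    """h(theta) = tr A(theta) - 1 - det A(theta), extended by continuity at theta = 0."""
    if theta == 0.0:
        return -(1 - a11) * (1 - a22 * T) + a12 * a21 * T
    u = (1 - np.exp(-theta * T)) / theta
    v = (np.exp(theta * T) - 1) / theta
    return a11 - 1 + a22 * (1 - a11) * u + a12 * a21 * v

theta1 = (lambertw(alpha / (np.e * a12 * a21)).real + 1) / T
print(alpha, theta1, a12 * a21 * T * np.exp(theta1 * T))   # the last value exceeds 1 - a11 = 0.7

res = minimize_scalar(h, bounds=(1e-6, 2.0), method="bounded")   # approximate minimizer of h
print(res.x, h(res.x), h(0.0))   # roughly 0.35, -0.008, 0.0056: A(theta0) converges, A(0) does not
```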
2.4. Fixed point theorems. Next, we recall two fixed point theorems which, together with the well-known Schauder fixed point theorem, will play an important role in our analysis. The first result is Perov's fixed point theorem (see, e.g., [13, pp. 151-154]) for mappings defined on the Cartesian product of two metric spaces.
Theorem 2.13 (Perov). Let (Xi, di), i = 1, 2, be complete metric spaces and let Ni : X1 × X2 → Xi be two mappings for which there exists a square matrix A of size two with nonnegative entries and spectral radius ρ(A) < 1 such that the vector inequality
   [ d1(N1(x, y), N1(u, v)) ; d2(N2(x, y), N2(u, v)) ] ≤ A [ d1(x, u) ; d2(y, v) ]
holds for all (x, y), (u, v) ∈ X1 × X2. Then there exists a unique point (x∗, y∗) ∈ X1 × X2 with (x∗, y∗) = (N1(x∗, y∗), N2(x∗, y∗)).
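To illustrate how a Perov contraction behaves, here is a hypothetical toy example on R × R (not taken from the paper): two maps whose partial Lipschitz constants form a matrix convergent to zero, so that successive approximations converge to the unique common fixed point.

```python
import numpy as np

# Toy Perov contraction on R x R: the Lipschitz matrix is A = [[0.5, 0.2], [0.3, 0.4]],
# whose spectral radius is 0.7 < 1, so the iteration below converges.
def N1(x, y):
    return 0.5 * np.sin(x) + 0.2 * y + 1.0

def N2(x, y):
    return 0.3 * x + 0.4 * np.cos(y) - 2.0

x, y = 0.0, 0.0
for _ in range(100):
    x, y = N1(x, y), N2(x, y)

print(x, y)                                   # the unique fixed point
print(abs(N1(x, y) - x), abs(N2(x, y) - y))   # residuals, essentially zero
```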
The second result is Avramescu’s fixed point theorem (see, e.g., [3] and [13]).
Theorem 2.14 (Avramescu). Let (D1 , d) be a complete metric space, D2 a closed
convex subset of a normed space (Y, ∥ · ∥), and let Ni : D1 × D2 → Di , i = 1, 2 be
continuous mappings. Assume that the following conditions are satisfied:
(a) There is a constant L ∈ [0, 1), such that:
d(N1 (x, y), N1 (x′ , y)) ≤ Ld(x, x′ )
for all x, x′ ∈ D1 and y ∈ D2 ;
(b) N2 (D1 × D2 ) is a relatively compact subset of Y .
Then, there exists (x, y) ∈ D1 × D2 such that:
N1 (x, y) = x, N2 (x, y) = y.
Some reference works in fixed point theory are the books [3] and [4].

3. Mutual control for abstract evolution equations


In this section, we aim to find a weakly mild-mild solution of the problem (1.1). That is, we look for x, y ∈ C([0, T]; X) such that, for all t ∈ [0, T], the following relations hold:
(3.1)   S(T − t) x(t) = S(T) x(0) + ∫_0^t S(T − s) (f(x(s), y(s)) + hf) ds,
        y(t) = S(t) y(0) + ∫_0^t S(t − s) (g(x(s), y(s)) + hg) ds.
Assuming there exist x, y ∈ C([0, T]; X) that satisfy (3.1), the controllability condition (1.2) becomes
(3.2)   S(T) x(0) − k S(T) y(0) = ∫_0^T S(T − s) ( k (g(x(s), y(s)) + hg) − f(x(s), y(s)) − hf ) ds.
Hence, we may express S(T) x(0) from (3.2) and substitute it into (3.1) to obtain
(3.3)   S(T − t) x(t) = k S(T) y(0) + k ∫_0^T S(T − s) (g(x(s), y(s)) + hg) ds − ∫_t^T S(T − s) (f(x(s), y(s)) + hf) ds,
        y(t) = S(t) y(0) + ∫_0^t S(t − s) (g(x(s), y(s)) + hg) ds.

One sees that (3.3) is equivalent to the fixed point equation
(3.4)   (x, y) = (N1(x, y), N2(x, y)),
where
   N1(x(t), y(t)) = (I − S(T − t)) x(t) + k S(T) y(0) + k ∫_0^T S(T − s) (g(x(s), y(s)) + hg) ds − ∫_t^T S(T − s) (f(x(s), y(s)) + hf) ds,
and
   N2(x(t), y(t)) = S(t) y(0) + ∫_0^t S(t − s) (g(x(s), y(s)) + hg) ds.
The following result guarantees the equivalence between a fixed point of the operator (N1, N2) and relations (3.1) and (3.2).
Lemma 3.1. Any fixed point of the operator (N1, N2) satisfies both (3.1) and (3.2).
Proof. Let (x, y) ∈ C([0, T]; X)² be a fixed point of the operator (N1, N2). Clearly, relations (3.3) are satisfied. Evaluating the first relation of (3.3) at t = 0, we obtain
   S(T) x(0) = k S(T) y(0) + k ∫_0^T S(T − s) (g(x(s), y(s)) + hg) ds − ∫_0^T S(T − s) (f(x(s), y(s)) + hf) ds,
whence (3.2) holds. Note that (3.1) follows immediately if in (3.3) we use
   k S(T) y(0) + k ∫_0^T S(T − s) (g(x(s), y(s)) + hg) ds = S(T) x(0) + ∫_0^T S(T − s) (f(x(s), y(s)) + hf) ds. □
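As a purely illustrative sanity check of this equivalence, the following sketch iterates the operators N1, N2 in a hypothetical scalar setting (X = R, A = −1 so that S(t) = e^{−t} and CS = 1, with small Lipschitz constants and small T so that the matrix of Theorem 3.2 below converges to zero); the data f, g, hf, hg, k, α are chosen for illustration only. At the discrete fixed point the controllability condition x(T) = k y(T) is recovered, as Lemma 3.1 predicts.

```python
import numpy as np

T, k, hf, hg, alpha = 0.3, 0.5, 0.1, 0.1, 1.0
f = lambda x, y: 0.2 * np.sin(y)       # Lipschitz in y with constant 0.2
g = lambda x, y: 0.2 * np.cos(x)       # Lipschitz in x with constant 0.2
S = lambda t: np.exp(-t)               # contraction semigroup generated by A = -1

n = 200
t = np.linspace(0.0, T, n + 1)

def trap(vals, xs):
    """Trapezoidal quadrature of sampled values."""
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(xs)) / 2.0)

def N1(x, y):
    Fg = S(T - t) * (g(x, y) + hg)
    Ff = S(T - t) * (f(x, y) + hf)
    int_g = trap(Fg, t)                                                  # integral over [0, T]
    int_f_tail = np.array([trap(Ff[i:], t[i:]) for i in range(n + 1)])   # integral over [t, T]
    return (1 - S(T - t)) * x + k * S(T) * y[0] + k * int_g - int_f_tail

def N2(x, y):
    vals = g(x, y) + hg
    return np.array([S(t[i]) * alpha + trap(S(t[i] - t[:i + 1]) * vals[:i + 1], t[:i + 1])
                     for i in range(n + 1)])

x, y = np.zeros(n + 1), np.full(n + 1, alpha)
for _ in range(60):                    # successive approximations of the fixed point (3.4)
    x, y = N1(x, y), N2(x, y)

print(x[-1], k * y[-1])                # x(T) and k*y(T) coincide at the fixed point
```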

In the subsequent analysis, we present various conditions ensuring the existence, uniqueness, or localization of a fixed point of the operator (N1, N2).
In this section, we assume hf, hg ∈ X. Let α ∈ X be fixed, and consider
   Cα([0, T]; X) := {u ∈ C([0, T]; X) : u(0) = α}.
Note that Cα([0, T]; X) is a closed convex subset of C([0, T]; X), hence a complete metric space with respect to the metric induced by the norm of C([0, T]; X). Clearly, from its definition, N2(x, y) ∈ Cα([0, T]; X) whenever y ∈ Cα([0, T]; X).
On C([0, T]; X) × Cα([0, T]; X), we consider the vector-valued norm
   ( |u|0 , |v|θ ) = ( max_{τ∈[0,T]} |u(τ)|X ,  max_{τ∈[0,T]} e^{−θτ} |v(τ)|X ),
where θ ≥ 0 will be chosen conveniently later.



3.1. Existence via Perov’s fixed point theorem.

Using Perov's fixed point theorem, we obtain the following existence and uniqueness result.
Theorem 3.2. Assume the following conditions are satisfied:
i) There are constants a, b, c, d ≥ 0 such that
(3.5)   |f(·, x, y) − f(·, x̄, ȳ)|X ≤ a|x − x̄|X + b|y − ȳ|X,
        |g(·, x, y) − g(·, x̄, ȳ)|X ≤ c|x − x̄|X + d|y − ȳ|X,
for all x, x̄, y, ȳ ∈ C([0, T], X);
ii) There exists θ0 ≥ 0 such that the matrix A(θ0) is convergent to zero, where
(3.6)   A(θ) = [ ω + T CS (a + kc)    CS (b + kd) (e^{θT} − 1)/θ
                 c T CS               d CS (1 − e^{−θT})/θ ].
Then, for every α ∈ X, the operator
   (N1, N2) : C([0, T]; X) × Cα([0, T]; X) → C([0, T]; X) × Cα([0, T]; X)
admits a unique fixed point.
Proof. We show that the operator (N1, N2) is a Perov contraction on C([0, T]; X) × Cα([0, T]; X). Let x, x̄ ∈ C([0, T]; X) and y, ȳ ∈ Cα([0, T]; X). Then,
   |N1(x(t), y(t)) − N1(x̄(t), ȳ(t))|X ≤ |I − S(T − t)|L(X) |x(t) − x̄(t)|X + k ∫_0^T |S(T − s)|L(X) |g(x(s), y(s)) − g(x̄(s), ȳ(s))|X ds + ∫_t^T |S(T − s)|L(X) |f(x(s), y(s)) − f(x̄(s), ȳ(s))|X ds.
Using (2.2) and assumption i), for each t ∈ [0, T] we obtain
   |N1(x(t), y(t)) − N1(x̄(t), ȳ(t))|X ≤ ω |x(t) − x̄(t)|X + (a + kc) T CS max_{τ∈[0,T]} |x(τ) − x̄(τ)|X + kd CS ∫_0^T |y(s) − ȳ(s)|X ds + b CS ∫_t^T |y(s) − ȳ(s)|X ds
      ≤ a11 |x − x̄|0 + a12 ∫_0^T e^{θs} e^{−θs} |y(s) − ȳ(s)|X ds
      ≤ a11 |x − x̄|0 + a12 (e^{θT} − 1)/θ |y − ȳ|θ,
where
   a11 = ω + T CS (a + kc),   a12 = CS (b + kd).
Therefore, taking the supremum over t ∈ [0, T] yields
(3.7)   |N1(x, y) − N1(x̄, ȳ)|0 ≤ a11 |x − x̄|0 + a12 (e^{θT} − 1)/θ |y − ȳ|θ.
For the second operator N2, we compute
   |N2(x(t), y(t)) − N2(x̄(t), ȳ(t))|X ≤ c T CS |x − x̄|0 + d CS ∫_0^t e^{θs} e^{−θs} |y(s) − ȳ(s)|X ds,
for all t ∈ [0, T]. Whence,
(3.8)   |N2(x(t), y(t)) − N2(x̄(t), ȳ(t))|X ≤ a21 |x − x̄|0 + a22 |y − ȳ|θ ∫_0^t e^{θs} ds = a21 |x − x̄|0 + a22 (e^{θt} − 1)/θ |y − ȳ|θ,
where
   a21 = c T CS,   a22 = d CS.
Multiplying both sides of (3.8) by e^{−θt} and taking the supremum over t ∈ [0, T], we obtain
(3.9)   |N2(x, y) − N2(x̄, ȳ)|θ ≤ a21 |x − x̄|0 + a22 (1 − e^{−θT})/θ |y − ȳ|θ.
Consequently, writing inequalities (3.7), (3.9) in vector form, one has
(3.10)   [ |N1(x, y) − N1(x̄, ȳ)|0 ; |N2(x, y) − N2(x̄, ȳ)|θ ] ≤ A(θ) [ |x − x̄|0 ; |y − ȳ|θ ],
where
   A(θ) = [ a11    a12 (e^{θT} − 1)/θ
            a21    a22 (1 − e^{−θT})/θ ].
From assumption ii), there exists θ0 ≥ 0 such that A(θ0) converges to zero. Therefore, letting θ := θ0 in (3.10), Perov's fixed point theorem applies and guarantees the existence of a unique fixed point of the operator (N1, N2). □

3.2. Existence via Schauder’s fixed point theorem.

From Schauder's fixed point theorem, the following existence result holds. Furthermore, for the fixed point (x, y) thus obtained, we achieve the following localization: |x(t)|X ≤ R1 and |y(t)|X ≤ e^{θ0 t} R2 for all t ∈ [0, T], where θ0 is given in assumption ii) of the theorem below, and R1, R2 satisfy (3.16). Note that, due to the use of the Bielecki norm on the second component, we only have an exponential localization for y(t).
Theorem 3.3. Assume the following conditions are satisfied:
i) There are constants a, b, c, d, Cf, Cg ≥ 0 such that
(3.11)   |f(·, x, y)|X ≤ a|x|X + b|y|X + Cf,
         |g(·, x, y)|X ≤ c|x|X + d|y|X + Cg,
for all x, y ∈ C([0, T], X);
ii) There exists θ0 ≥ 0 such that the matrix A(θ0) is convergent to zero, where
(3.12)   A(θ) = [ ω + T CS (a + kc)    CS (b + kd) (e^{θT} − 1)/θ
                  c T CS               d CS (1 − e^{−θT})/θ ].
Then, there exists at least one pair (x, y) ∈ C([0, T]; X)² such that y(0) = α, N1(x, y) = x, and N2(x, y) = y.
Proof. We shall apply Schauder's fixed point theorem to the operator (N1, N2) on the set
(3.13)   Dα;R1,R2 := {(x, y) ∈ C([0, T]; X) × Cα([0, T]; X) : |x|0 ≤ R1, |y|θ ≤ R2},
where R1 and R2 are positive real numbers and θ ≥ 0 will be determined conveniently later. We need to ensure the existence of real numbers R1, R2 such that the operator (N1, N2) maps Dα;R1,R2 into itself, i.e.,
   |N1(x, y)|0 ≤ R1,   |N2(x, y)|θ ≤ R2   whenever |x|0 ≤ R1, |y|θ ≤ R2.
Let (x, y) ∈ Dα;R1,R2. Then, for all t ∈ [0, T], we have
   |N1(x(t), y(t))|X ≤ ω |x|0 + k CS |α|X + k CS ∫_0^T (c|x(s)|X + d|y(s)|X + Cg + |hg|X) ds + CS ∫_t^T (a|x(s)|X + b|y(s)|X + Cf + |hf|X) ds
      ≤ a11 |x|0 + a12 ∫_0^T e^{θs} e^{−θs} |y(s)|X ds + C1
      ≤ a11 R1 + a12 (e^{θT} − 1)/θ R2 + C1,
where
   a11 = ω + CS T (a + kc),   a12 = CS (b + kd),   and
   C1 = k CS |α|X + CS (Cf + k Cg + |hf|X + k |hg|X) T.
Hence,
(3.14)   |N1(x, y)|0 ≤ a11 R1 + a12 (e^{θT} − 1)/θ R2 + C1.
For the second operator N2, we compute
   |N2(x(t), y(t))|X ≤ CS ∫_0^t (c|x(s)|X + d|y(s)|X) ds + C2
      ≤ c CS T |x|0 + d CS ∫_0^t e^{θs} e^{−θs} |y(s)|X ds + C2
      ≤ a21 |x|0 + a22 (e^{θt} − 1)/θ |y|θ + C2,
where a21 = c CS T, a22 = d CS and C2 = CS (|α|X + (Cg + |hg|X) T). Dividing the above relation by e^{θt} and taking the supremum over t ∈ [0, T], we obtain
(3.15)   |N2(x, y)|θ ≤ a21 R1 + a22 (1 − e^{−θT})/θ R2 + C2.
We may write relations (3.14), (3.15) in matrix form,
   [ |N1(x, y)|0 ; |N2(x, y)|θ ] ≤ A(θ) [ R1 ; R2 ] + [ C1 ; C2 ],
where A(θ) is given in (3.12).
From assumption ii), let θ0 be such that A(θ0) is convergent to zero, and assume R1, R2 are large enough such that
(3.16)   [ R1 ; R2 ] > (I − A(θ0))^{-1} [ C1 ; C2 ].
Then, Lemma 2.7 yields
   [ |N1(x, y)|0 ; |N2(x, y)|θ0 ] ≤ A(θ0) [ R1 ; R2 ] + [ C1 ; C2 ]
      = A(θ0) [ R1 ; R2 ] + (I − A(θ0)) (I − A(θ0))^{-1} [ C1 ; C2 ]
      ≤ A(θ0) [ R1 ; R2 ] + (I − A(θ0)) [ R1 ; R2 ] = [ R1 ; R2 ].
Therefore, the operator (N1, N2) maps the set Dα;R1,R2 into itself.
Note that N1 is a Fredholm-Volterra type operator, while N2 is a Volterra operator. Thus, (N1, N2) is completely continuous (see, e.g., [13]). Consequently, Schauder's fixed point theorem applies and guarantees the existence of at least one fixed point of (N1, N2) in Dα;R1,R2. □

3.3. Existence via Avramescu’s fixed point theorem.

We apply Avramescu's fixed point theorem to obtain a fixed point of the operator (N1, N2). It is noteworthy that, for the fixed point obtained in Theorem 3.4 below, the uniqueness of the second component is guaranteed by Banach's principle, while the components x and y satisfy the bounds |x(t)|X ≤ R1 and |y(t)|X ≤ e^{θ0 t} R2 for all t ∈ [0, T], where θ0 is given in assumption ii) and R1, R2 satisfy (3.20).
Theorem 3.4. Assume the following conditions are satisfied:
i) The function g satisfies g(·, 0, 0) = 0, and there are constants a, b, c, d, Cf ≥ 0 such that
(3.17)   |f(·, x, y)|X ≤ a|x|X + b|y|X + Cf,
         |g(·, x, y) − g(·, x̄, ȳ)|X ≤ c|x − x̄|X + d|y − ȳ|X,
for all x, x̄, y, ȳ ∈ C([0, T], X);
ii) There exists θ0 ≥ 0 such that the matrix A(θ0) is convergent to zero, where
(3.18)   A(θ) = [ ω + T CS (a + kc)    CS (b + kd) (e^{θT} − 1)/θ
                  c T CS               d CS (1 − e^{−θT})/θ ].
Then, there exists at least one pair (x, y) ∈ C([0, T]; X)² such that y(0) = α, N1(x, y) = x, and N2(x, y) = y.
Proof. Let us consider the sets
   DR1 := {x ∈ C([0, T]; X) : |x|0 ≤ R1},   and
   DR2 := {y ∈ Cα([0, T]; X) : |y|θ ≤ R2},
where R1, R2 are positive real numbers. We observe that, from assumption i), the function g satisfies the growth condition
(3.19)   |g(·, x, y)|X ≤ c|x|X + d|y|X,
for all x, y ∈ C([0, T]; X). Consequently, performing the same computations as in the proof of Theorem 3.3 and using ii), we conclude that
   [ |N1(x, y)|0 ; |N2(x, y)|θ ] ≤ [ R1 ; R2 ],
for all (x, y) ∈ DR1 × DR2, where R1, R2 satisfy
(3.20)   [ R1 ; R2 ] > (I − A(θ0))^{-1} [ CS (Cf + |hf|X + |hg|X) T ; CS (|α|X + |hg|X T) ].
This guarantees that Ni(DR1 × DR2) ⊂ DRi, i = 1, 2.
Since the matrix A(θ0) converges to zero, its diagonal elements are strictly less than 1; in particular, d CS (1 − e^{−θ0 T})/θ0 < 1. This guarantees that N2(x, ·) is a contraction for every x ∈ C([0, T]; X). Indeed, following a reasoning similar to that in the proof of Theorem 3.2, we deduce
   |N2(x, y) − N2(x, ȳ)|θ0 ≤ d CS (1 − e^{−θ0 T})/θ0 |y − ȳ|θ0,
for all x ∈ C([0, T]; X) and y, ȳ ∈ Cα([0, T]; X). Therefore, Avramescu's theorem applies and guarantees the existence of a pair (x, y) ∈ DR1 × DR2 such that N1(x, y) = x and N2(x, y) = y. □

4. Application
In this section, we present an application of the theoretical results obtained in Section 3. We consider the following Stokes-type system
(4.1)   u_t = ∆u + f(·, u, v) + hf,
        v_t = ∆v + g(·, u, v) + hg,     t ∈ [0, T],
        ∇ · u = ∇ · v = 0,
with the controllability condition u(T) = k v(T) (k ≥ 0). Here, X is given by
   X := { u ∈ L²(R³)³ : ∇ · u = 0 },

the functions f, g : [0, T] × R³ × R³ → R³, f = (f1, f2, f3), g = (g1, g2, g3), are continuous, and hf, hg ∈ X. Note that X is a Hilbert space endowed with the inner product
   (u, v) = Σ_{i=1}^{3} (ui, vi)_{L²},   u = (u1, u2, u3), v = (v1, v2, v3) ∈ X.
The Stokes operator
   A : {u ∈ X : ∆u ∈ X} → X,   Au := ∆u = (∆u1, ∆u2, ∆u3),
is the infinitesimal generator of a semigroup of contractions on X. Therefore, the constant CS given in (2.2) has the value 1. For details, we refer to [20, Chapter 7].
We assume there are nonnegative real numbers a, b, c, d such that one of the following three conditions is satisfied:
(c1):   |f(t, p, q) − f(t, p̄, q̄)| ≤ a|p − p̄| + b|q − q̄|,
        |g(t, p, q) − g(t, p̄, q̄)| ≤ c|p − p̄| + d|q − q̄|;
(c2):   |f(t, p, q)| ≤ a|p| + b|q|,
        |g(t, p, q)| ≤ c|p| + d|q|;
(c3):   |f(t, p, q)| ≤ a|p| + b|q|,
        |g(t, p, q) − g(t, p̄, q̄)| ≤ c|p − p̄| + d|q − q̄|,
        g(·, 0, 0) = 0;
for all t ∈ [0, T] and p, p̄, q, q̄ ∈ R³. In the above inequalities, |·| stands for the usual norm in R³.
Simple computations show that if (c1), (c2), or (c3) holds, then assumption i) of Theorem 3.2, Theorem 3.3, or Theorem 3.4, respectively, is satisfied with Cf = Cg = 0 and the constants a, b, c, d.
Additionally, if we assume there exists θ ≥ 0 such that the matrix
   A(θ) = [ ω + T (a + kc)    (b + kd) (e^{θT} − 1)/θ
            c T               d (1 − e^{−θT})/θ ]
converges to zero, then condition ii) of each of Theorems 3.2, 3.3, and 3.4 is verified. Consequently, depending on which of the conditions (c1), (c2), (c3) is imposed, there exists a weakly mild-mild solution of the mutual control problem (4.1), which is either unique, localized, or unique in one component and localized in both.

References
[1] M. Beldinski and M. Galewski, Nash type equilibria for systems of non-potential equations,
Appl. Math. Comput. 385 (2020), pp. 125456.
[2] J.-M. Coron, Control and Nonlinearity, AMS, Providence, 2007.
[3] K. Deimling, Nonlinear Functional Analysis, Springer, Berlin, 1985.
[4] A. Granas, Fixed Point Theory, Springer, New York, 2003.
[5] M.A. Krasnoselskii, Some problems of nonlinear analysis, Amer. Math. Soc. Transl. 10 (1958),
345–409.
[6] P. Magal and S. Ruan, Theory and Applications of Abstract Semilinear Cauchy Problems,
Springer, 2018.
[7] I. Mezo, The Lambert W Function: Its Generalizations and Applications, Chapman and
Hall/CRC.
[8] S. Park, Generalizations of the Nash equilibrium theorem in the KKM theory, Fixed Point
Theor. Appl. 2010 (2010).
[9] R. Precup, The role of matrices that are convergent to zero in the study of semilinear operator
systems, Math. Comput. Model. 49 (2009), 703–708.
[10] R. Precup, Nash-type equilibria and periodic solutions to nonvariational systems, Adv. Non-
linear Anal. 3 (2014), no. 4, 197–207.
[11] R. Precup, A critical point theorem in bounded convex sets and localization of Nash-type equi-
libria of nonvariational systems, J. Math. Anal. Appl. 463 (2018), 412–431.
[12] R. Precup, On some applications of the controllability principle for fixed point equations, Re-
sults Appl. Math. 13 (2022) 100236, 1–7.
[13] R. Precup, Methods in nonlinear integral equations, Dordrecht, Springer, 2002.
[14] R. Precup and A. Stan, Stationary Kirchhoff equations and systems with reaction terms, AIMS
Mathematics, 7 (2022), Issue 8, 15258–15281.
[15] R. Precup and A. Stan, Linking methods for componentwise variational systems, Results Math.
78 (2023), 1-25.
[16] H. Schaefer, Über die Methode der a priori-Schranken (in German), Math. Ann. 129 (1955),
415–416.
[17] A. Stan, Nonlinear systems with a partial Nash type equilibrium, Studia Univ. Babeş-Bolyai
Math. 66 (2021), 397–408.
[18] A. Stan, Nash equilibria for componentwise variational systems, J. Nonlinear Funct. Anal. 6
(2023), 1-10.
[19] A. Stan, Localization of Nash-type equilibria for systems with partial variational structure, J.
Numer. Anal. Approx. Theory, 52 (2023) , 253–272.
[20] I.I. Vrabie, C0-Semigroups and Applications, Elsevier, Amsterdam, 2003.

(R. Precup) Faculty of Mathematics and Computer Science and Institute of Ad-
vanced Studies in Science and Technology, Babeş-Bolyai University, 400084 Cluj-
Napoca, Romania & Tiberiu Popoviciu Institute of Numerical Analysis, Romanian
Academy, P.O. Box 68-1, 400110 Cluj-Napoca, Romania
Email address: [email protected]

(A. Stan) Department of Mathematics, Babeş-Bolyai University, 400084 Cluj-Napoca, Romania & Tiberiu Popoviciu Institute of Numerical Analysis, Romanian
Academy, P.O. Box 68-1, 400110 Cluj-Napoca, Romania
Email address: [email protected]
