Series Editors
E.D. Sontag • M. Thoma • A. Isidori • J.H. van Schuppen
Controlling Chaos
Suppression, Synchronization and
Chaotification
Huaguang Zhang, PhD
Zhiliang Wang, PhD
Northeastern University
College of Information Science and Engineering
Shenyang 110004
PR China
[email protected]

Derong Liu, PhD
Chinese Academy of Sciences
Institute of Automation
Beijing 100190
PR China
and
University of Illinois at Chicago
Department of Electrical and Computer Engineering
Chicago, Illinois 60607
USA
[email protected]
© Springer-Verlag London Limited 2009
MATLAB® and Simulink® are registered trademarks of The MathWorks, Inc., 3 Apple Hill Drive,
Natick, MA 01760-2098, U.S.A., www.mathworks.com
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as per-
mitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced,
stored or transmitted, in any form or by any means, with the prior permission in writing of the publis-
hers, or in the case of reprographic reproduction in accordance with the terms of licenses issued by the
Copyright Licensing Agency. Enquiries concerning reproduction outside those terms should be sent to
the publishers.
The use of registered names, trademarks, etc., in this publication does not imply, even in the absence of a
specific statement, that such names are exempt from the relevant laws and regulations and therefore free
for general use.
The publisher makes no representation, express or implied, with regard to the accuracy of the information
contained in this book and cannot accept any legal responsibility or liability for any errors or omissions
that may be made.
Foreword

Today, chaos is no longer a stranger to many. Instead, chaos theory and technology
have gradually become well known as a promising research field with significant
impacts on an increasing number of novel, potentially attractive, time- and energy-
critical engineering applications.
There is sufficient scientific evidence, and there are practical reasons, for studying and
utilizing chaos control and synchronization. In a system where irregular responses are
undesirable or even harmful, chaos should be reduced as much as possible or even
completely suppressed, while if complex dynamics is beneficial and useful, chaos
synchronization and generation become desirable. The scope of chaos control for
applications, as it has turned out, covers such widespread areas as secure wired and
wireless communications and encryption of information data (e.g., texts, images,
and videos), biological systems (e.g., understanding of the functioning of human
brain and heart, so as to suppress deadly epilepsy and cardiac arrhythmias), liquid
mixing (e.g., chemical reactions and medical drug production), crisis management
(e.g., critical prevention of cascading fatal voltage collapses of power grids), high-
performance circuits and devices (e.g., cellular neural networks, multi-coupled sig-
nal modulators, and power converters), and artificial intelligence in decision making
and organization of complex multi-agent networks and systems in industry, eco-
nomics, and the military alike.
Chaos control, even by linguistic definition, involves both basic chaos theory and
automatic control techniques. More than ten years ago, we pointed out that1 “We
subscribe to the idea that any viable theory should be one that is constantly evolving
and developing. When merged, the twin virtues of classical control theory and ad-
vanced nonlinear science may bring about unexpected benefits. For chaotic systems,
if we can incorporate control mechanisms that exploit certain defining characteris-
tics of chaos, the approach becomes more interesting and, hopefully, more efficient
and effective. In addition, due to the inherent association of nonlinearity and chaos
to various issues, the scope of chaos control is much more diverse. When compared
1 Preface, G. Chen and X. Dong, From Chaos to Order: Methodologies, Perspectives and Applications. World Scientific, Singapore, 1998.
to other conventional approaches, chaos control has its unique features with regard
to such aspects as objectives, perspectives, problem formulations, and performance
measures.” Today, we are especially pleased to welcome the present monograph
Controlling Chaos that continues to address the important subject of chaos control,
including in particular the key topics of chaos suppression, synchronization, and
generation (also named chaotification).
This exciting and yet challenging research and development area has continu-
ously been a scientific interdiscipline involving systems and control engineers, the-
oretical and experimental physicists, computational and applied mathematicians, bi-
ologists and physiologists, and electronics specialists, among others. The progress
has been very promising and encouraging to date. Nevertheless, achievements
notwithstanding,2 “Chaos control calls for new efforts and endeavors for the coming
millennium. In this new era, perhaps today’s contemporary concepts of nonlinear
dynamics and controls will undergo yet another cycle of rethinking and reorganiz-
ing; perhaps the chaos control theories, methodologies, and perspectives that have
been drawn together in this book will unleash some other new ideas and valuable
applications; perhaps real breakthroughs will begin to take place, bringing enhance-
ment, improvement, and sustainability to the complex living nature – it is only the
beginning ... .”
2 Epilogue, G. Chen and X. Dong, From Chaos to Order: Methodologies, Perspectives and Appli-
cations. World Scientific, Singapore, 1998.
Preface
Chaos theory, once considered to be the third revolution in physics following rela-
tivity theory and quantum mechanics, has been studied extensively in the past thirty
years. Many chaotic phenomena have been discovered and enormous mathematical
strides have been made. Nowadays, scientists and engineers agree
that chaos is ubiquitous in natural sciences and social sciences, such as in physics,
chemistry, mathematics, biology, ecology, physiology, economics, and so on. Wher-
ever nonlinearity exists, chaos may be found. For a long time, chaos was thought of
as a harmful behavior that could decrease the performance of a system and there-
fore should be avoided when the system is running. One remarkable feature of a
chaotic system distinguishing itself from other nonchaotic systems is that the sys-
tem is extremely sensitive to initial conditions. Any tiny perturbation of the initial
conditions will significantly alter the long-term dynamics of the system. This fact
means that when one wants to control a chaotic system one must make sure that the
measurement of the needed signals is sufficiently precise. Otherwise, any attempt at
controlling chaos may drive the dynamics of the system to an unexpected state.
With the development of chaos theory and practice in engineering, more and more
people want to know the answers to the following questions:
(1) Can chaos be controlled?
(2) Can chaos be utilized?
(3) Can two chaotic systems be in resonance as in the case of periodic ones?
(4) If the answer to the second question is positive, then how can chaos be generated
in a nonchaotic system?
These questions were partly answered by Ott, Pecora, and Chen in the
1990s, which led to a surge in applied studies of chaos. From then on, a
new research area, chaos control, including suppression, utilization, and generation
of chaotic phenomena, came into being. Among these studies, three aspects attract
more attention; that is, stabilization of chaos, synchronization of chaos, and anti-
control of chaos.3
Although several monographs on controlling chaos have been published, the present
book has unique features which distinguish it from others.
First, the types of chaotic systems studied in this monograph are rather extensive.
From the point of view of physics, readers can find not only well-known chaotic
systems, such as the Lorenz system, the Rössler system, and the Hénon map, but
also some new chaotic systems which appeared in recent years, such as the Liu
hyperchaotic system, the Liao chaotic system, the Chen chaotic system, and the Lü
chaotic system. From the point of view of models, one can find difference equations,
ordinary differential equations, and time-delayed differential equations in this book,
which are the main mathematical models describing chaos.
Second, since the monograph is a summary of the authors’ previous research,
the methods proposed here for stabilizing, synchronizing, and generating chaos benefit
to a great degree from the theory of nonlinear control systems, and are more ad-
vanced than those that appear in other introductory books. One example is that in order to
stabilize a chaotic system to one of its equilibria, an inverse optimal control method
is developed in this book. The controller designed according to this method not only
stabilizes the system but also optimizes a meaningful cost functional. Therefore, the
difficulty of solving the Hamilton–Jacobi–Bellman (HJB) equation is avoided. An-
other example is that in order to synchronize two discrete-time chaotic systems, the
exact linearization method is used which provides a unified framework for controller
design for both continuous-time and discrete-time chaotic systems. Yet a third ex-
ample is that in order to chaotify a continuous-time nonchaotic system, a kind of
impulsive control method is developed. A mathematical proof shows that the chaos
induced by this method satisfies Devaney’s definition of chaos.
Last but not least, some rather unique contributions are included in this mono-
graph. One notable feature is the combination of fuzzy logic and chaos. Besides
the famous Takagi–Sugeno (T–S) fuzzy model, a novel model, the fuzzy hyperbolic
model (FHM), which was initially proposed by one of the authors and whose merits
in modeling and control have been illustrated in our book4 earlier, is also included
in this book. In this monograph we combine chaos and fuzzy logic in many aspects:
the T–S fuzzy model is used in Chaps. 4 and 8 for suppressing, modeling, and syn-
chronizing chaotic systems; and the chaotification of the discrete-time
FHM and the continuous-time FHM is studied in Chap. 9. Another notable feature
is that the methods proposed in this monograph can be applied to a wide class of
chaotic systems rather than a specific chaotic system. For example, in Chap. 4 a
systematic method is proposed for stabilizing discrete-time chaotic systems and in
3 Anticontrol of chaos is also known as chaotification. In this book, they are synonymous.
4 H. Zhang and D. Liu, Fuzzy Modeling and Fuzzy Control. Birkhäuser, Boston, 2006.
The book consists of nine chapters. As indicated by the title of the book, the
main content of the book is composed of three parts: suppressing chaos, synchro-
nizing chaos, and generating chaos. To make the book self-contained, additional
materials are added to provide readers with a brief review of the history of chaos
control and some necessary mathematical preliminaries on dynamical systems.
In Chap. 1, we briefly review the history of chaos theory and chaos control.
We first review the history of chaos by following the important events in the de-
velopment of chaos theory. The review spans the period from the last decade of the 19th
century to the 1980s. The work of many distinguished scientists,
such as Poincaré, Birkhoff, van der Pol, Littlewood, Andronov, Lorenz, Smale, Kol-
mogorov, Arnol’d, Feigenbaum, Li, Yorke, and May, is summarized. After that, we
review the development of chaos control from three different aspects, i.e., from the
points of view of suppression, synchronization, and chaotification. For each aspect,
not only the main methods are introduced but also the ideas behind those meth-
ods are mentioned. Some representative methods are introduced, such as the Ott–
Grebogi–Yorke (OGY) method and its extensions, the entrainment and migration
method, the time-delay feedback method, and some state feedback methods. Chaos
synchronization is introduced according to different synchronization patterns, such
as complete synchronization, phase synchronization, lag synchronization, and gen-
eralized synchronization. Chaotification was proposed by Chen in 1997 and has
attracted a lot of attention since then. Methods for chaotification will be reviewed,
including the state feedback method, the state delay feedback method, the impulsive
control method, and the Smale horseshoe method.
In Chap. 2, necessary mathematical background materials on nonlinear dynam-
ics and chaos are introduced. Dynamical system theory is a powerful tool for chaos
study. For the completeness of the book we provide a brief introduction to nonlin-
ear dynamical systems. This chapter is rather difficult for readers with engineering
background since many mathematical concepts, definitions, and theorems are in-
volved. The content of this chapter includes two parts. Some concepts and defini-
tions about nonlinear ordinary differential equations and dynamical systems are introduced first,
such as the concepts of flow, fixed point, equilibrium state, invariant set, attractor,
stable (unstable) manifold, Floquet index, Lyapunov exponents, and Smale horse-
shoe. We also state some important theorems, such as the theorem about existence
and uniqueness of solutions, the Hartman-Grobman theorems, and the Lyapunov
stability theorems. Some concepts and theorems about retarded functional differen-
tial equations (RFDEs) are introduced next, such as the definitions of solutions and
the initial value problem, existence and uniqueness of solutions, and stability of solutions.
After that, some stability criteria for RFDEs are introduced.
Acknowledgments
The authors would like to acknowledge the help and encouragement they have re-
ceived during the course of writing this book. A great deal of the materials presented
in this book is based on the research that we conducted with several colleagues and
former students, including Y. H. Xie, D. S. Yang, Z. S. Wang, W. Huang, J. Wang,
Y. B. Quan, Z. B. Liu, and H. X. Guan. We wish to acknowledge especially Dr. T.
D. Ma and Dr. Y. Zhao for their hard work on this book. The authors also wish to
thank Prof. S. S. Wiggins, Prof. A. Medio, Prof. M. Lines, Prof. M. W. Hirsch, Prof.
S. Smale, Prof. R. L. Devaney, and Prof. Y. A. Kuznetsov for their excellent books
on the theory of nonlinear dynamics and chaos. We are very grateful to the National
Natural Science Foundation of China (Grant Nos. 60521003, 60534010, 60572070,
60621001, 60728307, 60774048, and 60804006), the Program for Innovative Re-
search Team in Universities (Grant No. IRT0421), the Program for Changjiang
Scholars, the National High Technology Research and Development Program of
China (863 Program) (Grant No. 2006AA04Z183), and the 111 Project of the Min-
istry of Education of China (B08015), which provided necessary financial support
for writing this book.
Contents

1 Overview
1.1 The Origin and Development of Chaos Theory
1.2 Control of Chaos
1.2.1 Suppression of Chaos
1.2.2 Synchronization of Chaos
1.2.3 Control and Synchronization of Spatiotemporal Chaos
1.3 Anticontrol of Chaos
1.4 Summary
References
2.14 Examples
2.14.1 Tent Map and Logistic Map
2.14.2 Smale Horseshoe
2.14.3 The Lorenz System
2.15 Basics of Functional Differential Equations Theory
2.16 Summary
References
Index
Chapter 1
Overview
Abstract In this chapter, we will provide a brief introduction to the history and de-
velopment of chaos and chaos control. We begin our introduction with chaos theory.
Some important events are narrated in chronological order. After that, we turn our
attention to the field of chaos control. We present this subject in three parts; that is,
suppressing chaos, synchronizing chaos, and chaotification. We will mention some
main approaches used in research and point out open problems in each part.
1.1 The Origin and Development of Chaos Theory

The word ‘chaos’ generally refers to a phenomenon that is disordered and irregu-
lar. In modern scientific terminology, however, ‘chaos’ refers to a pseudo-random
phenomenon generated in a deterministic system. Henri Poincaré is acknowledged
as the first person to glimpse the possibility of chaos. When he studied the stabil-
ity properties of the solar system at the end of the 19th century, Poincaré found
that even in the case of three masses moving under Newton’s law of attraction they
could still exhibit very complicated behavior [56]. This kind of motion depends
sensitively on the initial conditions, thereby rendering long-term prediction impos-
sible [15]. Birkhoff developed geometric methods created by Poincaré and found
many different types of long-term limiting behaviors, such as ω-limit sets and α-limit
sets. The term ‘dynamical systems’ comes from his work [3], but chaos has
always been in the background. During the first half of the 20th century, nonlinear
oscillators were mostly studied due to their vital role in the development of such
technologies as radio, radar, phase-locked loops, and lasers. Success in technology
stimulated the invention of new mathematical tools. The pioneers in this area include
van der Pol, Andronov, Littlewood, Cartwright, Levinson, and Smale. In particular, in
the 1950s, Cartwright, Littlewood, and Levinson revealed a previously unknown type
of complexity by showing that a certain forced nonlinear oscillator had an infinite
number of solutions with different periods. Smale extended their result to a general
framework and illustrated the phenomenon with his horseshoe map.
Entering the 1980s, computers became a powerful tool, used to help re-
searchers visualize the complicated structures of strange attractors, calculate char-
acteristic indices of the chaotic systems, and provide evidence required by proofs
[19, 69]. Mandelbrot constructed the theory of fractal geometry at the end of the
1970s, and drew the first picture of the Mandelbrot set [39]. The theory of fractal
geometry generalizes the notion of dimension from integers to real numbers and
has become a powerful tool for characterizing the complicated structures of strange
attractors. From the middle of the 1980s, more and more researchers have paid at-
tention to how to control chaos, including suppression, synchronization, and chao-
tification.
1.2 Control of Chaos

There are many practical reasons for controlling chaos. In some situations, such as in
a distributed artificial intelligence system, chaotic behavior is undesirable and will
degrade the performance of the entire system. Naturally, chaos should be reduced
as much as possible, or totally suppressed.
However, recent research has shown that chaos may be useful under certain
circumstances. In the following, we will consider three examples where chaos is
needed.
Since a dense set of unstable limit cycles is usually embedded within a chaotic at-
tractor, if any of these limit cycles can be stabilized, it may be desirable to stabilize
one that characterizes a certain maximal system performance. This property can be
used to design a multi-purpose system.
Fluid mixing is another typical example in which chaos is not only useful but
also very important. When two fluids are to be thoroughly mixed and the required
energy is to be minimized, one will hope that the particle motion of the fluids is
strongly chaotic, since otherwise it is hard to obtain rigorous mixing properties, due
to the possibility of invariant 2-tori in the flow.
The sensitivity of chaotic systems to small perturbations can be used to steer sys-
tem trajectories to a desired target quickly. NASA scientists, utilizing the sensitivity
of the three-body problem of celestial mechanics to small perturbations, have used
small amounts of residual hydrazine fuel to send the spacecraft ISEE-3/IEC more
than 50 million miles across the solar system, which achieved the first scientific
cometary encounter [62]. This would not have been possible in a nonchaotic system
since a large control effort would be required to quickly direct the system trajectory
to reach distant targets.
For a long time chaotic systems were thought of as unpredictable and uncontrol-
lable: unpredictable, because a small disturbance in the initial conditions or param-
eters of a chaotic system produces changes of the original motion that grow expo-
nentially; uncontrollable, because small disturbances usually lead to other chaotic
states but not to any stable and regular motion [17]. However, the demand for
controlling chaos, driven both by industrial applications and by academic
interest, has become stronger than ever before. The first paper on
controlling chaos was published in 1989 [25], but it is the paper by Ott, Grebogi,
and Yorke [46] that attracted the most attention, and their method is now known as the OGY
method. In 1990, Pecora and Carroll opened another important application field,
synchronization of chaos [49]. Several years later, Chen and Lai proposed a method
for generating chaos in a nonchaotic discrete-time system [10]. From then on, a
new research aspect, anticontrol of chaos, or chaotification, came into being. Up to
now, numerous control methods have been proposed, developed, and tested. Many
experiments have demonstrated that chaotic physical systems respond quite well
to those newly developed, conventional or novel, simple or sophisticated, control
strategies. Applications of chaos control have been proposed in such diverse areas
of research as biology, medicine, physiology, epidemiology, chemical engineering,
laser physics, electric power systems, fluid mechanics, aerodynamics, electronics,
communications, and so on. Chaos control has become a process that manages the
dynamics of a nonlinear system on a wider scale, with the hope that more benefits
may thus be derived.
1.2.1 Suppression of Chaos

In the beginning, the goal of controlling chaos was to eliminate the harmful chaotic
motion. A remarkable feature which distinguishes chaotic systems from other non-
linear systems is that usually a chaotic system has at least one chaotic attractor.1
Thus, the key to the problem is how to utilize the attractor in the procedure of con-
troller design. The OGY method is based on such an idea: for a neighborhood of
a point located on the chaotic attractor there is always a periodic orbit, which may
be stable or unstable, passing through the neighborhood. Thus, if one tunes the pa-
rameters of the chaotic system slightly such that the trajectory is displaced onto the
periodic orbit, then, from that time on, the trajectory will remain on the peri-
odic orbit. Because the OGY method is limited to the low-dimensional case, in
which the equilibrium points of the considered chaotic system are saddles and the
observed state variables are not contaminated with noise, several modified OGY
methods were subsequently proposed [16, 18, 20, 27, 76]. In essence,
the OGY method is a local method. One may wait for a long time before the trajec-
tory goes into the neighborhood in which the controller will be turned on. However,
because of the ergodicity of the chaotic motion on the attractor, there is always an
instant when the trajectory will be in the neighborhood. To shorten the waiting time,
a targeting method was proposed which can direct the trajectory of the chaotic sys-
tem into the neighborhood in a very short time [62, 63]. But sometimes it is difficult
1 In some literature, a strange attractor has a different meaning to a chaotic attractor. The former
emphasizes the geometrical structure of the attractor and sometimes has a fractional dimension.
The latter emphasizes the sensitivity to initial conditions. Two trajectories starting from differ-
ent but close initial points will be divergent exponentially. Therefore, a system having a strange
attractor may not be a chaotic one while a chaotic attractor may have integer dimension [47].
to obtain the parameters of a chaotic system. Even if the parameters can be ob-
tained, it is difficult to adjust them as control inputs. So, the continuous feedback
control method is considered. There is a problem which should be answered when
continuous feedback is used: will the topological structure of the chaotic attractor
be destroyed by the continuous feedback? At present, we can answer the question
as follows: if all of the invariant sets of a chaotic system are hyperbolic invari-
ant sets, then small perturbations will not destroy the topological structure of its
state space [80]. We know that many chaotic systems have hyperbolic invariant sets.
Therefore, at least for these systems, the continuous feedback control method is fea-
sible. The studies over the last decade show that almost all the methods that do well
for conventional nonlinear control problems can be used to suppress chaos (see [8]
and references therein).
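To make the OGY idea concrete, the following Python fragment is a minimal, hypothetical sketch (not code from this book or from the original OGY paper). It stabilizes the unstable fixed point of the logistic map x(k + 1) = rx(k)(1 − x(k)) by switching a small parameter perturbation on only when the chaotic orbit wanders into a neighborhood of the fixed point; the size of the perturbation is computed from the local linearization, in the spirit of the OGY method, and all numerical values are illustrative assumptions.

```python
# Hypothetical sketch of OGY-style control of the logistic map (illustrative only).
# A small parameter perturbation is applied only when the orbit is close to the
# unstable fixed point, chosen so that the next iterate lands (approximately) on it.

r0 = 3.8                      # nominal parameter (chaotic regime)
x_star = 1.0 - 1.0 / r0       # unstable fixed point of x -> r*x*(1 - x)
lam = 2.0 - r0                # local multiplier df/dx at (x_star, r0)
g = x_star * (1.0 - x_star)   # parameter sensitivity df/dr at (x_star, r0)

eps = 0.01                    # activation neighborhood around the fixed point
dr_max = 0.1                  # bound on the admissible parameter perturbation

x = 0.3                       # arbitrary initial condition
first_on = None
for k in range(2000):
    dr = 0.0
    if abs(x - x_star) < eps:             # control only near the target orbit
        dr = -lam * (x - x_star) / g      # linearized placement onto x_star
        dr = max(-dr_max, min(dr_max, dr))
        if first_on is None:
            first_on = k
    x = (r0 + dr) * x * (1.0 - x)

print("control first switched on at step", first_on)
print("final state:", x, "  target:", x_star)
```

Because of the ergodicity of the chaotic orbit, the activation neighborhood is eventually visited; from then on the bounded perturbation keeps the state pinned near the fixed point while the parameter stays within r0 ± dr_max.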
One main goal of controlling chaos is to steer the trajectory of a chaotic system to
a periodic one. So, the first step of the control procedure is to identify these periodic
orbits.
Suppose that the mathematical model is known. For discrete-time systems, the
periodic orbits can be obtained by direct calculation from the definition, whether
stable or unstable. For continuous-time systems, the problem can be
changed to the case of discrete time by the Poincaré section method. Thus, the key
problem is how to identify the periodic orbits of discrete-time systems. In practice,
one wants to use computers to find the periodic orbits of the discrete-time systems.
Direct iteration of a discrete-time system, however, can only find stable periodic orbits. For un-
stable periodic orbits (UPOs) the iteration method loses its effectiveness because
even if a point is located on a UPO at present, at the next instant of the evolution
it will deviate from the UPO due to small errors caused either by calculation or by
measurement.
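As a small, hypothetical illustration of computing periodic orbits of a discrete-time map directly from the definition (this is a generic root-finding sketch, not one of the four specialized methods discussed next), Newton's method can be applied to g(x) = f^p(x) − x. Plain forward iteration would never settle on such an orbit, because it is unstable.

```python
# Toy sketch: locate the period-2 UPO of the logistic map f(x) = r*x*(1-x)
# by solving f(f(x)) - x = 0 with Newton's method (finite-difference derivative).

r = 3.8

def f(x):
    return r * x * (1.0 - x)

def g(x, p=2):
    y = x
    for _ in range(p):          # y = f^p(x)
        y = f(y)
    return y - x

def newton(x0, tol=1e-12, h=1e-7, max_iter=100):
    x = x0
    for _ in range(max_iter):
        dg = (g(x + h) - g(x - h)) / (2.0 * h)   # numerical derivative of g
        step = g(x) / dg
        x -= step
        if abs(step) < tol:
            break
    return x

p1 = newton(0.4)   # two seeds, one near each point of the period-2 orbit
p2 = newton(0.9)
print("period-2 orbit:", p1, "->", f(p1), "and", p2, "->", f(p2))
```

For longer periods or higher-dimensional maps a multidimensional Newton iteration (or one of the specialized techniques cited below) would be needed, since the number of roots of f^p(x) = x grows quickly with p.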
Four methods are known which work well for identifying UPOs. The first method
is called the word-lifting technique which was proposed by Hao [22]. Using this
method an analytical expression of periodic orbits can be obtained. The second one
was proposed in [45] in which both symbol dynamics and graph theory are used to
determine a periodic orbit. The main idea of this method is the construction of a
directed graph which represents the structure of the state space for the dynamical
system under investigation. The third one is to transform the original chaotic system
by a homeomorphism (the term will be introduced in the following chapter). With-
out changing the topological structure of the state space, this transformation only
changes the stabilities of the orbits. Thus, UPOs become stable periodic orbits of
the new system. Once these stable periodic orbits are identified by some numerical
algorithms [13, 35, 52, 53], we can get the UPOs of the original chaotic system by
inverse transformation. The fourth method is very ingenious, in which the retarded
state variables are taken as inputs [54]. Considering the fact that a chaotic trajectory
can approach any periodic orbit, if we take an expected period as the delay time
and design a stabilizing controller, then the trajectory is just on the expected peri-
odic orbit when the controlled chaotic system is stabilized. Only the structure of the
system and the period of the orbit are needed in the delayed feedback method. The delayed
feedback method is a two-in-one method. The procedure of identifying the UPOs
6 1 Overview
is combined with the procedure of stabilization. The periodic orbit will be obtained
provided that the controlled system is stabilized. This merit has made the method
widely employed [4, 5, 6, 7, 23, 33, 34, 66]. However, deeper study revealed a serious
limitation of the delayed feedback method; that is, the delayed feedback method
cannot stabilize UPOs that have an odd number of real Floquet multipliers greater than unity. This
is called the odd number limitation. Fortunately, there are some modified delayed
feedback methods which can circumvent the odd number limitation
[43, 44, 70].
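The delayed feedback idea can be sketched numerically as follows; this is a hypothetical illustration, not an implementation from the cited references. The perturbation u(t) = K[y(t − τ) − y(t)] is added to the second equation of the Rössler system, the delay is realized with a circular buffer, and a crude Euler scheme is used. The gain K, the delay τ (taken close to the period of the targeted orbit), the saturation bound, and the integration step are all assumed values that would need tuning; the averaged residual feedback printed at the end indicates whether a periodic orbit has actually been reached.

```python
# Hypothetical sketch of time-delayed feedback control applied to the Rossler system.
# u(t) = K*(y(t - tau) - y(t)) is added to dy/dt; only the delay tau (roughly the
# period of the target orbit) enters the control law, not the orbit itself.

a, b, c = 0.2, 0.2, 5.7
K, tau = 0.2, 5.9             # assumed gain and delay (would need tuning)
dt, T = 0.005, 600.0

n_delay = int(round(tau / dt))
buf = [0.0] * n_delay         # circular buffer holding past values of y
x, y, z = 1.0, 1.0, 1.0
u_sum, u_cnt = 0.0, 0

steps = int(T / dt)
for k in range(steps):
    y_delayed = buf[k % n_delay]
    u = K * (y_delayed - y) if k >= n_delay else 0.0   # control off until buffer fills
    u = max(-1.0, min(1.0, u))                         # keep the perturbation small
    dx = -y - z
    dy = x + a * y + u
    dz = b + z * (x - c)
    buf[k % n_delay] = y       # store current y; it is read again tau time units later
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    if k > steps - int(50.0 / dt):                     # average |u| near the end
        u_sum += abs(u)
        u_cnt += 1

print("mean |u| over the last 50 time units:", u_sum / u_cnt)
```

A small residual |u| is the practical signature of the two-in-one property mentioned above: the stabilized motion is (approximately) a genuine periodic orbit of the uncontrolled system, on which the feedback term vanishes.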
If the mathematical model is unknown and only some measurable signals are
available, then one first needs to utilize the technique of phase-space reconstruction to
obtain a model. The four methods discussed above can then be applied [9, 65].
A merit of the feedback control strategy is that the input energy is small and
does not destroy the structure of the state space; moreover, when the control goal is
reached, the controller can be turned off. However, the strategy requires the con-
tinuous monitoring of the state, which is difficult in some cases. For example, the
extremely high temperature in plasma will make it very difficult to precisely mea-
sure some physical quantities in the plasma [24]. Non-feedback control methods for
suppressing chaos can overcome this difficulty. There are many different kinds of
non-feedback control methods. Some of them do not require the understanding of
the chaotic system dynamics, such as the method with a periodic signal to suppress
chaos [42] or the method with a noise signal to suppress chaos [55]. When imple-
menting a control strategy, non-feedback control methods impose fewer restrictions
than feedback control methods do. But, the cost is that
with the non-feedback control strategy one can only achieve some general control
goals, such as eliminating chaos, realizing periodic motion, etc. Some non-feedback
control strategies can steer the trajectories of a chaotic system to some predesigned
orbits, which, especially, may not be the inherent orbits of the original chaotic sys-
tem [12, 29, 30, 31]. But, in that situation, one must have an exact understanding
of the dynamics of the original system. When a non-feedback control method is
used, the control signal would not reduce to zero even if the control goal has been
realized, which will produce new problems in practice.
1.2.2 Synchronization of Chaos

The common meaning of synchronization is that the phases of two or more oscilla-
tors change according to some patterns. C. Huygens was the first person to observe
the phenomenon of synchronization, in coupled pendulum clocks. Actually, synchroniza-
tion phenomena are abundant in science, nature, engineering, and social life. Sys-
tems as diverse as clocks, singing crickets, cardiac pacemakers, firing neurons, and
applauding audiences exhibit a tendency to operate in synchrony [50]. These phe-
nomena are universal and can be understood within a common framework based
on modern nonlinear dynamics. Existing studies of suppressing chaos show that a
periodic input signal can make a chaotic system periodic with the same frequency
1.2 Control of Chaos 7
as that of the input signal. This situation can be looked at as a pattern of synchro-
nization in which a periodic signal synchronizes with a chaotic signal (the latter can be
thought of as a periodic motion with an infinite period). Furthermore, if the
input signal is replaced with a chaotic signal, what will happen? Will there be the
phenomenon of synchronization? Intuitively, even if the two chaotic signals are gen-
erated from chaotic systems with the same structure, such synchronization would
not be expected to happen because of the sensitivity of chaotic systems. However, Pecora and
Carroll first observed synchronization of chaos in circuit experiments. They
took the Lorenz system as the drive system, and replicated the same system as the
response system. When they replaced the first state variable of the response sys-
tem with the first state variable of the drive system, the synchronization happened.
This synchronization scheme is called the P–C scheme. They found that the syn-
chronization in the P–C scheme is not only globally asymptotically stable but also
structurally stable. By a rigorous analysis they pointed out that two chaotic systems in
the P–C scheme can get synchronized only if the conditional Lyapunov exponents
of the response system are all negative.2
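A rough numerical sketch of the P–C scheme is given below (forward-Euler integration with assumed step size and initial conditions; an illustration, not the original circuit experiment). The response is a copy of the (y, z) subsystem of the Lorenz equations driven by the x signal of the drive; because the conditional Lyapunov exponents of this driven subsystem are negative, the response states converge to those of the drive even though both evolve chaotically.

```python
# Minimal sketch of the Pecora-Carroll drive-response scheme for the Lorenz system.
# The response replicates the (y, z) equations but receives the drive's x signal.

sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
dt, steps = 0.001, 30000

x, y, z = 1.0, 1.0, 1.0       # drive state
yr, zr = -15.0, 40.0          # response state, started far from the drive

for k in range(steps):
    # drive: full Lorenz system
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    # response: same (y, z) equations with the drive signal x substituted
    dyr = x * (rho - zr) - yr
    dzr = x * yr - beta * zr
    x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
    yr, zr = yr + dt * dyr, zr + dt * dzr
    if k % 5000 == 0:
        print(f"t = {k * dt:5.1f}   sync error = {abs(y - yr) + abs(z - zr):.3e}")
```

The printed synchronization error decays rapidly to the level of rounding noise, which is the numerical counterpart of the global asymptotic stability of the P–C scheme noted above.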
Although the problems considered by Ott and Pecora seemed different and inde-
pendent from each other, soon researchers realized the relationship between them.
In fact, many control methods developed in suppressing chaos can also be used in
synchronizing chaos. In this section, rather than talking more about the synchro-
nization methods, we pay more attention to the patterns of synchronization that
can be obtained among chaotic systems. For convenience, suppose that we have the
following drive system (1.1) and response system (1.2):3

ẋ = f (x), y = h(x), (1.1)

dx̂/dt = fˆ(x̂, u), ŷ = ĥ(x̂), (1.2)

where x and x̂ are the state variables, y and ŷ are the outputs, h and ĥ are the output
maps, and u is the coupling or control signal derived from the drive system. Suppose that the following
relation holds when (1.1) and (1.2) get synchronized:
ŷ = ϕ (y), ϕ : Rm → Rq , (1.3)
where ϕ is continuous but not necessarily differentiable [26]. The set of pairs (y, ŷ)
satisfying (1.3) is called the manifold of synchronization.
Considering the dynamics of drive–response systems and the structure of the
manifold of synchronization, we give the following categories of synchronization
of chaos.
2 It should be pointed out that this condition is not necessary for synchronization of chaos. Some
chaotic systems cannot get synchronized as in P–C schemes, but they indeed can get synchronized
[21, 48].
3 We take a continuous-time system as an example. The same illustration can also be done for
discrete-time systems.
(i) Complete synchronization. In this case, x̂(t) = x(t); that is, m = q and ϕ is the
identity mapping. If fˆ = f , the relation is called identically complete synchro-
nization; if fˆ ≠ f , the relation is called nonidentically complete synchroniza-
tion. The manifold of complete synchronization is the diagonal hyperplane in the
combined (x, x̂) phase space.
(ii) Generalized synchronization [59]. If the state of the response system is uniquely
determined by the state of the drive system, that is, (1.3) holds with ϕ a certain
mapping, then the drive system and the response system are thought of as in the
pattern of generalized synchronization. No conditions are imposed on m and q.
In this case, the manifold of synchronization is no longer a diagonal hyperplane
of the combined (x, x̂) phase space.4
(iii) Lag synchronization [57]. In this case, x̂(t) ≈ x(t − τ ), where τ is a small posi-
tive number.
(iv) Anticipating synchronization [72]. Analogous to lag synchronization, when an-
ticipating synchronization happens, the state variables between the drive system
and the response system have the relationship x̂(t) ≈ x(t + τ ), where τ is a small
positive number.
(v) Phase synchronization. Let φ1 and φ2 be the phases of system (1.1) and system
(1.2), respectively. The phase synchronization means that for rational numbers
m and n, there is a small positive number ε such that |mφ1 (t) − nφ2 (t)| < ε for all t.
4 A practical way to detect generalized synchronization is to construct an auxiliary system with the same structure as (1.2). Provide the same input signals
derived from the drive system to the response system and the auxiliary system. Choose the initial
point of the auxiliary system in the neighborhood of the initial point of the response system. If a
stable generalized synchronization happens, then the trajectories of the auxiliary system and the
response system will converge to each other. Otherwise, they will be different [1].
1.2.3 Control and Synchronization of Spatiotemporal Chaos

All of the chaotic systems mentioned above are described by ordinary differential
equations or difference equations. This means that for these chaotic systems, the
variation rate of the state variables with respect to spatial position is ignored. We
call these chaotic systems temporal chaotic systems. But, sometimes, a chaotic sys-
tem is not isotropic and must be described with partial differential equations. For
these chaotic systems, the synchronization can take place either in time or in space
position. Since the state of a partial differential equation belongs to a function space
whose dimension is infinite, a spatiotemporal chaotic system can provide more pat-
terns than those in a temporal chaotic system. Thus, the considered problems of
synchronizing two or more spatiotemporal chaotic systems are much more compli-
cated than those considered for temporal chaotic systems. At present, research on
control and synchronization of spatiotemporal chaos mainly follows two directions
[24, 32, 79]:
(i) The study of coupled map lattices. Each lattice site is governed by a discrete map
and evolves with time. Adjacent sites are coupled, and boundary con-
ditions are specified. There are two reasons for studying coupled map lattices. First, such a lattice can
be obtained by discretizing a partial differential equation. Therefore, many im-
portant characteristics that belong to partial differential equations can be found
in the systems of coupled map lattices. Second, mathematical tools required by
coupled map lattices are much simpler than those needed when one directly
studies partial differential equations.
(ii) The study of the underlying partial differential equations. Many of the im-
portant properties of a partial differential equation cannot be captured by a
coupled map lattice. Therefore, it is necessary to study the original partial dif-
ferential equations.
There is a marked difference between temporal chaotic systems and spatiotemporal
chaotic systems when one wants to control them. A temporal chaotic system can be
controlled only by altering its vector field through a feedback controller. But even
for a coupled map lattice this approach is impractical, since a large number of controllers would
be required. A feasible and effective method is to alter the state of some lattices
and diffuse the alteration through coupling relations so that the control goal can be
achieved [24].
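The pinning idea just described can be sketched on a coupled logistic map lattice. The construction below is a hypothetical illustration with arbitrarily chosen parameters (pinning every second site, coupling strength 0.5), not a scheme taken from [24]: the pinned sites are simply reset to the target fixed point after each update, and the diffusive coupling spreads the correction to the unpinned sites. With this pinning density each unpinned site effectively sees constant neighbors and settles onto the target; sparser pinning would require a more careful choice of coupling and feedback.

```python
# Hypothetical sketch of controlling a coupled map lattice by pinning a subset of sites.
# Local map: logistic; coupling: diffusive with periodic boundary conditions.

import random

r, eps = 3.8, 0.5                 # local parameter (chaotic) and coupling strength
N, steps = 64, 200
x_star = 1.0 - 1.0 / r            # homogeneous target state (fixed point of the local map)

def f(u):
    return r * u * (1.0 - u)

x = [random.random() for _ in range(N)]
for k in range(steps):
    fx = [f(v) for v in x]
    x = [(1 - eps) * fx[i] + 0.5 * eps * (fx[i - 1] + fx[(i + 1) % N])
         for i in range(N)]       # diffusively coupled update
    for i in range(0, N, 2):      # pinning: overwrite every second site with the target
        x[i] = x_star
    if k % 50 == 0 or k == steps - 1:
        dev = max(abs(v - x_star) for v in x)
        print(f"step {k:3d}   max deviation from target = {dev:.3e}")
```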
Nowadays, more and more researchers have realized that the study of spatiotem-
poral chaos can provide instructive inspiration and necessary ways for solving the
problem of turbulence.
1.3 Anticontrol of Chaos

Since many applications of chaos have been found in diverse areas, a new notion, anticontrol
of chaos (or chaotification), was proposed [10]. Later, the concept and method of
synthesis of chaos were proposed [71]. Anticontrol of chaos became a new direction
in the study of chaos control. It seems that based on the well-developed theory of
chaos there should be no difficulty in chaotification. However, this is not the case. At the
beginning, the difficulties mainly existed in the following aspects:
(i) How to define chaos? The first definition of chaos was proposed by Li and
Yorke in 1975 for the mappings on intervals of one-dimensional space [37].
1.4 Summary
In this chapter, we briefly introduced the history and presented the current status
of chaos and chaos control. Due to the focus of the book, we do not list all the
methods developed in chaos control. Our aim is to give an introduction to the
development of the theory so as to provide a rough picture of chaos control for a
new researcher in this area. Readers who are interested in how the synchronization
of chaos is utilized in applications are referred to, e.g., [36] and [67].
References
1. Abarbanel HDI, Rulkov NF, Sushchik MM (1996) Generalized synchronization of chaos: the
auxiliary system approach. Phys Rev E 53:4528–4535
2. Alligood KT, Sauer TD, Yorke JA (1997) Chaos – An Introduction to Dynamical Systems.
Springer, New York
3. Birkhoff GD (1966) Dynamical Systems (revised edition). American Mathematical Society,
Providence, RI
4. Bielawski S, Derozier D, Glorieux P (1993) Experimental characterization of unstable peri-
odic orbits by controlling chaos. Phys Rev A 47:2492–2495
5. Bleich ME, Hochheiser D, Moloney JV, Socolar J (1997) Controlling extended systems with
spatially filtered, time-delayed feedback. Phys Rev E 55:2119–2126
6. Bleich ME, Socolar J (2000) Delayed feedback control of a paced excitable oscillator. Int J
Bifurc Chaos 10:603–609
7. Brandt ME, Chen G (2000) Time-delay feedback control of complex pathological rhythms in
an atrioventricular conduction model. Int J Bifurc Chaos 10:2781–2784
8. Chen G, Dong X (1998) From Chaos to Order: Methodologies, Perspectives and Applica-
tions. World Scientific, Singapore
9. Chen G, Chen G, de Figueiredo RJP (1999) Feedback control of unknown chaotic dynamical
systems based on time-series data. IEEE Trans Circuits Syst I 46:640–644
10. Chen G, Lai D (1996) Feedback control of Lyapunov exponents for discrete-time dynamical
systems. Int J Bifurc Chaos 6:1341–1349
11. Chen G, Wang XF (2006) Chaotification of Dynamical Systems: Theories, Methodologies,
and Applications. Shanghai Jiaotong University Press, Shanghai
12. Chen L (2001) An open-plus-closed-loop control for discrete chaos and hyperchaos. Phys
Lett A 281:327–333
13. Davidchack RL, Lai Y (1999) Efficient algorithm for detecting unstable periodic orbits in
chaotic systems. Phys Rev E 60:6172–6175
14. Devaney RL (1989) An Introduction to Chaotic Dynamical Systems, 2nd edn. Addison-
Wesley, New York
15. Diacu F, Holmes P (1996) Celestial Encounters: The Origins of Chaos and Stability. Prince-
ton University Press, Princeton, NJ
16. Ding M, Ott E, Grebogi C (1994) Controlling chaos in a temporally irregular environment.
Phys D 74:386–394
17. Dyson F (1988) Infinite in All Directions. Harper and Row, New York
18. Epureanu BI, Dowell EH (2000) Optimal multi-dimensional OGY controller. Physica D
139:87–96
19. Enns R, McGuire G (1997) Nonlinear Physics with Maple for Scientists and Engineers.
Birkhäuser, Boston
20. Grebogi C, Lai Y (1997) Controlling chaos in high dimensions. IEEE Trans Circuits Syst I
44:971–975
21. Gutiérrez JM, Iglesias A (1998) Synchronizing chaotic systems with positive conditional Lya-
punov exponents by using convex combinations of the drive and response systems. Phys Lett
A 239:174–180
22. Hao B (1989) Elementary Symbolic Dynamics and Chaos in Dissipative Systems. World Sci-
entific, Singapore
23. Hikihara T, Kawagoshi T (1996) An experimental study on stabilization of unstable periodic
motion in magneto-elastic chaos. Phys Lett A 211:29–36
24. Hu G, Xiao J, Zheng Z (2000) Chaos Control. Shanghai Scientific and Technological Educa-
tion Publishing House, Shanghai
25. Hübler A (1989) Adaptive control of chaotic systems. Helv Phys Acta 62:343–347
26. Hunt BR, Ott E, Yorke JA (1997) Differentiable generalized synchronization of chaos. Phys
Rev E 55:4029–4034
27. Iplikci S, Denizhan Y (2001) Control of chaotic systems using targeting by extended control
regions method. Physica D 150:163–176
28. Ingraham R (1992) A Survey of Nonlinear Dynamics (‘Chaos Theory’). World Scientific,
Singapore
29. Jackson EA (1991) On the control of complex dynamic systems. Physica D 50:341–366
30. Jackson EA, Grosu I (1995) An open-plus-closed-loop (OPCL) control of complex dynamic
systems. Physica D 85:1–9
31. Jackson EA, Hübler A (1990) Periodic entrainment of chaotic logistic map dynamics. Physica
D 44:407–420
32. Kahan S, Montagne R (2001) Controlling spatio-temporal chaos. Physica A 290:87–91
33. Konishi K, Hirai M, Kokame H (1998) Decentralized delayed-feedback control of a coupled
map model for open flow. Phys Rev E 58:3055–3059
34. Konishi K, Kokame H, Hirata K (1999) Coupled map car-following model and its delayed-
feedback control. Phys Rev E 60:4000–4007
35. Lan Y, Cvitanović P (2004) Variational method for finding periodic orbits in a general flow.
Phys Rev E 69:061217(1–10)
36. Larson LE, Liu JM, Tsimring LS (2006) Digital Communications using Chaos and Nonlinear
Dynamics. Springer, New York
37. Li TY, Yorke JA (1975) Period three implies chaos. Am Math Mon 82:985–992
38. Lü J, Zhou TS, Chen G, Yang X (2002) Generating chaos with a switching piecewise-linear
controller. Chaos 12:344–349
39. Mandelbrot B (1983) The Fractal Geometry of Nature. Freeman, Paris
40. Marotto FR (1978) Snap-back repellers imply chaos in Rn . J Math Anal Appl 63:199–223
41. May RM (1976) Simple mathematical model with very complicated dynamics. Nature
261:459–467
42. Mirus KA, Sprott JC (1999) Controlling chaos in a high dimensional system with periodic
parametric perturbations. Phys Lett A 254:275–278
43. Nakajima H (1997) On analytical properties of delayed feedback control of chaos. Phys Lett
A 232:207–210
44. Nakajima H, Ueda Y (1998) Limitation of generalized delayed feedback control. Physica D
111:143–150
45. Osipenko G (2007) Dynamical Systems, Graphs, and Algorithms. Springer, Berlin
46. Ott E, Grebogi C, Yorke JA (1990) Controlling chaos. Phys Rev Lett 64:1196–1199
47. Ott E (2002) Chaos in Dynamical Systems. Cambridge University Press, London
48. Pan S, Yin F (1997) Using chaos to control chaotic systems. Phys Lett A 231:173–178
49. Pecora LM, Carroll TL (1990) Synchronization in chaotic systems. Phys Rev Lett 64:821–824
50. Pikovsky AS, Rosenblum MG, Kurths J (2001) Synchronization: A Universal Concept in
Nonlinear Science. Cambridge University Press, London
51. Pikovsky AS, Rosenblum MG, Osipov GV, Kurths J (1997) Phase synchronization of chaotic
oscillators by external driving. Physica D 104:219–238
52. Pingel D, Schmelcher P, Diakonos FK, Biham O (2000) Theory and application of systematic
detection of unstable periodic orbits in dynamical systems. Phys Rev E 62:2119–2134
53. Pingel D, Schmelcher P, Diakonos FK (2001) Detecting unstable periodic orbits in chaotic
continuous-time dynamical systems. Phys Rev E 64:026214
54. Pyragas K (1992) Continuous control of chaos by self-controlling feedback. Phys Lett A
170:421–428
55. Ramesh M, Narayanan S (1999) Chaos control by nonfeedback methods in the presence of
noise. Chaos Solitons Fractals 10:1473–1489
56. Robinson RC (2004) An Introduction to Dynamical Systems: Continuous and Discrete. Pren-
tice Hall, New York
57. Rosenblum MG, Pikovsky AS, Kurths J (1997) From phase to lag synchronization in coupled
chaotic oscillators. Phys Rev Lett 78:4193–4196
58. Ruelle D, Takens F (1971) On the nature of turbulence. Commun Math Phys 20:167–192
59. Rulkov NF, Sushchik MM, Tsimring LS, Abarbanel HDI (1995) Generalized synchronization
of chaos in directionally coupled chaotic systems. Phys Rev E 51:980–994
60. Schiff SJ, Jerger K, Duong DH, Chang T, Spano ML, Ditto WL (1994) Controlling chaos in
the brain. Nature 370:615–620
61. Sharkovskii AN (1964) Coexistence of cycles of a continuous map of a line into itself. J Ukr
Math 16:61–71
62. Shinbrot T, Grebogi C, Yorke JA, Ott E (1993) Using small perturbations to control chaos.
Nature 363:411–417
63. Shinbrot T, Ott E, Grebogi C, Yorke JA (1990) Using chaos to direct orbits to targets. Phys
Rev Lett 65:3215–3218
64. Smale S (1967) Differentiable dynamical systems. Bull Am Math Soc 73:747–817
65. So P, Ott E, Sauer T, Gluckman BJ, Grebogi C, Schiff SJ (1997) Extracting unstable periodic
orbits from chaotic time series data. Phys Rev E 55:5398–5417
66. Socolar JES, Sukow DW, Gauthier DJ (1994) Stabilizing unstable periodic orbits in fast dy-
namical systems. Phys Rev E 50:3245–3248
67. Stavroulakis P (2006) Chaos Applications in Telecommunications. CRC Press, New York
68. Tang KS, Man KF, Zhong GQ, Chen G (2001) Generating chaos via x|x|. IEEE Trans Circuits
Syst I 48:636–641
69. Tucker W (2002) A rigorous ODE solver and Smale’s 14th problem. Found Comput Math
2:53–117
70. Ushio T (1996) Limitation of delayed feedback control in nonlinear discrete-time systems.
IEEE Trans Circuits Syst I 43:815–816
71. Vanecek A, Celikovsky S (1996) Control Systems: From Linear Analysis to Synthesis of
Chaos. Prentice Hall, Upper Saddle River, NJ
72. Voss H (2000) Anticipating chaotic synchronization. Phys Rev E 61:5115–5119
73. Wang XF, Chen G (2003) Generating topologically conjugate chaotic systems via feedback
control. IEEE Trans Circuits Syst I 50:812–817
74. Wang XF, Chen G, Yu X (2000) Anticontrol of chaos in continuous-time systems via time-
delay feedback. Chaos 10:771–779
75. Wiggins S (2003) Introduction to Applied Nonlinear Dynamical Systems and Chaos, 2nd edn.
Springer, New York
76. Yagasaki K, Uozumi T (1998) New approach for controlling chaotic dynamical systems. Phys
Lett A 238:349–357
77. Yang XS, Li QD (2007) Chaotic Systems and Chaotic Circuits. Science Press, Beijing (in
Chinese)
78. Zhang HG, Wang ZL, Liu D (2005) Chaotifying fuzzy hyperbolic model using impulsive and
nonlinear feedback control approaches. Int J Bifurc Chaos 15:2603–2610
79. Zhao Y (2003) Introduction to some methods of chaos analysis and control for PDEs. In:
Chen G (ed) Chaos Control. Springer, New York
80. Zhang Z (1997) The Principle of Differential Dynamical Systems. Science Press, Beijing
Chapter 2
Preliminaries of Nonlinear Dynamics and Chaos
Abstract This chapter provides a brief review of some concepts and tools related to
the subject of the monograph – chaos suppression, chaos synchronization, and chao-
tification. After a quick review of the history of ‘dynamical systems,’ we provide a
summary of important definitions and theorems, including equilibrium points, pe-
riodic orbits, quasiperiodic orbits, stable and unstable manifolds, attractors, chaotic
attractors, Lyapunov stability, orbital stability, and symbolic dynamics, which are
all from the theory of ordinary differential equations and ordinary difference equa-
tions. The results are summarized both for continuous time and for discrete time.
Then, we present three examples of chaotic dynamics: the logistic map, the Lorenz
attractor, and the Smale horseshoe. At the end of the chapter, we pro-
vide some necessary definitions and theorems of functional differential equations
(FDEs).
2.1 Introduction
The theory of dynamical systems grew out of the theory of differential equa-
tions, which in turn began as an attempt to understand and predict the motions that
surround us, such as the orbits of the planets, the vibrations of a string, the ripples on
the surface of a pond, and the forever evolving patterns of the weather. The first two
hundred years of this scientific philosophy, from Newton and Euler to Hamilton and
Maxwell, produced many stunning successes in formulating the ‘rules of the world,’
but only limited results in finding their solutions.
By the end of the 19th century, researchers had realized that many nonlinear dif-
ferential equations did not have explicit solutions. Even the case of three masses
moving under the laws of Newtonian attraction could exhibit very complicated be-
havior and its explicit solution was not possible to obtain (e.g., the motion of the
sun, the earth, and the moon cannot be given explicitly in terms of known func-
tions). Short-term solutions could be given by power series, but these were not use-
ful in determining long-term behavior. The modern theory of nonlinear dynamical
systems began with Poincaré at the end of the 19th century with fundamental ques-
tions concerning the stability and evolution of the solar system. Poincaré shifted
the focus from finding explicit solutions to discovering geometric properties of so-
lutions. He introduced many ideas in specific examples. In particular, he realized
that a deterministic system in which the outside forces are not varying and are not
random can exhibit behavior that is apparently random (i.e., chaotic). Poincaré’s
point of view was enthusiastically adopted and developed by G. D. Birkhoff. He
found many different types of long-term limiting behavior. His work resulted in the
book [4] from which the term ‘dynamical systems’ came. Other people, such as Lya-
punov, Pontryagin, Andronov, Moser, Smale, Peixoto, Kolmogorov, Arnol’d, Sinai,
Lorenz, May, Yorke, Feigenbaum, Ruelle, and Takens, all made important contribu-
tions to the theory of dynamical systems. The field of nonlinear dynamical systems
and especially the study of chaotic systems has been hailed as one of the important
breakthroughs in science in the 20th century. Today, nonlinear dynamical systems
are used to describe a vast variety of scientific and engineering phenomena and have
been applied to a broad spectrum of problems in physics, chemistry, mathematics,
biology, medicine, economics, and various engineering disciplines.
This chapter is a brief review of some concepts and tools related to the subject
of the monograph – chaos suppression, chaos synchronization, and chaotification.
The goal of the chapter is to provide readers with some necessary background on
nonlinear dynamical systems and chaos so as to ease the difficulty when they read
subsequent chapters of this book. Readers interested in the complete theory of dy-
namical systems are recommended to refer to [1, 7, 11, 13, 17].
2.2 Background
Two types of models are extensively studied in the field of dynamical systems: the
continuous-time model and the discrete-time model. Most continuous-time nonlin-
ear dynamical systems are described by a differential equation of the form
ẋ = f (x,t; μ ), (2.1)

where x ∈ Rn is the state, t ∈ R is time, and μ denotes a vector of parameters. By a
solution of (2.1) we mean a map x : I → Rn , t → x(t), defined on some time interval
I ⊆ R, such that

ẋ(t) = f (x(t),t; μ ).
The map x has the geometrical interpretation of a curve in Rn , and (2.1) gives the
tangent vector at each point of the curve, hence the reason for referring to (2.1) as
a vector field. We will refer to the space of dependent variables of (2.1) (i.e., Rn )
as the phase space or state space. One goal of the study of dynamical systems is to
understand the geometry of solution curves in the phase space. It is useful to distin-
guish a solution curve which passes through a particular point in the phase space at
a specific time, i.e., for a solution x(t) with x(t0 ) = x0 . We refer to this as specifying
an initial condition or initial value. This is often included in the expression for a
solution by x(t,t0 , x0 ). In some situations explicitly displaying the initial condition
may be unimportant, in which case we will denote the solution merely as x(t). In
other situations the initial time may be always understood to be a specific value,
say t0 = 0; in this case we would denote the solution as x(t, x0 ). Similarly, it may
be useful to explicitly display the parametric dependence of solutions. In this case
we would write x(t,t0 , x0 ; μ ) or, if we were not interested in the initial condition,
x(t; μ ).
Ordinary differential equations that depend explicitly on time (i.e., ẋ = f (x,t; μ ))
are referred to as nonautonomous or time-dependent ordinary differential equations
or vector fields, and ordinary differential equations that do not depend explicitly
on time (i.e., ẋ = f (x; μ )) are referred to as autonomous or time-independent ordi-
nary differential equations or vector fields. The same terminology may be used in
the same way for discrete-time systems. It should be noted that a nonautonomous
vector field or map can always be made autonomous by redefining time as a new in-
dependent variable. This is done as follows. For a vector field ẋ = f (x,t), by writing
it as

dx/dt = f (x,t)/1, (2.3)

and using the chain rule, we can introduce a new independent variable s so that (2.3)
becomes

dx/ds ≡ ẋ = f (x,t),
dt/ds ≡ t˙ = 1. (2.4)

If we define y = (x,t)T and f˜(y) = ( f (x,t), 1)T , we see that (2.4) becomes

dy/ds = f˜(y), y ∈ Rn+1 .
For the map x(k + 1) = f (x(k), k), if we define y(k) = (x(k), k)T and f˜(y(k)) =
( f (x(k), k), k + 1)T , we get the autonomous system y(k + 1) = f˜(y(k)) on the new
phase space Rn+1 . Therefore, without loss of generality, in the following we consider
autonomous systems of the form

ẋ = f (x), x ∈ Rn , (2.5)

and

x(k + 1) = g(x(k)), x ∈ Rn . (2.6)
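The trick just described is easy to mimic numerically: append time as an extra state variable whose derivative is one. The sketch below is only an illustration (the forced oscillator on the right-hand side is a placeholder, not an example from the book); it wraps a nonautonomous vector field into an autonomous one and integrates the enlarged system with a plain Euler step.

```python
# Sketch: converting a nonautonomous vector field f(x, t) into an autonomous one
# by treating t as an additional state variable with dt/ds = 1.

import math

def f(x, t):                      # placeholder nonautonomous field: x'' + x = cos(2t)
    return [x[1], -x[0] + math.cos(2.0 * t)]

def f_aut(y):                     # autonomous field on R^{n+1}, with y = (x, t)
    x, t = y[:-1], y[-1]
    return f(x, t) + [1.0]

y = [1.0, 0.0, 0.0]               # (x1, x2, t) at s = 0
h = 0.01
for _ in range(1000):             # Euler integration of the enlarged system
    dy = f_aut(y)
    y = [yi + h * di for yi, di in zip(y, dy)]
print("state at t =", y[-1], ":", y[:-1])
```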
2.3 Existence, Uniqueness, Flow, and Dynamical Systems

To see that some conditions on f are needed for solutions of (2.5) to exist and to be
unique, consider first the one-dimensional vector field

ẋ = −1 for x ≥ 0, ẋ = 1 for x < 0.

This vector field on R points to the left when x ≥ 0 and to the right if x < 0. Conse-
quently, there is no solution that satisfies the initial condition x(0) = 0. Indeed, such
a solution must initially decrease since ẋ(0) = −1, but, for all negative values of x,
solutions must increase. This cannot happen. Note further that solutions are never
defined for all time. For example, if x0 > 0, then the solution through x0 is given by
x(t) = x0 − t, but this solution is only valid for −∞ < t < x0 for the same reason as
above.
The problem in this example is that the vector field is not continuous at 0; when-
ever a vector field is discontinuous we face the possibility that nearby vectors may
point in ‘opposing’ directions, thereby causing solutions to halt at these points.
Consider now the equation
ẋ = 3x^{2/3}.
The zero function u(t) ≡ 0 solves it with u(0) = 0, but so does u(t) = t^3; moreover, for every τ ≥ 0 the function
uτ(t) = 0, if t ≤ τ,
uτ(t) = (t − τ)^3, if t > τ,
is also a solution satisfying the initial condition uτ(0) = 0. While the differential
equation in this example is continuous at x0 = 0, the problem arises because 3x^{2/3}
is not differentiable at this point.
From these two examples it is clear that, to ensure the existence and uniqueness
of solutions, certain conditions must be imposed on the function f . In the first ex-
ample, f is not continuous at the point 0, while, in the second example, f fails to be
differentiable at 0. It turns out that the assumption that f is continuously differen-
tiable is sufficient to guarantee both existence and uniqueness of the solution. In fact,
we can furthermore guarantee the existence and uniqueness under a weaker condi-
tion, called the Lipschitz condition, on f . We now state several qualitative theorems
about the solutions of system (2.5) [7].
Theorem 2.1 (Local Existence and Uniqueness). Let U ⊂ Rn be open, x0 ∈ U, and suppose that f : U → Rn satisfies the Lipschitz condition
‖f(y) − f(x)‖ ≤ K‖x − y‖ for all x, y ∈ U,
for some K < ∞. Then, there are a constant c > 0 and a unique solution
x(·, x0) : (−c, c) → U satisfying the differential equation described by (2.5) with
initial condition x(0) = x0.
1 Roughly speaking, a manifold is a set which locally has the structure of Euclidean space. In ap-
plications, manifolds are most often met as m-dimensional surfaces embedded in Rn . If the surface
has no singular points, i.e., the derivative of the function representing the surface has maximal
rank, then by the implicit function theorem it can locally be represented as a graph. The surface is
a Cr manifold if the (local) coordinate charts representing it are Cr .
The local existence theorem becomes global in all cases when we work on com-
pact manifolds2 M instead of open spaces like Rn .
Theorem 2.2 (Global Existence). The differential equation ẋ = f (x), x ∈ M, with
M compact, and f ∈ C1 , has solution curves defined for all t ∈ R.
The local theorem can be extended to show that solutions depend in a ‘nice’
manner on initial conditions.
Theorem 2.3 (Dependence on Initial Value). Let U ⊂ Rn be open and suppose that
f : U → Rn has a Lipschitz constant K. Let y(t), z(t) be solutions of ẋ = f(x) on the
closed interval [t0, t1]. Then, for all t ∈ [t0, t1],
‖y(t) − z(t)‖ ≤ ‖y(t0) − z(t0)‖ exp[K(t − t0)].
If x(t) is a solution of (2.5), then x(t + τ ) is also a solution for any τ ∈ R. So, it
suffices to choose a fixed initial time, say, t0 = 0, which is understood and therefore
often omitted from the notation. If we denote by φt (x) = φ (t, x) the state in Rm
reached by the system at time t starting from x, then the totality of solutions of (2.5)
can be represented by a one-parameter family of maps φ t : U → Rm satisfying
(d/dt)[φ(t, x)] |t=τ = f[φ(τ, x)]
for all x ∈ U and for all τ ∈ I for which the solution is defined. The family of
maps φt (x) = φ (t, x) is called the flow (or the flow map) generated by the vector
field f . The set of points {φ (t, x0 ) : t ∈ I} defines an orbit of (2.5), starting from
a given point x0 . It is a solution curve in the state space, parameterized by time.
The set {[t, φ (t, x0 )] : t ∈ I} is a trajectory of (2.5) and it evolves in the space of
motions. However, in applications, the terms ‘orbit’ and ‘trajectory’ are often used
as synonyms. A simple example of a trajectory in the space of motions R × R2
and the corresponding orbit in the state space R2 are given in Fig. 2.1. Clearly,
the orbit is obtained by projecting the trajectory on to the state space. The flows
generated by vector fields form a very important subset of a more general class of
maps, characterized by the following definition.
Definition 2.1. A flow is a map φ : I ⊂ R × X → X, where X is a metric space, that
is, a space endowed with a distance function, and φ has the following properties: φ(0, x) = x for all x ∈ X, and φ(t + s, x) = φ(t, φ(s, x)) whenever both sides are defined.
2 A compact manifold is a manifold that is compact as a topological space, such as the circle
(the only one-dimensional compact manifold) and the n-dimensional sphere and torus. For many
problems in topology and geometry, it is convenient to study compact manifolds because of their
‘nice’ behavior. Among the properties making compact manifolds ‘nice’ are the facts that they
can be covered by finitely many coordinate charts and that any continuous real-valued function is
bounded on a compact manifold.
Fig. 2.1 A damped oscillator in R2: (a) space of motions; (b) state space
Consider, for example, the nonautonomous equation
ẋ = −x + t,  (2.7)
whose general solution is x(t) = ce^{−t} + t − 1 (c ∈ R),
from which it is clear that all solutions asymptotically approach the solution t − 1 as
t → ∞. The frozen time or ‘instantaneous’ fixed points for (2.7) are given by
x = t.
At a fixed t, this is the unique point where the vector field is zero. However, x = t is
not a solution of (2.7). This is different from the case of an autonomous vector field
where a fixed point is a solution of the vector field.
Definition 2.4. Consider a nonautonomous system ẋ = f(t, x) whose right-hand side satisfies
f(t, 0) = 0, ∀t ≥ 0.
Then x = 0 is an equilibrium point (at the origin) of the system.
Consider again the autonomous system
ẋ = f(x)  (2.9)
and the derived flow φ . A solution φ (t, x∗ ) of system (2.9) through a point x∗ is said
to be periodic with period T > 0 if φ (T, x∗ ) = x∗ . The set L0 = {φ (t, x∗ ) : t ∈ [0, T )}
is a closed curve in the state space and is called a periodic orbit or cycle. T is called
the period of the cycle and measures its time length. It should be emphasized that
isolated periodic solutions are possible only for nonlinear differential equations.
Moreover, a limit cycle4 can be structurally stable in the sense that, if it exists for
a given system of differential equations, it will persist under a slight perturbation
of the system in the parameter space. On the contrary, linear systems of differential
equations in Rm (m ≥ 2) may have a continuum of periodic solutions characterized
by a pair of purely imaginary eigenvalues (the case of a center, which will be in-
troduced later). But, these periodic solutions can be destroyed by arbitrarily small
perturbations of the coefficients. In other words, these periodic solutions are not
structurally stable.
For the discrete-time system xk+1 = G(xk), an n-periodic orbit is defined as
the set of points L0 = {x0, x1, . . . , xn−1} with xi ≠ xj (i ≠ j) such that
xi+1 = G(xi), i = 0, 1, . . . , n − 2,  and  x0 = G(xn−1).
It should be noted that each point in an n-periodic orbit is an n-periodic point since,
for k = 0, . . . , n − 1, Gn(xk) = xk.
4 A limit cycle is an isolated periodic solution of an autonomous system. The points on the limit
cycle constitute the limit set, which is the set of points in the state space that a trajectory repeatedly
visits. A limit set is only defined for discrete-time or continuous-time autonomous systems.
Fig. 2.2 Periodic orbits in: (a) a continuous-time system; (b) a discrete-time system
circle. For any jump point there exist other jump points arbitrarily close to it, but no
point is revisited in finite time. This means that the map is topologically transitive
(see Definition 2.12). However, points that are close to each other will remain close
under the iteration.
ẍ + ω1²x = 0,  ÿ + ω2²y = 0,
where x, y ∈ R and ω1 and ω2 are real constants. The above system can be rewritten
in the form of first-order linear differential equations in R4 :
ẋ1 = −ω1 x2,
ẋ2 = ω1 x1,
ẏ1 = −ω2 y2,
ẏ2 = ω2 y1,        (2.11)
In polar coordinates, (2.11) takes the form
θ̇1 = ω1,  ṙ1 = 0,  θ̇2 = ω2,  ṙ2 = 0,   (2.12)
where θi and ri (i = 1, 2) denote the angle and the radius, respectively. We can see
that the above equations describe a particle rotating on a two-dimensional torus for
a given pair (r1 , r2 ), ri > 0 (i = 1, 2) (see Fig. 2.3). There are two basic possibilities
for the motion:
(i) ω1 /ω2 is a rational number, in which case there exists a continuum of periodic
orbits of period q;
(ii) ω1 /ω2 is an irrational number, in which case the orbit starting from any initial
point on the torus wanders on it, getting arbitrarily near any other point, with-
out ever returning to that exact initial point. The flow generated by (2.12) is
topologically transitive on the torus (see Fig. 2.4).
In both cases, points that are close to each other remain close under the action of the
flow.
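The following short Python sketch (not from the book) illustrates the two cases numerically by sampling the angular coordinates of (2.12); the particular frequency pairs are arbitrary choices.

# A small numerical sketch (not from the book) of the torus flow (2.12):
# theta1(t) = omega1*t, theta2(t) = omega2*t (mod 2*pi).
# With a rational frequency ratio the orbit closes; with an irrational
# ratio it never returns exactly to its starting point.
import numpy as np

def torus_orbit(omega1, omega2, t_max=200.0, dt=0.01):
    t = np.arange(0.0, t_max, dt)
    theta1 = (omega1 * t) % (2 * np.pi)
    theta2 = (omega2 * t) % (2 * np.pi)
    return theta1, theta2

# rational ratio 2/3: periodic orbit; irrational ratio 1/sqrt(2): quasiperiodic
periodic = torus_orbit(2.0, 3.0)
quasiperiodic = torus_orbit(1.0, np.sqrt(2.0))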
Fig. 2.3 Angular coordinates θ1 and θ2 on the torus: (a), (b)
Σ := {(x, θ ) ∈ Rn × S1 : θ = θ0 }.
Every T seconds, the trajectory of (2.14) intersects Σ (see Fig. 2.5). The resulting
map PN : Σ → Σ (Rn → Rn ) is defined by
Remark 2.3.
(i) PA is defined locally, i.e., in a neighborhood of x0 . Unlike the nonautonomous
case, it is not guaranteed that the trajectory emanating from any point on Σ will
intersect Σ .
(ii) For a Euclidean state space, the point PA (x) is not the first point where φt (x)
intersects Σ ; φt (x) must pass through Σ at least once before returning to V . This
is in contrast with the cylindrical state space in Fig. 2.5.
(iii) PA is a diffeomorphism and is, therefore, invertible and differentiable [12].
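As an illustration of how a Poincaré map is computed in practice, here is a minimal Python sketch (not from the book) of the stroboscopic map PN for a T-periodically forced oscillator; the forced Duffing-type right-hand side and all parameter values are illustrative assumptions, not taken from the text.

# A minimal sketch (not from the book) of a stroboscopic Poincare map P_N for a
# T-periodically forced oscillator; the forced system used here is only an
# illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi  # forcing period

def rhs(t, state):
    x, v = state
    return [v, -0.2 * v - x**3 + 0.3 * np.cos(t)]

def poincare_map(state, n_periods=1):
    # Integrate over n_periods forcing periods and sample the state once per period.
    sol = solve_ivp(rhs, (0.0, n_periods * T), state,
                    t_eval=np.arange(1, n_periods + 1) * T, rtol=1e-9, atol=1e-12)
    return sol.y.T  # each row is one intersection with the section theta = theta0

points = poincare_map([1.0, 0.0], n_periods=200)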
Definition 2.6.
(i) (Continuous time) A set S ⊂ Rn is said to be invariant under the vector field ẋ = f(x) if for any x0 ∈ S we have φ(t, x0) ∈ S for all t ∈ R;
(ii) (Discrete time) S is said to be invariant under the map xk+1 = g(xk) if for any
x0 ∈ S we have gn(x0) ∈ S for all n ∈ Z.
If we restrict ourselves to positive time (i.e., t ≥ 0, and n ≥ 0), then we refer to S as
a positively invariant set, while, for negative time, as a negatively invariant set.
The definition means that trajectories starting in the invariant set remain in the
invariant set, for all of their future and all of their past.
Definition 2.7. An invariant set S ⊂ Rn is said to be a Cr (r ≥ 1) invariant manifold
if S has the structure of a Cr differentiable manifold. Similarly, a positively (nega-
tively) invariant set S ⊂ Rn is said to be a Cr (r ≥ 1) positively (negatively) invariant
manifold if S has the structure of a Cr differentiable manifold.
Definition 2.8. Let φ(t, x) be a flow on a metric space M. Then, a point y ∈ M is
called an ω-limit point of x ∈ M for φ(t, x) if there exists a sequence {ti} increasing to +∞ such that
lim_{i→∞} d(φ(ti, x), y) = 0.
The set of all ω -limit points of x for φ (t, x) is called the ω -limit set and is denoted
by ω (x).
The definitions of α -limit point and α -limit set of a point x ∈ M are obtained just
by taking sequences ti decreasing in i to −∞. The α -limit set of x is denoted as α (x).
Definition 2.9. A point x0 is called nonwandering if the following condition holds.
Flows: for any neighborhood U of x0 and any T > 0, there exists some |t| > T such that
φ(t, U) ∩ U ≠ ∅;
Maps: for any neighborhood U of x0 and any N > 0, there exists some |n| > N such that
gn(U) ∩ U ≠ ∅.
The set of all nonwandering points of a flow or map is called the nonwandering set
of that particular flow or map.
Definition 2.10. A closed invariant set A ⊂ Rn is called an attracting set if there is
some neighborhood U of A such that
Flows: for any t ≥ 0, φ(t, U) ⊂ U and ∩_{t>0} φ(t, U) = A;
Maps: for any n ≥ 0, gn(U) ⊂ U and ∩_{n>0} gn(U) = A.
Definition 2.11. The basin of attraction of an attracting set A is given by
Flows: ∪_{t≤0} φ(t, U);
Maps: ∪_{n≤0} gn(U);
where U is any open set satisfying Definition 2.10.
Definition 2.12. A closed invariant set A is said to be topologically transitive if, for
any two open sets U, V ⊂ A,
Flows: there exists a t ∈ R such that φ(t, U) ∩ V ≠ ∅;
Maps: there exists an n ∈ Z such that gn(U) ∩ V ≠ ∅.
2.6 Continuous-Time Systems in the Plane
In this section and the next two sections we will discuss the types of equilibrium
points of planar systems of continuous time and discrete time, respectively. In ap-
plications, we very often encounter linear systems described by two first-order dif-
ferential equations (or a differential equation of second order), either because the
underlying model is linear or because it is linearized around an equilibrium point.
Systems in two-dimensional space are particularly easy to discuss in full detail and
give rise to a number of interesting basic dynamic configurations. Moreover, in prac-
tice, it is very difficult or impossible to determine the exact values of the eigenvalues
and eigenvectors for matrices of order greater than two. Thus, one can draw inspi-
ration from the discussion about planar systems when studying high-dimensional
systems.
The general form of a continuous-time planar system can be written as
ẋ = a11 x + a12 y,
ẏ = a21 x + a22 y,    (2.15)
or, in matrix form, (ẋ, ẏ)T = A (x, y)T,
where x, y ∈ R and aij are real constants. If det(A) ≠ 0, the unique equilibrium, for
which ẋ = ẏ = 0, is x = y = 0. The characteristic equation is
λ² − tr(A)λ + det(A) = 0,
with eigenvalues λ1,2 = [tr(A) ± √Δ]/2, where Δ := tr(A)² − 4 det(A).
Fig. 2.7 Phase portraits of the planar system (2.15) in the (x, y) plane: panels (a)–(f)
Case 1: Δ > 0. The eigenvalues are real and distinct and the general solution takes the form
x(t) = c1 e^{λ1 t} u1^(1) + c2 e^{λ2 t} u2^(1),
y(t) = c1 e^{λ1 t} u1^(2) + c2 e^{λ2 t} u2^(2),
where u1 = (u1^(1), u1^(2))T and u2 = (u2^(1), u2^(2))T are eigenvectors corresponding
to the eigenvalues λ1 and λ2, respectively. We have three basic subcases corre-
sponding to Fig. 2.7 (a), (b), and (e), respectively (eigenvalues are plotted in the
complex plane).
(i) tr(A) < 0, det(A) > 0. In this case, eigenvalues and eigenvectors are real
and both eigenvalues are negative (say, 0 > λ1 > λ2 ). The two-dimensional
Fig. 2.8 Equilibrium types in the plane with repeated eigenvalue: (a) bicritical node; (b) Jordan node
state space coincides with the stable eigenspace.5 The equilibrium is called
a stable node, and the term ‘node’ refers to the characteristic shape of the
ensemble of orbits around the equilibrium.
(ii) tr(A) > 0, det(A) > 0. In this case, eigenvalues and eigenvectors are real, both
eigenvalues are positive (say, λ1 > λ2 > 0), and the state space coincides with
the unstable eigenspace. The equilibrium is called an unstable node.
(iii) det(A) < 0. In this case, Δ > 0 independent of the sign of the trace of A. One
eigenvalue is positive, and the other is negative (say, λ1 > 0 > λ2 ). There are,
then, a one-dimensional stable eigenspace and a one-dimensional unstable
eigenspace and the equilibrium is known as a saddle point.
Case 2: Δ < 0. The eigenvalues and eigenvectors are complex conjugate pairs and
we have
(λ1, λ2) = (λ, λ̄) = α ± iβ
with
α = tr(A)/2,  β = √(−Δ)/2.
The solutions have the form
x(t) = Ce^{αt} cos(βt + φ),
y(t) = Ce^{αt} sin(βt + φ),
where C and φ are constants determined by the initial conditions and β/(2π) gives
the number of oscillations per time unit. The amplitude or size of the oscillations depends on
the initial condition and e^{αt} (more on this point below). There are three subcases
depending on the sign of tr(A) and therefore of Re(λ) = α; see the corresponding
illustrations in Figs. 2.7 (c), (d), and (f), respectively.
(i) tr(A) < 0, Re(λ ) = α < 0. The oscillations are dampened and the system
converges to the equilibrium. The equilibrium point is known as a focus or,
sometimes, a vortex, due to the characteristic shape of the orbits around
the equilibrium. In this case the focus or vortex is stable and the stable
eigenspace coincides with the state space.
(ii) tr(A) > 0, Re(λ ) = α > 0. The amplitude of the oscillations gets larger with
time and the system diverges from the equilibrium. The unstable eigenspace
coincides with the state space and the equilibrium point is called an unstable
focus or vortex.
(iii) tr(A) = 0, Re(λ ) = α = 0. In this special case we have a pair of purely imag-
inary eigenvalues. Orbits neither converge to, nor diverge from, the equilib-
rium point, but they oscillate regularly around it with a constant amplitude
that depends only on initial conditions and the equilibrium point is called a
center.
Case 3: Δ = 0. The eigenvalues are real and equal to each other, λ1 = λ2 = λ.
In this case, if A ≠ λI, only one eigenvector can be determined, say u =
(u^(1), u^(2))T, defining a single straight line through the origin. We can write the
general solution as
x(t) = (c1 u^(1) + c2 v^(1)) e^{λt} + t c2 u^(1) e^{λt},
y(t) = (c1 u^(2) + c2 v^(2)) e^{λt} + t c2 u^(2) e^{λt},
where v = (v^(1), v^(2))T is a generalized eigenvector, with
x(0) = c1 u^(1) + c2 v^(1),
y(0) = c1 u^(2) + c2 v^(2).
The equilibrium type is again a node, sometimes called a Jordan node. An ex-
ample of this type is provided in Fig. 2.8 (b), where it is obvious that there is
a single eigenvector. If A = λ I the equilibrium is still a node, sometimes called
a bicritical node. However, all half-lines from the origin are solutions, giving a
star-shaped form (see Fig. 2.8 (a)).
Fig. 2.9 provides a very useful geometric representation in the (tr(A), det(A))
plane of the various cases discussed above. Quadrants III and IV of the plane cor-
respond to saddle points, quadrant II to stable nodes and foci, and quadrant I to
unstable nodes and foci. The parabola Δ = 0, i.e., det(A) = tr(A)²/4, divides quadrants I and II into nodes and
foci (the former below the parabola and the latter above it). The positive part of the
det(A) axis corresponds to centers.
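The case distinctions above translate directly into a small classification routine. The following Python sketch (not from the book) is a minimal version; the sample matrix is an arbitrary choice.

# A small sketch (not from the book) classifying the equilibrium of the planar
# linear system (2.15) from tr(A) and det(A), following Cases 1-3 above.
import numpy as np

def classify_planar_equilibrium(A, tol=1e-12):
    tr, det = np.trace(A), np.linalg.det(A)
    delta = tr**2 - 4 * det
    if abs(det) < tol:
        return "degenerate (det(A) = 0)"
    if det < 0:
        return "saddle point"
    if delta > tol:                      # real, distinct eigenvalues
        return "stable node" if tr < 0 else "unstable node"
    if delta < -tol:                     # complex conjugate eigenvalues
        if abs(tr) < tol:
            return "center"
        return "stable focus" if tr < 0 else "unstable focus"
    return "node with repeated eigenvalue (Jordan or bicritical)"

print(classify_planar_equilibrium(np.array([[0.0, 1.0], [-1.0, -0.5]])))  # stable focus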
The analysis of systems with n > 2 variables can be developed along the same
lines although geometric insight will fail when the dimension of the state space is
larger than three. In order to give the reader a broad idea of common situations we
Fig. 2.9 The (tr(A), det(A)) plane
depict sample orbits of three-dimensional systems in Fig. 2.10, which indicates the
corresponding eigenvalues in the complex plane. The system in Fig. 2.10 (a) has two
positive real eigenvalues and one negative real eigenvalue. The equilibrium point is
an unstable saddle. In this case the unstable eigenspace is the plane spanned by the
eigenvectors associated with the positive real eigenvalues. All orbits off the stable
eigenspace eventually approach the unstable eigenspace and are captured by the
expanding dynamics. The only exceptions are the orbits starting on the stable
eigenspace (defined by the eigenvector associated with the negative eigenvalue),
which converge to the equilibrium point. When A is an m × m
matrix, we divide the eigenvectors (or, in the complex case, the vectors equal to the
real and imaginary parts of them) into three groups, according to whether the cor-
responding eigenvalues have negative, positive, or zero real parts. Then, the subsets
of the state space spanned (or generated) by each group of vectors are known as the
stable, unstable, and center eigenspaces, respectively, and denoted by E s , E u , and
E c . Notice that the term saddle in Rm refers to all cases in which there exist some
eigenvalues with positive and some with negative real parts. We use the term saddle
node when eigenvalues are all real, saddle focus when some of the eigenvalues are
complex. An example of the latter is presented in Fig. 2.10 (b). The two real vectors
associated with the complex conjugate pair of eigenvalues, with negative real parts,
span the stable eigenspace. Orbits approach asymptotically the unstable eigenspace
defined by the eigenvector associated with the positive eigenvalue, along which the
dynamics is explosive.
Fig. 2.10 Continuous-time dynamics in R3: (a) saddle node; (b) saddle focus
2.7 General Solutions of Discrete-Time Linear Systems
(κj, κj+1) = (κj, κ̄j) = σj ± iθj,
(vj, vj+1) = (vj, v̄j) = pj ± iqj,
x(n) = ∑_{j=1}^{h} ∑_{l=0}^{nj−1} kjl n^l κj^n,
We assume that the matrix I − B is nonsingular. Thus, the origin is the unique equi-
librium point of (2.20). The characteristic equation is analogous to the continuous
case as well,
κ² − tr(B)κ + det(B) = 0,
and the eigenvalues are given by
κ1,2 = (1/2)[tr(B) ± √(tr(B)² − 4 det(B))].
We also assume that the equilibria of (2.20) are nondegenerate, i.e., |κ1|, |κ2| ≠ 1.
We will discuss the dynamics of the discrete-time system (2.20) for the following
three cases.
Case 1: Δ > 0. The eigenvalues are real and solutions take the form
x(n) = c1 κ1^n v1^(1) + c2 κ2^n v2^(1),
y(n) = c1 κ1^n v1^(2) + c2 κ2^n v2^(2).
(i) If |κ1 | < 1 and |κ2 | < 1 the fixed point is a stable node. This means that
solutions are sequences of points approaching the equilibrium as n → +∞.
If κ1 , κ2 > 0 the approach is monotonic; otherwise, there are improper os-
Fig. 2.11 Phase diagrams for real eigenvalues: (a) and (c) stable nodes; (b) and (d) saddle points
cillations6 (see Figs. 2.11 (a) and (c), respectively). In this case, the stable
eigenspace coincides with the state space.
(ii) If |κ1 | > 1 and |κ2 | > 1 the fixed point is an unstable node. In this case,
solutions are sequences of points approaching equilibrium as n → −∞. If
κ1 , κ2 > 0 the approach is monotonic; otherwise, there are improper oscil-
lations (as in Figs. 2.11 (a) and (c), respectively, but arrows point in the
opposite direction and the time order of points is reversed). In this case, the
unstable eigenspace coincides with the state space.
(iii) If |κ1 | > 1 and |κ2 | < 1 the fixed point is a saddle point. No sequences of
points approach the equilibrium for n → ±∞ except for those originating
from points on the eigenvectors associated with κ2 . Again, if κ1 , κ2 > 0
orbits move monotonically (see Fig. 2.11 (b)); otherwise they oscillate im-
properly (see Fig. 2.11 (d)). The stable and unstable eigenspaces are one
dimensional.
Case 2: Δ < 0. In this case, det(B) > 0. Eigenvalues are a complex conjugate pair
given by
(κ1 , κ2 ) = (κ , κ̄ ) = σ ± iθ
and solutions are sequences of points situated on a spiral whose amplitude in-
creases or decreases in time according to the factor rn , where
r = |σ ± iθ| = √(σ² + θ²) = √det(B)
is the modulus of the complex eigenvalue pair. Solutions are of the form
x(n) = Crn cos(ω n + φ ),
y(n) = Crn sin(ω n + φ ).
extreme time indices) can also have fluctuating behavior, called improper oscillations, owing to the
fact that if their eigenvalue β < 0, β n will be positive or negative according to whether n is even or
odd. The term improper refers to the fact that in this case oscillations of variables have a ‘kinky’
form that does not properly describe the smoother ups and downs of real variables.
Fig. 2.12 Phase diagrams for complex eigenvalues: (a) a stable focus; (b) periodic cycles; (c) a quasiperiodic solution
(tr(B), det(B)) plane, defining a triangle. Points inside the triangle correspond to
stable combinations of the trace and determinant of B.7 The parabola defined by
tr(B)² = 4 det(B)
divides the plane into two regions corresponding to real eigenvalues (below the
parabola) and complex eigenvalues (above the parabola). Combinations of trace and
determinant above the parabola but in the triangle lead to stable foci, combinations
below the parabola but in the triangle are stable nodes. All other combinations lead
to unstable equilibria.
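For a quick numerical check, the following Python sketch (not from the book) tests stability of the origin of the planar map both from the eigenvalue moduli and from the trace–determinant conditions defining the triangle (cf. footnote 7); the sample matrix B is an arbitrary choice.

# A small sketch (not from the book): testing whether the fixed point of the
# planar map x(n+1) = B x(n) is stable, both directly from the eigenvalue
# moduli and via the trace-determinant triangle referred to above.
import numpy as np

def stable_by_eigenvalues(B):
    return np.all(np.abs(np.linalg.eigvals(B)) < 1.0)

def stable_by_triangle(B):
    tr, det = np.trace(B), np.linalg.det(B)
    return (1.0 + tr + det > 0.0) and (1.0 - tr + det > 0.0) and (det < 1.0)

B = np.array([[0.5, 0.3], [-0.2, 0.4]])
assert stable_by_eigenvalues(B) == stable_by_triangle(B)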
Fig. 2.14 is an example of a system in R3 . There are a complex conjugate pair
with modulus less than one, and one dominant real eigenvalue greater than one. The
equilibrium point is a saddle focus.
Fig. 2.13 The stability region (triangle) in the (tr(B), det(B)) plane
7 In Case 2, if 1 + tr(B) + det(B) = 0 while (ii) and (iii) hold, one eigenvalue is equal to −1; if
1 − tr(B) + det(B) = 0 while (i) and (iii) hold, one eigenvalue is equal to +1; and if det(B) = 1
while (i) and (ii) hold, the two eigenvalues are a complex conjugate pair with modulus equal to 1.
2.9 Stabilities of Trajectories I: The Lyapunov First Method
Stability theory plays a central role in systems theory and engineering. The so-called
Lyapunov first method or Lyapunov indirect method is used to study the stability of
a system’s trajectories through calculating the eigenvalues of a linearized system at
the objective trajectories. This means that the Lyapunov first method is essentially a
local method which can only be used in a neighborhood of the objective trajectory.
Let x̄(t) be any solution of (2.5). Roughly speaking, x̄(t) is stable if solutions starting
‘close’ to x̄(t) at a given time remain close to x̄(t) for all later times. It is asymp-
totically stable if nearby solutions not only stay close, but also converge to x̄(t) as
t → ∞.
Definition 2.14 (Lyapunov Stability). x̄(t) is said to be stable (or Lyapunov stable)
if, for any given ε > 0, there exists a δ = δ(ε) > 0 such that, for any other solution
y(t) of (2.5) satisfying ‖x̄(t0) − y(t0)‖ < δ, we have ‖x̄(t) − y(t)‖ < ε for t > t0,
t0 ∈ R.
A solution which is not stable is said to be unstable.
Definition 2.15 (Asymptotic Stability). x̄(t) is said to be asymptotically stable if it
is Lyapunov stable and, for any other solution y(t) of (2.5), there exists a constant
b > 0 such that, if ‖x̄(t0) − y(t0)‖ < b, then lim_{t→∞} ‖x̄(t) − y(t)‖ = 0.
A new stability definition which is different from Lyapunov’s definitions is given
as follows.
Definition 2.16. An orbit generated by system ẋ = f (x) (x ∈ Rn ), with initial condi-
tion x0 on a compact, φ -invariant subset A of the state space (i.e., φ (A) ⊂ A), is said
to be orbitally stable (asymptotically orbitally stable) if the invariant set
Γ = {φ (t, x0 ) : x0 ∈ A, t ≥ 0}
is a stable (asymptotically stable) invariant set. When the solution of interest is an equilibrium point x̄, the corresponding definitions are as follows.
Definition 2.17. The equilibrium point x̄ is Lyapunov stable (or, simply, stable) if,
for every ε > 0, there exists δ(ε) > 0 such that ‖x(0) − x̄‖ < δ(ε) implies ‖x(t) − x̄‖ < ε for all t ≥ 0.
Fig. 2.15 (a) Lyapunov stability; (b) asymptotic stability
nearby solutions. Also, these definitions are for autonomous systems, since in the
nonautonomous case it may be that δ and b depend explicitly on t0 .
In order to determine the stability of x̄(t) we must understand the nature of solu-
tions near x̄(t). Let
x(t) = x̄(t) + y.  (2.22)
Substituting (2.22) into (2.5) and performing a Taylor expansion about x̄(t) gives
ẋ = dx̄(t)/dt + ẏ = f(x̄(t)) + Df(x̄(t))y + o(‖y‖²),  (2.23)
where Df denotes the Jacobian matrix of f. Using the fact that dx̄(t)/dt = f(x̄(t)), (2.23) becomes
ẏ = Df(x̄(t))y + o(‖y‖²).  (2.24)
If x̄ is an equilibrium point, the associated linear system ẏ = Df(x̄)y has the solution
y(t) = e^{Df(x̄)t} y0.
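In practice the indirect method amounts to computing the Jacobian at the equilibrium and inspecting the real parts of its eigenvalues. The following Python sketch (not from the book) does this with a finite-difference Jacobian; the planar vector field is a hypothetical example.

# A minimal sketch (not from the book) of the Lyapunov first (indirect) method:
# linearize at an equilibrium and inspect the real parts of the Jacobian's
# eigenvalues.  The vector field and equilibrium below are illustrative choices.
import numpy as np

def f(x):
    # a simple planar vector field with an equilibrium at the origin
    return np.array([-x[0] + x[1]**2, -2.0 * x[1] + x[0] * x[1]])

def numerical_jacobian(f, x, h=1e-6):
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2 * h)
    return J

J = numerical_jacobian(f, np.zeros(2))
print(np.linalg.eigvals(J))  # all real parts negative => asymptotically stable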
mapping orbits of the nonlinear system (2.5) to those of the linear system (2.25).
The map h preserves the sense of orbits and can also be chosen so as to preserve
parameterization by time.
Fig. 2.19 Stable and unstable eigenspaces and manifolds in R3
ξ̇ = A(t)ξ , (2.27)
where the matrix A(t) := D f (x∗ (t)) has periodic coefficients of period T , so that
A(t) = A(t + T ). Solutions of (2.27) take the general form of
B(t)eλ t ,
where the vector B(t) is periodic in time with period T , B(t) = B(t + T ). Denote the
fundamental matrix of (2.27) as Φ (t), that is, the m × m time-varying matrix whose
m columns are solutions of (2.27). Thus, Φ (t) can be written as
Φ(t) = Z(t)e^{tR},
Φ(T) = e^{TR}.
Therefore, the dynamics of orbits near the cycle Γ are determined by the eigenvalues
(λ1 , . . . , λm ) of the matrix eT R which are uniquely determined by (2.27).9 The λ s are
called characteristic (Floquet) multipliers of (2.27), whereas the eigenvalues of R,
(κ1 , . . . , κm ), are called characteristic (Floquet) exponents.
One of the roots (multipliers), say λ1 , is always equal to 1, so that one of the
characteristic exponents, say κ1 , is always equal to 0, which implies that one of
the solutions of (2.27) must have the form B(t) = B(t + T ). This can be verified
by putting B(t) = ẋ∗(t) and differentiating it with respect to time. The presence of
a characteristic multiplier equal to 1 (a characteristic exponent equal to 0) can be
interpreted as follows: if, starting from a point on the periodic orbit Γ, the system is
perturbed by a small displacement in the direction of the flow, it will remain on Γ.
What happens for small, random displacements off Γ depends only on the remaining
m − 1 multipliers λj (j = 2, . . . , m) (or the remaining exponents κj, j = 2, . . . , m),
provided none of the remaining multipliers has modulus equal to 1 (none of the remaining
exponents has zero real part). In particular, we have (see also the numerical sketch after the list below)
(i) If all the characteristic multipliers λ j ( j = 2, . . . , m) satisfy the conditions
|λ j | < 1, then the periodic orbit is asymptotically (in fact, exponentially) or-
bitally stable.
(ii) If for at least one of the multipliers, say λk , |λk | > 1, then the periodic orbit is
unstable.
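The multipliers can be computed numerically by integrating (2.27) over one period with the columns of the identity matrix as initial conditions, which yields the monodromy matrix Φ(T). The Python sketch below (not from the book) does this for an illustrative T-periodic matrix A(t).

# A minimal sketch (not from the book) of computing characteristic (Floquet)
# multipliers: integrate the periodic linear system (2.27) over one period T,
# column by column, to build the monodromy matrix Phi(T) = e^{TR}, then take
# its eigenvalues.  The T-periodic matrix A(t) below is an illustrative choice.
import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi

def A(t):
    return np.array([[0.0, 1.0],
                     [-(1.0 + 0.2 * np.cos(t)), -0.1]])

def variational_rhs(t, xi):
    return A(t) @ xi

monodromy = np.column_stack([
    solve_ivp(variational_rhs, (0.0, T), e, rtol=1e-10, atol=1e-12).y[:, -1]
    for e in np.eye(2)
])
multipliers = np.linalg.eigvals(monodromy)
print(multipliers, np.all(np.abs(multipliers) < 1.0))  # stable if all inside the unit circle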
2.10 Stabilities of Trajectories II: The Lyapunov Second Method
The so-called second or direct method of Lyapunov is one of the greatest landmarks
in the theory of dynamical systems and has proved to be an immensely fruitful tool
for analysis. The basic idea of the method is as follows. Suppose that there is a vector
field in the plane with a fixed point x̄, and we want to determine whether it is stable
or not. Roughly speaking, according to our previous definitions of stability it would
be sufficient to find a neighborhood U of x̄ for which orbits starting in U remain
in U for all positive time (for the moment we do not distinguish between stability
and asymptotic stability). This condition would be satisfied if we could show that
the vector field is either tangent to the boundary of U or pointing inward toward x̄
(see Fig. 2.20). This situation should remain true even as we shrink U down to x̄.
Now, Lyapunov's method gives us a way of making this precise; we will show this
for vector fields in the plane and then generalize our results to Rn.
9 The matrix e^{TR} itself is uniquely determined only up to a similarity transformation, that is, we can
substitute e^{TR} with P^{-1} e^{TR} P where P is a nonsingular m × m matrix. This transformation leaves
eigenvalues unchanged.
Fig. 2.21 A level set of V; ∇V denotes the gradient vector of V at various points on the boundary
V̇(x, y) = ∇V(x, y) · (ẋ, ẏ),
where the ‘·’ represents the usual vector scalar product. (This is simply the derivative
of V along orbits of (2.28), and is sometimes referred to as the orbital derivative.)
We now state the general theorem which makes these ideas precise.
Theorem 2.6. Consider the following vector field:
ẋ = f (x), x ∈ Rn . (2.29)
φ (t, x0 ) → M as t → ∞.
According to Theorem 2.7, if a Lyapunov function V (x) can be found such that
V̇ (x) ≤ 0 for x ∈ N, among the sets of points with forward orbits in N there exist
sets of points defined by
Vk = {x : V(x) ≤ k}
(k is a finite and positive scalar) which lie entirely in N. Since V̇ ≤ 0, the sets Vk are
invariant in the sense that no orbit starting in a Vk can ever move outside of it. If,
in addition, it could be shown that the fixed point x̄ = 0 is the largest (or, for that
matter, the only) invariant subset of E, Theorem 2.7 would guarantee its asymptotic
stability.
The direct method can also be extended to discrete-time systems. We only state
a result analogous to Theorem 2.6 in the following. A discrete-time version of the
invariance principle of LaSalle will be introduced in Chap. 5.
Theorem 2.8. Consider the system described by the difference equation given in
(2.21). Let x̄ = 0 again be an isolated equilibrium point at the origin. If there exists
a C1 function V : N → R, defined on some neighborhood N ⊂ Rm of x̄ = 0, such
that
(i) V(0) = 0,
(ii) V(x) > 0 in N − {0},
(iii) ΔV(xn) := V[G(xn)] − V(xn) ≤ 0 in N − {0},
then x̄ = 0 is stable (in the sense of Lyapunov). Moreover, if
(iv) ΔV(x) < 0 in N − {0},
then x̄ = 0 is asymptotically stable.
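As a simple numerical illustration of Theorem 2.8, the following Python sketch (not from the book) checks conditions (i)–(iv) for a sample planar map G and the candidate function V(x) = x1² + x2² on a random sample of points near the origin; both G and V are illustrative choices.

# A minimal sketch (not from the book) of checking conditions (i)-(iv) of
# Theorem 2.8 numerically for a sample map G and the candidate Lyapunov
# function V(x) = x1^2 + x2^2 near the origin.
import numpy as np

def G(x):
    # an illustrative planar map with a fixed point at the origin
    return np.array([0.5 * x[0] + 0.1 * x[1]**2, 0.4 * x[1] - 0.1 * x[0] * x[1]])

def V(x):
    return float(x @ x)

samples = np.random.uniform(-0.5, 0.5, size=(10000, 2))
delta_V = np.array([V(G(x)) - V(x) for x in samples if V(x) > 0.0])
print(np.all(delta_V < 0.0))  # True on this neighborhood => asymptotic stability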
2.11 Chaotic Sets and Chaotic Attractors
Invariant sets, attracting sets, and attractors with a more complicated structure than
periodic or quasiperiodic sets are called chaotic. A dynamical system (discrete-time
or continuous-time) is called chaotic if its typical orbits are aperiodic, bounded, and
such that nearby orbits separate fast in time. Chaotic orbits never converge to a stable
fixed or periodic point, but exhibit sustained instability, while remaining forever in
a bounded region of the state space.
Definition 2.20. A flow φ (a continuous map G) on a metric space M is said to
possess sensitive dependence on initial conditions on M if there exists a real number
δ > 0 such that for all x ∈ M and for all ε > 0, there exist y ∈ M (y ≠ x) and T > 0 (an
integer n > 0) such that d(x, y) < ε and d[φ(T, x), φ(T, y)] > δ (d[Gn(x), Gn(y)] >
δ).
Consider, for instance, the scalar equation ẋ = ax with a > 0, which is linear and whose
solution is φ(t, x0) = x0 e^{at}. Therefore, the flow map φ
is topologically transitive on the open, unbounded (and therefore noncompact)
invariant sets (−∞, 0) and (0, ∞). Also, for any two points x1, x2 ∈ R with x1 ≠ x2
we have
|φ(t, x1) − φ(t, x2)| = e^{at} |x1 − x2|
and φ has sensitive dependence on initial conditions on R. However, the orbits
generated by φ are not chaotic.
(iv) This definition refers to a ‘chaotic flow (or map) on a set A’ or, for short, a
‘chaotic set A.’ It does not imply that all orbits of a chaotic flow (or map) on A
are chaotic. In fact, there are many nonchaotic orbits on chaotic sets, in particu-
lar, many unstable periodic orbits. They are so important that some researchers
add a third condition for chaos, that periodic orbits are dense on A [5]. This is
an interesting property and it is automatically satisfied if the chaotic invariant
set is hyperbolic [17].
(v) Two quite general results can be used to confirm the close relationship between
chaos, as characterized in Definition 2.21, and dense periodic sets. The first re-
sult [3] states that for any continuous map on a metric space, transitivity and the
presence of a dense set of periodic orbits imply sensitive dependence on initial
conditions, that is, chaos. The second result [16] states that for any continuous
map on an interval of R, transitivity alone implies the presence of a dense set
of periodic orbits and, therefore, in view of the first result, it implies sensitive
dependence on initial conditions, and therefore chaos.
(vi) There are several other different definitions of chaos based on orbits rather than
sets. For example, in [1] (p. 196, Definition 5.2; p. 235, Definition 6.2; pp. 385–
386, Definition 9.6), a chaotic set is defined as the ω -limit set of a chaotic orbit
Gn (x0 ) which itself is contained in the ω -limit set. In this case, the presence
of sensitive dependence on initial conditions (or a positive Lyapunov charac-
teristic exponent) is not enough to characterize chaotic properties of orbits and
additional conditions must be added to exclude unstable periodic or quasiperi-
odic orbits.
2.12 Symbolic Dynamics and the Shift Map
Symbolic dynamics is a powerful tool for understanding the orbit structure of a large
class of dynamical systems. In this section we only provide a brief introduction to
this tool.
To establish the tool, three steps are needed. First, we define an auxiliary system
characterized by a map, called a shift map, acting on a space of infinite sequences
called the symbol space. Next, we prove some properties of the shift map. Finally,
we establish a certain equivalence relation between a map we want to study and the
shift map, and show that the relationship preserves the properties in question.
We begin by defining the symbol space and the shift map. Let S be a collection
of symbols. In a physical interpretation, the elements of S could be anything, for
example letters of an alphabet or discrete readings of some measuring device for the
observation of a given dynamical system. To make ideas more clear, we assume here
that S consists of only two symbols; let them be 0 and 1. Then, we have S = {0, 1}.
Next, we want to construct the space of all possible bi-infinite sequences of 0 and 1,
defined as
Σ2 := · · · S × S × S × · · · .
A point s ∈ Σ2 is therefore represented as a bi-infinite sequence of elements of S, that
is, s ∈ Σ2 means
s = {. . . s−n . . . s−1 s0 s1 . . . sn . . . },
where ∀i, si ∈ S (i.e., si = 0 or 1). For example, s = {. . . 00010100111 . . .}.
We can define a distance function d̄ in the space Σ2 by
d̄(s, s̄) = ∑_{i=−∞}^{+∞} d(si, s̄i)/2^{|i|},   (2.30)
where d(si, s̄i) denotes the distance between the ith symbols of s and s̄.
This means that two points of Σ2 are close to each other if their central elements
are close, i.e., if the elements whose indexes have small absolute values are close.
Notice that, from the definition of d(si, s̄i), the infinite sum on the right-hand side of
(2.30) is less than 3, and, therefore, converges.
Next, we define the shift map T on Σ2 by
[T(s)]i = si+1,  i ∈ Z.
The map T moves each entry of a sequence one place to the left. Similarly, the
one-sided shift map T+ can be defined on the space of one-sided infinite sequences,
Σ2+, that is, s ∈ Σ2+, where s = {s0 s1 . . . sn . . . }. In this case, we have
[T+(s)]i = si+1,  i ≥ 0,
so that
T+(s0 s1 s2 . . . ) = (s1 s2 s3 . . . ).
It is obvious that the map T+ shifts a one-sided sequence by one place to the left
and drops its first element. Although the maps T and T+ have very similar properties, T
is invertible whereas T+ is not. The distance on Σ2+ is essentially the same as (2.30)
with the difference that the infinite sum will now run from zero to ∞. The map T+
can be used to prove chaotic properties of certain noninvertible, one-dimensional
maps frequently employed in applications.
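The following Python sketch (not from the book) implements the one-sided shift T+ and a truncated version of the metric (2.30), illustrating how the shift moves differences in far-away symbols toward the 'center' of the sequence; the particular finite sequences are arbitrary stand-ins for infinite ones.

# A small sketch (not from the book) of the one-sided shift map T+ and a
# truncated version of the metric (2.30) on one-sided sequences.
def shift(s):
    # T+ drops the first symbol and shifts the rest one place to the left
    return s[1:]

def distance(s, t):
    # truncated version of (2.30): sum |s_i - t_i| / 2**i over available indices
    return sum(abs(a - b) / 2**i for i, (a, b) in enumerate(zip(s, t)))

s = (0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1)
t = (0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0)   # agrees with s on the first 6 symbols
print(distance(s, t), distance(shift(s), shift(t)))  # the shift magnifies the difference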
Theorem 2.9. The shift map T+ on Σ2+ is chaotic according to Definition 2.21.
Remark 2.5. The shift map T+ on Σ2+ has a property that is stronger than topo-
logical transitivity called topological mixing. In general, we say that a map G is
topologically mixing on a set A if for any two open subsets U and V of A there
exists a positive integer N0 such that Gn(U) ∩ V ≠ ∅ for all n ≥ N0. If a map G is
topologically mixing, then for any integer n the map Gn is topologically transitive.
The importance of the fact that the shift map is chaotic in a precise sense lies
in that the chaotic properties of invariant sets of certain one- and two-dimensional
maps and three-dimensional flows may sometimes be proved by showing that the
dynamics on these sets are topologically conjugate to that of a shift map on a sym-
bol space. This indirect argument is often the only available strategy for investi-
gating nonlinear maps (or flows). We have encountered the concept of topological
conjugacy in the Hartman–Grobman theorem (Theorem 2.5, which we called home-
omorphic equivalence) between a nonlinear map (or flow) and its linearization in a
neighborhood of a fixed point. We now provide some formal definitions.
Definition 2.22. Let X and Y be topological spaces, and let f : X → X and g : Y →
Y be continuous functions. We say that f is topologically semiconjugate to g if
there exists a continuous surjection11 h : Y → X such that f ◦ h = h ◦ g. If h is a
homeomorphism, then we say that f and g are topologically conjugate, and we call
h a topological conjugation between f and g.
Similarly, a flow φ on X is topologically semiconjugate to a flow ψ on Y if there
is a continuous surjection h : Y → X such that φ (h(y),t) = h(ψ (y,t)) for each y ∈ Y ,
t ∈ R. If h is a homeomorphism, then φ and ψ are topologically conjugate.
Remark 2.6. Topological conjugation defines an equivalence relation in the space
of all continuous surjections of a topological space to itself, by declaring f and g to
be related if they are topologically conjugate. This equivalence relationship is very
useful in the theory of dynamical systems, since each class contains all functions
which share the same dynamics from the topological viewpoint. In fact, orbits of
g are mapped to homeomorphic orbits of f through the conjugation. Writing g =
h−1 ◦ f ◦ h makes this fact evident: gn = h−1 ◦ f n ◦ h. Roughly speaking, topological
conjugation is a ‘change of coordinates’ in the topological sense.
However, the analogous definition for flows is somewhat restrictive. In fact, we
require the maps φ (·,t) and ψ (·,t) to be topologically conjugate for each t, which
requires more than simply that orbits of φ be mapped to orbits of ψ homeomorphi-
cally. This motivates the definition of topological equivalence, which also partitions
the set of all flows in X into classes of flows sharing the same dynamics, again from
the topological viewpoint.
Definition 2.23. We say that two flows ψ and φ on a compact manifold M are topo-
logically equivalent if there is a homeomorphism h : Y → X, mapping orbits of
11 A function f : X → Y is a surjection if, for every y ∈ Y , there is an x ∈ X such that f (x) = y.
2.13 Lyapunov Exponent
Let xs denote a smooth curve of initial conditions passing through x0 at s = 0, and define
v(t) := ∂φ(t, xs)/∂s |_{s=0} = Dxφ(t, x0) ∂xs/∂s = Dxφ(t, x0) v0
and v0 := ∂xs/∂s |_{s=0}; then v(t) satisfies the first variation equation
dv(t)/dt = Df(φ(t, x0)) v(t).
If v0 = y0 − x0, then v(t) would give the infinitesimal displacement at time t.
The growth rate of ‖v(t)‖ is a number χ such that
‖v(t)‖ ≈ Ce^{χt},
so that
ln ‖v(t)‖/t ≈ ln(C)/t + χ;
therefore,
χ = lim_{t→∞} ln ‖v(t)‖ / t.
Definition 2.24. Let v(t) be the solution of the first variation equation, starting from
x0 with v(0) = v0 . The Lyapunov exponent for initial condition x0 and initial in-
finitesimal displacement v0 is defined as
χ(x0, v0) = lim_{t→∞} ln ‖v(t)‖ / t,
whenever this limit exists.
Remark 2.7. For most initial conditions x0 for which the forward orbit is bounded,
the Lyapunov exponents exist for all vectors v. In n-dimensional state space, there
are at most n distinct values for χ (x0 , v) as v varies. If we count multiplicities, then
there are exactly n values, χ1(x0) ≥ χ2(x0) ≥ · · · ≥ χn(x0).
Several results on Lyapunov exponents are listed as follows. For detailed proofs
please refer to [13].
Theorem 2.10. Assume that x0 is a fixed point of the differential equation ẋ = f (x).
Then, the Lyapunov exponents at the fixed point are the real parts of the eigenvalues
of the fixed point.
Theorem 2.11. Let x0 be an initial condition such that φ (t, x0 ) is bounded and ω (x0 )
does not contain any fixed points. Then,
χ (x0 , f (x0 )) = 0.
Remark 2.8. The above theorem means that there is no growth or decay in the direc-
tion of the vector field, v = f (x0 ).
Theorem 2.13. Assume that φ (t, x0 ) and φ (t, y0 ) are two orbits for the same dif-
ferential equation, which are bounded and converge exponentially (i.e., there are
constants a > 0 and C ≥ 1 such that φ (t, x0 ) − φ (t, y0 ) ≤ Ce−at for t ≥ 0). Then,
the Lyapunov exponents for x0 and y0 are the same. So, if the limits defining the Lya-
punov exponents exist for one of the points, they exist for the other point. The vec-
tors which give the various Lyapunov exponents can be different at the two points.
(ii) In particular, if the system has constant divergence δ , then the sum of the Lya-
punov exponents at any point must equal δ .
(iii) In the three-dimensional case, assume that the divergence is a constant δ and
that x0 is a point for which the positive orbit is bounded and ω (x0 ) does not
contain any fixed points. If χ1 (x0 ) is a nonzero Lyapunov exponent at x0 , then
the other two Lyapunov exponents are 0 and δ − χ1.
For a discrete-time system xk+1 = G(xk), the Lyapunov exponent of x0 in the direction w is defined analogously as
χ(x0, w) = lim_{n→∞} (1/n) ln ( ‖D^nG(x0) w‖ / ‖w‖ ),
where
D^nG(x0) = DG(xn−1) · · · DG(x1) DG(x0)
and w is a vector in the tangent space at x0.
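For a one-dimensional map the definition reduces to averaging ln|G′(xk)| along an orbit. The Python sketch below (not from the book) estimates the Lyapunov exponent of the logistic map G(x) = 4x(1 − x) in this way; the initial condition and iteration counts are arbitrary choices.

# A minimal sketch (not from the book): estimating the Lyapunov exponent of the
# one-dimensional logistic map G(x) = 4x(1-x) by averaging ln|G'(x_k)| along an
# orbit, which is the scalar form of the definition above.
import numpy as np

def lyapunov_exponent_logistic(x0=0.3, n=100000, transient=1000):
    x = x0
    for _ in range(transient):          # discard transient iterates
        x = 4.0 * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        total += np.log(abs(4.0 - 8.0 * x))   # ln|G'(x)| with G'(x) = 4 - 8x
        x = 4.0 * x * (1.0 - x)
    return total / n

print(lyapunov_exponent_logistic())  # close to ln(2) ~ 0.693 in the chaotic regime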
2.14 Examples
In this section, we will explore how chaos appears by investigating some examples.
Proof. Consider that the graph of the nth iterate of GΛ consists of 2^n linear pieces,
each with slope ±2^n. Each of these linear pieces of the graph is defined on a subin-
terval of [0, 1] of length 2^{−n}. Then, for any open subinterval J of [0, 1], we can find
a subinterval K of J of length 2^{−n}, such that the image of K under GΛ^n covers the
entire interval [0, 1]. Therefore, GΛ is topologically transitive on [0, 1]. This fact,
and the discussion in point (v) of Remark 2.4 [16], proves the proposition.
Remark 2.9. From the geometry of the iterated map GΛ^m, it appears that the graph of
GΛ^m on J intersects the bisector and therefore GΛ^m has a fixed point in J. This proves
that periodic points are dense in [0, 1]. Also, for any x ∈ J there exists a y ∈ J such
that |GΛ^n(x) − GΛ^n(y)| ≥ 1/2 and, therefore, GΛ has sensitive dependence on initial
conditions.
The logistic map G4(x) = 4x(1 − x) (see Fig. 2.23) is also chaotic. Consider the map h(y) = sin²(πy/2). The map h is
continuous and, restricted to [0, 1], is also one-to-one and onto. Its inverse is contin-
uous and h is thus a homeomorphism. Consider now the commutative diagram in
which GΛ maps the interval [0, 1] to itself, G4 maps [0, 1] to itself, and h carries the
first copy of [0, 1] onto the second, so that
h ∘ GΛ = G4 ∘ h.
Fig. 2.23 The graph of the logistic map G4(x) on [0, 1]
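The conjugacy used in the argument above can also be checked numerically. The following Python sketch (not from the book) verifies h ∘ GΛ = G4 ∘ h on a grid of points in [0, 1].

# A quick numerical check (not from the book) of the conjugacy used above:
# h(G_tent(y)) should equal G4(h(y)) for h(y) = sin^2(pi*y/2).
import numpy as np

def tent(y):
    return np.where(y <= 0.5, 2.0 * y, 2.0 - 2.0 * y)

def logistic4(x):
    return 4.0 * x * (1.0 - x)

def h(y):
    return np.sin(np.pi * y / 2.0) ** 2

y = np.linspace(0.0, 1.0, 1001)
print(np.max(np.abs(h(tent(y)) - logistic4(h(y)))))  # ~1e-16, i.e., h ∘ GΛ = G4 ∘ h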
Both the tent map and the logistic map are maps from an interval of R to itself.
There are also many other one-dimensional mappings presenting chaotic dynamics.
In fact, for one-dimensional mappings from R to itself we have the following so-
called Li–Yorke theorem [13], famous for the phrase ‘period three implies chaos,’
and Sharkovskii's theorem [13], the generalization of Li–Yorke's result. Since many
references include the proof of these two theorems, we will only state the content of
the theorems.
Theorem 2.15 (Li–Yorke [13]). Assume that f is a continuous function from R to
itself.
(i) If f has a period-3 point, then it has points of all periods.
(ii) Assume that there is a point x0 such that either
a. f 3 (x0 ) ≤ x0 < f (x0 ) < f 2 (x0 ) or
b. f 3 (x0 ) ≥ x0 > f (x0 ) > f 2 (x0 ).
Then, f has points of all periods.
This theorem was obtained by Li and Yorke in 1975, and soon afterwards it was shown
that the Li–Yorke theorem is a special case of Sharkovskii's theorem. Before introduc-
ing Sharkovskii's theorem, we first define a new order ≻ for the natural numbers as fol-
lows:
3 ≻ 5 ≻ 7 ≻ · · · ≻ 2·3 ≻ 2·5 ≻ 2·7 ≻ · · · ≻ 2^k·3 ≻ 2^k·5 ≻ 2^k·7 ≻ · · · ≻ 2^k ≻ 2^{k−1} ≻ · · · ≻ 2² ≻ 2 ≻ 1.
The Smale horseshoe is the prototypical map possessing a chaotic invariant set.
Therefore, a thorough understanding of the Smale horseshoe is absolutely essential
for understanding what is meant by the term ‘chaos’ as it is applied to the dynamics
of specific physical systems [10].
Consider the geometrical construction in Fig. 2.24. Take a square S on the plane
(Fig. 2.24 (a)). Contract it in the horizontal direction and expand it in the verti-
cal direction (Fig. 2.24 (b)). Fold it in the middle (Fig. 2.24 (c)) and place it so
that it intersects the original square S along two vertical strips (Fig. 2.24 (d)). This
procedure defines a map f : R2 → R2 . The image f (S) of the square S under this
transformation resembles a horseshoe. That is why it is called a horseshoe map.
The exact shape of the image f(S) is irrelevant; however, for simplicity we assume
that both the contraction and expansion are linear and that the vertical strips in the
intersection are rectangles. The map f can be made invertible and smooth together
with its inverse. The inverse map f −1 transforms the horseshoe f (S) back into the
square S through stages (d)–(a). This inverse transformation maps the dotted square
S shown in Fig. 2.24 (d) into the dotted horizontal horseshoe in Fig. 2.24 (a), which
is assumed to intersect the original square S along two horizontal rectangles.
Denote the vertical strips in the intersection S ∩ f (S) by V1 and V2 ,
S ∩ f (S) = V1 ∪V2
(see Fig. 2.25 (a)). Now make the most important step: perform the second iteration
of the map f . Under this iteration, the vertical strips V1 and V2 will be transformed
Fig. 2.24 Construction of the horseshoe map f: (a)–(d)
into two ‘thin horseshoes’ that intersect the square S along four narrow vertical
strips: V11, V21, V22, and V12 (see Fig. 2.25 (b)). We write this as
S ∩ f(S) ∩ f²(S) = V11 ∪ V21 ∪ V22 ∪ V12.
Similarly,
S ∩ f⁻¹(S) = H1 ∪ H2,
where H1 and H2 are the horizontal strips shown in Fig. 2.25 (c), and
S ∩ f⁻¹(S) ∩ f⁻²(S) = H11 ∪ H12 ∪ H21 ∪ H22,
with four narrow horizontal strips Hij (Fig. 2.25 (d)). Notice that f(Hi) = Vi, i = 1, 2,
as well as f²(Hij) = Vij, i, j = 1, 2 (see Fig. 2.26).
Iterating the map f further, we obtain 2^k vertical strips in the intersection S ∩
f^k(S), k ∈ N. Similarly, iteration of f⁻¹ gives 2^k horizontal strips in the intersection
S ∩ f^{−k}(S), k ∈ N.
Most points leave the square S under iterations of f or f⁻¹. We consider all
points remaining in the square under all iterations of f and f⁻¹:
Γ = {x ∈ S : f^k(x) ∈ S, ∀k ∈ Z}.
Fig. 2.25 The vertical strips V1, V2 and V11, V21, V22, V12 and the horizontal strips H1, H2 and H11, H12, H21, H22
Fig. 2.26 The images f(Hij) of the horizontal strips
which is the union of sixteen smaller squares (see Fig. 2.27 (b)), and so forth. In the
limit, we get a Cantor set. About the horseshoe map, we have the following lemma.
Fig. 2.27 (a), (b): the intersections of S with forward and backward iterates of f
Proof. For any point x ∈ Γ, define a sequence of the two symbols {1, 2},
ω = {. . . , ω−2, ω−1, ω0, ω1, ω2, . . . },
by the formula
ωk = 1, if f^k(x) ∈ H1;  ωk = 2, if f^k(x) ∈ H2,   (2.32)
for k ∈ Z. Here, f⁰ = id, the identity map. Clearly, this formula defines a map
h : Γ → Σ2, which assigns a sequence to each point of the invariant set. To ver-
ify that this map is invertible, take a sequence ω ∈ Σ2, fix m > 0, and consider the set
Rm(ω) of all points x ∈ S, not necessarily belonging to Γ, such that
f^k(x) ∈ Hωk,
Remark 2.10. The map h : Γ → Σ2 is continuous together with its inverse (a home-
omorphism) if we use the standard Euclidean metric in S ⊂ R2 and the metric given
by (2.30) in Σ2 .
Let x ∈ Γ have the corresponding sequence ω, and let θ denote the sequence assigned to the image f(x) ∈ Γ. Then
θk = ωk+1,  k ∈ Z,
since
f^k(f(x)) = f^{k+1}(x).
In other words, the sequence θ can be obtained from the sequence ω by the shift
map σ :
θ = σ (ω ).
Therefore, the restriction of f to its invariant set Γ ⊂ R2 is equivalent to the shift
map σ on the set of sequences Σ2 . This result can be formulated as the following
lemma.
f |Γ = h−1 ◦ σ ◦ h.
Combining Lemmas 2.1 and 2.2 with obvious properties of the shift dynamics
on Σ2 , we get a theorem giving a rather complete description of the behavior of the
horseshoe map.
Theorem 2.17. The horseshoe map f has a closed invariant set Γ that contains a
countable set of periodic orbits of arbitrarily long period, and an uncountable set
of nonperiodic orbits, among which there are orbits passing arbitrarily close to any
point of Γ .
Remark 2.11. The limit set Γ of a Smale horseshoe map is unstable and, therefore,
not attracting. It follows that the existence of a Smale horseshoe does not imply
the existence of a chaotic attractor. The existence of a Smale horseshoe does imply,
however, that there is a region in state space that experiences sensitive dependence
on initial conditions. Thus, even when there is no strange attractor in the flow, the
dynamics of the system can appear chaotic until the steady state is reached.
Remark 2.12. We can slightly perturb the constructed map f without qualitative
changes to its dynamics. Clearly, Smale’s construction is based on a sufficiently
strong contraction or expansion, combined with a fold. Thus, a (smooth) perturba-
tion f˜ will have similar vertical and horizontal strips, which are no longer rectangles
but curvilinear regions. However, provided that the perturbation is sufficiently small,
these strips will shrink to curves that deviate only slightly from vertical and hori-
zontal lines. Thus, the construction can be carried through word for word, and the
perturbed map f˜ will have an invariant set Γ on which the dynamics is completely
described by the shift map σ on the sequence space Σ2 . This is an example of struc-
turally stable behavior.
E1: (0, 0, 0),
E2: (√(b(r − 1)), √(b(r − 1)), r − 1),
E3: (−√(b(r − 1)), −√(b(r − 1)), r − 1).
some trajectory of the system. That is, the trajectory one calculates might not be
exactly the intended one, but it is very close to one of the possible trajectories of
the system. In more technical terms, we say that the computed trajectory shadows
some possible trajectories of the system. (A proof of this shadowing property for
chaotic systems is given in [6] and [14].) In general, we are most often interested
in properties that are averaged over a trajectory; in many cases those average values
are independent of the particular trajectory we follow. So, as long as we follow some
possible trajectory for the system, we can have confidence that our results are a good
characterization of the system's behavior. Recently, W. Tucker [15] strengthened
the above discussion: using a computer-assisted proof, he showed
that the Lorenz system not only has sensitive dependence on initial conditions, but
also has a chaotic attractor.
Although a full mathematical analysis of the observed attractor is still lacking,
some of the attractor’s properties have been established through a combination of
numerical evidence and theoretical arguments. Before presenting the analysis we
first consider the following three facts about system (2.33).
(i) The trace of the Jacobian matrix
tr[Df(x, y, z)] = ∂ẋ/∂x + ∂ẏ/∂y + ∂ż/∂z = −(b + σ + 1) < 0
is constant and negative along orbits. Thus, any three-dimensional volume of
initial conditions is contracted along orbits at a rate equal to
γ = −(b + σ + 1) < 0;
V(x, y, z) = x² + y² + (z − r − σ)² = K²(r + σ)²   (2.34)
Keeping the three facts in mind, we can investigate the Lorenz system by con-
structing a geometric model which, under a certain hypothesis, provides a reason-
able approximation of the dynamics of the original model with ‘canonical’ parame-
ter values σ = 10, b = 8/3, and r > rH .
We first consider a system of differential equations in R3 depending on a param-
eter μ with the following properties:
(i) for a certain value μh of the parameter, there exists a pair of symmetrical ho-
moclinic orbits, asymptotically converging to the origin, and tangential to the
positive z axis;
(ii) the origin is a saddle-point equilibrium and the dynamics in a neighborhood
N of the equilibrium, for μ in a neighborhood of μh , is approximated by the
system
ẋ = λ1 x,
ẏ = λ2 y,
ż = λ3 z,
where λ2 < λ3 < 0 < λ1 and −λ3 /λ1 < 1;
(iii) the system is invariant under the change of coordinates (x, y, z) → (−x, −y, z).
Under these conditions, for (x, y, z) ∈ N and μ near μh , it is possible to construct
a two-dimensional cross section Σ , such that the transversal intersections of orbits
with Σ define a two-dimensional Poincaré map P : Σ → Σ . For values of |x| and
|μ − μh| sufficiently small, the dynamics of P can further be approximated by a one-
dimensional, noninvertible map Gμ : [−a, a] − {0} → R, defined on an interval of the
x axis, but not at x = 0.
A typical formulation of the map Gμ is
Gμ(x) = aμ + c x^δ,  if x > 0;
Gμ(x) = −aμ − c|x|^δ,  if x < 0;
Fig. 2.30 One-dimensional map for the Lorenz model
Then, G′μ(x) > 1 for x ∈ [−α, α] (x ≠ 0), and lim_{x→0±} G′μ(x) = +∞. The map Gμ
on the interval is depicted in Fig. 2.30.
Because G′μ(x) > 1 for all x ∈ [−α, α] (x ≠ 0), Gμ is a piecewise-expanding
map and therefore has sensitive dependence on initial conditions. There are no fixed
points or stable periodic points and most orbits on the interval [−α, 0) ∪ (0, α] are
attracted to a chaotic invariant set. Although increasing μ beyond the homoclinic
value μh leads to stable chaotic motion, if we take μ very large, the system reverts
to simpler dynamical behavior and stable periodic orbits reappear.
Remark 2.14. The idea that some essential aspects of the dynamics of the original
system (2.33) could be described by a one-dimensional map was first put forward
by Lorenz himself. In order to ascertain whether the numerically observed attractor
could be periodic rather than chaotic, he plotted successive maxima of the variable
z along an orbit on the numerically observed attractor. In doing so, he discovered
that the plot of zn+1 against zn has a simple shape, as illustrated in Fig. 2.31. The
points of the plot lie almost exactly on a curve whose form changes as the parameter
r varies. Setting σ = 10, b = 8/3, and r = 28 (the traditional ‘chaotic values’), we
Fig. 2.31 Successive maxima of z: zn+1 plotted against zn
obtain a curve resembling a distorted tent map. It has slope everywhere greater than
1 in absolute value so, again, it approximates a piecewise-expanding map. For such
a map there cannot be stable fixed points or stable periodic orbits and, for randomly
chosen initial values, orbits converge to a chaotic attractor.
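The construction described in this remark is easy to reproduce numerically. The Python sketch below (not from the book) integrates the Lorenz equations and collects successive maxima of z(t); it assumes that (2.33) is the standard Lorenz system ẋ = σ(y − x), ẏ = rx − y − xz, ż = xy − bz, with the 'chaotic' parameter values quoted above.

# A minimal numerical sketch (not from the book) reproducing Lorenz's successive-
# maxima construction, assuming (2.33) is the standard Lorenz system
# dx/dt = sigma*(y - x), dy/dt = r*x - y - x*z, dz/dt = x*y - b*z,
# with sigma = 10, b = 8/3, r = 28.
import numpy as np
from scipy.integrate import solve_ivp

sigma, b, r = 10.0, 8.0 / 3.0, 28.0

def lorenz(t, s):
    x, y, z = s
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0], max_step=0.002)
z = sol.y[2][sol.t > 20.0]                        # discard the transient
is_max = (z[1:-1] > z[:-2]) & (z[1:-1] > z[2:])   # local maxima of z(t)
z_max = z[1:-1][is_max]
pairs = np.column_stack([z_max[:-1], z_max[1:]])  # the points (z_n, z_{n+1}) of Fig. 2.31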
2.15 Basics of Functional Differential Equations Theory
ẋ = f(t, xt)   (2.35)
‖f(t, φ) − f(t, ψ)‖ ≤ K‖φ − ψ‖,
‖x(t, σ, φ)‖ ≤ ε exp[−β(t − σ)]
In the following, we introduce some sufficient conditions for the stability of the
solution x = 0 of (2.35) which generalize the second method of Lyapunov for ordi-
nary differential equations.
If the functional V : R × C → R+ is continuous and x(t, σ , φ ) is the solution of
(2.35) through (σ , φ ), we define
1
V̇ (t, φ ) = lim [V (t + h, xt+h (t, φ )) − V (t, φ )].
h→0+ h
The function V̇ (t, φ ) is the upper right-hand derivative of V (t, φ ) along the solution
of (2.35).
If the function V : R × R → R+ is continuous, we define
1
V̇ (t, φ (0)) = lim [V (t + h, x(t, φ )(t + h)) − V(t, φ (0))].
h→0+ h
The function V̇ (t, φ (0)) is the upper right-hand derivative of V (t, x) along the solu-
tion of (2.35).
Sometimes we write V̇(2.35) (t, φ ) and V̇(2.35) (t, φ (0)) to emphasize the dependence
on (2.35), respectively.
Theorem 2.22. Suppose that f : R × C → Rn takes R× (bounded set of C) into
bounded sets of Rn , and u, v, w : R+ → R+ are continuous nondecreasing functions,
u(s) and v(s) are positive for s > 0, and u(0) = v(0) = 0. If there is a continuous
functional V : R × C → R+ such that
u( φ (0) ) ≤ V (t, φ ) ≤ v( φ τ ),
a continuous and nondecreasing function satisfying P(s) > s for s > 0. If there is a
continuous function V : R × Rn → R+ such that
u( x ) ≤ V (t, φ ) ≤ v( x )
and
V̇(2.35) (t, φ (0)) ≤ −w( φ (0) ) when V (t + θ , φ (θ )) < P(V (t, φ (0))), θ ∈ [−τ , 0],
and
V̇(2.35) (t, φ (0)) ≤ −μ V (t, φ (0)) when sup [eμθ V (t + θ , x(t + θ ))] = V (t, x(t)),
−τ ≤θ ≤0
2.16 Summary
One basic goal in studying dynamical systems is to explore how the trajectories of a
system evolve as time proceeds. So, in this chapter, we began with the theorems on
existence and uniqueness of solutions of ordinary differential equations. In the sub-
sequent sections we focused on a special class of solutions: equilibrium solutions.
Besides that notion, we introduced other definitions such as fixed point, periodic
orbit, quasiperiodic orbit, ω -limit set, invariant set, etc., and, furthermore, the key
concepts of the book, chaos and chaotic attractors, were introduced. Two power-
ful tools in studying chaotic systems, Lyapunov exponents and symbolic dynamics,
were discussed briefly. The stability issue was discussed and a detailed category of
the types of fixed point of planar systems for both continuous time and discrete time
was provided. Three famous examples on chaos were carefully presented through
which we wanted to give a concrete understanding about chaos. Finally, we pro-
vided some necessary preliminaries on retarded functional differential equations for
the purpose of self-containment of the book.
76 2 Preliminaries of Nonlinear Dynamics and Chaos
References
1. Alligood KT, Sauer TD, Yorke JA (1997) Chaos – An Introduction to Dynamical Systems.
Springer, New York
2. Arnold VI (1988) Geometrical Methods in the Theory of Ordinary Differential Equations,
2nd edn. Springer, New York
3. Banks J, Brooks J, Cairns G, Davis G, Stacey P (1992) On Devaney’s definition of chaos. Am
Math Mon 99:332–334
4. Birkhoff GD (1927) Dynamical Systems. American Mathematical Society, Providence, RI
5. Devaney RL (1989) An Introduction to Chaotic Dynamical Systems, 2nd edn. Addison-
Wesley, New York
6. Grebogi C, Hammel SM, Yorke JA, Sauer T (1990) Shadowing of physical trajectories in
chaotic dynamics: containment and refinement. Phys Rev Lett 65:1527–1530
7. Guckenheimer J, Holmes P (1983) Nonlinear Oscillations, Dynamical Systems, and Bifurca-
tions of Vector Fields. Springer, New York
8. Hale J (1977) Theory of Functional Differential Equations. Springer, New York
9. Hirsch MW, Smale S (1974) Differential Equations, Dynamical Aystems and Linear Algebra.
Academic Press, New York
10. Kuznetsov YA (1998) Elements of Applied Bifurcation Theory, 2nd edn. Springer, New York
11. Medio A, Lines M (2001) Nonlinear Dynamics: A Primer. Cambridge University Press, Lon-
don
12. Parker TS, Chua LO (1989) Practical Numerical Algorithms for Chaotic Systems. Springer,
New York
13. Robinson RC (2004) An Introduction to Dynamical Systems: Continuous and Discrete. Pren-
tice Hall, Upper Saddle River, NJ
14. Sauer T, Grebogi C, Yorke JA (1997) How long do numerical chaotic solutions remain valid?
Phys Rev Lett 79:59–62
15. Tucker W (2002) A rigorous ODE solver and Smale’s 14th problem. Found Comput Math
2:53–117
16. Vellekoop M, Berglund R (1994) On intervals: transitivity → chaos. Am Math Mon 101:353–
355
17. Wiggins S (2003) Introduction to Applied Nonlinear Dynamical Systems and Chaos, 2nd edn.
Springer, New York
Chapter 3
Entrainment and Migration Control of Chaotic
Systems
Abstract In this chapter, the entrainment and the migration methods of chaos con-
trol are discussed. The background of entrainment and migration control is based
on two facts: a multi-attractor chaotic system is sensible to both initial values and
parameters and each stable attractor has its basin of attraction. A common goal of
chaos control is to steer the trajectory of a chaotic system to a periodic orbit which
itself is a solution of the chaotic system. Sometimes, one wants to steer the trajec-
tory to a predesigned orbit which is not the solution of the chaotic system. For some
multiple attractor chaotic systems, one would like to direct the trajectory from one
attractor to the others. To achieve these goals, entrainment and migration methods
are developed. We first introduce these methods and their extension, the open-plus-
closed-loop (OPCL) method. Based on the OPCL method, a new control scheme,
the open-plus-nonlinear-closed-loop (OPNCL) method, is developed. In addition,
we apply the method of OPNCL to a class of dynamical systems, both continuous
time and discrete time.
3.1 Introduction
Usually, several attractors can coexist in a complex dynamical system. These attrac-
tors can be stable or unstable. When a key parameter is changed the attractor may
be altered in either appearance or spatial position, or in both. The background of
entrainment and migration control is based on two facts: a multi-attractor chaotic
system is sensible to both initial values and parameters and each stable attractor has
its basin of attraction. A common goal of chaos control is to steer the trajectory of
a chaotic system to a periodic orbit which itself is a solution of the chaotic sys-
tem. Sometimes, one wants to steer the trajectory to a predesigned orbit which is
not the solution of the chaotic system. For some multiple attractor chaotic systems,
one would like to direct the trajectory from one attractor to others. To achieve these
goals, entrainment and migration methods are proposed. In this chapter, we will in-
troduce these methods and their expansion, the OPCL method. Based on the OPCL
77
78 3 Entrainment and Migration Control of Chaotic Systems
method, a new control scheme, the OPNCL method, is proposed. Besides that, we
will apply the method of OPNCL to a class of dynamical systems, both continuous
time and discrete time.
or
ẋ = f (x), (3.2)
where x ∈ Rm and f : Rm → Rm is smooth. Sometimes, it is difficult to measure and
feed-back the state variables. Moreover, sometimes one wishes to steer the trajectory
of systems (3.1) and (3.2) to a certain region in the phase space, or even to some
predesigned orbits. For example, the target orbits can be given by
xn = g n (3.3)
or
x(t) = g(t). (3.4)
These target orbits may be fixed points, periodic orbits, nonperiodic orbits, or even
chaotic orbits. We would like to achieve the specified target, for example, (3.3), by
a control signal Ŝn . That is to say, the following controlled discrete-time system:
ẋ = f (x) + Ŝ(t),
(3.6)
Ŝ(t) = ġ(t) − f (g(t)).
It should be noted that Ŝn and Ŝ(t) are independent of the state variables of sys-
tems (3.1) and (3.2). This control method belongs to an open-loop control scheme.
Thus, the targets (3.3) and (3.4) are not arbitrary. They should be stable solutions of
3.2 Basics on Entrainment and Migration 79
the controlled systems. We take (3.6) as an example. If g(t) is a periodic orbit, its
stability can be determined by the vector field
δ ẋ = D fx δ x, (3.7)
where D fx = (∂ f /∂ x)|x=g(t) and δ x = x(t)− g(t). Because g(t) and f (x) are known,
the conditions to hold for stability are that all Floquet characteristic multipliers are
negative. Thus, we can select a certain region C, of the state space, so that the dis-
tance between any couple of points in this region is contracted by (3.7). This kind of
region is called a convergence region. In this region, for any distance, Δ x, it always
holds that
Δ xT D fx Δ x < 0. (3.8)
Therefore, if g(t) is in this region, it will be stable. For any bounded dissipative
system this region always exists. In the discrete-time case, the variation equation
given in (3.7) becomes
δ xn+1 = D fx δ xn , δ xn = xn − gn .
|Δ xT D fx Δ x| < Δ xT Δ x. (3.9)
Once the target has been selected, the next step is to determine BE(g), i.e., the basin
of entrainment (the range of initial conditions of x(t) that will satisfy the condition
limt→∞ x(t) − g(t) = 0). This step is far more difficult than determining a stable
target. A point in a contractive region must converge to the target orbit but, compared
with the basin of entrainment, this contractive region is very small. In general, we
cannot get an analytical expression for a basin of entrainment, but we can get a
rough understanding of a basin of entrainment by means of numerical simulations
or experiments.
Now, it is clear how the entrainment method is implemented. First, a target orbit
g(t) (or gn ) should be selected according to the requirements of the stability in prac-
tice. The basin of entrainment of the target orbit must cover a part of the attractor of
the chaotic system to be controlled. Then, the controlled systems are as follows:
(i) for the discrete-time case, we have
Here, S(t) = 0 when the controller does not work and S(t) = 1 when the controller
begins to work. The same definition is for Sn as well. The controller is turned on
when the trajectory of the chaotic system goes into the overlapping region of the
80 3 Entrainment and Migration Control of Chaotic Systems
basin of entrainment and the chaotic attractor. Since these kinds of controllers were
first proposed by Hübler [3], we will refer to them as Hübler open-loop controllers.
There are some distinguishable merits of the entrainment method. The selection
of the target orbits is very flexible. The target orbits can be ones which are not solu-
tions of the chaotic systems, which enhances the ability of control and, meanwhile,
avoids real-time monitoring and feedback. Therefore, the controller would be sim-
plified significantly. However, there are some problems with this method. On one
hand, a total understanding about the original chaotic system is required when one
wants to determine the control signals. This is not always achievable. On the other
hand, the control signals usually have large amplitude and the original dynamics
changes in a large range. It should be emphasized that both methods (3.10) and
(3.11) cannot stabilize unstable periodic orbits. From (3.7) and (3.9) it is easy to see
that the stability of the target orbits is not changed by the controller.
One interesting point with entrainment control is that g(t) and gn are not required to
be periodic orbits. They may be a migration orbit from one point in the state space
to another. Thus, we can use entrainment control to achieve an important control
task, the migration of state.
One of the features of nonlinear systems is that there usually exist multiple attrac-
tors corresponding to some parametric conditions. Since each attractor has its own
basin of attraction, different initial conditions will result in different final states.
If one wants to migrate an orbit from a chaotic attractor to a periodic orbit, or to
migrate a periodic orbit to another periodic orbit, then the migration orbit between
different points in the state space is a reasonable selection.
Suppose that g(t) is the selected orbit which is used to migrate g(0) = xS to
g(t → ∞) = xT . We require that
The control equation that should be obeyed by the target is (3.11). Since the starting
point of the migration orbit can be located at the chaotic attractor, the whole under-
standing about the distribution of the basin of entrainment is not required. The only
requirement is the stability of g(t). After the starting attractor and the terminal at-
tractor are determined, we should select the starting point xS and the terminal point
xT of the migration orbit. The basin of attraction may be quite big, which makes the
selection of xT very flexible. Since there is much freedom in the selection of g(t), it
is not difficult to guarantee local stability.
We take the Lorenz system as an example. The Lorenz system is of the following
form:
3.2 Basics on Entrainment and Migration 81
⎧
⎨ ẋ1 = σ (x2 − x1 ),
ẋ = ρ x1 − x1 x3 − x2 , (3.12)
⎩ 2
ẋ3 = x1 x2 − β x3.
Let g = (g1 (t), g2 (t), g3 (t))T be a migration orbit. Adding the Hübler open-loop
control, (3.12) becomes
⎧
⎨ u̇1 = σ (u2 − u1 ),
u̇ = ρ u1 − u2 − u1u3 − u1g3 − g1 u3 , (3.13)
⎩ 2
u̇3 = u1 u2 − β u3 + u1 g2 + u2g1 ,
g22 (t)
g3 (t) > ρ − 1 + . (3.16)
4β
Thus, any orbit which satisfies conditions (3.16) and (3.17) must be stable. Fur-
thermore, its basin of entrainment is the whole state space. Conditions (3.16) and
82 3 Entrainment and Migration Control of Chaotic Systems
(3.17) are just sufficient conditions, not necessary ones. When σ = 10, β = 8/3,
and 14 < ρ < 24, the Lorenz system has two attractors, and they are
A+ : ( β (ρ − 1), β (ρ − 1), ρ − 1),
A− : (− β (ρ − 1), − β (ρ − 1), ρ − 1).
Suppose that the Lorenz system starts from A+ . We want to migrate the trajectory
of the Lorenz system to A− along the migration orbit g(t). We select the following
migration orbit:
The orbit (3.18) does not satisfy condition (3.17), but it is stable. The initial point is
selected in the basin of attraction of A+ . The controller is turned on at t = 0. There-
fore, the trajectory of the Lorenz system is entrained to the basin of attraction of A− .
Then, the controller is turned off. The trajectory goes to A− under the domination
of the dynamics of the Lorenz system. This procedure is shown in Fig. 3.1.
If the values of σ and β are fixed but the value of ρ varies in the range
[24.1, 24.7], a chaotic attractor appears besides the two fixed points. We take two
steps to entrain the trajectory of the Lorenz system from the chaotic attractor to A− .
A+
6
−2
−4
A−
−6
−8
−8 −6 −4 −2 0 2 4 6 8
x1
in the convergent region as the goal of entrainment. From (3.15) to (3.17) we know
that this point is the unique fixed point of system (3.13). So, whenever the controller
is turned on, the trajectory of the Lorenz system will be entrained to this point.
Therefore, the difficulties encountered in the selection of the target orbit, which are
caused by the complex structure of the basin of entrainment of each attractor, can
be avoided. Second, we select the following migration orbit [2]:
where t0 is the instant at which the entrainment to g∗ is finished and the migration
from g∗ to A− is started. The simulation result is shown in Fig. 3.2.
80
60
A− 40
40
x3
20 20
0 0
−20
−15
−10 −20
−5
0
5
10 x2
15
20 −40
25
x1
Fig. 3.2 Migrating the trajectory of the Lorenz system from the chaotic attractor to A−
84 3 Entrainment and Migration Control of Chaotic Systems
where S(t) = 0 (if t < t0 ) and S(t) = 1 (if t ≥ t0 ), and F(x,t) is everywhere Lips-
chitzian. There exists a function K(g, x,t), linear in x(t), such that none of the basins
of entrainment (associated with g(t) and t0 ), defined by
Proof. Consider a special kind of control, K(g,t), which is the sum of Hübler’s
open-loop control and a particular form of a linear closed-loop control, namely
∂ F(g,t)
C(g,t) = −A (3.22)
∂g
and A = (ai j ) is any constant matrix whose eigenvalues all have negative real parts.
The control (3.21) and (3.22) is referred to as the OPCL control.
Set x(t) = g(t) + u(t), where u(t0 ) is arbitrarily small. Substitute x(t) into (3.19)
when t ≥ t0 , and expand F(g(t) + u(t),t) for small u(t). This yields the following
variational equation:
du ∂ F(g,t)
= u(t) − C(g,t)u(t) = Au(t). (3.23)
dt ∂g
3.4 Global Control of a Class of Continuous-Time Polynomial Chaotic Systems 85
Since all eigenvalues of A have negative real parts, the asymptotic stability of (3.23)
is established. Since this asymptotic stability also holds for the nonlinear system
(3.19), BE(g) is not an empty set.
The OPCL control is very effective for systems whose right-hand sides of differen-
tial equations are mth-order polynomials, where m ≤ 2. When m > 2, it is difficult
to determine the basin of entrainment, which will be very complex with the increase
of m. To overcome this difficulty, an improved OPCL method, called the OPNCL
method, was proposed [5].
Consider the following mth-order polynomial systems:
ẋ = F(x,t) (x ∈ Rn ) (3.24)
or
m
∑ j j
a j1 j2 ... jn (t)x11 x22 · · · xnjn (i = 1, 2, . . . , n).
(i)
ẋi = Fi (x,t) = (3.25)
j1 + j2 +···+ jn =0
Without loss of generality, we suppose that there is at least one nonzero parameter,
(i)
for example, a j1 j2 ... jn (t) = 0 (1 ≤ i ≤ n). For system (3.24) we add the following
OPNCL controller:
where D(g, x,t) is a nonlinear function of g(t) − x(t); its concrete form will be given
later. Adding the control term K(g, x,t) to system (3.24) through a switching func-
tion S(t), we get
ẋ = F(x,t) + S(t)K(g, x,t), (3.27)
where K(g, x,t) is specified by (3.26). For an arbitrary target g(t), we will prove that
the basin of entrainment of (3.24) is not an empty set. Furthermore, the entrainment
is global. Set S(t) = 1 and let x(t) = g(t) + u(t), where u(t) ∈ Rn . Substituting x(t)
into (3.27), we get
ġ + u̇ = F(g + u,t) + K(g, g + u,t).
Substituting (3.26) into the above equation, we get
m
1 ∂ k F(g,t) k
u̇ = F(g + u,t) − F(g,t) − D(g, x,t)u = ∑ k! ∂ gk
u − D(g, x,t)u. (3.28)
k=1
∂ k F(g,t) k
To calculate u , we note that F = (F1 , F2 , . . . , Fn )T and Fi = Fi (x1 , x2 , . . . ,
∂ gk
xn ). Thus, we have
Note that
n
∂ 2 Fi ∂ 2 Fi 2 ∂ 2 Fi ∂ 2 Fi
∑ h j hk = h1 + h1 h2 + · · · + h1 hn
j,k=1 ∂ x j ∂ xk ∂ x1 ∂ x1 ∂ x1 ∂ x2 ∂ x1 ∂ xn
∂ 2 Fi ∂ 2 Fi 2 ∂ 2 Fi
+ h2 h1 + h2 + · · · + h2 hn
∂ x2 ∂ x1 ∂ x2 ∂ x2 ∂ x2 ∂ xn
+ ···
∂ 2 Fi ∂ 2 Fi ∂ 2 Fi 2
+ hn h1 + hn h2 + · · · + h
∂ xn ∂ x1 ∂ xn ∂ x2 ∂ xn ∂ xn n
∂ 2 Fi ∂ 2 Fi ∂ 2 Fi
= h1 + h2 + · · · + hn ,
∂ x1 ∂ x1 ∂ x1 ∂ x2 ∂ x1 ∂ xn
∂ 2 Fi ∂ 2 Fi ∂ 2 Fi
h1 + h2 + · · · + hn ,
∂ x2 ∂ x1 ∂ x2 ∂ x2 ∂ x2 ∂ xn
⎛ ⎞
h1
∂ 2 Fi ∂ 2 Fi ∂ 2 Fi ⎜ h2 ⎟
⎜ ⎟
..., h1 + h2 + · · · + hn ⎜ .. ⎟
∂ xn ∂ x1 ∂ xn ∂ x2 ∂ xn ∂ xn ⎝ . ⎠
hn
3.4 Global Control of a Class of Continuous-Time Polynomial Chaotic Systems 87
⎡ ⎛ ⎞
h1
⎢ ∂ 2F ∂ 2 Fi ∂ 2 Fi ⎜ h2 ⎟
⎢ i ⎜ ⎟
=⎢ , ,..., ⎜ .. ⎟ ,
⎣ ∂ x1 ∂ x1 ∂ x1 ∂ x2 ∂ x1 ∂ xn ⎝ . ⎠
hn
⎛ ⎞⎤ ⎛ ⎞
h1 h1
⎜ ⎟ ⎥ ⎜ ⎟
2
∂ Fi 2
∂ Fi 2
∂ Fi ⎜ 2 ⎟⎥ ⎜ h 2 ⎟
h
··· , , ,..., ⎜ .. ⎟⎥ ⎜ .. ⎟ .
∂ xn ∂ x1 ∂ xn ∂ x2 ∂ xn ∂ xn ⎝ . ⎠ ⎦ ⎝ . ⎠
hn hn
∂ 3 Fi
Analogously, we get the expressions of h j hk hl , . . . .
∂ x j ∂ xk ∂ xl
To entrain the solution of system (3.24) to the target orbit g(t), we require that
the condition
lim (x(t) − g(t)) = 0
t→∞
holds. This will require that limt→∞ u(t) = 0 holds. So, we set
m
1 ∂ k F(g,t)
D(g, x,t) = ∑ k! ∂ gk
(x(t) − g(t))k−1 − A, (3.29)
k=1
where A = (ai j ) is an n × n constant matrix with all its eigenvalues having negative
real parts. Substituting (3.29) into (3.28), we get a linear differential equation with
respect to u(t):
u̇(t) = Au(t). (3.30)
The zero solution of this system is asymptotically stable, which implies that the
solution of (3.27) is asymptotically stable. According to (3.20), the definition of
BE(g), we know that BE(g) is not empty. Since the stability of u(t) is independent
of initial conditions, BE(g) is global, which means that, for any starting point x0 ,
the relation limt→∞ x(t) − g(t) = 0 always holds.
By summarizing the above analysis, we obtain the following theorem.
Theorem 3.2 ([5]). Suppose that g(t) ∈ Cr (r ≥ 1) and x(t) is the solution of system
(3.27). Then, there exists an OPNCL controller K(g, x,t), which is nonlinear with
respect to x(t), such that the basin of entrainment (3.20) is not empty. Furthermore,
by this control, the entrainment is global.
In the rest of this section, we will study the ability of entrainment with respect to
controllers of open-loop, linear open-loop, OPCL, and OPNCL.
Consider the controlled system (3.27). For convenience, we suppose that x(t) =
g(t) + u(t), where g(t) is the target orbit, and set S(t) = 1.
(i) K(g, x,t) is the Hübler open-loop action. In this case
where A is a constant matrix with all its eigenvalues having negative real parts.
Substituting the above equation into (3.27), we have
∂ F(g,t)
C(g,t) = −B
∂g
and B = (bi j )n×n is a constant matrix with all its eigenvalues having negative
real parts. Substituting (3.34) into (3.27), we have
m
1 ∂ k F(g,t) k
u̇ = Bu + ∑ u. (3.35)
k=2 k! ∂ gk
3.4 Global Control of a Class of Continuous-Time Polynomial Chaotic Systems 89
The judgement of stability of u = 0 for system (3.35) is easier than that of (3.32)
because the linear part of (3.35) is a constant matrix. When m = 2, system (3.25)
has the form
n n n
ẋi = ai (t) + ∑ bi j (t)x j + ∑ ∑ c jk (t)x j xk
(i)
(i = 1, 2, . . . , n). (3.36)
j=1 j=1 k=1
Adding OPCL action to (3.36) and transforming it into the equation about u, we
get
n n n
∑ bi j (t)u j + ∑ ∑ c jk (t)u j uk
(i)
u̇i = (i = 1, 2, . . . , n). (3.37)
j=1 j=1 k=1
It is clear that (3.37) is independent of the target orbit g(t).1 The stability of
u = 0 of (3.37) can be discussed by the Lyapunov direct method. However,
when m ≥ 3, the determination of the basin of entrainment is more complicated
than the case of m = 2, because the right-hand side of (3.35) may include g(t)
as its parameters.
From the above discussion and Theorem 3.2 we can find the superiority of the OP-
NCL method. First, the OPNCL method is independent of the target orbit g(t); sec-
ond, the basin of entrainment of the OPNCL method is global; that is to say, the
starting point may be arbitrarily selected in phase space.
3.4.2 Simulations
For validating the effectiveness of the OPNCL method, we take Chua’s circuit as an
example [1]. The diagram of Chua’s circuit is very simple and is shown in Fig. 3.3.
Chua’s circuit is composed of an inductor L, two capacitors C1 and C2 , a linear
resistor G, and a nonlinear resistor g. The system equations of Chua’s circuit are as
follows: ⎧
⎪ dVC1
⎪
⎪ C1 dt = G(VC2 − VC1 ) − g(VC1 ),
⎪
⎪
⎪
⎨
dVC2
C2 = G(VC1 − VC2 ) + iL , (3.38)
⎪
⎪ dt
⎪
⎪
⎪
⎪
⎩ L diL = −V .
C2
dt
Here, VC1 and VC2 are voltages of C1 and C2 , respectively, iL is the current through
the inductor L, and the nonlinear resistor g has the following feature:
1
g(VC1 ) = m0VC1 + (m1 − m0 )(|VC1 + 1| − |VC1 − 1|),
2
1 Both the Rössler system and the Lorenz system belong to this case whose linear parts are con-
stants.
90 3 Entrainment and Migration Control of Chaotic Systems
G iR
L
V2 VR
V1
C2 C1
iL
g
C2 1 C2 C2
p= , r= −1 a0 , q = − 2 ,
C1 G C1 LG
and still use the symbol t to denote the transformed time τ , then system (3.38)
becomes a dimensionless one:
⎧
⎨ ẋ1 = px2 + r(x1 − 2x31 ),
ẋ = x1 − x2 + x3 , (3.39)
⎩ 2
ẋ3 = qx2 .
When p = 10, r = 10/7, and q = −100/7, system (3.39) has a chaotic attractor,
which is shown in Fig. 3.4. Suppose that we want to steer the solution of system
(3.39) to a target g(t) = (g1 (t), g2 (t), g3 (t))T . Adding an OPCL control action to
(3.39), we get a group of equations about u as follows:
3.4 Global Control of a Class of Continuous-Time Polynomial Chaotic Systems 91
0.2
0.15
0.1
0.05
0
x3
−0.05
−0.1
−0.15
−0.2
1
0.5 1.5
1
0 0.5
0
−0.5 −0.5
−1
x2 −1 −1.5
x1
3 3 3 3 3 3
1 ∂ 2 Fi (g,t) 1 ∂ 3 Fi (g,t)
u̇i = ∑ bi j u j + 2 ∑ ∑ ∂ g j ∂ g k
u j uk +
6 ∑ ∑ ∑ ∂ g j ∂ gk ∂ gl u j uk ul
j=1 j=1 k=1 j=1 k=1 l=1
(i = 1, 2, 3). (3.40)
where (bi j )3×3 is a diagonal matrix with all-negative elements. According to the
notation used in [4], the first equation of (3.40) can be written as
where a = −b11, b(t) = 6rg1 (t), and c = 2r. When b2 (t) < 4ac, the BE(g) of Chua’s
circuit (3.39) is global. When b2 (t) > 4ac, the cases of ġ1 (t) = 0 and ġ1 = 0 should
be considered, respectively, in order to determine the BE(g). However, for some
target g(t), it becomes more difficult to determine the basin of entrainment because
of the uncertainty on the sign of b2 (t) − 4ac [4]. For example, we choose the target
as follows:
g(t) = (g1 , g2 , g3 )T = (1 + 0.1 sint,t cost, −10)T .
92 3 Entrainment and Migration Control of Chaotic Systems
Substituting the above equation into (3.39) and setting x(t) = g(t) + u(t), we get
Choose bii = −1 (i = 1, 2, 3). It is easy to see that u will tend to zero quickly
without determining the basin of entrainment. The effects of methods of open-loop,
linear closed-loop, OPCL, and OPNCL are shown in Figs. 3.5–3.8, in which the
controllers, ui , are those of (3.31), (3.32), (3.35), and (3.42). From these figures
we can see that it is impossible for the methods of open-loop and linear feedback
0
u1
−2
−4
0 2 4 6 8 10 12 14 16 18 20
5
u2
−5
0 2 4 6 8 10 12 14 16 18 20
10
5
u3
−5
−10
0 2 4 6 8 10 12 14 16 18 20
t (sec)
0
u1
−2
−4
0 2 4 6 8 10 12 14 16 18 20
20
10
u2
−10
−20
0 2 4 6 8 10 12 14 16 18 20
40
20
u3
−20
0 2 4 6 8 10 12 14 16 18 20
t (sec)
0.5
u1
0
−0.5
0 2 4 6 8 10 12 14 16 18 20
0.5
u2
−0.5
−1
0 2 4 6 8 10 12 14 16 18 20
10
5
u3
−5
−10
0 2 4 6 8 10 12 14 16 18 20
t (sec)
0.3
0.2
0.1
x3
−0.1
−0.2
1
0.5 1.5
1
0 0.5
0
−0.5 −0.5
−1
x2 −1 −1.5
x1
Fig. 3.9 The solution of Chua’s circuit system approaches the target orbit quickly under the OP-
NCL action
3.5 Global Control of a Class of Discrete-Time Systems 95
to achieve the goal. The OPCL method can approximately achieve the aim but the
error curves are piecewise convergent. However, the OPNCL method can achieve
the goal and the basin of entrainment is global. We choose another target g(t) =
(0.5 sint, 0.3 + e−t , 0.8 cost)T . Under the action of the OPNCL method, the solution
of Chua’s circuit system is entrained to the target orbit quickly, which is shown in
Fig. 3.9.
In this section, we will study the OPNCL control for a class of discrete-time systems.
For the given mapping
by adding a Hübler open-loop control action on the right-hand side of (3.43), we get
where the {gk } are some target to be reached and Sk is the switching function (Sk =
0, k < 0; 0 ≤ Sk ≤ 1, k ≥ 0). This kind of control (when Sk = 1) was first proposed
by Hübler and was used in the study of the logistic map and nonlinear damped
oscillations. Jackson and Hübler studied the global control scale of the logistic map
and proposed the concept of the basin of entrainment, denoted by BE({gk }); that is,
x0 ∈ BE({gk }),
lim xk − gk = 0.
t→∞
where
1, i = j;
δi j =
j.
0, i =
96 3 Entrainment and Migration Control of Chaotic Systems
If the target orbit is not a point, then it is not easy to determine BE({gk }).
By adding a linear closed-loop control action to the right-hand side of system
(3.43), we have
xk+1 = F(xk ) + Dk (gk − xk ),
where Dk is a suitable selected matrix. For chaotic systems, the target orbit is usually
required to be an unstable periodic solution embedded in the chaotic attractor, which
satisfies
gk+1 = F(gk ).
Now suppose that there is a discrete-time chaotic system
(i) (1) (2) (p)
xk+1 = Fi xk , xk , . . . , xk (i = 1, 2, . . . , p), (3.45)
(1) (2)
where Fi (xk ) (x%k ∈ R p , i = 1, 2,
&. . . , p) is an mth-order polynomial about xk , xk , . . . ,
(p) (1) (p)
xk . If {gk } = g1 , . . . , gk is the target orbit to be achieved for (3.45), by adding
a combination control, K(gk , xk ), composed of a Hübler open-loop and a nonlinear
feedback, to the right-hand side of (3.45), we get
(i)
xk+1 = Fi (xk ) + Sk Ki (gk , xk ) (i = 1, 2, . . . , p), (3.46)
where
p
Ki (gk , xk ) = gk+1 − Fi (gk ) − ∑ Ci j xk − gk ,
(i) ( j) ( j)
j=1
and ai j is the element of a suitably selected matrix A. Thus, when Sk = 1 the set
BE({gk }) = {x0 : limn→∞ xn − gn = 0} is not empty, but is global. That is to say,
from any initial point the solution of system (3.45) can be entrained to the target
orbit {gk } by adding an OPNCL control action.
In fact, when Sk = 1, setting xk = uk + gk and noting that F(xk ) is an mth-order
polynomial, we know that the expansion of Fi (gk + uk ) is finite. Substituting xk =
uk + gk into (3.46), we get
3.5 Global Control of a Class of Discrete-Time Systems 97
(i) (i)
uk+1 + gk+1 = Fi (uk + gk ) + Ki (gk , xk )
p
1 ∂ Fi (gk )
∑
( j)
= Fi (gk ) + ( j)
uk + · · ·
1! j=1 ∂ gk
p
1 ∂ m Fi (gk ) ( j)
∑
(l)
+ ( j) (l) k
u · · · uk + Ki (gk , xk ).
m! j,...,l=1 ∂ gk · · · ∂ gk
Substituting the expression of Ki (gk , xk ) into the above equation, we get a group of
equations with respect to uk :
p
∑ ai j uk
(i) ( j)
uk+1 = . (3.47)
j=1
There are many ways to select the matrix A. For simplicity, we can choose ai j =
(i)
0 (i = j) and |aii | < 1. By this choice, the uk (i = 1, . . . , p) of (3.47) will be conver-
gent to 0, which means that the solution of system (3.45) is entrained to the target
orbit {gk } after Sk = 1.
3.5.2 Simulations
To present the effectiveness of the OPNCL control action we take the logistic map,
xk+1 = λ xk (1 − xk ), (3.48)
as the first example. The bifurcation diagram2 (it displays how the periodic points
of the logistic map vary with λ ) and the Lyapunov exponent of (3.48) are shown in
Fig. 3.10 and Fig. 3.11. When λ = 3.7 the logistic map is in chaotic state. Set the
target orbit as gk = 0.65 + 0.15(−1)k and a11 = −0.95. When k = 2000, Sk = 1. The
control effect is shown in Fig. 3.12.
The second example is to consider the Hénon map:
xk+1 = yk + 1 − ax2k ,
(3.49)
yk+1 = bxk .
ẋ = f (x, α )
with a parameter α ∈ R. As α changes, the limit sets of the system also change. Typically, a small
change in α produces small quantitative changes in a limit set. For instance, perturbing α could
change the position of a limit set slightly and, if the limit set is not an equilibrium point, its shape
or size could also change. There is also the possibility that a small change in α can cause a limit
set to undergo a qualitative change. Such a qualitative change is called a bifurcation and the value
of α at which a bifurcation occurs is called a bifurcation value.
98 3 Entrainment and Migration Control of Chaotic Systems
Fig. 3.12 The logistic map under the action of OPNCL control
0.5
0
Lyapunov Exponents
−0.5
−1
−1.5
−2
0 0.2 0.4 0.6 0.8 1 1.2 1.4
a
1
x(k)
−1
−2
0 500 1000 1500 2000 2500 3000 3500 4000
k
1.5
1
y(k)
0.5
−0.5
0 500 1000 1500 2000 2500 3000 3500 4000
k
Fig. 3.15 The Hénon map under the action of OPNCL control
3.6 Summary 101
1.4
1.2
0.8
0.6
y(k)
0.4
0.2
−0.2
−0.4
−1.5 −1 −0.5 0 0.5 1 1.5 2 2.5
x(k)
Fig. 3.16 The phase diagram of Hénon map under the action of OPNCL control
When a = 1.75 and b = 0.3 the Hénon map is in chaotic state. The bifurcation
diagram and Lyapunov exponents are shown in Fig. 3.13 and Fig. 3.14. The target
orbit is chosen as
(1) (2)
gk = 2 + 0.2 sin(0.02kπ ), gk = 1 + 0.25 cos(0.04kπ ).
When k = 2000, Sk = 1, the OPNCL controller begins to work. The control effect is
shown in Fig. 3.15 and Fig. 3.16, respectively.
Remark 3.2. In theory, the OPNCL method can entrain the solution of the controlled
system to any target orbit, even if the target is unbounded. But, in practice, when the
target orbit is unbounded it is impossible to design the controller since the output
energy of the controller is infinite.
3.6 Summary
In this chapter, we studied methods for chaos control, entrainment, and migration.
At the beginning, we introduced an open-loop entrainment method, and then a mod-
ified entrainment method, the OPCL control method, was introduced. Based on the
OPCL method, we proposed the OPNCL control method. It can entrain the solu-
tion of the controlled system to any bounded target orbit. We applied the OPNCL
102 3 Entrainment and Migration Control of Chaotic Systems
References
1. Chua LO, Komura M, Matsumoto T (1986) The double scroll family: parts I and II. IEEE
Trans Circuits Syst I 33:1073–1118
2. Hu G, Xiao J, Zheng Z (2000) Chaos Control. Shanghai Scientific and Technological Educa-
tion Publishing House, Shanghai
3. Hübler A (1989) Adaptive control of chaotic systems. Helv Phys Acta 62:343–346
4. Jackson EA, Grosu I (1995) An open-plus-closed-loop (OPCL) control of complex dynamic
systems. Physica D 85:1–9
5. Wang J, Tian P, Chen C (2000) Global control of continuous polynomial chaotic systems.
Control Decis 15:309–313 (in Chinese)
Chapter 4
Feedback Control of Chaotic Systems
Abstract The parameters of a chaotic system play an important role, whose vari-
ation will lead to completely different dynamics. Sometimes, we want to design a
controller which is optimal in a certain sense. In this chapter, we focus on two kinds
of methods of suppressing chaos: the adaptive control method and the inverse op-
timal control method. We develop two new methods of parametric adaptive control
for a class of discrete-time chaotic systems and a class of continuous-time chaotic
systems with multiple parameters, respectively. The systems are first assumed to
be linear with respect to parameters. Then, systems with nonlinear distributed pa-
rameters and uncertain noise are considered. Finally, we apply the inverse optimal
control method to stabilize a four-dimensional chaotic system.
4.1 Introduction
In recent years, many conventional and novel control methods have been applied to
the suppression of chaotic behaviors, such as adaptive control [14], fuzzy control
[8], linear feedback control [13], impulsive control [3], time-delay feedback control
[9], optimal control [6], etc.
The parameters of a chaotic system play an important role, whose variation will
lead to completely different dynamics. The dynamical mechanism of the system
may change qualitatively during a small change of parameters. Such abrupt tran-
sitions between dynamical regimes are usually modeled in terms of bifurcations,
catastrophes, and crises [2, 4]. In other cases the transitions are not unwanted, but
are useful for switching between related phenomena (bistability and hysteresis). Re-
cently, there has been much work done in the area of parametric adaptive control of
chaotic systems, most of which usually assumed the parameters to be constants or
to be varying very slowly compared to the dynamical time scales of the systems. In
real applications the parameters are not always linearly distributed. Sometimes, we
want to design a controller which is optimal in a certain sense. However, it is a dif-
ficult task to solve the Hamilton–Jacobi–Bellman (HJB) equation when one designs
103
104 4 Feedback Control of Chaotic Systems
an optimal controller directly along the conventional route. To solve the aforemen-
tioned problems, in this chapter, we focus on two kinds of methods of suppressing
chaos: the adaptive control method and the inverse optimal control method. We will
develop two new methods of parametric adaptive control for a class of discrete-
time chaotic systems and a class of continuous-time chaotic systems with multiple
parameters, respectively. The systems are first assumed to be linear with respect
to parameters. Then, systems with nonlinear distributed parameters and uncertain
noise are considered.
Finally, we will apply the inverse optimal control method to stabilize a four-
dimensional chaotic system. In the 1990s, Freeman and Kokotovic [1] proposed
the inverse optimal control method, in which an optimal controller was constructed
through a Lyapunov function. As long as we can find such a Lyapunov function, it
can be inferred that the designed controller is optimal with a specified performance
index. The merit of this approach is that we do not need to solve the complicated
HJB equation.
where x(k) ∈ Rn is the state vector, μ (k) = (μ1 (k), . . . , μm (k))T is the parameter
vector which determines the nature of dynamics and is time dependent under the
influence of both disturbances and control, and F : Rn × Rm → Rn is a nonlinear
map. First, suppose that F is linearly parameterized. Many typical discrete-time
chaotic systems belong to this case, such as the Hénon map and the logistic map.
Then, (4.1) can be rewritten as
m
x(k + 1) = A(x(k)) + ∑ Bl (x(k))μl (k)
l=1
or
m
x p (k + 1) = A p (x(k)) + ∑ B pq (x(k))μq (k), p = 1, 2, . . . , n,
q=1
In light of the basic idea of adaptive control, we propose a new parametric adaptive
algorithm as follows:
Therefore, we can let the dynamics of the adjustable system redirect to the reference
model to predict the dynamics of y(k + 1). Let FM = F, and denote Gl (z) = ∑nk=1 zk
(l = 1, 2, . . . , m), where z = (z1 , z2 , . . . , zn )T ∈ Rn . Then, combining (4.1), (4.3), and
(4.4) yields
⎧ m
⎪
⎪ x p (k + 1) = A p (x(k)) + ∑ B pq (x(k))μq (k),
⎪
⎪
⎪
⎪ q=1
⎨ m
y p (k + 1) = A p (x(k)) + ∑ B pq (x(k))μqg , (4.5)
⎪
⎪ q=1
⎪
⎪ n m
⎪
⎪
⎩ μl (k + 1) = μl (k) + αl (x(k)) ∑ ∑ B pq (x(k))(μq (k)−μqg ) ,
p=1 q=1
Theorem 4.1 ([11]). For the parametric adaptive control equations (4.5), if we
choose
αl (x(k)) = −K(x(k))Wl (x(k)), (4.6)
106 4 Feedback Control of Chaotic Systems
n
where Wl (x(k)) = ∑ B pl (x(k)),
p=1
' (−1
m
K(x(k)) = 2 d + ∑ Wl2 (x(k)) , (4.7)
l=1
and d > 0 is an arbitrary constant, then the parameters of (4.5) will converge asymp-
totically to the desired values.
Δ V (k) = V (k + 1) − V(k)
m m
= ∑ (μl (k + 1) − μlg)2 − ∑ (μl (k) − μlg)2 . (4.9)
l=1 l=1
Substituting (4.5) into (4.9), and denoting μ j (k)− μ jg = μk, jg (1 ≤ j ≤ m), we derive
' ' ((2
m n m m
Δ V (k) = ∑ μk,lg + αl (x(k)) ∑ ∑ B pq(x(k))μk,qg − ∑ μk,lg
2
. (4.10)
l=1 p=1 q=1 l=1
It is clear that system (4.5) is asymptotically stable with respect to μq (k). In other
words, the parameters μq (k) will asymptotically converge to the desired values μqg ,
q = 1, 2, . . . , m, respectively.
Next, we will discuss the case that the system parameters are nonlinearly dis-
tributed. Moreover, the disturbances of the uncertain noise are taken into account.
Consider a class of discrete-time nonlinear systems as follows:
where Δ W (k) is the uncertain noise. Consider the same reference model as in (4.4),
i.e.,
y(k + 1) = FM (x(k), μg ).
Furthermore, assume that F = FM , and the adaptive control algorithm is designed
the same as (4.3), i.e.,
where
e(k + 1) = x(k + 1) − y(k + 1)
= F(x(k), μ (k)) − F(x(k), μg ) + Δ W (k).
In the above, α (x(k)) is defined as
' (−1
m
αl (x(k)) = −2Rl (x(k)) d + ∑ R2l (x(k)) , (4.12)
l=1
n )
where Rl (x(k)) = ∑ ∂ Fi (x(k), μ (k)) ∂ μlg and d > 0 is an arbitrary positive con-
i=1
stant.
Using Taylor expansion of F(x(k), μ (k)) − F(x(k), μg ) around the neighborhood
of μ (k) = μg , we can derive
' (
n m
∂ Fi
μl (k + 1) = μl (k) + αl (x(k)) ∑ ∑ (μ j (k) − μ jg )
i=1 j=1 ∂ μ jg
where
n
h2l (x(k), μ (k) − μg ) = ∑ (αl (x(k))h1i (x(k), μ (k) − μg ) + Δ Wi )
i=1
108 4 Feedback Control of Chaotic Systems
and h1i (x(k), μ (k) − μg ) = o(μ (k) − μg ). Now, consider the stability of solutions of
the linear part of (4.13), i.e.,
' (
n m
∂ Fi
μl (k + 1) = μl (k) + αl (x(k)) ∑ ∑ (μ j (k)−μ jg ) . (4.14)
i=1 j=1 ∂ μ jg
Δ V = V (k + 1) − V(k)
m m
= ∑ (μl (k + 1) − μlg)2 − ∑ (μl (k) − μlg)2 . (4.15)
l=1 l=1
≤ 0. (4.16)
From (4.16) we can infer that (4.14) is asymptotically stable. As for the nonlinear
equation given in (4.13), as long as the term h2l is small enough, it is stable, too.
4.2.3 Simulations
where p and q are real parameters. When p = 1.4 and q = 0.3, system (4.17) presents
chaotic behaviors. When p = 0.85 and q = 0.55, system (4.17) is in the normal state.
Suppose that the reference model is given as follows:
y1 (k + 1) = −0.85x21(k) + x2 (k) + 1,
y2 (k + 1) = 0.55x1(k).
4.2 Model-Reference Adaptive Control of a Class of Discrete-Time Chaotic Systems 109
1
x1 (k)
0
−1
−2
0 500 1000 1500
k
1.5
1
x2 (k)
0.5
−0.5
−1
0 500 1000 1500
k
1.6
1.4
p(k)
1.2
0.8
0 500 1000 1500
k
0.8
0.6
q(k)
0.4
0.2
0
0 500 1000 1500
k
Fig. 4.2 The parameter change process of controlling the Hénon map (i)
110 4 Feedback Control of Chaotic Systems
1
x1 (k)
0
−1
−2
0 500 1000 1500
k
1
0.5
x2 (k)
−0.5
−1
0 500 1000 1500
k
1.8
1.6
1.4
p(k)
1.2
0.8
0 500 1000 1500
k
0.8
0.6
q(k)
0.4
0.2
0
0 500 1000 1500
k
Fig. 4.4 The parameter change process of controlling the Hénon map (ii)
4.3 Model-Reference Adaptive Control of a Class of Continuous-Time Chaotic Systems 111
Assume that the parameters p and q of system (4.17) deviate from the desired
values (pg = 0.85, qg = 0.55) by a sudden disturbance. The parameters p and q
change to p = 1.4 and q = 0.3 for k = 501, 502, . . ., 850. The disturbed system (4.17)
presents chaotic behavior with the changed parameters. The control goal is to make
the changed parameters return to the desired values such that the disturbed system
(4.17) returns to the normal state. Apply the parametric adaptive control law (4.5),
i.e.,
where Q = (d + x41 (k) + x21 (k))−1 , d = 2, and k = 851, . . ., 1500. The simulation
results are given in Figs. 4.1 and 4.2. From the two figures, we can see that the
disturbed parameters converge to the desired values rapidly. This control method
can be applied to many practical discrete-time systems.
On the other hand, assume that the normal state of system (4.17) is chaotic. The
system presents nonchaotic behavior with external disturbances. Now the control
goal is to make the disturbed system return to the chaotic state. Still consider the
Hénon map, but the disturbed parameters are given as p = 0.85, q = 0.55, and k =
501, . . ., 850. Suppose that the reference model is given as follows:
y1 (k + 1) = −1.4x21(k) + x2 (k) + 1,
y2 (k + 1) = 0.3x1 (k).
Apply the parametric adaptive control law (4.5), where d = 2 and k = 851, . . . , 1500.
The simulation results are given in Figs. 4.3 and 4.4. From the two figures, we can
see that the disturbed parameters converge to the desired values rapidly.
ẋ = F(x,t, μ ), (4.18)
First, consider a reference model ẏ = FM (y,t, μ ), which has the same dynamics as
the system (4.19) at the desired value μ = μg , i.e.,
ẏ = FM (y,t, μg ). (4.20)
μ̇ = β (x,t)G(ė), (4.21)
put x(t) of system (4.19) to system (4.20). We then derive the coupling equation
between the system and the model:
y = FM (x,t, μg ).
and
n
Gl (ė) = ∑ ėk , (4.23)
k=1
where l = 1, 2, . . . , m.
Proof. Suppose that F(x,t, μ ) = φ (x,t) + ∑m l=1 ψl (x,t)μl , in which φ , ψl : R ×
n
R → Rn , φ (x,t) = (φ1 (x,t), φ2 (x,t), . . . , φn (x,t))T , ψl (x,t) = (ψl1 (x,t), ψl2 (x,t), . . . ,
ψln (x,t))T , and ψlk (x,t) (l = 1, 2, . . . , m; k = 1, 2, · · · , n) are uniformly continuous
real functions. Then, the adaptive control algorithm (4.22) can be transformed to the
following:
⎧ m
⎪
⎪ ẋk = φk (x,t) + ∑ ψlk (x,t)μl ,
⎪
⎪
⎪
⎨ l=1
m
ẏk = φk (x,t) + ∑ ψlk (x,t)μlg , (4.24)
⎪
⎪ l=1
⎪
⎪ n m n
⎪
⎩ μ̇l = − ∑ ψlk (x,t) ∑ ∑ ψ pq (x,t) (μ p − μ pg) .
k=1 p=1 q=1
1 m
V (t) = ∑ (μl − μlg)2 ,
2 l=1
lim V̇ (t) = 0
t→∞
or
m n
lim
x→∞
∑ ∑ ψlk (x,t)(μl − μlg) = 0.
l=1 k=1
Therefore, we have proven that the parametric adaptive control equations (4.24)
are efficient. In other words, the parameter μl is asymptotically stable, which
achieves the stability match between the adjustable system and the reference model.
Next, we will discuss the case in which the system is nonlinearly parameterized.
Here, we cannot adopt the same adaptive control method as (4.24). Consider the
4.3 Model-Reference Adaptive Control of a Class of Continuous-Time Chaotic Systems 115
following equations: ⎧
⎨ ẋk = Fk (x,t, μ ) + Δ Wk (t),
ẏ = FMk (x,t, μg ), (4.25)
⎩ k
μ̇l = βl (x,t)Gl (ė),
where l = 1, 2, . . . , m and Δ Wk (t) (k = 1, 2, . . . , n) are disturbed terms of external un-
certain noise. Suppose that there is no modeling error, i.e., Fk = FMk (k = 1, 2, . . . , n),
and Gl is defined by (4.23). Then, the third equation of (4.25) can be rewritten as
' (
n
μ̇l = βl (x,t) ∑ (Fk (x,t, μ ) − Fk (x,t, μg )) + Δ Wk (t) . (4.26)
k=1
where ∂ Fk (x,t, μ )/∂ μlg = (∂ Fk (x,t, μ )/∂ μ )|μ =μg and h1k (x,t, μ − μg ) is the higher-
order infinitesimal term of μ − μg . Substituting (4.27) into (4.26), we get
-
n m
∂ Fk (x,t, μ )
μ̇l = βl (x,t) ∑ ∑ (μl − μlg ) + h1k (x,t, μ − μg ) + Δ Wk (t)
k=1 l=1 ∂ μlg
m n
∂ Fk (x,t, μ )
= βl (x,t) ∑ ∑ (μl − μlg ) + h2(x,t, μ − μg), (4.28)
l=1 k=1 ∂ μlg
Since the term h2 (x,t, μ − μg ) includes the information of the unknown distur-
bance Δ Wk (t), the parametric adaptive law (4.30) is not feasible. Therefore, let us
consider the following parametric adaptive law instead of (4.30):
' (' (
n m n
∂ Fk (x,t, μ ) ∂ Fq (x,t, μ )
μ̇l = − ∑ ∑ ∑ ∂ μ pg (μ p − μ pg) . (4.31)
k=1 ∂ μlg p=1 q=1
116 4 Feedback Control of Chaotic Systems
Now we analyze the stability of the solution of (4.31). We still choose a Lyapunov
function as
1 m
V (t) = ∑ (μl − μlg )2 ,
2 l=1
and then we get
m
V̇ (t) = ∑ μl − μlg μ̇l
l=1
' (' (-
m n
∂ Fk (x,t, μ ) m n
∂ Fq (x,t, μ )
= ∑ μl − μlg − ∑ ∑ ∑ ∂ μ pg (μ p − μ pg)
l=1 k=1 ∂ μlg p=1 q=1
' (
m n
∂ Fk (x,t, μ ) n ∂ Fq (x,t, μ )
= ∑ (μl − μlg ) − ∑ ∑ ∂ μlg (μl − μlg)
l=1 k=1 ∂ μlg q=1
' (' (-
n
∂ Fk (x,t, μ ) m n
∂ Fq (x,t, μ )
− ∑ μl − μlg ∑ ∑ ∂ μlg (μ p − μ pg)
k=1 ∂ μlg p=l q=1
⎧' (2
m ⎨ n
∂ Fk (x,t, μ ) 2
=− ∑ ∑ μl − μlg
l=1
⎩ k=1 ∂ μlg
' ( -
n
∂ Fk (x,t, μ ) m n ∂ Fq (x,t, μ )
+ ∑ μl − μlg ∑ ∑ (μ p − μ pg)
k=1 ∂ μlg p=l q=1
∂ μlg
-2
m n
∂ Fk (x,t, μ )
=− ∑ ∑ μl − μlg
l=1 k=1 ∂ μlg
≤ 0.
Similar to the discussion of V (t) earlier, one obtains that limt→∞ V̇ (t) = 0 and
We have proven that (4.31) is asymptotically stable. For the nonlinear equation
given in (4.30), if the term h2 (x,t, μ − μg ) is sufficiently small, then it is also stable.
4.3.3 Simulations
2.5
1.5
0.5
x2
−0.5
−1
−1.5
−2
−2.5
−2 −1.5 −1 −0.5 0 0.5 1 1.5 2
x1
Fig. 4.5 The chaotic attractor of the Duffing equation described by (4.32)
where p1 , p2 , and q are system parameters. When p1 = 0.4, p2 = −1.1, q = 1.8, and
the initial state is given as x(0) = (1, 1)T , system (4.32) presents chaotic behavior,
which is illustrated in Fig. 4.5.
Suppose that the reference model is
ẋ1 = x2 ,
ẋ2 = 0.8x1 − x31 − 0.7x2 + 0.6 cos(1.8t),
i.e., the desired values of the system parameters are p1g = 0.7, p2g = −0.8, and
qg = 0.6. The reference model corresponds to a normal state. Suppose that the pa-
rameters are disturbed to p1 = 0.4, p2 = −1.1, and q = 1.8, which make system
(4.32) chaotic. Our goal is to make the disturbed system return to the normal state.
Applying the parametric adaptive control law (4.24), we get
⎧
⎨ ṗ1 = −(−x2 ) [−x2 (p1 − 0.7) − x1(p2 + 0.8) + (q − 0.6) cos(1.8t)] ,
ṗ2 = −(−x1 ) [−x2 (p1 − 0.7) − x1(p2 + 0.8) + (q − 0.6) cos(1.8t)] ,
⎩
q̇ = − cos(1.8t) [−x2 (p1 − 0.7) − x1(p2 + 0.8) + (q − 0.6) cos(1.8t)] .
The simulation results are shown in Figs. 4.6–4.10 which show that the disturbed
parameters converge quickly to their desired values after the transient. Figs. 4.6 and
4.7 show the moving trajectory of the undisturbed normal state, disturbed chaotic
state, and controlled desired state of system (4.32). From Figs. 4.8–4.10 we can
see that parametric adaptive control law (4.24) is very powerful in stabilizing the
system. The control method can be applied to electric power systems in a chaotic
118 4 Feedback Control of Chaotic Systems
1.5
0.5
x1
−0.5
−1
−1.5
−2
−60 −40 −20 0 20 40 60 80
t (sec)
Fig. 4.6 The changed state response curve of x1 (t), which is disturbed to be chaotic at t = −30,
and approaches the desired state when the controller is applied at t = 0
state after the external disturbance, which automatically adjusts to a normal state
and guarantees the security of the electric power network.
Now, we assume that an external disturbance Δ W (t) acts on the Duffing equation
when t = 0. The parametric adaptive law remains unchanged. Choosing Δ W (t) as
a random background noise and maxt {|Δ W (t)|} = 0.1, the adaptive control law is
remarkably robust (see Figs. 4.11–4.15).
Next, consider the Rössler system:
⎧
⎨ ẋ1 = −x2 − x3 ,
ẋ2 = x1 + μ1 x2 , (4.33)
⎩
ẋ3 = 0.2 + μ2x3 (x1 − 5.7),
where μ1 and μ2 are system parameters. When the desired parameter values are
μ1g = 0.2 and μ2g = 1, system (4.33) corresponds to a chaotic state (see Fig. 4.16).
Suppose that the reference model of system (4.33) is chosen as
⎧
⎨ ẋ1 = −x2 − x3 ,
ẋ2 = x1 + 0.2x2,
⎩
ẋ3 = 0.2 + x3(x1 − 5.7).
Suppose that the parameters μ1 and μ2 of system (4.33) are disturbed from the target
values (μ1g = 0.2, μ2g = 1), and set to μ1 = −0.8, μ2 = 2.8, which corresponds to a
nonchaotic state. The control automatically changes the parameters until the system
4.3 Model-Reference Adaptive Control of a Class of Continuous-Time Chaotic Systems 119
2.5
1.5
0.5
x2
−0.5
−1
−1.5
−2
−2.5
−60 −40 −20 0 20 40 60 80
t (sec)
Fig. 4.7 The changed state response curve of x2 (t), which is disturbed to be chaotic at t = −30,
and approaches the desired state when the controller is applied at t = 0
0.85
0.8
0.75
0.7
0.65
p1
0.6
0.55
0.5
0.45
0.4
0.35
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.8 Parameter p1 (disturbed initial value p1 (0) = 0.4) approaches the desired value p1g = 0.7
120 4 Feedback Control of Chaotic Systems
−0.6
−0.7
−0.8
−0.9
p2
−1
−1.1
−1.2
−1.3
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.9 Parameter p2 (disturbed initial value p2 (0) = −1.1) approaches the desired value p2g =
−0.8
1.8
1.6
1.4
1.2
q
0.8
0.6
0.4
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.10 Parameter q (disturbed initial value q(0) = 1.8) approaches the desired value qg = 0.6
4.3 Model-Reference Adaptive Control of a Class of Continuous-Time Chaotic Systems 121
1.5
0.5
x1
−0.5
−1
−1.5
−2
−60 −40 −20 0 20 40 60 80
t (sec)
Fig. 4.11 External disturbance on the Duffing equation. The changed state response curve of x1 (t),
which is disturbed to be chaotic at t = −30, and approaches the desired state when the controller
is applied at t = 0
2.5
1.5
0.5
x2
−0.5
−1
−1.5
−2
−2.5
−60 −40 −20 0 20 40 60 80
t (sec)
Fig. 4.12 External disturbance on the Duffing equation. The changed state response curve of x2 (t),
which is disturbed to be chaotic at t = −30, and approaches the desired state when the controller
is applied at t = 0
122 4 Feedback Control of Chaotic Systems
0.8
0.75
0.7
0.65
0.6
p1
0.55
0.5
0.45
0.4
0.35
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.13 External disturbance on the Duffing equation. Parameter p1 (disturbed initial value
p1 (0) = 0.4) approaches the desired value p1g = 0.7
−0.6
−0.7
−0.8
−0.9
p2
−1
−1.1
−1.2
−1.3
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.14 External disturbance on the Duffing equation. Parameter p2 (disturbed initial value
p2 (0) = −1.1) approaches the desired value p2g = −0.8
4.3 Model-Reference Adaptive Control of a Class of Continuous-Time Chaotic Systems 123
1.8
1.6
1.4
1.2
q
0.8
0.6
0.4
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.15 External disturbance on the Duffing equation. Parameter q (disturbed initial value q(0) =
1.8) approaches the desired value qg = 0.6
25
20
15
x3
10
0
10
15
0 10
5
−10 0
−5
x2 −20 −10 x1
1.2
0.8
0.6
0.4
μ1
0.2
−0.2
−0.4
−0.6
−0.8
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.17 Parameter μ1 (disturbed initial value μ1 (0) = −0.8) approaches the desired value μ1g =
0.2
2.5
2
μ2
1.5
0.5
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 4.18 Parameter μ2 (disturbed initial value μ2 (0) = 2.8) approaches the desired value μ2g = 1
4.4 Control of a Class of Chaotic Systems Based on Inverse Optimal Control 125
returns to the initial target state. Using the parametric adaptive law (4.24), we get
μ̇1 = −x2 [x2 (μ1 − 0.2) + x3(x1 − 5.7)(μ2 − 1)],
(4.34)
μ̇2 = −x3 (x1 − 5.7)[x2(μ1 − 0.2) + x3(x1 − 5.7)(μ2 − 1)].
The simulation results are shown in Figs. 4.17 and 4.18. The figures show that
the disturbed parameters converge quickly to the desired parameter values after the
transient, which indicates that the parametric adaptive control law (4.24) is very
powerful in stabilizing the system. This example makes it clear that the control
method can be applied to control a nonchaotic state to a chaotic state. For example, a
chaotic diode resonator should be kept in a chaotic state under external disturbances.
1 1
x11 = √ a (a + b + p) q, x21 = − √ a (a + b + p) q ,
2a 2a
1 1
x31 = −2a (a + b − p) q, x41 = − −2a (a + b − p) q ,
2a 2a
1 1
x51 = √ a (a + b − p) q, x61 = − √ a (a + b − p) q ,
2a 2a
1 1
x71 = −2a (a + b + p) q, x18 = − −2a (a + b + p) q ,
2a . 2a
q j 1 j 2 j c j
xi2 = ± i , x3 = ± ab x1 − adq, x4 = ± x3 ,
x1 q q
√
q = cd, p = a2 + 6ab + b2.
126 4 Feedback Control of Chaotic Systems
20
10
0
x3
−10
−20
20
10
0
x2
−10
−20 10 20
−10 0
−20 x1
Fig. 4.19 The projection of the chaotic attractor of system (4.35) in the space of x1 –x2 –x3 with
a = 30, b = 10, c = 37, and d = 10
The chaotic behavior of the system (4.35) is shown in Fig. 4.19, when the parameters
of the system (4.35) are chosen as a = 30, b = 10, c = 37, and d = 10 and the initial
condition of the system (4.35) is chosen as x(0) = (−10, 10, 10, 20)T .
then the zero equilibrium of the controlled four-dimensional chaotic system (4.36)
is globally asymptotically stable.
Proof. Construct a Lyapunov function as
1 2
V= x1 + 3x22 + x23 + x24 .
2
Then, the time derivative of V along the trajectory of system (4.36) is given as
follows:
where
⎧ ' (
2
⎪
⎨ (a + 3b) (a + 3b)2
L f V = −a x1 − x2 − cx23 − dx24 + 3b + x22 ,
2a 4a (4.39)
⎪
⎩
LgV = x2 .
which implies that V̇ < 0 for all x = 0, i.e., the controller (4.40) can globally asymp-
totically stabilize the closed-loop system (4.36) .
Based on the idea of inverse optimal control, in order to determine the value of
k0 , we define the following performance index:
128 4 Feedback Control of Chaotic Systems
% t &
J(u) = lim 2β V (x(t)) + l(x(τ )) +uT (x (τ )) R (x (τ )) u (x (τ )) dτ , (4.41)
t→∞ 0
where
l (x) = −2β L f V + β 2R−1 (x) (LgV )2 > 0 . (4.42)
Substituting (4.39) into (4.42), we can derive
2
(a + 3b)
l (x) = 2aβ x1 − x2 + 2cβ x23 + 2d β x24
2a
' ( ' (
(a + 3b)2 2 3 (a + 3b)2
− 2β 3b + x2 + β 3b + + k0 x22 . (4.43)
4a 4a
Therefore, the controller (4.40) is optimal with performance index (4.41). Substitut-
ing (4.44) into (4.40), we get the optimal controller (4.37).
4.4 Control of a Class of Chaotic Systems Based on Inverse Optimal Control 129
30
25
20
15
x3
10
0
30
20 30
10 20
0 10
x2 0
−10 −10 x1
−20 −20
Fig. 4.20 Globally asymptotically stable system with a = 30, b = 10, c = 37, and d = 10
10
5
x3
0
−5
−6
−4
0
−2
x2 0
2 x1
4
5 6
Fig. 4.21 The projection of the chaotic attractor of system (4.35) in the space of x1 –x2 –x3 with
a = 35, b = 10, c = 1, and d = 10
130 4 Feedback Control of Chaotic Systems
30
25
20
x3
15
10
0
30
20 30
10 20
0 10
x2 0
−10 −10 x1
−20 −20
Fig. 4.22 Globally asymptotically stable system with a = 35, b = 10, c = 1, and d = 10
4.4.3 Simulations
4.5 Summary
References
1. Freeman RA, Kokotovic PV (1996) Inverse optimality in robust stabilization. J Control Optim
34:1365–1391
2. Guckenheimer J, Holmes P (1983) Nonlinear Oscillations, Dynamical Systems and Bifurca-
tions of vector Fields. Spinger, New York
3. Khadra A, Liu XZ, Shen XM (2005) Impulsive control and synchronization of spatiotemporal
chaos. Chaos Solitons Fractals 26:615–636
4. Poston T, Steward I (1978) Catastrophe Theory and its Application. Pitman, London
5. Qi GY, Du SZ, Chen GR, Chen ZQ, Yuan ZZ (2005) On a four-dimensional chaotic system.
Chaos Solitons Fractals 23:1671–1682
6. Rafikov M, Balthazar JM (2004) On an optimal control design for Rössler system. Phys Lett
A 333:241–245
7. Sinha S, Ramaswamy R, Rao JS (1990) Adaptive control in nonlinear dynamics. Physica D
43:118–128
8. Tanaka K, Ikeda T, Wang HO (1998) A unified approach to controlling chaos via an LMI-
based fuzzy control system design. IEEE Trans Circuits Syst I 45:1021–1040
9. Tian YP (2005) Delayed feedback control of chaos in a switched arrival system. Phys Lett A
339:446–454
10. Vassiliadis D (1994) Parametric adaptive control and parameter identification of low-
dimensional chaotic systems. Physica D 71:319–341
11. Wang J, Wang X (1998) Parametric adaptive control in nonlinear dynamical systems. Int J
Bifurc Chaos 8:2215–2223
12. Xie YH (2007) Researches on control and synchronization of several kinds of nonlinear
chaotic systems with time delay. Doctoral dissertation, Northeastern University, Shenyang
13. Yassen MT (2005) Controlling chaos and synchronization for new chaotic system using linear
feedback control. Chaos Solitons Fractals 26:913–920
14. Yau HT, Chen CL (2007) Chaos control of Lorenz systems using adaptive controller with
input saturation. Chaos Solitons Fractals 34:1567–1574
Chapter 5
Synchronizing Chaotic Systems Based on
Feedback Control
Abstract The goal of this chapter is to present some methods that can be used
to synchronize a rather wide class of chaotic systems. Considering that a chaotic
system in nature is a nonlinear system, the theories and methods developed for con-
trolling nonlinear systems could be utilized for synchronizing chaotic systems. The
content of this chapter is about the synchronization of chaotic systems which can be
described by ordinary differential equations and ordinary difference equations. The
method is mainly based on the technology of feedback linearization. In this chapter
we first develop synchronization methods for continuous-time chaotic systems, and
after that we study the problem of synchronizing discrete-time chaotic systems. We
also study the situation of synchronizing two different chaotic systems with adaptive
and nonadaptive controllers.
5.1 Introduction
Although many methods have been proposed [1, 3] for synchronizing two or more
chaotic systems since the landmark work by Pecora and Carroll [2], a systematic
framework under which the synchronizing controller can be designed is still not
constructed. It is usually the case that one method is only applicable to some speci-
fied chaotic systems. Our goal of this chapter is to present some methods that work
for a rather wide class of chaotic systems. Considering that a chaotic system in
nature is a nonlinear system, the theories and methods developed for controlling
nonlinear systems could be utilized for the synchronization of chaotic systems. The
content of this chapter is about the synchronization of chaotic systems which can
be described by ordinary differential equations and ordinary difference equations.
The method is based on the technology of feedback linearization. In this chapter
we first develop synchronization methods for continuous-time chaotic systems, and
after that we study the problem of synchronizing discrete-time chaotic systems.
133
134 5 Synchronizing Chaotic Systems Based on Feedback Control
(ii) Lg Lr−1
f h(x) = 0.
∂ h(x) ∂ Lk−1
f h(x)
Here, L f h(x) := f (x), Lkf h(x) := L f Lk−1
f h(x) = f (x), Lg Lkf h(x) :=
∂x ∂x
∂ Lkf h(x)
g(x), and L0f h(x) := h(x).
∂x
Theorem 5.1 ([9]). If system (5.1) has relative degree r at x0 , then r ≤ n. Set
⎧
⎪
⎪ φ1 (x) = h(x),
⎪
⎨ φ2 (x) = L f h(x),
.. (5.2)
⎪
⎪ .
⎪
⎩
φr (x) = Lr−1
f h(x).
Theorem 5.2 ([9]). If system (5.1) has relative degree r at x0 , then by transformation
z = Φ (x) system (5.1) can be transformed to
⎧
⎪ ż1 = z2 ,
⎪
⎪
⎪
⎪ ż2 = z3 ,
⎪
⎪
⎪
⎪ ..
⎪
⎪ .
⎪
⎪
⎪
⎨ żr−1 = zr ,
żr = b(z) + a(z)u, (5.3)
⎪
⎪
⎪
⎪ żr+1 = q r+1 (z),
⎪
⎪ ..
⎪
⎪
⎪
⎪ .
⎪
⎪
⎪
⎪ żn = qn (z),
⎩
y = z1 ,
System (5.5) is called the zero dynamics which reflects the internal stability of sys-
tem (5.4).
1
u= (−b(z) + v),
a(z)
the nonlinear control system (5.6) becomes a linear control system given by
⎧
⎪ ż1 = z2 ,
⎪
⎪
⎪
⎪ ż = z3 ,
⎪
⎪ 2
⎪
⎨ ..
.
⎪
⎪ ż = zn ,
⎪
⎪
n−1
⎪
⎪ żn = v,
⎪
⎪
⎩
y = z1 .
ẋ = f (x),
(5.7)
y = h(x).
such that system (5.8) has relative degree r (r ≤ n). Set ẑ = Φ (x̂), where Φ is defined
as (5.2). Thus, system (5.8) can be transformed to
⎧ 1
˙ 1
⎨ ẑ = Λ ẑ + E(b(ẑ) + a(ẑ)u),
⎪
ẑ˙2 = Q(ẑ1 , ẑ2 ),
⎪
⎩
ŷ = ẑ1 .
5.2 Synchronization of Continuous-Time Chaotic Systems with a Single Input 137
Defining e = (e1 , e2 )T = (ẑ1 −z1 , ẑ2 −z2 )T , we can obtain the following error system:
Theorem 5.3 ([15]). Denote C = (c0 , c1 , . . . , cr−1 ), where c0 , . . . , cr−1 are coeffi-
cients of the Hurwitz polynomial P(s) = sr + cr−1 sr−1 + · · · + c1 s + c0 . If a(ẑ) = 0
for all z ∈ Bρ (Φ (x0 )) and the zero dynamics of system (5.8) is asymptotically stable,
then with the controller
1
u=− (b(ẑ1 , ẑ2 ) − b(z1 , z2 ) + Ce) (5.10)
a(ẑ1 , ẑ2 )
Proof. Since Φ is nonsingular and the zero dynamics of system (5.9) is asymptoti-
cally stable, z and ẑ are both bounded. Therefore, the output signal of the controller
is also bounded. Substituting (5.10) into the first equation of (5.9), we have
ė1 = Λ̃ e1 , (5.11)
⎛ ⎞
0 1 0 ··· 0
⎜ 0 0 1 ··· 0 ⎟
⎜ ⎟
⎜ .. ⎟ . By the selection of C we know that all
where Λ̃ = ⎜ ... ..
.
.. . .
. . . ⎟
⎜ ⎟
⎝ 0 0 ··· 0 1 ⎠
−c0 −c1 −c2 · · · −cr−1
eigenvalues of Λ̃ have real parts which are less than 0, and therefore the origin of
(5.11) is asymptotically stable. The proof is completed.
Remark 5.2. The above theorem guarantees that outputs of system (5.7) and system
(5.8) can be synchronized asymptotically, i.e., lim |y − ŷ| = 0.
t→∞
5.2.3 Simulations
We take both the Rössler system and the Lorenz system as examples to show how
the method developed above is used.
138 5 Synchronizing Chaotic Systems Based on Feedback Control
When a = 0.2, b = 0.2, and c = 5.7, this system has a chaotic attractor, which
is shown in Fig. 5.1. Taking system (5.12) as the drive system, we construct the
following response system:
⎧
˙
⎨ x̂ = −ŷ − ẑ,
⎪
ŷ˙ = x̂ + aŷ, (5.13)
⎪
⎩˙
ẑ = b + ẑ(x̂ − c) + u,
where u is the controller and g(x̂) = (0, 0, 1)T . Choosing the second state of system
(5.12) as the output, we can easily verify that the system (5.13) has relative degree
3. The transformation can be chosen as (ŷ, x̂ + aŷ, ax̂ + (a2 − 1)ŷ − ẑ)T . Select C =
25
20
15
x3
10
0
10
5 15
0 10
−5 5
0
−10
−5
x2 −15 −10 x1
40
20
x̂1 − x1
−20
0 10 20 30 40 50 60 70 80 90 100
20
10
x̂2 − x2
−10
−20
0 10 20 30 40 50 60 70 80 90 100
200
100
x̂3 − x3
−100
−200
0 10 20 30 40 50 60 70 80 90 100
t (sec)
Fig. 5.2 The error curves of system (5.12) and system (5.13)
(27, 27, 9). Initial values of system (5.12) and system (5.13) are set as (30, 10, 10)T
and (20, 12, 10)T, respectively. When t = 40 sec the controller is turned on. Fig. 5.2
presents the error curves of the synchronization between systems (5.12) and (5.13).
Since the coordinate transformation is linear, all three state variables of the system
(5.13) will converge to those of the system (5.12).
When σ = 10, β = 8/3, and ρ = 28, the Lorenz system has a chaotic attractor,
which is the well-known butterfly attractor and is shown in Fig. 5.3.
We take system (5.14) as the drive system and construct the controlled response
system as follows:
140 5 Synchronizing Chaotic Systems Based on Feedback Control
50
40
30
x3
20
10
0
40
20
0
15 20
−20 10
0 5
−10 −5
x2 −40 −15
−20
x1
⎧
˙
⎨ x̂ = σ (ŷ − x̂),
⎪
ŷ˙ = ρ x̂ − ŷ − x̂ẑ + u, (5.15)
⎪
⎩˙
ẑ = x̂ŷ − β ẑ.
Here, the output is chosen as h(x̂) = x̂ and g = [0, 1, 0]T . By simple calculation we
have Lg h(x̂) = 0, L f h(x̂) = σ (ŷ − x̂), and Lg L f h(x̂) = σ = 0. Therefore, the relative
degree of system (5.15) is 2. Choosing the coordinate transformation as
ė3 = −ρ e3,
20
0
x̂1 − x1
−20
−40
0 5 10 15 20 25 30
t (sec)
50
x̂2 − x2
−50
0 5 10 15 20 25 30
t (sec)
50
x̂3 − x3
−50
0 5 10 15 20 25 30
t (sec)
Fig. 5.4 The error curves of system (5.14) and system (5.15)
142 5 Synchronizing Chaotic Systems Based on Feedback Control
which is asymptotically stable. Select C = (4, 4). Under the action of controller
1
u=− (L2 h(ẑ) − L2f h(z) +Ce) the origin of system (5.17) is asymptotically
Lg L f h(ẑ) f
stable. Furthermore, for any x̂ ∈ R3 it is always true that Lg L f h(x̂) = 0, and the origin
e = 0 is also globally and asymptotically stable. Since the controller is added on the
second equation of (5.15), if x̂ tends to x it implies that ŷ tends to y. Recall that
e3 = ẑ − z. The three state variables of system (5.15) will be synchronized to those
of system (5.14). Fig. 5.4 validates the above analysis.
The method proposed in the last section can only synchronize a single signal be-
tween the drive system and the response system. In this section we study how to
synchronize multi-signals simultaneously, which is especially useful in secret com-
munications when channel resources are not enough.
∂ f2 ∂ f1
[ f1 , f2 ](x) = f1 (x) − f2 (x),
∂x ∂x
where x is in the domain of f1 and f2 .
Δ = span{ f1 , . . . , fk }
f1 ∈ Δ , f2 ∈ Δ ⇒ [ f1 , f2 ] ∈ Δ .
ẋ = f (x) + g(x)u,
(5.18)
y = h(x),
is nonsingular at x = x0 .
Theorem 5.4 ([9]). Suppose that system (5.18) has a vector relative degree {r1 , . . . ,
rm } at x0 . Then,
r1 + · · · + rm ≤ n.
For 1 ≤ i ≤ m, set ⎧ i
⎪
⎪
φ1 (x) = hi (x),
⎪
⎪
⎪
⎨ φ2i (x) = L f hi (x),
⎪ ..
⎪
⎪ .
⎪
⎪
⎩ i
φri (x) = Lrfi −1 hi (x).
If r = r1 + · · · + rm is strictly less than n, it is always possible to find n − r additional
functions φr+1 (x), . . . , φn (x) such that the mapping
Φ (x) = [(φ11 (x), . . . , φr11 , . . . , φ1m (x), . . . , φrmm (x), φr+1 (x), . . . , φn (x))T (5.19)
G = span{g1 , . . . , gm }
is involutive near x0 , it is always possible to choose φr+1 (x), . . . , φn (x) in such a way
that
144 5 Synchronizing Chaotic Systems Based on Feedback Control
Lg j φi (x) = 0
for all r + 1 ≤ i ≤ n, all 1 ≤ j ≤ m, and all x around x0 .
Remark 5.4. By transformation Φ system (5.18) is changed to
⎧
⎨ ξ̇ = Λ ξ + E(A(ξ , η )u + B(ξ , η )),
⎪
η̇ = Q(ξ , η ), (5.20)
⎪
⎩
y = ξ,
where ⎛ ⎞
0 1 0 ···0
⎜0 0 1 ···0⎟
⎜ ⎟
⎜ .. ⎟
Λi = ⎜ ... ..
.
.. . .
. . .⎟
⎜ ⎟
⎝0 0 0 ··· 1 ⎠
0 0 0 · · · 0 r ×r
i i
and
Ei = (0, . . . , 0, 1)Tri ×1
for all 1 ≤ i ≤ m,
⎛ ⎞
Lg1 Lrf1 −1 h1 (ξ , η ) · · · Lgm Lrf1 −1 h1 (ξ , η )
⎜ r −1 r −1 ⎟
⎜ Lg1 L f2 h2 (ξ , η ) · · · Lgm L f2 h2 (ξ , η ) ⎟
A(ξ , η ) = ⎜
⎜ .. ..
⎟
⎟
⎝ . ··· . ⎠
Lg1 Lrfm −1 hm (ξ , η ) · · · Lgm Lrfm −1 hm (ξ , η )
m×m,
and
B(ξ , η ) = (Lrf1 h1 (ξ , η ), Lrf2 h2 (ξ , η ), . . . , Lrfm hm (ξ , η ))T .
When ξ = 0, the second equation of (5.20) becomes
η̇ = Q(0, η )
Suppose that the drive chaotic system has the following form:
ẋ = f (x),
(5.21)
y = h(x),
If the system (5.22) has relative degree r, then by nonsingular coordinate transfor-
mation (5.19), system (5.22) is changed to
⎧
⎪ ˙
⎨ ξ̂ = Λ ξ̂ + E(A(ξ̂ , η̂ )u + B(ξ̂ , η̂ )),
⎪
where the definitions of Λ , E, A(·, ·), B(·, ·), and C are the same as those in Remark
5.4. Denote e = ξ̂ − ξ and ε = η̂ − η . The error system can then be obtained as
Theorem 5.6 ([15]). If MIMO system (5.22) has relative degree r in U ⊂ Rn as well
as asymptotically stable zero dynamics, then the following controller:
146 5 Synchronizing Chaotic Systems Based on Feedback Control
1
u=− (B(x̂) − B(x) + Ce) (5.25)
A−1 (x̂)
Proof. Since the zero dynamics of (5.22) is stable, x̂ is bounded. Because (5.21) is
a chaotic system, x is also bounded. By the smoothness of f and h we know that
for each 0 ≤ k ≤ r and 1 ≤ i ≤ m, Lkf hi (x) is also bounded. Therefore, the controller
(5.25) is realizable. Substituting the controller into the first equation of (5.24), we
get
ė = Λ e, (5.26)
where Λ = diag(Λi ) and
⎛ ⎞
0 0 ···
1 0
⎜ 0 1 ···
0 0 ⎟
⎜ ⎟
⎜ .. ..
.. . . .. ⎟
Λi = ⎜ . .. . . ⎟
⎜ ⎟
⎝ 0 0 0 ··· 1 ⎠
−ci0 −ci1 −ci2 · · · −ciri −1 r ×r .
i i
Therefore, we have
m
det(λ I − Λ ) = ∏ det(λ Ii − Λi ).
i=1
From the selection of C we know that the real parts of all eigenvalues of Λ are less
than 0. Thus, the proof is completed.
5.3.3 Simulations
We take the Rössler hyperchaotic system [14] as an example to validate the effec-
tiveness of the proposed method. The system equation is as follows:
⎧
⎪ ẋ1 = −(x2 + x3),
⎪
⎪
⎨ ẋ = x + 0.25x + x ,
2 1 2 4
(5.27)
⎪ ẋ
⎪ 3 = 3.0 + x x
1 3 ,
⎪
⎩
ẋ4 = 0.05x4 − 0.5x3.
The projection into R3 of the chaotic attractor is presented in Fig. 5.5. This sys-
tem has two positive Lyapunov exponents, i.e., λ1 = 0.16 and λ2 = 0.03, which
means that the trajectory of the system has two divergent directions. This fact im-
5.3 Synchronization of Multi-Signals in Continuous-Time Chaotic Systems 147
60
200
40
x3
x4
100
0 20
50 100 50
0 0 0 50
−50 −50 0
−100 −50
x2 −100 −200 x2 −100 −100
x1 x1
60 60
40 40
x4
x4
20 20
300 300
200 50 200 50
100 0 100 0
−50 −50
0 −100 x3 0 −100 x2
x3 x1
plies that one cannot even stabilize the hyperchaotic system to the origin with only
one scalar controller. In the following, we will design a vector controller to synchro-
nize two Rössler hyperchaotic systems.
Taking system (5.27) as the drive system, we construct the following controlled
response system: ⎧
⎪
⎪ x̂˙1 = −(x̂2 + x̂3),
⎪
⎪
⎨ x̂˙ = x̂ + 0.25x̂ + x̂ + u ,
2 1 2 4 1
(5.28)
⎪
⎪ ˙
x̂3 = 3.0 + x̂1x̂3 + u2 ,
⎪
⎪
⎩˙
x̂4 = 0.05x̂4 − 0.5x̂3.
This means that g1 = (0, 1, 0, 0)T and g2 = (0, 0, 1, 0)T . Select outputs as h1 (x̂) = x̂1
and h2 (x̂) = x̂4 . By simple calculation, we have
and
Lg1 L f h1 (x̂) Lg2 L f h1 (x̂) −1 −1
A(x̂) = = , (5.29)
Lg1 L f h2 (x̂) Lg2 L f h2 (x̂) 0 −0.5
148 5 Synchronizing Chaotic Systems Based on Feedback Control
150
100
50
x̂1 − x1
−50
−100
0 50 100 150 200 250 300
t (sec)
40
20
x̂4 − x4
−20
−40
0 50 100 150 200 250 300
t (sec)
Fig. 5.6 The error curves of the synchronized Rössler hyperchaotic systems
which is nonsingular. Therefore, system (5.28) has relative degree (2, 2). Choose a
coordinate transformation as
Φ (x̂) = (ẑ1 , ẑ2 , ẑ3 , ẑ4 )T = (h1 (x̂), L f h1 (x̂), h2 (x̂), L f h2 (x̂))T .
where
⎛ ⎞ ⎛ ⎞
0 1 0 0 0 0 ' 2 (
⎜0 0 1 0⎟ ⎜1 0⎟ L f h1 (ẑ) u1
Λ =⎜
⎝0
⎟, E = ⎜ ⎟ , B(ẑ) = , and u = .
0 0 1⎠ ⎝0 0⎠ L2f h2 (ẑ) u2
0 0 0 0 0 1
A(ẑ) is equal to (5.29). Applying the same transformation to system (5.27), we get
ż = Λ z + EB(z).
300
200
x̂2 − x2
100
−100
0 50 100 150 200 250 300
t (sec)
400
200
x̂3 − x3
−200
−400
0 50 100 150 200 250 300
t (sec)
Fig. 5.7 The error curves of the synchronized Rössler hyperchaotic systems
Choose
625 50 0 0
C = diag{C1 ,C2 } = .
0 0 625 50
Initial values are set to
In the last section we assumed that the synchronized chaotic systems have the iden-
tical structure and the same parameters, but in practice the structures of the drive
system and the response system may be different and effects of noise (such as
parameters’ shift and measurement errors, etc.) should not be ignored. So, in this
section, we consider the problem of synchronizing chaotic systems with different
structures and parameter perturbations.
If lim e = lim x̃(t, x̃0 ) − x(t, x0 ) = 0 we say that system (5.30) and system
t→∞ t→∞
(5.31) are synchronized. In the rest of this section we suppose that each element of
F(x) and G(x̃) is in L∞ . This is a reasonable assumption since the systems consid-
ered here are chaotic.
Theorem 5.7 ([17]). With the controller
Remark 5.5. Many chaotic systems have the form of (5.30), such as Lorenz, Chen,
Lü, Rössler, Chua, Van der Pol, and Duffing, etc.
5.4.2 Simulations
In this section we will show the effectiveness of the presented method by simulation.
Suppose that the drive system is the Lorenz system
⎧
⎨ ẋ1 = α1 (x2 − x1),
⎪
ẋ2 = α2 x1 − x1 x3 − x2 ,
⎪
⎩
ẋ3 = x1 x2 − α x3 .
20
0
x̃1 − x1
−20
−40
0 10 20 30 40 50 60 70 80 90 100
t (sec)
20
0
x̃2 − x2
−20
−40
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
5
x̃3 − x3
−5
−10
0 10 20 30 40 50 60 70 80 90 100
t (sec)
Fig. 5.8 The error curves of the synchronized Lorenz system and Lü system when k = 1 and
δ =γ =1
5.4 Synchronization of Different Continuous-Time Chaotic Systems 153
20
10
α̃1
0
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
60
40
α̃2
20
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
5
α̃3
−5
0 10 20 30 40 50 60 70 80 90 100
t (sec)
15
10
β̃1
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
60
40
β̃2
20
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
5
β̃3
−5
0 10 20 30 40 50 60 70 80 90 100
t (sec)
20
10
x̃1 − x1
0
−10
0 10 20 30 40 50 60 70 80 90 100
t (sec)
20
10
x̃2 − x2
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
5
x̃3 − x3
−5
−10
0 10 20 30 40 50 60 70 80 90 100
t (sec)
Fig. 5.11 The error curves of the synchronized Lorenz system and Lü system when k = 1 and
δ =γ =4
20
10
α̃1
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
60
40
α̃2
20
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
0
α̃3
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
20
β̃1 10
−10
0 10 20 30 40 50 60 70 80 90 100
t (sec)
50
0
β̃2
−50
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
0
β̃3
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
20
10
x̃1 − x1
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
50
x̃2 − x2
−50
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
5
x̃3 − x3
−5
−10
0 10 20 30 40 50 60 70 80 90 100
t (sec)
Fig. 5.14 The error curves of the synchronized Lorenz system and Lü system when k = 3 and
δ =γ =1
156 5 Synchronizing Chaotic Systems Based on Feedback Control
20
10
α̃1 0
−10
−20
0 10 20 30 40 50 60 70 80 90 100
t (sec)
60
40
α̃2
20
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
5
α̃3
−5
0 10 20 30 40 50 60 70 80 90 100
t (sec)
20
15
10
β̃1
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
40
30
20
β̃2
10
0
0 10 20 30 40 50 60 70 80 90 100
t (sec)
10
5
β̃3
−5
0 10 20 30 40 50 60 70 80 90 100
t (sec)
Parameters’ initial values are α̃ (0) = β̃ (0) = (3, 3, 3)T , k = 1, and δ = γ = 1. Simu-
lation results are shown in Figs. 5.8–5.16. From Figs. 5.11–5.13, it is easy to see that
increasing δ and γ does not shorten the transient time. On the contrary, the ampli-
tude of the oscillation becomes bigger. However, from Figs. 5.14–5.16, we find that
increasing k can shorten the transient time and decrease the amplitude of oscillation.
With the popularization of personal computers more and more people have real-
ized the importance of data security. Compared to continuous-time chaotic systems,
discrete-time chaotic systems are more suitable for digital computers in applications
of secure communications. Two reasons can be used to explain why discrete-time
chaotic systems have more advantages than continuous-time chaotic systems. First,
chaos can be found in very simple discrete-time systems even if the dimension of
the dynamical equation is only one. But, in a continuous-time autonomous system,
the dimension of the phase space must be greater than two in order to see chaos.
Second, a discrete-time chaotic system can be directly utilized using a computer
program while a continuous-time chaotic system needs to be discretized before it
is used. In the process of discretization one must pay careful attention to the cho-
sen method since false discretization may cause deviation dynamics between the
discretized system and the original continuous-time system.
Although there are already some works on the control and synchronization of
discrete-time chaotic systems [4, 5, 6, 12], most of them assume that parameters
of the drive and the response systems are identical. In this section we first propose
a control method to synchronize two chaotic systems with the same parameters,
after that an adaptive algorithm is designed to tune the controller’s parameters when
parameter disturbance exists. To show the effectiveness of the proposed methods, a
computer simulation is provided in the last part of the section.
f s = f ( f (· · · f (x, us ) · · · , u2 ), u1 ).
Definition 5.6. If there exists a minimal positive integer r such that for any 0 ≤ i ≤
r − 1 the following conditions are satisfied:
∂ (h ◦ f i )
(i) = 0,
∂u
∂ (h ◦ f r )
(ii) = 0,
∂u
then we say that system (5.36) has relative degree r.
Remark 5.6. System (5.36) having relative degree r means that inputs at k = 0 do
not affect the system’s output until the instant k = r.
Theorem 5.8 ([11]). If system (5.36) has relative degree r = n, then there is a coor-
dinate transformation z = Φ (x) such that, in the new coordinates, system (5.36) has
the following form:
z(k + 1) = Az(k) + Bv(k),
y(k) = Cz(k),
where
z = (z1 , . . . , zn )T
= (h(x), h ◦ f (x, u), . . . , h ◦ f n−1(x, u))T , (5.37)
n
v = h◦ f (x, u), (5.38)
⎛ ⎞
0 1 0 ··· 0 0
⎜0 0 1 ··· 0 0⎟
⎜ ⎟
⎜ .. ⎟
A = ⎜ ... ..
.
.. . .
. .
..
. .⎟ (5.39)
⎜ ⎟
⎝0 0 0 ··· 0 1⎠
0 0 0 ··· 0 0 n×n,
B = (0, . . . , 0, 1)Tn×1 ,
C = (1, 0, . . . , 0)1×n. (5.40)
5.5 Synchronization of Discrete-Time Chaotic Systems 159
where g, d are constant vectors, A, B,C are defined as those in (5.39) and (5.40),
a(·) : Rn → R is a continuous function, and b is a nonzero constant.
Proof. Define mappings ψ = F(x) and φu = x + gu. Then, f (x, u) = φu ◦ ψ (x). Since
system (5.41) has relative degree n, we have
∂ ∂ ∂h ∂ (φu ◦ ψ )i
(h ◦ f i ) = (h ◦ (φu ◦ ψ )i ) = = 0, 0 ≤ i ≤ n − 1,
∂u ∂u ∂ (φu ◦ ψ )i ∂u
∂ ∂ ∂h ∂ (φu ◦ ψ )n
(h ◦ f n) = (h ◦ (φu ◦ ψ )n) = = 0,
∂u ∂u ∂ (φu ◦ ψ )n ∂u
then system (5.43) and system (5.44) are said to be in synchronization. Before de-
signing the controller, we assume that the following two conditions hold for system
(5.44):
(i) system (5.44) has relative degree n in the neighborhood of zero Bρ (0);
(ii) h ◦ f n (x, u) = a(x) + bu.
For some mechanical systems [12], the first condition can be satisfied. At least
for affine control systems the second condition can also be satisfied.
Considering the above assumptions and Lemma 5.1, we know that through the
nonlinear coordinate transformation
e(k + 1) = Λ e(k),
where ⎛ ⎞
0 0 ··· 0
1 0
⎜ 0 1 ··· 0
0 0 ⎟
⎜ ⎟
⎜ ... ..
.. . . . . .. ⎟ .
Λ =⎜ . . .. . ⎟
⎜ ⎟
⎝ 0 0 0 ··· 0 1 ⎠
−q0 −q1 −q2 · · · −qn−2 −qn−1
5.5 Synchronization of Discrete-Time Chaotic Systems 161
By the selection of Q we know that all eigenvalues of Λ have absolute values less
than 1. The proof is completed.
In this section we will study how to synchronize two discrete-time chaotic systems
whose parameters are not identical. Before stating the main results we need two
lemmas.
For a general discrete-time system
If the rank of (φ (1), . . . , φ (s)) equals the dimension of β0 , then β (k) will converge
to β0 within s steps.
Setting G ⊂ R, we state the discrete version of the LaSalle invariance principle.
Lemma 5.3 ([13]). For a discrete-time system x(k + 1) = f (x(k)), where f : Rn →
Rn is a smooth mapping, if the following two conditions are satisfied:
(i) there is a function V ∈ C[G ⊂ Rn , R] such that along the solutions of the system,
we have
Δ V (x(k)) = V (x(k + 1)) − V(x(k)) ≤ 0;
(ii) there exists a bounded solution x(k) such that x(k) ⊂ G for all k ≥ 0,
162 5 Synchronizing Chaotic Systems Based on Feedback Control
x(k + 1) = f T (x(k))θ0 ,
(5.49)
y(k) = Dx(k),
Since system (5.50) has relative degree n in U, we can find a local transformation
ẑ = Φ (x) such that, in the new coordinates, system (5.50) becomes
where A, B, and C have the same form as in (5.39) and (5.40). Setting e(k) = ẑ(k) −
z(k), we get the error system as
where ⎛ ⎞
0 0 ··· 0
1 0
⎜ 0 1 ··· 0
0 0 ⎟
⎜ ⎟
⎜ .. ..
.. . . .. .. ⎟
Λ =⎜ . . . . . . ⎟
⎜ ⎟
⎝ 0 0 0 ··· 0 1 ⎠
−q0 −q1 −q2 · · · qn−2 −qn−1 n×n.
For system (5.54) we choose the following Lyapunov function:
n
V (e(k)) = ∑ |ei (k)|,
i=1
Since system (5.49) is a chaotic system, there must exist a large N > 0 such that
the condition of Lemma 5.2 is satisfied. Therefore, θ (k) will equal θ0 at most in N
steps. Thus, we have
Δ V |k>N = −|e1 (k)| ≤ 0.
When k > N the error system becomes e(k + 1) = Λ e(k). If Δ V |k>N = 0, it means
that e1 (k) = 0 (k > N). From the error system equations we have ei (k) = 0, 2 ≤ i ≤ n.
By Lemma 5.3 we know that system (5.52) is asymptotically stable in U. The proof
is completed.
164 5 Synchronizing Chaotic Systems Based on Feedback Control
5.5.3 Simulations
In this section we take the Hénon system [8] as an example to illustrate the proposed
methods. The difference equations of the Hénon system are as follows:
When p = 1.4 and q = 0.3, the Hénon system is in chaotic state and its attractor is
shown in Fig. 5.17. The system has two fixed points:
' (T
(q0 − 1)2 + 4p0
q0 − 1 + q0 − 1 + (q0 − 1)2 + 4p0
X f1 = , q0 ,
2p0 2p0
' (T
q0 − 1 − (q0 − 1)2 + 4p0 q0 − 1 + (q0 − 1)2 + 4p0
X f2 = , q0 .
2p0 2p0
0.4
0.3
Xf1
0.2
0.1
x2 0
−0.1
−0.2 Xf2
−0.3
−0.4
−1.5 −1 −0.5 0 0.5 1 1.5
x1
From Fig. 5.17 we can see that X f1 is in the attractor. To apply the proposed
method we should first displace the origin of system (5.55) to X f1 . For simplicity
we will use the same symbols to denote the transformed system. Using the new
coordinates, the Hénon system is as follows:
Let x1 be the output. Then, the drive system has the following form:
⎧ 2
⎨ x1 (k + 1) = −p0 x1 (k) − 2X f1 p0 x1 (k) + x2 (k),
⎪
x2 (k + 1) = q0 x1 (k), (5.56)
⎪
⎩
y(k) = x1 (k).
Notice that system (5.55) can be transformed into the input–output form as
So, parameter update laws (5.34) can be used. It is easy to validate that system (5.57)
has relative degree 2. By the following coordinate transformation:
Φ (x̂) = (ẑ1 , ẑ2 ) = (h(x̂), h ◦ f (x̂))T = (x̂, −p(k)x̂21 (k) − 2X f1 p(k)x̂1 (k) + x̂2 (k))T ,
Setting e1 (k) = ẑ1 (k) − z1 (k) and e2 (k) = ẑ2 (k) − z2 (k), we get the following error
system:
⎧
⎨ e1 (k + 1) = e1 (k),
e (k + 1) = −p(k)z22 (k) − 2X f1 (k)p(k)ẑ2 (k) + q(k)ẑ1 (k)
⎩ 2
+p0 z22 (k) + 2X f1 p0 z2 (k) − q0 z1 (k) + u.
The controller is designed according to (5.53) with Q = (0.25, 1)T. Choose p0 = 1.4,
q0 = 0.3, p(0) = 1.4, and q(0) = 0.3. Initial values of the drive system and the
response system are (0.3, 0.1)T and (1.3, 0.5)T. Simulation results are shown in Figs.
5.18 and 5.19. Before k = 1000 the two systems have the same parameters. Within
0 ≤ k < 500 no controller is added on the response system. When 500 ≤ k < 1000
the controller is turned on and the two Hénon systems become synchronized very
2
x̂1 − x1
−2
−4
0 200 400 600 800 1000 1200 1400 1600 1800 2000
k
0
x̂2 − x2
−1
−2
−3
0 200 400 600 800 1000 1200 1400 1600 1800 2000
k
1.4
1.35
1.3
p(k)
1.25
1.2
1.15
1500 1550 1600 1650 1700 1750 1800 1850 1900 1950 2000
k
0.4
0.35
0.3
q(k)
0.25
0.2
1500 1550 1600 1650 1700 1750 1800 1850 1900 1950 2000
k
quickly. When k = 1000 the parameters of the response system are disturbed to
1.2 and 0.2, and the previous controller cannot synchronize the Hénon systems any
more. At k = 1500, parameter update laws are added and the two Hénon systems
are synchronized again. From Fig. 5.19 we can see that parameters of the response
system converge to those of the drive system when synchronization is achieved.
5.6 Summary
nizing general chaotic systems. We do not exclude the special cases in which the
controller can be simplified.
References
6.1 Introduction
In this chapter, we will study how to synchronize two identical or different chaotic
systems by impulsive control methods. Impulsive control is an efficient method to
deal with dynamical systems which cannot be controlled by continuous control [18].
In addition, in the synchronization process, the response system receives informa-
tion from the drive system only at discrete time instants, which drastically reduces
the amount of synchronization information transmitted from the drive system to the
response system and makes this method more efficient in a great number of real-life
applications.
As is well known, for many applications, parametric uncertainties of systems
are inevitable, and the effects of these uncertainties will destroy the synchroniza-
tion. In the past few years, the analysis of synchronization for chaotic systems with
parametric uncertainties has gained much research attention [4, 8, 11]. Most of the
researches only concern the synchronization between two identical chaotic systems
169
170 6 Synchronizing Chaotic Systems via Impulsive Control
where N(x(t), x̃(t)) ∈ Rn×n is a bounded matrix with elements depending on x(t)
and x̃(t). Most chaotic systems can be described by (6.1) and (6.2), such as the
Lorenz system, the Rössler system, the Chen system, the Lü system, the unified
system, several variants of Chua’s circuit, etc.
Suppose that a set of discrete instants {tk } satisfies
0 < t1 < t2 < · · · < tk < tk+1 < · · · , lim tk = ∞ (k ∈ N), 0 ≤ t0 < t1 .
k→∞
At discrete time instant tk , the state variables of the drive system are transmitted
to the response system and the state variables of the response system are suddenly
changed at these instants. Therefore, the response system can be written in the fol-
6.2 Complete Synchronization of a Class of Chaotic Systems via Impulsive Control 171
lowing form:
⎧
˙
⎨ x̃(t) = Ax̃(t) + h(x̃(t)), t = tk ,
⎪
Δ x̃(tk ) = x̃(tk+ ) − x̃(tk− ) = x̃(tk+ ) − x̃(tk ) = −Bk e(tk ), t = tk , k ∈ N, (6.3)
⎪
⎩
x̃(t0+ ) = x̃0 ,
where
e(t) = x(t) − x̃(t) = (x1 (t) − x̃1(t), x2 (t) − x̃2(t), . . . , xn (t) − x̃n (t))T
is the synchronization error. Let x̃(tk+ ) = lim x̃(t), x̃(tk− ) = lim x̃(t), i.e., tk+ and
t→tk+ t→tk−
tk− denote the times after and before tk , respectively. x̃(tk− )
= x̃(tk ) implies that x̃(t)
is left-continuous for t ≥ t0 . Bk ∈ Rn×n are the impulsive control gains. Δ x(tk ) =
x(tk+ )−x(tk− ) = 0. Then, from (6.1) and (6.3), the following error system is obtained:
⎧
⎨ ė(t) = (A + N(x(t), x̃(t)))e(t), t = tk ,
⎪
Δ e|t=tk = e(tk+ ) − e(tk ) = Bk e(tk ), t = tk , k ∈ N, (6.4)
⎪
⎩ +
e(t0 ) = e0 .
Here, the objective is to find the conditions on the control gains Bk and the im-
pulsive distances δ¯k = tk − tk−1 (k ∈ N) such that the error system (6.4) is asymptot-
ically stable, which implies that the impulsively controlled response system (6.3) is
asymptotically synchronized with the drive system (6.1) for arbitrary initial condi-
tions.
V (e) = eT e.
Therefore,
V (e(t)) ≤ V (e(tk−1
+
)) exp[(λA + λN )(t − tk−1 )], t ∈ (tk−1 ,tk ], k ∈ N. (6.6)
From inequalities (6.6) and (6.7), we know for any t ∈ (t0 ,t1 ],
and
V (e(t1+ )) ≤ β1V (e(t1 )) ≤ V (e(t0+ ))β1 exp[(λA + λN )(t1 − t0 )].
Thus, for any t ∈ (t1 ,t2 ],
This means that V (e(t)) → 0 and e(t) → 0 as t → ∞. Thus, the origin of system (6.4)
is asymptotically stable, which implies that the impulsively controlled response sys-
6.2 Complete Synchronization of a Class of Chaotic Systems via Impulsive Control 173
tem (6.3) is asymptotically synchronized with the drive system (6.1). This completes
the proof.
4 4
Based on the matrix theory, we can obtain λN ≤ 4N(x, x̃) + N T(x, x̃)4, where ·
refers to any induced matrix norm in this section. Then, we can derive the following
corollary.
Remark 6.1. Since the trajectory of a chaotic system is bounded, inequalities (6.5)
and (6.8) hold for suitable values of βk and δ¯k (k ∈ N).
Remark 6.2. Generally speaking, the estimate of a matrix norm is easier than that of
eigenvalues of a matrix. Therefore, it is more convenient to apply Corollary 6.1 than
Theorem 6.1 in practice.
Remark 6.3. In Theorem 6.1 and Corollary 6.1, if δ¯k = δ¯ > 0 and Bk = B (k ∈ N),
then the conditions of Theorem 6.1 and Corollary 6.1 can be reduced to
and 4 4
ln(ρβ ) + λA + 4N(x, x̃) + N T (x, x̃)4 δ¯ ≤ 0,
respectively, where β is the largest eigenvalue of (I + B)T(I + B).
and
sup{βk exp[(λA + λN )(tk+1 − tk )]} = ϒ < ∞; (6.10)
k
(ii) λA + λN < 0, and there exists a constant ρ (0 ≤ ρ < −(λA + λN )) such that
V (e) = eT e.
For t ∈ (tk−1 ,tk ] (k ∈ N), the time derivative of V (e) along the solution of (6.4) is
V (e(t)) ≤ V (e(tk−1
+
)) exp[(λA + λN )(t − tk−1 )]. (6.12)
If λA + λN ≥ 0, there exists a constant ρ > 1. From (6.9), (6.10), and (6.14), when
(i) t ∈ (t2k−1 ,t2k ], we have
2k−1
V (e(t)) ≤ V (e(t0+ )) ∏ βi exp[(λA + λN )(t − t0)]
i=1
2k−1
≤ V (e0 ) ∏ βi exp[(λA + λN )(t2k − t0)]
i=1
= V (e0 )β1 β2 exp[(λA + λN )(t3 − t1 )] · · ·
× β2k−3β2k−2 exp[(λA + λN )(t2k−1 − t2k−3 )]
× β2k−1 exp[(λA + λN )(t2k − t2k−1)]
× exp[(λA + λN )(t1 − t0 )]
V (e0 )
≤ϒ exp[(λA + λN )(t1 − t0 )], (6.15)
ρ k−1
2k
V (e(t)) ≤ V (e(t0+ )) ∏ βi exp[(λA + λN )(t − t0 )]
i=1
2k
≤ V (e0 ) ∏ βi exp[(λA + λN )(t2k+1 − t0)]
i=1
= V (e0 )β1 β2 exp[(λA + λN )(t3 − t1 )] · · ·
× β2k−1 β2k exp[(λA + λN )(t2k+1 − t2k−1 )]
× exp[(λA + λN )(t1 − t0 )]
V (e0 )
≤ exp[(λA + λN )(t1 − t0 )]. (6.16)
ρk
Remark 6.4. From (6.9) in Theorem 6.2, it can be seen that we need only to choose
the odd switching sequence {t2k−1 } instead of the whole switching sequence {tk } as
in Theorem 6.1.
Remark 6.5. The inequality (6.9) can be generalized to the following condition.
There exist a finite integer n0 > 0 and a constant ρ > 1 such that
Remark 6.6. In Theorem 6.2 and Corollary 6.2, if t2k+1 − t2k−1 = Δ > 0 and Bk = B
(k ∈ N), then the conditions of Theorem 6.2 and Corollary 6.2 can be reduced to
ln(ρβ 2 ) + (λA + λN )Δ ≤ 0
176 6 Synchronizing Chaotic Systems via Impulsive Control
and 4 4 4 4
ln(ρβ 2 ) + 4A + AT4 + 4N(x, x̃) + N T (x, x̃)4 Δ ≤ 0,
respectively, where β is the largest eigenvalue of (I + B)T(I + B).
6.2.3 Simulations
We take both the Lorenz system and the unified system as examples to show how to
use the proposed methods to synchronize chaotic systems.
First, we decompose the linear and nonlinear parts of the Lorenz system (6.19), and
rewrite it as follows:
ẋ = Ax + h(x), (6.20)
⎛ ⎞ ⎛ ⎞
−α1 α1 0 0
where x(t) = (x1 , x2 , x3 )T , A = ⎝ α2 −1 0 ⎠, and h(x) = ⎝ −x1 x3 ⎠. The im-
0 0 −α3 x1 x2
pulsively controlled response system is given by
⎧
⎨ x̃˙ = Ax̃ + h(x̃), t = tk ,
Δ x̃ = −Bk e(tk ), t = tk , k ∈ N, (6.21)
⎩ +
x̃(t0 ) = x̃0 ,
where e = (e1 , e2 , e3 )T = (x1 − x̃1 , x2 − x̃2 , x3 − x̃3 )T is the synchronization error and
⎛ ⎞ ⎛ ⎞⎛ ⎞
0 0 0 0 e1
h(x) − h(x̃) = ⎝ x̃1 x̃3 − x1x3 ⎠ = ⎝ −x3 0 −x̃1 ⎠ ⎝ e2 ⎠ .
x1 x2 − x̃1x̃2 x2 x̃1 0 e3
Therefore, we obtain
6.2 Complete Synchronization of a Class of Chaotic Systems via Impulsive Control 177
⎛ ⎞
0 0 0
N(x, x̃) = ⎝ −x3 0 −x̃1 ⎠ ,
x2 x̃1 0
⎛ ⎞
0 −x3 x2
N(x, x̃) + N T (x, x̃) = ⎝ −x3 0 0 ⎠ ,
x2 0 0
and 4 4
4N(x, x̃) + N T (x, x̃)4 = max{|x3 | + |x2|, |x3 |, |x2 |}.
∞
In this simulation, we take α1 = 10, α2 = 28, and α3 = 8/3. Then, λA = 28.0512.
4 system, we find4−20 ≤ x1 ≤ 20, −30 ≤ x2 ≤
From the chaotic attractor of the Lorenz
30, and 0 ≤ x3 ≤ 50, which leads to 4N(x, x̃) + N T(x, x̃)4∞ ≤ 80. Let δ¯k = δ¯ (k ∈ N)
and Bk = B = diag{σ , σ , σ }. Then, βk = β = (σ + 1)2 . According to Corollary 6.1
and Remark 6.3, the impulsively controlled response system (6.21) is asymptotically
synchronized with the drive system (6.20) if the following condition is satisfied:
2
ln ρ + ln(σ + 1)
0 < δ¯ ≤ − .
108.0512
Letting σ = −0.7 and ρ = 1.1, we have 0 < δ¯ ≤ 0.0214. The numerical simulation
results with σ = −0.7 and δ¯ = 0.02 are shown in Fig. 6.1. The initial conditions
of the drive and response systems are taken as x0 = (6, 2, 4)T and x̃0 = (4, 1, 5)T ,
respectively.
1
e1
−1
0 0.05 0.1 0.15 0.2 0.25 0.3
1
e2
−1
0 0.05 0.1 0.15 0.2 0.25 0.3
0.5
0
e3
−0.5
−1
0 0.05 0.1 0.15 0.2 0.25 0.3
t (sec)
Fig. 6.1 Synchronization error curves of the Lorenz system with δ = 0.02
178 6 Synchronizing Chaotic Systems via Impulsive Control
30
20
10
0
x1
−10
−20
−30
40
20 60
50
0 40
30
−20 20
10
x2 −40 0 x3
where e = (e1 , e2 , e3 )T = (x1 − x̃1 , x2 − x̃2 , x3 − x̃3 )T is the synchronization error and
⎛ ⎞ ⎛ ⎞⎛ ⎞
0 0 0 0 e1
h(x) − h(x̃) = ⎝ x̃1 x̃3 − x1x3 ⎠ = ⎝ −x3 0 −x̃1 ⎠ ⎝ e2 ⎠ .
x1 x2 − x̃1x̃2 x2 x̃1 0 e3
Therefore, we obtain ⎛ ⎞
0 0 0
N(x, x̃) = ⎝ −x3 0 −x̃1 ⎠ ,
x2 x̃1 0
1
e1
−1
0 0.05 0.1 0.15 0.2 0.25 0.3
1
e2
−1
0 0.05 0.1 0.15 0.2 0.25 0.3
0.5
0
e3
−0.5
−1
0 0.05 0.1 0.15 0.2 0.25 0.3
t (sec)
Fig. 6.3 Synchronization error curves of the unified system with Δ = 0.03
180 6 Synchronizing Chaotic Systems via Impulsive Control
⎛ ⎞
0 −x3 x2
N(x, x̃) + N T (x, x̃) = ⎝ −x3 0 0 ⎠ ,
x2 0 0
and 4 4
4N(x, x̃) + N T (x, x̃)4 = max{|x3 | + |x2|, |x3 |, |x2 |}.
∞
4 4
In this simulation, we choose a = 1. Then, 4A + AT4∞ = 98. From Fig. 6.2, we find
−30 ≤ x1 ≤ 30, −30 ≤ x2 ≤ 35, and 0 ≤ x3 ≤ 55. Therefore,
4 4
4N(x, x̃) + N T (x, x̃)4 ≤ 90.
∞
ln ρ + 2 ln(σ + 1)2
0<Δ ≤− .
188
Let σ = −0.8 and ρ = 1.1. Then, 0 < Δ ≤ 0.0338. The numerical simulation results
with σ = −0.8 and Δ = 0.03 are shown in Fig. 6.3. The initial conditions of the drive
and response systems are taken as x0 = (6, 2, 4)T and x̃0 = (4, 1, 5)T , respectively.
where x and x̃ are the state vectors of the drive system and the response system,
respectively, and η is a finite channel time delay, which is often unknown.
In this section, we study how to achieve the synchronization of the unified sys-
tems with channel time delay and parametric uncertainties based on the practical
stability theory. Meanwhile, a new definition of chaos synchronization is described,
namely, the state of the drive system at time t − η synchronizes with that of the re-
sponse system at time t in the sense of practical stability, i.e., for any different initial
conditions, x0 (t − η ) and x̃0 (t),
where · refers to the Euclidean vector norm or the induced matrix 2-norm.
6.3.1 Preliminaries
ẋ = f (t, x, u(t)),
Definition 6.2 ([17]). Let V ∈ V0 . For any (t, x) ∈ (tk−1 ,tk ] × Rn , the upper and right
(Dini) derivative of V (t, x(t)), along the solution x(t) of (6.25), is defined as
1
D+V (t, x) := lim sup [V (t + h, x + h f (t, x, u(t))) − V(t, x)].
h→0 + h
Instead of studying the stability of the nth-order impulsive differential equation
given in (6.25), it is convenient to study that of a scalar impulsive differential equa-
tion, whose stability properties are related to (6.25).
Definition 6.3 ([19]). Comparison system. Let V ∈ V0 and assume that
Ω := {u ∈ Rm : Γ (t, u) ≤ r(t), t ≥ t0 },
(vi) x ∈ S(ζ ) implies that x + U(k, x) ∈ S(ρ ) and V (tk+ , x + U(k, x)) ≤ ψk (V (t, x)).
(vii) α (λ ) < β (ζ ).
Then, the practical stability properties of the comparison system (6.26) with respect
to (α (λ ), β (ζ )) imply the corresponding practical stability properties of system
(6.25) with respect to (λ , ζ ) for every u(t) ∈ Ω , where S(ρ ) := {x ∈ Rn : x <
ρ }.
6.3 Lag-Synchronization of the Unified Systems via Impulsive Control 183
Remark 6.7. In Lemma 6.1, if the conditions in (iii) and (vii) are changed to 0 <
ζ < λ and α (λ ) > β (ζ ), respectively, the conclusion is still valid.
Lemma 6.2 ([19]). Let the comparison system (6.26) be described by the following
equations:
⎧
⎪ g(t, ω , v(t)) = ϕω + θ , ϕ > 0, θ > 0, t = tk ,
⎪
⎨
⎪ ψk (ω ) = γk ω , γk > 0, t = tk , k ∈ N, (6.27)
⎪
⎩
ω (t0+ ) = ω0 ≥ 0.
1
ln(γ ) + ϕ < 0
δ
and
θ 1 − eϕδ
< ζ,
ϕ (1 − γ eϕδ )
where δ = tk+1 − tk , 0 < δ < +∞, then system (6.25) is practically stable with re-
spect to (λ , ζ ) for any λ < +∞.
In the above, Δ Ã, Δ Ā, and Δ A satisfy the same assumptions, Δ Ã = μ F̃(t) =
μ [ f˜i j (t)]n×n , and Δ Ā = μ F(t − η ) = μ [ fi j (t − η )]n×n .
184 6 Synchronizing Chaotic Systems via Impulsive Control
If the linear error feedback composed of the state variables of the drive system
(6.30) and the response system (6.29) is used as impulsive control signal, the con-
trolled response system can be written in the following form:
⎧
⎨ x̃˙(t) = (A + Δ Ã)x̃(t) + h(x̃(t)), t = tk ,
Δ x̃(t) = Bk (x̃(t) − x(t − η )), t = tk , k ∈ N, (6.31)
⎩ +
x̃(t0 ) = x̃0 .
From (6.30) and (6.31), the following error system can be obtained:
⎧
⎪
⎪ ė(t) = Ae(t) + h(x(t − η )) − h(x̃(t))
⎪
⎪
⎨ + Δ Āx(t − η ) − Δ Ãx̃(t), t = tk ,
(6.32)
⎪
⎪ Δ e|t=tk = Bk e(tk ), t = tk , k ∈ N,
⎪
⎪
⎩ e(t + ) = e ,
0 0
For simplicity, in (6.33) and the rest of this section, e(t), x̃(t), x(t − η ), F̃(t),
F(t − η ), f˜i j (t), and fi j (t − η ) are denoted by e, x̃, x, F̃, F, f˜i j , and fi j , respectively.
Remark 6.8. Due to the boundedness of the chaotic signals, there exist constants M j
such that x̃ j ≤ M j , x j ≤ M j , j = 1, 2, 3.
Theorem 6.3 ([11]). The practical stability of the synchronization error system
(6.32) with respect to (λ , ζ ) is equivalent to that of the comparison system (6.27)
√ 3
with respect to ( 3λ , ζ ) , where ϕ = Hmax = max{H1 , H2 , H3 }, θ = H f = 6μ ∑ M j ,
j=1
H1 = |28 − 35a| − (25a + 10) + M2 + M3 + 3μ , H2 = |25a + 10| + (29a − 1) + M1 +
1
3μ , and H3 = − (a + 8) + M1 + 3μ .
3
6.3 Lag-Synchronization of the Unified Systems via Impulsive Control 185
Then, we have
5 6
3
≤ |28 − 35a| − (25a + 10) + |x2 | + |x3 | + μ ∑ | fi1 | |e1 |
i=1
5 6
3
+ |25a + 10| + (29a − 1) + |x̃1 | + μ ∑ | fi2 | |e2 |
i=1
5 6
3
1
+ − (a + 8) + |x̃1 | + μ ∑ | fi3 | |e3 |
3 i=1
3 3
+μ ∑∑ fi j − f˜i j x̃ j
j=1 i=1
3 3
≤ H1 |e1 | + H2 |e2 | + H3 |e3 | + μ ∑∑ fi j − f˜i j M j
j=1 i=1
3
≤ max{H1 , H2 , H3 }V (t, e) + 6μ ∑ Mj.
j=1
(|e1 | + |e2 | + |e3 |)2 = e21 + e22 + e23 + 2 |e1 | |e2 | + 2 |e2 | |e3 | + 2 |e1 | |e3 |
≤ e21 + e22 + e23 + (e21 + e22 ) + (e22 + e23 ) + (e21 + e23 )
= 3(e21 + e22 + e23 ),
√ √
we get |e1 | + |e2 | + |e3 | ≤ 3 e21 + e22 + e23 , i.e., α (s) = 3s and β (s) = s.
From e ∈ S(ζ ), or e < ζ , we have
If there exists γk ∈ (0, 1) such that the last inequality holds, then e +U(k, e) ∈ S(ρ ).
Other conditions in Theorem 6.3 can easily be proved, which are omitted here. This
completes the proof of the theorem.
Based on the results of Lemma 6.2 and Theorem 6.3, we will give the following
practical stability criterion of the error system (6.32).
Theorem 6.4 ([11]). Let γk = γ > 0 be a constant, δ = tk+1 −tk , 0 < δ < +∞, k ∈ N,
(λ , ζ ) be given, λ > 0, and ζ > 0. If
1
ln(γ ) + Hmax < 0, (6.34)
δ
H f 1 − eHmax δ
< ζ, (6.35)
Hmax (1 − γ eHmax δ )
then the synchronization error system (6.32) is practically stable with respect to
(λ , ζ ) for any λ < +∞, where Hmax and H f are defined in Theorem 6.3.
Proof. Replace ϕ and θ in Lemma 6.2 with Hmax and H f in Theorem 6.3, respec-
tively. Then, the conclusion can easily be obtained. For details, which are omitted
here, please refer to [19]. This completes the proof.
6.3.3 Simulations
Let a = 1. From the attractor of the unified system, we get −30 ≤ x1 ≤ 30,
−30 ≤ x2 ≤ 35, 0 ≤ x3 ≤ 55, and then M ⎛1 = 30, M2 = 35, ⎞ M3 = 55. Assume
sint cost sint
that the parametric uncertainty Δ A = 0.03 ⎝ cost cost cost ⎠, i.e., μ = 0.03 and
sint cost sint
⎛ ⎞
sint cost sint
F(t) = [ fi j (t)]3×3 = ⎝ cost cost cost ⎠. By computation, we have H1 = 62.09,
sint cost sint
H2 = 93.09, H3 = 27.09, and H f = 21.6. So, Hmax = 93.09. The initial conditions
6.4 Impulsive Synchronization of Different Chaotic Systems 187
0.4
0.2
e1
−0.2
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0.4
0.2
e2
−0.2
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
0.2
0
e3
−0.2
−0.4
0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1
t (sec)
of the drive and response systems are x0 = (4, 1, 5)T and x̃0 = (3.6, 0.8, 5.4)T, re-
spectively. Let δ = 0.005. From (6.34) it gives γ < 0.6279. We choose γ = 0.1, and
from (6.35) it gives ζ ≈ 0.1636. It means that under this condition, the maximum
amplitudes of e1 , e2 , and e3 are less than 0.1636. We choose the ‘unknown’ channel
time delay η = 3 sec. The simulation result is shown in Fig. 6.4. Obviously, the syn-
chronization errors fluctuate around zero with small amplitudes that are estimated
by Theorem 6.4.
In practice, it is difficult to ensure the structures of the drive and the response sys-
tems to be identical. Moreover, the parametric uncertainties of the drive and the
response systems are always different and time varying. Therefore, it is significant
188 6 Synchronizing Chaotic Systems via Impulsive Control
where x ∈ Rn is the state vector, A ∈ Rn×n is a known constant matrix, and f (x(t)) ∈
Rn is a continuous nonlinear function. Assume that
where N(x(t), x̃(t)) ∈ Rn×n is a matrix with bounded elements depending on x(t) ∈
Rn and x̃(t) ∈ Rn .
The parametric uncertainty Δ A(t) is said to be admissible if Δ A(t) satisfies the
following assumptions:
(i) Δ A(t) = EF(t)H, where E and H are known real constant matrices with appro-
priate dimensions, and the uncertain matrix F(t) ∈ Rn×n satisfies F(t) ≤ 1.
(ii) The parametric uncertainty Δ A(t) does not destroy the chaotic behavior of the
chaotic system (6.36).
Remark 6.10. The parametric uncertainty Δ A(t) is a time-varying perturbation,
which is often used in other papers to deal with the scheme of robust stabilization
for uncertain systems [2, 14].
Consider the channel time delay η ; we replace t with t − η in (6.36), and write
the drive system at time t − η as follows:
where x̃ ∈ Rn is the state vector, Ã ∈ Rn×n is a known constant matrix, and g(x(t)) ∈
Rn is a continuous nonlinear function. The parametric uncertainty Δ Ã(t) = Ẽ F̃(t)H̃
satisfies the same assumptions as that for Δ A(t). Suppose that the following condi-
tion holds:
f (x̃(t)) − g(x̃(t)) = U(x̃(t))x̃(t), (6.40)
where U(x̃(t)) ∈ Rn×n is a matrix with each element bounded depending on x̃(t).
Suppose that a set of discrete instants {tk } satisfies
0 < t1 < t2 < · · · < tk < tk+1 < · · · , lim tk = ∞ (k ∈ N), 0 ≤ t0 < t1 .
k→∞
6.4 Impulsive Synchronization of Different Chaotic Systems 189
At the discrete time tk , the state variables of the drive system are transmitted to
the response system as part of the control inputs such that the state variables of
the response system are suddenly changed at these instants. Therefore, the response
system can be written in the following form:
⎧
⎨ x̃˙(t) = (Ã + Δ Ã(t))x̃(t) + g(x̃(t)), t = tk ,
Δ x̃(t) = x̃(tk+ ) − x̃(tk− ) = x̃(tk+ ) − x̃(tk ) = −Bk e(t), t = tk , k ∈ N, (6.41)
⎩ +
x̃(t0 ) = x̃0 ,
where
e(t) = x(t − η ) − x̃(t) = [x1 (t − η ) − x̃1 (t), x2 (t − η ) − x̃2 (t), . . . , xn (t − η ) − x̃n (t)]T
For simplicity, in the rest of the section, x̃(t), x(t − η ), N(x(t − η ), x̃(t)), U(x̃(t)),
Δ A(t − η ), and Δ Ã(t) are denoted by x̃, x, N, U, Δ A, and Δ Ã, respectively.
Remark 6.11. Due to the boundedness of the chaotic signals, there exist positive
constants ς and ς̃ such that x ≤ ς and x̃ ≤ ς̃ .
Definition 6.7. The synchronization of systems (6.38) and (6.41) is said to have
been achieved if, for arbitrary initial conditions x0 and x̃0 , the trivial solution of the
error system (6.42) converges to a predetermined neighborhood of the origin for any
admissible parametric uncertainties.
Here, the objective is to find the conditions on the control gains Bk and the im-
pulsive distances δ¯k = tk − tk−1 (k ∈ N) such that the error magnitude, i.e., e ,
converges to below some constant ξ , which implies that the impulsively controlled
response system (6.41) is synchronized with the drive system (6.38) for arbitrary
initial conditions.
The following theorem gives sufficient conditions for robust stability of the error
system (6.42), which imply that the impulsively controlled response system (6.41)
is synchronized with the drive system (6.38).
190 6 Synchronizing Chaotic Systems via Impulsive Control
and
sup{βk exp[θ1 (tk+1 − tk )]} = ϒ < ∞; (6.44)
k
(ii) when θ1 < 0, there exists a constant ρ (0 ≤ ρ < −θ1 ) such that
where ξ > 0 is the bound of the error magnitude e and can be chosen small
enough, where
ς̃ 4 4 4 44 4 4 4
θ1 = λA + λN +2 E H + 2 4A − Ã4 +2 E H +2 4Ẽ 4 4H̃ 4 + 4U + U T4 .
ξ
Proof. Choose the following Lyapunov function:
V (e) = eT e.
For t ∈ (tk−1 ,tk ], k ∈ N, the time derivative of V (e) along the solution of (6.42) is
V (e(t)) ≤ V (e(tk−1
+
)) exp[θ1 (t − tk−1 )], t ∈ (tk−1 ,tk ], k ∈ N. (6.47)
which leads to
V (e(t1 )) ≤ V (e(t0+ )) exp[θ1 (t1 − t0 )]
and
V (e(t1+ )) ≤ β1V (e(t1 )) ≤ V (e(t0+ ))β1 exp[θ1 (t1 − t0 )].
Similarly, for t ∈ (t1 ,t2 ], we have
If θ1 ≥ 0, there exists a constant ρ > 1 and, from (6.43), (6.44), and (6.48), when
(i) t ∈ (t2k−1 ,t2k ], we have
192 6 Synchronizing Chaotic Systems via Impulsive Control
2k−1
V (e(t)) ≤ V (e(t0+ )) ∏ βi exp[θ1 (t − t0)]
i=1
2k−1
≤ V (e0 ) ∏ βi exp[θ1 (t2k − t0)]
i=1
= V (e0 )β1 β2 exp[θ1 (t3 − t1 )] · · · β2k−3 β2k−2 exp[θ1 (t2k−1 − t2k−3 )]
× β2k−1 exp[θ1 (t2k − t2k−1 )] exp[θ1 (t1 − t0 )]
V (e0 )
≤ϒ exp[θ1 (t1 − t0 )]; (6.49)
ρ k−1
If θ1 < 0, and there exists a constant ρ (0 ≤ ρ < −θ1 ) such that (6.45) holds, i.e.,
βk ≤ exp[ρ (tk − tk−1 )], then
It follows from (6.49)–(6.51) that the error magnitude e will converge to below
the constant ξ if the error started from e > ξ . This completes the proof.
Remark 6.12. Most typical chaotic systems with parametric uncertainties can be de-
scribed by (6.36) or (6.39), and the continuous nonlinear functions can satisfy (6.37)
and (6.40), such as the Lorenz system, the Chen system, the Rössler system, the uni-
fied system, the Lü system, several variants of Chua’s circuit, etc.
Remark 6.13. If A = Ã and f (·) = g(·), then the structures of system (6.38) and sys-
tem (6.39) are identical. Therefore, Theorem 6.5 is also applicable to the impulsive
synchronization between two identical chaotic systems with or without parametric
uncertainties.
Remark 6.14. In (6.41) or (6.42), the value of the channel time delay η is not re-
quired to be known for executing the impulsive control, since we can obtain the
time-delay signal x(t − η ) at the discrete time tk easily and use it in the response
system blindly when it is received.
6.4 Impulsive Synchronization of Different Chaotic Systems 193
Based
4 on the 4 matrix 4 theory 4and the boundedness
4 4 of chaotic signals, we obtain
λA ≤ 4A + AT4, λN ≤ 4N + N T 4 ≤ φ , and 4U + U T 4 ≤ ϕ , where φ and ϕ are posi-
tive real constants that can be obtained for different chaotic systems. Then, we have
the following corollary.
Corollary 6.3. Let βk be the largest eigenvalue of (I + Bk )T (I + Bk ). If there exists
a constant ρ > 1 such that
and
sup{βk exp[θ2 (tk+1 − tk )]} = ϒ < ∞,
k
then the response system (6.41) is synchronized with the drive system (6.38), where
ξ > 0 is the bound of the error magnitude e and can be chosen small enough,
where
4 4 ς̃ 4 4 4 44 4
θ2 = 4A + AT4 + φ + 2 E H + 2 4A − Ã4 + 2 E H + 2 4Ẽ 4 4H̃ 4 + ϕ .
ξ
Remark 6.15. Though 4 Corollary
4 6.3 4 will be4more conservative than Theorem 6.5,
the computation of 4A + AT4 and 4N + N T 4 will be easier than that of λA and λN
in general. Therefore, the result in Corollary 6.3 will be more convenient for use in
practice.
4 4
Remark46.16. From
4 the boundedness of chaotic systems, we know that λA , 4A + AT4,
λN , φ , 4A − Ã4, and ϕ are bounded, and the inequalities (6.43) and (6.52) can be
satisfied by choosing appropriate βk and Δk = t2k+1 − t2k−1 (k ∈ N).
In practice, for purpose of convenience, the gains Bk are always selected as a
constant matrix and the impulsive distances Δk are set to be a positive constant.
Then, we have the following corollary.
Corollary 6.4. Assume that t2k+1 −t2k−1 = Δ > 0 and Bk = B (k ∈ N). If there exists
a constant ρ > 1 such that
then the response system (6.41) is synchronized with the drive system (6.38), where
ξ > 0 is the bound of the error magnitude e and can be chosen small enough, β
is the largest eigenvalue of (I + B)T(I + B), and θ2 is defined in Corollary 6.3.
Remark 6.17. It is worth pointing out that the topic of synchronization between dif-
ferent chaotic systems with different parametric uncertainties has attracted a great
deal of attention, and the scheme considered here is more general in a practical en-
vironment. In this section, to compensate for the different structures of the drive
and the response systems, stronger impulsive strength and higher impulsive fre-
quency are necessary. On the other hand, to save the energy of impulsive control
194 6 Synchronizing Chaotic Systems via Impulsive Control
and improve its performance, one can introduce an additional feedback controller to
compensate for the different structures, and this will lead to a more complicated con-
troller. Hence, to achieve synchronization, we can choose one of the two schemes
mentioned above by considering implementation simplicity and performance im-
provement, which depend on the practical situation.
6.4.3 Simulations
In this simulation, the impulsive distances Δk are set to be a positive constant Δ , and
the gains Bk are selected as a constant matrix, i.e., Bk = B = diag{d, d, d}. Then,
β = (d + 1)2.
In order to observe the synchronization behavior of different chaotic systems, we
assume that the Lorenz system drives the Chen system. Therefore, the drive and the
response systems are as follows:
⎧
⎨ ẋ1 = α1 (x2 − x1 ),
ẋ2 = α2 x1 − x1 x3 − x2 , (6.53)
⎩
ẋ3 = x1 x2 − α3 x3
and ⎧
⎨ x̃˙1 = α̃1 (x̃2 − x̃1),
x̃˙2 = (α̃2 − α̃1 )x̃1 − x̃1 x̃3 + α̃2 x̃2 , (6.54)
⎩˙
x̃3 = x̃1 x̃2 − α̃3 x̃3 .
From (6.42), (6.53), and (6.54), the following compact form of the error system is
obtained:
⎧
⎨ ė(t) = (A + Δ A + N)e(t) + (A − Ã+ Δ A − Δ Ã + U)x̃(t), t = tk ,
Δ e|t=t = e(tk+ ) − e(tk ) = Bk e(tk ), t = tk , k ∈ N,
⎩ + k
e(t0 ) = e0 .
Let α1 = 10, α2 = 28, α3 = 8/3, α̃1 = 35, α̃2 = 28, and α̃3 = 3. Systems (6.53)
and (6.54) are chaotic with the above parameters. Then,
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
−α1 α1 0 −10 10 0 −20 38 0
A = ⎝ α2 −1 0 ⎠ = ⎝ 28 −1 0 ⎠, A + AT = ⎝ 38 −2 0 ⎠,
0 0 −α3 0 0 −8/3 0 0 −16/3
⎛ ⎞ ⎛ ⎞ ⎛ ⎞
−α̃1 α̃1 0 −35 35 0 25 −25 0
à = ⎝ α̃2 − α̃1 α̃2 0 ⎠ = ⎝ −7 28 0 ⎠, and A − à = ⎝ 35 −29 0 ⎠.
0 0 −α̃3 0 0 −3 0 0 1/3
6.4 Impulsive Synchronization of Different Chaotic Systems 195
−3
x 10
1.4
1.2
0.8
Δ
0.6
ρ=1 1
0.4
3 3
10 10
Stable region
0.2 30 30
0
−1.5 −1.4 −1.3 −1.2 −1.1 −1 −0.9 −0.8 −0.7 −0.6 −0.5
d
−3
x 10
4
3.5
2.5
Δ
2
ξ =2 2
1 1 1
0.5 0.5
0.5
Stable region
0
−1.5 −1.4 −1.3 −1.2 −1.1 −1 −0.9 −0.8 −0.7 −0.6 −0.5
d
0.2
0
e1
−0.2
0 10 20 30 40 50 60 70 80
0.2
0
e2
−0.2
0 10 20 30 40 50 60 70 80
0.02
e3
−0.02
0 10 20 30 40 50 60 70 80
t (sec)
(a)
0.7
0.6
0.5
0.4
e
0.3
0.2
0.1
0
0 10 20 30 40 50 60 70 80
t (sec)
(b)
Fig. 6.7 Simulation results for synchronization between the Lorenz system and the Chen system
with Δ = 0.001. (a) Synchronization errors, (b) synchronization error magnitude e
6.4 Impulsive Synchronization of Different Chaotic Systems 197
50
0
e1
−50
0 10 20 30 40 50 60 70 80
50
e2
−50
0 10 20 30 40 50 60 70 80
50
e3
−50
0 10 20 30 40 50 60 70 80
t (sec)
Fig. 6.8 Synchronization errors without impulsive control between the Lorenz system and the
Chen system with Δ = 0.001
⎛ ⎞
sin t 0 0
Let Δ A = −Δ Ã = 0.04 ⎝ 0 cost 0 ⎠, where E = H = Ẽ = H̃ = 0.2I and
0 0 sint
⎛ ⎞
sint 0 0
F(t) = −F̃(t) = ⎝ 0 cost 0 ⎠. We have
0 0 sint
⎛ ⎞ ⎛ ⎞ ⎛ ⎞⎛ ⎞
0 0 0 0 0 e1
Ne = f (x) − f (x̃) = ⎝ −x1 x3 ⎠ − ⎝ −x̃1 x̃3 ⎠ = ⎝ −x3 0 −x̃1 ⎠ ⎝ e2 ⎠ ,
x1 x2 x̃1 x̃2 x2 x̃1 0 e3
⎛ ⎞ ⎛ ⎞
0 0 0 0 −x3 x2
i.e., N = ⎝ −x3 0 −x̃1 ⎠. Then, N + N T = ⎝ −x3 0 0 ⎠ and
x2 x̃1 0 x2 0 0
4 4
4 N + N T 4 = x2 + x2 ≤ x2 + x2 + x2 = x ≤ ς = φ .
2 3 1 2 3
⎛ ⎞
0
From f (x̃) = g(x̃) = ⎝ −x̃1 x̃3 ⎠, we have U = 0 and ϕ = 0.
x̃1 x̃2
198 6 Synchronizing Chaotic Systems via Impulsive Control
ln ρ + ln β 2
0<Δ ≤− = 0.0011.
13930
Using these parameters, conditions in Corollary 6.4 are satisfied for Δ ≤ 0.0011.
Robust impulsive synchronization between the Lorenz system and the Chen system
with impulsive distance Δ = 0.001 is given in Fig. 6.7. We can see that the syn-
chronization has been achieved practically and e is smaller than ξ = 0.5. Fig. 6.8
shows the synchronization errors without impulsive control.
Recently, there have been many efforts for the study of dynamical properties of de-
layed neural networks (DNNs) [1, 9]. Furthermore, it has been shown that these
networks can exhibit some complicated dynamics and even chaotic behavior if the
parameters and time delays are appropriately chosen [3, 10, 23]. Thus, the problem
of stabilization and synchronization of chaotic DNNs has received extensive consid-
eration [15, 21, 22]. This section addresses a practical issue of using an impulsive
control method to synchronize a class of chaotic DNNs with different time-varying
parametric uncertainties. This class of chaotic DNNs includes several well-known
chaotic DNNs, such as chaotic delayed Hopfield neural networks (DHNNs) and
chaotic delayed cellular neural networks (DCNNs), etc. Based on the theory of Lya-
punov stability and impulsive FDEs, some new sufficient conditions are derived,
which are expressed in terms of matrix norm inequalities. Furthermore, the syn-
chronization error magnitude can be reduced arbitrarily as long as some specific
conditions hold.
6.5.1 Preliminaries
Remark 6.18. Equation (6.55) unifies several well-known neural networks such as
Hopfield neural networks with or without delays and cellular neural networks with
or without delays. In particular, if di = 1 and the activation function fˆi (ui ) is sigmoid,
then (6.55) describes the dynamics of Hopfield neural networks. Similarly, if di = 1
and the activation function fˆi (ui ) = (|ui + 1| − |ui − 1|)/2, then (6.55) describes the
dynamics of cellular neural networks.
where x(t) = (x1 (t), x2 (t), . . . , xn (t))T is the state vector of the new system (6.57),
and f (x(t)) = ( f1 (x1 ), f2 (x2 ), . . . , fn (xn ))T with fi (xi ) = fˆi (xi + u∗i ) − fˆi (xi ), i =
1, 2, . . . , n. It is easy to see that by Assumption 6.1, fi (xi ) satisfies the following
assumption.
and
Ξ := {H ∈ C(R+ , R+ ) : H(0) = 0, H(s) > 0, ∀s > 0}.
Consider the following impulsive FDEs:
ẋ(t) = f (t, xt ), t ≥ t0 ,
(6.58)
x(tk+ ) = Jk (x(tk− )), k ∈ N,
where f : [t0 , ∞) × PC([−τ , 0], Rn ) → Rn and Jk (x) : S(ρ ) → Rn for each k ∈ N. For
any t ≥ t0 , xt ∈ PC([−τ , 0], Rn ) is defined by xt (s) = x(t + s), −τ ≤ s ≤ 0. Suppose
that the set of discrete instants {tk } satisfies
We denote lim x(t) = x(tk+ ) and lim x(t) = x(tk− ). Then, x(tk ) = x(tk− ) implies that
t→tk+ t→tk−
x(t) is left-continuous for t ≥ t0 .
Lemma 6.3 (Lyapunov-like Stability Theorem [7, 16]). The zero solution of
(6.58) is uniformly asymptotically stable if there exist V ∈ V0 , ω1 , ω2 ∈ K , ψ ∈ K ∗
and H ∈ Ξ such that
(i) ω1 ( x ) ≤ V (t, x) ≤ ω2 ( x ) for (t, x) ∈ [t0 , ∞) × S(ρ ).
(ii) For all x ∈ S(ρ1 ), 0 < ρ1 ≤ ρ , and k ∈ N, V (tk , Jk (x)) ≤ ψ (V (tk− , x)).
(iii) For any solution x(t) of (6.58), V (t + s, x(t + s)) ≤ ψ −1 (V (t, x)), − τ ≤ s ≤ 0,
implies that D+V (t, x(t)) ≤ g(t)H(V (t, x(t))), where g : [t0 , ∞) → R+ is locally
integrable and ψ −1 is the inverse function of ψ .
(iv) H is nondecreasing and there exist constants λ2 ≥ λ1 > 0 and A0 > 0 such that,
for all k ∈ N and μ > 0, λ1 ≤ tk − tk−1 ≤ λ2 and
μ tk
du
− g(s)ds ≥ A0 .
ψ ( μ ) H(u) tk−1
In this section, we study the robust synchronization scheme for two identical chaotic
DNNs with parametric uncertainties via impulsive control. Consider the following
drive system described by:
where x(t) = (x1 (t), x2 (t), . . . , xn (t))T is the state vector, f (x) = ( f1 (x1 ), f2 (x2 ),
. . . , fn (xn ))T is the nonlinear vector-value function, D = diag{d1 , d2 , . . . , dn } > 0,
C = diag{c1 , c2 , . . . , cn } > 0, and A = (ai j ) ∈ Rn×n and B = (bi j ) ∈ Rn×n are two
real constant matrices.
The parametric uncertainties Δ A(t) ∈ Rn×n and Δ B(t) ∈ Rn×n satisfy the follow-
ing assumptions.
(i) Δ A(t) = E1 F1 (t)H1 , Δ B(t) = E2 F2 (t)H2 , where Ei and Hi (i = 1, 2) are known
real constant matrices with appropriate dimensions, and the uncertain matrices
Fi (t) satisfy Fi (t) ≤ 1.
(ii) The parametric uncertainties Δ A(t) and Δ B(t) do not destroy the chaotic be-
havior of the chaotic neural networks (6.59).
The response system can be described by
    x̃˙(t) = −D[Cx̃(t) − (A + ΔÃ(t)) f(x̃(t)) − (B + ΔB̃(t)) f(x̃(t − τ))],
where x̃(t) is the state vector of the response system, and ΔÃ(t) = Ẽ1F̃1(t)H̃1 and
ΔB̃(t) = Ẽ2F̃2(t)H̃2 satisfy the same assumptions as ΔA(t) and ΔB(t).
At the discrete time tk , the state variables of the drive system are transmitted to
the response system as the control inputs such that the state variables of the response
system are suddenly changed at these instants. Therefore, the response system can
be written in the following form:
    x̃˙(t) = −D[Cx̃(t) − (A + ΔÃ(t)) f(x̃(t)) − (B + ΔB̃(t)) f(x̃(t − τ))],  t ≥ 0, t ≠ tk,
    x̃(tk+) = x̃(tk) + (Wk − I)e(tk),  t = tk, k ∈ N,      (6.60)
where Wk ∈ Rn×n , I is an n × n identity matrix, and e(t) = x̃(t) − x(t) is the synchro-
nization error. Then, from (6.59) and (6.60), the following error system is obtained:
    ė(t) = −D[Ce(t) − (A + ΔÃ(t))h(e(t), x(t)) − (B + ΔB̃(t))h(e(t − τ), x(t − τ))
              − (ΔÃ(t) − ΔA(t)) f(x(t)) − (ΔB̃(t) − ΔB(t)) f(x(t − τ))],  t ≥ 0, t ≠ tk,
    e(tk+) = Wk e(tk),  t = tk, k ∈ N,      (6.61)
where h(e(t), x(t)) = f (e(t) + x(t)) − f (x(t)) = f (x̃(t)) − f (x(t)). Using Assump-
tion 6.2, we have
    ‖h(e(t), x(t))‖² = ∑_{i=1}^{n} hi²(ei(t), xi(t)) ≤ ∑_{i=1}^{n} σi² ei²(t) ≤ σ²‖e(t)‖²
and
    ‖h(e(t − τ), x(t − τ))‖² ≤ σ²‖e(t − τ)‖²,
where σ = max_{1≤i≤n}{σi}.
Remark 6.19. Due to the boundedness of the chaotic signals, there exists a positive
constant χ such that x ≤ χ and x̃ ≤ χ .
Remark 6.20. According to Lemma 6.3, to investigate the stability of the zero
solution of the error system (6.61), we construct an appropriate Lyapunov-like
function V ∈ V0 such that the first two conditions of Lemma 6.3 hold. Then, by calculating
and estimating the Dini derivative of V along the trajectory of the error system,
one obtains candidates for g and H. Based on condition (iv), we then derive
a sufficient condition ensuring robust stability of the error system. Furthermore, to
obtain an appropriate construction of V, estimating D⁺V(t, x(t)) often requires
some inequality techniques such as the matrix norm inequality.
Definition 6.8. The synchronization of systems (6.59) and (6.60) is said to have
been achieved if, for arbitrary initial conditions x0 and x̃0 , the trivial solution of the
error system (6.61) converges to a predetermined neighborhood of the origin for any
admissible parametric uncertainty.
Here, the objective is to find conditions on the control gains Wk and the impulsive
distances δ̄k = tk − tk−1 (k ∈ N) such that the error magnitude ‖e‖ converges
to below some constant ξ, which implies that the impulsively controlled response
system (6.60) is synchronized with the drive system (6.59) for arbitrary initial conditions.
For convenience of analysis, we choose a constant control gain
and a constant impulsive distance, i.e., Wk = wI and δ̄k = δ̄.
Theorem 6.6. Assume that δ̄k = δ̄ and Wk = wI for k ∈ N. If the following conditions hold:
(i) |w| < 1,
(ii) 2 ln |w| + vδ̄ < 0,
then the response system (6.60) is synchronized with the drive system (6.59), where
ξ > 0 is the bound of the error magnitude ‖e‖ and can be chosen small enough,
d̄ = max_{1≤i≤n}{di}, and
    v = −2 min_{1≤i≤n}{di ci} + σd̄[ 2(‖A‖ + ‖Ẽ1‖‖H̃1‖) + (1 + w⁻²)(‖B‖ + ‖Ẽ2‖‖H̃2‖)
        + (2χ/ξ)(‖E1‖‖H1‖ + ‖Ẽ1‖‖H̃1‖ + ‖E2‖‖H2‖ + ‖Ẽ2‖‖H̃2‖) ] > 0.
Proof. Construct the Lyapunov-like function
    V(e) = eᵀe.      (6.62)
Whenever V(t + s, e(t + s)) ≤ ψ⁻¹(V(t, e(t))) for −τ ≤ s ≤ 0, with ψ(s) = w²s, then we have
    eᵀ(t − τ)e(t − τ) ≤ w⁻² eᵀ(t)e(t).
For ‖e‖ ≥ ξ, one can conclude that ‖e‖ ≤ ‖e‖²/ξ. Subsequently, calculating the
Dini derivative of V(e) along the trajectory of (6.61) yields
    D⁺V(t, e(t)) = 2eᵀ(t)D[−Ce(t) + (A + ΔÃ(t))h(e(t), x(t))
                   + (B + ΔB̃(t))h(e(t − τ), x(t − τ))
                   + (ΔÃ(t) − ΔA(t)) f(x(t)) + (ΔB̃(t) − ΔB(t)) f(x(t − τ))]
    ≤ −2 ∑_{i=1}^{n} di ci ei²(t) + 2σ‖D‖(‖A‖ + ‖Ẽ1‖‖H̃1‖)‖e(t)‖²
      + 2σ‖D‖(‖B‖ + ‖Ẽ2‖‖H̃2‖)‖e(t)‖‖e(t − τ)‖
      + 2σ‖D‖(‖E1‖‖H1‖ + ‖Ẽ1‖‖H̃1‖)‖x(t)‖‖e(t)‖
      + 2σ‖D‖(‖E2‖‖H2‖ + ‖Ẽ2‖‖H̃2‖)‖x(t − τ)‖‖e(t)‖
    ≤ −2 min_{1≤i≤n}{di ci}‖e(t)‖² + 2σd̄(‖A‖ + ‖Ẽ1‖‖H̃1‖)‖e(t)‖²
      + σd̄(‖B‖ + ‖Ẽ2‖‖H̃2‖)(‖e(t)‖² + ‖e(t − τ)‖²)
      + (2σχd̄/ξ)(‖E1‖‖H1‖ + ‖Ẽ1‖‖H̃1‖)‖e(t)‖²
      + (2σχd̄/ξ)(‖E2‖‖H2‖ + ‖Ẽ2‖‖H̃2‖)‖e(t)‖²
    ≤ [ −2 min_{1≤i≤n}{di ci} + 2σd̄(‖A‖ + ‖Ẽ1‖‖H̃1‖) + σd̄(1 + w⁻²)(‖B‖ + ‖Ẽ2‖‖H̃2‖)
        + (2σχd̄/ξ)(‖E1‖‖H1‖ + ‖Ẽ1‖‖H̃1‖ + ‖E2‖‖H2‖ + ‖Ẽ2‖‖H̃2‖) ]‖e(t)‖²
    = vV(t, e(t)).
Let g(t) = 1 and H(s) = vs. Condition (iv) of Lemma 6.3 implies that
    ∫_{ψ(μ)}^{μ} du/H(u) − ∫_{tk−1}^{tk} g(s) ds = ∫_{w²μ}^{μ} du/(vu) − (tk − tk−1)
        = −(2/v) ln |w| − (tk − tk−1)
        = −(1/v)(2 ln |w| + vδ̄)
        = A0
        > 0.
It is easy to see that (6.62) satisfies condition (i) of Lemma 6.3. Thus, the conditions
of Lemma 6.3 are all satisfied. Therefore, we can conclude that the error magnitude
‖e‖ converges to below the constant ξ whenever the error starts from ‖e‖ > ξ,
which implies that the response system (6.60) is synchronized with the drive system
(6.59). This completes the proof.
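The mechanism behind conditions (i) and (ii) can also be seen numerically. The following Python sketch is our own illustration (not part of the original proof): between impulses the estimate D⁺V ≤ vV gives V(tk+1−) ≤ e^{vδ̄}V(tk+), while each impulse with Wk = wI gives V(tk+) = w²V(tk−), so the bound contracts exactly when w²e^{vδ̄} < 1, i.e., 2 ln|w| + vδ̄ < 0. The numerical values of v, w, and δ̄ are taken from the simulation example below.

```python
# Worst-case bound on V = ||e||^2 across impulse intervals (our illustrative sketch).
import math

def lyapunov_bound(v, w, delta, V0, n_impulses):
    """Iterate V -> w^2 * exp(v*delta) * V: growth over one interval, then the jump."""
    V, history = V0, [V0]
    for _ in range(n_impulses):
        V = (w ** 2) * math.exp(v * delta) * V
        history.append(V)
    return history

v, w, delta = 181.98, 0.2, 0.015                    # example values; delta satisfies (ii)
print("2*ln|w| + v*delta =", 2 * math.log(abs(w)) + v * delta)   # negative -> contraction
print(lyapunov_bound(v, w, delta, V0=1.0, n_impulses=5))
```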
6.5.3 Simulations
Consider the drive system (6.59) and the response system
    x̃˙(t) = −D[Cx̃(t) − (A + ΔÃ(t)) f(x̃(t)) − (B + ΔB̃(t)) f(x̃(t − τ))],
where
    A = [[1 + π/4, 20], [0.1, 1 + π/4]],   B = [[−1.3√2π/4, 0.1], [0.1, −1.3√2π/4]],
D = C = I, f(x) = (f1(x1), f2(x2))ᵀ, and fi(xi) = (|xi + 1| − |xi − 1|)/2. Let F1(t) =
−F̃1(t) = diag{sin t, sin t}, F2(t) = −F̃2(t) = diag{cos t, cos t}, and Ei = Ẽi = Hi = H̃i =
0.2I (i = 1, 2).
From (6.59), (6.60), and (6.61), we get the error system as follows:
    ė(t) = −D[Ce(t) − (A + ΔÃ(t))h(e(t), x(t)) − (B + ΔB̃(t))h(e(t − τ), x(t − τ))
              − (ΔÃ(t) − ΔA(t)) f(x(t)) − (ΔB̃(t) − ΔB(t)) f(x(t − τ))],  t ≥ 0, t ≠ tk,
    e(tk+) = Wk e(tk),  t = tk, k ∈ N.
Fig. 6.9 The estimated stable region of δ̄ versus w for ξ = 0.5, 0.2, 0.1, and 0.05
Fig. 6.10 Synchronization errors and synchronization error magnitude with δ¯ = 0.015
We choose χ = 16. From Theorem 6.6, we get the stable region for different ξ,
which is shown in Fig. 6.9. Let w = 0.2, ξ = 0.05, and σ1 = σ2 = 1. Then, the
estimate of the bound of the stable region is given by
    δ̄ < −(2/v) ln |w| = −(2/181.9792) ln |0.2| = 0.0177.
Using these parameters, the conditions in Theorem 6.6 are satisfied for δ̄ < 0.0177.
Synchronization errors and the synchronization error magnitude with impulsive distance
δ̄ = 0.015 are given in Fig. 6.10. As we can see, synchronization has been
achieved practically and ‖e‖ is smaller than ξ = 0.05. The initial conditions of the
drive and response systems are taken as x(s) = (0.1, 0.1)ᵀ and x̃(s) = (0.2, −0.2)ᵀ,
respectively, for −1 ≤ s ≤ 0.
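The numbers v ≈ 181.98 and δ̄ < 0.0177 quoted above can be reproduced directly from the expression for v in Theorem 6.6. The following short Python sketch is our own check, assuming the matrix norm is the spectral norm:

```python
# Reproduce the bound of Theorem 6.6 for the simulation example (our sketch).
import numpy as np

A = np.array([[1 + np.pi / 4, 20.0], [0.1, 1 + np.pi / 4]])
B = np.array([[-1.3 * np.sqrt(2) * np.pi / 4, 0.1],
              [0.1, -1.3 * np.sqrt(2) * np.pi / 4]])
E1H1 = E2H2 = Et1Ht1 = Et2Ht2 = 0.2 * 0.2           # ||Ei|| ||Hi|| with Ei = Hi = 0.2*I
sigma, d_bar, min_dc = 1.0, 1.0, 1.0                # sigma_i = 1, D = C = I
chi, xi, w = 16.0, 0.05, 0.2

v = (-2 * min_dc + sigma * d_bar * (2 * (np.linalg.norm(A, 2) + Et1Ht1)
     + (1 + w ** -2) * (np.linalg.norm(B, 2) + Et2Ht2)
     + (2 * chi / xi) * (E1H1 + Et1Ht1 + E2H2 + Et2Ht2)))
print(v)                                            # approx. 181.98
print(-2 * np.log(abs(w)) / v)                      # approx. 0.0177, the bound on delta_bar
```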
6.6 Summary
In this chapter we have studied synchronization schemes for both identical and dif-
ferent chaotic systems. Sect. 6.2 studies the complete synchronization of a class
of chaotic systems. Sects. 6.3–6.5 mainly focus on robust synchronization, and the
synchronization error will converge to a predetermined level, which can be reduced
arbitrarily as long as some specific conditions hold. It should be pointed out that
the conditions we give for synchronization are only sufficient ones; continued
research toward less conservative conditions is therefore desirable, and such conditions
are interesting and important for secure communications.
References
1. Cao J (2001) Global stability conditions for delayed CNNs. IEEE Trans Circuits Syst I
48:1330–1333
2. Gau RS, Lien CH, Hsieh JG (2007) Global exponential stability for uncertain cellular neu-
ral networks with multiple time-varying delays via LMI approach. Chaos Solitons Fractals
32:1258–1267
3. Gilli M (1993) Strange attractors in delayed cellular neural networks. IEEE Trans Circuits
Syst I 40:849–853
4. Huang H, Li HX, Zhong J (2006) Master–slave synchronization of general Lur’e systems
with time-varying delay and parameter uncertainty. Int J Bifurc Chaos 16:281–294
5. Jiang GP, Zheng WX, Chen GR (2004) Global chaos synchronization with channel time-
delay. Chaos Solitons Fractals 20:267–275
6. Lakshmikantham V, Leela S, Martynyuk AA (1990) Practical Stability of Nonlinear Systems.
World Scientific, Singapore
7. Li CD, Liao XF, Yang XF (2005) Impulsive stabilization and synchronization of a class of
chaotic delay systems. Chaos 15:043103
8. Li P, Cao JD, Wang ZD (2007) Robust impulsive synchronization of coupled delayed neural
networks with uncertainties. Physica A 373:261–272
9. Liu QS, Cao JD, Xia YS (2005) A delayed neural network for solving linear projection equa-
tions and its analysis. IEEE Trans Neural Networks 16:834–843
10. Lu HT (2002) Chaotic attractors in delayed neural networks. Phys Lett A 298:109–116
11. Ma TD, Zhang HG, Wang ZL (2007) Impulsive synchronization for unified chaotic systems
with channel time-delay and parameter uncertainty. Acta Phys Sinica 56:3796–3802
12. Ma TD, Zhang HG (2008) Impulsive synchronization of a class of unified chaotic systems
with parameter uncertainty. J Syst Simul 18:4923–4926
13. McRae FA (1994) Practical stability of impulsive control systems. J Math Anal Appl
181:656–672
14. Moon YS, Park P, Kwon WH, Lee YS (2001) Delay-dependent robust stabilization of uncer-
tain state-delayed systems. Int J Control 74:1447–1455
15. Wang ZS, Zhang HG (2006) Global synchronization of a class of chaotic neural networks.
Acta Phys Sinica 55:2687–2693
16. Yan JR, Shen JH (1999) Impulsive stabilization and synchronization of functional differential
equations by Lyapunov-Razumikhin functions. Nonlinear Anal 37:245–255
17. Yang T (1999) Impulsive control. IEEE Trans Autom Control 44:1081–1083
18. Yang T (2001) Impulsive Control Theory. Springer, New York
19. Yang T, Chua LO (2000) Practical stability of impulsive synchronization between two nonau-
tonomous chaotic systems. Int J Bifurc Chaos 10:859–867
20. Zhang HG, Huang W, Wang ZL (2005) Impulsive control for synchronization of a class of
chaotic systems. Dyn Contin Discrete Impuls Syst B 12:153–161
21. Zhang HG, Xie YH, Liu D (2006) Synchronization of a class of delayed chaotic neural net-
works with fully unknown parameters. Dyn Contin Discrete Impuls Syst B 13:297–308
22. Zhang HG, Xie YH, Wang ZL, Zheng CD (2007) Adaptive synchronization between two
different chaotic neural networks with time delay. IEEE Trans Neural Networks 18:1841–
1845
23. Zou F, Nossek JA (1993) Bifurcation and chaos in cellular neural networks. IEEE Trans Cir-
cuits Syst I 40:166–173
Chapter 7
Synchronization of Chaotic Systems with Time
Delay
Abstract In many physical, industrial, and engineering systems, delays occur due
to the finite capabilities of information processing and data transmission among
various parts of the system. Delays could arise as well from inherent physical phe-
nomena like mass transport flow or recycling. Also, they could be by-products of
computational delays or could intentionally be introduced for some design consid-
erations. Such delays could be constant or time varying, known or unknown, deter-
ministic or stochastic depending on the system under consideration. In all of these
cases, the time-delay factors have counteracting effects on the system behavior and
most of the time lead to poor performance. Therefore, the subject of time-delay sys-
tems has been investigated as functional differential equations over the past three
decades. In this chapter, we study how to synchronize chaotic systems when time
delay exists and the synchronized systems have different structures. We first develop
synchronization methods for a class of delayed chaotic systems when the drive sys-
tem and the response system have the same structure but different parameters. After
that, the problem of synchronizing different chaotic systems is studied. Some con-
crete examples are presented to show how to design the controller. Based on that,
a more general case, synchronizing two different delayed chaotic neural networks
with known and unknown parameters, is considered.
7.1 Introduction
In the last two chapters, we have studied the synchronization of chaotic systems
which are described by ordinary differential equations. However, in many physical,
industrial, and engineering systems, delays occur due to the finite capabilities of
information processing and data transmission among various parts of the system.
Delays could arise as well from inherent physical phenomena like mass transport
flow or recycling. Also, they could be by-products of computational delays or could
intentionally be introduced for some design considerations. Such delays could be
constant or time varying, known or unknown, deterministic or stochastic depending
on the system under consideration. In all of these cases, the time-delay factors have,
by and large, counteracting effects on the system behavior and most of the time
lead to poor performance. Therefore, the subject of time-delay systems has been
investigated as functional differential equations over the past three decades. This has
occupied a separate discipline in mathematical sciences falling between differential
and difference equations.
It is well known that chaos can easily be generated in a time-delay system. When
synchronizing two time-delay chaotic systems, the values of delays and parameters
of the systems are often unknown in advance. In some special cases, the structure
or parameters of the drive system are even unknown in advance. Therefore, in this
chapter, we study the synchronization of chaotic systems with time delays and with
different structures. In Sect. 7.2, we first develop the synchronization methods for
a class of delayed chaotic systems when the drive system and the response system
have the same structure but different parameters. After that, in Sect. 7.3, the problem
of synchronizing different chaotic systems is studied. Some examples are presented
to show how to design the controller. Based on that, in Sect. 7.4, a more general
case, synchronizing two different delayed chaotic neural networks with known and
unknown parameters, is considered.
In this section, the synchronization problem of two identical delayed chaotic sys-
tems with unknown parameters is studied. We take the delayed Chen system and
the delayed Lorenz system as examples to show the effectiveness of the proposed
method.
The drive delayed Chen chaotic system [14] in this section is described by the fol-
lowing differential equations:
    ẋ1(t) = a(x2(t) − x1(t)),
    ẋ2(t) = (c − a)x1(t) − x1(t)x3(t) + (c − d)x2(t) + dx2(t − τ),      (7.1)
    ẋ3(t) = x1(t)x2(t) − bx3(t),
where a, b, c, and d are the unknown parameters of system (7.1), d < η , and η is
a positive constant. The chaotic behavior of the drive system (7.1) is shown in Fig.
7.1, when the initial values of the parameters of the drive system (7.1) are selected
as a = 35, b = 3, c = 28, d = 3, and τ = 0.1.
Fig. 7.1 The chaotic attractor of the drive delayed Chen chaotic system (7.1)
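For readers who wish to reproduce the attractor in Fig. 7.1, the sketch below integrates the delayed Chen system (7.1) with a simple fixed-step forward-Euler scheme and a history buffer for the delayed state. It is our own illustration; the step size, horizon, and constant initial history are assumptions, not values from the book.

```python
# Rough forward-Euler integration of the delayed Chen system (7.1) (our sketch).
import numpy as np

a, b, c, d, tau = 35.0, 3.0, 28.0, 3.0, 0.1
h, T = 1e-3, 50.0
lag = int(round(tau / h))                    # delay expressed in integration steps
n = int(T / h)

x = np.zeros((n + 1, 3))
x[:lag + 1] = np.array([0.1, 0.1, 0.1])      # constant history on [-tau, 0] (assumed)

for k in range(lag, n):
    x1, x2, x3 = x[k]
    x2_tau = x[k - lag, 1]                   # x2(t - tau)
    dx = np.array([a * (x2 - x1),
                   (c - a) * x1 - x1 * x3 + (c - d) * x2 + d * x2_tau,
                   x1 * x2 - b * x3])
    x[k + 1] = x[k] + h * dx

print(x[-1])   # final state; plotting x[:, 0], x[:, 1], x[:, 2] shows the attractor
```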
The response delayed Chen chaotic system is described by the following equations:
    ẏ1(t) = a(y2(t) − y1(t)) + u1,
    ẏ2(t) = (c − a)y1(t) − y1(t)y3(t) + (c − d)y2(t) + dy2(t − τ) + u2,      (7.2)
    ẏ3(t) = y1(t)y2(t) − by3(t) + u3,
where u1 , u2 , and u3 denote the designed controllers that will realize the synchro-
nization of system (7.1) and system (7.2).
Let us define the synchronization error signals as ei(t) = yi(t) − xi(t) and eiτ(t) =
yi(t − τ) − xi(t − τ) (i = 1, 2, 3). The fact that ei(t) → 0 as t → ∞ means that the
drive system and the response system are synchronized. The goal of the controller
design is to obtain ui(t) so that lim_{t→∞} ‖ei(t)‖ = lim_{t→∞} ‖yi(t) − xi(t)‖ = 0, where ‖·‖
denotes the Euclidean norm.
Therefore, the error system between (7.1) and (7.2) can be expressed as
    ė1 = a(e2 − e1) + u1,
    ė2 = (c − a)e1 + (c − d)e2 − e1e3 − e3x1 − e1x3 + de2τ + u2,      (7.3)
    ė3 = e1e2 + e1x2 + e2x1 − be3 + u3.
the drive system (7.1) and the response system (7.2) will be globally asymptotically
synchronized. Here, â, b̂, ĉ, and dˆ are the estimated values of unknown parameters
a, b, c, and d, respectively, and k > (η 2 + 1)/2.
Proof. Substituting (7.4) into (7.3), the error system (7.3) can be rewritten as
    ė1 = (a − â)(e2 − e1) − e1,
    ė2 = (c − ĉ − a + â)e1 + (c − ĉ − d + d̂ − k)e2 − e1e3 − e3x1 + de2τ,      (7.6)
    ė3 = e1e2 + e2x1 − (b − b̂)e3 − e3.
Then, the time derivative of V (e) along the trajectory of the error dynamical system
(7.6) is as follows:
    V̇(e) = e1ė1 + e2ė2 + e3ė3 + eaėa + ebėb + ecėc + edėd + (1/2)e2² − (1/2)e2τ²
         = e1(ea(e2 − e1) − e1) + e3(e1e2 + e2x1 − ebe3 − e3)
           + e2((ec − ea)e1 + (ec − ed − k)e2 − e1e3 − e3x1 + de2τ)
           − (ea â˙ + eb b̂˙ + ec ĉ˙ + ed d̂˙) + (1/2)e2² − (1/2)e2τ².      (7.7)
Substituting (7.5) into (7.7), we obtain
    V̇(e) = −e1² − e3² − ke2² + de2e2τ + (1/2)e2² − (1/2)e2τ²
         ≤ −e1² − e3² − ke2² + (d²/2)e2² + (1/2)e2τ² + (1/2)e2² − (1/2)e2τ²
         ≤ −e1² − e3² − (k − η²/2 − 1/2)e2².
Setting Q = k − (η² + 1)/2, we obtain V̇(e) ≤ −(e1² + e3² + Qe2²).
When k > (η² + 1)/2, i.e., Q > 0, we get V̇(e) ≤ 0, which implies that V̇(e) < 0
for all e(t) ≠ 0. Since V(e) is positive definite and V̇(e) is negative semidefinite, it
follows that e1 ∈ L∞, e2 ∈ L∞, and e3 ∈ L∞. From the fact that
    ∫_0^t (e1² + Qe2² + e3²) ds ≤ −∫_0^t V̇ ds ≤ V(e1(0), e2(0), e3(0)),
we can show that e1, e2, e3 ∈ L2. From (7.6), ė(t) ∈ L∞; thus, by Barbalat's lemma,
lim_{t→∞} ei(t) = 0 (i = 1, 2, 3).
This means that the proposed controller (7.4) and (7.5) can globally asymptotically
synchronize the system (7.1) and the system (7.2). This completes the proof.
Fig. 7.2 The synchronization error curves of the drive system (7.1) and the response system (7.2)
Fig. 7.6 The curve of the adaptive parameter d̂(t)
The synchronization error curves of the drive system (7.1) and the response system (7.2) with the
controller u(t) = (u1(t), u2(t), u3(t))ᵀ are shown in Fig. 7.2. The curves of the estimated
parameters â, b̂, ĉ, and d̂ are shown in Figs. 7.3–7.6, respectively. From simulation
results we can see that the synchronization errors converge asymptotically to
zero, i.e., the trajectory of the response delayed Chen chaotic system can synchro-
nize asymptotically with the trajectory of the drive delayed Chen chaotic system.
The estimated parameters converge asymptotically to some constants, which shows
the effectiveness of the adaptive synchronization scheme.
The drive delayed Lorenz chaotic system [13] is described by the following differ-
ential equations:
    ẋ1(t) = a(x2(t) − x1(t)),
    ẋ2(t) = cx1(t) − x1(t)x3(t) + (d − 1)x2(t) − dx2(t − τ),      (7.8)
    ẋ3(t) = x1(t)x2(t) − bx3(t),
where a, b, c, and d are the unknown parameters of the system (7.8), and d < η .
The chaotic behavior of the drive system (7.8) is shown in Fig. 7.7, when the initial
values of the parameters of the drive system (7.8) are selected as a = 10, b = 8/3,
c = 5, d = 6.5, and τ = 0.5.
The response delayed Lorenz chaotic system is described by the following equa-
tions:
    ẏ1(t) = a(y2(t) − y1(t)) + u1,
    ẏ2(t) = cy1(t) − y1(t)y3(t) + (d − 1)y2(t) − dy2(t − τ) + u2,      (7.9)
    ẏ3(t) = y1(t)y2(t) − by3(t) + u3,
where u1, u2, and u3 denote the designed controllers that will realize the synchronization
of system (7.8) and system (7.9).
Therefore, the error system between (7.8) and (7.9) can be expressed as
    ė1 = a(e2 − e1) + u1,
    ė2 = ce1 + (d − 1)e2 − e1e3 − e3x1 − e1x3 − de2τ + u2,      (7.10)
    ė3 = e1e2 + e1x2 + e2x1 − be3 + u3.
Fig. 7.7 The chaotic attractor of the drive delayed Lorenz chaotic system (7.8)
the drive system (7.8) and the response system (7.9) will be globally asymptotically
synchronized. Here, â, b̂, ĉ, and dˆ are the estimated values of unknown parameters
a, b, c, and d, respectively, and k > (η 2 − 1)/2.
Proof. Substituting (7.11) into (7.10), the error system (7.10) can be rewritten as
    ė1 = (a − â)(e2 − e1) − e1,
    ė2 = (c − ĉ)e1 + (d − d̂ − k − 1)e2 − e1e3 − e3x1 − de2τ,      (7.13)
    ė3 = e1e2 + e2x1 − (b − b̂)e3 − e3.
Then, the time derivative of V (e) along the trajectory of the error dynamical system
(7.13) is as follows:
    V̇(e) = e1ė1 + e2ė2 + e3ė3 + eaėa + ebėb + ecėc + edėd + (1/2)e2² − (1/2)e2τ²
         = e1(ea(e2 − e1) − e1) + e3(e1e2 + e2x1 − ebe3 − e3)
           + e2(ece1 + (ed − k − 1)e2 − e1e3 − e3x1 − de2τ)
           − (ea â˙ + eb b̂˙ + ec ĉ˙ + ed d̂˙) + (1/2)e2² − (1/2)e2τ².      (7.14)
Substituting (7.12) into (7.14), we obtain
    V̇(e) = −e1² − e3² − ke2² − de2e2τ − (1/2)e2² − (1/2)e2τ²
         ≤ −e1² − e3² − ke2² + (d²/2)e2² + (1/2)e2τ² − (1/2)e2² − (1/2)e2τ²
         ≤ −e1² − e3² − (k − η²/2 + 1/2)e2².      (7.15)
Setting Q = k − (η² − 1)/2, we obtain V̇(e) ≤ −(e1² + e3² + Qe2²).
When k > (η² − 1)/2, i.e., Q > 0, we get V̇(e) ≤ 0, which implies that V̇(e) < 0
for all e(t) ≠ 0. Since V(e) is positive definite and V̇(e) is negative semidefinite, it
follows that e1 ∈ L∞, e2 ∈ L∞, and e3 ∈ L∞. From the fact that
    ∫_0^t (e1² + Qe2² + e3²) ds ≤ −∫_0^t V̇ ds ≤ V(e1(0), e2(0), e3(0)),
we can show that e1, e2, e3 ∈ L2. From (7.13), ė(t) ∈ L∞; thus, by Barbalat's lemma,
lim_{t→∞} ei(t) = 0 (i = 1, 2, 3), i.e., the drive system (7.8) and the response system (7.9)
are globally asymptotically synchronized. This completes the proof.
Fig. 7.8 The synchronization error curves of the drive system (7.8) and the response system (7.9)
Fig. 7.12 The curve of the adaptive parameter d̂(t)
From the simulation results we can see that the synchronization errors converge asymptotically to zero, i.e., the trajectory of the response delayed
Lorenz chaotic system can synchronize asymptotically with the trajectory of the
drive delayed Lorenz chaotic system. The estimated parameters converge asymptot-
ically to some constants, which shows the effectiveness of the adaptive synchroniza-
tion scheme.
In the last section, the underlying assumption is that the drive system and the re-
sponse system have identical dynamic structures. In this section, we consider the
problem of synchronizing two different delayed chaotic systems with unknown pa-
rameters. Three examples are used to illustrate the design procedure of the proposed
synchronization methods.
The drive delayed Chen system in this section is described by differential equations
(7.1).
The response delayed Lorenz system is described by the following equations:
    ẏ1(t) = α(y2(t) − y1(t)) + u1,
    ẏ2(t) = γy1(t) − y1(t)y3(t) + (θ − 1)y2(t) − θy2(t − τ) + u2,      (7.16)
    ẏ3(t) = y1(t)y2(t) − βy3(t) + u3,
where α , β , γ , and θ are the unknown parameters of system (7.16) and u1 , u2 , and
u3 denote the designed controllers that will realize the synchronization of system
(7.1) and system (7.16).
Therefore, the error system between (7.1) and (7.16) can be expressed as
    ė1 = α(y2 − y1) − a(x2 − x1) + u1,
    ė2 = γy1 − (c − a)x1 − e1e3 − e3x1 − e1x3
         + (θ − 1)y2 − (c − d)x2 − θy2τ − dx2τ + u2,      (7.17)
    ė3 = e1e2 + e1x2 + e2x1 − βy3 + bx3 + u3.
the drive system (7.1) and the response system (7.16) will be globally asymptotically
synchronized. Here, â, b̂, ĉ, d,ˆ α̂ , β̂ , γ̂ , and θ̂ are the estimated values of unknown
parameters a, b, c, d, α , β , γ , and θ , respectively.
Proof. Substituting (7.18) into (7.17), the error system (7.17) can be rewritten as
    ė1 = (α − α̂)(y2 − y1) − (a − â)(x2 − x1) − e1,
    ė2 = (γ − γ̂)y1 − (c − ĉ − a + â)x1 − (c − ĉ − d + d̂)x2
         + (θ − θ̂)y2 − (θ − θ̂)y2τ − (d − d̂)x2τ − e2 − e1e3 − e3x1,      (7.20)
    ė3 = e1e2 + e2x1 − (β − β̂)y3 + (b − b̂)x3 − e3.
which implies that V̇(e) < 0 for all e(t) ≠ 0. It is clear that e1 ∈ L∞, e2 ∈ L∞, and
e3 ∈ L∞. From the fact that
    ∫_0^t (e1² + e2² + e3²) ds ≤ −∫_0^t V̇ ds ≤ V(e1(0), e2(0), e3(0)),
we can show that e1, e2, e3 ∈ L2; thus, by Barbalat's lemma, the synchronization errors
converge asymptotically to zero. This completes the proof.
Fig. 7.13 The synchronization error curves of the drive system (7.1) and the response system
(7.16)
Fig. 7.17 The curve of the adaptive parameter d̂(t)
The drive delayed Ikeda chaotic system [4] in this section is described by the following
differential equation:
    ẋ(t) = −cx(t) + a f(x(t − τ)),      (7.22)
where τ > 0 denotes the bounded delay, a > 0, and c > 0. We assume that c is the unknown
parameter of system (7.22), and a is the known parameter of system (7.22).
The initial condition of system (7.22) is given by x(t) = ϕx(t) ∈ C([−τ, 0], R),
where C([−τ, 0], R) denotes the set of all continuous functions from [−τ, 0] to R.
The function f(x) is a one-dimensional continuous nonlinear function.
The response delayed Ikeda chaotic system is described by the following equa-
tion:
ẏ (t) = −dy (t) + bg (y (t − τ )) + u (t) , (7.23)
where τ > 0 denotes the bounded delay, b > 0, and d > 0. We assume that d is
the unknown parameter of the system (7.23), and b is the known parameter of the
system (7.23). The initial conditions of system (7.23) are given by y(t) = ϕy(t) ∈
C([−τ, 0], R), where C([−τ, 0], R) denotes the set of all continuous functions from
[−τ, 0] to R. The function g(y) is a one-dimensional continuous nonlinear function,
and u(t) denotes the designed controller that will achieve the goal of synchronizing
system (7.22) and system (7.23), i.e., lim_{t→∞} ‖e(t)‖ = lim_{t→∞} ‖y(t) − x(t)‖ = 0.
For this problem, we have the following theorem.
For this problem, we have the following theorem.
Theorem 7.4 ([16]). By the following controller:
    u(t) = −ĉx(t) + a f(x(t − τ)) + d̂y(t) − bg(y(t − τ)) − ke(t)      (7.24)
    ĉ˙ = eᵀ(t)x(t),
    d̂˙ = −eᵀ(t)y(t),      (7.25)
the drive system (7.22) and the response system (7.23) will be globally asymptot-
ically synchronized. Here, k > 0 and ĉ and dˆ are the estimated values of unknown
parameters c and d, respectively.
Proof. From (7.22)–(7.24), we get the error system as follows:
    ė(t) = −(d − d̂)y(t) + (c − ĉ)x(t) − ke(t).      (7.26)
Define ε := (eᵀ(t), ĉ, d̂)ᵀ. If a Lyapunov function candidate is chosen as
    V(ε) = (1/2)[eᵀ(t)e(t) + (c − ĉ)² + (d − d̂)²],
then the time derivative of V(ε) along the trajectory of the error system (7.26) is as
follows:
    V̇(ε) = eᵀ(t)ė(t) + (c − ĉ)(−ĉ˙) + (d − d̂)(−d̂˙)
         = eᵀ(t)[−(d − d̂)y(t) + (c − ĉ)x(t)] − (c − ĉ)ĉ˙ − (d − d̂)d̂˙ − keᵀ(t)e(t).      (7.27)
Substituting (7.25) into (7.27), we obtain V̇(ε) = −keᵀ(t)e(t) ≤ 0,
which implies that V̇(ε) < 0 for all e(t) ≠ 0. It is clear that e(t) ∈ L∞, ĉ ∈ L∞, and
d̂ ∈ L∞. From the fact that
    ∫_0^t ‖e(s)‖² ds = [V(e(0)) − V(e(t))]/k ≤ V(e(0))/k,
we can easily show that e(t) ∈ L2. From (7.26) we have ė(t) ∈ L∞. Thus, by Barbalat's
lemma, we have lim_{t→∞} e(t) = 0. This means that the proposed controller (7.24)
can globally asymptotically synchronize the system (7.22) and the system (7.23).
This completes the proof.
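To see the adaptive scheme (7.24)–(7.25) at work, the sketch below simulates two scalar delayed systems of the form (7.22)–(7.23). It is our own illustration: the parameter values, the sine activations, and the constant initial histories are assumptions and are not taken from the book's example (7.28)–(7.29).

```python
# Adaptive synchronization of two scalar delayed systems per (7.22)-(7.26) (our sketch).
import numpy as np

a, c = 4.0, 1.0          # drive:    x' = -c*x + a*sin(x(t - tau)); c unknown to the controller
b, d = 4.0, 1.5          # response: y' = -d*y + b*sin(y(t - tau)); d unknown to the controller
tau, k = 2.0, 1.0
h, T = 1e-3, 60.0
lag, n = int(round(tau / h)), int(T / h)

x = np.full(n + 1, 0.5)                  # constant initial histories (assumed)
y = np.full(n + 1, 1.0)
c_hat, d_hat = 0.0, 0.0
f = g = np.sin

for j in range(lag, n):
    e = y[j] - x[j]
    u = -c_hat * x[j] + a * f(x[j - lag]) + d_hat * y[j] - b * g(y[j - lag]) - k * e  # (7.24)
    x[j + 1] = x[j] + h * (-c * x[j] + a * f(x[j - lag]))
    y[j + 1] = y[j] + h * (-d * y[j] + b * g(y[j - lag]) + u)
    c_hat += h * e * x[j]                # updating laws (7.25)
    d_hat += h * (-e * y[j])

print(abs(y[-1] - x[-1]), c_hat, d_hat)  # the error should be close to zero
```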
The following illustrative example is used to demonstrate the effectiveness of the
above method.
Assume that the following delayed Ikeda chaotic system [7] is the drive system:
where the activation function f (x (t − τ )) = sin (x (t − τ )), and the following de-
layed Ikeda chaotic system [6] is the response system:
Fig. 7.22 The state curve of the drive chaotic system (7.28)
Fig. 7.23 The state curve of the response chaotic system (7.29)
Fig. 7.24 The synchronization error curve of the drive system (7.28) and the response system
(7.29)
Fig. 7.25 The curve of the adaptive parameter ĉ(t)
Fig. 7.26 The curve of the adaptive parameter d̂(t)
The synchronization error curve of the drive system (7.28) and the response system (7.29) is shown in Fig. 7.24. The curves of the estimated parameters ĉ and
dˆ are shown in Figs. 7.25 and 7.26, respectively. From simulation results we can
see that the synchronization errors converge asymptotically to zero, and the esti-
mated parameters converge asymptotically to some constants, i.e., the trajectory of
the response system can synchronize asymptotically with the trajectory of the drive
system.
The drive delayed Liao chaotic system [3] in this section is described by the follow-
ing differential equation:
where τ > 0 denotes the bounded delay, α , a, and b are the parameters of the sys-
tem (7.30), and c is a constant. We assume that α is the unknown parameter of
system (7.30), and a > 0 and b > 0 are the known parameters of the system. The
initial conditions of system (7.30) are given by x(t) = ϕx (t) ∈ C([−τ , 0], R), where
C ([−τ , 0] , R) denotes the set of all continuous functions from [−τ , 0] to R. The
function f (x) is a one-dimensional and continuous nonlinear function.
The response delayed Liao chaotic system is described by the following equation:
where τ > 0 denotes the bounded delay and β , m, and n are the parameters of
system (7.31). We assume that β is the unknown parameter of system (7.31), and
m > 0 and n > 0 are the known parameters of the system. The initial conditions of
system (7.31) are given by y(t) = ϕy (t) ∈ C([−ρ , 0], R). The function g (y) is a
one-dimensional and continuous nonlinear function, and u(t) denotes the designed
controller that will realize the synchronization of system (7.30) and system (7.31),
i.e., lim_{t→∞} ‖e(t)‖ = lim_{t→∞} ‖y(t) − x(t)‖ = 0.
Theorem 7.5 ([16]). By the following controller:
    α̂˙ = eᵀ(t)x(t),
    β̂˙ = −eᵀ(t)y(t),      (7.33)
the drive system (7.30) and the response system (7.31) will be globally asymptoti-
cally synchronized. Here, k > 0 and α̂ and β̂ are the estimated values of unknown
parameters α and β , respectively.
Proof. From (7.30)–(7.32), we get the error system as follows:
    ė(t) = −(β − β̂)y(t) + (α − α̂)x(t) − ke(t).      (7.34)
Define
ε := (eT (t) , α̂ , β̂ )T .
If a Lyapunov function candidate is chosen as
    V(ε) = (1/2)[eᵀ(t)e(t) + (α − α̂)² + (β − β̂)²],
then the time derivative of V (ε ) along the trajectory of the error system (7.34) is as
follows:
    V̇(ε) = eᵀ(t)ė(t) + (α − α̂)(−α̂˙) + (β − β̂)(−β̂˙)
         = eᵀ(t)[−(β − β̂)y(t) + (α − α̂)x(t)] − (α − α̂)α̂˙ − (β − β̂)β̂˙ − keᵀ(t)e(t).      (7.35)
Substituting (7.33) into (7.35), we obtain V̇(ε) = −keᵀ(t)e(t) ≤ 0,
which implies that V̇(ε) < 0 for all e(t) ≠ 0. It is clear that e(t) ∈ L∞, α̂ ∈ L∞, and
β̂ ∈ L∞. From the fact that
    ∫_0^t ‖e(s)‖² ds = [V(e(0)) − V(e(t))]/k ≤ V(e(0))/k,
we can easily show that e(t) ∈ L2 . From (7.34) we have ė(t) ∈ L∞ . Thus, by Bar-
balat's lemma, we have lim_{t→∞} e(t) = 0. This means that the proposed controller (7.32)
can globally asymptotically synchronize the system (7.30) and the system (7.31).
This completes the proof.
Assume that the following delayed Liao chaotic system [8] is the drive system:
Fig. 7.27 The state curve of the drive chaotic system (7.37)
Fig. 7.28 The state curve of the response chaotic system (7.38)
The initial conditions of the drive system (7.37) are taken as x (s) = 0.5, s ∈
[−1, 0], α = 1, a = 3, b = 4.5, a1 = 2, a2 = −1.5, k1 = 1, k2 = 4/3, c = 0, and
τ = 1. The state curve of the drive system (7.37) is shown in Fig. 7.27.
Take the following delayed Liao chaotic system [12] as the response system:
The initial conditions of the response system (7.38) are taken as y (s) = 1, s ∈ [−1, 0],
β = 1, m = 3, n = 4.5, a1 = 2, a2 = −1.5, k1 = 1, k2 = 4/3, c = 0, and τ = 1. The
state curve of the response system (7.38) is shown in Fig. 7.28.
According to Theorem 7.5, the initial values of the drive system and the response
system are x (0) = 0.7 and y (0) = 1.2. Choose α̂ (0) = 0.7, β̂ (0) = 1.1, and k = 1.
The synchronization error curve of the drive system (7.37) and the response sys-
tem (7.38) is shown in Fig. 7.29. The curves of the estimated parameters α̂ and
β̂ are shown in Figs. 7.30 and 7.31, respectively. From simulation results we can
see that the synchronization errors converge asymptotically to zero, and the esti-
mated parameters converge asymptotically to some constants, i.e., the trajectory of
Fig. 7.29 The synchronization error curve of the drive system (7.37) and the response system
(7.38)
the response system can synchronize asymptotically with the trajectory of the drive
system.
In this section, the problem of synchronizing two identical chaotic DNNs is stud-
ied. Under the framework of the inverse optimal control approach, synchronization
controllers and adaptive updating laws of parameters with known and unknown pa-
rameters are designed, respectively [18].
A class of delayed chaotic neural networks in this section is described by the fol-
lowing differential equation:
    ẋi(t) = −ci xi(t) + ∑_{j=1}^{n} aij gj(xj(t)) + ∑_{j=1}^{n} bij gj(xj(t − τj)) + Ui,      (7.39)
where n denotes the number of neurons in the network, xi denotes the state variable
associated with the ith neuron, τj ≥ 0 denotes the bounded delay and ρ = max_j{τj},
ci > 0, aij indicates the strength of the neuron interconnections within the network,
bij indicates the strength of the neuron interconnections within the network with
constant delay parameter τj, i, j = 1, . . . , n, and Ui is a constant input.
The activation function gj(xj) satisfies
    0 ≤ (gj(ξ) − gj(ζ))/(ξ − ζ) ≤ δj
for all ξ, ζ ∈ R and ξ ≠ ζ. The initial conditions of system (7.39) are given by
xi (t) = φi (t) ∈ C ([−ρ , 0] , R).
Chaotic dynamics is extremely sensitive to initial conditions. Even infinitesimal
changes in the initial conditions will lead to an exponential divergence of orbits.
In order to observe the synchronization behavior in this class of chaotic DNNs,
we study two chaotic DNNs where the drive system and the response system have
identical dynamical structures. The drive system’s state variables are denoted by xi
and the response system’s state variables are denoted by yi .
Suppose that the drive system has the form of (7.39). The response system is de-
scribed by the following equation:
    ẏi(t) = −ci yi(t) + ∑_{j=1}^{n} aij gj(yj(t)) + ∑_{j=1}^{n} bij gj(yj(t − τj)) + Ui + ui(t),      (7.40)
where the initial conditions of system (7.40) are given by yi (t) = ϕi (t) ∈ C([−ρ , 0] ,
R), and ui (t) denotes the designed controller that will realize the synchronization
of system (7.39) and system (7.40).
Let us define the synchronization error signal as ei (t) = yi (t) − xi (t), where xi (t)
and yi (t) are the ith state variables of the drive and response neural networks, re-
spectively. The fact that ei (t) → 0 as t → ∞ means that the drive neural network and
the response neural network are synchronized. The goal of the controller design is
to obtain ui(t) so that lim_{t→∞} ‖ei(t)‖ = lim_{t→∞} ‖yi(t) − xi(t)‖ = 0, where ‖·‖ denotes the
Euclidean vector norm.
Therefore, the error system between (7.39) and (7.40) can be expressed as
    ėi(t) = −ci ei(t) + ∑_{j=1}^{n} aij fj(ej(t)) + ∑_{j=1}^{n} bij fj(ej(t − τj)) + ui(t),      (7.41)
Lemma 7.1. For any vectors or matrices X and Y with appropriate dimensions and any positive-definite matrix Q,
    XᵀY + YᵀX ≤ XᵀQX + YᵀQ⁻¹Y.
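As a quick sanity check of this inequality (our own, not from the book), the difference XᵀQX + YᵀQ⁻¹Y − XᵀY − YᵀX can be verified numerically to be positive semidefinite for randomly chosen X, Y and a positive-definite Q:

```python
# Numerical check of Lemma 7.1 for random matrices (our sketch).
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
M = rng.standard_normal((4, 4))
Q = M @ M.T + 4 * np.eye(4)                       # positive definite

gap = X.T @ Q @ X + Y.T @ np.linalg.inv(Q) @ Y - X.T @ Y - Y.T @ X
print(np.min(np.linalg.eigvalsh((gap + gap.T) / 2)) >= -1e-10)   # True
```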
About the synchronization of systems (7.39) and (7.40), we have the following
theorem.
Theorem 7.6 ([18]). If the controller is designed as follows:
    u(t) = −(AAᵀ + BBᵀ + 2Σ²)e(t),      (7.43)
then the drive system (7.39) and the response system (7.40) will be globally asymptotically synchronized.
Proof. Choose the Lyapunov functional V(e) = (1/2)eᵀ(t)e(t) + (1/2)∑_{j=1}^{n} ∫_{t−τj}^{t} δj² ej²(s) ds.
Then, the time derivative of V(e) along the trajectory of the error system (7.42) is
as follows:
    V̇(e) = eᵀ(t)ė(t) + (1/2)eᵀ(t)Σ²e(t) − (1/2)eᵀ(t − τ)Σ²e(t − τ)
         = −eᵀ(t)Ce(t) + eᵀ(t)[A f(e(t)) + B f(e(t − τ))]
           + (1/2)eᵀ(t)Σ²e(t) − (1/2)eᵀ(t − τ)Σ²e(t − τ) + eᵀ(t)u(t)
         := L_f̄V + (L_gV)u(t),      (7.44)
where L_f̄V collects the terms of V̇(e) that do not involve u(t) and L_gV := eᵀ(t).
Applying Lemma 7.1 with Q = I to the cross terms and substituting the controller (7.43), we obtain
    V̇(e) ≤ −eᵀ(t)[C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²]e(t)
         ≤ −λmin(C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²)‖e‖²
         ≤ 0,
which implies that V̇(e) < 0 for all e(t) ≠ 0. Since V(e) is positive definite and V̇(e)
is negative semidefinite, it follows that e(t) ∈ L∞. From the fact that
    ∫_0^t ‖e(s)‖² ds = [V(e(0)) − V(e(t))]/λmin(C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²)
                     ≤ V(e(0))/λmin(C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²),
we can easily show that e(t) ∈ L2. From (7.41) we have ė(t) ∈ L∞. Thus, by Barbalat's
lemma, we have lim_{t→∞} e(t) = 0, i.e., lim_{t→∞} ‖e(t)‖ = lim_{t→∞} ‖y(t) − x(t)‖ = 0. This
means that the proposed controller (7.50) can globally asymptotically synchronize
the system (7.39) and the system (7.40). This completes the proof.
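The static gain in (7.43) is computed directly from the network data. The following sketch shows how it could be formed for a two-neuron DCNN; the matrices below are placeholders chosen by us for illustration and are not the book's example (7.57):

```python
# Forming the static synchronization gain of (7.43) and the decay rate (our sketch).
import numpy as np

C = np.eye(2)
A = np.array([[2.0, -0.1], [-5.0, 3.0]])          # placeholder connection matrix
B = np.array([[-1.5, -0.1], [-0.2, -2.5]])        # placeholder delayed-connection matrix
Sigma = np.diag([1.0, 1.0])                       # activation slopes delta_j

K = A @ A.T + B @ B.T + 2 * Sigma @ Sigma         # controller u(t) = -K e(t), cf. (7.43)
M = C + 0.5 * A @ A.T + 0.5 * B @ B.T + Sigma @ Sigma
print("gain K =\n", K)
print("lambda_min =", np.min(np.linalg.eigvalsh(M)))   # positive, so V_dot <= -lambda_min*||e||^2
```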
Under the framework of the inverse optimal control approach, the controller
(7.43) can be proved to optimize a cost functional of the following form:
    J(u) = lim_{t→∞} [ E(x(t)) + ∫_0^t ( l(x(τ)) + uᵀ(τ)R(x(τ))u(τ) ) dτ ],      (7.51)
where R = RT > 0, u (0) = 0, and l(x(τ )) and E(x(t)) are positive-definite and radi-
ally unbounded functions [5].
Theorem 7.7 ([18]). The following cost functional for system (7.42):
    J(u) = lim_{t→∞} [ 2βV(e(t)) + ∫_0^t ( l(e(τ)) + uᵀ(τ)R(e(τ))u(τ) ) dτ ]      (7.52)
and
    R(e) = β(AAᵀ + BBᵀ + 2Σ²)⁻¹,  β ≥ 2.      (7.54)
    l(e) ≥ −2βeᵀ(t)[−C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²]e(t)
           + βeᵀ(t)[AAᵀ + BBᵀ + 2Σ²]e(t)
         = 2βeᵀ(t)Ce(t).
This means that l(e) is radially unbounded. Substituting (7.48) into (7.44), we get
Thus, the minimum of the cost functional is Jmin (u) = 2β V (e (0)) for the optimal
controller (7.43). The proof is completed.
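The matrix identity behind the lower bound on l(e) above can be checked numerically; the snippet below is our own sanity check, not part of the original proof:

```python
# Check that -2*beta*(-C + 0.5*A*A' + 0.5*B*B' + Sigma^2) + beta*(A*A' + B*B' + 2*Sigma^2)
# equals 2*beta*C, which gives l(e) >= 2*beta*e' C e (our sketch).
import numpy as np

rng = np.random.default_rng(1)
n, beta = 3, 2.0
A, B = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = np.diag(rng.uniform(0.5, 2.0, n))
Sigma2 = np.diag(rng.uniform(0.5, 2.0, n))

lhs = -2 * beta * (-C + 0.5 * A @ A.T + 0.5 * B @ B.T + Sigma2) \
      + beta * (A @ A.T + B @ B.T + 2 * Sigma2)
print(np.allclose(lhs, 2 * beta * C))   # True
```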
Fig. 7.32 Chaotic behavior of the drive delayed chaotic cellular neural network
Fig. 7.33 The curves of the drive system’s state variables x1 and x2
Fig. 7.34 Chaotic behavior of the response delayed chaotic cellular neural network
Fig. 7.35 The error curves of the delayed chaotic cellular neural network
The synchronization error curves of the response system and the drive system
are shown in Fig. 7.35. From simulation results we can see that the synchronization
errors converge asymptotically to zero, i.e., the trajectory of the response delayed
chaotic cellular neural network can synchronize asymptotically with the trajectory
of the drive delayed chaotic cellular neural network.
In the above subsection, the parameters of the two neural networks to be synchro-
nized are known and are identical. Because the synchronized systems are inevitably
perturbed by external factors and cannot be exactly identical, synchronizing two
nonidentical delayed chaotic neural networks is more essential and useful in real-
world applications. In this section, we consider the synchronization problem of a
class of chaotic DNNs in which the parameters of the response system are tunable.
Suppose that the drive system has the form of (7.39), and the response system is
described by the following equation:
    ẏi(t) = −ĉi yi(t) + ∑_{j=1}^{n} âij gj(yj(t)) + ∑_{j=1}^{n} b̂ij gj(yj(t − τj)) + Ui + ui(t),      (7.58)
where ĉi , âi j , and b̂i j are tunable parameters of the response system whose updating
laws need to be designed, and ui (t) denotes the designed controller that will achieve
the synchronization of system (7.39) and system (7.58).
Let us define the synchronization error signal as ei (t) = yi (t) − xi (t). The fact
that ei (t) → 0 as t → ∞ means that the drive neural network and the response neural
network are synchronized. Therefore, the error system between (7.39) and (7.58)
can be expressed as
    ėi(t) = −ci ei(t) + ∑_{j=1}^{n} aij fj(ej(t)) + ∑_{j=1}^{n} bij fj(ej(t − τj))
            + (ci − ĉi)yi(t) + ∑_{j=1}^{n} (âij − aij)gj(yj(t))
            + ∑_{j=1}^{n} (b̂ij − bij)gj(yj(t − τj)) + ui(t),      (7.59)
the systems (7.39) and (7.58) will be globally asymptotically synchronized. Here,
i, j = 1, . . . , n and Σ = diag{δ1 , . . . , δn }.
Proof. Define
    ε := (eᵀ(t), c̃1, . . . , c̃n, ã11, . . . , ã1n, . . . , ãn1, . . . , ãnn, b̃11, . . . , b̃1n, . . . , b̃n1, . . . , b̃nn)ᵀ.
We choose
    V(ε) = (1/2)eᵀ(t)e(t) + (1/2)∑_{j=1}^{n} ∫_{t−τj}^{t} δj² ej²(s) ds
           + (1/2)∑_{i=1}^{n} c̃i² + (1/2)∑_{i,j=1}^{n} ãij² + (1/2)∑_{i,j=1}^{n} b̃ij².
Its time derivative can be derived as follows:
    V̇(ε) = eᵀ(t)ė(t) + (1/2)eᵀ(t)Σ²e(t) − (1/2)eᵀ(t − τ)Σ²e(t − τ)
           + ∑_{i=1}^{n} c̃i c̃˙i + ∑_{i,j=1}^{n} ãij ã˙ij + ∑_{i,j=1}^{n} b̃ij b̃˙ij
         = −eᵀ(t)Ce(t) + eᵀ(t)[A f(e(t)) + B f(e(t − τ))] + eᵀ(t)u(t)
           + eᵀ(t)[C̃y(t) + Ãg(y(t)) + B̃g(y(t − τ))] + (1/2)eᵀ(t)Σ²e(t)
           − (1/2)eᵀ(t − τ)Σ²e(t − τ) − ∑_{i=1}^{n} c̃i ĉ˙i
           + ∑_{i,j=1}^{n} ãij â˙ij + ∑_{i,j=1}^{n} b̃ij b̂˙ij
         = L_f̄V + (L_gV)u(t),
where
    L_f̄V := −eᵀ(t)Ce(t) + eᵀ(t)[A f(e(t)) + B f(e(t − τ))]
             + eᵀ(t)[C̃y(t) + Ãg(y(t)) + B̃g(y(t − τ))] + (1/2)eᵀ(t)Σ²e(t)
             − (1/2)eᵀ(t − τ)Σ²e(t − τ) − ∑_{i=1}^{n} c̃i ĉ˙i
             + ∑_{i,j=1}^{n} ãij â˙ij + ∑_{i,j=1}^{n} b̃ij b̂˙ij
and
    L_gV := eᵀ(t).
Applying Lemma 7.1 with Q = I, we get
    V̇(ε) ≤ −eᵀ(t)Ce(t) + (1/2)eᵀ(t)AAᵀe(t) + (1/2)fᵀ(e(t))f(e(t))
           + (1/2)eᵀ(t)BBᵀe(t) + (1/2)fᵀ(e(t − τ))f(e(t − τ))
           + (1/2)eᵀ(t)Σ²e(t) − (1/2)eᵀ(t − τ)Σ²e(t − τ) + eᵀ(t)u(t)
           + eᵀ(t)[C̃y(t) + Ãg(y(t)) + B̃g(y(t − τ))]
           − ∑_{i=1}^{n} c̃i ĉ˙i + ∑_{i,j=1}^{n} ãij â˙ij + ∑_{i,j=1}^{n} b̃ij b̂˙ij
         ≤ −eᵀ(t)Ce(t) + (1/2)eᵀ(t)AAᵀe(t) + (1/2)eᵀ(t)Σ²e(t)
           + (1/2)eᵀ(t)BBᵀe(t) + (1/2)eᵀ(t − τ)Σ²e(t − τ)
           + (1/2)eᵀ(t)Σ²e(t) − (1/2)eᵀ(t − τ)Σ²e(t − τ) + eᵀ(t)u(t)
           + ∑_{i=1}^{n} c̃i(ei(t)yi(t) − ĉ˙i) + ∑_{i,j=1}^{n} ãij(â˙ij + ei(t)gj(yj(t)))
           + ∑_{i,j=1}^{n} b̃ij(b̂˙ij + ei(t)gj(yj(t − τj))).
Choosing the parameter updating laws (7.62) such that the last three sums vanish, we obtain
    V̇(ε) ≤ eᵀ(t)[−C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²]e(t) + eᵀ(t)u(t).      (7.63)
Substituting the controller (7.61) into (7.63) yields
    V̇(ε) ≤ −eᵀ(t)[C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²]e(t)
         ≤ −λmin(C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²)‖e‖²
         ≤ 0,
which implies that V̇(ε) < 0 for all e(t) ≠ 0. Since V(ε) is positive definite and
V̇(ε) is negative semidefinite, it follows that e(t) ∈ L∞, c̃i(t) ∈ L∞, ãij(t) ∈ L∞, and
b̃ij(t) ∈ L∞. From the fact that
    ∫_0^t ‖e(s)‖² ds = [V(e(0)) − V(e(t))]/λmin(C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²)
                     ≤ V(e(0))/λmin(C + (1/2)AAᵀ + (1/2)BBᵀ + Σ²),
we can easily show that e(t) ∈ L2. From (7.59) we have ė(t) ∈ L∞. Thus, by Barbalat's
lemma, we have lim_{t→∞} e(t) = 0, i.e., lim_{t→∞} ‖e(t)‖ = lim_{t→∞} ‖y(t) − x(t)‖ = 0. This
means that the proposed controller (7.61) can globally asymptotically synchronize
the system (7.39) and the system (7.58). This completes the proof.
Remark 7.1. Theorem 7.8 can similarly be proved by showing that the cost func-
tional (7.52) is minimized by the optimal control laws (7.61) and (7.62).
The following illustrative example is used to demonstrate the effectiveness of the
above method.
We consider that the drive delayed chaotic cellular neural network has the form
of (7.39). When the parameters and the initial conditions are the same as those in
(7.57), the chaotic behavior of the drive system is shown in Fig. 7.32.
The response delayed chaotic cellular neural network is chosen in the form of
(7.58). In this simulation, the initial values of ‘unknown’ parameter vectors of the
response system are selected as
Fig. 7.36 The error curves of the delayed chaotic cellular neural network
In this section, the problem of synchronizing two different chaotic DNNs with un-
known parameters is studied. By Lyapunov stability theory, the delay-independent
and delay-dependent adaptive synchronization controllers and adaptive updating
laws of parameters are designed, respectively [19].
where n ≥ 2 denotes the number of neurons in the network, xi(t) denotes the
state variable associated with the ith neuron, τj ≥ 0 denotes the bounded delay,
ρ = max_j{τj}, ci > 0 denotes the self connection of neurons, aij indicates the
strength of the neuron interconnections within the network, bi j indicates the strength
of the neuron interconnections within the network with constant delay τ j , and Hi
is an external constant input, i, j = 1, . . . , n. The activation function g j : R → R,
j ∈ {1, 2, . . . , n}, is bounded, and satisfies the condition gj(0) = 0 and the Lipschitz
condition with a Lipschitz constant Lx > 0, i.e., |gj(ξ) − gj(ς)| ≤ Lx|ξ − ς|
for all ξ, ς ∈ R and ξ ≠ ς. The initial conditions of system (7.64) are given by
xi (t) = φi (t) ∈ C ([−ρ , 0] , R), where C ([−ρ , 0] , R) denotes the set of all continu-
ous functions from [−ρ , 0] to R.
Suppose that the drive system has the form of (7.64). The response system is
described by the following equation:
    ẏi(t) = −di yi(t) + ∑_{j=1}^{n} pij fj(yj(t)) + ∑_{j=1}^{n} qij fj(yj(t − τj)) + Wi + ui(t),      (7.65)
where n ≥ 2 denotes the number of neurons in the network, yi (t) denotes the state
variable associated with the ith neuron, di > 0 denotes the self connection of neu-
rons, pi j and qi j indicate the interconnection strengths among neurons without and
with the bounded delay τ j , Wi is an external constant input, and ui (t) denotes the
external control input and will be appropriately designed to obtain a certain con-
trol objective, i, j = 1, . . . , n. The activation function f j : R → R ( j ∈ {1, 2, . . . , n})
is bounded, and satisfies the condition f j (0) = 0 and the Lipschitz condition with
a Lipschitz constant Ly > 0, i.e., |fj(ξ) − fj(ς)| ≤ Ly|ξ − ς| for all ξ, ς ∈ R
and ξ ≠ ς. The initial conditions of system (7.65) are given by yi(t) = ϕi(t) ∈
C ([−ρ , 0] , R).
Define the synchronization error signal ei (t) = yi (t) − xi (t). Our goal is to de-
sign a controller u(t) = (u1 (t), u2 (t), . . . , un (t))T such that the trajectory of the
response delayed neural network (7.65) can synchronize asymptotically with the
trajectory of the drive delayed neural network (7.64), i.e., lim_{t→∞} ‖e(t)‖ = 0, where
e(t) = (e1(t), e2(t), . . . , en(t))ᵀ.
In this section, we assume that ci and ai j are the unknown parameters of system
(7.64), and bi j is the known parameter of system (7.64). We assume that di and pi j
are the unknown parameters of the response system (7.65), and qi j is the known
parameter of the response system (7.65).
We will need the following assumption and definition.
Assumption 7.1. Both the states of the drive system and the response system are
bounded, i.e., x(t) ≤ Bd and y(t) ≤ Br .
Definition 7.1. A vector function Φ(x) is defined by
    Φ(x) = ‖x‖⁻¹x  if x ≠ 0,   and   Φ(x) = 0  if x = 0.
Clearly, we have Φ(0) = 0 from Definition 7.1.
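A direct transcription of Definition 7.1 into code is given below; this helper is our own addition and is only meant to show how the normalized vector function behaves at and away from the origin:

```python
# The normalized vector function of Definition 7.1 (our helper).
import numpy as np

def phi(x: np.ndarray) -> np.ndarray:
    """Return x / ||x|| for x != 0 and the zero vector for x = 0."""
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else np.zeros_like(x)

print(phi(np.array([3.0, 4.0])))   # [0.6, 0.8]
print(phi(np.zeros(2)))            # [0.0, 0.0]
```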
Theorem 7.9 ([19]). By the following controller:
Let c̃i (t) = ĉi (t) − ci , ãi j (t) = âi j (t) − ai j , d˜i (t) = dˆi (t) − di, and p̃i j (t) = p̂i j (t) − pi j
be the estimated errors of unknown parameters ci , ai j , di , and pi j , respectively. We
obtain the following compact form:
where τ = (τ1 , . . . , τn )T , C̃(t) = diag{c̃1 (t), c̃2 (t), . . . , c̃n (t)}, D̃(t) = diag{d˜1 (t), d˜2 (t),
. . . , d˜n (t)}, Ã(t) = (ãi j (t))n×n , P̃(t) = ( p̃i j (t))n×n , g(x(t − τ )) = (g1 (x1 (t − τ1 )), . . . ,
gn (xn (t − τn )))T , and f (y(t − τ )) = ( f1 (y1 (t − τ1 )), . . . , fn (yn (t − τn )))T .
Define
ε := (eT (t) , c̃1 (t), . . . , c̃n (t), ã11 (t), . . . , ã1n (t), . . . , ãn1 (t), . . . , ãnn (t),
d˜1 (t), . . . , d˜n (t), p̃11 (t), . . . , p̃1n (t), . . . , p̃n1 (t), . . . , p̃nn (t))T .
then the time derivative of V (ε ) along the trajectory of the error system (7.69) is as
follows:
    V̇(ε) = eᵀ(t)ė(t) + ∑_{i=1}^{n} c̃i(t)c̃˙i(t) + ∑_{i,j=1}^{n} ãij(t)ã˙ij(t)
           + ∑_{i=1}^{n} d̃i(t)d̃˙i(t) + ∑_{i,j=1}^{n} p̃ij(t)p̃˙ij(t)
    V̇(ε) ≤ ‖eᵀ(t)‖(Lx‖B‖‖x(t − τ)‖ + Ly‖Q‖‖y(t − τ)‖)
           − m1eᵀ(t)Φ(e) − m2eᵀ(t)Φ(e) − eᵀ(t)Ke(t)
         ≤ ‖eᵀ(t)‖(Lx‖B‖Bd + Ly‖Q‖Br) − m1‖e(t)‖ − m2‖e(t)‖ − eᵀ(t)Ke(t)
         = −eᵀ(t)Ke(t) ≤ 0,
which implies that V̇(ε) < 0 for all e(t) ≠ 0. Since V(ε) is positive definite and V̇(ε)
is negative semidefinite, it follows that e(t) ∈ L∞, c̃i(t) ∈ L∞, ãij(t) ∈ L∞, d̃i(t) ∈ L∞,
and p̃ij(t) ∈ L∞. From the fact that
    ∫_0^t ‖e(s)‖² ds = [V(e(0)) − V(e(t))]/min_i{ki} ≤ V(e(0))/min_i{ki},
we can easily show that e(t) ∈ L2. From (7.69) we have ė(t) ∈ L∞. Thus, by Barbalat's
lemma, we have lim_{t→∞} e(t) = 0, i.e., lim_{t→∞} ‖e(t)‖ = lim_{t→∞} ‖y(t) − x(t)‖ = 0. This
means that the proposed controller (7.66) can globally asymptotically synchronize
the system (7.64) and the system (7.65). This completes the proof.
The synchronization error curves of the drive system (7.72) and the response
system (7.73) with the controller u(t) are shown in Fig. 7.41. The curves of the es-
timated parameters Ĉ, Â, D̂, and P̂ are shown in Figs. 7.42–7.45, respectively. From
simulation results we can see that the synchronization errors converge asymptot-
ically to zero, i.e., the trajectory of the response delayed chaotic Hopfield neural
network can synchronize asymptotically with the trajectory of the drive delayed
chaotic cellular neural network. The estimated parameters converge asymptotically
to some constants, which shows the effectiveness of the adaptive synchronization
scheme.
Fig. 7.41 The synchronization error curves of the drive system (7.72) and the response system
(7.73)
In this section, we assume that ci , ai j , and bi j are the unknown parameters of system
(7.64) and di , pi j , and qi j are the unknown parameters of the response system (7.65).
Theorem 7.10 ([16]). By the following controller:
Let c̃i (t) = ĉi (t) − ci , ãi j (t) = âi j (t) − ai j , b̃i j (t) = b̂i j (t) − bi j , d˜i (t) = dˆi (t) − di ,
p̃i j (t) = p̂i j (t) − pi j , and q̃i j (t) = q̂i j (t) − qi j be the estimated errors of unknown
parameters ci , ai j , bi j , di , pi j , and qi j , respectively.
We obtain the following compact form:
where C̃(t) = diag{c̃1 (t), c̃2 (t), . . . , c̃n (t)}, D̃(t) = diag{d˜1 (t), d˜2 (t), . . . , d˜n (t)}, Ã(t)
= (ãi j (t))n×n , B̃(t) = (b̃i j (t))n×n , P̃(t) = ( p̃i j (t))n×n , and Q̃(t) = (q̃i j (t))n×n . Define
ε := [ eT (t), c̃1 (t), . . . , c̃n (t), ã11 (t), . . . , ã1n (t), . . . , ãn1 (t), . . . , ãnn (t),
b̃11 (t), . . . , b̃1n (t), . . . , b̃n1 (t), . . . , b̃nn (t), d˜1 (t), . . . , d˜n (t),
p̃11 (t), . . . , p̃1n (t), . . . , p̃n1 (t), . . . , p̃nn (t),
q̃11 (t), . . . , q̃1n (t), . . . , q̃n1 (t), . . . , q̃nn (t)]T .
then the time derivative of V (ε ) along the trajectory of the error system (7.76) is as
follows:
    V̇(ε) = eᵀ(t)ė(t) + ∑_{i=1}^{n} c̃i(t)c̃˙i(t) + ∑_{i,j=1}^{n} ãij(t)ã˙ij(t)
           + ∑_{i,j=1}^{n} b̃ij(t)b̃˙ij(t) + ∑_{i=1}^{n} d̃i(t)d̃˙i(t)
           + ∑_{i,j=1}^{n} p̃ij(t)p̃˙ij(t) + ∑_{i,j=1}^{n} q̃ij(t)q̃˙ij(t)
which implies that V̇(ε) < 0 for all e(t) ≠ 0. Since V(ε) is positive definite and V̇(ε)
is negative semidefinite, it follows that e(t) ∈ L∞, c̃i(t) ∈ L∞, ãij(t) ∈ L∞, b̃ij(t) ∈ L∞,
d̃i(t) ∈ L∞, p̃ij(t) ∈ L∞, and q̃ij(t) ∈ L∞. From the fact that
    ∫_0^t ‖e(s)‖² ds = [V(e(0)) − V(e(t))]/min_i{ki} ≤ V(e(0))/min_i{ki},
we can easily show that e(t) ∈ L2. From (7.76) we have ė(t) ∈ L∞. Thus, by Barbalat's
lemma, we have lim_{t→∞} e(t) = 0, i.e., lim_{t→∞} ‖e(t)‖ = lim_{t→∞} ‖y(t) − x(t)‖ = 0. This
means that the proposed controller (7.74) and (7.75) can globally asymptotically
synchronize the system (7.64) and the system (7.65). This completes the proof.
Fig. 7.46 The synchronization error curves of the drive system (7.72) and the response system
(7.73)
From the simulation results, the trajectory of the response delayed chaotic Hopfield neural network can synchronize asymptotically with the trajectory of the drive delayed
chaotic cellular neural network. The estimated parameters converge asymptotically
to some constants, which shows the effectiveness of the adaptive synchronization
scheme.
7.5 Summary
References
1. Cao J (2000) Periodic solution and exponential stability of delayed CNNs. Phys Lett A
270:157–163
2. Gilli M (1993) Strange attractors in delayed cellular neural networks. IEEE Trans Circuits
Syst I 40:849–853
3. Gopalsmay K, Issic LKC (1994) Convergence under dynamical thresholds with delays. IEEE
Trans Neural Networks 8:341–348
4. Ikeda K (1979) Multiple-valued stationary state and its instability of the transmitted light by
a ring cavity system. Opt Commun 30:257–261
5. Krstic M, Li ZH (1998) Inverse optimal design of input-to-state stabilizing nonlinear con-
trollers. IEEE Trans Autom Control 43:336–351
6. Li CD, Liao XF, Wong KW (2004) Chaotic lag synchronization of coupled time-delayed
systems and its applications in secure communication. Physica D 194:187–202
7. Li CD, Liao XF, Zhang R (2005) A unified approach for impulsive lag synchronization of
chaotic systems with time delay. Chaos Solitons Fractals 23:1177–1184
8. Liao XF, Wong KW, Leung CS, Wu Z (2001) Hopf bifurcation and chaos in a single delayed
neuron equation with non-monotonic activation function. Chaos Solitons Fractals 12:1535–
1547
9. Liu D, Zhang Y, Zhang HG (2005) Self-learning call admission control for CDMA cellular
networks. IEEE Trans Neural Networks 16:1219–1228
10. Liu D, Xiong X, DasGupta B, Zhang HG (2006) Motif discoveries in unaligned molecular
sequences using self-organizing neural networks. IEEE Trans Neural Networks 17:919–928
11. Lu HT (2002) Chaotic attractors in delayed neural networks. Phys Lett A 298:109–116
12. Peng J, Liao XF (2003) Synchronization of a coupled time-delay chaotic system and its ap-
plication to secure communications. Comput Res Dev 40:263–268
13. Ren HP, Liu D, Han CZ (2006) Anticontrol of chaos via direct time delay feedback. Acta
Phys Sinica 55:2694–2708
14. Song Y, Wei J (2004) Bifurcation analysis for Chen’s system with delayed feedback and its
application to control of chaos. Chaos Solitons Fractals 22:75–91
15. Wang ZS, Zhang HG, Wang ZL (2006) Global synchronization of a class of chaotic neural
networks. Acta Phys Sinica 55:2687–2693
16. Xie YH (2007) Researches on control and synchronization of several kinds of nonlinear
chaotic systems with time delay. Ph.D. dissertation, Northeastern University, Shenyang
17. Zhang HG, Guan HX, Wang ZS (2007) Adaptive synchronization of neural networks with
different attractors. Prog Nat Sci 17:687–695
18. Zhang HG, Xie Y, Liu D (2006) Synchronization of a class of delayed chaotic neural networks
with fully unknown parameters. Dyn Contin Discrete Impuls Syst B 13:297–308
19. Zhang HG, Xie Y, Wang ZL, Zheng CD (2007) Adaptive synchronization between two dif-
ferent chaotic neural networks with time-delay. IEEE Trans Neural Networks 18:1841–1845
20. Zhang HG, Wang ZS, Liu D (2007) Robust exponential stability of cellular neural networks
with multiple time varying delays. IEEE Trans Circuits Syst II 54:730–734
21. Zhang HG, Wang ZS (2007) Global asymptotic stability of delayed cellular neural networks.
IEEE Trans Neural Networks 18:947–950
22. Zhang Y (2002) Absolute periodicity and absolute stability of delayed neural networks. IEEE
Trans Circuits Syst I 49:256–261
Chapter 8
Synchronizing Chaotic Systems Based on Fuzzy
Models
Abstract A motivation for using fuzzy systems and fuzzy control stems in part from
the fact that they are particularly suitable for industrial processes when the physical
systems or qualitative criteria are too complex to model and they have provided
an efficient and effective way in the control of complex uncertain nonlinear or ill-
defined systems. In recent years, fuzzy logic systems have received much attention
from control theorists as a powerful tool for nonlinear control. In this chapter, we
first introduce fuzzy modeling methods for some classical chaotic systems via the
Takagi–Sugeno (T–S) fuzzy model. Next, we model some hyperchaotic systems
using the T–S fuzzy model and then, based on these fuzzy models, we develop
an H∞ synchronization method for two different hyperchaotic systems. Finally, the
problem of synchronizing a class of time-delayed chaotic systems based on the T–S
fuzzy model is considered.
8.1 Introduction
In recent years, fuzzy logic systems have received much attention from the control
community as a powerful tool for nonlinear control. A motivation for using fuzzy
systems and fuzzy control stems in part from the fact that they are particularly suit-
able for industrial processes when the physical systems or qualitative criteria are
too complex to model and they have provided an efficient and effective way in the
control of complex uncertain nonlinear or ill-defined systems.
Among various kinds of fuzzy control or system methods, the T–S fuzzy system
is widely accepted as a powerful tool for the design of fuzzy control [12, 19, 20,
21, 24]. The T–S fuzzy model is frequently used for mathematical simplicity of
analysis. In this type of fuzzy model, local dynamics in different state–space regions
is represented by linear models [13]. The overall model of the system is achieved by
fuzzy blending of these linear models.
It is well known that, since the pioneering work of Pecora and Carroll [11], synchronization of two chaotic dynamical systems has become one of the most important applications of chaos and has received much attention. Various control methods have been applied to synchronize two chaotic or hyperchaotic systems, such as linear feedback control, nonlinear feedback control, and impulsive control [1, 22, 23]. Among these synchronization methods, some require high gains in the design parameters, while others need Lipschitz conditions on the nonlinear terms. Moreover, many of the aforementioned results deal with only one or two kinds of specific chaotic systems. There are few unified methods suitable for the synchronization of a variety of chaotic systems.
In particular, for chaotic systems that evolve within a bounded region of the state
space, the T–S fuzzy model can represent the nonlinear dynamics by a small set of
linear subsystems coupled with linguistic variables. The idea of controlling nonlin-
ear dynamical systems based on the T–S fuzzy model can always be divided into
two steps: (i) represent the nonlinear system using the T–S fuzzy system and (ii)
design a controller for the fuzzy system. Based on the parallel distributed compen-
sation (PDC) scheme, a lot of research on the synchronization of fuzzy systems has
been carried out.
In this chapter, we first introduce fuzzy modeling methods for some classical
chaotic systems via the T–S fuzzy model. Next, we model some hyperchaotic sys-
tems with the T–S fuzzy model and then, based on the fuzzy model, we develop
an H∞ synchronization method for two different hyperchaotic systems. Finally, the
problem of synchronizing a class of time-delayed chaotic systems based on the T–S
fuzzy model is considered.
8.2 Modeling Chaotic Systems via T–S Fuzzy Models

The T–S fuzzy dynamical model, which originated with Takagi and Sugeno, is described by fuzzy IF–THEN rules whose consequent parts represent local linear models. To realize a fuzzy model-based design, chaotic systems should first be represented exactly by T–S fuzzy models. From an investigation of many well-known continuous-time and discrete-time chaotic systems, we find that their nonlinear terms either share a common variable or depend on only one variable. If we take this variable as the premise variable of the fuzzy rules, a simple fuzzy dynamical model can be obtained that represents the chaotic system exactly. We observe that all the well-known chaotic systems can then be applied to synchronization and secure communications, driven either by a fuzzy signal or by a crisp signal. In the following, we introduce how to represent many well-known chaotic systems by the T–S fuzzy model [9].
Consider a general chaotic dynamical system as follows:
$$R^i:\ \text{IF } z(t) \text{ is } M_i,\ \text{THEN } sx(t) = A_i x(t) + B_i u(t) + b_i, \tag{8.2}$$
where $sx(t)$ stands for $\dot{x}(t)$ for continuous-time systems and for $x(t+1)$ for discrete-time systems, $x(t) \in \mathbb{R}^n$ and $u(t) \in \mathbb{R}^m$ denote the state vector and the input vector, respectively, $A_i \in \mathbb{R}^{n\times n}$ and $B_i \in \mathbb{R}^{n\times m}$ are the known system matrix and input matrix of appropriate dimensions, respectively, $z(t)$ is a premise variable, $M_i$ is a fuzzy set, $b_i$ is a constant vector, $i = 1, 2, \ldots, r$, and $r$ is the number of fuzzy rules.
By taking a standard fuzzy inference strategy, i.e., using a singleton fuzzifier,
product fuzzy inference, and center average defuzzifier, the final continuous-time
fuzzy T–S system is inferred as follows:
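(a sketch of the usual weighted-average form, written here under the standard normalization of the rule weights $\mu_i$, which is an assumption of this presentation):
$$sx(t) = \sum_{i=1}^{r}\mu_i(z(t))\left[A_i x(t) + B_i u(t) + b_i\right], \qquad \mu_i(z(t)) = \frac{M_i(z(t))}{\sum_{j=1}^{r}M_j(z(t))}, \qquad \sum_{i=1}^{r}\mu_i(z(t)) = 1.$$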
The premise variables are assumed to be independent of the input vector u(t) in this chapter. However, it should be noted that all the stability conditions derived in this chapter can be applied even when the premise variables depend on the input vector. This assumption is employed to avoid a complicated defuzzification process for the fuzzy controllers.
Assumption 8.1. Considering the boundedness of chaotic systems, the fuzzy sets are chosen over the region containing the system trajectory,
$$\Omega := \{x(t)\in\mathbb{R}^n : \|x(t)\|\le\varepsilon\}.$$
For chaotic systems, the existence of the parameter ε is natural. Therefore, such nonlinear systems can be represented exactly by the T–S fuzzy model.
In the process of constructing a T–S fuzzy model (8.2) which represents exactly
the nonlinear system (8.1), we will focus on nonlinear terms of the nonlinear system.
The consistency of nonlinear terms in the system and its associated fuzzy represen-
tation is emphasized here. Without loss of generality, fuzzy modeling methods are proposed for three kinds of nonlinear terms [9]; that is, (i) only one variable in a nonlinear term; (ii) multiple variables in a nonlinear term; and (iii) multiple nonlinear terms in a system.
Case 1: The Hénon Map (Only One Variable in a Nonlinear Term) is given by
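the following map (an explicit form assumed here for concreteness, consistent with the fuzzy matrices derived below; it is the Hénon map up to a linear change of coordinates):
$$x_1(t+1) = -x_1^2(t) + 0.3\,x_2(t) + 1.4, \qquad x_2(t+1) = x_1(t).$$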
Consider the single scalar nonlinear function $f(x(t)) = x_1^2(t)$, which depends only on one state variable $x_k$ in the Hénon map ($x_k = x_1$). Let the nonlinear term $f(x_k)$ take the form $\phi(x_k)\tilde{x}_k$, where
$$\tilde{x}_k = \begin{cases} x_k, & \text{if } \displaystyle\lim_{x\in\Omega,\ x_k\to 0}\frac{f(x_k)}{x_k}\in L_\infty, \\ 1, & \text{otherwise.} \end{cases}$$
Take xk , which forms the function φ (xk ), as the premise variable. Then, the fuzzy
representation is composed of the following fuzzy rules:
IF xk is Fi ,
THEN fˆ = di x̃k ,
Without loss of generality, it is required that $\sum_{i=1}^{r}\mu_i(x_k)=1$ and $\mu_i(x_k)\in[0,1]$, which further yields $\phi(x_k)=\sum_{i=1}^{r}\mu_i(x_k)d_i$. We let $r=2$ and specify the membership functions. From $\mu_1(x_k)+\mu_2(x_k)=1$ and $\mu_1(x_k)d_1+\mu_2(x_k)d_2=\phi(x_k)$, we have
$$\mu_1 = \frac{-d_2}{d_1-d_2} + \frac{1}{d_1-d_2}\phi(x_k), \qquad \mu_2 = 1-\mu_1.$$
For instance, let $d = d_1 = -d_2$, where $d$ is the upper bound of $\phi(x_k)$, i.e., $d = \sup_{x\in\Omega}|\phi(x_k)|$. The fuzzy sets can be chosen as
$$F_1 = \frac{1}{2}\left(1+\frac{\phi(x_k)}{d}\right), \qquad F_2 = \frac{1}{2}\left(1-\frac{\phi(x_k)}{d}\right).$$
Fig. 8.1 Chaotic attractor of the fuzzy Hénon map in the (x1, x2) plane
Let $x_1$ be the premise variable. Then, the equivalent fuzzy model of the Hénon map can be constructed as
IF $x_1$ is $F_i$, THEN $x(t+1) = A_i x(t) + b_i$,
$i = 1, 2$, where $d = 2$, $\phi(x_1) = x_1$,
$$A_1 = \begin{pmatrix} -2 & 0.3 \\ 1 & 0 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 2 & 0.3 \\ 1 & 0 \end{pmatrix}, \qquad b_1 = b_2 = \begin{pmatrix} 1.4 \\ 0 \end{pmatrix},$$
and the fuzzy sets are
$$F_1 = \frac{1}{2}\left(1+\frac{x_1}{2}\right), \qquad F_2 = \frac{1}{2}\left(1-\frac{x_1}{2}\right).$$
The chaotic attractor of the fuzzy Hénon map is shown in Fig. 8.1.
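As a quick numerical check (a sketch not taken from the text; the crisp form of the map and the initial state are assumptions consistent with the matrices and fuzzy sets above), the two-rule fuzzy model can be iterated next to the crisp map:

```matlab
% Sketch: iterate the crisp Henon-type map and its two-rule T-S representation
% side by side; the crisp form of the map and the initial state are assumptions
% consistent with the matrices A1, A2, b and the fuzzy sets F1, F2 given above.
A1 = [-2 0.3; 1 0];   A2 = [2 0.3; 1 0];   b = [1.4; 0];
F1 = @(x1) 0.5*(1 + x1/2);   F2 = @(x1) 0.5*(1 - x1/2);
crisp = @(x) [-x(1)^2 + 0.3*x(2) + 1.4; x(1)];          % assumed crisp map
x = [0.1; 0.1];  xf = x;  dev = 0;
for k = 1:5000
    x  = crisp(x);                                      % crisp iteration
    xf = F1(xf(1))*(A1*xf + b) + F2(xf(1))*(A2*xf + b); % fuzzy blending
    dev = max(dev, norm(x - xf, inf));
end
fprintf('largest deviation over 5000 steps: %g\n', dev);  % ~ machine precision
```

The deviation stays at machine precision, which illustrates numerically that the fuzzy blending reproduces the nonlinear map exactly rather than approximately.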
Case 2: The Rössler System (Multi-variables in a Nonlinear Term) is given by
$$\begin{cases} \dot{x}_1 = -x_2 - x_3, \\ \dot{x}_2 = x_1 + 0.2x_2, \\ \dot{x}_3 = 0.2 + x_3(x_1 - 5). \end{cases}$$
Let the variables xk in φk (·), for k = 1, 2, . . . , n, be premise variables. Then, the ith
rule of the fuzzy system will be of the following form:
where $\mu_i(x) = \prod_{k=1}^{n}F_{ki}(x_k)\big/\sum_{i=1}^{r}\prod_{k=1}^{n}F_{ki}(x_k)$ and $F_{ki}(x_k)$ is the degree of membership of $x_k$ in the fuzzy set $F_{ki}$. The membership functions and coefficients can be chosen such that $F_{ki}(x_k)\in[0,1]$, $\sum_{i=1}^{r}\mu_i(x)=1$, and $\phi_1(x_1)\phi_2(x_2)\cdots\phi_n(x_n) = \sum_{i=1}^{r}\mu_i(x)d_i$.
The nonlinear term x1 x3 can have the extracted variable as x1 or x3 . Here, we
choose φ (x1 ) = x1 and let x1 be the premise variable of fuzzy rules, which satisfies
x1 (t) ∈ [−d, d]. The fuzzy model which represents exactly the Rössler system is
IF x1 is Fi ,
THEN ẋ(t) = Ai x(t) + bi,
The chaotic attractor of the fuzzy Rössler system is shown in Fig. 8.2.
Case 3: The Transformed Rössler System (Multiple Nonlinear Terms in a Sys-
tem) is given by
$$\begin{cases} \dot{x}_1 = -x_2 - \exp(x_3), \\ \dot{x}_2 = x_1 + ax_2, \\ \dot{x}_3 = x_1 + c\exp(-x_3) - b, \end{cases}$$
where a = 0.2, b = 5, and c = 0.2. More than one nonlinear term would be si-
multaneously considered in the system. When a nonlinear m × 1 vector f (x) =
( f1 (x), f2 (x), . . . , fm (x))T is considered, each element of f (x) is assumed to satisfy
f j (x) = φ1 j (x1 )φ2 j (x2 ) · · · φn j (xn )x̃ j , j = 1, 2, . . . , m. According to Cases 1 and 2, the
fuzzy system representing the nonlinear terms is described as follows:
Fig. 8.2 Chaotic attractor of the fuzzy Rössler system in the (x1, x2, x3) space
with $\mu_i(x) = \prod_{k=1}^{n}F_{ki}(x_k)\big/\sum_{i=1}^{r}\prod_{k=1}^{n}F_{ki}(x_k)$. The remaining procedure is the same as that for Case 2. It is noted that if the nonlinear terms $f_j(x)$, for $j = 1, 2, \ldots, m$, have a common factor, the number of fuzzy rules may be reduced. Since the two nonlinear terms of the transformed Rössler system depend on the common variable $x_3(t)$, the premise variable is set as $x_3(t)$. However, the variable $x_3(t)$ cannot be extracted from the two nonlinear terms because
$$\lim_{x\in\Omega,\ x_3\to 0}\frac{\exp(x_3)}{x_3}\notin L_\infty \quad\text{and}\quad \lim_{x\in\Omega,\ x_3\to 0}\frac{\exp(-x_3)}{x_3}\notin L_\infty,$$
and thus bias terms will appear in the fuzzy model.
The fuzzy model which represents exactly the transformed Rössler system is
IF x3 is Fi ,
THEN ẋ(t) = Ai x(t) + bi,
i = 1, 2, 3, 4, where x(t) = (x1 (t), x2 (t), x3 (t))T . The transformed fuzzy Rössler sys-
tem is represented exactly by the fuzzy model with r = 4 and
$$A_i = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0.2 & 0 \\ 1 & 0 & 0 \end{pmatrix}, \quad i = 1, 2, 3, 4,$$
Fig. 8.3 Chaotic attractor of the fuzzy transformed Rössler system in the (x1, x2, x3) space
$$F_1 = \frac{1}{2^2}\left(1+\frac{\exp(x_3)}{75}\right)\left(1+\frac{\exp(-x_3)}{75}\right), \qquad F_2 = \frac{1}{2^2}\left(1+\frac{\exp(x_3)}{75}\right)\left(1-\frac{\exp(-x_3)}{75}\right),$$
$$F_3 = \frac{1}{2^2}\left(1-\frac{\exp(x_3)}{75}\right)\left(1+\frac{\exp(-x_3)}{75}\right), \qquad F_4 = \frac{1}{2^2}\left(1-\frac{\exp(x_3)}{75}\right)\left(1-\frac{\exp(-x_3)}{75}\right).$$
The chaotic attractor of the fuzzy transformed Rössler system is shown in Fig. 8.3.
In this subsection, we investigate the T–S fuzzy modeling of many typical hyper-
chaotic systems. In the following hyperchaotic systems, x1 , x2 , x3 , and x4 are state
variables, α , β , θ , σ , e, l, k, h, β1 , and β2 are system parameters. It has been proven
that all the following systems are hyperchaotic if their parameters are chosen ap-
propriately. Assume that x1 ∈ [c̃1 − d˜1 , c̃1 + d˜1], x2 ∈ [c̃2 − d˜2, c̃2 + d˜2 ], d˜1 > 0, and
d˜2 > 0. According to the property of boundedness of hyperchaotic systems, this
assumption is reasonable. The exact values of c̃1 , d˜1 , c̃2 , and d˜2 can be obtained
by simulation. Since there is no input in the autonomous hyperchaotic systems, the
term Bi u(t) in the T–S fuzzy model is omitted in the process of fuzzy modeling of
hyperchaotic systems.
System 1: The Chen Hyperchaotic System [3]
$$\begin{cases} \dot{x}_1 = \alpha(x_2 - x_1) + x_4, \\ \dot{x}_2 = \sigma x_1 - x_1x_3 + \theta x_2, \\ \dot{x}_3 = x_1x_2 - \beta x_3, \\ \dot{x}_4 = x_2x_3 + lx_4. \end{cases}$$
For the above hyperchaotic system, we derive the following T–S fuzzy model to
represent it exactly:
F12 (x2 (t)) = F22 (x2 (t)) = F31 (x1 (t)) = F41 (x1 (t)) = 1.
System 10: The Modified Rössler Hyperchaotic System (i) [10]
$$\begin{cases} \dot{x}_1 = -x_2 - x_3, \\ \dot{x}_2 = x_1 + \alpha x_2 + x_4, \\ \dot{x}_3 = \beta + \beta_1 x_1 + x_1x_3, \\ \dot{x}_4 = -\theta x_3 + \sigma x_4. \end{cases} \tag{8.6}$$
For the above hyperchaotic system, we derive the following T–S fuzzy model to
represent it exactly:
and $b_1 = b_2 = (0\ \ 0\ \ \beta\ \ 0)^T$. The membership functions are chosen such that the normalized weights are
$$\mu_i(z(t)) = \frac{F_i(z(t))}{\sum_{i=1}^{r}F_i(z(t))}.$$
Next, we give a systematic modeling method to represent exactly the above hy-
perchaotic systems by T–S fuzzy models. Note that all the aforementioned hyper-
chaotic systems can be expressed as follows:
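(presumably the display labeled (8.8) below, written here under that assumption):
$$\dot{x}(t) = Ax(t) + f(x(t)) + b, \tag{8.8}$$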
where Ax(t) and f (x) denote the linear part and the nonlinear part of hyperchaotic
systems, respectively, and b is the constant vector of the hyperchaotic systems. Next,
fuzzy modeling methods are summarized for several cases of the nonlinear part f (x),
as follows.
Case 1: Only one nonlinear term in f (x). Systems 10 and 11 belong to this
case. For this case, we can choose any state variable in the nonlinear term as premise
variable. For example, in system 10, the unique nonlinear term is x1 x3 . Then, we can
choose either x1 or x3 as premise variable. If we choose x1 as premise variable, then
the membership functions are defined as
$$F_1(x_1(t)) = \frac{1}{2}\left(1 - \frac{\tilde{c}_1 - x_1(t)}{\tilde{d}_1}\right) \qquad\text{and}\qquad F_2(x_1(t)) = \frac{1}{2}\left(1 + \frac{\tilde{c}_1 - x_1(t)}{\tilde{d}_1}\right),$$
and the fuzzy IF–THEN rules are defined as:
Denote $A_1 = \bar{A}_1 + \tilde{A}_1$ and $A_2 = \bar{A}_2 + \tilde{A}_2$, and let $\bar{A}_1 = \bar{A}_2 = A$ and $b_1 = b_2 = b = (0\ 0\ \beta\ 0)^T$. Substituting them into (8.9), and making (8.9) equal to (8.8), we get the values of $\tilde{A}_1$ and $\tilde{A}_2$ as follows:
$$\tilde{A}_1 = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&\tilde{c}_1+\tilde{d}_1&0 \\ 0&0&0&0 \end{pmatrix}, \qquad \tilde{A}_2 = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&\tilde{c}_1-\tilde{d}_1&0 \\ 0&0&0&0 \end{pmatrix}.$$
Consequently, we derive the exact T–S fuzzy models for this class of hyperchaotic
systems.
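One can check that the fuzzy blend of the two local models recovers the nonlinear term exactly: with the membership functions above,
$$F_1(x_1(t))\left(\tilde{c}_1+\tilde{d}_1\right) + F_2(x_1(t))\left(\tilde{c}_1-\tilde{d}_1\right) = \tilde{c}_1 + \tilde{d}_1\cdot\frac{x_1(t)-\tilde{c}_1}{\tilde{d}_1} = x_1(t),$$
so the blended (3,3) entry multiplies $x_3(t)$ by exactly $x_1(t)$, which reproduces the nonlinear term $x_1x_3$ of (8.6).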
Case 2: More than one nonlinear term in f (x), but there is a common factor in
these nonlinear terms. Systems 2, 3, 4, 6, and 8 belong to this case. For this case,
we can choose the common factor in the nonlinear terms as premise variable. The
remaining process of T–S fuzzy modeling of this class of hyperchaotic systems is
similar to Case 1, and therefore it is omitted.
Case 3: More than one nonlinear term in f (x), and there is no common factor
in these nonlinear terms. Systems 1, 5, 7, and 9 belong to this case. For this case, we
have to choose two state variables occurring in f (x) as premise variables. For exam-
ple, in system 1, we can choose x1 and x2 as premise variables and the membership
Denote $A_i = \bar{A}_i + \tilde{A}_i$, and let $\bar{A}_i = A$ and $b_i = b$ ($i = 1, 2, 3, 4$). Substituting them into (8.10) and making (8.10) equal to (8.8), we get the values of $\tilde{A}_i$ as follows:
$$\tilde{A}_1 = \begin{pmatrix} 0&0&0&0 \\ 0&0&2(-\tilde{c}_1-\tilde{d}_1)&0 \\ 0&2(\tilde{c}_1+\tilde{d}_1)&0&0 \\ 0&0&0&0 \end{pmatrix}, \qquad \tilde{A}_2 = \begin{pmatrix} 0&0&0&0 \\ 0&0&2(-\tilde{c}_1+\tilde{d}_1)&0 \\ 0&2(\tilde{c}_1-\tilde{d}_1)&0&0 \\ 0&0&0&0 \end{pmatrix},$$
$$\tilde{A}_3 = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&2(\tilde{c}_2+\tilde{d}_2)&0 \end{pmatrix}, \qquad \tilde{A}_4 = \begin{pmatrix} 0&0&0&0 \\ 0&0&0&0 \\ 0&0&0&0 \\ 0&0&2(\tilde{c}_2-\tilde{d}_2)&0 \end{pmatrix}.$$
Consequently, we derive the exact T–S fuzzy model of this class of hyperchaotic
systems.
Suppose in (8.11) b̂i = b̂ and in (8.7) bi = b. Based on the PDC technique, a fuzzy
controller can be designed to realize the synchronization as follows:
Subcontroller u1 (t):
Subcontroller u2 (t):
Denote the error signal as e(t) = x(t) − x̂(t). Substituting (8.12) into (8.11), we can
derive the closed-loop synchronization error system as follows:
$$\dot{e}(t) = \sum_{i=1}^{r}\mu_i(z(t))(A_i - K_i)e(t) + \sum_{i=1}^{r}h_i(\hat{z}(t))(\hat{A}_i - H_i)e(t) + \varpi(t), \tag{8.13}$$
where
$$\varpi(t) = \sum_{i=1}^{r}\mu_i(z(t))(A_i - K_i)\hat{x}(t) - \sum_{i=1}^{r}h_i(\hat{z}(t))(\hat{A}_i - H_i)x(t) + (b - \hat{b}).$$
For the synchronization error system (8.13), consider the following H∞ perfor-
mance index:
$$\int_0^{t_f} e^T(t)e(t)\,dt < \frac{1}{2}e^T(0)Pe(0) + \frac{\gamma_1^2+\gamma_2^2}{2}\int_0^{t_f}\varpi^T(t)\varpi(t)\,dt, \tag{8.14}$$
where γ1 > 0 and γ2 > 0 are prescribed attenuation levels, and t f is the terminal
time.
Theorem 8.1. Consider the fuzzy error system (8.13). For the given constants γ1 > 0
and γ2 > 0, if there exist P = PT > 0, Xi , and Yi with appropriate dimensions, such
that the following linear matrix inequalities (LMIs) hold:
$$\begin{pmatrix} PA_i + A_i^T P - X_i^T - X_i + I & \frac{1}{2}P \\ \frac{1}{2}P & -\gamma_1^2 I \end{pmatrix} < 0 \tag{8.15}$$
and
$$\begin{pmatrix} P\hat{A}_i + \hat{A}_i^T P - Y_i^T - Y_i + I & \frac{1}{2}P \\ \frac{1}{2}P & -\gamma_2^2 I \end{pmatrix} < 0, \tag{8.16}$$
then we can choose Ki = P−1 Xi and Hi = P−1Yi in (8.12), such that the robust per-
formance (8.14) is guaranteed for i = 1, 2, . . . , r.
Proof. Choose the Lyapunov function candidate
$$V(t) = e^T(t)Pe(t),$$
where P is a positive-definite matrix. The time derivative of $V(t)$ along the trajectory of (8.13) is given by
where
$$\Xi = \sum_{i=1}^{r}\mu_i(z(t))(A_i - K_i) \qquad\text{and}\qquad \Pi = \sum_{i=1}^{r}h_i(\hat{z}(t))(\hat{A}_i - H_i).$$
We have
$$\begin{aligned} J &= \int_0^{t_f}\left[2e^T(t)e(t) - (\gamma_1^2+\gamma_2^2)\varpi^T(t)\varpi(t) + \dot{V}(t)\right]dt - V(t_f) + V(0) \\ &\le \int_0^{t_f}\left[\sum_{i=1}^{r}\mu_i(z(t))\begin{pmatrix} e(t)\\ \varpi(t)\end{pmatrix}^T\begin{pmatrix}\Omega & \tfrac{1}{2}P\\ \tfrac{1}{2}P & -\gamma_1^2 I\end{pmatrix}\begin{pmatrix} e(t)\\ \varpi(t)\end{pmatrix} + \sum_{i=1}^{r}h_i(\hat{z}(t))\begin{pmatrix} e(t)\\ \varpi(t)\end{pmatrix}^T\begin{pmatrix}\Theta & \tfrac{1}{2}P\\ \tfrac{1}{2}P & -\gamma_2^2 I\end{pmatrix}\begin{pmatrix} e(t)\\ \varpi(t)\end{pmatrix}\right]dt + V(0), \end{aligned} \tag{8.17}$$
where $\Omega = (A_i - K_i)^T P + P(A_i - K_i) + I$ and $\Theta = (\hat{A}_i - H_i)^T P + P(\hat{A}_i - H_i) + I$.
It is easy to see that if the following inequalities hold:
$$\begin{pmatrix} \Omega & \frac{1}{2}P \\ \frac{1}{2}P & -\gamma_1^2 I \end{pmatrix} < 0 \tag{8.18}$$
and
$$\begin{pmatrix} \Theta & \frac{1}{2}P \\ \frac{1}{2}P & -\gamma_2^2 I \end{pmatrix} < 0, \tag{8.19}$$
then we have
$$J = \int_0^{t_f}\left[2e^T(t)e(t) - (\gamma_1^2+\gamma_2^2)\varpi^T(t)\varpi(t)\right]dt \le V(0). \tag{8.20}$$
From (8.20), it is obvious that the H∞ performance index (8.14) is guaranteed. De-
note Xi = PKi and Yi = PHi . Then, (8.18) and (8.19) are equivalent to the LMIs
(8.15) and (8.16), respectively. This completes the proof.
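A minimal numerical sketch of how the two LMIs can be handled with the LMI Control Toolbox (which is also used in the simulations below) is given here. The matrices A, Ah and the attenuation levels are placeholder values, not the ones used in the book, and the sketch treats a single rule index i.

```matlab
% Sketch (placeholder data): feasibility of the LMIs (8.15)-(8.16) for one rule
% index i via the LMI Control Toolbox.  A and Ah stand for A_i and Ahat_i; the
% numerical values below are illustrative only.
n  = 4;
A  = [-10 10 0 1; 35 0 0 0; 0 0 -8/3 0; 0 0 0 1.3];   % placeholder A_i
Ah = [-10 10 0 1; 28 0 0 0; 0 0 -8/3 0; 0 0 0 1.3];   % placeholder Ahat_i
g1 = 0.1;  g2 = 0.1;                                   % attenuation levels

setlmis([]);
P = lmivar(1,[n 1]);          % symmetric P
X = lmivar(2,[n n]);          % X_i = P*K_i
Y = lmivar(2,[n n]);          % Y_i = P*H_i

% LMI #1: [P*A + A'*P - X - X' + I , P/2 ; P/2 , -g1^2*I] < 0   (cf. (8.15))
lmiterm([1 1 1 P],1,A,'s');
lmiterm([1 1 1 X],-1,1,'s');
lmiterm([1 1 1 0],eye(n));
lmiterm([1 1 2 P],0.5,1);
lmiterm([1 2 2 0],-g1^2*eye(n));

% LMI #2: same structure with Ah, Y, g2                          (cf. (8.16))
lmiterm([2 1 1 P],1,Ah,'s');
lmiterm([2 1 1 Y],-1,1,'s');
lmiterm([2 1 1 0],eye(n));
lmiterm([2 1 2 P],0.5,1);
lmiterm([2 2 2 0],-g2^2*eye(n));

% LMI #3: P > 0
lmiterm([-3 1 1 P],1,1);

lmis = getlmis;
[tmin,xfeas] = feasp(lmis);            % tmin < 0 indicates strict feasibility
Pv = dec2mat(lmis,xfeas,P);
Kv = Pv\dec2mat(lmis,xfeas,X);         % K_i = P^{-1} X_i
Hv = Pv\dec2mat(lmis,xfeas,Y);         % H_i = P^{-1} Y_i
```

For the full design, the same two LMIs are imposed jointly for all rule indices $i = 1, \ldots, r$ with a common $P$.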
8.3.3 Simulations
To illustrate the effectiveness of the theoretical analysis and design method, several examples are presented. First, we investigate the T–S fuzzy modeling
of system 2 (the Liu hyperchaotic system) and system 6 (the Lorenz hyperchaotic
system). When the parameters are chosen as α = 10, β = 40, θ = 2.5, σ = 10.6,
k = 1, and h = 4, system 2 has two positive Lyapunov exponents λ1 = 1.1491 and
λ2 = 0.12688. Thus, system 2 is hyperchaotic in this case, whose attractors are
Fig. 8.4 and Fig. 8.5 Hyperchaotic attractors of system 2 (the Liu hyperchaotic system), projected onto the (x1, x2, x3) and (x1, x2, x4) spaces
shown in Figs. 8.4 and 8.5. The initial state vector is chosen as x(0) = (1, 1, −1, 0)T.
From the simulation we get x1 ∈ [c̃1 − d˜1 , c̃1 + d˜1 ], where c̃1 = 1.2347 and d˜1 =
18.6833. Then, we can derive the T–S fuzzy model of system 2 as follows:
and
b1 = b2 = (0 0 0 0)T .
The membership functions are chosen as
When the parameters are chosen as α = 10, l = 28, β = 8/3, and σ = 1.3, system
6 has two positive Lyapunov exponents λ1 = 0.39854 and λ2 = 0.24805. Thus,
system 6 is hyperchaotic in this case, which is shown in Figs. 8.6 and 8.7. The
initial state vector is chosen as x(0) = (1, 2, 1, 2)T . From the simulation we get x1 ∈
[c̃1 − d˜1 , c̃1 + d˜1 ], where c̃1 = −1.4366 and d˜1 = 28.4241. Then, we can derive the
T–S fuzzy model of system 6 as follows:
Fig. 8.6 and Fig. 8.7 Hyperchaotic attractors of system 6 (the Lorenz hyperchaotic system), projected onto the (x1, x2, x3) and (x1, x2, x4) spaces
$$F_1(x_1(t)) = \frac{1}{2}\left(1-\frac{-1.4366-x_1(t)}{28.4241}\right), \qquad F_2(x_1(t)) = \frac{1}{2}\left(1+\frac{-1.4366-x_1(t)}{28.4241}\right).$$
In the following, based on the above T–S fuzzy hyperchaotic models, we in-
vestigate the synchronization between the Liu hyperchaotic system and the Lorenz
hyperchaotic system. We consider the Liu hyperchaotic system as the drive sys-
tem and the Lorenz hyperchaotic system as the response system. The initial con-
ditions of the drive system and the response system are x(0) = (15, 25, 100, 0)T
and x̂(0) = (−25, −20, 30, 100)T, respectively. According to Theorem 8.1, choos-
ing γ1 = γ2 = 0.1, we get the positive-definite matrix P and feedback gains Ki and
Hi (i = 1, 2) computed by the LMI Control Toolbox in MATLAB® 7.0 as follows:
⎛ ⎞
1.6093 −0.0410 −0.0022 −0.3284
⎜ −0.0410 0.3789 −0.0044 0.0092 ⎟
P=⎜ ⎝ −0.0022 −0.0044 0.1059 −0.0003 ⎠ ,
⎟
Fig. 8.8 Synchronization error e1 (t) between two different hyperchaotic systems
Fig. 8.9 Synchronization error e2 (t) between two different hyperchaotic systems
Fig. 8.10 Synchronization error e3 (t) between two different hyperchaotic systems
Fig. 8.11 Synchronization error e4 (t) between two different hyperchaotic systems
8.4 Synchronizing Fuzzy Chaotic Systems with Time-Varying Delays

This section deals with the synchronization problem of chaotic systems with time-varying delays and unknown parameters. A fuzzy controller is developed and presented via T–S fuzzy models. By using an appropriate Lyapunov–Krasovskii functional, we derive synchronization conditions in the form of an LMI. The advantages of the approach are that the structure of the fuzzy controller is simple and that the method applies to the synchronization control of a wide class of chaotic systems. A numerical example is given to demonstrate the validity of the proposed synchronization approach.
A T–S fuzzy system is described by a set of fuzzy IF–THEN rules where each rule
locally represents a linear input–output realization of the system over a certain re-
gion of the state space. For the synchronization problem, the drive and response
systems are designed such that the internal states of the drive and response systems
are to be the same. Let the chaotic response system be described by a T–S fuzzy
model with the following rule-base:
where
$$v_i(z(t)) = \frac{\omega_i(z(t))}{\sum_{i=1}^{l}\omega_i(z(t))}, \qquad v_i(z(t)) \ge 0, \qquad \sum_{i=1}^{l}v_i(z(t)) = 1,$$
$$\omega_i(z(t)) = \prod_{j=1}^{g}M_{ij}(z(t)), \qquad \sum_{i=1}^{l}\omega_i(z(t)) > 0, \qquad \omega_i(z(t)) \ge 0,$$
i = 1, . . . , l, where y(t) ∈ Rn is the state vector of the drive system, ẑ(t) = [ẑ1 (t), ẑ2 (t),
. . . , ẑg (t)]T is a vector of the premise variables, and Âi and B̂i are some unknown
constant matrices of compatible dimensions. The overall fuzzy drive system can be
inferred as
$$\dot{y}(t) = \sum_{i=1}^{l}\lambda_i(\hat{z}(t))\left[\hat{A}_i y(t) + \hat{B}_i y(t-\tau(t))\right] + \varepsilon(t). \tag{8.22}$$
Denote the synchronization error signal by e(t) = x(t) − y(t). From (8.21) and
(8.22), the error dynamics of e(t) is arranged as
where
$$\varpi(t) = \sum_{i=1}^{l}\left[v_i(z(t)) - \lambda_i(\hat{z}(t))\right]\left[\hat{A}_i y(t) + \hat{B}_i y(t-\tau(t))\right].$$
The purpose of this section is to design adaptive control law such that the drive
and response systems are synchronized; that is,
$$\lim_{t\to\infty}e(t) = 0.$$
In order to achieve the synchronization of the two systems, let us design the follow-
ing fuzzy controller based on the PDC technique,
i = 1, 2, . . . , l, where Ki are the feedback gains to be designed later and Ci (t) and
Di (t) are adjustable control gains. The inferred control law is represented as
$$u(t) = \sum_{i=1}^{l}v_i(z(t))\left[-K_i e(t) + (-A_i + C_i(t))y(t) + (-B_i + D_i(t))y(t-\tau(t))\right]. \tag{8.24}$$
Substituting (8.24) into (8.23), we obtain the following error system:
Theorem 8.2. For the error system (8.25), if there exist symmetric positive-definite
matrices P and Q, a positive constant σ , feedback gain matrices Ki , and the para-
metric adaptive laws
$$\begin{cases} \dot{c}_{1jk}(t) = -v_k(z(t))\,\alpha_1 m_j y_k(t), \\ \quad\vdots \\ \dot{c}_{ljk}(t) = -v_k(z(t))\,\alpha_l m_j y_k(t), \end{cases} \tag{8.26}$$
$$\begin{cases} \dot{d}_{1jk}(t) = -v_k(z(t))\,\beta_1 m_j y_k(t-\tau(t)), \\ \quad\vdots \\ \dot{d}_{ljk}(t) = -v_k(z(t))\,\beta_l m_j y_k(t-\tau(t)), \end{cases} \tag{8.27}$$
where $\alpha_i$ and $\beta_i$ are given positive constants, $m_j$ is the $j$th element of $2e^T(t)P$, and $c_{ijk}$ and $d_{ijk}$ are the elements of $C_i$ and $D_i$, $i = 1, 2, \ldots, l$, $j, k = 1, 2, \ldots, n$, such that
$$\begin{pmatrix} \Pi & PB_i & P & I \\ B_i^T P & -(1-h)Q & 0 & 0 \\ P & 0 & -\sigma^2 I & 0 \\ I & 0 & 0 & -I \end{pmatrix} < 0, \tag{8.28}$$
where $\Pi = (A_i - K_i)^T P + P(A_i - K_i) + Q$, then we obtain the H∞ performance given by
$$\int_0^{t_f}e^T(t)e(t)\,dt \le e^T(0)e(0) + \sigma^2\int_0^{t_f}\varpi^T(t)\varpi(t)\,dt.$$
Proof. Choose the Lyapunov–Krasovskii functional candidate $V(t) = V_1(t) + V_2(t)$, where
$$V_1(t) = e^T(t)Pe(t) + \int_{t-\tau(t)}^{t}e^T(s)Qe(s)\,ds,$$
$$V_2(t) = \sum_{i=1}^{l}\sum_{j=1}^{n}\sum_{k=1}^{n}\left[\frac{1}{2\alpha_i}\left(c_{ijk}(t)-\hat{a}_{ijk}\right)^2 + \frac{1}{2\beta_i}\left(d_{ijk}(t)-\hat{b}_{ijk}\right)^2\right],$$
and $\hat{a}_{ijk}$ and $\hat{b}_{ijk}$ are the elements of the unknown parameter matrices $\hat{A}_i$ and $\hat{B}_i$, respectively.
The derivative of V (t) along the trajectory of (8.25) is
and
$$\begin{aligned} \dot{V}_2(t) &= \sum_{i=1}^{l}\sum_{j=1}^{n}\sum_{k=1}^{n}\left[\frac{1}{\alpha_i}\left(c_{ijk}(t)-\hat{a}_{ijk}\right)\dot{c}_{ijk}(t) + \frac{1}{\beta_i}\left(d_{ijk}(t)-\hat{b}_{ijk}\right)\dot{d}_{ijk}(t)\right] \\ &= -\sum_{i=1}^{l}\sum_{j=1}^{n}\sum_{k=1}^{n}v_i(z(t))\left[\left(c_{ijk}(t)-\hat{a}_{ijk}\right)m_j y_k(t) + \left(d_{ijk}(t)-\hat{b}_{ijk}\right)m_j y_k(t-\tau(t))\right] \\ &= -2e^T(t)P\sum_{i=1}^{l}v_i(z(t))\left[\left(C_i(t)-\hat{A}_i\right)y(t) + \left(D_i(t)-\hat{B}_i\right)y(t-\tau(t))\right]. \end{aligned} \tag{8.29}$$
where t f denotes the terminal time of the control and σ is a disturbance attenuation
value which denotes the effect of ϖ (t) on e(t). Using (8.30) and (8.31), we obtain
$$\begin{aligned} J &= \int_0^{t_f}\left[e^T(t)e(t) - \sigma^2\varpi^T(t)\varpi(t) + \dot{V}(t)\right]dt - \int_0^{t_f}\dot{V}(t)\,dt \\ &= \int_0^{t_f}\left[e^T(t)e(t) - \sigma^2\varpi^T(t)\varpi(t) + \dot{V}(t)\right]dt - V(t_f) + V(0) \\ &\le \int_0^{t_f}\sum_{i=1}^{l}v_i(z(t))\,\eta^T(t)\Phi\,\eta(t)\,dt + V(0), \end{aligned}$$
where $\eta(t) = \left(e^T(t),\ e^T(t-\tau(t)),\ \varpi^T(t)\right)^T$,
$$\Phi = \begin{pmatrix} \Pi + I & PB_i & P \\ B_i^T P & -(1-h)Q & 0 \\ P & 0 & -\sigma^2 I \end{pmatrix},$$
and $\Pi = (A_i - K_i)^T P + P(A_i - K_i) + Q$.
If there exist symmetric positive-definite matrices $P > 0$ and $Q > 0$, a positive constant $\sigma$, and feedback gain matrices $K_i$ such that
$$\begin{pmatrix} \Pi + I & PB_i & P \\ B_i^T P & -(1-h)Q & 0 \\ P & 0 & -\sigma^2 I \end{pmatrix} < 0, \tag{8.32}$$
we get
$$J = \int_0^{t_f}\left[e^T(t)e(t) - \sigma^2\varpi^T(t)\varpi(t)\right]dt < V(0).$$
The matrix inequality (8.32) can be converted into (8.28) via the Schur complement.
The proof is complete.
8.4.3 Simulations
In this fuzzy model, the state vector $y = (y_1, y_2)^T$ is taken as the premise variable, and the fuzzy rules are
R1: IF $y_1(t)$ is $M_{11}$, THEN $\dot{y}(t) = \hat{A}_1 y(t) + \hat{B}_1 y(t-\tau(t)) + \varepsilon(t)$,
R2: IF $y_1(t)$ is $M_{21}$, THEN $\dot{y}(t) = \hat{A}_2 y(t) + \hat{B}_2 y(t-\tau(t)) + \varepsilon(t)$,
where $y(t) = (y_1(t), y_2(t))^T$. Let the parameter matrices $\hat{A}_1$, $\hat{A}_2$, $\hat{B}_1$, $\hat{B}_2$, and $\varepsilon(t)$ be selected, respectively, as
$$\hat{A}_1 = \begin{pmatrix} 0 & 1 \\ -47.6 & -0.18 \end{pmatrix}, \qquad \hat{A}_2 = \begin{pmatrix} 0 & 1 \\ 1.2 & -0.18 \end{pmatrix}, \qquad \hat{B}_1 = \hat{B}_2 = \begin{pmatrix} 0 & 0 \\ 0.01 & 0.01 \end{pmatrix},$$
and
$$\varepsilon(t) = \begin{pmatrix} 0 \\ 25\cos(1.29t) \end{pmatrix}.$$
Let $\tau(t) = 0.125\sin(2t)$, and select the membership functions as
$$M_{11}(y_1(t)) = \frac{y_1^2}{100}, \qquad M_{21}(y_1(t)) = 1 - \frac{y_1^2}{100}.$$
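As an illustration (a sketch, not the book's simulation code), the drive system above can be integrated with a simple fixed-step Euler scheme. The step size, the horizon, the initial state, and the clamping of the delay at zero are assumptions made here:

```matlab
% Sketch: Euler simulation of the two-rule fuzzy drive system quoted above.
% Step size, horizon, initial state and the clamping of tau(t) at zero are
% implementation choices, not taken from the text.
A1 = [0 1; -47.6 -0.18];   A2 = [0 1; 1.2 -0.18];
B1 = [0 0;  0.01  0.01];   B2 = B1;
epsf = @(t) [0; 25*cos(1.29*t)];
tauf = @(t) max(0.125*sin(2*t), 0);      % negative delays clamped (assumption)
M1 = @(y1) y1.^2/100;   M2 = @(y1) 1 - y1.^2/100;
h = 1e-3;  T = 50;  N = round(T/h);
Y = zeros(2, N+1);  Y(:,1) = [0.2; 0.3];             % illustrative initial state
for k = 1:N
    t  = (k-1)*h;
    kd = max(1, k - round(tauf(t)/h));               % index of y(t - tau(t))
    y  = Y(:,k);   yd = Y(:,kd);
    f1 = A1*y + B1*yd + epsf(t);                     % rule-1 consequent
    f2 = A2*y + B2*yd + epsf(t);                     % rule-2 consequent
    Y(:,k+1) = y + h*( M1(y(1))*f1 + M2(y(1))*f2 );  % fuzzy-blended dynamics
end
plot(Y(1,:), Y(2,:));  xlabel('y_1');  ylabel('y_2');
```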
Here, we suppose that the response chaotic system to be synchronized has the
same structure as the drive system. The fuzzy response system can be described by
the following rules:
R1 : IF x1 (t) is F11 , THEN ẋ(t) = A1 x(t) + B1x(t − τ (t)) + ε (t),
R2 : IF x1 (t) is F21 , THEN ẋ(t) = A2 x(t) + B2x(t − τ (t)) + ε (t),
where $x(t) = (x_1(t), x_2(t))^T$,
$$A_1 = \begin{pmatrix} 0 & 1 \\ -39.6 & -0.1 \end{pmatrix}, \qquad A_2 = \begin{pmatrix} 0 & 1 \\ 0.4 & -0.1 \end{pmatrix}, \qquad B_1 = B_2 = \begin{pmatrix} 0 & 0 \\ 0.01 & 0.01 \end{pmatrix},$$
$\varepsilon(t)$ and $\tau(t)$ are defined as in the drive system, and the membership functions are selected as
$$F_{11}(x_1(t)) = \frac{x_1^2}{100}, \qquad F_{21}(x_1(t)) = 1 - \frac{x_1^2}{100}.$$
We illustrate each of these synthesis procedures, in which the LMI Control Tool-
box is used to compute the solutions of LMIs. From the assumption, we obtain the
symmetric positive-definite matrices P and Q and the feedback gains K1 and K2 as
follows:
$$P = \begin{pmatrix} 0.2299 & 0.0000 \\ 0.0000 & 0.2298 \end{pmatrix}, \qquad Q = \begin{pmatrix} 1.5325 & 0.0000 \\ 0.0000 & 1.5325 \end{pmatrix},$$
where $c_1^{21}$, $c_1^{22}$, $c_2^{21}$, and $c_2^{22}$ are uncertain parameters. The initial values of $C_1(t)$ and $C_2(t)$ are selected as follows:
$$C_1(0) = \begin{pmatrix} 0 & 0 \\ c_1^{21}(0) & c_1^{22}(0) \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ -0.4 & -2 \end{pmatrix}, \qquad C_2(0) = \begin{pmatrix} 0 & 0 \\ c_2^{21}(0) & c_2^{22}(0) \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ -2 & -8 \end{pmatrix}.$$
The synchronization errors and the estimated errors are illustrated as follows. The
synchronization errors of each state variable are shown in Figs. 8.12 and 8.13.
The time evolution of the adjustable parameters under the scheme proposed in Theorem 8.2 can be seen clearly in Fig. 8.14. Adaptive synchronization between the uncertain drive system and the response system is achieved rapidly.
8.5 Summary
In this chapter, we first represented typical chaotic and hyperchaotic systems by T–S fuzzy models. Based on the fuzzy hyperchaotic models, simple fuzzy controllers were designed for synchronizing hyperchaotic systems. After that, a fuzzy controller combining the PDC technique, the T–S fuzzy models, and adaptive control was presented for the synchronization of chaotic systems with uncertain parameters and time-varying delays. The theoretical analysis and simulation study show the validity of the proposed methods.
References
1. Chen G, Dong X (1998) From Chaos to Order: Methodologies, Perspectives and Applica-
tions. World Scientific, Singapore
2. Chen ZQ, Yang Y, Qi GY, Yuan ZZ (2007) A novel hyperchaos only with one equilibrium.
Phys Lett A 360:696–701
3. El-Dessoky MM (2007) Synchronization and anti-synchronization of a hyperchaotic Chen
system. Chaos Solitons Fractals 39:1790–1797
4. Elabbasy EM, Agiza HN, El-Dessoky MM (2006) Adaptive synchronization of a hyper-
chaotic system with uncertain parameter. Chaos Solitons Fractals 30:1133–1142
5. Gao TG, Chen ZQ, Yuan ZZ, Yu DC (2007) Adaptive synchronization of a new hyperchaotic
system with uncertain parameters. Chaos Solitons Fractals 33:922–928
6. Jia Q (2007) Hyperchaos generated from the Lorenz chaotic system and its control. Phys Lett
A 366:217–222
7. Jia Q (2007) Adaptive control and synchronization of a new hyperchaotic system with un-
known parameters. Phys Lett A 362:424–429
8. Kim JH, Shin H, Kim E, Park M (2005) Synchronization of time-delayed TS fuzzy chaotic
systems via scalar output variable. Int J Bifurc Chaos 15:2593–2601
9. Lian KY, Chiang TS, Chiu CS, Liu P (2001) Synthesis of fuzzy model-based designs to syn-
chronization and secure communications for chaotic systems. IEEE Trans Syst Man Cybern
B 31:66–83
10. Nikolov S, Clodong S (2006) Hyperchaos–chaos–hyperchaos transition in modified Rössler
systems. Chaos Solitons Fractals 28:252–263
11. Pecora LM, Carroll TL (1990) Synchronization in chaotic systems. Phys Rev Lett 64:821–824
12. Takagi T, Sugeno M (1985) Fuzzy identification of systems and its applications to modeling
and control. IEEE Trans Syst Man Cybern 15:116–132
13. Tanaka K, Ikeda T, Wang HO (1998) A unified approach to controlling chaos via an LMI-
based fuzzy control system design. IEEE Trans Circuits Syst I 45:1021–1040
14. Wang FQ, Liu CX (2006) Hyperchaos evolved from the Liu chaotic system. Chinese Phys
15:963–968
15. Wang GY, Zhang X, Zheng Y, Li YX (2006) A new modified hyperchaotic Lü system. Physica
A 371:260–272
16. Wu XY, Guan ZH, Wu ZP (2008) Adaptive synchronization between two hyperchaotic sys-
tems. Nonlinear Anal 68:1346–1351
17. Yang DS, Zhang HG, Li AP, Meng ZY (2007) Generalized synchronization of two non–
identical chaotic systems based on fuzzy model. Acta Phys Sinica 56:3121–3126
18. Yassen MT (2007) On hyperchaos synchronization of a hyperchaotic Lü system. Nonlinear
Anal 68:3592–3600
19. Zhang HG, Cai LL, Bien Z (2000) A fuzzy basis function vector-based multivariable adaptive
fuzzy controller for nonlinear systems. IEEE Trans Syst Man Cybern B 30:210–217
20. Zhang HB, Liao XF, Yu JB (2005) Fuzzy modeling and synchronization of hyperchaotic
systems. Chaos Solitons Fractals 26:835–843
21. Zhang HG, Guan HX, Wang ZS (2007) Adaptive synchronization of neural networks with
different attractors. Prog Nat Sci 17:687–695
22. Zhang HG, Huang W, Wang ZL, Chai TY (2006) Adaptive synchronization between different
chaotic systems with unknown parameters. Phys Lett A 350:363–366
23. Zhang HG, Xie YH, Liu D (2006) Synchronization of a class of delayed chaotic neural net-
works with fully unknown parameters. Dyn Contin Discrete Impuls Syst B 13:297–308
24. Zhang HG, Yang DD, Chai TY (2007) Guaranteed cost networked control for T–S fuzzy
systems with time delay. IEEE Trans Syst Man Cybern C 37:160–172
Chapter 9
Chaotification of Nonchaotic Systems
9.1 Introduction
In this chapter, we will investigate how to chaotify dynamical systems which are
nonchaotic originally. As a reverse process of suppressing or eliminating chaotic
behaviors in order to reduce the complexity of an individual system or a coupled
system, chaotification aims at creating or enhancing the system’s complexity for
some special applications. More precisely, chaotification is to generate some chaotic
behaviors from a given system that is originally nonchaotic or even stable [22].
It is well known that chaotification of dynamical systems has many potential appli-
cations in electronic, mechanical, biological, and medical systems, etc. [9, 16, 20].
In recent years, many conventional control methods and special techniques were
applied to the chaotification of discrete-time dynamical systems or continuous-time
dynamical systems, such as linear feedback control [9, 12], time-delay feedback
control [17, 19], nonlinear feedback control [11], adaptive inverse optimal control
[25], impulsive control [21, 27], saturating control [13], switching control [1], and
other control methods [22], etc. However, most of them are based on the assumption
that the analytical representations of nonlinear dynamical systems to be chaotified
are known exactly. For an unknown or uncertain nonlinear dynamical system, the above methods are ineffective. Recently, Zhang and Quan [24] and Margaliot et al. [14] successively proposed a novel fuzzy model based on the hyperbolic tangent function, the fuzzy hyperbolic model (FHM). The FHM is a nonlinear model in nature, which makes it very suitable for expressing nonlinear dynamical properties, and it is easier to design a stable and optimal controller based on the FHM than on other models such as the fuzzy dynamic model, the T–S model, etc. Moreover, the FHM has been shown to be a universal approximator, i.e., it can approximate an arbitrary nonlinear continuous function on a compact set with arbitrary accuracy [26]. This provides a theoretical foundation for using the FHM to represent complex nonlinear systems approximately. Therefore, the FHM can be regarded as an intermediate step for chaotifying an unknown nonlinear dynamical system. In this chapter, first, we develop a simple nonlinear state
Second, we use an impulsive and nonlinear feedback control method to chaotify the
continuous-time FHM. Finally, we chaotify two classes of continuous-time linear
systems via a sampled data control approach.
9.2 Chaotification of Discrete-Time Fuzzy Hyperbolic Model with Uncertain Parameters

First, we define the fuzzy rule base used to infer the discrete-time FHM.
Definition 9.1 ([14, 23]). Given a plant with n state variables x(k) = (x1 (k), . . . ,
xn (k))T , we call the following fuzzy rule base a discrete-time hyperbolic-type fuzzy
rule base if it satisfies the following conditions:
(i) Every fuzzy rule adopts the following form:
(iii) There are 2n fuzzy rules in the rule base; that is, it includes all possible P and
N combinations of state variables in the IF part and all sign combinations of
constants in the THEN part.
Theorem 9.1 ([14, 23]). Given a hyperbolic-type rule base, if we define the mem-
bership functions of Pxi and Nxi as
$$\mu_{P_{x_i}}(x_i(k)) = \exp\left[-\left(x_i(k)-s_{x_i}\right)^2/2\right], \qquad \mu_{N_{x_i}}(x_i(k)) = \exp\left[-\left(x_i(k)+s_{x_i}\right)^2/2\right],$$
then the rule base can be represented by the model
$$x(k+1) = A\tanh(Sx(k)).$$
This model is named the discrete-time FHM. In the sequel, sxi is denoted by si
for simplicity.
Consider a general n-dimensional discrete-time autonomous system of the form
$$x(k+1) = g(x(k)), \tag{9.1}$$
where g is a $C^1$ nonlinear map. Denote $\bar{Z} = \mathbb{N}\cup\{0\}$. Suppose that $t \in \bar{Z}$ and denote by $g^t$ the $t$-fold composition of g with itself. $x^*$ is a p-periodic point of g ($p \in \mathbb{N}$) if $g^p(x^*) = x^*$ and $g^t(x^*) \ne x^*$ for $t < p$. When $p = 1$, $x^*$ is a fixed point. Let $g'(x)$ and $\det(g'(x))$ be the Jacobian of g at the point x and its determinant, respectively. With this notation, we can give the definition of a snap-back repeller.
Definition 9.2 ([15]). Suppose that $x^*$ is a fixed point of g with all eigenvalues of $g'(x^*)$ exceeding 1 in magnitude, and suppose that there exists a point $x^0 \ne x^*$ in a neighborhood¹ of $x^*$ such that, for some positive integer m, $g^m(x^0) = x^*$ and $\det\left((g^m)'(x^0)\right) \ne 0$. Then, $x^*$ is said to be a snap-back repeller of g.
Lemma 9.1 ([15]). If system (9.1) has a snap-back repeller then the system is
chaotic in the sense of Li–Yorke, namely,
(i) there exists a positive integer n such that, for every integer p ≥ n, system (9.1) has a periodic point of period p;
(ii) there exists a scrambled set (an uncountable invariant set S containing no periodic points) such that
a. g(S) ⊂ S,
1 In [15], Marotto pointed out that identifying a repelling neighborhood of x∗ is not in general a
difficult task. In fact, since the local unstable manifold of x∗ includes all points of Rn close to x∗ ,
every closed ball B(x∗ , r) defined by the Euclidean norm must be so for sufficiently small r > 0.
Lemma 9.2 (Gershgorin Disc Theorem [3]). Let A be a complex matrix whose $ij$th element is $a_{ij}$. For every $i = 1, 2, \ldots, n$, let $r_i := \sum_{j=1, j\ne i}^{n}|a_{ij}|$, and let $D_i$ be the disc of radius $r_i$ centered at $a_{ii}$. Then, all eigenvalues of A lie in the union of the $D_i$ for $i = 1, 2, \ldots, n$, i.e., $\lambda_i \in \bigcup_{j=1}^{n}D_j$ for $i = 1, 2, \ldots, n$.
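A small numerical illustration of the lemma (the matrix below is an arbitrary example, not from the text):

```matlab
% Sketch: Gershgorin disc bound for an arbitrary example matrix.
A = [5 1 -0.5; 0.2 -3 1; 1 0.4 4];
ev = eig(A);
for i = 1:size(A,1)
    r = sum(abs(A(i,:))) - abs(A(i,i));          % disc radius r_i
    fprintf('disc %d: center %6.2f, radius %5.2f\n', i, A(i,i), r);
end
disp(ev.')   % every eigenvalue lies in the union of the discs
```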
Consider the uncertain discrete-time FHM
$$x(k+1) = (A + \Delta A)\tanh(Sx(k)), \tag{9.2}$$
where $\Delta A$ represents the uncertain parameters. Assume that $\Delta A$ and S are bounded in the sense of the infinity norm, i.e., there exist two positive constants γ and η such that $\|\Delta A\|_\infty \le \gamma$ and $\|S\|_\infty \le \eta$. Consequently, it can be inferred that $\|A + \Delta A\|_\infty \le \rho$, where ρ is another positive constant.
Next, it will be proven that the controlled system is chaotic in the sense of Li–
Yorke if we design the parameters of the following controller (9.3) appropriately.
The relevant result is given as follows.
Theorem 9.2 ([28]). For the uncertain discrete-time FHM (9.2), given an arbitrarily small δ, there exists a sufficiently large constant β such that, under the controller
$$u(k) = \delta\sin\left(\frac{\beta\pi}{\delta}x(k)\right) = \left(\delta\sin\left(\frac{\beta\pi}{\delta}x_1(k)\right),\ \delta\sin\left(\frac{\beta\pi}{\delta}x_2(k)\right),\ \ldots,\ \delta\sin\left(\frac{\beta\pi}{\delta}x_n(k)\right)\right)^T, \tag{9.3}$$
the controlled system is chaotic in the sense of Li–Yorke.
Proof. Define
$$x(k+1) = (A+\Delta A)\tanh(Sx(k)) + \delta\sin\left(\frac{\beta\pi}{\delta}x(k)\right) := g(x(k)). \tag{9.4}$$
Note that $\Phi(Sx^*) = \Phi(0) = I$ and $\cos\left(\frac{\beta\pi}{\delta}x^*\right) = \cos(0) = I$, where I denotes the identity matrix of appropriate dimensions. Differentiating (9.4) with respect to x at the fixed point $x^* = 0$ yields
$$g'(x^*) = (A+\Delta A)S + \pi\beta I.$$
According to Lemma 9.2, one can assert that all eigenvalues $\lambda_i(g'(x^*))$, $i = 1, 2, \ldots, n$, lie in the union of the discs $D_i$ of radius $\sum_{j=1, j\ne i}^{n}\left|(a_{ij}+\Delta a_{ij})s_j\right|$ centered at $(a_{ii}+\Delta a_{ii})s_i + \pi\beta$. Therefore, as long as one chooses an appropriate β such that the following inequalities are met:
$$(a_{ii}+\Delta a_{ii})s_i + \pi\beta - \sum_{j=1, j\ne i}^{n}\left|(a_{ij}+\Delta a_{ij})s_j\right| > 1, \quad i = 1, 2, \ldots, n, \tag{9.5}$$
which can be guaranteed by choosing
$$\beta > \frac{1}{\pi}\left(1 + \|(A+\Delta A)S\|_\infty\right),$$
one gets
$$(a_{ii}+\Delta a_{ii})s_i + \pi\beta - \sum_{j=1, j\ne i}^{n}\left|(a_{ij}+\Delta a_{ij})s_j\right| > 1 + 2(a_{ii}+\Delta a_{ii})s_i, \quad i = 1, 2, \ldots, n,$$
simultaneously.
Noting that $s_j > 0$ (for $j = 1, \ldots, n$), $x^1 > 0$, $x^2 > 0$ (componentwise), and $\tanh(x) < x$ for $x > 0$, we can derive the following two inequalities:
$$g_i(x^1) \ge -\sum_{j=1}^{n}\left|a_{ij}+\Delta a_{ij}\right|\tanh\left(s_j x_j^1\right) + \delta\sin\left(\frac{\beta\pi}{\delta}x_i^1\right) \ge -\|A+\Delta A\|_\infty\|S\|_\infty\frac{\delta}{2\beta} + \delta = -\frac{\rho\eta\delta}{2\beta} + \delta \tag{9.7}$$
and
$$g_i(x^2) \le \sum_{j=1}^{n}\left|a_{ij}+\Delta a_{ij}\right|\tanh\left(s_j x_j^2\right) + \delta\sin\left(\frac{\beta\pi}{\delta}x_i^2\right) \le \|A+\Delta A\|_\infty\|S\|_\infty\frac{3\delta}{2\beta} - \delta = \frac{3\rho\eta\delta}{2\beta} - \delta, \tag{9.8}$$
where $i = 1, 2, \ldots, n$. From (9.7) we can infer that if $\beta > (\rho\eta)/2$, then $g(x^1) > 0$ can be guaranteed. From (9.8) we can infer that if $\beta > (3\rho\eta)/2$, then $g(x^2) < 0$ can be guaranteed. Combining the above two results, it can be concluded that if $\beta > (3\rho\eta)/2$, then there exist two points $x^1 = \left(\frac{\delta}{2\beta}, \ldots, \frac{\delta}{2\beta}\right)^T$ and $x^2 = \left(\frac{3\delta}{2\beta}, \ldots, \frac{3\delta}{2\beta}\right)^T$ such that $g(x^1) > 0$ and $g(x^2) < 0$. Therefore, by the mean value theorem of calculus [10], there exists a point $x^0 \in B(x^*, r)$, $x^1 < x^0 < x^2$, such that $g(x^0) = 0 = x^*$. Then,
as long as an appropriate β satisfying $\det(g'(x^0)) \ne 0$ is determined, we can conclude that $x^*$ is a snap-back repeller of the map g defined in (9.4). In fact, considering that β can be chosen sufficiently large, we can always determine an appropriate constant β such that
$$\det\left(g'(x^0)\right) = \det\left((A+\Delta A)S\Phi(Sx^0) + \pi\beta\cos\left(\frac{\beta\pi}{\delta}x^0\right)\right) \ne 0.$$
Then, it can be inferred that $x^*$ is a snap-back repeller of the map g defined in (9.4). To conclude, if an appropriate β is selected such that $\beta > \bar{\beta}$ and $\det(g'(x^0)) \ne 0$ is satisfied, where
$$\bar{\beta} = \max\left\{\frac{1+\rho\eta}{\pi},\ \frac{3\rho\eta}{2},\ \frac{3\sqrt{n}\,\delta}{2r}\right\},$$
then $x^* = 0$ is a snap-back
repeller of the map g defined in (9.4). Then, according to Lemma 9.1, it can be con-
cluded that the controlled system is chaotic in the sense of Li–Yorke. This completes
the proof.
Remark 9.1. On one hand, there exists the possibility that the selected β is not very appropriate, which makes the equality $\det(g'(x^0)) = 0$ hold, although the probability of the event $\det(g'(x^0)) = 0$ is very small for a given β. Then, the controlled system may be unable to generate chaos in this case. On the other hand, the exact value of r in $B(x^*, r)$ is not easy to determine. Based on the two facts above, a feasible method is presented here. Actually, we can regard β as an adjustable parameter, i.e., the value of β is increased gradually from the initial value 0. In this process, we can see that a series of period-doubling bifurcations occurs continually. In the end, chaos will be generated in the controlled system.
Remark 9.2. Assume that the original discrete-time system to be chaotified is fully
unknown. However, if it can be approximated by the uncertain discrete-time FHM
(9.2) with adequate accuracy, then we can believe that the controller for chaotifica-
tion of the uncertain discrete-time FHM (9.2) can chaotify the original system at the
same time.
9.2.3 Simulations
Fig. 9.4 The chaotic attractor of the controlled system with β = 200
$$\Delta A = \begin{pmatrix} 0.07\sin\frac{k\pi}{2} & 0.1\cos\frac{k\pi}{2} & 0 \\ 0.01\cos\frac{k\pi}{2} & 0 & 0.05\cos\frac{k\pi}{2} \\ 0.09\cos\frac{k\pi}{2} & 0.08\sin\frac{k\pi}{2} & 0 \end{pmatrix},$$
$S = \mathrm{diag}\{s_1, s_2, s_3\} = \mathrm{diag}\{1, 1, 1\}$, and
$$u(k) = \delta\sin\left(\frac{\beta\pi}{\delta}x(k)\right).$$
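A simulation sketch of the controlled uncertain FHM is given below; the nominal matrix A, the amplitude δ, and the initial state are placeholder choices (the text fixes ΔA, S = I, and β = 200, cf. Fig. 9.4), so the resulting attractor is only qualitatively comparable:

```matlab
% Sketch: chaotifying an uncertain discrete-time FHM with the controller of
% Theorem 9.2.  The nominal A, the amplitude delta and the initial state are
% placeholder values; Delta A, S = eye(3) and beta = 200 follow the text.
beta = 200;  delta = 0.01;                       % delta is an assumed small value
A = [-0.3 0.1 0; 0.1 -0.4 0.2; 0 0.1 -0.5];      % placeholder nominal A
S = eye(3);
x = [0.1; -0.2; 0.05];  X = zeros(3, 5000);
for k = 1:5000
    dA = [0.07*sin(k*pi/2) 0.1*cos(k*pi/2) 0; 0.01*cos(k*pi/2) 0 0.05*cos(k*pi/2); 0.09*cos(k*pi/2) 0.08*sin(k*pi/2) 0];
    u  = delta*sin(beta*pi*x/delta);             % controller (9.3)
    x  = (A + dA)*tanh(S*x) + u;                 % controlled FHM (9.4)
    X(:,k) = x;
end
plot3(X(1,:), X(2,:), X(3,:), '.');              % chaotic attractor (cf. Fig. 9.4)
```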
9.3 Chaotification of Continuous-Time Fuzzy Hyperbolic Model Using Impulsive Control

First, we review some necessary preliminaries for Devaney's definition of chaos and the continuous-time FHM.
Definition 9.3 ([2]). A map φ : S → S, where S is a set, is chaotic if
(i) φ has sensitive dependence on initial conditions, i.e., for any $x \in S$ and any neighborhood $\tilde{N}$ of x in S, there exists a $\delta > 0$ such that $\|\phi^m(x) - \phi^m(y)\| > \delta$ for some $y \in \tilde{N}$ and $m > 0$, where $\phi^m$ is the mth-order iteration of φ, i.e., $\phi^m := \phi\circ\phi\circ\cdots\circ\phi$ (m times).
(ii) φ is topologically transitive, i.e., for any pair of subsets $U, V \subset S$, there exists an integer $m > 0$ such that $\phi^m(U)\cap V \ne \emptyset$.
(iii) The periodic points of φ are dense in S.
Remark 9.3. Definition 9.3 is for discrete-time systems. For continuous-time sys-
tems, we need to construct a Poincaré section so as to get a Poincaré map. Details
will be provided later.
Definition 9.4 ([24]). Given a plant with n input variables x = (x1 (t), . . . , xn (t))T and
an output variable ẋ, we call the fuzzy rule base a continuous-time hyperbolic-type
fuzzy rule base if it satisfies the following conditions:
(i) Every fuzzy rule has the following form:
where kxi > 0 (i = 1, . . . , n), then we can derive the following model:
ẋ = A tanh(Kx), (9.9)
AT Q + QA < 0.
where $f_i(\cdot)$ is the ith element of $f(x)$; then $V(x)$ is positive definite and radially unbounded, namely, $V(x) > 0$ for all $x \ne 0$ and $V(x) \to +\infty$ as $\|x\| \to +\infty$. The time derivative of $V(x)$ along system (9.10) can be computed as follows: Substituting (9.11) into (9.12) and noting that $(A+BL)$ is a Hurwitz diagonally stable matrix, we get
$$\dot{V}(x) = f^T(x)\left[(A+BL)^TQ + Q(A+BL)\right]f(x) < 0, \quad \text{if } x \ne 0.$$
Moreover, the proof is independent of initial values, so the closed-loop system (9.10) is globally asymptotically stable.
Remark 9.4. For the purpose of deriving our main results, we only require the sta-
bility of closed-loop systems (9.10) and (9.11). It should be noted that there are
other results for the stability of closed-loop systems (9.10) and (9.11). System (9.9)
and the closed-loop systems (9.10) and (9.11) can be regarded as recurrent neural
networks. They are also special cases of Lur’e systems [8].
Remark 9.5. Under the conditions of Lemma 9.3, solutions of the controlled system
(9.10) are defined in a bounded region D ⊂ Rn .
Remark 9.6. If we let à = A + BL, system (9.10) with u given in (9.11) becomes
ẋ = Ã f (x). (9.13)
Because of the form of $f(x)$, there exists a constant $\gamma > 0$ such that, for all different $x, y \in D$, $\|f(y) - f(x)\| \le \gamma\|y - x\|$. That is to say, $f(x)$ is Lipschitzian on D.
From the theory of ordinary differential equations, we know that system (9.10) has
a unique continuous solution φ (t,t0 , x0 ) through a given initial point, (t0 , x0 ), which
is also continuously dependent on x0 [5].
Proof. First, it should be emphasized that for system (9.14) to display chaotic dynamics, its phase space must be a finite region; that is to say, there exists an $M > 0$ such that the trajectory of system (9.14), $\phi(t, t_0, x_0)$, satisfies $\|\phi(t, t_0, x_0)\| \le M$ for all t in the domain. In fact, for $t \in [\tau_0, \tau_1)$, system (9.14) is under the action of $\tilde{A}f(x)$. By Lemma 9.3, its trajectory, $\phi(t, t_0^+, x_0)$, is asymptotically stable. When $t = \tau_1$, $\tilde{A}f(x)$ turns off and $I_1(x(\tau_1))$ acts on system (9.14). The trajectory of system (9.14) jumps from $\phi(\tau_1^-, t_0^+, x_0)$ to $x_0^1$. We denote the trajectory value at each instant $t = \tau_k$ by $x_0^k$, $k = 1, 2, 3, \ldots$, where $x_0^0 = x_0$. Since $I_1(x(\tau_1))$ is a finite impulse signal, $x_0^1$ is also finite. For $t \in (\tau_1, \tau_2)$, $I_1(x(\tau_1))$ turns off and $\tilde{A}f(x)$ turns on with the initial point $(\tau_1^+, x_0^1)$. The trajectory of system (9.14) in this time interval, denoted $\phi(t, \tau_1^+, x_0^1)$, is also asymptotically stable. The analysis in the other time intervals, $(\tau_k, \tau_{k+1})$, $k = 2, 3, 4, \ldots$, is the same as that in $(\tau_1, \tau_2)$, and the situations at the other time instants, $\tau_k$, $k = 2, 3, 4, \ldots$, are similar to that at $t = \tau_1$. It is easy to see that although
$$S_k = \{(t, x) : x \in D,\ t = \tau_k\}, \quad k = 1, 2, 3, \ldots,$$
$$P : V \to V, \qquad P = \psi\circ g\circ\psi^{-1}.$$
Since
$$P^N = \underbrace{(\psi\circ g\circ\psi^{-1})\circ\cdots\circ(\psi\circ g\circ\psi^{-1})}_{N\text{th-order iteration of }P} = \psi\circ g^N\circ\psi^{-1}, \tag{9.17}$$
to prove that (9.16) holds is equivalent to proving that
$$\left\|\psi\left(g^N(y_0)\right) - \psi\left(g^N(\bar{y}_0)\right)\right\| > \varepsilon \tag{9.18}$$
$$P^N(E)\cap F \ne \emptyset. \tag{9.19}$$
It is known that for any two subsets $\psi^{-1}(E)$ and $\psi^{-1}(F) \subset Y$, there exists an integer $N > 0$ such that
$$g^N\left(\psi^{-1}(E)\right)\cap\psi^{-1}(F) \ne \emptyset. \tag{9.20}$$
9.3.3 Simulations
$$\dot{x} = Af(x) = A\tanh(Kx),$$
where $x = (x_1, x_2, x_3)^T$,
$$A = \begin{pmatrix} 0 & c_{x_2} & c_{x_3} \\ c_{x_1} & 0 & c_{x_3} \\ c_{x_1} & c_{x_2} & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 & 3 \\ 2 & 0 & 3 \\ 2 & 1 & 0 \end{pmatrix},$$
and $K = \mathrm{diag}\{k_1, k_2, k_3\} = \mathrm{diag}\{2, 3, 1\}$. According to Theorem 9.4, the controlled system is
$$\begin{cases} \dot{x} = (A+BL)f(x), & t \ne \tau_k, \\ \Delta x = I_k(x(t)), & t = \tau_k, \quad k = 1, 2, 3, \ldots, \\ x(t_0^+) = x_0. \end{cases} \tag{9.22}$$
Fig. 9.8 State curves of system (9.22) when Λ = diag {0.8, 0.2, 1}
Fig. 9.9 Phase diagram of system (9.22) when Λ = diag {0.8, 0.2, 1}
where $\Lambda = \mathrm{diag}\{\lambda_1, \lambda_2, \lambda_3\}$ and $g(y_{k-1}) = \left(g_1(y_{k-1}^1),\ g_2(y_{k-1}^2),\ g_3(y_{k-1}^3)\right)^T$ with $g_i(\cdot)$ a logistic map:
$$y_k^i = g_i\left(y_{k-1}^i\right) = a\,y_{k-1}^i\left(1 - y_{k-1}^i\right), \quad i = 1, 2, 3.$$
is globally asymptotically stable, whose trajectories are shown in Fig. 9.5. With
the impulsive control and T = 0.01, from Figs. 9.6–9.9, we see that the controlled
system is in chaos for different Λ . These results verify the claims of Theorem 9.4
and Remark 9.7.
9.4 Chaotification of Linear Systems Using Sampled Data Control

Consider the following two classes of continuous-time linear systems. One class of systems is given as follows:
$$\dot{x} = Ax + Bu, \tag{9.24}$$
where $x \in \mathbb{R}^n$ is the state vector, $A \in \mathbb{R}^{n\times n}$ is a Hurwitz stable matrix whose eigenvalues are real and nonidentical to one another, i.e., $\lambda_i < 0$, $\lambda_i \ne \lambda_j$ ($i \ne j$, $i, j = 1, 2, \ldots, n$), $B \in \mathbb{R}^{n\times m}$, $m \le n$, $\mathrm{rank}(B) = r$, and $u \in \mathbb{R}^m$ satisfies
$$\|u\|_\infty \le \varepsilon_0, \tag{9.25}$$
$$\dot{x} = Ax + bu, \tag{9.26}$$
where $x \in \mathbb{R}^n$ is the state vector, and $A \in \mathbb{R}^{n\times n}$ is a Hurwitz stable matrix whose eigenvalues are nonidentical to one another. Moreover, there is a pair of complex eigenvalues, i.e., $\lambda_{1,2} = \alpha \pm \sqrt{-1}\,\beta$, $\lambda_i \in \mathbb{R}$ ($i = 3, 4, \ldots, n$), $\alpha < 0$, $\lambda_i < 0$, $\lambda_i \ne \lambda_j$ ($i \ne j$, $i, j = 3, 4, \ldots, n$), $b \in \mathbb{R}^{n\times 1}$, and u is an input variable satisfying
$$|u| \le \varepsilon_0. \tag{9.27}$$
The aim is to design a controller satisfying (9.27) to chaotify the system (9.26).
Fig. 9.10 The sawtooth function ϕε(x)
To chaotify the above two classes of stable linear systems, a lemma is presented
below.
Lemma 9.5 ([18]). Consider the following discrete-time linear systems:
where x(k) = (x1 (k), x2 (k), . . . , xn (k))T , and Λ is a Schur stable matrix, i.e., ρ (Λ ) <
1, where ρ (·) denotes the spectral radius. If the controller u(k) is chosen as
First, we will develop a sampled data controller based on Lemma 9.5 to make the
linear systems (9.24) chaotic.
Since the eigenvalues of A are nonidentical to one another, there exists a nonsin-
gular matrix Q ∈ Rn×n such that system (9.24) can be transformed into
Assuming that $\bar{b}_{i_1}, \bar{b}_{i_2}, \ldots, \bar{b}_{i_r}$ are linearly independent rows of the transformed input matrix, (9.30) can be rewritten as
$$\begin{cases} \dot{\bar{x}}_{i_1} = \lambda_{i_1}\bar{x}_{i_1} + \bar{b}_{i_1}u, \\ \dot{\bar{x}}_{i_2} = \lambda_{i_2}\bar{x}_{i_2} + \bar{b}_{i_2}u, \\ \quad\vdots \\ \dot{\bar{x}}_{i_r} = \lambda_{i_r}\bar{x}_{i_r} + \bar{b}_{i_r}u, \\ \quad\vdots \end{cases} \tag{9.31}$$
According to (9.31), introducing a new vector $\tilde{x}$, we can denote
$$\tilde{x}_1 = \bar{x}_{i_1},\quad \tilde{x}_2 = \bar{x}_{i_2},\quad \ldots,\quad \tilde{x}_r = \bar{x}_{i_r},\quad \ldots \tag{9.32}$$
i.e., $\tilde{x} = R^{-1}\bar{x}$, where $R^{-1}$ is a nonsingular transformation matrix corresponding to (9.32). Denoting $P = QR$, system (9.24) can be rewritten as
where x̃ = P−1 x, Ã = P−1 AP = diag{λ̃1 , λ̃2 , . . . , λ̃n }, and B̃ = P−1 B. Moreover, the
first r row vectors of B̃ are linearly independent.
Since $\mathrm{rank}(B) = \mathrm{rank}(\tilde{B}) = r$, there exists a nonsingular matrix $C \in \mathbb{R}^{m\times m}$ satisfying
$$\tilde{B}C = \begin{pmatrix} I_{r\times r} & 0_{r\times(m-r)} \\ D_{(n-r)\times r} & 0_{(n-r)\times(m-r)} \end{pmatrix}, \tag{9.34}$$
where $I_{r\times r}$ is the identity matrix, $0_{r\times(m-r)}$ and $0_{(n-r)\times(m-r)}$ are zero matrices, and the matrix $D_{(n-r)\times r}$ can be represented as
$$D_{(n-r)\times r} = \begin{pmatrix} d_{11} & \cdots & d_{1r} \\ \vdots & \ddots & \vdots \\ d_{(n-r)1} & \cdots & d_{(n-r)r} \end{pmatrix}. \tag{9.35}$$
Choose
u = Cv,
where v ∈ Rm×1 is a redefined control input. Therefore, the controlled system (9.33)
is transformed into
x̃˙ = Ãx̃ + B̃Cv. (9.36)
Next, we will discretize system (9.36). Assume that the sampling interval T > 0 is
sufficiently small. Cascading the sampler with a zeroth-order hold, system (9.36) is
discretized to
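A sketch of the discretized closed loop, assuming the standard zero-order-hold construction and the input structure of Theorem 9.5:
$$\tilde{x}(k+1) = \tilde{A}(T)\tilde{x}(k) + \tilde{B}(T)v(k), \qquad \tilde{A}(T) = e^{\tilde{A}T}, \qquad \tilde{B}(T) = \int_0^T e^{\tilde{A}s}\,ds\,\tilde{B}C,$$
with the sampled input $v_i(k) = \dfrac{\tilde\lambda_i}{e^{\tilde\lambda_i T}-1}\,\varphi_\varepsilon\!\left(\sigma\tilde{x}_i(kT)\right)$ for $i \le r$ and $v_i(k) = 0$ for $i > r$ (cf. (9.44)).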
where $\varphi_\varepsilon(x)$ is a sawtooth function. Then, the subsystem composed of the first r variables of (9.37) is given by
$$\begin{cases} \tilde{x}_1(k+1) = e^{\tilde\lambda_1 T}\tilde{x}_1(k) + \varphi_\varepsilon(\sigma\tilde{x}_1(k)), \\ \tilde{x}_2(k+1) = e^{\tilde\lambda_2 T}\tilde{x}_2(k) + \varphi_\varepsilon(\sigma\tilde{x}_2(k)), \\ \quad\vdots \\ \tilde{x}_r(k+1) = e^{\tilde\lambda_r T}\tilde{x}_r(k) + \varphi_\varepsilon(\sigma\tilde{x}_r(k)). \end{cases}$$
$$\tilde{x}(k) = \left(\tilde{A}(T)\right)^k\tilde{x}(0) + \sum_{i=1}^{k-1}\left(\tilde{A}(T)\right)^{k-1-i}\tilde{B}(T)v(i).$$
Denote $\|\tilde{A}(T)\|_\infty = \alpha$. Since $\tilde\lambda_i < 0$ ($i = 1, 2, \ldots, n$), it can be inferred that $\alpha < 1$. Furthermore, it can be seen that $\|v\|_\infty \le \varepsilon\delta = \varepsilon_0/\|C\|_\infty$ from (9.38), (9.40), and (9.41). Therefore,
$$\lim_{k\to\infty}\|\tilde{x}(k)\|_\infty \le \lim_{k\to\infty}\alpha^k\|\tilde{x}(0)\|_\infty + \lim_{k\to\infty}\sum_{i=1}^{k-1}\alpha^{k-1-i}\|\tilde{B}(T)\|_\infty\|v(i)\|_\infty \le \frac{\varepsilon_0\|\tilde{B}(T)\|_\infty}{\|C\|_\infty}\lim_{k\to\infty}\sum_{i=1}^{k-1}\alpha^{k-1-i} = \frac{\varepsilon_0\|\tilde{B}(T)\|_\infty}{\|C\|_\infty(1-\alpha)}. \tag{9.42}$$
It is obvious that $\tilde{x}$ is chaotic if $\tilde{x}_R$ is chaotic. So, the next step is to chaotify system (9.39). Since $\tilde\lambda_i < 0$ ($i = 1, 2, \ldots, n$), it is clear that $e^{\tilde\lambda_i T} < 1$. Therefore, system (9.39) is stable without control (i.e., $v(k) \equiv 0$). According to Lemma 9.5, choosing
$$\sigma > \max\left\{3\left\|\tilde{A}_R(T)\right\|_\infty,\ \left\|\tilde{A}_R(T)\right\|_\infty + 1\right\} + 1, \tag{9.43}$$
system (9.39) is chaotic in the sense of Li–Yorke; that is, system (9.37) is chaotic in the sense of Li–Yorke.
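The scalar mechanism can be illustrated with a short sketch. The particular sawtooth ϕε used below (period 4ε, range [−2ε, 2ε]) is an assumed concrete choice, and the values of λ̃, T, and ε are illustrative:

```matlab
% Sketch: the scalar mechanism behind (9.39).  phi below is one concrete
% sawtooth of period 4*eps with range [-2*eps, 2*eps]; this particular form
% of phi_eps and the numerical values are assumptions.
epsv = 0.05;  sigma = 3;  lam = -1;  T = 1;          % sigma consistent with (9.43)
a    = exp(lam*T);                                    % stable open-loop gain
phi  = @(x) mod(x + 2*epsv, 4*epsv) - 2*epsv;         % assumed sawtooth phi_eps
x = 0.01;  X = zeros(1,5000);
for k = 1:5000
    x = a*x + phi(sigma*x);                           % controlled scalar map
    X(k) = x;
end
plot(X(1:end-1), X(2:end), '.');                      % return map of the state
```

The open-loop contraction keeps the state bounded while the steep sawtooth feedback makes the composite map expanding, which is exactly the combination exploited in Lemma 9.5.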
The fact that x̃(k) in (9.37) is chaotic means that x̃(t) in (9.33) and x(t) in (9.24)
are chaotic. Therefore, we can derive the following theorem.
Theorem 9.5 ([6]). For the n-dimensional continuous-time linear controlled system (9.24), if the nonlinear feedback controller is chosen as
$$u(t) = Cv(t) = C\begin{pmatrix} v_1(t) \\ \vdots \\ v_m(t) \end{pmatrix} = C\begin{pmatrix} \dfrac{\tilde\lambda_1}{e^{\tilde\lambda_1 T}-1}\,\varphi_\varepsilon\!\left(\sigma\left(P^{-1}x(kT)\right)_1\right) \\ \vdots \\ \dfrac{\tilde\lambda_r}{e^{\tilde\lambda_r T}-1}\,\varphi_\varepsilon\!\left(\sigma\left(P^{-1}x(kT)\right)_r\right) \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \tag{9.44}$$
where i = 1, 2, . . . , n − r, we have
$$\left\|\tilde{B}(T)\right\|_\infty = \max\left\{\frac{1}{\tilde\lambda_1}\left(e^{\tilde\lambda_1 T}-1\right), \ldots, \frac{1}{\tilde\lambda_r}\left(e^{\tilde\lambda_r T}-1\right), \frac{D_1}{\tilde\lambda_{r+1}}\left(e^{\tilde\lambda_{r+1}T}-1\right), \ldots, \frac{D_{(n-r)}}{\tilde\lambda_n}\left(e^{\tilde\lambda_n T}-1\right)\right\}.$$
From the above analysis, it can be seen that $\|\tilde{B}(T)\|_\infty$ decreases as the sampling interval T decreases. Therefore, it can be inferred from (9.42) that the chaotic region shrinks as the sampling interval T and the control upper bound $\varepsilon_0$ decrease.
Remark 9.9. The control input in Theorem 9.5 satisfies $\|u\|_\infty \le \varepsilon_0$, where $\varepsilon_0$ is an arbitrarily small positive constant. In fact, if the magnitude of the control input can be chosen arbitrarily large, then we can chaotify any linear system $\dot{x} = Ax + Bu$ by assigning appropriate poles and adopting a control input in the form of (9.44), as long as (A, B) is completely controllable. In this case, the controller consists of two parts: a linear state feedback controller and a nonlinear state feedback controller.
Next, we will investigate the problem of chaotification of the linear systems (9.26). Since the eigenvalues of (9.26) are $\lambda_{1,2} = \alpha \pm i\beta$, $\lambda_i \in \mathbb{R}$ ($i = 3, 4, \ldots, n$), there exists a nonsingular matrix P such that (9.26) can be transformed into the following Jordan standard form:
$$\dot{\tilde{x}} = \tilde{A}\tilde{x} + \tilde{b}u, \tag{9.45}$$
where $\tilde{x} = P^{-1}x$, $\tilde{A} = P^{-1}AP = \begin{pmatrix} \tilde{A}_1 & 0 \\ 0 & \tilde{A}_2 \end{pmatrix}$ is a Jordan matrix, $\tilde{A}_1 = \begin{pmatrix} \alpha & \beta \\ -\beta & \alpha \end{pmatrix}$, $\tilde{A}_2 = \mathrm{diag}\{\lambda_3, \ldots, \lambda_n\}$, and $\tilde{b} = P^{-1}b := \left(\tilde{b}_1, \tilde{b}_2, \ldots, \tilde{b}_n\right)^T$.
Discretizing (9.45) by a zeroth-order hold, we can derive the discretized linear system as follows:
$$\tilde{x}(k+1) = \tilde{A}(T)\tilde{x}(k) + \tilde{b}(T)u(k), \tag{9.46}$$
where T is the sampling interval,
$$\tilde{A}(T) = e^{\tilde{A}T} = \begin{pmatrix} \tilde{A}_1(T) & 0 \\ 0 & \tilde{A}_2(T) \end{pmatrix}, \qquad \tilde{b}(T) = \int_0^T e^{\tilde{A}t}\,dt\,\tilde{b} = \frac{e^{\alpha T}}{\alpha^2+\beta^2}\begin{pmatrix} \Lambda_1(T) & 0 \\ 0 & \Lambda_2(T) \end{pmatrix}\tilde{b},$$
$$\tilde{A}_1(T) = e^{\tilde{A}_1 T} = \begin{pmatrix} e^{\alpha T}\cos\beta T & -e^{\alpha T}\sin\beta T \\ e^{\alpha T}\sin\beta T & e^{\alpha T}\cos\beta T \end{pmatrix}, \qquad \tilde{A}_2(T) = \mathrm{diag}\left\{e^{\lambda_3 T}, \ldots, e^{\lambda_n T}\right\},$$
$$\Lambda_1(T) = \begin{pmatrix} \alpha\cos\beta T + \beta\sin\beta T - \alpha e^{-\alpha T} & -\left(\alpha\sin\beta T - \beta\cos\beta T + \beta e^{-\alpha T}\right) \\ \alpha\sin\beta T - \beta\cos\beta T + \beta e^{-\alpha T} & \alpha\cos\beta T + \beta\sin\beta T - \alpha e^{-\alpha T} \end{pmatrix},$$
$$\Lambda_2(T) = \mathrm{diag}\{\sigma_3(T), \ldots, \sigma_n(T)\}, \qquad \sigma_i(T) = \frac{(\alpha^2+\beta^2)\left(e^{\lambda_i T}-1\right)}{e^{\alpha T}\lambda_i} \quad (i = 3, \ldots, n).$$
Choosing the sampling interval T as
$$T = T_0 = \frac{\pi}{\beta}, \tag{9.47}$$
we get $\tilde{A}_1(T_0) = \mathrm{diag}\left\{-e^{\alpha T_0}, -e^{\alpha T_0}\right\}$. Consequently,
$$\tilde{A}(T_0) = \mathrm{diag}\left\{-e^{\alpha T_0}, -e^{\alpha T_0}, e^{\lambda_3 T_0}, \ldots, e^{\lambda_n T_0}\right\}.$$
Here,
$$\tilde{b}(T_0) = \frac{e^{\alpha T_0}}{\alpha^2+\beta^2}\begin{pmatrix} -\alpha(1+e^{-\alpha T_0}) & -\beta(1+e^{-\alpha T_0}) & 0 \\ \beta(1+e^{-\alpha T_0}) & -\alpha(1+e^{-\alpha T_0}) & 0 \\ 0 & 0 & \Lambda_2(T_0) \end{pmatrix}\tilde{b} := \begin{pmatrix} \tilde{b}_1(T_0) \\ \vdots \\ \tilde{b}_n(T_0) \end{pmatrix},$$
where
$$\tilde{b}_1(T_0) = \frac{-e^{\alpha T_0}}{\alpha^2+\beta^2}\left[\alpha(1+e^{-\alpha T_0})\tilde{b}_1 + \beta(1+e^{-\alpha T_0})\tilde{b}_2\right],$$
$$\tilde{b}_2(T_0) = \frac{-e^{\alpha T_0}}{\alpha^2+\beta^2}\left[\alpha(1+e^{-\alpha T_0})\tilde{b}_2 - \beta(1+e^{-\alpha T_0})\tilde{b}_1\right],$$
$$\tilde{b}_i(T_0) = \frac{\tilde{b}_i}{\lambda_i}\left(e^{\lambda_i T_0}-1\right) \quad (i = 3, \ldots, n).$$
Therefore, (9.46) can be rewritten as follows:
$$\begin{cases} \tilde{x}_1(k+1) = -e^{\alpha T_0}\tilde{x}_1(k) + \tilde{b}_1(T_0)u(k), \\ \tilde{x}_2(k+1) = -e^{\alpha T_0}\tilde{x}_2(k) + \tilde{b}_2(T_0)u(k), \\ \tilde{x}_3(k+1) = e^{\lambda_3 T_0}\tilde{x}_3(k) + \tilde{b}_3(T_0)u(k), \\ \quad\vdots \\ \tilde{x}_n(k+1) = e^{\lambda_n T_0}\tilde{x}_n(k) + \tilde{b}_n(T_0)u(k). \end{cases} \tag{9.48}$$
It is obvious that the state variables of (9.48), i.e., x̃1 (k), x̃2 (k), . . . , x̃n (k), are inde-
pendent. Since b is nonsingular, b̃ is nonsingular after a nonsingular transformation.
So, all b̃i (T0 ) are not zero. We consider the following one-dimensional systems:
$$u(k) = \frac{\varphi_\varepsilon\left(\sigma\tilde{x}_1(k)\right)}{\tilde{b}_1(T_0)}, \tag{9.50}$$
where
$$\varepsilon = \varepsilon_0\tilde{b}_1(T_0) = \frac{2\varepsilon_0 e^{\alpha T_0}}{\alpha^2+\beta^2}\left[\alpha(1+e^{-\alpha T_0})\tilde{b}_1 + \beta(1+e^{-\alpha T_0})\tilde{b}_2\right]$$
and
$$\sigma > \max\left\{3e^{\alpha T_0},\ e^{\alpha T_0}+1\right\} + 1,$$
the controlled system (9.49) is chaotic in the sense of Li–Yorke. Moreover, $|u(k)| \le \varepsilon_0$.
With the control input u(k) of (9.50), the solution of the closed-loop system (9.48) is given by
$$\tilde{x}(k) = \left(\tilde{A}(T_0)\right)^k\tilde{x}(0) + \sum_{i=1}^{k-1}\left(\tilde{A}(T_0)\right)^{k-1-i}\tilde{b}(T_0)u(i).$$
Denote $\|\tilde{A}(T_0)\|_\infty = \rho$. Since $\alpha < 0$ and $\lambda_i < 0$ ($i = 3, 4, \ldots, n$), it is obvious that $\rho < 1$. Therefore,
$$\lim_{k\to\infty}\|\tilde{x}(k)\|_\infty \le \lim_{k\to\infty}\rho^k\|\tilde{x}(0)\|_\infty + \lim_{k\to\infty}\sum_{i=1}^{k-1}\rho^{k-1-i}\|\tilde{b}(T_0)\|_\infty|u(i)| \le \frac{\varepsilon_0\|\tilde{b}(T_0)\|_\infty}{1-\rho},$$
i.e., the state variables of system (9.48) are bounded. For the n-dimensional system (9.48), the fact that $\tilde{x}_1(k)$ is chaotic means that $\tilde{x}_2(k), \ldots, \tilde{x}_n(k)$ are chaotic.
The fact that the discretized system (9.48) is chaotic means that the continuous-time systems (9.45) and (9.26) are chaotic. Therefore, when $\tilde{b}_1(T_0) \ne 0$, we can adopt the above control input. More generally, we have the following result.
Theorem 9.6 ([6]). For the n-dimensional continuous-time linear systems (9.26), when $\tilde{b}_i(T_0) \ne 0$ ($i = 1, 2, \ldots, n$), we can adopt the following control input to chaotify the continuous-time linear systems (9.26):
Remark 9.10. The control input u(k) in Theorem 9.6 satisfies $|u| \le \varepsilon_0$, where $\varepsilon_0$ is an arbitrarily small positive constant. If this restraint is removed, i.e., if the magnitude of the control input u(k) can be chosen arbitrarily large, then we can chaotify any linear system $\dot{x} = Ax + bu$ as long as (A, b) is completely controllable. Here, we can first assign the poles of the closed-loop system appropriately by state feedback control, and then add a nonlinear feedback control of the form (9.51) to the system; in this way, any linear system $\dot{x} = Ax + bu$ can be chaotified. In this case, the controller again consists of two parts: a linear state feedback controller and a nonlinear state feedback controller.
9.4.3 Simulations
In this subsection, two illustrative examples are presented to demonstrate the effec-
tiveness of the proposed chaotification approach. First, let us consider the following
three-dimensional continuous-time linear system:
$$\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} -29/18 & 17/18 & 1/6 \\ 7/18 & -19/18 & 1/6 \\ -5/18 & 11/18 & -5/6 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} u_1 \\ u_2 \end{pmatrix}. \tag{9.53}$$
By calculation, the eigenvalues of the system matrix A are given by $\lambda_1 = -1/2$, $\lambda_2 = -1$, and $\lambda_3 = -2$. Choosing the nonsingular matrix P as
$$P = \begin{pmatrix} 1 & 1 & 2 \\ 1 & 1 & -1 \\ 1 & -2 & 1 \end{pmatrix},$$
the transformation is given by $\tilde{x} = P^{-1}x$. System (9.53) is transformed into the following diagonal form:
\begin{pmatrix} \dot{\tilde{x}}_1 \\ \dot{\tilde{x}}_2 \\ \dot{\tilde{x}}_3 \end{pmatrix} = \begin{pmatrix} -1/2 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -2 \end{pmatrix} \begin{pmatrix} \tilde{x}_1 \\ \tilde{x}_2 \\ \tilde{x}_3 \end{pmatrix} + \begin{pmatrix} 2/3 & 8/9 \\ 1/3 & -2/9 \\ 0 & -1/3 \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix}.
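As a quick numerical check of this transformation, the following short Python sketch verifies, using only the data of (9.53) and the matrix P quoted above, that P^{-1}AP and P^{-1}B reproduce the diagonal form and the transformed input matrix:

import numpy as np

A = np.array([[-29.0, 17.0, 3.0],
              [7.0, -19.0, 3.0],
              [-5.0, 11.0, -15.0]]) / 18.0    # system matrix of (9.53) (1/6 = 3/18, 5/6 = 15/18)
B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                     # input matrix of (9.53)
P = np.array([[1.0, 1.0, 2.0],
              [1.0, 1.0, -1.0],
              [1.0, -2.0, 1.0]])

print(np.linalg.eigvals(A))                    # the eigenvalues -1/2, -1, -2 (in some order)
print(np.linalg.solve(P, A @ P).round(10))     # P^{-1} A P ~ diag(-1/2, -1, -2)
print(np.linalg.solve(P, B).round(10))         # P^{-1} B ~ [[2/3, 8/9], [1/3, -2/9], [0, -1/3]]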
It can be verified that the first two row vectors of b̃ are linearly independent. So, we get

C^{-1} = \begin{pmatrix} 2/3 & 8/9 \\ 1/3 & -2/9 \end{pmatrix}, \qquad \text{i.e.,} \qquad C = \begin{pmatrix} 1/2 & 2 \\ 3/4 & -3/2 \end{pmatrix}.

It can be calculated easily that ‖C‖_∞ = 2.5. According to (9.43), we can choose σ = 3. Assume that ε_0 = 0.2, the initial state is chosen as (0.01, 0.01, 0)^T, and the sampling intervals are chosen as T = 5, T = 1, and T = 0.2, respectively. According to (9.40), δ is obtained as δ = 1.0068, δ = 1.5820, and δ = 5.5167, respectively. According to (9.41), ε is obtained as ε = 0.0795, ε = 0.0506, and ε = 0.0145, respectively. The chaotic response curves corresponding to these sampling intervals are shown in Figs. 9.11–9.13. From the simulation results, we can see that the scale of the chaotic region becomes smaller and smaller as the sampling interval decreases.
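For completeness, here is a minimal Python sketch of the sampled-data simulation for this example. Since (9.40)–(9.43) and the definition of φ_ε are not restated in this subsection, the sawtooth nonlinearity and the way the two control channels are routed through C are illustrative assumptions; only the plant data, σ = 3, ε_0 = 0.2, the value ε = 0.0795 for T = 5, and the initial state are taken from the text.

import numpy as np
from scipy.linalg import expm

# Data quoted in the text for system (9.53) in the diagonal coordinates.
Lam = np.diag([-0.5, -1.0, -2.0])                         # diagonal state matrix
Bt = np.array([[2/3, 8/9], [1/3, -2/9], [0.0, -1/3]])     # transformed input matrix b_tilde
C = np.array([[0.5, 2.0], [0.75, -1.5]])                  # inverse of the first two rows of Bt

sigma, eps0, T = 3.0, 0.2, 5.0                            # values used for Fig. 9.11
eps = 0.0795                                              # the epsilon reported in the text for T = 5

def saw(amp, y):
    # Bounded sawtooth used as a stand-in for phi_eps (an assumption on our part).
    return amp * (2.0 * ((y / (2.0 * amp) + 0.5) % 1.0) - 1.0)

# Zero-order-hold discretization over one sampling interval T.
Phi = expm(Lam * T)
Gamma = np.linalg.solve(Lam, (Phi - np.eye(3)) @ Bt)

x = np.array([0.01, 0.01, 0.0])                           # initial state from the text
orbit = [x.copy()]
for k in range(500):
    # Assumed control law: the two sawtooth signals are routed through C,
    # which keeps each input component below eps0 = 0.2 (since ||C||_inf = 2.5).
    u = C @ np.array([saw(eps, sigma * x[0]), saw(eps, sigma * x[1])])
    x = Phi @ x + Gamma @ u
    orbit.append(x.copy())

orbit = np.array(orbit)
print("componentwise ranges:", orbit.min(axis=0), orbit.max(axis=0))

Rerunning the same loop with T = 1 (ε = 0.0506) or T = 0.2 (ε = 0.0145) should show the qualitative trend described above, namely that the region visited by the orbit shrinks as the sampling interval decreases.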
Second, let us consider the following three-dimensional continuous-time linear
system:
\begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \end{pmatrix} = \begin{pmatrix} -10/9 & -1/9 & -25/18 \\ 5/18 & -13/18 & -5/18 \\ 2/3 & 2/3 & -1/6 \end{pmatrix} \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} + \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} u. \qquad (9.54)
Fig. 9.11 Chaotic orbit in phase space and state variables of system (9.53) with T = 5
Fig. 9.12 Chaotic orbit in phase space and state variables of system (9.53) with T = 1
Fig. 9.13 Chaotic orbit in phase space and state variables of system (9.53) with T = 0.2
Fig. 9.14 Chaotic attractor and state variables of the controlled system (9.54)
It is easy to see that b̃_1 = 4, b̃_2 = 1, and b̃_3 = 0. According to (9.47), we get T_0 = π. According to (9.52), we can derive b̃_1(T_0) = -\frac{2}{5}\bigl(e^{-\pi/2}+1\bigr) ≈ -0.4832. Assuming that ε_0 = 0.2, according to Theorem 9.6, we get ε ≈ 0.0996. Choosing σ = 3 and the initial state x_0 = (0.01, 0.01, 0.02)^T, the linear system (9.54) is chaotified, as shown in Fig. 9.14.
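The numbers quoted for this example are easy to reproduce: the system matrix of (9.54) has a complex eigenvalue pair −1/2 ± j and a real eigenvalue −1, which is where the factor e^{−π/2} = e^{αT_0} comes from once T_0 = π is selected. A minimal check in Python follows; T_0 = π/β is our reading of (9.47), since that equation is not restated here.

import numpy as np

# System matrix of (9.54), written with a common denominator of 18.
A = np.array([[-20.0, -2.0, -25.0],
              [5.0, -13.0, -5.0],
              [12.0, 12.0, -3.0]]) / 18.0

print(np.linalg.eigvals(A))        # a complex pair -1/2 +/- 1j and a real eigenvalue -1

alpha, beta = -0.5, 1.0            # real and imaginary parts of the complex pair
T0 = np.pi / beta                  # assumed form of (9.47); gives T0 = pi as in the text
print(np.exp(alpha * T0))          # e^{-pi/2} ~ 0.2079, the factor appearing in b_tilde_1(T0)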
9.5 Summary
References
1. Chen QF, Zhong QH, Hong YG, Chen GR (2007) Generation and control of spherical circular
attractors using switching schemes. Int J Bifurc Chaos 17:243–253
2. Devaney RL (1989) An Introduction to Chaotic Dynamical Systems, 2nd edn. Addison-
Wesley, New York
3. Golub GH, van Loan CF (1983) Matrix Computations. Johns Hopkins University Press, Baltimore
4. Guckenheimer J, Holmes P (1983) Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer, New York
5. Hirsch MW, Smale S (1974) Differential Equations, Dynamical Systems, and Linear Algebra.
Academic Press, New York
6. Huang W (2005) A study of synchronization of chaotic systems and chaotification of nonchaotic systems. Ph.D. dissertation, Northeastern University, Shenyang
7. Kaszkurewicz E, Bhaya A (2000) Matrix Diagonal Stability in Systems and Computation.
Birkhäuser, Boston
8. Khalil HK (2002) Nonlinear Systems, 3rd edn. Prentice Hall, New Jersey
9. Lai DJ, Chen GR (2005) Chaotification of discrete-time dynamical systems: an extension of
the Chen–Lai algorithm. Int J Bifurc Chaos 15:109–117
10. Lang S (1997) Undergraduate Analysis, 2nd edn. Springer, New York
11. Li Z, Park JB, Chen GR, Joo YH, Choi YH (2002) Generating chaos via feedback control from a stable TS fuzzy system through a sinusoidal nonlinearity. Int J Bifurc Chaos 12:2283–2291
12. Lu JG (2005) Generating chaos via decentralized linear state feedback and a class of nonlinear
functions. Chaos Solitons Fractals 25:403–413
13. Lu JG (2006) Chaotic behavior in sampled-data control systems with saturating control.
Chaos Solitons Fractals 30:147–155
14. Margaliot M, Langholz G (2003) A new approach to fuzzy modeling and control of discrete-
time systems. IEEE Trans Fuzzy Syst 11:486–494
15. Marotto FR (2005) On redefining a snap-back repeller. Chaos Solitons Fractals 25:25–28
16. Schiff SJ, Jerger KD, Duong H, Chang T, Spano ML, Ditto WL (1994) Controlling chaos in
the brain. Nature 370:615–620
17. Wang XF, Chen GR, Yu XH (2000) Anticontrol of chaos in continuous-time systems via
time-delay feedback. Chaos 10:1–9
18. Wang XF, Chen GR (2000) Chaotifying a stable LTI system by tiny feedback control. IEEE
Trans Circuits Syst I 47:410–415
19. Wang XF, Chen GR, Man KF (2001) Making a continuous-time minimum-phase system
chaotic by using time-delay feedback. IEEE Trans Circuits Syst I 48:641–645
20. Yang W, Ding M, Mandell AJ, Ott E (1995) Preserving chaos: control strategies to preserve
complex dynamics with potential relevance to biological disorders. Phys Rev E 51:102–110
21. Yang L, Liu Z, Chen GR (2002) Chaotifying a continuous-time system via impulsive input.
Int J Bifurc Chaos 12:1121–1128
22. Yang RT, Hong YG, Qin HS, Chen GR (2005) Anticontrol of chaos for dynamic systems in
p-normal form: a homogeneity-based approach. Chaos Solitons Fractals 25:687–697
23. Zhang HG, Quan YB (2000) Modeling and control based on fuzzy hyperbolic model. Acta
Autom Sinica 26:729–735
24. Zhang HG, Quan YB (2001) Modeling, identification and control of a class of non-linear
system. IEEE Trans Fuzzy Syst 9:349–354
25. Zhang HG, Wang ZL, Liu D (2004) Chaotifying fuzzy hyperbolic model using adaptive inverse optimal control approach. Int J Bifurc Chaos 14:3505–3517
26. Zhang HG, Wang ZL, Li M, Quan YB, Zhang MJ (2004) Generalized fuzzy hyperbolic
model: a universal approximator. Acta Autom Sinica 30:416–422
27. Zhang HG, Wang ZL, Liu D (2005) Chaotifying fuzzy hyperbolic model using impulsive and
nonlinear feedback control approaches. Int J Bifurc Chaos 15:2603–2610
28. Zhao Y, Zhang HG, Zheng CD (2008) Anticontrol of chaos for discrete-time fuzzy hyperbolic
model with uncertain parameters. Chinese Phys B 17:529–535