Stability Analysis of ODEs
Lecture Notes

DYNAMICAL SYSTEMS
1 Linear Systems
(d/dt) x2(t) = (d/ds) x1(s) = f(x1(s)) = f(x2(t)) ,  where s = t − t0.
And for the initial condition, we have x2(t0) = x1(t0 − t0) = x0.
An autonomous system of two first order differential equations has the form
dx/dt = f(x, y)
dy/dt = g(x, y)
If the system is linear, we can express it in the form
dx/dt = ax + by
dy/dt = cx + dy
For which we can write
ẋ = (dx/dt, dy/dt)ᵀ = [a, b; c, d] (x, y)ᵀ = Ax ;  (a, b, c, d) ∈ R⁴
dx/dt = ax =⇒ x = c1 e^{at}
dy/dt = by =⇒ y = c2 e^{bt}
ẋ = [a, 0; 0, b] x =⇒ x(t) = [e^{at}, 0; 0, e^{bt}] (c1, c2)ᵀ = e^{At} C
After a bit of careful examination, it is evident that the solutions of this differential equation lie in R², and they have the form y = k x^{b/a}, where k = c2 / c1^{b/a}.
Phase Plane: While trying to describe the motion of a particle governed by the given differential equations, we can draw the solution curves in the phase space R^n, and this is known as the Phase Plane. Clearly, in the above uncoupled system, R² is the Phase Plane.
Phase Portrait: The set of all solution curves drawn in the Phase space is known as
Phase Portrait.
If a < 0 and b < 0, then this limit goes to (0, 0). Otherwise, most of the solutions diverge to infinity.
Roughly speaking, an equilibrium (x0 , y0 ) is asymptotically stable if every trajectory
(x(t), y(t)) beginning from an initial condition near (x0 , y0 ) stays near (x0 , y0 ) for
t > 0, and
lim (x(t), y(t)) = (x0 , y0 )
t→∞
The equilibrium is unstable if there are trajectories with initial conditions arbitrarily close to the equilibrium that move far away from that equilibrium.
Later on, we will discuss this in greater detail.
Invariance of the Axes: There is another observation that we can make for uncoupled systems. Suppose that the initial condition for an uncoupled system lies on the x axis; that is, suppose y0 = 0. Then the solution (x(t), y(t)) = (x0 e^{at}, 0) also lies on the x axis for all time. Similarly, if the initial condition lies on the y axis, then the solution (0, y0 e^{bt}) lies on the y axis for all time.
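As a quick numerical sanity check, the closed-form solution of the uncoupled system can be tested directly; the coefficients a = −1 and b = −2 below are hypothetical values chosen for illustration.

```python
import math

# Minimal sketch of the uncoupled system dx/dt = a x, dy/dt = b y with
# hypothetical coefficients; the solution is written in closed form and
# the invariance of the axes is checked numerically.
a, b = -1.0, -2.0

def solution(x0, y0, t):
    # (x(t), y(t)) = (x0 e^{at}, y0 e^{bt})
    return x0 * math.exp(a * t), y0 * math.exp(b * t)

# An initial condition on the x axis stays on the x axis for all time
for t in (-3.0, 0.0, 5.0):
    assert solution(2.0, 0.0, t)[1] == 0.0

# With a < 0 and b < 0, trajectories tend to the origin as t -> infinity
x, y = solution(2.0, 3.0, 40.0)
assert abs(x) < 1e-12 and abs(y) < 1e-12
```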
1.3 Diagonalization
Theorem: If the eigenvalues λ1, λ2, ..., λn of a matrix A are real and distinct, then any set of corresponding eigenvectors {v1, v2, ..., vn} forms a basis of R^n. The matrix P = [v1 v2 ... vn] is invertible, and
P^{-1} A P = diag(λ1, ..., λn)
This theorem can be used to reduce the linear system ẋ = Ax to an uncoupled linear system. To do so, we first define the change of coordinates x = P y. So we have
ẏ = P^{-1} ẋ = P^{-1} A x = P^{-1} A P y
=⇒ ẏ = diag(λ1, ..., λn) y
=⇒ y(t) = diag(e^{λ1 t}, ..., e^{λn t}) y(0)
=⇒ P^{-1} x(t) = diag(e^{λ1 t}, ..., e^{λn t}) P^{-1} x(0)
=⇒ x(t) = P diag(e^{λ1 t}, ..., e^{λn t}) P^{-1} x(0)
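This recipe can be sketched numerically; the matrix A below is a hypothetical example with real distinct eigenvalues, and np.linalg.eig supplies the λj and the eigenvector matrix P.

```python
import numpy as np

# Numerical sketch of x(t) = P diag(e^{lambda_j t}) P^{-1} x(0) for a
# hypothetical matrix A with real distinct eigenvalues.
A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
lam, P = np.linalg.eig(A)        # eigenvalues lambda_j and P = [v1 v2]

def x(t, x0):
    return P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P) @ x0

x0 = np.array([1.0, 1.0])
t, h = 0.3, 1e-6
# the formula satisfies x' = A x (checked with a central difference)
xdot = (x(t + h, x0) - x(t - h, x0)) / (2 * h)
assert np.allclose(xdot, A @ x(t, x0), atol=1e-4)
# and it matches the initial condition at t = 0
assert np.allclose(x(0.0, x0), x0)
```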
E S = span{v1 , . . . , vk }
E U = span{vk+1 , . . . , vn }
If we have pure imaginary eigenvalues, then we also get a center subspace, namely E^C.
Some Properties:
• ||A|| ≥ 0 ; ||A|| = 0 ⇐⇒ A = 0.
• ||T^{-1}|| ≥ 1/||T||
induction, we can show that if k elements of A are non-zero, then ||Ax|| > 0 for some x. Hence if ||A|| = 0, then A = 0.
∴ ||A|| ≥ 0 and ||A|| = 0 ⇐⇒ A = 0. ■
||λA|| = max_{||x|| ≤ 1} ||λAx|| = max_{||x|| ≤ 1} |λ| · ||Ax|| = |λ| max_{||x|| ≤ 1} ||Ax|| = |λ| · ||A|| ;  λ ∈ R ■
Again,
||A + B|| = max_{||x|| ≤ 1} ||(A + B)x|| ≤ max_{||x|| ≤ 1} (||Ax|| + ||Bx||) ≤ max_{||x|| ≤ 1} ||Ax|| + max_{||x|| ≤ 1} ||Bx|| = ||A|| + ||B|| ■
Again,
||A|| = max_{x ∈ R^n \ {0}} ||Ax|| / ||x|| =⇒ ||Ax|| / ||x|| ≤ ||A|| =⇒ ||Ax|| ≤ ||A|| · ||x|| ■
Moreover,
||AB|| = max_{||x|| ≤ 1} ||ABx|| ≤ ||A|| max_{||x|| ≤ 1} ||Bx|| = ||A|| · ||B|| ■
We also observe
||Ak || ≤ ||A|| · ||Ak−1 || ≤ · · · ≤ ||A||k ■
And lastly,
1 = ||T T^{-1}|| ≤ ||T|| · ||T^{-1}|| =⇒ ||T^{-1}|| ≥ 1/||T|| ■
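These properties can be spot-checked numerically; the sketch below uses the spectral norm ||A||₂ (np.linalg.norm with ord 2), which is the operator norm induced by the Euclidean norm, on hypothetical random matrices.

```python
import numpy as np

# Spot-check the norm properties proved above with the spectral norm
# ||A|| = max_{||x|| <= 1} ||Ax||, computed by np.linalg.norm(., 2).
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

nA = np.linalg.norm(A, 2)
nB = np.linalg.norm(B, 2)

assert nA >= 0                                               # ||A|| >= 0
assert abs(np.linalg.norm(2.5 * A, 2) - 2.5 * nA) < 1e-9     # ||cA|| = |c| ||A||
assert np.linalg.norm(A + B, 2) <= nA + nB + 1e-9            # triangle inequality
assert np.linalg.norm(A @ B, 2) <= nA * nB + 1e-9            # submultiplicative
x = rng.standard_normal(3)
assert np.linalg.norm(A @ x) <= nA * np.linalg.norm(x) + 1e-9
T = A + 5 * np.eye(3)                                        # shift so T is invertible
assert np.linalg.norm(np.linalg.inv(T), 2) >= 1 / np.linalg.norm(T, 2) - 1e-9
```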
Proof: Say the solution obtained is given by ϕ(t, x0) = e^{At} x0. For a fixed t, we take the matrix norm to be defined analogously to the L² norm. We also define δ := ε / ||e^{At}||.
Now for ||y0 − x0|| < δ, we have ||ϕ(t, y0) − ϕ(t, x0)|| ≤ ||e^{At}|| · ||y0 − x0|| < ε ■
Therefore
e^{S+T} = Σ_{n=0}^∞ (S + T)^n / n! = Σ_{n=0}^∞ (1/n!) Σ_{k+j=n} (n! / (k! j!)) S^k T^j = ( Σ_{k=0}^∞ S^k / k! ) · ( Σ_{j=0}^∞ T^j / j! ) = e^S e^T ■
Theorem 3: If A = [a, −b; b, a], then e^A = e^a [cos b, −sin b; sin b, cos b].
Proof: Write z = a + ib. By induction, A^k = [Re(z^k), −Im(z^k); Im(z^k), Re(z^k)]. Now we have
e^A = Σ_{k=0}^∞ A^k / k! = [ Re(Σ_k z^k/k!), −Im(Σ_k z^k/k!) ; Im(Σ_k z^k/k!), Re(Σ_k z^k/k!) ] = [ Re(e^z), −Im(e^z) ; Im(e^z), Re(e^z) ]
Now e^z = e^{a+ib} = e^a (cos b + i sin b), so we have Re(e^z) = e^a cos b and Im(e^z) = e^a sin b.
∴ e^A = e^a [cos b, −sin b; sin b, cos b] ■
Theorem 4: If A = [a, b; 0, a], then e^A = e^a [1, b; 0, 1].
Proof: A = aI + [0, b; 0, 0] = aI + B. Clearly aI and B commute. Moreover, B^k = 0 ∀ k ≥ 2 =⇒ e^B = I + B. So we can hereby conclude
e^A = e^{aI + B} = e^{aI} e^B = e^a [1, b; 0, 1] ■
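Both theorems are easy to verify against a truncated Taylor series for e^A; the values a, b below are hypothetical, and expm_series is a naive helper adequate for matrices of small norm.

```python
import numpy as np

def expm_series(A, terms=60):
    # Truncated Taylor series e^A = sum_k A^k / k!  (fine for small ||A||)
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

a, b = 0.5, 1.2
# Theorem 3: rotation-scaling block
A3 = np.array([[a, -b], [b, a]])
expected3 = np.exp(a) * np.array([[np.cos(b), -np.sin(b)],
                                  [np.sin(b),  np.cos(b)]])
assert np.allclose(expm_series(A3), expected3)

# Theorem 4: Jordan-type block
A4 = np.array([[a, b], [0.0, a]])
expected4 = np.exp(a) * np.array([[1.0, b], [0.0, 1.0]])
assert np.allclose(expm_series(A4), expected4)
```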
Now,
e^T(v) = ( Σ_{k=0}^∞ T^k / k! )(v) = Σ_{k=0}^∞ T^k v / k!
We know
T^k v / k! ∈ E  ∀ k ∈ N ∪ {0}
and E, being a subspace of a finite-dimensional space, is closed, so the limit of the partial sums also lies in E. These together conclude
e^T(v) ∈ E =⇒ e^T(E) ⊆ E ■
Here, our aim is to establish the fact that for x0 ∈ Rn , the initial value problem
ẋ = Ax
x(0) = x0
has a unique solution ∀ t ∈ R, which is given by
x(t) = e^{At} x0
Note: Here, we can place the limit inside the summation because |h| ≤ 1
ẋ = Ax ,  x ∈ R²
ẋ = A(av) = aλv
The derivative is a multiple of v and hence points along the line determined by v. As λ > 0, the derivative points in the direction of v when a is positive and in the opposite direction when a is negative.
We consider A = [1, 1; 0, 2] and we draw the vector field and a couple of solutions (go to the next page). Notice that the picture looks like a source with arrows coming out from the origin. Hence we call this type of picture a source or sometimes an unstable node.
If A = [−1, −1; 0, −2], then both the eigenvalues are negative. We call this kind of picture a sink or sometimes a stable node.
If A = [1, 1; 0, −2], then one eigenvalue is positive and the other is negative. Then we reverse the arrows on the line corresponding to the negative eigenvalue in Figure 2. This is known as a saddle.
We can take any linear combination of them to get other solutions; which one we take depends on the initial conditions. Now note that the real part is a parametric equation for an ellipse, and the same holds for the imaginary part and in fact for any linear combination of the two.
This is what happens in general when the eigenvalues are purely imaginary. So when the
eigenvalues are purely imaginary, we get ellipses for the solutions. This type of picture is
sometimes called a center.
Now suppose the complex eigenvalues have a positive real part. For example, let A = [1, 1; −4, 1]. We take the eigenvalue 1 + 2i and its eigenvector (1, 2i)ᵀ, and find that the real and imaginary parts of v e^{(1+2i)t} are
Re[ (1, 2i)ᵀ e^{(1+2i)t} ] = e^t (cos 2t, −2 sin 2t)ᵀ ,   Im[ (1, 2i)ᵀ e^{(1+2i)t} ] = e^t (sin 2t, 2 cos 2t)ᵀ
Note the e^t in front of the solutions. This means that the solutions grow in magnitude while spinning around the origin. Hence we get a spiral source.
Finally, suppose the complex eigenvalues have a negative real part. Here we get an e^{−t} in front of the solution. This means that the solutions shrink in magnitude while spinning around the origin. Hence we get a spiral sink.
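The classification just described can be packaged as a small sketch; classify below is a hypothetical helper covering only the generic 2×2 cases, applied to the example matrices from this section.

```python
import numpy as np

def classify(A):
    # Rough classification of the origin for x' = Ax (generic 2x2 cases only)
    lam = np.linalg.eigvals(A)
    re = lam.real
    if np.all(np.abs(lam.imag) > 1e-12):          # complex conjugate pair
        if np.all(np.abs(re) < 1e-12):
            return "center"
        return "spiral source" if re[0] > 0 else "spiral sink"
    if re[0] * re[1] < 0:
        return "saddle"
    return "source (unstable node)" if re[0] > 0 else "sink (stable node)"

# the examples discussed in this section
assert classify(np.array([[1.0, 1.0], [0.0, 2.0]])) == "source (unstable node)"
assert classify(np.array([[-1.0, -1.0], [0.0, -2.0]])) == "sink (stable node)"
assert classify(np.array([[1.0, 1.0], [0.0, -2.0]])) == "saddle"
assert classify(np.array([[1.0, 1.0], [-4.0, 1.0]])) == "spiral source"
assert classify(np.array([[0.0, 1.0], [-4.0, 0.0]])) == "center"
```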
If A ∈ GL_{2n}(R) and has complex eigenvalues, they occur as conjugate pairs. The following theorem gives us insight into this.
Proof: If V is a real vector space, its complexification V^C is the complex vector space consisting of elements x + iy where x, y ∈ V. If T : V → W is linear, its complexification T^C : V^C → W^C is defined by
T^C(x + iy) = Tx + iTy
If we use P = [u1 v1 . . . un vn], then we have
P^{-1} A P = diag{ [aj, bj ; −bj, aj] }
ẋ = Ax ,  x(0) = x0
as
x(t) = P diag{ e^{aj t} [cos bj t, −sin bj t ; sin bj t, cos bj t] } P^{-1} x0
Till now, we have only dealt with those systems which have distinct eigenvalues. Now,
we want to solve the system where A has multiple eigenvalues.
Definition: A non-zero vector v is called a generalized eigenvector of A corresponding to an eigenvalue λ if
(A − λI)^k v = 0
for some positive integer k.
Theorem: If T ∈ L(V) has real eigenvalues, then there is only one way of writing T as S + N, where S is diagonalizable, N is nilpotent, and SN = NS.
Note that S and N both commute with S and N; hence both of them commute with T = S + N as well. So Ek is invariant under S and N. Now set Sk = λk I ∈ L(Ek) and Nk = Tk − Sk. If we can show S|Ek = Sk, it then follows that N|Ek = Nk, and thus we can show uniqueness.
It is enough to show S|Ek − Sk = 0.
Now, it is given that S is diagonalizable, so S|Ek is diagonalizable too, and then S|Ek − λk I is also diagonalizable.
If λ is an eigenvalue with multiplicity n, then the solution of the initial value problem is
x(t) = e^{λt} [ I + Nt + · · · + N^k t^k / k! ] x0
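The terminating series makes this formula easy to test; the sketch below uses a hypothetical 3×3 matrix with the single eigenvalue λ = 2 of multiplicity 3 and checks the formula against a finite-difference derivative.

```python
import numpy as np

# Sketch of the formula above for a hypothetical matrix A with a single
# eigenvalue lambda = 2 of multiplicity 3, so N = A - 2I is nilpotent.
lam = 2.0
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])
N = A - lam * np.eye(3)
assert np.allclose(np.linalg.matrix_power(N, 3), 0)   # N^3 = 0

def x(t, x0):
    # x(t) = e^{lambda t} (I + N t + N^2 t^2 / 2!) x0 -- the series terminates
    poly = np.eye(3) + N * t + (N @ N) * t ** 2 / 2
    return np.exp(lam * t) * (poly @ x0)

x0 = np.array([1.0, -1.0, 2.0])
t, h = 0.4, 1e-6
# the formula satisfies x' = A x (checked with a central difference)
xdot = (x(t + h, x0) - x(t - h, x0)) / (2 * h)
assert np.allclose(xdot, A @ x(t, x0), atol=1e-3)
```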
In the light of this theorem, we can restate the theorem discussed in the previous section in a newly tailored way.
Say a matrix A has generalized eigenvalues λj = aj + i bj and generalized eigenvectors vj = uj + i wj. Then the stable, unstable and center subspaces are given by
E^S = Span{ uj , wj | aj < 0 }
E^U = Span{ uj , wj | aj > 0 }
E^C = Span{ uj , wj | aj = 0 }
Solutions in E^S tend to 0 as t → ∞, and solutions in E^U tend to 0 as t → −∞.
The set of mappings eAt : Rn → Rn may be regarded as the movement of points x0 ∈ Rn
along the trajectories.
Hyperbolic flow: If all eigenvalues of A have non-zero real parts, then the flow e^{At} : R^n → R^n is called a hyperbolic flow, and the corresponding linear system is known as a hyperbolic linear system.
Now,
e^{At} x0 = Σ_{k=1}^{ns} ck e^{At} Vk
Since A^k Vj ∈ E^S for every k, we get e^{At} Vj ∈ E^S, and hence e^{At} x0 ∈ E^S ∀ t ∈ R.
Sink (or Source): If all eigenvalues have negative (or positive) real parts, then the origin is known as a sink (or source) of the linear system.
Proof: Here we use the fact that any solution of the linear system is a linear combination of functions of the form t^k e^{at} cos bt or t^k e^{at} sin bt.
Say one of the eigenvalues has a positive real part. For an initial condition x0 ≠ 0 along the corresponding eigendirection, lim_{t→∞} e^{At} x0 = ∞, contradicting (a). If one of the eigenvalues has a zero real part, then the corresponding solution is of the form t^k cos bt or t^k sin bt, and again clearly lim_{t→∞} e^{At} x0 ≠ 0 for such x0. So we can say (a) =⇒ (b). ■
As sin and cos are periodic functions, for eigenvalues with negative real parts we can bound the solutions as described in (c). So (b) =⇒ (c). ■
Using the squeeze theorem on the relation obtained in (c) and taking t → ∞, we get lim_{t→∞} e^{At} x0 = 0 ∀ x0 ∈ R^n, and the second inequality in part (c) gives, for x0 ≠ 0, lim_{t→−∞} |e^{At} x0| = ∞. Hence (c) =⇒ (a). ■
In a similar fashion, we can devise another theorem, with a similar proof.
ẋ = Ax + b(t)
Once we find a fundamental matrix solution for the homogeneous system, we can find the solution to the corresponding nonhomogeneous system.
Theorem: If Φ(t) is a fundamental matrix solution, then the solution of the nonhomogeneous system with the initial condition x(0) = x0 is unique, and is given by
x(t) = Φ(t) Φ^{-1}(0) x0 + ∫_0^t Φ(t) Φ^{-1}(τ) b(τ) dτ
∴ ẋ = Ax(t) + b(t) ■
With Φ(t) = e^{At}, the solution of the nonhomogeneous linear system looks like
x(t) = e^{At} x0 + e^{At} ∫_0^t e^{−Aτ} b(τ) dτ
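A one-dimensional sketch of this formula, with the hypothetical choices a = −1 and b(t) = sin t (so Φ(t) = e^{at}), can be checked against the ODE by a finite difference; the integral is computed with a simple trapezoid rule.

```python
import math

# One-dimensional sketch of variation of parameters: x' = a x + b(t),
# x(t) = e^{at} x0 + e^{at} * integral_0^t e^{-as} b(s) ds.
a = -1.0
x0 = 2.0

def b(t):
    return math.sin(t)

def x(t, n=4000):
    # trapezoid rule for the integral over [0, t]
    h = t / n
    vals = [math.exp(-a * (k * h)) * b(k * h) for k in range(n + 1)]
    integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
    return math.exp(a * t) * x0 + math.exp(a * t) * integral

# the formula satisfies x' = a x + b(t) at a sample time
t, h = 1.5, 1e-4
xdot = (x(t + h) - x(t - h)) / (2 * h)
assert abs(xdot - (a * x(t) + b(t))) < 1e-4
```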
As On is open and ηi is continuous, we can find ϵ small such that ηi([ai − ϵ, ai + ϵ]) ⊆ On. Evidently, ηi is differentiable and (Dηi) = [0, . . . , 1, . . . , 0]ᵀ = eiᵀ over [ai − ϵ, ai + ϵ]. Now, by the definition of partial derivatives, D(f ◦ ηi)(ai) = fxi(a).
Again, by the chain rule, as f is differentiable at a, D(f ◦ ηi)(ai) = fxi(a) exists, and
As the index i was arbitrary to begin with, this completes the proof. ■
Continuity: Suppose V1 and V2 are two normed linear spaces with respective norms ||·||1 and ||·||2. Then f : V1 → V2 is continuous at x0 ∈ V1 if ∀ ε > 0, ∃ δ > 0 such that x ∈ V1 and ||x − x0||1 < δ implies ||f(x) − f(x0)||2 < ε. f is said to be continuous on E ⊆ V1 if it is continuous at all points of E, and we write f ∈ C(E).
In this section, our primary focus will revolve around Picard's classical method of successive approximations. We will establish the existence, uniqueness, continuity and differentiability of the solution of the initial value problem, for given initial conditions and parameters, under the hypothesis that f ∈ C¹(E).
Proof: Since E is open, ∃ε > 0 for given x0 ∈ E, such that Nε (x0 ) ⊂ E. Now
we define
K = max_{||x − x0|| ≤ ε/2} ||Df(x)||
Proof: Since f ∈ C¹(E), it follows from the lemma proven above that ∃ ε > 0 such that Nε(x0) ⊂ E and a constant K > 0 such that ∀ x, y ∈ Nε(x0),
|f(x) − f(y)| ≤ K |x − y|
We set b = ε/2. Then the continuous function f(x) is bounded on the compact set
N0 = {x ∈ R^n : |x − x0| ≤ b}
Let
M = max_{x ∈ N0} |f(x)|
It certainly follows that f(uk(t)) is defined and continuous on [−a, a], and therefore that
uk+1(t) = x0 + ∫_0^t f(uk(s)) ds
is defined and continuous on [−a, a], and satisfies
|uk+1(t) − x0| ≤ ∫_0^t |f(uk(s))| ds ≤ M a ,  ∀ t ∈ [−a, a]
Thus, by choosing 0 < a < b/M, it follows by induction that uk(t) is defined and continuous.
Now, since uk(t) ∈ N0 ∀ t ∈ [−a, a] and ∀ k ∈ N ∪ {0} := N0, it follows from the Lipschitz condition satisfied by f that ∀ t ∈ [−a, a]
|u2(t) − u1(t)| ≤ ∫_0^t |f(u1(s)) − f(u0(s))| ds
≤ K ∫_0^t |u1(s) − u0(s)| ds
≤ K a max_{[−a,a]} |u1(t) − x0|
≤ K a b
And then, assuming that |uj(t) − uj−1(t)| ≤ (Ka)^{j−1} b ∀ t ∈ [−a, a], we get
|uj+1(t) − uj(t)| ≤ ∫_0^t |f(uj(s)) − f(uj−1(s))| ds ≤ K ∫_0^t |uj(s) − uj−1(s)| ds ≤ (Ka)^j b.
Writing α = Ka < 1, for m > k ≥ N and t ∈ [−a, a],
|um(t) − uk(t)| ≤ Σ_{j=k}^{m−1} |uj+1(t) − uj(t)|
≤ Σ_{j=N}^{∞} |uj+1(t) − uj(t)|
≤ Σ_{j=N}^{∞} α^j b = (α^N / (1 − α)) b
This last quantity approaches zero as N → ∞. Therefore, ∀ ε > 0 there exists an N such that m, k ≥ N implies ||um − uk|| < ε;
i.e., {uk } is a Cauchy sequence of continuous functions in C([−a, a]). It follows from the
above theorem that uk (t) converges to a continuous function u(t) uniformly ∀ t ∈ [−a, a]
as k → ∞. And then taking the limit of both sides of equation defining the successive
approximations, we see that the continuous function
u(t) = x0 + ∫_0^t f(u(s)) ds
∀ t ∈ [−a, a]. We have used the fact that the integral and the limit can be interchanged since the convergence is uniform. By the fundamental theorem of calculus, the right-hand side is differentiable and
u′(t) = f(u(t))
∀ t ∈ [−a, a]. Furthermore, u(0) = x0, and from (4) it follows that u(t) ∈ Nε(x0) ⊂ E ∀ t ∈ [−a, a]. Thus u(t) is a solution of the initial value problem on [−a, a]. It remains to show that it is the only solution.
Let u(t) and v(t) be two solutions of the initial value problem on [−a, a]. Then the continuous function |u(t) − v(t)| achieves its maximum at some point t1 ∈ [−a, a]. It follows that
||u − v|| = |u(t1) − v(t1)| ≤ | ∫_0^{t1} |f(u(s)) − f(v(s))| ds | ≤ K a ||u − v||
But Ka < 1, and this last inequality can only be satisfied if ||u − v|| = 0. Thus, u(t) = v(t) on [−a, a]. We have shown that the successive approximations converge uniformly to a unique solution of the initial value problem on the interval [−a, a], where a is any number satisfying 0 < a < min{ b/M, 1/K }.
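The successive approximations themselves are easy to carry out numerically; the sketch below runs Picard iteration for the hypothetical problem ẋ = x, x(0) = 1 on a grid over [0, 1], where the true solution is e^t.

```python
import math

# Picard iteration u_{k+1}(t) = x0 + integral_0^t f(u_k(s)) ds for the
# hypothetical problem x' = x, x(0) = 1, with a cumulative trapezoid rule.
n = 2000
ts = [i / n for i in range(n + 1)]
x0 = 1.0

def picard_step(u):
    # cumulative trapezoid integral of f(u) = u along the grid
    out = [x0]
    acc = 0.0
    for i in range(1, len(u)):
        acc += (u[i] + u[i - 1]) * (ts[i] - ts[i - 1]) / 2
        out.append(x0 + acc)
    return out

u = [x0] * (n + 1)        # u_0(t) = x0
for _ in range(25):
    u = picard_step(u)

# the iterates converge uniformly to the true solution e^t
err = max(abs(u[i] - math.exp(ts[i])) for i in range(n + 1))
assert err < 1e-4
```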
Remark: Exactly the same method of proof shows that the initial value problem
ẋ = f(x) ,  x(t0) = x0
has a unique solution on an interval [t0 − a, t0 + a].
In this section we investigate the dependence on the initial condition y of the solution of the initial value problem
ẋ = f (x)
x(0) = y
Gronwall's Lemma: Suppose that g(t) is a continuous real-valued function that satisfies g(t) ≥ 0 and
g(t) ≤ C + K ∫_0^t g(s) ds
∀ t ∈ [0, a], where C and K are positive constants. It then follows that ∀ t ∈ [0, a],
g(t) ≤ C e^{Kt}
Proof: Let G(t) = C + K ∫_0^t g(s) ds for t ∈ [0, a]. Then G(t) ≥ g(t) and G(t) > 0 ∀ t ∈ [0, a]. It follows from the fundamental theorem of calculus that
G′(t) = K g(t)
and therefore, since g(t) ≤ G(t) and G(t) > 0,
d/dt (log G(t)) = G′(t)/G(t) = K g(t)/G(t) ≤ K
Integrating from 0 to t gives log G(t) − log C ≤ Kt, or
G(t) ≤ C e^{Kt}
and hence g(t) ≤ G(t) ≤ C e^{Kt} ∀ t ∈ [0, a]. ■
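The lemma can be illustrated numerically; g(t) = C e^{Kt/2} below is a hypothetical sample function (with hypothetical constants C, K) that satisfies the integral inequality, and both the hypothesis and the conclusion are checked on a grid.

```python
import math

# Numerical illustration of Gronwall's Lemma with hypothetical constants
# C, K and the sample function g(t) = C e^{Kt/2}.
C, K = 1.5, 2.0

def g(t):
    return C * math.exp(K * t / 2)

def integral_g(t, n=2000):
    # trapezoid rule for integral_0^t g(s) ds
    h = t / n
    vals = [g(k * h) for k in range(n + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

for i in range(11):
    t = i / 10.0
    assert g(t) <= C + K * integral_g(t) + 1e-6   # hypothesis of the lemma
    assert g(t) <= C * math.exp(K * t) + 1e-12    # conclusion g <= C e^{Kt}
```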
ẋ = f (x)
x(0) = y
has a unique solution u(t, y) with u ∈ C¹(G), where G = [−a, a] × Nδ(x0) ⊂ R^{n+1}; furthermore, for each y ∈ Nδ(x0), u(t, y) is a twice continuously differentiable function of t for t ∈ [−a, a].
Proof: Since f ∈ C 1 (E), it follows from the lemma in Section 2.2 that there is an ε-
neighborhood Nε (x0 ) ⊂ E and a constant K > 0 such that ∀ x and y ∈ Nε (x0 ),
u0(t, y) = y
uk+1(t, y) = y + ∫_0^t f(uk(s, y)) ds
Assume that uk(t, y) is defined and continuous ∀ (t, y) ∈ G = [−a, a] × Nδ(x0) and that
∀ y ∈ Nδ (x0 )
where k · k denotes the maximum over all t ∈ [−a, a]. This is clearly satisfied for k =
0. And assuming this is true for k, it follows that uk+1 (t, y), defined by the above
successive approximations, is continuous on G. This follows since a continuous function
of a continuous function is continuous and since the above integral of the continuous
function f (uk (s, y)) is continuous in t by the fundamental theorem of calculus and also
in y. We also have
||uk+1(t, y) − y|| ≤ ∫_0^t |f(uk(s, y))| ds ≤ M0 a
for t ∈ [−a, a] and y ∈ Nδ (x0 ) ⊂ N0 . Thus, for t ∈ [−a, a] and y ∈ Nδ (x0 ) with δ = ε/4,
we have
provided M0 a < ε/4, i.e., provided a < ε/ (4M0 ). Thus, the above induction hypothesis
holds ∀ k = 1, 2, 3, . . . and (t, y) ∈ G provided a < ε/ (4M0 ).
We next show that the successive approximations uk (t, y) converge uniformly to a con-
tinuous function u(t, y) ∀ (t, y) ∈ G as k → ∞. As in the proof of the fundamental
existence theorem,
for (t, y) ∈ G. And then it follows exactly as in the proof of the fundamental existence
theorem in Section 2.2 that
for (t, y) ∈ G and consequently that the successive approximations converge uniformly to
a continuous function u(t, y) for (t, y) ∈ G as k → ∞ provided a < 1/K. Furthermore,
the function u(t, y) satisfies
Z t
u(t, y) = y + f (u(s, y))ds
0
for (t, y) ∈ G and also u(0, y) = y. And it follows from the inequality that u(t, y) ∈
Nε/2 (x0 ) ∀ (t, y) ∈ G. Thus, by the fundamental theorem of calculus and the chain rule,
it follows that
and that
|u(t, y0 + h) − u(t, y0)| ≤ |h| + ∫_0^t |f(u(s, y0 + h)) − f(u(s, y0))| ds
∀ t ∈ [−a, a]. We next define Φ (t, y0 ) to be the fundamental matrix solution of the initial
value problem
Φ̇ = A (t, y0 ) Φ
Φ (0, y0 ) = I
with A (t, y0 ) = Df (u (t, y0 )) and I the n × n identity matrix. The existence and con-
tinuity of Φ (t, y0 ) on some interval [−a, a] follow from the method of successive approx-
imations. It then follows from the initial value problems for u (t, y0 ), u (t, y0 + h) and
Φ (t, y0 ) and Taylor’s Theorem,
|u(t, y0) − u(t, y0 + h) + Φ(t, y0) h| ≤ ∫_0^t | f(u(s, y0)) − f(u(s, y0 + h)) + Df(u(s, y0)) Φ(s, y0) h | ds
≤ ∫_0^t ||Df(u(s, y0))|| · |u(s, y0) − u(s, y0 + h) + Φ(s, y0) h| ds + ∫_0^t · · · ds
It then follows from the conclusion obtained by Gronwall's Lemma and the inequality deduced above that ∀ t ∈ [−a, a], y0 ∈ Nδ/2(x0) and |h| < min(δ0, δ/2) we have
g(t) ≤ M1 ∫_0^t g(s) ds + ε0 |h| a e^{Ka}
Hence, it follows from Gronwall’s Lemma that for any given ε0 > 0
∂u/∂y (t, y0) = Φ(t, y0)
∀ t ∈ [−a, a] where Φ (t, y0 ) is the fundamental matrix solution of the initial value problem
(5) which is continuous in t and in y0 ∀ t ∈ [−a, a] and y0 ∈ Nδ/2 (x0 ). This completes
the proof of the theorem.
Some Remarks
1. A similar proof shows that if f ∈ C r (E) then the solution u(t, y) of the initial
value problem is in C r (G) where G is defined as in the above theorem. And
if f (x) is a (real) analytic function for x ∈ E then u(t, y) is analytic in the
interior of G.
Φ(t, x0) = ∂u/∂x0 (t, x0)
satisfies
Φ̇ = Df (x0 ) Φ
Φ (0, x0 ) = I
3. It follows from the continuity of the solution u(t, y) of the initial value problem
that for each t ∈ [−a, a]
It follows from the inequality that this limit is uniform ∀ t ∈ [−a, a].
Φ(t, y) = ∂u/∂y (t, y)
for t ∈ [−a, a] and y ∈ Nδ (x0 ) if and only if Φ(t, y) is the fundamental matrix solution of
Φ̇ = Df [u(t, y)]Φ
Φ(0, y) = I
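For a scalar example the variational equation can be checked in closed form; the sketch below uses the hypothetical problem ẋ = x², x(0) = y, whose solution and whose y-derivative are both explicit.

```python
# Scalar sketch of the variational equation: for x' = x^2, x(0) = y, the
# solution is u(t, y) = y / (1 - t y) (valid while t y < 1), and
# Phi(t, y) = du/dy should satisfy Phi' = Df(u) Phi = 2 u Phi, Phi(0) = 1.

def u(t, y):
    return y / (1.0 - t * y)

def Phi(t, y):
    return 1.0 / (1.0 - t * y) ** 2   # closed form of du/dy

t, y, h = 0.5, 0.8, 1e-6

# du/dy by a central difference matches Phi
dudy = (u(t, y + h) - u(t, y - h)) / (2 * h)
assert abs(dudy - Phi(t, y)) < 1e-6
assert Phi(0.0, y) == 1.0

# Phi satisfies the variational equation Phi' = 2 u Phi
Phidot = (Phi(t + h, y) - Phi(t - h, y)) / (2 * h)
assert abs(Phidot - 2.0 * u(t, y) * Phi(t, y)) < 1e-5
```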
Theorem: Let E be an open subset of Rn+m containing the point (x0 , µ0 ) where x0 ∈ Rn
and µ0 ∈ Rm and assume that f ∈ C 1 (E). It then follows that there exists an a > 0 and
a δ > 0 such that ∀ y ∈ Nδ (x0 ) and µ ∈ Nδ (µ0 ), the initial value problem
ẋ = f (x, µ)
x(0) = y
has a unique solution u(t, y, µ) with u ∈ C¹(G), where G = [−a, a] × Nδ(x0) × Nδ(µ0).
This theorem follows immediately from the previous theorem by replacing the vectors
x0 , x, ẋ and y by the vectors (x0 , µ0 ) , (x, µ), (ẋ, 0) and (y, µ) or it can be proved directly
using Gronwall’s Lemma and the method of successive approximations.
ẋ = f (x)
x(0) = x0
has a unique solution defined on some interval (−a, a). In this section we show that initial
value problem has a unique solution x(t) defined on a maximal interval of existence (α, β).
Furthermore, if β < ∞ and if the limit
x1 = lim_{t→β⁻} x(t)
exists then x1 ∈ ∂E, the boundary of E. The boundary of the open set E, ∂E = Ē \ E
where Ē denotes the closure of E. On the other hand, if the above limit exists and x1 ∈ E,
then β = ∞, f (x1 ) = 0 and x1 is an equilibrium point of the initial value problem. Now
we look into the following lemmas and theorems to understand the underlying concepts
in a greater detail.
Proof: Since u1 (t) and u2 (t) are solutions of the initial value problem on I1 and I2
respectively, it follows from Definition 1 in Section 2.2 that 0 ∈ I1 ∩I2 . And if I is an open
interval containing 0 and contained in I1 ∩ I2 , then the fundamental existence-uniqueness
theorem in Section 2.2 implies that u1 (t) = u2 (t) on some open interval (−a, a) ⊂ I. Let
I ∗ be the union of all such open intervals contained in I. Then I ∗ is the largest open
interval contained in I on which u1 (t) = u2 (t). Clearly I ∗ ⊂ I and if I ∗ is a proper subset
of I, then one of the endpoints t0 of I ∗ is contained in I ⊂ I1 ∩ I2 . It follows from the
continuity of u1(t) and u2(t) on I that lim_{t→t0} u1(t) = lim_{t→t0} u2(t).
Call this common limit u0. It then follows from the uniqueness of solutions that u1(t) =
u2 (t) on some interval I0 = (t0 − a, t0 + a) ⊂ I. Thus, u1 (t) = u2 (t) on the interval
I ∗ ∪ I0 ⊂ I and I ∗ is a proper subset of I ∗ ∪ I0 . But this contradicts the fact that I ∗ is
the largest open interval contained in I on which u1 (t) = u2 (t). Therefore, I ∗ = I and
we have u1 (t) = u2 (t) ∀ t ∈ I.
Theorem: Let E be an open subset of Rn and assume that f ∈ C 1 (E). Then for
each point x0 ∈ E, there is a maximal interval J on which the initial value problem
has a unique solution, x(t); i.e., if the initial value problem has a solution y(t) on an
interval I then I ⊂ J and y(t) = x(t) ∀ t ∈ I. Furthermore, the maximal interval
J is open; i.e., J = (α, β).
Proof: By the fundamental existence-uniqueness theorem in Section 2.2, the initial value
problem has a unique solution on some open interval (−a, a). Let (α, β) be the union of all
open intervals I such that initial value problem has a solution on I. We define a function
x(t) on (α, β) as follows: Given t ∈ (α, β), t belongs to some open interval I such that
initial value problem has a solution u(t) on I; for this given t ∈ (α, β), define x(t) = u(t).
Then x(t) is a well-defined function of t since if t ∈ I1 ∩ I2 where I1 and I2 are any two
open intervals such that initial value problem has solutions u1 (t) and u2 (t) on I1 and I2
respectively, then by the lemma u1 (t) = u2 (t) on the open interval I1 ∩ I2 . Also, x(t) is a
solution of initial value problem on (α, β) since each point t ∈ (α, β) is contained in some
open interval I on which the initial value problem has a unique solution u(t) and since
x(t) agrees with u(t) on I. The fact that J is open follows from the fact that any solution
of initial value problem on an interval (α, β] can be uniquely continued to a solution on
an interval (α, β + a) with a > 0 as in the proof of Theorem 2 below.
Proof: Since f is continuous on the compact set K, there is a positive number M such
that |f (x)| ≤ M ∀ x ∈ K. Let x(t) be the solution of the initial value problem on
its maximal interval of existence (α, β) and assume that β < ∞ and that x(t) ∈ K ∀
t ∈ (α, β). We first show that limt→β − x(t) exists. If α < t1 < t2 < β then
|x(t1) − x(t2)| ≤ ∫_{t1}^{t2} |f(x(s))| ds ≤ M |t2 − t1|
Thus as t1 and t2 approach β from the left, |x (t2 ) − x (t1 )| → 0 which, by the Cauchy
criterion for convergence in Rn (i.e., the completeness of Rn ) implies that limt→β − x(t)
exists. Let x1 = limt→β − x(t). Then x1 ∈ K ⊂ E since K is compact. Next define the
function u(t) on (α, β] by
u(t) = { x(t)  for t ∈ (α, β) ;  x1  for t = β }
Then u(t) is continuous on (α, β] and satisfies
u(t) = x0 + ∫_0^t f(u(s)) ds
there, so by the fundamental theorem of calculus
u′(β) = f(u(β))
i.e., u(t) is a solution of the initial value problem on (α, β]. The function u(t) is called the
continuation of the solution x(t) to (α, β]. Since x1 ∈ E, it follows from the fundamen-
tal existence-uniqueness theorem in Section 2.2 that the initial value problem ẋ = f (x)
together with x(β) = x1 has a unique solution x1 (t) on some interval (β − a, β + a). By
the above lemma, x1 (t) = u(t) on (β − a, β) and x1 (β) = u(β) = x1 . So if we define
v(t) = { u(t)  for t ∈ (α, β] ;  x1(t)  for t ∈ [β, β + a) }
then v(t) is a solution of the initial value problem on (α, β + a). But this contradicts the
fact that (α, β) is the maximal interval of existence for
the initial value problem. Hence, if β < ∞, it follows that there exists a t ∈ (α, β) such
that x(t) ∉ K.
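A concrete instance of a finite maximal interval is ẋ = x², sketched below with the hypothetical initial condition x0 = 1, for which β = 1/x0 and the solution escapes every bounded set as t → β⁻.

```python
# Sketch: for x' = x^2 with x(0) = x0 > 0, the solution x(t) = x0/(1 - x0 t)
# exists only on the maximal interval (-oo, beta) with beta = 1/x0, and it
# leaves every bounded set as t -> beta from the left.
x0 = 1.0
beta = 1.0 / x0

def x(t):
    return x0 / (1.0 - x0 * t)

# x solves the ODE (checked by a central difference)
t, h = 0.5, 1e-6
assert abs((x(t + h) - x(t - h)) / (2 * h) - x(t) ** 2) < 1e-4

# the solution exceeds any bound M strictly before time beta
for M in (10.0, 1e3, 1e6):
    tM = beta - 1.0 / M        # with x0 = 1 this gives x(tM) = M
    assert tM < beta and abs(x(tM) - M) <= 1e-6 * M
```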
If (α, β) is the maximal interval of existence for the initial value problem then 0 ∈ (α, β)
and the intervals [0, β) and (α, 0] are called the right and left maximal intervals of existence
respectively. Essentially the same proof yields the following result.
u(t) = { x(t)  for t ∈ [0, β) ;  x1  for t = β }
is continuous on [0, β]. Let K be the image of the compact set [0, β] under the continuous
map u(t); i.e.,
Proof: This corollary is just the contrapositive of the statement of the aforementioned
theorem.
We next prove the following theorem which strengthens the result on uniform convergence
with respect to initial conditions.
ẋ = f (x)
x(0) = y (2)
ẋ = f (x, µ)
x(0) = y
and
Proof: Let M be the maximum value of the continuous function |f| on the compact set A. Suppose that f does not satisfy a Lipschitz condition on A. Then for every K > 0 we can find x, y ∈ A such that |f(y) − f(x)| > K|y − x|; in particular, for each n ∈ N there are points xn, yn ∈ A with
|f(yn) − f(xn)| > n |yn − xn|   (*)
By compactness of A we may assume, passing to subsequences, that xn → x∗ and yn → y∗ in A. Then
|y∗ − x∗| = lim_{n→∞} |yn − xn| ≤ lim_{n→∞} (1/n) |f(yn) − f(xn)| ≤ lim_{n→∞} 2M/n = 0
But for n ≥ K, this contradicts the above inequality (*), and this establishes the lemma.
|x(t, y) − x(t, x0)| ≤ |y − x0| + ∫_0^t | f(x(s, y)) − f(x(s, x0)) | ds
≤ |y − x0| + K ∫_0^t |x(s, y) − x(s, x0)| ds
and hence, by Gronwall's Lemma,
|x(t∗, y) − x(t∗, x0)| ≤ |y − x0| e^{K|t∗|} < δ e^{K(b−a)} < ε
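The resulting estimate can be illustrated with the hypothetical linear field f(x) = ax, for which the separation of nearby solutions is exactly |y − x0| e^{at} and hence obeys the Gronwall-type bound with K = |a|.

```python
import math

# Sketch of continuous dependence on initial conditions: for f(x) = a x
# (Lipschitz constant K = |a|) two solutions separate at most like
# |y - x0| e^{Kt}.
a = 1.3
K = abs(a)

def x(t, y):
    return y * math.exp(a * t)

x0, y = 1.0, 1.001
for i in range(21):
    t = i / 10.0
    gap = abs(x(t, y) - x(t, x0))
    assert gap <= abs(y - x0) * math.exp(K * t) + 1e-12
```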
ẋ = Ax
(i) ϕ0(x) = x
(ii) ϕs(ϕt(x)) = ϕs+t(x) ∀ s, t ∈ R
(iii) ϕ−t(ϕt(x)) = ϕt(ϕ−t(x)) = x ∀ t ∈ R
Observe that these properties follow either from the definitions, or from the properties we have already proved.
In this section, we define the flow, ϕt , of the nonlinear system
ẋ = f (x)
and show that it satisfies these same basic properties. In the following definition, we
denote the maximal interval of existence (α, β) of the solution ϕ (t, x0 ) of the initial value
problem
ẋ = f (x)
x(0) = x0
by I (x0 ) since the endpoints α and β of the maximal interval generally depend on x0 .
ϕt (x0 ) = ϕ (t, x0 )
is called the flow of the differential equation or the flow defined by the differential
equation; ϕt is also referred to as the flow of the vector field f (x).
If we think of the initial point x0 as being fixed and let I = I(x0), then the mapping ϕ(·, x0) : I → E defines a solution curve or trajectory of the differential equation through the point x0 ∈ E. As usual, the mapping ϕ(·, x0) is identified with its graph in I × E, and a trajectory is visualized as a motion along a curve Γ through the point x0 in the subset E of the phase space R^n; cf. Figure 1. On the other hand, if we think of the point x0 as varying throughout K ⊂ E, then the flow of the differential equation, ϕt : K → E, can be viewed as the motion of all the points in the set K.
If we think of the differential equation as describing the motion of a fluid, then a trajectory of the differential equation describes the motion of an individual particle in the fluid, while the flow of the differential equation describes the motion of the entire fluid.
We now show that the basic properties (i)–(iii) of linear flows are also satisfied by nonlinear flows. But first we extend Theorem 1 of Section 2.3, establishing that ϕ(t, x0) is a locally smooth function, to a global result. Using the same notation as in Definition 1, let us define the set Ω ⊂ R × E as
Ω = {(t, x0) ∈ R × E | t ∈ I(x0)}
Proof: If (t0, x0) ∈ Ω and t0 > 0, then according to the definition of the set Ω, the solution x(t) = ϕ(t, x0) of the initial value problem is defined on [0, t0]. Thus, as in the proof of Theorem 2 in Section 2.4, the solution x(t) can be extended to an interval [0, t0 + ε] for some ε > 0; i.e., ϕ(t, x0) is defined on the closed interval [t0 − ε, t0 + ε]. It then follows from Theorem 4 in Section 2.4 that there exists a neighborhood of x0, Nδ(x0), such that ϕ(t, y) is defined on [t0 − ε, t0 + ε] × Nδ(x0); i.e., (t0 − ε, t0 + ε) × Nδ(x0) ⊂ Ω. Therefore, Ω is open in R × E. It follows from Theorem 4 in Section 2.4 that ϕ ∈ C¹(G) where G = (t0 − ε, t0 + ε) × Nδ(x0). A similar proof holds for t0 ≤ 0, and since (t0, x0) is an arbitrary point in Ω, it follows that ϕ ∈ C¹(Ω).
Proof: Suppose that s > 0, t ∈ I (x0 ) and s ∈ I (ϕt (x0 )). Let the maximal interval
I (x0 ) = (α, β) and define the function x : (α, s + t] → E by
x(r) = { ϕ(r, x0)  if α < r ≤ t ;  ϕ(r − t, ϕt(x0))  if t ≤ r ≤ s + t }
Then x(r) is a solution of the initial value problem on (α, s + t]. Hence s + t ∈ I (x0 ) and
by uniqueness of solutions
Stability Analysis of ODEs 42
If s = 0, the statement of the theorem follows immediately. And if s < 0, then we define the function x : [s + t, β) → E by
x(r) = { ϕ(r, x0)  if t ≤ r < β ;  ϕ(r − t, ϕt(x0))  if s + t ≤ r ≤ t }
Then x(r) is a solution of the initial value problem on [s + t, β) and the last statement of
the theorem follows from the uniqueness of solutions as above.
Theorem: Under the hypotheses of the first theorem of this section, if (t, x0) ∈ Ω
then there exists a neighborhood U of x0 such that {t} × U ⊂ Ω. It then follows
that the set V = ϕt (U ) is open in E and that
Proof: If (t, x0) ∈ Ω then it follows as in the proof of Theorem 1 that there exists a neighborhood of x0, U = Nδ(x0), such that (t − ε, t + ε) × U ⊂ Ω; thus, {t} × U ⊂ Ω. For x ∈ U, let y = ϕt(x). Then −t ∈ I(y), since the function h(s) = ϕ(s + t, y) is a solution of the differential equation on [−t, 0] that satisfies h(−t) = y; i.e., ϕ−t is defined on the set V = ϕt(U). It then follows from the previous theorem that ϕ−t(ϕt(x)) = ϕ0(x) = x ∀ x ∈ U and that ϕt(ϕ−t(y)) = ϕ0(y) = y ∀ y ∈ V. It remains to prove that V is open. Let V∗ ⊃ V be the maximal subset of E on which ϕ−t is defined. V∗ is open because Ω is open, and ϕ−t : V∗ → E is continuous because ϕ is continuous. Therefore, the inverse image of the open set U under the continuous map ϕ−t, i.e., ϕt(U), is open in E. Thus, V is open in E.
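Properties (i)–(iii) can be checked for a concrete nonlinear flow; the sketch below uses the hypothetical logistic field f(x) = x(1 − x), whose flow on (0, 1) has a closed form.

```python
import math

# Sketch with the logistic field f(x) = x(1 - x), whose flow is known in
# closed form: phi_t(x) = x e^t / (1 - x + x e^t) for x in (0, 1).
def phi(t, x):
    return x * math.exp(t) / (1.0 - x + x * math.exp(t))

x0 = 0.2
# property (i): phi_0 is the identity
assert abs(phi(0.0, x0) - x0) < 1e-12
# property (ii): phi_s(phi_t(x)) = phi_{s+t}(x)
for s, t in ((0.3, 0.7), (-0.5, 1.2), (2.0, -0.4)):
    assert abs(phi(s, phi(t, x0)) - phi(s + t, x0)) < 1e-12
# property (iii): phi_{-t} inverts phi_t
assert abs(phi(-1.0, phi(1.0, x0)) - x0) < 1e-12
```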
Later we intend to show that the time along each trajectory of the differential equation can be rescaled, without affecting the phase portrait, so that ∀ x0 ∈ E, the solution ϕ(t, x0) of the initial value problem is defined ∀ t ∈ R; i.e., ∀ x0 ∈ E, I(x0) = (−∞, ∞). This rescaling avoids some of the complications found in stating the above theorems. Once this rescaling has been made, it follows that Ω = R × E, ϕ ∈ C¹(R × E), ϕt ∈ C¹(E) ∀ t ∈ R, and properties (i)–(iii) for the flow of the nonlinear system hold ∀ t ∈ R and x ∈ E, just as for the linear flow e^{At}. From now on, it will be assumed that this rescaling has been made, so that ∀ x0 ∈ E, ϕ(t, x0) is defined ∀ t ∈ R; i.e., we shall assume throughout the remainder of this chapter that the flow of the nonlinear system satisfies ϕt ∈ C¹(E) ∀ t ∈ R.
We have already shown that the stable, unstable and center subspaces of the linear
system ẋ = Ax are invariant under the linear flow ϕt = eAt. A similar result will be
established for the nonlinear flow ϕt of the nonlinear system.
2.6 Linearization
The first step in analyzing the nonlinear system
ẋ = f(x)
is to determine its equilibrium points and to describe the behavior of the system near
them. In the next two sections it is shown that the local behavior of the nonlinear
system near a hyperbolic equilibrium point x0 is qualitatively determined by the
behavior of the linear system
ẋ = Ax
with the matrix A = Df(x0), near the origin. The linear function Ax = Df(x0)x is
called the linear part of f at x0.
If the equilibrium point has been translated to the origin and f is sufficiently smooth,
Taylor's theorem gives
f(x) = Df(0)x + (1/2) D²f(0)(x, x) + · · ·
It follows that the linear function Df(0)x is a good first approximation to the nonlinear
function f(x) near x = 0, and it is reasonable to expect that the behavior of the nonlinear
system near the point x = 0 will be approximated by the behavior of its linearization
at x = 0. Later it will be shown that this is indeed the case if the matrix Df(0) has no
zero or pure imaginary eigenvalues.
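A minimal sketch of this check: approximate A = Df(0) by finite differences and inspect its eigenvalues. The planar field f below is a hypothetical example chosen for illustration, not one taken from these notes.

```python
import numpy as np

def f(x):
    # Hypothetical planar field with an equilibrium at the origin.
    x1, x2 = x
    return np.array([-x1 - x2**2, x2 + x1**2])

def jacobian(f, x0, h=1e-6):
    # Central-difference approximation of Df(x0).
    n = len(x0)
    J = np.zeros((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x0 + e) - f(x0 - e)) / (2 * h)
    return J

A = jacobian(f, np.zeros(2))
eigs = np.linalg.eigvals(A)
# Hyperbolic equilibrium: no eigenvalue with zero real part.
print(np.all(np.abs(eigs.real) > 1e-8))  # True: eigenvalues are -1 and 1
```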
Later we shall see that if x0 is a hyperbolic equilibrium point of the nonlinear system,
then the local behavior of the nonlinear system is topologically equivalent to the local
behavior of the linear system; i.e., there is a continuous one-to-one map of a neighborhood
of x0 onto an open set U containing the origin, H : Nε(x0) → U, which transforms the
nonlinear system into the linear system, maps trajectories of the nonlinear system in
Nε(x0) onto trajectories of the linear system in the open set U, and preserves the
orientation of the trajectories by time; i.e., H preserves the direction of the flow along
the trajectories.
The stable manifold theorem is one of the most important results in the local qualitative
theory of ordinary differential equations. The theorem shows that near a hyperbolic
equilibrium point x0 , the nonlinear system
ẋ = f (x)
has stable and unstable manifolds S and U tangent at x0 to the stable and unstable
subspaces E s and E u of the linearized system
ẋ = Ax
in the sense that
lim_{t→∞} ϕt(c) = x0 ∀ c ∈ S
and
lim_{t→−∞} ϕt(c) = x0 ∀ c ∈ U
We first illustrate these ideas with an example and then make them more precise by
proving the stable manifold theorem. It is assumed that the equilibrium point x0 is
located at the origin throughout the remainder of this section. If this is not the case, then
the equilibrium point x0 can be translated to the origin by the affine transformation of
coordinates x → x − x0 .
h = hα ◦ hβ⁻¹ : hβ(Uα ∩ Uβ) → hα(Uα ∩ Uβ)
The manifold M is said to be analytic if the maps h = hα ◦ hβ⁻¹ are analytic. The
pair (Uα, hα) is called a chart for the manifold M and the set of all charts is called an
atlas for M. The differentiable manifold M is called orientable if there is an atlas with
det D(hα ◦ hβ⁻¹)(x) > 0 ∀ α, β and x ∈ hβ(Uα ∩ Uβ).
lim_{t→∞} ϕt(x0) = 0 ∀ x0 ∈ S
and
lim_{t→−∞} ϕt(x0) = 0 ∀ x0 ∈ U
Before proving this theorem, we remark that if f ∈ C 1 (E) and f (0) = 0, then the system
can be written as
ẋ = Ax + F(x) (3)
where A = Df(0), F(x) = f(x) − Ax, F ∈ C1(E), F(0) = 0 and DF(0) = 0. This in turn
implies that ∀ ε > 0 there is a δ > 0 such that |x| ≤ δ and |y| ≤ δ imply that
|F(x) − F(y)| ≤ ε|x − y|
B = C⁻¹AC = [P 0; 0 Q]
where the eigenvalues λ1, . . . , λk of the k × k matrix P have negative real part and the
eigenvalues λk+1, . . . , λn of the (n − k) × (n − k) matrix Q have positive real part. We
can choose α > 0 sufficiently small that for j = 1, . . . , k,
Re(λj) < −α < 0
Letting y = C⁻¹x, the system then has the form
ẏ = By + G(y)
where G(y) = C⁻¹F(Cy) ∈ C1(Ẽ), Ẽ = C⁻¹(E), and G satisfies the Lipschitz-type
condition above.
It will be shown in the proof that there are n − k differentiable functions ψj(y1, . . . , yk)
such that the equations
yj = ψj(y1, . . . , yk), j = k + 1, . . . , n
define the local stable manifold S̃ near the origin. In the proof we use the matrices
U(t) = [e^{Pt} 0; 0 0] and V(t) = [0 0; 0 e^{Qt}]
It is not difficult to see that with α > 0 chosen as above, we can choose K > 0
sufficiently large and σ > 0 sufficiently small that
‖U(t)‖ ≤ Ke^{−(α+σ)t} ∀ t ≥ 0
and
‖V(t)‖ ≤ Ke^{σt} ∀ t ≤ 0
Consider the integral equation
u(t, a) = U(t)a + ∫₀ᵗ U(t − s)G(u(s, a)) ds − ∫ₜ^∞ V(t − s)G(u(s, a)) ds
which can be solved by the method of successive approximations: let
u^(0)(t, a) = 0
and
u^(j+1)(t, a) = U(t)a + ∫₀ᵗ U(t − s)G(u^(j)(s, a)) ds − ∫ₜ^∞ V(t − s)G(u^(j)(s, a)) ds    (*)
‖u^(j)(t, a) − u^(j−1)(t, a)‖ ≤ K|a|e^{−αt}/2^{j−1}    (9)
Indeed,
‖u^(m+1)(t, a) − u^(m)(t, a)‖ ≤ ∫₀ᵗ ‖U(t − s)‖ ε ‖u^(m)(s, a) − u^(m−1)(s, a)‖ ds + ∫ₜ^∞ ‖V(t − s)‖ ε ‖u^(m)(s, a) − u^(m−1)(s, a)‖ ds ≤ K|a|e^{−αt}/2^m
provided εK/σ < 1/4; i.e., provided we choose ε < σ/(4K). In order that the Lipschitz-type
condition hold for the function G, it suffices to choose K|a| < δ/2; i.e., we choose
|a| < δ/2K. It then follows by induction that (9) holds ∀ j = 1, 2, 3, . . . and t ≥ 0. Thus,
for n > m > N and t ≥ 0,
‖u^(n)(t, a) − u^(m)(t, a)‖ ≤ Σ_{j=N}^∞ ‖u^(j+1)(t, a) − u^(j)(t, a)‖ ≤ K|a| Σ_{j=N}^∞ 1/2^j = K|a|/2^{N−1}
This last quantity approaches zero as N → ∞ and therefore u^(j)(t, a) is a Cauchy
sequence of continuous functions. So, we now know that
lim_{j→∞} u^(j)(t, a) = u(t, a)
uniformly ∀ t ≥ 0 and |a| < δ/2K. Taking the limit of both sides of (*), it follows from the
uniform convergence that the continuous function u(t, a) satisfies the integral equation
and hence the differential equation. It follows by induction and the fact that G ∈ C1(Ẽ)
that u^(j)(t, a) is a differentiable function of a for t ≥ 0 and |a| < δ/2K. Thus, it follows
from the uniform convergence that u(t, a) is a differentiable function of a for t ≥ 0 and
|a| < δ/2K. The last estimate implies that
uj(0, a) = aj for j = 1, . . . , k
and
uj(0, a) = −[∫₀^∞ V(−s)G(u(s, a1, . . . , ak, 0)) ds]ⱼ for j = k + 1, . . . , n.
For j = k + 1, . . . , n we define the functions
ψj(a1, . . . , ak) = uj(0, a1, . . . , ak, 0, . . . , 0)
The equations
yj = ψj(y1, . . . , yk) for j = k + 1, . . . , n
then define, according to the definition, a differentiable manifold S̃ for
√(y1² + · · · + yk²) < δ/2K. Furthermore, if y(t) is a solution of the differential equation
with y(0) ∈ S̃, i.e., with y(0) = u(0, a), then
y(t) = u(t, a)
It follows that if y(t) is a solution of (6) with y(0) ∈ S̃, then y(t) ∈ S̃ ∀ t ≥ 0, and it
follows from the estimate (11) that y(t) → 0 as t → ∞. It can also be shown that if y(t)
is a solution of (6) with y(0) ∉ S̃, then y(t) ↛ 0 as t → ∞. Finally, the functions ψj satisfy
∂ψj/∂yi(0) = 0
i.e., the manifold S̃ is tangent to the stable subspace of the linear system at the origin.
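To make this concrete, consider the standard textbook example ẋ1 = −x1 − x2², ẋ2 = x2 + x1² (an assumption for illustration; it is not derived in these notes). The first successive approximations give ψ(x1) ≈ −x1²/3, and a trajectory started on this approximate manifold decays toward the origin, while one started off it is pushed away along the unstable direction:

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Example system: A = diag(-1, 1), F(x) = (-x2^2, x1^2).
    x1, x2 = x
    return [-x1 - x2**2, x2 + x1**2]

a = 0.1
on_S = [a, -a**2 / 3.0]    # approximately on S: x2 = psi(x1) ~ -x1^2/3
off_S = [a, +a**2 / 3.0]   # off the stable manifold

T = 3.0
end_on = solve_ivp(f, (0, T), on_S, rtol=1e-10, atol=1e-12).y[:, -1]
end_off = solve_ivp(f, (0, T), off_S, rtol=1e-10, atol=1e-12).y[:, -1]

print(np.linalg.norm(end_on) < 0.02)                     # decays toward 0
print(np.linalg.norm(end_off) > np.linalg.norm(end_on))  # pushed along E^u
```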
The existence of the unstable manifold Ũ of (6) is established in exactly the same way by
considering the differential system with t → −t, i.e.,
ẏ = −By − G(y)
The stable manifold for this system will then be the unstable manifold Ũ for the dif-
ferential system. Note that it is also necessary to replace the vector y by the vector
(yk+1 , . . . , yn , y1 , . . . , yk ) in order to determine the n − k dimensional manifold Ũ by the
above process. This completes the proof of the Stable Manifold Theorem.
Definition: Let ϕt be the flow of the nonlinear system. The global stable and
unstable manifolds of the nonlinear system of our concern at 0 are defined by
W^s(0) = ⋃_{t≤0} ϕt(S) and W^u(0) = ⋃_{t≥0} ϕt(U)
respectively; W^s(0) and W^u(0) are also referred to as the global stable and
unstable manifolds of the origin respectively.
It can be shown that the global stable and unstable manifolds W s (0) and W u (0)
are unique and that they are invariant with respect to the flow ϕt ; furthermore, ∀
x ∈ W s (0), limt→∞ ϕt (x) = 0 and ∀ x ∈ W u (0), limt→−∞ ϕt (x) = 0.
It follows from the upper bound of u(t, a) in the proof of the stable manifold theorem
that if x(t) is a solution of the differential equation (6) with x(0) ∈ S, i.e., if x(t) = Cy(t)
with y(0) = u(0, a) ∈ S̃, then for any ε > 0 there exists a δ > 0 such that if |x(0)| < δ
then
|x(t)| ≤ εe−αt
∀ t ≥ 0. Just as in the proof of the stable manifold theorem, α is any positive number
that satisfies Re (λj ) < −α for j = 1, . . . , k where λj , j = 1, . . . , k are the eigenvalues of
Df (0) with negative real part. This result shows that solutions starting in S, sufficiently
near the origin, approach the origin exponentially fast as t → ∞.
Corollary: Under the hypotheses of the Stable Manifold Theorem, if S and U are
the stable and unstable manifolds of the system at the origin and if Re (λj ) < −α <
0 < β < Re (λm ) for j = 1, . . . , k and m = k + 1, . . . , n, then given ε > 0 there
exists a δ > 0 such that if x0 ∈ Nδ (0) ∩ S then |ϕt (x0 )| ≤ εe−αt ∀ t ≥ 0 and if
x0 ∈ Nδ (0) ∩ U then |ϕt (x0 )| ≤ εeβt ∀ t ≤ 0.
The Hartman-Grobman Theorem is another very important result in the local qualitative
theory of ordinary differential equations. The theorem shows that near a hyperbolic
equilibrium point x0, the nonlinear system
ẋ = f(x)
is topologically conjugate, in a neighborhood of x0, to the linear system
ẋ = Ax
with A = Df(x0). Throughout this section we shall assume that the equilibrium point
x0 has been translated to the origin.
Outline of the Proof: Consider the nonlinear system with f ∈ C 1 (E), f (0) = 0 and
A = Df (0).
1. Suppose that the matrix A is written in the form
A = [P 0; 0 Q]
where the eigenvalues of P have negative real part and the eigenvalues of Q have positive
real part.
2. Let ϕt be the flow of the nonlinear system and write the solution
x(t, x0) = ϕt(x0) = (y(t, y0, z0), z(t, y0, z0))ᵀ
where
x0 = (y0, z0)ᵀ ∈ Rⁿ
3. Define the functions
Ỹ(y0, z0) = y(1, y0, z0) − e^P y0
and
Z̃(y0, z0) = z(1, y0, z0) − e^Q z0
Then Ỹ(0) = Z̃(0) = DỸ(0) = DZ̃(0) = 0. And since f ∈ C 1 (E), Ỹ (y0 , z0 ) and
Z̃ (y0 , z0 ) are continuously differentiable. Thus,
‖DỸ(y0, z0)‖ ≤ a
and
‖DZ̃(y0, z0)‖ ≤ a
on the compact set |y0 |2 + |z0 |2 ≤ s20 . The constant a can be taken as small as we like by
choosing s0 sufficiently small. We let Y (y0 , z0 ) and Z (y0 , z0 ) be smooth functions which
are equal to Ỹ (y0 , z0 ) and Z̃ (y0 , z0 ) for |y0 |2 + |z0 |2 ≤ (s0 /2)2 and zero for |y0 |2 + |z0 |2 ≥
s20 . Then by the mean value theorem
|Y(y0, z0)| ≤ a√(|y0|² + |z0|²) ≤ a(|y0| + |z0|)
and
|Z(y0, z0)| ≤ a√(|y0|² + |z0|²) ≤ a(|y0| + |z0|)
4. For
x = (y, z)ᵀ ∈ Rⁿ
define B = e^P, C = e^Q,
L(y, z) = (By, Cz)ᵀ
and
T(y, z) = (By + Y(y, z), Cz + Z(y, z))ᵀ
Lemma: There exists a homeomorphism H of an open set containing the origin onto
an open set containing the origin such that
H ◦ T = L ◦ H.
Proof: We establish this lemma using the method of successive approximations. For
x ∈ Rⁿ, let
H(x) = (Φ(y, z), Ψ(y, z))ᵀ
First of all, define the successive approximations for the second equation by
Ψ0 (y, z) = z
It then follows by an easy induction argument that for k = 0, 1, 2, . . ., the Ψk (y, z) are
continuous and satisfy Ψk (y, z) = z for |y| + |z| ≥ 2s0 . We next prove by induction that
for j = 1, 2, . . .
‖Ψj(y, z) − Ψj−1(y, z)‖ ≤ M rʲ (|y| + |z|)^δ
where r = c[2 max(a, b, c)]^δ with δ ∈ (0, 1) chosen sufficiently small so that r < 1 (which
is possible since c < 1) and M = ac(2s0)^{1−δ}/r. First of all, the estimate holds for j = 1
since Z(y, z) = 0 for |y| + |z| ≥ 2s0. And then, assuming that the induction hypothesis
holds for j = 1, . . . , k, we have
Thus, just as in the proof of the fundamental theorem of non-linear systems and the stable
manifold theorem, Ψk (y, z) is a Cauchy sequence of continuous functions which converges
uniformly as k → ∞ to a continuous function Ψ(y, z). Also, Ψ(y, z) = z for |y|+|z| ≥ 2s0 .
Taking limits in (4) shows that Ψ(y, z) is a solution of the second equation.
The function Φ must then satisfy
B⁻¹Φ(y, z) = Φ(B⁻¹y + Y1(y, z), C⁻¹z + Z1(y, z))
where the functions Y1 and Z1 are defined by the inverse of T (which exists if the constant
a is sufficiently small, i.e., if s0 is sufficiently small) as follows:
T⁻¹(y, z) = (B⁻¹y + Y1(y, z), C⁻¹z + Z1(y, z))ᵀ
This equation can then be solved for Φ(y, z) by the method of successive approximations
exactly as above, with Φ0(y, z) = y, since b = ‖B‖ < 1. We therefore obtain the continuous
map
H(y, z) = (Φ(y, z), Ψ(y, z))ᵀ.
Define, with H0 the homeomorphism obtained in the above lemma,
H = ∫₀¹ L^{−s} H0 T^s ds
It then follows using the above lemma that there exists a neighborhood of the origin for
which
L^t H = ∫₀¹ L^{t−s} H0 T^{s−t} ds T^t
      = ∫_{−t}^{1−t} L^{−s} H0 T^s ds T^t
      = [ ∫_{−t}^0 L^{−s} H0 T^s ds + ∫₀^{1−t} L^{−s} H0 T^s ds ] T^t
      = ∫₀¹ L^{−s} H0 T^s ds T^t = H T^t
since, using H0 ◦ T = L ◦ H0,
∫_{−t}^0 L^{−s} H0 T^s ds = ∫_{−t}^0 L^{−s−1} H0 T^{s+1} ds = ∫_{1−t}^1 L^{−s} H0 T^s ds
Thus, H ◦ T^t = L^t ◦ H or, equivalently, T^t = H⁻¹ ◦ L^t ◦ H.
In this section we shall not explore the intricate theoretical details; we give only a quick
review of the topological definitions of these key terms.
Center: The origin is called a center for the non linear system if there exists a δ > 0 such
that every solution curve of the non linear system in the deleted neighborhood Nδ (0)\{0}
is a closed curve with 0 in its interior.
Center-focus: The origin is known as a center-focus for the non-linear system if there
exists a sequence of closed curves Γn, with Γn+1 in the interior of Γn, such that Γn → 0 as
n → ∞ and such that every trajectory between Γn and Γn+1 spirals towards Γn or Γn+1
as t → ±∞.
Stable focus: The origin is known as a stable focus for the non-linear system if there
exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0, θ0) → 0 and |θ(t, r0, θ0)| → ∞
as t → ∞.
Unstable focus: The origin is known as an unstable focus for the non-linear system if there
exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0, θ0) → 0 and |θ(t, r0, θ0)| → ∞
as t → −∞.
Stable node: The origin is known as a stable node for the non linear system if there
exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0 , θ0 ) → 0 as t → ∞ and
limt→∞ θ(t, r0 , θ0 ) exists.
Unstable node: The origin is known as an unstable node for the non-linear system if
there exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0, θ0) → 0 as t → −∞ and
limt→−∞ θ(t, r0, θ0) exists.
Proper Node: The origin is known as a proper node if it is a node and every ray
through the origin is tangent to some trajectory of the non-linear system.
Topological saddle: The origin is a topological saddle for a non-linear system if there
exist two trajectories Γ1 and Γ2 which approach 0 as t → ∞ and two trajectories Γ3 and
Γ4 which approach 0 as t → −∞, and if there exists a δ > 0 such that all other trajectories
which start in the deleted δ-neighborhood of the origin leave this neighborhood as
t → ±∞. The trajectories Γ1, Γ2, Γ3, Γ4 are known as separatrices.
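The eigenvalues of the linear part A = Df(0) suggest which of these types to expect; at non-hyperbolic points the linear type need not transfer to the nonlinear system. A minimal classifier sketch for the planar linear system (our own helper, not a routine from the notes):

```python
import numpy as np

def classify(A):
    # Classify the origin of x' = Ax by the eigenvalues of A (planar case).
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    re, im = lam.real, lam.imag
    if np.all(np.abs(re) < 1e-12) and np.any(np.abs(im) > 0):
        return "center"          # pure imaginary pair
    if np.any(re > 0) and np.any(re < 0):
        return "saddle"          # real eigenvalues of opposite sign
    if np.any(np.abs(im) > 0):
        return "stable focus" if np.all(re < 0) else "unstable focus"
    return "stable node" if np.all(re < 0) else "unstable node"

print(classify([[0, -1], [1, 0]]))    # center
print(classify([[1, 0], [0, -1]]))    # saddle
print(classify([[-1, -2], [2, -1]]))  # stable focus
```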
We have seen that the nonlinear system
ẋ = f(x)
with f ∈ C1(E) and E an open subset of Rn, has a unique solution ϕt(x0), passing through
a point x0 ∈ E at time t = 0, which is defined for all t ∈ I(x0), the maximal interval of
existence of the solution. Furthermore, the flow ϕt of the system satisfies (i) ϕ0(x) = x
and (ii) ϕt+s(x) = ϕt(ϕs(x)) for all x ∈ E, and the function ϕ(t, x) = ϕt(x) defines a
C1-map ϕ : Ω → E where Ω = {(t, x) ∈ R × E | t ∈ I(x)}.
In this chapter we define a dynamical system as a C1-map ϕ : R×E → E which satisfies (i)
and (ii) above. We first show that we can rescale the time in any C1-system (e.g., the
nonlinear system above) so that for all x ∈ E, the maximal interval of existence
I(x) = (−∞, ∞). Thus any C1-system, after an appropriate rescaling of the time, defines
a dynamical system ϕ : R × E → E where ϕ(t, x) = ϕt(x) is the solution of the
nonlinear system with ϕ0(x) = x. We next consider limit sets and attractors of dynamical
systems. Besides equilibrium points and periodic orbits, a dynamical system can have
homoclinic loops or separatrix cycles as well as strange attractors as limit sets. We study
periodic orbits in some detail and give the Stable Manifold Theorem for periodic orbits as
well as several examples which illustrate the general theory in this chapter. Determining
the nature of limit sets of nonlinear systems with n ≥ 3 is a challenging problem which is
the subject of much mathematical research at this time.
Mathematically speaking, a dynamical system is a function ϕ(t, x), defined for all t ∈ R
and x ∈ E ⊂ Rn, which describes how points x ∈ E move with respect to time. We
require that the family of maps ϕt(x) = ϕ(t, x) have the properties of a flow, which have
already been defined.
Definition: A dynamical system on E is a C1-map
ϕ : R × E → E
where E is an open subset of Rn, such that ϕt(x) = ϕ(t, x) satisfies (i) ϕ0(x) = x for all
x ∈ E and (ii) ϕt ◦ ϕs = ϕt+s for all s, t ∈ R.
Remark: It follows from the definition that for each t ∈ R, ϕt is a C1 map of E into E
which has a C1 inverse, ϕ−t; i.e., ϕt is a diffeomorphism of E.
It is easy to see that if A is an n × n matrix then the function ϕ(t, x) = eAt x defines a
dynamical system on Rn and also, for each x0 ∈ Rn , ϕ (t, x0 ) is the solution of the initial
value problem
ẋ = Ax
x(0) = x0 .
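Assuming NumPy/SciPy are available, a quick sketch comparing e^{At}x0 with a direct numerical integration of the initial value problem, and checking the flow property ϕ_{t+s} = ϕt ◦ ϕs (the matrix A below is an arbitrary example):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])  # an arbitrary example matrix
x0 = np.array([1.0, 0.0])
t, s = 1.5, 0.7

phi = expm(A * t) @ x0  # phi(t, x0) = e^{At} x0
num = solve_ivp(lambda u, x: A @ x, (0, t), x0,
                rtol=1e-10, atol=1e-12).y[:, -1]
print(np.allclose(phi, num, atol=1e-7))  # same solution of the IVP

# Property (ii): phi_{t+s} = phi_t o phi_s for the linear flow.
print(np.allclose(expm(A * (t + s)) @ x0, expm(A * t) @ (expm(A * s) @ x0)))
```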
Conversely, if ϕ(t, x) is a dynamical system on E ⊂ Rn, then the function
f(x) = d/dt ϕ(t, x)|_{t=0}
defines a C1-vector field on E and, for each x0 ∈ E, ϕ(t, x0) is the solution of the initial
value problem
ẋ = f (x)
x(0) = x0 .
The next theorem shows that any C 1 -vector field f defined on all of Rn leads to a dynamical
system on Rn . While the solutions ϕ (t, x0 ) of the original system may not be defined for
all t ∈ R, the time t can be rescaled along trajectories of the original system to obtain a
topologically equivalent system for which the solutions are defined for all t ∈ R.
Before stating this theorem, we generalize the notion of topological equivalent systems for
a neighborhood of the origin.
Definition: Suppose that f ∈ C 1 (E1 ) and g ∈ C 1 (E2 ) where E1 and E2 are open subsets
of Rn . Then the two autonomous systems of differential equations
ẋ = f (x)
and
ẋ = g(x)
are said to be topologically equivalent if there is a homeomorphism H : E1 → E2 which
maps trajectories of the first differential equation onto trajectories of the second one and
preserves their orientation by time. In this case, the vector fields f and g are also said
to be topologically equivalent. If E = E1 = E2 then the two systems are said to be
topologically equivalent on E and the vector fields f and g are said to be topologically
equivalent on E.
Global Existence Theorem: For f ∈ C1(Rn) and for each x0 ∈ Rn, the initial
value problem
ẋ = f(x)/(1 + |f(x)|)
x(0) = x0
has a unique solution x(t) defined for all t ∈ R; i.e., this modified system defines a
dynamical system on Rn; furthermore, it is topologically equivalent to the original
system ẋ = f(x) on Rn.
Remark: The original system and the modified one in the theorem are topologically
equivalent on Rn since the time t along the solutions x(t) of the original system has
simply been rescaled according to the formula
τ = ∫₀ᵗ [1 + |f(x(s))|] ds
i.e., the homeomorphism H is simply the identity on Rn. The solution x(t) of the original
system, with respect to the new time τ, then satisfies
dx/dτ = (dx/dt)/(dτ/dt) = f(x)/(1 + |f(x)|)
i.e., x(t(τ)) is the solution of the modified system, where t(τ) is the inverse of the strictly
increasing function τ(t) defined by the rescaling above. The function τ(t) maps the maximal
interval of existence (α, β) of the solution x(t) of the original system one-to-one and onto
(−∞, ∞), the maximal interval of existence of the modified system.
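A hedged numerical illustration of the theorem (the choice f(x) = x² is ours): solutions of ẋ = x² blow up in finite time, while the rescaled field is bounded by 1 and its solutions exist for all t:

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda t, x: x**2                  # C^1 on R, but solutions blow up
g = lambda t, x: x**2 / (1.0 + x**2)   # rescaled right-hand side, bounded by 1

x0 = [1.0]
# Original system: x(t) = 1/(1 - t) blows up as t -> 1^-.
orig = solve_ivp(f, (0, 0.999), x0, rtol=1e-10, atol=1e-12)
print(orig.y[0, -1] > 500)             # already huge near the blow-up time

# Rescaled system: |dx/dt| <= 1, so the solution exists for all t.
resc = solve_ivp(g, (0, 100.0), x0, rtol=1e-8)
print(np.isfinite(resc.y[0, -1]))      # True
```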
Proof: Note first that
f/(1 + |f|) ∈ C1(Rn)
For x0 ∈ Rn, let x(t) be the solution of the modified initial value problem on its maximal
interval of existence (α, β). Then x(t) satisfies the integral equation (Verify!!)
x(t) = x0 + ∫₀ᵗ f(x(s))/(1 + |f(x(s))|) ds
and, since the integrand is bounded by 1,
|x(t)| ≤ |x0| + ∫₀^{|t|} ds = |x0| + |t|
If we assume that β < ∞, then
|x(t)| ≤ |x0| + β
for all t ∈ [0, β); i.e., for all t ∈ [0, β), the solution of the modified system through the
point x0 at time t = 0 is contained in the compact set
K = {x ∈ Rn | |x| ≤ |x0| + β} ⊂ Rn
But then, by a corollary proved earlier, it follows that β = ∞, a contradiction; therefore
β = ∞ and, similarly, α = −∞.
Example: Consider the initial value problem
ẋ = 1/(2x)
x(0) = x0
which has the unique solution x(t) = √(t + x0²) defined on its maximal interval of
existence I(x0) = (−x0², ∞). The function f(x) = 1/(2x) ∈ C1(E) where E = (0, ∞).
The modified initial value problem
ẋ = (1/(2x))/(1 + 1/(2x)) = 1/(2x + 1)
x(0) = x0
has the unique solution
x(t) = −1/2 + √(t + (x0 + 1/2)²)
defined on its maximal interval of existence I(x0) = (−(x0 + 1/2)², ∞). We see that in
this case I(x0) ≠ R.
However, a slightly more subtle rescaling of the time along trajectories of the original
initial value problem does lead to a dynamical system equivalent to the original one even
when E is a proper subset of Rn . This idea is due to Vinograd.
Theorem (Vinograd): Suppose that E is an open subset of Rn and f ∈ C1(E).
Then there is a function F ∈ C1(E) such that
ẋ = F(x)
defines a dynamical system on E and such that the new dynamical system is topo-
logically equivalent to the original one on E.
Proof: The function
g(x) = f(x)/(1 + |f(x)|) ∈ C1(E)
and, since |g(x)| ≤ 1 and only the time has been rescaled, the original system and the
modified one ẋ = g(x) are topologically equivalent on E.
Furthermore, solutions x(t) of the modified system satisfy
∫₀ᵗ |ẋ(t′)| dt′ = ∫₀ᵗ |g(x(t′))| dt′ ≤ |t|
i.e., for finite t, the trajectory defined by x(t) has finite arc length. Let (α, β) be the
maximal interval of existence of x(t) and suppose that β < ∞. Then since the arc length
of the half-trajectory defined by x(t) for t ∈ (0, β) is finite, the half-trajectory defined by
x(t) for t ∈ [0, β) must have a limit point
x1 = lim_{t→β−} x(t) ∈ Ē
Let K = Rn \ E and define
G(x) = d(x, K)/(1 + d(x, K))
where d(x, K) = inf_{y∈K} |x − y|; i.e., for x ∈ E, d(x, K) is the distance of x from the
boundary ∂E of E. Then the function G ∈ C1(Rn), 0 ≤ G(x) ≤ 1 and, for x ∈ K,
G(x) = 0. Let F(x) = g(x)G(x). Then
F ∈ C 1 (E) and the system, ẋ = F(x), is topologically equivalent to our initial modification
on E since we have simply rescaled the time along trajectories of that initially modified
system; i.e., the homeomorphism H is simply the identity on E. Furthermore, the system
ẋ = F(x) has a bounded right-hand side and therefore its trajectories have finite arc-length
for finite t. To prove that the modification by Vinograd defines a dynamical system on
E, it suffices to show that all half-trajectories of the aforementioned modification which
(a) start in E, (b) have finite arc length s0, and (c) terminate at a limit point x1 ∈ Ē are
defined for all t ∈ [0, ∞). Along any solution x(t) of that modification, ds/dt = |ẋ(t)|
and hence
t = ∫₀ˢ ds′/|F(x(t(s′)))|
where t(s) is the inverse of the strictly increasing function s(t) defined by
s = ∫₀ᵗ |F(x(t′))| dt′
for s > 0. But for each point x = x(t(s)) on the half-trajectory we have, since |g(x)| ≤ 1,
|F(x)| ≤ G(x) = d(x, K)/(1 + d(x, K)) < d(x, K) = inf_{y∈K} |x − y| ≤ d(x, x1) ≤ s0 − s
It then follows that
t ≥ ∫₀ˢ ds′/(s0 − s′) = log( s0/(s0 − s) )
and hence t → ∞ as s → s0 ; i.e., the half-trajectory defined by x(t) is defined for all
t ∈ [0, ∞); i.e., β = ∞. Similarly, it can be shown that α = −∞ and hence, the
modified system defines a dynamical system on E which is topologically equivalent to the
unmodified original system on E.
For f ∈ C1(E), E an open subset of Rn, the preceding theorem implies that there is no loss
in generality in assuming that the original system defines a dynamical system ϕ(t, x0) on
E. Throughout the remainder of these notes we therefore make this assumption; i.e., we
assume that for all x0 ∈ E, the maximal interval of existence I(x0) = (−∞, ∞). In the
next section we go on to discuss the limit sets of trajectories x(t) of the original system
as t → ±∞. However, we first present two more global existence theorems which are of
some interest.
Theorem: Suppose that f ∈ C 1 (Rn ) and that f (x) satisfies the global Lipschitz
condition
|f(x) − f(y)| ≤ M |x − y|
for all x, y ∈ Rn . Then for x0 ∈ Rn , the initial value problem (1) has a unique
solution x(t) defined for all t ∈ R.
Proof: Let x(t) be the solution of the original initial value problem on its maximal
interval of existence (α, β). Then using the fact that d|x(t)|/dt ≤ |ẋ(t)| and the triangle
inequality,
d/dt |x(t) − x0| ≤ |ẋ(t)| = |f(x(t))| ≤ |f(x(t)) − f(x0)| + |f(x0)| ≤ M |x(t) − x0| + |f(x0)|
Thus, if we assume that β < ∞, then the function g(t) = |x(t) − x0 | satisfies
g(t) = ∫₀ᵗ (dg(s)/ds) ds ≤ |f(x0)| β + M ∫₀ᵗ g(s) ds
for all t ∈ (0, β). It then follows from Gronwall’s Lemma that
|x(t) − x0 | ≤ β |f (x0 )| eM β
for all t ∈ [0, β); i.e., the trajectory of the original system through the point x0 at time
t = 0 is contained in the compact set
K = {x ∈ Rn | |x − x0| ≤ β |f(x0)| e^{Mβ}} ⊂ Rn.
But then, by one of the corollaries we have already proven, it follows that β = ∞,
contradicting our assumption that β < ∞. Therefore, β = ∞ and it can similarly be
shown that α = −∞. Thus, for all x0 ∈ Rn, the maximal interval of existence of the
solution x(t) of the initial value problem is I(x0) = (−∞, ∞).
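A small sketch under the assumption f(x) = sin x, which satisfies the global Lipschitz condition with M = 1; the solution is then defined (and here bounded) for all time:

```python
import numpy as np
from scipy.integrate import solve_ivp

# f(x) = sin x satisfies |f(x) - f(y)| <= |x - y|, i.e. M = 1.
sol = solve_ivp(lambda t, x: np.sin(x), (0, 50.0), [1.0], rtol=1e-9)
x50 = sol.y[0, -1]
print(0.0 < x50 < 3.2)  # defined out to t = 50, converging toward pi
```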
Consider again the nonlinear system
ẋ = f(x)
with f ∈ C1(E) where E is an open subset of Rn. In the previous section, we saw that
there is no loss in generality in assuming that the nonlinear system defines a dynamical
system ϕ(t, x) on E. For x0 ∈ E, the function ϕ(·, x0) : R → E defines a solution curve,
trajectory, or orbit of the nonlinear system through the point x0 in E. If we identify the
function ϕ(·, x0) with its graph, we can think of a trajectory through the point x0 ∈ E as
a motion along the curve
Γx0 = {x ∈ E | x = ϕ(t, x0), t ∈ R}
defined by the nonlinear system. We shall also refer to Γx0 as the trajectory of the
nonlinear system through the point x0 at time t = 0. If the point x0 plays no role in the
discussion, we simply denote the trajectory by Γ and draw the curve Γ in the subset E
of the phase space Rn with an arrow indicating the direction of the motion along Γ with
increasing time. By the positive half-trajectory through the point x0 ∈ E, we mean the
motion along the curve
Γ+_x0 = {x ∈ E | x = ϕ(t, x0), t ≥ 0}
The negative half-trajectory, Γ−_x0, is similarly defined, and any trajectory Γ = Γ+ ∪ Γ−.
Figure 5: A trajectory Γ of the initial value problem which approaches the ω-limit
point p ∈ E as t → ∞.
Definition: A point p ∈ E is an ω-limit point of the trajectory ϕ(·, x) of the nonlinear
system if there is a sequence tn → ∞ such that
lim_{n→∞} ϕ(tn, x) = p
Similarly, if there is a sequence tn → −∞ such that
lim_{n→∞} ϕ(tn, x) = q
and the point q ∈ E, then the point q is called an α-limit point of the trajectory ϕ(·, x)
of the nonlinear system. The set of all ω-limit points of a trajectory Γ is called the
ω-limit set of Γ and is denoted by ω(Γ). The set of all α-limit points of a trajectory
Γ is called the α-limit set of Γ and is denoted by α(Γ). The set of all limit points of
Γ, α(Γ) ∪ ω(Γ), is called the limit set of Γ.
Theorem: The α and ω-limit sets of a trajectory Γ of the initial nonlinear system,
α(Γ) and ω(Γ), are closed subsets of E and if Γ is contained in a compact subset
of Rn , then α(Γ) and ω(Γ), are non-empty, connected, compact subsets of E.
Proof: It follows from the definition that ω(Γ) ⊂ E. In order to show that ω(Γ) is a closed
subset of E, we let pn be a sequence of points in ω(Γ) with pn → p ∈ Rn and show that
p ∈ ω(Γ). Let x0 ∈ Γ. Then since pn ∈ ω(Γ), it follows that for each n ∈ N there is a
sequence t_k^(n) → ∞ as k → ∞ such that
lim_{k→∞} ϕ(t_k^(n), x0) = pn
Furthermore, we may assume that t_k^(n+1) > t_k^(n), since otherwise we can choose a
subsequence of t_k^(n) with this property. The above equation implies that ∀ n ≥ 2 there
is a sequence of integers K(n) > K(n − 1) such that for k ≥ K(n),
|ϕ(t_k^(n), x0) − pn| < 1/n
Let tn = t_{K(n)}^(n). Then tn → ∞ and, by the triangle inequality,
|ϕ(tn, x0) − p| ≤ |ϕ(tn, x0) − pn| + |pn − p| ≤ 1/n + |pn − p| → 0
as n → ∞. Thus p ∈ ω(Γ).
If Γ ⊂ K, a compact subset of Rn, and ϕ(tn, x0) → p ∈ ω(Γ), then p ∈ K since
ϕ(tn, x0) ∈ Γ ⊂ K and K is compact. Thus, ω(Γ) ⊂ K and therefore ω(Γ) is compact,
since a closed subset of a compact set is compact. Furthermore, ω(Γ) ≠ ∅ since the
sequence of points ϕ(n, x0) ∈ K contains a convergent subsequence which converges to a
point in ω(Γ) ⊂ K. Finally, suppose that ω(Γ) is not connected. Then there exist two
nonempty, disjoint, closed sets A and B such that ω(Γ) = A ∪ B. Since A and B are both
bounded, they are a finite distance δ apart, where the distance from A to B is
d(A, B) = inf_{x∈A, y∈B} |x − y|
Since the points of A and B are ω-limit points of Γ, there exist arbitrarily large t such
that ϕ(t, x0) is within δ/2 of A, and there exist arbitrarily large t such that the distance
of ϕ(t, x0) from A is greater than δ/2. Since the distance d(ϕ(t, x0), A) of ϕ(t, x0) from
A is a continuous function of t, it follows that there must exist a sequence tn → ∞ such
that d(ϕ(tn, x0), A) = δ/2. Since {ϕ(tn, x0)} ⊂ K there is a subsequence converging to
a point p ∈ ω(Γ) with d(p, A) = δ/2. But then d(p, B) ≥ d(A, B) − d(p, A) = δ/2, which
implies that p ∉ A and p ∉ B; i.e., p ∉ ω(Γ), a contradiction. Thus, ω(Γ) is connected.
A similar proof serves to establish these same results for α(Γ).
Theorem: If p is an ω-limit point (respectively, α-limit point) of a trajectory Γ of
the nonlinear system, then all other points of the trajectory ϕ(·, p) through p are
also ω-limit points (respectively, α-limit points) of Γ.
Proof: Let p ∈ ω(Γ) where Γ is the trajectory ϕ(·, x0) of the nonlinear system
through the point x0 ∈ E. Let q be a point on the trajectory ϕ(·, p) of the nonlinear
system through the point p; i.e., q = ϕ(t̃, p) for some t̃ ∈ R. Since p is an ω-limit point
of the trajectory ϕ(·, x0), there is a sequence tn → ∞ such that ϕ(tn, x0) → p. Thus we
have
ϕ(tn + t̃, x0) = ϕ(t̃, ϕ(tn, x0)) → ϕ(t̃, p) = q
And since tn + t̃ → ∞, the point q is an ω-limit point of ϕ(·, x0). A similar proof holds
when p is an α-limit point of Γ, and this completes the proof of the theorem.
It follows from this theorem that ∀ points p ∈ ω(Γ), ϕt(p) ∈ ω(Γ) ∀ t ∈ R; i.e., ϕt(ω(Γ)) ⊂
ω(Γ). Thus, according to the definition of an invariant set, we have the following result.
Corollary: α(Γ) and ω(Γ) are invariant with respect to the flow ϕt of the initial
nonlinear system.
The α - and ω-limit sets of a trajectory Γ of the initial nonlinear system are thus closed
invariant subsets of E. In the next definition, a neighborhood of a set A is any open set
U containing A and we say that x(t) → A as t → ∞ if the distance d(x(t), A) → 0 as
t → ∞.
3.3 Attractors
A closed invariant set A ⊂ E is called an attracting set of the initial nonlinear system if
there is some neighborhood U of A such that ∀ x ∈ U, ϕt (x) ∈ U ∀ t ≥ 0 and ϕt (x) → A
as t → ∞. An attractor of the initial nonlinear system is an attracting set which contains
a dense orbit.
Note that any equilibrium point x0 of the nonlinear system is its own α- and ω-limit
set, since ϕ(t, x0) = x0 ∀ t ∈ R. And if a trajectory Γ of the nonlinear system
has a unique ω-limit point x0, then by the above Corollary, x0 is an equilibrium point of
the nonlinear system. A stable node or focus is the ω-limit set of every trajectory
in some neighborhood of the point, and a stable node or focus of the nonlinear
system is an attractor of the nonlinear system. However, not every ω-limit set of
a trajectory of the nonlinear system is an attracting set of the nonlinear
system; for example, a saddle x0 of a planar system is the ω-limit set of three trajectories
in a neighborhood Nδ(x0), but of no other trajectories in Nδ(x0).
If q is any regular point in α(Γ) or ω(Γ) then the trajectory through q is called a limit orbit
of Γ. Thus, by the second theorem , we see that α(Γ) and ω(Γ) consist of equilibrium
points and limit orbits of the initial nonlinear system. We now consider some specific
examples of limit sets and attractors.
Circular Attractor
The system
ṙ = r(1 − r²)
θ̇ = 1
has the origin as an equilibrium point; the flow spirals around the origin in the
counter-clockwise direction; it spirals outward for 0 < r < 1 since ṙ > 0 there, and it
spirals inward for r > 1 since ṙ < 0 there. The counter-clockwise flow on the unit circle
describes a trajectory Γ0 of the system since ṙ = 0 on r = 1. The trajectory through the
point (cos θ0, sin θ0) on the unit circle at t = 0 is given by x(t) = (cos(t + θ0), sin(t + θ0))ᵀ.
The phase portrait for this system is shown in the figure. The trajectory Γ0 is called a
stable limit cycle.
Figure 6: A stable limit cycle Γ0 which is an attractor of the initial nonlinear system.
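The Cartesian form of this system, ẋ = −y + x(1 − x² − y²), ẏ = x + y(1 − x² − y²), can be integrated numerically; trajectories from inside and outside the circle both approach r = 1 (a minimal sketch, assuming SciPy is available):

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, p):
    # Cartesian form of rdot = r(1 - r^2), thetadot = 1.
    x, y = p
    r2 = x * x + y * y
    return [-y + x * (1.0 - r2), x + y * (1.0 - r2)]

for start in ([0.1, 0.0], [2.0, 0.0]):   # inside and outside the unit circle
    end = solve_ivp(f, (0, 30.0), start, rtol=1e-9).y[:, -1]
    print(abs(np.linalg.norm(end) - 1.0) < 1e-3)  # r(t) -> 1 in both cases
```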
Spherical Attractor
The system
ẋ = −y + x(1 − z² − x² − y²)
ẏ = x + y(1 − z² − x² − y²)
ż = 0
has the unit two-dimensional sphere S 2 together with that portion of the z-axis outside
S 2 as an attracting set. Each plane z = z0 is an invariant set and for |z0 | < 1 the ω-limit
set of any trajectory not on the z-axis is a stable cycle on S 2 .
Cylindrical Attractor
The system
ẋ = −y + x(1 − x² − y²)
ẏ = x + y(1 − x² − y²)
ż = α
has the z-axis and the cylinder x² + y² = 1 as invariant sets. The cylinder is an attracting
set.
Toroidal Attractor
If in the previous example we identify the points (x, y, 0) and (x, y, 2π) in the planes z = 0
and z = 2π, we get a flow in R³ with a two-dimensional invariant torus T² as an attracting
set. The z-axis gets mapped onto an unstable cycle Γ, and if α is an irrational multiple
of π, then the torus T² is an attractor: it is the ω-limit set of every trajectory except
the cycle Γ.
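The identification z ∼ z + 2π can be mimicked numerically (a sketch; the choice α = √2, the Euler integrator, and the initial point are illustrative, not from the source): reducing z modulo 2π turns the attracting cylinder into the attracting torus, and any trajectory off the z-axis is pulled onto r = 1:

```python
import math

def torus_flow(x, y, alpha=math.sqrt(2), t_end=10.0, dt=1e-3):
    """Forward-Euler integration of the cylindrical system, with z taken
    mod 2*pi so the invariant cylinder becomes an invariant torus T^2."""
    z = 0.0
    for _ in range(int(t_end / dt)):
        c = 1.0 - x * x - y * y
        x, y = x + dt * (-y + x * c), y + dt * (x + y * c)
        z = (z + dt * alpha) % (2.0 * math.pi)
    return x, y, z

# A trajectory starting off the z-axis is attracted to the torus r = 1,
# while the angle z keeps winding around in [0, 2*pi).
xt, yt, zt = torus_flow(2.0, 0.0)
```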
Lorenz System
The original work of Lorenz in 1963 as well as the more recent work of Sparrow indicates
that for certain values of the parameters σ, ρ and β, the system
ẋ = σ(y − x)
ẏ = ρx − y − xz
ż = −βz + xy
has a strange attracting set. For example, for σ = 10, ρ = 28 and β = 8/3, a single
trajectory of this system is shown in the figure along with a "branched surface" S. The
attractor A of this system is made up of an infinite number of branched surfaces S which
are interleaved and which intersect; however, the trajectories of this system in A do not
intersect but move from one branched surface to another as they circulate through the
apparent branch. The numerical results and the related theoretical work indicate that
the closed invariant set A contains a countable set of periodic orbits of arbitrarily long
periods, an uncountable set of nonperiodic motions, and a dense orbit.
Figure 10: A trajectory Γ of the Lorenz system and the corresponding branched surface
S.
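The hallmark of this strange attracting set is sensitive dependence on initial conditions. The sketch below (the RK4 integrator, step size, initial point, and perturbation size 1e-8 are illustrative choices, not from the source) shows two nearly identical initial conditions decorrelating completely while each trajectory stays bounded on the attractor:

```python
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One classical fourth-order Runge-Kutta step for the Lorenz system."""
    def f(s):
        x, y, z = s
        return (sigma * (y - x), rho * x - y - x * z, -beta * z + x * y)

    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2))
    k3 = f(nudge(state, k2, dt / 2))
    k4 = f(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Two trajectories starting 1e-8 apart separate macroscopically,
# while each one remains bounded on the attractor.
p, q = (1.0, 1.0, 1.0), (1.0, 1.0, 1.0 + 1e-8)
dt, max_sep = 0.01, 0.0
for _ in range(3000):  # integrate to t = 30
    p, q = lorenz_step(p, dt), lorenz_step(q, dt)
    max_sep = max(max_sep, sum((u - v) ** 2 for u, v in zip(p, q)) ** 0.5)
```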
Halvorsen Attractor
The cyclically symmetric system
ẋ = −ax − 4y − 4z − y²
ẏ = −ay − 4z − 4x − z²
ż = −az − 4x − 4y − x²
has a strange attracting set for suitable values of the parameter a.
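A numerical sketch of the Halvorsen system follows, with the damping terms written in the standard form −ax, −ay, −az. The parameter value a = 1.4, the RK4 integrator, the step size, and the initial point are illustrative assumptions, not from the source:

```python
def halvorsen_step(state, dt, a=1.4):
    """One fourth-order Runge-Kutta step for the Halvorsen system.

    a = 1.4 is a commonly used parameter value (an assumption here)."""
    def f(s):
        x, y, z = s
        return (-a * x - 4 * y - 4 * z - y * y,
                -a * y - 4 * z - 4 * x - z * z,
                -a * z - 4 * x - 4 * y - x * x)

    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))

    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2))
    k3 = f(nudge(state, k2, dt / 2))
    k4 = f(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (u + 2 * v + 2 * w + r)
                 for s, u, v, w, r in zip(state, k1, k2, k3, k4))

# From a generic initial point the trajectory settles onto the bounded,
# cyclically symmetric Halvorsen attractor.
pt = (1.0, 0.0, 0.0)
for _ in range(5000):  # dt = 0.002, so t = 10
    pt = halvorsen_step(pt, 0.002)
```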
GeoGebra is a convenient tool for visualizing all the attractors and dynamical systems
discussed above; it offers flexible customization and a free choice of the number of
observed particles and of the parameter values.
## Parameters, modify according to the equation ##
d = 10
b = 8/3
p = 28

## System of differential equations: Lorenz attractor (the other systems work as well) ##
x'(t, x, y, z) = d * (y - x)
y'(t, x, y, z) = x * (p - z) - y
z'(t, x, y, z) = x * y - b * z

## Initial condition ##
x0 = 1
y0 = 1
z0 = 1

## Numerical solution ##
NSolveODE({x', y', z'}, 0, {x0, y0, z0}, 20)

## Note ##
# The command NSolveODE() creates three curves
# containing the numerical solution of the system,
# one per variable (x, y and z), plotted
# against time in the 2D graphics view.

## Calculate length of solution 1 ##
len = Length(numericalIntegral1)
References
[1] Lawrence Perko, Differential Equations and Dynamical Systems. Springer, 1998.