[Cover: UDGRP Winter 2024 Lecture Notes, Dynamical Systems, Aritrabha Majumdar (BMAT2311)]

N
Undergraduate Directed Group
Reading Project Winter 2024

Stability Analysis of Ordinary Differential


Equations

Aritrabha Majumdar (BMAT2311)

Indian Statistical Institute, Bangalore


Contents
1 Linear Systems 2
1.1 Autonomous System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Uncoupled System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Diagonalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4 Matrix Norm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Exponentials of Operators . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.6 The Fundamental Theorem for Linear Systems . . . . . . . . . . . . . . . . 10
1.7 Linear Systems in R2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.8 System with Complex eigenvalues . . . . . . . . . . . . . . . . . . . . . . . 13
1.9 Multiple eigenvalues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
1.10 Stability Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.11 Nonhomogeneous Linear Systems . . . . . . . . . . . . . . . . . . . . . . . 19

2 Nonlinear Systems: Local Theory 20


2.1 Some Preliminary Concepts and Definitions . . . . . . . . . . . . . . . . . 20
2.2 The Fundamental Existence Uniqueness Theorem . . . . . . . . . . . . . . 21
2.3 Dependence on Initial Conditions and Parameters . . . . . . . . . . . . . . 25
2.4 The Maximal Interval of Existence . . . . . . . . . . . . . . . . . . . . . . 32
2.5 The Flow defined by a Differential Equation . . . . . . . . . . . . . . . . . 39
2.6 Linearization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
2.7 Stable Manifold Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
2.8 The Hartman-Grobman Theorem . . . . . . . . . . . . . . . . . . . . . . . 51
2.9 Saddles, Nodes, Foci and Centers . . . . . . . . . . . . . . . . . . . . . . . 57

3 Nonlinear Systems: Global Theory 58


3.1 Dynamical Systems and Global Existence Theorems . . . . . . . . . . . . . 59
3.2 Limit Sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
3.3 Attractors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Stability Analysis of ODEs 2

1 Linear Systems

1.1 Autonomous System

An autonomous system of Ordinary Differential Equations is a system which does NOT explicitly depend on the independent variable. It is of the form

d/dt x(t) = f(x(t)) ; x ∈ Rn

Solutions are invariant under horizontal translations.

Proof: Say x1(t) is a solution of the ODE dx/dt = f(x), x(0) = x0.
Then x2(t) = x1(t − t0) solves dx/dt = f(x), x(t0) = x0.
Now we set s = t − t0, which essentially gives x2(t) = x1(s) and ds = dt. Thus,

d/dt x2(t) = d/ds x1(s) = f(x1(s)) = f(x2(t))

And for the initial condition, we have x2(t0) = x1(t0 − t0) = x0.

An autonomous system of two first order differential equations has the form

dx/dt = f(x, y)
dy/dt = g(x, y)

If the system is linear, we can express it in the given format

dx/dt = ax + by
dy/dt = cx + dy

For which we can write

ẋ = (dx/dt, dy/dt)ᵀ = [a b; c d] (x, y)ᵀ = Ax ; (a, b, c, d) ∈ R⁴

1.2 Uncoupled System

An uncoupled system of Ordinary Differential Equations is a system in which the differential equation of each dependent variable is independent of the others. Clearly, in this case, the matrix A is diagonal.

dx/dt = ax ⟹ x = c1 e^{at}
dy/dt = by ⟹ y = c2 e^{bt}

ẋ = [a 0; 0 b] x ⟹ x(t) = [e^{at} 0; 0 e^{bt}] (c1, c2)ᵀ = e^{At} C

After a bit of careful examination, it is evident that the solutions of this differential equation lie on the curves y = k x^{b/a}, where k = c2 / c1^{b/a}.

Phase Plane: While trying to describe the motion of the particle governed by the provided differential equations, we can draw the solution curves in the space Rⁿ, and this is known as the Phase Space (the Phase Plane when n = 2). Clearly, in the above uncoupled system, R² is the Phase Plane.

Phase Portrait: The set of all solution curves drawn in the Phase space is known as
Phase Portrait.

Dynamical Systems: A dynamical system governed by ẋ = Ax is a function ϕ : Rⁿ × R → Rⁿ given by ϕ(C, t) = e^{At} C. Geometrically, it describes the motion of points in the phase plane along the solution curves.

Equilibrium Point: For c1 = c2 = 0, x(t) = 0 ∀t ∈ R and the origin is referred to as


an equilibrium point of a system of Differential Equations.

The function f(x) = Ax defines a mapping f : Rⁿ → Rⁿ, which defines a vector field on Rⁿ. If we draw each vector at its initial point, then we get a pictorial representation of the vector field. It is an interesting observation that at each point in the phase space, the solution curve is tangent to the vectors in the vector field. Actually, it is pretty obvious, as at time t, the velocity vector v(t) = ẋ(t) is tangent to the solution curve.

We observe this for ẋ = Ix



Figure 1: Vector field representation for ẋ = Ix

Asymptotic Stability of Origin: Here, we look at

lim_{t→∞} (x(t), y(t)) = lim_{t→∞} (c1 e^{at}, c2 e^{bt})

If a < 0 and b < 0, then this limit goes to (0, 0). Otherwise, most of the solutions diverge to infinity.
Roughly speaking, an equilibrium (x0 , y0 ) is asymptotically stable if every trajectory
(x(t), y(t)) beginning from an initial condition near (x0 , y0 ) stays near (x0 , y0 ) for
t > 0, and
lim (x(t), y(t)) = (x0 , y0 )
t→∞

The equilibrium is unstable if there are trajectories with initial conditions arbitrar-
ily close to the equilibrium that move far away from that equilibrium.
Later on, we will discuss this in greater detail.

Invariance of the Axes: There is another observation that we can make for
uncoupled systems. Suppose that the initial condition for an uncoupled system lies
on the x axis; that is, suppose y0 = 0, then the solution (x(t), y(t)) = (x0 eat , 0) also
lies on the x axis ∀ time. Similarly, if the initial condition lies on the y axis, then
the solution (0, y0 ebt ) lies on the y axis ∀ time.

1.3 Diagonalization

Theorem: If eigenvalues λ1 , λ2 , ..., λn of a matrix A are real and distinct, then any
set
 of corresponding
 eigenvectors {v1 , v2 , ...vn } forms a basis of Rn . The matrix P =
v1 v2 ... vn is invertible and
 
λ1
 .. 
P −1 AP =  . 
λn

This theorem can be used to reduce the linear system ẋ = Ax to an uncoupled linear system. To do so, we first define the change of coordinates x = P y. So we have,

ẏ = P⁻¹ẋ = P⁻¹Ax = P⁻¹AP y
⟹ ẏ = diag(λ1, …, λn) y
⟹ y(t) = diag(e^{λ1 t}, …, e^{λn t}) y(0)
⟹ P⁻¹x(t) = diag(e^{λ1 t}, …, e^{λn t}) P⁻¹x(0)
⟹ x(t) = P diag(e^{λ1 t}, …, e^{λn t}) P⁻¹x(0)
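The recipe above can be carried out numerically. A minimal sketch (not from the notes) using NumPy, assuming A has real, distinct eigenvalues so that P is invertible; the helper name is hypothetical.

```python
import numpy as np

# Sketch of the formula above: x(t) = P diag(e^{lambda_j t}) P^{-1} x(0),
# assuming A has real, distinct eigenvalues.
def solve_by_diagonalization(A, x0, t):
    lam, P = np.linalg.eig(A)        # columns of P are eigenvectors
    y0 = np.linalg.solve(P, x0)      # y(0) = P^{-1} x(0)
    yt = np.exp(lam * t) * y0        # uncoupled system: y_j(t) = e^{lambda_j t} y_j(0)
    return P @ yt                    # back to the original coordinates

A = np.array([[1.0, 1.0], [0.0, 2.0]])     # eigenvalues 1 and 2
x0 = np.array([1.0, 1.0])                  # lies along the eigenvector for 2
xt = solve_by_diagonalization(A, x0, 1.0)
assert np.allclose(xt, np.exp(2.0) * x0)   # x(1) = e^2 (1, 1)
```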

Stable, Unstable and Center Subspace


It is evident that every solution decays to the origin as t → ∞ iff all eigenvalues are negative.
Keeping this in mind, we consider {v1, …, vk} to be the eigenvectors corresponding to negative eigenvalues, and {vk+1, …, vn} to be the eigenvectors corresponding to positive eigenvalues.
Then we denote the stable subspace of the Linear System by

E S = span{v1 , . . . , vk }

and the unstable subspace of the Linear System by

E U = span{vk+1 , . . . , vn }

If we have pure imaginary eigenvalues, then we also get a center subspace, namely
EC .

1.4 Matrix Norm

Here, while performing all the calculations, we consider the L² norm.

We define the norm of a matrix A to be

||A|| = max_{x ∈ Rⁿ\{0}} ||Ax||/||x|| = max_{||x||=1} ||Ax||

Some Properties:

• ||A|| ≥ 0 ; ||A|| = 0 ⇐⇒ A = 0.

• ||λA|| = |λ| · ||A|| , λ ∈ R, A ∈ L(Rⁿ).

• ||A + B|| ≤ ||A|| + ||B||.

• ||Ax|| ≤ ||A|| · ||x||.

• ||AB|| ≤ ||A|| · ||B||.

• ||Ak || ≤ ||A||k , k ∈ N ∪ {0}

• ||T⁻¹|| ≥ 1/||T|| for invertible T ∈ L(Rⁿ).

Now, ||Ax|| ≥ 0 ∀x with ||x|| = 1; hence ||A|| ≥ 0.
Also, A = 0 ⟹ ||Ax|| = 0 ∀x ∈ Rⁿ ⟹ ||A|| = 0.
Conversely, say the (i, j)th entry of A is non-zero. Taking x = ej, we get ||Aej|| ≥ |aij| > 0, so ||A|| > 0. Hence if ||A|| = 0, then A = 0.
∴ ||A|| ≥ 0 and ||A|| = 0 ⟺ A = 0. ■

On the other hand,

||λA|| = max_{||x||≤1} ||λAx|| = max_{||x||≤1} |λ| · ||Ax|| = |λ| max_{||x||≤1} ||Ax|| = |λ| · ||A|| ; λ ∈ R ■

Again,

||A + B|| = max_{||x||≤1} ||(A + B)x|| ≤ max_{||x||≤1} (||Ax|| + ||Bx||) ≤ max_{||x||≤1} ||Ax|| + max_{||x||≤1} ||Bx|| = ||A|| + ||B|| ■

Again,

||A|| = max_{x ∈ Rⁿ\{0}} ||Ax||/||x|| ⟹ ||Ax||/||x|| ≤ ||A|| ⟹ ||Ax|| ≤ ||A|| · ||x|| ■

Moreover,

||AB|| = max_{||x||≤1} ||ABx|| ≤ ||A|| max_{||x||≤1} ||Bx|| = ||A|| · ||B|| ■

We also observe

||A^k|| ≤ ||A|| · ||A^{k−1}|| ≤ ⋯ ≤ ||A||^k ■

And lastly,

1 = ||T T⁻¹|| ≤ ||T|| · ||T⁻¹|| ⟹ ||T⁻¹|| ≥ 1/||T|| ■
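These properties are easy to sanity-check numerically. A small sketch (not part of the notes), using the fact that for the L² norm, ||A|| equals the largest singular value of A:

```python
import numpy as np

# Numerical sanity check of the operator-norm properties proved above.
# For the L2 norm, ||A|| is the largest singular value of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
x = rng.standard_normal(3)

norm = lambda M: np.linalg.norm(M, 2)          # spectral norm

assert norm(A + B) <= norm(A) + norm(B) + 1e-12        # triangle inequality
assert norm(A @ B) <= norm(A) * norm(B) + 1e-12        # ||AB|| <= ||A|| ||B||
assert np.linalg.norm(A @ x) <= norm(A) * np.linalg.norm(x) + 1e-12
Ainv = np.linalg.inv(A)                        # random A is invertible here
assert norm(Ainv) >= 1 / norm(A) - 1e-12       # ||A^{-1}|| >= 1 / ||A||
```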

Limit of a Linear Operator: A sequence of linear operators {Tk }k≥1 ⊆ L(Rn )


is said to converge to a limiting linear operator T ∈ L(Rn ) as k → ∞ if for every
ε > 0, ∃N ∈ N such that ∀k ≥ N , ||Tk − T || < ε.

Show that for each t ∈ R, the solution of ẋ = Ax is a continuous function of the initial condition.

Proof: Say the solution obtained is given by ϕ(t, x0) = e^{At} x0. For a fixed t, with the matrix norm as defined above, we define δ := ε/||e^{At}||.
Now for ||y0 − x0|| < δ, we have ||ϕ(t, y0) − ϕ(t, x0)|| ≤ ||e^{At}|| · ||y0 − x0|| < ε ■

1.5 Exponentials of Operators

Say we have a given T ∈ L(Rⁿ) and a given t0 > 0. Say ||T|| = a. Then

||T^k t^k / k!|| ≤ ||T^k|| |t|^k / k! ≤ ||T||^k t0^k / k! = a^k t0^k / k!

∀ |t| ≤ t0. Now

Σ_{k=0}^∞ (a t0)^k / k! = e^{a t0}

So, by the Weierstrass M-test, the sum Σ_{k=0}^∞ T^k t^k / k! converges uniformly and absolutely on |t| ≤ t0. So now we define the matrix exponential as

e^{At} = Σ_{k=0}^∞ A^k t^k / k! ; t ∈ R

Note that ||e^{At}|| ≤ e^{||A|| · |t|}
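Because the series converges absolutely and uniformly, truncating it gives a practical (if naive) way to compute e^{At}. A minimal sketch, not from the notes; production code would use a library routine instead.

```python
import numpy as np

# Naive partial sums of e^{At} = sum_k A^k t^k / k!, justified by the
# Weierstrass M-test argument above.
def expm_series(A, t, K=30):
    n = A.shape[0]
    term = np.eye(n)                 # k = 0 term: A^0 t^0 / 0!
    total = np.eye(n)
    for k in range(1, K + 1):
        term = term @ A * (t / k)    # builds A^k t^k / k! from the previous term
        total = total + term
    return total

# Nilpotent check: for N = [[0,1],[0,0]], the series truncates to I + Nt.
N = np.array([[0.0, 1.0], [0.0, 0.0]])
assert np.allclose(expm_series(N, 2.0), [[1.0, 2.0], [0.0, 1.0]])
```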

Theorem 1: If P, T ∈ L(Rⁿ) and S = P T P⁻¹, then e^S = P e^T P⁻¹.

Proof: According to the definition,

e^S = Σ_{k=0}^∞ S^k / k! = Σ_{k=0}^∞ (P T P⁻¹)^k / k! = P (Σ_{k=0}^∞ T^k / k!) P⁻¹ = P e^T P⁻¹ ■

Theorem 2: If S, T ∈ L(Rⁿ) and they commute, then e^{S+T} = e^S e^T.

Proof: If S and T commute, then by the Binomial Theorem

(S + T)^n = Σ_{k=0}^n (n choose k) S^k T^{n−k}

Therefore

e^{S+T} = Σ_{n=0}^∞ (S + T)^n / n! = Σ_{n=0}^∞ (1/n!) Σ_{k+j=n} (n!/(k! j!)) S^k T^j = (Σ_{k=0}^∞ S^k / k!) · (Σ_{j=0}^∞ T^j / j!) = e^S e^T ■

   
Theorem 3: If A = [a −b; b a], then e^A = e^a [cos b −sin b; sin b cos b]

Proof: We take z = a + ib = r e^{iθ}, under standard notations, so that A = [r cos θ −r sin θ; r sin θ r cos θ]. Now we can write

A² = [r cos θ −r sin θ; r sin θ r cos θ]² = [r² cos 2θ −r² sin 2θ; r² sin 2θ r² cos 2θ] = [Re(z²) −Im(z²); Im(z²) Re(z²)]

Thus by induction, A^k = [Re(z^k) −Im(z^k); Im(z^k) Re(z^k)]. Now we have

e^A = Σ_{k=0}^∞ A^k / k! = [Re(Σ z^k/k!) −Im(Σ z^k/k!); Im(Σ z^k/k!) Re(Σ z^k/k!)] = [Re(e^z) −Im(e^z); Im(e^z) Re(e^z)]

Now e^z = e^{a+ib} = e^a (cos b + i sin b), so we have Re(e^z) = e^a cos b and Im(e^z) = e^a sin b.

∴ e^A = e^a [cos b −sin b; sin b cos b]

Note: If a = 0, this matrix represents an anticlockwise rotation by b radians.
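Theorem 3 is easy to verify numerically. A quick sketch (not from the notes), assuming SciPy is available for the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

# Check of Theorem 3: for A = [[a, -b], [b, a]],
# e^A = e^a [[cos b, -sin b], [sin b, cos b]].
a, b = 0.5, 1.2
A = np.array([[a, -b], [b, a]])
expected = np.exp(a) * np.array([[np.cos(b), -np.sin(b)],
                                 [np.sin(b),  np.cos(b)]])
assert np.allclose(expm(A), expected)
```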

   
Theorem 4: If A = [a b; 0 a], then e^A = e^a [1 b; 0 1]

Proof: A = aI + [0 b; 0 0] = aI + B. Clearly aI and B commute. Moreover B^k = 0 ∀k ≥ 2 ⟹ e^B = I + B. So we can hereby conclude

e^A = e^{aI+B} = e^{aI} e^B = e^a [1 b; 0 1] ■

Theorem 5: If A = P D P⁻¹, where D is diagonal, then det(e^A) = e^{trace(D)}.

Proof: e^A = P e^D P⁻¹ ⟹ det(e^A) = det(P e^D P⁻¹) = det(e^D). As D is diagonal, we can write

det(e^D) = e^{trace(D)} = e^{trace(P⁻¹AP)} = e^{trace(A P P⁻¹)} = e^{trace(A)} ■

Theorem 6: If x is an eigenvector of T with eigenvalue λ, then x is also an eigenvector of e^T with eigenvalue e^λ.

Proof: T²x = T(λx) = λ(Tx) = λ²x. Thus by induction, T^k x = λ^k x. Now we have

e^T x = (Σ_{k=0}^∞ T^k / k!) x = (Σ_{k=0}^∞ λ^k / k!) x = e^λ x ■

Theorem 7: T ∈ L(Rⁿ) and E ⊂ Rⁿ is T-invariant; then show E is also e^T-invariant.

Proof: If v ∈ E, then T(v) ∈ E, and by induction T^k(v) ∈ E ∀k ∈ N ∪ {0} (where we define T⁰ = I). E being a subspace, every partial sum

Σ_{k=0}^N T^k(v) / k! ∈ E

Moreover, a finite-dimensional subspace is closed, so the limit of these partial sums lies in E as well. These altogether conclude

e^T(v) = Σ_{k=0}^∞ T^k(v) / k! ∈ E ⟹ e^T(E) ⊆ E ■

1.6 The Fundamental Theorem for Linear Systems

Here, our aim is to establish the fact that for x0 ∈ Rⁿ, the initial value problem

ẋ = Ax
x(0) = x0

has a unique solution ∀t ∈ R, which is given by

x(t) = e^{At} x0

Lemma: Let A be a square matrix. Then

d/dt e^{At} = A e^{At}

Proof:

d/dt e^{At} = lim_{h→0} (e^{A(t+h)} − e^{At})/h = e^{At} lim_{h→0} (e^{Ah} − I)/h = e^{At} lim_{h→0} (A + Σ_{k=1}^∞ A^{k+1} h^k / (k+1)!) = A e^{At}

Note: Here, we can place the limit inside the summation because the series converges uniformly for |h| ≤ 1.

If x(t) has the mentioned form, then we can easily observe

x′(t) = d/dt e^{At} x0 = A e^{At} x0 = Ax(t)

Now, to show that this is the only solution, we consider x(t) to be any solution of the provided initial value problem and fix y(t) = e^{−At} x(t). Differentiating both sides, we obtain

y′(t) = −A e^{−At} x(t) + e^{−At} x′(t) = −A e^{−At} x(t) + e^{−At} A x(t) = 0

so y(t) is constant. Setting t = 0, we obtain y(t) = y(0) = x0, hence x(t) = e^{At} x0, and this completes the proof of uniqueness. ■
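The Fundamental Theorem can be checked numerically: x(t) = e^{At} x0 should satisfy ẋ = Ax at every t. A sketch (not from the notes), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

# x(t) = e^{At} x0 solves x' = Ax, x(0) = x0; checked here with a
# central finite difference at one time point.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
x0 = np.array([1.0, 0.0])
x = lambda t: expm(A * t) @ x0

t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)   # numerical x'(t)
assert np.allclose(x(0.0), x0)
assert np.allclose(deriv, A @ x(t), atol=1e-5)
```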

1.7 Linear Systems in R2

In this section, we describe various phase portraits of the equation

ẋ = Ax , x ∈ R2

Say v is an eigenvector of A with eigenvalue λ. Now consider x = av, where a is a scalar. Hence

ẋ = A(av) = aλv

The derivative is a multiple of v and hence points along the line determined by v. If λ > 0, the derivative points in the direction of v when a is positive and in the opposite direction when a is negative.

We consider A = [1 1; 0 2] and we draw the vector field and a couple of solutions (see Figure 2). Notice that the picture looks like a source with arrows coming out from the origin. Hence we call this type of picture a source or sometimes an unstable node.

If A = [−1 −1; 0 −2], then both the eigenvalues are negative. We call this kind of picture a sink or sometimes a stable node.

If A = [1 1; 0 −2], then one eigenvalue is positive, and the other is negative. Then, we reverse the arrows on one line (corresponding to the negative eigenvalue) in Figure 2. This is known as a Saddle.

Suppose the eigenvalues are purely imaginary; that is, suppose the eigenvalues are ±ib. For example, let A = [0 1; −4 0]. Consider the eigenvalue 2i and its eigenvector v = (1, 2i)ᵀ. The real and imaginary parts of v e^{i2t} are

Re((1, 2i)ᵀ e^{i2t}) = (cos 2t, −2 sin 2t)ᵀ , Im((1, 2i)ᵀ e^{i2t}) = (sin 2t, 2 cos 2t)ᵀ

We can take any linear combination of them to get other solutions, which one we take
depends on the initial conditions. Now note that the real part is a parametric equation for
an ellipse. Same with the imaginary part and in fact any linear combination of the two.
This is what happens in general when the eigenvalues are purely imaginary. So when the
eigenvalues are purely imaginary, we get ellipses for the solutions. This type of picture is
sometimes called a center.

Now suppose the complex eigenvalues have a positive real part. For example, let A = [1 1; −4 1]. We take 1 + 2i and its eigenvector v = (1, 2i)ᵀ, and find that the real and imaginary parts of v e^{(1+2i)t} are

Re((1, 2i)ᵀ e^{(1+2i)t}) = e^t (cos 2t, −2 sin 2t)ᵀ , Im((1, 2i)ᵀ e^{(1+2i)t}) = e^t (sin 2t, 2 cos 2t)ᵀ

Note the e^t in front of the solutions. This means that the solutions grow in magnitude while spinning around the origin. Hence we get a spiral source.

Finally, suppose the complex eigenvalues have a negative real part. Here we get an e^{−t} in front of the solution. This means that the solutions shrink in magnitude while spinning around the origin. Hence we get a spiral sink.
Figure 2: (a) Source (b) Sink (c) Saddle (d) Center
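The case analysis above can be summarised by inspecting the eigenvalues of A. A small sketch (not from the notes) that ignores degenerate borderline cases such as repeated or zero eigenvalues:

```python
import numpy as np

# Classify the origin of x' = Ax (2x2 case) from the eigenvalues of A,
# mirroring the case analysis above; degenerate cases are ignored.
def classify(A):
    lam = np.linalg.eigvals(A)
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) > 1e-12):           # complex conjugate pair
        if np.all(re > 0):
            return "spiral source"
        if np.all(re < 0):
            return "spiral sink"
        return "center"                      # purely imaginary
    if np.all(re > 0):
        return "source"
    if np.all(re < 0):
        return "sink"
    return "saddle"                          # real eigenvalues of opposite sign

assert classify(np.array([[1.0, 1.0], [0.0, 2.0]])) == "source"
assert classify(np.array([[0.0, 1.0], [-4.0, 0.0]])) == "center"
assert classify(np.array([[1.0, 1.0], [-4.0, 1.0]])) == "spiral source"
assert classify(np.array([[1.0, 1.0], [0.0, -2.0]])) == "saddle"
```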

1.8 System with Complex eigenvalues

If A ∈ GL2n(R) has complex eigenvalues, they occur in conjugate pairs. The following theorem gives us an insight about this.

Theorem: If A ∈ GL2n(R) has 2n distinct complex eigenvalues, λj = aj + i bj and λ̄j = aj − i bj, ∀j = 1(1)n, with corresponding eigenvectors wj = uj + i vj and w̄j = uj − i vj; then {u1, v1, …, un, vn} forms a basis for R²ⁿ. Moreover, the matrix P = [v1 u1 … vn un] is invertible and

P⁻¹AP = diag [aj −bj; bj aj]

is a 2n × 2n matrix with 2 × 2 blocks along the diagonal.


Figure 3: (a) Spiral Source (b) Spiral Sink

Proof: If V is a real vector space, its complexification V^C is the complex vector space consisting of elements x + iy where x, y ∈ V. If T : V → W, its complexification T^C : V^C → W^C is defined by

T^C(x + iy) = Tx + iTy

Clearly T^C has the same eigenvalues as T. So we have w = u + iv and w̄ = u − iv in V^C with eigenvalues λ and λ̄. Clearly

u = (w + w̄)/2 , v = (w − w̄)/(2i)

Clearly, u and v are linearly independent, and (in the case n = 1) they form a basis for V. Now we want to compute the matrix of T with respect to this new basis. So we compute

T^C(w) = λw = (a + ib)(u + iv) = (au − bv) + i(av + bu)

Moreover, we also have

T^C(w) = Tu + iTv

So, on comparison, we have

Tv = av + bu = [v u] (a, b)ᵀ , Tu = au − bv = [v u] (−b, a)ᵀ

So, clearly in the basis {v, u}, the matrix of T is [a −b; b a], and hereby we can conclude the matrix P = [v1 u1 … vn un] is invertible and

P⁻¹AP = diag [aj −bj; bj aj]

is a 2n × 2n matrix with 2 × 2 blocks along the diagonal. ■
Stability Analysis of ODEs 15

 
If we use P = [u1 v1 … un vn], then we have

P⁻¹AP = diag [aj bj; −bj aj]

So, now we have the solution of the initial value problem

ẋ = Ax , x(0) = x0

as

x(t) = P diag e^{aj t} [cos bj t −sin bj t; sin bj t cos bj t] P⁻¹ x0
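The real block form above can be demonstrated numerically for n = 1. A sketch, not from the notes:

```python
import numpy as np

# If w = u + iv is an eigenvector of A for lambda = a + ib, then
# P = [v u] satisfies P^{-1} A P = [[a, -b], [b, a]] (n = 1 case above).
A = np.array([[1.0, 1.0], [-4.0, 1.0]])      # eigenvalues 1 +/- 2i
lam, W = np.linalg.eig(A)
w = W[:, 0]                                   # eigenvector for lam[0] = a + ib
a, b = lam[0].real, lam[0].imag
u, v = w.real, w.imag
P = np.column_stack([v, u])
block = np.linalg.solve(P, A @ P)             # P^{-1} A P
assert np.allclose(block, [[a, -b], [b, a]])
```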

1.9 Multiple eigenvalues

Till now, we have only dealt with systems having distinct eigenvalues. Now, we want to solve the system where A has multiple eigenvalues.

Definition: Let λ be an eigenvalue of an n × n matrix A with multiplicity m ≤ n. Then for k = 1(1)m, any non-zero solution v of

(A − λI)^k v = 0

is known as a generalised eigenvector of A.

Theorem: If T ∈ L(V) with real eigenvalues, then there is only one way of writing T as S + N, where S is diagonalizable, N is nilpotent, and SN = NS.

Proof: Let Ek be the generalised eigenspace of T corresponding to the eigenvalue λk, ∀k = 1(1)m. We define Tk = T|Ek. Now we have

V = ⊕_{k=1}^m Ek , T = ⊕_{k=1}^m Tk

Note that S and N commute with each other, hence both of them commute with T = S + N as well. So each Ek is invariant under S and N. Now we set Sk = λk I ∈ L(Ek) and Nk = Tk − Sk. If we can show S|Ek = Sk, it will then follow that N|Ek = Nk, and thus we can show the uniqueness. It is enough to show S|Ek − Sk = 0.
Now, it is given that S is diagonalizable, so is S|Ek; then S|Ek − λk I = S|Ek − Sk is also diagonalizable.
On the other hand, S|Ek − Sk = T|Ek − N|Ek − Tk + Nk = Nk − N|Ek. Here, N|Ek commutes with Tk and λk I, so it also commutes with Nk. Using the Binomial Theorem, we can hereby conclude that Nk − N|Ek is nilpotent.
So, S|Ek − Sk is both nilpotent and diagonalizable, i.e. S|Ek − Sk = 0 ■

So, now we have the solution of the initial value problem

ẋ = Ax , x(0) = x0

as

x(t) = P diag(e^{λj t}) P⁻¹ [I + Nt + ⋯ + N^k t^k / k!] x0

If λ is an eigenvalue with multiplicity n, then the solution of the initial value problem is

x(t) = e^{λt} [I + Nt + ⋯ + N^k t^k / k!] x0
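For a single 2 × 2 Jordan block the formula above is easy to verify. A sketch (not from the notes), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

# A = lambda*I + N with N nilpotent of order 2 (N @ N = 0), so the
# series above truncates: e^{At} = e^{lambda t} (I + N t).
lam, t = -1.5, 0.8
A = np.array([[lam, 1.0], [0.0, lam]])
N = A - lam * np.eye(2)                       # N = [[0, 1], [0, 0]]
closed_form = np.exp(lam * t) * (np.eye(2) + N * t)
assert np.allclose(expm(A * t), closed_form)
```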

In light of this theorem, we can write the theorem discussed in the previous section in a newly tailored way.

Theorem: If A ∈ GL2n(R) has 2n complex eigenvalues, λj = aj + i bj and λ̄j = aj − i bj, ∀j = 1(1)n, with corresponding eigenvectors wj = uj + i vj and w̄j = uj − i vj; then {u1, v1, …, un, vn} forms a basis for R²ⁿ. Moreover, the matrix P = [v1 u1 … vn un] is invertible, A = S + N, and

P⁻¹SP = diag [aj −bj; bj aj]

is a 2n × 2n matrix with 2 × 2 blocks along the diagonal, and the matrix N is nilpotent of order k ≤ 2n.

So, now we have the solution of the initial value problem

ẋ = Ax , x(0) = x0

as

x(t) = P diag e^{aj t} [cos bj t −sin bj t; sin bj t cos bj t] P⁻¹ [I + Nt + ⋯ + N^k t^k / k!] x0

1.10 Stability Theory

Say a matrix A has generalised eigenvalues λj = aj + i bj and generalised eigenvectors wj = uj + i vj. Then the stable, unstable and center subspaces are given by

E^S = Span{uj, vj | aj < 0}
E^U = Span{uj, vj | aj > 0}
E^C = Span{uj, vj | aj = 0}

Solutions starting in E^S approach the origin as t → ∞, and solutions starting in E^U approach the origin as t → −∞.
The set of mappings e^{At} : Rⁿ → Rⁿ may be regarded as the movement of points x0 ∈ Rⁿ along the trajectories.

Hyperbolic flow: If all eigenvalues of A have non-zero real parts, then the flow e^{At} : Rⁿ → Rⁿ is called a hyperbolic flow, and the corresponding linear system is known as a hyperbolic linear system.

A subspace E ⊂ Rⁿ is said to be invariant with respect to the flow if e^{At}E ⊂ E, ∀t ∈ R.

Lemma: Let E be a generalised eigenspace of the matrix A with respect to its generalised eigenvalue λ. Show that AE ⊂ E.

Proof: Let {v1, …, vn} be a basis of generalised eigenvectors for E. Then for v ∈ E,

v = Σ_{k=1}^n ck vk ⟹ Av = Σ_{k=1}^n ck Avk

Now, each vk being a generalised eigenvector, (A − λI)^j vk = 0 for some j, so

Vk := (A − λI) vk ∈ Ker(A − λI)^{j−1} ⊂ E

Thus Avk = λvk + Vk ∈ E, so does their linear combination. Hence AE ⊂ E ■

Clearly, according to the definition, Rⁿ = E^S ⊕ E^U ⊕ E^C.

For x0 ∈ E^S,

x0 = Σ_{k=1}^{ns} ck Vk , where {Vk}_{k=1}^{ns} ⊂ B is a basis for the stable subspace E^S

Now,

e^{At} x0 = Σ_{k=1}^{ns} ck e^{At} Vk

As A^k Vj ∈ E^S ∀k (by the lemma above) and E^S is a closed subspace, e^{At} x0 ∈ E^S, ∀t ∈ R.

So, E^S is invariant with respect to the flow, and so are E^U and E^C. ■



Sink (or Source): If all eigenvalues have negative (or positive) real part, then the origin is known as a sink (or source) of the linear system.

Theorem: The following statements are equivalent:

(a) ∀x0 ∈ Rⁿ, lim_{t→∞} e^{At} x0 = 0, and for x0 ≠ 0, lim_{t→−∞} |e^{At} x0| = ∞.

(b) All eigenvalues of A have negative real part.

(c) There are positive constants a, c, m, M ∈ R such that ∀x0 ∈ Rⁿ

|e^{At} x0| ≤ M e^{−ct} |x0| , t ≥ 0
|e^{At} x0| ≥ m e^{−at} |x0| , t ≤ 0

Proof: Here we use the fact that any solution of the linear system is a linear combination of functions of the form t^k e^{at} cos bt or t^k e^{at} sin bt.
Say one of the eigenvalues has positive real part. For an initial condition x0 ≠ 0 along the corresponding eigenvector, lim_{t→∞} |e^{At} x0| = ∞ and lim_{t→−∞} e^{At} x0 = 0, contradicting (a). If one of the eigenvalues has zero real part, then the corresponding solutions are of the form t^k cos bt or t^k sin bt, and again clearly lim_{t→∞} e^{At} x0 ≠ 0 for some x0 ∈ Rⁿ. So we can say (a) ⟹ (b). ■
sin and cos being periodic functions, for eigenvalues with negative real part we can bound the solutions as described in (c). So (b) ⟹ (c). ■
Using the squeeze theorem on the first inequality in (c) as t → ∞, we get ∀x0 ∈ Rⁿ, lim_{t→∞} e^{At} x0 = 0; and the second inequality in (c) gives, for x0 ≠ 0, lim_{t→−∞} |e^{At} x0| = ∞. Hence (c) ⟹ (a). ■

In a similar fashion, we can devise another theorem, with a similar proof.

Theorem: The following statements are equivalent:

(a) ∀x0 ∈ Rⁿ, lim_{t→−∞} e^{At} x0 = 0, and for x0 ≠ 0, lim_{t→∞} |e^{At} x0| = ∞.

(b) All eigenvalues of A have positive real part.

(c) There are positive constants a, c, m, M ∈ R such that ∀x0 ∈ Rⁿ

|e^{At} x0| ≤ M e^{ct} |x0| , t ≤ 0
|e^{At} x0| ≥ m e^{at} |x0| , t ≥ 0
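The implication (b) ⟹ (a) of the first theorem can be observed numerically. A sketch (not from the notes), assuming SciPy is available:

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

# All eigenvalues of A have negative real part, so |e^{At} x0| decays
# to 0 as t grows, as the theorem above asserts.
A = np.array([[0.0, 1.0], [-2.0, -2.0]])      # eigenvalues -1 +/- i
assert np.all(np.linalg.eigvals(A).real < 0)

x0 = np.array([3.0, -1.0])
norms = [np.linalg.norm(expm(A * t) @ x0) for t in (0.0, 5.0, 10.0)]
assert norms[0] > norms[1] > norms[2]
assert norms[2] < 1e-3
```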



1.11 Nonhomogeneous Linear Systems

In this section, we are concerned about differential equations of the type

ẋ = Ax + b(t)

where A is an n × n matrix and b(t) is a vector-valued function.

Fundamental Matrix Solution: A fundamental matrix solution of ẋ = Ax is any


nonsingular n × n matrix function Φ(t) that satisfies Φ′ (t) = AΦ(t), ∀t ∈ R.

Once we find a Fundamental Matrix Solution for the homogeneous system , we can find
the solution to the corresponding nonhomogeneous system.

Theorem: If Φ(t) is a fundamental matrix solution, then the solution of the nonhomogeneous system with the initial condition x(0) = x0 is unique, and is given by

x(t) = Φ(t)Φ⁻¹(0) x0 + ∫₀ᵗ Φ(t)Φ⁻¹(τ) b(τ) dτ

Proof: We differentiate x(t) as defined above.

ẋ = Φ′(t)Φ⁻¹(0) x0 + Φ(t)Φ⁻¹(t) b(t) + ∫₀ᵗ Φ′(t)Φ⁻¹(τ) b(τ) dτ
⟹ ẋ = A [Φ(t)Φ⁻¹(0) x0 + ∫₀ᵗ Φ(t)Φ⁻¹(τ) b(τ) dτ] + b(t)
∴ ẋ = Ax(t) + b(t) ■

With Φ(t) = e^{At}, the solution of the nonhomogeneous linear system looks like

x(t) = e^{At} x0 + e^{At} ∫₀ᵗ e^{−Aτ} b(τ) dτ
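This variation-of-parameters formula can be checked against a problem with a known closed-form solution. A sketch (not from the notes), assuming SciPy is available; the example is the resonantly forced oscillator x1″ + x1 = cos t, whose solution with x1(0) = 1, x1′(0) = 0 is x1(t) = cos t + (t/2) sin t.

```python
import numpy as np
from scipy.linalg import expm   # assumes SciPy is available

# x(t) = e^{At} x0 + e^{At} int_0^t e^{-A tau} b(tau) d tau,
# with the integral approximated by the trapezoidal rule.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])       # x1' = x2, x2' = -x1 + cos t
b = lambda tau: np.array([0.0, np.cos(tau)])
x0 = np.array([1.0, 0.0])

def x(t, m=2000):
    taus = np.linspace(0.0, t, m + 1)
    vals = np.array([expm(-A * s) @ b(s) for s in taus])
    dt = t / m
    integral = dt * (vals[0] / 2 + vals[1:-1].sum(axis=0) + vals[-1] / 2)
    return expm(A * t) @ (x0 + integral)

t = 1.0
exact = np.array([np.cos(t) + 0.5 * t * np.sin(t),
                  -0.5 * np.sin(t) + 0.5 * t * np.cos(t)])
assert np.allclose(x(t), exact, atol=1e-5)
```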

2 Nonlinear Systems: Local Theory

2.1 Some Preliminary Concepts and Definitions

Differentiability: The function f : Rⁿ → Rⁿ is said to be differentiable at x0 ∈ Rⁿ if there exists a linear transformation Df(x0) ∈ L(Rⁿ) that satisfies

lim_{||h||→0} ||f(x0 + h) − f(x0) − Df(x0)h|| / ||h|| = 0

The linear transformation Df(x0) is the derivative of f at x0. Now we look into a theorem that enables us to compute the derivative in coordinates.

Theorem: Consider a function f : Rⁿ → R^m differentiable at a ∈ Rⁿ. Then all the partial derivatives ∂fi/∂xj exist at a. In particular, for f differentiable at a, we have

(Df)(a) = Jf(a) = [∂fi/∂xj (a)]_{m×n}

Proof: Without loss of generality, we take m = 1, and let a = (a1, a2, …, an). Fix an arbitrary index i ∈ {1, 2, …, n}. We define ηi : [ai − ϵ, ai + ϵ] → Rⁿ by

ηi(t) = (a1, …, ai−1, t, ai+1, …, an) = a + (t − ai)ei

Evidently, ηi is differentiable and (Dηi) = (0, …, 1, …, 0)ᵀ = eᵢᵀ over [ai − ϵ, ai + ϵ]. Now, by the definition of partial derivatives, D(f ∘ ηi)(ai) = f_{xi}(a).

Again, by the chain rule, as f is differentiable at a, D(f ∘ ηi)(ai) = f_{xi}(a) exists, and

D(f ∘ ηi)(ai) = Df(ηi(ai)) · Dηi(ai)
⟹ f_{xi}(a) = Df(a) · eᵢᵀ = [Df(a)]i

As the index i was arbitrary to begin with, this completes the proof. ■

Continuity: Suppose V1 and V2 are two normed linear spaces with respective norms ||·||₁ and ||·||₂. Then f : V1 → V2 is continuous at x0 ∈ V1 if ∀ε > 0, ∃δ > 0 such that x ∈ V1 and ||x − x0||₁ < δ implies ||f(x) − f(x0)||₂ < ε. f is said to be continuous on E ⊆ V1 if it is continuous at all points of E, and we write f ∈ C(E).

C¹(E) Functions: If the function f : E → Rⁿ is differentiable on E and Df is continuous on E, then we say f ∈ C¹(E).
The following theorem, almost analogous to the previous one, helps us to decide whether a function belongs to C¹(E).

Theorem: Suppose E is an open subset of Rⁿ and f : E → Rⁿ. Then f ∈ C¹(E) iff ∂fi/∂xj exists ∀i, j = 1(1)n, and they are continuous on E.

Remarks: Higher order derivatives can be defined in a similar fashion, and a similar notion holds for the condition f ∈ C^k.
A function f : E → Rⁿ is said to be analytic if each of its components is analytic, i.e. for j = 1(1)n and x0 ∈ E, fj(x) has a Taylor series which converges to fj(x) in some neighborhood of x0 in E.

2.2 The Fundamental Existence Uniqueness Theorem

In this section, our primary focus will revolve around Picard's classical method of successive approximations. We will establish the existence, uniqueness, continuity and differentiability of the solution of the initial value problem for given initial conditions and parameters, under the hypothesis that f ∈ C¹(E).

Definition: Suppose f ∈ C(E), where E is an open subset of Rⁿ. Then x(t) is a solution of the differential equation on an interval I if x(t) is differentiable on I and if ∀t ∈ I, x(t) ∈ E and x′(t) = f(x(t)).

Locally Lipschitz: The function f is said to be locally Lipschitz on E if for each x0 ∈ E, there is an ε-neighborhood of x0, Nε(x0) ⊂ E, and a constant K > 0 such that ∀x, y ∈ Nε(x0),

||f(x) − f(y)|| ≤ K||x − y||
Lemma: If f : E → Rⁿ, where E is an open subset of Rⁿ and f ∈ C¹(E), then f is locally Lipschitz on E.

Proof: Since E is open, for a given x0 ∈ E, ∃ε > 0 such that Nε(x0) ⊂ E. Now we define

K = max_{||x−x0|| ≤ ε/2} ||Df(x)||

We call the ε/2 neighborhood around x0, N0. Now, for x, y ∈ N0, we set u = y − x. So for 0 ≤ s ≤ 1, we have x + su ∈ N0, since N0 is convex. We define F : [0, 1] → Rⁿ by

F(s) = f(x + su) ⟹ F′(s) = Df(x + su)u

Therefore, now we have

|f(y) − f(x)| = |F(1) − F(0)| = |∫₀¹ F′(s) ds| ≤ ∫₀¹ |Df(x + su)u| ds ≤ ∫₀¹ ||Df(x + su)|| |u| ds ≤ K|u| = K|y − x| ■

Complete Space: Let V be a normed linear space. Then a sequence {uk} ⊂ V is called a Cauchy sequence if ∀ε > 0 there is an N such that k, m ≥ N implies

||uk − um|| < ε

The space V is called complete if every Cauchy sequence in V converges to some element in V.
The space C(I) is a complete normed linear space, as a sequence of continuous functions is uniformly convergent if and only if it is a Cauchy sequence in the sup norm.

Theorem: Let E be an open subset of Rn containing x0 and assume f ∈ C 1 (E). Then


∃a > 0 such that the initial value problem
ẋ = f (x)
x(0) = x0
has a unique solution in the interval [−a, a].

Proof: Since f ∈ C¹(E), it follows from the lemma proven above that ∃ε > 0 such that Nε(x0) ⊂ E and a constant K > 0 such that ∀x, y ∈ Nε(x0),

|f(x) − f(y)| ≤ K|x − y|
We set b = ε/2. Then the continuous function f(x) is bounded on the compact set

N0 = {x ∈ Rⁿ : |x − x0| ≤ b}

Let

M = max_{x ∈ N0} |f(x)|

Now, we use Picard's successive approximations. We set u0(t) = x0 and assume inductively that uk(t) is defined and continuous on [−a, a] and satisfies

max_{[−a,a]} |uk(t) − x0| ≤ b

It certainly follows that f(uk(t)) is defined and continuous on [−a, a], and therefore that

uk+1(t) = x0 + ∫₀ᵗ f(uk(s)) ds

is defined and continuous on [−a, a] and satisfies

|uk+1(t) − x0| ≤ ∫₀ᵗ |f(uk(s))| ds ≤ M a , ∀t ∈ [−a, a]

Thus, by choosing 0 < a < b/M, it follows by induction that uk(t) is defined and continuous and satisfies the above bound for all k.
Now, since ∀t ∈ [−a, a] and ∀k ∈ N ∪ {0} we have uk(t) ∈ N0, it follows from the Lipschitz condition satisfied by f that ∀t ∈ [−a, a]

|u2(t) − u1(t)| ≤ ∫₀ᵗ |f(u1(s)) − f(u0(s))| ds ≤ K ∫₀ᵗ |u1(s) − u0(s)| ds ≤ Ka max_{[−a,a]} |u1(t) − x0| ≤ Kab

And then assuming that

max_{[−a,a]} |uj(t) − uj−1(t)| ≤ (Ka)^{j−1} b

for some integer j ≥ 2, it follows that ∀t ∈ [−a, a]

|uj+1(t) − uj(t)| ≤ ∫₀ᵗ |f(uj(s)) − f(uj−1(s))| ds ≤ K ∫₀ᵗ |uj(s) − uj−1(s)| ds ≤ Ka max_{[−a,a]} |uj(t) − uj−1(t)| ≤ (Ka)^j b.
Stability Analysis of ODEs 24

Thus, it follows by induction that our assumption holds for j = 2, 3, . . . Setting α = Ka


and choosing 0 < a < 1/K, we see that for m > k ≥ N and t ∈ [−a, a]

X
m−1
|um (t) − uk (t)| ≤ |uj+1 (t) − uj (t)|
j=k
X∞
≤ |uj+1 (t) − uj (t)|
j=N
X∞
αN
≤ αj b = b
j=N
1−α

This last quantity approaches zero as N → ∞. Therefore, ∀ ε > 0 there exists an N such
that m, k ≥ N implies that

kum − uk k = max |um (t) − uk (t)| < ε


[−a,a]

i.e., {uk } is a Cauchy sequence of continuous functions in C([−a, a]). It follows from the
above theorem that uk (t) converges to a continuous function u(t) uniformly ∀ t ∈ [−a, a]
as k → ∞. And then taking the limit of both sides of equation defining the successive
approximations, we see that the continuous function

u(t) = lim_{k→∞} uk (t)

satisfies the integral equation

u(t) = x0 + ∫_0^t f (u(s)) ds

∀ t ∈ [−a, a]. We have used the fact that the integral and the limit can be interchanged
since the convergence is uniform. Then, since the limit function u is continuous, the
fundamental theorem of calculus shows that the right-hand side of the integral equation
is differentiable and

u′ (t) = f (u(t))

∀ t ∈ [−a, a]. Furthermore, u(0) = x0 and from the bound max_{[−a,a]} |uk (t) − x0 | ≤ b it
follows that u(t) ∈ Nε (x0 ) ⊂ E ∀ t ∈ [−a, a]. Thus u(t) is a solution of the initial value
problem on [−a, a]. It remains to
show that it is the only solution.

Let u(t) and v(t) be two solutions of the initial value problem on [−a, a]. Then the
continuous function |u(t) − v(t)| achieves its maximum at some point t1 ∈ [−a, a]. It
follows that

‖u − v‖ = max_{[−a,a]} |u(t) − v(t)|
= |∫_0^{t1} (f (u(s)) − f (v(s))) ds|
≤ ∫_0^{|t1|} |f (u(s)) − f (v(s))| ds
≤ K ∫_0^{|t1|} |u(s) − v(s)| ds
≤ Ka max_{[−a,a]} |u(t) − v(t)|
≤ Ka ‖u − v‖

But Ka < 1 and this last inequality can only be satisfied if ‖u − v‖ = 0. Thus, u(t) = v(t)
on [−a, a]. We have shown that the successive approximations converge uniformly to a
unique solution of the initial value problem on the interval [−a, a], where a is any number
satisfying 0 < a < min{b/M, 1/K}.
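The successive approximations of this proof can be carried out numerically. Below is a minimal sketch (an illustration, not part of the notes) for the scalar initial value problem ẋ = x, x(0) = 1, whose exact solution is e^t; since each iterate is a polynomial, the integral in Picard's scheme can be computed exactly on coefficients, and uk (t) turns out to be the k-th Taylor partial sum of e^t.

```python
import math

# Picard's successive approximations for x' = x, x(0) = 1 (exact: e^t).
# Each iterate u_k is stored as polynomial coefficients [c_0, c_1, ...]
# with u_k(t) = sum c_j * t**j, so integration is exact.

def picard_iterates(x0, k):
    """Return the coefficients of the k-th Picard iterate for f(x) = x."""
    coeffs = [x0]                       # u_0(t) = x0
    for _ in range(k):
        # u_{k+1}(t) = x0 + integral_0^t u_k(s) ds
        integral = [0.0] + [c / (j + 1) for j, c in enumerate(coeffs)]
        integral[0] = x0
        coeffs = integral
    return coeffs

def evaluate(coeffs, t):
    return sum(c * t**j for j, c in enumerate(coeffs))

u10 = picard_iterates(1.0, 10)
print(evaluate(u10, 1.0), math.e)   # the iterates converge to e^t at t = 1
```

The geometric-series estimate of the proof is visible here: each extra iterate shrinks the error roughly by a factor α = Ka.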

Remark: Exactly the same method of proof shows that the initial value problem

ẋ = f (x)
x (t0 ) = x0

has a unique solution on some interval [t0 − a, t0 + a].

2.3 Dependence on Initial Conditions and Parameters

In this section we investigate the dependence of the solution of the initial value problem

ẋ = f (x)
x(0) = y

on the initial condition y. If the differential equation depends on a parameter µ ∈ Rm ,


i.e., if the function f (x) in the initial value problem is replaced by f (x, µ), then the solution
u(t, y, µ) will also depend on the parameter µ. Roughly speaking, the dependence of the
solution u(t, y, µ) on the initial condition y and the parameter µ is as continuous as the

function f. In order to establish this type of continuous dependence of the solution on


initial conditions and parameters, we first establish a result due to T.H. Gronwall.

Gronwall’s Lemma: Suppose that g(t) is a continuous real valued function that
satisfies g(t) ≥ 0 and
g(t) ≤ C + K ∫_0^t g(s) ds

∀ t ∈ [0, a] where C and K are positive constants. It then follows that ∀ t ∈ [0, a],

g(t) ≤ C e^{Kt}

Proof: Let G(t) = C + K ∫_0^t g(s) ds for t ∈ [0, a]. Then G(t) ≥ g(t) and G(t) > 0 ∀
t ∈ [0, a]. It follows from the fundamental theorem of calculus that

G′ (t) = Kg(t)

and therefore that

G′ (t)/G(t) = Kg(t)/G(t) ≤ KG(t)/G(t) = K

∀ t ∈ [0, a]. And this is equivalent to saying that

(d/dt) log G(t) ≤ K

or

log G(t) ≤ Kt + log G(0)

or

G(t) ≤ G(0) e^{Kt} = C e^{Kt}

∀ t ∈ [0, a], which implies that g(t) ≤ C e^{Kt} ∀ t ∈ [0, a].
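A quick numerical sanity check of the lemma (with illustrative constants of our choosing, not from the notes): for g(t) = C e^{Kt} the hypothesis holds with equality, which shows that the conclusion g(t) ≤ C e^{Kt} is sharp.

```python
import math

# Gronwall's Lemma, equality case: for g(t) = C e^{Kt} we have
#     g(t) = C + K * integral_0^t g(s) ds,
# so the bound g(t) <= C e^{Kt} cannot be improved.

C, K = 2.0, 0.5     # hypothetical constants for this sketch

def g(t):
    return C * math.exp(K * t)

def hypothesis_rhs(t, steps=200_000):
    """C + K * integral_0^t g(s) ds, computed by the midpoint rule."""
    h = t / steps
    integral = sum(g((i + 0.5) * h) for i in range(steps)) * h
    return C + K * integral

for t in (0.25, 0.5, 1.0):
    assert abs(g(t) - hypothesis_rhs(t)) < 1e-6   # equality case
    assert g(t) <= C * math.exp(K * t) + 1e-12    # Gronwall's conclusion
print("Gronwall bound checked at sample points")
```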



Theorem: Let E be an open subset of Rn containing x0 and assume that f ∈ C 1 (E).


Then there exists an a > 0 and a δ > 0 such that ∀ y ∈ Nδ (x0 ) the initial value problem

ẋ = f (x)
x(0) = y

has a unique solution u(t, y) with u ∈ C 1 (G) where G = [−a, a] × Nδ (x0 ) ⊂ Rn+1 ;
furthermore, for each y ∈ Nδ (x0 ) , u(t, y) is a twice continuously differentiable function of
t for t ∈ [−a, a].
Proof: Since f ∈ C 1 (E), it follows from the lemma in Section 2.2 that there is an ε-
neighborhood Nε (x0 ) ⊂ E and a constant K > 0 such that ∀ x and y ∈ Nε (x0 ),

|f (x) − f (y)| ≤ K|x − y|

As in the proof of the fundamental existence theorem, let N0 = {x ∈ Rn : |x − x0 | ≤ ε/2},


let M0 be the maximum of |f (x)| on N0 and let M1 be the maximum of ‖Df (x)‖ on N0 .
Let δ = ε/4, and for y ∈ Nδ (x0 ) define the successive approximations uk (t, y) as

u0 (t, y) = y
uk+1 (t, y) = y + ∫_0^t f (uk (s, y)) ds

Assume that uk (t, y) is defined and continuous ∀ (t, y) ∈ G = [−a, a] × Nδ (x0 ) and that
∀ y ∈ Nδ (x0 )

‖uk (t, y) − x0 ‖ < ε/2

where k · k denotes the maximum over all t ∈ [−a, a]. This is clearly satisfied for k =
0. And assuming this is true for k, it follows that uk+1 (t, y), defined by the above
successive approximations, is continuous on G. This follows since a continuous function
of a continuous function is continuous and since the above integral of the continuous
function f (uk (s, y)) is continuous in t by the fundamental theorem of calculus and also
in y. We also have

‖uk+1 (t, y) − y‖ ≤ ∫_0^t |f (uk (s, y))| ds ≤ M0 a

for t ∈ [−a, a] and y ∈ Nδ (x0 ) ⊂ N0 . Thus, for t ∈ [−a, a] and y ∈ Nδ (x0 ) with δ = ε/4,
we have

‖uk+1 (t, y) − x0 ‖ ≤ ‖uk+1 (t, y) − y‖ + ‖y − x0 ‖ ≤ M0 a + ε/4 < ε/2

provided M0 a < ε/4, i.e., provided a < ε/ (4M0 ). Thus, the above induction hypothesis
holds ∀ k = 1, 2, 3, . . . and (t, y) ∈ G provided a < ε/ (4M0 ).

We next show that the successive approximations uk (t, y) converge uniformly to a con-
tinuous function u(t, y) ∀ (t, y) ∈ G as k → ∞. As in the proof of the fundamental
existence theorem,

‖u2 (t, y) − u1 (t, y)‖ ≤ Ka ‖u1 (t, y) − y‖
≤ Ka ‖u1 (t, y) − x0 ‖ + Ka ‖y − x0 ‖
≤ Ka(ε/2 + ε/4) ≤ Kaε

for (t, y) ∈ G. And then it follows exactly as in the proof of the fundamental existence
theorem in Section 2.2 that

‖uk+1 (t, y) − uk (t, y)‖ ≤ (Ka)^k ε

for (t, y) ∈ G and consequently that the successive approximations converge uniformly to
a continuous function u(t, y) for (t, y) ∈ G as k → ∞ provided a < 1/K. Furthermore,
the function u(t, y) satisfies

u(t, y) = y + ∫_0^t f (u(s, y)) ds

for (t, y) ∈ G and also u(0, y) = y. And it follows from the inequality that u(t, y) ∈
Nε/2 (x0 ) ∀ (t, y) ∈ G. Thus, by the fundamental theorem of calculus and the chain rule,
it follows that

u̇(t, y) = f (u(t, y))

and that

ü(t, y) = Df (u(t, y))u̇(t, y)



∀ (t, y) ∈ G; i.e., u(t, y) is a twice continuously differentiable function of t which satisfies


the initial value problem ∀ (t, y) ∈ G. The uniqueness of the solution u(t, y) follows from
the fundamental theorem in the previous Section.

We now show that u(t, y) is a continuously differentiable function of y ∀ (t, y) ∈ [−a, a] ×


Nδ/2 (x0 ). In order to do this, fix y0 ∈ Nδ/2 (x0 ) and choose h ∈ Rn such that |h| < δ/2.
Then y0 + h ∈ Nδ (x0 ). Let u (t, y0 ) and u (t, y0 + h) be the solutions of the initial value
problem with y = y0 and with y = y0 + h respectively. It then follows that

|u (t, y0 + h) − u (t, y0 )| ≤ |h| + ∫_0^t |f (u (s, y0 + h)) − f (u (s, y0 ))| ds
≤ |h| + K ∫_0^t |u (s, y0 + h) − u (s, y0 )| ds

∀ t ∈ [−a, a]. Thus, it follows from Gronwall’s Lemma that

|u (t, y0 + h) − u (t, y0 )| ≤ |h| e^{K|t|}

∀ t ∈ [−a, a]. We next define Φ (t, y0 ) to be the fundamental matrix solution of the initial
value problem

Φ̇ = A (t, y0 ) Φ
Φ (0, y0 ) = I

with A (t, y0 ) = Df (u (t, y0 )) and I the n × n identity matrix. The existence and con-
tinuity of Φ (t, y0 ) on some interval [−a, a] follow from the method of successive approx-
imations. It then follows from the initial value problems for u (t, y0 ), u (t, y0 + h) and
Φ (t, y0 ) and Taylor’s Theorem,

f (u) − f (u0 ) = Df (u0 ) (u − u0 ) + R (u, u0 )

where |R (u, u0 )| / |u − u0 | → 0 as |u − u0 | → 0, that



|u (t, y0 ) − u (t, y0 + h) + Φ (t, y0 ) h| ≤ ∫_0^t |f (u (s, y0 )) − f (u (s, y0 + h)) + Df (u (s, y0 )) Φ (s, y0 ) h| ds
≤ ∫_0^t ‖Df (u (s, y0 ))‖ |u (s, y0 ) − u (s, y0 + h) + Φ (s, y0 ) h| ds
+ ∫_0^t |R (u (s, y0 + h) , u (s, y0 ))| ds

Since |R (u, u0 )| / |u − u0 | → 0 as |u − u0 | → 0 and since u(s, y) is continuous on G, it
follows that given any ε0 > 0, there exists a δ0 > 0 such that if
|h| < δ0 , then |R (u (s, y0 ) , u (s, y0 + h))| < ε0 |u (s, y0 ) − u (s, y0 + h)| ∀ s ∈ [−a, a].
Thus, if we let

g(t) = |u (t, y0 ) − u (t, y0 + h) + Φ (t, y0 ) h|

It then follows from the bound |u (t, y0 + h) − u (t, y0 )| ≤ |h| e^{K|t|} obtained from
Gronwall’s Lemma and from the inequality deduced above that ∀ t ∈ [−a, a], y0 ∈ Nδ/2 (x0 )
and |h| < min (δ0 , δ/2) we have

g(t) ≤ M1 ∫_0^t g(s) ds + ε0 |h| a e^{Ka}

Hence, it follows from Gronwall’s Lemma that for any given ε0 > 0

g(t) ≤ ε0 |h| a e^{Ka} e^{M1 a}


∀ t ∈ [−a, a] provided |h| < min (δ0 , δ/2). Thus,

lim_{|h|→0} |u (t, y0 ) − u (t, y0 + h) + Φ (t, y0 ) h| / |h| = 0

uniformly ∀ t ∈ [−a, a]. Therefore,

∂u/∂y (t, y0 ) = Φ (t, y0 )

∀ t ∈ [−a, a] where Φ (t, y0 ) is the fundamental matrix solution of the initial value problem
for Φ stated above, which is continuous in t and in y0 ∀ t ∈ [−a, a] and y0 ∈ Nδ/2 (x0 ). This completes
the proof of the theorem.

Some Remarks

1. A similar proof shows that if f ∈ C r (E) then the solution u(t, y) of the initial
value problem is in C r (G) where G is defined as in the above theorem. And
if f (x) is a (real) analytic function for x ∈ E then u(t, y) is analytic in the
interior of G.

2. If x0 is an equilibrium point of the initial value problem, i.e., f (x0 ) = 0, so that


u (t, x0 ) = x0 ∀ t ∈ R, then

Φ (t, x0 ) = ∂u/∂x0 (t, x0 )
satisfies

Φ̇ = Df (x0 ) Φ
Φ (0, x0 ) = I

And according to the Fundamental Theorem for Linear Systems

Φ (t, x0 ) = e^{Df (x0 ) t}

3. It follows from the continuity of the solution u(t, y) of the initial value problem
that for each t ∈ [−a, a]

lim_{y→x0} u(t, y) = u (t, x0 )

It follows from the inequality |u (t, y0 + h) − u (t, y0 )| ≤ |h| e^{K|t|} that this limit is uniform ∀ t ∈ [−a, a].
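Remark 2 can be checked numerically for a concrete Jacobian. In the sketch below the matrix Df (x0 ) = [[0, 1], [−1, 0]] is a hypothetical choice (a linear center, not an example from the notes); we compute e^{At} by its power series and verify by a central difference that Φ(t) = e^{At} indeed satisfies the variational equation Φ̇ = AΦ with Φ(0) = I.

```python
# Verify numerically that Phi(t) = e^{At} solves Phi' = A Phi, Phi(0) = I,
# for the illustrative Jacobian A = [[0, 1], [-1, 0]].

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm(A, t, terms=30):
    """e^{At} by its power series sum_{k>=0} (At)^k / k! (2x2 only)."""
    At = [[a * t for a in row] for row in A]
    result = [[1.0, 0.0], [0.0, 1.0]]
    term = [[1.0, 0.0], [0.0, 1.0]]
    for k in range(1, terms):
        term = mat_mul(term, [[a / k for a in row] for row in At])
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
t, h = 0.7, 1e-6
Phi = expm(A, t)
# Central difference approximation of Phi'(t):
dPhi = [[(p - m) / (2 * h) for p, m in zip(r1, r2)]
        for r1, r2 in zip(expm(A, t + h), expm(A, t - h))]
APhi = mat_mul(A, Phi)
err = max(abs(dPhi[i][j] - APhi[i][j]) for i in range(2) for j in range(2))
print(err)  # small: Phi' = A Phi holds up to finite-difference error
```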

Now we arrive at two interesting results on the basis of this theorem.

Corollary: Under the hypothesis of the above theorem,

∂u
Φ(t, y) = (t, y)
∂y

for t ∈ [−a, a] and y ∈ Nδ (x0 ) if and only if Φ(t, y) is the fundamental matrix solution of

Φ̇ = Df [u(t, y)]Φ
Φ(0, y) = I

for t ∈ [−a, a] and y ∈ Nδ (x0 ).



Theorem: Let E be an open subset of Rn+m containing the point (x0 , µ0 ) where x0 ∈ Rn
and µ0 ∈ Rm and assume that f ∈ C 1 (E). It then follows that there exists an a > 0 and
a δ > 0 such that ∀ y ∈ Nδ (x0 ) and µ ∈ Nδ (µ0 ), the initial value problem

ẋ = f (x, µ)
x(0) = y

has a unique solution u(t, y, µ) with u ∈ C 1 (G) where G = [−a, a]× Nδ (x0 ) × Nδ (µ0 ).

This theorem follows immediately from the previous theorem by replacing the vectors
x0 , x, ẋ and y by the vectors (x0 , µ0 ) , (x, µ), (ẋ, 0) and (y, µ) or it can be proved directly
using Gronwall’s Lemma and the method of successive approximations.
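The parameter dependence asserted by this theorem can be illustrated with the simplest parametrized equation; the choice ẋ = µx below is ours, not from the notes. Its solution u(t, y, µ) = y e^{µt} is jointly continuous (indeed C^1) in (t, y, µ), and shrinking the perturbation of (y, µ) shrinks the change in the solution proportionally.

```python
import math

# Dependence on initial condition y and parameter mu (illustrative
# example): for x' = mu * x, x(0) = y the solution is u(t, y, mu) =
# y * e^{mu * t}.

def u(t, y, mu):
    return y * math.exp(mu * t)

x0, mu0 = 1.0, -0.5
for eps in (1e-2, 1e-4, 1e-6):
    # Perturb y and mu simultaneously by eps and measure the worst gap.
    gap = max(abs(u(t, x0 + eps, mu0 + eps) - u(t, x0, mu0))
              for t in (0.0, 0.5, 1.0))
    assert gap < 10 * eps        # the gap shrinks with the perturbation
print("continuity in (y, mu) verified")
```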

2.4 The Maximal Interval of Existence

The fundamental existence-uniqueness theorem established that if f ∈ C 1 (E) then the


initial value problem

ẋ = f (x)
x(0) = x0

has a unique solution defined on some interval (−a, a). In this section we show that initial
value problem has a unique solution x(t) defined on a maximal interval of existence (α, β).
Furthermore, if β < ∞ and if the limit

x1 = lim_{t→β−} x(t)

exists, then x1 ∈ ∂E, the boundary of E; here ∂E = Ē \ E where Ē denotes the closure of
the open set E. On the other hand, if the above limit exists and x1 ∈ E,
then β = ∞, f (x1 ) = 0 and x1 is an equilibrium point of the initial value problem. Now
we look into the following lemmas and theorems to understand the underlying concepts
in a greater detail.

Lemma: Let E be an open subset of Rn containing x0 and suppose f ∈ C 1 (E). Let


u1 (t) and u2 (t) be solutions of the initial value problem on the intervals I1 and I2 .
Then 0 ∈ I1 ∩ I2 and if I is any open interval containing 0 and contained in I1 ∩ I2 ,
it follows that u1 (t) = u2 (t) ∀ t ∈ I.

Proof: Since u1 (t) and u2 (t) are solutions of the initial value problem on I1 and I2
respectively, it follows from Definition 1 in Section 2.2 that 0 ∈ I1 ∩I2 . And if I is an open
interval containing 0 and contained in I1 ∩ I2 , then the fundamental existence-uniqueness
theorem in Section 2.2 implies that u1 (t) = u2 (t) on some open interval (−a, a) ⊂ I. Let
I ∗ be the union of all such open intervals contained in I. Then I ∗ is the largest open
interval contained in I on which u1 (t) = u2 (t). Clearly I ∗ ⊂ I and if I ∗ is a proper subset
of I, then one of the endpoints t0 of I ∗ is contained in I ⊂ I1 ∩ I2 . It follows from the
continuity of u1 (t) and u2 (t) on I that

lim_{t→t0} u1 (t) = lim_{t→t0} u2 (t)

Call this common limit u0 . It then follows from the uniqueness of solutions that u1 (t) =
u2 (t) on some interval I0 = (t0 − a, t0 + a) ⊂ I. Thus, u1 (t) = u2 (t) on the interval
I ∗ ∪ I0 ⊂ I and I ∗ is a proper subset of I ∗ ∪ I0 . But this contradicts the fact that I ∗ is
the largest open interval contained in I on which u1 (t) = u2 (t). Therefore, I ∗ = I and
we have u1 (t) = u2 (t) ∀ t ∈ I.

Theorem: Let E be an open subset of Rn and assume that f ∈ C 1 (E). Then for
each point x0 ∈ E, there is a maximal interval J on which the initial value problem
has a unique solution, x(t); i.e., if the initial value problem has a solution y(t) on an
interval I then I ⊂ J and y(t) = x(t) ∀ t ∈ I. Furthermore, the maximal interval
J is open; i.e., J = (α, β).

Proof: By the fundamental existence-uniqueness theorem in Section 2.2, the initial value
problem has a unique solution on some open interval (−a, a). Let (α, β) be the union of all
open intervals I such that initial value problem has a solution on I. We define a function
x(t) on (α, β) as follows: Given t ∈ (α, β), t belongs to some open interval I such that
initial value problem has a solution u(t) on I; for this given t ∈ (α, β), define x(t) = u(t).
Then x(t) is a well-defined function of t since if t ∈ I1 ∩ I2 where I1 and I2 are any two
open intervals such that initial value problem has solutions u1 (t) and u2 (t) on I1 and I2
respectively, then by the lemma u1 (t) = u2 (t) on the open interval I1 ∩ I2 . Also, x(t) is a
solution of initial value problem on (α, β) since each point t ∈ (α, β) is contained in some
open interval I on which the initial value problem has a unique solution u(t) and since
x(t) agrees with u(t) on I. The fact that J is open follows from the fact that any solution
of initial value problem on an interval (α, β] can be uniquely continued to a solution on
an interval (α, β + a) with a > 0 as in the proof of Theorem 2 below.

Theorem: Let E be an open subset of Rn containing x0 , let f ∈ C 1 (E), and let


(α, β) be the maximal interval of existence of the solution x(t) of the initial value
problem. Assume that β < ∞. Then given any compact set K ⊂ E, there exists a
t ∈ (α, β) such that x(t) ∉ K.

Proof: Since f is continuous on the compact set K, there is a positive number M such
that |f (x)| ≤ M ∀ x ∈ K. Let x(t) be the solution of the initial value problem on
its maximal interval of existence (α, β) and assume that β < ∞ and that x(t) ∈ K ∀
t ∈ (α, β). We first show that limt→β − x(t) exists. If α < t1 < t2 < β then

|x (t1 ) − x (t2 )| ≤ ∫_{t1}^{t2} |f (x(s))| ds ≤ M |t2 − t1 |

Thus as t1 and t2 approach β from the left, |x (t2 ) − x (t1 )| → 0 which, by the Cauchy
criterion for convergence in Rn (i.e., the completeness of Rn ) implies that limt→β − x(t)
exists. Let x1 = limt→β − x(t). Then x1 ∈ K ⊂ E since K is compact. Next define the
function u(t) on (α, β] by

u(t) = { x(t) for t ∈ (α, β), x1 for t = β }

Then u(t) is differentiable on (α, β]. Indeed,

u(t) = x0 + ∫_0^t f (u(s)) ds

which implies that

u′ (β) = f (u(β))

i.e., u(t) is a solution of the initial value problem on (α, β]. The function u(t) is called the
continuation of the solution x(t) to (α, β]. Since x1 ∈ E, it follows from the fundamen-
tal existence-uniqueness theorem in Section 2.2 that the initial value problem ẋ = f (x)
together with x(β) = x1 has a unique solution x1 (t) on some interval (β − a, β + a). By
the above lemma, x1 (t) = u(t) on (β − a, β) and x1 (β) = u(β) = x1 . So if we define


v(t) = { u(t) for t ∈ (α, β], x1 (t) for t ∈ [β, β + a) }

then v(t) is a solution of the initial value problem on (α, β + a). But this contradicts the
fact that (α, β) is the maximal interval of existence for
the initial value problem. Hence, if β < ∞, it follows that there exists a t ∈ (α, β) such
that x(t) ∉ K.
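The mechanism of this theorem is finite-time blow-up, and it can be seen concretely. The example below is our illustration (not from the notes): for ẋ = x², x(0) = 1 the solution x(t) = 1/(1 − t) has right maximal interval [0, 1), and it leaves every compact set K = [0, M] strictly before t = β = 1.

```python
# x' = x^2, x(0) = 1 has solution x(t) = 1/(1 - t) with beta = 1.

def x(t):
    return 1.0 / (1.0 - t)

# Sanity check that x(t) really solves x' = x^2, via a central difference.
h = 1e-7
for t in (0.0, 0.5, 0.9):
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(deriv - x(t) ** 2) < 1e-3

def escape_time(M):
    """Exact time at which x(t) first exceeds M (solve 1/(1 - t) = M)."""
    return 1.0 - 1.0 / M

for M in (10.0, 100.0, 1000.0):
    t_esc = escape_time(M)
    assert 0.0 < t_esc < 1.0            # escape happens before beta = 1
    assert x(t_esc + 1e-9) > M - 1e-3   # and x exceeds M just after t_esc
print("finite-time escape from every compact set [0, M] verified")
```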

If (α, β) is the maximal interval of existence for the initial value problem then 0 ∈ (α, β)
and the intervals [0, β) and (α, 0] are called the right and left maximal intervals of existence
respectively. Essentially the same proof yields the following result.

Theorem: Let E be an open subset of Rn containing x0 , let f ∈ C 1 (E), and let


[0, β) be the right maximal interval of existence of the solution x(t) of the initial
value problem. Assume that β < ∞. Then given any compact set K ⊂ E, there
exists a t ∈ (0, β) such that x(t) ∉ K.

Corollary: Under the hypothesis of the above theorem, if β < ∞ and if


limt→β − x(t) exists then limt→β − x(t) ∈ ∂E.

Proof: If x1 = limt→β − x(t), then the function


u(t) = { x(t) for t ∈ [0, β), x1 for t = β }

is continuous on [0, β]. Let K be the image of the compact set [0, β] under the continuous
map u(t); i.e.,

K = {x ∈ Rn | x = u(t) for some t ∈ [0, β]}

Then K is compact. Assume that x1 ∈ E. Then K ⊂ E and it follows from the above
theorem that there exists a t ∈ (0, β) such that x(t) ∉ K. This is a contradiction and
therefore x1 ∉ E. But since x(t) ∈ E ∀ t ∈ [0, β), it follows that x1 = limt→β − x(t) ∈ Ē.
Therefore x1 ∈ Ē \ E; i.e., x1 ∈ ∂E.

Corollary: Let E be an open subset of Rn containing x0 , let f ∈ C 1 (E), and let


[0, β) be the right maximal interval of existence of the solution x(t) of the initial
value problem. Assume that there exists a compact set K ⊂ E such that

{y ∈ Rn | y = x(t) for some t ∈ [0, β)} ⊂ K


It then follows that β = ∞; i.e. the initial value problem has a solution x(t) on
[0, ∞).

Proof: This corollary is just the contrapositive of the statement of the aforementioned
theorem.

We next prove the following theorem which strengthens the result on uniform convergence
with respect to initial conditions.

Theorem: Let E be an open subset of Rn containing x0 and let f ∈ C 1 (E). Suppose


that the initial value problem has a solution x (t, x0 ) defined on a closed interval
[a, b]. Then there exists a δ > 0 and a positive constant K such that ∀ y ∈ Nδ (x0 )
the initial value problem

ẋ = f (x)
x(0) = y

has a unique solution x(t, y) defined on [a, b] which satisfies

|x(t, y) − x (t, x0 )| ≤ |y − x0 | e^{K|t|}


and

lim_{y→x0} x(t, y) = x (t, x0 )

uniformly ∀ t ∈ [a, b].

Remark: If in the above theorem we have a function f (x, µ) depending on a parameter µ ∈ Rm


which satisfies f ∈ C 1 (E) where E is an open subset of Rn+m containing (x0 , µ0 ), it can
be shown that if for µ = µ0 the initial value problem has a solution x (t, x0 , µ0 ) defined
on a closed interval a ≤ t ≤ b, then there is a δ > 0 and a K > 0 such that ∀ y ∈ Nδ (x0 )
and µ ∈ Nδ (µ0 ) the initial value problem

ẋ = f (x, µ)
x(0) = y

has a unique solution x(t, y, µ) defined for a ≤ t ≤ b which satisfies

|x(t, y, µ) − x (t, x0 , µ0 )| ≤ [|y − x0 | + |µ − µ0 |] e^{K|t|}

and

lim_{(y,µ)→(x0 ,µ0 )} x(t, y, µ) = x (t, x0 , µ0 )

uniformly ∀ t ∈ [a, b].


In order to prove this theorem, we first establish the following lemma.

Lemma: Let E be an open subset of Rn and let A be a compact subset of E.


Then if f : E → Rn is locally Lipschitz on E, it follows that f satisfies a Lipschitz
condition on A.

Proof: Let M be the maximum value of the continuous function |f | on the compact set A.
Suppose that f does not satisfy a Lipschitz condition on A. Then for every K > 0, we
can find x, y ∈ A such that

|f (y) − f (x)| > K|y − x|

In particular, there exist sequences xn and yn in A such that

|f (yn ) − f (xn )| > n |yn − xn | (*)

for n = 1, 2, 3, . . .. Since A is compact, there are convergent subsequences, call them xn


and yn for simplicity in notation, such that xn → x∗ and yn → y∗ with x∗ and y∗ in A.
It follows that x∗ = y∗ since ∀ n = 1, 2, 3, . . .

|y∗ − x∗ | = lim_{n→∞} |yn − xn | ≤ lim_{n→∞} (1/n) |f (yn ) − f (xn )| ≤ lim_{n→∞} 2M/n = 0

Now, by hypotheses, there exists a neighborhood N0 of x∗ and a constant K0 such that


f satisfies a Lipschitz condition with Lipschitz constant K0 ∀ x and y ∈ N0 . But since
xn and yn approach x∗ as n → ∞, it follows that xn and yn are in N0 for n sufficiently
large; i.e., for n sufficiently large

|f (yn ) − f (xn )| ≤ K0 |yn − xn | .

But for n ≥ K0 , this contradicts the above inequality (*) and this establishes the lemma.

Proof of Theorem: Since [a, b] is compact and x (t, x0 ) is a continuous function of


t, {x ∈ Rn | x = x (t, x0 ) and a ≤ t ≤ b} is a compact subset of E. And since E is open,
there exists an ε > 0 such that the compact set

A = {x ∈ Rn : |x − x (t, x0 )| ≤ ε and a ≤ t ≤ b}

is a subset of E. Since f ∈ C 1 (E), f is locally Lipschitz in E; and then by the above


lemma, f satisfies a Lipschitz condition

|f (y) − f (x)| ≤ K|y − x|

∀ x, y ∈ A. Choose δ > 0 so small that δ ≤ ε and δ ≤ ε e^{−K(b−a)} . Let y ∈ Nδ (x0 ) and


let x(t, y) be the solution of the initial value problem on its maximal interval of existence
(α, β). We shall show that [a, b] ⊂ (α, β). Suppose that β ≤ b. It then follows that
x(t, y) ∈ A ∀ t ∈ (α, β) because if this were not true then there would exist a t∗ ∈ (α, β)
such that x (t, y) ∈ A for t ∈ (α, t∗ ] and x (t∗ , y) ∈ ∂A. But then

|x(t, y) − x (t, x0 )| ≤ |y − x0 | + ∫_0^t |f (x(s, y)) − f (x (s, x0 ))| ds
≤ |y − x0 | + K ∫_0^t |x(s, y) − x (s, x0 )| ds

∀ t ∈ (α, t∗ ]. And then by Gronwall’s Lemma, it follows that


|x (t∗ , y) − x (t∗ , x0 )| ≤ |y − x0 | e^{K|t∗|} < δ e^{K(b−a)} ≤ ε

since t∗ < β ≤ b. Thus x (t∗ , y) is an interior point of A rather than a boundary point,
a contradiction. Thus, x(t, y) ∈ A ∀ t ∈ (α, β). But then, since A ⊂ E is compact,
(α, β) is not the maximal interval of existence of x(t, y), a contradiction. Thus β > b.
It is similarly proved that α < a. Hence, ∀ y ∈ Nδ (x0 ), the initial value problem has a
unique solution defined on [a, b]. Furthermore, if we assume that there is a t∗ ∈ [a, b)
such that x(t, y) ∈ A ∀ t ∈ [a, t∗ ] and x (t∗ , y) ∈ ∂A, then a repeat of the above argument
based on Gronwall’s Lemma leads to a contradiction and shows that
x(t, y) ∈ A ∀ t ∈ [a, b] and hence that

|x(t, y) − x (t, x0 )| ≤ |y − x0 | e^{K|t|}

∀ t ∈ [a, b]. It then follows that



lim_{y→x0} x(t, y) = x (t, x0 )

uniformly ∀ t ∈ [a, b].
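The estimate of this theorem can be tested on an equation with a known flow. The example f (x) = sin x below is our choice, not from the notes: it is globally Lipschitz with K = 1 and has the closed-form flow x(t, x0) = 2 arctan(tan(x0/2) e^t).

```python
import math

# Check |x(t, y) - x(t, x0)| <= |y - x0| e^{K|t|} for x' = sin x (K = 1).

K = 1.0

def x(t, x0):
    return 2.0 * math.atan(math.tan(x0 / 2.0) * math.exp(t))

# Sanity check: x(t, x0) really solves x' = sin x with x(0, x0) = x0.
h = 1e-6
assert abs(x(0.0, 1.0) - 1.0) < 1e-12
for t in (0.0, 0.5, 1.0):
    deriv = (x(t + h, 1.0) - x(t - h, 1.0)) / (2.0 * h)
    assert abs(deriv - math.sin(x(t, 1.0))) < 1e-6

# The Gronwall-type continuous-dependence estimate of the theorem:
x0, y = 1.0, 1.05
for t in (-1.0, 0.0, 0.5, 2.0):
    gap = abs(x(t, y) - x(t, x0))
    assert gap <= abs(y - x0) * math.exp(K * abs(t)) + 1e-12
print("continuous-dependence estimate verified")
```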

2.5 The Flow defined by a Differential Equation

We have already defined the flow, e^{At} : Rn → Rn , of the linear system

ẋ = Ax

The mapping ϕt = e^{At} satisfies the following basic properties ∀ x ∈ Rn :

(i) ϕ0 (x) = x

(ii) ϕs (ϕt (x)) = ϕs+t (x) ∀ s, t ∈ R

(iii) ϕ−t (ϕt (x)) = ϕt (ϕ−t (x)) = x ∀ t ∈ R.

Observe that these properties follow either from the definitions, or from the properties we
have already proved.
In this section, we define the flow, ϕt , of the nonlinear system

ẋ = f (x)

and show that it satisfies these same basic properties. In the following definition, we
denote the maximal interval of existence (α, β) of the solution ϕ (t, x0 ) of the initial value
problem

ẋ = f (x)
x(0) = x0

by I (x0 ) since the endpoints α and β of the maximal interval generally depend on x0 .

Definition: Let E be an open subset of Rn and let f ∈ C 1 (E). For x0 ∈ E, let


ϕ (t, x0 ) be the solution of the initial value problem defined on its maximal interval
of existence I (x0 ). Then for t ∈ I (x0 ), the set of mappings ϕt defined by

ϕt (x0 ) = ϕ (t, x0 )
is called the flow of the differential equation or the flow defined by the differential
equation; ϕt is also referred to as the flow of the vector field f (x).

If we think of the initial point x0 as being fixed and let I = I (x0 ), then the mapping
ϕ (·, x0 ) : I → E defines a solution curve or trajectory of the concerned differential
equation through the point x0 ∈ E. As usual, the mapping ϕ (·, x0 ) is identified
with its graph in I × E and a trajectory is visualized as a motion along a curve Γ through
the point x0 in the subset E of the phase space Rn ; cf. Figure 4. On the other hand, if
we think of the point x0 as varying throughout K ⊂ E, then the flow of the differential
equation, ϕt : K → E can be viewed as the motion of all the points in the set K.

Figure 4: (a) A trajectory Γ of the system. (b) The flow ϕt of the system.

If we think of the differential equation as describing the motion of a fluid, then a trajectory
of the concerned differential equation describes the motion of an individual particle

in the fluid while the flow of the differential equation describes the motion of the entire
fluid.

We now show that the basic properties (i)–(iii) of linear flows are also satisfied by nonlinear
flows. But first we extend the theorem of Section 2.3, establishing that ϕ (t, x0 ) is a locally
smooth function, to a global result. Using the same notation as in Definition 1, let us
define the set Ω ⊂ R × E as

Ω = {(t, x0 ) ∈ R × E | t ∈ I (x0 )}

Theorem: Let E be an open subset of Rn and let f ∈ C 1 (E). Then Ω is an open


subset of R × E and ϕ ∈ C 1 (Ω).

Proof: If (t0 , x0 ) ∈ Ω and t0 > 0, then according to the definition of the set Ω, the
solution x(t) = ϕ (t, x0 ) of the initial value problem is defined on [0, t0 ]. Thus, as in the
proof of the continuation theorem in Section 2.4, the solution x(t) can be extended to an
interval [0, t0 + ε] for some ε > 0; i.e., ϕ (t, x0 ) is defined on the closed interval
[t0 − ε, t0 + ε]. It then follows from the last theorem in Section 2.4 that there exists a
neighborhood of x0 , Nδ (x0 ), such that ϕ(t, y) is defined on [t0 − ε, t0 + ε] × Nδ (x0 ); i.e.,
(t0 − ε, t0 + ε) × Nδ (x0 ) ⊂ Ω. Therefore, Ω is open in R × E. It follows from the same
theorem that ϕ ∈ C 1 (G) where G = (t0 − ε, t0 + ε) × Nδ (x0 ). A similar proof holds for
t0 ≤ 0, and since (t0 , x0 ) is an arbitrary point in Ω, it follows that ϕ ∈ C 1 (Ω).

Remark: This theorem can be generalized to show that if f ∈ C r (E) with r ≥ 1,


then ϕ ∈ C r (Ω) and that if f is analytic in E, then ϕ is analytic in Ω.

Theorem: Let E be an open set of Rn and let f ∈ C 1 (E). Then ∀ x0 ∈ E, if


t ∈ I (x0 ) and s ∈ I (ϕt (x0 )), it follows that s + t ∈ I (x0 ) and

ϕs+t (x0 ) = ϕs (ϕt (x0 )) .

Proof: Suppose that s > 0, t ∈ I (x0 ) and s ∈ I (ϕt (x0 )). Let the maximal interval
I (x0 ) = (α, β) and define the function x : (α, s + t] → E by

x(r) = { ϕ (r, x0 ) if α < r ≤ t, ϕ (r − t, ϕt (x0 )) if t ≤ r ≤ s + t }

Then x(r) is a solution of the initial value problem on (α, s + t]. Hence s + t ∈ I (x0 ) and
by uniqueness of solutions

ϕs+t (x0 ) = x(s + t) = ϕ (s, ϕt (x0 )) = ϕs (ϕt (x0 ))

If s = 0 the statement of the theorem follows immediately. And if s < 0, then we define
the function x : [s + t, β) → E by

x(r) = { ϕ (r, x0 ) if t ≤ r < β, ϕ (r − t, ϕt (x0 )) if s + t ≤ r ≤ t }

Then x(r) is a solution of the initial value problem on [s + t, β) and the last statement of
the theorem follows from the uniqueness of solutions as above.
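The group property just proved can be observed on a flow that is known in closed form. The logistic equation ẋ = x(1 − x) below is our illustrative choice (not from the notes); its flow is ϕt(x) = x / (x + (1 − x) e^{−t}).

```python
import math

# The group property phi_{s+t} = phi_s . phi_t and property (iii) for the
# logistic equation x' = x(1 - x), whose flow is known explicitly.

def phi(t, x):
    return x / (x + (1.0 - x) * math.exp(-t))

x0 = 0.2
for s, t in ((0.3, 1.1), (-0.5, 0.8), (2.0, -1.0)):
    lhs = phi(s, phi(t, x0))            # flow for time t, then time s
    rhs = phi(s + t, x0)                # flow for time s + t at once
    assert abs(lhs - rhs) < 1e-12
    assert abs(phi(-t, phi(t, x0)) - x0) < 1e-12   # property (iii)
print("group property verified")
```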

Theorem: Under the hypotheses of the first theorem of this section , if (t, x0 ) ∈ Ω
then there exists a neighborhood U of x0 such that {t} × U ⊂ Ω. It then follows
that the set V = ϕt (U ) is open in E and that

ϕ−t (ϕt (x)) = x ∀ x ∈ U


and

ϕt ϕ−t (y) = y ∀ y ∈ V

Proof: If (t, x0 ) ∈ Ω then it follows as in the proof of the first theorem of this section
that there exists a neighborhood of x0 , U = Nδ (x0 ), such that (t − ε, t + ε) × U ⊂ Ω;
thus, {t} × U ⊂ Ω. For x ∈ U , let y = ϕt (x). Then −t ∈ I(y) since the function
h(s) = ϕ(s + t, y) is a solution of the concerned differential equation on [−t, 0] that
satisfies h(−t) = y; i.e., ϕ−t is defined on the set V = ϕt (U ). It then follows from the
previous theorem that ϕ−t (ϕt (x)) = ϕ0 (x) = x ∀ x ∈ U and that ϕt (ϕ−t (y)) = ϕ0 (y) = y
∀ y ∈ V . It remains to prove that V is open. Let V ∗ ⊃ V be the maximal subset of E on
which ϕ−t is defined. V ∗ is open because Ω is open and ϕ−t : V ∗ → E is continuous
because ϕ is continuous. Therefore, the inverse image of the open set U under the
continuous map ϕ−t , i.e., ϕt (U ), is open in E. Thus, V is open in E.

Later we intend to show that the time along each trajectory of the concerned differential
equation can be rescaled, without affecting the phase portrait of the concerned differential
equation, so that ∀ x0 ∈ E, the solution ϕ (t, x0 ) of the initial value problem is defined ∀
t ∈ R; i.e., ∀ x0 ∈ E, I (x0 ) = (−∞, ∞). This rescaling avoids some of the complications
found in stating the above theorems. Once this rescaling has been made, it follows that
Ω = R × E, ϕ ∈ C 1 (R × E), ϕt ∈ C 1 (E) ∀ t ∈ R, and properties (i)–(iii) for the flow of
the concerned nonlinear differential equation hold ∀ t ∈ R and x ∈ E just as for
the linear flow e^{At} . From now on, it will be assumed that this rescaling has been made so
that ∀ x0 ∈ E, ϕ (t, x0 ) is defined ∀ t ∈ R; i.e., we shall assume throughout the remainder

of this chapter that the flow ϕt of the concerned nonlinear differential equation satisfies
ϕt ∈ C 1 (E) ∀ t ∈ R.

Definition: Let E be an open subset of Rn , let f ∈ C 1 (E), and let ϕt : E → E be


the flow of the differential equation defined ∀ t ∈ R. Then a set S ⊂ E is called
invariant with respect to the flow ϕt if ϕt (S) ⊂ S ∀ t ∈ R and S is called positively
(or negatively) invariant with respect to the flow ϕt if ϕt (S) ⊂ S ∀ t ≥ 0 (or t ≤ 0
).

We have already shown that the stable, unstable and center subspaces of the linear
system ẋ = Ax are invariant under the linear flow ϕt = e^{At} . A similar result will be
established for the nonlinear flow ϕt of the concerned differential equation.

2.6 Linearization

A good place to start analyzing the nonlinear system

ẋ = f (x)

is to determine the equilibrium points of the concerned differential equation and to
describe its behavior near those equilibrium points. In the next two sections it is shown
that the local behavior of the concerned nonlinear differential equation near a hyperbolic
equilibrium point x0 is qualitatively
determined by the behavior of the linear system

ẋ = Ax

with the matrix A = Df (x0 ), near the origin. The linear function Ax = Df (x0 ) x is
called the linear part of f at x0 .

Definition: A point x0 ∈ Rn is called an equilibrium point or critical point of the


concerned differential equation if f (x0 ) = 0. An equilibrium point x0 is called a
hyperbolic equilibrium point of the concerned differential equation if none of the
eigenvalues of the matrix Df (x0 ) have zero real part. The linear system with the
matrix A = Df (x0 ) is called the linearization of the concerned differential equation
at x0 .

If x0 = 0 is an equilibrium point of the concerned differential equation, then f (0) = 0


and, by Taylor’s Theorem,

f (x) = Df (0)x + (1/2) D2 f (0)(x, x) + · · ·

It follows that the linear function Df (0)x is a good first approximation to the nonlinear
function f (x) near x = 0 and it is reasonable to expect that the behavior of the concerned
nonlinear differential equation near the point x = 0 will be approximated by
the behavior of its linearization at x = 0. Later it will be shown that this is indeed the
case if the matrix Df (0) has no zero or pure imaginary eigenvalues.

Note that if x0 is an equilibrium point of the concerned differential equation and ϕt : E →


Rn is the flow of the differential equation, then ϕt (x0 ) = x0 ∀ t ∈ R. Thus, x0 is called
a fixed point of the flow ϕt ; it is also called a zero, a critical point, or a singular point
of the vector field f : E → Rn . We next give a rough classification of the equilibrium
points of the concerned differential equation according to the signs of the real parts of the
eigenvalues of the matrix Df (x0 ).

Definition: An equilibrium point x0 of the concerned differential equation is called


a sink if all of the eigenvalues of the matrix Df (x0 ) have negative real part; it is
called a source if all of the eigenvalues of Df (x0 ) have positive real part; and it is
called a saddle if it is a hyperbolic equilibrium point and Df (x0 ) has at least one
eigenvalue with a positive real part and at least one with a negative real part.
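This classification is easy to compute in the planar case, where the real parts of the eigenvalues of a 2×2 matrix are determined by its trace and determinant. The sketch below uses an example system of our own choosing (f (x, y) = (x − x³, −y + x²), not from the notes), whose Jacobian at the origin is diag(1, −1), a hyperbolic saddle.

```python
# Classify a planar equilibrium from the Jacobian Df(x0).

def jacobian(f, p, h=1e-6):
    """Central-difference Jacobian of f: R^2 -> R^2 at the point p."""
    J = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        q_plus = list(p); q_plus[j] += h
        q_minus = list(p); q_minus[j] -= h
        fp, fm = f(*q_plus), f(*q_minus)
        for i in range(2):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

def classify(J):
    """Sink/source/saddle for a 2x2 Jacobian, via eigenvalue real parts."""
    tr = J[0][0] + J[1][1]
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    disc = tr * tr - 4 * det
    if disc >= 0:   # real eigenvalues (tr +- sqrt(disc)) / 2
        re_parts = [(tr + disc ** 0.5) / 2, (tr - disc ** 0.5) / 2]
    else:           # complex conjugate pair with real part tr / 2
        re_parts = [tr / 2, tr / 2]
    if all(r < 0 for r in re_parts):
        return "sink"
    if all(r > 0 for r in re_parts):
        return "source"
    if any(r > 0 for r in re_parts) and any(r < 0 for r in re_parts):
        return "saddle"
    return "non-hyperbolic"

f = lambda x, y: (x - x**3, -y + x**2)   # equilibrium at the origin
print(classify(jacobian(f, [0.0, 0.0])))  # -> saddle
```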

Later we shall see that if x0 is a hyperbolic equilibrium point of the concerned differential
equation, then the local behavior of the nonlinear system is topologically equivalent to the
local behavior of its linearization; i.e., there
is a continuous one-to-one map of a neighborhood of x0 onto an open set U containing
the origin, H : Nε (x0 ) → U , which transforms the concerned differential equation into
the linear system, maps trajectories of the concerned differential equation in Nε (x0 ) onto
trajectories of the linear system in the open set U , and preserves the orientation of the
trajectories by time, i.e., H preserves the direction of the flow along the trajectories.

2.7 Stable Manifold Theorem

The stable manifold theorem is one of the most important results in the local qualitative
theory of ordinary differential equations. The theorem shows that near a hyperbolic
equilibrium point x0 , the nonlinear system

ẋ = f (x)

has stable and unstable manifolds S and U tangent at x0 to the stable and unstable
subspaces E s and E u of the linearized system

ẋ = Ax

where A = Df (x0 ). Furthermore, S and U are of the same dimensions as E s and E u ,


and if ϕt is the flow of the nonlinear system, then S and U are positively and negatively
invariant under ϕt respectively and satisfy

lim_{t→∞} ϕt (c) = x0 ∀ c ∈ S

and

lim_{t→−∞} ϕt (c) = x0 ∀ c ∈ U

We first illustrate these ideas with an example and then make them more precise by
proving the stable manifold theorem. It is assumed that the equilibrium point x0 is
located at the origin throughout the remainder of this section. If this is not the case, then
the equilibrium point x0 can be translated to the origin by the affine transformation of
coordinates x → x − x0 .

Definition: Let X be a metric space and let A and B be subsets of X. A home-


omorphism of A onto B is a continuous one-to-one map of A onto B, h : A → B,
such that h−1 : B → A is continuous. The sets A and B are called homeomorphic
or topologically equivalent if there is a homeomorphism of A onto B. If we wish to
emphasize that h maps A onto B, we write h : A ↠ B.

Definition: An n-dimensional differentiable manifold, M (or a manifold M of class
C k ), is a connected metric space with an open covering {Uα }, i.e., M = ∪α Uα , such
that

1. ∀ α, Uα is homeomorphic to the open unit ball in Rn , B = {x ∈ Rn | |x| < 1},


i.e., ∀ α there exists a homeomorphism of Uα onto B, hα : Uα → B

2. if Uα ∩ Uβ ≠ ∅ and hα : Uα → B, hβ : Uβ → B are homeomorphisms, then


hα (Uα ∩ Uβ ) and hβ (Uα ∩ Uβ ) are subsets of Rn and the map

h = hα ◦ hβ⁻¹ : hβ (Uα ∩ Uβ ) → hα (Uα ∩ Uβ )

is differentiable (or of class C k ) and ∀ x ∈ hβ (Uα ∩ Uβ ), the Jacobian
determinant det Dh(x) ≠ 0.

The manifold M is said to be analytic if the maps h = hα ◦ hβ⁻¹ are analytic. The
pair (Uα , hα ) is called a chart for the manifold M and the set of all charts is called an
atlas for M . The differentiable manifold M is called orientable if there is an atlas with
det D(hα ◦ hβ⁻¹)(x) > 0 ∀ α, β and x ∈ hβ (Uα ∩ Uβ ).
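As a concrete sketch of this definition, consider the circle S¹ (an illustrative example, not taken from the text). Two angle charts, one excluding (1, 0) and one excluding (−1, 0), each rescaled into the open unit ball (−1, 1) ⊂ R¹, cover S¹, and on the overlap the transition map h = h₁ ∘ h₂⁻¹ is s ↦ s ∓ 1, which is smooth with nonzero derivative:

```python
import numpy as np

# Chart 1 covers S^1 \ {(1,0)} via the angle in (0, 2*pi);
# chart 2 covers S^1 \ {(-1,0)} via the angle in (-pi, pi).
def h1(p):
    theta = np.arctan2(p[1], p[0]) % (2*np.pi)   # in (0, 2*pi) off the excluded point
    return theta/np.pi - 1.0

def h2(p):
    return np.arctan2(p[1], p[0]) / np.pi        # in (-1, 1)

def h2_inv(s):
    theta = np.pi * s
    return np.array([np.cos(theta), np.sin(theta)])

# Transition map h = h1 ∘ h2^{-1} on the overlap (s != 0):
transition = lambda s: h1(h2_inv(s))

# On each half of the overlap the transition is s -> s - 1 or s -> s + 1,
# so its derivative is 1 != 0 and the charts are C^∞-compatible.
for s in (0.3, 0.7, -0.3, -0.7):
    d = (transition(s + 1e-6) - transition(s - 1e-6)) / 2e-6
    print(round(transition(s), 6), round(d, 6))
```

The two charts together form an atlas for S¹ in the sense of the definition above.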

The Stable Manifold Theorem: Let E be an open subset of Rn containing the


origin, let f ∈ C 1 (E), and let ϕt be the flow of the nonlinear system . Suppose
that f (0) = 0 and that D f (0) has k eigenvalues with negative real part and n − k
eigenvalues with positive real part. Then there exists a k-dimensional differentiable
manifold S tangent to the stable subspace E s of the linear system at 0 such that ∀
t ≥ 0, ϕt (S) ⊂ S and ∀ x0 ∈ S,

lim_{t→∞} ϕt (x0 ) = 0

and there exists an n − k dimensional differentiable manifold U tangent to the


unstable subspace E u of the linear system at 0 such that ∀ t ≤ 0, ϕt (U ) ⊂ U
and ∀ x0 ∈ U ,

lim_{t→−∞} ϕt (x0 ) = 0

Before proving this theorem, we remark that if f ∈ C 1 (E) and f (0) = 0, then the system
can be written as

ẋ = Ax + F(x) (3)

where A = Df (0), F(x) = f (x) − Ax, F ∈ C 1 (E), F(0) = 0 and DF(0) = 0. This in turn
implies that ∀ ε > 0 there is a δ > 0 such that |x| ≤ δ and |y| ≤ δ imply that

|F(x) − F(y)| ≤ ε|x − y| (4)


Furthermore, there is an n × n invertible matrix C such that

 
B = C⁻¹AC = [ P 0 ]
            [ 0 Q ]

where the eigenvalues λ1 , . . . , λk of the k × k matrix P have negative real part and the
eigenvalues λk+1 , . . . , λn of the (n − k) × (n − k) matrix Q have positive real part. We
can choose α > 0 sufficiently small that for j = 1, . . . , k,

Re (λj ) < −α < 0



Letting y = C −1 x, the system then has the form

ẏ = By + G(y)

where G(y) = C −1 F(Cy) ∈ C 1 (Ẽ) where Ẽ = C −1 (E) and G satisfies the Lipschitz-type
condition above.

It will be shown in the proof that there are n − k differentiable functions ψj (y1 , . . . , yk )
such that the equations

yj = ψj (y1 , . . . , yk ) , j = k + 1, . . . , n

define a k-dimensional differentiable manifold S̃ in y-space. The differentiable manifold S


in x-space is then obtained from S̃ under the linear transformation of coordinates x = Cy.

Proof: Consider the system ẏ = By + G(y). Let

   
U (t) = [ e^{Pt} 0 ]          V (t) = [ 0 0        ]
        [ 0      0 ]   and            [ 0 e^{Qt}   ]

Then U̇ = BU, V̇ = BV and

eBt = U (t) + V (t)

It is not difficult to see that with α > 0 chosen as above, we can
choose K > 0 sufficiently large and σ > 0 sufficiently small that

‖U (t)‖ ≤ Ke−(α+σ)t ∀ t ≥ 0

and

‖V (t)‖ ≤ Keσt ∀ t ≤ 0

Next consider the integral equation



u(t, a) = U (t)a + ∫_0^t U (t − s)G(u(s, a)) ds − ∫_t^∞ V (t − s)G(u(s, a)) ds

If u(t, a) is a continuous solution of this integral equation, then it is a solution of the


differential equation considered initially in the proof. We now solve this integral equation
by the method of successive approximations. Let

u(0) (t, a) = 0

and

u(j+1) (t, a) = U (t)a + ∫_0^t U (t − s)G(u(j) (s, a)) ds − ∫_t^∞ V (t − s)G(u(j) (s, a)) ds    (*)

Assume that the induction hypothesis

|u(j) (t, a) − u(j−1) (t, a)| ≤ K|a|e−αt / 2^(j−1)

holds for j = 1, 2, . . . , m and t ≥ 0. It clearly holds for j = 1 provided t ≥ 0. Then using


the Lipschitz-type condition (4) satisfied by the function G and the above estimates on
kU (t)k and kV (t)k, it follows from the induction hypothesis that for t ≥ 0

|u(m+1) (t, a) − u(m) (t, a)|
    ≤ ∫_0^t ‖U (t − s)‖ ε |u(m) (s, a) − u(m−1) (s, a)| ds
      + ∫_t^∞ ‖V (t − s)‖ ε |u(m) (s, a) − u(m−1) (s, a)| ds
    ≤ ε ∫_0^t Ke−(α+σ)(t−s) (K|a|e−αs / 2^(m−1)) ds
      + ε ∫_t^∞ Keσ(t−s) (K|a|e−αs / 2^(m−1)) ds
    ≤ εK²|a|e−αt / (σ 2^(m−1)) + εK²|a|e−αt / (σ 2^(m−1))
    < (1/4 + 1/4) K|a|e−αt / 2^(m−1) = K|a|e−αt / 2^m

provided εK/σ < 1/4; i.e., provided we choose ε < σ/(4K). In order that the
Lipschitz-type condition (4) hold for the function G, it suffices to choose K|a| < δ/2;
i.e., we choose |a| < δ/(2K). It then follows by induction that the above estimate holds
∀ j = 1, 2, 3, . . . and t ≥ 0. Thus, for n > m > N and t ≥ 0,

|u(n) (t, a) − u(m) (t, a)| ≤ Σ_{j=N}^{∞} |u(j+1) (t, a) − u(j) (t, a)|
                           ≤ K|a| Σ_{j=N}^{∞} 1/2^j = K|a| / 2^(N−1)


This last quantity approaches zero as N → ∞ and therefore {u(j) (t, a)} is a Cauchy
sequence of continuous functions. So, we now know that

lim_{j→∞} u(j) (t, a) = u(t, a)

uniformly ∀ t ≥ 0 and |a| < δ/2K. Taking the limit of both sides of (*), it follows from the
uniform convergence that the continuous function u(t, a) satisfies the integral equation
and hence the differential equation. It follows by induction and the fact that G ∈ C 1 (Ẽ)
that u(j) (t, a) is a differentiable function of a for t ≥ 0 and |a| < δ/2K. Thus, it follows
from the uniform convergence that u(t, a) is a differentiable function of a for t ≥ 0 and
|a| < δ/2K. The last estimate implies that

|u(t, a)| ≤ 2K|a|e−αt

for t ≥ 0 and |a| < δ/2K.


It is clear from the integral equation that the last n − k components of the vector a do
not enter the computation and hence they may be taken as zero. Thus, the components
uj (t, a) of the solution u(t, a) satisfy the initial conditions

uj (0, a) = aj for j = 1, . . . , k

and

uj (0, a) = − ( ∫_0^∞ V (−s)G(u(s, a1 , . . . , ak , 0)) ds )_j for j = k + 1, . . . , n.
For j = k + 1, . . . , n we define the functions

ψj (a1 , . . . , ak ) = uj (0, a1 , . . . , ak , 0, . . . , 0)

Then the initial values yj = uj (0, a1 , . . . , ak , 0, . . . , 0) satisfy

yj = ψj (y1 , . . . , yk ) for j = k + 1, . . . , n

according to the definition. These equations then define a differentiable manifold S̃ for
√(y1² + · · · + yk²) < δ/2K. Furthermore, if y(t) is a solution of the differential equation
with y(0) ∈ S̃, i.e., with y(0) = u(0, a), then

y(t) = u(t, a)

It follows that if y(t) is a solution of ẏ = By + G(y) with y(0) ∈ S̃, then y(t) ∈ S̃ ∀ t ≥ 0,
and it follows from the estimate |u(t, a)| ≤ 2K|a|e−αt that y(t) → 0 as t → ∞. It can also
be shown that if y(t) is a solution of ẏ = By + G(y) with y(0) ∉ S̃ then y(t) ↛ 0 as t → ∞,
and that

∂ψj
(0) = 0
∂yi

for i = 1, . . . , k and j = k + 1, . . . , n; i.e., the differentiable manifold S̃ is tangent to the


stable subspace E s = {y ∈ Rn | yk+1 = · · · = yn = 0} of the linear system ẏ = By at 0.

The existence of the unstable manifold Ũ of ẏ = By + G(y) is established in exactly the
same way by considering the system with t → −t, i.e.,

ẏ = −By − G(y)

The stable manifold for this system will then be the unstable manifold Ũ for the dif-
ferential system. Note that it is also necessary to replace the vector y by the vector
(yk+1 , . . . , yn , y1 , . . . , yk ) in order to determine the n − k dimensional manifold Ũ by the
above process. This completes the proof of the Stable Manifold Theorem.

Definition: Let ϕt be the flow of the nonlinear system. The global stable and
unstable manifolds of the nonlinear system of our concern at 0 are defined by

W s (0) = ∪_{t≤0} ϕt (S)   and   W u (0) = ∪_{t≥0} ϕt (U )

respectively; W s (0) and W u (0) are also referred to as the global stable and unstable
manifolds of the origin respectively. It can be shown that the global stable and
unstable manifolds W s (0) and W u (0) are unique and that they are invariant with
respect to the flow ϕt ; furthermore, ∀ x ∈ W s (0), limt→∞ ϕt (x) = 0 and ∀
x ∈ W u (0), limt→−∞ ϕt (x) = 0.

It can be shown that in a small neighborhood, N , of a hyperbolic critical point at the


origin, the local stable and unstable manifolds, S and U , of the concerned system at the
origin are given by

S = {x ∈ N | ϕt (x) → 0 as t → ∞ and ϕt (x) ∈ N for t ≥ 0}

and

U = {x ∈ N | ϕt (x) → 0 as t → −∞ and ϕt (x) ∈ N for t ≤ 0}

respectively.

It follows from the upper bound on u(t, a) in the proof of the stable manifold theorem
that if x(t) is a solution of the nonlinear system with x(0) ∈ S, i.e., if x(t) = Cy(t)
with y(0) = u(0, a) ∈ S̃, then for any ε > 0 there exists a δ > 0 such that if |x(0)| < δ
then

|x(t)| ≤ εe−αt

∀ t ≥ 0. Just as in the proof of the stable manifold theorem, α is any positive number
that satisfies Re (λj ) < −α for j = 1, . . . , k where λj , j = 1, . . . , k are the eigenvalues of
Df (0) with negative real part. This result shows that solutions starting in S, sufficiently
near the origin, approach the origin exponentially fast as t → ∞.

Corollary: Under the hypotheses of the Stable Manifold Theorem, if S and U are
the stable and unstable manifolds of the system at the origin and if Re (λj ) < −α <
0 < β < Re (λm ) for j = 1, . . . , k and m = k + 1, . . . , n, then given ε > 0 there
exists a δ > 0 such that if x0 ∈ Nδ (0) ∩ S then |ϕt (x0 )| ≤ εe−αt ∀ t ≥ 0 and if
x0 ∈ Nδ (0) ∩ U then |ϕt (x0 )| ≤ εeβt ∀ t ≤ 0.

2.8 The Hartman-Grobman Theorem

The Hartman-Grobman Theorem is another very important result in the local qualitative
theory of ordinary differential equations. The theorem shows that near a hyperbolic
equilibrium point x0 , the nonlinear system

ẋ = f (x)

has the same qualitative structure as the linear system

ẋ = Ax

with A = Df (x0 ). Throughout this section we shall assume that the equilibrium point
x0 has been translated to the origin.

Definition: Two autonomous systems of differential equations are said to be topologically


equivalent in a neighborhood of the origin or to have the same qualitative structure
near the origin if there is a homeomorphism H mapping an open set U containing the
origin onto an open set V containing the origin which maps trajectories of one in U onto
trajectories of the other in V and preserves their orientation by time in the sense that if a
trajectory is directed from x1 to x2 in U , then its image is directed from H (x1 ) to H (x2 )
in V . If the homeomorphism H preserves the parameterization by time, then the systems
are said to be topologically conjugate in a neighborhood of the origin.

The Hartman-Grobman Theorem: Let E be an open subset of Rn containing


the origin, let f ∈ C 1 (E), and let ϕt be the flow of the nonlinear system. Suppose
that f (0) = 0 and that the matrix A = Df(0) has no eigenvalue with zero real part.
Then there exists a homeomorphism H of an open set U containing the origin onto
an open set V containing the origin such that for each x0 ∈ U there is an open
interval I0 ⊂ R containing zero such that ∀ t ∈ I0

H ◦ ϕt (x0 ) = eAt H (x0 ) ;


i.e., H maps trajectories of the non-linear system near the origin onto trajectories
of the linear system near the origin and preserves the parameterization by time.

Outline of the Proof: Consider the nonlinear system with f ∈ C 1 (E), f (0) = 0 and
A = Df (0).

1. Suppose that the matrix A is written in the form

 
A = [ P 0 ]
    [ 0 Q ]

where the eigenvalues of P have negative real part and the eigenvalues of Q have positive
real part.
2. Let ϕt be the flow of the nonlinear system and write the solution

 
x (t, x0 ) = ϕt (x0 ) = [ y (t, y0 , z0 ) ]
                        [ z (t, y0 , z0 ) ]

where

 
x0 = [ y0 ] ∈ Rn
     [ z0 ]

y0 ∈ E s , the stable subspace of A and z0 ∈ E u , the unstable subspace of A.


3. Define the functions

Ỹ (y0 , z0 ) = y (1, y0 , z0 ) − eP y0

and

Z̃ (y0 , z0 ) = z (1, y0 , z0 ) − eQ z0

Then Ỹ(0) = Z̃(0) = DỸ(0) = DZ̃(0) = 0. And since f ∈ C 1 (E), Ỹ (y0 , z0 ) and
Z̃ (y0 , z0 ) are continuously differentiable. Thus,

‖DỸ (y0 , z0 )‖ ≤ a

and

‖DZ̃ (y0 , z0 )‖ ≤ a

on the compact set |y0 |² + |z0 |² ≤ s0². The constant a can be taken as small as we like by
choosing s0 sufficiently small. We let Y (y0 , z0 ) and Z (y0 , z0 ) be smooth functions which
are equal to Ỹ (y0 , z0 ) and Z̃ (y0 , z0 ) for |y0 |² + |z0 |² ≤ (s0 /2)² and zero for |y0 |² + |z0 |² ≥
s0². Then by the mean value theorem

|Y (y0 , z0 )| ≤ a √(|y0 |² + |z0 |²) ≤ a (|y0 | + |z0 |)

and

|Z (y0 , z0 )| ≤ a √(|y0 |² + |z0 |²) ≤ a (|y0 | + |z0 |)

∀ (y0 , z0 ) ∈ Rn . We next let B = eP and C = eQ . Then assuming that we have carried


out the normalization, we have

b = ‖B‖ < 1 and c = ‖C −1 ‖ < 1

4. For

 
x = [ y ] ∈ Rn
    [ z ]

define the transformations

 
L(y, z) = [ By ]
          [ Cz ]

and

 
T (y, z) = [ By + Y(y, z) ]
           [ Cz + Z(y, z) ]

i.e. L(x) = eA x and locally T (x) = ϕ1 (x).

Lemma: There exists a homeomorphism H of an open set U containing the origin


onto an open set V containing the origin such that

H ◦ T = L ◦ H.

Proof: We establish this lemma using the method of successive approximations. For
x ∈ Rn , let

 
H(x) = [ Φ(y, z) ]
       [ Ψ(y, z) ]

Then H ◦ T = L ◦ H is equivalent to the pair of equations



BΦ(y, z) = Φ(By + Y(y, z), Cz + Z(y, z))


CΨ(y, z) = Ψ(By + Y(y, z), Cz + Z(y, z))

First of all, define the successive approximations for the second equation by

Ψ0 (y, z) = z

Ψk+1 (y, z) = C −1 Ψk (By + Y(y, z), Cz + Z(y, z))

It then follows by an easy induction argument that for k = 0, 1, 2, . . ., the Ψk (y, z) are
continuous and satisfy Ψk (y, z) = z for |y| + |z| ≥ 2s0 . We next prove by induction that
for j = 1, 2, . . .

|Ψj (y, z) − Ψj−1 (y, z)| ≤ M r^j (|y| + |z|)^δ

where r = c[2 max(a, b, c)]^δ with δ ∈ (0, 1) chosen sufficiently small so that r < 1 (which
is possible since c < 1) and M = ac(2s0 )^(1−δ) /r. First of all, for j = 1,

|Ψ1 (y, z) − Ψ0 (y, z)| = |C −1 Ψ0 (By + Y(y, z), Cz + Z(y, z)) − z|
                        = |C −1 (Cz + Z(y, z)) − z|
                        = |C −1 Z(y, z)| ≤ ‖C −1 ‖ |Z(y, z)|
                        ≤ ca(|y| + |z|) ≤ M r(|y| + |z|)^δ

since Z(y, z) = 0 for |y| + |z| ≥ 2s0 . And then assuming that the induction hypothesis
holds for j = 1, . . . , k we have

|Ψk+1 (y, z) − Ψk (y, z)| = |C −1 Ψk (By + Y(y, z), Cz + Z(y, z))
                              − C −1 Ψk−1 (By + Y(y, z), Cz + Z(y, z))|
    ≤ ‖C −1 ‖ |Ψk (By + Y(y, z), Cz + Z(y, z)) − Ψk−1 (By + Y(y, z), Cz + Z(y, z))|
    ≤ c M r^k [ |By + Y(y, z)| + |Cz + Z(y, z)| ]^δ
    ≤ c M r^k [ b|y| + 2a(|y| + |z|) + c|z| ]^δ
    ≤ c M r^k [ 2 max(a, b, c) ]^δ (|y| + |z|)^δ
    = M r^(k+1) (|y| + |z|)^δ

Thus, just as in the proof of the fundamental theorem for non-linear systems and the stable
manifold theorem, {Ψk (y, z)} is a Cauchy sequence of continuous functions which converges
uniformly as k → ∞ to a continuous function Ψ(y, z). Also, Ψ(y, z) = z for |y|+|z| ≥ 2s0 .
Taking limits in the recursion defining Ψk shows that Ψ(y, z) is a solution of the second
equation.

The first equation in the proof above can be written as

B −1 Φ(y, z) = Φ(B −1 y + Y1 (y, z), C −1 z + Z1 (y, z))

where the functions Y1 and Z1 are defined by the inverse of T (which exists if the constant
a is sufficiently small, i.e., if s0 is sufficiently small) as follows:

 
T −1 (y, z) = [ B −1 y + Y1 (y, z) ]
              [ C −1 z + Z1 (y, z) ]

Then this equation can be solved for Φ(y, z) by the method of successive approximations
exactly as above with Φ0 (y, z) = y since b = ‖B‖ < 1. We therefore obtain the continuous
map

 
H(y, z) = [ Φ(y, z) ]
          [ Ψ(y, z) ] .

And it follows that H is a homeomorphism of Rn onto Rn .


5. We now let H0 be the homeomorphism defined above and let Lt and T t be the one-
parameter families of transformations defined by

Lt (x0 ) = eAt x0 and T t (x0 ) = ϕt (x0 ) .

Define

H = ∫_0^1 L^{−s} H0 T^s ds

It then follows using the above lemma that there exists a neighborhood of the origin for
which

L^t H = ∫_0^1 L^{t−s} H0 T^{s−t} ds T^t
      = ∫_{−t}^{1−t} L^{−s} H0 T^s ds T^t
      = ( ∫_{−t}^0 L^{−s} H0 T^s ds + ∫_0^{1−t} L^{−s} H0 T^s ds ) T^t
      = ∫_0^1 L^{−s} H0 T^s ds T^t = H T^t

since by the above lemma H0 = L−1 H0 T which implies that

∫_{−t}^0 L^{−s} H0 T^s ds = ∫_{−t}^0 L^{−s−1} H0 T^{s+1} ds = ∫_{1−t}^1 L^{−s} H0 T^s ds

Thus, H ◦ T t = Lt H or equivalently

H ◦ ϕt (x0 ) = eAt H (x0 )

and it can be shown that H is a homeomorphism on Rn . This completes the outline of


the proof of the Hartman Grobman Theorem.

Theorem: Let E be an open subset of Rn containing the point x0 , let f ∈ C 2 (E),


and let ϕt be the flow of the nonlinear system. Suppose that f (x0 ) = 0 and that all
of the eigenvalues λ1 , . . . , λn of the matrix A = Df (x0 ) have negative (or positive)
real part. Then there exists a C 1 -diffeomorphism H of a neighborhood U of x0 onto
an open set V containing the origin such that for each x ∈ U there is an open
interval I(x) ⊂ R containing zero such that ∀ x ∈ U and t ∈ I(x)

H ◦ ϕt (x) = eAt H(x)

2.9 Saddles, Nodes, Foci and Centers

In this section, we are not going to explore intricate theoretical details, but just a quick
review of the topological definition of these key terms.

Center: The origin is called a center for the nonlinear system if there exists a δ > 0 such
that every solution curve of the nonlinear system in the deleted neighborhood Nδ (0)\{0}
is a closed curve with 0 in its interior.

Center-focus: The origin is known as a center-focus for the nonlinear system if there
exists a sequence of closed curves Γn , with Γn+1 in the interior of Γn , such that Γn → 0 as
n → ∞ and such that every trajectory between Γn and Γn+1 spirals towards Γn or Γn+1
as t → ±∞.

Stable focus: The origin is known as a stable focus for the nonlinear system if there
exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0 , θ0 ) → 0 and |θ(t, r0 , θ0 )| → ∞
as t → ∞.

Unstable focus: The origin is known as an unstable focus for the nonlinear system if there
exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0 , θ0 ) → 0 and |θ(t, r0 , θ0 )| → ∞
as t → −∞.

Stable node: The origin is known as a stable node for the nonlinear system if there
exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0 , θ0 ) → 0 as t → ∞ and
limt→∞ θ(t, r0 , θ0 ) exists.

Unstable node: The origin is known as an unstable node for the nonlinear system if
there exists a δ > 0 such that for 0 < r0 < δ and θ0 ∈ R, r(t, r0 , θ0 ) → 0 as t → −∞ and
limt→−∞ θ(t, r0 , θ0 ) exists.

Proper node: The origin is known as a proper node if it is a node and every ray
through the origin is tangent to some trajectory of the nonlinear system.

Topological saddle: The origin is a topological saddle for a nonlinear system if there
exist two trajectories Γ1 and Γ2 which approach 0 as t → ∞ and two trajectories Γ3 and
Γ4 which approach 0 as t → −∞, and if there exists a δ > 0 such that all other
trajectories which start in the deleted neighborhood of the origin leave the δ-neighborhood
as t → ±∞. The trajectories Γ1 , Γ2 , Γ3 , Γ4 are known as separatrices.
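For linear planar systems ẋ = Ax these types can be read off directly from the eigenvalues of A. The following is a rough sketch of such a classifier for 2 × 2 matrices (by Hartman-Grobman it also describes A = Df (0) in the hyperbolic cases, but not for centers, which linearization cannot decide for nonlinear systems):

```python
import numpy as np

def portrait(A):
    """Rough eigenvalue-based phase-portrait type of the origin for x' = Ax,
    with A a 2x2 real matrix."""
    lam = np.linalg.eigvals(np.asarray(A, dtype=float))
    re, im = lam.real, lam.imag
    if np.all(np.isclose(re, 0.0)) and np.any(im != 0):
        return "center"
    if np.any(im != 0):
        return "stable focus" if np.all(re < 0) else "unstable focus"
    if re[0]*re[1] < 0:
        return "topological saddle"
    return "stable node" if np.all(re < 0) else "unstable node"

print(portrait([[0, -1], [1, 0]]))    # eigenvalues ±i       -> center
print(portrait([[-1, -2], [2, -1]]))  # eigenvalues -1 ± 2i  -> stable focus
print(portrait([[2, 0], [0, -3]]))    # eigenvalues 2, -3    -> topological saddle
```

Degenerate cases (a zero eigenvalue, or repeated eigenvalues distinguishing proper from improper nodes) are deliberately not handled in this sketch.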

3 Nonlinear Systems: Global Theory

We have seen that any nonlinear system

ẋ = f (x)

with f ∈ C 1 (E) and E an open subset of Rn , has a unique solution ϕt (x0 ), passing through

a point x0 ∈ E at time t = 0 which is defined for all t ∈ I (x0 ), the maximal interval of
existence of the solution. Furthermore, the flow ϕt of the system satisfies (i) ϕ0 (x) = x
and (ii) ϕt+s (x) = ϕt (ϕs (x)) for all x ∈ E, and the function ϕ(t, x) = ϕt (x) defines a
C 1 -map ϕ : Ω → E where Ω = {(t, x) ∈ R × E | t ∈ I(x)}.
In this chapter we define a dynamical system as a C 1 -map ϕ : R×E → E which satisfies (i)
and (ii) above. We first show that we can rescale the time in any C 1 -system (e.g., the
nonlinear system above) so that for all x ∈ E, the maximal interval of existence
I(x) = (−∞, ∞). Thus any C 1 -system, after an appropriate rescaling of the time, defines
a dynamical system ϕ : R × E → E where ϕ(t, x) = ϕt (x) is the solution of the mentioned
nonlinear system with ϕ0 (x) = x. We next consider limit sets and attractors of dynamical
systems. Besides equilibrium points and periodic orbits, a dynamical system can have
homoclinic loops or separatrix cycles as well as strange attractors as limit sets. We study
periodic orbits in some detail and give the Stable Manifold Theorem for periodic orbits as
well as several examples which illustrate the general theory in this chapter. Determining
the nature of limit sets of nonlinear systems with n ≥ 3 is a challenging problem which is
the subject of much mathematical research at this time.

3.1 Dynamical Systems and Global Existence Theorems

A dynamical system gives a functional description of the solution of a physical problem


or of the mathematical model describing the physical problem. For example, the motion
of the undamped pendulum is a dynamical system in the sense that the motion of the
pendulum is described by its position and velocity as functions of time and the initial
conditions.

Mathematically speaking, a dynamical system is a function ϕ(t, x), defined for all t ∈ R
and x ∈ E ⊂ Rn , which describes how points x ∈ E move with respect to time. We
require that the family of maps ϕt (x) = ϕ(t, x) have the properties of a flow, which have
already been defined.

Definition: A dynamical system on E is a C 1 -map

ϕ:R×E →E

where E is an open subset of Rn and if ϕt (x) = ϕ(t, x), then ϕt satisfies

(i) ϕ0 (x) = x for all x ∈ E, and,


(ii) ϕt ◦ ϕs (x) = ϕt+s (x) for all s, t ∈ R and x ∈ E.

Remark: It follows from definition that for each t ∈ R, ϕt is a C 1 map of E into E which

has a C 1 -inverse, ϕ−t ; i.e., ϕt with t ∈ R is a one-parameter family of diffeomorphisms on


E that forms a commutative group under composition.

It is easy to see that if A is an n × n matrix then the function ϕ(t, x) = eAt x defines a
dynamical system on Rn and also, for each x0 ∈ Rn , ϕ (t, x0 ) is the solution of the initial
value problem

ẋ = Ax
x(0) = x0 .
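The two axioms (i) and (ii) can be checked numerically for this linear flow ϕ(t, x) = e^{At}x. The sketch below computes the matrix exponential by eigendecomposition (valid because the chosen A is diagonalizable; the matrix itself is an illustrative assumption, not from the text):

```python
import numpy as np

def expm(M):
    # matrix exponential via eigendecomposition (M assumed diagonalizable)
    w, V = np.linalg.eig(M)
    return (V @ np.diag(np.exp(w)) @ np.linalg.inv(V)).real

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
phi = lambda t, x: expm(A*t) @ x

x0 = np.array([1.0, -1.0])
t, s = 0.7, 1.3
print(np.allclose(phi(0.0, x0), x0))                    # axiom (i):  phi_0 = id
print(np.allclose(phi(t + s, x0), phi(t, phi(s, x0))))  # axiom (ii): group property
```

The group property here is just the matrix identity e^{A(t+s)} = e^{At} e^{As}, which holds because At and As commute.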

In general, if ϕ(t, x) is a dynamical system on E ⊂ Rn , then the function

f (x) = (d/dt) ϕ(t, x) |_{t=0}

defines a C 1 -vector field on E and for each x0 ∈ E, ϕ (t, x0 ) is the solution of the initial
value problem

ẋ = f (x)
x(0) = x0 .

Furthermore, for each x0 ∈ E, the maximal interval of existence of ϕ (t, x0 ), I (x0 ) =


(−∞, ∞). Thus, each dynamical system gives rise to a C 1 -vector field f and the dynamical
system describes the solution set of the differential equation defined by this vector field.
Conversely, given a differential equation with f ∈ C 1 (E) and E an open subset of Rn , the
solution ϕ (t, x0 ) of the initial value problem with x0 ∈ E will be a dynamical system on
E if and only if for all x0 ∈ E, ϕ (t, x0 ) is defined for all t ∈ R; i.e., if and only if for
all x0 ∈ E, the maximal interval of existence I (x0 ) of ϕ (t, x0 ) is (−∞, ∞). In this case
we say that ϕ (t, x0 ) is the dynamical system on E defined by the mentioned initial value
problem.

The next theorem shows that any C 1 -vector field f defined on all of Rn leads to a dynamical
system on Rn . While the solutions ϕ (t, x0 ) of the original system may not be defined for
all t ∈ R, the time t can be rescaled along trajectories of the original system to obtain a
topologically equivalent system for which the solutions are defined for all t ∈ R.

Before stating this theorem, we generalize the notion of topological equivalent systems for
a neighborhood of the origin.

Definition: Suppose that f ∈ C 1 (E1 ) and g ∈ C 1 (E2 ) where E1 and E2 are open subsets
of Rn . Then the two autonomous systems of differential equations

ẋ = f (x)

and

ẋ = g(x)
are said to be topologically equivalent if there is a homeomorphism H : E1 → E2 which
maps trajectories of the first differential equation onto trajectories of the second one and
preserves their orientation by time. In this case, the vector fields f and g are also said
to be topologically equivalent. If E = E1 = E2 then the two systems are said to be
topologically equivalent on E and the vector fields f and g are said to be topologically
equivalent on E.

Global Existence Theorem: For f ∈ C 1 (Rn ) and for each x0 ∈ Rn , the initial
value problem

ẋ = f (x)/(1 + |f (x)|)
x(0) = x0

has a unique solution x(t) defined for all t ∈ R; i.e., the modified problem defines a
dynamical system on Rn ; furthermore, the modified system is topologically equivalent
to the original system ẋ = f (x) on Rn .

Remark: The original system and the modified one in the theorem are topologically
equivalent on Rn since the time t along the solutions x(t) of the original system has
simply been rescaled according to the formula

τ = ∫_0^t [1 + |f (x(s))|] ds

i.e., the homeomorphism H is simply the identity on Rn . The solution x(t) of the original
system, with respect to the new time τ , then satisfies

dx/dτ = (dx/dt) / (dτ /dt) = f (x)/(1 + |f (x)|)

i.e., x(t(τ )) is the solution of the modified system where t(τ ) is the inverse of the strictly
increasing function τ (t) defined by the rescaling above. The function τ (t) maps the maximal
interval of existence (α, β) of the solution x(t) of the original system one-to-one and onto
(−∞, ∞), the maximal interval of existence of the modified system.

Proof: It is not difficult to show that if f ∈ C 1 (Rn ) then the function

f /(1 + |f |) ∈ C 1 (Rn )

For x0 ∈ Rn , let x(t) be the solution of the modified initial value problem on its maximal
interval of existence ( α, β ). So, x(t) satisfies the integral equation (Verify!!)

x(t) = x0 + ∫_0^t f (x(s))/(1 + |f (x(s))|) ds

for all t ∈ (α, β) and since |f (x)|/(1 + |f (x)|) ≤ 1, it follows that

|x(t)| ≤ |x0 | + ∫_0^{|t|} ds = |x0 | + |t|

for all t ∈ (α, β). Suppose that β < ∞. Then

|x(t)| ≤ |x0 | + β

for all t ∈ [0, β); i.e., for all t ∈ [0, β), the solution of the modified system through the
point x0 at time t = 0 is contained in the compact set

K = {x ∈ Rn | |x| ≤ |x0 | + β} ⊂ Rn .

But then, since the solution is contained in the compact set K, a previously proven
corollary implies that β = ∞, a contradiction. Therefore, β = ∞. A similar proof


shows that α = −∞. Thus, for all x0 ∈ Rn , the maximal interval of existence of the
solution x(t) of the modified initial value problem, (α, β) = (−∞, ∞).

An Interesting Example to Note: For x0 > 0 the initial value problem

1
ẋ =
2x
x(0) = x0

has the unique solution x(t) = √(t + x0²) defined on its maximal interval of existence
I (x0 ) = (−x0², ∞). The function f (x) = 1/(2x) ∈ C 1 (E) where E = (0, ∞). We have
x(t) → 0 ∈ Ė as t → −x0². The related initial value problem

ẋ = (1/(2x)) / (1 + 1/(2x)) = 1/(2x + 1)
x(0) = x0

has the unique solution

x(t) = −1/2 + √(t + (x0 + 1/2)²)

defined on its maximal interval of existence I (x0 ) = (−(x0 + 1/2)², ∞). We see that in
this case I (x0 ) ≠ R.
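The closed-form solution of the rescaled problem can be verified numerically by a finite-difference check of ẋ = 1/(2x + 1), which is a minimal sketch rather than a proof:

```python
import numpy as np

# x(t) = -1/2 + sqrt(t + (x0 + 1/2)^2) should satisfy x' = 1/(2x + 1)
x0 = 1.0
x = lambda t: -0.5 + np.sqrt(t + (x0 + 0.5)**2)

print(x(0.0))                                    # initial condition: x(0) = x0
for t in (0.0, 1.0, 5.0):
    lhs = (x(t + 1e-6) - x(t - 1e-6)) / 2e-6     # numerical x'(t)
    rhs = 1.0/(2.0*x(t) + 1.0)
    print(t, lhs, rhs)                           # the two columns agree
```

The check also makes the point of the example concrete: the formula under the square root turns negative at t = −(x0 + 1/2)², so the maximal interval of existence is still a proper subset of R.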

However, a slightly more subtle rescaling of the time along trajectories of the original
initial value problem does lead to a dynamical system equivalent to the original one even
when E is a proper subset of Rn . This idea is due to Vinograd.

Theorem: Suppose that E is an open subset of Rn and that f ∈ C 1 (E). Then


there is a function F ∈ C 1 (E) such that

ẋ = F(x)
defines a dynamical system on E and such that the new dynamical system is topo-
logically equivalent to the original one on E.

Proof: First of all, as in the global existence theorem, the function

g(x) = f (x)/(1 + |f (x)|) ∈ C 1 (E)

and, since |g(x)| ≤ 1, the original system and the modified one are topologically
equivalent on E. Furthermore, solutions x(t) of the modified system satisfy

∫_0^t |ẋ (t′)| dt′ = ∫_0^t |g (x (t′))| dt′ ≤ |t|

i.e., for finite t, the trajectory defined by x(t) has finite arc length. Let (α, β) be the
maximal interval of existence of x(t) and suppose that β < ∞. Then since the arc length

of the half-trajectory defined by x(t) for t ∈ (0, β) is finite, the half-trajectory defined by
x(t) for t ∈ [0, β) must have a limit point

x1 = lim_{t→β−} x(t) ∈ Ė

Now define the closed set K = Rn \ E and let

G(x) = d(x, K)/(1 + d(x, K))

where d(x, y) denotes the distance between x and y in Rn and

d(x, K) = inf_{y∈K} d(x, y)

i.e., for x ∈ E, d(x, K) is the distance of x from the boundary ∂E of E. Then the function
G ∈ C 1 (Rn ), 0 ≤ G(x) ≤ 1 and for x ∈ K, G(x) = 0. Let F(x) = g(x)G(x). Then
F ∈ C 1 (E) and the system, ẋ = F(x), is topologically equivalent to our initial modification
on E since we have simply rescaled the time along trajectories of that initially modified
system; i.e., the homeomorphism H is simply the identity on E. Furthermore, the system
ẋ = F(x) has a bounded right-hand side and therefore its trajectories have finite arc-length
for finite t. To prove that the modification by Vinograd defines a dynamical system on
E, it suffices to show that all half-trajectories of the aforementioned modification which
(a) start in E, (b) have finite arc length s0 , and (c) terminate at a limit point x1 ∈ Ė are
defined for all t ∈ [0, ∞). Along any solution x(t) of that modification, ds/dt = |ẋ(t)|
and hence
hence

t = ∫_0^s ds′ / |F (x (t (s′)))|

where t(s) is the inverse of the strictly increasing function s(t) defined by

s = ∫_0^t |F (x (t′))| dt′

for s > 0. But for each point x = x(t(s)) on the half-trajectory we have

G(x) = d(x, K)/(1 + d(x, K)) < d(x, K) = inf_{y∈K} d(x, y) ≤ d (x, x1 ) ≤ s0 − s

And therefore since 0 < |g(x)| ≤ 1, we have

t ≥ ∫_0^s ds′/(s0 − s′) = log( s0 /(s0 − s) )

and hence t → ∞ as s → s0 ; i.e., the half-trajectory defined by x(t) is defined for all
t ∈ [0, ∞); i.e., β = ∞. Similarly, it can be shown that α = −∞ and hence, the
modified system defines a dynamical system on E which is topologically equivalent to the
unmodified original system on E.
For f ∈ C 1 (E), E an open subset of Rn , the second theorem implies that there is no loss
in generality in assuming that the original system defines a dynamical system ϕ (t, x0 ) on
E. Throughout the remainder of these notes we therefore make this assumption; i.e., we
assume that for all x0 ∈ E, the maximal interval of existence I (x0 ) = (−∞, ∞). In the
next section, we then go on to discuss the limit sets of trajectories x(t) of the original
system as t → ±∞. However, we first present two more global existence theorems which
are of some interest.

Theorem: Suppose that f ∈ C 1 (Rn ) and that f (x) satisfies the global Lipschitz
condition

|f(x) − f(y)| ≤ M |x − y|
for all x, y ∈ Rn . Then for x0 ∈ Rn , the initial value problem (1) has a unique
solution x(t) defined for all t ∈ R.

Proof: Let x(t) be the solution of the original initial value problem on its maximal
interval of existence (α, β). Then using the fact that d|x(t)|/dt ≤ |ẋ(t)| and the triangle
inequality,

d/dt |x(t) − x₀| ≤ |ẋ(t)| = |f(x(t))|
              ≤ |f(x(t)) − f(x₀)| + |f(x₀)|
              ≤ M |x(t) − x₀| + |f(x₀)|

Thus, if we assume that β < ∞, then the function g(t) = |x(t) − x0 | satisfies

g(t) = ∫₀ᵗ (dg(s)/ds) ds ≤ |f(x₀)| β + M ∫₀ᵗ g(s) ds

for all t ∈ (0, β). It then follows from Gronwall’s Lemma that

|x(t) − x₀| ≤ β |f(x₀)| e^(Mβ)

for all t ∈ [0, β); i.e., the trajectory of the original system through the point x0 at time
t = 0 is contained in the compact set


K = {x ∈ Rⁿ : |x − x₀| ≤ β |f(x₀)| e^(Mβ)} ⊂ Rⁿ.

But then by one of the corollaries we have already proven, it follows that β = ∞, a
contradiction. Therefore, β = ∞ and it can similarly be shown that α = −∞. Thus,
for all x0 ∈ Rn , the maximal interval of existence of the solution x(t) of the initial value
problem, I (x0 ) = (−∞, ∞).
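The compactness bound in this proof can be checked numerically on a concrete example. Below is a minimal sketch (not from the notes; the field f(x) = sin x, the RK4 integrator, and all parameter values are my own illustrative choices): sin is globally Lipschitz with M = 1, so the trajectory of ẋ = sin x through x₀ = 1 must remain in the compact set K = {|x − x₀| ≤ β|f(x₀)|e^(Mβ)} up to time β.

```python
import math

def f(x):
    # f(x) = sin x is globally Lipschitz with constant M = 1,
    # since |sin x - sin y| <= |x - y|.
    return math.sin(x)

def rk4(f, x0, t_end, n):
    """Classical 4th-order Runge-Kutta for the scalar ODE x' = f(x)."""
    h, x, xs = t_end / n, x0, [x0]
    for _ in range(n):
        k1 = f(x)
        k2 = f(x + 0.5 * h * k1)
        k3 = f(x + 0.5 * h * k2)
        k4 = f(x + h * k3)
        x += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs.append(x)
    return xs

x0, beta, M = 1.0, 5.0, 1.0
xs = rk4(f, x0, beta, 2000)

# The bound from the proof: |x(t) - x0| <= beta * |f(x0)| * e^(M*beta) on [0, beta].
bound = beta * abs(f(x0)) * math.exp(M * beta)
assert all(abs(x - x0) <= bound for x in xs)
```

Here the bound (≈ 624) is far from tight: the actual trajectory increases monotonically toward the equilibrium at π, so |x(t) − x₀| never exceeds π − 1; the point of the bound is only that it confines the trajectory to a compact set, forcing β = ∞.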

If f ∈ C¹(M) where M is a compact subset of Rⁿ, then f satisfies a global Lipschitz condition on M and we have a result similar to the above theorem for x₀ ∈ M. This result has been extended to compact manifolds by Chillingworth.

Theorem: Let M be a compact manifold and let f ∈ C¹(M). Then for x₀ ∈ M, the initial value problem has a unique solution x(t) defined for all t ∈ R.

3.2 Limit Sets

Consider the autonomous system


ẋ = f (x)

with f ∈ C 1 (E) where E is an open subset of Rn . In the previous section, we saw that
there is no loss in generality in assuming that the nonlinear system defines a dynamical
system ϕ(t, x) on E. For x ∈ E, the function ϕ(·, x) : R → E defines a solution curve,
trajectory, or orbit of the nonlinear system through the point x0 in E. If we identify the
function ϕ(·, x) with its graph, we can think of a trajectory through the point x0 ∈ E as
a motion along the curve

Γx0 = {x ∈ E | x = ϕ (t, x0 ) , t ∈ R}

defined by the nonlinear system. We shall also refer to Γx0 as the trajectory of the
nonlinear system through the point x0 at time t = 0. If the point x0 plays no role in the
discussion, we simply denote the trajectory by Γ and draw the curve Γ in the subset E
of the phase space Rn with an arrow indicating the direction of the motion along Γ with

increasing time. By the positive half-trajectory through the point x0 ∈ E, we mean the
motion along the curve

Γ+x₀ = {x ∈ E | x = ϕ(t, x₀), t ≥ 0};

the negative half-trajectory, Γ−x₀, is similarly defined. Any trajectory Γ = Γ+ ∪ Γ−.

Figure 5: A trajectory Γ of the initial value problem which approaches the ω-limit point p ∈ E as t → ∞.

Definition: A point p ∈ E is an ω-limit point of the trajectory ϕ(·, x) of the nonlinear system if there is a sequence tₙ → ∞ such that

lim_{n→∞} ϕ(tₙ, x) = p

Similarly, if there is a sequence tn → −∞ such that

lim_{n→∞} ϕ(tₙ, x) = q

and the point q ∈ E, then the point q is called an α-limit point of the trajectory ϕ(·, x)
of the initial nonlinear system. The set of all ω-limit points of a trajectory Γ is called the
ω-limit set of Γ and it is denoted by ω(Γ). The set of all α-limit points of a trajectory
Γ is called the α-limit set of Γ and it is denoted by α(Γ). The set of all limit points of
Γ, α(Γ) ∪ ω(Γ) is called the limit set of Γ.

Theorem: The α- and ω-limit sets of a trajectory Γ of the initial nonlinear system, α(Γ) and ω(Γ), are closed subsets of E, and if Γ is contained in a compact subset of Rⁿ, then α(Γ) and ω(Γ) are non-empty, connected, compact subsets of E.

Proof: It follows from Definition 1 that ω(Γ) ⊂ E. In order to show that ω(Γ) is a closed
subset of E, we let pn be a sequence of points in ω(Γ) with pn → p ∈ Rn and show that
p ∈ ω(Γ). Let x₀ ∈ Γ. Then since pₙ ∈ ω(Γ), it follows that for each n ∈ N there is a sequence t_k^(n) → ∞ as k → ∞ such that

lim_{k→∞} ϕ(t_k^(n), x₀) = pₙ

Furthermore, we may assume that t_k^(n+1) > t_k^(n), since otherwise we can choose a subsequence of t_k^(n) with this property. The above equation implies that for all n ≥ 2 there is a sequence of integers K(n) > K(n − 1) such that for k ≥ K(n),

|ϕ(t_k^(n), x₀) − pₙ| < 1/n

Let tₙ = t_K(n)^(n). Then tₙ → ∞ and, by the triangle inequality,

|ϕ(tₙ, x₀) − p| ≤ |ϕ(tₙ, x₀) − pₙ| + |pₙ − p| ≤ 1/n + |pₙ − p| → 0

as n → ∞. Thus p ∈ ω(Γ).
If Γ ⊂ K, a compact subset of Rⁿ, and ϕ(tₙ, x₀) → p ∈ ω(Γ), then p ∈ K since ϕ(tₙ, x₀) ∈ Γ ⊂ K and K is compact. Thus, ω(Γ) ⊂ K, and therefore ω(Γ) is compact since a closed subset of a compact set is compact. Furthermore, ω(Γ) ≠ ∅ since the sequence of points ϕ(n, x₀) ∈ K contains a convergent subsequence which converges to a point in ω(Γ) ⊂ K. Finally, suppose that ω(Γ) is not connected. Then there exist two
point in ω(Γ) ⊂ K. Finally, suppose that ω(Γ) is not connected. Then there exist two
nonempty, disjoint, closed sets A and B such that ω(Γ) = A ∪ B. Since A and B are both
bounded, they are a finite distance δ apart, where the distance from A to B is

d(A, B) = inf_{x ∈ A, y ∈ B} |x − y|

Since the points of A and B are ω-limit points of Γ, there exists arbitrarily large t such
that ϕ (t, x0 ) are within δ/2 of A and there exists arbitrarily large t such that the distance
of ϕ (t, x0 ) from A is greater than δ/2. Since the distance d (ϕ (t, x0 ) , A) of ϕ (t, x0 ) from
A is a continuous function of t, it follows that there must exist a sequence tn → ∞ such
that d (ϕ (tn , x0 ) , A) = δ/2. Since {ϕ (tn , x0 )} ⊂ K there is a subsequence converging to
a point p ∈ ω(Γ) with d(p, A) = δ/2. But then d(p, B) ≥ d(A, B) − d(p, A) = δ/2, which implies that p ∉ A and p ∉ B; i.e., p ∉ ω(Γ), a contradiction. Thus, ω(Γ) is connected.
A similar proof serves to establish these same results for α(Γ).

Theorem: If p is an ω-limit point of a trajectory Γ of the initial nonlinear system, then all other points of the trajectory ϕ(·, p) of the initial nonlinear system through the point p are also ω-limit points of Γ; i.e., if p ∈ ω(Γ) then Γp ⊂ ω(Γ), and similarly if p ∈ α(Γ) then Γp ⊂ α(Γ).

Proof: Let p ∈ ω(Γ) where Γ is the trajectory ϕ (·, x0 ) of the initial nonlinear system
through the point x0 ∈ E. Let q be a point on the trajectory ϕ(·, p) of the initial nonlinear
system through the point p; i.e., q = ϕ(t̃, p) for some t̃ ∈ R. Since p is an ω-limit point of the trajectory ϕ(·, x₀), there is a sequence tₙ → ∞ such that ϕ(tₙ, x₀) → p. Thus we have

ϕ(tₙ + t̃, x₀) = ϕ(t̃, ϕ(tₙ, x₀)) → ϕ(t̃, p) = q

And since tn + t̃ → ∞, the point q is an ω-limit point of ϕ (·, x0 ). A similar proof holds
when p is an α-limit point of Γ and this completes the proof of the theorem.

It follows from this theorem that ∀ points p ∈ ω(Γ), ϕt (p) ∈ ω(Γ) ∀ t ∈ R; i.e., ϕt (ω(Γ)) ⊂
ω(Γ). Thus, by the definition of invariance, we have the following result.

Corollary: α(Γ) and ω(Γ) are invariant with respect to the flow ϕt of the initial
nonlinear system.

The α - and ω-limit sets of a trajectory Γ of the initial nonlinear system are thus closed
invariant subsets of E. In the next definition, a neighborhood of a set A is any open set
U containing A and we say that x(t) → A as t → ∞ if the distance d(x(t), A) → 0 as
t → ∞.

3.3 Attractors

A closed invariant set A ⊂ E is called an attracting set of the initial nonlinear system if
there is some neighborhood U of A such that ∀ x ∈ U, ϕt (x) ∈ U ∀ t ≥ 0 and ϕt (x) → A
as t → ∞. An attractor of the initial nonlinear system is an attracting set which contains
a dense orbit.

Note that any equilibrium point x0 of the initial nonlinear system is its own α and ω-limit
set since ϕ (t, x0 ) = x0 ∀ t ∈ R. And if a trajectory Γ of the initial nonlinear system
has a unique ω-limit point x0 , then by the above Corollary, x0 is an equilibrium point of
the initial nonlinear system. A stable node or focus is the ω-limit set of every trajectory in some neighborhood of the point, and a stable node or focus of the initial nonlinear system is an attractor of the initial nonlinear system. However, not every ω-limit set of
a trajectory of the initial nonlinear system is an attracting set of the initial nonlinear
system; for example, a saddle x0 of a planar system is the ω-limit set of three trajectories

in a neighborhood N(x₀), but no other trajectories through points in N(x₀) approach x₀ as t → ∞.

If q is any regular point in α(Γ) or ω(Γ) then the trajectory through q is called a limit orbit
of Γ. Thus, by the second theorem, we see that α(Γ) and ω(Γ) consist of equilibrium
points and limit orbits of the initial nonlinear system. We now consider some specific
examples of limit sets and attractors.

Circular Attractor

Consider the system

ẋ = −y + x(1 − x² − y²)
ẏ = x + y(1 − x² − y²).

In polar coordinates, we have


ṙ = r(1 − r²)
θ̇ = 1.

We see that the origin is an equilibrium point of this system; the flow spirals around the
origin in the counter-clockwise direction; it spirals outward for 0 < r < 1 since ṙ > 0 for
0 < r < 1; and it spirals inward for r > 1 since ṙ < 0 for r > 1. The counter-clockwise
flow on the unit circle describes a trajectory Γ0 of the initial nonlinear system since ṙ = 0
on r = 1. The trajectory through the point (cos θ0 , sin θ0 ) on the unit circle at t = 0 is
given by x(t) = (cos(t + θ₀), sin(t + θ₀))ᵀ. The phase portrait for this system is shown in
the figure. The trajectory Γ0 is called a stable limit cycle.

Figure 6: A stable limit cycle Γ0 which is an attractor of the initial nonlinear system.
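The convergence to the limit cycle is easy to confirm numerically. A minimal sketch (the RK4 integrator, step size, time horizon, and tolerance below are my own illustrative choices, not part of the notes): starting both inside and outside the unit circle, r(t) approaches 1.

```python
import math

def vf(x, y):
    # x' = -y + x(1 - x^2 - y^2),  y' = x + y(1 - x^2 - y^2)
    s = 1.0 - x * x - y * y
    return -y + x * s, x + y * s

def rk4_step(x, y, h):
    """One classical RK4 step for the planar system."""
    k1x, k1y = vf(x, y)
    k2x, k2y = vf(x + 0.5 * h * k1x, y + 0.5 * h * k1y)
    k3x, k3y = vf(x + 0.5 * h * k2x, y + 0.5 * h * k2y)
    k4x, k4y = vf(x + h * k3x, y + h * k3y)
    return (x + (h / 6.0) * (k1x + 2 * k2x + 2 * k3x + k4x),
            y + (h / 6.0) * (k1y + 2 * k2y + 2 * k3y + k4y))

final_radii = []
for r0 in (0.1, 3.0):                # start inside and outside the unit circle
    x, y = r0, 0.0
    for _ in range(20000):           # integrate to t = 20 with h = 0.001
        x, y = rk4_step(x, y, 0.001)
    final_radii.append(math.hypot(x, y))

# Both trajectories end up on the stable limit cycle r = 1.
assert all(abs(r - 1.0) < 1e-4 for r in final_radii)
```

This matches the phase-portrait picture: Γ₀ is the ω-limit set of every trajectory except the equilibrium at the origin.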

Spherical Attractor

The system

ẋ = −y + x(1 − z² − x² − y²)
ẏ = x + y(1 − z² − x² − y²)
ż = 0

has the unit two-dimensional sphere S 2 together with that portion of the z-axis outside
S 2 as an attracting set. Each plane z = z0 is an invariant set and for |z0 | < 1 the ω-limit
set of any trajectory not on the z-axis is a stable cycle on S 2 .

Figure 7: A dynamical system with S 2 as attracting set



Cylindrical Attractor

The system

ẋ = −y + x(1 − x² − y²)
ẏ = x + y(1 − x² − y²)
ż = α
has the z-axis and the cylinder x2 + y 2 = 1 as invariant sets. The cylinder is an attracting
set.

Figure 8: A dynamical system with the cylinder as an attracting set

Toroidal Attractor

If in the previous example we identify the points (x, y, 0) and (x, y, 2π) in the planes z = 0
and z = 2π, we get a flow in R3 with a two-dimensional invariant torus T 2 as an attracting
set. The z-axis gets mapped onto an unstable cycle Γ. And if α is irrational, then the torus T² is an attractor and it is the ω-limit set of every trajectory except the cycle Γ.
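The role of irrationality can be seen on the cross-section θ = 0 of the torus: each revolution takes time 2π, during which z advances by 2πα (mod 2π), so the section points undergo a rigid rotation of the circle. A small numerical sketch (α = √2, the number of returns, and the gap tolerance are my own illustrative choices):

```python
import math

alpha = math.sqrt(2)                 # an irrational frequency ratio
two_pi = 2 * math.pi

# Section points of the torus flow on theta = 0: the n-th return has
# z = n * 2*pi*alpha (mod 2*pi), a rigid irrational rotation of the circle.
N = 500
points = sorted((n * two_pi * alpha) % two_pi for n in range(N))

# Largest gap between consecutive section points (wrapping around the circle).
gaps = [b - a for a, b in zip(points, points[1:])]
gaps.append(points[0] + two_pi - points[-1])

# For irrational alpha the orbit is dense: the gaps shrink as N grows.
assert max(gaps) < 0.1
```

For rational α the section points are eventually periodic and the gaps never shrink below a fixed size, which is why the dense-orbit property of the attractor fails in that case.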

Figure 9: A dynamical system with an invariant torus as an attracting set.

Lorenz System

The original work of Lorenz in 1963 as well as the more recent work of Sparrow indicates
that for certain values of the parameters σ, ρ and β, the system

ẋ = σ(y − x)
ẏ = ρx − y − xz
ż = −βz + xy
has a strange attracting set. For example, for σ = 10, ρ = 28 and β = 8/3, a single trajectory of this system is shown in the figure along with a "branched surface" S. The
attractor A of this system is made up of an infinite number of branched surfaces S which
are interleaved and which intersect; however, the trajectories of this system in A do not
intersect but move from one branched surface to another as they circulate through the
apparent branch. The numerical results and the related theoretical work indicate that
the closed invariant set A contains

(i) a countable set of periodic orbits of arbitrarily large period,

(ii) an uncountable set of nonperiodic motions and

(iii) a dense orbit.

The attracting set A having these properties is referred to as a strange attractor.
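Property (ii) reflects sensitive dependence on initial conditions, which is easy to observe numerically. A hedged sketch (the RK4 integrator, step size, perturbation size, and thresholds are my own illustrative choices, not from the notes): two trajectories started 10⁻⁶ apart separate by orders of magnitude, while each remains bounded near the attractor.

```python
def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # The Lorenz vector field with the classical chaotic parameters.
    x, y, z = s
    return (sigma * (y - x), rho * x - y - x * z, x * y - beta * z)

def rk4_step(s, h):
    """One classical RK4 step for the 3-dimensional system."""
    def shift(u, v, c):
        return tuple(ui + c * vi for ui, vi in zip(u, v))
    k1 = lorenz(s)
    k2 = lorenz(shift(s, k1, h / 2))
    k3 = lorenz(shift(s, k2, h / 2))
    k4 = lorenz(shift(s, k3, h))
    return tuple(si + (h / 6.0) * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-6)           # a tiny perturbation of the initial condition
h = 0.001
for _ in range(20000):               # integrate both trajectories to t = 20
    a, b = rk4_step(a, h), rk4_step(b, h)

sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
assert sep > 1e-3                    # the separation has grown by orders of magnitude
assert all(abs(c) < 100 for c in a)  # yet each trajectory stays bounded
```

The exponential divergence of nearby trajectories, combined with the boundedness of the attracting set, is exactly what makes the nonperiodic motions in (ii) possible.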



Figure 10: A trajectory Γ of the Lorenz system and the corresponding branched surface
S.

Halvorsen Attractor

This is another famous strange attractor, governed by the differential equations

ẋ = ax − 4y − 4z − y²
ẏ = ay − 4z − 4x − z²
ż = az − 4x − 4y − x²

For different values of the parameter a, different results are obtained.
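One structural feature of these equations is their cyclic symmetry: permuting (x, y, z) → (y, z, x) permutes the components of the vector field in the same way, so the attractor is invariant under a 120° rotation about the axis x = y = z. A minimal check (the value a = 1.4 is my own assumption, a commonly quoted chaotic choice, since the notes leave a unspecified):

```python
def halvorsen(s, a=1.4):
    # The Halvorsen vector field; a = 1.4 is an assumed (commonly used) value.
    x, y, z = s
    return (a * x - 4 * y - 4 * z - y * y,
            a * y - 4 * z - 4 * x - z * z,
            a * z - 4 * x - 4 * y - x * x)

# Cyclic symmetry: evaluating the field at the cycled point (y, z, x)
# gives exactly the cycled components of the field at (x, y, z).
x, y, z = 0.3, -1.2, 0.7
fx, fy, fz = halvorsen((x, y, z))
assert halvorsen((y, z, x)) == (fy, fz, fx)
```

Because each component is obtained from the previous one by cycling the variables, the check holds exactly, not just approximately.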

Figure 11: Halvorsen Attractor

Note: Plotting Attractors in GeoGebra

GeoGebra is a great tool for observing all the attractors and dynamical systems we have discussed; it offers robust customization and a free choice of the number of observed particles and parameters.
## Parameters, you can modify them according to the equation ##
d = 10
b = 8/3
p = 28

## System of differential equations: Lorenz attractor, you can go for the others as well ##
x'(t,x,y,z) = d * (y - x)
y'(t,x,y,z) = x * (p - z) - y
z'(t,x,y,z) = x * y - b * z

## Initial condition ##
x0 = 1
y0 = 1
z0 = 1

## Numerical solution ##
NSolveODE({x', y', z'}, 0, {x0, y0, z0}, 20)

## Note ##
# The command NSolveODE() creates three curves
# containing the numerical solution of the system
# per variable (x, y and z), and they are plotted
# against time in the 2D graphics view.

## Calculate length of solution 1 ##
len = Length(numericalIntegral1)

## Define points from the solution ##
L_1 = Sequence((y(Point(numericalIntegral1, i)), y(Point(numericalIntegral2, i)), y(Point(numericalIntegral3, i))), i, 0, 1, 1 / len)

## Draw curve ##
f = Polyline(L_1)

## Finally, you need to hide numericalIntegral1, numericalIntegral2, numericalIntegral3, and L_1 ##

References

[1] Lawrence Perko, Differential Equations and Dynamical Systems. Springer, 1998.

[2] M. W. Hirsch, S. Smale, Differential Equations, Dynamical Systems, and Linear Algebra. Academic Press, 1974.
