Wilson ODE
Wilson ODE
in 2024
[Link]
ORDINARY DIFFERENTIAL EQUATIONS
i
i]
‘
=
Weegee
H. K. WILSON
Southern Illinois University, Edwardsville
Consulting Editor
Lynn H. Loomis
This textbook has grown out of lecture material which the author has assembled for
both elementary- and intermediate-level courses in ordinary differential equations.
The introductory material, consisting of topics from Chapters 1 through 6, was
originally designed for a group of mathematics majors who studied differential equa-
tions for a period of two quarters bridging the sophomore and junior years. The
presentation makes use of elementary matrix methods. This approach is taken in
order to reduce the student’s difficulties in relating his first course in the subject to
more advanced studies which he might later take. A formal course in linear algebra
is not assumed as background. The linear algebra which is needed is contained in
Sections 3.2, 3.3, 3.6, and 3.10 through 3.13. It is assumed, however, that the student
has had some exposure to determinants, the solution of specific systems of linear
algebraic equations by row elimination, complex numbers, and polynomial algebra.
A familiarity with the material in, say, the first edition of the algebra and trigonometry
textbook by Fisher and Ziebur is adequate background in this area.
The intermediate-level material, consisting of topics from Chapters 7 through 10,
is directed toward senior mathematics majors and engineering graduate students.
The understanding of it requires some familiarity with several notions from advanced
calculus and linear algebra. A few topics from elementary analysis have been in-
corporated into the exposition (Sections 7.4, 8.2, 8.3, and 8.4) to minimize prere-
quisites as much as possible.
One can base a variety of courses at different levels on the material in this book,
and several possibilities are suggested in the course outline table following the contents.
The asterisks in the table indicate sections which an instructor might wish to omit,
and the sections marked with a dagger may be omitted if the students have had
equivalent material in linear algebra or analysis. The courses labeled A through M
are first courses, and the others are intermediate or advanced courses. More specific
descriptions are given with the table. By his choice of material from Chapters 1 and
Vii
Viii Preface
2, which are intended primarily for motivation, the compiler of a syllabus can vary
the flavor of a course from very applied to almost pure. Sections 1.1, 1.2, and 1.3
must be covered, however; they teach solution methods for the linear equations
x = 0 and x’ = a(t)x + b(t). Sections 3.13 and 6.6, together with the proofs of
Theorems 4.1 and 4.2, and Lemma 6.1 may be omitted.
I should like to thank those of my colleagues who have read portions of the
manuscript and especially acknowledge the helpful suggestions of R. Kurth. The
ideas for several of the physical problems were suggested to me by M. B. Sledd.
Example 5 of Section 8.9 is based on J. Serrin’s treatment of the Blasius boundary
value problem, and the material in Section 9.8 is based on an illustrative problem of
L. Markus. Finally, I should like to acknowledge the contributions of my wife who,
in addition to providing much encouragement, has worked tirelessly in typing and
editing the manuscript and its revisions.
Edwardsville, Ill. H. K. W.
November 1970
CONTENTS
Chapter 2 Solution Methods for Special First and Second Order Nonlinear Equations
Dal Preliminaries
Ded Initial value problems
2s The separable equation Z
2.4 First integrals and implicit poltiens ;
Des The exact equation .
2.6 Integrating factors
Wey Reduction of order .
2.8 A soft spring oscillator
ix
x Contents
4.1 Preliminaries 90
4.2 Initial value problems 91
4.3 The existence of solutions . 94
4.4 The first order scalar equation 99
4.5 Fundamental solution sets for the Homioseneste canon 103
4.6 Relations between solutions of the homogeneous equation 110
4.7 Solutions of the nonhomogeneous equation.. 115
4.8 Fundamental solution sets for homogeneous cautions win coment
coefficients 1A
el Preliminaries : 182
Yer The homogeneous praion ah consent eoeticents ; 184
7.3 The homogeneous equation with periodic coefficients . 193
7.4 The homogeneous equation with continuous coefficients 204
“ES The nonhomogeneous equation 218
Contents xi
Chapter 10 Stability
Index 373
Description
es May be omitted
May be omitted if students have had necessary background
T
xX To be used
xii
7
OF xox Ox 8A ++ +H
2.8 x x XX! 8.5 XXX X
8.6 XX XX
3.1 XXXKXXKXKKXKXKXKKXKXKXX 8.7 XX XX
Se aimiei
aie) bot tet et 8.8 XXX X
oeetaia
te eter tet 8.9 XX XX
EN NO NO NOG NO NON ONO NON 8.10 xX XXX
BS) IPO OOK DS IOSD DOK
SO Meena
kl. Jt akeda tetas 9.1 XX XX
SON NON KON OXON ONG KOXKE NGG 9.2 xX XXX
BS DOCK ORO O.O,.OR IS 9.3 XXX X
39 * ee OK KR KK RK KK 9.4 YOSCOMOK
41 XXXXXKXKXKKXKXKXKKXKXX
AD X XOXOXO XOXOXO XOX XE 10.1 xX
43 XXXXXXXXKXXXXKXKXKXKXKX 10.2 xXX
44 XXXXXKXKXXKKKXX 1033 xX X
45 XXXXKXXXXKXKXKXKKXKXKXKX 10.4 xXX
ANS KDR YK OK ROK OOK OOK OS YK OKO 10.5 xXX
ANTE YK IER DDK EDDIE
DEDEDE IOS
48 *« * © OR RR KOK OK OK RK OK RK OK
5.1 * oe Ef) ES fy x
Ew) * CRS
Re be ee
ee * “pes
62 fo Se
5.4 * “ou ) Cae xe
xiii
CHAPTER 1
What functions equal their second derivatives? In asking such a question, one is
really ageing for the solutions of the aot equation en sink, coaht,
= ;
dp 7 * PONS oot
The equation is said to have been solved when ai// of the unknown functions x = ¢(t)
which satisfy it have been found. It is easy to guess a few solutions. For example,
x = e'and x = e ‘ equal their second derivatives. This is also true of x= 2e‘ + 7e~.
In fact, any function having the form x = c,e’ + coe*, where c, and cp are con-
stants, equals its second derivative. We shall see later that no other type of function
can have this property. twice aiff,
The remarks in the previous paragraph indicate that purely academic questions
can be a source of differential equations. Another, richly varied source of differential
equations has been the attempt to describe and study the physical world. One of the
oldest sources of differential equations has been the study of the motion of objects
and of fluids, but differential equations have also been used to describe electrical
effects, elastic deformations, and many other phenomena which are not directly
associated w:th motion.
This chapter consists of examples of physical situations which can be discussed
with the aid of differential equations. The examples are specifically intended to
illustrate origins for some of the differential equations occurring in later chapters.
Sections 1.1, 1.2, and 1.3 contain important solution techniques. Other sections
contain explanations of physical techniques which may be ofinterest to some students,
but not to others.
1.1 AN OBJECT FALLING IN VACUUM
Consider an object having mass m that is dropped from an altitude Ro, measured
from the center of the earth. Several questions might be asked about the motion of
1
11
2 Differential equations and the physical world
the object. For example, what are its altitude and velocity at a given time? What
is its velocity at a given altitude?
Two of Newton’s laws are used to analyze this situation:
Newton’s Law of Gravitation. Two bodies, having masses m, and Mo, will exert
an attractive force on each other; the force acts along a line through their centers of
mass; and if r is the distance between their centers of mass, then the magnitude of the
force is proportional to m ym Bes
This force of attraction between two bodies is called gravitational force. The
magnitude of the gravitational force exerted on an object by the earth is called its
weight. It is given by Gmm,/r®, where m, is the mass of the earth, r is the distance
of the object from the center of the earth, and G is a proportionality constant. The
value of G, which we shall call the Newtonian gravitational constant, is 6.670 X
10~'! newton
- m?/kg?. ME/-a = GM /L) C= L3/ (m7)
The quantity g = Gm,/r® is called the acceleration due to gravity at altitude r.
The value of g at the earth’s surface varies since the earth is not perfectly spherical.
In Central America, g = 32.094 ft/sec”; in the arctic, g = 32.235 ft/sec”. Thus
the weight mg of an object having mass m depends on its altitude.
We shall assume for the study of our falling object that the earth is perfectly
spherical and that its radius R is such that g = 32 ft/sec” exactly. We neglect atmos-
pheric friction and the rotation of the earth. Further, we assume that the altitude Ro
from which our object is dropped is so near R that the weight mg of the object is
constant during its descent.
The time rate-of-change of the momentum mu of the object is m dv/dt, where v
is its velocity, and the net external force acting on it is the negative —mg of its weight
mg. By Newton’s second law of motion, then, we have
Oh as (1.1a)
Equation ([Link]) is solved for the velocity v by integrating each of its sides. This
yields
v(t) = Vo — gt, (1.1b)
(1.14)
This problem, which we shall call an initial value problem, is a basic topic in the
calculus. It is solved by integrating each side of Eq. (1.1c) n times and using the
data (1.1d) to evaluate the constants of integration. = i |
Sey le) pagel a 5% ea
* Teny “
EXERCISE 1.1
egy —|\ Ee . = VoOAe ss
1. Solve the following initial value problems. x = Kam t+c,t oe a cine C2=O
a) SY = GF te sO) = 2, sOy= il. NOR how tot= ton t a \e STE
lo) 2 = 6t + 2,5 ca) x(1) = Dexa2, se (hl) = 1 Il. -) Aa bids
Oe 0) = 0 = titan t-+£ (thu) +¢,
d) x = 1/0 — P), Ie <1, x) =2 35%, XE tA 1 A(t)+3
e) x = —27/ + 272 + 2%), x(0) = 3, x’) = 0, x"(0) = 1.
2. An object is dropped from the top of a building 1024 ft high. Exactly 1 second later,
another object is thrown downward with an initial velocity vo. If air friction is neglected,
how large must vo be for the objects to reach the ground simultaneously?
3. Suppose an object is falling in vacuum with a constant acceleration —g. Express its
velocity as a function of its altitude.
4. Suppose an object is traveling along a straight line in space with a speed of 60 mph
(88 ft/sec). What acceleration would be necessary to bring the object to a stop within
1 second? If the object has a mass of 5 slugs, how many pounds of force would have to be
exerted on the object to produce this acceleration? Interpret your results for an automobile
collision.
5. If air friction, bearing friction, and transmission inefficiencies are negligible, how large
an engine (in horsepower) would a 1600-lb dragster have to have in order to travel a quarter-
mile in 10 seconds? [Hint: 1 hp = 550 ft - lb/sec.]
Figure 1.1
We lump together the effects of atmospheric friction and the friction between the
table top and the block. In order to make an appropriate mathematical model, first
observe that the frictional force is in the direction opposite to that of the velocity and
tends to increase in magnitude as the velocity does. As a first approximation, then,
it is not unreasonable to assume that the frictional force is proportional to the negative
of some power of the speed |u|, that is,
Frictional force = —)v|v|', (1.2a)
where € > O and \ > Oare constants. The constants \ and e€ vary from situation to
situation. They depend on the material and shape of the block and on the range of
values that v may have.
A simplifying assumption which is frequently made for the frictional force (1.2a)
is thate = 1. We then say that the frictional force is negatively proportional to the
velocity v and call the friction /inear. Making this assumption and applying Newton’s
second law of motion to our block, we find that
vo @) = av), (1.2b)
where a = —)/m.
Equation (1.2b) states that the time rate-of-change of the velocity of the block is
proportional to the velocity at every instant 7. Integrating both of its sides, we do
not obtain a solution as we did with Eq. ([Link]). Such an integration merely produces
the relation
The determination of v(t) above is a special case of the following purely mathe-
matical problem: Given
i) a continuous function a on an interval a < t < ,
il) a point fy in (a, w), and
ili) a constant c,
find all functions x satisfying
In | Oa ip oteTs
Consequently,
t
EXERCISE 1.2
1. Find all solutions to the following equations.
a) x’ = x. lo) oe = Sak
c) x’ = 3t2x. (eee —wi ax, t= 0:
er (SCC 1) et <a) 2c i) of = (COLDER Ke < ae
Re (1 — 7), lf] <1. A) = Or 7) et > 0,
2. One way of estimating the number of bacteria in a culture is to assume that the rate-of-
increase of the population at any time is proportional to the size of the population at that
time.
a) Suppose that a certain type of bacteria reproduces at a rate equal to one-third the
number of bacteria present. If a culture were started from a single individual, what
would be the size of the population 24 hours later?
b) The population of the United States was 5.31 million in 1800, 76.0 million in 1900,
and 179 million in 1960. If the population had increased at a rate proportional to the
size of the population, what should it have been in 1960? How do you account for
the discrepancy?
Differential equations and the physical world 1.3
6
3. When a bank or savings and loan association advertises that it compounds interest
continuously, it means that interest is paid at the advertised rate in such a way that the time
rate-of-increase of a deposit is proportional to the amount on deposit. The proportionality
constant is the interest rate. Suppose that $1000 is kept on deposit for ten years at an interest
rate of 6%. How much greater will the yield on the investment be if interest is compounded
continuously rather than quarterly?
4. The real estate value of Manhattan was estimated at $13.4 billion in 1967. Suppose that
Peter Minuet had deposited his $39 in 1626 in a bank which paid 6% interest. Would he
have been able to buy Manhattan in 1967?
5. Anoctopus is kept in a 10,000-gal tank in an aquarium. The water in the tank is changed
continuously at a rate of 2 gal/min. If the octopus discharges ink into the water and the ink
mixes instantly, how long does it take for 90% of the ink to be removed from the tank?
6. Even a ventilation fan may not remove enough carbon monoxide from a closed garage
to make working on a running automobile safe. Suppose that an automobile engine produces
10 ft?/min of exhaust gases and that the exhaust gases contain 1 part carbon monoxide per
1000 parts of exhaust. How many cubic feet of air per minute must an exhaust fan move
through a closed garage containing the engine if the carbon monoxide concentration is to
be kept below 1 part per 1 million parts of air?
7. Instudying radioactive decay, it is frequently assumed that the rate at which a radioactive
substance decays is proportional to the amount of the substance present. Suppose the
amount of Carbon 14 in a prehistoric bone is now one-millionth of the amount that was
present when the creature from which it came was alive. Use the fact that Carbon 14 has a
half-life of 5440 years to date the bone.
Notice that Eq. (1.1b) implies that a falling object can be made to strike the earth
with as high a speed as desired by simply dropping it from a sufficiently high altitude.
This prediction does not, of course, hold true for the actual situation. Objects falling
in an atmosphere under constant acceleration have terminal velocities due to atmos-
pheric friction.
Let us restudy the fall of the object discussed in Section 1.1 and attempt to take
frictional effects into account. Now we shall postulate (as we did in Section 1.2) that
the frictional force opposes the motion of the object and is proportional to its velocity.
Equation ([Link]) modified accordingly becomes
This gives —at. Then the function e~, which is called an integrating factor, is formed.
When Eq. (1.3b) is multiplied by the integrating factor, the left side becomes an exact
derivative; in fact,
d a7 oe —ax = See:
di COcmayi Oeste (1.3c)
represents every solution to the problem. Different solutions are obtained by assigning
different values to x(Q).
EXERCISE 1.3
1. Use the method of integrating factors to solve the following differential equations.
2D) oe) == Dee SE Bh b) xe = 2 e2"
©) x! = 2x + 3e~ 2? (al) Set OBS Hi,
2. Suppose an object falls from the top of a building 1225 ft tall. According to the physical
model of Section 1.1, what would be its speed when it reached the ground? How long would
the object take to reach the ground?
3. Assume that the object of Problem 2 has a terminal velocity of 280 ft/sec with air
friction taken into account. How long does it take for the object to attain 99% of its terminal
velocity? Compare your result with the time of fall and impact velocity in Problem 2.
4. A ball bearing is released from rest in a reservoir of oil. The deceleration due to fluid
friction is 1000 times the instantaneous speed (in meters per second). What is the terminal
velocity of the bearing? What time is required for the bearing to attain 99% of its terminal
velocity?
gO,
280% = SKN moh
oo a (N\
1.4
8 Differential equations and the physical world
5. The motor ona boat can exert a maximum thrust of 44 Ib. Suppose the force of fluid
friction at any instant equals (in pounds) twice the speed of the boat at that instant. What is
the maximum cruising speed of the boat? If the boat weighs 160 lb, what time is required
to attain 99% of top speed? If the boat weighs 1600 Ib, what time is required to attain 99%
of top speed? Notice that the weight of the boat has no effect on its maximum speed;
however, the weight does affect the time required to attain that speed.
6. Suppose that two spheres having identical air friction characteristics are dropped from
a very great altitude. If one sphere is ten times as heavy as the other, which will strike the
ground first? Neglect wind, buoyancy, and the rotation of the earth, but take air friction
into account.
7. Suppose the rate of decrease in the temperature of water is proportional to the environ-
mental temperature less the temperature of the water. Two cups of water, one at 70°F and
the other at 210°F, are placed in a refrigerator (14°F). Which will freeze first?
8. Aman has $10,000. He wishes to spend it in equal monthly sums over his entire lifetime
so that none of it will remain for his heirs when he dies. He decides to deposit his money in
a bank which pays 6% annual interest (compounded continuously) and withdraw a fixed
amount each month. If his life expectancy is 400 months, what will his monthly income from
the investment be? How much more lucrative is this scheme than keeping the money in a
mattress and spending equal amounts monthly ?
A rocket can be fired into space in such a way that it neither falls back to earth nor
goes into orbit, i.e., in such a way that it escapes. Just what is it that will cause a
rocket to escape? It seems plausible that the crucial factors are its speed and altitude
at the instant when the fuel is completely consumed (burn-out). The considerations
of Section 1.1 allow us to examine the escape problem to some extent.
We assume that the rocket is fired vertically from the earth and neglect the rotation
of the earth. For times ¢ > 0, we denote the rocket’s altitude (measured from the
center of the earth) by 7(f), its mass by m(t), and its exhaust velocity by —u(f).
If a mass —Am of burned fuel is discharged during the time interval ¢ to ¢ + At,
the net change Ap in the momentum p of the system consisting of the rocket frame,
the unburned fuel, and the discharged mass is
Here, v is the speed of the rocket at time ¢, and v + Av is the speed at time ¢ + Art.
The average time rate-of-change of p is thus
Av A Am
= Thala St AD
At
We want to use Eq. (1.4a) to see what information Newton’s laws give about the
flight of the rocket. There will, of course, be a frictional force F(t) on the rocket as
long as it remains within the atmosphere. The only other force acting on the rocket
will be its weight, m(t) Gm,/r?(t). Notice that it is now incorrect to assume the weight
of the rocket to be 32m(t). Since rockets rise to great heights, the variation in weight
due to altitude must be considered. Newton’s second law of motion yields the
equation
Gm.m(t)
m(t)r’"(t) + u(t)m'(t) = + F(), (1.4b)
aC)
where 7’ = v and primes denote differentiation with respect to ¢.
The motion of the rocket prior to burn-out is difficult to analyze; if burn-out
occurs while the rocket is still within the atmosphere, then the situation is still com-
plicated. Let us suppose, however, that the fuel lasts until a time fy > 0 at which the
rocket has left the atmosphere behind. Then u(t) = 0 for t > to and F(t) = 0 for
t > to. The subsequent motion of the rocket is then described by the differential
equation
rif) = — 22. (1.4c)
We denote the velocity and altitude at burn-out by vy and ro respectively. How should
Vo and ro be related in order that the rocket escape? It is rather difficult to solve
Eq. (1.4c) for r(t) in terms of ¢t. Our question about vg and 79 can be answered, how-
ever, without an explicit solution of the differential equation.
We multiply Eq. (1.4c) by 7’(t) and obtain
d 2 d 2Gm,
— (r' = —|(——}]. 4
dt ('O) Al r(t) ) oe)
Then integration from fo to ¢ yields the relation
to evaluate the constant Gm,, finding that Gm, = 3.86 X 10'* newton - m?/kg.
1.5
10 Differential equations and the physical world
The worst possible choice for an escape altitude ro is the radius R of the earth
if Vo is to be as small as possible. Thus a lower bound for the speed required for a
“real” rocket to escape is ~/2Gm,/R = 11,200 m/sec. For escape altitudes ro > R,
the speed vo required for escape is, of course, less.
EXERCISE 1.4
1. If burn-out for a rocket occurs at an altitude ro = 100 km, what is its minimum escape
speed vo?
2. Is the satisfaction of the inequality (1.4f) sufficient for the escape of a rocket?
3. Suppose the burn-out altitude ro and velocity vo for a rocket are such that
2
Lore ey 0
2: ro
What is the maximum altitude to which the rocket can rise?
4. Explain why
| ene a
Let us now suppose that a satellite of mass m has been fired into space in such a way
that it goes into orbit. What can be said about the nature of the orbit? In the first
place, we may as well assume that the gravitational force between the earth and the
satellite does not cause the earth to move. Also, it is known from physics that the
satellite will move along a plane curve rather than a space curve. Consequently its
motion can be studied using two-dimensional coordinates with the earth located at
the origin (Fig. 1.2).
-atellite
(x, y)
Earth
Figure 1.2
An orbiting satellite 11
xX = To, y = 0, x’ = 0, y’ = v9
at the instant t = 0 when we begin to observe the orbiting satellite. Newton’s second
law of motion, applied in the directions of the x- and y-axes separately, yields the
equations
where
XT COSi0 and ve musi: (1.5b)
ME at
Equations (1.5a) and (1.5b) imply that
(59
and
ce ze (1.5d)
The term m,.G/r? in Eq. (1.5c) represents the downward acceleration of the
satellite due to gravity, and the term r6’? is called the centrifugal acceleration of the
satellite. The interplay of the forces corresponding to these two accelerations makes
the orbital motion possible: gravitational force tends to cause the satellite to fall and
centrifugal force tends to cause the satellite to rise. The expression —2r’6’/r in
Eq. (1.5d) is called the Coriolis acceleration of the satellite.
Equations (1.5c) and (1.5d) are the classical differential equations which are used
to describe orbital motion. The exercises below are concerned with extracting from
them physical information about the orbit of the satellite.
EXERCISE 1.5
1. Multiply Eq. (1.5d) by r? and integrate to find a relation between r and 6’. Use this
relation to eliminate 6’ from Eq. (1.5c). Then multiply Eq. (1.5c) by 2r’ and integrate to
12 Differential equations and the physical world 1.6
find a relation between r and r’. Finally use the fact that dr/dt = (dr/d@)(d6/dt) to show
that
dr re r ie 15
srl“) +a-a(Z)-1 , (1.5e)
where a = 1 — 2Gm,/rovd.
2. Write Eq. (1.5e) in the form
rT (7)
+ /r0 i
Uu
du
Uu
=|a (1.5f)
ulai{—) +0A-—-a-—-1
ro ro
Then evaluate the integral on the left by making the substitution u = 1/w and express r
in terms of 6. Note that the sign of a is important.
3. Show that the orbit of the satellite is an ellipse, parabola, or hyperbola when vo is
respectively less than, equal to, or greater than \/2Gm,/ro. The satellite will, of course,
escape from the earth if its orbit is a parabola or hyperbola. Explain why your results are
consistent with the discussion of the escaping rocket in Section 1.4. [Hint: Look up the polar
equations for the conic sections in a calculus book.]
Let us place a rectangular block having mass m on a tabletop and attach it to a wall
by a coil spring (Fig. 1.3a). We then stretch the spring xo units and release the block
at time ¢ = O with an initial velocity v). Where will the block be at each future time
tes
NVA
(a)
Figure 1.3
Let x denote the displacement of the block from its equilibrium position, that is,
from the position which it occupied when the spring was unstressed. The time rate-
of-change of momentum for the block is mx’’. We assume, as we did in Section 12:
that the frictional force on the block is of the form —)x’, where ) is a positive constant
and x’ is the velocity of the block.
We assume further that the force which the spring exerts on the block is of the
form —kx, where k is a positive constant. A spring that behaves in this way is called
a linear spring, and k is called its stiffness coefficient.
A linear oscillator with one degree of freedom 13
Newton’s second law of motion requires that the total force —\x’ — kx on the
block equals mx’’.. Thus
x” + ax’ + wx = 0, (1.6a)
where a = \/mandw = k/m. Equation (1.6a) is called the damped linear oscillator
equation if a > 0.
To locate the block at each time ¢ during the motion, one must determine the
solution x that satisfies x = x9 and x’ = Vg when t = 0. This is called an initial
value problem, and methods for solving it are derived in Chapters 3 and 4. For the
time being, we shall merely state a solution procedure. Let B = 4w? — a?. Then
where c, and cy» are constants which are uniquely determined by xo and vo.
If a = Oin Eq. (1.6a), then it has the form
x + w*x = 0, (1.6c)
and it is called the undamped linear oscillator equation. Each solution has the form
for some choice of constants c; and cy. Conversely, the relation (1.6d) defines a
solution of Eq. (1.6c) for every choice of the constants c; and C2.
EXERCISE 1.6
1. Find the displacement x of the block in Fig. 1.3 if x(0) = 1, x’(0) = 0, and friction is
negligible.
2. Find the displacement of the block in Fig. 1.3 if x(0) = 0, x’(0) = 1, and friction is
negligible. How does one physically start the block into motion so that these initial conditions
are met?
3. Show that x = (e’#! + e—‘#)/2, where i? = —1, satisfies Eq. (1.6c) and the conditions
x(0) = 1, x’/(0) = 0. Deduce, on physical grounds, that cos wt = (e’*! + e*#‘)/2 and
sin wt = (e*”! + e+) /2i, assuming that Eq. (1.6c) and the initial conditions truly determine
the motion of the block. [Hint: Use the results of Problems 1 and 2.]
4. Assuming that w2 > a@2/4, find the solution of Eq. (1.6a) that satisfies x(0) = 1,
x'(0) = 0. How does one physically start the block into motion in such a way that these
initial conditions are met?
5. How much time is required for the block in Problem 4 to undergo one complete
oscillation?
ils7/
14 Differential equations and the physical world
The principles explained in the last section can be used to analyze the motion of a
system consisting of several masses and springs. Such a system is said to have n
degrees of freedom if n coordinates are required for its description. For example, the
system in Figs. 1.4(a) and 1.4(b) has two degrees of freedom. In analyzing it, we
assume that the springs are linear with stiffness coefficients k,; and kz and that friction
is negligible. The coordinates w and y measure the displacements of the blocks from
the positions that they would occupy if the springs were unstressed.
Figure 1.4
Newton’s second law of motion is easiest to apply if we imagine that both blocks
are to the right of equilibrium and that y > w. The right-hand spring is then stretched
a distance y — w and it exerts forces +k2(y — w) on the blocks. The spring on the
left exerts a force —k,w on the first block. Thus
mw"
—kyw + ko(y — w) (1.7a)
and
may” = —k2(y — w). (1.7b)
Equations (1.7a) and (1.7b) are said to form a system of linear differential equations
for the unknown displacement functions y and w. In studying such a system theo-
retically, it is helpful to eliminate the second derivatives. This can be done by intro-
ducing the velocities x = w’ and z = y’ of the blocks as additional variables. The
equations can then be put in the form
a x ;
! =< ky aia ko ko
x = air a Ww =. is 3
aoe oe (1.7c)
baie La ed
Me Mo
This is the form in which linear differential systems will be studied in Chapters 3 and 4.
EXERCISE 1.7
1. If the spring with stiffness coefficient k; were removed from the system in Fig. 1.4(a),
how would the equations of motion (1.7a) and (1.7b) for the system be altered?
The series RLC circuit 15
2. Let the block in Fig. 1.4(a) with mass mz be connected to a wall on the right by a third
spring with stiffness coefficient k3 in such a way that the third spring is unstressed when the
blocks are in equilibrium. How are the equations of motion for the system altered?
3. A large plate (mass M), resting on a frictionless tabletop, is connected to a wall by a
linear spring with stiffness coefficient k. A small rectangular block (mass m) rests on top of
the plate at its center. Assuming that friction between the block and plate will be linear if
the block moves on the plate, introduce coordinates and derive equations of motion for the
system.
Figure 1.5 is a schematic diagram of an electrical circuit which is called a series RLC
circuit. The symbol labeled E denotes a battery which produces a voltage of E volts,
and the symbol labeled S denotes a switch, shown in the “‘off” position. The straight
lines in the diagram represent wires, and the dots (a, b, c, d, e) represent electrical
connections called nodes.
Figure 1.5
The symbols R and / in the diagram can be explained for our purposes by drawing
an analogy with hydraulics. The wires are like pipes that are filled with fluid. The
switch S corresponds to a valve, and the battery is analogous to a pump which main-
tains a pressure E. When the switch S is in the “on” position, the voltage E which
the battery produces causes the electrons in the circuit components to move in a
counterclockwise direction around the circuit, just as molecules of fluid under
pressure would move in a pipe.
Each electron carries a negative charge and, when 6 X 10!® electrons have
passed a point in the circuit, one says that 1 coulomb of negative electrical charge has
passed the point. If 1 coulomb of negative charge per second is passing a point in
the circuit, then it is conventional to say that a current of | positive ampere is passing
through the point in the opposite direction. Thus the electrical current J corresponds
hydraulically to the number of gallons of fluid per second passing a point on a pipe.
When analyzing a circuit, one does not, in practice, worry about the direction in
which the electrons are physically moving. One merely designates an arbitrary
direction for the current J. The algebraic sign of I(t) indicates the direction in which
the electrons are actually moving. When /(t) < 0, the electrons are moving in the
direction of J; when J(t) > 0, the electrons are moving in the direction opposite to J.
1.8
16 Differential equations and the physical world
a | ap NS | b
{sd
A aaJ
Figure 1.6
The voltage drop V(t) across a capacitor at any time f¢ is given in terms of the
amount of charge Q(t) stored at that time by the relation V(t) = Q(t)/C, where C
is a constant called the capacitance of the capacitor. (The unit of capacitance is
called the farad.) The voltage drop V(t) is expressed in terms of the current J flowing
through the capacitor by the relation
t
Kirchhoff’s Voltage Law. The sum of all the voltage drops across the components in
a Series circuit is zero.
Now assume that the switch S in Fig. 1.4 has been in the “on” position for a very
long time. Then the capacitor is fully charged and there is no current flowing in the
circuit. Consequently, there is no voltage drop across the inductor or resistor, and the
initial voltage Vo across the capacitor equals —E. At time t = 0, the switch S is
thrown into the “off” position, and current begins to flow around the circuit as the
capacitor discharges. Thus /(0) = 0 and, by Kirchhoff’s voltage law,
t
Letting t — 0+, we find that /’(0) = E/L, since Vp) = —E. Finally, differentiation
of (1.8a) yields the differential equation
Wrae” 1ae
R!
lt=
ah
(1.8b)
for the series RLC circuit. The current in the circuit is described by that solution J
which satisfies the initial conditions
JOy=0, FO)=E/L.
Notice that Eq. (1.8b) is mathematically identical with Eq. (1.6a) for the damped
linear oscillator. Inductance is analogous to mass; resistance is analogous to a co-
efficient of friction; the reciprocal of capacitance is analogous to a stiffness coefficient;
and current is analogous to displacement.
EXERCISE 1.8
1. Ifa battery which produces a voltage E is switched into a series circuit containing a
resistance R and an inductance L at time ¢ = 0, the current J satisfies the initial condition
1(0) = 0. Compute J(2).
2. If the inductance L in Problem 2 is replaced by a capacitance C, the initial condition for
the current is (0) = E/R. Find J(0).
3. Suppose R, L, C, and E in Fig. 1.5 have the values R = 104 ohms, Z = 1 henry, C =
2 « 10—8 farad, and E = 100 volts. Use Eq. (1.6b) to find /().
4. Consider the series RLC circuit of Fig. 1.5 with R = 0. Show that the current oscillates
indefinitely when the battery is switched out of the circuit. What is the frequency of the
oscillations?
1.9
18 Differential equations and the physical world
Figure 1.7 is a schematic diagram of a circuit which we shall call a parallel RLC
circuit. AS was the case with the circuit in Section 1.8, we assume that the switch has
been left in the ‘‘on” position. Then the capacitor is fully charged, and there are no
currents flowing in the circuit. At time ¢ = 0, the switch is turned off, and the currents
x, y, and z begin to flow.
Figure 1.7
To analyze the circuit, we shall need to use not only Kirchhoff’s voltage law but
also Kirchhoff’s current law.
Kirchhoff’s Current Law. The sum of the currents entering any node equals the
sum of the currents leaving that node.
Kirchhoff’s current law (applied at node d, for example) tells us that x + y = z.
We regard the circuit around nodes a, d, e, fas a series RL circuit. By Kirchhoff’s
voltage law,
Lx’ — Ry = 0: (1.9a)
The initial condition for x is x(O) = 0.
We similarly regard the circuit around nodes a, b, c, d as a series RC circuit and
find that t
aaah z(u) du — E+ Ry(t) = 0. (1.9b)
The initial condition y(0) = E/R for y is found by taking the right-hand limit
t— 0+. Differentiation of the equation gives
C'z+ Ry’ = 0. (1.9c)
Using the fact that z = x + y, we then write Eq. (1.9a), Eq. (1.9c), and the initial
conditions in the forms fe
iS LT» (1.9d)
To determine the values x(t), y(t), and z(t) of the currents at times ¢ > 0, we must
therefore solve a system of differential equations subject to initial conditions, that is,
we must solve an initial value problem.
EXERCISE 1.9
(The first three problems refer to the circuit in Fig. 1.7.)
12 eshow that x ©) = E/E:
2. Differentiate Eq. (1.9d), eliminate y from Eq. (1.9e), and obtain a differential equation
for x.
3. Use Egs. (1.6b) and the results of Problems 1 and 2 to find x(a), y(0, and z(#).
4. The capacitor in Fig. 1.8 is charged so that the voltage across it is E. The resistors
have a common resistance R, and the switch S is closed at time ¢ = 0. Formulate an initial
value problem for the currents x and y.
Figure 1.8
Figure 1.9
20 Differential equations and the physical world 1.10
This differential equation, which we will rederive below, has a periodic solution
toward which every other solution tends with increasing time. Thus the voltage V,
is periodic for all practical purposes.
The indicated vacuum tube, which is called a triode, has three internal elements
called the plate, the grid, and the cathode. These are connected into the circuit at
nodes p, g, and c respectively. The voltage V, is called the grid voltage, and the
voltage from p to c, which we shall denote by Vj, is called the plate voltage.
In the triode, electrons move from the cathode to the plate, creating a positive
plate current I, which flows from plate to cathode. The grid is a wire screen through
which electrons leaving the cathode must pass if they are to reach the plate. A few
electrons will actually strike the grid rather than pass through its openings. These
collisions are detected as a current flow within the tube from the grid to the cathode,
but the magnitude of this current is so small in the context of the present discussion
that we shall ignore it. If the voltage V, is negative, the electrons, which are negatively
charged, will be repelled as they approach the grid. The flow of current J, will thus
be impeded. If, on the other hand, V, is positive, the grid will attract approaching
electrons, accelerating them so much in the process that they pass through the open-
ings in the grid and continue onward toward the plate. In this case, the flow of plate
current J, is aided.
The two inductors in the diagram are wound on the same core. Consequently
they have a common magnetic field, and a change in current through one coil produces
a voltage across the other. The extent to which this interaction occurs is described
quantitatively by a constant M, called the mutual inductance of the coils. The voltage
from p to e equals MI’ — L,I;, where primes denote differentiation with respect
to time f. The voltage from d to a equals LI’ — MI}. We shall assume, as is the
case in an actual circuit, that the inductances M and L, are so small that MI’ — L,J;,
has a negligible value. Since Kirchhoff’s voltage law, applied around the nodes p, c
d, e, p, yields the relation
Vez b= 10 Myre
we may therefore take V, = E.
For each type of triode, there is a function y = ¢(x) and a constant u > O such
that J, = ¢(V, + pV,). The function ¢ is called the characteristic of the tube, and
wis called its amplification factor. In our case, V has the constant value E. Thus
I, = W(V,), where ¥(V,) = (E+ wV,). The function y is called the transfer
characteristic (of the tube) at voltage E. If one knows the type number of the tube,
he can look up its amplification factor and graphs of its transfer characteristics in
tube manufacturers’ handbooks.
For the oscillator being studied, a transfer characteristic of the type graphed in
Fig. 1.10 is required. The horizontal asymptote indicates the maximum plate current
I, possible for the tube, and the inflection point occurs at V, = 0. We then approxi-
mate ¥(V,) by a cubic polynomial ¥(0) + aV, — 8V}/3, where a and 8 are positive
constants.
An electronic oscillator 21
Sel
Figure 1.10
ey g eM a RG) ny?
Mae RE g VO
g Lt lee at (1.10b)
.
This is the differential equation which describes the operation of the oscillator circuit.
Its appearance can be simplified dramatically by the substitutions w = LC, € =
(Ma — RC)w, A? = €/wMB, and V,(t) = Ax(wt). The simplified form of Eq. (1.10b)
is then
x” + ex? — 1x’ + x = 0. (1.10c)
This equation is called van der Pol’s equation, and we shail see in Chapter 8 that, if
€ > 0, it has a periodic solution toward which every other solution tends as f
increases.
EXERCISE 1.10
(All the problems below refer to the circuit in Fig. 1.9.)
1. In order that the circuit oscillate, it is necessary that the graph of the transfer character-
istic have a sufficiently steep slope at V, = 0. How steep should the slope be?
2. Given that the voltage V, is periodic, explain why the current J is periodic.
3. To use the oscillator as a source of periodic voltage for some external device, it is neces-
sary to couple it to the device. How could one do this without changing the differential
equation (1.10b) for the circuit? [Hint: Make use of the mutual inductance phenomenon. ]
4. Make the formal change of variables x = s, x’ = 1/y in van der Pol’s equation and
reduce it to a differential equation involving only the first derivative dy/ds.
CHAPTER: 2
2.1 PRELIMINARIES
Example 1. An equation for the distance x traveled by a body falling from rest under
a constant acceleration g is (x’)” = 2gx. This is a first order, ordinary differential
equation. It is not normal. The same physical phenomenon is also described by the
equation
Xl =N/ 26x. (2.1b)
where a1, ..., Gn, 6 are functions defined on an interval a < t < w. In this case, it
is customary to write the equation in the form
22
Preliminaries 23
Example 2. The equations in Chapter 1 which have the form (2.1c) are Eqs. (1.1a),
(1.1c), (1.2b), (1.2d), (1.3b), (1.3e), (1.6a), (1.6c), and (1.8b). Equations (1.4c) and
(1.10c) are nonlinear. ||
Example 3. Let us consider a viscous fluid which flows parallel to the x-axis in the
xy-plane with a constant speed U,,. A flat plate is immersed in the fluid in such a way
that its edge coincides with the positive x-axis (Fig. 2.1). The plate creates a wake,
which alters the velocity of the fluid at each point p in the plane. If p has coordinates
(x, y), where x and y are positive, let u(x, y) and v(x, y) denote, respectively, the
horizontal and vertical components of the fluid velocity at p.
Fluid flow
eee ss
|
Figure 2.1
Flat plate
It has been found by experiment that the fluid has velocity zero at the surface of
the plate, that is,
tix, 0) — tx 0) for 59 Pl Oe (2.1d)
The wake is, of course, not very pronounced at large distances y from the plate. This
physical observation is expressed mathematically by the relations
ime x,y) =U, lim. v(x, y= 0 (2a1€)
y>+o Uae
for each fixed x > 0.
In one mathematical model for viscous fluid flow, it is found that wu and v satisfy
the partial differential equations
ge OE ae = 0. (2.1f)
The constant v is called the (kinematic) viscosity of the fluid. To find the velocity of
the fluid at points with positive coordinates (x, y), one tries to solve the differential
equations (2.1f) subject to the boundary conditions (2.1d) and ([Link]). Such a problem
is called a boundary value problem.
Blasius studied the problem (2.1d, e, f) by seeking a solution of the form
We shall show that this boundary value problem has a solution in Section 8.9.
The values of this solution can be used in conjunction with the formulas (2.1g) to find
the velocity of the flowing fluid. Here, however, we merely comment that this is
another example of a nonlinear equation which occurs in an attempt to study the
world around us. ||
EXERCISE 2.1
1. Which of the following equations are linear?
2) oe! = Phe ls) Se = fae.
(ep ante amon dix. = fa ox: |
e) x” + w*x = 0. f) x” + w2x + x? = 0.
g) x” + di — cos fx = 0. h) x’ + — cos/)sinx = 0.
Up until now, we have used the expressions “solution” and “‘initial value problem”
rather informally. In this section, we shall give them precise meanings.
By a solution of Eq. (2.1a), we shall mean a function x = ¢(t) such that
In this chapter, we consider Eq. ([Link]) only for the cases n = 1 and n = 2.
Naively, one would hope to perform upon the equations x’ = f(t, x) and x’ =
S(t, x, x’) a finite number of operations from the calculus and obtain finally the
symbol x on the left and a simple combination of elementary functions of ¢ on the
right. We saw in Chapter 1 that this can sometimes be done. It cannot always be
done, however. Consider, for example, the equation x’’ + sinx = 0. One can
easily check that x = 0 is a solution, but no other e/ementary solutions are known.
Nevertheless, the equation has infinitely many solutions: precisely one for each of its
initial value problems. The truth of this assertion is a consequence of the theory
developed in Chapter 8. For completeness, however, we briefly discuss here the
existence of solutions for the equations x’ = f(t, x) and x” = f(t, x, x’).
Notice first that the graphs of solutions for x’ = f(t, x) are curves in the real
tx-plane, and these curves must lie in regions for which the function /f is defined. For
example, the graph of a real solution for x’ = \/x — ¢ must lie in the region D
defined by the inequality x — ¢ > 0. The graph of a solution for x’ = 1/tx must
lie in one of the four regions defined by the inequalities x > 0 and tx < 0.
The following theorem, which we shall call an existence and uniqueness theorem,
is actually a corollary of Theorem 8.3 (Chapter 8). We shall therefore merely state
and explain it here. The theorem, which is illustrated in Fig. 2.2, guarantees that
many initial value problems
x =f (bX), X= Xo when bets (2.2a)
have uniquely determined solutions.
Theorem 2.1. Let f and df/dx be continuous on a region © in the tx-plane and let
(to, Xo) be a given point interior to D. There is one and only one solution x = ¢(t),
Te <t< 74", ofx’ = f(t, x) with the following properties:
1) $(to) = Xo
ii) (t, (2)) is a point of D forts” <t < T4°;
iii) either |t| + |(t)| ~ +o or (t, (1)) approaches a boundary point of D as
t— Ts and as t—T¢".
Example. For the equation x’ = 1/tx, f(t, x) = 1/tx. Let xo and fo be positive
constants so that (fo, xo) is in the first quadrant of the tx-plane. Both f and
of (t, x)/ax = —1/tx® are continuous on the interior of the first quadrant; we may
therefore choose it as the region D of Theorem 2.1. The theorem guarantees that the
initial value problem
1
X=
a =
2 x = Xo when bes (2.2b)
tx
Figure 2.2
Since (t, (2) is contained in the first quadrant, (7) is given by the positive radical.
As t—> 0+, (1) > +o. As t— to exp (x2/2), the point (¢, ¢()) approaches the
point (to exp (x2/2), 0) on the ¢-axis. For this initial value problem, the interval
Tan < tere" is therefore 0-7 Tyexp:(G/2)) ||
A result similar to Theorem 2.1 holds for equations of the form
x Xe): (2.2e)
Theorem 2.2. If f(t, x, y), Of (t, x, y)/dx, and of (t, x, y)/dy are continuous on a
region D in real three-dimensional space and if (to, Xo, Yo) is a point interior to D, then
there is a unique solution x = $(t), Ts < t < T¢*, of Eq. (2.2c) with the following
properties:
i) $(t0) = Xo, $'(to) = Yo:
ii) (¢, 60), 6’) is in D for tT4— < t < tet;
iii) either \t| + |6()| + |e’O| > +a or (t,¥(1), ¥/(D) approachesa boundary
point of Das t— Tg" and ast—T,~.
In the remaining sections of this chapter, we shall concern ourselves with methods
for solving x’ = f(t, x) and x’’ = f(t, x, x’) in terms of elementary functions. Only
special cases are considered, and the reader is cautioned that the collection of tech-
niques is not exhaustive. The most complete catalog of solution methods has been
assembled by Kamke [11].*
EXERCISE 2.2
1. Carefully graph the solution x = ¢() found in the example above.
* Bracketed numbers refer to the books listed in the Further Reading section at the end of
the text.
The separable equation 27
2. Find the function f and a region D for each of the initial value problems listed below.
a) x’ = (1 — x? — 72?)!/2, x» = 0 when ¢ = 0.
b) x’ = t/(x? — #2 — 1), x = 0 when ¢ = O.
c) x’ = 1/(1 — x? — (%’)?), x = 0 and x’ = 0 when ¢ = 0.
d) x” = dn d/(x? — (’)?), x = 1 and x’ = 0 when ¢ = 1.
3. The solution to the initial value problem
is x = tan?f. Illustrate Theorem 2.2 by sketching the locus of the parametric equations
We tanto ve= SOC1 i. 2 = 11, — 1/2 em oem2.
In Section 1.2, we solved the first order linear equation x’ = a(t)x, x # 0 by writing
it in the form x’/x = a(t) and integrating. In the last example, we solved an initial
value problem for the equation x’ = 1/tx by writing xx’ = 41, x = ¢(t) and
integrating.
These differential equations are examples from a class of differential equations
called the separable equations. The equation x’ = f(t, x) is called separable if the
values of f have the form f(t, x) = a(t)b(x), where a and 6b are continuous.
To solve an initial value problem
xX sau), X= XG when b= to; (2.3a)
posed for a separable equation, one first determines whether or not b(xo) = 0. If
b(xo) = 0, then x = x9 is a constant solution of the initial value problem. If
b(xo) ¥ 0, then b(x) # 0 for |x — xo| sufficiently small. One then separates the
variables and integrates to obtain the relation
/ Inudu = / Saks
xo to
and the solution x = ¢(f) to the initial value problem (2.3c) must therefore satisfy
the algebraic equation (2.3d). It is, unfortunately, not possible to solve this equation
for x as an elementary function of f. One nevertheless analyzes it to study the
solution ¢. ||
P48)
28 Solution methods for special first and second order nonlinear equations
itu—* du = 2 | sds
ro to
and
—— = 15),
ml
Example 3. Let an object of mass m be dropped in vacuum from an altitude x9 and
assume that the acceleration g due to gravity is constant. The altitude x of the object
then satisfies the initial value problem
Figure 2.3
£ (0) = —3a’v,
where v = ds/dt is the velocity of efflux. Provided the orifice is in a flat bottom, the
velocity of efflux of a nonviscous fluid from a tank of otherwise arbitrary shape is
known to be v = +/2gh, where h is the depth of the liquid. Thus
r’r! = —a’x/2gh.
Since h = r — a, it follows that
The student will be asked to find the time required to empty the tank in an exercise
below. A word of caution about the use of such a mathematical model is in order:
do you think the model is a good one if the tank happens to be filled with a very
viscous fluid? ||
EXERCISE 2.3
1. Solve the following equations:
a) x = V1 — x?, b) x’ = @ +. ~*)/( + #),
cx = 12x2/3, d= x/(t? +2¢+1)¢>-1.
and x’ = 0 whent = 0. Here C > Oisa constant characteristic of the particular chain used.
Identify the curve of suspension.
5. The chemical law of mass action states that, at constant temperature, the rate of forma-
tion of a compound is proportional to the product of the concentrations of the reactants.
Suppose a manufacturer of magnesium hydroxide wishes to produce it by the reaction
MgO + H20 — Mg(OH)e.
A mole of MgO is 2.4 times as massive as a mole of water. Thus the mass ratio of the reac-
tants for complete reaction is 2.4/1. Suppose the process is started with a mixture of 2400 kg
of MgO and 1000 kg of water. If x denotes the number of kilograms of Mg(OH)2 at time f,
then according to the law of mass action,
2.4 3%
aele 2400 — —
k ( (1000——]-
34 x) x)
Express x in terms of t and k. Suppose 1 kg of Mg(OH)z is produced during the first hour
of the reaction process. How many hours are required for the manufacture of the batch?
6. Describe all curves x = ¥(f) which have constant curvature w’’(A)/(1 + ¥/2(0)?/?
and are tangent to the f-axis at the origin.
7. Amotor boat of mass M is cruising in a straight line on a calm lake. The motor is shut
off and the boat decelerates in a straight line due to the friction of the water. If x denotes the
distance of the boat from the spot at which the motor was stopped, then x’ is its speed.
Discuss the motion of the boat if the frictional force is proportional to
Ae KeemeO <ce cals b) x. cy Gtts 6.0)
We showed that, in the absence of friction, the displacement x of the block satisfies
the undamped linear oscillator equation
¢’'(t) + wo(t) = 0.
Then
4 ony? — 4 |ow at = 0.
Thus U is indeed a first integral for x” = f(x). ||
In studying a second order equation
He as Me
4g SRS ce)Bb (2.4f)
it is frequently worthwhile to look for first integrals. Suppose, for example, that we
wish to find a solution x = ¢(t) to some initial value problem for Eq. (2.4f). Ifa
first integral U is known, then the defining relation
7d Ul oO, ye
#') = 0
may be integrated to yield U(t, ¢(2), ¢’(t)) = ¢ for some constant c. Thus
UGA xe) =-¢ (2.4g)
32 Solution methods for special first and second order nonlinear equations 2.4
2 (OY + 074%(0] = 0
for each solution x = $(t) of the undamped linear oscillator equation (2.4c). The
specialization of Eq. (2.4g) to this case is therefore
(x')* + w*x? =e:
Solving for x’, we find the analog
dx = — w2x?)1/?
+ |Sane |e dx
where
c¢, = —Vc(siny)/w, c2. = Vc (cos Y)/w.
We therefore conclude that every solution x = ¢(t) of the undamped linear oscillator
equation has the form ¢(t) = c; sin wt + cg cos wt for some choice of constants c,
and cy. Conversely, one verifies by substitution that x = c, sin wt + Cc, cos wt is
a solution for every choice of the constants c; and cy. This is precisely the assertion
which was made in Section 1.6 about the solutions of x” + w?x = 0. ||
It is natural to ask at this point whether or not there is a notion of second or third
or fourth integral for nth order differential equations. In fact, there is such a notion.
A kth integral, 1 < k <n, forx™ = f(t, x, x’,...,x@7) is a function U, defined
where f is defined, such that
1
for every solution x = ¢(t). One computes a kth integral (when he can) by performin g
k integrations. This is the origin of the name.
Consider now a first integral of a first order differential equation, that is, a
function U such that
d
7 Ul ) = 0 (2.4i)
for each explicit solution x = ¢(/). Integrating Eq. (2.4i), we have U(t, (t)) = c
for some constant c. Thus each explicit solution x = (ft) is also a solution of the
equation
UGIX) =" (2.4j)
For this reason, Eq. (2.4j) is called an implicit solution of the original differential
equation. The value of the constant c is determined by an initial condition x = xo
when ¢ = fo.
EXERCISE 2.4
1. Find a first integral and implicit solution of the differential equation (1.4c) r’’ = —Gm,/r?
derived in Section 1.4.
2. Find first integrals and implicit solutions for each of the following differential equations.
a) exe — 1/0 b) x’ = 2t/(1 + cos x).
c) x= (1 + x8)1/2, d) x” = —k?/x8.
e) x’ + sinx = 0. f) x” = —kxlt?,
This theorem does not, however, guarantee that every solution of (2.5a) can be
expressed in the form
U(t, x) = c¢ (2.5b)
for some one function U and various constants c. The particular equations (2.5a) for
which this can be done are called exact equations. More formally, one says that
(2.5a) is exact if and only if there is a function U defined on D such that
é U(t, x) = 0 (2.5c)
»_ _ MG,x), (2.5d)
where M(t, x) = OU(t, x)/dt and N(t, x) = dU(t, x)/ax for some function U.
To decide whether or not an equation given in the form (2.5d) is in fact exact,
one must determine whether or not there is a function U such that dU(t, x)/dt =
M(t, x) and dU(t, x)/dx = N(t, x). The next theorem gives the most common way
of making this determination.
Theorem 2.3. Let M/N, 9M/dx, and dN/dt be continuous for a < t < w and
a<x <b. Then anecessary and sufficient condition that Eq. (2.5d) be exact is that
OM(t, x)/dx = ON(t, x)/dt fora <t<wanda<x<b.
Proof. Assume first that the equation is exact. Then there is a function U such that
dU(t, x)/dt = M(t, x) and dU(t, x)/dx = N(t, x). Since M and N have continuous
first partial derivatives,
aM a a°U - a°U _ aN
ax) = axa; age age
Conversely, suppose that 0M/dx = dN/dt. To prove that the differential equa-
tion is exact, we shall verify that
a<t, to < w,a< x, Xo < 5b, is a first integral. First note that
aU * 9N
OL (t,x) = [ Or (t,u) du + M(t, xo)
“aM
= ifiy (t,u) du + M(t, xo)
I M(t, x)
and
Then
d au
i UG x)= “ai (t,x) + x ee CST)
,oU mI MG) ;
ENG)
7= 20;
UG, x) = iexa x) dx + io dt
that
Thus g’(t) = 1”, g(t) = t°/3 + ¢, and it follows
2
U(t,x) = tx? + ae
is a first integral of x’ = —(t? + x”)/2tx. ||
Example 2. Let a denote a constant and let b denote a continuous function. The
equation
ae“'x — b(t)e™
x= = eat
(2.5g)
is exact with M(t, x) = ae“x — b(tje% and N(t, x) = e”, since dM(t, x)/dx =
ae“ = AN(t, x)/dt. In this case, Eq. (2.5f) has the form
for some constant c. Cancelling the factor e*’ in Eq. (2.5g), one finds that every
solution of x’ + ax = b(f) is given by
Xe Cie. /b(t)e™dt
for some constant c. Compare this result with the material in Section 1.3. ||
EXERCISE 2.5
Vie Se ha 2
12 x+ £2
Integrating factors 37
g) x= — XOO Ga pe ea4
3
COS xt
+ tcos xt 12 + 3xe"')
2 3
: 2i eal a | , x
i) xi = — —_____.. ze :
) x2+t+1 ae! tln xt
7
dx
——— F
Fes =dyS = Gx,y)
—_—_ =
is called a planar Hamiltonian system if there is a function H such that F(x, y) = 0H(x, y)/dy
and G(x, y) = —dH(x, y)/dx. Verify that each solution x = $(4), y = W(t) of a planar
Hamiltonian system satisfies H(x, y) = c for some constant c.
is not exact. If we multiply the numerator and denominator of its right side by x,
however, we obtain an equivalent equation,
x? cos xt
x’ = — = >] (2.6a)
2x sin xt + x2tcos xt
which is exact. Applying the solution procedure for exact equations, we find that
each solution of Eq. (2.6a) satisfies x” sin xt = c, for some constant c.
Similarly, if we write the linear equation x’ + ax = b(t) in the form
pl) Bee) :
iv ]
(2.6b)
then we find that it is exact if and only if a = 0. We saw in Example 2 of the last
section, however, that
ae™ — b(t)e”
x = — eat
is exact for all values of a. This equation differs from Eq. (2.6b) in only one way: its
numerator and denominator contain the common factor e“’.
As these examples suggest, it is frequently possible to make a first order equation
exact by multiplying the numerator and denominator on the right by the value of a
function p called an integrating factor. To find u(t, x), one requires that
and tries to solve this partial differential equation for y(t, x). When the differentiations
are performed, it is found that » must satisfy the equation
on , OM _ 4, 9p ON ; (2.6c)
Solutions of Eq. (2.6c) can sometimes be calculated by use of the following trick:
Set u(t, x) = a(x)b(t). Then the equation reduces to
a(x) , oM DO ON.
uesa(x) “i oxen a b(t) a Ot Ged)
One solves Eq. (2.6d) for a(x) and b(t) (when he can) essentially by inspection.
There is a general guideline for its solution, however: Set one of the ratios a’(x)/a(x)
or b’(t)/b(t) equal to the value of a known function of x or ¢ in such a way that there
results an equation involving only one independent variable. This equation is then
solved for the remaining unknown a or b.
Example 1. Let x’ = — M(t, x)/N(t, x) with M(t, x) = —x? and M(t, x) = t ln xt.
For this case, Eq. (2.6d) becomes
eae)
as (x) Seay =14+ 1nxt+
b'(t)
Gy In xt.
for a and find that a(x) = x7? exp (1/2x”). Thus p(t, x) = tf
1x—3 exp (1/2x?”) is
an integrating factor for the original differential equation, and we are assured that the
equivalent equation
; polel!22?
xs 5
a er maalinoct
is exact. ||
Example 2. The equation x’ = —(2x? + 4xt?)/(4x7t + 22°) is not exact. To find
an integrating factor wu of the form u(t, x) = a(x)b(t), we must choose a and b so that
7x3 a'(x) = b(t)
(x2 Ants) AO) ONS (47 PyA(t) + 2r.2 (2.6e)
4x?
x* +4 27? == (4x9)
z b'(t)
BD)”| (2.6f)
from which it follows that b’ QO) = I/t. Thus a(x)= x, b(t) = t, and p(x, t) =
xt. The equation x’ = —(2x*t + 4x?t3)/(4x31? + 2xt*) is exact and has the same
solutions as the original equation for xt 4 0. ||
Reduction of order 39
EXERCISE 2.6
1. Use the integrating factor e** to solve the equation x’ = —(1 + tx)/t?:
2. Find an integrating factor for each of the following equations.
3
Qin Pees atx ; ae x8 ap Bi
Pe 2t + 212x2 2 ange eye
[pce 6x" + 6tx 3 [ag a
XG
.
ae Oxt + 472 os 1+ xe
avers cot x — fo 6 + 6sin x f
x+ft 2t2 + 3r(sin x + cos x)
3. Verify that the equation x’ = — M(t, x)/N(t, x) has an integrating factor p(t, x) = b(t)
if (OM(t, x)/ax — ON(t, x)/dt)/N(t, x) depends only on ¢.
4. Verify that x’ = —M(t,x)/N(t,x) has an integrating factor u(t, x) = a(x) if
(ON(t, x)/dt — OM(t, x)/dx)/M(t, x) depends only on x.
5. Construct a nonseparable, nonexact equation x’ = —M(t, x)/N(t, x) which has
g(t, x) = e+) for an integrating factor.
‘|wor — /CO)LO a = 0.
Integrating this equation, we find that
eerie
A generalization of the technique above produces the same result for the more
general equation
Xf (Oo) (2.7b)
To illustrate the generalization, let x = ¢(t) be a solution of Eq. (2.7b) and let
y= ¢'(t). If ¢’(@ # 0, then
dx _ @ xd) de dy 1a)
Poe ee Wigs dre didtedx
Substitution of these identities into the equation reduces it to a first order equation
dy _ fy),
dx y
8
9X54|
40 Solution methods for special first and second order nonlinear equations
Now suppose that Eq. (2.7c) can be solved fory in terms of x, say y = g(x). Then,
since x = $(t) and y = ¢/(f), it follows that @ satisfies the separable first order
equation x’ = g(x). This equation, when solved, yields an implicit solution of
Eq. (2.7b).
Example 1. Setting y = x’ in the equation x” = —x'/x, we have y dy/dx = —y/x,
or dy/dx = —x~+. Thus y = —In |x| + c, where c is a constant. It follows that
U(x, y) = y + In |x| is a first integral of x’’ = —x’/x, and
ae
¢ — In|x|
is an implicit solution. ||
Example 2. Let us solve the boundary value problem
= 0, x = 0
x! +. 2x'x when 7=0 “and 4— lt as°%— 4.
| dx ke
wae
and
1 [eer
EXERCISE 2.7
In Problems 1 through 3, solve the indicated initial value problems for x in terms of ¢. In
Problem 4, solve for ¢ in terms of x. In Problems 5 through 18, find (first) integrals of the
equations.
Le ex 8 = 0x0) =x Oar
2x DN x = 10, xr /4) = Le 4) = 2)
3. x" + x’? cot x = 0, x(0) = 7/3, x'(0) = —2/V3.
AS x! + 2x! Sx ys == 0 x(0) = (0x O)i=rt.
5. x" + e* = 0, 6x 07 He er 0!
7. x" — xx' +x =0. 8. x” + f(x) = 0, f continuous.
9. x’ + xsecx’ = 0. 10>x" + x/3x’ = 0.
11. x! xx 2S Ov x= 0) 12. x!” + xx’? — S5xx’ + 4x = 0.
13. x" + xx’? +x =0. 14, lex Fe eet 0:
ley, oe! NE Se SS so) = (0). 16. x!’ + xx’? = 0.
17 oe ee) te 18. x” + x3e-*’ = 0,
A soft spring oscillator 41
yo-x+5+2E bls,
42 Solution methods for special first and second order nonlinear equations 2.8
for all constants E > 0 is called a phase portrait. In this context, the xy-plane is
called the phase space for the differential equation.
To make a phase portrait, one first finds constant solutions of the original dif-
ferential equation. This is done for Eq. (2.8a) by setting x = c, x’ =0, x” =0. The
resulting algebraic equation c — c* = 0 shows that x =0, x = 1, and x = —]1 are
constant solutions. In the phase plane, y = x’. Graphically, then, we depict the
constant solutions by plotting the points (0, 0), (1, 0), and (—1, 0) (Fig. 2.5). These
points are called critical points.
Figure 2.5
The next step in the construction of the phase portrait is to plot each one of the
curves y? = 2E — x? + x*/2, |x| < 1 that has a critical point on it. If x = 0 and
y = 0, then E = 0 and we must plot y? = —x?(1 — x?/2), |x| < 1. The only
point satisfying these conditions is the critical point (0,0). If x = +1 and y = 0,
then 2E = 4 and we must ploty? = (1 — 2x? + x*)/2 = (1 — x?)?/2 for |x| < 1.
Doing this, we obtain the curves C; and C, shown in Fig. 2.6.
Figure 2.6
Now suppose the block is put into motion with initial conditions x = xo and
Y = Yo which satisfy
4
Pats eeears!
yotx Ape a
A soft spring oscillator 43
and let x = ¢(t), y = $’() denote the corresponding solution of Eq: @.8a). If we
plot this solution parametrically, its trace will be the closed curve in Fig. 2.6. There
are four physical motions possible for the block.
The first (second) motion is initiated by stretching (compressing) the spring until
x = 1 (x = —1)and then releasing the block with velocity y= x’ = 0. The velocity
of the block remains zero and the displacement remains x = +1. The spring is
thus too soft to return the block to the position x = 0.
The third (fourth) type of motion occurs when the point (xo, yo) is in the upper
(lower) half-plane. The trace of the equations x = ¢(f), y = $/(f) is then the curve
labeled C, (Cz). Thus ¢(f) > +1 and ¢’(t) > 0 as t > +0, and the block must
move toward one of its extreme stationary positions x = +1 as time passes. The
orientation arrowheads point to the right (left) in the upper (lower) half-plane,
since x, = y-
If the block is at rest with x = +1, say, and if it is very slightly displaced to the
left, it will move to the stationary position x = —1, its displacement and velocity
functions x = ¢(f) and y = ¢’(f) tracing the curve Co.
If the block is set into motion with initial conditions x9 and yo such that
0 < yo + x% — x§/2 < 1/2, then the curve y? = 2E — x? + x*/2, |x| < 1, is
an oval C3 lying inside the curve y? = (1/2) — x? + x*/2. Since there are no
critical points on this oval, it is the complete trace of the corresponding solution
x = ¢(t), y = ¢’(t) of Eq. (2.8a); and it corresponds to a periodic motion of the
block.
The period of motion can be found with the aid of Eq. (2.8d). To do this, we
take xo = 0 and 0 < po = V2E < 1/\/2. By symmetry, then, the period T for
one complete oscillation satisfies
ete dx a Te
[QE — x2 + x4/2pl2 4
By analyzing this integral carefully, one can show that T— +o as E— 1/4 and
T—0Oas E—O0. Thus the period of oscillation depends on the total energy of the
block, and hence on the amplitude 21/?E/* of the oscillation. Such a dependence is
characteristic of nonlinear oscillators. In linear oscillators, amplitude and period are
completely independent.
EXERCISE 2.8
Wa 3% p
Xe = ea kee 4. Ka F
my my,
Jos ue
|ip ae
ko
eee
ko
eS,
of differential equations.
Systems of differential equations involving many unknown functions can occur.
It is therefore desirable to have a systematic scheme for keeping track of the data and
unknowns in such a system.
The purpose of this chapter is to set up such a scheme and apply it to the study of
differential systems having the form
Ma gt a Oe
(3.1a)
Xn = GniX1 Sa AnnXns
44
Simply coupled systems of differential equations 45
The system ([Link]) will be studied by our making a change of variables so that it
assumes the form
Yi = Ary + M172 5
yo = Moye) TT2y3- .
(3.1b)
Ya-1 = An—1Yn—1 + Vn—1Yns
yn = AnYns
where the \,’s are real or complex numbers and each 7; is either zero or one.
We illustrate a solution method, called the method of integrating factors, for the
system (3.1b) by explicitly calculating the solutions for the case n = 3, y; = x,
Yo = y, ¥3 = z. The system is then
ca Nyx s Yiy 5
y’ = Aeoy + Y 22,
i \3Z.
The last equation is solved by the method used in Section 1.2 to give z = Ae‘,
where A is a constant of integration. The second equation is then written
y= hoy = AY xe.
One multiplies by the integrating factor e~*2", as was done in Section 1.3, and observes
that
d (ve?) = Axe
dt
There are two cases. If \3 # Xo, then one finds upon integrating that
One multiplies the appropriate equation by the integrating factor ei! and proceeds
as before. In performing the integrations, we find that five cases arise: \y = \2 = Xs,
1 = \o # X3, Ay = A3 # No, AQ = AZF Nt, A1 FAQ# \3 ~ Ay. The student
will be asked to solve a specific equation for x in each of the five cases as an exercise
LP.
46 Matrix methods for linear equations with constant coefficients
below. Notice that the calculations would be considerably simplified if one or both
of the constants Y;, Y3 were zero.
A system of differential equations having the special form (3.1b) will be called a
simply coupled system. This chapter begins with the development of vector/matrix
notation so that systems of the form ([Link]) can be written compactly. We shall then
turn our attention to changes of variables that reduce systems of the form ([Link]) to
simply coupled systems of a very special type called Jordan systems.
EXERCISE 3.1
z' = 2z z' = 2z
ee + oy ’
y= 2yi 2:
nS Pe
If m real or complex numbers are written in a vertical (horizontal) array, the resulting
mathematical object is called a column (row) n-vector, and the numbers so arrayed
are called its components. For example,
1
3 X2
a S z and FO tox) = X3
©
eon —X X1X34X3 — ACL
/ — x2)
X9
are column 4- and 3-vectors, respectively. Notice that f(x,, x2, x3) has for its com-
ponents the values of three functions of the variables x1, x2, x3. For every f,
[Ei Vleet aT
is a row 5-vector. We shall use the unqualified word “vector” to mean a column
n-vector. Column vectors will be denoted by lower case boldface letters a, b, ¢,
¢(t),.... When a vector is to be regarded as a row vector, we attach to it the super-
script T. Thus a’, b’, c”, 7(2), .. . are the vectors a, b, c, #(f), ... written as rows
rather than columns. If x is a (column) vector with components xj,...,Xn, We
shall write x = [x1,...,Xn»]’ for it as a matter of typographical convenience. The
set of all n-vectors with complex (real) components will be denoted by V;,(@)(Vn(@)).
Addition and salar multiplicati
of vectors
on 47
LAx = [x;,...,
and
%)y = [y1,..., yn] denote two n-vectors, One says
that x = y if and only if x, = y,fork =1,...,n. The swn of x and y is by defini-
Hon x+y = [4,4 1,---,%,
+ Yn}. Vector addiis
tion
associative and com-
that is,
mutative,
K+ Vt+D=K+y¥)4+z and x+y=y+x
for al) n-vectors x, ¥, and z. The vector 0 = [0,..., Of with all components equal
to zex0 is called the zero vector. By —x, we shall mean the vector[~x,,..., —x,J’.
LA a be any number. The product vector ax isthe vector [ax,,..., ax,J", and
the multiplication so defined is called scalar multiplication. We shall refer to real or
complex numbers 25 scalars. Mt is easy to verify the following properties of scalar
multiplication for arbitrary vectors x and y and arbitrary scalars a and b:
i) l-x = x,
ii) (abjx = athx),
ii) (a + bx + y) = ax + ay + bx + by.
EXERCISE 3.2
1, Construct 2 row vector and a column vector using the numbers 2, 7, 3i, »/2, 0 for
K+Y ;
& . =
aot miy+zi,thnx=-?,y=27,z=7
won!
—n
|
Zz
Ne) 4 NY \\ i \l ~»
~
\l
Ww
UA & NS, N
&lA
J a
We
ee
eas|
‘ene
| 5 i 0 deg de
2 2=7('= 3):
=
Sal —-~ oN = x10 4 y13 +2]1], ten x= y=
0 0
3 1 0 outs
M1, if = *10|+ y131+z
hen x= 2,y=27,
]1| , z=? wer”) bs
AY
WS
ae
SS
0 0 Z
48 Matrix methods for linear equations with constant coefficients ApS!
In each of the Problems 12 through 15 below, there are given three vectors p;=
Lx; Z;]’, 7 = 1, 2, 3, and a simply coupled system of differential equations. (1) Check
that the components x;, y;, and z; of each p; satisfy the given differential equations. (2) Let
C1, C2, c3 denote arbitrary constants. Form the vector q = cipi + c2Pp2 + cap3 and
check that the components of q also satisfy the differential equations. Are they the solutions
produced by the method of integrating factors?
12. pi = [e**, 0, 0]7, po = [0, e—*, OJ”, ps = [0, 0, e2t]T,
Ke ee ce
13. pins [ent 0, 0)ee p2 = [0, ees Oe ps3 [0, 16°%, eles
XC — ey) Vl ee
X=
OX ny ey Dylans
15. pi: = [cost + isin#, 0, 0]", po = [0, cost — isins,0]", p3 = [0, 0, e?']”.
x’ = ix, y’ = —iy, z’ = 2z. [Note: e* = cost + isint, i? = —1.]
3.3 MATRICES
If mn real or complex numbers are arranged in a rectangular array with m rows and
ncolumns, the resulting mathematical object is called anm X n matrix. For example,
: t 0
A= eel and BQ) = 2) sire
i Orel oil 7
c-[ a
isa 2 X 2 square matrix. The only nonsquare matrices that we shall have occasion to
use in this book are row and column vectors. We therefore use the unqualified word
matrix to mean ann X n square matrix in all subsequent discussions. The integer n
will be called the size of the matrix.
In certain discussions, it is necessary to give designations to the elements of a
matrix. The symbolism A = [a;;] means that a;; is the number in the ith row and
jth column of A. We shall also write A = [a,,...,a,] for the matrix with columns
a; = [aij,..., jn)’, 1 <j <n. In using this notation, one thinks of A as n column
vectors written side by side.
Let A = [a;;] and B = [b,;] denote two n X n matrices and let a and b denote
scalars. One says that A = Bif and only if a;; = b,; fori, j= 1,...,n. We define
the sum of A and B to be the matrix (4 + B) = [a;; + b;;]. Matrix addition is
associative and commutative, that is,
for all X n matrices A, B, and C. The matrix 0 with all elements zero is called the
zero matrix. If A = [a;;], we denote [—a;;] by —A.
If A = [a;;], the product aA is by definition [aa,;;].
Then 1- A = A, (ab)A =
a(bA), and (a + b)(A + B) = aA + bA+ aBe bB.
The matrix A” obtained from an n X n matrix A by interchanging its rows and
columns is called the transpose of A. If A = [a;;], then A? = [a;,].
An n X n matrix may be multiplied from the right by an n-vector x to give a
product n-vector Ax. This multiplication is defined by the equation
a male fay qe oe
AXA) | Te: P= (3.3)
Ani +++ Ann}|Xn An1X} Se OS eee es
As a numerical example,
|! ;H ns E + i).
AS TN Sie 43
A fundamental property of this multiplication is the distributive property: If x
and y are any n-vectors and if A and B are any n X n matrices, then
(A + B)(ax + by) = aAx + bAy + aBx + bBy
LSE
The differential equations (3.1a) can, in view of Eq. (3.3), be written as
,
x ral
: = 3 5
AG Ani +++ Ann|| Xn
Anaifetond|
1 —2
rane Zieh
OY
Then AC = 0, the zero matrix. If A had an inverse, say B, the computation
EXERCISE 3.3
Perform the indicated operations.
27070)16
en Oe 1
0 0 1
Vector- and matrix-valued functions 51
lg e Ce a
mek Sy i Olli Xe el ON es
Za © OQ Alike 4h 0 0 —i||z
Perform the indicated matrix multiplications in Problems 15, 16, and 17.
Tews) eos eeeoe 91 a = 0
feed Ctwee S| 66 15 36 Bate
canine Ty) aaa en
ee? e341 fae260r 54 ae78:1) 62> al
fone te 4eOe 131-282-6191 ||.7 41) 1)
omcures||— 12 22709'41)|| oe =01
ee 4 420 % les
17.4=]4 3 1|| 10 9 1 Sr
2a 24 —3 A ; 0 8
18. Solve the differential equation
Siena” = Aa,
where A is the product matrix of Problem 15.
19. Work Problem 18 taking A to be the product matrix of Problem 16.
20. Work Problem 18 taking A to be the product matrix of Problem 17.
denote the values of n functions fi, ...,f, at the point (%1,..., Xn, 1). We identify
the n-tuple (x1, ..., Xn) with the vector x = [x1,..., Xn] and define
fi, s+ ey Xns t)
{(x,1) = :
Iris +3 Xn;5 t)
GPS)
oy ’ ’ ait se ice
0 Oz ~) 3 ;
2
The notion of vector function allows us, therefore, to write the differential
systems ([Link]) and (3.3) in the condensed form
x’ = AX, (LHC)
where x = [x1,...,%n]” and A = [a;;]. A solution of this equation is then a
vector-valued function @ such that $/(t) = A@(?) for all t. One can regard (LHC) as
one vector equation or as the system ([Link]) of n scalar equations. It will be con-
venient for our purposes to refer to it as a first order, n Xn, linear, homogeneous
differential equation with constant coefficients A = [a;;].
Vector- and matrix-valued functions 53
In the subsequent study of (LHC), use is made of the product rules for dif-
ferentiating matrix and vector functions. These are much the same as the product
rule for scalar functions, but the factors must not be commuted.
Theorem 3.1. Let A and B denote differentiable matrix functions defined on an
interval a < t < w and let x denote a differentiable vector function defined on the
same interval. Then
i) (AMx()) = A'Ox() + A(x’, and
ii)(ABW)!
= A()BO) + ABC.
Proof. \f A(t) = [ai;(t)] and x(t) = [x1(t),..., Xn(O]", then the ith component of
A(t)x(t) is Dik=1 Gin(t)xz(t), and
LY aula) = > alex) + anlOxLO
k=1 k=1 k=1
by the usual product rule. But >°j_, a/,(t)x;,(t) is the ith component of A’(t)x(t)
and >o7_1 axz(t)xi(t) is the ith component of A(t)x’(t). This establishes part (i)
of the theorem. To prove part (ii), write B(t) = [bi(t),..., bn»(]. Then
(AMBOY = (AMD), ---, (AMb»D)]
= [A’(t)bi(t) + Abi (D), ... , A’Obn»(D) + AMD, (CD)
= [A’()bi(2), ..., A’ br()] +--+ * + (AMD, ..., Ab]
= A’(t)B(t) + AC)B'(A). ||
EXERCISE 3.4
4. Let (4 = [x(), y(d, z(]" be a 3-vector with ¢’(t) = 0 for all t. If o(0) = 0, find
x(t), y(t), and z(z).
5. Let o(1) = [x(), yO, z@]"_ be a 3-vector with ¢’(4) = ¢(t) for all ¢. If 6) =
[1, 3, 7]7, find x(4), yO), and z(z).
6. Find a 2-vector $(1) = [x(4), y()]* such that $’()) = —¢(d and $(0) = [1, 0].
7. Find a 2-vector x(‘) = [x1(4), x2()J" that satisfies the equation x(‘) = x(0) +
{5 x(s) ds, where x(0) = [3, 5]”. [Hint: Differentiate the equation by components. ]
8. Find a 2-vector x(t) = [x1(), x2()]" that satisfies the equation x(‘) = x(0) +
J (t — s)x(s) ds, where x(0) = [1, 0]”. [Hint: Differentiate the equation by components.]
9, Let x() = [x1(), x2()]" satisfy the equation
t
x(t) = x(0) + i.
3x1(s)1x9(s)
+ S260] ds,
where x(0) = [1, 1]7. Find the components of x(7¢).
BES)
54 Matrix methods for linear equations with constant coefficients
B= ‘
Yn-1
0 Wee =
all diagonal elements being equal and all superdiagonal elements having the common
value 1. The second type is merely called a Jordan matrix. One says that a bidiagonal
matrix Bis a Jordan matrix if its diagonal and superdiagonal elements may be grouped
into Jordan blocks without rearranging them, Jordan blocks with equal diagonal
elements being contiguous. The matrix B in Fig. 3.1 is a Jordan matrix. The dashed
lines indicate the required grouping into Jordan blocks. We shall denote a Jordan
matrix B with Jordan blocks B;, 1 < j < k, on its diagonal by either
By, 0
By
yil’ M4 Ae alone 0 al
Va 0) AE, aS Va
since the integration procedure at each stage depends on which 7,’s equal zero and
which equal one.
There is a standard procedure which enables one to write from memory the
solutions of Eq. (3.5b) without actually performing the integrations. We now develop
this procedure.
Recall from calculus that if a is any number, the exponential function e®’ has a
Maclaurin series expansion
1+ at+ at?/2! +--++ a*t®/k! +--
which converges to it for all ¢. In the currently popular calculus texts, the exponential
function e” is defined to be the inverse of the logarithmic function t = a7} f u7! du.
It is nevertheless possible to define
fea}
Xi
PLL as a fe
—|
&
for every constant c, and that every solution x = ¢(¢) of Eq. (3.5c) has the form
¢(t) = e“cy for some constant cy. Since matrices can be squared, cubed, and in
general raised to any power, it is not implausible that a similar result should hold for
the general equation
x’ = Ax, (LHC)
We shall show in the next aie that the matrix power series
k
T+ a+ 5syoaoe ftp (3.5d)
converges to ann X n matrix for 4 t and that the series may be differentiated term
by term to obtain
o o k
ay ae Sy (3.5e)
The sum of the series (3.5d) is, not implausibly, denoted by e“'. Equation (3.5e) can
then be succinctly written as
A — | . = —
Thus
Foyle hipaa MO} ik
ett. itcom as oe hae onc Ee Ss agape
Onn ay Pe Opies
ee ea 0
[L 0 ee hOg Tee ee
E ert 0 T
0 ox
Example 2. To find e4’ when A = E \ compute
oe he 3 eee BS Niet
A =|; ol A kK eal ’ A =| * |:
Then
Nupe yl yk x ie f 3 ye-1k-1
each of these n? component power series converges for all t. To do this, write A = [a;;]
and denote the sum )°7_; 50”, |a;;| of the absolute values of the elements of A
by |A|. Now the ijth element of A? is
(451415 + °° + + Gin@nj),
and summation of the inequality
|ai141; = Rae =F QinQn;| < la; | : lay, = Ig \din| 5 lay,;|
—eA'e = c= >) AS c= ||
dt dtjao k! ka1 (k — 1)!
Corollary 3.2. For any constant vector ¢, x = e*‘e is a solution of x' = Ax.
Do not misinterpret the corollary: it does not say that every solution x = @(t)
of x’ = Ax has the form ¢(t) = e“’c for some constant vector c. We shall show that
this is true for Jordan matrices in the next theorem, but a proof that it is true in
general will not be given until Section 3.7.
Anticipating this result, however, we comment here that the problem of solving
x’ = Ax is really the problem of finding e*’. If A is not a Jordan matrix, it is usually
not very easy to compute e“! using only the defining power series (3.5d). Series
substitution, however, works very nicely for the computation of e”' if B is a Jordan
matrix. We verified by substitution in Example 1 above that e?' = diag [e*"’,...,
e")] for each diagonal matrix B = diag [\1,..., An]. Similarly,
je}
ib fie hen eae
(k — 1)!
et Ol ae ew (3.5f)
<aieiy ah as t
Oe.0F SR: I
58 Matrix methods for linear equations with constant coefficients AYS)
for each k X k Jordan block matrix (3.5a). This was verified for k = 2 in Example 2.
The general verification goes through in the same way, but the computations are more
elaborate.
The next theorem allows us to immediately write the solutions of Eq. (3.5b)
without performing any integrations.
Theorem 3.3. Let B denote ann Xn Jordan matrix and let y = Y(t) denote an
arbitrary solution of
y’ = By. (3.5g)
Then (t) = e®'c for some constant vector c.
Proof. First assume that B = diag [\1,..., An] is a diagonal matrix. Then each
component y; of y satisfies
Oe BPE Oy
vi(t) = Av; (Z).
Applying the method of integrating factors, we find that there are constants
C1, 2 5 Ce such; that
Wilt) = ec, ;
Yr-1(t) = ev epes + tec, 3
- Mt Mt Mee ye
¥idt) = ec, + te C2 tasins eo (ea Ck.
In matrix form,
let) eS ie Sie
ie . C2
W(t) =e:
t
0 i sete Ck (3.5i)
But (3.5i) is precisely the equation y(t) = e?’c, where ec = [cy,..., Cl
The exponential matrix 59
Finally assume that B = diag [B,,..., B;,..., By] is a general Jordan matrix.
Then the system y’ = By actually consists of k independent subsystems u; = B,u;,
where the components of y are the components of all the u;’s. Since each B; is a
Jordan block, uj = e?%'c; for some constant vector ¢;. Thus
ePit 0 Cy
W(t) = 1. a Ls
0 eBxt Ch
But e?' = diag [e?'’,..., e?*]. Thus y(t) = e?’c, where the components of ¢ are
the components of all the c;’s. ||
Example 3. If B = diag [B;, Bo, B3, B4] is the matrix in Fig. 3.1, then y’ = By
has the form
ae ah)
uj =]0 3 Llu, uy = 2uo,
Om OFS
1 x 1 O
i= [; {|U3, u=|0- 9 Illus,
0 O
where uy = Lyi, Yo. Val’, Ug = [ya]’, us = ys, yel’; us = [V7 ys; yo]’. Since
Ft a
eBit mr e2t 1 t : eB2t = en.
0 O 1
l ae)
eBst sale I eBat af nt F
Oe Oy 0 |
EXERCISE 3.5
2. In parts (a) through (k), there is given a matrix B. Find the matrix e®' either by drawing
an appropriate analogy with Examples 1 and 2 or by substituting the matrix B into the
exponential power series.
270 170
9 |Tv 1 d)|0et7 0
»|0 3 »|Oni | 0m atone
oe it r 0 O xr 1 O
a) iO ze © f) tO am i g)|0
7 1
00fn OY OD as 0 On
aie OO 7 1 ONO
07 0 O x IO) ae OY ©)
Nowy et De Ont
000 - 000f
1.0) 0 ahti
hy 2 OF aie.
Dlo on 0 Sa aria
0 00cm Oy (0) ) a
4. Substitute the matrix (3.5a) into the exponential power series and derive the formula
GSt):
5. Show by power series substitution that if B= diag[B,,...,Bn,], then e?! =
diag [eP ite cea":
6. Let P denote a nonsingular matrix. Verify by power series substitution that P—1e4*P =
e(P-1AP)t.
3.6 DETERMINANTS
Definition
1) If A = [ay,]isal X 1 matrix, we define det A = aj}.
li) Assume that determinant has been defined for n—1Xn-—1 matrices,
n > 2, and let A = [a;;] denote ann X n matrix. We define
Thus
A229 a3 ao, a a a
Get A= ars det| <= 12 det : ae —- 413 det a 22 . ||
432 433 431 433 431 432
We shall assume the following theorems which are proved in linear algebra
courses.
The number det A;; is called the minor of the element a;;, and the signed minor
(—1)'*? det A;; is called the cofactor of the element a,;;._ Equation (3.6b) is called
the Laplace expansion for det A along the ith row.
We defined det A above as the value of the Laplace expansion along the first row.
Laplace’s theorem implies that any row could have been used. In fact, since the
columns of A are the rows of A’, it follows from Eq. (3.6d) that
for each j = 1,...,”. Equation (3.6f) is called the Laplace expansion for detA
along the jth column.
It follows easily from Laplace’s theorem that the determinant of a triangular
matrix (one with only zero elements above or below the principal diagonal) is the
product of the elements on the principal diagonal. If A is a large nontriangular
matrix, the evaluation of det A by Laplace’s theorem can be a very laborious task.
The difficulties can be mitigated somewhat, however, by taking advantage of zero
elements. For example,
6 16 —23 6 16
det} —1 —2 7T/= (-it?-3-det| _f | - 42:
Ora 3
If a matrix A has no zero element, one can construct a matrix B which has at least
one zero element and satisfies det B = det A. This is a consequence of the following
theorem.
Theorem 3.6. If the n X n matrix B is obtained from the matrix A by adding a
scalar multiple of a row (column) ofA to another row (column) of A, then det B =
det A.
Proof. We prove the assertion for rows. If A = [a,;] and
aii 400 Ain
An1 6 As Ann
then B = CA, where C is the matrix obtained from the identity matrix J by adding
the product of \ and its mth row to its kth row. It follows easily from Laplace’s
theorem that det C = 1. By Eq. G.6e),
det B = (det C)(det A) = det A. ||
At least one element in the kth row of B can be made equal to zero by choosing
\ properly. If, for example, a, # 0, the choice \ = —a;,1/am , reduces the first
element in the kth row to zero.
Example. Let
Multiply the first row of A by —3 and add the result to the second row of A to obtain
| ers a
B, =|0 —-1 —-3
PP yi TA
General solutions for linear differential systems 63
The matrix B, has the same determinant as A. Now multiply the first row of
B,
by —i and add the result to the last row of B, to obtain
re ee.
By =|0 -1 —3
0-i 0
We have det By = detB, = det A. Thus
EXERCISE 3.6
Having digressed a bit to review determinants in the last section, we now return to
the study of the differential equation
x’ = Ax (LHC)
that we began in Section 3.5.
The student has made changes of variables in many mathematical problems,
e.g., in evaluating integrals. A common method for changing variables in (LHC) is
to make the substitution x = Py, where P is a nonsingular matrix and y is the new
vector variable. This yields Py’ = APy, or
Ve (Ps APy. (3.7a)
64 Matrix methods for linear equations with constant coefficients Sul
Now let us assume that we wish to find some solution x = ¢(t) of (LHC). We
make the change of variables x = Py, where P is the matrix in Jordan’s theorem.
The equivalent equation (3.7a) then has the form
View By, (3.7b)
Problem 1 of Exercise 3.5 was included to show that computation of e4¢ by power
series substitution can be very complicated. Also, it is difficult to obtain enough
cae directly from the power series (3.7e) to graph, say, the solutions x =
4te. One can nevertheless make very profitable use of Theorem 3.8. To do this, we
Hoe the notion of eigenvalue for the matrix A.
Let us subtract a variable \ from each diagonal element of a matrix A. The
matrix notation for this operation is A — \J. The determinant det (A — XJ) will
be an nth degree polynomial in the symbol \ (see Problems 5 through 10 of Ex-
ercise 3.6). The solutions of the polynomial equation
Ge ee)
B=}; ~ a
0 neal
Then det (B — dJ) = (a — )*, and B has but one eigenvalue \ = a of multi-
plicity k. ||
Example 2. Let B = diag [B,,..., B,] be a Jordan matrix; then
Thus the eigenvalues of a Jordan matrix are precisely its diagonal elements. ||
The final theorem of this section provides a method for solving x’ = Ax, pro-
vided the eigenvalues of A are known. To prove it, we need a lemma.
By the same theorem, it follows from the identity P~'P = J that (det P~*)(det P) =
Leas
det (ALT) det BN
and the lemma is proved. ||
Theorem 3.9. Suppose that the matrix A has eigenvalues \1,..., x with multi-
plicities my,...,m, respectively. If x = $(t) is any solution of (LHC) xt = Ax;
then
k
where the Y,,;’s are scalar constants. This proves the theorem, for we have only to
define c,; = [Vrj1, Vrja) +++» Yrjn]’ to obtain the solution (3.7g). ||
Example 3. Let us find the solutions of the system
Bali DO meealiiexs
yi =|0 2 O}lyi-
Zz 0) Jt S§i\lLz
Thus
ZA yt 2B, + Bs 2G +5 Cz
= DAS eo 2 Be fee a 263 (eae
Aoi Ae B. + 3B3 Co+ 3C3
Equating coefficients of the corresponding exponentials yields the equations
A ee Sa Bi, Bz = 0, C3 Cr .
Thus
x Ay B, Ci
y| =| —B,|e7?+] 0 |re?+1] 0 Je**. ||
Z B, 0 Ci
Example 4. By introducing the auxiliary variable y = x’, one can convert the damped
linear oscillator equation
x” + ax’ + w?x = 0 (3.71)
Hl 2 Ee =| Et (3.7))
The eigenvalues of the coefficient matrix are given by the quadratic formula
2s is VAD
Wi ee ae (3.7k)
They will be complex, real and equal, or real and distinct as 8 = w? — a?/4 is posi-
tive, zero, or negative. We consider here the cases 8 ~ O and leave the case B = 0
to the exercises.
cl-filesign anon
By Theorem 3.9, each solution of the system (3.7)) has the form
le MAT ee Bales:
68 Matrix methods for linear equations with constant coefficients 3E7/
We differentiate this equation and substitute the result into the system to obtain
\,A eu ao NeBie. => Age! a Byer?!
and
NeA se 2 + AoBoer?! = —(wA, + aA>)e™! = (w?B, + aBy)e",
Ap = —5 + Vo?/4—
ow?Ay and By = — 5 — Vo2/4 — o By.
Thus A, and B, are arbitrary and it follows from Eq. (3.71) that
x = Aye! + Byed2!, (3.70)
EXERCISE 3.7
se eid, 8 9
wae E | ya=|_' At
eth alee 2
Se70u0
lei | PP
eee
Al
e) 4=|=1 20 f)A=]1 01
DO 2 Soe
TOY «4 =249 ==Ann
g) A= | — 24) 45. h)4 =|110) 941
cee) tt Ae oeel
General solutions for linear differential systems 69
‘|te 2 |X
y Ey
approaches zero ast > +0,
b) Let A be ann X n matrix. Under what circumstances will each component of every
solution of x’ = Ax approach zero as t > + ?
3. Find a solution of the equations
w | —-1 0 O-—I1I|iw
ait om OO My BON ix
yl2= OP OR= 2a eco liy
Z Jl (il WL || 4
8. Two blocks having unit mass are connected by a linear spring with stiffness coefficient
k = 2 and rest on a frictionless surface. The block on the right is held fixed, the block on
the left is moved 1 unit to the right. The blocks are released with zero initial velocity. If
70 Matrix methods for linear equations with constant coefficients 3.8
the spring is L > 1 units long when unstressed, how much does it stretch and compress as
the blocks oscillate? [Hint: Use the differential equations that you derived in Problem 1,
Exercise 1.7.]
9. Consider the system of blocks and springs described in Problem 2 of Exercise 1.7,
assuming that the masses of the block and the stiffness coefficients of the springs are equal
to one. The block on the left is moved 1 unit to the right and set into motion with unit
initial velocity while the block on the right is held fixed. Explain why neither block can
execute a periodic motion. What sort of initial conditions give rise to periodic motions?
is called an nth order, linear, homogeneous differential equation with constant co-
efficients. The damped linear oscillator equation
which we solved in Example 4 of the last section, has this form. We solved the
oscillator equation there by converting it into a system and using Theorem 3.9. Since
it is computationally inefficient to go through this procedure for each specific equation
of the form (LHC-n), let us, once and for all, convert the general equation into a
system, describe its solutions, and thus provide ourselves with a direct solution
method.
Let x = y(t) denote some solution of (LHC-n) that we wish to compute. We
introduce the auxiliary variables x; = x, Xo = x’, ..., Xn = x‘*— and obtain
the equivalent system
0 1 0 0
0 0 1 0
ces : aA Ee hax (3.8b)
0 0 0 1
mh mi) Ie Bo 1G
for some constant vectors ¢),;,Co,,..., Cred DLCUMY a eee ee Vd Fea oe wi)
denote the first components of ¢;,, Co;,..., Cm Then
k
(nan
p=
oe ty, et (3.8)
We state this result as a theorem.
Theorem 3.10. Each solution of
NP A Re eel. Quy 50 (LHC-n)
has the form (3.8d) for some choice of the constants ¥ 1, ... 5 Vm,r-
Let us henceforth call Eq. (3.8c) the characteristic equation for (LHC-n). The
polynomial p(\) may be found directly without converting the differential equation
into a system. One makes the substitution x = e* and obtains e*p(\) = 0. For
example, the characteristic polynomial for x’ + 2x” + 7x’ + x = 0 is p(d) =
3 + 2\2 + 7A + 1. It follows from this general observation that x = e is a
solution of (LHC-n) if p(\;) = 0, that is, if \; is any solution of its characteristic
equation.
Let us write the characteristic equation in factored form
pO) = = AM = AI = AN" = 0
and notice that
d’p men
2 Ar) = 9, sie a 9 Oe) = 0! (3.8e)
dN
Verbally this observation is expressed by saying that the multiple zeros of a poly-
nomial are also zeros of its derivatives. Keeping this in mind, we substitute
x = tte, 4 > 0, into the left side of (LHC-n) and obtain
BQ), __ ot a "PQ) | Zeer dp) 2
fe Ge = 1) a 2(u — 2)! dav? ees
Xa OY A be ay yx" +b ax
= Ss Bi ($9(t) + arg P(t) + +++ + Gn—164() + ano(d)) = 0.
j=
Matrix methods for linear equations with constant coefficients 3.8
1p
Identifying the B’s with the 7’s in the sum (3.8d), we then see that it defines a solution
(LHC-n) no matter how the 7’s are chosen. We state this observation, which is the
converse of Theorem 3.10, as Theorem 3.11.
defines a solution of (LHC-n) for every choice of the constants V tr, Vary «+ + » Vm,r-
In view of Theorems 3.10 and 3.11, we shall call (3.8h) a general solution for
(LHC-n).
Example. Let us find a solution x = y(t) of x’” — 4x” + 5x’ — 2x = 0 that
satisfies ¥(0) = 1, y’(0) = 0, ¥’’(O) = 0.
The characteristic equation is \? — 4\? + 5A — 2 = 0, with solutions \ = 1,
) = 1, and \ = 2. Theorem 3.10 guarantees that the solution will be of the form
EXERCISE 3.8
In each problem below, find a general solution of the indicated differential equation. If
initial conditions are given, find the solution that satisfies the conditions.
il, ge ae Abe? ae abe = (0), Daxtie-
2x ae =e (0)!
3, ae” Se Be 0), 4. x’ + 2ax' + a*x = 0.
Sax”? = 3x Bx = 0: 6. x” + 3x" + 3x’ +x = 0.
Tope ee ae Tx x= OF (0) = 1 xO) ale Ole
Soc — i te O erae(()) ae, 1 (Os On tO) poten
9, xO) - 2x" 4+ x% = 0; 10!" 4.2” = x! = 0,77 = 4.
Li XO?) 8x 12) 20,
12 x2) De 6x = 150 ON Aine lr noe
13. x% + 6x!" + 10x” + 6x’ + 9x = 0.
14. 25x’ — 15x’ — 4x’ = 0.
The annihilation method 73
15, x9) + (4 + S5ix’” + (—2 + 20x" + (—24 + 20i)x’ — 24x = 0. [Hint: Try \ =
—2 twice.]
16. Solve the initial value problem LCI” + RCI’ + I = 0, 1(0) = 0, 1’(0) = E/R, which
was formulated in Section 1.8.
17. A very flexible cable of length L is held so that half its length hangs over the edge of a
high ledge. If the cable is released from rest, how much time will pass until it falls free from
the ledge?
18. A very flexible cable is draped over a very small pulley with two-thirds of its length on
one side. If the cable is released from rest, how much time will pass until it falls free?
In the last section, we developed a method for solving the homogeneous linear equation
SO Nae sla So al (apenie meme he ool 0 (3.9a)
Me Ge 2 ES An—1h + an = 0
to obtain
(A = hy)™ aeiceee (0 = yA ee = (0)
Se I a ee ae yee (3.9b)
rl!
where each A; is a constant and each 6,(f) is given by one of the expressions
te, eeeet tk3 sin wt, t**coswt, t*e% sinwt, t**e%' cos wt. (3.9e)
It will be convenient in the discussion to refer to an expression ¢,f\(t) + °** +
Cmfm(t), Where the c;’s are constants, as a linear combination of fi(2),.-- NI EAUS
We may then say that the annihilation method is a technique for solving Eq. (3.9c)
when A(t) is a linear combination of the values (3.9e).
The technique is best explained by way of examples. Suppose, then, that we wish
to solve the equation x” + x = 7”. We differentiate three times and obtain
x™ + x’ = 0. Every solution of the first equation satisfies the second, but the
Matrix methods for linear equations with constant coefficients 3.9
74
second equation has solutions that are not solutions of the first. By Theorem 3:10;
each solution of x® + x’’’ = 0 has the form
X= Cy Col 1 eal cue” iciee (3.9f)
Now multiply Eq. (3.9c) by v and subtract the products from Eq. (3.9h) to obtain
Cae el Crlen On C Ren yee hee | |
Example 3. If b(t) = sin wf, differentiate Eq. (3.9c) twice to obtain
Hee ay) eat gag! oo ten) Se (3.91)
Multiply Eq. (3.9c) by w” and add the products to Eq. (3.9i) to obtain
x rt 2 ) ai axe ae (ao a Ox” a nee
Now multiply Eq. (3.9c) by Y and subtract the products from Eq. (3.9j) to obtain
POO) + a) Ho + ra + ad
= j=xyCVS) F aWF PD Fo + an) + ans)
r
= 2) eb) = 5). I
2
Theorem 3.13. Let» > 0 bean integer and let8and denote any complex numbers.
The equation
x gx eg, a td, = oree (3.90)
x” —x=t+e! (3.9q)
that satisfies the conditions Y(0) = 0 and y’/(0) = 0. We differentiate the equation
and eliminate e’ from the result ofthe differentiation to obtain x/” — x” — x/+x=
1 — t. After two more differentiations, we find that x — x% — x/” + x’ = 0.
All solutions of this equation have the form
xX = Cy + Cot + cge’ + cate’ + cs5e—, (3.9r)
by Theorem 3.10.
The solutions of Eq. (3.9q) are also of the form (3.9r), but not all the constants
are arbitrary. To discover which constants are arbitrary and which are not, we make
the substitution (3.9r) in Eq. (3.9q) and obtain the relation
EXERCISE 3.9
What values ofZ and C will cause the bulb to burn most brightly ? Could a dimmer be built
using these principles ?
22. Consider a metal block of mass m which is constrained to move on a horizontal friction-
less surface under the influence of a linear spring of stiffness coefficient k and a time de-
pendent force of magnitude Fo sin wt. The differential equation for the displacement x from
equilibrium is mx’ + kx = Fo sinwf. Assume that w = /k/m and solve for x. What
happens as tf > + © regardless of the values of x(0) and x’(0)?
23. If friction is considered in Problem 22, the appropriate differential equation for the
displacement x is mx’ + ax’ + kx = Fosinwt, a > 0. What value of w will give the
largest amplitude to the displacement?
A set {a;,..., 8m} of n-vectors is called /inearly dependent if there are constants
C1,..+, Cm, not all of which are zero, such that
I 2 3
eedel eee
7 8 9
The indicated vectors are therefore linearly dependent.
It is proved below that every set of n + 1 or more n-vectors is linearly dependent.
It is useful to know (and the reader is asked to verify the assertion in the exercise
below) that every set of vectors containing the zero vector is linearly dependent.
The definition of linear dependence above can be phrased more compactly by
introducing a little more terminology: If a;,..., a, are n-vectors, an expression of
the form c,a; + °°* + Cmam, Where c1,...,Cm are constants, will be called a
linear combination of the vectors. Thus {a;,...,4,} is linearly dependent if and
only if there is a nontrivial linear combination of its elements that equals the zero
vector.
The following short theorems are useful in the discussion of linear independence.
CW tied St Witenes,
for any constants ky,..., km, then
ay hoes ek ae
and {a1,...,@,} is linearly dependent. ||
Let us say that a set {y1,..., y,} of n-vectors spans V,,(€)—or is a spanning set
for V,,(C)—if every n-vector can be expressed as a linear combination of its elements.
Linear independence of vectors 79
A spanning set that is linearly independent is called a basis for V,(@). If u; is the
n-vector with ith component equal to one and all other components equal to zero,
then it is easy to verify that {u,,...,u,} is a basis for V,(€). We shall call it the
standard basis.
A fundamental result from linear algebra is the following theorem.
Theorem 3.16. Any set {¥1,.--,Yn41} of n+ 1 n-vectors is linearly dependent.
Proof. Suppose for contradiction that {y1,..., Yn, Yn41} is a linearly independent
set of n-vectors and let X = {x),...,Xn} denote the standard basis for V,,(C).
(Note that it is not assumed that x; = u;.)
Since X spans V,,(@), the set {y1, x1, ..., Xn} is linearly dependent. Thus there
exist constants bj, c1,..., Cn—not all zero—such that
It is not true that cy = --- = c, = 0, for this would imply that y; = 0 and would
contradict the linear independence of {y1,...,¥m41}. We may therefore assume
that the standard basis vectors have been arranged in such a way that c; ~ 0. Then
x, can be expressed as a linear combination of the vectors y;, X2,..., Xn by merely
solving Eq. (3.10). Since {x;,...,Xn} spans V,(C), it follows that Y, =
Ry XG ex coesealso, —Ihis-being the case, the set 4yi,[Link],- 9) Xp) 1S
linearly dependent. Repeating the argument n-times, we find that Y, = {y1,..., Yn}
spans V,,(C). This contradicts the independence of {y1,..., ¥n41} since there must
now exist constants c;,..., Cn such that yniy = C1y1 +°°* + CnYn- ||
Theorem 3.16 implies, in less abstract terms, that every system of algebraic
equations
a11X1 9° OimXen = 0,
EXERCISE 3.10
Which of the following sets of vectors are linearly dependent? (Note that this is really a
question of solving linear algebraic equations.)
18 6 9 18 6 9
1. |—66], |—18/, |—36 2. |—48], |—12], |—18
2 0 9 42 12 18
80 Matrix methods for linear equations with constant coefficients 3.11
ese camer bo
‘Labbe BERE
3.11 LINEAR ALGEBRAIC EQUATIONS
Proof. It is convenient to show first that statements (i) and (ii) are equivalent. Let
the columns a,,...,a, of A be linearly independent and suppose that x =
[c1,..-,¢n/’ is a solution of Ax = 0. Then cia; +---+ c,a, = 0.” Since
@1,.--, a, are linearly independent, c¢, = -*-= c¢, = 0. Thus x = OF "if con-
versely, Ax = 0 has only the solution x = 0, it follows from the definition of linear
independence that the columns aj,...,a, of A are linearly independent.
We complete the proof by showing that statement (ii) implies (iii), that (iii)
implies (iv), and that (iv) implies (ii).
Suppose that statement (ii) is true. Let u; (¢ = 1,...,m) denote the standard
basis vectors defined in Section 3.10. Then a,;, the ith column of A, satisfies a; = Au;.
By Theorem 3.16, the vectors u;, a;,...,@, are linearly dependent (i = 1,..., 7).
Thus, there exist constants d;;, ... , din Such that
uy = dja, oF ar ae Gan
Next, assume that (iii) is true. Then 4A~! = I. By Theorem 3.5, 1 = det J =
det (AA~') = det A- det A~!, and det A ~ 0. Thus statement (iv) is true.
Finally, suppose that (iv) is true. For each = 1,..., 1, let
b; = [(—1)!*/ det Ay;,..., (—1)"*# det Ans]
and let B have b; for its jth row. By Eqs. (3.6b), (3.6c), and (3.6f), BA = (det A)I.
Now suppose Ax = 0. Then 0 = BO = BAx = (det A)x. Since det A ¥ 0,
x = 0. Thus statement (ii) is true. ||
The matrix B in the last theorem is the transpose of the matrix of cofactors of A,
that is, if B = (6,;), then
b;; = (—1)'*) det Aj:.
If det A # 0, then A~' = B/(det A). This equation is a useful representation
of A~' and provides an algorithm for its computation.
It follows from Theorem 3.17 that the truth of any of the following contrapositive
statements implies the truth of all the others.
i’) The columns of A are linearly dependent.
il’) Ax = 0 has a nonzero solution.
iii’) A is singular (A~! does not exist).
iv’) det A = 0.
The nonhomogeneous equation Ax = b may have no solution or infinitely many
solutions. Notice that x = 0 is not a solution since b ¥ 0 for a nonhomogeneous
equation.
Theorem 3.18. The nonhomogeneous equation Ax = b has a unique solution if
and only ifA~* exists.
Proof. If A~' exists, then x = A~'b is the unique solution.
Suppose, conversely, that A~1 does not exist. We show that if Ax = b has a
solution x = d, then it is not unique. Let x = ce be any nonzero solution of Ax = 0.
Then A(c + d) = Ac + Ad = 0+ Ad =D. Thus x = c+d is a solution of
Ax = b different from x = d. ||
To actually solve a system Ax = b, it is most efficient, for the problems in this
book, to use some method of elimination of variables such as reduction to triangular
form.
EXERCISE 3.11
Find all vectors x satisfying Ax = b given the following values of A and b.
18 6 9 21 6 9 =
1. A = | —66 —21 —36|, b = O. 2. A=|— —18|], b=
—12 42
2 0 3 42 12 18 UW
Col
eSel
82 Matrix methods for linear equations with constant coefficients 3.12
—6 -—4 0 —2 G7 3
yy Zhe |) iO 7 illo i= Ble 6. b0.
A =|—24 —7 —15|,=
—4 —3 —] —] —4 -—2 0)
3.12 EIGENVECTORS
Let us recall that the eigenvalues of an n X n matrix A are those values \y,..., xz
of \ that satisfy the characteristic equation
det (A — AJ) = (A — Ay)™ + A — A)” = 0. (3.12a)
Thus a given matrix A will always have at least one eigenvalue, and it can have n
distinct eigenvalues. Now let \; denote an eigenvalue of an n X n matrix A. Since
det (A — i,J) = 0, it follows from parts (ii) and (iv) of Theorem 3.17 that there is
at least one nonzero vector x such that
(A= x= 0. (3.12b)
Such a vector is called an eigenvector of A corresponding to the eigenvalue \;. Eigen-
vectors are sometimes called characteristic vectors or proper vectors.
Example. The eigenvalues of the matrix ; | are the values of \ for which
3— F
det | 3 1 2 vanishes. These are the zeros of the polynomial (3 — A)(1 — A) —
ae 5 2} = (2) at ee mae ea Om
3 1 — 6|| xe 0 3 1+2]||x2} [0
Pi = al?
§ and p2 = 6 i
=
are eigenvectors for any nonzero choice of the scalars a and b. Notice that Pp; and p>
are then linearly independent. ||
Eigenvectors 83
We saw, in Lemma 3.9, that similar matrices have precisely the same character-
istic polynomial, hence the same eigenvalues. The eigenvectors of similar matrices
are related by the following theorem.
EXERCISE 3.12
In each of the problems below, there is given a matrix. In Problems 1 and 2, there are also
given two vectors. Check that the vectors are eigenvectors and find the corresponding
eigenvalues. In the other problems, find the eigenvectors and eigenvalues of the given
matrix.
1 ol 1 1 25 <3 1 3
7 OR a fea OM pe | yFame 5)ale I fd
aeQO #4 < |
1
5. EE2 |
24 6 9 26. 334 “S73
Oni 665-150 36 ir ne -18 8. |—28 —61 —91
oS TES) 42 12 te ee eee
Sey at 2 m1
Oni Ome Onn -nA = Lid hay (1
Annet 2
3 1 oar6)
12. Let A; denote an eigenvalue of the matrix
0 1 0 0
Pa hao 1 0
we SS UR Rag
Let us assume that A is given and that we have in hand a matrix P that reduces A
to a Jordan canonical form B. Since A and B are then similar, Lemma 3.9 implies
that they have precisely the same eigenvalues. But the eigenvalues of B equal its
diagonal elements since Jordan matrices are triangular. Thus the eigenvalues of A
form the principal diagonal of B.
Now let us write P = [pi,...,p,] and B = diag [B;,..., B,,], where Bis a
v; X v; Jordan block matrix
ee! 0
B; = i1
0 dj
for j= 1,...,m. It may happen that some of the Jordan blocks have equal diagonal
elements; thus the listing \1,...,A;,..., Am Of eigenvalues does not imply that
they are distinct.
We write the equation P~'AP = B in the form AP = PB and consider, for
notational simplicity, only the first y; columns of P:
AP Alpi,
a. Piss0 ol
Axl 0! 0
1|
|
0 o) ale eT
= EB = Api, os sys 07] Me EF Paeree tag = ue
0) orale By
CASE I. The matrix A does not have two linear linearly independent eigenvectors,
that is, every eigenvector of A is a scalar multiple of some one fixed eigenvector p;
and the eigenvalues \; and \g are equal.
CASE II. The matrix A has two linearly independent eigenvectors pj, po, correspond-
ing respectively to the eigenvalues )j, Xo.
Theorem 3.22. Assume that the Case I hypotheses hold and let q be any nonzero
vector which is not an eigenvector of A, that is, q ~ 0 andq # cp, for any constant c.
Define y by y = (A — XA )q. Then y is an eigenvector of A; hence y = cp, for some
constant c # 0.
Denote the matrix on the left by B and note that it is nonsingular. Equation (3.13c)
then tells us the pp = d,B7'p,/co. Thus
(be
A1P2 = MB p= “1B'Ap,
a:
Ch
2
[a+ (+2)C2 r/o,
i dz\ 3-1
. {t+ (u+2)p Jp.
acd ds
ah Pi + (x Se 22)Po,
Jordan canonical form for 2 < 2 matrices 87
dy
e Dis; Cees
es Po = 0. (3.13d)
Since p2 was chosen linearly independent of p,, Eq. (3.13d) implies that d> = 0. |
Example 1. Every eigenvector of
(A 37 ||
7 2
omenEERE$(G5— le
qi)
M —% 214 3(92 — 41)
The last vector is an eigenvector of A, just as Theorem 3.22 predicts, since it can be
expressed in the form $(¢2 — q1)p1._ ||
Corollary 3.22. Under the hypotheses of Case I, the equation (A — 4I)x2 = py
has a solution.
Proof. Let q ¥ 0’be any vector that is linearly independent of p;. By Theorem 3.22,
y = (A — X,/)q is an eigenvector of A, that is, there exists a constant c ¥ 0 such
that (A — AJ)q = cp;. Define xg = q/c. Then (A — A,J)xe = pi, and the
equation is solved. ||
Continuing the study of Case I, we rename the vector x of the corollary py and
summarize the results of the discussion with the equations
implies that
a Poee ee BE
P AP =|": A, (3.13f)
Now consider Case II]. We set \ = \,; and x; = p, in Eq. (3.13b) and attempt
to solve the equation (A — \4)x2 = pi. We shall see from Corollary 3.23 below
that this cannot be done.
Theorem 3.23. Assume that the Case II hypotheses hold and let q be any nonzero
vector that is not an eigenvector of A corresponding to \;. Define y = (A — d1)q.
Then y is an eigenvector of A corresponding to dj, 1 # J.
88 Matrix methods for linear equations with constant coefficients 3.13
Proof. Since p; and pg are linearly independent, we may express q in the form
q = CiPi1 + CoP2, where co # 0. Then
4-[
are \; = 6 and \» = —2. Choose any nonzero vector x = [x1, Xe)’ and form
the product
y = (A — 61)x = [—3x1 + 5x2, 3x1 — 5x2)’.
If 3x, = 5x2, then y = O and x is an eigenvector of A corresponding to \; = 6.
If 3x, ¥ 5x9, then y ¥ 0 and
Ay = [6x1 — 10xo, —6x, + 10xg]’ = —2y.
Thus y is an eigenvector of A corresponding to \y = —2 just as Theorem 3.23
predicts. ||
Corollary 3.23. Under the hypotheses of Case II, the equations (A — ;I)x2 = pi
(i = 1, 2) do not have solutions.
Proof. Suppose (A — d,J)xe = p; has a solution x» = q. Since (A — X;x)q is an
eigenvector of A corresponding to ij, j ¥ i, (A — \;x)q = cp;. Thus p; = cp;,
with c ¥ 0, a contradiction. ||
Continuing the study of Case II, we note that the eigenvectors p; and pz satisfy
the equations
(A — dyJ)p; = 0, (A — Aol )p2 = 0.
If P = [pi, po], then
AP = [Api, Apo] = [\1P1, AoPo]
= [P1, Po]: diag [\1, \2] = P diag [\q, do).
Since p; and pg are linearly independent, P is nonsingular and P~'AP = diag [\4, do].
Thus A can be reduced to Jordan form under the hypotheses of Case IJ. There is
one further question to be answered for Case II: Would it be possible to find a matrix
QO = [q1, qe], different from P, such that
ae
ovao=[%) 4.)
21 At ae
The answer is no, for the columns of Q would have to satisfy the equations
(A — dil)qi = 0 (3.13g)
Jordan canonical form for 2 < 2 matrices 89
and
(A — Aol )qo = qu. (3.13h)
To satisfy Eq. (3.13g), q, would have to be an eigenvector of 4 corresponding to ).
According to Corollary 3.23, Eq. (3.13h) would then have no solution.
EXERCISE 3.13
1. For each of the following matrices A, find a matrix P that reduces A to Jordan form.
= (4 3 es 8 9
a) 4= (5 Al ya=|_! a
y= 92 ‘98 VAL
2. Let A bea 2 X 2 matrix and let c denote any constant vector. Show that there exist
constants k; and ke such that
is not in Jordan form. Solve the system x’ = Ax anyway. [Hint: Use the method of in-
tegrating factors.]
4. Construct a 3 X 3 matrix A that is not in Jordan canonical form, and a matrix P
such that
5. Solve sequences of equations such as (3.13b) to find, for each matrix A below, a matrix P
that reduces A to Jordan canonical form.
CHAPTER 4
4.1 PRELIMINARIES
The student has learned in Chapter 3 how to reduce the problem of solving either
the equation (LHC) x’ = Ax or the equation
yi? gx VELOSO x a0 (LHC-n)
to an algebraic problem: that of finding the zeros of the associated characteristic
polynomial and, in the case of (LHC), solving certain systems of linear algebraic
equations. Let us now consider the equations
x’ = A(t)x + b(t) (L)
and
x™ 4 a(x P + +++ + an_ix’ + anx = BC), (L-n)
where A, b, a},..., 4m and b are continuous functions on an interval a < t < w.
These are called linear differential equations with continuous coefficients. If b(t) = 0,
we say that the equation (L) is homogeneous and denote it by (LH) x’ = A(f)x.
Otherwise (L) is called nonhomogeneous. Analogous terminology holds for (L-n),
where
xo AO)xe7P Se = 5 ap ax! a, =O. (LH-n)
90
Initial value problems 91
It is sufficient for most of our purposes to prove theorems for (L) and specialize
them to (L-n). A connection between the two equations is established by setting
Mie x, Xo = ss ty = Xe Then (1-n) has the form
x, + GE) pact ke An—1(t)xX2 + a,(t)x, = b(t),
and one may write it in the form x’ = A(t)x + b(t), where
Xen e ele BG) =s10F OSB)
and
0 lL earaclc 0
LOS ye
—a,(t) ... —a,(t)
If x = y(¢) is a solution of (L-n), the corresponding solution of the associated system
is therefore x = [¥-(1), V'(), ...,¥° PO).
Since each solution of (L) is the solution to some initial value problem for (L)
and vice versa, one can discuss the existence of solutions to (L) by showing that the
initial value problem
x’ = A(t)x + D(t), xX = Xo when t = 0, (IVP)
where fo in (a, w) and x9 are given, has solutions. Interestingly enough, (IVP) cannot
have more than one solution. To show this, it is convenient to introduce the notions
of rectangular norm and equivalent integral equation.
If x = [xq,..-,Xn]’ is an n-vector, we define the rectangular norm |-| by |x| =
|x,| +--+: + |x,|. This norm has a number of properties that follow directly from
the definition of absolute value:
i) |x| > 0, and |x| = 0 if and only if x = 0.
ii) [x + y| < |x| + lyl.
iii) [cx] = |c|- |x| for all numbers c.
iv) If @ is a continuous vector function, then
Conversely, one differentiates (I) to show that each of its solutions is a solution of
(IVP). In this sense, (I) is an equivalent integral equation for (IVP).
Theorem 4.1. Suppose A and b are continuous on a < t < w and let to in (a, w)
and Xo be given. Then (IVP) has no more than one solution.
Proof. Suppose that x = ¢(f) and x = (‘) are solutions to (IVP). Then, from (1),
t
Let to < t< c < w, where c is fixed. Since |A(s)| is a continuous real-valued func-
tion, it attains a maximum ||A|| on [fo, c]. When properties (iv) and (v) of the
rectangular norm are applied to Eq. (4.2a), the inequality
This inequality can be solved by the methods used in Sections 1.2 and 1.3. Multi-
plication by the integrating factor e—''4'!" gives rise to the inequality
d(u(t)e|'4""")
/dt < 0,
which implies that u(t)e—!'4''* is nonincreasing. Thus
wpe AaNe < u(t oe |!Allto = 0,
and u(t) = ie lo(s) — ¥(s)| dt = 0 for tp) < t <c. Inequality (4.2b) implies, in
turn, that |(¢) — ¥(t)| = Oforto < t < c. Similarly, one shows that |6(1) — ¥(1)| =
Ofora<a<tKX fo. Since a and ¢ are arbitrary, (1) = Y(t) fora <t<w. ||
EXERCISE 4.2
In Problems 1 and 2, identify the point x = (x1, x2) in the x1x2-plane with the 2-vector
x = [x1,x2].
1. How would the points a = (ai, a2) and b = (61, b2) have to be situated in the x1xe-
plane in order that |a + b| = [a] + |b]?
2. Given that x = (x1, x2), a = (1,0), and b = (—1, 0), describe the set of points x in
the x1x2-plane that satisfy |x — a] + |x — b] = 4.
3. Overestimate |Ax| by |A] - |x| if
4 0 1
C)A S12 1-E7 _ 0 x= [1,7
7]?.
0 310 a
4. Let x and y be continuous vector functions on the interval a < ¢ < b and let ||x||,
llyll, [x + y|| denote the maximum values of |x(2)|, |y@|, [x@ + y(d| on [a, 6]. Show
that |[x + y|| < |[x|| + llyl.
5. Let A and x be continuous ona < ¢t < band let ||A||, ||x||, ||Ax|| denote the maximum
values of |A(d)|, |x()|, and |A()x(d| on [a, 5]. Show that
a) ||AMx(|| < |All - x@I, b) [A(x < JA] - [IxlL,
c) |A(@x(| < |All > [xl d) ||Ax|| < |All - [Ixll-
6. Let A and B denote n X n matrices. Show that
a) |A + Bl < |A| + |B, b) |AB| < |A|- |B].
7. Let A and B denote continuous n X n matrix functions on the interval a < t < band
let ||A|| and ||B|| denote the maximum values of |A(f)| and |B()| on [a, 5]. Show that
a) ||A + Bll < ||Al| + ||Bll, b) ||ABl| < |All - ||Bll.
8. Solve the following inequalities for x in terms of f.
AD) oe! SPA. I) oe S Ane
(O) od << (Oye. d) x’ < a(t)x, a continuous.
94 The theory of linear differential equations 4.3
12. The blocks in Fig. 4.1 are not connected. Each block has a mass of kg. The spring
is linear, 10 m long when unstressed, and has stiffness coefficient 2 newtons/m. The fric-
tional force between each block and the table is proportional to velocity. The constant of
proportionality for the left-hand block is 1/2 and that for the right-hand block is 3/2.
Suppose the right-hand block is pushed 1/./2 meters to the left and is released with zero
initial velocity. How far will the right-hand block travel during the first second after the
blocks separate?
Win r
Figure 4.1
13. Prove each of the identities below by considering suitable initial value problems for the
equation x’ + x = 0.
a) sin (—/) = —sint. b) cos (—2) = cost.
c) sin (tf + a) = sinfcosa + cosf sina, a any constant.
d) cos (t + a) = costcosa # sint¢ sina, @ any constant.
e) sin (t + 27) = sint. f) cos (t + 27) = cost.
g) d(cos t)/dt = —sint. h) d(sin t)/dt = cost.
i) sin? ¢+ cos? t = 1.
One can show that (IVP) has a solution x = ¢(t) by constructing a solution to (I).
The construction is formally simple: one defines ¢0(t) = xo and
t
One tye (|
{Aol + bY} ds, KS 0. (4.3a)
Then, if
Jim es = $0 exists (4.3b)
and if t
lim
k—-+Loo
/: {A()ox(s) + b(s)} ds = / {A(s)9(s) + b(s)} ds, (4.3c)
it follows that x = 9(f) is the desired solution.
The existence of solutions 95
oO) = 1+ / 2s $(s) ds
0
bI-L4 ob:
and the corresponding integral equation is
be
We indicate the calculation of the successive approximations without further
comment: ;
0 0 O10 t
bolt) = Hi oi) = 9]a iLie 4Re i Ar
t
0 Oe nits 5 t ;
b2(t) = Hq: i Ae 4He Fi, . 2,
0 01 s _[t- ee
t
and
t — 19/3! +--+ + (-1)
tt 1/Qk — )
g21r—1(t) = ooo Koel,
1 — 27/2! + +++ + (-1)%0?"-7/Qk — 2)!
96 The theory of linear differential equations 4.3
Notice that ;
iene [ee‘|
kw cos t
The student knows from his work in Chapters 1, 2, and 3 that the first component
Y(t) = sin ¢ is the solution to the initial value problem as posed. ||
To verify that the convergence requirements (4.3b) and (4.3c) are satisfied when
A and b are continuous, we apply several standard results from advanced calculus
which we combine into one lemma without proof. The statement in the lemma that
> ¢=0 f, converges means that each component series converges.
Lemma 4.2. Let {f),| denote a sequence of vector-valued functions that are defined
and continuous on an interval a < t < c. Suppose that there exists a sequence of
numbers M;, such that |f,(t)| < M, for all t in [a,c] and all integers k > 0. If the
series > ~~09 M;, converges, then the series >\y—o f(t) converges for every t in [a, c].
If f(t) = doe=0 f(t), then f is continuous on [a, c] and
t t
d
an el +
ae!
t) =
=e
fey, = Far (—1)'t
kik .
if |t| <jile
and
d <1 < We 2k :
pe ta FaSle= 2A it ba] coe.
since the indicated series are geometric series with common ratios —t and —t2
respectively. Let us assume that |t| < r < 1, where r > 0 may be as near one as
desired, and let us define
(Aye
f.(4) = (e€ 1\rre*
;
The existence of solutions 97
By Lemma 4.2, the two series above may be integrated term by term over subintervals
of-lt| <7. Thus
o : peti
Since r may be taken as near one as desired, the series are in fact convergent for
lel <1. |
To apply Lemma 4.2 to the sequence {¢,} of successive approximations defined
by Eq. (4.3a), we introduce the difference function A, = $%41 — 9» for all integers
k>0. Then y41 = Xo + Sy 9Ay and limy ,4« v(t) exists if and only if the
series > ¥~9 A(t) converges.
Theorem 4.2. Assume that A and b are continuous on a < t < w and let to in
(a, w) and Xv be given. Then (IVP) has a solution.
Proof. Leta <a< to, t< c < wand denote the maxima of |A(t)| and |b()| on
[a, c] by ||A]|| and ||b|| respectively. It follows from Eq. (4.3a) that
If M =||A|| - ||xol| + ||b||, then Eq. (4.3d) and the properties of |:| imply that
|Ai(@|| < M|t — tol. (4.3f)
Similarly, Eq. (4.3e) implies that
: a
R+V
||A]|
iA aye
aly M
pllAli(c—ay
a aan) All 1).
98 The theory of linear differential equations 4.3
By Lemma 4.2, (fo Ax(t) converges and limy + $n(t)= o(t) exists, that is,
(4.3b) is established.
Now let & = Ag, — b for each integer kK> 0. Then -
/ lim i {eos ae y
{A(s)on(s) + b(s)} ds = if = (E-41(s) — Ex(s)){ ds. (4.3h)
tg N+
This establishes (4.3c). Since [a, c] was an arbitrary closed and bounded subinterval
of (a, w), (4.3b) and (4.3c) hold for every ¢ in (a, w) and the proof is complete. ||
EXERCISE 4.3
1. Compute the first three successive approximations for solutions to the following initial
value problems. Try to guess the limit function that the approximations will ultimately
approach.
ae — aux (O)y—s le where astasconstant:
b) x’ = f(x, x(0) = 1, where f is continuous. [Hint: Let F(t) = f§ f(s) ds.]
QO) eS e— il, e@) = © Gar = oe Il ax) = th
ee ar Ories XO) he ls Fe te ‘ =
2 3 7 k 4HE Ex + H eels ie
e
k2
b) iO) = Pale ra),
b= il
k2
c:
pil
c) HOPS ft} a=0, b=}:
k+1
sin?" t
d) f.(0) = Ped ’ a=0, b= a
eee
3. Let ao be any v-vector and define a; = (1/k)az_1 for k > 1. How can one tell that the
series >? 9a, converges?
4. Let Yo()= 0 and define ¥x(4) = fo x16) + 1] ds fork > 1. If dD) = v() —
Yx—1(4), show that >°%_1 (4) converges to a limit ¢(/ for all ¢ and compute fj 6(s) ds.
What initial value problem does ¢ satisfy at t = 0?
5. Consider the initial value problem
by making the change of variables x(t) = y(s), s = In¢f. Does your result contradict
Theorem 4.2?
The reader has learned to solve a special case of the scalar equation
x’ = a(t)x + b(t) (L-1)
in Section 1.3. In general, one computes the solution that satisfies the initial condition
x(to) = Xo by first forming the integrating factor exp (—fi a(s) ds]. When (L-1)
is multiplied by this factor, the result is
nit t t
x'(t) exp Ey a(s) a — exp Pedi a(s) as- a(t)x(t) = exp lel a(s) as- b(t).
' , ; (4.4)
100 The theory of linear differential equations 4.4
The left side of Eq. (4.4) is the derivative of the product x(t) exp (—f; a(s) ds].
Thus (4.4) can be written as
t
NE LeXO | a(s) asXo + exp || a(s) as|. / exp aL a(s) as|- b(u) du.
The reader may check that the integrating factor is a solution of the complementary
equation
x’ = alh)x (LH-1)
and that
3. No fundamental solution of (LH-1) can have both zero and nonzero values; thus
a solution whose value equals zero at one point is the identically zero solution.
Proof. Let x = Y(t) be a solution of (LH-1) and let fo in (a, w) be arbitrary. Then
¥'(t) = a(@)Y(.) and, solving with an integrating factor, one finds that y(t) =
W(t) exp Le a(s) ds]. Thus ¥(¢o) = 0 implies y(t) = 0 for every tin (a, w). ||
4. Let x = y(t) be a fundamental solution of (LH-1) and let x = ¢(t) be any other
solution. Then there is a constant c such that $(t) = ¥(f)c. Conversely, (t)c is a
solution of (LH-1) for every value of the constant c.
The first order scalar equation 101
Proof. Since ¥(t) ¥ 0 for a < t < w, the quotient (t)/y(2) is differentiable. In
fact,
(O4O) = WOO — ¢OWO)V720
= Wao — oa(NV))/¥7(D = 0.
Thus $(t)/y(t) = constant. Verification of the converse assertion is left to the
reader. ||
If x = ¥(f) is a fundamental solution of (LH-1), the expression x = ¥(t)c, where
c denotes an arbitrary constant, is called a general solution of the equation since it
is a solution for every value of c and every solution of (LH-1) can be obtained from
it by giving c an appropriate numerical value.
Exercise: Let x = $(t) and x = y(t) satisfy the initial value problems x’ = (cos f£)x,
x = 1 when ¢ = O and x’ = (cos ft)x, x = 1 when t = 7/4, respectively. Find a
constant c such that ¢(t) = Y(t)c.
5. The difference of two solutions for (L-1) is itself a solution for (LH-1).
Proof ;
Example 1. Find a general solution for the equation x’ = 2tx + e”. A fundamental
solution to the complementary equation is exp dh 2s ds = eae A particular solu-
tion is ¢,(t) = e” fi (e)(e”) ds = te”. Thus a general solution is x = ce” +
fe" |||
Example 2. Let 5 denote the sawtooth function of Fig. 4.2. The equation x’ =
—x + b(t) can have at most one solution of period 7. Suppose, on the contrary,
that there are two solutions x = ¢(t) and x = y(t) having period T. Then x =
¢(t) — Y(t) is a solution of x’ = —x. Thus there is a constant c such that ¢(t) —
y(t) = ec. If c #0, then ¢(t) — ¥(t) ~ +o as t— —o. This means that at
least one of the functions is not periodic (a continuous periodic function is bounded).
Thus c = O and ¢(t) = Y(t). This result can also be established by solving the equa-
tion if one cares to perform the variation-of-parameters algorithm on the sawtooth
function. A better way of solving x’ = —x + b(t) is given in the next chapter. ||
b(t)
Figure 4.2
EXERCISE 4.4
1. Solve the following injtial value problems. If you can find particular solutions by
inspection, do so.
a) ee es OM Wheni aa): b) x’ = 2tx -- t, x = 0 when ¢ = 0:
c) x =x+e,x=1whent=0. d) x = (@/d + 38, x = 0 when ¢ = 1.
eC) x =" ek cosit, x — 1) when — 0!
f) x’ = —(tanax + sint, x = 1 when ¢ = O.
g) x’ = (sec¢)x + sect + tant, x = 0 when ¢ = 7/4.
h) x’ = 21x + 4,x =1 whent=0. i) x’ = x + cost, x = 1 when ¢ = O.
j) x’ = x + b(X), x = 0 when ¢ = 0, where b is the sawtooth function of Example 2
and —(7/4) <t < (7/4).
2. Show that the equation x’ = (sin #)x has a nontrivial periodic solution.
3. Show that the equation x’ = (sin? yx has no nontrivial periodic solution.
4. Show that the equation x’ = x + sin thas a periodic solution. Is there more than one?
5. Does the equation x’ = —x + sin? t have a periodic solution?
-
6. Suppose 6 is a real-valued, continuous function such that b(f) > c ¥ 0 as t > Soe
Prove that every solution of x’ = —x + b(A) approaches c as t— +o. [Hint: Use
L’Hospital’s rule on a general solution.]
7. Let i denote the complex unit (i? = —1). What conditions must a continuous function
b satisfy if x’ = ix + (A is to have a solution of period 27?
8. Suppose 6 is a continuous function such that b(f) > 0 as t— +0. Prove that every
solution of x’ = —x + b(1) approaches zero as t—> +. [Hint: There are two cases.]
9. Try to write down a general solution for the equation in Example 2 by using variation
of parameters.
10. Discuss the behavior of the various solutions for x’ = —x + e-tsintast— +o.
11. Discuss the behavior of the various solutions for x’ = —x + sintast— +o.
12. We showed for illustration that the differential equation in Example 2 could have at
most one periodic solution. Could it have a periodic solution at all?
13. Suppose a current-limiting resistor is connected in series with an inductance ofL henries
across a source of voltage Eo sinwt. The resistance of the resistor decreases as it heats,
and in this particular application, the time-dependence of the heating is assumed to be such
that the resistance is given by R(2 + e~‘). What is the largest absolute value that the
current can reach at any time during operation of the circuit? It is not necessary to solve
the differential equation of the circuit in order to answer the question. Just examine it.
Here we concern ourselves with the generalization of the remarks in Section 4.4,
items (1) through (3), to equations (LH) x’ = A(t)x, where A is a continuous” X n
matrix function on the intervala < t < w.
The student should, throughout the discussion, keep clearly in mind the reasons
for the discussion: namely, to represent the solutions of systems such as
matrix function ® is called a matrix solution for (LH) because ®’(t) = A(f)®(1).
Suppose that the only constant vector ¢ for which ®(7)e =0 ona <t<w is the
constant vector c = 0. Then @ is called a fundamental matrix solution, and one says
that its columns 1, ..., » form a fundamental solution set for (LH). The quantity
det B(1), a < t < w, is called the Wronskian of the solutions $1,...,¢n. Now let
$1,--+5%m denote any vector- or scalar-valued functions which are defined on
a <t<w. They are called linearly dependent over (a,w) if there are constants
C1, -+ +5 Cm—not all zero—such that c1@1(t) + +++ + CmOm(t) = 0 for every ¢ in
(a, w). Otherwise, they are called /inearly independent over (a, w). In these terms,
one can say that solutions $;,...,@, of (LH) form a fundamental solution set if
and only if they are linearly independent over (a, w).
58 |// 2 W iiiiss
ylo= Tt 0 Nyt
Z 1 —2 O]]z
and
(a, w), and it follows that & is not a fundamental matrix solution. We have therefore
proved the following theorem.
Theorem 4.3. The matrix solution ® is a fundamental matrix solution for
x’ = A(t)x if and only if the Wronskian det ®(t) does not equal zero at any point
of (a, w).
Example 2. The relations x = sin e’ and x = cos e! denote solutions of x! — x’ +
e”'x = 0. If the differential equation is written as a system
sino e t cos e t
P(t) = : 2
© ibcose’ —e’sin «|
Since the Wronskian det 6(t) = —e’ # 0, © is a fundamental matrix solution. ||
When it is necessary to check that a Wronskian det (t) does not equal zero on
an interval a < t < w, the process may be laborious since the elements of ® are
functions. The matrix @(t) in Example 1 is an illustration. It is enough, however, to
check the Wronskian at one convenient point of (a, w) since det ®(f) is either iden-
tically zero on (a, w) or is never zero on (a, w). This is a corollary of the following
theorem.
Theorem 4.4 (Abel-Liouville-Jacobi Formula). If ® is a matrix solution of
x’ = A(t)x, then ,
det &(t) = det (ty) exp |/ TrA(s) as, (4.5b)
where to and t are arbitrary points of (a,w) and TrA(s) is the sum of the diagonal
elements of A(s).
Proof. The proof is similar for all > 2. To keep the notation simple, however, we
consider the case n = 2 only. Let
so that
oo =f] mi oo -[
x(t) u(t)
Loe bes ae
106 The theory of linear differential equations 4.5
and write
a(t) b(t)
A= Be a ;
Since @; and @» are solutions of (LH),
x = ax + by, u’ |= au+
bo,
y =cx+ dy, v’ = cu + dv.
Then
d x ul se)
=oF det ® == de det k Al+ det
de * A ,
7 ax + by au+ bv 58 u
= det y D |+ et]. 3 cu + dv
atl Cltoes, a
ax au x Uu
The equation £ et &(t) = TrA(t): &(t) can be solved with the integrating factor
Corollary 4.4. Either the Wronskian det ®(t) is identically zero or it is not zero
for any t in the intervala < t < w.
To check that det &(1) ~ 0 for —w < t < +o by evaluating the determinant as
a function of ¢ is unnecessary labor. By Corollary 4.4 it is enough to check the
determinant when ¢ = 0 to find that det (0) = 5. ||
If the system (LH) x’ =A (4)x arises from an nth order equation
x™ = —a,()x—Y —--- — a,(t)x, (LH-n)
by means of the substitution xj = x) x5 =x, se xn ethene Abels
formula (4.5b) is t
since 0 1 ie 0
A ss % - .
© 0 0 itaek|
—Aa,(t) =O, 1t) 6.00 —ay,(t)
ye ee. ae)
The functions yj, . . . , W will be linearly independent and will be called a fundamental
solution set for (LH-n) over a < t < w, if and only if @ is a fundamental matrix
solution of the system. As a matter of practice, one need not write a scalar equa-
tion as a system to check that a set of solution functions is fundamental: the
Wronskian may be written down directly and evaluated at a convenient point.
Example 4. Two solutions of the equation
]
xt ” +t GA
Ape x ae 0, if > 0).
The results above for x’ = A(#)x are analogous to results for the scalar equa-
tion x’ = a(t)x, as indicated in Table 4.1.
Table 4.1
Definition of ce= 0
10 lies c = 0
= 0 BP impAee implies
(Ac(Ac = 0 impli
fundamental solution
Abel’s formula can be used to construct a fundamental solution set for the
second order equation
x” + a,(t)x’ + a2(t)x = 0 (LH-2)
provided one nonzero solution x = y(t) is known. Suppose that y,(t) ¥ 0 for
ty < t < ty. By reasoning analogous to that in the proof of Theorem 4.5, one can
be assured that there is a second solution x = W.(t) defined for ty < t < tg which
is linearly independent of y;. By Abel’s formula,
70) ¥2(0) /
t
ya) — BO
Vi)
c
Y2(t) = 7D exp BP a,(s) as|. (4.5e)
Equation (4.5e) can be solved for y(t) by the method of integrating factors, although
it might be necessary to express the solution in terms of an indicated definite integral.
Example 5. The function x = (1 + on is the solution of the equation
pels 4. x! 2 i
l = oe ATS
1s ore °
which satisfies the initial conditions x = 1 and x’ = 0 when ¢ = O. The solution
x = y(t) which satisfies the initial conditions y(0) = 0, y’/(0) = 1 will satisfy the
equation
t
EXERCISE 4.5
In Problems 1 through 20, find a fundamental set of solutions for the given equation or
system of equations. In Problems 11 through 20, one solution is given.
1. x’ + 4x’ + 3x = 0. Dill oe Phree tee EO)
Li»
y 0 2ILy
EY-L3 5}
; ae ul’ 0 1 Ol} u
y a y w —1 3 -1]|w
6 4 1
pt x = 0, a2 2-0
3 1 1
12.x"
tt
+ pallet
ox + se,
gx —
Ota =->?
2 b> 8
yx” tv
1 1
e (Et 4-2)x-0,
ih ate Gum
2 Vies eT >
Vt At an, =?
4 Wale 1-t
ie ek — x= . = ——
d+ 7) — 2) 1+t
2, 1 Sahl
19. x ” Leis
ieee) [Link]
Gian = Ox = sin (4)
—j>o f¢ > 0.
21. Let g be a continuous function defined for -~» <t< +% and suppose that the
equation x” + q(x = 0 has a solution x = ¥(¢) such that limy,4. |W(D| + WO] = 0.
110 The theory of linear differential equations 4.6
How can one be sure that no solution other than a scalar multiple of y behaves in the same
way as t— +0?
&(1) = ee 4
is a solution matrix for
; e2! 5te2!
eel
a) hh 9 2t 15 2t 4 2t 2t
The matrix X(f) = E ont | = |3 a i : yee |is also a
solution matrix. ||
Corollary 4.6a. If ¢1,...,4n are solutions of (LH) and if c1,...,¢n are any
constants, then
o = cid + * 4-2 Gig,
is also a solution.
Proof. Take
C1 0 0
ee ee :
Cy 0 0
in Theorem 4.6. ||
-
Example 2. The vectors $,(¢) = [e”,0]” and (1) = [5te?*, e?4]? are solution
vectors for
7] = ON =) x].
Veen OFe2\inRy
The vector
6G) = Al2t |4.3 ee2t 2t
of ies Bae 2t
pull ih =Sen ys
V1 eases Vn
/ y
yo) we yord
The function y3(t) = ¢°(t* + 32? + 3)/(1 4 2°) is also a solution of the equation
since ¥3(t) = Yo(t) — yi). ||
If a fundamental set of solutions for (LH) or (LH-n) is known, then any other
solution can be constructed from this fundamental set by forming linear combinations.
The next theorem directly generalizes item (4) of Section 4.4 to systems.
a constant matrix. ||
In the applications, one is frequently interested in finding a vector solution which
satisfies some initial value or boundary value problem. Any solution can be found
if one knows a fundamental solution set.
Corollary 4.7a. If ¢ is any solution of (LH) and © is a fundamental matrix solution,
there is a constant vector ¢ such that o(t) = ®(ft)e.
Proof. The proof is virtually identical to the proof above.
es
5 (® oO) I= he—P 0)Aue ~1¢4) 40FO
F OF "80 + ATO
—& "(DAG(D) + (NAG(Y = 0.
Thus ®~ '¢(t) = constant vector. ||
Example 5. Find that solution @ of the system
; t l /
1 t
ES
a ee) ea een or
which satisfies the initial condition (0) = [7, 3]”, given that [1, ¢]” and [z, 1] are
linearly independent solutions.
+0-o[]] +f
By Corollary 4.7a, the required solution is of the form
sl} [ols f
-
eV wy ame el
is a vector solution and
Vey,
, /
OD = vi ire Wh
yp?
=
) ia yp sce
1)
Example 6. The functions y(t) = sin (e’) and W2(t) = cos (e’) form a fundamental
solution set for the equation x’’ — x’ + e*x = 0. The solution x = y(t) that
satisfies the initial conditions y(In 7/2) = 1, ¥’/(Inw/2) = 1 will be of the form
Notice that Eqs. (4.6) comprise a system of algebraic equations for c; and Co,
and that the determinant of the matrix of coefficients is just the Wronskian of ~;
and Wo. ||
EXERCISE 4.6
In Problems 1 through 20 there is given an initial condition. Find that solution of the same
numbered equation in Exercise 4.5 which satisfies the initial condition. Use your previous
solutions as starting points.
ih, 3) = ty AO: = 1h 2. x(0)'= 2, x'(0) =:1.
Se e(O) See) 15x O)-=1. Ay x(0). =1.0,. x’ (0) =s1,-—"'(0) = 0,
4.6
114 The theory of linear differential equations
a) &(/) = [. a
1
el, Wi) = i
0
ive1 | elt
et! “Sie 0 e#t 5(1 + de** 3e*
()) eX) =|) © ert MO |, %4e) =| © en @) |e
0 0 e8t 0 0) 4e3¢
22. Let denote ann X n matrix function, differentiable on the intervala < t < w. Dif-
ferentiate the equation ®—!(A)@(1) = J to prove that fa = —6—!’6-1,
23. The switch in the circuit of Fig. 4.3 is thrown at time ¢ = 0. The currents x and y
satisfy the initial value problem
, LR, MR> MR, LRe
agesee pr 2 * IF t
[eal ee eee ean
L Re
Figure 4.3
24. What value must the positive constant w have in order that the boundary value problem
x’ + w?x = 0, x(0) = 0, x(1) = 0 have a nontrivial solution?
25. What conditions must the real constants a and b satisfy in order that the boundary value
problem x’’ + ax’ + bx = 0, x(0) = 0, x(+) = 0 have a nontrivial solution?
Solutions of the nonhomogeneous equation 115
®'(t) = A(t)®(2).
If just one solution @, of
x’ = A(t)x + b(t) (L)
is known, then every solution can be expressed in terms of ¢,(f) and ®(). An analo-
gous assertion holds for the nth order scalar equation. As an illustration of the
technique consider an example.
Example 1. Let us find the unique solution x = y(t) of the equation
Si A ae = , (4.7a)
form a fundamental solution set for the complementary equation x’’ + 1~?x = 0.
A particular solution y, of Eq. (4.7a) can be found by inspection: y,(t) = ¢. Notice
that y, is not the solution to the initial value problem which we want to solve since
Yp(1) = Land yi(1) = 1.
If we define u(t) = Y(t) — ¥,(t), however, it is easy to check that u’’(t) +
t- u(t) = 0. That is, uw is a solution of the complementary equation. Thus, by
Corollary 4.7b, there are constants c; and cg such that u(t) = cyi(t) + coo(t).
This implies that Y(t) = ciWi(t) + coW2(t) + t. To solve the problem as posed,
then, we need only determine the numerical values of c; and cy by solving the
equations
1 = WU) = civi(l) + Cove) + I,
0 = Wd) = ci) + cova(l) + 1,
and conclude that
Then, by Theorem 4.8, there is a vector ¢ such that ¢(/) = S(A)e + ¢,(t). Equation
(4.7c) is the first row in this matrix equation. ||
Theorem 4.8 and its corollary generalize item (5) of Section 4.4 to systems of
equations and higher order scalar equations.
We shall refer to the formulas (4.7b) and (4.7c) as general solutions for (L) and
(L-n) respectively. A solution method for these equations may then be summarized
as follows: To find the general solution of a nonhomogeneous equation, one first
solves the corresponding homogeneous equation and then finds any particular
solution of the nonhomogeneous equation.
The method of annihilation (Section 3.9) combines these two steps for equations
of the form
x™ 4 ayx™Y 4 +++ + ay yx! + ax = B(2),
Solutions of the nonhomogeneous equation 117
where the a,’s are constants and the function b has one of the forms (3.9e). For
other types of equations, the method of annihilation is not so useful. There is, how-
ever, a method for finding a particular solution of a nonhomogeneous equation which
can be applied whenever the coefficients of the differential equation are merely con-
tinuous. It is a generalization of the variation-of-parameters technique given in item
(6) of Section 4.4.
and
Finally,
= sin ¢ . cos? sin t
p(t) I x0 | 4” *(9)b(s) ds = bee —sin iFe t) — ‘|
The reader might have noticed that ¢,(t) = [1, 0]’ is also a particular solution which
could have been found by inspection. ||
is a particular solution of (L-n). In fact, one need only solve the equations
|vit)... nl) | |0 |
: : es alan (4.7f)
ee ir SENG) b(t)
for v(t), ...,0,(2) and integrate to find vx(t), .. . , Up(t).
Proof. The values v,(t),...,U,(t) are the first through nth components of
ie @—'(s)b(s)ds in formula (4.7d). To see that we can compute them from
Eq. (4.7f), set
t
COR eHOl = / &'(s)b(s) ds.
Then i
[vPi@),..., v4)" = & (b(t) and &(A)[v,(t),...,v,(O)° = b(2).
The last equation is (4.7f). ||
sint cost|[v;(t) 0
be fe Ssin |Ew zs Be | (4.7g)
Solutions of the nonhomogeneous equation 119
and
x = Csint+ Dcost+ 1 if eS @.
We “paste” the two functions together to form one continuous function by requiring
that B — 1 = D +1, that is, by requiring that B = D+ 2. Thus
Be ston (Di. 2)(cos2) —— 1, L-—20s
* = lCsint
+ D(cos 1) + 1, 7 0,
is a solution to the problem as posed for any choice of the constants A, C, and D.
Notice that the unique solution of the initial value problem
x! x = t/|t\, ve O ema! when t= —1/2 (4.7j)
is
x = —sint-+ cost — l, —o <it< 0. (4.7k)
satisfies the initial conditions, is continuous for —# < t < +o, and satisfies the
differential equation except at t = 0. Yet none of these functions qualifies asa
different solution because none satisfies the differential equation at t= 0. Ina
more general existence theory, (4.71) is admissible as the solution to the initial value
problem (4.7j) if C is chosen so that x’ is continuous at ¢ = 0 (see Problem 11 in
Exercise 4.7). ||
Although the general procedure is rather complicated, the method of annihilation
can be adapted to yield particular solutions of systems x’ = Ax + b(t), where A is
a constant and b(¢) has special forms. We shall not discuss the general situation here,
but it is convenient to know that the system x’ = Ax + ce” has a particular solu-
tion of the form x = pe if \ is not an eigenvalue of A. The vector p may be de-
termined by substitution (which amounts to solving the algebraic equation
(A — d\J)p = —c for p).
If Wy; ..., veare solutions of x’ = Ax biG), .2.,x = Ax) ete
spectively, then Wy= Y; + °°: + Wm is a solution of x’ = Ax + b(t), where b =
b, +-::+b,. This fact is called the Superposition Principle. Using it, one can,
for example, find a particular solution of the equation x’ = Ax + csin wt if iw is
not an eigenvalue of A. This is done by finding particular solutions of the equations
—itwt
x’ = Ax + i cen, x= AX= Dce
EXERCISE 4.7
In Problems 1 through 10 there is given an initial value problem and (in most cases) a funda-
mental set of solutions to the corresponding homogeneous equation. Solve the initial value
problems by variation of parameters.
il, se¥ tee = SiN, KO) = il, XO) = ©.
Clete} Bolla)
CU) Dol-Lih eo -[3} #0 -[8]
Fundamental solution sets for homogeneous equations with constant coefficients 121
e sint
t t 1
gilt) = |0]> got) =|F|> o3(0=] 4
0 0 t
ae E00 Ie t x(1) —2
10. = alot P-Olly| +1 ly@l=| 4b
Zz Peeper |e t z(1) 3
t 0 0
gift) =] 1]> got) =|t]}> g3(t)=] 0]-
0 1 t
11. Show that there is a unique, continuously differentiable function x = (A) defined for
—«°o <t< +o, which satisfies the differential equation x’’ + x = t/|t| for t ¥ 0 and
the initial conditions Y(—7/2) = 0, ¥/(—7/2) = 1.
12. Suppose the metal block of Fig. 1.3 is initially at rest with the spring (unit stiffness
coefficient) uncompressed. It is pushed to the left by a force of magnitude b(f) = ¢,
0<t < 7/2, b(t) = 0, t > 7/2. Describe the motion of the block if there is no friction.
The theory developed in Sections 4.5 and 4.6 guarantees that homogeneous equations
x’ = A(t)x (LH)
or
aL)x ea ee ge ax 4 a,(ty = 0 (LH-n)
have fundamental solution sets and that any solution can be expressed uniquely as a
linear combination of fundamental solutions. Here we wish to connect these results
with the solution methods derived in Sections 3.7 and 3.8 for constant-coefficient
equations
x’ = Ax (LHC)
122 The theory of linear differential equations 4.8
and
XO gy DE oe te gy a, nO, (LHC-n)
Consider the nth order equation first and suppose that its characteristic equation
has the factored form
Oka NY) ec Oe Se Cee)
mter-tmt+-:--+m =n. We saw in Section 3.8 [Eqs. (3.8e) and (3.8f)]
that
x Set Keates ee ate ee (4.8a)
are solutions of (LHC) for r = 1,...,k. Thus Eqs. (4.8a) yield m, solutions cor-
responding to the eigenvalue \;, m2 solutions corresponding to the eigenvalue
Ao, ..-, and m; solutions corresponding to the eigenvalue )x,.
Let us show that the 7 solutions defined by Eqs. (4.8a) are linearly independent
by proving that
k
Now
This is only possible if ¥1, = Yor = ++: = Ym,z = 0. Knowing this, we replace
Eq. (4.8b) by
ee 1
multiply by exp [—)x_1/], and repeat the argument. After doing this k times, we
conclude that all the Y’s are zero. Thus we have proved the following theorem.
Fundamental solution sets for homogeneous equations with constant coefficients 123
Theorem 4.11. The n functions defined by Eqs. (4.8a) JOT \aetien eeform.a
fundamental solution set for (LHC-n).
Note that Theorem 4.11 and Corollary 4.7b together constitute a different proof
of Theorem 3.10.
Next, consider the equation x’ = Ax, where A is ann X n matrix. If \ and p
form an eigenvalue-eigenvector pair for the matrix A, then x = pe is a solution of
the equation since
Ax — x' = Ape — dpe = (A — N)pe™ = 0.
We shall call such a solution an eigensolution. Suppose now that A has n linearly
independent eigenvectors pi, ..., Pp» with corresponding eigenvalues \,..., Xn.
Then
S(O) 1 [pies a. Hapa 2]
is a solution matrix and, since det (0) = det [p,,..., pn] ~ 0, it is in fact a
fundamental solution matrix.
for some constant vector c. Consequently, &(t) = e4’ is a solution matrix for the
system. Since ®(0) = J, the identity matrix, and det J = 1, it follows that ® is a
fundamental matrix solution. Its columns, therefore, form a fundamental solution
set. We wish to examine these columns and thereby discover the forms of these
fundamental solutions.
Recall, from Eq. (3.7d), that &(t) = e4’ can be expressed in the form (1) =
Pe®' if P-1AP = B. Let us assume that P has been chosen, as in Theorem 3.7, so
that B is a Jordan matrix
Bea diag (Ba, aac Das ea
(Po4it + Dae
0?
(peasTI + Poot + pes) eM Ge)
: fi-} Sc
Po+1 @;— 1! erent hae Po+r;-10 2 Po+v; Cale
and the n solution vectors defined by (4.8f) for 7 = 1,...,m thus comprise the
fundamental solution set which we wished to find.
Xx
Fundamental solution sets for homogeneous equations with constant coefficients 125
which was solved in Example 3 of Section 3.7. We found there that the system has a
general solution of the form
x Ay By Cy
y| =|}—By, le +] 0 |te +] 0 |e (4.8h)
ta B, 0 Cy
where A;, By, and C, are arbitrary constants. Let us rearrange Eq. (4.8h) and write
it in the form
|Oi [1, 0, OF |
Uy [0, =k i and |
UE ihe 0, Wie
The matrix P = [pj, po, p3] has the property that P~1AP is a Jordan matrix
Zee 0
OFF240
O03
EXERCISE 4.8
In Problems 1 through 5, use the answers for Exercise 3.12 to solve the differential equation
a
for the indicated values of a, b, c and d.
PT-[2 Se
1@a@=3,b5=1,¢=0, d= 2. 2G = Ae ba 3 0 = 2,40 =),
3. a= 6,b=2, c= 5, d= 3. 4-€a@= 0b —1,¢c=—1,7d=0.
Sea =2,6=3 ,¢= 2.
= —3,d
The theory of linear differential equations 4.8
126
i 24 6 9\|x
= |—42 —9 —18]| y
A) Vas AL |||
vol-tL2 ae
ETL ab
S-Ee)
rae falc
4 -
oo T-fL-IB xa
Z
ee
)
Ol
@
es
Alliiz
w | it 4 1 O|| w
x -1 1 01 3%
Bae On One lay
Zz 0 0-1 1 Z
17. The equation x’”” — 3x’’ + 3x’ — x = 0, has as a fundamental solution set the func-
tions e’, re’, t2e’. Construct the corresponding matrix solution of the associated system.
CHAPTER 5
If \ = a + i@ is any complex number, we shall call a the real part of \ and B the
imaginary part of \. We denote them by a = Red and 8 = Im. Now let f be
a scalar- or vector-valued function such that fe f(t) dt exists for allt > 0. We
shall call a function which satisfies this condition positively integrable. If, for a
positively integrable function f, the improper integral
+0 fo foo
fe dt = : Wiea COS BEAL =o) 4 f(tje—*' sin Btdt — (5.1a)
converges, we shall call the function £f of \ which it defines the Laplace transform
of the function f.
Example 1. Let c be any complex number and write E,(t) = e°’. Then
+o ,
£EA(A) = /f e'e—™dt = wees
= =)
This fact, in conjunction with the result of Example 1, allows us, for example, to
compute the Laplace transforms of the sine and cosine functions.
127
5.1
128 Solving linear equations with Laplace transforms
B —twt ae dwt
Si)
i
= ]ae"
dwt
— 58 and C(t) = 3e° +
i 1 1 w
Oe ‘lS (a+ iw) X= tasl- (X — a)? + w?
Similarly
Neve
ee) = 3[>a - ee | Se !
Frequently, for a given positively integrable function f, one wishes to deduce
that the integral (5.1a) converges, but he wants to do this without actually evaluating
the integral. This end can be achieved by demonstrating the existence of constants
M, n, and fo such that |f()| < Me™ for tp) < t < +o. In this case,
[fe™| = e—“[cos? Bt + sin? Bt]/?| f()| < Me~*™*", X= at if.
Since
+0
Me dt. a >a,
0
converges, it follows from the comparison test that
+00
f(be— dt
0
In this chapter, we shall use Laplace transforms to solve the differential equations
x’ = Ax + b(t) (5.1b)
and
De igh Vays Bg a! Avge b(t). (5.1c)
We assume, throughout the chapter, that A and a,,...,a, are constants and that
b and 5b are piecewise continuous, that is, have at most a finite number of discon-
tinuities in any bounded interval and have finite left- and right-hand limits at any
point of discontinuity. The existence theory of Chapter 4 applies to Eq. (5.1b) and
Eq. (5.1c) with a slight modification when b and b are merely piecewise continuous.
We admit a continuous function as a solution if it satisfies the differential equations
everywhere except at the discontinuities of b and b. In the case of the nth order
equation (5.1c), we also require that each solution have n — 1 continuous derivatives.
In order to use Laplace transforms to solve linear differential equations, we must
know that their solutions, in fact, have Laplace transforms.
Theorem 5.1. Assume that b is positively integrable and of exponential order at
infinity. Then every solution of
x’ = Ax + b(t)
== sale lpr
Picletaije 6ers0F al
Since every nth order equation (5.1c) can be written as a system (see Section 4.1),
we have an immediate corollary to this theorem.
Corollary 5.1. Assume that b is positively integrable and of exponential order at
infinity. Then every solution of
In view of Theorem 5.1 and its corollary, we may now turn to the problem of
computing the Laplace transforms for solutions of linear equations. The fundamental
tool for doing this is the following theorem.
Theorem 5.2. Let ¢,¢',...,¢”» be continuous and of exponential order at
infinity and let ¢” be piecewise continuous. Then, for 1 <r <n, £o” exists and
is given by
SHO) SN LOO) SNe ak aid Ne,
where
Proof. The proof is by mathematical induction. Fix an integer value k for r, say
r = k, where 1 < k <n — 1, and define y(t) = ¢ (4). By hypothesis, there are
constants M and 7 such that |y(t)| < Me™ for0 < t < +o. Assume that Red > 7.
Then
i;We dt = We +» | we at (5.1d)
0 0
We note that |y(r)e—*”| < Me~“®* *—* and that £Y(A) exists for Re \ > 7. Thus,
letting 7 — +o in Eq. (5.1d), we have
ete (5.1h)
to obtain
72x — Ax(0)' = x10) 28x 10:
Definition of the transform 131
It follows that
a (0)
Wena (0)
(x2 + w?)
It turns out that the equality of these Laplace transforms implies the equality of
the untransformed functions. Thus
is a general solution of Eq. (5.1h). This general solution is more convenient for solving
initial value problems at t = 0 than the general solution x’ = c, cos wt + cg sin wt
(with c; and cg arbitrary) since x(0) and x’(0) appear explicitly in it. ||
Applying Theorem 5.2 to Eq. (5.1b), we obtain
ALx — x(0) = ALx + Lb(A) or (A — AI)£x = —x(0) — Lb).
(fier een
gives
(2 )-FO)-(t e+e)
5.1
132 Solving linear equations with-Laplace transforms
Since
Pu Ge?
ctl NaaieadSE:
1-vr -l
Ve co thie le lane (lea “pes In): |
Ly (2 Sn De) re ee
Applying Theorem 5.2 to the nth order equation (5.1c), we find that
ox = x(O)\?
XOM ++ C'O SO) Sr
(x'(0)
+ 3x(0))r (0) x'(0)+ 3x(0
+ 3x'(0) 3x(0))| 1 Gan)
A? + 3A*7 + 3A4+ 1
EXERCISE 5.1
1. Compute the Laplace transform of a general solution for each of the following differential
equations.
a) x” + 4x’ + 3x = 0. b) x + 2x 4+x = 0.
c) x!” + 5x! + Tx! + 3x = 0. GQ) 207 i Be aa X eet Os
@) x tox! ee on) fy x09) + 2x!" + 3x" + 2x’ + 2x = 0.
Solving homogeneous equations 133
ale
2. Compute the Laplace transform for each of the following functions.
a) t b) sinh¢ c) cosh ¢
d) e‘ sin 3¢ e) e?’ sint f) e?' cos 3t
g) e°' cos 2t h) e“ sinh ¢ i) e sinh (if)
j) e?¢ cosh t k) e cosh 2r 1) sin? ¢
m) e*' sin? ¢ n) te?
a ea
DfO={y ig aap ae ae
‘yes ere)
if = ae&.
nso b | a>b>0.
0 if |f—al>ob
qg fO=n+1, feet Sneed. 0.
3. Show that the function e” is not of exponential order at infinity.
4. Suppose that f is piecewise continuous and of exponential order at infinity. Show that
Lf() — 0 as |A| — +0. [Hint: Use the triangle inequality for integrals.]
5. Suppose that £f exists and define g(t) = f(at), a > 0. Show that
Le) =a LG").
6. Suppose that f is of exponential order at infinity and that {> t~!f(d dt exists. Define
g(t) = t~! f(t). Show that Lg(A) exists for real \ and
+00
x” +-3x" + 3x/+x=0.
The partial fraction decomposition theorem guarantees that there are constants
A, B, C such that
x(0)r2 + (x0) + 3x(0))A + (3x(0) + 3x’) + x’(0))
a+ DF
A B om
= Oca > Grp) a OMeaans
Taking a common denominator on the right and equating the numerators, we find
that
x(O)\? + (x'(0) + 3x(0))\ + (3x) + 3x/(0) + x’(0))
= AX? + QA44+ B+ (A+ B4+C).
In order that this equation be an identity in \, it is necessary that
x(0) = A, —x')4 3x0) = 24 + BB, 3x0) 3x0) x70) = Ae ec
that is, it is necessary that
A=x(0), B=x0)+x(0), C= (x(0)+4+ 2x0) + x’()).
With this choice of A, B, C, we have
x(0) x(0) +: x10) xO) 2x OPE Oy
SX + (5.2b)
Atel Chee Oss 12
To find x, we shall first identify the terms on the right of Eq. (5.2b) as the Laplace
transforms of known functions. This is accomplished with the aid of the following
theorem.
Theorem 5.3. Let k = | be an integer and let f be positively integrable and of
exponential order at infinity. Then g(t) = t*f yhas a Laplace transform &g given by
The answer is that Theorem 3.10 does not explicitly specify that cy = x(0), co =
x(0) + x’(0), and cz = (x(0) + 2x’) + x’(0))/2 as Eq. (5.2c) does. To deduce
this from Eq. (5.2d), one must differentiate the equation twice to find x’ and x”,
set ¢ = 0, and solve the resulting algebraic equations for c1, C2, Cg in terms of x(0),
xO) ex(0),
The labor-saving advantage of the Laplace transform method of solving initial
value problems for homogeneous equations is the elimination of the need for com-
puting derivatives of the solution at zero. To see this, consider the equation
Fee ei aoe eg ee le 0: (LHC-n)
We saw in Section 5.1, that the Laplace transform £x of its general solution satisfies
One evaluates the 7;,’s in terms of cy,...,¢n- It then follows immediately from
Theorem 5.3 and Lerch’s theorem that
k — Ay
ae (Vir = Yort aiaeanale ae Vege @ Ye ‘
r=1
This equation has precisely the same form as the general solution (3.8d). The dif-
ference is that here the Y;;’s are evaluated in the process of making the partial fraction
decomposition rather than by substitution, as was the case in Section 3.8.
136 Solving linear equations with Laplace transforms SZ
PO) = Oe) ee Oe
is the characteristic polynomial for A. Making the partial fraction decomposition
EXERCISE 5.2
» Eo pa a! 2 al n2
3. Combine the initial conditions listed as (a) through (j) below with the corresponding
differential equations in Problem 1 of Exercise 5.1 and solve the resulting initial value
problems.
a) x(0) = 1, x’) =. b) x)= 25557 ©), =" 1;
c) x(0) = 1, x’) = 1, x”) = 1. d) xO) = 0,7 x’) = 1, x") = 0.
Laplace transforms for some special functions 137
1 x paps
Cero) >) O2 + 9p o 02+ 97
4) =I yh e210 fi 3
Oe Nene Way aay TE 293
In this section, we shall compute Laplace transforms for several special functions.
The Heaviside Unit Step Function H, defined by
bo fors 7e0:
Ol 0 ior PEC
is illustrated in Fig. 5.1(a). Its Laplace transform is LH(\) = 1/\. The modified
step function A,(t) = H(t — c), c > 0, has its discontinuity shifted c units to the
right of the origin. Its Laplace transform is £H,(\) = e—/d.
H(t) Hi ((ti—=s0)
b——__—_—
t
|
oe
|
|
(a)
Figure 5.1
Now let f be any function which is defined for —0 < t < +o asin Fig. 5.2(a).
The function f,(t) = f(t — c) is defined for all t, and its graph to the right of c is
congruent with the graph of f to the right of the origin (Fig. 5.2b). In working with
Laplace transforms, however, one is usually not interested in the values f(t) of the
function f for negative ¢. Thus it is convenient to introduce the function H.f.
with values
AAC) f=)
Figure 5.2
+00
Figure 5.3
Laplace transforms for some special functions 139
1 Se lgal =e
NOIO) Te See
Thus
aN =
eee CaN ee d
mn Lr? 2» 142
GHa(N\)s en &C = SRO) COD
where C(t) = cos¢ and H,(t) = H(t — 1). By Theorem 5.4, however,
e*SC(A) = £H,C,()).
Thus
£&x = L(A; — H+ C — A,C,]Q)
and
x = A(t — 1)— A(t)
+ cost — A(t — 1) cos(¢t — 1)
Pee ee COs.7. Lote Os< t= 1h
~ (cost
— cos(t — 1) for oe 81)
Nonhomogeneous linear equations in which the forcing term is a periodic function
occur frequently in applications. The next theorem indicates a rapid method for
computing the Laplace transform of a periodic function.
Theorem 5.5. Let b denote a piecewise continuous scalar- or vector-valued function
with minimal period T > 0. Then &b exists and is given by
T
T iM e 42Mp(r 4+. nT) dr
T
=: Ss ol e“'b(r) dr
0
== e—b(r) dr 2 (Cea
A n=0
140 Solving linear equations with Laplace transforms 5.3
Example 2. Let f denote the periodic step function illustrated in Fig. 5.4. Then
T/2 —yt ft! Nt
ofr) = E= iy (2 dt —
eae ie Z dt
ee ee ee N= Go
—AT/2)2 —AT/2
F tanh(7). i
NC cane) Mise ene)
One can also use Theorem 5.5 to compute the Laplace transform of the sawtooth
function 6 illustrated in Fig. 5.5.
The next theorem describes a more efficient method for doing this, however.
i)
E >< »— -— »—— -—
$$
____—_—_—__—_» |
TE
2
—H -—c —— -— -— »—
Figure 5.4
+0 +o
sl f(r) / edt dr
+0
= )} . f(rje—™ dr = rf (a). ||
b(t)
Figure 5.5
Laplace transforms for some special functions 141
If f and g are integrable, one defines a new function f * g, called the convolution
of f and g by
A basic result for the Laplace transform of a convolution is given by the following
theorem.
Theorem 5.7 (The Convolution Theorem). Let f and g be positively integrable and
of exponential order at infinity. Then &f*g exists and
Lf*Z(r) = i oar if
f@)g(t — 1) drat
+ +x
= i! ro | e—'g(t — 7) dt dr
0 T
fe fo
f(r)e—* ‘| eg(u) du dr
0 0
Ef (r)- Ler). ||
Example 4. Let £A(\) = 1/X2(\2 + 1). Since £[4]Q) = \~” and [sin J) =
1/0? + 1), LAQ) = S[sin JO): L[JQ) = eff Gin) — 7) dr]Q). By Lerch’s
theorem,
at
It is not very hard to verify that f * g = g* f. The integral (5.3c) in the last
example is harder to evaluate when written in the equivalent form
t
/ 7 sin (t — 7) dr.
0
It is thus worthwhile, when setting up a convolution integral, to give some thought
to the order of convolution.
We conclude the section with a dual for Theorem 5.4.
Theorem 5.8. Assume that &f exists and define g(t) = e“f(t). Then Lg exists
and £g(\) = £fQ — a).
Proof
+e 2
£g(r) = i e g(t) dt = [ eo
OF dt = Sf — a). ||
Example 5. Since £[sin ¢](\) = 1/(\? + 1), £fe“ sin JQ) = 1/[A — a)? + 1]. ||
Example 6
EXERCISE 5.3
S leven tr b) p ™
iN Sas NOS we
Solving nonhomogeneous equations 143
oN
Cee Ee A |
ACL — e72A) ) wOEEe)
e
e) ————_
tanh d f) Cuctanhi(he
ms =>)
mn CD — 2/2
6. Use Theorems 5.6 and 5.7 to find inverses for the following transforms.
1 1
xQ + 3) 02 +1)
a Ny —————$_______
1 1
©) 3202 $1 D QP 4 12
=3) =4),
e —4
ys
ACA2 + 1)
JN ACL — e744) © 2
ee
1 1 —
g) ACAZ
+ 2A + 1)
h) AZ (A2Cee)
+ 2A+ 1)
One can use Laplace transforms to efficiently solve nonhomogeneous linear equations
when the forcing function is such that variation of parameters is difficult or un-
pleasant.
Example 1. Let us try to compute a general solution for the rather innocuous looking
equation
x’ = —x + did), (5.4a)
where 6 is the triangular wave function shown in Fig. 5.5 (with T = 4). This equation
was briefly examined in Example 2 of Section 4.4, where it was shown that there
could exist at most one periodic solution (also see Problems 1(j), 9, and 12 of
Exercise 4.4).
A general solution
t
Dea a DYany
WHOS a As eo")
144 Solving linear equations with Laplace transforms 5.4
Since @ was assumed to be periodic, its values for t > 4 can be determined from the
relation ¢(t) = ¢(t + 4) beginning with 0 < ¢ < 4.
We have actually shown that if Eq. (5.4a) has a solution x = ¢(¢) of period 4,
then it is given by Eq. (5.4g), where a has the value (5.4f). One checks by substitution
that the function ¢ so defined is actually a particular solution. A general solution of
(5.4a) therefore has the form x = e~‘c + (ft), where c is an arbitrary constant. ||
Laplace transforms also provide an efficient solution method for nonhomogeneous
equations when the method of annihilation is applicable but very complicated.
x = 2 + 2x = fe’ sin?
1
Sint) = 544° I
£fe’ sin (A) = eerceeae
and
ics
£[te’ sin 7](\) =
d
5 |p Cn
1 eee a
201)
[A2 — 2d + 2)? :
Consequently,
ge 10.
8 RE erroneous ~2
Hence
EXERCISE 5.4
1. Use Laplace transforms to solve the following differential equations.
BY ae? ia bail. beaN 0s
ec) x” +x = 1 +7. d) x” +x =342¢42’.
©) xe! = 9G =e" ly 6eY Se ge = 172%
8) x! ey = 17 4 Be", h) x” -2x = fsin 27.
)) 567 abe SO ial ye j) x’ +x =rt+e'sint.
k) x!” + 6x" + 9x’ + 4x = te~. 1) x6" Oy de le Seale ae
2. Let f denote the periodic step function shown in Fig. 5.4 of the last section. Solve
oe Se SE KOE
3. A1kg rectangular block is resting on the top of a cart and is connected to one end of the
cart by an unstressed spring with stiffness coefficient 1 newton/m (Fig. 5.6). Friction between
the block and the cart may be neglected. Suppose that the cart is moving at a speed of
1 m/sec to the right and that it is uniformly stopped in 1 second. Describe the motion of
the block.
es
| LAW
©) ©
Figure 5.6
4. The circuit depicted in Fig. 5.7 represents a full-wave rectifier and filter circuit. Its
purpose is to convert 117 V alternating current into an approximately constant voltage V
across the resistor R. Compute the Laplace transform for V if the voltage from node a to
node b is E|sin wt|,w = 1207. [Hint: The currents x and y satisfy the differential equations
eee R Eo
a Txt zy t Z/sin od,
; 1 1
Ee ites Page
and V = —Ry.]
\e—=Elsin 1207¢| =~ =
Figure 5.7
GHAPTER.6
LINEAR EQUATIONS
II
Ms
Il °
(kK + 2) + Dazaot* + Dy2kayt DyDayar
ll
=iMs
Il
(K+ MK + Danse + 2k + Daa =0.
as
The last power series is nets zero by assumption. All its coefficients therefore
equal zero, and the a;’s must satisfy the recursion formula
ee ae at Wit? ete wih 20) (6.1d)
147
148 Power series solutions for linear equations 6.1
Consequently,
; (6.1f)
and we have shown that if the solution of the initial value problem (6.1c) is repre-
sentable as a power series, then that series is precisely (6.1f). Since the steps in the
derivation of the series are reversible, one can show that it represents the desired
solution by merely demonstrating convergence with the ratio test. Note, incidentally,
that x = e~ is the solution of (6.1c) and that (6.1f) is its Maclaurin series. ||
Recall now that function f is said to be analytic at a point to if its values can be
expressed by a convergent power series >-%_ a4(t — fo)* on an open interval around
to. It is shown in calculus that if f is analytic at fo, then
(Gy = DS feo)
k!
(t — to)",
=0
that is, a, = f"(to)/k!, k > 0. This means that the power series expansions for
analytic functions around various points fo are precisely their Taylor series around
the same points. The function f is called analytic on an interval if it is analytic at
every point of the interval.
Example 2
i) If f@ = sint, then f is analytic on the interval —wo < t < +o.
ii) If f(@) = 1°/%, then f is not analytic at the point ¢ = 0. Since it does not
have a second or higher derivative at ¢ = 0, it has no Taylor series around
t= 0.
ii) If f@ = eV", ¢t ~ 0, and f(0) = O, then Jf has a Taylor series around
t = 0: namely,
With the definitions above in mind, then, we shall say that a vector function f
or a matrix function A is analytic at f9 (or on an interval) if each of its components or
Analytic functions 149
OO
ey ak
for tin a neighborhood of fo.
Now let us regard n scalar power series
(eo) wo
k k = :
Dea Cn tg) es ts to) aay).
k=0
Aa) = [=O
(Y Adi )te=to)!
N70
for |t — to| < R. We conclude the preliminary discussion with a lemma which is
useful in determining the interval of convergence for vector power series.
Lemma 6.1. Suppose the coefficients aj, of the series
iva)
where M and L are positive constants. Then the series converges for |t — to| < L.
Proof. Apply inequality (6.1j) to itself recursively to obtain
Theorem 6.1. If to in (a, w) is arbitrary, then any solution x = $(t) of Eq. (6.2a)
may be expressed in the form
ie)
and the radius of convergence of this series is not less than the smaller of (to — a)
and (w — to). Further, $(to) = ¢o and the other c;,’s may be found by substituting
the series (6.2b) into the differential equation and equating coefficients of like powers
of (t — to).
Proof. Since A is analytic at to, one may write A(t) = ~_9 Ax(t — to)x, and the
matrix series converges for |t — to| < R = min {to — a, w — to}. Now let co =
$(to) and determine ¢;,, for k > 0 from the recursion formula
1 k
Chi = pare 7,Ue (6.2c)
Power series solutions around ordinary points 151
If the series (6.2b) converges, then it must satisfy Eq. (6.2a) since
7) o k
= ket — 1) — 5 ic Aci) — to)”
k=1 k=0 \r=0
k
( + 1 )ex41 — pe eC = Py = (I)
k= 0 i—
Dd Arlt — to)"
k=0
converges for |t — fo| < R, there is a constant M such that |A,(t — to)*| < M for
|t — to| < L and all kK> 0. In particular |A;| < ML~*. Now take norms on
both sides of the recursion formula (6.2c) to obtain
lex4i] < i ({Ax| leol + °-- + Aol lex) < ag eo “7b [eg|).
and it follows from Lemma 6.1 that the series (6.2b) converges for |f — fo| < L,
hence for |f — tol < R. ||
Example 1. The coefficient matrix of the system
lee ieee oh
can be written in the form A(t) = Ay + Ait, where
0 1 ee)
Ao= |i | and lk il
Let us compute the solution [x, y]” = S~%o ext” which satisfies the initial condition
x = 1, y = Owhen? = 0. The recursion formula (6.2c) adapted to this problem is
C4
152 Power series solutions for linear equations 6.2
and the power series expansion for the solution has the form
t+ 5 =t
3 «©
Dl-[ets abl
The matrix A(t) may be expanded in the form
0 1 m
A(t)
See
= .a :
~ Ag+ os:Axt,
k
where
The desired solution x = y(z) is the first component of the vector (ft) = fo ext”,
where
cto (ebeli Be
Thus
-| i+ 13/6 + 74/124 -
1+ 42/2 + £3/3 + 18/24.
Power series solutions around ordinary points 153
The solution y then has a convergent power series expansion for —1 < 14 < 1
which begins
Wt) = t+ 29/64 4/12 + 71°/1204+---. ||
The example above illustrates more than Theorem 6.1. First, one frequently
cannot deduce an explicit formula for the coefficients of the solution series and one
must then content himself either with knowing only finitely many terms of the solu-
tion series or with working with a recursion formula. Second, it is unnecessary labor
to convert a scalar equation
with analytic coefficients a,,..., a, into a system in order to express its solutions
as power series. The scalar version of Theorem 6.1 is presented as a corollary.
Corollary 6.1. If ty in (a, w) is arbitrary, then any solution x = y(t) of Eq. (6.2f)
can be expressed in the form
io)
V(t) = DO
=0
ex(t — to)", (6.2g)
and the radius of convergence of the series is not less than the smaller of (to — «) and
(w — to). Fork = 0,...,n — 1, cy = W(to)/k!, and the other c;s may be found
by substituting the series (6.29) into the differential equation and equating coefficients
of like powers of (t — to).
Example 3. We use Corollary 6.1 to find the unique solution x = y(f) to the initial
value problem x” + (1/t)x = 0,0 < t < +00, x = 0 and x’ = 1 when t¢ = 1.
First, we express the function 1/t as a power series in (1 — ¢). Elementary
facts about the geometric series guarantee that
]
C2) iGeet 195 10S
AME
>
X= eh), co
= 0, cy)
=1
k=0
is assumed to exist and is substituted into the equation
lee (= (SiG = D) 3 = 0.
k=0
This procedure yields
Thus
oo 7) k
Dk + Yk + Vero — DY + »
=)
(os
“=
("ei — 1) =
k=?
In order that the coefficients of like powers of (¢ — 1)* be equal, it is necessary that
The c;’s for k > 2 are computed recursively from the formula (6.2h), starting with
the initial conditions cy = ¥(1) = 0, c, = ¥’/C1) = 1. They are
af — 1 —
Cp = ON 96a Sag Cas, CCS = 5oe
Thus the power series expansion for Y(t) around the point fo = | is
Gh ee. 62%
6
_
ies
yvo=@-yn-E5V ye
ex 3 a 4 os 5
The student should compare this solution method with that suggested in Problem 1
below. ||
Several comments can be made about Example 3. First, no explicit formula for
the coefficients c; in terms of k was obtained. Had we not carried out the computation
to cg, it would have been tempting to assume incorrectly that c, = (—1)*/3- 2*~?
for k > 3. Second, the series (6.21) cannot be used to compute y(t) for t > 2 because
it does not converge outside the interval 0 < t < 2 (see Problem 3 below). These
difficulties do not occur for every differential equation. A classical equation for which
series solution works very neatly is discussed in the next example.
Example 4. For every constant v, the equation x’’ — 2tx’ + 2vx = 0 is called
Hermite’s equation. All its solutions can be expressed in the form x = > f~xo cxt*,
and Corollary 6.1 guarantees that each solution series converges for —a < t < +o.
The c;’s are determined by substituting the series into the differential equation
= (2-1-2 + 2vey)t°tio
+ (kK + 2)(k + enzo + 20 — k)ex}t*
Thus
J) TICS ye
2 ae ae Day 0? C= a — C2 ae,
a 2) Cos
eee 297C = 20 4)
Gass CGF
5 = 6! Co; ea 5
and
one 20 — ate
1 ees C=
= eeé 2° a ue a 3). -
re _ ue20-5
; Le LeDe we = 3) = Se mee
ink VI 1 3 le
oe SI SS Geta et eS eet
A general solution is then
ts
= ¢oj1 =ipy |
ae | ze Ny Qk)!
Feet
See
OC 1)
eee OaQk +1)!
ee ek) tmap
is known to be analytic, one can formulate a more natural way of computing it. The
idea is this: #(f), since @ is analytic at fo, is given by the Taylor series
oe) ifs
o®) ig) SS
2
2? (a k!
for |f — fo| < R = min {w — fo, t9 — a}. We substitute ¢(1) into the differential
equation and compute the successive derivatives by differentiating both sides,
obtaining
I! cy = (fo) = A(fo)€o,
2! Co = A’(fo)eo + A(to)e1,
3!¢3 = A (to)eo + 2A’ (toler + A(to)er,
a
Ato )exee
(Kk +1) cry = ait)Bie--
os
156 Power series solutions for linear equations 6.2
EXERCISE 6.2
1. If an equation with analytic coefficients has the form
Pyiag al OB EO) 0,
POY Sg a
it is frequently easier to compute solution series by first writing
Solve the initial value problems in Examples 2 and 3 by writing the respective differential
equations in the form
a) =x— = 05 b) @ — 1)x” + x” +x =0.
2. There are given below two recursion formulas. Assuming that the power series
> v1 cx” which they generate are nontrivial, what is the radius of convergence of each
series if cg > O and lim, cx/cr41 exists?
1 1
a) Cr41 = rage b) cepa = cx + rageat
Pll allyl
y t OjLy
her lb
oa ion 0 if |||8
coe 0, y+ ty” — 2y +x = 0,
x0=0, xO@=1, yoO=0, yO=0, y"O=0.
Example 3. The differential equation 1°x’’ + x = 0,0 < t < +o, has an irregular
singular point at f = 0. It is not of the form (6.3c). ||
In the rest of this chapter, we shall develop a method for expressing solutions of
the equation (6.3c) in terms of roots, logarithms, and power series 2/9 Cx(t — a)
around the singular point a. Note that the power series used in Section 6.2 were of
a different character in that they were series >)fo cx (t — to)* around ordinary
points f9 rather than about singular points a.
Why should we want to develop such a method? We illustrate some motives
with two more examples.
∂²z/∂r² + (1/r) ∂z/∂r + (1/r²) ∂²z/∂θ² + λ²z = 0 (6.3d)
and the boundary condition u(1, θ) = 1, 0 ≤ θ ≤ 2π. The desired solution z is
interpreted geometrically as a smooth surface in three-dimensional space lying above
the unit disc in the plane. Because of the cylindrical symmetry and the independence
of the boundary condition u(1, θ) = 1 from θ, it is not implausible to suspect that
Eq. (6.3d) might have solutions of the form z = φ(r). For such a solution to exist,
it is necessary that φ satisfy an ordinary differential equation in r alone.
Thus x = φ(r) is a solution to the boundary value problem if it satisfies the differential
equation
r²x″ + rx′ + λ²r²x = 0 (6.3f)
and the condition
lim_{r→0+} x exists.
Equation (6.3f) has a regular singular point at r = 0. In Section 6.6, we shall find
a pair of linearly independent solutions x = J_0(λr) and x = Y_0(λr) for it in terms
of series around the singular point r = 0. The series expansions will allow us to
deduce that lim_{r→0+} J_0(λr) exists and lim_{r→0+} Y_0(λr) = −∞. Thus z =
J_0(λr)/J_0(λ) will satisfy the boundary value problem. If a more general boundary
condition u(1, θ) = f(θ), 0 ≤ θ ≤ 2π, had been given, the analysis would have
been more complicated. Such topics are treated in Fourier series and partial differen-
tial equations courses. See Weinberger [10]. ||
x′ = (1/(t − a)) B(t)x, (6.4d)
where B is a 2 × 2 matrix function which is analytic for a < t < ω. The theorems
which are proved for this equation will then be specialized so as to obtain information
When b_1(t) = b_1 and b_2(t) = b_2 are constants, Eq. (6.4a) has the form of the Euler equation
(t − a)²x″ + b_1(t − a)x′ + b_2x = 0, (6.4i)
whose characteristic equation is
λ(λ − 1) + b_1λ + b_2 = 0. (6.4j)
Thus, a general solution of Eq. (6.4i) has the form
x = c_1(t − a)^{λ_1} + c_2(t − a)^{λ_2}
if λ_1 and λ_2 are distinct roots of the characteristic equation and has the form
x = (c_1 + c_2 ln (t − a))(t − a)^{λ_1}
if λ_1 is a double root.
Notice that the characteristic equation (6.4j) can be found without converting
Eq. (6.4i) into a system: one sets x = (t − a)^λ in the differential equation and
divides by the factor (t − a)^λ after performing the indicated differentiations.
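As an illustration (an example of my own choosing, not one from the text), the Euler equation t²x″ + 3tx′ + x = 0 on t > 0 can be handled entirely through its characteristic equation:

import numpy as np

b1, b2 = 3.0, 1.0                       # the equation t^2 x'' + 3t x' + x = 0 on t > 0
lams = np.roots([1.0, b1 - 1.0, b2])    # characteristic equation lambda(lambda - 1) + b1*lambda + b2 = 0
print(lams)                             # double root lambda = -1, so x = (c1 + c2 ln t)/t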
EXERCISE 6.4
1. Describe the behavior as t → 0+ and as t → +∞ of the solutions to the following
Euler equations.
a) t²x″ + 3tx′ + x = 0.  b) t²x″ − tx′ + x = 0.
Cte) =x = "0. d) 72x” +x =0.
ear + 2ty! x = 0,
el pi
a B=|7y ie oa=| 4 |
Let us turn our attention again to the equation x″ + x/t = 0, 0 < t < +∞, and
try to find any solution of it in a form which is valid over the entire interval
0 < t < +∞. Corollary 6.1 is applicable only about interior points t_0 of the
interval 0 < t < +∞; but might there not be a power series about t = 0, with
infinite radius of convergence, which satisfies the equation for t > 0? Suppose there
were. It would have to be of the form x = ψ_1(t), where
ψ_1(t) = Σ_{k=0}^∞ c_k t^k. (6.5a)
We substitute the series (6.5a) into the differential equation and obtain, upon equating
coefficients of like powers of t, the relations c_0 = 0 and (k + 1)k c_{k+1} + c_k = 0 for k ≥ 1.
Thus convergence is assured for all t. Computing the c_k's, we obtain the relations
c_0 = 0,  c_1 is arbitrary,
c_2 = −c_1/(1·2),  c_3 = c_1/(1·2²·3),  c_4 = −c_1/(1·2²·3²·4),  ...,
and in general
c_k = (−1)^{k−1} c_1/(k!(k − 1)!),  k ≥ 1.
Consequently,
ψ_1(t) = Σ_{k=1}^∞ ((−1)^{k−1}/(k!(k − 1)!)) t^k, (6.5c)
where we have taken c_1 = 1.
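A quick numerical check (my own, not from the text) that the series (6.5c) satisfies tx″ + x = 0, i.e., x″ + x/t = 0, using a central difference for the second derivative:

from math import factorial

def psi1(t, terms=30):
    # psi_1(t) = sum_{k>=1} (-1)^(k-1) t^k / (k! (k-1)!)
    return sum((-1) ** (k - 1) * t ** k / (factorial(k) * factorial(k - 1))
               for k in range(1, terms))

t, h = 1.3, 1e-4
second_deriv = (psi1(t + h) - 2 * psi1(t) + psi1(t - h)) / h ** 2
print(t * second_deriv + psi1(t))    # approximately 0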
Since c_1 was arbitrary, any power series in t which satisfies the equation
x″ + x/t = 0 must be a scalar multiple of the series (6.5c). Thus we cannot
expect to express a second solution x = ψ_2(t) of x″ + x/t = 0 as a power series if
ψ_1 and ψ_2 are to be linearly independent. At the end of this section, we shall compute
such a second solution.
In the remainder of this section we study the system
x′ = (1/(t − a)) B(t)x (6.5d)
and the second order equation
(t − a)²x″ + (t − a)b_1(t)x′ + b_2(t)x = 0, (6.5e)
when B, b_1, and b_2 are analytic for a < t < ω. The student should keep in mind
that if the system (6.5d) comes from the equation (6.5e) by way of the substitution
x = [x, (t − a)x′]ᵀ, then
B(t) = [[0, 1], [−b_2(t), 1 − b_1(t)]],
and the solutions of Eq. (6.5e) coincide with the first components of solutions of the
system (6.5d).
Theorem 6.2. Suppose that B is analytic for a < t < ω and that the eigenvalues
λ_1 and λ_2 of B(a) neither are equal nor differ by a nonzero integer. Then
x′ = (1/(t − a)) B(t)x,  a < t < ω,
has two solutions with the forms
φ_1(t) = (t − a)^{λ_1} Σ_{k=0}^∞ c_k(t − a)^k,  φ_2(t) = (t − a)^{λ_2} Σ_{k=0}^∞ d_k(t − a)^k,
and they are linearly independent over a < t < ω. The coefficients c_k and d_k can be
evaluated by substitution.
Example 1. Write the coefficient matrix for the system
~
hmlm
Re
into the differential equation tx′ = B_0x + B_1tx and find that
x
Consequently,
We solve the first of equations (6.5g) by taking c_0 = α[1, 0]ᵀ, where α is an arbitrary
nonzero constant. Solving the second of these equations, we find that
C= —
0
1
=
0 C;_9, r > Ds (6.5h)
2r — |
We compute the coefficients c_r recursively from the recursion formula (6.5h), starting
with c_0 = α[1, 0]ᵀ, and find that
0 Otae a 3 poe
— YK See 2 4 ee
into the differential equation tx′ = B_0x + B_1tx and group like powers of t to obtain
(Bo — 31)do + Dy
=1
[Bo — (K+ Bde + Bide_i}t =
Thus
py
aa
d.=|, dion aed (6.5)
aE
The d_k's are then computed recursively beginning with d_0 = β[0, 1]ᵀ, giving
feces Ol ot Spr.
$2(t) = meeGite Ol eteesGl- taOfte
2143/2 ate 347)? fo -2%
as (a ae 4,712 an oh |
Since a second order equation (6.5e) can always be written as a system (6.5d),
Theorem 6.2 can always be used for its study. It is convenient, however, to state the
results for the second order equation directly.
Corollary 6.2. Suppose that b_1 and b_2 are analytic for a < t < ω and that the
roots λ_1 and λ_2 of the characteristic equation
is called Bessel's equation of index ν. Let us solve it for the case ν ≠ n/2, where n
is a non-negative integer. The characteristic equation is
λ(λ − 1) + λ − ν² = 0, that is, λ² − ν² = 0.
Since ν ≠ n/2 for any integer n ≥ 0, the roots λ_1 = ν and λ_2 = −ν do not differ
by an integer.
We substitute the series
x = Σ_{k=0}^∞ c_k t^{k+ν}
into Bessel's equation and find that
It is customary to choose c_0 = [2^ν ∫_0^∞ e^{−u}u^ν du]^{−1}. With this choice for c_0, the
resulting solution
x = c_0 t^ν Σ_{k=0}^∞ ((−1)^k / (2^{2k} k! (ν + 1)(ν + 2)⋯(ν + k))) t^{2k}
is denoted by J_ν(t) and is called the Bessel function of the first kind of index ν. Corre-
sponding to λ_2 = −ν, we obtain
Here ε is a constant which, along with the coefficients c_k and d_k, can be evaluated by
substitution.
Example 3. Consider the system tx′ = (B_0 + tB_1)x, where
seer ol pa OREO!
Bo = E | and B, = E 3
l
kik!
k ’ ke Olees
kk!
where a is an arbitrary nonzero constant. Thus
es | peer
gift) = filer
k=0
jm iI ioe
(Bo a mI )do +. ae [Bo os (k + w)1]dy, — By,dy_} aa clia pet = (I).
el : 'k!
We must therefore have (By — J)d,y = 0 and
a eg
[Bo a (k =F aw)Id), = —By,dy_y aP alles for k = Il.
da
: : d;_; + €
= eee
I kale
l k!k!
k 0 =— J
If we were to choose ε = 0, then we would have to take d_0 = β[1, 0]ᵀ for some
nonzero constant β and the d_k's would, except for a scalar factor, equal the c_k's.
This would give φ_1(t) over again. The choice of ε is otherwise arbitrary, so we shall
set ε = −1 and take d_0 = [−1, 0]ᵀ. Then
11
Sia fs 22 00 22H
a: = loli = ees
and
1 I 2
dg =
32.92. + 32-21* 3-313! . 3
| I
epee es
The other d_k's are found similarly. ||
We state the scalar version of Theorem 6.3 as a corollary.
Corollary 6.3. Suppose that b_1 and b_2 are analytic for α < t < ω and that the
roots λ_1 and λ_2 of
λ(λ − 1) + b_1(a)λ + b_2(a) = 0
either are equal or differ by an integer, say λ_1 − λ_2 = M ≥ 0. Then
and
F 7 é ie ee % (Sie
AOE Yee ae Te rine aly
(6.5r)
ADoce ee asa
heE-p eS”
—eE a aban.
Notice that the ln t terms cancel out, as Corollary 6.3 predicts. Now combine the
series on the right in Eq. (6.5r) to obtain
Wet) + po(t)
S= 2e— et dy tO- [K+ Dhdgs +
RG)
ik!
dE
manera
k
ae
€ = —dp
With the choice d_0 = −ε = 0, the recursion scheme gives ψ_1 again. Choose d_0 =
−ε = 1 and d_1 = 1, say. Then
1 3 5
d= 54 | 1 = Gia} = 7
Alo 5 S)
ee 18
ee! kg? ALA
48k oil 1728
Thus ψ_2 has the form
1. Each of the systems below has two linearly independent solutions each of which can be
represented by convergent power series in ¢. Compute these solutions.
Se
7 =k
om
© ,ee’ | ape +147) Atet+ aide
—
&we
wS
Se
a
a Geer
ee
Ss eas") @+) JLy
2. Write down the characteristic polynomial for each of the following equations and find
its zeros.
a) 6t²x″ + tx′ + (1 + 6t)x = 0.  b) 2t²x″ + tx′ + (t − 3)x = 0.
c) 2t²x″ + tx′ + (1 + t)x = 0.  d) t²x″ − tx′ + (2 + t)x = 0.
e) tx″ + x′ + x = 0.  f) t²x″ + 3tx′ + (1 + t)x = 0.
g) 4t²x″ + x = 0.  h) tx″ + 2x′ + x = 0.
i) 4t²x″ + 4tx′ − (1 + 4t)x = 0.
3. Find two linearly independent solutions for each of the equations in Problem 2.
4. Show that t = 0 is a regular singular point for the equation (sin t)x″ − x′ + x = 0.
5. Find a nonpolynomial solution of Legendre’s equation
= ale all}
i) —
—Us
Sats
|
Jee eeoas al Fi)
oT -7
9. The solution x = ψ(t) to the initial value problem
is not analytic at the point t = 0. The solution is, however, representable as the sum of a
series of elementary functions. Compute several successive approximations to the solution
and conjecture the form of the series. Make a change of variables x(t) = y(s), s = f(t)
which will reduce the given initial value problem to a regular singular point problem. Then
~
Wh) = Z ASS.
99, arn 49 the, whebitrns A Os,vpbemn
bn
Nae i I,
19 429 iin: Al = ¢, AL) = bb), Pb) = AD) 05:8 29y Then
634
£ es - f= t+ E (Em) -a-
om — hie Em GSe)
It follows that, if A_0 = [[a, b], [c, d]], then
(A_0 − kI)^{−1} = (1/((a − k)(d − k) − bc)) [[d − k, −b], [−c, a − k]].
As a consequence, each element of k(A_0 − kI)^{−1} is bounded for all k > 0. Let
us say |k(A_0 − kI)^{−1}| ≤ P for k > 0. Then inequality (6.6d) implies an estimate
which has the same form as the inequality (6.1j) in Lemma 6.1. The
lemma implies that ψ_1(t) = Σ_{k=0}^∞ c_k(t − a)^k converges for |t − a| < ω − a since
ω − a − L may be arbitrarily small. The proof that ψ_2(t) = Σ_{k=0}^∞ d_k(t − a)^k
converges is analogous.
Next let us show that φ_1 and φ_2 are linearly independent over a < t < ω.
Write λ_1 = α_1 + iβ_1 and λ_2 = α_2 + iβ_2, where α_1 ≥ α_2, and let c_1 and c_2 be
constants such that
If α_1 > α_2,
is constructed in the same way as the solutions in Theorem 6.2. (Keep in mind that
c_0 is an eigenvector of B(a) corresponding to λ_1.)
To construct φ_2(t), we make the substitution
x = (t − a)^{λ_2}y + εφ_1(t) ln (t − a)
in (t − a)x′ = B(t)x and find that
which converges for a < t < w. This will prove the theorem.
When the series (6.6i) is substituted into the differential equation (6.6h), the
result is
are then solvable for each k ≥ 1 since (A_0 − kI)^{−1} exists for each such k.
(A_0 − 0·I)d_0 = 0,
(A_0 − 1·I)d_1 = −A_1d_0,
(A_0 − 2·I)d_2 = −A_1d_1 − A_2d_0,
Bt — a)2tMy
1(1)= (t — a@)yo(t) + e(t — ays (d) In (t — aye.
We put this equation into the form
There remains the possibility that M = 0 and ε = 0. When t → a+, the equa-
tion βψ_1(a) = ψ_2(a) results, that is, βc_0 = d_0. This is not possible since the algorithm
for computing ψ_2(t) requires that c_0 and d_0 be linearly independent eigenvectors of
B(a). This completes the proof of Theorem 6.3.
EXERCISE 6.6
l|
X2 SNe
Arle
—1 2
Find a constant P such that |k(A_0 − kI)^{−1}| ≤ P for all k ≥ 2.
It is easy to see that Γ(1) = 1. When the integral for Γ(ν + 1) is evaluated by parts,
one finds that
Γ(ν + 1) = νΓ(ν). (6.7d)
Thus Γ(n + 1) = n! for all positive integers n. The product (6.7b) is concisely
expressed in terms of the gamma function, for it equals Γ(ν + k + 1)/Γ(ν + 1).
The identity (6.7d) makes it possible to tabulate the gamma function without
evaluating the integral (6.7c) for ν > 1. In the same spirit, one inductively defines
Γ(ν) = (1/ν)Γ(ν + 1) when ν is negative but is not an integer. It is known, for
example, that Γ(½) = √π. Thus Γ(3/2) = ½Γ(½) = ½√π and Γ(−½) = −2Γ(½) =
−2√π. The graph of the gamma function is shown in Fig. 6.1.
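These values are easy to confirm numerically; the following short check (mine, not the book's) uses the standard library gamma function:

from math import gamma, sqrt, pi, isclose

print(isclose(gamma(2.7), 1.7 * gamma(1.7)))   # Gamma(nu + 1) = nu * Gamma(nu), with nu = 1.7
print(isclose(gamma(0.5), sqrt(pi)))           # Gamma(1/2) = sqrt(pi)
print(isclose(gamma(1.5), sqrt(pi) / 2))       # Gamma(3/2) = (1/2) sqrt(pi)
print(isclose(gamma(-0.5), -2 * sqrt(pi)))     # Gamma(-1/2) = -2 sqrt(pi)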
Figure 6.1
Turning now to Bessel's equation, we observe again that the roots of the charac-
teristic equation are λ_1 = ν and λ_2 = −ν.
We saw in Example 2 of Section 6.5 that if 2ν is not equal to an integer, then
Bessel's equation has two linearly independent solutions J_ν and J_{−ν}. Using the gamma
whenever ν > 0 is not an integer. If ν > 0 is an integer, it is not hard to show that
J_{−ν}(t) = (−1)^ν J_ν(t), which implies that J_ν and J_{−ν} are linearly dependent.
Just as the trigonometric functions satisfy identities, so do the Bessel functions.
For example, J_{ν+1}(t) = 2ν(J_ν(t)/t) − J_{ν−1}(t). Thus Bessel func-
tions of the first kind may be tabulated for all ν ≥ 1 from the values of J_ν(t) and
J_{−ν}(t) with 0 < ν < 1. In a problem below, the student will be asked to show that
Using the identity above, one may express all the functions J_{m+1/2}(t) and J_{−m−1/2}(t)
as finite algebraic combinations of sin t, cos t, and t. These are called the spherical
Bessel functions.
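Both the recurrence quoted above and the half-integer (spherical) case can be verified numerically; the following sketch (my own, not from the text) uses scipy's Bessel routines:

import numpy as np
from scipy.special import jv

t, nu = 2.7, 1.5
print(np.isclose(jv(nu + 1, t), 2 * nu * jv(nu, t) / t - jv(nu - 1, t)))   # the recurrence above
print(np.isclose(jv(0.5, t), np.sqrt(2.0 / (np.pi * t)) * np.sin(t)))      # J_{1/2}(t) = sqrt(2/(pi t)) sin t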
Figure 6.2
This implies that d2,,4, = O for all m > 1. We may therefore change the index of
summation for the right-hand series in Eq. (6.7f) to 7 = k/2. It is convenient to
change the index of summation on the left to 7 = m+ yp. Then
ee
j=v (Gj — vl j!2°? j=l
I ae a ae.
(6.72)
Consider first the case ν = 0 and recall from the proof of Theorem 6.3 that, for
this case, one may choose ε = 1. We shall give to the constant d_0 the value zero.
The resulting solution ψ_2 is called the (Neumann) Bessel function of the second kind
of index zero, and it is usually denoted by Y_0. Equating coefficients of like powers
of t in Eq. (6.7g), we find that
Thus
dy = as d= 42 r oe <n = 4292 fa 5
1 1 1 oe 1 ewe *
de =o {-p(1- V5 soo| = pce tat y
De WOTRUY lenseae
It follows that
¥) = 2[YW + (7 — n2)4,(0),
where
Example. We see from their series expansions that Y_0(t) → −∞ and J_0(t) → 1
as t → 0+. Thus the only solutions of Bessel's equation of index zero
which are bounded on an interval 0 < t < ε are scalar multiples of J_0. We made
use of this fact in Example 5 of Section 6.3 when we asserted that x = J_0(λr) is the
only solution of
r²x″ + rx′ + λ²r²x = 0 (6.7i)
which is bounded for 0 < r ≤ 1. ||
EXERCISE 6.7
Fox at te (GF 3x = 90
7.1 PRELIMINARIES
The theory developed in Chapter 4 is valid when the coefficient matrix A(f) in
x′ = A(t)x (LH)
or
is not constant. Thus, solutions to initial value problems for these equations exist,
but one usually cannot write them down concisely in terms of elementary functions.
Faced with this situation, an investigator must try to describe the behavior of solutions
without actually attempting to express the solutions as elementary functions.
We saw in Chapter 6 that power series techniques can be useful for studying linear
equations if the coefficients are analytic. Power series techniques have some dis-
advantages, however. The unique solution to the initial value problem
EXERCISE 7.1
1. Prove Sturm’s comparison theorem: Let p and q be continuous real-valued functions on
a < t <w such that g(t) > p(t). Suppose that ¢ and yw satisfy
4. Use Sturm’s theorems to deduce the oscillatory character of the Bessel functions depicted
in Fig. 6.2.
5. Show that there is a zero of Jo between every two positive zeros of J1.
6. a) Show that every solution of x’’ + (1 + e~‘)x = 0 has infinitely many zeros.
b) Let p be continuous ona < t < +. Abstract your study of (a) so as to obtain
a condition on p which guarantees that every solution of x” + p()x = 0 has infinitely
many zeros.
It follows that φ is periodic if and only if α = 0. The only periods possible for φ are integral
multiples of 2π/β.
In case A is an n × n matrix, the analysis proceeds in a similar way. When we
say that a solution φ is bounded, we mean that φ is bounded over the interval
−∞ < t < +∞.
Thus
k
e"1;(t)| < akDo Cleri| + leralé +--+ + leemelé"* ert! < e—*pr), > 0,
where k
All solutions with c_2 ≠ 0 become unbounded as t → +∞ and all solutions with
c_2 = 0 approach zero as t → +∞. Notice that, in terms of the scalar equation,
|φ(t)| = |x(t)| + |x′(t)|. Hence the behavior of both x(t) and x′(t) is known as
t → +∞. ||
Example 2. Every solution of the equation
Xj f 1 » 0) 0 0 0) Xi
Xo Ea or OO ee
peel Or 3 100.020) |xe oe
x4 in Oe he Sie a 22)
Xs i we i eee ae
x6 DO ay ay Sees
x = pe, @ X= spre
are linearly independent.
Let φ be any solution corresponding to λ = iβ. Then, it follows from Theorem 3.9
that for some choice of constant vectors c_1, ..., c_m, one may write
equation. Thus φ(t) is a linear combination of the solutions x = p_je^{iβt} and the
proof is complete. ||
Corollary 7.2. All the solutions of x′ = Ax are bounded over the interval −∞ <
t < +∞ if and only if
i) A has n linearly independent eigenvectors; and
ii) every eigenvalue of A has zero real part.
Proof. Suppose that p_1, ..., p_n are linearly independent eigenvectors of A with
respectively corresponding imaginary eigenvalues. Then there are n and no more
than n linearly independent periodic solutions. Every other solution can be expressed
as a linear combination of these periodic solutions and is therefore bounded.
To prove the converse, suppose first that some eigenvalue λ_j = α_j + iβ_j satisfies
α_j ≠ 0. If p_j is an eigenvector corresponding to λ_j, then x = p_je^{λ_jt} is a solution
of x′ = Ax and |x(t)| = |p_j|e^{α_jt}. Since α_j ≠ 0, e^{α_jt} → +∞ either as t → +∞ or as
t → −∞. Next suppose that A does not have n linearly independent eigenvectors
and let P = [p_1, ..., p_n] be such that P^{−1}AP = diag [B_1, ..., B_r] is in Jordan
form. Since A and diag [B_1, ..., B_r] are similar, they have the same number of
linearly independent eigenvectors (see Theorem 3.20 of Section 3.12). Thus at least
one of the Jordan blocks B_1, ..., B_r is not a 1 × 1 matrix. Say
MeL! sere)
B; = 4
0 v
Since AP = P diag [B,,..., By], we have
has one eigenvalue i of multiplicity three and (up to scalar multiples) but one eigen-
vector [1, 0, 0]ᵀ corresponding to it. One can check that
form a fundamental solution set, and every solution is a linear combination of these
three. A solution is periodic if and only if it is a scalar multiple of φ_1. ||
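The growth caused by a defective purely imaginary eigenvalue is easy to see numerically. The following sketch uses a matrix of my own (a single 3 × 3 Jordan block with eigenvalue i), not the book's example, to illustrate the same phenomenon discussed in Corollary 7.2:

import numpy as np
from scipy.linalg import expm

A = np.array([[1j, 1, 0], [0, 1j, 1], [0, 0, 1j]])    # one Jordan block, eigenvalue i, only one eigenvector
for t in (10.0, 100.0, 1000.0):
    print(t, np.linalg.norm(expm(t * A)))             # grows roughly like t^2/2, so some solutions are unbounded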
Let us call a polynomial p(A) stable if and only if each of its zeros has negative
real part. In order to apply Corollary 7.1 to specific equations x’ = Ax, one must
have a way of deciding whether or not the characteristic polynomial of the equation
is stable. This is an involved, but much-studied, problem. An extensive discussion is
given in Marden [18], but a few remarks are in order here.
In the first place, it is not hard to see that any real polynomial has positive co-
efficients if it is stable. If n = 2, then the converse of this statement is true. If n > 2,
then a polynomial may be unstable even though its coefficients are positive. In this
case, one may decide its stability by application of the following test.
denote the polynomial obtained from p(λ) by deleting alternate terms beginning with λ^n.
The polynomial p(λ) is stable if and only if the quotient r_0(λ)/p(λ) has a continued
fraction expansion of the form
r_0(λ)/p(λ) = 1/(c_1λ + 1/(c_2λ + 1/(c_3λ + ⋯ + 1/(c_nλ)))) (7.2d)
in which each of the constants c_1, c_2, ..., c_n is positive.
The utility of the criterion for specific equations hinges upon the fact that a
continued fraction expansion for ro(\)/p(\) can be computed by performing at most
n polynomial divisions. The proof is beyond the scope of this text, but we shall
illustrate the application of the criterion.
To form a continued fraction expansion of r_0(λ)/p(λ), one divides p(λ) by r_0(λ)
until a remainder of degree lower than r_0(λ) is obtained. Say p(λ)/r_0(λ) =
b_1(λ) + r_1(λ)/r_0(λ). Then
r_0(λ)/p(λ) = 1/(b_1(λ) + r_1(λ)/r_0(λ)).
One now divides r_1(λ) into r_0(λ) to obtain r_0(λ)/r_1(λ) = b_2(λ) + r_2(λ)/r_1(λ) and
continues in this way until the process stops at the mth (m ≤ n) stage. Then
r_0(λ)/p(λ) = 1/(b_1(λ) + 1/(b_2(λ) + ⋯ + 1/b_m(λ))).
— Xr ee
Li) Sy ee Nee
r(A) 3 rah)
which implies that c3 = —4 and r3(\) = 8. Finally
ro(A) _ _ 30
ES ON 7 4°
Thus
ro(A) os 1
P(X) '
ah Ae
N+ :
-p+—
The expansion for ro(A)/p(A) has the required form, but cz and c4 are negative.
Thus p(\) is not stable. (Its zeros are \ = —3 + id\/7 and \ = 44 idV/7.) ||
Example 7. Consider the polynomial p(λ) = λ⁴ + λ³ + 2λ² + λ + 1 with r_0(λ) =
λ³ + λ. Then
p(λ)/r_0(λ) = λ + 1 + (λ² + 1)/(λ³ + λ).
Thus c_1 = 1 and r_1(λ) = λ² + 1. Since r_0(λ)/r_1(λ) = λ, the expansion process
stops at the second stage with c_2 = 1 and r_2(λ) = 0, and
r_0(λ)/p(λ) = 1/(λ + 1 + 1/λ). (7.2f)
The polynomial p(λ) is not stable since the continued fraction (7.2f) does not have
the form (7.2d). When the expansion process terminates in this way, one can use
the expansion to factor p(λ). In fact, upon simplifying Eq. (7.2f), we have
p(λ)/r_0(λ) = (λ² + λ + 1)/λ.
Since we started with
p(λ)/r_0(λ) = (λ⁴ + λ³ + 2λ² + λ + 1)/(λ³ + λ),
we conclude that p(λ) = (λ² + 1)(λ² + λ + 1).
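A numerical cross-check (my own, not the book's) of Example 7: the zeros of p(λ) agree with those of (λ² + 1)(λ² + λ + 1), and p is not stable because the factor λ² + 1 contributes zeros with zero real part.

import numpy as np

p_roots = np.roots([1, 1, 2, 1, 1])                                   # zeros of p(lambda)
factored = np.concatenate([np.roots([1, 0, 1]), np.roots([1, 1, 1])]) # zeros of (lambda^2+1)(lambda^2+lambda+1)
print(np.sort_complex(p_roots))
print(np.sort_complex(factored))                 # the same multiset of zeros
print(all(r.real < 0 for r in p_roots))          # False, hence p is not stable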
EXERCISE 7.2
1. What is the largest number of linearly independent solutions of x′ = Ax which approach
zero as t → +∞ if
—1 1 Oo —1 s ( )
-1-1 0 0 Eyl [Link] 2 BON LOle
Uy tegen tied eS Has Vier me ea
Oo @ @ i 0 0-3 2
Oe Bar Onn 1 0
=O OI Ow 22 UY 1).
aa act ena mat aed be tag ol hel ere ed bl
0 0O0-2 O 0 O-2 O
3. How many linearly independent periodic solutions can each of the systems in Problem 2
have?
x1] =e 0™ On)
X5:| cm | Oa, aly Os
xo Or 0 e— 2 CO eee
Ben =i) 9 0, = 1|| x4
where a1, a2, and a3 are real, be related in order that the equation have a periodic solution?
9. Explicitly solve the equation
10. Which of the following equations have the property that every solution and its first
n — 1 derivatives approach zero as f— +0?
a) xv) 4 x! 4+ 2x" 4 2x’ + 4x = 0. BD) ir ee
c) xX” + x” + x’ + 6x = 0.
11. A large rectangular metal sheet of mass M rests on a horizontal frictionless surface and
is connected to a wall by a linear spring with stiffness coefficient k (Fig. 7.1).
Figure 7.1
A small metal block rests atop the sheet so that their centers-of-mass coincide. The frictional
force between the block and the sheet is proportional to the difference in their velocities.
Suppose the sheet is slowly pushed to the left and released with the spring under compression.
What will be the subsequent motion of the block? How wide must the sheet be in order
that the block not fall off?
If A is periodic on −∞ < t < +∞ with (minimal) period T > 0, the questions
of Section 7.1 can be answered in principle for the linear periodic system
x’ = A(t)x. (7.3a)
The study of this equation can be motivated by an ad hoc discussion for the case
that A(t) = a(t) is a scalar. In this case, every solution x = ¢(f) has the form
Thus
g(t) = p(tye™. (7.3b)
The significance of this equation is that every nontrivial solution of x’ = a(t)x
can be represented as the product of an exponential function and a function of
period T. This representation allows one to study the behavior of all solutions on
the interval −∞ < t < +∞. We noted above that φ has period T if and only if
λ = 2nπi/T for some integer n. Similarly, φ(t) → 0 as t → +∞ and becomes unbounded as t → −∞
if and only if Re λ < 0.
It is possible that φ is periodic with a period other than T. In fact, suppose that
λ = iβ and that the ratio βT/2π is a rational number, say r/s, where r and s are
relatively prime integers and s > 0. Then
φ(t + sT) = p(t + sT)e^{λ(t+sT)} = p(t)e^{λsT}e^{λt} = e^{λsT}φ(t).
Since β = 2πr/(sT), λsT = 2πri, and it follows that φ has period sT.
If λ = iβ and if the ratio βT/2π is an irrational number, then φ is bounded for
−∞ < t < +∞, but φ(t) does not approach zero as t → +∞.
Example 1. The characteristic exponent \ of the equation
x
S sin ¢ cs
~ [2 + cos? f]1/2
iS
27
1 sin¢
Ve an cae
The integrand is not the derivative of an elementary function. It is, however, an odd
function of period 2x. Hence \ = 0. One sees from the representation (7.3b), that
every nontrivial solution
t
sin t
¢(t) = cexp| » Db cos? Ai a
x′ = A(t)x
which has the form
Φ(t) = P(t)e^{Bt},
where P is a matrix function having period T and B is a constant matrix in Jordan
canonical form.
Proof. If Φ is a fundamental matrix solution, then Ψ(t) = Φ(t + T) is a matrix
solution since Ψ′(t) = Φ′(t + T) = A(t + T)Φ(t + T) = A(t)Ψ(t). By Abel's
formula (4.5b),
It is known from linear algebra that there is a matrix B such that C = e^{BT}.
x(t)| _ |pil) Be K a |
Fal i Be Pale st. Gal
for some constant vector c.
Kune LB)
iii) If α = 0, then (LHP) has k, and no more than k, linearly independent solutions
corresponding to λ which are bounded for −∞ < t < +∞.
iv) If α = 0 and if βT/2π is a rational number, then all linear combinations of the
solutions in item (iii) are periodic with common period qT, where q is a positive
integer.
Proof. We may assume that B is in Jordan canonical form and that λ is an eigenvalue
of B appearing in the first m columns. The information in the hypotheses allows us
to write down the matrices B and e^{Bt} which are shown in Figs. 7.2 and 7.3, respectively.
The (n − m) × (n − m) submatrix R has the eigenvalues of B other than λ on its
diagonal, some distribution of zeros and ones above the diagonal, and zero entries
elsewhere.
We obtain solutions from the Floquet representation Φ(t)c = P(t)e^{Bt}c by
choosing c in different ways.
Figure 7.2
Figure 7.3
To prove (i), set any component of c from the first to the mth equal to one, and
set all other components equal to zero. Then the solution P(t)e^{Bt}c has the form
P(t)(c_1 + c_2t + ⋯ + c_jt^{j−1})e^{λt},
where j is some integer between one and m and c_1, ..., c_j are constant vectors. If
α < 0, P(t)e^{Bt}c → 0 as t → +∞ since P(t) is bounded and t^pe^{λt} → 0 as t → +∞
for integers p ≥ 0. The solution P(t)e^{Bt}c becomes unbounded as t → −∞ since
P(t) is periodic and |t^pe^{λt}| → +∞ as t → −∞. The proof of (ii) is analogous.
To prove (iii), note that e^{Bt} has exactly k columns corresponding to λ which do
not contain positive powers of t, say, p_1e^{λt}, p_2e^{λt}, ..., p_ke^{λt}. Each solution P(t)p_je^{λt}
is bounded for −∞ < t < +∞ since |P(t)p_je^{λt}| = |P(t)p_j| |e^{λt}| = |P(t)p_j|. It is
worth noting that the functions P(t)p_je^{λt} (j = 1, ..., k) form a maximal set of
linearly independent bounded solutions corresponding to the eigenvalue λ. Thus,
any bounded solution which is linearly independent of these must correspond to a
different eigenvalue of B.
To prove (iv), consider one of the bounded solutions P(t)p_je^{iβt}
and assume that βT/2π = p/q, where q > 0 and p are relatively prime integers.
Then
P(t + qT)p_je^{iβ(t+qT)} = P(t)p_je^{iβt}e^{2πpi} = P(t)p_je^{iβt}. ||
Corollary 7.5. A necessary and sufficient condition that every solution of (LHP)
approach zero as t → +∞ is that every characteristic exponent have negative real
part. Let e^{BT} denote a transformation matrix for the equation. A necessary and sufficient
condition that every solution be bounded over intervals of the form −∞ < t < +∞
is that (i) every characteristic exponent have zero real part and (ii) B have n linearly
independent eigenvectors.
The proof of this corollary is very similar to that of Corollary 7.1, and it is left
to the exercises.
The last theorem shows the way in which the characteristic exponents (hence the
characteristic multipliers) of a linear periodic system determine the behavior of its
solutions. To utilize this theorem for the study of a specific system, however, one
must have some method of determining the characteristic multipliers. This is a
difficult problem and methods are known only in special cases. For example, Theorem
7.4 guarantees that all the characteristic multipliers of a periodic-coefficient equation
x’ = A(t)x equal one and that every period transformation matrix is diagonalizable
if —A(t) = A(—t). Even in the general case, however, Abel’s formula provides
some helpful information.
Theorem 7.6. Let μ_1, ..., μ_n denote the characteristic multipliers of a periodic-
coefficient equation x′ = A(t)x. Then
μ_1μ_2 ⋯ μ_n = exp [∫_0^T Tr A(s) ds]. (7.3e)
Proof. If Φ is the fundamental matrix solution which satisfies the initial condition
Φ(0) = I and if C is a period transformation matrix for Φ, then Φ(t + T) = Φ(t)C.
Thus Φ(T) = Φ(0)C = C, and it follows that the eigenvalues of Φ(T) are the char-
acteristic multipliers of the differential equation. But the determinant of a matrix
is the product of its eigenvalues. By Abel's formula,
Tt
ae 4cost eS |x).
;] ee eran: eer ia i
The coefficient matrix has period 2π. If μ_1 and μ_2 denote the characteristic
multipliers of the system, then
by Theorem 7.6.
The subsequent analysis of the system is based upon the fact that one solution is
known: namely,
Sh pad,
sin=
2
o(t) = ze
é,'” Cos =
2
From Theorem 7.5, we may conclude that every solution of the system (7.3f)
approaches zero as t → +∞. ||
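Theorem 7.6 also lends itself to a direct numerical check: integrate x′ = A(t)x over one period starting from the identity, take the eigenvalues of the resulting period transformation matrix, and compare their product with the exponential of the integrated trace. The sketch below uses a Hill-type matrix of my own choosing, not one of the book's examples.

import numpy as np
from scipy.integrate import solve_ivp

T = 2 * np.pi
def A(t):                                   # a periodic coefficient matrix with period 2*pi (my example)
    return np.array([[0.0, 1.0], [-(1.0 + 0.5 * np.cos(t)), 0.0]])

def rhs(t, y):                              # matrix equation Y' = A(t) Y, flattened for the integrator
    return (A(t) @ y.reshape(2, 2)).ravel()

sol = solve_ivp(rhs, [0.0, T], np.eye(2).ravel(), rtol=1e-10, atol=1e-12)
C = sol.y[:, -1].reshape(2, 2)              # period transformation matrix C = Phi(T), Phi(0) = I
multipliers = np.linalg.eigvals(C)
print(np.prod(multipliers))                 # approximately 1 = exp(integral of Tr A), since Tr A = 0 here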
Example 4. The equation x″ + p(t)x = 0, where p has minimal period T > 0,
is called Hill's equation. When converted into a system, it takes the form
x′ = [[0, 1], [−p(t), 0]] x. (7.3g)
The characteristic multipliers satisfy the equation
μ_1μ_2 = exp [∫_0^T (0 + 0) ds] = 1. (7.3h)
If corresponding characteristic exponents are
λ_1 = α_1 + iβ_1 and λ_2 = α_2 + iβ_2,
then
|μ_1| = e^{α_1T} and |μ_2| = e^{α_2T}.
Thus exp [(α_1 + α_2)T] = 1, and it follows that α_2 = −α_1. Equation (7.3h) now
assumes the form
where p_1(t) and p_2(t) have period T. One of these solutions is unbounded as t → +∞;
the other approaches zero. If α_1 = 0, then according to Floquet's theorem there
are two linearly independent solutions having the forms
If Y = 0, each of these solutions is bounded. Each solution will also be periodic if,
in addition 6,7/2r is rational. If Y= 1, then 8; = By. Hence 8, = rk/T for some
integer k, and the solutions are
and the characteristic exponents are (mod 2πi/T) the eigenvalues of the matrix
(1/T) ∫_0^T A(s) ds.
The 2 × 2 matrices A(t) which satisfy the commutativity condition (7.3i) are
those of the form
A(t) = [[f(t), α(f(t) − g(t))], [β(f(t) − g(t)), g(t)]],
where α and β are any constants. This observation is not very useful in applications;
compare it with the scalar case at the beginning of this section, however.
Example 5. To find the characteristic multipliers of the system
x′ = (cos t)x + y,
y′ = (1 + cos t)y,
we note first that the coefficient matrix commutes with its integral.
Thus the characteristic exponents are (mod 2πi/T) the eigenvalues of the matrix
(1/2π) ∫_0^{2π} [[cos s, 1], [0, 1 + cos s]] ds = [[0, 1], [0, 1]].
It is clear from Eq. (7.3j) that the eigenvalues of A(t) do not coincide in general with
the characteristic exponents of the system, that is, with the eigenvalues of B. In
particular, all eigenvalues of A(t) can be negative for −∞ < t < +∞ and yet
there can be solutions of x′ = A(t)x which do not approach zero as t → +∞.
Example 6. The equation
C4, sin
o(t) =
e!* cost
Z
The solution @ is unbounded as t > +o, yet \ = —+ is a double eigenvalue of the
coefficient matrix. ||
EXERCISE 7.3
1. Find the characteristic multiplier and a characteristic exponent for each of the equations
below. Describe the behavior of solutions as t → +∞.
a) x′ = (1 − 2 sin t)x. b) x′ = (−2 sin² t)x.
c) x’ = V1costx.
—
2. Give a necessary and sufficient condition in terms of a that x’ = a(1)x have only periodic
solutions.
3. Discuss the behavior of solutions as t > +00 for the equations
a) x’ = (cost + i7/3)x, b) x’ = (cost + im)x.
4. Solve the following systems by the method of integrating factors and express their
solutions in Floquet form. What are the characteristic exponents and multipliers?
4 4 1 1 1 HO Ee)
<4 ct teal f= Th Pie fs
in) — Wt) Lf woes [| Oly
ear yee y.
t A
b
z-i 4
W’( s)| GS il|p(s)| ds = ; |p(s)| ds < =
<< / ws)
to obtain the contradiction. Next show that the characteristic multipliers are nonreal roots
of unity.]
15. How do the solutions of x’’ + (? — cos? Ax = 0 behave ast > +2?
which is graphed in Fig. 7.4. Even though lim_{x→+∞} f(x) does not exist, it is still
possible to describe the limiting behavior of f quantitatively. To do this, let x be
restricted to an interval t ≤ x < +∞, t > 1. Then the range of f for x ≥ t is
the projection of the graph of f, x ≥ t, onto the y-axis. No matter how t > 1 is
chosen in the example, this projection will be a closed, finite interval I(t). We denote
the upper endpoint of I(t) by M(t) = sup_{x≥t} f(x) and the lower endpoint by m(t) =
inf_{x≥t} f(x). The expressions “sup” and “inf” are abbreviations for the words
supremum and infimum, respectively. Notice that, if t_1 > t, then I(t_1) is contained
in I(t), that is,
m(t) ≤ m(t_1) ≤ M(t_1) ≤ M(t).
Figure 7.4
Since monotone functions which are bounded for 1 ≤ t < +∞ approach limits
as t → +∞, lim_{t→+∞} M(t) and lim_{t→+∞} m(t) exist. We denote these limits by
lim sup_{x→+∞} f(x) and lim inf_{x→+∞} f(x),
respectively. They are called the limit superior and limit inferior of f as x → +∞,
and in this example they have the values +1 and −1.
Let us now define lim sup,_,,. g(x) for an arbitrary real-valued function on
an interval a < x < +o.
A set of real numbers S is said to be bounded above (below) if there is a number
M(m) such that x < M (x > m) for every number x in S. The completeness axiom
which is assumed for the real number system R! is that every nonempty set of real
numbers which is bounded above has a /east upper bound. The least upper bound
of a set S is called its supremum and is denoted by sup S. A dual form of the com-
pleteness axiom is that every nonempty set of real numbers which is bounded below
has a greatest lower bound. The greatest lower bound of a set S is called its infimum
and is denoted by inf S. If S is not bounded above (below), one writes sup S =
+o (infS = —o).
One says that a real-valued function is bounded above (below) if its range is
bounded above (below). A function which is bounded both above and below is
simply called bounded, and this is the sense in which we have previously used the term.
It follows immediately from the completeness axiom that if M is a bounded,
nonincreasing function on an interval a < t < +o, then lim; _,,. M(t) exists, the
value of the limit being the infimum of the range of M. A dual theorem for non-
decreasing functions is, of course, true.
Now assume that g is bounded above for a ≤ x < +∞. For each t ≥ a, then,
the set R_t = {y: y = g(x) for some x ≥ t} is bounded above. If M(t) = sup R_t,
then M is nonincreasing. If M is bounded below, then lim_{t→+∞} M(t) exists.
Otherwise lim_{t→+∞} M(t) = −∞. In either case, we define lim sup_{x→+∞} g(x) =
lim_{t→+∞} M(t). If g is unbounded above on the interval t ≤ x < +∞ for arbitrarily
large t, we define lim sup_{x→+∞} g(x) = +∞.
The notion of lim inf_{x→+∞} g(x) is defined analogously, and we observe that
lim_{x→+∞} g(x) exists if and only if lim sup_{x→+∞} g(x) = lim inf_{x→+∞} g(x), where the
indicated limits are finite.
Theorem 7.7. Suppose that A is real-valued and continuous on an interval a ≤
t < +∞ and let m(t) and M(t) denote, respectively, the smallest and largest eigenvalues
of A(t) + Aᵀ(t). If, for some t_0 in (a, +∞),
lim_{t→+∞} ∫_{t_0}^t M(s) ds = −∞,
then every solution of x′ = A(t)x approaches zero as t → +∞. If, for some t_0 in
(a, +∞),
(a, +0), t
GQ) tae oa
Then ¢(t)> 0 as t— +o if and only if ||¢(t)|| 0 as t> +o. In matrix
notation
leA||? = o Oe.
Thus
M(s)ds = —~m,
to
then
lim ||¢()|| = 0.
t>-+0
then every solution of x' = A(t)x is bounded for ty < t < +o.
*
, |—sintt—costt . 3
x
= D sill COS tal x
. (7.4c)
y —cos t sin? t —cos” t sin? t}Ly
The coefficient matrix is periodic, but the Floquet theory is difficult to apply.
Theorem 7.7 provides information about solutions much more readily. First one
solves the eigenvalue equation
in the exercise below. It follows from this that every solution approaches zero as
t → +∞. ||
Example 2. Let us show that all solutions of the equation x″ − (e^{−t} − 1)x = 0
and their derivatives are bounded for 0 ≤ t < +∞. If x′ = y, then
[x, y]′ = [[0, 1], [e^{−t} − 1, 0]] [x, y]ᵀ.
The matrix A(t) + Aᵀ(t) is
[[0, e^{−t}], [e^{−t}, 0]].
Corollary 7.7 guarantees that |φ(t)| and |φ′(t)| are bounded over 0 ≤ t < +∞ for
every solution x = φ(t). ||
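The computation behind Example 2 is easy to reproduce numerically. The sketch below (mine, assuming the setup just described) forms A(t) + Aᵀ(t), takes its largest eigenvalue M(t), and checks that ∫_0^∞ M(s) ds is finite, which is what Corollary 7.7 requires.

import numpy as np
from scipy.integrate import quad

def M(t):
    A = np.array([[0.0, 1.0], [np.exp(-t) - 1.0, 0.0]])
    return np.linalg.eigvalsh(A + A.T).max()     # largest eigenvalue of A + A^T; here it equals e^{-t}

print(quad(M, 0.0, np.inf)[0])                   # approximately 1.0, a finite integral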
Example 3. Consider Hill's equation x″ + p(t)x = 0, where p is continuous on
−∞ < t < +∞ and has minimal period T > 0. When converted into a system,
so Theorem 7.7 does not apply. The reader will recall from Example 4 of Section 7.3
that if Hill's equation has one nontrivial solution φ_1 such that |φ_1(t)| + |φ_1′(t)| → 0
as t → +∞, then it has a second linearly independent solution φ_2 such that
|φ_2(t)| + |φ_2′(t)| becomes unbounded as t → +∞. In a specific case, Hill's equation
might have all its solutions bounded, but Theorem 7.7 and its corollaries do not
provide a sufficiently delicate test for the detection of this boundedness. ||
In many cases, it is difficult to apply Theorem 7.7 and its corollary because the
eigenvalue M(t) is difficult to compute. The theorem and its corollary remain true
if M(t) is replaced by
SSI Gaye
where L is either periodic or constant, in the following way. Let Φ(t) = P(t)e^{Bt}
denote a fundamental matrix solution in (perhaps trivial) Floquet form. Suppose it
is known that all solutions of the equation approach zero as t → +∞. Then all the
eigenvalues of B have negative real parts. Thus, there exist positive constants M_1
and η such that |Φ(t)| ≤ |P(t)|·M_1e^{−ηt} for all t ≥ 0. Further, there exist positive
constants M_2 and M_3 such that |P(t)| ≤ M_2 and |P^{−1}(t)| ≤ M_3 for all t. Con-
sequently,
|Φ(t)| ≤ c_1e^{−ηt},  c_1 = M_1M_2, (7.4d)
for all t ≥ 0, and
Theorem 7.8. Suppose the coefficient matrix A(t) of the equation x′ = A(t)x can
be written in the form A(t) = L(t) + R(t), where
i) L and R are continuous for a ≤ t < +∞,
ii) either L is constant or L is periodic with minimal period T > 0,
iii) |R(t)| → 0 as t → +∞,
iv) every solution of x′ = L(t)x approaches zero as t → +∞.
Then every solution of x′ = A(t)x approaches zero as t → +∞.
Proof. Let φ be any solution of x′ = A(t)x and let Φ(t) = P(t)e^{Bt} denote a funda-
mental matrix solution for x′ = L(t)x in Floquet form. Let c_2 and η be defined as
in inequality (7.4e) and choose a constant c_3 > 0 so small that c_2c_3 < η. Since
|R(t)| → 0 as t → +∞, there is a t_0 ≥ a such that |R(t)| ≤ c_3 for all
t ≥ t_0. Express φ by means of (7.4f) and take norms of the resulting equation
to obtain
On 0 0
Pa
All solutions of
>| s Oey
TA eal 24 [by
approach zero as t > + as does the matrix
0 0
=
Rie ba
te
t
Thus every solution φ of Eq. (7.4g) has the property that |φ(t)| + |φ′(t)| → 0 as
t → +∞. ||
o(t) = EE sine»
2,
e!/* cos Al
2
we see that the eigenvalues of the first term L(t) have negative real parts. Thus every
solution of
y r aoe
| Se eal
A|;
approaches zero as t → +∞. The second term R(t) is bounded and does not approach
zero as t → +∞. This example shows that some generalization of hypothesis (iii) is
necessary for the truth of a result such as Theorem 7.8, and the mere boundedness of
|R(t)| is not enough. ||
for t > 0. The coefficient matrix can be written in the form L(t) + R(t), where
dcost 2— 2sint
Henares —1-+ Zcost
and
1 Ee =
RG = 2s ;
ee Foe 0
D 2 —e-#
It is clear that |R(t)| → 0 as t → +∞, and it was shown in Example 3 of Section 7.3
that every solution of x′ = L(t)x approaches zero as t → +∞. By Theorem 7.8 so
does every solution of the system (7.4i). ||
The preceding theorem has a companion which gives conditions under which
the solutions of x’ = A(t)x are merely bounded as t— +o.
Theorem 7.9. Suppose the coefficient matrix A(t) of the equation x' = A(t)x can
be written in the form A(t) = L(t) + R(t), where
i) L and R are continuous ona <t< +o,
ii) L is either constant or periodic with minimal period T > 0,
00
are bounded over every interval tg < t < +, where fo > Ois arbitrary. To verify
this assertion, consider the associated system
56 | 0 il ||||3% 0 I ia 0) Ise
= ] —
The solutions of
where p_1 and p_2 are constant vectors. The norm of the last matrix in the system
(7.4m) is t^{−2}, and ∫_{t_0}^{+∞} t^{−2} dt = t_0^{−1} for any t_0 > 0. Thus all the hypotheses of
Theorem 7.9 are satisfied and the conclusion follows. ||
Theorems 7.8 and 7.9 do not generally apply to the equation x′ = (L(t) +
R(t))x when L is neither constant nor properly periodic. If Φ denotes a fundamental
matrix solution for x′ = L(t)x, then either of these conditions on L allows one to
estimate the norms of Φ(t) and Φ^{−1}(t) because Φ(t) has a Floquet representation.
Theorems similar to 7.8 and 7.9 can be proved when L is neither constant nor periodic
provided |Φ(t)| and |Φ^{−1}(t)| can be estimated in some other way.
Let us suppose, for example, that L is neither constant nor periodic but that Φ(t)
is known to be bounded on intervals of the form t_0 ≤ t < +∞. Since every element
of Φ^{−1}(t) is a sum of products of elements of Φ(t) divided by det Φ(t), one can con-
clude that Φ^{−1}(t) is bounded as t → +∞ provided det Φ(t) is bounded away from
zero. By Abel's formula,
det Φ(t) = det Φ(t_0) exp [∫_{t_0}^t Tr L(s) ds].
Thus, Φ^{−1}(t) is bounded on t_0 ≤ t < +∞ if Φ(t) is bounded on that interval and if
∫_{t_0}^t Tr L(s) ds is bounded below for t ≥ t_0.
Theorem 7.10. Suppose the coefficient matrix A(t) of the equation x' = A(t)x can
be written in the form A(t) = L(t) + R(t), where
i) L and R are continuous ona<t<+on,
t
by Lemma 7.8. The hypotheses (ii) and (iv) imply that (1) and 67 '(1) are bounded
as t—> +o. Thus, for t > fo,
This is the same inequality as (7.41) in Theorem 7.9. As there, it follows here that
+a
An example of Perron shows that some hypothesis of the type (ii) in Theorems
7.8, 7.9 and 7.10 is necessary if one is to formulate a similar theorem: The solutions of
aoape —ii @) x
| fe Reem onesie ili Beet
|
y
= | nee
exp[—o¢] sinIn¢
U . t>0
+ cosInr — 44]| y|’
= ea |p te (7.40)
y Bteee sar tme liliy
a
In order to apply Theorem 7.10, the coefficient matrix must be decomposed into a
sum L(t) + R(t), where
+o t
Fe ia
If we take ty = O, then
is ” ds
I. | (s)| S * 1 ae 52 2
Since
=o 40
me ree sb
ie
We may therefore conclude that all solutions of the system (7.40) are bounded for
0 < t < +o as soon as we know the same for the solutions of
Fbepoceclbh
But this system can be solved explicitly for x. In fact, x(t) = cye~”. Thus y’ =
317y + 3c,t?, and it follows that y(t) = coe~ — cy. Since the solution [x(2), y(2)]"
is bounded over 0 < t < +o for any choice of the constants c, and cy, one may
be assured that all the solutions of the system (7.40) are also bounded over the same
interval. ||
EXERCISE 7.4
4. Discuss the behavior of solutions as t > + for the equation x’ = A(x in each of
the following cases.
1 ib
= Se:
by) See
a AG te
—t —l —t —1
=i) sin In ¢ —sin?t sin ‘e
= : A(t) = ;
740) BE Int =) | ore —sint —1
implies that
t
a) x (tv) Pia
nt Neal my
+ 3x" " + 2x0
!
+ ak
tsin~ a === Oy
Oe pales |b
7. Let p be a positive-valued, continuously differentiable function.
a) Show that the equation x’’ + p(x = 0, 0 < t < +, takes the form
t 1
x! Bx = 0, en ond en x” + tx = 0,
x
Pe d Ca OV ees
+(43)r=0,
Py
w+
Gig Sayeoes
(GEM*S) Wo
Pd
8. How do solutions of
' 1 1
x = = a 58
1+ £ Neen;
Gi 1 1
y y
5 Me a a
behave as t> +0?
est oak (6
has two linearly independent solutions @; and @2 with the property that
Show that there is a fundamental solution set @1,...,» for the system x’ =
(ZL + R(t))x such that (¢.(Ne* — p.) > 0 as t> +o.
10. a) Show that the equation x” + (w? + e~)x = 0, t > 0, w > 0 has a fundamental
solution set Yi, Y2 with the property that
b) Conjecture, state, and prove a general theorem for the equation x’ + p(‘)x = 0
which yields the result of part (a) for p() = w? + e~.
has all solutions bounded as t > +2. Note that all solutions of x’ = L{t)x are bounded
over —© <¢ < +o and that f°” |R(D| dt = +. Relate this example to Theorem 7.9.
13. How do solutions of the system
= tf}
behave as t > +0? Relate this example to Theorem 7.9.
14. a) Describe the behavior of solutions as t + for the system
ial: alls:
b) State and prove a theorem for x’ = A(‘)x,a < t < +, of which the result in (a)
is a special case.
c) How do solutions of the system in (a) behave as t > 0+?
15. Let o denote the supremum of a nonempty set S which is bounded above. Show that
there is a sequence of points ¢, in S such that tf, ~ o ask— +.
or approaches zero) as t → +∞, then all its solutions are bounded (unbounded, or
approach zero) as t → +∞.
Proof. Let ψ_1 and ψ_2 be solutions of Eq. (7.5a). Then there is a solution φ of Eq.
(7.5b) such that ψ_1(t) = ψ_2(t) + φ(t). If ψ_2 is bounded as t → +∞, the same is
true of ψ_1 since φ(t) → 0 as t → +∞. The parenthetical assertions are proved
analogously. ||
Corollary 7.11. Suppose every solution of x′ = A(t)x approaches zero as t → +∞.
If x′ = A(t)x + b(t) has one solution ψ_1 which neither approaches zero nor is un-
bounded as t → +∞, then every other solution ψ_2 of x′ = A(t)x + b(t) behaves in
the same way. Moreover, lim_{t→+∞} (ψ_1(t) − ψ_2(t)) = 0.
Theorem 7.12. Suppose every solution of x′ = A(t)x either approaches zero or
becomes unbounded as t → +∞. If x′ = A(t)x + b(t) has a periodic solution φ,
then φ is its only periodic solution.
Proof. Suppose the equation had a second periodic solution ψ. Then φ − ψ is a
solution of its complementary equation. The difference of two periodic solutions
is bounded. Thus it follows from the hypotheses that lim_{t→+∞} [φ(t) − ψ(t)] = 0.
But this can happen only for ψ nonperiodic or ψ(t) = φ(t). ||
Example 1. Consider the scalar equation
x′ = −(1/t)x + b(t) (7.5d)
on the interval 0 < t < +∞. We shall illustrate Theorems 7.11 and 7.12 by
choosing b(t) in different ways. The variation-of-parameters formula assumes the
form (t_0 = 1)
x(t) = c/t + (1/t) ∫_1^t s b(s) ds.
Here Φ(t) = 1/t is a fundamental solution of the homogeneous equation
and
φ_p(t) = (1/t) ∫_1^t s b(s) ds
is a particular solution of Eq. (7.5d). Since Φ(t) → 0 as t → +∞, every other solu-
tion of the equation is unbounded, bounded, or approaches zero with φ_p(t) as
t → +∞. If, for example, b(s) = 1, then all solutions of Eq. (7.5d) are unbounded
as t → +∞ since φ_p(t) = t/2 − 1/(2t). If b(s) = 1/s, then all solutions are bounded
as t → +∞ since φ_p(t) = 1 − 1/t. If b(s) = 1/s², then all solutions approach
zero as t → +∞ since φ_p(t) = (ln t)/t. ||
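A quick check (mine, not from the text) of the last case, b(s) = 1/s²: the claimed particular solution φ_p(t) = (ln t)/t should satisfy x′ = −x/t + 1/t².

import numpy as np

t, h = 3.7, 1e-6
phi_p = lambda s: np.log(s) / s                       # claimed particular solution for b(s) = 1/s^2
deriv = (phi_p(t + h) - phi_p(t - h)) / (2 * h)       # numerical derivative
print(np.isclose(deriv, -phi_p(t) / t + 1 / t**2))    # True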
Problem. Exhibit a function 6 for which Eq. (7.5d) in Example 1 has a periodic
solution @. Find an explicit representation for any other solution y. Observe directly
that y could not be periodic.
The theorems and corollary above say nothing about the existence of bounded
or periodic solutions for x’ = A(t)x + b(t). Let us first examine it for bounded
solutions.
Theorem 7.13. Suppose the following hypotheses are satisfied:
i) A and b are continuous ona <t< +0;
ii) A is either constant or periodic with minimal period T > 0;
iii) b is bounded over some interval ty < t < +a;
iv) every solution of x’ = A(t)x approaches zero as t—> +o.
Then every solution of x’ = A(t)x + b(t) is bounded over the intervalty <<t< +o.
Proof. \f & is a fundamental matrix solution for the nonhomogeneous equation, then
the estimates (7.4d) and (7.4e) hold for &. Since any solution ¢ of x’ = A(t)x + b(d)
satisfies
We shall see in the next theorem that a trace condition can be used to replace
hypothesis (ii). ||
Theorem 7.14. Let the following hypothesis be satisfied:
1) A and b are continuous ona < t < +o;
iv) every solution of x’ = A(t)x is bounded over the interval tp < t < +m.
Then every solution of x’ = A(t)x + b(t) is bounded over ty < t < +m.
Proof. One takes norms in the equation (7.5e) to obtain
for some constants c; and cg. Note that hypothesis (ii) insures the boundedness of
je *(s)|- —||
Theorem 7.15. Let b be continuous and periodic with (minimal) period T > 0
and suppose that the constant matrix A has no eigenvalue of the form \ = 2nki/T,
where k is an integer. Then x’ = Ax + b(t) has a solution @ of period T. If A has
no pure imaginary eigenvalue, then @ is its only periodic solution.
Proof. Theorem 7.12 guarantees uniqueness when A has no pure imaginary eigen-
value.
The integer one is not an eigenvalue of e~“” A since no eigenvalue of A is of the
forth 2rki/T. Thus the matrix (e~47 — J) is nonsingular, and the algebraic equation
t
T 47
eee ee Ce iheA F— hs) ds + i e4A6—D ps) ds
0
¢
Sie ee ee et i e4"b(u) du
0
t
Corollary 7.15. The equation x' = Ax + b(t) will have at least one solution of
period T > Oif
7
i e “*h(s) ds = 0. (7.5)
0
Proof. If Eq. (7.5g) is satisfied, Eq. (7.5f) will have a solution for c even though
(e—47 — J) might be singular. ||
Theorem 7.15 remains true if A(t) is periodic with period T provided the eigenvalue
conditions are imposed on the characteristic exponents of x’ = A(t)x. The proof
will be left as an exercise.
EXERCISE 7.5
5 een eee et le i)
I - al 0 AGIA
and describe their behavior as t — + in the following cases:
1++t
4. Describe the behavior of solutions as t+ + for the system
x |’ 4 —1 inelle|
;
sint |S ¥ 00s(¢)
= 1
=
t
a
y cost —IlJlLy 0
5. Either prove that the following proposition is true or give a counterexample to show
that it is not true: Let A and b be continuous fora < t < +0 and suppose A is constant
or has minimal period T > 0. Assume that every solution of x’ = A(x approaches zero
ast— +. If b(t) > c as t + +>, then every solution of x’ = A()x + b(‘) approaches
a limit as t— +0,
6. Relate the example
to Theorem 7.14.
7. Either prove that the following proposition is true or give a counterexample to show
that it is not true: Let A and b be continuous for a < t < + and suppose that A is
constant or has minimal period T > 0. Assume that all solutions of x’ = A(x are bounded
for —%» <?t< +o. If there exists a constant ¢ such that for (b(s) — c) ds converges,
then every solution of x’ = A(Ax + b(t) is bounded as t > +”.
8. How are periods of periodic solutions for x’’ + w?x = sint, w > 0, related to the
common period of the coefficients?
9. Does the equation
x’ + (sinax’ + (1 + cos Ax = sin (20)
{= (a2, ai Z]+[3"
have a periodic solution? Is there more than one?
CHAPTER 8
8.1 PRELIMINARIES
eS,
Gm.
and (1.10c)
x″ + ε(x² − 1)x′ + x = 0
from Chapter | are special cases of the differential equation
x^{(n)} = f(x, x′, ..., x^{(n−1)}, t). (8.1a)
This equation is said to be of the nth order since the highest derivative involved is an
nth derivative, and it is called normal since x^{(n)} is given explicitly in terms of the
other symbols in the equation. Similar terminology is applied to systems of equations.
For example, the system (1.5c, d)
This is called a normal system of order n + m, and the individual equations are said
to be coupled.
Evidently, any finite number of normal ordinary differential equations of arbitrary
order can occur in a coupled system. In order to discuss the existence of solutions
efficiently, it is customary to observe that a judicious change of variables will reduce
224
Eq. (8.1a) or an arbitrary normal system of (finitely many) coupled equations to the
standard form
x_1′ = f_1(x_1, ..., x_n, t),
⋮ (8.1b)
x_n′ = f_n(x_1, ..., x_n, t).
Consider, for example, the equations (1.5c, d). If we define x_1 = θ, x_2 = θ′,
x_3 = r, x_4 = r′, then they assume the form
x_1′ = x_2,
x_2′ = −2x_4x_2/x_3,
x_3′ = x_4,
x_4′ = ⋯ .
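The same reduction is how one prepares an equation for a numerical integrator. As an illustration (my own sketch, applied to the van der Pol equation quoted earlier in this section), setting x_1 = x and x_2 = x′ gives the normal system x_1′ = x_2, x_2′ = −ε(x_1² − 1)x_2 − x_1:

import numpy as np
from scipy.integrate import solve_ivp

eps = 0.5
def f(t, x):                       # x1 = x, x2 = x'
    x1, x2 = x
    return [x2, -eps * (x1**2 - 1.0) * x2 - x1]

sol = solve_ivp(f, [0.0, 20.0], [1.0, 0.0], max_step=0.01)
print(sol.y[0, -1], sol.y[1, -1])  # the state (x, x') at t = 20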
The equation (8.1a) takes the form (8.1b) if one sets
The equations (8.1b) can be profitably studied from a geometrical point of view.
To implement the geometry, it is convenient to identify the space V_n of n-vectors
[x_1, ..., x_n]ᵀ with the number space consisting of n-tuples (x_1, ..., x_n) of
numbers. In a discussion involving only n-tuples of real numbers, the number space
will be denoted by ℛ^n. We use ℰ^n to stand for either ℛ^n
or 𝒞^n. Then the vector notation developed in Section 3.4 makes it possible to write
the system (8.1b) in the condensed form
x′ = f(x, t) (S)
for theoretical discussion. We shall, under suitable hypotheses, study the existence
of solutions to (S) and handle in this way not only Eq. (8.1a), but a multitude of
normal systems.
The number space ℰ² can be interpreted as the x_1x_2-plane in which analytic geometry
is traditionally done. Consider the sets of points
F = {(x_1, x_2): x_1² + x_2² ≤ 1} and G = {(x_1, x_2): x_1 > x_2}
in ℰ² (Figs. 8.1a and 8.1b). The student has an intuitive notion of what one means
by the boundary of F and boundary of G. These consist of the points on the circle
x_1² + x_2² = 1 and the points on the line x_2 = x_1, respectively. What is it, however,
Figure 8.2
Figure 8.3
Figure 8.4
Figure 8.5
EXERCISE 8.2
In Problems 1 through 10, there is given a set. Identify its boundary and tell whether it is
open, closed, neither open nor closed, or both open and closed.
1. {G15 %2): [x1 2x
Pie, (GRE
fore" lean, Hawhenevel XK = (Ny... 0,.%,) and Y = (X4,.. 5 Yn) are m-vectors.
Let g denote a vector function defined on a set S in &”.
We shall now say that g is continuous at Xo in S if |g(x) — g(xo)| 0 as
[x — Xo|— 0. In view of the inequality (8.3), this definition is equivalent to our
previous one. As is usual, we say that g is continuous on S if it is continuous at each
point of S.
A set in &” is called unbounded if it contains points located at arbitrarily large
distances from the origin; otherwise it is called bounded. To show that a set T in
&” is bounded, one displays a constant M > 0 such that |x| < M for all points x
in T. One can show that a set T is unbounded by displaying, for each positive integer
k, a point x; in T such that |x;| > k.
We say that a function g is bounded if its range is bounded, that is, if there exists
a constant M > 0 such that |g(x)| < M for all x in the domain of g. Similarly, a
sequence {x;} is called bounded if the range of its values x; is bounded. Again we
comment that this is equivalent to the definition in terms of components.
2. Consider next a vector function defined and bounded over an interval a < t < b.
There is a sequence of points ty, in [a, b) such that t, > bask — +. and \imy_,40 (ti)
exists. To prove this, first select a sequence of points #/, of [a, b) such that 17, > b
asm-—> +o. There is, of course, no assurance that the sequence {#(Z/,)} converges.
Since @ is bounded, however, the sequence {@(7/,)' is bounded and therefore has a
convergent subsequence {9(f,,,)}. Now define 7, = ¢,, and the construction is
complete.
3. Suppose that @ and y are two vector functions which are continuous on the closed
and bounded interval [a,b] and assume that $(a) = ¥(a) but o(b) ¥ W(b). Then
there is a rightmost point t = c in [a, b] such that $(t) = Y(t). To prove this, we make
use of the notion of supremum defined in Section 7.4. Let c denote the supremum of
the set {f: a<¢ < band $(t) = y(t} and select a sequence of points ft, in [a, b]
such that 4, > cask — +o. Certainly a < c < b. Since @ and y are continuous
and (7) = (tz), one can let k + + and evaluate limits by substitution to obtain
o(c) = ¥(c). There is no point ¢ in (c, b) such that ¢(t) = (ft) since c was defined
as a Supremum, i.e., least upper bound.
EXERCISE 8.3
1. Let g(x, y) = (x²y/(x² + y²), x² + y²) if (x, y) ≠ (0, 0) and g(0, 0) = (0, 0). Is
g continuous at (0, 0)?
2. Define a sequence {x;} of points by the recursion scheme
Frequently, it is necessary to speak of sets S in &" which, loosely speaking, are not
broken up into pieces. Continuous vector functions can be used to make the notion
precise. Let Cy denote a curve in &” parameterized by a continuous function @
defined on an interval [a, b]. If Cg does not intersect itself, that is, if @ is one-to-one,
then it is called a simple curve or arc. If o(a) = $(b) and $(t1) ¥ (2) for every
t, and fg in [a, 5), then Cg is called a simple closed curve or Jordan curve. These
notions are illustrated in Fig. 8.6. If @ is a differentiable function, then ¢’(f) is
called a tangent vector to Cg at the point (tf).
Are
Jordan curve
Figure 8.6
Figure 8.7
A set S in &” is called arcwise connected if any two points in S can be connected
by an arc which lies entirely in S. The set S = {(x_1, x_2): x_1² − x_2² ≥ 1} in Fig. 8.7
is not arcwise connected. We shall call an open, arcwise connected set a domain.
The union of a domain and any subset of its boundary is called a region. Thus each
of the sets F and G in Fig. 8.1 is a region, but only G is a domain.
Arcwise connectedness is an adequate concept for describing the connectivity of
open sets. It is not a general enough notion for sets which are not open however. To
see this, consider the set C = A ∪ B in ℛ², where A is the closed segment |y| ≤ 1
of the y-axis and B is the graph of y = sin (1/x), 0 < x ≤ 1 (Fig. 8.8).
This set is not arcwise connected, but intuitively one would certainly like to
consider it connected in some sense. In order to formulate an adequate definition of
connectedness, let us first say that a set S in ℰ^n is disconnected if it is possible to find
two closed sets F_1 and F_2 such that (i) S ⊂ F_1 ∪ F_2, (ii) both K_1 = F_1 ∩ S and
K_2 = F_2 ∩ S are not empty, and (iii) K_1 ∩ K_2 = ∅. One then calls S connected
if and only if it is not disconnected.
Figure 8.8
Connected sets 233
According to this definition, the set C in Fig. 8.8 is connected, but we shall not
dwell upon the subject long enough to prove this. It is shown in analysis that every
arcwise connected set is connected. The set C above is an example which shows that
there are connected sets which are not arcwise connected. It can be shown, however,
that every open connected set is arcwise connected.
In Chapter 9, we shall encounter a closed and bounded set which is assumed to
be disconnected. We wish to establish that such a set is the union of two closed and
bounded subsets which are separated by a positive distance. If K; and K> are non-
empty sets in &”, we define the distance from K, to K as the greatest lower bound
of the set of distances |x — y| where x is in K, and y is in Ky (Fig. 8.9).
Figure 8.9
Suppose, then, that S is a closed and bounded set in &” which is disconnected
and let F, and F 2 be closed sets as in the definition of disconnectedness. The corre-
sponding sets K; and Ke are nonempty, closed, and nonintersecting by definition.
It is easy to see that each is bounded and that S = K; UK». Let d denote the
distance from K, to Ko.
Then, by Problem 15 in Exercise 7.4, one can find points x; in K,; and y; in Ky
such that lim; 4. |xz — yx| = d. Since each x; is in the bounded set Ky, there is
a subsequence {x;z,} of {x;} which converges, to X9 say. The point xo must be in
K, since K, is closed. If we consider the corresponding subsequence {yx,} of the
sequence {y;}, there is no assurance that it converges, but a subsequence {y;,.} of
{yx,} will converge to a point yo in Ky. Discarding some of the terms in {x;,,}, if
necessary, we form the corresponding sequence {xj,} which, of course, converges
to Xo also. Then [xo — yo| = lim;4o |x%, — yh,| = d. Since Xo is in Ky, Yo is
in Ky and since K; and Kg do not intersect, d > 0. This result is summarized as
follows: A closed and bounded set which is disconnected is the union of two nonempty,
closed and bounded sets which are separated by a positive distance.
EXERCISE 8.4
1. Explain why the set C in Fig. 8.8 could not be arcwise connected.
2. Compute the distance between the following sets in ®*:
The independent variable for the function f of the differential equation x’ = f(x, f)
is an ordered pair (x, f) where x is in &” and ¢ isin @'. The set of all such ordered
pairs is called 8” X @'. In order to define open set and convergence, it is necessary
to specify neighborhoods for &” X ®'. A neighborhood of the point (xo, fo) will
be a product rectangle, that is, a set of the form
R= {(x,
1): |x — xo| < a, |t — tol < 5},
where a > 0 and b > O (Fig. 8.10). Such a product rectangle is not one of the
rectangular neighborhoods defined for 8” in Section 8.2: it is, instead, a cylinder of
“altitude” 2b with a “base” consisting of an ordinary rectangular neighborhood in
&” and its boundary.
With this notion of neighborhood, one defines boundary for sets in &” X @!
and thereby gives meaning to the other concepts discussed in Sections 8.2, 8.3, and
8.4 for &”. In the following material the word neighborhood means a product rectangle
in &” X @' and a rectangular neighborhood in 8”.
Throughout this chapter, we consider as given the equation
Kee CXe (S)
U1
Figure 8.10
A geometrical interpretation 235
A Le \
AL |e
G4 tao f 7
7
Figure 8.11
We give the set D on which (S) is to be studied the name phase-time space, and
the projection © of D onto the x hyperplane will be called phase space. This is an
extended use of the term. It is traditionally used in connection with autonomous
systems (Chapter 9.) Note that @ will be domain since 9 is one.
Example 1. Consider the equation
<=
1—x
x
eee)
Phase-time space © can be chosen in several ways for this equation. It can be taken,
for example, as a subset of the xf-plane which lies above the line x = ¢/2 or asa
subset which lies below the same line. In either case, @ is in the x-axis. In Fig. 8.12,
© has been chosen to be the region beneath the line t = 2x, and the graph of the
unique solution @ which satisfies @(0) = 1 is shown. Notice that the graph of ¢
terminates at the boundary of D. ||
Example 2. If x’ = (tan #)/(1 — x”), D may be taken as any one of the rectangular
regions defined by x? # 1, t X kr/2, k an integer. ||
236 The existence of solutions 8.5
Figure 8.12
Let @ be a continuous vector function defined on an interval Jy. The set of all
points (@(f), t) is called the graph of @. Notice that the graph of @ can be regarded
as a curve. In this context, a solution @ of (S) traces a differentiable curve which
lies in @, and the graph of @ corresponds to a differentiable curve which lies in 9.
Example 3. The function @ defined by ¢(t) = (Asin (t — fto)), A cos (t — fo),
which is a general solution of
Pl = ler oll
traces a curve Cg (namely a circle of radius A) in the xy-plane. The graph of ¢ is
a helix about the f-axis. The tangent vector to Cy at (2) is ¢’(t) = (A cos (t — fo),
—A sin (t — fo)). The tangent vector to the graph of ¢ at (¢(2), t) is (A cos (t — fo),
—A sin (t — to), 1). See Fig. 8.13. ||
(¢’(,; 1)
$'(t)
71
Figure 8.13
A geometrical interpretation 237
7)
Figure 8.14
EXERCISE 8.5
1. Consider the differential equation x’ = x. Sketch the associated vector field in the
xt-plane and show several integral curves.
2. Repeat Problem 1 for the equation x’ = x”.
If x’ = f(x), where f is continuous on a domain @ in &", the assignment x — f(x) is
said to establish a direction field on @. The solution paths of x’ = f(x) are integral curves
of this direction field.
3. Sketch the direction field for each of the following systems and depict several typical
solution paths. Then solve the equations and compare your geometrical and analytical
results.
aN) ae! SS Be ah DG a= SX.
Cave —i ny. i—) iy, G) ieee ex
Care =e ey, = ey.
238 The existence of solutions 8.6
4. What regions might one plausibly designate as the phase-time space for the system
x/ def (x; 7) if
a) x Se DS eae and
J x2 + y2 V/Vx2 + y2
b) , Ys / x
x = ——
V x2 + y2 J x2 + y2
Describe their direction fields near the point (0, 0) in R?.
6. Suppose the equation x’ = f(x, #) has a solution y such that Y(t) = constant. Describe
the graph of wy.
7. Suppose the equation x’ = f(x, 7) has a solution with minimal period T > 0. Describe
the graph of w.
8. Describe the graphs of solutions for x’ = y + t, y’ = —x. Depict a typical trajectory
in the xy-plane.
be posed at a point (Xo, fo) in a domain ©. The student has seen, in Sections 4.2 and
4.3, how the method of successive approximations may be used to show that (IVP)
has a unique solution on (a, w) if f(x, t) = A(t)x + b(t) and D is the set &” X (a, w).
An equivalent integral equation for (IVP) is, of course,
globally lipschitzian. Consider, DOMED) a scalar function f such that f and df/ax
are continuous on a domain 9 in ®' X R!. Let R= {(x, 1): |x — xd <a,
|t — to| < b} bea product rectangle in D. Then by the mean value theorem,
gl, = 20,1) + 8D
Og (Si, t
fil, ) = fx.) + Le
ZS
) vu 6i,
oy, 1) eee 4 2FMED
Ox,
Gy x), (8.64)
where 6; = (1 — s,)x + s;y. Now let M be a bound for the partial derivatives of f
on R. Then, by taking norms in Eq. (8.6d), we have
One should not suppose from the preceeding discussion that it is necessary fora
function to be differentiable in order that it satisfy a Lipschitz condition. For example,
the function f(x, 1) = (sin 1)|x| satisfies
a Cre) a CN SW ale ar
for all x sufficiently near xo; then we would have
L>XO Vv |x = Xo|
EXERCISE 8.6
he ete — COStA SINE?
iefind the maximum of |A(A| for 0 < t < 27.
; —sin ¢t cost
Show that the function f(x, )) = x/(1 + 1t?x?) satisfies a global Lipschitz condition.
Compute a Lipschitz constant for each of the following functions on the indicated region.
asf (xy) = sin xt ela, like:
b) f(x, 2) = Gixe, t+ xs, x2), |x| <a, |z < 6.
GC) fst) = te, x S100 =< Ha
d)- f(t) =1e-"x? sin Ax), x = 0. 1 x 1;
4. For what values of a > 0, does the function
0, Xen)
[h(t) — xo] < | |f(g(s), s)| ds < M-|t — to] < Me <a
0
for |t — to| < ¢. As a result, the successive approximations {@;} ;29 given by
wh
Figure S18
By Gronwall’s inequality,
and t
lt —to/?** M (Key* ; ;
WreO| < MK
G+ =KG+m J2° (8.78)
If M; denotes the expression on the right of inequality (8.7f), then
YM; = 2
jo K
— 0)
By Lemma 4.2, the series 5% 5¥,{t) converges to a limit $(1) + x, for 1 satisfying
lt — fol < €, and $(f) = lim; ..,.. Ad).
Since f is continuous, it is immediate that lim;_. f(¢A1). 1) = £(¢(1), 1) also.
Io show that @ satisfies the integral equation (I), then, it is enough to show that
£ t
Since 329 ((Ke)i*1/(j + 1!) converges, it follows from Lemma 4.2 that
to
Seoa=d | soa
j=0
eS a e (8.7h)
eM Koy. Ke aM ey
TK ENG any K WN!
Since (Ke)"/N! — 0 as N— +0, oy can be made to approximate arbitrarily well
on the interval |f — to| < € by taking WN sufficiently large. Moreover, inequality
(8.7h) provides a useful error estimate.
With regard to the role played by the Lipschitz condition, there are several
comments to be made. The first is that (IVP) has at least one solution when f is
merely continuous: the Lipschitz condition is not necessary merely to establish
existence. It is also not necessary that f satisfy a Lipschitz condition for solutions of
(IVP) to be unique. It is necessary, however, that f satisfy some condition in addition
to continuity. The problem
= 2/ ily 0) =
for example, has two distinct solutions x = 0 and x = t|t|. Note that the successive
approximations for this problem converge to the trivial solution x = 0.
Finally, let us observe that a solution to (IVP) may exist on a larger interval
than the interval |t — to| < €. Consider, for example, the problem
=l1+x°, %»=0 When f=0 (8.71)
with solution x = tant, —7/2 < t < 1/2. Here f(x, t) = 1+ x? does not vary
with ¢ and both f and df/dx are continuous on D = ®! X ®!. We determine the
The method of successive approximations 245
largest value of € for which the proof of Theorem 8.2 is valid. Let
EXERCISE 8.7
1. Compute several successive approximations for the solution to the initial value problem
x’ = —2tx, x(0) = 1. Compare this with the Maclaurin series for the solution. Repeat
the procedure for the solution to the initial value problem x’ = x?, x(0) = 1.
2. What is the largest interval of convergence as guaranteed by Theorem 8.2 for successive
approximations to the solution of x’ = x?, x(0) = 1?
3. Find all solutions to the initial value problem x’ = 2\/|x|, x(0) = 0. To which solution
do the successive approximations converge?
4. Let d:() = k?t(1 — O* for 0 < t < 1. Compute
t t
8. Let A bea2 X 2 matrix in Jordan form, let xo be a given 2-vector, and let \ be a param-
eter. Under what circumstances will the sequence {x;} defined by
Xx41 = Xo + AAX:, k= 03
converge to a limit x? Assuming convergence, what can you say about the limit vector x?
9. Modify the proofs of Theorems 8.1 and 8.2 so that they apply to the scalar problem
OE = CRM Dy B= By i = xo when ¢ = fo without its being converted into a system
(see Problem 7).
10. Is there, in any reasonable sense, a solution to the initial value problem
pe / Ea
eex ees pe Ee
uv PS rein yy= Oe
WV x2 + y2 Vx2
+ y2
Observe that the functions ¢ and wy defined by ¢(t) = tant, —17/2 < t < 1/2 and
y(t) = tant, —4 < t < 4 are, strictly speaking, different solutions to the initial
value problem (8.7i). This sort of situation is, in a sense, artificial and is rather
inconvenient. The difficulty is avoided by introducing the notion of a maximal
solution to an initial value problem. Roughly speaking, a maximal solution is a
solution with a most inclusive interval of definition. In this sense, ¢ is a maximal
solution to the initial value problem above.
In order to give a more precise definition, let us write (#, Jg) for a solution @
of the differential equation
Xoo-e (Xt) (S)
with interval of definition Jy. We assume that (x, 7) is in a domain D in &” X @!.
Now it is entirely possible that a function & with an interval of definition J; may
satisfy (S) for every ¢ in J; and that (£(2), t) is not in © for all ¢ in J. For example,
f might be continuous and lipschitzian on a domain which properly includes the
domain 9 which we fixed at the outset. Such functions, however, were excluded from
consideration in the definition of the solution given above. Thus, if (@, J) is called
a solution to (S), it is understood that (#(2), t) is to be in © for all ¢ in Ig, and we
call Ig the interval of existence of @ (relative to D).
Definition. Let f be defined on a domain D in &" X @&’ and let (Xo, to) be in ®.
Assume that (IVP) x’ = f(x, t), x = Xo when t = to has at least one solution. A
solution (@, Ig) of (IVP) will be called a maximal solution (relative to D) if and only ips
Theorem 8.2 are satisfied, then $(t) = V(t) for t in the intersection Ig Q Ty of the
intervals Ig and Ty.
Proof. The intersection Ig M Jy is not empty since it contains an open interval about
to. Let us consider only the portions Bs and ly of Ig and Iy to the right of to. The
interval on which Theorem 8.2 applies is to be denoted by [fo —€, fo te]. If
either Ts or i is contained in [fo, to + €], there is nothing to prove. Therefore
assume that both Te and ly properly contain the interval [¢o, fo + €] and suppose
for contradiction that there is a point tf; > fy) + € in igs aly at which ¢(t;) #
Y(t). (The situation is illustrated in Fig. 8.16 for &" = ®?.)
(u(t), 4)
Figure 8.16
Since @ and wy are continuous, there must then exist a Jast point fg > fp t+ €
such that #(t,) = W(te). If the common value is denoted by a, then (a, fg) is in
® and both ¢ and y are solutions to the initial value problem x’ = f(x, ft), x(t2) = a.
Thus (ft) and y(t) are equal for all t sufficiently near (and particularly to the right
of) tg. This contradicts the existence of fy since it was supposed to be the Jast point ¢
at which g(t) = y(t). Thus f,; cannot exist either. The situation is analogous for
ato |||
With this groundwork having been laid, we are in a position to state a theorem
which underlies all our subsequent discussion.
Theorem 8.3 (Fundamental Existence and Uniqueness Theorem). Let f be contin-
uous and locally lipschitzian on a domain D in &" X &. If (Xo, to) is in D, then the
initial value problem
Xe tx, -2): x= "Xo when = TG (IVP)
Proof. Let & denote the set of all solution functions to the problem (IVP). If y is
one such solution, denote its interval of existence by Jy. We construct a set Ig accord-
ing to the definition .
Ig = {t: t isin Jy for some y in $}.
Notice that Jg is an interval since each Jy is an interval containing fo. Now define a
function @ by the formula $(¢) = Y(#) if ris in Jy. Then ¢ is well-defined (Lemma 8.3),
and Jg is its interval of existence. The function ¢ is a solution to (IVP) by construc-
tion, and Jy contains Ly for every other solution y. Thus ¢ is a maximal solution.
The interval Jy must be an open interval, for if it had a first or a last point, the
solution @ could be extended (by the technique of Lemma 8.3) and @ would not be
a maximal solution. ||
A scalar version of the fundamental existence and uniqueness theorem is stated
as the following corollary.
Example. Consider the initial value problem x’ = 2tx?, x = x9 > O when t = fo.
If xo = 0, then the unique maximal solution is x = 0. If xo ¥ 0, then the unique
maximal solution x = ¢(f) satisfies
oa@e'Oi=21
for |t — fo| sufficiently small. Hence
for
integral curve of (f(x, A), 1). By Theorem 8.3, the corresponding solution can be
extended to a maximal solution @ with interval of existence 7~(xo, to) <t<
T*(Xo, to). One might plausibly guess that the graph of this solution @ extends
from boundary to boundary in phase-time space ®. We shall prove that this is
indeed the case in Theorem 8.4 below, but let us first summarize the geometrical
picture constructed.
Phase-time space D is to be regarded as composed of curves extending from
boundary to boundary. Each such curve is the graph of a maximal solution of
x’ = f(x, f). No curve can intersect itself or any other curve. If © is unbounded,
we shall think of it as having a boundary, portions of which are removed infinitely
far from the origin.
We shall subsequently write (Xo, fo, t) for that solution of (S) x’ = f(x, 4)
which satisfies the initial condition (Xo, fo, fo) = Xo and refer to ¢ as the maximal
solution function for (S) (relative to D). For fixed (Xo, to), the curve traced by
$(X0, fo, t), T(Xo, to) < t < T*(Xo, fo), in @, the projection ofD onto the x hyper-
plane, will be called the trajectory of ¢. The endpoints 7*(Xo, to) and 7 (Xo, fo)
of the interval of existence for (Xo, fo, 1) are called escape times. The example
above illustrates that (Xo, fo, 1) may have finite escape times even though f(x, f)
is continuous and locally lipschitzian for all (x, 1). Several standard results on the
estimation of escape times are given in the next section.
Finally, let us adopt the convention that the unqualified word so/ution is to be
interpreted in all later material as meaning maximal solution.
The next theorem is the basic result in the estimation of 7*(xo, fo) and 7 (Xo, fo)
for (Xo, tos t).
Theorem 8.4. The point (d(x 0; £0, 1), t) on the graph of ¢ approaches the boundary
of D or becomes unbounded as t > T* (Xo, to) and as t > T (Xo, to).
Proof. Write $(t) for (Xo, to, t) and suppose, for contradiction, that the point
(o(2), t) on the graph of @ neither becomes unbounded nor approaches the boundary
of D as t>7*t = 77(Xo, to). Then 7* is finite and |@(1)| is bounded for to <
t < 7*. Choose a sequence of times {t;} such that limy_,4. th = zt and such that
limp 4% ¢(,) =c exists. Then (c, 7*) is in 9, and there exists a unique solution
y of x’ = f(x, 1) on a nondegenerate interval |f — 7*| < € such that ¥(7*) = ¢.
We shall show that (1) = ¢(t) for ¢ in the interval J = [r+ — e,7*). This will
contradict the definition of @ as a maximal solution. Since
t
$(1) = $(te) + ii
k
£(4(s), s) ds
for t in J, it follows that
le) — WO) < rel |e(s) — Ys)| ds < Ke-||¢ — y], (8.9b)
where ||@ — ¥|| = supser |o(s) — ¥(s)|. If |o(t) — Y(D| 4 O for every ¢ in J,
then eK > 1, a contradiction. ||
VA sd Vo
Ci eee
where x” + y? > 0. For this example, we take @ as the xy-plane with the origin
deleted, a = —%, andw = +1. Thus 9 is that half of ®* lying below the plane
jel:
The estimation of escape times 251
Figure 8.17
We examine the solution (x, y) beginning at a point (Xo, yo, fo) with 0 <
x% + y§ < land fy < 1 and show that 7*(xo, yo, fo) = 1 by constructing a closed
and bounded set C which (x(t), y(t), 1) must leave as t > 7*(Xxo,Yo, fo). Fix an
integer m > 1 so that tp < 1 — 2~” and let C denote the cylinder defined by the
equations x? + y? < 1,t9 —-1<t< 1—27”. Thecurve traced by (x(2), y(2), t)
initiates in C (Fig. 8.18) and the point must leave C before it ceases to exist.
Figure 8.18
8.9
252 The existence of solutions
Since the point rises with increasing ¢, it passes either through the side of C or
through the top of C. The first situation is not possible. To see this, let r(t) Zz
[x2(2) + y2(2)]!/? denote the distance from the origin to the point (x(t), y(2)) in
the xy-plane. Then
Proof. Suppose T+ < w and let B denote the closed and bounded subset of ® which
contains $(t) for tp < t < 7*. For sufficiently small e€> 0, the closed and bounded
set C = B X [to — €, T* + €] is in D and has (Xo, fo) in its interior, but (¢(2), t)
does not leave C as t+ 7+. This contradicts Corollary 8.4. Hence T* = w. ||
Notice that Theorem 8.5 makes it possible to study the escape times of solutions
by studying their trajectories in phase space rather than studying their graphs in
phase-time space. When possible, this procedure is more efficient. As an illustration,
let us reconsider Example 1. Phase space @ is the xy-plane with the origin deleted.
The trajectory starting at (xo, yo) # (0,0) at time fp < 1 is a curve in @ which
initiates inside the unit disc. We showed above that the distance r(t) from the origin
to the point (x(2), y(¢)) decreases as t increases. Thus the solution curve is contained
in the disc, and 7* (xo, yo, to) = 1 by Theorem 8.5.
To apply Theorem 8.5 in a specific case, one must show that a solution curve of
interest is contained in some judiciously chosen, closed and bounded subset B of @.
The first problem encountered in applying the result is thus the choice of B. When
® = &", the set B is frequently taken in practice to be a closed ball or a closed cube
because the condition that (Xo, fo, f) remain in a set of either type is that
|6(Xo, to, £)| be bounded for tp < t < Tt (Xo, ft). In such cases, the estimation of
escape times is thus reduced from a geometrical problem to an analytical problem.
Example 2. Let us show that all solutions to initial value problems for the equation
Ke xk xe = sine
The estimation of escape times 253
exist on intervals of the form fg < t < +o. Let ¢ and y = ¢’ denote the solution
which satisfies the conditions (to) = x9 and ¥(to) = yo. Consider the function
V defined by
_¥O,#@
V(t) = 5) zi 4 ale ! .
8
Differentiation yields
Thus V(t) < V(to)e?““—. Now suppose for contradiction that 7+ < +0. Then
V(t) < Vitoje?*— for to < t < 7, and it follows that |$(t)| + |y(1)| is bounded
over the same interval. By Theorem 8.5, 7 = w = +o. This is a contradiction.
Note that if x = ¢(¢) is the displacement of a particle moving with one degree of
freedom, then V(f) is its total energy at time f. ||
A slight variation of Theorem 8.5, which is useful in applications, can be given
when D = &” X (a, w).
Theorem 8.6. Assume that x' = f(x,t) has a cylindrical phase-time space
D = &" X (a, w) and that there is a continuous function m defined on to < t < w
such that |$(Xo, to, 2)| < m(t) for to < t < T*(Ko, to). Then T*(Xo, to) = o.
Proof. Suppose for contradiction that r+ < w. Then 77 is finite and m(t) is bounded
for to < t <7. It follows that (xo, to, f) is bounded over the same interval. By
Theorem 8.5, 77 = w. ||
Example 3. Consider the equation
6” + k(sin t)cos 6 = 0, (8.9e)
and let (0(t), 6’(t)) denote a solution initiating at time fo. Then |6’(‘)| < k for
ig SUK Tt (6(to), 6’(to), toe On the same interval,
Thus
y= ¢,
& —k(sin t) cos @.
Here f,(9, ¢, 1) = ¢ and fo(4, ¢, t) = —k(sin zt) cos 6. All partial derivatives of f;
andf» are uniformly bounded; consequently all solutions exist for —0 <t< +o. ||
The latter remarks do not, of course, apply to equations such as x’ = a+ x?
with polynomial nonlinearities. If an equation exhibits this type of nonlinearity,
then, in general, some solutions will exist for all time and some will not.
To show that 7* = +, one first proves that y(t) > 0 for 0 <t< rt.
Suppose, for contradiction, that there is a time fo in (0,7*) for which W/’"(to) = 0.
Then, if A = Y(to) and B = y’(fto), it is easy to verify that both ¢(f) = A +
B(t — to) and y(t) satisfy the initial value problem
Ke SS sec — 0, X(t) = A, x'(to) = B, x’’(to) = (0h
Consequently, ¢(t) = y(t) for 0 < t < r*. This is a contradiction since $’(0) = 1
and y’’(0) = 0.
Having established that y/’(t) > 0 for 0 < t < 7*, one concludes by integrating
that y’ and wy are positive and monotone increasing over the same interval. Using
the integrating factor exp ie ¥(s)ds on the equation y’’’(t) + y(Hy’’(t) = 0, one
can write
t
v(t) = exp Ei ¥(s) as| (8.9j)
Thus y” is decreasing and
ONTOS GNM ARae (8.9k)
When inequality (8.9k) is integrated twice, the inequalities
Ouea)eat OT et ee eee
result. By Theorem 8.6, 7 = +a. To show that lim;.,. ¥/(f) exists, it is only
necessary to show that y’ is bounded above since it is known to be monotone in-
creasing. Let c = y(1) and note that Y(t) > c for t > 1. Inequality (8.9j) implies
that
Ue) 66g eo
Integrating, we find that
EXERCISE 8.9
1. Consider the system
p= yx, yo=xty?
which satisfies u(0) > 0, v(0) > 0 cannot exist on an interval of the form 0 < f < +.
8.10
256 The existence of solutions
3. Use Theorem 8.6 and the Gronwall inequality (Problem 5, Exercise 7.4) to show that
every solution of the linear equation x’ = A(Z)x exists over each interval a < t < w of
continuity for A.
4. Verify that inequality (8.9f) implies inequality (8.9g). [Hint Argue as in the proof of
Gronwall’s inequality. ]
5. Suppose that f has real-valued components and that these components have continuous
first partial derivatives D;f; at every point of R” GR! (Gj = 1,...,n). Show that every
solution of x’ = f(x, A) is defined on an interval of the form fo < t < + © if there exists
a constant M such that |D;fi(x, )| < M for all (x, 4) and alli, j = 1,...,7.
6. Is the assertion of Problem 5 true if M is a continuous, positive function of ¢ for
—~a <t< +o?
7. Show that all solutions of the following equations exist on intervals of the form to <
IES eC,
a) x’ = yew 4+ 1, y = 1 xe b) x = V1 + y2, yy = V1 + x2.
a) x) = 4, _y’ = —— and
Vx2 + y2 Vx2
+ y2
b) x’ = = Vy
Vx2 + y2 Vx2
+ y2
near the singular point (x, y) = (0,0). Relate your discussion to Theorems 8.5 and 8.6.
9. Let f be continuous and lipschitzian on &" X ®!. Suppose there exist continuous
functions a and b such that |f(x, | < a(d|x| + b(). What hypotheses on a and 6 will
guarantee that all solutions of x’ = f(x, f) exist on intervals of the form fo < t < +a?
10. Let f be continuous and lipschitzian on ®” X G!. Suppose there exists a real valued,
continuously differentiable function W, defined on ®”, such that W(x) — + as |x| >
+co and grad W(x): f(x, 4) < 0 for all (x, 4. Show that every solution of x’ = f(x, A
exists on an interval of the form to < t < +o.
11. Examine the intervals of existence of solutions for the equation
x = f(x)4); (S)
Q = U I(Xo, to),
(xo,to)eD
I(Xo; to) = {(Ko, to, t): T (Ko, to) < t < T*(Xo, fo}.
Several questions of interest arise: (1) What are the properties of 2? (2) Is @ con-
tinuous on Q? (3) Does ¢ have continuous partial derivatives on Q with respect to
one or more of its variables? These questions are discussed below, but proofs are
omitted as they are better left to graduate courses in the subject.
t t
—V 2+ 6
Fig. 8.19 The exterior of the cone on the left is the domain of definition of $(xo, to, 1) =
[Link] b Ants #2. The vertical line segment indicates the interval of existence 7~ (x0, fo) <
t < 7+ (xo, fo). Phase-time space for the equation x’ = —f/x is the shaded half-plane
on the right. Notice the (¢(xo, to, 1), ))—for fixed (xo, to)—approaches the boundary of
phase-time space both as t > rt andast—7~.
258 The existence of solutions 8.10
0 Xo d Xo
ve (Xo, to, t) ca A/ Bie Pas 2 . (Xo, to, t) = A/a ee ae
0 rte ets 0 a eee
exist and are continuous on®. ||
In Example 1, (xo, fo, t) is explicitly known, and the properties of ¢ can be imme-
diately determined. The four questions asked at the beginning of this section can in
fact be answered without an explicit representation of (Xo, fo, ft).
g
[nn Ne
($(4o, to,t),1) »
Figure 8.20
Dependence on initial values 259
Figure 8.20 has been drawn in such a way that ($(xo, fo, £), t) does not appear
to be close to ($(Xo, Zo, 2), 7) since the purpose of the figure is to geometrically
illustrate a consequence of 2’s openness. Actually, these two points can be brought
as near each other as desired by merely bringing (Xo, fo, t) sufficiently near
(Xo, 0, 2). This is the geometrical interpretation of the continuity of @ which may be
phrased analytically as follows: given € > 0 and any point (Xo, fo, 7) in Q, there is
a 6 > O such that
|o(Xo, to, 1) — (Xo, fo, 2| < €
provided
Note that the continuity of@ does not guarantee that |@(Xo, fo, 1) — (Xo, to, D)| < €
for arbitrarily large tf no matter how small |xo — Xo| + |fo — Zo| might be.
The equation x’ = x/(1 — #t) with —o < x < +0 and —ow < t < +1 pro-
vides an illustration of the remarks above. Its maximal solution function is x =
(Xo, to, t) = xXo(1 — fo)(1 — 1). Phase-time space and graphs of x = (Xo, fo, t)
for several values of (xo, fo) are illustrated in Figure 8.21.
(£0, fo)
Fig. 8.21 If the point (xo, fo) is sufficiently near the point (Xo, to) and if ¢ is sufficiently
near 7, then ((xo, fo, 1), f) is “near” (¢(Xo, Zo, 2), 2). Notice that |(xo, to, ) — (0, 0, 1}
is unbounded as t > + no matter how near (xo, fo) is to (0, 0).
has a solution.
Consider the solution (7, f) to the initial value problem
We write the differential equation in the form y’’’(7, 1)/W’’(, t) = —wW(, t) and
integrate to find that
AGEN) oe vf ET RS vf eat ds
0 0
=| rem aie ole
The existence of a solution to the problem (8.10a) then follows from the intermediate
value theorem in the manner indicated above. ||
Theorem 8.8. Consider the system x' = f(x, t), where f has continuous rth partial
derivatives with respect to its first n arguments on a domain D in ®” XK @!. The
solution function x = (Xo, to, t) may be differentiated r times, each differentia-
tion being with respect to any one of the n + 1 variables x, to, provided at most one
differentiation is with respect to ty. The partial derivatives, of order up to r inclusive, so
obtained are continuous on Q and each of these has a continuous first partial derivative
with respect to t.
This theorem is proved by verifying its truth for the case r = 1 and giving an
inductive argument. In connection with the proof, a useful representation for the
first partial derivatives of is shown to be valid. To state the result, we shall need
a definition.
Dependence on initial values 261
where the jth component of y(to) is one and the other components are zero. Further,
Y = Dn+16(Xo, to, f) is the unique solution to the initial value problem
0
= 8X9) fild(Xo, to, t) t)
Now define g(t) = D,¢(0, 1, 0,0, 1). By Theorem 8.9, g is the unique solution of
Eq. (8.10c) which satisfies the initial condition g(0) = [1, 0, O]’. This initial value
problem was solved in Problem 10 of Exercise 4.6. The first; component gi(t) of
g(t) is :
er b=7 ) onl + vB
git) = —3e' 4 € = 4wey ae
But g,(t) = D,yv(0, 1, 0,0, 4). Thus the computation is complete. ||
EXERCISE 8.10
1. Compute the solution x = (xo, fo, ) for the equation x’ = t/x, x > 0. Describe
geometrically the domain 2 of ¢. Compute
x = x — yp ox In V 72 eye,
y=x+y— ylnvx2 + y?
which satisfies (xo, 0, 0,0) = (xo, 0). Use Theorem
8.7 to deduce that there is a value
Xo such that the trajectory of (x, y) = (Xo, 0, 0, 4) is a simple, closed curve in the xy-plane.
(It will then follow from Theorem 9.5 in the next chapter that (x, y) = (Xo, 0, 0, 2) is
periodic.)
5. Let (u, fo, ) denote the solution of x’ = f(x, 4) which satisfies @(u, to, 0) = u, where
f is continuous and lipschitzian on a domain ® X (a, +0). Let xo in @ and to > @ be
given. Explain analytically and geometrically what is meant by the following statement:
“ is continuous in u at xo uniformly with respect to ¢ in the interval [to, +00),”
CHAPTER 9
AUTONOMOUS SYSTEMS
9.1 PRELIMINARIES
Xe Xe) IN = 2, Xie — DT
x’ = x? b] a Se Xe eX
are autonomous.
We shall denote a general autonomous system, which we assume as given through-
out the chapter, with the symbols
x’ = f(x), (A)
and we shall assume throughout that f is continuous and lipschitzian on a given
domain @ in ®”. In the language of Section 8.5, @ is the phase space for the system
(A). Phase-time space D is a cylinder of the form ® X (—«#, +). The phase
space & is frequently called the Poincaré phase plane, when @ = ®?.
Nonautonomous differential equations occur in the physical world when a system
is subjected to external time-dependent influences. For instance, the angular dis-
placement 6 of the permanent magnet in an electric clock has as one mathematical
model the nonautonomous differential equation 6” + k(sin @)- sin (1207t) = 0, k
constant. The angular displacement 6 of the simple, frictionless pendulum, however,
263
264 Autonomous systems 9.2
One normally studies the solution paths (orbits) of autonomous systems in phase
space rather than the graphs of solutions in phase-time space. To illustrate the dif-
ference in approach, consider the system x’ = y, y’ = —x of Example 3, Section 8.5.
The graph of (0, A, to, t) = (A sin (t — to), A cos (t — fo)) is a helix of radius A
about the f-axis. There is a different helix for each numerical value of fo. These are
illustrated in Fig. 9.1.
Figure 9.1
The projection of each helix onto the xy-plane is the circle x? + y? = A4?.
This circle is then a solution path corresponding to infinitely many solutions of the
differential equations. It is a geometric representation for each solution given by
$(0, A; 0,2) = (CA sini — fo), Acos (f — to)), no matter what the value of to
might be. The number fp is called a phase shift. Thus the circle x? + y? = A? is
the path of every solution which can be obtained from ¢(0, A, 0, 1) by introducing
a phase shift.
Let us consider two distinct solutions, say those given by $(0,A, fo, t) and
$(0, A, t1, t), where 4; > fo. We think of the path x? + y? = A? as being traced
out in phase space by each of two moving particles: p, with coordinates (0, A, fo, 2),
and q with coordinates (0, A, ¢,,1f). The particle p is at the point (0, 4) when
t = to and the particle q is at the point (0, A) when t = ft, > fo. See Fig. 9.2. A
moment’s reflection should convince the student that (0, A, t,, t+ ¢, — fo) =
The shift formulas 265
Fig. 9.2 The point p is moving ahead of the point q by t; — fo units in time.
(0, A, fo, £1), which is just a way of saying that q is moving along the same path as
p but is 4; — fo seconds behind p. This phenomenon occurs for autonomous systems
in general.
Theorem 9.1 (The First Shift Formula). Let (xo, to, t) denote the solution of the
autonomous system x’ = f(x) which satisfies (Xo, to, to) = Xo. For any real num-
ber ty, (Xo, t1, t + ty — to) = o(Xo, to, t) as long as both solutions are defined.
Equivalently
$(Xo0, fo, 1) = (Xo, to + a, t+ @)
for all real numbers a and t such that both solutions exist.
Proof. Let &(t) = (Xo, t1,t + t1 — to) and let n(t) = O(Xo, to, 2). Then é’(f) =
f((t)) and y/(t) = f(n(4). But
E(to) = (Xo, f1, 11) = Xo = (Xo, fo, fo) = n(to).
Since the solutions of initial value problems are unique, &(t) = y(t). The equivalent
formula is obtained by setting a = ft; — fo. ||
Corollary [Link]. Initial value problems for the system x' = f(x) may always
be posed at time t = 0. In fact, $(Xo, to, t) = (Xo, 0, t — fo).
Proof. The solution vector &(t) = $(Xo, fo, t) for the initial value problem x’ =
f(x), X = Xo when ¢ = fo is related to the solution vector n(t) = (Xo, 9, 2) for
the problem x’ = f(x), x = Xo when f = fo by the formula &(¢) = n(t — fo). If
n(t) is known, then so is &(t). ||
Corollary 9.1b. If C is the path of the solution vector (Xo, 0, t), then it is also
the path of every solution vector $(Xo, to, t) produced by introducing a phase shift to.
Proof. (Xo, 9, t) = (Xo, to, t+ to). ||
The formula
(Xo, to, 1) = (Xo, fo + a, f+ @) (9.2a)
Figure 9.3
Then at time ¢; = to + a, the particle is at the point x; = (Xo, fo, t;). If the
motion of the particle continues for an additional time interval of length 8, then at
time ¢ = t} + B = to + a+ 8B, the location x of p can be described in two ways:
xX = (Xj, 7%), 2) and x = (Xo, fo, t). But x; = (Xo, fo, 1). Thus
Proof. Let &(t)= $(Xo, to, f) and let n(t) = o(¢(Xo, fo, t1), t1, 1). Then £’(1) =
f(E(t)) and n/(t) = f(n(t)). But £(t1) = $(Xo, to, t1) = 6( (Ko fo, t1)s 1, ii) =
n(t1) by the definition of @. By the uniqueness of solutions to initial value problems,
&(t) = m(t) for all ¢ for which they are defined. ||
Note that Theorem 9.2 is true also for nonautonomous equations x’ = f(x, 2).
Phase portraits 267
EXERCISE 9.2
1. Solve the nonautonomous system x’ = y, y) = —x ++. Depict in phase-time space
the graph of the solution @ which satisfies @(0) = (0,1) and depict in phase space the
trajectory of @. Do the same for the solution W which satisfies ¥(0) = (1, 1) and compare
your results.
2. Sketch two trajectories of solutions to the system x’ = x, y’ = 2tx + y which cross
each other in the xy-plane. Could two distinct paths of solutions for the system x’ = x,
Va ydormthis:)
3. Interpret analytically and geometrically the first phase shift formula by explicitly solving
the autonomous system in Problem 2.
4. Let y denote the solution to the initial value problem x’ = x?, x = 1 when ¢ = 1.
Show that yw is the solution to some initial value problem x’ = x?, x = xo when t = 0
which is posed for initial time zero. What is the interval of existence of ¥? Relate the
example to Corollary 9.1a.
5. Let y denote the solution to the initial value problem x’ = 2¢x?, x = 1 when ¢ = 1.
Show that y is a solution to an initial value problem x’ = 21x”, x = xo when f = 0 even
though Corollary 9.1a does not apply.
6. Explicitly solve x’ = x? for o(xo, fo, 2) and $(x1, f1, 2). Then verify the shift formulas
(9.2a) and (9.2b) algebraically.
7. Explicitly compute all solutions of
which have the unit circle in the xy-plane as their path. Verify the shift formulas (9.2a) and
(9.2b) algebraically for these solutions.
8. Consider two identical linear oscillators of the type depicted in Fig. 1.3. One block is
set in motion by giving it unit initial velocity with the spring unstressed. The other block is
set in motion by stretching the spring one unit and releasing the block with no zero velocity.
What information about the future relative motion of the blocks is provided by the shift
formulas ?
9. Let x’ = f(x, ‘) have the property that there exists an a > 0 such that (xo, fo, 4)
satisfies (xo, to, 1) = (xo, to + a, t + a) for all (xo, to). What general property must f
have?
10. Suppose the identity (xo, to, ) = $(xo, to +a,t-+a) holds for every solution
vector (xo, fo, 2) of x’ = f(x, 4) and every a > 0. What general property must f have?
(Recall that f is assumed to be continuous.)
When one is confronted with the need to study a physical situation which is described
by an autonomous differential equation
Tix) (A)
it is frequently profitable to study the solution paths of the system qualitatively. This
268 Autonomous systems 9.3
Figure 9.4
is not the same procedure as solving the differential equations, for the emphasis is
upon geometry, and time dependence is a somewhat subordinate consideration.
A phase portrait for the differential equation (A) is a complete, qualitative
(perhaps pictorial) description of a// its solution paths. Phase portraits are most
easily given for the case n = 2 since they can be readily illustrated with drawings.
In higher dimensions, verbal descriptions are used.
Let us, for example, suppose that we wish to predict all possible motions of the
linear spring-magnet system in Fig. 9.4. We assume that the units are such that the
differential equation for the displacement x of the first magnet from its position when
the spring is unstretched is x” + x — (x — 2)? = 0.
It would be helpful if the student would stop at this point and conjecture on the
basis of his intuition what possible motions the left-hand magnet could undergo
were it displaced from equilibrium and given, perhaps, a nonzero initial velocity.
The first stop in an analytical study of the situation is to find the positions of
equilibrium. These correspond to constant solutions x = c. The numbers c must
satisfy the equation c — (c — 2)? = 0, that is,
¢c=1,.¢=6—V5)2 ~038, ¢©=6 + 5)/2= 267
We convert the equation into a system by setting
xX =y, y= —x + &— 2). (9.3a)
Phase space is chosen to be the region of the xy-plane to the left of the line x = 2.
The coordinates of the critical points (paths of the constant solutions) which are in
phase space are ((3 — /5)/2, 0) and (1, 0).
We obtain preliminary information about the solution paths by the method of
Section 2.8. The equation x’’x’ + xx’ — (x — 2)~?x’ = 0 is integrated, and we
find that each solution path must lie in the locus of one of the equations
ytx?4+2/(x —2=k,
where k is a constant and y = x’. The plotting of these equations for various values
of the constants k is basically a problem in the calculus. The labor involved is eased
somewhat by arranging the work in a definite order however.
i) Plot the critical points p = ((3 — »/5)/2,0) and q = (1,0).
li) Observe that the derivative dy/dx is given by the formula
3-75 3475
sp pepe tases al 2 Nx 2 )
ke SAG y =a ees)
Phase portraits 269
Figure 9.5
except at critical points. Notice that the noncritical solution paths have vertical
tangents at places where they cross the x-axis. They have horizontal tangents at
places where they cross the lines x = (3 — +/5)/2 and x = 1.
iii) Find k for each critical point and plot the corresponding locus. At the
point (1, 0), for example, k = —1 and the equation ofinterest is y?> = —x(x — 1)?/
(x — 2). The graph of this equation is the heavy curve in Fig. 9.5. It is a tedious,
but straightforward, procedure to show that k = (5 — \/5)/2 at the point x =
(3 — /5)/2, y= 0. The graph of the corresponding equation y® + x? + 2/
(x — 2) = k contains only that point.
With this information, the preliminary sketch of Fig. 9.5 can be given.
If one thinks of each curve as being traced out parametrically by a point with co-
ordinates (x(t), y(t)) where x and y satisfy Eqs. (9.3a), then a time sense is assigned
to each curve and we have indicated it with arrowheads.
In order to use the depicted information to discuss the motion of the magnet, it
is necessary to make a few additional remarks about autonomous systems in general.
Theorem 9.3. No solution path of the equation x' = f(x) can intersect a different
solution path.
Autonomous systems 9.3
270
Proof. Suppose two paths have a common point Xo. Let (Xo, 0, 2) and ¥(Xo, 0, 4)
denote solutions having these paths, where ¥(xo, 0,0) = Xo and (Xo, 0,0) = Xo.
Since $(Xo, 0,0) = ¥(Xo, 0, 0), it follows from the uniqueness of solutions to initial
value problems that (Xo, 0, t) = ¥(Xo, 0, 1). Hence the paths coincide. ||
Theorem 9.4. Let $(Xo, 0, t) denote the solution of the equation x’ = f(x) which
satisfies @(Xo, 0,0) = Xo. If (xo, 0, 4) approaches a point a in @ as t— T* (Xo),
then T*(X9) = +x and a is a critical point.
Proof. It is an immediate consequence of Theorem 8.2 that T'(Xo) = +o. To
show that a is a critical point, one assumes for contradiction that f(a) # 0. Since
¢ is a solution of x’ = f(x)
Since f(a) ~ 0 at least one component of the f(@(xo, 0, s)) does not approach
zero aS s—> +o. Thus, at least one component of the improper integral
fot* £(@(Xo, 0, s)) ds diverges as t— +a. This gives a contradiction when one
takes the limit on each side of Eq. (9.3b). ||
It follows from Proposition 9.3 that the heavy curve in Fig. 9.5 contains at least
four solution paths since q is a solution path the deletion of which would cut the
curve into three parts. Each of these three parts (A, B, C) is a path in its own right
by Proposition 9.3 since it contains no critical point. For the same reason, every
other curve in the figure is precisely one solution path.
The physical motion of the magnet can now be almost completely described.
There are two positions of equilibrium x = (3 — /5)/2 and x = 1. If the magnet
is very carefully displaced until x = 1, it will remain at rest. If it is perturbed slightly
to the right, it will be attracted to the fixed magnet and its velocity y = x’ becomes
(ideally) infinite as x — 2. If it is perturbed slightly to the left, it will move in that
direction until the spring is unstressed, then it will return. If the magnet is very
carefully displaced until x = (3 — »/5)/2, it will again remain at rest. Now if it
is slightly perturbed, it will undergo an oscillatory motion corresponding to the
simple, closed paths about p in Fig. 9.5. One is tempted to assert that the corre-
sponding solution functions are periodic. This is indeed true, but the statement
requires proof.
Theorem 9.5. Let C denote a path of the autonomous system x' = f(x). The
following statements are equivalent:
i) C intersects itself in at least one point.
ii) C is the path of a periodic solution.
ili) C is a simple, closed (Jordan) curve.
Proof. Suppose statement (i) is true. Let C intersect itself at the point xo in phase
space. Then we may regard C as the path of the solution @ which satisfies
Phase portraits 271
$(Xo0, 0,0) = xo. Let T > 0 be the first time at which C intersects itself, that is,
the first positive time at which @(xo, 0,7) = xo. We shall show that
where the last equality stems from the first shift formula with a = —T.
Now suppose that 7*(xo) < +o, then as t—7*(xo) — T, $(Xo0,0,t+T)
becomes unbounded but (xo, 0, f) cannot become unbounded. This contradicts
Eq. (9.3c), and it follows that (xo, 0,t+ 7) = $(Xo, 0,4) for O< t< +o.
The argument for —«# < t < 0 is analogous.
Now assume that (Xo, 0, f) is periodic with (minimal) period T > 0. Then
C is certainly a closed curve. We must show that it is simple, that is, we must show
(Xo, 0, a2) = (Xo, 0, b) with O < a < b < T implies a = b. Suppose a < b and
define a = b — a. Writing the numerals one and two, respectively, above the appro-
priate equality symbols to indicate application of the first and second shift formulas,
compute
$(Xo, 0, + a) 2 $( (Xo, to, ), b, t+ a) = 6((Xo, fo, a), 6, t+ a)
||— $($(Xo, fo, a), 6 — a, t) = $(4(Ko, fo, @), a, t)
2 $(Xo, 0, 2).
This means that a is a period for (Xo, 0, ft). Since a < T, the minimality of T is
contradicted, and the assertion is verified.
If statement (iii) is true, statement (i) is clearly true. ||
The student should compare this theorem with Theorem 9.3. Broadly speaking,
one says that no solution path can intersect another, and should a path intersect
itself, then it is a simple closed curve. This is not true for nonautonomous systems.
Because of Theorem 9.5 and applications to celestial mechanics, a solution path which
is a Jordan curve is sometimes called a periodic orbit.
EXERCISE 9.3
Figure 9.6
the form
k
Ae 4 — =
of: ib = 7 us
d°u 2
do2 + USK 3
where k is a constant.
b) Discuss the physical motion of the satellite by constructing an appropriate phase
portrait.
4. Construct a phase portrait for the equation mx’’ + kx’ + (mg/f)sin@ = 0 of the
simple, damped pendulum.
5. Consider the oscillator of Problem 2 under the assumptions that (1) there is friction
which is proportional to the velocity of the block and (2) that limjzi540 Jo @(@) dt <--o.
Describe the nonperiodic motions, if any, of the block.
Divergence and Bendixson’s negative criterion 273
Figure 9.7
If one thinks of the triangle S in the picture as an oil slick, his intuition should
tell him that it will be carried along and distorted by the current. At a later time, it
will have become the triangle S(z). In more formal language, one says that the set S$
has been mapped onto the set S(t) by the flow mapping.
Working with the flow mapping can be a bit tricky since T*(xo) can usually be
expected to vary with x9. For example, the autonomous system x’ = x” has the
real line —w < x < +o for its phase space and has ¢(xo, 0, t) = xo/(1 — Xof).
274 Autonomous systems 9.4
—f——_+——_+—___+—+> «
0 ye 2 3 4
Lash
Figure 9.8
For t = 1, the flow mapping carries the interval [1,2] onto the interval [%, 4]
(Fig. 9.8). For ¢ = 3, the flow mapping is not even defined for all xo in [1, 2].
The difficulty which occurs in the example occurs because the interval [1, 2]
is too long an interval for the mapping x9 — xo/(1 — 3xo/4) to be meaningful
everywhere on it.
Consider now two distinct points p and q on the path C of a solution x =
$(p, 0, 2) of (A) as shown in Fig. 9.9. Suppose that f has continuous first partial
derivatives on @. Then it is known that if one chooses two sufficiently small x — 1
dimensional neighborhoods P and Q (open in ®”—') with p and q as centers and
if C is not tangent to either P or Q at p or q, respectively, then the flow mapping
from P to Q is defined at every point Xo of P. In fact, one may choose Q so small
that P is mapped onto Q. The mapping is one-to-one, continuous, and has a con-
tinuous inverse. Geometrically, one thinks of a fibrous tube joining the points ofP
to the points of Q; it is called an open path tube.
Figure 9.9
where
where D; = 0/du; and Dy = 0/dug. Theorem 8.9 provides the needed information
since it guarantees that Y,; and yp» are solutions of the linear variational system
y’ = J,(t)y, where
Oe es X2,1) Dofi(%1,
x2, 4
Difax 1X ontews ova Xa, De ageae,
Further 1 (uj, v2, 0) = (1,0) and Po(uy, ve, 0) = (0,1). Thus, if Y = [Wy, Wo] is
the corresponding matrix solution of y = J;(t)y, then Abel’s formula (4.5b) implies
that
t t
det W(t) = det V(O) exp || Tr Je(s) as= exp || Tr J¢(s) as:
0 0
But Tr J-(s) = Dif 1(%1, X2, 8) + Dofo(x1,
X2, 5) = div f(x, x2, 5), and one may
therefore write:
Example 1. Suppose S is the triangle in Fig. 9.7. The divergence of the coefficient
function is (0/dx)(x) + (0/dy)(—2y) = —1. The area of the triangle S(t) equals
the value of the integral
Theorem 9.6. Let the function f = (f1, f2) in the autonomous system (9.4a)
have continuous first partial derivatives on a domain & in ®? and let S be a closed
and bounded region in ®. Suppose that $(Xo, 0, t) exists on the interval0 < t < +a
for each Xo in S and denote the image of S under the flow mapping Xo — $(Xo, 9, 1)
276 Autonomous systems 9.4
This is a contradiction. The analogous argument holds with t < 0 if div f(x) < 0
on ®. ||
For a second order equation x’ = f(x, x’, t) the divergence condition above is
that of (x, x',.2)/0x'’ = Oxon C;
Example 2. Consider the van der Pol equation x’ + e(x? — 1)x’ + x = 0, where
€ > Oisa parameter. The divergence condition is (1 — x”) # 0. Thus, if the equa-
tion has a periodic solution (it does, incidentally), the path of the solution must
intersect the lines x = +1 in the associated phase space x = x, x’ = Xo. ||
Example 3. The equation x’’ + x’ + f(x) = 0 could have no real periodic solution
in the xx’-plane since d(x’ + f(x))/ax’ = 1. ||
There is a generalization of the notion of exactness (Section 2.5) which is occa-
sionally helpful in plotting the solution paths of a two-dimensional autonomous
system
x’ = f(%;y),
(9.4b)
y’ = g(x, y).
Theorem 9.8. Suppose the coefficients f(x, y) and g(x, y) of Eq. (9.40) have con-
tinuous first partial derivatives on a domain & in @®. If the divergence (af (x, y))/ax +
(g(x, y))/dy is identically zero on @, then there is a function V defined on & such
Divergence and Bendixson’s negative criterion PAY
that f(x, y) = dV(x, y)/dy and g(x, y) = —dV(x, y)/dx. Thus the solution paths in
® lie in the loci of the equations V(x, y) = constant.
gles en (zits
Ox S g(x, Vo)
y0 Ox ae
y
02(x, B
= = eet) a — g(%, Yo) = —8(%, y).
Yo
1) Oe Vv av aV AV_
amberoy S20k Oy Oy ee
Thus V(¢(2), ¥(t)) = constant. ||
x! =4xy
+ dy, (9.4c)
y’/ = —4y?x — 4x.
The divergence of the coefficient functions equals zero on ®”. Thus there is a function
V such that V(x, y)/dy = 4x”y + 4y%. Integrating, we have
V(x, y) = 2x?y? + y* + A(x), (9.4d)
where / is an unknown function. The value A(x) is found by differentiating Eq. (9.4d)
and comparing the result with the first of the equations in (9.4c).
aV(x, y) A
sie eg 4xy~ 2 + h'(x).
/
Thus h’(x) = 4x°, and it follows that h(x) = x* + c, where c is a constant. The
solution paths for Eq. (9.4c) lie in the loci
EXERCISE 9.4
1. Consider the system
ery (ee yn yee ee yx? y*).
9.5
278 Autonomous systems
c) Let S denote the region $ < Vx2 + y2 < 3, 0 < tan7'(y/x) < 7/6. Describe
the image S(27) of S under the flow mapping.
2. A block of mass m rests on a belt which moves with constant speed v > 0. The block
is attached to a wall by a linear spring having stiffness coefficient & as indicated in Fig. 9.10.
Assume that the frictional force on the block is proportional to its velocity relative to the
belt. Is a sustained, oscillatory motion possible?
ae
Figure 9.10
solution the path of which is a critical point. If (xo, 0, f) does not approach a limit
as t—>77(Xo), it is still desirable to have a way of describing its behavior as
t— T* (Xo).
Example 1. When the differential equations
x = —X — y+ x
———— > y=x-—yt y ,
V/x? + y2 Vx? + y2
are written in polar coordinates, they have the form 7’ = —r-+ 1, 6 = 1. The solu-
tion paths are then easily depicted in the xy-plane (see Fig. 9.11), for their equations
in polar coordinates are r = ce~® + 1, where c is an arbitrary constant.
The path corresponding to the value c = 1 is the unit circle about the origin.
All other paths are spirals which approach the circle as ¢, hence 6 increases. Consider
the ray from the origin which cuts the circle at the point p and the indicated spiraling
path at the points p, po,.... If this spiral is the path of the solution x = (Xo, 0, 4),
then there is a sequence {t;,} of times such that pz = (Xo, 0, tx), tf; ~ +a as
k—-+o, and limz. o(Xo, 0, t.) = p. Thus, even though lim;,4 (Xo, 9, t)
does not exist, it is possible to describe precisely the behavior of (Xo, 0, ¢) for ¢
sufficiently large. The unit circle is called the positive (w-) limit set of the solution
(Xo, 0, 2). ||
Definition. Let $(xXo,0, t) denote a solution of the autonomous equation x' =
f(x), where f is continuous and lipschitzian on a domain @ in &". A point p in &” is called
Figure 9.11
280 Autonomous systems 9.5
an w-limit point of the solution if and only if there is a sequence {tx} of times such
that limz.40 te = +o .and lim 40 (Xo, 9, te) = P.- The w-limit set 2 of
(Xo, 0, £) is the set of all its w-limit points. One defines a-limit points and an a-limit
set analogously using sequences {t,} approaching —n ask—> +a.
In Fig. 9.5, the critical point q is the w-limit set of the path A; it is both the a-
and w-limit sets of B: and it is the a-limit set of C. The a-limit set of A and the
w-limit of C are empty. The critical points p and q are the a- and w-limit sets of
themselves. Notice, however, that p is not a limit set of any path other than itself.
The paths of which D and E are typical have empty limit sets of both types. The
situation for the closed paths, of which F is typical, is analogous to the situation for
the critical point p: the entire path is its own a-limit set and w-limit set.
We have therefore seen by example that a closed path or critical point may be
a limit set of a path other than itself, but neither must necessarily be such. A closed
path which is a limit set of a path other than itself is called a Jimit cycle. It is in-
structive to see an example of a limit set which is neither a critical point nor a limit
cycle.
The set Q is closed. If $(xo, 0, t) is bounded for all t > 0, then Q is not empty. It is
then connected and consists entirely of whole solution paths. The analogous assertions
hold for the a-limit set of ¢.
Proof. If Q or its boundary is empty, then Q is certainly closed. The remaining
possibility is that neither Q nor its boundary is empty. Let p be a point in the boundary
of Q. Then there is a sequence {p;} of points of 2 such that p, > pask—> +a.
Since each p; is an w-limit point, there is a time tf, > k corresponding to each p,
such that |@(xo, 0, t,) — px| < 1/k. Thus
and passing to the limit one finds that $(xo, 0, t,.) > pas k > +x. Thus p is in
Q and Q is therefore closed.
Now suppose that $(Xo, 0, f) is bounded for all > 0. This implies that 7(xo) =
+ by Theorem 8.5. Any unbounded sequence of times in the interval0 <t< +e
has a subsequence {t;} such that {¢(xo, 0, f;,)} converges by the Bolzano-Weierstrass
theorem. The limit is an w-limit point. Thus © is not empty.
To show that Q is bounded, let M be a constant such that |(xo, 0, )| < M for
all t > O and suppose that there is a point p in 2 which satisfies |p| > M+ 1. Let
{t,} be a sequence of times such that t,-— +o and px = $(Xo,0, t,) > p as
k—-+o. For all sufficiently large k, we have 4 > |p — p;| > |p| — |p| >
M+ 1-— M = 1, a contradiction, and 2 is bounded.
Suppose, for contradiction, that Q is not connected. Since it is closed and bounded,
it consists of two closed and bounded sets Q; and {22 which are separated by a distance
6 > 0 as indicated schematically in Fig. 9.12. If p is in @ we shall write dis (p, ;)
for the distance from p to Q; (i = 1,2). Now choose a distinct sequence of times
t, such that dis (o(Xo, 0, tor41), 21) < 5/3 and dis ($(Xo, 0, fox), 22) < 6/3 for
k= "12... Since the path of x =>°@(xo, 0, 7) 1s an_arc, there is, for each k = I,
a time 5%, for < Sze < tox41, such that dis (@(Xo, 0, sx), 2;) > 8/3 for each %%.
Since $(Xo, 0, t) is bounded for t > 0, there is a subsequence {7;} of {sx} such
(x0, 0, Tj)
6 075 <—
oo
( (xo, 0, ¢)
we Ea
Figure 9.12
Autonomous systems 9.5
282
that (xo, 0, 7,;) approaches a limit p as j> +. But then p must be in either Q,
or Q» and must satisfy the condition dis (p, 2;) > 6/3 fori= 1, 2. This is impossible.
Now let p be an arbitrary point of Q and consider the point q = o(p, 0, 7) for
any fixed T > 0. We prove that 2 consists entirely of whole solution paths by showing
that q is in Q. This is done by exhibiting a sequence s, of times such that s; — +0
and q, = $(Xo, 0, 5;,) > qask—-+o. As a starting point, let {t,} be a sequence
such that 4 ~ +o and py = $(X0,0,%)—> p as kK— +m. Then define 5 =
t, + T. By the second shift formula,
EXERCISE 9.5
1. Let x = u(, y = v(2) denote the solution of the system in Example 1 which satisfies
the initial conditions u(0) = 2, v(0) = 0. Specify a sequence {t,} of times such that
(u(t,,), U(tn)) > (1,0) as t> +0.
2. Ifxisapointin &” and S is a subset of &”, define the distance from x to S by dis (x, S) =
inf {x — y|: y isin S}. Suppose that Q is the w-limit set of a bounded solution @ of x’ =
f(x). Show that dis (¢(),2)— 0 as t> +o.
3. Let f be continuous and lipschitzian on”. Assume thatQ is the w-limit set of a bounded
solution @ of x’ = f(x).
a) Show that if there exists a continuously differentiable, real-valued function V on ®”
such that grad V(x) - f(x) < 0, where ‘-’”? denotes euclidean inner product, then
there is a constant c such that V(x) = ¢c for all x in Q.
b) Illustrate (a) by applying the result to the system in Example 1. Take Q to be the
unit circle in the xy-plane and explicitly specify V(x, y) and c.
4. Consider the frictionless linear oscillators depicted in Fig. 9.13. Let the equations for
the displacements x and z from equilibrium be x’’ + x = 0 and z” + \2z = 0, where
SS O,
a) Define what is meant physically by the phrase “the system undergoes a periodic
motion.”
b) Show that the system can undergo a nonperiodic motion for certain values of }.
c) Let (x, y, z, w) denote an arbitrary point of ®4. For all positive numbers a and b,
the set
S(a, b) = {(x, y, z, w): x? 42 y? = a?, 2? + w? = 5?)
Limit sets in the plane 283
Figure 9.13
is a torus in ®*. Show that the path of any solution of the system
Y= yy — 7 z’ = w, w = —d2z (T)
which initiates on a torus 3(a, 5) in the phase space ®* must lie entirely on that
torus.
d) Show that if \? is rational, then every solution path on a torus 3(a, b) is a simple
closed curve. Describe the physical motion of the oscillating blocks in this case.
e) If \? is irrational, show that 3(a, b) is the w-limit of every solution which initiates
on it. Describe the physical motion of the oscillating blocks in this case.
We confine our attention here to the autonomous system x’ = f(x) with f continuous
and lipschitzian on a domain @ in ®”. It will occasionally be convenient to dispense
with vector notation and write
aa = FLX, y), (9.6)
y’ = g(x, y).
When vector notation is used, $(xXo, 0, 7) will denote that solution (x, y) of
Eq. (9.6) which satisfies (x(0), y(0)) = Xo.
Examples were given above which show that a limit set of a solution of this system
can consist of an isolated critical point, an isolated closed path, or critical points
joined by nonclosed paths. The aim here is to show that a limit set for a solution of
the system (9.6) contains a closed path if and only if it does not contain a critical
point.
Theorem 9.10. Let p denote a noncritical point on the path C of a solution of the
system (9.6) and let N denote the normal line to C at p. Any path other than C which
crosses N sufficiently near p must cross in the same direction as C does.
Autonomous systems 9.6
284
Proof. First observe that the slope of each path may be found at each noncritical
point in & by taking the ratio
dy SCGY)s
de O(X..¥)
Let p have coordinates (u,v). We may assume without loss of generality that the
normal line N is not horizontal. Since the function f/g is continuous at p, there is
a neighborhood of p on which |(f(x, y)/g(x, »)) — (fu, v)/g(u, v))| < 7/8. This
means that the tangent vectors to any two solution paths in the neighborhood could
make an angle of at most 45° (see Fig. 9.14). ||
Fig. 9.14 The angle between the indicated vectors is at most 45°.
Theorem 9.11. Let C denote a closed path and let p denote an arbitrary point of C.
There is a path neighborhood which covers all of C except p, and the bases of the neigh-
borhood lie in the normal line N to C through p.
a
Figure 9.15
Limit sets in the plane 285
Proof. Observe first that, by virtue of the continuity of the solution x = (Xo, fo, 1),
one can make the terminal bases of a path neighborhood as short as desired by
making the initial base sufficiently short. Since C is a closed and bounded set, it can
be covered with a finite number of path neighborhoods. The covering can be so
constructed that the initial portion of one path neighborhood is overlapped by the
terminal portion of its predecessor and so that the terminal base B of at least one path
neighborhood lies in N (Fig. 9.16a). The desired neighborhood is the set of Open
path segments which initiate on N and terminate in B (Fig. 9.16b). ||
(a) (b)
Figure 9.16
The neighborhood in Theorem 9.11 is called an open path ring. Because of the
continuity of@ with respect to Xo, the existence of an open path ring has the following
consequence: if a nonclosed path I initiates at a point q of N, then I can be made
to reintersect N arbitrarily close to p by bringing q sufficiently close to p.
Theorem 9.12. If the w-limit set Q for a solution of the system (9.6) contains a
closed path C, thenQ = C.
Proof. Suppose for contradiction that there is a point x9 in 2 — C. Since C isa
Jordan curve, Xo is either interior to C or exterior to C. For definiteness, assume Xo
is exterior to C.
There must exist at least one normal line N at a point p on C which contains the
point x9 [why?]. Choose a point q ¥ p on N so near p that the path I through q
is interior to an open path ring with bases in N. Then I reintersects N at point r
in the terminal base of the ring (Fig. 9.17). The arc of I from q to r together with
the segment of N from q tor forms a Jordan curve which separates C and x». Hence
Q is not connected, and Theorem 9.9 is contradicted. Therefore, 2 = C. ||
It is clear that if the w-limit set of a solution consists of precisely one closed path
C, then the w-limit set contains no critical point. The converse of this statement,
which in various forms is called the Poincaré-Bendixson theorem, is useful in the
study of nonlinear oscillations. To establish it, we need two preliminary results.
286 Autonomous systems 9.6
Figure 9.17
Theorem 9.13. Suppose that q is an w-limit point of a solution path C for the
system (9.6) but that it is not a critical point. There is a path neighborhood R of q
such that:
i) The axis of R consists entirely of w-limit points of C;
ii) R contains no w-limit points of C other than those on its axis.
Proof. Assertion (i) is an immediate consequence of Theorem 9.9 since an w-limit
set consists of whole solution paths.
Suppose assertion (ii) is not true. Then there exists a path neighborhood R
about q such that the normal WN to the axis at q contains both an w-limit point r of C
and a point Xo of C between q and r (Fig. 9.18). The path C can be described by the
Figure 9.18
solution x = $(Xo, 0, f), and there is a time ¢; > 0 such that x; = (Xo, 0, f,) is
on N and in R since q is an w-limit point.
It is not possible that x; = xo, for otherwise C is a closed path and both q andr
could not be w-limit points of it (Fig. 9.19a).
If x; is located on the same side of xo as q, then (Xo, 0, f) is bounded away
from r for ¢ > ft; and r could not be an w-limit point for @ (Fig. 9.19b).
If x, is located on the same side of xo as r, then (xo, 0, t) is bounded away
from q for ¢ > ¢; and q could not be an w-limit point for $ (Fig. 9:19¢):
A contradiction results in every case. Thus assertion (ii) is established. ||
Limit sets in the plane 287
r
r
XxX]
4
(b) (c)
Figure 9.19
Theorem 9.14. If a path C of the system (9.6) contains one of its own w-limit
points, then C is either a critical point or a closed path.
x1
Figure 9.20
Proof. Let p be inQ and consider the path C of the solution x = ¢(p, 0, /). Certainly
C is in @. We shall show that C is a closed path. Theorem 9.12 then implies that
Q = C.
Let q denote an w-limit point of C and note that q, being in , is not a critical
point. By Theorem 9.13, there is a path neighborhood of q which contains no «-limit
point of C except those on its axis. But then the axis, which contains q, must be a
segment of C, and it follows from Theorem 9.14 that C is a closed path. |
288 Autonomous systems 9.6
Yaa C= a. =)
and consider the compact region G between the concentric closed curves
ote
or it can consist of two critical points approached by nonclosed paths. In general, a
Se RS
Figure 9.21
Limit sets in the plane 289
limit set of the system (9.6) which does not contain a closed path, consists of a finite
number of critical points to which adhere at most a countable infinity of nonclosed
paths.
The method of proof for Theorems 9.13 and 9.14 can be applied to show that
paths in the plane which approach limit cycles do so by spiraling. This observation
leads to a classification of limit cycles. A limit cycle is called
1) Stable (unstable) if it is the w-limit set (a-limit set) of a path in its interior and
a path in its exterior, and
li) Semistable if it is the w-limit (a-limit) set of a path in its exterior and the
a-limit (w-limit) set in its interior. See Fig. 9.22(a), (b), (c).
Figure 9,22
Bendixson’s negative criterion (Theorem 9.7) is an important test for the non-
existence of periodic solutions. We shall conclude the discussion here with another
nonexistence test: A closed path for the system (9.6) must contain a critical point
in its interior. Thus, for example, the second order equation x’’ = f(x, x’) cannot
have a real periodic solution unless it also has a constant solution. Before proving
the result, let us consider an example illustrating a feature of the proof.
Example 2. The differential system
where f(x, y) = (x? + y”)? sin (x? + y”)~' for (x, y) ¥ (0,0) and f(0, 0) = 0,
takes the form
Figure 9.23
Theorem 9.16. A closed path of the system (9.6) must have a critical point in its
interior.
Proof. Let Co be a closed path of Eq. (9.6) and suppose the assertion is false. Let
I be a path inside Cp. Then I has a nonempty a-limit set and a nonempty w-limit set.
The path Cp cannot serve as both limit sets. By the Poincaré-Bendixson theorem
there is another closed path C, in the interior of Co. If C, is any closed path in Co,
let B, denote the union of C, with its interior and let S denote the collection con-
sisting of all the B,’s. A chain in S is a subcollection P of the B,’s which is ordered
by set inclusion; that is, if both B, and Bg are in P, then either B, C Bg or By D Bg.
A chain Q in Sis called maximal if there is no other chain in S which properly contains
all the elements of Q. It is a fundamental assumption of analysis (called the
maximality principle) that there exists a maximal chain in S. Let Q be such a maximal
chain. Since the elements of Q are nested, closed and bounded, it follows from the
nested set theorem that there is a point p, common to all the Q,’s in Q. Then the
path [ through p has nonempty a- and w-limit sets 4 and Q which are contained in
every Q, in Q. Neither A nor Q contains a critical point since By does not contain
one. Thus, both A and Q are closed paths in the interiors of all the Q,’s. This con-
tradicts the maximality of the chain Q. ||
EXERCISE 9.6
1. Consider the system of Example 1, Section 9.5. Construct a neighborhood of the point
(0, —1) on which the tangent vectors to any two solution paths make an angle of at most 1°.
2. Consider the system of Example 1, Section 9.5. Construct an open path ring with its
bases centered at (0, —1) on the negative y-axis. Find the ratio of the lengths of its initial
and terminal bases.
3. Let C denote a simple, closed differentiable curve in the plane and let p denote a point
exterior to C. Show that at least one normal line to C passes through p.
4. Discuss the existence of periodic solutions for the following systems of equations.
a) = ey 1, oy = 2x b) x = xe y-y = yer 4 x:
5. Let a(x, y) = ci and h(x, y) = ce define two continuously differentiable Jordan curves
C,; and C2, respectively, and suppose that C2 is in the interior of C;. Assume that
For 9)5.
7)
ae 9)+ BG 9)Fale »)<0
at every point on C; and
ways. In this section, there are listed the various modes of approach. We assume
that the origin is an isolated critical point for the system (9.7a) and that f and g are
continuous and lipschitzian on a domain & in ®” containing the origin.
If every neighborhood of the origin contains a closed path, then it is called a
rotation point. The origin in Fig. 9.23 and the point p in Fig. 9.5 are rotation points.
A point (such as p above) for which there is a neighborhood containing only periodic
solutions is a special type of rotation point called a center (see Fig. 9.24a).
If there is a neighborhood WN of the origin with the property that every solution
initiating in N approaches the origin as t—> +» (tf — —w), then the origin is called
an attractor (repeller). An attractor with the property that all nearby solutions spiral
about it is called a focus (Fig. 9.24b). If the origin is an attractor and if every line
segment through the origin is tangent to some path as it approaches the origin, then
it is called a proper node (Fig. 9.24c). If the origin is an attractor and if there is one
line segment through the origin which is tangent at the origin to all approaching paths,
then the origin is called an improper node (Fig. 9.24d, e).
The origin in Fig. 9.24(e) is an example of a saddle point. A saddle point is a
critical point which is approached by only a finite number of other paths ast > +a.
Note that a saddle point is neither a rotation point nor an attractor (repeller).
The critical points described above (center, focus, nodes, saddle) are called
elementary critical points. If the origin is an attractor (repeller), these points are
further designated as stable (unstable).
y
ccs
ee
FYING
(d) An improper node (e) An improper node (f) A saddle point
Figure 9.24
Critical points in the plane 293
Elementary critical points are of special interest because the system (9.7a) possesses
such critical points if it is linear.
Theorem 9.17. Consider the system
(9.7b)
Gh i fi 4|
a : . ,
where A = E ‘|is a real, constant, nonsingular matrix and let \, and \» denote
the eigenvalues of A.
i) If \; and Xz are complex conjugates, say \; = a + iB with B ¥ 0, then the
origin is a center if « = 0 and a stable (unstable) focus if a < 0 (a > 0).
ii) If the matrix A does not have two linearly independent eigenvectors, then the
origin is a stable (unstable) improper node if the common value \ of \, and \»
is negative (positive).
iii) If the matrix A has real eigenvalues \, and \» and two corresponding linearly
independent eigenvectors, then: the origin is a stable (unstable) proper node if
Ay = Ag < OQ y = Ag > O); the origin is a stable (unstable) improper node
if \, and Xz are negative (positive) and unequal; and the origin is a saddle point
Tee)
Proof. (i) The eigenvectors p; and py» will be complex conjugates, say po = py,
since hy = 4; = a — i8. The matrix P = $[p; + po, —ip: + ipe] is real and
nonsingular, and the system takes the form
HAE: IB 79
under the change of variables [x, y]” = P[u,v]’. The nontrivial solution paths of
the system (9.7c) can be quickly described with polar coordinates r? = u? + v?
and @ = tan! (v/u), for the system becomes (r”)’ = 2ar”, 6’ = —8, where r(t) =
roe*’ and 6(t) = 89 — Bt. If a = 0, the paths are circles. If a ¥ 0, the paths are
spirals. In the xy-plane, the paths of the system (9.7b) have the same general features.
The curves are distorted, however. To see this, notice that the images of the u-axis
and v-axis under the transformation [x, y]" = Pfu, v]” are generally skewed lines
in the xy-plane (Fig. 9.25).
Hlecalld
ha
y 0
yx >
oe c@ nh fo) se 0)
Figure 9.25
9.7
294 Autonomous systems
ii) Under a linear variable change [x, y]” = Plu, v]", the system (9.7b) becomes
a Jordan system
BiG
alba oleh: | 619
9.7d
The solution paths in the wv-plane are the images of the curves
which are depicted in Fig. 9.26 for \ > 0. The tangents to the paths tend to the
horizontal as (u, v) > 0 since
dv uv’ dv Co 0
di it MU lo (Gece
as (u,v) 0. Figure 9.26 depicts the paths of the system (9.7d) in the uv-plane
v
y
Figure 9.26
and the corresponding paths of the system (9.7b) in the xy-plane. Notice that the
line of common tangency is the u-axis in the wv-plane and is along the line of eigen-
vectors of A in the xy-plane.
iii) If A has two linearly independent eigenvectors, then proceeding as before,
we put the system (9.7b) into the Jordan form
u |’ = 4 0 u
A OEE [Link] Hi
and find that u(t) = cye*, v(t) = coe2'. The description of the solution paths is
now straight-forward, and the details are left to the exercises. ||
Critical points in the plane 295
A critical point of a nonlinear system may, but does not have to, be an elementary
critical point. Commonly considered are the systems of the form
x =xty4tx?4+y?,
VY =x yxy. © ||
Example 2. The origin is an unstable node for the system
x’ = sinx + e” — 1,
y = e+ cos xy — 2,
for by Taylor’s theorem
x’ = x + P(x, y),
y =y+
QC, y),
where
x? xy?
EO) = Azeri D ae
and
2 xy?
OF ite se el
Example 3. The origin is a center for the linear approximation to the system
eer ee ey (9.78)
Vee Vie oy)
The origin, however, is an unstable focus for the nonlinear system. To see this,
observe that its solution paths are described in polar coordinates by the equations
(r2y = —2r*, 6’ = 1. Thus r? = 73/(1 + 2tro) and @ = 0) + 1#. ||
9.7
296 Autonomous systems
the
In the case of a nonlinear system, to which Theorem 9.18 does not apply,
behavior of solution paths near an isolated critical point can be much more com-
plicated. The behavior has, however, been thoroughly studied. Suppose, for example,
a Jordan curve C is drawn as indicated by the dashed curve in Fig. 9.27. Then,
inside C, the solution paths or
are as indicated.
Figure 9.27
The heavy lines of the figure depict certain path segments which connect the
curve C to the critical point. Such a path segment is called a base path. The interior
of C is thus cut into nonlinear sectors I, H, Il], IV, V, VI. The sectors I, II, VI are
characterized by the fact that they contain only two base paths (oppositely sensed)
and no other path in the sector approaches the critical point. Such a sector is called a
hyperbolic sector. The sectors III and V consist entirely of base solutions. A sector
of this type is called parabolic. The sector IV is characterized by the fact that its
boundary consists of one path, and all the paths inside the sector approach and
recede from the critical point. This type of sector is called elliptic.
If an arbitrary isolated critical point of a planar autonomous system is not a
rotation point, then it can be shown that the surrounding paths form sectors about
the point. There are a finite number of elliptic and hyperbolic sectors. The remaining
sectors are parabolic. If the number of elliptic sectors is even (odd), the number of
hyperbolic sectors is even (odd).
Phototropic platyhelminthes 297
_ Light
Wiz-
Ld Turntable W
2D Ww
Figure 9.28
y
A
Figure 9.29
9.8
298 Autonomous systems
We shall analyze these solution paths in detail as an application of the theory presented
in this chapter.
Phase space for the system consists of the xy-plane with the point (R, 0) deleted.
Consider an arbitrary nonconstant solution vector ¢(t) = (x(t), y(t)) initiating
in the interior G of the circle x? + y2 = R?. If r?(t) = x?() + y7(0, it is easy to
check that r’(t) < O if and only if the point ¢(t) lies outside the circle C with equation
y2 + (x — (R/2)? = R*/4. See Fig. 9.30. Thus $(f) cannot leave G.
y
>r
(R, 0)
Figure 9.30
We should like to be able to assert that 2 contains only the point (%, $). This will
be the case if (%, ) is a nonrotation point to which there does not adhere an elliptic
sector.
Since the coefficient functions have continuous partial derivatives of all orders
at (x, }), Theorem 9.18 may be used to study the solution paths in the vicinity of the
critical point. The matrix of the linear approximation is the Jacobian of the co-
efficient functions evaluated at the point (X, }) namely
The equations of van der Pol and Liénard 299
Se Lag yr = R)
ii ae an ca
came ae bo
Vit) (x — R)?
2d a D a ss w
EXERCISE 9.8
y? + (x — R/2)? = R2/4.
3. Show that $(f) cannot approach (R, 0) as t— +0.
4. Set up polar coordinates with the point (R, 0) as the pole and study the behavior of
solution paths near (R, 0).
5. Compute the divergence of the coefficient functions for (9.8).
6. Compute the coordinates of the critical point for (9.8).
7. Compute the matrix J and its eigenvalues.
8. On the basis of physical intuition would you expect the critical point for (9.8) to be a
node or a focus for large values of w? Check your conjecture by examining the eigenvalues
of J.
9, Suppose the speed w of the turntable could be adjusted so that w = v/R. Could the
flatworms with characteristic speed v be separated from the other flatworms in the tank?
10. Describe the physical operation of the separation device for very large values of angular
speed w.
In Section 1.10, we studied a nonlinear electronic oscillator circuit and showed that
its operation is described by the solutions of van der Pol’s equation
x" + 1 — x”)x' + x = 0, ee 0. (9.9a)
300 Autonomous systems : 9.9
We asserted that this equation has a unique periodic solution path towards which
all other nontrivial paths tend with increasing time. We shall now establish this,
not only for Eq. (9.9a), but also for the more general equation
of Liénard.
Theorem 9.19. Let the coefficient f(x) in Liénard’s equation (9.9b) denote a con-
tinuously differentiable, even function defined for ~n <x < +m. Suppose there
isana > O such that f(x)(a — \x|) < O for x # aand such that [Rios a6, dx = +o.
Then Eq. (9.9b) has a unique (nontrivial) periodic orbit toward which every nonconstant
solution tends as t-> +a.
Having made these preliminary observations, let us now proceed with the proof.
Let C denote that part of the curve y = F(x) which lies in the right half-plane and
let U and L denote the portions of the right half-plane which lie above and below C
respectively.
It follows from Eq. (9.9d), that any path in U has a negative slope and any path
in L has a positive slope. Thus, if (0, yo) is an arbitrary point on the positive y-axis,
a particle following a path segment I'(yo) through y) must enter U and then move
downward and to the right. If I'(yo) did not intersect C, then a particle following it
would have to monotonically approach a critical point on C or in U. Since there is
no such critical point, I'(y) crosses C (with a vertical tangent) at some point (x2, yo).
The equations of yan der Pol and Liénard 301
Figure 9.31
Conversely, if one chooses a point (x2, y2) on C and follows the path through
(X2, ¥2) counterclockwise into U, then he must reach the positive y-axis at some
point (0, yo). To see this, let (¢,~) be the solution of Eq. (9.9c) which satisfies
$(0) = x2, ¥(0) = yo. Suppose that, for —x < t < 0, (4(0), ¥(d)) is in U. Then
$(t) is bounded below by zero. Thus lim, ,_. (ft) exists. Since (4(t), ¥(1)) cannot
approach the critical point (0, 0) from inside U, lim;_,_. Y(t) = +~«. This implies
that the tangent to the curve traced by (¢(f), ¥(t)) becomes vertical as t— —~.
Equation (9.9d), on the other hand, implies that the tangent becomes horizontal.
Since this is a contradiction, the curve must cross the positive y-axis at some point
(0, Vo).
Together, the assertions of the last two paragraphs imply that the points of C
and the positive y-axis are put into a one-to-one correspondence by the flow mapping.
By a dual argument, one shows that the points of C and the negative y-axis are in a
similar correspondence. Thus given yo > 0, there is a path segment I'(j9) in the
right half-plane which initiates at (0, yo), crosses C at a point (x2, y2), and intersects
the negative y-axis at a point (0, —y4). Moreover, any two of the quantities yo,
2, ¥4 are monotone increasing functions of the third, and all three approach zero or
become infinite together.
The proof of the theorem hinges upon our comparing the distances from the
origin to each of the points (0, yp) and (0, —y4). A convenient method of comparison
consists of observing the algebraic sign of the function J(vo) = 407 — ya). The
attendant difficulty is that the functional dependence of y4 on Yo is not explicitly
known. Note, however, that /(j9) is given by a line integral
Io) = | (x dx + ydy).
T(yo)
Let j, denote the unique value of yo for which x2 = 6. For 0 < yo < Jo,
the path I'(y’9) may be parameterized with the variable x, say x = Y(y), —Y4 <y<yo-
302 Autonomous systems 9.9
Then
>x
Figure 9.32
If (¢, y) is a solution for the system (9.9c), it is easy to check that (—¢, —yv)
also is. The paths are therefore symmetric with respect to the origin. This implies
that I'(c) is a segment of a closed path which is the w-limit set of every noncritical
path. ||
The equations of van der Pol and Liénard 303
EXERCISE 9.9
1. Generalize Theorem 9.19 so that it applies to an equation of the form
ee A fi(X)X 2(x). = 0;
2. On the basis of physical intuition, for which of the following equations might one
plausibly conjecture the existence of a periodic solution?
a) x" 4- |x|x’ + x = 0.
b) x” + (|x| — 1)x’ + x? = 0
3. Two identical conveyor belts with common speed v > 0 feed into a slot as indicated in
Fig. 9.33. A square, flat block with edgelength ¢ and mass m moves on top of the belts.
ae Figure 9.33
The magnitude of the force of friction between the block and either belt is proportional to
the product of the speed of the block relative to the belt and the area of the block in contact
with the belt.
a) Suppose the block is oriented so that an edge is parallel to the belts. Describe its
motion.
b) Suppose the block is oriented so that a diagonal is parallel to the edge of the belt.
Describe its motion.
CHAPTER 10
STABILITY
10.1 DEFINITIONS
In order to work within a fixed context, we shall consider in this chapter only the
equation
x = [0 7) (S)
304
Definitions 305
at a random time, the voltage V, may correspond to any one of the spiral solution
paths in the phase space for van der Pol’s equation. In a well-constructed circuit,
however, the spiral paths wind about the limit cycle very rapidly. Thus, from a
physical point of view, V, appears always to correspond to the limit cycle.
In this sense the oscillator is stable.
The stability phenomena discussed above illustrate the mathematical terms
“stability (instability) of equilibrium” and “orbital stability” (a limit cycle is some-
times called a closed orbit because of applications in celestial mechanics). These are
special cases of a more general type of stability: the stability of an invariant set.
Definition. A set M in @ is called (positively) invariant with respect to (S) x’ =
f(x, £) if each solution $ of (S) has the property that $(to) in M implies $(t) is in M
Or ON Lato = tb.
Notice that any solution path of an autonomous system is an invariant set. In
the plane, a domain bounded by paths of an autonomous system is also an invariant
set. Periodic orbits for autonomous systems and critical points for (S) are invariant
sets of particular interest. They are closed and bounded. Recall that any limit set
for a bounded solution of an autonomous system is a closed, bounded, invariant set
by Theorem 9.9.
If M is a set and x a point in ®”, we shall take the distance from x to M to be
dis (x, M) = inf {|x — y|: y isin M}.
Definition. Let M be invariant with respect to (S) x’ = f(x, t). Then M is called
i) Stable if, given to > B and € > 0, there is a 6 (which may depend on both
to and €) such that whenever a solution @ of (S) satisfies dis (¢(to, M) < 6,
then $(t) exists and dis (¢(t), M) < € for tp) <t< +m. (Fig. 10./a.)
ii) Asymptotically stable if it is stable and if, given t; = 8, there exists a 6, such
that lim;_,4. dis (@(4), M) = 0 whenever |$(t1)| < 41.
In the case of an autonomous system, one sometimes attaches the phrase “‘in the
sense of Poincaré” to the definitions in (1) and (ii).
Fig. 10.1 (a) A stable invariant set. (b) An asymptotically stable invariant set.
10.1
306 Stability
of
The notion of stability of an invariant set is occasionally too general a notion
portion of the positive x-axis to the right of
stability. Note, for example, that the
(1,0) is an invariant set M for the system x’ = x, y = yy (Pig 02).
><
Figure 10.2
Definition. Let y be a solution of (S) x’ = f(x, t) which exists forB <t< +a.
Then wy is called
i) Stable if, given to > B and € > 0, there is a 6 such that whenever a solution
¢ of (S) satisfies |\@(to) — W(to)| < 6, then $(t) exists and satisfies
ii) Uniformly stable if, given € > 0, there is a 6 > 0 such that whenever
a solu-
tion @ of (S) satisfies |(to) — W(to)| < 6 for any to = B, then $(t)
exists
and satisfies |(t) — ¥(t)| < € for to <t < +m (Fig. 10.3b).
iv) Uniformly asymptotically stable if it is uniformly stable and if there is
a 8,
with the following property: Given €, > 0, there exists a T > 0 such that
lo(t) — ¥(2)| < €, for all t > T + t1 whenever ¢ is a solution of (S) which
satisfies |p(t1) — ¥(t1)| < 61 for any ty > B.
>e
Pe
(b)
Figure 10.3
Stability as defined in items (i) and (ii) is sometimes designated as being “‘in the
sense of Lyapunov.” In case (S) is an autonomous equation and wy is a constant
solution, items (iii) and (iv) are equivalent to items (i) and (il) respectively. Note
that uniform asymptotic stability of a solution implies its asymptotic stability which,
in turn, implies its stability.
If a solution or invariant set is stable and if every solution approaches it as
t— + a, then it is called globally asymptotically stable. If an invariant set or solu-
tion is not stable, then it is called unstable. It is worth pointing out that a solution y
10.1
308 Stability
of (S) may be unstable even though every other solution approaches it in the sense
that |@(1) — ¥())| 2 [Link] t+. Such a solution is sometimes called quasi-
asymptotically stable (Fig. 10.4). See Problem 10, Exercise. 10.1 for a specific example
of this phenomenon.
><
SS
A
Fig. 10.4 A phase portrait for a system with a quasi-asymptotically stable solution x = 0,
y = 0. An analytical example is given in Problem 10 below.
nonhomogeneous linear equation are precisely the stability properties of the zero
solution to its complementary equation. It follows also that every nonzero solution
of x’ = A(t)x exhibits precisely the same stability properties as the zero solution.
Thus stability considerations for any solution of a linear equation are reduced to
Stability considerations for its zero solution or the zero solution of its complementary
equation. One may therefore speak of the linear equations or systems, rather than
their solutions, as being stable or unstable.
EXERCISE 10.1
1. Refer to Fig. 9.11 (Example 1, Section 9.5). List the subsets of the xy-plane that are
invariant with respect to the system of the example.
2. Prove that a stable limit cycle for a two-dimensional autonomous system must be an
asymptotically stable invariant set.
3. Prove that a stable node or focus for a two-dimensional autonomous system must be
an asymptotically stable invariant set.
4. Show that a rotation point, hence a center, for a two-dimensional autonomous system
is a stable invariant set.
5. No solution of the equation x’ = —t/x is stable. Why?
6. Show that the zero solution of x’ = (2 — Ax/t, t > 1, is uniformly asymptotically
stable. Compute a 6; and, given €; > 0, compute a 7 such that a solution y with |W(r1)| <
61 for some ¢; > 1 satisfies |¥(t)| < €1 for all t > T 4+ £1.
7. Let g(t) = x1 1/01 + k4(t — kK)?). Show that the zero solution to the system
x’ = g’(t)x/g(X) is stable, but not asymptotically stable.
8. For each of the following equations find a quasi-linear equation based on the given
solution.
a) x’ =y+xl — x? — y”), yk = —x + yl — x? — y?), x = sint, y = cost.
b)sx) nee = 1x. x = 0, x4 = x’ = 0:
9. Suppose one wanted to deduce the stability of the nontrivial periodic solution x = p(t)
of van der Pol’s equation by studying the stability of equilibrium for the associated quasi-
linear equation. What is the primary difficulty in actually implementing the procedure?
10. a) Show that the critical point (1, 0) is a quasi-asymptotically stable solution of the
system
3 2
3) = ae = 28
x’ =x-—yt+ es
ee sis ee
2 2 3
; acta KEV.
een AS a
Vx2
+ y?
b) Why could a quasi-asymptotically stable cruising attitude be an undesirable feature
in an aircraft?
11. Prove that a constant solution to an autonomous system is uniformly (asymptotically)
stable if and only if it is (asymptotically) stable.
Stability 10.2
310
Most of the results obtained in Chapter 7 are in fact statements about the stability of
linear systems. Here, we shall discuss these results in terms of the formal definitions
of stability. .
Theorem 10.1 below can be motivated by consideration of the phase portrait for
the equation x” + w?x = 0, w > 0 (Fig. 10.5). One thinks of the elliptical paths
=
A
Figure 10.5
as the orbits of hypothetical particles which were all on the y-axis at time ¢ = 0.
To say that the zero solution is stable is to say that a particle which is once sufficiently
near the origin will never leave a previously designated neighborhood of it. Let us
suppose now that the curve labeled C in Fig. 10.5 is the path of the solution x = y(f),
y = W'(0), and consider the path labeled fr. Since the equation x’ + w?x = 0 is
linear, there is a scaling factor u > O such that I is the path of the solution x =
uy(t), y = wy’(t). In fact, every nontrivial path in the plane is related to every other
one in this way. Thus the boundedness of solutions is equivalent to the stability of
the zero solution for this particular equation. We show below that a similar assertion
is true for linear equations in general. A second implication of Theorem 10.1 is that
linear systems do not exhibit the pathological behavior illustrated in Figure 10.4 or in
Problem 10 of Exercise 10.1.
Theorem 10.1. The equation (LH) x’ = A(t)x, t > 6 is stable if and only if all
its solutions are bounded on intervals of the form to <t< +m, to > 6. It is
(globally) asymptotically stable if and only if all its solutions approach zero ast > +a.
Proof. Let & denote a fundamental matrix solution for (LH) and let to > 6 be given.
Suppose first that every solution is bounded for tg < t < +o. Then there
is a constant M such that |®(t)| < M for to <t < +a. Let e > 0 be given and
choose 6 = €/M|®~'(to)|. If @ is a solution satisfying |@(to)| < 4, then
solution of (LH). Let ¢ denote any nonzero solution and define ¥(t) = $(t)6/|@(to)].
Then |W(to)| < 6, and it follows that |y(A)| < € for tp <t< +o. This means
that |@(t)| < € |@(to)|/6 for to < t < +, that is, o(t) is bounded over the in-
dicated interval.
Next suppose that every solution approaches zero as t > +. Then, for each
to = B, each solution is bounded over intervals of the form tp < t < +o. Thus
(LH) is stable by the second paragraph of this proof. By definition, then, it is
asymptotically stable and globally so.
Conversely, suppose that (LH) is asymptotically stable. Then it is stable and,
given 1, there exists a 6; > O such that |y(t,)| < 6; implies Y(t) ~ 0 ast> +a
for each solution y. Let @ denote an arbitrary nonzero solution of the system
and define ¥(t) = 6:0(t)/|o(t1)|.. Then |¥(t,)| < 6,. Consequently y(t) > 0 as
t— +o and ¢(f) must do likewise. ||
Corollary [Link]. The equation x' = Ax is asymptotically stable if and only if
each eigenvalue of A has negative real part. The equation is stable, if and only if
i) every eigenvalue of A has nonpositive real part, and
ii) A has m linearly independent eigenvectors corresponding to each eigenvalue
with zero real part and multiplicity m.
Proof. The assertions are immediate consequences of Theorem 10.1, Corollary 7.2,
and Corollary 7.1. ||
Corollary 10.1b. The equation x' = A(t)x, where A has minimal period T > 0,
is asymptotically stable if and only if every characteristic exponent has negative
real part. Let e®” denote a period transformation matrix. Then the equation is stable,
but not asymptotically stable, ifand only if
i) every characteristic exponent has zero real part and
ii) B has m linearly independent eigenvectors corresponding to each eigenvalue with
zero real part and multiplicity m.
Proof. The assertions are immediate consequences of Theorem 10.1 and Corol-
lanys/25o81
The next theorem gives necessary and sufficient conditions that a general linear
equation be uniformly stable or uniformly asymptotically stable.
Theorem 10.2. The equation (LH) x’ = A(t)x is uniformly stable if and only if
every fundamental matrix solution & has the property that |\b(t)@~ '(t,)| is uniformly
bounded for all t, and t in the interval [8, +x). The equation (LH) is uniformly
asymptotically stable if and only if there are constants M > 0 and » > O such that
\b(t)}b—1(t,)| < Me~"'— for all t and ty satisfying BS ti St< +z.
such that |$(t,)| < 6 for some t, > 8, then |6(1)| = |L(& *(t1)o(t1)| < Mig) < €
for all t > t,. Thus (LH) is uniformly stable.
Conversely, suppose that (LH) is uniformly stable. Let € > 0 be given and let
6 > 0 be as in the definition of uniform stability. Fix an arbitrary ¢; > 6 and let
V(t) = [¥i(0),..., Wn(t)]" denote that solution of (LH) which satisfies ¥,(t1) = 6/2,
¥(t1) = Oif 7 #i (1 <i<n). By definition of uniform stability |¥()| < € for
all t > f,. Since ¥(t) = &(t)}6~ 1(t,)¥(t,), it follows that € > |W(t)| = M(t, t1) 6/2
for all t > t,;, where M,(t, t,) is the norm of the ith column of (ft) '(t,). Thus
1(t,)| < 2ne/6 for all t > fy.
|\&(1)@—
To prove the second assertion, assume that there are positive constants M =
M(8) and » = 7(8) such that |6()}6~1(t,)| < Me"? for all ¢, and ¢ satisfying
B<t, <t< +a. The zero solution is then uniformly stable by the preceding
paragraph. Let @ be any solution of (LH) which satisfies |@(t,)| < 1/M = 6, at
any time ¢; > 6 and let €, > O be given. If t > T+ t1, where T= —(ln€;)/n,
then |@(1)| = |®()b—1(t:)d(t,)| < e7™—'_ < el" 4 = €;. Thus, the zero solution
is uniformly asymptotically stable.
Conversely, suppose the zero solution is uniformly asymptotically stable. Then
it is uniformly stable, and there exists a constant N = N(@) such that |@(1)@~ '(t,)| <
N for all ¢ and ¢, satisfying B < tj < t << +a. Let 6, > 0 be as in the definition
of uniform asymptotic stability and choose an €; > Oso small that 2ne€,;/6; =X < 1.
Then let 7 be as in the definition of uniform asymptotic stability also.
To show that |®()~ '(t;)| is in fact dominated by an exponential function, we
consider again a special solution y satisfying ¥;(t;) = 6,/2 and y,(t,) = 0 if
JeriSn) Then, tori
The proof is analogous to that of Corollary 10.2a and is left to the exercises.
Example. Let a(t) denote the sawtooth function depicted in Fig. 10.6. The function
a(t)
A
2m ---—> |
/ 4(m + 1)
Figure 10.6
a(t) has the property that ApS a(t)dt = —a. The differential equation x’ = a(t)x
is asymptotically stable since
is not uniformly bounded over all intervals of the form 0 <4; <tf< +a. To
see this, take ¢; = 2m, t = 2m +1. Then
The notion of uniform stability is pertinent to Theorems 7.8, 7.9, and 7.10. The
student will be asked to reformulate these theorems in terms of uniform stability in
the exercises below.
Theorem 7.7 can be used in conjunction with Theorem 10.2 to test the general
linear homogeneous system x’ = A(t)x for uniform stability. Specifically, we have
the following theorem.
Theorem 10.3. Let A be a continuous n X n matrix function on the interval
B <t< +o and let M(t) denote the largest of the eigenvalues of A(t) + AL(Do Ife
M(t) < 0 for t > 6, then x’ = A(t)x is uniformly stable. If there exists a constant
n > 0 such that M(t) < —7n < 0 for t > 8, then the equation is uniformly asymp-
totically stable.
Proof. Let @ be an arbitrary solution and suppose that M(t) < —n for ¢t = 8B,
where 7 > O is a constant. Let € > 0 be given and choose positive 6<e. By
Theorem 7.7,
t
EXERCISE 10.2
eS saat (10.3b)
y’ = [sin Int + cos Int — 2a]y + x’, Bt
2 Sig <=) + e=8)/2, “7 > 0:
One easily finds that all solutions are given by
at
p= Cier t
y(t) = i
ee ae = at (oy + of2 | e —s $s sin
sin Ins
ds):
0
316 Stability 10.3
Next we find an interval (u,v) on which f(s) is increasing. Set o = Ins. Then
f(s) = —(sino + cos a), and sketching the graphs of cos o and —sin o (Fig. 10.7),
we see that f is increasing for
se+ 2nr < Ins < usA Dine -OL Ol cen ee ee AU:
Figure 10.7
Thus the quasi-linear system has a critical point (0,0) which is unstable.
The stability of equilibrium for quasi-linear systems 317
nt, 10.3
y’ = [sinIn¢ + cosIn¢ — 2a]yp cee
has a fundamental matrix solution
eat 0
0 ef sin Int—2at
Now let @ denote a solution of Eq. (10.3d) which satisfies |@(t¢;)| < 6 for some
t; > 6. Then |@()| < 6; on some maximal forward interval 4; <¢<T<
7*(Xo;fo). Since
for ft; < t < T. Since M|d(t;)| < Mé = 464, inequality (10.3e) implies (by Theorem
8.6) that T = TT (Xo, fo) = +o. Thus $(¢) exists for t; < t < +0 and inequality
(10.3e) holds over the same interval. This inequality implies that the zero solution
of the quasi-linear equation is uniformly asymptotically stable. The details are left
to the exercises. ||
EXERCISE 10.3
1. Explicitly compute a general solution for the system (10.3b).
2. Is the zero solution of the system (10.3b) unstable for any value of a > (1 + e~”)/2?
Compare your result with your answer to Problem 1, Exercise 10.2.
3. Prove that inequality (10.3e) implies the conclusion of Theorem 10.4.
4. Let g be defined by
= k
0O= yma ee 129
= > > () .
Is the origin a uniformly asymptotically stable critical point for the system
MS SX yy 2);
Neate? Naa oe aaa |
5. Abstract the analysis for Problem 4 and obtain a theorem similar to Theorem 10.4.
xii 2) (S)
under the assumption that there exists a T > 0 such that f(x, t+ 7) =f (x, d,
—%» <t< +a. Notice that if (S) is autonomous, then f(x, t+ T) = f(x, 2) for
all T > 0. As a matter of completeness, we state an existence theorem for an
important special case of (S).
Theorem 10.5. Consider
= y+ pas + sin t,
Dee A Tsaire
oe rece:Sic COS Js
has a solution of period 27. ||
We assume for the rest of this section that the equations to be considered have
(nontrivial) periodic solutions and consider the stability of these solutions.
Theorem 10.6. Suppose that the partial derivatives D,f, k = 1,...,n, are con-
tinuous on D = P X (—xwx, +), where & is a domain in &", and let the equation
x’ = f(x, £) have a solution @ with minimal period w = kT, where k > | is an integer.
If the linear variational system
y =J (Dy, (10.4b)
where J,(t) = (D; f:(o(2), t)), is asymptotically stable, then so is the solution 9.
Proof. In view of the normalization remarks at the end of Section 10.1, the theorem
can be proved by demonstrating the uniform asymptotic stability of the zero solution
to the system
y= JDy + hey, 2), (10.4c)
where
h(y, t) = f(o() + y, t) — S(O, t) — Jey.
First let us show that h(y, 4) = o(ly|) as |y| — 0 uniformly for —%2 <t< +o.
To do this, let C be a closed and bounded subset of ® with the trajectory of @ in
its interior (Fig. 10.8).
o() +y
Figure 10.8
Stability 10.4
320
where £;(t) is on the line segment from ¢(f) to ¢(t) + y. Since the partial derivatives
D, f ;(x, t) are uniformly continuous and periodic on the product set C X (—o, +),
it follows that there exists a positive function M(y) such that M(y) — 0 as |y| — 0
and |h(y, t)| < M(y)y, for all y with |y| sufficiently small, uniformly with respect
LONE:
The system (10.4b) is periodic and asymptotically stable. It is therefore uniformly
stable by Corollary 10.2b. The zero solution of (10.4c) is then uniformly asymptoti-
cally stable by Theorem 10.4. ||
Example 3. The system
xo = = 1-203 xe 2 cosi#ti3 sin f-Eecos? &
; . (10.4d)
y = —-y+y?sint
+ y?x
has a periodic solution x = sint, y = 0.
The variational equations based upon this solution are
ai . a teesin f eH {| (10.4e)
As defined in Theorem 10.3, M(t) = —1. Thus the system (10.4e) is (uniformly)
asymptotically stable. By Theorem 10.6, so is the solution x = sin t, y = O of the
system (10.4d). ||
The last theorem does not apply to autonomous systems with periodic solutions,
for the corresponding variational equations will always have a periodic solution and
cannot be asymptotically stable. Suppose, in fact, that y is a periodic solution of
x’ = f(x); then y = y’(¢) is a periodic solution of
y’ = Je(d)y, (10.4f)
where J-(t) = (D;f iW(D)), has n — 1 characteristic exponents with negative real
parts, then the path T of p is asymptotically stable. If Eq. (10.4f) has one characteristic
exponent with positive real part, then T is unstable.
The stability of periodic solutions 321
(Xo, V0. Z05 t) = (¢1(xo, V0. Z05 t), $2(Xo, V0. Z0s t), $3(Xo, V0 Z0; t))
for the solution of (A) which satisfies $(xo, yo, Zo, t) = (Xo, Yo, Zo). Then y(t) =
(0, 0,0, t). We may assume without loss of generality that the coordinates are
chosen in such a way that ¥(0) = 0 and y/(0) = f(0) is the unit vector along the
positive x-axis (Fig. 10.9).
Figure 10.9
It follows from Theorem 8.7 that (0, yo, Zo, f) exists for |yo| + |zo| + |t — w| <
€, say, since (0, 0,0, w) exists. Consider the equation $,(0, yo, Zo, t) = 0. The
values yo = Zo = 0, ¢ = ware solutions by construction. Since (0/dt) $,(0, 0, 0, w)
= (0/dt) $,(0,0,0,0) = 1, it follows from the implicit function theorem that
there exists a continuously differentiable function T defined for |yo| + |zo| < €1 < ¢,
say, such that
$1(0, yo, Zo, To, Z0)) = 0.
The geometrical significance of these remarks is that any solution vector
(0, yo, Zo, ¢) initiating on the yz-plane with |yo| + |zo| sufficiently small will re-
intersect the yz-plane at a time 7()o, Zo), not very different from w. Thus
ie Vos Zo) =, (¢2(0, V0. Z0s TWV0; Zo))s $3(0, V0. Z05 T(0; Zo)))
maps a neighborhood of the origin in the yz-plane onto another such neighborhood.
By the mean value theorem,
where
By Theorem 8.9,
D,$,(0, 0, O; t) D2 1(0, 0, 0, t) D3 1(0, 0, 0, t)
=F a NS
P= J;(070)P' = E a
with
[hal < [Ae] + ¥ < 1. Introducing new coordinates [u,v]" = P~*[y,z]’ in the
yz-plane, one finds that (10.4g) takes the form
d Yiju
G(uo, Vo) = ine ef[3 + S(uo, Vo),
IG@o, o)| < [Aa] + [Mol + [7] +[vol + |Aal [vol + [S(o, vo)|
Since |S(uo, 09)|/([uol + |vo|) > 0 as |uo| + |vo| — 0, there exists a neighborhood
N of the origin in the yz-plane such that every point p in N with wv-coordinates
(Uo, Vo) is carried by F into a point q with w-coordinates G(uo, Vo) where |G(uo, vo)| <
k(|uo| + |vol) and k < 1. Thus, the path of every solution (0, yo, Zo, t) which
initiates on the yz-plane sufficiently near the origin reintersects the plane at a point
F(yo, Zo) which is strictly nearer the origin in terms of the wv-coordinates. This
implies that the orbit of y is asymptotically stable. The instability assertion is left
to the exercises. ||
Example 4. The equation
x! + (x? + x’? — 1x’ +x =0 (10.4h)
The stability of periodic solutions 323
has a periodic solution x = sin ¢. The system of variational equations based upon
this solution is
1 i |
Uo 0
— 1 — cos 5 “|. (10.41)
H i ge = sii2/)
Let us write uw, and yo for the characteristic multipliers of Eq. (10.4i). Since the
system (10.41) has at least one periodic solution (u = cost, v = —sin f), at least
one characteristic multiplier has unit modulus. Say |w,| = 1. By Theorem 7.6,
EXERCISE 10.4
x’ = —x + sin(x+ A
which satisfies (xo, to, 0) = xo. Consider the function y = F(xo) = (xo, to, to + 27)
on an appropriate interval —R < xo < Rand use the intermediate value theorem to show
that there is a value Xo such that x9 = ¢(xo, fo, fo + 27). Then deduce the existence of
a periodic solution.
2. Carry out the analysis of Problem 1 for the equation x’ = —x + sinx + sint.
3. Is the solution x = cos 2t, y = sin¢ of the equations
SS y’ = (cosy + x?.
7. The system
Xe yy ae ey
has a periodic solution x = sint, y = cost. Is the path of this solution stable?
8. The system x’ = x — xyt — x°, y’ = —yx* — y® has a periodic solution with path
x4 + y+ = 1. Show that the path is asymptotically stable.
9. Let f and g have continuous first partial derivatives and assume that the system v=
f(x,y), ¥’ = g(x, y) has a periodic solution (x, y) = ¢(f) = (u(, v(t). Show that the
orbit of @ is asymptotically stable if Di f(x, y) + Dog(x, y) < 0 at all points (x, y) on the
orbit. Try to apply this result to van der Pol’s equation.
10. Prove the instability assertion of Theorem 10.7.
In Sections 10.2 and 10.3, we discussed the stability of equilibrium for a quasi-linear
equation
x = Ax +b, 7) (10.5a)
x= Nb 2) = ae yey) ee (10.5b)
The associated linear approximation is the system x’ = — x, y’ = 0, and it gives no
information about the stability or instability of equilibrium for the nonlinear system.
Consider, a priori, the function V(x, y) = (x? + y?)/2. It is an example of
what we shall later call a Lyapunov function. The stability of equilibrium for the
system (10.5b) can be deduced with its aid.
Let us, as indicated in Fig. 10.10, realize in ®*, the surface M defined by the
equation z = V(x, y), regarding the xy-plane as the phase space for the system
(10.5b).
The surface M is a circular paraboloid with its lowest point at the origin. Now
let o(t) = (u(t), v(t)) denote a nonconstant solution of the system (10.5b) and
denote its path by C. For each t > 0, the number V(o(0) is the length of the vertical
line segment L(t) from the point $(t) on C to the surface M.
We shall show that DV(¢(t)) = dV(@(1))/dt < 0 for t > 0, provided |$(0)|
is sufficiently small. This will imply that the length of L(t) decreases strictly with
passing time. Because of the geometrical features of M, the point $(f) will then
The direct method of Lyapunoy 325
move toward the origin in such a way that the asymptotic stability of the origin
can be deduced.
Note that it is not necessary to solve the differential equations in order to compute
DV(¢(t)). In fact (suppressing the argument ¢ for notation convenience), we have
DV(@) = uu’ + vv! = —(u? 4 v*) + 2(u? + vu + v)?.
By the triangle inequality,
The general technique which was used to deduce stability in Example 1 is called
the direct method of Lyapunov. We shall give Lyapunov’s basic stability theorems for
the equation
x’ = {Go 7), (S)
Example 2
i) The rectangular and euclidean vector norms |:| and ||-|| are positive definite
on &”.
ii) The function W defined
Figure 10.11
Lyapunov’s theorem which we shall prove below states that the origin is a stable
critical point for (S) x’ = f(x, 2) if there exists a Lyapunov function for (S) at the
origin. The theorem does not tell how one might go about constructing a Lyapunov
function. If, however, one has in hand a continuous, positive definite function V
which is suspected of being a Lyapunov function at the origin for (S), it is desirable
to have analytic criteria for verifying item (11) of the definition above.
If V is a continuously differentiable, positive definite function on the cylinder
G X [6, +~), we define the derivative of V with respect to the equation (S) to be
Example 3. The total energy of the frictionless oscillator described by the equations
x =y, yy = —g), (10.5c)
xg(x) > 0, |x| < A, is given by
2 zx
The function Vis positive definite on the strip |x] < A, -~ < y< +m. Further,
Proof. Let W be positive definite and such that W(x) < V(x, t) on G X [8, +).
Let to > Bande > Obe given and assume without loss of generality that the neighbor-
hood |x| < eisinG. Let\ = min),;—. W(x). Since Vis continuous on G X [8, +)
and V(0, t) = 0, there isa 6 = 6(€, to) < € such that Vp = V(Xo, to) < if |xo| < 6.
Along the trajectory of (Xo, fo, 4) in G, we have DV($(Xo, to, 1)) <0)» This
V($(Xo, to, t)) < d as long as $(Xo, fo, £) is in G. This implies that 7*(Xo, fo) =
+o and |6(Xo, to, t)| < € for all t > to. Otherwise, there is a first time ¢, at which
|o(Xo, to; t1)| = ¢, [hen
V((Xo, fo; £1), £1) < Vo < ¥ < W((Ko, to, 11)) < V(G(Xo, fo, £1), t1);
which is a contradiction. ||
Example 4. Consider again the system (10.5b) of Example 1. Clearly V is positive
definite on ®?. Since
DAC eS) (bebe Sa
DV is negative definite (hence semidefinite) on |x| + |y| < 1/\/2. By Theorem 10.8,
the origin is stable. ||
Example 5. If a system x’ = f(x, f) has a first integral V which is positive definite
on a neighborhood of the origin, then V serves as a Lyapunov function since
DV(x) = 0 by definition of first integral. ||
Now suppose that the solutions of (S) x’ = f(x, #) describe the state of some
physical mechanism and that the state x = 0 corresponds to the desired mode of
operation for the mechanism. It is never possible to start the device in the desired
mode of operation because of inherent mechanical inaccuracies. If the state x = 0
is stable, however, then the mechanism will operate properly provided its initial state
is sufficiently close to x = 0. From a physical point of view, the asymptotic stability
The direct method of Lyapunoy 329
of the initial state is more desirable than its mere stability since, with passing time,
the operation of the device would tend to become optimal.
The asymptotic stability of equilibrium can be studied by Lyapunov’s method;
additional conditions must, however, be imposed on the Lyapunov functions. One
might expect, on the basis of Example 1, that the origin would be asymptotically
stable for (S) if there existed a Lyapunov function V for (S) with DV negative definite
on a neighborhood of the origin. This speculation is correct if (S) is autonomous or
if f(x, 4) is bounded on cylinders G X [8, +~) where G is a neighborhood of the
origin. It is not true in general as the following example of Massera shows.
Example 6 (Massera). The function g defined by
|
g(t) = > ] ate = k)2
A general solution is
x(t) = g(tec,
where c is an arbitrary constant. Since g(m) > 1 for each integer m > 1, the solution
x(t) = 0 could not be asymptotically stable. Let us define
Since
Figure 10.12
The direct method of Lyapunoy 331
Define
i)) A= min
eee W i(x), ii
il) = pee W(x),
dX < W1(o(Xo, fo, t)) < V(o(Xo, to, 2) < Vixo, to) — v(t — to)
Se Xo) at lo) (Lt)
and, at ¢= to + T, we have \ < uw — vyT = X/2, a contradiction. Thus there
exists a ¢1, fo S t1 < to + T, such that |(Xo, fo, t1)| < », and the proof is
complete. ||
There are dual theorems of the Lyapunov type which give criteria for the in-
stability of equilibrium. We shall, however, leave these to the exercises and present
the instability theorem of Cetaev.
For the rest of this section, we shall consider only an autonomous equation
x’ = f(x) (A)
on a neighborhood G of the origin.
Theorem 10.11 (Cetaev). Let U be a domain in G and assume that the origin is a
boundary point of U. Suppose that there exists a continuously differentiable function
V on G such that V(x) > 0 and DV(x) > 0 on U but such that V(x) = 0 on that
part of the boundary of U which lies in G. Then the origin is unstable.
Proof. Let € > 0 be given so small that the region |x| < € is in G (Fig. 10.13) and
let x9 be a point of U with |xo| < ¢. Suppose that the path of $(Xo, fo, 4) is in U
Figure 10.13
332 Stability 10.5
and that |6(xo, fo, 1)| < € for fo < t< +m. Since V(x) > Oand DV($(x0; to, 2) >
0, V(o(Xo, to, 1) > Vixo) > 0 for to < t< +m. Thus, $(Xo, fo, £) is bounded
away from the boundary of U in the region |x| < ¢€ [why?]. Then DV($(Xo; to, 2) >
m > 0, for some mand allt > fo. It follows that
As t— +o, the right side of inequality (10.5e) becomes unbounded. This is im-
possible since V is bounded over the region |x| < €. Thus (Xo, fo, ¢) leaves the
region |x| < € at a finite time ¢,, and the origin is therefore unstable. ||
Example 7. Consider the system
X= yxy, ey = eee (10.5f)
The origin is a center for the linear approximation; thus the stability or instability of
equilibrium is determined by the nonlinear terms. In Cetaev’s theorem, let V(x, y) =
y? — x?, let U denote the domain defined by the inequality |x| < y < 1, and let
G denote the square |x| < 1, |y| < 1 (Fig. 10.14). Then V(x, y) = 0 if y = |x|,
—Jee Xl, Vesey)> Oton Une
DV(x, y) = 2y? + 2x3y > 2p? — 2xl2p > 2y3C1 — y) > O xeon, 0
The origin is therefore unstable. Moreover, any solution path initiating in U must
leave U by crossing the line segment y = 1, -l<x<l. ||
Cetaev’s theorem for nonautonomous systems is given in Hahn [14].
As a practical matter it is not always enough to know only that an equilibrium
position for a physical system is asymptotically stable. One frequently wants to know
the size of the perturbations from which the system can regain equilibrium. Suppose,
for example, a certain attitude for an aircraft is known to be asymptotically stable.
One might ask the following question: If the nose is bumped downward by turbulence,
how many degrees of pitch can be tolerated without the pilot’s intervention to prevent
a dive?
Definition. Let x = 0 be an isolated, asymptotically stable critical point for the
autonomous equation x′ = f(x), where f is continuous and lipschitzian on a domain
Ω in Rⁿ. The set S of all points x₀ in Ω such that φ(x₀, t₀, t) → 0 as t → +∞ is
called the region of asymptotic stability for the origin.

Figure 10.14
Example 8. The origin is an isolated, asymptotically stable critical point for the
system (10.5b) of Example 1. If φ(t) = (u(t), v(t)) is a solution vector with
|φ(0)| < 1/√2, then |φ(t)| → 0 as t → +∞. Thus the region of asymptotic stability
includes the domain |x| + |y| < 1/√2. ||
Notice that one can never hope to make an assessment of the size of a region of
asymptotic stability by Theorem 10.4 or similar results. Linear systems, when asymp-
totically stable at all, are globally asymptotically stable. Nonlinear systems, on the
other hand, can display equilibria that are asymptotically stable, but only locally so.
A basic result on the extent of the region of asymptotic stability is LaSalle's theorem,
which is discussed below. It can be motivated by consideration of the system

    x′ = ⋯ ,   y′ = ⋯    (10.5g)

If G = R², then V(x, y) = x² + y² defines a Lyapunov function V at the origin,
since along solutions of (10.5g) one finds DV(x, y) ≤ 0 for all (x, y), with
DV(x, y) = 0 only at the origin and on the unit circle x² + y² = 1.

Figure 10.15
It is important to observe that for (x, y) in the invariant sets which are also limit
sets (the origin and the unit circle), DV(x, y) = 0. We shall use these observations to
illustrate LaSalle's theorem before proving it.
Theorem 10.12 (LaSalle). Let B be a closed and bounded region which is invariant.
Suppose V is a continuously differentiable function with the property that DV(x) ≤ 0
if x is in B, and let Z denote the set of all points in B where DV(x) = 0. If M denotes
the union of all the invariant sets in Z, then every solution of x′ = f(x) in B approaches
M as t → +∞.
To apply LaSalle's theorem to the system (10.5g), let B denote any disc x² + y² ≤
R², R > 1, and let V(x, y) = x² + y². The set Z consists of the origin and the
unit circle. The set M is, in this case, the same as the set Z. As the conclusion of the
theorem states, any solution with x²(0) + y²(0) < R² approaches M as t → +∞.
Notice that those solutions with x²(0) + y²(0) > 1 actually approach the unit
circle, and those with x²(0) + y²(0) < 1 approach the origin. Thus the theorem
does not say which component of M is approached. The theorem is used to estimate
the region of asymptotic stability for the origin by choosing B, hence R, in such a
way that M consists precisely of the origin. That is, one chooses B to be any disc
x² + y² ≤ R², 0 < R < 1, and concludes that any solution in B approaches the
origin as t → +∞. Thus the disc x² + y² < 1 is the region of asymptotic stability.
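A numerical experiment shows the two possible limit behaviors described above. Because (10.5g) is not written out here, the sketch assumes a representative system with the same qualitative properties — in polar form r′ = −r(r² − 1)², θ′ = 1 — for which V(x, y) = x² + y² gives DV = −2r²(r² − 1)² ≤ 0, vanishing exactly at the origin and on the unit circle.

# A numerical look at the LaSalle estimate above.  Equation (10.5g) itself is
# not reproduced in this excerpt, so -- as an assumption -- we take a planar
# system with the same qualitative features: DV <= 0 for V = x^2 + y^2, with
# DV = 0 exactly at the origin and on the unit circle, and both of those sets
# invariant.  In polar form the assumed system is r' = -r(r^2 - 1)^2, th' = 1.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, z):
    x, y = z
    s = (x*x + y*y - 1.0)**2
    return [-y - x*s, x - y*s]

for r0 in (0.5, 1.5):                     # one start inside, one outside
    sol = solve_ivp(f, (0.0, 200.0), [r0, 0.0], rtol=1e-8, atol=1e-10)
    r_end = np.hypot(sol.y[0, -1], sol.y[1, -1])
    print(f"r(0) = {r0}:  r(200) = {r_end:.4f}")
# Approximately: r -> 0 from r0 = 0.5 and r -> 1 from r0 = 1.5; every solution
# approaches M = {origin} U {unit circle}, and which component is approached
# depends on the initial point, as noted in the text.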
Proof (of Theorem 10.12). Let x₀ be in B. Then x(t) = φ(x₀, t₀, t) is in B for all
t ≥ t₀ since B is invariant. The ω-limit set Ω of φ (which is closed, nonempty, and
connected) is contained in B since B is closed.
We shall show that V is constant-valued on Ω. Let p and q be points of Ω and
let {sₖ} and {tₖ} be sequences such that x(sₖ) → p and x(tₖ) → q as k → ∞. Since
DV(x) ≤ 0 on B, the quantity V(x(t)) cannot increase as t → +∞. The function
V, being continuous on the closed and bounded region B, is bounded below. Thus
V(x(t)) approaches a limit L as t → +∞. Then

    V(p) = lim V(x(sₖ)) = lim V(x(t)) = lim V(x(tₖ)) = V(q).

Consequently V(x) = c, say, for x in Ω.
We may conclude from the last line that Ω is in Z: Ω is invariant, so V is constant
along any solution lying in Ω, and therefore DV(x) = 0 for x in Ω.
Since Ω is invariant and M is the union of all invariant sets in Z, it is also true that
Ω is in M. This implies that dis (x(t), M) → 0 as t → +∞. Otherwise, there exists
an ε > 0 and a sequence {tₖ} such that tₖ → +∞ as k → +∞ and dis (x(tₖ), M) ≥
ε. The sequence {x(tₖ)} must contain a subsequence which converges to a point p
of B, since B is a closed and bounded region. But then p is an ω-limit point of φ
which is not in M. This contradicts the inclusion of Ω in M. ||
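The conclusion dis (x(t), M) → 0 can also be watched directly. The sketch again uses the assumed system r′ = −r(r² − 1)², θ′ = 1 of the previous illustration (not the book's (10.5g)) and prints the distance from x(t) to M = {0} ∪ {x² + y² = 1} along one trajectory.

# Tracking dis(x(t), M) for M = {origin} U {unit circle} along one trajectory
# of the assumed system r' = -r(r^2 - 1)^2, th' = 1 used in the sketch above.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, z):
    x, y = z
    s = (x*x + y*y - 1.0)**2
    return [-y - x*s, x - y*s]

def dist_to_M(x, y):
    r = np.hypot(x, y)
    return min(r, abs(r - 1.0))          # distance to {0} or to the unit circle

sol = solve_ivp(f, (0.0, 400.0), [2.0, 0.0], t_eval=np.linspace(0, 400, 9),
                rtol=1e-9, atol=1e-12)
for t, x, y in zip(sol.t, sol.y[0], sol.y[1]):
    print(f"t = {t:5.0f}   dis(x(t), M) = {dist_to_M(x, y):.4f}")
# The printed distances decrease toward zero, as Theorem 10.12 asserts.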
Example 9. If t is replaced by −t in Liénard's equation (9.9b), the phase portraits
for the resulting equation

    x″ − f(x)x′ + x = 0    (10.5i)

are the same as those for the unmodified equation except that the time sense of the
paths is reversed. We assume, as in Theorem 9.19, that f is continuously differentiable
and even, that there exists an a > 0 such that f(x)(a − |x|) < 0 for |x| ≠ a, and
that ∫₀^(+∞) f(x) dx = +∞. With respect to the equivalent system

Figure 10.16
The shaded region A of Fig. 10.16 is the region of asymptotic
stability. The problem here is to estimate it quantitatively. If one thinks of the system
(10.5k) as describing the motion of a particle oscillating with friction under the
influence of a nonlinear force, it is natural to study the problem with the aid of the
energy

    V(x, y) = y²/2 + bx²/2 + x³/3.

The derivative DV is given by DV(x, y) = −ay². If A is any compact invariant set
which has the origin in its interior but does not contain the point (−b, 0), then Z
is the portion of the x-axis which lies in A, and M consists only of the origin. By
LaSalle's theorem, A is inside the region of asymptotic stability. To estimate it, then,
we construct a compact, invariant set B which does not contain (−b, 0). As a start,
we require that any point (x, y) in B satisfy x ≥ −b. Then
any path initiating on the half-line x = −b, y ≥ 0 will subsequently move to the
right and never recross it (Fig. 10.17).

Figure 10.17. V(x, y) = a²b²/2; y = −a(x + b).

We construct a line in the third quadrant through (−b, 0) which solution paths
must cross in the positive y-direction. For y < 0 and −b < x < 0 the slopes of
solution paths satisfy

    dy/dx = −a − (bx + x²)/y < −a.

Thus any path lying above the line y = −a(x + b) in the third quadrant can never
cross it there. Since DV(x, y) ≤ 0, no path can cross a level curve V(x, y) = c in
the direction of increasing energy. The level curve V(x, y) = a²b²/2 through the point x = 0, y = −ab
passes around the origin and intersects the line x = −b at a point in the second
quadrant. The region B consisting of points (x, y) such that x ≥ −b, y ≥ −a(x + b),
V(x, y) ≤ a²b²/2 is thus a compact, invariant set of the required type. By LaSalle's
theorem, B is contained in the region of asymptotic stability.
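The estimate just described can be tested numerically. Equation (10.5k) is not reproduced in this passage, so the sketch below assumes the reading x′ = y, y′ = −ay − bx − x² (a particle with linear friction a and restoring force bx + x²), which is consistent with the energy V and the derivative DV = −ay² used above; the values a = 0.3, b = 1 and all names in the code are illustrative assumptions.

# A numerical check of the invariant-region estimate constructed above.
# System (10.5k) is not shown in this excerpt; the sketch ASSUMES the reading
# x'' + a x' + b x + x^2 = 0, i.e. x' = y, y' = -a y - b x - x^2, consistent
# with the energy V = y^2/2 + b x^2/2 + x^3/3 and DV = -a y^2 of the text.
# The parameter values a = 0.3, b = 1 are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.3, 1.0

def f(t, z):
    x, y = z
    return [y, -a*y - b*x - x*x]

def V(x, y):
    return y*y/2 + b*x*x/2 + x**3/3

def in_B(x, y, tol=1e-6):
    # B: x >= -b, y >= -a(x + b), V(x, y) <= a^2 b^2 / 2
    return (x >= -b - tol) and (y >= -a*(x + b) - tol) and (V(x, y) <= a*a*b*b/2 + tol)

rng = np.random.default_rng(0)
starts = []
while len(starts) < 20:                      # sample initial points inside B
    x0, y0 = rng.uniform(-0.6, 0.6), rng.uniform(-0.6, 0.6)
    if in_B(x0, y0):
        starts.append((x0, y0))

t_eval = np.linspace(0.0, 60.0, 1200)
worst_violation, worst_final = 0.0, 0.0
for x0, y0 in starts:
    sol = solve_ivp(f, (0.0, 60.0), [x0, y0], t_eval=t_eval, rtol=1e-9, atol=1e-12)
    g1 = np.min(sol.y[0] + b)                         # x + b        (>= 0 expected)
    g2 = np.min(sol.y[1] + a*(sol.y[0] + b))          # y + a(x + b) (>= 0 expected)
    g3 = np.min(a*a*b*b/2 - V(sol.y[0], sol.y[1]))    # level bound  (>= 0 expected)
    worst_violation = max(worst_violation, -min(g1, g2, g3, 0.0))
    worst_final = max(worst_final, np.hypot(sol.y[0, -1], sol.y[1, -1]))
print("largest excursion outside B :", worst_violation)   # expected: ~0
print("largest final distance      :", worst_final)       # expected: ~0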
Figure 10.18
Let (x₀, y₀) be any point in the plane and fix c > 0 so large that V(x₀, y₀) < c.
Further, choose a > √(2(c + 1)) so large that (x₀, y₀) lies in the compact region B
defined by the inequalities V(x, y) ≤ c, |y + F(x)| ≤ a, F(x) = ∫₀ˣ 2|u| du. The
path through (x₀, y₀) cannot cross the curve V(x, y) = c since DV(x, y) ≤ 0. If
U(x, y) = (y + F(x))², then DU(x, y) = −4(y + F(x))xe⁻ˣ. Thus the path
through (x₀, y₀) cannot cross the curves |y + F(x)| = a either. Therefore B is
invariant. The set Z of LaSalle's theorem consists of those segments of the axes
lying in B, and the only invariant set M in Z is the origin. The path through (x₀, y₀)
approaches the origin, and it follows that the origin is globally asymptotically stable
since (x₀, y₀) was arbitrary. ||
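The differential equation of this last example is not written out in the passage above. From F(x) = ∫₀ˣ 2|u| du and DU = −4(y + F(x))xe⁻ˣ, one reading consistent with those formulas — an assumption, not necessarily the author's equation — is x″ + 2|x|x′ + 2xe⁻ˣ = 0. The Python sketch integrates this system from a few widely separated starting points and watches the solutions drift toward the origin.

# The system of this example is not written out in the excerpt.  From
# F(x) = int_0^x 2|u| du and DU = -4(y + F(x)) x e^{-x}, one consistent
# reading (an assumption) is the Lienard-type equation
#     x'' + 2|x| x' + 2 x exp(-x) = 0,   i.e.   x' = y,  y' = -2|x| y - 2 x exp(-x).
import numpy as np
from scipy.integrate import solve_ivp

def f(t, z):
    x, y = z
    return [y, -2.0*abs(x)*y - 2.0*x*np.exp(-x)]

for z0 in ([4.0, 3.0], [-3.0, 2.0], [0.5, -6.0]):
    sol = solve_ivp(f, (0.0, 300.0), z0, rtol=1e-8, atol=1e-10, method="LSODA")
    r_end = np.hypot(sol.y[0, -1], sol.y[1, -1])
    print(f"start {z0}:  |z(300)| = {r_end:.3f}")
# The approach is slow (the damping 2|x|x' vanishes to first order at the
# origin), but the oscillation amplitude decays toward zero for every start,
# in line with the global asymptotic stability argued in the text.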
EXERCISE 10.5
1. Let A denote an n × n matrix with n linearly independent eigenvectors and consider the
system x′ = Ax. Under what further conditions on the matrix A is there a Lyapunov
function of the form V(x) = xᵀBx?
2. Show that V(x, x′, t) = ⋯
Use the euclidean norm as a Lyapunov function and show that the origin is asymptotically
stable.
11. Consider the autonomous equation x′ = f(x), where f is continuously differentiable
and f(x) = 0 if and only if x = 0. Let J_f(x) denote the Jacobian matrix (D_j f_i(x)) and
assume that there is a constant γ > 0 such that each eigenvalue λₖ of J_fᵀ(x) + J_f(x) satisfies
λₖ ≤ −γ < 0 for k = 1, . . . , n. Use the euclidean norm as a Lyapunov function to
show that the origin is asymptotically stable. (A numerical illustration of this criterion is
sketched after this exercise list.)
12. Prove the second instability theorem of Lyapunov: Let V denote a continuously dif-
ferentiable function with the property that, given ε > 0, there exists a point (x₀, t₀) such
that |x₀| < ε and V(x₀, t) > 0 for t ≥ t₀. If DV(x, t) = λV(x, t) + W(x, t), where
λ > 0 is a constant and W is positive semidefinite, then the zero solution of x′ = f(x, t)
is unstable.
13. Deduce that the zero solution of the equation x″ + (1 + t − ⋯)x = 0 is unstable
with the aid of the function V(x, x′) = xx′. [Hint: Reason as in Theorem 10.11, but take
nonautonomy into account.]
14. Consider the equation x″ + f(x, x′) + g(x) = 0, where f and g are continuously dif-
ferentiable, yf(x, y) > 0 for y ≠ 0, and xg(x) > 0 for x ≠ 0. Show that the origin is an
asymptotically stable critical point.
15. Prove that if one imposes the condition ∫₀ˣ g(s) ds → +∞ as |x| → +∞ in Problem 14,
then the origin is globally asymptotically stable.
16. Reconsider Problem 14 under the assumption that f(x, y) = yh(x, y), where h(x, y) ≥
γ > 0 for some constant γ and all (x, y). Show that the origin is globally asymptotically
stable.
17. Estimate the region of asymptotic stability for the equation

    x″ + ⋯ = 0.
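As promised in Problem 11, here is a small numerical check of that criterion. The vector field is an illustrative choice, not one taken from the text: f(x₁, x₂) = (−x₁ + x₂ − x₁³, −x₁ − x₂ − x₂³), for which Jᵀ + J = diag(−2 − 6x₁², −2 − 6x₂²), so the eigenvalue condition holds with γ = 2, and the euclidean norm of any solution should decrease to zero.

# A numerical companion to Problem 11 (referenced there).  The vector field
# below is an illustrative choice, not taken from the text:
#     f(x1, x2) = (-x1 + x2 - x1**3, -x1 - x2 - x2**3),
# whose Jacobian J satisfies J^T + J = diag(-2 - 6 x1^2, -2 - 6 x2^2), so the
# eigenvalue condition of Problem 11 holds with gamma = 2.
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    x1, x2 = x
    return [-x1 + x2 - x1**3, -x1 - x2 - x2**3]

def jac(x1, x2):
    return np.array([[-1 - 3*x1**2, 1.0], [-1.0, -1 - 3*x2**2]])

rng = np.random.default_rng(1)
pts = rng.uniform(-3, 3, size=(200, 2))
worst = max(np.linalg.eigvalsh(jac(*p) + jac(*p).T).max() for p in pts)
print("largest eigenvalue of J^T + J over the sample:", worst)   # <= -2

sol = solve_ivp(f, (0.0, 10.0), [2.5, -1.5], t_eval=np.linspace(0, 10, 101),
                rtol=1e-9, atol=1e-12)
norms = np.hypot(sol.y[0], sol.y[1])
print("norm decreasing:", bool(np.all(np.diff(norms) < 0)), " final norm:", norms[-1])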
FURTHER READING
AT AN INTRODUCTORY LEVEL
1. Boyce, W. E., and R. C. DiPrima, Elementary Differential Equations and Boundary Value
Problems. New York: John Wiley, 1965.
2. Brauer, F., and J. A. Nohel, Ordinary Differential Equations: A First Course. New York:
W. A. Benjamin, 1967.
3. Kreider, D. L., R. G. Kuller, and D. R. Ostberg, Elementary Differential Equations.
Reading, Mass.: Addison-Wesley, 1964.
AT AN ADVANCED LEVEL
4. Birkhoff, G., and G.-C. Rota, Ordinary Differential Equations. Boston: Ginn, 1962.
5. Hartman, P., Ordinary Differential Equations. New York: John Wiley, 1964.
6. Hurewicz, W., Lectures on Ordinary Differential Equations. Cambridge, Mass.: M.I.T.
Press, 1958.
7. Lefschetz, S., Differential Equations: Geometric Theory. New York: Interscience, 1957.
8. Nemytskii, V. V., and V. V. Stepanov, Qualitative Theory of Differential Equations.
Princeton, N.J.: Princeton Univ. Press, 1960.
ON APPLICATIONS
9. Minorsky, N., Nonlinear Oscillations. New York: Van Nostrand, 1962.
ON METHODS OF SOLUTION
11. Kamke, E., Differentialgleichungen: Lösungsmethoden und Lösungen. Ann Arbor, Mich.:
J. W. Edwards, 1945.
ON STABILITY
12. Cesari, L., Asymptotic Behavior and Stability Problems in Ordinary Differential Equations,
New York: Academic Press, 1963.
13. Coppel, W. A., Stability and Asymptotic Behavior of Differential Equations, Boston:
D. C. Heath, 1963.
14. Hahn, W., Theory and Application of Lyapunov’s Direct Methods, Englewood Cliffs,
N. J.: Prentice-Hall, 1963.
15. LaSalle, J., and S. Lefschetz, Stability by Liapunov’s Direct Method, with Applications,
New York: Academic Press, 1961.
ON LINEAR ALGEBRA
16. Franklin, J. N., Matrix Theory, Englewood Cliffs, N. J.: Prentice-Hall, 1968.
17. Halmos, P. R., Finite Dimensional Vector Spaces, New York: Van Nostrand, 1958.
HINTS AND ANSWERS FOR SELECTED EXERCISES

Exercise 1.1
[Link]=2+4+4142 bx=2477-44+4
c) x = sint d) x = $In({1 + x|/|1 — x|) + 2
e) x =/tan-!4-4nnd+%+3y
2. vo = —240/7 ft/sec
3. v = −√(v₀² + 2g(R₀ − r)), where v₀ and R₀ denote initial velocity and altitude.
4. −88 ft/sec², 440 lb      5. 316.8 hp
Exercise 1.2
Exercise 1.3
Exercise 1.4
1. 11,111 m/sec
2. Yes; if v₀ > (2Gmₑ/r₀)^(1/2), then r′ > k/r^(1/2), where k = √(2Gmₑ). Thus r(t) ≥
[r₀^(3/2) + 3k(t − t₀)/2]^(2/3) → +∞ as t → +∞.
3. 200 km
4. Use (1.4d).
5. v(t) = v₀ + k ln (m(0)/m(t)). The exhaust velocity should be as high as possible,
and as much of the rocket's mass as possible should be fuel.
Exercise 1.6
1x TCOSIOL
2. x = ω⁻¹ sin ωt. One might strike the block while it is at rest in the equilibrium position,
or one might compress the spring and release it, letting t = 0 denote the first time at which
the block passes from left to right through equilibrium.
3. Both functions describe motions of the block which initiate in the same way. If the
motions were different, then the physical hypothesis that like causes produce like effects
would be violated. Thus the motions are the same and the functions must coincide.
4. Set c₁ = a/2√β and c₂ = 1 in (1.6b). Pull the block one unit to the right and release it.
5. 2r/\/4w2 — a2 sec
Exercise 1.7
Exercise 1.8
1. I(t) = (1 − e^(−Rt/L))E/R
2. I(t) = Ee^(−t/RC)/R
3. I(t) = (.02)e—599% sin 5000¢
4. I(t) = E√(C/L) sin (t/√(LC)), frequency = 1/2π√(LC)
Exercise 1.9
Exercise 1.10
1. a > RC/M
2. I(t) = CV;(1). Show that the derivative of a periodic function is periodic.
3. Wind a third coil on the same form as the other two coils. The voltage across the third
coil will be proportional to V,.
Exercise 2.1
Exercise 2.2
Exercise 2.3. Let a, b, c, A, B, C denote arbitrary real constants unless otherwise designated.
1. a) x = +1 and x = sin(t+oc) b) x = tan {tan—*
7-1 c)
c) x = 0 and x = (#2/9 + c) d) x = cexp (-1/(1 + 9)
2 R—a 2 2
Phy DE = oe
i5a2 aoe
Be (3R° + 4Ra + 8 8a’)
3. T = R²√(2gh)/ga², where R = radius of tank, h = height, and a = radius of vertical
pipe.
4. x = C⁻¹(−1 + cosh (Ct))
Exercise 2.4
s 1/2
2 =] Uu
1. (r’)
(r’)” — 2Gm,
Gm.r =c,t=
Cut ae a Se rs7)
to + ib&& du
Exercise 2.5
loa) xe tr = b) tx +t =€
ec) x*+x+rf=c d) x?¢ + 42/3
=c
e) Not exact f) x7/2+ 4x4 #4/4=c
g)x+sinxt=c h) Not exact
i) x8/3 +xt4+-x4+7/34+t=c j) Not exact
Exercise 2.6
x= QF 1)?
. xX = tant
nk =COS, (tel 72)
= x°/3 + x7/2 +x
~x=a-—(t+oInijt+cl+(t+o,t+e>0
x’27/2—x-l=¢
~ x’ + In|x’ —1] = x7/2+¢
. x'?/2 + fi f(u)du = c
. x’ sinx’ + cosx’ + x2/2=c
10. x3 + x2/2=c
11. In|jx’ + 1] 4+- @’ + 1)7!4 x2/2=c
12eAine? <4) ine —11) $8 3e2720are
fin 1) ee ete
14. In|x’? — 1] + x? =¢
15. x4/4 + x’ + In|x’ — 1| I fey
16. x?/3 + cx +a=2t
17. x’ —In|x’ +1, + x4*/4=c
Ise — e.-x*/4 = c
Exercise 2.8
Exercise 3.1
Ces
Ne = Ae?t
Exercise 3.2
Exercise 3.3
0 0) —9 2 —3i
ib ||3! 7, \\Ml 3h. 6 4.}1 + 2i
3 1 3 i
De PEAY) i @ 4 6 8 -9
SO 4% 2 6 ib 7.{—1-—2 6
0 @ 3 iL @ 02 20: Fes.
Exercise 3.4
_ [t?e2¢ ab te?!, 2te2* aL e*t, 2e2¢]?
(2x + y, 3x7, 17
. [tan-1¢, 4, n@Q + V1
4+ 2)"
2 10,00]F
» fet seh Jey”
. Any vector of the form [cos ¢+ csin+¢, —sin t]’, where c is arbitrary
5 (byes sree
. Any vector of the form [cosh ¢ + c sinh ¢, sinh ¢]”, where c is arbitrary
© ‘ [Se7'/4 =— e3t/4, eftyr
Nn
=
NHN
WwW
fh
CONN
Exercise 3.5
i @ 1 i &
ia) i aE E |+ E le
10 0 0.4 2 6 4
DOs LO DEaD 7a. | li omesel72
OF004 3 at Les
2t 0 eit 0 1 t
Deal |
a) |0 A b) ki ne c) e a ke 1
oye 0) it @): @)
Gyreuen BS) CPHl@d ib © i) CeO il %
@ @) i QO @ il
Exercise 3.6
1.8 23 ea 4a
Spgs O89 O, 3), 35.30) AZyPne 5, 45 Ph 2
9. 6,3, —1 10. 1, 2 + 3i, 2 — 3i
Exercise 3.7
ee = 1
4 +4sin
= $cos¢ — :4cosV3t
t ++ —<si
574 sin v3 t
A sum of two periodic functions is not periodic unless the ratio of the periods is a rational
number.
Periodic motions occur when w(0) = +y(0), w’(0) = +y’(0).
Exercise 3.9. Let φ(t) = c₁ sin t + c₂ cos t, where c₁ and c₂ are arbitrary constants.
l1.x=¢()+1 2.x=¢o)
+1
3.x=¢o() +1+1 4 x=¢)4+14+2t4+7
5. X = g(t) + e'/2 6. x = o(t) + e#/5
Tepe OC) e(t —) 1)/2 8. x = ¢() — 24 #2 +4 3e!/2
9. x = o(t) — (sin 21)/3 10. x = g(t) — 3¢t(cos £)/2
11. x = g(t) — c(sin 21)/3 — 4(cos 21)/9 12. x = ¢(f) + t — tcos 1/2
13. x = d(4) + (et sint — 2e cos £)/5 14. x = (4 — (e?! sin 3t + 3e7! cos 32)/40
15. x = o(t) + e[14 — 102) cos t + (St — 2) sin 1]/25
16. x = ¢() + ¢ + [e’sint — 2e' cos t]/5
17. x = cye~* + cote + cze—* 4+ t2e-*/6
18. x = cye~* + cate + cze—** + (42 — £?)e—*/18
19. x = cye~* + cote? + cz3e—*! + (#2 — t?)e~*/6 + Te—*/9
20. x = ph/w + (a — ph/w) cos (t\/ wg/ph)
One could determine the period T of oscillation and the radius r by observation. Then
ph = wgT²/4π²r². The weight of the buoy is πr²hρg.
21. I = c₁ exp ([−R/2L + (R/2L)√(1 − 4L/R²C)]t)
      + c₂ exp ([−R/2L − (R/2L)√(1 − 4L/R²C)]t)
      + ωE₀ sin (ωt + θ)/√(R²ω² + (C⁻¹ − Lω²)²),
where tan θ = (C⁻¹ − Lω²)/Rω. The light burns most brightly when L = 1/ω²C.
22. The solution becomes unbounded as t → +∞.
23. ω = √(k/m)
Exercise 3.10
Exercise 3.13
a 5 ee
3. A has distinct eigenvalues. Its Jordan forms are diagonal matrices.
x = Ae?* + Ber + Ce7*/20, y = Bett + Ce?t/4, z= 'Ce%*
3 i ©
Ab ALS JD) = f 3 0} and letP denote any nonsingular 3 X 3 matrix. Define A = PDP}.
0 @ 7
5. The answers are not unique.
Oaoant 2 1 1 1-1 0
fo 0 oI l+i 1-i c)}—-2 3-1
CSO) 0 -2+i -2-i —-2 2 2
—2 1 38
d) 1 =|
-1 0 2
Exercise 4.2
3. |A|- |x| = 6(1 + 7) for (a), 12 for (b), and (14 + /2)(2 + mw) for (c).
4. Hint: A continuous real valued function on a closed and bounded interval attains its
maximum on that interval.
8. a) x(t) < x(O)e?+ b) x() < x(O)e”
c) x(t) < x(0) exp (1 — cos A) d) x(t) < x(0) exp JO a(s) ds
9. Hint: Set x(t) = A + Bf u(s) ds. Then proceed as in Problem 8.
10. φ(t) = −e⁻ᵗ cos (t − π/4), T = 3π/4, no.
11. y@) = (e=" — es3t427)7/3
12. Set ¢ = T + 1 in answer 11 to obtain e~7(1 — e—)/3.
13. Hint: In each identity, let φ(t) denote the left side and ψ(t) the right. Show that x =
φ(t) and x = ψ(t) are both solutions to some one initial value problem.
Exercise 4.3
Exercise 4.4
Exercise 4.5
1. e 3! e 2S Cm ea
3. ae we Ave fe fe
Sab COS, ea Sint Ch 2 Siig CrreeSic, Sins weor
Exercise 4.6
Sa 2e** + Ste2!, y = e#
92x 3e' sin ¢ + 2e' cost, y = 3e' cost — 2e! sin ¢
10. u (—2e! + (3 + 2V2el—1+VDE + (3 — 2W2el-1-VD/4, v = Layee Ud
Exercise 4.7
1. x = (sin¢ + (2 — Acosr)/2
2. x = sin t + t sin t + (cos t) ln cos t, |t| < π/2
82x = 277 — 272 + 7472
AY x = (cost + +-"1-! cos 1—1)/2
5. x = ef + e74 y = e?! — et
6. x = e — t/3 — 1/9, y = —1/3 — 1/9 + 10e%*/9
Cn er
8. x = t + 1 + ln (1 + t), y = 1 + (1 + t) ln (1 + t), |t| < 1
9. x = t + ln t, y = t − t² + t ln t, z = t + t ln t
10. x = 4/3 — t, y= 2/3 4+ 2¢—-—1,2=t7
+2
11. x = −sin t + cos t − 1, t ≤ 0;  x = −sin t − cos t + 1, t > 0
1D se] Sine
Sie WSSs ae x = (1 —7/2)sint+
cost, t>7/2
Exercise 4.8
equal to zero.
14. Set any three, but not four, of the constants A, B, C, D in
equal to zero.
15. Set any three, but not four, of the constants A, B, C, D in
Exercise 5.1
mees
My ad
—9]7o*
fees:
ee
bo11210)
|e)
Dee Fes
7z74+A—3 A+1 = 1 |[ uO
j) QAP +d? — 314+ 1972 —1 27 +r A FT] vO)
—h 3\— 1 2 |] w(O)
2. a) 1/r2 61/021) Oa 1
d) 3/(A? — 2d + 10) e) 1/(2 — 6d+ 10) f) (A — 2)/Q2 — 4\4+ 13)
g) (A — 3)/(? — 6 + 13) h) 1/AQ2— 2A-—2) i) IMA— 2
Di Qe 2/O= 3)00= 1), «ky Or D/A = 3+ 1) 2 2/XA7 + 4)
m) 2/(\ — 4)(A? — 84+ 20) n) 1/(1 — A)? 0) 2e~(sinh bd)/X
p) emer + e-®% — 2)/Br2 q) 1/M1 — e>)
3. Assume the contrary and use |’Hospital’s rule.
4. If |f()| < Me™ for 0 < t < +0, then |L&f(A)| < M/( — 7») for Red > 7.
5. Set u = at in fo'? f(ade—!
dt.
7. w—1 tan—!(r/d) = 1—! (r/2 — tan—! (/n))
Exercise 5.2
e) —6e(A2 + 2 — e — e)/A4
b) x= Bier a2e>' Saxe 4te-— en
Sy A)ex Seen re,
é) x = e cost
d) x = te“{l + 2)
f) x = e-*sint + 2e* cost + 2sint — 2 cost
g) x = e~7# + eff, y = ett — e-#! h) x = Ste*! + 2e74 y =e
iN) 2% 3e' sin t + 2e' cost, y = —2e'sint + 3e' cost
Exercise 5.3
t—1
Exercise 5.4
1. For parts (a) through (j), there is given a particular solution of the equation.
a)x=1 b)x =f c) x=1-+% dx=1+254+ 72 e) x = e/2
f) x = e'(t — 1)/2 g) x = —24 2? + 3et/2 h) x = —(sin 29/3
i) x = (e'sint — 2e* cos f)/5 J) x =t+ (et sint — 2e' cos 1/5
2. There is a solution of period T which is given by
— T/2
e
XG =) eee
i or ° EF OLS 2
tet?
50) I N by “Toh OES 120s ye
3. Let x denote the displacement of the block from the position on top of the cart that it
occupies when the spring is unstressed. Then x = 1 — cost for 0 <4<1 and x =
—cos f 4- cos (¢ — 1) fon t= 1.
RE a)
4. £LV(A) =
L Q2 + w2)Q2 + RC + R/L)
coth (Am/2w)
+ R(RCX(0) — drAy(0))/” + ROA + R/L)
Exercise 6.2
6. Solutions are of the form x = >-?-9 cxt*, where cry2 = —Y—A)O+K + 1)ex/
Geek Dp If k =v, then ci42 = 0. Thus cyi2, = 0 for all m = 1, 2,°° ;
SU aS 9 MR RC heclr che Sal A GD I
7. Represent e’ by its Maclaurin series. Then x = >°¥-0 cxt*, where co = ce = 0,
eq = 1, and cro = —[1/K — 1)! co/(k — 2)! + oe Hepa /1N/K + 2K + YD,
le S21,
8. In each part, an arbitrary matrix solution is given by X = > 20 Crt, where Co is
arbitrary, Cy = ACo, (kK + 1)Ci41 = AC, + BCy_1, k => 1, and A, B are as given below.
|
0 1
9. X = o_o B™Cot?”/2"m!, Co arbitrary and B = F 0
1 O TiS)
a)
Exercise 6.5
Pee AE ON AS ern St
oe ha N= as
h) A7+ 2 7 Awe One
eee | eee
xX = f(2) ko ext® and x = g(t) 2-9 det*, where co = do = 1 and
a) f(t) = 117, cry = —6cx/[6(K + 3/2)(k + 1/2) + (k + 3/2) + 1];
g(t) = 119, cher = —6cx/[6(K + 4/3)(k + 1/3) + (k + 4/3) + 11.
b) f(t) = £93, cy, = (—1)*/ + 2V3)(4 — 4V3)- ++(K? = 2kV3);
g(t) = 1-8, cy = (—1)*/01 — 2/3)4 — 4/3) --- (KR? — 2kV/3).
c) f@) = t, cry = —ex/(K + 10k +1 4 21);
a) = las Gy, = Oy
Cyen—2, k > 0. Then x = >°%_, ext* is one solution. To obtain a second linearly
independent solution x = 11/3 D0f_9 dy 7*, let
== (k*
(E2 + 2ik) 1
“)—1 lee
cy mee c.-1 for k> LE
9. Set s = 11/3 and x() = y(s) to obtain sy” — 2y’ + 9s!°y = 0, y(0) = 0, y’(0)
xX = Po C1imt 11/3, where co = 1 and c1im = —9¢11~@—1)/11m(Q11m — 3), m > 1.
10. Solutions of the transformed system are of the form c1s?@i(s) + c2sp2(s) +
c2Ys?@1(s) Ins, where ci, c2 are constants and 1, $2 are analytic at s = 0. These
solutions approach zero as s does; solutions of the original equation approach zero as
(> o
Exercise 6.6
1. y=
yo i 1-1
ge (B)
—
_ 1/2) yr,
y, BO -|
a
0 a,
aie
9
4 . a) Let t> 04+ in cyt? yi() 4+ coyo( = 0.
b) Let t— 04 in e1f?'yi() + coye(t) = 0.
c) Note that lim;_,o+ t* does not exist.
d) Let > 0+ in cyyi() + ceyo(t) = 0.
|Ax(t — 1)]| < 214.) < 2-1 4-2) <1
pS Map h) e | 1 2/(k + 1)|. |K(A — kI)“}| < 34/5
2/(k + 1) 1
Exercise 6.7
2(—1)'-2(2k — 2)
4k(k — 2)d2; = —do,_2 +
(k — 2)'k122%=D ©
14. Hint: Show that r?x’” + rx’ + (r?\ — n?)x = 0 becomes 12y” + ty’ + (22 — n?)y =
0 if t = Vdr and y(t) = x(v).
Exercise 7.1
. Another hint: Differentiate u(t) = o()Y/(1) — Y()¢d’(1) and integrate wv’ from f; to fo.
. Use Problem 3.
. Use Sturm’s comparison theorem.
5 10) yA Gy Se G BS Uy or erp ee AEE)
. a) x = (ec; + colndr}/? is a general solution.
Ss
On
“ION
ee ha, 13
ee (42) oe
10. Suppose ψ did. Let {tₖ} be a sequence of points such that ψ(tₖ) = 0 and limₖ tₖ = t∞
exists. By continuity, ψ(t∞) = 0. Use Rolle's theorem to find a sequence sₖ such that
ψ′(sₖ) = 0 and sₖ → t∞ as k → +∞. Thus deduce that ψ′(t∞) = 0. Hence ψ(t) ≡ 0.
Exercise 7.2
1. a) Three b) One
2-3. a) There are four linearly independent periodic solutions of period 7.
b) There are precisely two linearly independent solutions of period 7. All other
solutions are unbounded.
4. There are four linearly independent periodic solutions: two have period 7 and two have
period \/27. Not every solution is periodic since a sum of functions with irrationally
related periods is not periodic.
5. x1 = —e~ 24/2 + (1 + V/2)e—V¥*#/2 R
xg = 1 —e-24/2, x3 = e724, x4 = —e7-74/2 4 e-V7t/2
6. Only (c)
8. a2 > 0, a3 = a2a\
9. x = ci sint + cecost + e~/2(e3 cos(V7 t/2) + c4 sin (V7 t/2))
10. Only (b)
11. Let x and y denote the displacements of the sheet and block respectively from equilibrium.
The equations of motion are Mx″ = −kx + a(y′ − x′), my″ = −a(y′ − x′). Note that
λ = 0 is an eigenvalue corresponding to [x, x′, y, y′]ᵀ = [0, 0, −1, 0]ᵀ. By Wall's criterion,
all the other eigenvalues have negative real parts. Now deduce that x → 0, x′ → 0, y → −1,
y′ → 0 as t → +∞. The plate must be two units wide.
Exercise 7.3
4. a) (tf) =a Re
1 F 0}|e 1 b) (A) _ ie
jen" ‘ Of‘|2 x
Exercise 7.4
1. The integrand is periodic.
2. They are bounded but do not approach zero as f—> +c:c. Make use of uniform con-
vergence to approximate g by asum > ;_, k—? cos (tk—") with error less than 4—-* F_, &-?.
Assume, for contradiction, that g(t) ~ 0 as t— +2 and consider fy = 2+ MN? for sof
ficiently large M.
3. They are all bounded (Corollary 7.7).
4. Use Theorem 7.7. a) They are at least bounded. b}d) They approach zero.
6. a) Use Theorem 7.8 and Wall’s criterion. b) Use Theorems 7.7 and 7.8.
7. d) They are all bounded.
0 0
8. Take
ake R(t)
R(t) = E Va + capand apply ; Theorem 7.10.
11. Using the decomposition L(r) -| ’ | and R(‘) = t-*(sin? AN, ome sees that
hypothesis (iii) is violated and the conclusion of the theorem does not hold either.
12. A fundamental matrix solution is
t
Exercise 7.5
Exercise 8.2
Exercise 8.3
1. Yes. Note that x²/(x² + y²) ≤ 1 for all (x, y) ≠ (0, 0).
2. Yes. Let aₖ denote the jth component of xₖ. Then 0 ≤ aₖ₊₁ − aₖ ≤ (k − 1)⁻¹
(aₖ − aₖ₋₁) for k ≥ 2. Thus 0 ≤ aₖ₊₁ − aₖ ≤ (a₂ − a₁)/(k − 1)! and aₖ₊₁ ≤ a₂ +
Σ (r = 2 to k) (a₂ − a₁)/(r − 1)!.
3. When plotted in the plane, the points form a rectangular array. List the points in
diagonal order beginning with (1, 1). Any subsequence consisting of points arranged hori-
zontally or vertically will converge. There are also subsequences converging to (0, 0).
4. No. g(m) > m for every integer m > 0.
ee — — eer a — ee ee
6. The boundary of {x: g(x) > 0} is {x: g(x) = O}.
7. Construct a sequence of points from the set which converges to the least upper bound α
of the set. Then deduce that α is in the set.
Exercise 8.4
1. Let p be a point of A and let q = (1/2π, 0). If there were an arc from q to p, it would
be given by y = sin x⁻¹ for 0 < x ≤ 1/2π and it would be true that (x, sin x⁻¹) → p
as x → 0+.
7 gg
3. a) If it were not connected, it would be the union of two closed and bounded sets
separated by a positive distance. Now use the definition of interval.
b) Assume that the range is disconnected and deduce that [a, b] is disconnected.
Exercise 8.5
4. Consider unit spheres with centers (0,0,0) and (0,0,1). These spheres separate
®? X R! into four domains.
5. In plane polar coordinates the systems have the form (a) r’ = 4, & = O and (b)r’ = 0,
2. af/dx = (1 — t°x?)/(1 + 1°x?)? is bounded for all (x, f) since —3 < a/(1 + a?) < 4
for any real a.
Exercise 8.7
11,1 — 7,1 —#? + £4/2!; e-? = > 2) (—2)*/k!}
Lite,lt+r+74+P73; Q—-)=PEot
2€=¢%
pe I be any interval containing ¢ = 0. Define x = 0 for in Jand x = #4 for ¢ not
in J.
4. dx(s) ~0Oask>+oua, O0<s<1
foes)ds
> lasko+to, 0<t<1
5. e/100!
GSE)
Exercise 8.9
1. Let ρ = x² + y² and note that ρ′ = 2ρ(1 − ρ). Now use Theorem 8.5, taking discs
for B.
2. The trajectory of any solution initiating in the first quadrant is contained in the first
quadrant. Note that u′(t) > u²(t) and v′(t) > v²(t).
4. Let u(t) = ∫ (t₀ to t) M|φ(s) − x₀| ds.
5. Show that f is globally lipschitzian.
6. Yes. Modify your argument for Problem 4.
7. a) The right sides are globally bounded.
b) The right sides are globally lipschitzian.
8. Use polar coordinates. The examples indicate that Theorem 8.6 is a stronger result
than Theorem 8.5.
9. Continuity is sufficient. Modify your solution of Problem 6.
10. Let φ denote a solution. Show that W(φ(t)) is nonincreasing; hence φ is bounded.
11. Pattern your argument after Example 2.
Exercise 8.10
- € 2 SS)
1. $(xo, to, = bo + ¢? — ali”?
2. They are at least bounded by Theorem 7.7. 87,
Alternatively, seek a solution e*'[sin
cos 81]? of the variational equations and proceed as in Example 3, Section aot
® 1 @
5 Te
imarom ct ma 0 i}
®@ 0 2
4 Use polar coordinaies.
5. Given « > Q there & a 3 > O such that ko — a < 3 bmplies
gixo, to, — dfa,m,2|
<< Rrahl +> wm
Exercise
9.2
Exervise
9.3
1. bb Ye. Yes
2. a) Show that »* + 2 fq 2 a = ¢ defines a Simple dosed curve fee all ¢ > Q@
b) Yes No. Comsder g(x) = xe,
3. b) The critical poimt » = 0 corresponds to escape; w = £7? conrespends to a Giowlar
orbit. Plot > = 2ku*/3 — x? + ¢ and consider eparatnives first,
4. Notice that there are critical points which correspond to an inverted position of the pendulum.
Exercise 9.4
where a = k/m(? and e > 0. Now show that @—> +2 ast—> to.
5. Use Theorem 9.8.
6. u(t) = c cos (4c²t), v(t) = c sin (4c²t). The period is π/2c².
Exercise 9.5
NS en = Dip
2. If not, then there is a bounded sequence of points φ(tₖ) such that dis (φ(tₖ), Ω) ≥ ε > 0
for some ε. Now use the Bolzano–Weierstrass theorem and Theorem 9.9.
3. a) Show that lim,_,,, V(@()) exists; let p and q denote distinct points of Q; and
consider sequences {s,} and {f,} such that @(s,) — p and @(t,) — q. Thus deduce
that V(p) = V(q).
b) Take V(x, y) = x? + y?.
4. a) (x — z) and (x’ — 2’) are periodic. b) Let A be irrational.
Exercise 9.6
1. In polar coordinates, dy/dx = [1 — r) + rcot 6]/[1 — r)coté — r].
2. Use polar coordinates. The ratio is e?”.
3. Let x = φ(t), 0 ≤ t ≤ 2π, denote a parameterization of C. Then
f(t) = (φ(t) − p) · (φ(t) − p)
is a differentiable real-valued function which has a minimum at t₀ in [0, 2π]. Thus
Exercise 9.8
3. The condition wR > v implies that y’ is bounded away from zero near (R, 0).
4. With r2 = (x — R)? + y?, 6 = tan—! (9/(x« — R)), the differential equations become
r = —v+oRsin#, rf’ = rw + wReos 8.
Exercise 9.9
Exercise 10.1
1. The annulus 0 < r < 1, the circle r = 1, the region r > 1, any solution path, any
region between two distinct solution paths.
2. Construct open path tubes around the limit cycle.
3. What really needs to be verified here is stability. A node or focus is quasi-asymptotically
stable by definition.
4. Use the Jordan curve theorem and Theorem 9.3.
5. All solutions have nonfinite future escape times.
6. Note that |Y(O| < W(t) I[ — 11)? + lle~"-@ < 4\W(t)\e?". Choose 6; = 1
and T = 2\In (€1/4)|.
7. To show stability note that |g(d| < rai ( + &4)-1 + 24+ wie dd +k),
where N is the greatest integer in k. To disprove asymptotic stability, note that g(t) > 1
whenever t is an integer.
Exercise 10.2
1. The equation is asymptotically stable if and only if a > 4. Use the mean value theorem
to estimate ¢sinIn¢ — ssinIns and apply Theorem 10.2 to conclude that uniform asymp-
totic stability holds for a > 1//2. Next let s, = exp [2rn + 1/4] and t, = exp [27m +
n—1 + 7/4] and show that the asymptotic stability is not uniform for $a <0/x/2.
Exercise 10.3
4. Pattern your argument after the proof of Theorem 10.4 with A(x, y, 1) = g(Ao(V/x2 + y2).
Use Gronwall's inequality (Problem 5, Exercise 7.4) with v(t) = g(t), noting that ∫₀^(+∞) g(t) dt
converges even though g is unbounded for 0 ≤ t < +∞.
5. Replace the hypothesis h(x, t) = o(|x|) by h(x, t) = g(t)b(x), where b(x) = o(|x|) and
∫₀^(+∞) |g(t)| dt converges.
Exercise 10.4
1. Note that |@(xo, to, d| < (|xo| + e2 — le—“—o). Thus, if |xo| < 1, then |F(xo)| < 1
also. By uniqueness of solutions for initial value problems, @(Xo0, fo, 1) = $(Xo, fo, t + 27).
3. Use Theorems 7.7 and 10.7.
4. The solution is x = 0, y = sin?¢, z = cos¢. It is asymptotically stable by Theorems 7.7
and 10.7.
5. It is stable, but not asymptotically stable.
7. Use Theorems 7.6 and 10.7 as in Example 4.
8. Use the hint for Problem 7.
9. Use the hint for Problem 7. One sees from van der Pol’s equation that the condition
is sufficient for asymptotic stability, but it is not necessary.
Exercise 10.5
1. Hint: Let B be symmetric. Then it is necessary that all eigenvalues of B be positive and
that all eigenvalues of AᵀB + BA be nonpositive.
3. Take V(x, x’, 1) = (p(x? + x’? )e?®, assuming that p is positive-valued and non-
increasing.
4. Use right- and left-hand derivatives.
6. It is unstable.
7. Let p be differentiable, positive-valued, and strictly increasing. Then x″ + p(t)x = 0
is unstable.