ANALYSIS
OF
NONLINEAR
CONTROL
SYSTEMS
DUNSTAN GRAHAM
Princeton University
and
Systems Technology, Incorporated
and
DUANE McRUER
Systems Technology, Incorporated
Dover Publications, Inc., New York

Copyright © 1961 by Dunstan Graham and Duane
McRuer.
All rights reserved under Pan American and Inter-
national Copyright Conventions.
Published in Canada by General Publishing Com-
pany, Ltd., 30 Lesmill Road, Don Mills, Toronto,
Ontario.
Published in the United Kingdom by Constable
and Company, Ltd., 10 Orange Street, London WC 2.
This Dover edition, first published in 1971, is an
unabridged and unaltered republication of the
work originally published by John Wiley & Sons,
Inc., in 1961.
International Standard Book Number: 0-486-61014-4
Library of Congress Catalog Card Number: 72-179143
Manufactured in the United States of America
Dover Publications, Inc.
180 Varick Street
New York, N. Y. 10014

To our teachers:
men, books, and machines
PREFACE
This book is intended primarily for those who are, or those who are
about to become, practicing control system engineers.
When we, the authors, began our professional careers, there were no
books on our subject. In the course of our work we collected, collated, and
sometimes corrected material from a very large number of sources. We
also, naturally, developed our own original thoughts on various aspects of
nonlinear behavior in control systems. We felt that an integrated presenta-
tion of what we have learned would be of value to others like ourselves,
both for instruction and as a reference.
The control engineer is interested in the stability, accuracy, and response
characteristics of physical systems. For linear constant-parameter systems
powerful methods are available to determine these characteristics. The
practical application of these techniques has been a paramount factor in
the recent enormous growth of automatic control technology. A system
represented as linear, however, is a mathematical abstraction never
encountered in a real world. It is often true that experimental facts do not
correspond with any prediction of linear constant-parameter theory. In
this case, nonlinear theory is essential to the description and understanding
of the distinctive behavior produced by the introduction of nonlinearities
into physical systems. Although the practicing engineer continually
encounters nonlinearities, he may or may not know how to deal with them.
Very likely his education in mathematics, physics, and control system
engineering has not adequately prepared him to cope with the problems
that arise in connection with the analysis of actual, and therefore non-
linear, control systems. It is our hope that the present volume will remedy
this defect.
The purpose of this book is to present the essential mathematical tools
for solving the analysis problems that arise in the design of nonlinear
control systems. By devoting the book almost entirely to the analysis of
control systems we have been able to concentrate on the methods that, to
a large degree and in a practical way, allow the engineer to answer his
fundamental questions about stability, accuracy, and response for non-
linear systems. To this end, the largest portion of the book treats two
subjects: a general theory of “quasi-linear” systems (for the description
of periodic and random input behavior), and topological phase space tech-
niques (for the description of transient behavior).
The philosophy adopted in writing the book was that everything included
should be useful; that the reader should be led from the familiar and simple
toward the more difficult aspects of the subject; and finally that clarity and
“feel” for the physical aspects of the problem should be emphasized. Our
bias throughout has been that of practicing engineers. Therefore, our
object and our method have been to present adequate theory, and to
immediately illustrate the application of the theory with a wide variety of
physically meaningful, practical problems. This approach has been adopted
in deliberate preference to an attempt to achieve mathematical elegance
and to present all the details required by rigor.
On the other hand, the book deals with mathematical abstractions of
physical systems rather than with the physical characteristics of the system
elements themselves. By eliminating any special emphasis on the physical
characteristics of system elements peculiar to various branches of engineer-
ing science, it is our hope that the book may be read with profit alike by
aeronautical, chemical, electrical, and mechanical engineers, as well as
possibly by those physicists, mathematicians, economists, biologists, and
psychologists who may be concerned with nonlinear feedback systems.
Insofar as possible, results which can be applied to the solution of
problems other than the ones we have considered are summarized in charts
and tables. We expect that several of these will have an outstanding
utilitarian value. Some readers, already knowledgeable, will find immediate
uses for these data.
The text, however, aims to teach by means of discussion. It is for the
graduate student or practicing engineer who has an interest in nonlinear
control system analysis, but who has not necessarily previously undertaken
a study of the subject. No background beyond that supplied by first
courses in differential equations, circuit analysis, and mechanics is pre-
supposed. (A familiarity with methods of analysis and synthesis of linear
control systems is helpful in supplying a frame of reference and motivation
for the study of nonlinear control systems but is not absolutely required.)
Any necessary mathematics beyond the prerequisite course in differential
equations is introduced as the work progresses.
The dedication records a large measure of our indebtedness as authors.
We further wish to acknowledge here the help of the individuals whose
several skills we required in the steps along the way to the completion of a
book. Our especial thanks are due Mr. Ronald O. Anderson for his careful
and complete review of the manuscript. The changes and corrections which
he suggested helped to make the book what it now is. Our thanks are also
due to Mr. Richard A. Peters, who prepared a part of the material on relay

CONTENTS

Chapter 1  Introduction
    1.1  The Differential Equations of Control Systems and Block Diagrams
    1.2  Nonlinearities Described and Classified
    1.3  Behavior of Nonlinear Parameter Systems
    1.4  Difficulties of Nonlinear Analysis

Chapter 2  General Techniques for Solving Nonlinear Control Problems
    2.1  Liapounoff Stability
    2.2  Direct Solutions
    2.3  Approximate Solutions in Series
    2.4  Step by Step Integration
    2.5  Piecewise Linear Solutions
    2.6  Evaluation of Methods of Nonlinear Control Analysis

Chapter 3  Introduction to Quasi Linearization and the Describing Function Technique

Chapter 4  Sinusoidal Describing Functions for Isolated Nonlinear Elements
    4.1  Interpretation of the Sinusoidal Describing Function and Correlation Concepts
    4.2  Sinusoidal Describing Functions of Simple Nonlinearities
    4.3  Sinusoidal Describing Functions for Frequency Invariant Complex Nonlinearities
    4.4  Sinusoidal Describing Functions for Frequency Variant Complex Nonlinearities

Chapter 5  Quasi-Linear Closed Loop Systems with Periodic Outputs
    5.1  The Quasi-Linear Closed Loop System for Periodic Phenomena
    5.2  Examples of Periodic Output Closed Loop Systems Formulated from Classical Nonlinear Equations
    5.3  Extension of Linear Feedback System Graphical Analysis to Nonlinear Systems
    5.4  Estimation of Stability and the Importance of Harmonics
    5.5  The Gain Phase Plot Used to Determine the Conditions for a Limit Cycle and Its Stability
    5.6  The Closed Loop Response

Chapter 6  Random Input Describing Functions
    6.1  Transition from Deterministic to Probabilistic Descriptions of Time Signals
    6.2  Gaussian Input Describing Functions for Isolated Nonlinear Elements
    6.3  The Measurement of Quasi-Linear Describing Functions with Stationary Inputs
    6.4  Closed Loop Systems with Gaussian Inputs

Chapter 7  The Phase Plane Method
    7.1  Trajectories and Singular Points
    7.2  Phase Plane Trajectories for Linear Systems
    7.3  The Method of Isoclines
    7.4  Special Constructions in the Phase Plane
    7.5  Constructions for Time
    7.6  Preliminary Appreciation of the Phase Plane Technique

Chapter 8  Trajectories and Stability
    8.1  Nonlinear Performance Analyzed on a Piecewise Linear Basis
    8.2  Examples of the Trajectories of Nonlinear Second-Order Control Systems
    8.3  A Summary of Mathematical Theorems on Limit Cycles
    8.4  The Mathieu Equation as a Stability Criterion
    8.5  The Second or Direct Method of Liapounoff

Chapter 9  Relay Servos, Switching, and Programmed Controllers
    9.1  Relay Servomechanisms and Regulators
    9.2  Programmed Controllers and Optimum Switching
    9.3  Trajectories for Nonautonomous and Higher Order Systems

Chapter 10  Epilog and Consequence
Appendix I    Amplitude Ratio-Decibel Conversion Chart
Appendix II   Amplitude Ratio Departures from the Asymptotes and Phase Angle Curves
Appendix III  The Routh and Hurwitz Stability Criteria
Appendix IV   The Nichols Chart

Author Index
Subject Index

1
INTRODUCTION
“Nonlinearities” are features of some dynamic system elements which
produce distinctive behavior and preclude an adequate mathematical
analysis of system behavior based on linear models. All physical systems
are nonlinear and have time-varying parameters in some degree. This is
true if for no other reason than that there are always some limits, such as
mechanical stops and fuses, to the excursions of the variables; and the
parts fatigue, corrode, or otherwise deteriorate with time. Where the
effect of the nonlinearity is very small, or if the parameters vary only
slowly with time, linear constant-parameter methods of analysis can be
applied to give an approximate answer which is adequate for engineering
purposes. The analysis and synthesis of physical systems, predicated on the
study of linear constant-parameter mathematical models, has been, in fact,
an outstandingly successful enterprise. A system represented as linear,
however, is to some extent a mathematical abstraction that can never be
encountered in a real world. Either by design or because of nature’s ways
it is often true that experimental facts do not, or would not, correspond
with any prediction of linear constant-parameter theory. In this case non-
linear or time-varying-parameter theory is essential to the description and
understanding of physical phenomena.
The control engineer is interested in the stability, accuracy, and speed of
response of systems containing nonlinearities, but he finds these qualities
more difficult to understand and to predict when the system can no longer
be described by a linear idealization.
Regardless of the exact nature of the nonlinearity or the purposes of
the designer, there are some general characteristic patterns of nonlinear
behavior which are impossible in a linear system. In a constant-parameter
linear system, for example, the shape of the time response is independent of
the size of the input or initial condition; and stability, or the lack of it, is a
property of the system. In a nonlinear system, on the other hand, the
nature of the time response, and in fact stability, is usually dependent on
the input or initial condition. New frequencies—harmonics and subhar-
monics of the input frequencies—are generated by nonlinear components.
Constant-parameter linear components will respond only with the fre-
quencies present in the input. Furthermore, the peculiar phenomenon of
limit cycles, periodic oscillations of fixed frequency and amplitude, cannot
exist in a linear system.
The several ways in which the behavior of nonlinear systems can be re-
markably different from the behavior of linear systems represent both the
weaknesses and possible strengths of nonlinear control systems.
Whether the nonlinearity is undesirable or intended, the objective of
nonlinear analysis is to predict the behavior of the system. Linear analysis
inherently cannot predict those features of behavior which are character-
istic of nonlinear systems. For this reason different approaches must be
applied.
This book is concerned with practical methods of engineering analysis
for nonlinear control systems. Following an introductory section on the
mathematical representation of control systems, nonlinear components
and nonlinearities are defined, described, and classified. There is then a
section on the characteristics of nonlinear behavior and one on the diffi-
culties presented by the analysis of nonlinear problems. The solution of
nonlinear differential equations is discussed, and the two major engineering
approaches are then introduced. The describing function technique and
phase plane method are presented in detail, and are applied to feedback
control system design problems, together with comments on the successes
and failures of nonlinear analysis and the promise of improved perform-
ance in the synthesis of nonlinear control systems.
1.1  THE DIFFERENTIAL EQUATIONS OF CONTROL
SYSTEMS AND BLOCK DIAGRAMS
The mathematical study of nonlinear feedback control systems is a part
of the larger field of nonlinear mechanics, but it is specialized by some
properties of feedback control itself. In order, therefore, to provide a
basis for the discussion of nonlinear control systems, the main features of
feedback control and the pertinent highly developed techniques of linear
control system analysis are summarized here.
Control systems are designed to be useful, and the objective of the
control engineer is to satisfy an explicit or implied engineering specifica-
tion. Every control system comprises a controlled element with one or
more input variables which may be manipulated to produce an output
response. The entire system is presumed to consist of cause and effect
elements. In a feedback control system the response is measured by a
feedback element and is compared to some set or desired value. If there is
a difference (error), the control elements are operated so as to tend to
force the output measurement to match the set or desired value. A feed-
back control system serves its purpose only when the output response is
regulated, slaved, or “‘servoed” to the set value. The speed and accuracy
with which this is accomplished are measures of the performance of the
control system. Unfortunately, the same factors which tend to improve
speed and accuracy often produce instability, and if the system oscillates
wildly it is useless. Any useful control system, therefore, is designed to
secure, usually in turn,
1. stability
2. accuracy
3. speed of response
These qualities are predictable by control system analysis techniques.
In addition, the system must be designed for
4. reliability
5. minimum cost
These qualities are a function of detailed mechanical and electrical design,
subjects which are considered to be beyond the scope of this book.
The most concrete way to ascertain whether or not any system possesses
the proper qualities of stability, accuracy, and speed of response is to
measure physical phenomena when the actual system is subjected to a
series of tests. This empirical approach, however, is ordinarily precluded
in the early design phases of a control system project, as well as at other
times when the physical system may be unavailable for the purposes of
testing, or where, perhaps, the tests may be dangerous or very expensive.
Fortunately, an engineer can come close to achieving substantially equiva-
lent results by performing “experiments” with mathematical models. By
analysis, that is, by the determination from the system model and several
inputs what the outputs would be, it is possible to assess the degree to
which the system satisfies the requirements. This process may have to be
repeated several times as changes to improve the performance are made in
the representation of the system. (Mathematical synthesis, that is, the
determination of the mathematical model from the inputs and desired
outputs, is a more difficult process, and methods for direct synthesis of
nonlinear systems are not well developed.)
For an engineer, the most generally useful mathematical model is the
set of differential equations describing the balance of forces, moments,
voltages, and currents occurring within and applied to the devices which
constitute the control system.* The equations may be written in the con-
ventional way, or may be shown in a diagrammatic fashion.
When they have been properly derived, all of the fundamental infor-
mation required to define the physical behavior of the system is inherent in
these differential equations. Interpretation of this information is the task
of the control system analyst. The equations themselves show the physical
relationships governing how an element or system responds. Solutions of
the differential equations will show exactly what an element or system will
do in response to a particular input. A comprehensive set of input-
response pairs for a variety of inputs would be transient response models.
[Figure: a mass m suspended by a spring (force K(x)) and a damper (force B(dx/dt)), with displacement x and applied force f(t).]
Figure 1-1. A spring-mass-damper system.
These serve to define the behavior of an element or system. Since, how-
ever, the solution of differential equations for complicated systems can be
both enormously difficult and very tedious, the system engineer is inclined
to use additional special methods of interpreting the equations. He
employs methods which are easier to apply and which at the same time
promote an appreciation of the physical aspects of the problem.
Some types of differential equations are more difficult to solve than other
types, so the initial step in control system analysis usually involves the
approximation of the most complete equations describing the control
* Several books which illustrate the description of physical elements in terms of
differential equations are:
M. F. Gardner, J. L. Barnes, Transients in Linear Systems, John Wiley & Sons, New
York, 1942.
W. C. Johnson, Mathematical and Physical Principles of Engineering Analysis,
McGraw-Hill Book Co., New York, 1944.
R. A. Bruns, R. M. Saunders, Analysis of Feedback Control Systems, McGraw-Hill
Book Co., New York, 1955.
R. Oldenburger, Mathematical Engineering Analysis, The Macmillan Co., New York,
1950. (Dover reprint.)
system by a set which is easier to solve. There is particular power and
elegance in the methods of solution of linear differential equations with
constant coefficients. Therefore, as many system elements as may be fairly
described in linear terms are represented that way. Where this is impos-
sible the “nonlinearities” may sometimes be given a functional represen-
tation which permits the analysis to proceed along lines other than the one
of obtaining a solution to the equations. An alternative to linearization
[Figure: spring force K(x) (output) plotted against displacement x (input).]
Figure 1-2. The nonlinear spring characteristic.
may be to reduce the order of the equation by approximating the impor-
tant features of system behavior with an equation of lower order which is
easier to solve.
Control systems with which this book deals are described and classified
primarily in terms of the differential equations which describe the action of
the systems and their elements. Alternatively, and equivalently, the con-
trol systems are represented in terms of diagrams of the dynamic action of
the system.
In order to illustrate the setting up of equations, linearization when
justified, and the diagrammatic representation of systems, consider two
examples.
For the first example the physical system is shown schematically in
Figure 1-1. A mass is suspended from a fixed point by means of a spring.
Motion of the mass is opposed by the action of a damper. Figure 1-2
shows the spring force as a function of the deflection of the mass from its6 INTRODUCTION
equilibrium position. For small deflections from equilibrium the spring
may obey Hooke’s law, that is, “as the deflection so the force [in proportion].”
When the spring is fully compressed Hooke’s law may still hold
as the material in the coils is deformed, but the spring gradient is increased
sharply. On the other hand, as the spring is stretched, it is unwound and
the yield point of the material is reached. The gradient here is low. With
the application of sufficient force the spring may become a straight wire
[Figure: damper force B(dx/dt) plotted against velocity dx/dt, showing stiction, Coulomb friction (±F_c), viscous friction, quadratic friction, and the total force.]
Figure 1-3. The nonlinear damper characteristic.
and Hooke’s law applies again, but the gradient is very high. Finally the
wire breaks. Over a limited range the spring force, K(x), is a straight-line
function of the deflection x, and the slope of the functional relationship,
∂K(x)/∂x, is a constant, k. Of course, it is too much to expect that any
physical spring would supply a strictly proportional force for any value of
the deflection.
The nature of the damper forces is illustrated in Figure 1-3. These
forces may well be even more complicated than the ones introduced by the
action of the spring. Contact between the damper case and the piston may
give rise to a stiction force which decreases rapidly from its initial value as
the velocity, dx/dt, increases from zero, and also to a Coulomb friction
force which is invariant with speed, but which is always directed in oppo-
sition to the velocity of the mass. A portion of the damper’s force may be
proportional to velocity (viscous friction) while another part may vary as
the square (quadratic friction) or some other power of the velocity. The
Coulomb friction force can be represented by F_c sgn (dx/dt) where the sgn
notation means “takes the sign of.” The total damper force can be
written B(dx/dt) since each of the components of the total force is a func-
tion of the velocity of the mass.
By summing the force applied directly to the mass, f(t), the spring force,
K(x), the total damper force, and the inertial reaction of the mass,
m d²x/dt², in accordance with Newton’s laws and d’Alembert’s principle,
under the assumption that the mass is constant,

ΣF = f(t) − K(x) − B(dx/dt) = m (d²x/dt²)

or

m (d²x/dt²) + B(dx/dt) + K(x) = f(t)      (1-1)

or

mẍ + B(ẋ) + K(x) = f(t)
where x = displacement of the mass = deflection of the spring
t = time
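Equation 1-1 has no general closed-form solution once B and K are genuinely nonlinear, but it can always be integrated step by step (a technique taken up in Chapter 2). The sketch below is ours, not the book's; the particular hardening spring and viscous-plus-Coulomb damper are hypothetical shapes chosen only to make the example concrete.

```python
import math

def simulate(m, B, K, f, x0, v0, dt=1e-3, t_end=5.0):
    """Integrate m*x'' + B(x') + K(x) = f(t) by fourth-order Runge-Kutta."""
    def deriv(t, x, v):
        return v, (f(t) - B(v) - K(x)) / m

    t, x, v = 0.0, x0, v0
    while t < t_end:
        k1x, k1v = deriv(t, x, v)
        k2x, k2v = deriv(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = deriv(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = deriv(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return x, v

def K(x):        # hypothetical hardening spring: force grows faster than kx
    return 2.0*x + 5.0*x**3

def B(v):        # hypothetical viscous + Coulomb damper, with B(0) = 0
    return 0.4*v + (0.1*math.copysign(1.0, v) if v else 0.0)

# Release the mass from x = 1 with no applied force; the friction terms
# dissipate the initial spring energy and the motion dies out.
x5, v5 = simulate(m=1.0, B=B, K=K, f=lambda t: 0.0, x0=1.0, v0=0.0)
```

Because the functions B and K are easy to swap out, the same few lines serve to explore any of the damper or spring characteristics of Figures 1-2 and 1-3.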
Each of these equivalent equations expresses a relationship between a
dependent variable, x, an independent variable, t, and total derivatives of
the dependent variable with respect to the independent variable. Each
equation is, therefore, an ordinary differential equation. Since the spring and
damper forces are functions of the dependent variable and its derivatives,
the equation is a nonlinear ordinary differential equation. The general
form of such an equation is:

fₙ(x, t) dⁿx/dtⁿ + fₙ₋₁(x, t) dⁿ⁻¹x/dtⁿ⁻¹ + · · · + f₁(x, t) dx/dt + f₀(x, t) x = q(t)      (1-2)

where the notation fᵢ(x, t) is understood to mean fᵢ(x, dx/dt, d²x/dt², · · ·,
dⁿx/dtⁿ, t). This is the general type of equation which is commonly used
to describe the situations of control system engineering.

In most cases these equations are “ordinary” only in name. Solving
them often involves lengthy computations to obtain specific results which
cannot thereafter be generalized.

Because the mathematical model which most accurately describes the
physical situation of Figure 1-1 is difficult to deal with, an approximate
model is indicated. Two assumptions might be made:

1. The damping force, B(dx/dt), is a constant, b, times dx/dt.
2. The spring characteristic, K(x), is a constant, k, times x.
The approximation to Equation 1-1 would then become:

m d²x/dt² + b dx/dt + k x = f(t)      (1-3)
This equation is an ordinary linear differential equation with constant
coefficients. It is ordinary because there are no partial derivatives;
linear because there are no powers, such as (d²x/dt²)³, products, such as
x(dx/dt), or functions, such as sin x, of the dependent variable and its
derivatives; and the coefficients m, b, and k are taken to be constants.
The most appropriate values of the constants b and k are determined by
consideration of the region of operation of the system. If, for example,
K(x) were represented by the solid curve of Figure 1-4, the dashed straight
Figure 1-4. Possible linearizations of the spring characteristic.
lines would represent several different linear approximations to the true
curve. If the operating region of interest were small and symmetrical
about the origin, the slope of the line wv would be used. If the operation
were centered about the point a, the tangent through that operating point
would be used to represent the spring characteristic. (In this case the
variable x would be replaced by an equilibrium value, X, plus a perturbed
value, x, measured from the equilibrium value.) If the operating range
were large and symmetrical about the origin, the line yz might be used.
Great care, however, must be exercised in such a case to avoid the
unintentional suppression of important features of system performance at
small amplitudes of motion about possible operating points within this
large operating range.
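The quality of such a tangent-line fit is easy to examine numerically. In the sketch below (ours, with a cubic characteristic standing in for the solid curve of Figure 1-4) the gradient at an operating point X is estimated by a central difference, and the tangent-line error is evaluated for a small and a large excursion:

```python
def K(x):                      # hypothetical hardening spring characteristic
    return 2.0*x + 5.0*x**3

def gradient(f, x, h=1e-6):    # central-difference estimate of df/dx
    return (f(x + h) - f(x - h)) / (2.0*h)

X = 0.5                        # operating point, like point "a" of Figure 1-4
k = gradient(K, X)             # tangent gradient: 2 + 15*X**2 = 5.75

# Tangent-line (linearized) prediction K(X) + k*dx versus the true curve:
err_small = abs(K(X + 0.01) - (K(X) + k*0.01))   # small excursion: good fit
err_large = abs(K(X + 0.50) - (K(X) + k*0.50))   # large excursion: poor fit
```

The small-excursion error is of second order in the excursion, which is precisely why the perturbation variable x must stay close to the equilibrium value X.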
Whenever the true characteristics of an element or a system are approxi-
mated by a straight line, it is said that the system has been linearized.
Where the equation which describes the system or element is a nonlinear
equation, the system or element itself is said to be nonlinear.
As a second example of linearization, illustrating how the selection of an
operating point and other assumptions may be employed to reduce the
equations of motion to a linear form, consider the coplanar tracking prob-
lem with the geometry shown in Figure 1-5. The notation used here could
apply to the automatic approach of the aircraft to a runway, to an inter-
ceptor missile, or to still other situations. An automatic control system
for such an airplane or missile would contain the situation of Figure 1-5
as a kinematic feedback.
[Figure: P, the target (point being tracked); R = range; Ω = angular velocity of the line of sight; E = elevation angle; U = instantaneous velocity vector; Γ = flight path angle.]
Figure 1-5. The geometry of the coplanar tracking problem.
From the figure:

dR(t)/dt = −U(t) cos E(t)      (1-4)

R(t)Ω(t) = U(t) sin E(t)      (1-5)

Ω(t) = d/dt [E(t) + Γ(t)]      (1-6)
Because of the transcendental and product terms, both Equations 1-4 and
1-5 may be recognized as nonlinear differential equations.
In order to obtain the dynamic information which would be of interest
in an automatic tracking problem, these equations should be modified to a
simpler form which still allows the flight path angle and speed of the air-
craft to vary. A reasonable approach might be one in which a set of linear
differential equations would define the dynamic characteristics for small
changes in the variables about mean or operating values. This method,
known as the method of small perturbations, is fundamental in the
general process of linearization and is analogous to the straight-line10 INTRODUCTION
approximations of Figure 1-4. It has been applied with notable success to
a wide variety of the situations of engineering science. *
The method can be illustrated by linearizing the coplanar tracking prob-
lem for a typical operating condition. This is assumed to be a collision
course where the range rate dR/dt is approximately constant, and the
angular velocity of the line of sight, Ω, is approximately zero. By putting
these assumptions in a mathematical form, two sets of equations may
be obtained. The first of these sets will be for the operating point or mean
course, and the other, containing the information on the dynamics, will be
a set of linear differential equations for small deviations or perturbations
about the mean conditions.
Let R = R₀ + ρ₀t + r
    Ṙ = ρ₀ + ṙ
    R̈ = r̈
where R = total range
      R₀ = range at time, t = 0
      ρ₀ = steady-state range rate
      r = perturbed range from R₀ + ρ₀t

    Ω = Ω₀ + ω
where Ω = total angular velocity of line of sight
      Ω₀ = steady-state angular velocity of line of sight
      ω = perturbed angular velocity of line of sight

    E = E₀ + e
    Ė = ė
where E₀ = steady-state elevation angle
      e = perturbed elevation angle

    U = U₀ + u
    U̇ = u̇
where U = total flight velocity
      U₀ = steady-state flight velocity
      u = perturbed flight velocity

    Γ = Γ₀ + γ
    Γ̇ = γ̇
where Γ = total flight path angle
      Γ₀ = steady-state flight path angle
      γ = perturbed flight path angle
* R. W. Jones, “Stability Criteria for Certain Non-linear Systems,” in Automatic and
Manual Control (A. Tustin, Ed.), Butterworths Scientific Publications, London, 1952,
pp. 319-324.DIFFERENTIAL EQUATIONS OF CONTROL SYSTEMS i
If the perturbed quantities are small, their products are negligible, and
also:

cos e ≈ 1
sin e ≈ e

Then Equations 1-4, 1-5, and 1-6 become:

ρ₀ + ṙ = −(U₀ + u)(cos E₀ − e sin E₀)
        = −U₀ cos E₀ + eU₀ sin E₀ − u cos E₀      (1-7)

(R₀ + ρ₀t + r)(Ω₀ + ω) = (U₀ + u)(sin E₀ + e cos E₀)
        = U₀ sin E₀ + eU₀ cos E₀ + u sin E₀      (1-8)

Ω₀ + ω = γ̇ + ė      (1-9)
The equations which define the steady-state operating point are obtained
from these by letting the perturbations go to zero.
ρ₀ = −U₀ cos E₀      (1-10)

(R₀ + ρ₀t)Ω₀ = U₀ sin E₀ = 0      (1-11)

Ω₀ = 0      (1-12)
The perturbation equations are now obtained by subtracting the operating
point Equations 1-10, 1-11, and 1-12 from the more complete ones repre-
sented by 1-7, 1-8, and 1-9.
ṙ = −u      (1-13)

(R₀ + ρ₀t)ω = −ρ₀e      (1-14)

ω = γ̇ + ė      (1-15)
Substituting Equation 1-15 in Equation 1-14 and letting R₀/ρ₀ = −τ (the
negative of the time to go from t = 0 to collision), one may obtain:

ρ₀[(t − τ)(ė + γ̇) + e + γ] = ρ₀γ      (1-16)

Noting that d/dt[(t − τ)(e + γ)] = (t − τ)(ė + γ̇) + (e + γ), substituting,
and rearranging, the final equations giving the relationships between the
perturbed quantities become:

e = −γ − [1 / (τ(1 − t/τ))] ∫₀ᵗ γ dt      (1-17)

ṙ = −u      (1-18)
These equations show the dynamic relationships which govern small
motions about the mean course. If the perturbations were infinitesimals
these linear integrodifferential equations could be considered to be exact.
An indication of the errors in the approximation is the comparison be-
tween e and sin e and the magnitude of the perturbation products compared
to the terms which are retained in the equations.
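These error estimates can be put in numbers. The sketch below is ours; the perturbation sizes are arbitrary small values chosen only to illustrate the orders of magnitude involved.

```python
import math

e = 0.05    # perturbed elevation angle, radians (about 3 degrees)
u = 0.02    # perturbed flight velocity, as a fraction of U0

# Errors of the replacements cos(e) ~ 1 and sin(e) ~ e:
cos_err = abs(math.cos(e) - 1.0)    # about e**2/2
sin_err = abs(math.sin(e) - e)      # about e**3/6

# A dropped perturbation product next to a retained first-order term:
dropped  = abs(u*e)                 # second-order product, neglected
retained = abs(u)                   # first-order term, kept
ratio = dropped / retained          # equals e: a 5 per cent error here
```

For perturbations of a few per cent the neglected terms are smaller than the retained ones by the same few per cent, which is the practical justification for the method.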
An alternative process for finding the perturbation equations can be
carried out by recognizing that the division of a total quantity into steady-
state and perturbation terms is equivalent to approximating the total
quantity by the first two terms of a Taylor series expansion. The final
perturbation equations can then be found simply by taking the total
differential. Consider, for example, Equation 1-4, Ṙ = −U cos E, which
has the total differential

dṘ = (dE)U₀ sin E₀ − (dU) cos E₀      (1-19)
If the differentials are replaced by the lower case letters indicating the
perturbed quantities and the operating point condition E₀ = 0 is inserted,
Equation 1-19 may be identified as the equivalent to Equation 1-18. The
principal advantages of the more lengthy procedure used in deriving
Equations 1-17 and 1-18 over the one illustrated here lie in the emphasis
placed upon the operating point equations and the formality required in
assuming that the perturbation products are negligible relative to other
terms in the equation.
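The total-differential computation can be mimicked with finite differences: perturb each variable of Ṙ = −U cos E in turn about the operating point and read off the coefficients of Equation 1-19. (The sketch is ours; the finite-difference step stands in for the formal differential.)

```python
import math

def R_dot(U, E):                 # Equation 1-4: dR/dt = -U*cos(E)
    return -U * math.cos(E)

U0, E0 = 1.0, 0.0                # collision-course operating point, E0 = 0
h = 1e-6                         # finite-difference stand-in for a differential

# Coefficients of dU and dE in the total differential (Equation 1-19):
coef_dU = (R_dot(U0 + h, E0) - R_dot(U0 - h, E0)) / (2*h)   # -cos(E0) = -1
coef_dE = (R_dot(U0, E0 + h) - R_dot(U0, E0 - h)) / (2*h)   # U0*sin(E0) = 0

# With E0 = 0 only the -u term survives, reproducing Equation 1-18: rdot = -u.
```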
Although Equations 1-17 and 1-18 are linear, one of the coefficients has
the factor 1/(1 − t/τ). This coefficient varies with time, t. Such an equation
has the general form:

fₙ(t) dⁿx/dtⁿ + fₙ₋₁(t) dⁿ⁻¹x/dtⁿ⁻¹ + · · · + f₁(t) dx/dt + f₀(t) x = q(t)      (1-20)
It is a linear differential equation with time-varying coefficients. Equations
of this type have solutions with some of the formal properties of linear
differential equations with constant coefficients, but the solutions often
exhibit behavior analogous to solutions of nonlinear differential equations.
The formal properties of the solutions to linear time-varying coefficient
equations are seldom of much use in engineering calculations, and general
methods for treating nonlinear differential equations can be used to treat
these linear differential equations with time-varying coefficients.
Equation 1-17 might be transformed into a linear equation with constant
coefficients by considering only the values of time much less than the
time-to-go (t ≪ τ). Then:
e = −(1/U₀τ) ∫₀ᵗ y dt    (1-21)
This approximation might be useful for some purposes and might represent
another restriction accepted in order to facilitate the analysis.
The derivation of the linearized perturbation equations given above
serves to stress the underlying assumptions, and to emphasize the restric-
tions on the magnitudes of the perturbation quantities and the type of
steady-state operation. Where the linearizing assumptions are not justi-
fied, the analysis of the nonlinear mathematical model must be carried out
by one or more of the methods of nonlinear analysis.
An alternative form of the mathematical model, entirely equivalent to
the differential equation itself, is the block diagram. The symbolism of the
block diagram is the common language of control engineers.
If symbols are defined so as to represent the frequently occurring func-
tions and operations, such as the ones shown in Figure 1-6, then the
spring-mass-damper system of Equation 1-1, for example, can be repre-
sented by the diagram of Figure 1-7. Here each integrator represents an
operator working so as to transform its input into the integral of the input
with respect to time. The transfer elements inscribed K(x) and B(dx/dt)
are intended to represent the operator functions shown in Figures 1-2 and
1-3. For an input x or dx/dt the output of the element is K(x) or B(dx/dt).
These elements are drawn as if they were feedback elements. That is, they
return or feed back signals to the input or summing point. The whole
diagram, in fact, has the form of the block diagram of a typical feedback
control system, or, with very slight changes in the symbols, of the com-
puter diagram for an analog computer solution of the differential equation.*
This is no accident. Control system block diagrams, analog computer
diagrams, and the system equations of motion are intimately related, and,
in fact, are simply different ways of expressing the same information. The
differential equation of this spring-mass-damper system (Equation 1-1),
for example, can be derived by “translating” the symbols of the block
diagram.
In some cases an engineer, after having written and inspected the equa-
tions of motion, may draw the diagram as an aid in visualizing how the
system does or should operate. In other cases, the block diagram comes
first, drawn directly from physical considerations or from the schematic
representation of the system.
Unlike the schematic diagram (such as Figure 1-1) which is intended to
* G. A. Korn, T. M. Korn, Electronic Analog Computers, McGraw-Hill Book Co.,
New York, 1952.
C. A. A. Wass, Introduction to Electronic Analog Computers, McGraw-Hill Book Co.,
New York, 1955.
C. L. Johnson, Analog Computer Techniques, McGraw-Hill Book Co., New York,
1956.
Figure 1-6. Linear operators. (For each element the figure gives the transfer function, its pole-zero plot, and its amplitude ratio and phase angle curves: the integrator, differentiator, first-order lag, first-order lead, second-order lag, second-order lead, and an approximate representation of the time delay.)
illustrate how a particular element or system works—its scheme of opera-
tion—the block diagram is an abstraction like the differential equation.
A block diagram represents the functional relationship of the various
elements in a system. Both linear and nonlinear elements may be repre-
sented by blocks in the block diagram. In the case of linear constant-
parameter elements, that is, elements whose response to any input is
adequately described by the solution to a linear, constant-coefficient differ-
ential equation, the blocks in the block diagram are inscribed with the
Figure 1-7. Block diagram of the spring-mass-damper system.
transfer function of the element. The transfer function is one possible
mathematical model of a linear element. It is defined as the ratio of the
Laplace transform of the output response of the element to the Laplace
transform of its input with initial conditions all zero. Alternatively, since
the Laplace transform of a unit impulse is 1, the transfer function of a sys-
tem or element is the Laplace transform of the weighting function or impulse
response of the system or element.* Thus, for example, the response x(t)
of the linear spring-mass-damper system of Equation 1-3 to a unit step
function f(t) = u(t) is:
x(t) = (1/k){1 − [e^(−ζωₙt)/√(1 − ζ²)] sin [ωₙ√(1 − ζ²) t + tan⁻¹(√(1 − ζ²)/ζ)]}    (1-22)
where ζ = damping ratio = b/2√(mk)
ωₙ = undamped natural frequency = √(k/m) †
* H. M. James, N. B. Nichols, R. S. Phillips, Theory of Servomechanisms, McGraw-
Hill Book Co., New York, 1947, pp. 48-50. (Dover reprint.)
† The nondimensional parameters ζ and ωₙ represent combinations of the system
constants which appear as coefficients in the differential equation. To say that a system
has constant parameters implies that it is described by a linear differential equation with
constant coefficients.
The Laplace transform of x(t) = X(s) is*
X(s) = (1/m)/[s(s² + 2ζωₙs + ωₙ²)]    (1-23)
and the Laplace transform of f(t) = F(s) is
F(s) = 1/s, for f(t) = the unit step function, u(t)    (1-24)
The ratio of the two Laplace transforms
X(s)/F(s) = (1/m)/(s² + 2ζωₙs + ωₙ²)    (1-25)
is the transfer function of the linearized spring-mass-damper system. The
transfer function is an invariant property of a linear constant-parameter
element or system. In particular, it is completely independent of the size or
shape of the input and inherently provides more information than any
finite catalog of input-response pairs.
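The agreement between the closed-form step response and the transfer function can be checked numerically. The sketch below assumes illustrative values m = 1, b = 0.6, k = 4 (not from the text) and uses the SciPy signal tools to solve the transfer function of Equation 1-25 for a unit step.

```python
# Numerical check of the step response formula against the transfer function
# X(s)/F(s) = (1/m)/(s^2 + 2*zeta*wn*s + wn^2). Values m, b, k are illustrative.
import numpy as np
from scipy import signal

m, b, k = 1.0, 0.6, 4.0
zeta = b / (2 * np.sqrt(m * k))      # damping ratio
wn = np.sqrt(k / m)                  # undamped natural frequency

sys = signal.TransferFunction([1 / m], [1, 2 * zeta * wn, wn**2])
t, x_num = signal.step(sys, N=500)

# Closed-form underdamped step response (Equation 1-22)
wd = wn * np.sqrt(1 - zeta**2)
phi = np.arctan2(np.sqrt(1 - zeta**2), zeta)
x_an = (1 / k) * (1 - np.exp(-zeta * wn * t) / np.sqrt(1 - zeta**2)
                  * np.sin(wd * t + phi))

print(np.max(np.abs(x_num - x_an)))  # agreement to numerical precision
```

The two curves coincide, and the final value approaches 1/k, the static deflection of the spring.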
The transfer function can also be considered to be given by algebraic
manipulation of the explicit differential equation of the system in operator
form. If, for example, an element or system is described by the explicit
differential equation
(aₙ dⁿ/dtⁿ + aₙ₋₁ dⁿ⁻¹/dtⁿ⁻¹ + · · · + a₁ d/dt + a₀)Xₒ
    = (bₘ dᵐ/dtᵐ + bₘ₋₁ dᵐ⁻¹/dtᵐ⁻¹ + · · · + b₁ d/dt + b₀)Xᵢ    (1-26)
and the operator s = d/dt, by definition, then the transfer function, by
substitution and cross multiplication, is:
Xₒ(s)/Xᵢ(s) = (bₘsᵐ + bₘ₋₁sᵐ⁻¹ + · · · + b₂s² + b₁s + b₀)/(aₙsⁿ + aₙ₋₁sⁿ⁻¹ + · · · + a₂s² + a₁s + a₀)    (1-27)
It is conventional,† but by no means necessary, to define the transfer
* The symbol s represents the complex variable of Laplace transform theory at this
point of the book. X(s) is defined as
X(s) = ∫₀^∞ x(t) e^(−st) dt
Elsewhere the same symbol is used to denote d/dt for convenience and compactness. The
context will generally make the local meaning clear.
† H. M. James, N. B. Nichols, R. S. Phillips, Theory of Servomechanisms, McGraw-
Hill Book Co., New York, 1947. (Dover reprint.)
G.S. Brown, D. P. Campbell, Principles of Servomechanisms, John Wiley & Sons, New
York, 1948.
‡ A. W. Porter, Introduction to Servomechanisms, John Wiley & Sons, 2nd ed., New
York, 1953.
J. C. West, Textbook of Servomechanisms, English Universities Press, London, 1953.
function by an appeal to Laplace transform theory. In connection with the
analysis of nonlinear control systems, the Laplace transform method itself
is seldom useful, and the transfer functions of linear elements can be
considered to be derived either by means of Laplace transforms or as an
expression of the explicit differential equation of the element in operator
form.
The eight transfer functions of the linear elements most commonly en-
countered in control system analysis are presented in Figure 1-6. It needs
to be noted that there are several methods of representing these transfer
functions, or the elements for which they in turn are the mathematical
attorneys. The graphical root plot and logarithmic jω transfer function
plot are the ones on which attention is concentrated here.
If the operator s is considered to be a complex variable s = σ + jω,
then the transfer function may be represented by the position of its poles
and zeros in the σ, jω plane. The poles of the transfer function are the
values of s = σ + jω which cause the value of the transfer function to be
infinite, and the zeros of the transfer function are the values of s = σ + jω
which cause the value of the transfer function to be zero. It is of particular
interest in the analysis of some linear control systems to observe the locus
of roots,* that is, the path described by the motion of the poles as some
parameter of the system is varied continuously. The poles and zeros of the
individual transfer functions are presented in Figure 1-6.
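Poles, zeros, and a locus of roots can all be located numerically as polynomial roots. The sketch below uses an illustrative transfer function and an illustrative open loop, neither taken from the text.

```python
# A sketch of poles, zeros, and a root locus for the illustrative transfer
# function (s + 2)/(s^2 + 2s + 5); not an example from the text.
import numpy as np

zeros = np.roots([1, 2])        # numerator roots:   s = -2
poles = np.roots([1, 2, 5])     # denominator roots: s = -1 +/- 2j
print(zeros, np.sort_complex(poles))

# Locus of the closed loop poles of 1 + K/(s(s + 2)) = 0 as the gain K is
# varied: the characteristic equation is s^2 + 2s + K = 0.
for K in (1.0, 2.0, 5.0):
    print(np.sort_complex(np.roots([1, 2, K])))
```

As K increases the pair of roots moves off the real axis and climbs parallel to the jω axis, the kind of continuous pole motion the root locus method displays.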
Alternatively, the transfer function may be partly represented by the
complex number obtained by substituting s = jω. It is convenient to plot
20 log magnitude of the transfer function and the phase angle of the vector
representing the transfer function against log ω. This representation is
called a Bode diagram.† It is also often useful to plot 20 log magnitude
against phase with ω as a parameter along the curve. This representation
is referred to as a gain-phase plot. A unique property of the Bode diagrams
is the fact that the magnitude curves are approximated very closely by
straight-line segments. The actual logarithmic magnitude curves depart
from the straight-line approximations only in the immediate vicinity of
a "break point," where the straight-line approximation changes slope.
Otherwise the actual logarithmic magnitude curves follow the straight-line
approximations. The line segments are accordingly called “asymptotes.”
* W. R. Evans, Control System Dynamics, McGraw-Hill Book Co., New York, 1954.
J. G. Truxal, Automatic Feedback Control System Synthesis, McGraw-Hill Book Co.,
New York, 1955.
† H. W. Bode, Network Analysis and Feedback Amplifier Design, D. Van Nostrand Co.,
New York, 1945.
H. Chestnut, R. W. Mayer, Servomechanisms and Regulating System Design, Vol. 1,
2nd ed., John Wiley & Sons, New York, 1959.
The asymptotes and phase angle curves of the elementary transfer functions
are also shown in Figure 1-6.*
In general, a polynomial transfer function, such as the one of Equation
1-27, is amenable to the expression of its numerator and denominator in
terms of factors identical to the transfer functions of Figure 1-6. There-
fore, the Bode diagram of a linear system transfer function can be built up
by the addition of the logarithmic magnitude and phase angle plots of the
factors. In multiplying the factors (which are complex numbers) the magni-
tudes would be multiplied together and the phase angles added. By means
of the logarithmic representation of the transfer function, the multiplication
is effectively carried out by the addition of logarithmic quantities.
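That multiplication of factors becomes addition of logarithmic magnitudes (and of phase angles) can be confirmed at a single test frequency. The sketch below uses two illustrative first-order lags, not factors from the text.

```python
# Check that the db magnitude (and phase) of a product of factors equals the
# sum of the factors' db magnitudes (and phases). Break frequencies illustrative.
import numpy as np

w = 3.0                            # rad/sec, arbitrary test frequency
s = 1j * w
g1 = 1 / (1 + s / 2.0)             # first-order lag, break point at 2 rad/sec
g2 = 1 / (1 + s / 10.0)            # first-order lag, break point at 10 rad/sec
g = g1 * g2

db = lambda z: 20 * np.log10(abs(z))
print(np.isclose(db(g), db(g1) + db(g2)))                      # True
print(np.isclose(np.angle(g), np.angle(g1) + np.angle(g2)))    # True
```

Repeating the check over a sweep of ω reproduces the Bode diagram of the composite transfer function as a sum of the factor curves.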
Block diagrams of systems composed entirely of linear elements may be
rearranged into equivalent forms in accordance with the several rules of
block diagram algebra. In a system composed of linear and nonlinear
elements the number of permissible rearrangements is very restricted.
Rules of block diagram algebra are justified by showing that two different
block diagram forms represent the same differential equation. Some of the
forms which are equivalent, for linear systems, are shown in Figure 1-8,
together with remarks on their application to the block diagrams of
nonlinear control systems. Note particularly that items 6a and 6b are
applications of the principle of superposition which is invalid in connection
with nonlinear systems.
The general form of the block diagram of a feedback control system is
presented in Figure 1-9, where the terms applied to characterize the various
elements and signals are inscribed on the diagram. The block diagram in
this general form can be taken to represent any linear or nonlinear feedback
control system. Note particularly that for any large value of the open
loop gain, μβ, the closed loop transfer function, θₒ/θᵢ, is substantially
independent of μ, and is in fact approximated by 1/β. This principle, derived for
linear systems, is applicable to the reduction of the effect of nonlinearities.
Enclosing a nonlinearity in a feedback loop, in general, produces a result in
* 20 log magnitude has the "dimensions" of decibels (db). A chart for the conversion
of magnitude to db and vice versa is given in Appendix I.
"Departures" of the actual curves from the asymptotes and large-scale phase angle
curves are given in graphical form in Appendix II.
† F. E. Nixon, Principles of Automatic Controls, Prentice-Hall, New York, 1953.
Methods of Analysis and Synthesis of Piloted Aircraft Flight Control Systems, BuAer
Report AE-61-4 I, Northrop Aircraft, U. S. Navy Bureau of Aeronautics, Washington,
D. C., 1952.
T. D. Graybeal, "Block Diagram Network Transformation," Electrical Engineering,
vol. 70, no. 11 (Nov. 1951), pp. 985-990.
T. M. Stout, "Block Diagram Transformations for Systems with One Nonlinear
Element," Trans. AIEE, Pt. II, vol. 75 (1956), pp. 130-139.
Figure 1-8. Block diagram algebra. (Adapted from Stout, op. cit.) The original figure tabulates definitions and equivalent forms: simple transfer element; operator or complex transfer element; take-off point; differential; cascading blocks, where the order of operations may be important in nonlinear systems; interchanging summing points; interchanging take-off points; eliminating a forward loop; and eliminating a feedback loop, the last valid only for linear and simple nonlinear transfer elements.
which the effect of the nonlinearity on the performance of the system is
markedly reduced. In other words, feedback tends to “linearize” the
system. It will not only do this, but it also has the property of minimizing
the effect of time constants and possibly varying gain in the forward path
and of augmenting the resistance of the system to changes in the load on
the output. The error is smallest and the accuracy of the system is the
greatest when the loop gain is high. The speed of response of a feedback
control system is also improved with high loop gain. High gain is therefore
almost synonymous with high performance, in connection with feedback
systems, except that high gain is inimical to dynamic stability. All the
desirable characteristics of feedback control are enhanced by high loop
gain, but too high a gain will make the system unstable. It is for this
reason that the question of stability is of such surpassing importance in
the study of feedback control systems.
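The approach of the closed loop ratio toward the reciprocal of the feedback gain as the loop gain grows can be tabulated directly. The sketch below uses an illustrative forward gain mu and feedback gain beta; the numbers are not from the text.

```python
# Illustration that, for large open loop gain, the closed loop ratio
# mu/(1 + mu*beta) approaches 1/beta regardless of the forward gain mu.
beta = 0.1
for mu in (10.0, 100.0, 1000.0):
    closed = mu / (1 + mu * beta)
    print(round(closed, 3))   # tends toward 1/beta = 10
```

A tenfold change in mu near the top of the table changes the closed loop ratio by less than ten percent, which is the sense in which high loop gain suppresses forward-path variations, including nonlinear ones.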
In a linear system, that is, one whose performance is described by a solu-
tion to a linear differential equation with constant coefficients, the question
of stability is unequivocally answered by solving for the roots of the charac-
teristic equation. (The characteristic equation is obtained from the homo-
geneous equation of the system by substituting an assumed solution of the
form Ce^st.) The roots invariably are real or occur in complex conjugate
pairs. If the real roots are all negative and the complex roots all have
negative real parts, the system is stable. (A temporary input or a tempo-
rary disturbance to the system causes only a temporary change in the out-
put.) It may be noted that this definition, unlike the one usually adopted by
mathematicians, excludes from the stable category the types of behavior
characterized by zero or pure imaginary roots. These types of behavior are
sometimes accorded the special designation of “marginal stability,” since,
in fact, they are typically neither stable nor unstable but represent some-
thing precisely intermediate between the stable and unstable categories. For
present purposes, however, it might be said that marginal stability is not
useful, and the cases of marginal stability are lumped with the unstable ones.
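The root test can be sketched in a few lines. The characteristic polynomials below are illustrative, and marginal roots (zero or pure imaginary) are deliberately counted as not stable, as in the text.

```python
# Stability of a linear constant-coefficient system from the roots of its
# characteristic equation. Example polynomials are illustrative.
import numpy as np

def is_stable(coeffs, tol=1e-9):
    """coeffs: characteristic polynomial coefficients, highest power first."""
    roots = np.roots(coeffs)
    # Marginal roots (real part near zero) are lumped with the unstable cases.
    return bool(np.all(roots.real < -tol))

print(is_stable([1, 3, 2]))    # s^2 + 3s + 2: roots -1, -2     -> True
print(is_stable([1, 0, 4]))    # s^2 + 4: roots +/-2j, marginal -> False
print(is_stable([1, -1, 2]))   # roots with positive real part  -> False
```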
There are several tests which permit a determination of the nature of the
stability without actually solving the characteristic equation. These are the
Routh, Hurwitz, and Nyquist stability criteria.* The criteria are applicable
* E. J. Routh, Advanced Dynamics of a System of Rigid Bodies, 6th ed., The Mac-
millan Company, London, 1905, republished by Dover Publications, New York, 1955.
A. Hurwitz, “Ueber die Bedingungen, unter welchen eine Gleichung nur Wurzeln mit
negativen reellen Teilen besitzt," Mathematische Annalen, vol. 46, B. G. Teubner,
Leipzig, 1895, pp. 273-284.
H. Nyquist, "Regeneration Theory," Bell System Technical Journal, vol. 11, no. 1
(Jan. 1932), pp. 126-147.
The Routh and Hurwitz stability criteria are presented and discussed briefly in
Appendix III.
only to linear systems; unfortunately, no such convenient methods exist
for the determination of the stability of nonlinear systems in general. Here,
except in special and restricted cases, the characteristic equation has no
significance, and the only recourse is to the qualitative definition of
stability.
If a small temporary input or disturbance applied to the system in equi-
librium causes only a temporary change in the output or response, the system
is stable.
This definition may be applied to the behavior of linear and nonlinear
systems alike. Since the representation of the “linear” system, however, is
Figure 1-10. Stable and unstable transient responses. (Unstable oscillatory, unstable aperiodic, and stable aperiodic responses plotted against time.)
likely to be based on small perturbations about some equilibrium condi-
tion, the statement that a system is stable means that the system is stable
about a particular equilibrium point. It may be noted that linear systems
have at most one equilibrium point, but that, as will appear, it is easily
possible for a nonlinear system to be stable about one equilibrium point
and unstable about another one. Figure 1-10 presents several possible
responses of a system in equilibrium to a pulse input and illustrates the
stable responses which might be produced by a control system or control
system element.
1.2 NONLINEARITIES DESCRIBED AND CLASSIFIED
Because there is such an infinite variety of possible nonlinearities, and
since there are no methods of analysis which are universally applicable,
there have been many attempts to divide and classify nonlinearities. In
this regard, adjectives referring to the character of the nonlinear functions
themselves, such as single-valued and multivalued, are useful in defining
applicable methods of analysis. Other adjectives refer to the place of the
nonlinearity in the system. Nonlinearities such as friction and backlash are
inherent or parasitic, and the designer may be at pains to eliminate them, or
at least to minimize their effect. Some nonlinearities, such as limiting,
variable damping, or the operation of a relay, may be intentional or
essential. They are introduced by the designer to achieve some desirable
effect; or in cases such as an oscillator, the nonlinearity may be absolutely
required in order to achieve the intended performance.
Two common nonlinearities have already been presented in Figures 1-2
and 1-3. The cause and effect relationship illustrated in Figure 1-2
between the deflection of the spring and the spring force is a straight line
for small deflections of the spring. It is only for large deflections of the
spring that the spring "constant," ∂K(x)/∂x, changes. When the spring
constant does change, the change is continuous. The spring characteristic,
therefore, is an example of a large-value nonlinearity and it has already been
noted that a valid analysis can be carried out on a linear basis if the deflections
of interest are confined to "small" deflections in the vicinity of the
equilibrium point.
On the other hand, it is apparent from Figure 1-3 that the damping
force changes sign discontinuously with the reversal in the direction of
travel and that the change occurs between a very small velocity on one side
of zero velocity and a very small velocity on the other side of zero velocity.
The damper characteristic may therefore be taken to be an example of
a small-value, discontinuous nonlinearity. Both the spring and damper
characteristics are single-valued functions.
In addition to nonlinear spring forces and friction, another nonlinearity
which is almost a matter of universal experience is hysteresis in mechanical
transmissions including gear trains and linkages. A simplified representa-
tion of a mechanical system with hysteresis caused by backlash is shown
in Figure 1-11(a). The linkage of Figure 1-11(b) has the same hysteresis
characteristic, but in this case the cause is Coulomb friction.
Figure 1-11(c) shows the hysteresis characteristic in terms of the functional
relationship between input and output of the mechanical linkage of
Figure 1-11(a). If the pin 1 starts clockwise from a position midway
between the prongs of fork 2, no motion of the fork takes place until the pin
contacts one side of the slot. This corresponds to the line segment AB in
Figure 1-11(c). Then the fork rotates clockwise with the pin, as illustrated
by the line segment BC, until the motion of the pin is reversed. Ideally the
fork, which is assumed to have negligible inertia, stands still while the pin
traverses the dead zone CD = EF. It then travels counterclockwise with
the pin, as along the line segment DE, and stands still again when the input
motion is reversed as at the point E. The spring and friction linkage of
Figure 1-11(b) behaves in an analogous fashion. The output shaft does not
respond to input shaft motion until the spring torque (which is proportional
Figure 1-11. Mechanical hysteresis. (a) Hysteresis because of backlash. (b) Hysteresis
because of compliance with friction. (c) The hysteresis characteristic.
to the difference between the angular positions of the shafts) exceeds the
friction torque. When the input motion is reversed the output shaft stands
still until a corresponding torque has built up in the opposite direction.
Entirely equivalent nonlinear behavior is demonstrated by these two quite
different physical arrangements.
The hysteresis characteristic is nonlinear in a complicated way. The
characteristic of Figure 1-11(c) is multivalued. For any value of the input
the output could have many (actually an infinite number of) possible values.
Some of these output values are indicated for one value of the input by the
heavy dots in Figure 1-11(c). A multivalued nonlinearity like hysteresis is
even more difficult to analyze than the single-valued nonlinearities like
spring forces and friction.
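The ideal backlash element of Figure 1-11(a) can be simulated in a few lines. The sketch below assumes a fork with negligible inertia, as in the text, and uses a hypothetical half-width h for the dead zone; the input sequence is illustrative.

```python
# A minimal sketch of the ideal backlash (hysteresis) element of Figure 1-11(a),
# assuming negligible fork inertia. The half-width h of the dead zone and the
# input sequence are illustrative choices, not values from the text.
def backlash(inputs, h=0.5, y0=0.0):
    """Return the output history of an ideal backlash element."""
    y, out = y0, []
    for x in inputs:
        if x - y > h:        # pin drives the fork in one direction
            y = x - h
        elif y - x > h:      # pin drives the fork in the other direction
            y = x + h
        # otherwise the pin traverses the dead zone and the fork stands still
        out.append(y)
    return out

# Reversing the input leaves the output unchanged until the full dead zone
# (width 2h) has been traversed, producing the multivalued characteristic.
print(backlash([0.0, 1.0, 2.0, 1.5, 1.0, 0.5]))
```

Plotting output against input for a back-and-forth input traces out the parallelogram-like loop of Figure 1-11(c).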
Unlike the mechanical nonlinearities just described, some nonlinearities
are functions of more than one variable. Figure 1-12(a) shows the idealized
torque speed characteristic of an electric servomotor. The torque decreases
linearly with an increase in speed, and it is also ideally a linear function
of the current in the control field. Figure 1-12(b) shows a more realistic
servomotor torque speed characteristic. Here motor torque is a
nonlinear function of speed and a nonlinear function of the control cur-
rent. This is an example of a multivariable nonlinearity. Vacuum-tube
characteristics are a further and well-known example of a multivariable
nonlinearity.
A special class of nonlinearities, including some multivariable nonlinear-
ities, arises from mathematical operations with the variables in control
systems. The mathematical operations of addition, subtraction, multiplication
by a constant, integration with respect to time, and differentiation
with respect to time are appropriately thought of as linear mathematical
operations. Others, such as multiplication and division of variables, raising
Figure 1-12. Servomotor torque speed characteristics. (a) The linearized characteristics.
(b) The actual nonlinear characteristics.
to powers, extracting roots, coordinate transformations (including vector
resolution), and integration and differentiation with respect to dependent
variables (variables other than time), can be termed nonlinear operations.
Their concrete embodiment is called an operational nonlinearity.
These and other common and interesting nonlinearities occurring in
control system engineering are presented in Table 1-1. Almost all of these
nonlinearities can arise from many different physical arrangements. The
nonlinear behavior, however, is characterized by the form of the transfer
characteristics. Analytically, this form is all-important, and the analyst
should exercise care in determining the type of transfer characteristic
which is actually present. An example is the spring-friction linkage of
Figure 1-11(b), which has a hysteresis characteristic (item 11). Too hasty
consideration might have led to ascribing a simple friction characteristic
(item 7) to the linkage.
The examples have shown some attempts to classify nonlinearities and to
separate from the infinite population of a// nonlinearities some particular
class which may be of interest in connection with a particular problem.
Adjective pairs, such as small-value or large-value, continuous or discontinuous,
Table 1-1
Typical Simple and Complex Nonlinearities
(a) Simple Nonlinearities (each drawn as an output versus input transfer characteristic): (1) saturation or limiting; (2) threshold; (3) preload; (4) rectifier; (5) square law; (6) x³ − kx; (7) off-on contactor; (8) detent.
(b) Complex Nonlinearities: (9) toggle or negative deficiency; (10) magnetic hysteresis (width a dependent on input amplitude); (11) hysteresis (width a independent of input amplitude); (12) multiplier; (13) resolver.
single-valued or multivalued or multivariable, and functional or operational,* are used
to divide classes of nonlinearities. More than one classification can apply
* Methods of Analysis and Synthesis of Piloted Aircraft Flight Control Systems, BuAer
Report AE-61-4 I, Northrop Aircraft, U. S. Navy Bureau of Aeronautics, Washington,
D.C., 1952.
R. A. Bruns, R. M. Saunders, Analysis of Feedback Control Systems, McGraw-Hill
Book Co., New York, 1955.
E. C. Cherry, W. Millar, "Some New Concepts and Theorems Concerning Nonlinear
Systems," in Automatic and Manual Control (A. Tustin, Ed.), Butterworths Scientific
Publications, London, 1952, pp. 262-274.
to a given nonlinearity; and, on the other hand, several classifications can
be combined into a larger classification. The most pertinent breakdown for
analytical purposes is one dividing the entire group of nonlinear transfer
characteristics into classes called simple and complex. Any single-valued,
functional nonlinearity will be termed a simple nonlinearity and all others
will be termed complex.
1.3 BEHAVIOR OF NONLINEAR PARAMETER SYSTEMS
The physical behavior which the engineer is accustomed to observe in
connection with linear constant-parameter systems is subject to relatively
simple "laws." These laws are readily learned, and the behavior is then
predictable and therefore "understood." If, however, the system has
nonlinear parameters to a significant degree, it will exhibit behavior which is
essentially different and “incomprehensible” from the point of view of
linear analysis. The system will behave in distinctive ways that are impos-
sible for a linear constant-parameter system. Sometimes this distinctive
behavior is turned to good account. Very often, to the contrary, it is a
source of trouble to the control engineer, and represents an obstacle to be
overcome.
There are at least six patterns of behavior which are typical of nonlinear
systems.*
1. New Frequencies
The first of these patterns stems from the fact that nonlinear, or, for that
matter, linear time-varying-parameter systems, may generate new frequencies.
If a constant-parameter linear system is forced with a sine wave at
a frequency ω, the steady-state output will then contain a sine wave of
frequency ω (although this output wave may be altered in amplitude, and
* J.G. Truxal, Automatic Feedback Control System Synthesis, McGraw-Hill Book Co.,
New York, 1955, pp. 559-566.
F. H. Clauser, "The Behavior of Nonlinear Systems," Journal of the Aeronautical
Sciences, vol. 23 (1956), pp. 411-434.
There is one form of typically nonlinear behavior which is not accounted for here.
This is the phenomenon of frequency entrainment. Frequency entrainment is of con-
siderable practical importance in connection with oscillators, but it does not appear to
have any pertinent connection with control systems. Illustrative analyses of frequency
entrainment have been presented by Minorsky and by Cunningham. See: N. Minorsky,
“The Theory of Oscillations,” in E. Leimanis and N. Minorsky, Dynamics and Nonlinear
Mechanics, John Wiley & Sons, New York, 1958, pp. 154-156. W. J. Cunningham,
Introduction to Nonlinear Analysis, McGraw-Hill Book Co., New York, 1958, pp.
213-220.
shifted in phase). If the input contains two frequencies ω₁ and ω₂, the
output will contain those two frequencies; and, in general, the output will
contain the frequencies present in the input.
In a nonlinear system, on the other hand, a single input frequency may
produce a response with harmonics or subharmonics in the output. The
application of the sum of two sine waves of different frequencies will, in the
general nonlinear case, produce frequencies in the output which correspond
to the two input frequencies, their sum, their difference, harmonics of these,
and possibly even more elaborate combinations. In a situation like this it is
impossible to justify the concept of a "frequency response" without drastic
modification to its definition and interpretation.
The generation of new frequencies (nonlinear distortion) may be illus-
trated by the case of a saturating amplifier. (Vide Figure 1-13.) For small
inputs to the amplifier the output is proportional to the input. When the
input reaches the limit, however, the output is "clipped." If the output in
response to a sine wave input, Ein = A sin ωt, is expressed as a Fourier
series,
Eout = b₁ sin ωt + b₃ sin 3ωt + b₅ sin 5ωt + · · · + bₙ sin nωt    (1-28)
where
b₁ = (2AK/π)[sin⁻¹(δ/AK) + (δ/AK)√(1 − (δ/AK)²)],   δ/AK ≤ 1
   = AK,   δ/AK > 1
bₙ = [4AK/π(1 − n²)][(δ/AKn) cos nβ − √(1 − (δ/AK)²) sin nβ],   n odd, n ≥ 3
   = 0,   n even
β = sin⁻¹(δ/AK)
and δ is the saturation level of the amplifier output.
The terms b₃ sin 3ωt, b₅ sin 5ωt, · · · , represent the new frequencies generated
by the nonlinear element.
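The harmonic content of the clipped wave can be checked numerically against Equation 1-28. The sketch below uses illustrative values of A, K, and the saturation level (called delta here); only odd harmonics appear, as the series predicts.

```python
# Numerical check that clipping a sine wave generates odd harmonics only
# (Equation 1-28). Amplitude A, gain K, and saturation level delta are
# illustrative values, not values from the text.
import numpy as np

A, K, delta = 1.0, 2.0, 1.0        # delta/(A*K) = 0.5, so the wave is clipped
N = 4096
t = np.arange(N) / N               # one period of the fundamental
e_out = np.clip(A * K * np.sin(2 * np.pi * t), -delta, delta)

c = np.fft.rfft(e_out) / N
amps = 2 * np.abs(c)               # amplitude of each harmonic
print(np.round(amps[1:6], 4))      # fundamental through fifth harmonic
```

The even-harmonic amplitudes vanish by the half-wave symmetry of the clipped sine, and the computed b₁ and b₃ match the closed-form coefficients above.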
2. Jump Resonance
Even when a nonlinear system does not have an output waveform which
is obviously distorted, the phenomenon of jump resonance may occur.
When the saturating amplifier of Figure 1-13 is incorporated in the servomechanism
system of Figure 1-14, the inertia of the motor will tend to
Figure 1-13. Response of a saturating amplifier.
1
_ A [me
Saturating amplifier Motor
Figure I-14. The saturating amplifier incorporated in a servomechanism.
Phase Angle, X.
Frequency, w Frequency, w
Figure I-15, Jump resonance.
smooth the clipped peaks. The servomechanism output may then very
closely resemble a pure sine wave. In the event that the superficial resemb-
lance to the behavior of a linear servomechanism leads to an attempt at
measuring the harmonic response, it will be found that both the “ampli-
tude ratio” and “phase angle” functions may have sudden discontinuities.
These “jumps” are illustrated for the case of the saturating amplifier
servomechanism in Figure 1-15. Dashed portions of the curve represent an
unstable condition and cannot be observed in practice. The amplitude
ratio or phase angle function proceeds as far as it can without doubling back
on itself, and then it jumps to the other branch. The discontinuity or jumpBEHAVIOR OF NONLINEAR PARAMETER SYSTEMS 33
takes place at a frequency which is dependent on the history of the test and
the direction from which the jump region has been approached.
The “bending back” of the amplitude ratio function of the saturating
amplifier servomechanism can be partly explained by analogy to a linear
system. Because of the limiting at high values of the actuating signal, the
average restoring torque per unit error is less than in the corresponding
linear system. At the larger amplitudes of resonance, therefore, the natural
frequency is less and the peak is bent back toward lower frequencies. If the
average gain increased with motion amplitude, the resonant peak would
bend in the higher frequency direction.
Performance of the saturating amplifier servomechanism of Figure 1-14
and, in particular, the curves of Figure 1-15 can be derived by means of a
semigraphical analysis originally given by Levinson.*
If K(|ε|) represents the amplifier input-output relationship, which is a
single-valued function of the error, then a quasi-linear open loop transfer
function can be written as:

    θ₀/ε = K(|ε|)/[s(Ts + 1)]    (1-29)

and

    θᵢ/ε = [s(Ts + 1) + K(|ε|)]/[s(Ts + 1)]    (1-30)
The assumption is made that there is no very appreciable distortion in the
servomechanism output and that when the system is forced with a sine
wave, the error, ε(t), is sinusoidal. When θᵢ(t) is represented as |θᵢ| e^{jωt},
then ε(t) will be approximately:

    ε(t) ≈ |ε| e^{j(ωt + φ_ε)}    (1-31)

Then

    ε(jω)/θᵢ(jω) = jω(jωT + 1)/[jω(jωT + 1) + K(|ε|)]    (1-32)
or

    |ε|/|θᵢ| = ω√(ω²T² + 1)/√{[K(|ε|) − ω²T]² + ω²}    (1-33)

and

    φ_ε = 90° + tan⁻¹ ωT − tan⁻¹ {ω/[K(|ε|) − ω²T]}    (1-34)

Squaring and transposing:

    [K(|ε|) − ω²T]² |ε|² = ω²[(ω²T² + 1)|θᵢ|² − |ε|²]    (1-35)
* E. Levinson, "Some Saturation Phenomena in Servomechanisms with Emphasis on
the Tachometer Stabilized System," Trans. AIEE, Pt. II, vol. 72 (1953), pp. 1-9.
[Figure 1-16. The servomechanism jump resonance graphical solution: the saturating amplifier characteristic K(|ε|)|ε| and the family of ellipses plotted against error amplitude |ε|. (a) Input |θᵢ| = 2, motor time constant T = 0.1, ω is the parameter. (b) Input |θᵢ| = 1, motor time constant T = 0.1, ω is the parameter.]
[Figure 1-16 (continued). (c) Radian frequency ω = 50, motor time constant T = 0.1, |θᵢ| is the parameter.]
Taking the square root:

    [K(|ε|) − ω²T] |ε| = ±ω√[(ω²T² + 1)|θᵢ|² − |ε|²]    (1-36)

or

    K(|ε|) |ε| = ω²T |ε| ± ω√[(ω²T² + 1)|θᵢ|² − |ε|²]    (1-37)

The left-hand side of this equation represents points on the saturation curve
where K(|ε|) |ε| is plotted against |ε|. The right-hand side represents a
family of ellipses with ω and |θᵢ| as parameters.
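The intersection count implicit in Eq. 1-37 can be found numerically by scanning the error amplitude. The sketch below assumes a simple piecewise-linear saturation curve, y = min(K₀|ε|, M); the parameter values are illustrative assumptions chosen to exhibit the one-and-three intersection behavior, and are not the values of Figure 1-16.

```python
import math

def intersections(omega, T, theta_i, K0, M, n=40001):
    """Count crossings of the assumed saturation curve y = min(K0*e, M) with
    the ellipse of Eq. 1-37 (both signs combined):
        (y - omega^2*T*e)^2 = omega^2*(c^2 - e^2),
        c^2 = (omega^2*T^2 + 1)*theta_i^2,
    by scanning the error amplitude e and counting sign changes."""
    c = math.sqrt(omega ** 2 * T ** 2 + 1.0) * theta_i

    def F(e):
        # Negative inside the ellipse, positive outside.
        y = min(K0 * e, M)
        return (y - omega ** 2 * T * e) ** 2 - omega ** 2 * (c * c - e * e)

    count = 0
    prev = F(0.0)
    for i in range(1, n):
        cur = F(c * i / (n - 1))
        if (prev < 0.0) != (cur < 0.0):
            count += 1
        prev = cur
    return count
```

For the assumed values T = 0.1, |θᵢ| = 0.03, K₀ = 1000, M = 10.5, the count is one at low frequency and three near ω = 50; removing the saturation (M very large) gives exactly one intersection at every frequency, as the text states for a strictly linear gain.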
A typical family of these ellipses is shown in Figure 1-16(a). Here values
of T and |θᵢ| have been selected, and the ellipses are plotted against |ε| for
various values of the angular frequency parameter, ω. When any typical
amplifier saturation characteristic (such as the dashed curve) is superposed
on the family of ellipses, it may be seen that intersections, for any one
value of frequency, can occur once, twice (a tangency), or even three times.
With a strictly linear amplifier gain only one intersection is possible. Where
only one intersection occurs it gives a value of the error amplitude, |ε|,
which can be plotted against the frequency, ω. The curve of error against
frequency is tangent to a line of constant frequency when the ellipse is
tangent to the saturation curve. When three intersections occur there are
three distinct values of error amplitude at one value of frequency. This is
[Figure: error amplitude |ε| versus angular frequency ω (radians/sec), |θᵢ| = 20.]

[Figure 1-20 block diagram: amplifier, ideal motor, Coulomb friction; error responses shown against the friction band.]
Figure 1-20. A servomechanism with Coulomb friction and its error responses.
friction. That this should be so is apparent from the equation of motion,
which is derived from consideration of the block diagram as follows:

    J d²θ₀/dt² = T = −B sgn (dθ₀/dt) + Kε,    ε = θᵢ − θ₀    (1-40)

              = −B sgn (dθ₀/dt) − Kθ₀,    when θᵢ = 0    (1-41)

    d²θ₀/dt² + (B/J) sgn (dθ₀/dt) + (K/J)θ₀ = 0

or, defining K/J as ω² and B/J as f,

    d²θ₀/dt² + ω²θ₀ + f = 0,    dθ₀/dt > 0
    d²θ₀/dt² + ω²θ₀ − f = 0,    dθ₀/dt < 0    (1-42)
The friction level ±f represents a bias on the pure cosine wave which is the
solution of d²θ₀/dt² + ω²θ₀ = 0. The output of the servomechanism stops moving
at the end of any given half cycle whenever the error is small enough so that
the motor torque minus the inertial torque is less than the frictional torque.
As this may well occur at any level of the error within the friction “band,”
the number of possible equilibrium points is infinite. A somewhat similar
situation is obtained if the amplifier or error detector in a servomechanism
has a threshold or deadband nonlinearity. The system can then come to
rest anywhere in the deadband.
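Because each half cycle of Eq. 1-42 is a pure cosine centred on the bias ∓f/ω², the successive rest points can be stepped out in closed form. A minimal sketch, with illustrative parameter values:

```python
def coulomb_extrema(theta0, omega2, f):
    """Successive rest points of d2theta/dt2 + omega2*theta = -f*sgn(dtheta/dt),
    released from rest at theta0 (Eq. 1-42 with omega2 = K/J, f = B/J).
    Each half cycle is a cosine about the bias +/- f/omega2, so every swing
    ends 2*f/omega2 closer to zero: the constant decrement."""
    band = f / omega2                        # friction band: |theta| <= f/omega2
    peaks = [theta0]
    theta = theta0
    while abs(theta) > band:
        bias = band if theta > 0 else -band  # friction opposes the coming swing
        theta = 2.0 * bias - theta           # half cycle: reflect about the bias
        peaks.append(theta)
    return peaks
```

Released from θ₀ = 1 with ω² = 1 and f = 0.1, the rest points fall off as 1, −0.8, 0.6, −0.4, 0.2, 0: a constant decrement of 2f/ω² per half cycle, with the motion finally trapped inside the friction band rather than at the origin.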
5. Nonexponential Time Response
It is worth noting that, in the example of Figure 1-20, the response curves
not only show the possibility of several equilibrium points but also exhibit
[Figure 1-21. Oscillations with damping. (a) Linear damping: logarithmic decrement. (b) Nonlinear Coulomb damping: constant decrement, friction band shown.]
an approach to rest in which the amplitude of each half cycle is decreased
by a constant amount. This constant, instead of logarithmic decrement, is
typical of responses with Coulomb friction. In a linear system each mode
of aperiodic or oscillatory motion can only increase or decrease within or
along exponential envelopes. This is not always true in nonlinear systems.
Figure 1-21 presents the exponential (logarithmic) damping of a linear
system contrasted to the nonlinear system with Coulomb friction damping.
Another example of nonlinear response unlike the behavior of the normal
(exponential) modes of a linear lumped constant system is the response of a
mechanical system with a "cubic" spring. If the spring characteristic has the
form F(x) = Kx³, the period of oscillation increases as the amplitude decays
under the influence of damping. (This kind of a spring is called a "hard"
spring. If the spring "constant" decreased with deflection the spring
would be referred to as a "soft" spring.)
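For a unit spring constant, the period of x″ + x³ = 0 follows from energy conservation: P = 4√2 ∫₀^a dx/√(a⁴ − x⁴), which the substitution x = a sin φ turns into the smooth integral (4√2/a) ∫₀^{π/2} dφ/√(1 + sin²φ). The sketch below (an illustration under that unit-constant assumption) evaluates it and confirms the period varies inversely with amplitude, so the period lengthens as damping shrinks the motion:

```python
import math

def cubic_spring_period(a, n=20000):
    """Oscillation period of x'' + x^3 = 0 at amplitude a, by midpoint-rule
    quadrature of the singularity-free form of the energy integral."""
    h = (math.pi / 2.0) / n
    s = sum(1.0 / math.sqrt(1.0 + math.sin((i + 0.5) * h) ** 2) for i in range(n))
    return 4.0 * math.sqrt(2.0) * s * h / a
```

Halving the amplitude exactly doubles the period, the opposite of the amplitude-independent period of a linear spring-mass system.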
6. Limit Cycles
Limit cycle is the name given to a nonlinear oscillation of fixed frequency
and amplitude determined by the nonlinear properties of the system. Limit
cycles are one of the most frequently encountered modes of behavior
peculiar to nonlinear systems. Everyday examples of limit cycles include
the action of the human heart, the squealing of chalk on a blackboard, the
flashing of automobile turn signals, and the operation of an escapement
clock. Limit cycles are distinguished from linear oscillations in that their
amplitude of oscillation is independent of initial conditions. If a limit
[Figure 1-22. Limit cycle oscillations of a servomechanism with hysteresis: amplifier and motor blocks followed by a hysteresis element; responses to small and large initial conditions settle into the same limit cycle.]
cycle is itself stable, the system will tend to fall into this condition of oscil-
lation if the oscillation amplitude approaches that of the limit cycle, no
matter what the initial condition or forcing function may have been. Limit
cycles are very easily recognized as closed curves in the phase plane repre-
sentation where velocity is plotted against position. For example, Figure
1-22 shows the response to small and large initial conditions of a servo-
mechanism with hysteresis. Note that the final oscillation, the limit cycle,
has the same frequency and amplitude in each case. Figure 1-23 presents the
same data in the phase plane where velocity is plotted against displacement
and time is a parameter along the trajectory curve.
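The independence of the limit cycle amplitude from the initial conditions is easy to demonstrate numerically. The hysteresis servomechanism of Figure 1-22 is not reproduced here; instead, the sketch below uses van der Pol's equation (taken up again in Chapter 2) as a stand-in oscillator with a stable limit cycle, with μ = 0.5 as an illustrative value:

```python
def vdp_final_amplitude(x0, mu=0.5, dt=0.01, t_end=120.0):
    """RK4 integration of van der Pol's equation x'' - mu*(1 - x^2)*x' + x = 0
    from rest at x0; returns the largest |x| over the final 10 time units,
    i.e. the limit cycle amplitude."""
    def f(x, v):
        return v, mu * (1.0 - x * x) * v - x
    x, v, t, peak = x0, 0.0, 0.0, 0.0
    while t < t_end:
        k1x, k1v = f(x, v)
        k2x, k2v = f(x + dt / 2 * k1x, v + dt / 2 * k1v)
        k3x, k3v = f(x + dt / 2 * k2x, v + dt / 2 * k2v)
        k4x, k4v = f(x + dt * k3x, v + dt * k3v)
        x += dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
        v += dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        t += dt
        if t > t_end - 10.0:
            peak = max(peak, abs(x))
    return peak
```

Started from a small displacement (which grows) or a large one (which decays), the motion falls into the same closed trajectory of amplitude near 2, just as the two responses of Figure 1-23 spiral onto one closed curve in the phase plane.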
It sometimes happens that the motion of a system will tend to fall into a
limit cycle from any starting condition. This behavior is termed soft self-
excitation and is typical of devices which are deliberately intended as
oscillators. On the other hand, there are systems whose motion is only
forced into a limit cycle by some (perhaps rare) combination of circum-
stances, such as a particularly large step input. Here the term hard self-
excitation is applied. In connection with control systems, limit cycles are
undesirable, and soft self-excitation is very nearly intolerable under any
circumstances.
A servomechanism configuration proposed and actually built by Lewis*
and investigated by Caldwell and Rideout† illustrates several features of
nonlinear behavior. In this case the nonlinearity is intentional. The idea is
[Figure 1-23. Phase plane representation of the limit cycle: output rate dθ₀/dt plotted against output θ₀.]
to have a positioning servomechanism whose damping is negative or at
least very small when the error is large. This tends to insure a more rapid
response to large errors than in the corresponding linear system. This
“desirable” effect is accomplished by a nonlinear velocity generator feed-
back. A block diagram of the servomechanism is presented in Figure 1-24.
Figure 1-25(a) shows the step responses for several input magnitudes, and
a comparison is made between these and the best possible response with
linear damping. These very rapid and apparently stable responses illustrate
how attractive intentional nonlinearities can be. If excited with a double
pulse input, however, the output of this servo may diverge, indicating that
the system is unstable for this input. The unstable response is illustrated in
* J. B. Lewis, "The Use of Nonlinear Feedback to Improve the Transient Response
of a Servomechanism," Trans. AIEE, Pt. II, vol. 71 (1952), pp. 449-453.
† R. R. Caldwell, V. C. Rideout, "A Differential-Analyzer Study of Certain Non-
linearly Damped Servomechanisms," Trans. AIEE, Pt. II, vol. 72 (1953), pp. 165-169.

[Figure 1-24. Block diagram representation of the Lewis servomechanism, with absolute value unit and multiplier.]
[Figure 1-25. Response of the Lewis servomechanism to a variety of inputs. (a) Error responses to a step function, compared with the best linear response (ζ = 0.65). (b) Output response to a square doublet. (c) Frequency response at two different frequencies, including input frequency f = 4.92 cps. (Partly adapted from Caldwell and Rideout, op. cit.)]
Figure 1-25(b). The response to sinusoidal inputs can also be surprising.
Figure 1-25(c) illustrates the steady-state sinusoidal response at two dif-
ferent frequencies. At a frequency of 1.57 cps the system is fairly well
behaved. At the higher frequency of 4.92 cps it no longer responds at the
frequency of the input and it exhibits frequency demultiplication. The Lewis
servo thus exhibits new frequencies, and stability depends on the input
shape and amplitude.
1.4 DIFFICULTIES OF NONLINEAR ANALYSIS
Such factors as frequencies in the output unlike those in the input, jump
resonance, limit cycles, the effects on stability, response shape, and final
equilibrium of the size of the initial conditions and the input cannot be
predicted by the methods of linear constant-parameter analysis. To cor-
rect this deficiency, other methods of analysis are required to cope with
nonlinear parameter problems. Unfortunately, nonlinear systems are not
only essentially different physically from linear constant-parameter ones
but they are also essentially different mathematically. The difference in
their mathematical description introduces formidable analytical diffi-
culties.
In principle at least, linear constant-coefficient differential equations can
always be solved, in a uniform fashion, using a general technique. In
contrast, solutions to nonlinear differential equations are relatively rare,
and are found by techniques almost as various as the known solutions.
Even when the equations are "solved" by an electronic computer, the
control engineer has not resolved his difficulties. The physical under-
standing which would permit the extrapolation of results to new cases is
often totally absent.
The principal supports of linear constant-parameter system analysis are
inapplicable to nonlinear problems. Stability as a property of the system,
the frequency response concept, transform calculus, and the principle of
superposition will not be usable, at least unless they are drastically modified.
In connection with nonlinear systems, the essential concept of stability
needs re-examination. Stability, or the lack of it, is a property of the
linear constant-parameter system, and the stability is predictable on the
basis of the homogeneous equation of motion of the system. Initial con-
ditions and forcing functions have no effect on the stability. On the other
hand, a nonlinear system may typically be stable for one input or initial
condition and unstable for another. Thus, while a stable situation can still
be defined, attention must be focused upon system and input or output
combinations rather than on the system alone.DIFFICULTIES OF NONLINEAR ANALYSIS 45
With linear components it should be quite apparent that the principle of
superposition applies. Twice the cause produces twice the effect and, in
general, the total output is the sum of the outputs produced by each of the
several elements of the input.
The principle of superposition also applies on a dynamic basis to linear
systems. The amplitude ratio function in a linear system frequency re-
sponse, for example, is always the same no matter what the amplitude of the
input wave may be. The transient responses of a linear system to steps of
different magnitudes all “look” the same. In fact, the only difference is in
a “stretching” or “contraction” of the response scale. The total transient
is a sum of exponential terms each of whose magnitudes is linearly related
to the magnitude of the forcing function. Each term can be treated sepa-
rately so that the analysis can be broken into simple parts. Furthermore,
since any input can be approximated by a series of small steps, an extremely
important consequence of the principle of superposition in linear systems
is that if the response to one input is known it is at least theoretically
possible to know the response to any input.
Since the principle of superposition does not apply to nonlinear systems,
the whole problem must be treated as an entity, and the knowledge of the
response to one particular input is just that. In fact, as has already been
pointed out, nonlinear machines are full of surprises. A nonlinear servo-
mechanism adjusted for the best performance in following a step of a
certain magnitude may be unstable for a larger step, and may produce
frequency demultiplication if excited with a sine wave. The nonlinear
analog of the correlation between responses in the time and frequency
domains for linear systems is particularly weak.
To further complicate the matter, the Laplace transformation, which is
especially useful in problems where the principle of superposition applies,
becomes inapplicable to most problems in nonlinear dynamics. In
general the sum of two solutions to a nonlinear differential equation is not
a solution.
Unfortunately, no general method of analysis comparable to the opera-
tional calculus has been developed to treat systems with nonlinearities. In
nonlinear (and time-varying-parameter) analysis the usual engineering
analysis methods and the applicability of the results both have extremely
limited generality. Where and how to use the methods can be defined only
for specific cases. Yet, in spite of the analytical obstacles encountered in
nonlinear problems, the control analyst is often confronted with nonlinear
characteristics and must do his utmost to understand and predict their
interesting effects.

2

GENERAL TECHNIQUES FOR SOLVING NONLINEAR CONTROL PROBLEMS
There are very few nonlinear differential equations whose solutions are
known. Furthermore, even the few equations which have been solved are
seldom encountered in control engineering.* Several methods of analysis,
however, can be applied in specific cases in control engineering to obtain
an approximate solution or other useful information, such as the stability
of small motions.
A correct estimate of the dynamic stability can always be made for the
small motions of systems with simple nonlinearities that have a finite first
derivative. Also, at least in principle, any ordinary differential equation, or
set of simultaneous equations, can be integrated by a step by step process.
A variety of methods employing both graphical and numerical techniques
are available for this purpose. If an approximate solution can be found, by
any method, the accuracy of the solution can be refined to an arbitrary
degree by means of an iterative process.
Many nonlinear systems are characterized by being piecewise linear.
That is, the operation of the machine obeys a certain linear differential
equation in one region of operation and obeys another linear differential
equation or equations in other regions. Significant use can then be made of
known solutions to linear differential equations by piecing them together
to give a solution of the original nonlinear equation. The end conditions of
* An entire text, for engineers, on the solution of problems in nonlinear mechanics is:
W. J. Cunningham, Introduction to Nonlinear Analysis, McGraw-Hill Book Co., New
York, 1958.
one solution segment are used as the initial conditions of the next segment.
In these cases superposition, with all its attendant advantages, can be
applied in the various linear segments. An elementary example of this
method has already been presented in connection with the friction damped
servomechanism of Figure 1-20. Even when the nonlinear system is not
actually piecewise linear, it can often be approximated as such to a satis-
factory degree of accuracy. This may be especially true when the inputs
are suitably restricted in their characteristics.
There are, finally, two methods of engineering analysis for nonlinear
control systems which also have an appreciable degree of generality. The
describing function approach is most useful in complex systems where the
effects of the nonlinearities are significant but “small.” This approach is
most commonly used with sinusoidal inputs, but the concept can be ex-
tended to other input functions, including those described statistically. The
phase plane method, on the other hand, is most useful in connection with
large nonlinearities in simple systems. Although it can be viewed in quite
general terms, its primary usefulness in engineering practice is in situations
having only initial conditions of displacement or velocity, or step or ramp
inputs.
In a pragmatical sense, these two engineering methods complement each
other. Together with computer solutions they constitute the principal
tools of the control engineer in attacking nonlinear problems. It should be
noted here that the use of the mathematical model set up on a computer is
the most powerful method of all for specific problems. This, however, is
the proper subject of books on computer applications* and is beyond
the scope of the present work.
2.1 LIAPOUNOFF STABILITY
The question of the stability of small motions of a nonlinear system con-
taining only nonlinearities which possess continuous derivatives has been
thoroughly investigated by M. A. Liapounoff.†‡ If the nonlinearity is
* See footnote on page 13 and also: H. M. Paynter (Ed.), “A Palimpsest on the
Electronic Analog Art,” Geo. A. Philbrick Researches, Boston, Mass. 1955.
† M. A. Liapounoff, "Problème général de la stabilité du mouvement," Annals of
Mathematics Studies, No. 17, Princeton University Press, Princeton, N.J., 1947.
The second, or direct, method of Liapounoff for the determination of the stability
of systems described by linear or nonlinear differential equations is best understood in
connection with trajectories which are phase space representations of the solutions to
the differential equations. Discussion of this matter is therefore deferred to the point
where it can be taken up in the appropriate context of the trajectories in the phase space.
‡ Because the Russian language uses an alphabet completely different from the Latin
one, and because English is a notably unphonetic language, there is tremendous variety
single-valued and has derivatives of every order in the vicinity of a point, a,
the nonlinear function y = f(x) can be represented by a Taylor series:

    y = f(x) = f(a) + (x − a)(df/dx)ₐ + [(x − a)²/2!](d²f/dx²)ₐ
               + [(x − a)³/3!](d³f/dx³)ₐ + ⋯
The first two terms of the series represent the operating point and the
linear (or first) approximation to the actual nonlinearity. Liapounoff’s
research resulted in the theorem: “If the real parts of the roots of the
characteristic equation corresponding to the differential equations of the
first approximation are different from zero, the equations of the first
approximation always give a correct answer to the question of stability of a
nonlinear system.” *
According to this theorem, if all the real parts of the roots of the charac-
teristic equation of the linear approximation of the differential equation
are negative, the nonlinear system is stable about the point in question,
and any small temporary disturbance in the input will result in a temporary
disturbance in the output. If, however, any of the real parts of the roots of
the characteristic equation of the linear approximation of the differential
equation are positive, the nonlinear system is unstable about the operating
point, and any small temporary disturbance at the input will result in an
output which will diverge from this unstable point.
If any of the roots of the linear approximation of the differential equation
about the equilibrium point have zero real parts, the theorem may not be
used to give a definitive answer to the question of stability. Zero roots may
result in a situation in which the stability might depend on the direction of
the disturbance. In control engineering this situation would usually be as
undesirable as an absolutely unstable situation, hence the fact that the
theorem does not apply is of little consequence.
It should be pointed out that, although the linear approximation of the
differential equations may indicate a stable system for all amplitudes of
disturbances, the theorem actually applies only to small disturbances within
in the transliteration of proper names from the Russian, and no “correct” spelling.
The author's name, for example, may be transliterated Liapounoff, Ljapunov, Lyapunov,
and so forth. Since the main feature of library indexing is by authors’ names listed
alphabetically, this confusion is a burden to the student and research worker. In this
book the attempt is made to preserve the transliteration of the best recognized, or most
available, translation or interpretation.
* N. Minorsky, Introduction to Non-linear Mechanics, J. W. Edwards, Ann Arbor,
Mich., 1947, p. 52.
the range of validity of the Taylor series written about the selected operating
point.
To illustrate the application of the Liapounoff theorem, consider the
differential equation:
    d²x/dt² + μ(1 − x²)(dx/dt) + kx = Q    (2-1)
This is van der Pol's equation* and when μ < 0 it describes, among other
physical situations, a type of electronic oscillator.
When the acceleration and velocity are zero, the value of x defines the
point of equilibrium; that is, kx = Q; x = Q/k at the equilibrium point.
In order to define the nonlinear function in the vicinity of an equilibrium
point, X, let:
    x = X + ξ,    where dx/dt = dξ/dt and d²x/dt² = d²ξ/dt²

Then, by substitution:

    d²ξ/dt² + μ[1 − (X² + 2Xξ + ξ²)](dξ/dt) + k(X + ξ) − Q = 0    (2-2)
The first (linear) approximation leads to the equation
    d²ξ/dt² + μ(1 − X²)(dξ/dt) + kξ = 0    (2-3)

and the characteristic equation is:

    s² + μ(1 − X²)s + k = 0    (2-4)

Since the Routh-Hurwitz stability condition applied to the characteristic
equation is

    μ(1 − X²) > 0    (2-5)
Equation 2-5 indicates that when μ > 0 the system is stable for all X less
than unity. As X approaches unity, the coefficient μ(1 − X²) becomes
small and the system becomes poorly damped. The theorem is not appli-
cable when X = 1 since this condition results in zero real parts for the roots
of the characteristic equation. For X greater than unity the system is
unstable.
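Liapounoff's first-approximation test here amounts to inspecting the real parts of the roots of Eq. 2-4. A minimal sketch, with illustrative values of μ and k:

```python
import cmath

def first_approx_roots(mu, k, X):
    """Roots of s^2 + mu*(1 - X^2)*s + k = 0 (Eq. 2-4), the characteristic
    equation of the linear approximation about the equilibrium x = X."""
    b = mu * (1.0 - X * X)
    disc = cmath.sqrt(b * b - 4.0 * k)   # complex square root handles b^2 < 4k
    return (-b + disc) / 2.0, (-b - disc) / 2.0
```

With μ = 2 and k = 5, both roots lie in the left half plane for X = 0.5, in the right half plane for X = 1.5, and exactly on the imaginary axis for X = 1, the case in which the theorem gives no answer.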
The concept of Liapounoff stability pertains solely to the dynamic sta-
bility in the immediate vicinity of an equilibrium point. A related, but less
precise, concept can be employed to predict any possible instability for
systems which contain a simple nonlinearity.
* B. van der Pol, "On Relaxation Oscillations," Phil. Mag., series 7, vol. 2 (July-Dec.
1926), pp. 978-992.
If the nonlinearity is single-valued and continuous and has a finite first
derivative (but not necessarily finite higher order derivatives), a general
statement can be made which applies to both small and large motions. The
nonlinearity can be viewed as a gain-changing element, and any nonlinear-
ity of this type can be approximated, arbitrarily closely, by a series of
straight-line segments. Each of the straight-line segments corresponds to
an incremental gain, Kᵢ. If the system is stable for all such Kᵢ, it will be
stable even in the presence of the nonlinearity. Kalman* has suggested
that a necessary but not sufficient condition for instability is a gain Kᵢ
which would produce instability in an equivalent linear system.
[Figure 2-1. Approximation of a gain-changing nonlinearity with straight-line segments: amplifier output K(ε)ε plotted against error ε, showing the actual nonlinear function and its straight-line approximation.]
Of course, conventional constant-coefficient linear analysis methods, for
example, root loci and Bode diagrams, can be employed to examine the
stability as a function of these incremental gains. Figure 2-1 shows a
single-valued nonlinearity and the approximation by straight-line segments.
If this nonlinearity were in the forward loop of the closed loop system illus-
trated in Figure 2-2(a), and the system were stable for the maximum gain
(slope) of the nonlinear function, it would be stable under all conditions.
Figure 2-2(b) shows the root locus plot for the system of Figure 2-1. The
system is indicated to be stable for the maximum gain Kᵢ. If G(s) were of
such a form that the system were conditionally stable, the stability would
have to be examined for all possible values of Kᵢ. This concept of a neces-
sary (but not sufficient) condition for instability can be applied to a system
of any order with any number of simple nonlinearities.
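The incremental-gain argument can be sketched for an assumed example plant (not one given in the text), G(s) = 1/[s(s + 1)(s + 2)], whose closed loop characteristic equation with gain Kᵢ is s³ + 3s² + 2s + Kᵢ = 0, stable for 0 < Kᵢ < 6 by Routh-Hurwitz:

```python
def cubic_stable(a2, a1, a0):
    """Routh-Hurwitz conditions for s^3 + a2*s^2 + a1*s + a0 = 0:
    all coefficients positive and a2*a1 > a0."""
    return a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a0

def stable_for_all_slopes(slopes):
    """Check every incremental gain K_i of a piecewise-linear fit against the
    assumed plant G(s) = 1/[s(s+1)(s+2)]: characteristic equation
    s^3 + 3s^2 + 2s + K_i = 0. Stability for every K_i guarantees stability
    in the presence of the nonlinearity."""
    return all(cubic_stable(3.0, 2.0, k) for k in slopes)
```

A saturation-like characteristic whose segment slopes all lie below 6 passes the test; one segment slope above 6 removes the guarantee and, by Kalman's condition, makes instability possible.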
* R. E. Kalman, "Physical and Mathematical Mechanisms of Instability in Non-
linear Automatic Control Systems," Trans. ASME, vol. 79 (1957), pp. 553-563.
The determination of stability according to Liapounoff can be viewed
either as a powerful justification for the use of linear-analysis or as a pre-
liminary step in the approximate nonlinear analysis. The validity of the
linear approximation is of particular interest when it is used to predict the
stability of the system. If the solution of continuous nonlinear equations
indicated the possibility of a condition of instability not revealed by the
[Figure 2-2. A servomechanism with the gain-changing nonlinearity in the forward loop and the root locus plot: (a) simple nonlinearity K(ε) in series with the linear transfer function G(s); (b) the root locus plot.]
linear approximation, the linear approximation would be practically worthless.
However, for simple nonlinearities in general, and in all cases where
Liapounoff’s conditions are met, the linear approximation can always be
depended upon to determine the stability in the vicinity of an equilibrium
point. Furthermore, according to Kalman, no instability is possible for
small or large motions without an incremental gain which would produce
instability in the equivalent linear system.52. GENERAL TECHNIQUES FOR NONLINEAR PROBLEMS
2.2 DIRECT SOLUTIONS
Where the mere knowledge of stability is insufficient or where the restric-
tions placed on the given methods for examining stability are too severe, it
may be desirable to attempt an exact or an approximate solution to the
equations of motion.
A few nonlinear equations have known solutions. The equation of
motion of a simple pendulum, for example, can be solved in terms of
elliptic integrals of the first kind. Although this problem has been treated
extensively elsewhere,* the rudiments of an approximate solution are
presented here. This serves the triple purpose of providing an illustrative
example, showing a technique to be used again later, and presenting a
classic nonlinear parameter problem, classic both historically and in the
fact that it has been solved completely!
The equation of motion of the simple pendulum is

    d²θ/dt² + ωₙ² sin θ = 0    (2-6)

where θ = deflection of the pendulum from the vertical
      ωₙ² = g/l = acceleration of gravity/length of the pendulum
      t = time

The first integration, to obtain the velocity, is set up by multiplying by
dθ/dt:

    (dθ/dt)(d²θ/dt²) = (1/2) d[(dθ/dt)²]/dt = −ωₙ² sin θ (dθ/dt)    (2-7)

Multiplication by dt leaves the left-hand side as an exact differential.
Integrating, and inserting the initial conditions θ = 0 and dθ/dt = ω₀
when t = 0, the velocity is:

    dθ/dt = ω₀√[1 − (2ωₙ/ω₀)² sin² (θ/2)]    (2-8)

A final integration then yields:

    t = (1/ω₀) ∫₀^θ dθ/√[1 − (2ωₙ/ω₀)² sin² (θ/2)]    (2-9)

* See, for example: T. v. Kármán, M. A. Biot, Mathematical Methods in Engineering,
McGraw-Hill Book Co., New York, 1940.
N. W. McLachlan, Ordinary Nonlinear Differential Equations in Engineering and
Physical Sciences, University Press, Oxford, 1956.
This expression can be manipulated so as to produce a form which is an
elliptic integral of the first kind as tabulated in tables of elliptic integrals.
While three fundamentally different forms of motion are possible,
depending upon the magnitude of 2ωₙ/ω₀, the possible oscillatory motion
(2ωₙ/ω₀ > 1) is the most interesting for the purpose of drawing analogies
to control systems. In this case, the radical will be real only when
(2ωₙ/ω₀) sin (θ/2) ≤ 1. Consequently, the oscillatory motion must have a
maximum amplitude, θₘ, such that:

    sin (θₘ/2) = ω₀/2ωₙ,    or    θₘ = 2 sin⁻¹ (ω₀/2ωₙ)
The period of the motion can be found by noting that a quarter period
corresponds to the time taken in going from θ = 0 to θ = θₘ. Thus:

    P/4 = (1/ω₀) ∫₀^{θₘ} dθ/√[1 − (2ωₙ/ω₀)² sin² (θ/2)]    (2-10)

The period is then easily computed if the integral is modified to have
limits between zero and π/2. This can be accomplished by replacing
(2ωₙ/ω₀) sin (θ/2) by sin φ, since the upper limit will then correspond to
φ = π/2. Since, then,

    sin φ = (2ωₙ/ω₀) sin (θ/2)

differentiation yields

    cos φ dφ = (ωₙ/ω₀) cos (θ/2) dθ

and the modified expression for the period becomes:

    P = (4/ωₙ) ∫₀^{π/2} dφ/√[1 − (ω₀/2ωₙ)² sin² φ]    (2-11)

If ω₀/2ωₙ is much less than unity, the integrand can be expanded by the
binomial theorem to give a series expression which converges rapidly, that
is:

    P = (2π/ωₙ)[1 + (1/4)(ω₀/2ωₙ)² + (9/64)(ω₀/2ωₙ)⁴ + ⋯]    (2-12)
For the small values of w/w, assumed, w,/2m, = sin (6,,/2) © (6,,/2), So
@o/w,, is approximately 6,,. In terms of 0,, the period is then:
pa [i4%ey. “| (2-13)
@,54 GENERAL TECHNIQUES FOR NONLINEAR PROBLEMS
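As a numerical illustration (not part of the original text), the small-amplitude series for the period can be checked against a direct quadrature of the elliptic integral of Equation 2-11. The parameter value ω_n = 1 below is an arbitrary assumption.

```python
import math

def period_exact(omega_n, theta_m, steps=10_000):
    """Period from the elliptic integral, Eq. 2-11, by Simpson's rule."""
    k = math.sin(theta_m / 2.0)              # k = omega_0 / (2 omega_n)
    def f(phi):
        return 1.0 / math.sqrt(1.0 - (k * math.sin(phi)) ** 2)
    h = (math.pi / 2.0) / steps
    s = f(0.0) + f(math.pi / 2.0)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return (4.0 / omega_n) * (h / 3.0) * s

def period_series(omega_n, theta_m):
    """Two-term small-amplitude series, Eq. 2-13."""
    return (2.0 * math.pi / omega_n) * (1.0 + theta_m ** 2 / 16.0)

for theta_m in (0.1, 0.5, 1.0):
    print(theta_m, period_exact(1.0, theta_m), period_series(1.0, theta_m))
```

For θ_m = 0.1 radian the quadrature and the series agree to about five decimal places; at θ_m = 1 radian the two-term series falls slightly below the exact period.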
Besides the very few nonlinear equations with known solutions, one occasionally comes across an equation (almost invariably a first-order equation, or one reducible to first order) which can be integrated readily by standard methods. The most useful method in practice requires the separation of the variables.
When one has an equation of the form

ψ(y) dy/dt = φ(t)    (2-14)

the variables separate:

ψ(y) dy = φ(t) dt    (2-15)

or

∫ ψ(y) dy = ∫ φ(t) dt + C    (2-16)
The first integration of the pendulum problem utilized the fact that the
variables could be separated.
As another example consider the motion of a mass particle, m, subjected
to a constant force, T, and operating against quadratic friction. The non-
linear equation of motion is
dv/dt = T/m − (b/m)v² = (T/m)(1 − (b/T)v²)    (2-17)
where v = velocity
b = damping coefficient
Separating the variables:

∫₀^v dv / (1 − (b/T)v²) = (T/m) ∫₀^t dt    (2-18)

Integrating:

ln[(1 + √(b/T) v)/(1 − √(b/T) v)] = (2/m)√(bT) t    (2-19)

or

(1 + √(b/T) v)/(1 − √(b/T) v) = exp[(2/m)√(bT) t]    (2-20)

Solving for v:

v = √(T/b) · [exp((2/m)√(bT) t) − 1] / [exp((2/m)√(bT) t) + 1] = √(T/b) tanh(√(bT) t/m)    (2-21)
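As a check (not in the original), Equation 2-21 can be compared with a direct numerical integration of Equation 2-17; the parameter values below are arbitrary.

```python
import math

T, b, m = 2.0, 0.5, 1.0          # arbitrary sample values

def v_exact(t):
    """Closed-form solution, Eq. 2-21."""
    return math.sqrt(T / b) * math.tanh(math.sqrt(b * T) * t / m)

def v_numeric(t_end, steps=100_000):
    """Forward-Euler integration of Eq. 2-17 starting from rest."""
    dt = t_end / steps
    v = 0.0
    for _ in range(steps):
        v += (T / m - (b / m) * v * v) * dt
    return v

print(v_exact(2.0), v_numeric(2.0))  # the two values nearly agree
```

Note the terminal velocity √(T/b), which the solution approaches as t grows.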
Homogeneous equations, such as

dy/dt = f(y/t)    (2-22)
can be put into a form in which the variables are separable by means of the
substitution y = vt. Consider, for example, the equation:
dy/dt = (y² + t²)/(2ty) = y/2t + t/2y    (2-23)
Letting y = vt:

dy/dt = v + t dv/dt    (2-24)

so that

v + t dv/dt = v/2 + 1/2v    (2-25)

or

2v dv / (1 − v²) = dt/t    (2-26)
which is readily integrated by quadratures.
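A numerical check of this quadrature can be sketched as follows (this example is not in the original text; the starting point is arbitrary). The quadrature gives −ln|1 − v²| = ln t + const, i.e. y² = t² − Ct, so the quantity (t² − y²)/t should stay constant along any numerical solution of Equation 2-23.

```python
# Arbitrary starting point t = 1, y = 2, so that C = (t^2 - y^2)/t = -3.
t, y = 1.0, 2.0
C0 = (t * t - y * y) / t
dt = 1.0e-5
while t < 2.0:
    y += ((y * y + t * t) / (2.0 * t * y)) * dt   # Euler step of Eq. 2-23
    t += dt
print(C0, (t * t - y * y) / t)   # the invariant is (nearly) preserved
```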
The use of an integrating factor, which makes the equation an exact
differential (for example, the multiplication by d6/dt in the pendulum
problem), is another elementary method which is occasionally applicable
to the problems of control system analysis.
The techniques reviewed here by no means exhaust the possibilities ap-
plicable to nonlinear equations in general.* For control engineering prob-
lems, however, these techniques represent a fair cross section of the direct
solution methods which have been found useful. In view of this last state-
ment and the severely restricted types of problems to which these elementary
direct methods apply, it should be evident that the chance of encountering
a control engineering equation of motion which can be integrated directly
is actually very small.
2.3 APPROXIMATE SOLUTIONS IN SERIES
If exact solutions are not possible, approximate solutions to differential
equations can be found in two different ways. A function, x = f(t), is to be
found which satisfies the equation for a given range of values of the independent variable, t. Either
1. the expression for x can be found in terms of functions of t, and values of x are then obtained by substitution; or, alternatively,
2. tabulated values of x can be found which correspond to tabulated values of t.
* A useful tabulation of particular nonlinear equations for which solutions are known, or which have been extensively studied, can be found in: E. Kamke, Differentialgleichungen, Lösungsmethoden und Lösungen, Chelsea Publishing Company, New York, 1948.
An equivalent of the tabulated values is, of course, a graph of x against t.
The practical application of these methods involves either finding an in-
finite series which is an approximate solution or carrying out a step by step
numerical or graphical integration.
The material which follows is not meant to be a complete discussion of
numerical or graphical methods of solution for ordinary differential equa-
tions. It is intended, however, to illustrate what can be accomplished by
several elementary methods, and to give the reader tools with which to
attack what might otherwise be a severely difficult problem. More com-
plete discussions and extended methods are presented by Levy and Baggot,
and by Willers, among others.*
The method of Picard is used to obtain a solution in terms of a power
series. Unlike approaches which depend upon the Taylor series, this
method can be used even when one or more of the derivatives of x are
infinite at the initial point. Suppose the equation is:
dx/dt = f(x, t)    (2-27)
and the initial conditions are x = x₀, t = t₀. If the origin is changed to the
initial point, the equation is unchanged except that the variables are now
understood to represent the departures from the initial conditions. If an
initial condition on the dependent variable is given, this may very well be
taken as the first approximation to a solution.
Otherwise, assume for small values of t that:

x = atⁿ  and therefore  dx/dt = natⁿ⁻¹
If these expressions are substituted in the original differential equation, the
constants a and n can be determined. A first approximation x₁ is calculated. x₁ is substituted in the expression for f(x, t), and the equation is integrated to give a more accurate expression x₂. If x₂ is substituted in the expression for f(x, t) and the equation is integrated again, a still more accurate expression x₃ is the result. This process can be repeated as many times as may
be desirable. It illustrates a general process of iteration which may be used
to refine an approximate solution.
As an example of the method of Picard, consider the equation of motion
of a body opposed by quadratic friction accelerating from a standstill under
* H. Levy, E. A. Baggot, Numerical Solutions of Differential Equations, Dover Publications, New York, 1950.
F. A. Willers, Practical Analysis, Dover Publications, New York, 1948.
the influence of a constant force. This problem was solved in exact form in
the previous section.
dv/dt = T/m − (b/m)v²    (2-28)

where v = v₀ = 0 when t = 0. Let

v₁ = atⁿ

Then

natⁿ⁻¹ = T/m − (b/m)a²t²ⁿ    (2-29)
for small t. Assuming that the term t²ⁿ may be neglected,

natⁿ⁻¹ = T/m    (2-30)

n − 1 = 0 by equating exponents, therefore n = 1
an = T/m by equating coefficients, therefore a = T/m    (2-31)

and therefore finally

v₁ = atⁿ = (T/m) t
Now

v = v₀ + ∫₀^t (dv/dt) dt    (2-32)

and from the original equation:

dv/dt = T/m − (b/m)v²    (2-33)
So the second approximation is obtained from the first one by substituting and integrating:

v₂ = v₀ + ∫₀^t [T/m − (b/m)v₁²] dt
   = ∫₀^t [T/m − (b/m)(T/m)² t²] dt    (2-34)
   = (T/m) t − (b/m)(T/m)² t³/3
By repeating the process of squaring the approximation, multiplying by
—b/m, integrating the series with respect to time, and adding the result to
(T/m)t (which would reappear after each integration), one may obtain:
v = (T/m)t − (b/m)(T/m)² t³/3 + 2(b/m)²(T/m)³ t⁵/(3·5)
    − 17(b/m)³(T/m)⁴ t⁷/(3·3·5·7) + 38(b/m)⁴(T/m)⁵ t⁹/(3·3·5·7·9) − ⋯    (2-35)
The exact solution to this problem, given by Equation 2-21, is:

v = √(T/b) tanh(√(bT) t/m)
  = (T/m)t − (b/m)(T/m)² t³/3 + 2(b/m)²(T/m)³ t⁵/15
    − 17(b/m)³(T/m)⁴ t⁷/315 + 62(b/m)⁴(T/m)⁵ t⁹/2835 − ⋯    (2-36)

Equation 2-35 may be seen to be identical to the exact solution through the fourth term.
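The iteration just described can also be carried out on a numerical grid rather than term by term. The following sketch (not from the text) applies Picard iteration to Equation 2-28 with arbitrary unit parameters and compares the result with Equation 2-21.

```python
import math

T, b, m = 1.0, 1.0, 1.0          # arbitrary unit parameters
N, t_end = 2000, 1.0
ts = [i * t_end / N for i in range(N + 1)]

v = [0.0] * (N + 1)              # zeroth approximation: the initial value
for _ in range(6):               # a few Picard iterations
    f = [T / m - (b / m) * vi ** 2 for vi in v]
    new_v, s = [0.0], 0.0
    for i in range(N):           # trapezoidal rule for the integral
        s += 0.5 * (f[i] + f[i + 1]) * (t_end / N)
        new_v.append(s)
    v = new_v

exact = math.sqrt(T / b) * math.tanh(math.sqrt(b * T) * t_end / m)
print(v[-1], exact)              # the iterate approaches the exact value
```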
In any series approximation like the one of Equation 2-36, if the first neglected term is taken as approximately equal to the allowable error, then the range of values of the independent variable over which the series is valid can be calculated. Thus, for example, if one chooses to neglect the term in t⁹ and subsequent terms in the series of Equation 2-36, and the maximum allowable error in v is 0.01, then

62(b/m)⁴(T/m)⁵ t⁹/2835 ≤ 0.01

t_max = [2835(0.01)/62(b/m)⁴(T/m)⁵]^{1/9}    (2-37)

with appropriate units and numerical values for b/m and T/m.
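For definiteness, the estimate of Equation 2-37 can be evaluated for sample values; b/m = T/m = 1 below is an arbitrary assumption, not a case treated in the text.

```python
# Hypothetical sample values (not from the text): b/m = T/m = 1 and an
# allowable error in v of 0.01.
b_over_m = 1.0
T_over_m = 1.0
allowable_error = 0.01

# First neglected term of Eq. 2-36: 62 (b/m)^4 (T/m)^5 t^9 / 2835
t_max = (2835.0 * allowable_error
         / (62.0 * b_over_m ** 4 * T_over_m ** 5)) ** (1.0 / 9.0)
print(t_max)   # about 0.917
```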
As the calculation of the coefficients in the series required to maintain
a given degree of accuracy becomes excessively tedious, it is possible to
start a new series which would be valid over a new range of values of the
independent variable. In the example given above a new series could be developed for the values of t beyond t_max. Such a process, however, could
rapidly become more of a task than the application of one of the methods
of numerical or graphical integration.
The method of Picard can also be applied to solve higher order and simul-
taneous equations. With regard to the higher order equations, it should be noted that a differential equation of order n may be reduced, by appropriate substitutions, to an equivalent system of n simultaneous equations, each of the first order.
For example, the second-order equation
d²x/dt² + 3 dx/dt + 6xt = 0    (2-38)
is equivalent to the set of two simultaneous equations:
dx/dt = x′    (2-39)

dx′/dt = −6xt − 3x′    (2-40)
These equations, with a change in time scale, represent the simplified
situation of an airplane following the localizer beam to an automatic
landing. If x₀ = 1, (dx/dt)₀ = x′₀ = 1, and t₀ = 0 are the initial conditions,
then:
x = x₀ + ∫₀^t x′ dt = 1 + ∫₀^t x′ dt

x′ = x′₀ + ∫₀^t (−6xt − 3x′) dt = 1 + ∫₀^t (−6xt − 3x′) dt
By substituting the initial conditions as a first approximation and carrying
out the indicated integrations, the first approximation is obtained:
x₁ = 1 + t
x′₁ = 1 − 3t    (2-41)
The second approximation is found by substituting the first approximation
and carrying out the indicated integrations:
x₂ = 1 + ∫₀^t (1 − 3t) dt = 1 + t − (3/2)t²
x′₂ = 1 + ∫₀^t (−6x₁t − 3x′₁) dt = 1 − 3t + (3/2)t² − 2t³    (2-42)
The third approximation is determined in the same way:

x₃ = 1 + ∫₀^t (1 − 3t + (3/2)t² − 2t³) dt = 1 + t − (3/2)t² + (1/2)t³ − (1/2)t⁴

x′₃ = 1 + ∫₀^t (−6x₂t − 3x′₂) dt    (2-43)
    = 1 + ∫₀^t (−3 + 3t − (21/2)t² + 15t³) dt
    = 1 − 3t + (3/2)t² − (7/2)t³ + (15/4)t⁴

and the fourth approximation begins:

x₄ = 1 + ∫₀^t x′₃ dt = 1 + t − (3/2)t² + (1/2)t³ − (7/8)t⁴ + (3/4)t⁵
and so forth.
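The same iteration can be carried out numerically for the pair of Equations 2-39 and 2-40. The grid, interval, and iteration count below are arbitrary choices for illustration (this sketch is not part of the text).

```python
# Grid Picard iteration for Eqs. 2-39 and 2-40, x(0) = x'(0) = 1.
N, t_end = 4000, 0.5
dt = t_end / N
ts = [i * dt for i in range(N + 1)]

x = [1.0] * (N + 1)                      # start from the initial values
xp = [1.0] * (N + 1)
for _ in range(8):
    g = [-6.0 * x[i] * ts[i] - 3.0 * xp[i] for i in range(N + 1)]
    nx, nxp, sx, sxp = [1.0], [1.0], 0.0, 0.0
    for i in range(N):                   # trapezoidal rule for both integrals
        sx += 0.5 * (xp[i] + xp[i + 1]) * dt
        sxp += 0.5 * (g[i] + g[i + 1]) * dt
        nx.append(1.0 + sx)
        nxp.append(1.0 + sxp)
    x, xp = nx, nxp

x4_series = 1 + 0.5 - 1.5 * 0.25 + 0.5 * 0.125 - 0.875 * 0.0625 + 0.75 * 0.03125
print(x[-1], x4_series)    # the grid iterate and x4 at t = 0.5 nearly agree
```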
A power series which is an approximate solution to a differential equa-
tion can often be found in terms of a Taylor series. For example:
dx/dt = f(x, t)    (2-44)

At t = a and x = b,

(dx/dt)₀ = x₀′ = f(a, b)
Differentiating Equation 2-44,

d²x/dt² = g(x, t, dx/dt)    (2-45)

and evaluating at t = a and x = b,

x₀″ = g(a, b, x₀′)
and so forth. The process is continued and the results inserted into a
Taylor series to obtain the solution:
(x − b) = (t − a)x₀′ + (1/2!)(t − a)²x₀″ + (1/3!)(t − a)³x₀‴ + ⋯ + (1/n!)(t − a)ⁿx₀⁽ⁿ⁾
The use of the Taylor series implies the restriction that all derivatives, dⁿx/dtⁿ, must exist in the region about t = a.
If the equation is of higher order than the first, the same procedure can be
followed because the initial conditions on all the derivatives will be known,
and the higher power terms in the solution can be calculated by successive
differentiation.
Consider Equation 2-38, repeated here for convenience,

d²x/dt² + 3 dx/dt + 6xt = 0    (2-38)

subject to the initial conditions t₀ = 0, (dx/dt)₀ = 1.0, and x₀ = 1.0. Then

x″ = −3x′ − 6xt
x‴ = −3x″ − 6x′t − 6x
x⁗ = −3x‴ − 6x″t − 12x′
x⁽⁵⁾ = −3x⁗ − 6x‴t − 18x″
x⁽⁶⁾ = −3x⁽⁵⁾ − 6x⁗t − 24x‴

and

x₀″ = −3
x₀‴ = +9 − 6 = 3
x₀⁗ = −9 − 12 = −21
x₀⁽⁵⁾ = +63 + 54 = +117
x₀⁽⁶⁾ = −351 − 72 = −423

Therefore:

x = 1 + t − (3/2!)t² + (3/3!)t³ − (21/4!)t⁴ + (117/5!)t⁵ − (423/6!)t⁶ + ⋯    (2-46)
This series corresponds term for term with the one developed by the
method of Picard, except for the last term in the previous series which is the
one which would change at the next step. The range over which the series
is valid can again be approximated by equating the first neglected term to
the maximum allowable error in the dependent variable.
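The successive differentiations can be mechanized. Writing x(t) = Σ c_k tᵏ and substituting into Equation 2-38 gives the recurrence (k+2)(k+1)c_{k+2} + 3(k+1)c_{k+1} + 6c_{k−1} = 0, from which the derivatives at t = 0 follow as x₀⁽ⁿ⁾ = n! c_n. A short sketch (not from the text):

```python
import math

# Writing x(t) = sum c_k t^k, Eq. 2-38 gives, for each power of t,
#     (k+2)(k+1) c_{k+2} + 3 (k+1) c_{k+1} + 6 c_{k-1} = 0
# (the 6xt term shifts the index by one).
c = [1.0, 1.0]                    # c0 = x(0) = 1, c1 = x'(0) = 1
for k in range(0, 6):
    prev = c[k - 1] if k >= 1 else 0.0
    c.append((-3.0 * (k + 1) * c[k + 1] - 6.0 * prev) / ((k + 2) * (k + 1)))

# The derivatives at t = 0 are n! c_n:
print([round(math.factorial(n) * c[n]) for n in range(2, 7)])
# -> [-3, 3, -21, 117, -423], the values found by successive differentiation
```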
The Taylor theorem approach can be used with simultaneous equations
by solving them in a parallel fashion. The process can be illustrated as
follows:

dy/dx = f₁(x, y, z)
dz/dx = f₂(x, y, z)    (2-47)
The initial conditions are x = x₀, y = y₀, z = z₀; y₀′ and z₀′ are determined by the original equations. Differentiating:

y″ = g₁(x, y, z, y′, z′)
z″ = g₂(x, y, z, y′, z′)    (2-48)
Now y₀″ and z₀″ can be calculated by substituting known quantities on the right-hand sides of these equations. Proceeding to differentiate and substitute, y₀‴ and z₀‴ and higher derivatives at the initial point are determined.
Then the solution is given by the two equations:
y = y₀ + (x − x₀)y₀′ + (1/2!)(x − x₀)²y₀″ + (1/3!)(x − x₀)³y₀‴ + ⋯
                                                                    (2-49)
z = z₀ + (x − x₀)z₀′ + (1/2!)(x − x₀)²z₀″ + (1/3!)(x − x₀)³z₀‴ + ⋯
2.4 STEP BY STEP INTEGRATION
When an approximate solution in terms of an infinite series is impractical,
a solution can always be obtained by means of step by step integration.
Figure 2-3. Graphical solution of a differential equation.
Either graphical or numerical methods can be employed. The procedures
are actually identical in principle. Since the graphical methods are useful
as a means of visualizing the numerical procedures they will be explained
first.
Consider the first-order differential equation:
dy/dx = f(x, y)    (2-50)
For any initial values of x and y, at the point P₀, there is a slope (dy/dx)₀ which is obtained from the equation. A short line with this slope can be laid off from the initial point to the point P₁. Vide Figure 2-3. The coordinates of P₁ determine a new pair of values x₁ and y₁, and from them a new slope (dy/dx)₁ can be calculated. This slope is now laid off from P₁ along a line segment ending at P₂. There a new slope is determined and laid off, and so forth.
Increments in the independent variable x need not be constant. In fact,
the increments should be small and the line segments short where the slope
is changing rapidly. They may be longer where the slope is changing
slowly.
Table 2-1. Numerical Values for the Graphical Integration

x      y      dy/dx
0      1.0     0
0.1    1.0    −0.100
0.2    0.99   −0.198
0.3    0.97   −0.290
0.4    0.94   −0.375
0.5    0.90   −0.450
0.6    0.85   −0.510
0.8    0.75   −0.60
1.0    0.63   −0.63
1.2    0.50   −0.60
1.4    0.38   −0.530
1.6    0.275  −0.440
1.7    0.23   −0.392
1.8    0.19   −0.342
1.9    0.16   −0.304
2.0    0.13   −0.26
2.1    0.10   −0.21
2.2    0.08   −0.176
2.4    0.045  −0.108
2.6    0.02   −0.052
This is the simplest form of the step by step integration of differential
equations. It is relatively crude and errors tend to accumulate rapidly. Of
course, if a first approximation to the solution can be found, the approxi-
mation can be refined by iteration in the same way as with the solution in
series. The process is illustrated in detail by the example of the equation

dy/dx = −xy    (2-51)
with the initial condition y = 1.0 when x = 0. A useful table, such as
Table 2-1, giving numerical values, is maintained in conjunction with the
graphical construction shown in Figure 2-4. The values of x and y at the end of each line segment are inscribed in the table and a record of the calculated values of dy/dx is kept. The process starts with a line segment of zero slope from the point y = 1.0, x = 0. An interval in x of 0.1 is chosen as convenient. At the point x = 0.1, y = 1.0 the slope is −(0.1)(1.0) = −0.100. A line with this slope is constructed forward from the point x = 0.1, y = 1.0. Along this line at x = 0.2, y is read from the graph and is found to be 0.99. The slope here is dy/dx = −(0.2)(0.99) = −0.198, and so forth,
Figure 2-4. Graphical solution of the equation dy/dx = —xy.
to the third, fourth, fifth, and sixth points on the curve. There it may be
observed that the slope is no longer changing so rapidly. It is now per-
missible to lengthen the increments in the independent variable, x, until the
slope begins to change rapidly again.
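The step-by-step construction of Table 2-1 can be reproduced numerically. The sketch below (not from the text) uses a uniform interval of 0.1 throughout, whereas the table lengthens the interval where the slope changes slowly, so the later entries differ slightly.

```python
# Step-by-step ("Euler") construction for dy/dx = -xy, y(0) = 1,
# with a uniform interval h = 0.1.
x, y, h = 0.0, 1.0, 0.1
table = [(x, y, -x * y)]
while x < 2.6 - 1e-9:
    y += (-x * y) * h          # lay off a segment with the current slope
    x += h
    table.append((round(x, 1), round(y, 3), round(-x * y, 3)))
for row in table[:6]:
    print(row)                 # reproduces the first entries of Table 2-1
```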
A solution obtained in this way might be accurate enough. More accu-
racy could be obtained by plotting the slopes at smaller intervals. On the
other hand, a second approximation (more accurate than the first) may be
obtained by plotting the values of dy/dx (which are available in Table 2-1) and integrating the dy/dx curve. The integration can be accomplished graphically, numerically, or mechanically (with a planimeter). The curves for dy/dx from the first approximation, and the second approximation y = f(x) obtained by integrating dy/dx (using the rectangular approximation),
are plotted in Figure 2-4.
It is apparent, of course, that Equation 2-51 can be integrated directly by
separating variables. Points from the exact solution are plotted in Figure 2-4 for purposes of comparison.
In this particular case, the second approximation is considered to be more
than good enough. If it were not, however, a new set of slopes could be
calculated from pairs of points on the second approximation solution curve.
If these slopes were plotted and integrated, the result would be a third
approximation and so forth.
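The refinement by integrating the slope curve can likewise be sketched numerically (this example is not in the text; the trapezoidal rule stands in for the rectangular approximation used with Figure 2-4). For Equation 2-51 the exact solution from separating variables is y = exp(−x²/2), which serves as the reference.

```python
import math

h, n = 0.1, 26
xs = [i * h for i in range(n + 1)]

y1 = [1.0]                        # first approximation: simple step by step
for i in range(n):
    y1.append(y1[-1] - xs[i] * y1[-1] * h)

slopes = [-xs[i] * y1[i] for i in range(n + 1)]
y2, s = [1.0], 0.0                # second approximation: integrate the slopes
for i in range(n):
    s += 0.5 * (slopes[i] + slopes[i + 1]) * h   # trapezoidal rule
    y2.append(1.0 + s)

exact = math.exp(-xs[-1] ** 2 / 2.0)             # from separating variables
print(y1[-1], y2[-1], exact)
```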
Simultaneous equations (and therefore higher order equations) can also be
solved by this same method. Here, of course, it is necessary to carry the
process forward simultaneously with more than one curve. Assume, for
example, a second-order equation reduced by substitution of a new variable
to two simultaneous first-order equations:
dy/dx = f₁(x, y, z)
dz/dx = f₂(x, y, z)    (2-52)

The values x₀, y₀, and z₀ may be substituted to obtain (dy/dx)₀ and (dz/dx)₀.
Line segments with these values are constructed on separate graphs of
y vs. x and z vs. x. The end points of the line segments must now be taken at
identical values of x. They determine a new set of values x₁, y₁, and z₁. These are substituted in the two equations and determine two new slopes (dy/dx)₁ and (dz/dx)₁. The construction of the two graphs of the solution
proceeds together in this manner, as shown in Figure 2-5. If the derivatives
are plotted and integrated, a second approximation can be obtained exactly
as in the case of the single first-order equation.
It has already been pointed out that the simple method of step by step
integration tends to accumulate errors. A considerable gain in accuracy is
achieved by using an average slope between its two end points for each
individual line segment. This amounts to taking half a step backward for
each step forward, but the benefits in accuracy may well repay the extra
labor.
A graphical construction using the average slopes is illustrated in Figure 2-6. It starts in a way identical to the simple method with a slope (dy/dx)₀ calculated from the initial values of x and y. The slope (dy/dx)₁, calculated from the values x₁ and y₁ at the end of the first line segment, is also plotted proceeding from x₀, y₀. The average slope between the two is then employed to construct a segment of the actual solution. This segment will terminate at a point x₁, y₁′, different from x₁, y₁. The pair of values x₁, y₁′ is used to calculate a new slope (dy/dx)₁′, which is projected to x₂, y₂, and this
Figure 2-5. Graphical solution of two simultaneous first-order equations.
pair of values determines (dy/dx)₂. The average slope between (dy/dx)₁′ and (dy/dx)₂ is now used to construct a second segment of the solution. The
same procedure is applied repetitively until the interesting range of values of
the independent variable has been covered.
In some cases it might be discovered that there is no appreciable difference between the first and second approximations to the value of the derivatives, such as (dy/dx)₁ and (dy/dx)₁′, at a given value of the independent variable, such as x₁. If this is true, the graphical construction using average slopes can be carried out very rapidly since each slope is only calculated once, as in the simple method. Each slope, however, is plotted twice and improved accuracy is achieved. On the other hand, it may be advantageous, although tedious, to determine a third approximation to the
Figure 2-6. Improving the graphical solution by using average slopes.
dependent variable (y₁″, for example) at each step. Thus y₁″ would be the end point (at x = x₁) of a line segment drawn from x₀, y₀ with a slope dy/dx = ½[(dy/dx)₀ + (dy/dx)₁′].
The procedures just described for the graphical solution of differential
equations can alternatively be carried out completely numerically without
any plotting. What has been called the simple method is known in its
completely numerical form as the method of Euler.
In Euler's method, if the equation is dy/dx = f(x, y), then the initial values x₀ and y₀ are used to calculate (dy/dx)₀. This slope is multiplied by an increment Δx to give an increment in the dependent variable Δy₀. The new values x₁ = x₀ + Δx and y₁ = y₀ + Δy₀ are now substituted to find (dy/dx)₁, which is in turn multiplied by an increment Δx to find a new increment Δy₁. The calculation is repeated over and over in order to cover the range of values of x which is of interest. Procedures for integrating
simultaneous equations, and for reducing higher order equations to
simultaneous equations, are exactly analogous to the procedures used in
conjunction with the graphical constructions.
The method of Euler can also be modified to make use of average slopes
as in the more elaborate graphical procedure. Suppose there is a differen-
tial equation in the form:

dⁿy/dxⁿ = f(x, y, dy/dx, ⋯, dⁿ⁻¹y/dxⁿ⁻¹)    (2-53)
[Table 2-2. The Numerical Step Method Using Average Slopes]
The first estimate, (dⁿy/dxⁿ)₀, for the slope at the starting point is obtained by substituting the initial conditions. Multiplying (dⁿy/dxⁿ)₀ by Δx gives a first approximation to (dⁿ⁻¹y/dxⁿ⁻¹)₁. Multiplying (dⁿ⁻¹y/dxⁿ⁻¹)₁ by Δx gives a first approximation to (dⁿ⁻²y/dxⁿ⁻²)₁, and so forth, to the first approximation for y₁. y₁ and its derivatives are now substituted in the functional relationship to determine the second estimate for the slope (dⁿy/dxⁿ)₁. The average slope ½[(dⁿy/dxⁿ)₀ + (dⁿy/dxⁿ)₁], which is a final estimate, is then multiplied by Δx to give a second approximation to (dⁿ⁻¹y/dxⁿ⁻¹)₁. This slope is averaged with (dⁿ⁻¹y/dxⁿ⁻¹)₀ and multiplied by Δx to give the second approximation to (dⁿ⁻²y/dxⁿ⁻²)₁, and so on.
Table 2-2 illustrates the process of step by step numerical integration by
presenting the first few steps in the solution of the normalized equation of
motion of the servomechanism whose block diagram is presented in Figure
1-24.
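Since the tabulated servomechanism equation of Table 2-2 is not legible in this reproduction, the average-slope procedure can be illustrated instead on Equation 2-38; the step size and interval below are arbitrary choices (this sketch is not from the text).

```python
def f(t, x, xp):
    """Right-hand sides after reduction: dx/dt = x', dx'/dt = -6xt - 3x'."""
    return xp, -6.0 * x * t - 3.0 * xp

t, x, xp, h = 0.0, 1.0, 1.0, 0.01
for _ in range(50):                    # integrate out to t = 0.5
    k1x, k1p = f(t, x, xp)             # slope at the start of the step
    k2x, k2p = f(t + h, x + h * k1x, xp + h * k1p)  # slope at trial end point
    x += 0.5 * h * (k1x + k2x)         # advance with the average slopes
    xp += 0.5 * h * (k1p + k2p)
    t += h
print(x)    # x at t = 0.5; the series solution of Section 2.3 gives about 1.156
```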
The methods for the graphical and numerical solution of differential
equations which have been presented are general and flexible. They are
readily grasped, and are relatively easy to carry out. They give fairly accu-
rate results with a minimum amount of labor. Other methods are available
where a high degree of precision is required; and there are still other special
methods where the equation has some particular form. For a discussion
of these matters the reader is referred to any of the several excellent books
on numerical methods.*
There is one common engineering method of approximate graphical
solution of first- and second-order ordinary differential equations which has
not been discussed here. This is the method of isoclines. The discussion of
this technique is deferred to Chapter 7, where it can be presented in a more
appropriate setting in conjunction with the phase plane.
The methods which have been presented should be employed when other
methods fail or are inapplicable. Perhaps the solutions obtained for a
* H. Levy, E. A. Baggot, Numerical Solutions of Differential Equations, Dover Publi-
cations, New York, 1950.
I. S. Sokolnikoff, E. S. Sokolnikoff, Higher Mathematics for Engineers and Physicists,
McGraw-Hill Book Co., New York, 1941.
A summary discussion of numerical methods aimed at control engineers is in E. S.
Smith, Automatic Control Engineering, McGraw-Hill Book Co., New York, 1944.
In addition to the general methods there have been several presentations of the number series methods which are analogous to numerical convolution. The number series method, however, unlike convolution, may be applied to nonlinear control systems. See, for example, A. Tustin, “A Method of Analyzing the Behavior of Linear Systems in Terms of Time Series,” Journal IEE, Pt. IIa, vol. 94 (1947), pp. 130-142; A. Tustin, “A Method of Analyzing the Effect of Certain Kinds of Non-linearity in Closed-Cycle Control Systems,” Journal IEE, Pt. IIa, vol. 94 (1947), pp. 152-160; and A. Madwed, Number Series Method of Solving Linear and Nonlinear Differential Equations, Report No. 6445-T-26, Instrumentation Laboratory, MIT, Cambridge, Mass., April 1950.
very limited number of parameter and initial condition combinations may
illuminate the whole problem. At worst, a limited amount of information,
properly considered, is much to be preferred over an attack which entirely
disdains the analytical approach. The examples which have been presented
should indicate that a control engineer has no particular cause for despair
even in the event that conventional analytical approaches to nonlinear parameter problems are unsuccessful. A direct attack aimed at a solution of the
equation is likely to, quite literally, solve the problem. Step by step methods
which have been presented here are, in principle, the same techniques by
which digital computers solve the equations of motion of nonlinear control
systems.
2.5 PIECEWISE LINEAR SOLUTIONS
Many nonlinear systems are “piecewise linear,” or may approximate this
condition. The term “piecewise linear” is meant to imply that, in its
over-all operation, the system obeys two or more sets of linear constant-
coefficient differential equations. Each of the several equation sets is asso-
ciated with different regions of operation. The performance of the system
can be examined by “piecing” together solutions to the individual equations.
Final values of one solution piece, as the operation of the system crosses
the boundary between operating regions, are used as the initial conditions
for the next one.
Some simplified representations of relay servomechanisms can be
analyzed as if they were piecewise linear. A separate exposition of relay
servomechanisms is presented in Chapter 9.
Another example of a piecewise linear system is furnished by the inverted
pendulum with a vertically oscillating pivot when its operation is described
by the Meissner equation:*

d²θ/dt² + (α ± β)θ = 0    (2-54)
This equation is an approximation to the Mathieu equation, d²θ/dt² + (α − β cos t)θ = 0, which actually describes the performance of the inverted pendulum with an oscillating pivot. The cosine term in the Mathieu equation has now been replaced by a square wave, and thus a linear time-varying equation has been replaced by a nonlinear equation. Transition
* E. Meissner, “Ueber Schuettelerscheinungen in Systemen mit periodisch veraenderlicher Elastizitaet,” Schweizerische Bauzeitung, vol. 72, no. 11 (Sept. 14, 1918), pp. 95-98.
J. P. Den Hartog, Mechanical Vibrations, McGraw-Hill Book Co., New York, 1940.
from the plus value to the minus value of β and back takes place when t = π/2, 3π/2, 5π/2, 7π/2, ⋯. The nature of the approximation is indicated in Figure 2-7. A solution is obtained by piecing together at times t = π/2, 3π/2, 5π/2, 7π/2, ⋯, solutions to the two linear constant-coefficient
equations:
d²θ/dt² + (α + β)θ = 0

and    (2-55)

d²θ/dt² + (α − β)θ = 0
Figure 2-7. The approximation employed in the Meissner equation.
The solution segments are given by:

θ(t) = θ₀ cos(√(α + β) t) + (θ̇₀/√(α + β)) sin(√(α + β) t)

and    (2-56)

θ(t) = θ₀ cos(√(α − β) t) + (θ̇₀/√(α − β)) sin(√(α − β) t)
To illustrate the fact that an inverted pendulum with oscillating pivot can be “stable,” consider a configuration where α = −0.05, β = +0.35, and the initial conditions are θ₀ = 0.1 radian, (dθ/dt)₀ = 0. θ is the angle of the pendulum measured from vertical. When θ = 0 the bob of the inverted pendulum is standing directly above the vertically oscillating pivot.
Inserting these values into Equation 2-56,
θ(t) = 0.1 cos(0.548 t)

and    (2-57)

dθ/dt = −0.1(0.548) sin(0.548 t)
characterize the first solution segment. When
t = π/2
θ = +0.0652
dθ/dt = −0.0415
If these values are now used as the initial conditions for the second solution segment,

θ(t) = 0.0652 cos[0.632 j(t − π/2)] − (0.0415/0.632 j) sin[0.632 j(t − π/2)]

or

θ(t) = 0.0652 cosh[0.632(t − π/2)] − (0.0415/0.632) sinh[0.632(t − π/2)]    (2-58)

and

dθ/dt = 0.0652(0.632) sinh[0.632(t − π/2)] − 0.0415 cosh[0.632(t − π/2)]
Figure 2-8. Response of the inverted pendulum described by the Meissner equation;
α = −0.05, β = +0.35.
characterize the second solution segment. These equations evaluated at t = 3π/2 provide the initial conditions for the third segment, and so forth.
The solution is illustrated in Figure 2-8. It may be observed that the
inverted pendulum does not diverge, as might be expected, from its
“unstable” equilibrium position.* This is a very interesting phenomenon
to observe.
* Conversely, a simple pendulum with a vertically oscillating support can be “unstable,” that is, execute diverging oscillations about its vertical (bob down) equilibrium position.
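The piecing-together of solution segments can be sketched numerically as follows (not from the text); the segment formulas are those of Equation 2-56 with the sign of β alternating at t = π/2, 3π/2, 5π/2, ⋯.

```python
import math

# Parameters of the worked example: alpha = -0.05, beta = +0.35,
# theta0 = 0.1 rad, (d theta/dt)0 = 0; beta changes sign at
# t = pi/2, 3*pi/2, 5*pi/2, ..., so the first segment is a half one.
alpha, beta = -0.05, 0.35
theta, rate = 0.1, 0.0
sign, t_seg = +1.0, math.pi / 2.0
history = []

for _ in range(8):                       # a few switching intervals
    c = alpha + sign * beta
    if c > 0:                            # oscillatory segment (Eq. 2-56)
        w = math.sqrt(c)
        th = theta * math.cos(w * t_seg) + (rate / w) * math.sin(w * t_seg)
        rt = -theta * w * math.sin(w * t_seg) + rate * math.cos(w * t_seg)
    else:                                # hyperbolic segment
        w = math.sqrt(-c)
        th = theta * math.cosh(w * t_seg) + (rate / w) * math.sinh(w * t_seg)
        rt = theta * w * math.sinh(w * t_seg) + rate * math.cosh(w * t_seg)
    theta, rate = th, rt
    history.append(theta)
    t_seg = math.pi                      # later segments span a full interval
    sign = -sign

print(history)       # the deflection stays bounded, as in Figure 2-8
```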
The concept of nonlinear or time-varying-parameter systems which are actually or approximately piecewise linear is a very convenient one. It
will be used over and over again, not so much to obtain solutions to non-
linear differential equations but particularly in conjunction with the phase
plane method as a means of visualizing, in approximate form, the main
features of the behavior of nonlinear systems. Since, in principle anyway,
the solutions to linear equations are known, the system trajectories of a
piecewise linear system in the phase plane are composed of trajectory
segments which are familiar representations of linear system behavior.
2.6 EVALUATION OF METHODS OF NONLINEAR
CONTROL ANALYSIS
The bulk of this chapter has been concerned with obtaining solutions to
nonlinear differential equations subject to arbitrary forcing functions or
initial conditions. In progressing to this point, the reader has been exposed
to some of the rudiments of nonlinear theory which can often profitably be
employed in the analysis of nonlinear control systems. Little mention has
yet been made, however, of the key techniques used in the great majority of
problems involving nonlinear controls. These are, of course, the subject of
the remaining chapters, but some anticipation is pertinent here to indicate
their connection, as problem-solving tools, with the so-called general
methods previously discussed.
For many engineering purposes a complete solution for the response of a
system subjected to arbitrary forcing functions or initial conditions is not
essential for adequate understanding of system behavior. For example, in
many cases a linearized analysis of system performance will have been
made. It may well be sufficient to modify this analysis by making suitable
assumptions concerning the nature and extent of certain likely nonlinear-
ities, and then to calculate their effect on system performance. A suscep-
tibility to oscillation will be of particular interest to control engineers, and
the possible amplitude and frequency of the oscillation can be of para-
mount importance. Very likely the effect of raising or lowering the gain in
one or more control loops will have already been estimated for the linear
case. Consequently, if the effect of the nonlinearity can be reduced to these
terms, the requisite analysis can be accomplished quite handily. The
conditions for oscillation and the speed of response in terms of bandwidth
can be estimated.
It is the objective of the sinusoidal describing function method of non-
linear analysis to reduce the representation of the actual nonlinearity to an
equivalent linear gain (and phase angle). The representation of the non-
linearity is simplified by assuming that the output of the nonlinear element,
in response to a sine wave input, can be described in terms of the funda-
mental (sine wave) component of the distorted waveform. Precisely
because of this, the first approximation fundamental sinusoidal describing
function technique does not account for harmonic or subharmonic oscil-
lations. If some of the harmonic terms in the distorted waveform are
retained, the analysis is more accurate and complete and the existence of
harmonic and subharmonic oscillations may then be predicted. For these
purposes a more general periodic input describing function is used.
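This reduction to an equivalent linear gain can be sketched numerically. In the fragment below, a sine wave is passed through an ideal limiter and the fundamental Fourier component of the distorted output is extracted; the ratio of that fundamental to the input amplitude matches the limiter's well-known closed-form describing function. The saturation level, input amplitude, sample count, and helper names are arbitrary illustrative choices, not values from the text.

```python
import math

def limiter(v, a=1.0):
    # Ideal limiter: unity slope, saturation level a.
    return max(-a, min(a, v))

def fundamental_gain(amp, a=1.0, n_pts=100_000):
    # Fundamental (sine) Fourier coefficient of the limiter output over one
    # period, divided by the input amplitude: the sinusoidal describing
    # function of this simple (no-memory) element.
    b1 = 0.0
    for k in range(n_pts):
        th = 2.0 * math.pi * k / n_pts
        b1 += limiter(amp * math.sin(th), a) * math.sin(th)
    b1 *= 2.0 / n_pts
    return b1 / amp

def df_closed(amp, a=1.0):
    # Closed-form describing function of the limiter, for comparison.
    if amp <= a:
        return 1.0
    r = a / amp
    return (2.0 / math.pi) * (math.asin(r) + r * math.sqrt(1.0 - r * r))

print(fundamental_gain(2.0), df_closed(2.0))
print(fundamental_gain(0.5))   # below saturation the element is linear, gain 1
```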
Since a nearly sinusoidal periodic wave having zero mean value is
assumed to exist somewhere in the system, the possibility of more than one
equilibrium point cannot be a direct result of a single analysis. Further-
more, the result of a sinusoidal describing function analysis is a represen-
tation of the system in the frequency domain. The generally satisfactory
correlation between the time and frequency domains in linear analysis is
much less precise in connection with nonlinear analysis. On the other
hand, most servomechanism systems are analogous to a low-pass filter, and
the higher frequency components of a distorted waveform are heavily
attenuated during their trip around the control loop. For this reason,
which is apt to be even truer in higher order systems, the sinusoidal de-
scribing function technique is often impressively successful. The close
analogy between describing function analysis and linear analysis especially
recommends it.
The synthesis of nonlinear systems, particularly the adjustment of gain
and compensation of the phase characteristics, can be carried out with the
sinusoidal describing function technique in much the same way as is done
in connection with linear systems. In summary, the technique is most useful
where the effect of the nonlinearity is small and the system is of relatively
high order, and where a knowledge of the response to sine wave inputs has
some significance.
Viewed in a general way, the describing function method is not restricted
to oscillatory phenomena. It is possible to define describing functions of non-
linear elements for any kind of input. Of course, the describing functions of
a particular nonlinearity will be different for each type of input considered.
Probably the most valuable ones, other than the sinusoidal type, are
the describing functions for statistical inputs. These allow the analyst to
extend, within limits, the linear analysis and synthesis methods based upon
statistical inputs to the similar treatment of nonlinear problems. Other
types of describing functions are based upon step functions, which extend
the indicial (step function) and weighting function response concepts, to
some degree, into the nonlinear domain.
The phase space representation is a most powerful tool in nonlinear
analysis. Most engineers, however, have difficulty in interpreting or even
imagining phase diagrams in more than two dimensions. Diagrams in the
phase plane may represent the behavior of first- or second-order systems.
The system, therefore, must be relatively simple in order to be practically
amenable to this type of analysis. It often happens, however, that even
when the system is of higher order a second-order mathematical model can
be set up which is capable of representing the main features of system per-
formance. A deadtime is useful as a representation of higher order lags in
switching systems.
In the use of the phase plane method there are no restrictions on the
nature or extent of the nonlinearity. Initial displacements and velocities in
any combination and, in some cases, steps and ramps can be used to excite
the system. The phase plane diagrams can be interpreted directly in terms
of the time behavior of the system. Finally, limit cycles and multiple
equilibrium points are evident. On the other hand, the phase plane dia-
grams do not readily indicate the steps that need to be taken to correct
system performance, and synthesis can only be accomplished by cut and
try procedures.
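A phase plane trajectory of the kind just described can be constructed numerically. The damped-pendulum equation, initial condition, and step size below are illustrative assumptions, not an example from the text; the point is simply that the (x, ẋ) path of a second-order nonlinear system is computed directly, and equilibrium behavior is then read off the diagram.

```python
import math

# Phase plane trajectory for the nonlinear second-order system
# x'' + 0.5 x' + sin(x) = 0 (a damped pendulum), started from an
# initial displacement with zero initial velocity.
def trajectory(x0, v0, dt=1e-3, t_end=40.0):
    x, v = x0, v0
    path = [(x, v)]
    for _ in range(int(t_end / dt)):
        a = -0.5 * v - math.sin(x)   # acceleration from the equation of motion
        v += a * dt
        x += v * dt                  # semi-implicit Euler step
        path.append((x, v))
    return path

path = trajectory(2.0, 0.0)
xf, vf = path[-1]
print(xf, vf)   # the trajectory spirals in toward the equilibrium at the origin
```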
It may thus be appreciated that the describing function technique and
the phase plane method are uniquely complementary. One is useful where
the other is not. It is necessary to master both.
Table 2-3 sets forth the several methods of analyzing nonlinear control
systems, and presents in concise form an evaluation of the area of practical
application and difficulty of each method.
None of the methods is as easy to apply as the corresponding tech-
niques of linear analysis, and the easier ones are limited in their area of
application.
In control engineering practice, nonlinear problems are often solved by
machine methods. Such a procedure strongly recommends itself for ob-
taining quantitative results. Computing machines of either the analog or
digital type can greatly extend the capability of design engineers. In fact,
the machines can usually produce solutions to linear or nonlinear equations
much faster than the operator or analyst can assimilate the results. It is for
precisely this reason that the study of the theory of linear and nonlinear
control systems is so important. Even if the analyst lets the machine relieve
him of the burden of calculating solutions—theory, like experience, pro-
vides guides for interpreting the results. The machines, dumb and robot-
like, answer questions. The operator-analyst must pose the questions
intelligently. Very often the direction of the line of questioning has to be
set by a preliminary paper and pencil analysis. In any event, some know-
ledge of the expected answer is required in order to be sure that that part of
the question which is represented by the machine setup has been posed as
intended.
Table 2-3
Methods of Nonlinear Control System Analysis

Method                                     | Application                                                                                  | Difficulty
Liapounoff stability criterion             | At an equilibrium point where nonlinearity is continuous and has continuous derivatives      | Relatively simple to apply
Direct solutions                           | Limited number of equations                                                                  | May be very difficult to cast practical problem into proper form
Graphical solutions                        | Any transient response                                                                       | Grows rapidly with system complexity and time interval of interest
Numerical solutions                        | Any transient response                                                                       | Should be used as a last resort
Periodic input describing function         | Prediction of periodic phenomena                                                             | Represents extensions of familiar techniques
Gaussian, random input describing function | Approximate performance with statistical inputs. Only "small" nonlinearities are permissible | Represents extensions of familiar techniques
Phase plane                                | Transient response of nonlinear first- and second-order systems                              | No undue difficulty

3

INTRODUCTION TO QUASI LINEARIZATION AND THE DESCRIBING FUNCTION TECHNIQUE
A fundamental problem in dynamic analysis, already mentioned in
Chapter 1, is the mathematical characterization of the cause and effect
relationships for general system elements. The direct way to specify such
information would be to determine the element responses resulting from a
large variety of inputs, and then to catalog these results as input-response
pairs. Except for simple elements acting on a small number of inputs, this
direct procedure is extremely unwieldy. In order to achieve a simpler
specification of the system element, the basic question “What are the
effects due to various causes?” might be changed to “What is the operation
of the system element in modifying a given cause into an effect?” To
answer the dynamic analysis question expressed in this latter operational
form, the analyst desires a “mathematical model” which responds to an
input in a fashion which closely approximates the response of the actual
physical element.
For systems comprising only elements which behave in a manner de-
scribable by linear constant-coefficient differential equations, the second
question is answered simply by specifying the system’s weighting or trans-
fer function. If the system were represented by the block diagram of
Figure 3-1, where the weighting function, w(t), is the time response of the
system when an impulse function is applied at zero time, then the relation-
ship between the response and the input would be given by the so-called
superposition or convolution integral:

y(t) = \int_{-\infty}^{\infty} w(\tau)\, x(t - \tau)\, d\tau = \int_{-\infty}^{\infty} w(t - \tau)\, x(\tau)\, d\tau    (3-1)
Since the response, y(t), to any input, x(t), can be found from Equation 3-1
if w(t) is known, no gigantic tabulation of input-response pairs is required
to describe the system’s operation. The analyst need know only the
weighting function, w(t).
Because algebraic operations are easier to use than integrals, such as in
Equation 3-1, it is usually preferable to work with transforms of the input,
Figure 3-1. Linear system representation. The input is characterized by:
1. x(t) as a time function; 2. X(s) as the Laplace transform of x(t);
3. X(jω) as the Fourier transform of x(t). The system is characterized by
w(t), the weighting function, or by W(s) or W(jω), the transfer function.
The output is characterized by: 1. y(t) as a time function; 2. Y(s) as the
Laplace transform of y(t); 3. Y(jω) as the Fourier transform of y(t).
response, and weighting functions rather than with the time functions. If
the Fourier transform is used, the transformation of Equation 3-1 becomes
Y(j\omega) = \int_{-\infty}^{\infty} y(t)\, e^{-j\omega t}\, dt = W(j\omega)\, X(j\omega)    (3-2)

where Y(jω), X(jω), and W(jω) are the Fourier transforms of y(t), x(t),
and w(t) respectively. When the conventional unilateral Laplace transform
is used, the convolution integral of Equation 3-1 is first modified to:

y(t) = \int_0^t w(\tau)\, x(t - \tau)\, d\tau, \quad t > 0    (3-3)

This form of the convolution integral can then be Laplace-transformed to
give

Y(s) = \int_0^{\infty} y(t)\, e^{-st}\, dt = W(s)\, X(s)    (3-4)

where Y(s), X(s), and W(s) are the Laplace transforms of y(t), x(t), and
w(t). The Fourier transfer function W(jω) is essentially the same thing
as the transfer function W(s), with s replaced by jω.
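The equivalence between the time domain convolution of Equation 3-3 and the transfer function description can be checked numerically. The sketch below uses a first-order element W(s) = K/(s + K), whose weighting function is w(t) = K e^{-Kt}; convolving it with a unit step should reproduce the known step response 1 - e^{-Kt}. The value of K and the integration step are arbitrary choices.

```python
import math

# Numerical check of the convolution integral, Equation 3-3:
# y(t) = integral from 0 to t of w(tau) x(t - tau) d tau.
K = 2.0
dt = 1e-4

def w(t):
    # Weighting function of W(s) = K/(s + K).
    return K * math.exp(-K * t)

def x(t):
    # Unit step input.
    return 1.0 if t >= 0.0 else 0.0

def convolve(t):
    n = int(t / dt)
    return sum(w(k * dt) * x(t - k * dt) for k in range(n)) * dt

t = 1.5
print(convolve(t), 1.0 - math.exp(-K * t))   # the two should agree closely
```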
The introduction of the weighting function, or its transform the transfer
function, allows a linear system to be described as an operational entity
which is independent of both the input and the response. In a nonlinear
system, on the other hand, some of the system “parameters” depend upon
the values of the dependent variables which define the system’s response.
In this case, the system cannot be characterized as a separate entity, the
convolution integral is not valid, and the behavior of the system is a function
of the particular inputs and initial conditions. In a general sense, this situa-
tion puts the analyst right back at the beginning—forcing him to define
system operation by means of input-response pairs. This contrast in the
behavior of linear and nonlinear systems has been emphasized in previous
chapters by both word and example. Nevertheless, some of the examples
shown previously, and a great many other nonlinear systems of interest,
have specific input-response pairs which appear to be similar to input-
response pairs for linear systems. This similarity leads to the notion that
the performance of some nonlinear elements, for certain specific inputs,
could be divided into two parts: (1) the response of a linear element
driven by the particular input, and (2) an additional quantity called the
remnant. From this general idea, there is evolved the concept of quasi
linearization, which emphasizes the similarities, rather than the differences,
between linear and nonlinear systems. A quasi-linear system is one in
which the relationships between pertinent measures of system input and
output signals are linear in spite of the existence of nonlinear elements.
A quasi-linear system is an exact representation of the nonlinear system for
specific inputs.
A particular quasi-linear system is found from the actual nonlinear
input-system combination by replacing the nonlinear elements with
“equivalent” linear elements characterized mathematically by describing
functions and remnants. Each equivalent linear element is derived from
consideration of the response of the corresponding nonlinear element to a
specific input. The new linear system has the same response to the input in
question as the original nonlinear system. The describing function concept
becomes most useful when it can be generalized for a whole category or
class of inputs, when it can be shown that the describing function of a
given nonlinear element is unique, and when it can be shown or at least
believed that the effect of the remnant on the input to the nonlinear element is
negligible. Under these circumstances it is possible to develop a practical
catalog of nonlinear element, input-response pairs in terms of describing
functions, and the describing functions act as linear operators. The per-
formance of the true or an approximate quasi-linear system may then be
discovered by means of the powerful methods of linear analysis.
The replacement of a nonlinear element by a quasi-linear equivalent
tailored to a specific input is not restricted to any particular type of input.
Indeed, in principle at least, the partition of an output response into two
components, one linearly connected with the input and the other a remnant,
can be accomplished for almost any combination of nonlinear elements
and inputs. In control systems, however, interest is centered upon the
system as a whole, and not on its constituent elements. Consequently, the
fact that a quasi-linear equivalent can be found for a nonlinear element is
of little value unless this knowledge can be converted into a precise, or
approximate, quasi-linear system model. The difficulties encountered in
this process, of course, stem from the presence of feedback loops within
the system. The actual input signal to a particular nonlinear element is
required in order to construct the quasi-linear representation for the ele-
ment. Because of the feedbacks, the input to the nonlinear element can
only be found by solving for the system response. This was the problem in
the first place, so little would appear to be gained by knowing the quasi-
linear description of a given nonlinear element excited by a particular
input signal.
In a few cases, however, the signals existing within the system at the
inputs to the nonlinearities are exactly the ones for which quasi-linear
element representations are easily found. In these circumstances the quasi-
linear representation for the nonlinear elements can be substituted for the
actual elements to obtain an exact linear representation of the nonlinear
input-system combination. In many other cases the signals at the inputs to
the nonlinearities are quite similar to the ones for which describing func-
tions and remnants are known or can be found. Then the quasi-linear
representations of the nonlinear elements are substituted for the actual
nonlinearities in order to obtain a close first approximation to the quasi-
linear system.
Corresponding to the three main categories of test input functions which
are employed in linear analysis, there are three main types of describing
functions: (1) transient (actually step) input describing functions, (2)
periodic (usually sinusoidal) input describing functions, and (3) stationary
random input describing functions. The transient input describing functions
are of more conceptual than practical interest. They cannot be developed in
as general and as satisfactory a form as the others. Sinusoidal input
describing functions have been quite thoroughly explored for many, but
by no means all, interesting nonlinearities. They have a surpassing import-
ance in the analysis of nonlinear control systems because they are par-
ticularly amenable to the determination of stability. It has been repeatedly
shown that their use produces good results with a modicum of effort.
Furthermore, the use of sinusoidal input describing functions permits the
extension to nonlinear control systems of the well-known harmonic or
frequency response method of designing equalizing or compensating net-
works. This is a tremendous and almost unique advantage. Finally, the
stationary random input describing functions can be developed in a fairly
general way if the input has a Gaussian amplitude distribution and the
effect of the nonlinear element on the performance of the system is not too
large. The real significance of random input describing function analysis
where these conditions are not met is very little understood. When these
conditions are met, however, and the describing functions have been
formulated analytically, or measured experimentally, the Gaussian, ran-
dom input describing function may be employed to predict the response and
accuracy of the system under dynamic conditions.
Figure 3-2. Block diagram of a simple feedback system containing a limiter.
The error ε(t) = x₁(t) − y(t) drives the unit limiter, whose output x₂(t)
feeds an integrator K/s to produce the output y(t).
In order to illustrate these remarks on quasi linearization and the use of
describing functions it is helpful to consider a very simple example.* A
closed loop system which comprises a unit, or normalized, limiter and an
integrator in the forward path is shown in Figure 3-2.
Figure 3-3. The limiter with its actual input, ε(t) = n − Kt, and output. (Chen, op. cit.)
If the input, x₁(t), is a step function of magnitude n, applied at time t = 0,
where n is greater than unity, then the limiter output immediately following
t = 0 will be unity, and the output, y(t), will be equal to Kt until a time, t,
equal to (n − 1)/K. Therefore between the times given by 0 < t < (n − 1)/K,
the actual input to the nonlinear element is ε(t) = x₁(t) − y(t) = n − Kt.
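This transient can be verified by direct simulation of the loop of Figure 3-2 (a sketch; the values n = 3, K = 2, and the step size are arbitrary illustrative choices). While the limiter is saturated, y(t) = Kt; after t₁ = (n − 1)/K the loop behaves linearly and y(t) = n − e^{-K(t − t₁)}.

```python
import math

# Simulation of a unit limiter followed by an integrator K/s in a unity
# feedback loop, driven by a step of magnitude n > 1.
n, K = 3.0, 2.0
dt = 1e-5

def limit(v):
    return max(-1.0, min(1.0, v))

def simulate(t_end):
    y = 0.0
    for _ in range(int(t_end / dt)):
        eps = n - y                 # error signal epsilon(t) = x1(t) - y(t)
        y += K * limit(eps) * dt    # integrator K/s
    return y

t1 = (n - 1.0) / K
print(simulate(0.5 * t1), K * 0.5 * t1)             # ramp phase: y(t) = K t
print(simulate(t1 + 1.0), n - math.exp(-K * 1.0))   # linear phase after t1
```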
This can be considered to be the proper specific input signal to use in
the development of a describing function relating the input and output of
the nonlinearity. The specific input and actual output are shown in Figure
3-3. Now the limiter can be replaced by an equivalent linear operator, the
describing function, and an additional quantity, the remnant. This is shown
in Figure 3-4, where the linearized transfer characteristic of the limiter (its
* K. Chen, “Quasi-Linearization Techniques for Transient Study of Nonlinear Feed-
back Control Systems,” Trans. AIEE, Pt. Il, vol. 75 (1956), pp. 354-363.82 INTRODUCTION TO QUASI LINEARIZATION
Figure 3-4. Equivalent limiter consisting of describing function, 1, and remnant, r(t).
(Chen, op. cit.)
describing function) is its gain in the unlimited region, unity; and the
remnant signal, r(t), accounts for the discrepancy when the limiter operates
in the saturated region. The remnant signal, r(t), can be put into a form
which may be derived directly from the original input. The linear operator
which produces r(t) from the input, x₁(t), will be called G_r(s). It is developed
as a transfer function as follows:
r(t) = (n - 1) - Kt, \quad 0 \le t \le \frac{n - 1}{K}

and r(t) = 0 thereafter. Now

x_1(t) = n, \quad t > 0

and

G_r(s) = \frac{\mathcal{L}[r(t)]}{\mathcal{L}[x_1(t)]}
       = \frac{\dfrac{n-1}{s} - \dfrac{K}{s^2}\left(1 - e^{-(n-1)s/K}\right)}{n/s}
       = \frac{n-1}{n} - \frac{K}{ns}\left[1 - e^{-(n-1)s/K}\right]
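The closed form for G_r(s) can be checked by evaluating the Laplace integrals numerically. The sketch below assumes the reconstructed expression G_r(s) = (n − 1)/n − (K/ns)[1 − e^{-(n−1)s/K}], with arbitrary illustrative values n = 3 and K = 2.

```python
import math

n, K = 3.0, 2.0
t1 = (n - 1.0) / K          # end of the saturated interval

def laplace_r(s, n_pts=100_000):
    # L[r](s) by midpoint integration of r(t) = (n - 1) - K t over [0, t1],
    # the interval on which the remnant is nonzero.
    dt = t1 / n_pts
    total = 0.0
    for k in range(n_pts):
        t = (k + 0.5) * dt
        total += ((n - 1.0) - K * t) * math.exp(-s * t)
    return total * dt

def G_r(s):
    # Closed form of the remnant transfer function.
    return (n - 1.0) / n - (K / (n * s)) * (1.0 - math.exp(-(n - 1.0) * s / K))

# L[x1](s) = n/s for the step x1(t) = n, so G_r(s) = L[r](s) / (n/s).
for s in (0.5, 1.0, 4.0):
    print(s, laplace_r(s) / (n / s), G_r(s))
```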
When the nonlinear element (limiter) in the system of Figure 3-2 is replaced
by its describing function and remnant, the quasi-linear limiter represen-
tation of Figure 3-5 is the result. Then the system block diagram of
Figure 3-6 may be rearranged according to the rules of block diagram
algebra so as to appear in the form shown in Figure 3-7.
Figure 3-5. Equivalent limiter with remnant derived from a linear operation on the input.
(Chen, op. cit.)
It is now seen that the nonlinear system of Figure 3-2 may be represented
by the quasi-linear systems of Figures 3-6 and 3-7. Either one of these
completely linear representations is an exact representation for the whole
class of step function inputs to the system.

Figure 3-6. The quasi-linear system. (Chen, op. cit.)

Figure 3-7. Equivalent block diagram of the quasi-linear system, with overall
transfer function

\frac{Y(s)}{X_1(s)} = \frac{K}{s + K}\left[1 - G_r(s)\right]

(Chen, op. cit.)

It may be noticed that the
effects of the nonlinearity, in this case, are represented by inserting a linear,
albeit transcendental, operator into the system outside the control loop. It
may further be observed that the characteristics of this linear transfer func-
tion depend on both the input, n, and the characteristics of the system, in this
case K. In essence, the effect of nonlinearity has been represented by
a closed loop linear system operating into a linear transfer characteristic
which depends upon the nonlinearity, the input, and system parameters.
This kind of representation will always be possible with systems having
single, simple, piecewise linear elements within the loop. Sometimes this
kind of representation may be applied when more than one nonlinear
element is present, although the computational complexity in applying the
method increases rapidly with the number of nonlinear elements. In
connection with transient input describing functions and remnants, the
representation is not unique for a given nonlinearity. Everything depends
on the position of the nonlinearity within the loop. If the positions of the
limiter and integrator were interchanged in this example the appropriate
quasi-linear system would be quite different.
The basically difficult problem in the transient input describing function
method, however, is involved in finding the transfer function, G,(s), which
relates the output of the linearized closed loop system to the actual approxi-
mate output, that is, to find the mathematical model of a linear element
which gives an effect “equivalent” to the effect of the nonlinearity. This
may become extremely onerous. While the simple analysis given above is
exact, in more elaborate situations it is thoroughly impractical to account
for more than gross effects, such as dominant modes; and when the non-
linearity is a complex one, such as hysteresis or backlash, even though
it can still be represented by piecewise linear characteristics, the task of
finding a justifiable transient input quasi linearization is much more severe.
It is for these reasons that the use of the transient input describing function
is confined here merely to the illustration of the important concept of quasi
linearization.
The same concept of quasi linearization turns out to be much more
powerful when it is employed with periodic input or stationary random
input describing functions. This is because unique describing functions
and remnants, developed for given nonlinearities considered as isolated
elements, are employed directly to construct first approximation quasi-
linear system models. This may seem strange by comparison with the
transient input describing function technique where the performance of the
whole system had to be taken into account, but in a very real sense it is only
in such circumstances that the concept of quasi linearization has practical
consequences for the design of feedback control systems.
The isolated element sinusoidal input describing function is derived from
consideration of the harmonic response of a nonlinearity to a sinusoidal
input at various frequencies and amplitudes. In a constant-coefficient
linear system, or element, excited by a sinusoid, a portion of the output
will, of course, be a sinusoid of the same frequency, although the amplitude
and phase angle of the sinusoidal response may differ from the amplitude
and phase angle of the input. Now suppose that a sine wave is applied to
the input of a nonlinear element having a single input and a single output.
The output will very likely be a nonsinusoidal periodic wave with the
same period as the input wave. If the output waveform is analyzed in
terms of its Fourier components, the fundamental component will bear a
relationship to the input sine wave which can be described in amplitude
ratio and phase angle terms. A sinusoidal input describing function is defined
as the complex ratio of the fundamental component of the output to the input.
(This is the same thing as is done in connection with the definition of a
Fourier transfer function for a linear system.) The remnant, in this case,
consists of all the higher harmonics, and the output is then the sum of the
describing function times the input, plus the remnant. Both the describing
function and the remnant depend upon the input amplitude in such a
fashion that the output of the quasi-linear element is identical to the output
of the actual isolated nonlinear element.
As a specific example of a quasi-linear replacement for a nonlinear
element subjected to a sinusoidal input, consider again the limiter of the
system of Figure 3-2 whose performance with a sine wave input was also
discussed previously in connection with Figure 1-13. If the gain of the
linear portion of the limiter characteristic is taken to be unity, the response
to a sinusoidal input, x(t) = A sin ωt, similarly to the result of Equation
1-28, will be:
x_2(t) = b_1 \sin \omega t + \sum_{\substack{n=3 \\ n\ \mathrm{odd}}}^{\infty} b_n \sin n\omega t    (3-7)

where, for A > a (the coefficients b_n vanish for n even),

b_1 = \frac{2A}{\pi}\left[\sin^{-1}\left(\frac{a}{A}\right) + \left(\frac{a}{A}\right)\sqrt{1 - \left(\frac{a}{A}\right)^2}\,\right]

b_n = \frac{4A}{\pi(n^2 - 1)}\left[\sqrt{1 - \left(\frac{a}{A}\right)^2}\,\sin n\beta - \frac{a}{A}\,\frac{\cos n\beta}{n}\right], \quad n\ \mathrm{odd},\ n \ge 3

\beta = \sin^{-1}\left(\frac{a}{A}\right)
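The coefficients of Equation 3-7 can be checked by numerical harmonic analysis of the limiter output (a sketch; the values A = 2 and a = 1 are arbitrary illustrative choices).

```python
import math

# Fourier coefficients of a unity-slope limiter with saturation level a,
# driven by A sin(theta), A > a.
A, a = 2.0, 1.0
beta = math.asin(a / A)

def b_numeric(nh, n_pts=100_000):
    # b_n = (1/pi) integral over one period of y(theta) sin(n theta) d theta.
    total = 0.0
    for k in range(n_pts):
        th = 2.0 * math.pi * k / n_pts
        y = max(-a, min(a, A * math.sin(th)))
        total += y * math.sin(nh * th)
    return total * 2.0 / n_pts

b1_closed = (2.0 * A / math.pi) * (beta + (a / A) * math.cos(beta))
b3_closed = (4.0 * A / (math.pi * 8.0)) * (
    math.cos(beta) * math.sin(3.0 * beta)
    - math.sin(beta) * math.cos(3.0 * beta) / 3.0)

print(b_numeric(1), b1_closed)
print(b_numeric(3), b3_closed)
print(b_numeric(2))   # even harmonics vanish for this odd characteristic
```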
The nonlinear element and its quasi-linear equivalent are shown in Figure
3-8. In this example the “phase angle” of the sinusoidal input describing
function is zero, and the describing function is a pure gain which varies
with input amplitude alone. This will always be true when the output is
dependent only upon the instantaneous value of the input. (This type of
nonlinearity is defined as a simple nonlinearity.) In general, of course, a
R_{xy}(\tau) = \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} \bar{\alpha}_n \beta_m e^{j\omega_m \tau} \left\{ \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} e^{j(\omega_m - \omega_n)t}\, dt \right\}    (4-13)

The value of the integral term in braces in Equation 4-13 can be evaluated
by considering two cases, ω_m ≠ ω_n and ω_m = ω_n. For the first of these,
ω_m ≠ ω_n, the term in braces is:

\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} e^{j(\omega_m - \omega_n)t}\, dt = \lim_{T \to \infty} \frac{1}{2T} \left[ \frac{e^{j(\omega_m - \omega_n)t}}{j(\omega_m - \omega_n)} \right]_{-T}^{T} = 0    (4-14)

When ω_m = ω_n, the same term becomes simply

\lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} e^{j(\omega_m - \omega_n)t}\, dt = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} dt = 1    (4-15)

Consequently, the cross-correlation function of Equation 4-13 becomes:

R_{xy}(\tau) = \sum_{n=-\infty}^{\infty} \bar{\alpha}_n \beta_n e^{j\omega_n \tau}    (4-16)
It will be noted that Equation 4-16 shows only those frequencies which
exist in both x(t) and y(t), and that the amplitude of each term in the series
depends upon both the amplitude and phase of the individual frequency
components in x(t) and y(t).
The autocorrelation for x(t) is obtained by inserting the expression of
Equation 4-11 into the defining relation, Equation 4-6:
R_{xx}(\tau) = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} \left[ \sum_{n=-\infty}^{\infty} \bar{\alpha}_n e^{-j\omega_n t} \right] \left[ \sum_{m=-\infty}^{\infty} \alpha_m e^{j\omega_m (t + \tau)} \right] dt

= \sum_{n=-\infty}^{\infty} \sum_{m=-\infty}^{\infty} \bar{\alpha}_n \alpha_m e^{j\omega_m \tau} \left\{ \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} e^{j(\omega_m - \omega_n)t}\, dt \right\}    (4-17)

The term in braces can be treated in the same fashion as in Equations 4-14
and 4-15, so the autocorrelation function for the periodic time function x(t)
becomes:

R_{xx}(\tau) = \sum_{n=-\infty}^{\infty} |\alpha_n|^2\, e^{j\omega_n \tau}    (4-18)
Examination of Equation 4-18 reveals that all of the frequency com-
ponents in x(t) appear in the autocorrelation function. The coefficients,
|α_n|², for each frequency component in the autocorrelation function are
real, and have the same nature as a power. In this way the autocorrelation
indicates the "power" at each frequency in the time function x(t).
Now, in order to specialize the discussion to the sinusoidal input describ-
ing function for an open loop nonlinear element, consider the case when
x(t) = A cos ωt, and y(t) is the output. The output of the nonlinear ele-
ment will then be given by a Fourier series, similar to Equation 4-9 or
4-10:

y(t) = \sum_{n=0}^{\infty} c_n \cos(n\omega t - \psi_n) = \sum_{n=-\infty}^{\infty} \check{c}_n e^{jn\omega t}    (4-19)

where

\check{c}_n = \frac{c_n}{2}(\cos \psi_n - j \sin \psi_n), \qquad \check{c}_{-n} = \frac{c_n}{2}(\cos \psi_n + j \sin \psi_n)
Assuming that the mean value of the output, y(t), is zero, that is,
c_0 = \check{c}_0 = 0, the cross-correlation function between input and output will be:

R_{xy}(\tau) = \bar{\alpha}_1 \check{c}_1 e^{j\omega \tau} + \bar{\alpha}_{-1} \check{c}_{-1} e^{-j\omega \tau} = \frac{A c_1}{4} \left[ e^{j(\omega \tau - \psi_1)} + e^{-j(\omega \tau - \psi_1)} \right]

= \frac{A c_1}{2} \cos(\omega \tau - \psi_1)    (4-20)
Only the fundamental of the output, characterized by č₁, or c₁ and ψ₁, appears
in the expression for the cross correlation, R_xy(τ). It alone is linearly corre-
lated with the input. All the higher harmonics in the output are uncor-
related with the input, since the cross correlations between them and the
input are zero. In essence, the multiplication and time-averaging processes
involved in the cross correlation have "averaged" out the effects of those
harmonics in the output which do not appear in the input signal. Linear
coherence, or linear correlation, implies that a fixed phase angle exists
between two signals at a given frequency.
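This averaging-out of the uncorrelated harmonics can be demonstrated numerically: cross-correlating a cosine input with a limiter's distorted output reproduces the (A c₁/2) cos(ωτ − ψ₁) of Equation 4-20, with ψ₁ = 0 since the limiter introduces no phase shift. The amplitudes and frequency below are arbitrary illustrative values.

```python
import math

A, a, w = 2.0, 1.0, 1.0

def x(t):
    return A * math.cos(w * t)

def y(t):
    # Limiter output: all the harmonic distortion lives here.
    return max(-a, min(a, x(t)))

def cross_corr(tau, n_pts=100_000):
    # Time average of x(t) y(t + tau) over one period.
    T = 2.0 * math.pi / w
    return sum(x(k * T / n_pts) * y(k * T / n_pts + tau)
               for k in range(n_pts)) / n_pts

# Fundamental amplitude c1 of the limiter output (closed form).
beta = math.asin(a / A)
c1 = (2.0 * A / math.pi) * (beta + (a / A) * math.cos(beta))

for tau in (0.0, 0.7, 1.9):
    print(cross_corr(tau), (A * c1 / 2.0) * math.cos(w * tau))
```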
If the element under consideration were linear, with a transfer function
W(s), the output of the element for a sinusoidal input x(t) = A cos ωt
would be:

y(t) = A\, |W(j\omega)| \cos[\omega t + \angle W(j\omega)]    (4-21)

The cross-correlation function in these circumstances is:

R_{xy}(\tau) = \frac{A^2}{2}\, |W(j\omega)| \cos[\omega \tau + \angle W(j\omega)]    (4-22)

Comparison of Equations 4-20 and 4-22 shows the similarity between the
sinusoidal describing function, which would appear in operator form as
(c_1/A)e^{-j\psi_1}, and the transfer function of a linear system.
The autocorrelation function, R_xx(τ), when x(t) = A cos ωt, will become:

R_{xx}(\tau) = \frac{A^2}{2} \cos \omega \tau    (4-23)

The cross-correlation function for this case can then, by analogy to
Equation 4-21, be interpreted as the result of the describing function operat-
ing upon the input autocorrelation function. In general, for inputs that
are not necessarily single sinusoids, this same interpretation takes a form
which is analogous to the convolution integral, given previously by
Equation 3-1. Thus, if w(t) is the weighting function of a constant-coeffi-
cient linear element,

R_{xy}(\tau) = \int_0^{\infty} w(u)\, R_{xx}(\tau - u)\, du    (4-24)
All of the concepts relating the describing function to notions of linear
correlation can also be depicted in frequency terms by using Fourier trans-
forms of the autocorrelation and cross-correlation functions. For this pur-
pose, so-called power- and cross-spectral densities can be defined as twice
the Fourier transforms of the autocorrelation and cross-correlation func-
tions, respectively. Thus, the power-spectral density is

\Phi_{xx}(\omega) = 2 \int_{-\infty}^{\infty} R_{xx}(\tau)\, e^{-j\omega \tau}\, d\tau    (4-25)

which, since R_xx(τ) is an even function, becomes:

\Phi_{xx}(\omega) = 4 \int_0^{\infty} R_{xx}(\tau) \cos \omega \tau\, d\tau    (4-26)

Similarly, the cross-spectral density is:

\Phi_{xy}(j\omega) = 2 \int_{-\infty}^{\infty} R_{xy}(\tau)\, e^{-j\omega \tau}\, d\tau    (4-27)
As an example, consider again that the input and output of a nonlinear
element are periodic, and are characterized mathematically by the Fourier
series of Equations 4-11 and 4-12. Then, using the resulting autocorre-
lation function for the input signal given by Equation 4-18, the power-
spectral density, Φ_xx(ω), of the input will be:

\Phi_{xx}(\omega) = 2 \int_{-\infty}^{\infty} \left[ \sum_{n=-\infty}^{\infty} |\alpha_n|^2\, e^{j\omega_n \tau} \right] e^{-j\omega \tau}\, d\tau
= 2 \sum_{n=-\infty}^{\infty} |\alpha_n|^2 \int_{-\infty}^{\infty} e^{-j(\omega - \omega_n)\tau}\, d\tau    (4-28)
The evaluation of the integral in Equation 4-28 requires the definition of a
special function, the delta or impulse function, δ(v). This function (or
"measure," to be precise) has the properties:
\delta(v - v_0) = \begin{cases} 0, & v \ne v_0 \\ \infty, & v = v_0 \end{cases}    (4-29)

and

\int_{-\infty}^{\infty} \delta(v - v_0)\, dv = 1    (4-30)

\int_{v_a}^{v_b} f(v)\, \delta(v - v_0)\, dv = \begin{cases} 0, & v_0 > v_b > v_a \ \text{or}\ v_b > v_a > v_0 \\ f(v_0), & v_b > v_0 > v_a \\ \tfrac{1}{2} f(v_0), & v_0 = v_a \ \text{or}\ v_0 = v_b \end{cases}    (4-31)

From Equation 4-31, the Fourier transform of δ(v − v₀) will be:

\int_{-\infty}^{\infty} \delta(v - v_0)\, e^{-j\omega v}\, dv = e^{-j\omega v_0}    (4-32)

The inverse Fourier transform of e^{-j\omega v_0} is:

\frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-j\omega v_0}\, e^{j\omega v}\, d\omega = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{j\omega(v - v_0)}\, d\omega = \delta(v - v_0)    (4-33)
The integral left unevaluated in the expression for the power-spectral
density, Equation 4-28, can now be expressed as:

\int_{-\infty}^{\infty} e^{-j(\omega - \omega_n)\tau}\, d\tau = 2\pi\, \delta(\omega - \omega_n)    (4-34)

The power-spectral density for the periodic input signal then becomes:

\Phi_{xx}(\omega) = 4\pi \sum_{n=-\infty}^{\infty} |\alpha_n|^2\, \delta(\omega - \omega_n)    (4-35)
The nature of the power-spectral density for periodic functions is seen to be
akin to that of a comb, with tooth areas proportional to the power at each
frequency, and with teeth knocked out in the regions where no signal power
is present. While it is not theoretically precise, it is common practice to
make the height of each “tooth,” or “spectral line,” proportional to the
value, |2,|, of the “power.” An illustrative example of the power-spectral
density, using a square wave as the periodic time function, is presented in
Figure 4-1.
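The spectral lines of Equation 4-35 can be verified numerically for the square wave of Figure 4-1. A square wave of amplitude $D$ has complex Fourier coefficients with $|a_n| = 2D/n\pi$ at odd $n$ and $a_n = 0$ at even $n$; the sketch below (an illustration, not from the text) computes a few coefficients directly.

```python
import numpy as np

# Fourier coefficients a_n = (1/T) * integral of x(t) e^{-j n w0 t} dt
# over one period, for a square wave of amplitude D.  Odd harmonics give
# spectral "teeth" with |a_n| = 2D/(n*pi); even harmonics give none.
D, T, N = 1.0, 2 * np.pi, 200000
t = np.arange(N) * (T / N)
x = D * np.sign(np.sin(2 * np.pi * t / T))

def a(n):
    return np.sum(x * np.exp(-2j * np.pi * n * t / T)) / N

a1, a2, a3 = a(1), a(2), a(3)
print(abs(a1), 2 * D / np.pi)        # fundamental tooth
print(abs(a2))                       # even harmonic: no tooth
print(abs(a3), 2 * D / (3 * np.pi))  # third-harmonic tooth
```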
In a similar manner, the cross-spectral density between the nonlinear
element's periodic input and output, $x(t)$ and $y(t)$, can be found by Fourier
transforming the cross-correlation function of Equation 4-16. Accordingly,
\[
\Phi_{xy}(j\omega) = 2\int_{-\infty}^{\infty} \sum_{n=-\infty}^{\infty} a_n^{*} b_n\, e^{jn\omega_0\tau}\, e^{-j\omega\tau}\,d\tau
= 2\sum_{n=-\infty}^{\infty} a_n^{*} b_n \int_{-\infty}^{\infty} e^{-j(\omega - n\omega_0)\tau}\,d\tau
= 4\pi \sum_{n=-\infty}^{\infty} a_n^{*} b_n\,\delta(\omega - n\omega_0) \tag{4-36}
\]
In Equation 4-36, the spectral line character of the cross-spectral density
for periodic signals is indicated by the delta functions. Also, the cross
spectra are seen to preserve phase characteristics in the same way as the
cross-correlation function. As an illustrative example Figure 4-2 presents
the cross-spectral density function obtained when $x(t)$ is a square wave and
y(t) is a sawtooth wave.
Power-spectral and cross-spectral densities for periodic functions may
alternatively be defined directly in terms of Fourier transforms of the sig-
nals involved. Thus,
\[
\Phi_{xx}(\omega) = \left\{\lim_{T\to\infty} \frac{4\pi}{T^2}\,[X(j\omega)X(-j\omega)]\right\}_{\omega=n\omega_0} \delta(\omega - n\omega_0) \tag{4-37}
\]
\[
\Phi_{xy}(j\omega) = \left\{\lim_{T\to\infty} \frac{4\pi}{T^2}\,[X(-j\omega)Y(j\omega)]\right\}_{\omega=n\omega_0} \delta(\omega - n\omega_0) \tag{4-38}
\]
where $X(j\omega)$ and $Y(j\omega)$ are Fourier transforms defined in the following
special way:
\[
X(j\omega) = \int_{-T/2}^{T/2} x(t)\,e^{-j\omega t}\,dt, \qquad x(t) = 0 \text{ except for } -\tfrac{T}{2} < t < \tfrac{T}{2}
\]
Figure 4-1. Time history and power-spectral density for a square wave. (a) Time history.
(b) Power-spectral density.
Specializing the above general periodic signal results to the case where
the input signal to an element is $x(t) = A\cos\omega_0 t$ and the output is
\[
y(t) = \sum_{n=1}^{\infty} C_n \cos(n\omega_0 t - \phi_n) = \sum_{n=-\infty}^{\infty} c_n\, e^{jn\omega_0 t}
\]
the power- and cross-spectral densities of Equations 4-35 and 4-36
become
\[
\Phi_{xx}(\omega) = \pi A^2\,[\delta(\omega + \omega_0) + \delta(\omega - \omega_0)]
\]
\[
\Phi_{xy}(j\omega) = \pi A C_1\,[e^{j\phi_1}\,\delta(\omega + \omega_0) + e^{-j\phi_1}\,\delta(\omega - \omega_0)] \tag{4-39}
\]
Figure 4-2. The cross-spectral density between a square and a sawtooth wave. (a) Square
wave. (b) Sawtooth wave. (c) Cross-spectral density.
It will be noted that the cross-spectral density has a real and an imaginary
part, while the power-spectral density is totally real. The spectra exist only
at the frequencies $\omega = \pm\omega_0$. This fact, in connection with the cross-spectral
density, again reflects the concept of linear coherence, that is, only that
portion of the output which is linearly correlated with the input (at the
same frequency) will emerge in the cross-spectral density.
The most significant connection between the various correlation notions
and the sinusoidal input describing function is now revealed by taking the
ratio of the cross- and power-spectral densities for real frequencies:
\[
\frac{\Phi_{xy}(j\omega)}{\Phi_{xx}(\omega)} = \frac{\pi A C_1\, e^{-j\phi_1}\,\delta(\omega - \omega_0)}{\pi A^2\,\delta(\omega - \omega_0)} = \frac{C_1}{A}\, e^{-j\phi_1} \tag{4-40}
\]
This most unusual "ratio," which is presumed to "exist" only when the
delta functions are not zero, is seen to be the sinusoidal input describing
function. This relationship is much more general than is noted here. For
instance, it is equally valid at each frequency in the more general situation
of periodic inputs and outputs. In a linear system it essentially provides
the transform equivalent of Equation 4-24; that is, if $W(j\omega)$ is the Fourier
transform of the weighting function (the Fourier transfer function), then
\[
\Phi_{xy}(j\omega) = W(j\omega)\,\Phi_{xx}(\omega) \tag{4-41}
\]
While the operation shown in Equation 4-40 is dubious mathematically,
that of Equation 4-41 is quite legitimate. Besides providing the most
general basis for the sinusoidal input describing function, cross-corre-
lation or cross-spectral concepts can also be used for the experimental
measurement of describing functions. This possibility is discussed further
in Chapter 6.
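The spectral-line ratio of Equation 4-40 can be demonstrated on a concrete nonlinearity (the cubic and the numbers below are choices for this sketch, not from the text). Driving $y = x^3$ with $A\cos\omega_0 t$ gives, from $\cos^3\theta = \tfrac{1}{4}(3\cos\theta + \cos 3\theta)$, a fundamental output amplitude $C_1 = 3A^3/4$ with zero phase, so the describing function should come out as the real number $3A^2/4$.

```python
import numpy as np

# Describing function of y = x^3 extracted from the fundamental
# spectral line, in the spirit of Eq. 4-40.  Expected: 3*A^2/4, phase 0.
A, w0 = 2.0, 1.0
T, N = 2 * np.pi, 200000
t = np.arange(N) * (T / N)
x = A * np.cos(w0 * t)
y = x**3

# c1 = (1/T) * integral over one period of y(t) e^{-j w0 t} dt
c1 = np.sum(y * np.exp(-1j * w0 * t)) / N
describing_fn = 2 * c1 / A   # C1 e^{-j phi1} / A, since |c1| = C1/2
print(describing_fn, 3 * A**2 / 4)
```

The third harmonic in the output does not appear in the ratio, which is the linear-coherence property noted above.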
4.2 SINUSOIDAL DESCRIBING FUNCTIONS OF
SIMPLE NONLINEARITIES
Sinusoidal input describing functions of simple nonlinearities are not
frequency-sensitive. It is, therefore, only necessary to analyze the input-
output relationship of the nonlinear element at a single frequency. In
general, the outputs may be represented by a Fourier series written in terms
of the magnitude of the nonlinearity and the amplitude of the input sine
wave.
Figure 4-3 shows the input-output relationship for several simple
nonlinearities of common interest, together with an illustration of the output
wave form for one input amplitude.
When the output-input relationship is given graphically, the output
waveform can be obtained very simply by projecting the amplitudes of anSIMPLE NONLINEARITIES 105
assumed input sine wave at successive instants of time up to the nonlinear
characteristic and across to the derived output wave. Alternatively, since
the projection of an input wave on a line with a slope which is the ratio of
output to input scale factors is the same whether it is made from below or
from the left, the construction can be made so as to maintain the sense of
the transfer characteristic. The latter construction is illustrated in Figure
4-3.
Once the shape of the output waveform is known, any one of several
analytical, numerical, or graphical methods can be applied to evaluate the
coefficients of the representative Fourier series.* For symmetrical simple
nonlinearities, subjected to sine form inputs, the series for the output need
contain only odd harmonic sine terms, since all the cosine and the even
harmonic sine terms will be zero. Consequently, the Fourier series for
the output of such a nonlinear element, for an input defined as $A\sin\omega t$, is
\[
y(t) = b_1\sin\omega t + b_3\sin 3\omega t + b_5\sin 5\omega t + \cdots \tag{4-42}
\]
and the coefficients are given by
\[
b_{2n-1} = \frac{4}{T}\int_{0}^{T/2} y(t)\sin\,(2n-1)\omega t\,dt \tag{4-43}
\]
Of course, the coefficients of the Fourier series of the output wave must be
evaluated for several amplitudes of the input sine wave. A negative Fourier
coefficient indicates a "phase angle" of 180 degrees with respect to the
fundamental component.
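Equation 4-43 can be exercised numerically; the ideal relay $y = D\,\mathrm{sgn}(A\sin\omega t)$ is chosen here purely as an illustration, and the equivalent full-period form $(2/T)\int_0^T$ is used, which agrees with Equation 4-43 for odd-symmetric waveforms. The odd coefficients should be $b_{2n-1} = 4D/(2n-1)\pi$, independent of $A$, so the fundamental amplitude ratio is $b_1/A = 4D/\pi A$.

```python
import numpy as np

# Odd-harmonic Fourier coefficients of an ideal relay's output,
# computed over a full period (equivalent to Eq. 4-43 for this
# odd-symmetric waveform).  Expected: b_k = 4D/(k*pi) for odd k.
D, A, w = 1.0, 0.5, 1.0
T, N = 2 * np.pi, 200000
t = np.arange(N) * (T / N)
y = D * np.sign(A * np.sin(w * t))

def b(k):
    return 2.0 * np.sum(y * np.sin(k * w * t)) / N

b1, b3, b5 = b(1), b(3), b(5)
print(b1, 4 * D / np.pi)          # fundamental
print(b3, 4 * D / (3 * np.pi))    # third harmonic
print(b1 / A)                     # amplitude ratio b1/A = 4D/(pi*A)
```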
In Figures 4-4 through 4-9† there are presented the "amplitude ratios"
$b_1/A$ of the fundamental, and $b_3/b_1$ and $b_5/b_1$ of the third and fifth harmon-
ics, as well as the "phase angles," for the nonlinearly distorted waveforms
of Figure 4-3. The amplitude ratio and phase angle of the fundamental
represent the sinusoidal describing function of the nonlinearity in question.
The amplitude ratio is scaled in decibels so as to be compatible for later use
with Bode diagrams. Amplitudes of the third and fifth harmonics of the
output, relative to the fundamental amplitude, $b_1$, are also given in the
figures. These remnant data are necessary in order to measure the accuracy
of the quasi-linearization technique in a particular analysis. In a first
approximation analysis of a closed loop quasi-linear system the effect of
the harmonics when propagated around the control loop must be small
* Fr. A. Willers, Practical Analysis, Dover Publications, New York, 1948.
Reference Data for Radio Engineers, 4th ed., Federal Telephone and Radio Corpora-
tion, New York, 1956.
† Figures 4-5, 4-6, 4-7, and 4-18 are taken from Methods of Analysis and Synthesis of
Piloted Aircraft Flight Control Systems, BuAer Report AE-61-41, Northrop Aircraft, Inc.,
U.S. Navy Bureau of Aeronautics, Washington, D.C., 1952. The original computations
were performed by S. J. Press.