Differential Equation Solutions with MATLAB®
Author
Prof. Dingyü Xue
School of Information Science and Engineering
Northeastern University
Wenhua Road 3rd Street
110819 Shenyang
China
[email protected]
MATLAB and Simulink are registered trademarks of The MathWorks, Inc. See www.mathworks.com/
trademarks for a list of additional trademarks. The MathWorks Publisher Logo identifies books that
contain MATLAB and Simulink content. Used with permission. The MathWorks does not warrant the
accuracy of the text or exercises in this book. This book’s use or discussion of MATLAB and Simulink
software or related products does not constitute endorsement or sponsorship by The MathWorks of
a particular use of the MATLAB and Simulink software or related products. For MATLAB® and
Simulink® product information, or information on other related products, please contact:
ISBN 978-3-11-067524-5
e-ISBN (PDF) 978-3-11-067525-2
e-ISBN (EPUB) 978-3-11-067531-3
© 2020 Tsinghua University Press Limited and Walter de Gruyter GmbH, Berlin/Boston
Cover image: Dingyü Xue
Typesetting: VTeX UAB, Lithuania
Printing and binding: CPI books GmbH, Leck
www.degruyter.com
Preface
Scientific computing is inevitably encountered in coursework, scientific research, and engineering practice by every student and researcher in science and engineering. For students and researchers in disciplines other than pure mathematics, it is usually not wise to study all the low-level details of the related mathematical problems, nor is it simple to solve complicated problems by hand. Tackling scientific problems with the most advanced computer tools is an effective way to work efficiently, accurately, and creatively, and it is especially well suited to the needs of those in science and engineering.
The author has made some effort towards this goal by addressing the solution methods for various branches of mathematics in a single book. Such a book, entitled "MATLAB based solutions to advanced applied mathematics", was first published in 2004 by Tsinghua University Press. Several new editions were published afterwards: the second edition in English by CRC Press in 2015, and the fourth edition in Chinese in 2018. Based on the latest Chinese edition, a brand new MOOC project was released in 2018 and received significant attention. The number of registered students was around 14 000 in the first round of the MOOC course, and reached tens of thousands in later rounds. The textbook has been cited tens of thousands of times in journal papers, books, and degree theses.
The author has over 30 years of extensive experience in using MATLAB in scientific research and education. A significant amount of material and first-hand knowledge has been accumulated, which cannot be covered in a single book. A series of such works, entitled "Professor Xue Dingyü's Lecture Hall", is scheduled with Tsinghua University Press, and the English editions are included in the DG STEM series with De Gruyter. These books are intended to provide systematic, extensive, and deep explorations of scientific computing skills with the use of MATLAB and related tools. The author wants to express his sincere gratitude to his supervisor, Professor Derek Atherton of Sussex University, who first brought him into the paradise of MATLAB.
The MATLAB series is not a simple revision of the existing books. With decades of experience and accumulated material, the idea of "revisiting" is adopted in authoring the series, in contrast to other mathematics and MATLAB-rich books. The viewpoint of an engineering professor is adopted, and the focus is on solving various applied mathematical problems with tools. Many innovative skills and general-purpose solvers are provided for solving problems with MATLAB that are not possible with other existing solvers, so as to better illustrate the applications of computer tools in every branch of mathematics. The series also helps readers broaden their viewpoints in scientific computing, and even find
1 An introduction to differential equations
Equations are equalities containing unknown variables. Equations are often classified
into algebraic and differential equations. In Volumes III and IV in the series, algebraic
equations and their solutions were fully addressed. Algebraic equations are used to
describe static relationships among the unknowns, while differential equations are
used to describe dynamic relationships. Differential equations are the mathematical
foundations of dynamic systems.
In Volume II, foundations and computations of functions and calculus were fully
covered. Ordinary differential equations are used to describe relationships among un-
knowns, their derivatives and independent variables such as time t. Besides, relation-
ships among the present and past values of the unknowns are also involved. Generally
speaking, the description and solutions of differential equations are much more com-
plicated than those of their algebraic counterparts.
In Section 1.1, examples are studied for the modeling of electric, mechanical and
social systems. A method is described of how to establish differential equation models
from phenomena. In Section 1.2, a brief history of differential equations is presented.
In Section 1.3, an outline of and a brief introduction to the materials in this book are presented.
1.1 Introduction to differential equation modeling
In electric circuit theory, it is known from Ohm's law that the relationship between the
current and voltage of a resistor is static, u(t) = Ri(t). That is, the voltage u(t) can be
computed from the current value of the current i(t). Therefore for a resistive network,
the system can be modeled by algebraic equations. The current in any branch and the
voltage across any component can be modeled by static equations.
If two other types of electric components – capacitors and inductors – are in-
volved, differential equations must be employed to describe the relationship between
the instantaneous voltage and current.
Example 1.1. Consider the simple circuit in Figure 1.1. Establish the mathematical
model of the circuit.
Figure 1.1: a series circuit with input voltage u(t), inductor L, resistor R, capacitor C (voltage uc(t)), and loop current i(t).
Solutions. In the circuit, the signals of common interest are the supplied voltage u(t)
and the voltage uc (t) across the capacitor. The former is referred to as the input signal,
while the latter, the output signal. The current i(t) in the loop is the intermediate
signal. It is known from the fundamental circuit theory that the voltage across the
inductor L and the loop current i(t) satisfy the following differential equation:
u_l(t) = L\frac{di(t)}{dt}.  (1.1.3)
Therefore, from the well-known Kirchhoff’s law, the equation between the input
and output signals can be written as
u(t) = Ri(t) + L\frac{di(t)}{dt} + u_c(t).  (1.1.4)
Besides, it is known from circuit theory that the voltage uc (t) across the capacitor
and the loop current i(t) satisfy the following differential equation:
i(t) = C\frac{du_c(t)}{dt}.  (1.1.5)
Substituting (1.1.5) into (1.1.4), it is found that the entire differential equation for
the circuit can be written as
\begin{cases} \dfrac{du_c(t)}{dt} = \dfrac{1}{C}\,i(t),\\[1mm] \dfrac{di(t)}{dt} = \dfrac{1}{L}\,u(t) - \dfrac{R}{L}\,i(t) - \dfrac{1}{L}\,u_c(t). \end{cases}  (1.1.7)
It can be seen that differential equation models can be set up easily for simple cir-
cuits, with the essential knowledge of calculus. If there are more loops, high-order dif-
ferential equations can be established. If there are nonlinear elements such as diodes
and transistors, nonlinear differential equations for the circuits may be created. It
can be seen that differential equations are the ubiquitous modeling tool in describing
dynamic behaviors in electric circuits.
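As a quick numerical illustration of (1.1.7), the model can be simulated directly in MATLAB. The following sketch assumes illustrative component values R = 1 Ω, L = 0.5 H, C = 0.1 F and a unit step input u(t) = 1; these numbers are not taken from the text.
>> R=1; L=0.5; C=0.1; u=@(t)1;              % assumed illustrative values
   f=@(t,x)[x(2)/C; (u(t)-R*x(2)-x(1))/L];  % x(1)=uc(t), x(2)=i(t), as in (1.1.7)
   [t,x]=ode45(f,[0,10],[0;0]);             % zero initial conditions
   plot(t,x(:,1))                           % capacitor voltage uc(t)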
The commonly used modeling technique introduced here is for lumped parameter circuits. In real circuits, if there exist leakage resistances or other such factors, ordinary differential equations are not adequate for modeling the circuit. Partial differential equations must be introduced to model distributed parameter circuits.
The Newton’s second law in mechanics, which is usually learnt in high school, is
mathematically represented as
where the mass M is a constant. For better describe the dynamic relationship among
the variables, the external force and acceleration are both represented as functions of
time t. They can be regarded as the instantaneous external force F(t) and acceleration
a(t). It seems that they satisfy an algebraic relation.
In practice, the acceleration of the mass cannot be measured directly. The measurable variable is the instantaneous position x̂(t) of the mass. The instantaneous displacement can be defined as x(t) = x̂(t) − x̂(t_0). What kind of relationship is there
between the displacement and the external force? Since acceleration is the second-order derivative of the displacement, it can be seen from Newton's second law that the following differential equation can be established: F(t) = M\,\dfrac{d^2x(t)}{dt^2}.
A simple modeling example will be given next to show how to write differential
equation models for a mechanical system.
Example 1.2. Consider a simple mechanical system shown in Figure 1.2. If an external
force F(t) is applied, what is the mathematical model between the displacement x(t)
and the external force F(t)?
Solutions. It is known from Newton’s second law that the composite force is required,
where we assume that the external force is in the positive direction. It is known from
Hooke’s law that the resisting force in the spring is proportional to the displacement,
denoted as −Kx(t); the other resisting force is from the damper, which is proportional
to the speed of the mass, denoted as −R dx(t)/dt. Therefore it is not hard to establish
the following differential equation for the mechanical system from Newton’s second
law:
F(t) - Kx(t) - R\frac{dx(t)}{dt} = M\frac{d^2x(t)}{dt^2}.  (1.1.10)
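Equation (1.1.10) can likewise be solved numerically once parameter values are fixed. The following sketch uses assumed values M = 1, K = 2, R = 0.5 and a unit step force F(t) = 1 purely for illustration.
>> M=1; K=2; R=0.5; F=@(t)1;                 % assumed illustrative values
   f=@(t,x)[x(2); (F(t)-K*x(1)-R*x(2))/M];   % x(1)=x(t), x(2)=dx(t)/dt
   [t,x]=ode45(f,[0,20],[0;0]);
   plot(t,x(:,1))                            % displacement x(t)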
The analytical solution to the differential equation is x(t) = x_0e^{r(t-t_0)}, which implies that the population increases as an exponential function. If t is large, the population may tend to infinity. This is not feasible, since when the total population reaches a certain size, factors such as space and resources restrict its growth so that it cannot increase infinitely.
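The exponential solution can be verified symbolically. A minimal sketch with dsolve() is given below, assuming the Malthus model dx(t)/dt = r x(t) (whose statement falls in a gap of this excerpt); the condition is imposed at t = 0, and shifting t by t0 recovers x(t) = x0 e^{r(t−t0)}.
>> syms t r x0 x(t)
   x=dsolve(diff(x)==r*x, x(0)==x0)   % returns x0*exp(r*t)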
A Belgian mathematician, Pierre-François Verhulst, constructed a function r(t), later known as the logistic function (also known as a sigmoid, or S-shaped, function), to replace the constant in the Malthus model, and the population model becomes the following time-varying differential equation:
Example 1.4. Richardson’s arms race model[10] was proposed by a British mathemati-
cian and physicist Lewis Fry Richardson (1881–1953).[59] There are three basic premises
in Richardson’s arms race model. We assume that there are two nations X and Y. If
one nation finds that the other is spending a huge amount of wealth on purchas-
ing weapons, it may spend more money on purchasing weapons as well. Of course,
the money spent may create economic and social burden. More money spent on pur-
chasing weapons may inhibit future increases in spending; furthermore, there are
grievances and ambitions relating to both cultures and national leaders that either en-
courage or discourage changes in military spending. Therefore Richardson proposed
the arms race model using differential equations:
where α, β, δ, and γ are positive real numbers. The model was first proposed by an
American mathematician Alfred James Lotka (1880–1949), and a similar model was
proposed later by an Italian mathematician Vito Volterra (1860–1940). This model is
commonly used in fields such as economics.
\begin{cases} S'(t) = -\beta I(t)S(t), & S(0)=S_0,\\ I'(t) = \beta I(t)S(t) - \gamma I(t), & I(0)=I_0,\\ R'(t) = \gamma I(t), & R(0)=R_0. \end{cases}
Therefore the model is also called the SIR model, where S(t), I(t) and R(t) are
respectively the numbers of susceptible (meaning, healthy persons), infective, and
removed (removed by immunity, isolation or death) persons, with S(t) + I(t) + R(t) =
N(t), and N(t) is the total population. For convenience of handling the problem, the
numbers are normalized such that S(t) + I(t) + R(t) = 1. In the model parameters, β
is the contact rate, which is the number of contacts by an infective per unit time, also
known as infection rate; γ is the removal rate, indicating that the number of removed
is proportional to the number of infected.
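A small numerical experiment makes the behavior of the SIR model visible. The sketch below assumes illustrative parameters β = 0.5, γ = 0.1 and normalized initial values S(0) = 0.99, I(0) = 0.01, R(0) = 0; none of these numbers come from the text.
>> beta=0.5; gam=0.1;                                             % assumed values
   f=@(t,x)[-beta*x(2)*x(1); beta*x(2)*x(1)-gam*x(2); gam*x(2)];  % x=[S;I;R]
   [t,x]=ode45(f,[0,100],[0.99;0.01;0]);
   plot(t,x)                                                      % S(t), I(t), R(t)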
If the latent period is τ days, then the SIR model can be rewritten as the following
delay differential–algebraic equations:[36]
\begin{cases} S'(t) = -\beta I(t)S(t),\\ I'(t) = \beta I(t)S(t) - \beta I(t-\tau)S(t-\tau),\\ S(t) + I(t) + R(t) = 1. \end{cases}
It can be seen that even if one is carrying out research in the social sciences, if differential equations are mastered as a research tool, a new viewpoint may still be introduced and quantitative results may be obtained.
With the power series method, y(x) is expressed as a power series whose coefficients are unknown. It can be substituted into the equation and, by letting the coefficients of equal powers of x on both sides of the equation be the same, algebraic equations can be solved; the solution he found was
y(x) = x - xx + \frac{1}{3}x^3 - \frac{1}{6}x^4 + \frac{1}{30}x^5 - \frac{1}{45}x^6,
where xx was the notation used in Newton's works, and it should be x^2.
y'(x) = -\frac{y(x)}{\sqrt{a^2 - y^2(x)}}.  (1.2.2)
He introduced the integral method to find the solution of the differential equation,
namely
x = \int_{y}^{a}\frac{\sqrt{a^2-y^2}}{y}\,dy = -\sqrt{a^2-y^2} - a\ln\frac{a-\sqrt{a^2-y^2}}{y}.
Leibniz also proposed the solution methods for differential equations with sepa-
rable variables, which can be used to solve a large category of differential equations.
In fact, before Newton and Leibniz invented calculus, an Italian physicist and astronomer, Galileo Galilei (1564–1642), studied some problems which would normally require differential equations. Since calculus had not yet been established at his time, he mostly used traditional geometric and algebraic tools.
A French mathematician Alexis Claude Clairaut (1713–1765) was interested in a
particular field of application problems. He was the first to study implicit differential
equations:[32]
In Chapter 3, numerical solution methods for first-order explicit differential equations are presented first. Validations of the numerical solutions to the differential equations are also made.
In Chapter 4, the task is to convert differential equations in different forms into
the directly solvable first-order explicit ones. Single high-order differential equations
and differential equation sets are considered. If the conversion to the standard form
is successful, the methods in Chapter 3 can be called directly for solving differential
equations.
In Chapter 5, some special forms of differential equations unsuitable for the previ-
ously discussed methods are explored, including stiff differential equations, implicit
differential equations, and differential–algebraic equations. Also, numerical solution
methods for switched differential equations and linear stochastic differential equa-
tions are discussed.
In Chapter 6, the general forms of delay differential equations are introduced, and
then simple delay differential equations, equations with variable delays, and neutral-
type delay differential equations are all studied.
In Chapter 7, properties and behaviors of differential equations are analyzed.
Properties such as stability and periodicity of differential equations are studied
systematically. Behaviors such as limit cycles, chaos, and bifurcations are studied.
The concepts such as equilibrium points and linearization are also presented.
In Chapter 8, solutions of fractional-order differential equations are provided.
Fundamental definitions and properties of fractional calculus are introduced first,
followed by the introduction of linear and nonlinear fractional-order differential
equations. In particular, high precision algorithms in numerical solution of fractional-
order differential equations are proposed.
In Chapter 9, a block diagram-based solution approach for various differential
equations is discussed, with the powerful tool Simulink. An introduction to Simulink
is presented. Then the Simulink modeling and solution methods are demonstrated
through examples of ordinary differential equations, differential–algebraic equa-
tions, switched differential equations, stochastic differential equations, and delay
differential equations. General purpose modeling and simulation methods for various
fractional-order differential equations are also discussed.
In Chapter 10, boundary value problems of ordinary differential equations are
presented. The basic ideas of shooting methods are proposed first. Then a powerful
MATLAB solver and its applications in boundary value problems are demonstrated
for various problems. Finally, optimization-based boundary value problem solvers are
proposed to tackle problems not solvable by the conventional methods.
Finally, in Chapter 11, numerical solutions to partial differential equations are
introduced. The algorithm and MATLAB implementation of diffusion equations are
discussed, and then with the use of MATLAB Partial Differential Equation Toolbox,
partial differential equations in several special forms are discussed.
The focus of this book is to show how to solve various differential equations. If analytical solutions can be found, they are adopted. In many cases where analytical solutions are not available,
numerical solutions should be found. The philosophy of this book is to instruct the
readers on how to utilize the powerful facilities provided in MATLAB to study differ-
ential equations in a reliable manner.
1.4 Exercises
1.1 In the circuit shown in Figure 1.1, if the resistor is replaced by a serially connected
resistor R and another capacitor C1 , write again the differential equation. If it is
replaced by R and C1 connected in parallel, how to rewrite the differential equa-
tion?
1.2 In the Malthus population model, if the world population at year t0 is 6 billion,
and it is known that the natural growth rate is 2 %, find the population after 10
years. What will be the population after 100 years?
1.3 Find the first 8 terms of the solution of Newton's equation in (1.2.1) with the power series method.
1.4 If you have already acquired some knowledge of differential equations, solve Clairaut's implicit differential equation in (1.2.3), given that the function f(c) = 5(c^3 - c)/2 is known.
2 Analytical solutions of ordinary differential equations
Differential equation modeling of physical phenomena was demonstrated in the previous chapter. If there exists a differential equation model for a certain dynamical sys-
tem, or for a physical phenomenon, the next step to carry out is to solve the differential
equation, so as to find the system response. Compared with the solutions of algebraic
equations, differential equations are much more complicated. Normally, more skills
and tactics in dedicated methods are needed when solving differential equations. The
aim of the book is to extensively use computer tools to explore solutions of various
differential equations.
In Section 2.1, major analytical methods are explored for the first-order differential
equations. The low-level methods are introduced when addressing various types of
first-order differential equations. The analytical solutions for some differential equa-
tions can be found in this way. In Section 2.2, analytical solutions of the second-order
linear differential equations are explored, and various skills are needed to apply the
low-level methods. Some commonly used special functions are introduced. In Sec-
tion 2.3, Laplace transform-based methods are discussed for finding analytical solu-
tions of differential equations. With Laplace transform tools, the linear differential
equations with constant coefficients can be mapped into algebraic equations, and di-
rect solution methods are explored when finding solutions of linear differential equa-
tions with nonzero initial values. In Section 2.4, Symbolic Math Toolbox is introduced
to find solutions directly of some ordinary differential equations. With such a tool,
complicated ordinary and time-varying differential equations can be solved directly. In
Section 2.5, solution methods are explored for linear differential equations, state space
equations, and Sylvester matrix equations. In Section 2.6, attempts are made to solve
directly certain particular nonlinear differential equations. It is worth mentioning that
only a few nonlinear differential equations have analytical solutions. For most nonlin-
ear differential equations, analytical solutions do not exist. Numerical solutions will
be discussed in later chapters.
Manual formulation methods are discussed in the first few sections in this chapter,
which may be useful background knowledge for the readers to understand the low-
level solution process of differential equations. If the readers are only interested in
how to solve differential equations with computers, the materials here can be skipped,
and one can start reading the materials from Section 2.4.
There are various forms of first-order differential equations. Therefore the solution of such differential equations becomes extremely complicated, since a tremendous number of skills and tactics should be mastered before one is able to solve them. For some differential equations, even an expert who has mastered a significant number of methods and tactics may still not be able to find the analytical solutions. In this section, some simple first-order differential equations are discussed, and solution patterns are explored.
\frac{dy(x)}{dx} = f(x).  (2.1.1)
It can also be simply denoted as y'(x) = f(x). The analytical solution of this differential equation is, in fact, the indefinite integral of f(x), namely y(x) = \int f(x)\,dx + C, with an arbitrary constant C.
The solution to such a differential equation is, in fact, the first-order indefinite
integral of the given function. If the indefinite integral has an analytical expression,
then there is an analytical solution for the differential equation. Otherwise, there is no
analytical solution for the differential equation.
Solutions. Comparing the mathematical form given here with that in the definition,
it is immediately found that the right-hand side is f (x). Therefore, direct integration
can be performed to find the primitive function which is the analytical solution of the
differential equation with the following statements:
>> syms x
f(x)=cos(x)/(x^2+4*x+3)-sin(x)*(2*x+4)/(x^2+4*x+3)^2;
Y=simplify(int(f)) % evaluate integral and simplify the result
and the analytical solution can be found as sin x/(x^2 + 4x + 3). In fact, this primitive function is not the only solution: an arbitrary constant C should be added to the result. Several different values of C are selected, and the family of differential equation solutions is shown in Figure 2.1. It can be seen that the solutions of the differential equation are simple translations of one particular solution.
>> fplot([Y,Y+1,Y+2,Y+3],[0,10])
The aforementioned differential equations directly solvable by integration are not gen-
uine differential equations. Normally, a differential equation may also contain the
function y(x) itself. Such differential equations cannot be solved directly by merely
using direct indefinite integral evaluation methods. In this section, simple first-order
homogeneous linear differential equations are introduced, as well as their general
solution methods.
Definition 2.2. The mathematical form of the first-order homogeneous linear differ-
ential equation is
\frac{dy(x)}{dx} + f(x)y(x) = 0.  (2.1.3)
With simple conversion, it is not hard to find the analytical solution of the differ-
ential equation through the following procedure:
\frac{dy(x)}{y(x)} = -f(x)\,dx.  (2.1.4)
It can then be found that y(x) = Ce^{-\int f(x)\,dx}, where C is an arbitrary constant.
Solutions. For this type of differential equation, with separable variables, the equation can be converted into the form y'(x)/y = -x. Computing the integrals of the left- and right-hand sides of the equation gives
\ln y(x) = -\frac{x^2}{2} + C_1 \quad\Rightarrow\quad y(x) = Ce^{-x^2/2}.
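The result can be double-checked with the symbolic solver. The sketch below assumes the equation in question is y'(x) = −x y(x), as implied by the separable form y'(x)/y = −x used above.
>> syms x y(x)
   y=dsolve(diff(y)==-x*y)   % returns C1*exp(-x^2/2)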
If a term x is added to the right-hand side function g(x), the original homogeneous
differential equation is converted into an inhomogeneous one. In this section, an idea
is explored when finding analytical solutions of the first-order inhomogeneous linear
differential equation.
Evaluating definite integrals of both sides of the equation over the integration
range [x0 , x], it is found that
It can be seen that the first key step in solving this equation is to find an auxiliary
function ϕ(x). The method of finding an auxiliary function depends upon the expres-
sion of function f (x). The following integral is needed in solving the problem:
Example 2.3. Find the analytical solution of the following first-order inhomogeneous
differential equation:[38]
\frac{dy(t)}{dt} + e^{\lambda t}y(t) = ke^{\lambda t}, \quad y(0) = y_0.
Solutions. It can be seen from the standard form in Definition 2.3 that f (t) = eλt , g(t) =
keλt , and x0 = 0. Therefore, the following MATLAB statements can be used to find
directly the analytical solution of the differential equation:
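The original listing is not reproduced in this excerpt; a minimal dsolve()-based sketch for the same equation could read as follows, with λ, k and y0 kept symbolic.
>> syms t lambda k y0 y(t)
   y=simplify(dsolve(diff(y)+exp(lambda*t)*y==k*exp(lambda*t), y(0)==y0))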
Theorem 2.1. The analytical solution for the nonlinear differential equation with sepa-
rable variables in Definition 2.4 is
\int_{y(x_0)}^{y(x)} \frac{d\tau}{g(\tau)} = \int_{x_0}^{x} f(t)\,dt.  (2.1.14)
If the expressions of the two integrals in (2.1.14) can be obtained, the analytical solution of the original equation can be found. In fact, most nonlinear differential equations have no analytical solution, and even those with separable variables may have no analytical solutions.
>> syms x t u y
g(y)=15+8*y+y^2; f(t)=-sym(1);
G=simplify(int(1/g(u),u,0,y)), F=int(f,0,t)
eq=G==F, y=solve(eq,y)
where the left-hand side of (2.1.14), i. e., G, is arctanh(4) − arctanh(y + 4), while on the
right-hand side, the solution is −t. Therefore an implicit equation can be constructed
as arctanh(4) − arctanh(y + 4) = −t. Solving the implicit equation, the solution to the
nonlinear differential equation can be found as y(t) = tanh(t + arctanh(4)) − 4.
It can be seen by computing the integral of G that, if the manual method is used,
1/g(y) can be expressed in a partial fraction form. The result obtained is then different
from the arctanh(⋅) function obtained here. If similar results are expected, the format
of G should be rewritten. For instance, use rewrite() function to modify its default
display format.
The implicit equation obtained is then ln(y + 3)/2 − ln(y + 5)/2 − (ln 3)/2 + (ln 5)/2 = −t. The analytical solution to the nonlinear differential equation is then written as y(t) = 6/(5e^{2t} − 3) − 3. The result obtained here is equivalent to that obtained earlier.
It can be seen that for an ordinary linear differential equation, the manual solution
method is complicated and tedious. If integrals are involved in the solution process,
the equations are usually not solvable. Special functions should be invented to de-
scribe the analytical solutions of the differential equations. The solutions of nonlinear
differential equations are even more complicated, and most nonlinear differential
equations do not have analytical solutions. In exploring analytical solutions of certain
types of differential equations, mathematicians invented various special functions.
Despite this, there is still a significant number of nonlinear differential equations
which do not have analytical solutions. Therefore, from Chapter 3 onward, we will concentrate on exploring differential equations with numerical methods using MATLAB.
a(x)\frac{d^2y(x)}{dx^2} + p(x)\frac{dy(x)}{dx} + q(x)y(x) = f(x),  (2.2.1)
where the coefficient functions a(x), p(x), q(x) and f (x) are treated as given.
With the variations of the coefficient functions, the solution methods for second-
order linear differential equations are also different. In [58], more than 1 000 different
coefficient combinations are discussed. Most of the cases are under the assumption
that f (x) = 0, that is, homogeneous equations are considered. For the linear differen-
tial equations with no analytical solutions, various special functions are invented by
mathematicians to describe certain “analytical solutions” of the equations.
For a better understanding of certain analytical solutions, it is necessary to intro-
duce first some commonly used special functions. They include gamma functions, hy-
pergeometric functions, various Bessel functions, Legendre functions, and Airy func-
tions. Some of the special functions are related to certain second-order linear differ-
ential equations. In this section, a simple introduction to these equations and special
functions is presented.
Definition 2.6. The gamma function is defined by the following infinite integral: \Gamma(z) = \int_0^{\infty}e^{-t}t^{z-1}\,dt.
Theorem 2.3. As a special case, for a nonnegative integer z, the factorial formula \Gamma(z+1) = z! can be derived directly from (2.2.3).
Example 2.5. Draw the gamma function curve in the interval x ∈ (−5, 5).
Solutions. The following statements can be used to draw directly the gamma function
curve, as shown in Figure 2.2.
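The original plotting statements are not shown in this excerpt; a sketch that produces an equivalent curve is
>> v=-5:0.002:5; plot(v,gamma(v))   % gamma() explodes at 0,-1,-2,...
   ylim([-15,15])                   % restrict the y axis, see below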
Since the values of the gamma function at z = 0, −1, −2, . . . explode to infinity, function
ylim() can be called to restrict the range in the y axis, such that the curve obtained is
more informative.
Solutions. The gamma function in (2.2.5) can be shown with the following MATLAB
statements:
\prod_{k=0}^{m-1}\Gamma\!\left(z + \frac{k}{m}\right) = (2\pi)^{(m-1)/2}\,m^{1/2-mz}\,\Gamma(mz).  (2.2.6)
Hypergeometric functions are commonly used special functions, and they comprise
also the foundation for some other special functions. In this section, definitions of
hypergeometric functions are presented, and then computations of hypergeometric
functions are addressed.
{}_pF_q(a_1,\dots,a_p;\,b_1,\dots,b_q;\,z) = \frac{\Gamma(b_1)\cdots\Gamma(b_q)}{\Gamma(a_1)\cdots\Gamma(a_p)}\sum_{k=0}^{\infty}\frac{\Gamma(a_1+k)\cdots\Gamma(a_p+k)}{\Gamma(b_1+k)\cdots\Gamma(b_q+k)}\,\frac{z^k}{k!}.  (2.2.7)
Example 2.7. Draw the curve of 2 F1 (1.5, −1.5; 1/2; (1 − cos x)/2), the Gaussian hyperge-
ometric function.
Solutions. The hypergeometric function 2 F1 (1.5, −1.5; 1/2; (1 − cos x)/2) can be simpli-
fied as cos 1.5x. If function hypergeom() is used, the curve for the hypergeometric
function can be drawn, as shown in Figure 2.3. It can be seen that the curve coincides
completely with that of cos 1.5x.
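A sketch reproducing the comparison (not the author's original listing) is given below; the overlay with cos 1.5x shows the coincidence of the two curves.
>> syms x
   f=hypergeom([1.5,-1.5],1/2,(1-cos(x))/2);
   fplot([f,cos(1.5*x)],[0,4*pi])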
Theorem 2.5. As a special case, the hypergeometric function 2 F1 (a, b; c; z) is the solution
of the following differential equation:
z(1-z)\frac{d^2y(z)}{dz^2} + [c-(a+b+1)z]\frac{dy(z)}{dz} - ab\,y(z) = 0,  (2.2.8)
where we deliberately denote the independent variable by z, indicating that it can also
be a complex variable.
Theorem 2.6. The general “analytical solution” of a νth-order Bessel differential equa-
tion can be written as
where C1 and C2 are arbitrary constants, Jν (x) is the νth order Bessel function of the first
kind, defined as
J_\nu(t) = \sum_{m=0}^{\infty}(-1)^m\frac{t^{\nu+2m}}{2^{\nu+2m}\,m!\,\Gamma(\nu+m+1)}.  (2.2.11)
In other words, since genuine analytical solutions to the original differential equations do not exist, relevant special functions were invented by mathematicians. It is the same as in the case where the indefinite integral of e^{-t^2} does not exist, for which mathematicians invented the special function erf(⋅) to study its analytical properties.
Theorem 2.7. If ν = n is a positive integer, the Bessel function of the first kind has the
following properties:
Theorem 2.8. The νth order Bessel function of the first kind is a special case of the
hypergeometric function, namely
J_\nu(x) = \frac{(x/2)^\nu}{\Gamma(\nu+1)}\,{}_0F_1\!\left(\nu+1;\,-\frac{x^2}{4}\right).  (2.2.15)
Definition 2.9. The νth order Bessel function of the second kind is defined as
Y_\nu(t) = \frac{J_\nu(t)\cos\nu\pi - J_{-\nu}(t)}{\sin\nu\pi}.  (2.2.16)
Bessel function of the second kind is also known as Neumann function, named
after a German mathematician Carl Gottfried Neumann (1832–1925).
H_\nu^{(1)}(x) = J_\nu(x) + \mathrm{j}Y_\nu(x), \qquad H_\nu^{(2)}(x) = J_\nu(x) - \mathrm{j}Y_\nu(x),  (2.2.17)
where k = 1 and k = 2 correspond to the two cases of Bessel functions of the third
kind.
Example 2.8. Draw Bessel functions of the first kind of orders ν = 0, 1, −1, 2.
Solutions. Several orders are selected like these, and the following statements can be
used to draw directly the Bessel function curves, as shown in Figure 2.4. It can be seen
that J1 (x) = −J−1 (x).
>> x=-5:0.1:5;
y1=besselj(0,x); y2=besselj(1,x); y3=besselj(-1,x);
y4=besselj(2,x); plot(x,y1,x,y2,x,y3,x,y4)
(1-z^2)\frac{d^2y(z)}{dz^2} - 2z\frac{dy(z)}{dz} + \left[\lambda(1+\lambda) - \frac{\mu^2}{1-z^2}\right]y(z) = 0,  (2.2.18)
where the complex quantities λ and μ are referred to, respectively, as the degree and order of the Legendre function.
Definition 2.12. The mathematical form of the Legendre function can also be depicted
by the hypergeometric function
P_\lambda^\mu(z) = \frac{1}{\Gamma(1-\mu)}\left(\frac{1+z}{1-z}\right)^{\mu/2}{}_2F_1\!\left(-\lambda,\lambda+1;\,1-\mu;\,\frac{1-z}{2}\right),  (2.2.19)
with the range of convergence being |1 − z| < 2, where μ = 0, 1, 2, . . . , λ.
Solutions. If μ = 3, the following MATLAB commands can be used to draw the four
different Legendre functions, as shown in Figure 2.5. Of course, the readers may con-
sider the following commands to draw different Legendre functions:
Airy functions are a class of special functions named after a British mathematician
and astronomer George Biddell Airy (1801–1892). These functions are defined as the
analytical solutions of Airy differential equations. In this section the general form of
Airy differential equations is presented first, followed by the computation and sketch-
ing of Airy functions.
\frac{d^2y(z)}{dz^2} - z\,y(z) = 0,  (2.2.20)
where the independent variable z can be a complex quantity.
Definition 2.14. Airy functions of the first and second kind are defined as
\mathrm{Ai}(z) = \frac{1}{\pi}\int_0^{\infty}\cos\!\left(\frac{t^3}{3}+zt\right)dt,  (2.2.21)
\mathrm{Bi}(z) = \frac{1}{\pi}\int_0^{\infty}\left[\exp\!\left(-\frac{t^3}{3}+zt\right)+\sin\!\left(\frac{t^3}{3}+zt\right)\right]dt.  (2.2.22)
Example 2.10. It is known that the Airy function is complex-valued. Draw the contours of the real part of the Airy function in the complex plane.
Solutions. The complex mesh grid matrix z = x + jy can be generated first, and the Airy function matrix can then be computed. In order to obtain informative contours, the real part can be clipped at thresholds of ±4; the contours obtained are shown in Figure 2.6.
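A sketch of the procedure just described follows; the grid range and step are assumptions, and the real part is simply clipped at ±4 before the contours are drawn.
>> [x,y]=meshgrid(-5:0.05:5); z=x+1i*y;     % complex mesh grid
   A=real(airy(z));
   A(A>4)=4; A(A<-4)=-4;                    % thresholds of +/-4
   contour(x,y,A,30)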
It was pointed out earlier that, with different forms of the coefficients, different
forms of second-order linear differential equations can be constructed, such that man-
ual solutions of the differential equations become extremely difficult. Second-order
nonlinear differential equations or third- and higher order linear or nonlinear differ-
ential equations are even more complicated to handle. The methods studied here are
only applicable to extremely simple differential equations.
Dedicated tools for analytically solving differential equations will be illustrated
later, such that second- or even higher order differential equations can be solved di-
rectly. If there are no analytical solutions, numerical solutions will be studied next for
differential equations.
Definition 2.15. Linear differential equations with constant coefficients are mathe-
matically described as
f(t) = \mathscr{L}^{-1}[F(s)] = \frac{1}{2\pi\mathrm{j}}\int_{\sigma-\mathrm{j}\infty}^{\sigma+\mathrm{j}\infty}F(s)e^{st}\,ds,  (2.3.3)
Theorem 2.9. The differentiation property of Laplace transform, being the most impor-
tant property when solving differential equations, reads:
\mathscr{L}\!\left[\frac{df(t)}{dt}\right] = sF(s) - f(0^+).  (2.3.4)
More generally, Laplace transform of the nth order derivative can be evaluated from
the following equation:
\mathscr{L}\!\left[\frac{d^nf(t)}{dt^n}\right] = s^nF(s) - s^{n-1}f(0^+) - s^{n-2}f'(0^+) - \cdots - f^{(n-1)}(0^+).  (2.3.5)
Assuming that the initial values of f (t) and its derivatives are all zero, (2.3.5) can be
simplified as
\mathscr{L}\!\left[\frac{d^nf(t)}{dt^n}\right] = s^nF(s).  (2.3.6)
With the property (2.3.6), it is known that for zero initial value problems,
L [dm y(t)/dt m ] = sm L [y(t)], the following polynomial equation can be derived
Theorem 2.10. Assuming that the characteristic roots s_i of the algebraic equation can be found, and that they are distinct, the general form of the analytical solution of the original differential equation can be constructed as y(t) = C_1e^{s_1t} + C_2e^{s_2t} + \cdots + C_ne^{s_nt} + \gamma(t),
where Ci are undetermined coefficients and γ(t) is a particular solution under u(t) signal
input.
Similarly, analytical solutions can also be formulated if some of the si are repeated
roots.
Theorem 2.11. If there is an m-tuple repeated root r_i, the corresponding terms in the general solution can be constructed as (C_1 + C_2t + \cdots + C_mt^{m-1})e^{r_it}.
It can be seen from the two theorems that, if the roots of the characteristic equa-
tion can be found, the general form of the analytical solution of a differential equation
can be manually composed.
It can be seen from the algebraic equation in (2.3.7) that, according to the well-
known Abel–Ruffini theorem, all the roots of the characteristic equations of degree 4
or lower can be found analytically, which implies that the low-order differential equa-
tions have analytical solutions. High-order differential equations may also have quasi-
analytical solutions, since high precision solutions to the polynomial equations can be
found such that quasi-analytical solutions of high order linear differential equations
with constant coefficients can be composed.
Example 2.11. Solve the following homogeneous linear differential equation with con-
stant coefficients:
Solutions. If the initial values of the output signal y(t) and its derivatives are all zero,
it can be found according to the Laplace transform property that
Example 2.12. Find the general solution of the following homogeneous differential
equation:
y^{(5)}(t) + 12y^{(4)}(t) + 57y'''(t) + 134y''(t) + 156y'(t) + 72y(t) = 0
under the assumption that the initial values of y(t) and its derivatives are all zero.
It is then found that the characteristic roots are s_1 = s_2 = s_3 = -2 and s_4 = s_5 = -3. There are repeated roots, and the general solution to the differential equation can be manually constructed as y(t) = (C_1 + C_2t + C_3t^2)e^{-2t} + (C_4 + C_5t)e^{-3t}.
Example 2.13. Consider again the differential equation in Example 2.12. If it is given as
the following inhomogeneous one, find the general solution of the differential equa-
tion:
y^{(5)}(t) + 12y^{(4)}(t) + 57y'''(t) + 134y''(t) + 156y'(t) + 72y(t) = e^{-t}\sin t.
Solutions. The following statements can be used to find the Laplace transform of the input and to compute the Laplace transform expression of the output signal, such that the analytical solution of the differential equation can be constructed. The input and output signals can be obtained as shown in Figure 2.7. In this particular example, since the output signal is too small, 20y(t) is drawn instead.
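The original listing is not reproduced here; with zero initial values, a sketch of the Laplace transform route is
>> syms t s
   u=exp(-t)*sin(t); U=laplace(u,t,s);
   Y=U/(s^5+12*s^4+57*s^3+134*s^2+156*s+72);   % characteristic polynomial
   y=simplify(ilaplace(Y,s,t));
   fplot([u,20*y],[0,10])                      % 20y(t) drawn, as in Figure 2.7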
Collecting the like terms manually, the analytical solution can further be simpli-
fied as
y(t) = \frac{1}{4}(3 - 2t + t^2)e^{-2t} - \frac{1}{25}(19 + 5t)e^{-3t} + \frac{1}{100}e^{-t}(\cos t - 7\sin t).
If the initial values of the differential equations are not zero, the expression in (2.3.6)
does not hold. Initial conditions should be taken into account. From (2.3.5), the dif-
ferential equation can be converted into an algebraic equation. With inverse Laplace
transform, the analytical solution of the original differential equation can be found.
Examples will be given next to show solution procedures.
Example 2.14. Assuming that the input signal is u(t) = e^{-5t}\cos(2t+1) + 5, and the initial values are y(0) = 3, y'(0) = 2, y''(0) = y'''(0) = y^{(4)}(0) = 0, find the analytical solution of the following differential equation:
y^{(4)}(t) + 10y'''(t) + 35y''(t) + 50y'(t) + 24y(t) = u(t).
Solutions. For simplicity, the following statements can be used to write directly the
Laplace transform on the left, where part of the results (eqn) represents the coefficients
of Y(s), the other part (eqn1) is the initial value related expression. In this case, the
expression Y(s) can be found, for which function ilaplace() can be called to find the
analytical solution of the differential equation, and the output can also be drawn as
in Figure 2.8.
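The original listing is not reproduced in this excerpt; a sketch following the described procedure is given below, where eqn collects the coefficient of Y(s) and eqn1 the initial-value related terms.
>> syms t s
   u=exp(-5*t)*cos(2*t+1)+5; U=laplace(u,t,s);
   eqn=s^4+10*s^3+35*s^2+50*s+24;                 % multiplies Y(s)
   eqn1=(s^3+10*s^2+35*s+50)*3+(s^2+10*s+35)*2;   % from y(0)=3, y'(0)=2
   Y=(U+eqn1)/eqn; y=simplify(ilaplace(Y,s,t));
   fplot(y,[0,10])                                % Figure 2.8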
Example 2.15. Solve the following high-order linear differential equation set with con-
stant coefficients:
\begin{cases} x''(t) - x(t) + y(t) + z(t) = 0,\\ x(t) + y''(t) - y(t) + z(t) = 0,\\ x(t) + y(t) + z''(t) - z(t) = 0, \end{cases}
with the initial values x(0) = 1, x'(0) = 0 and y(0) = y'(0) = z(0) = z'(0) = 0.
Solutions. Consider the initial values given. If x''(t) in the equations is replaced by s^2X(s) - sx(0) = s^2X(s) - s; x, y and z are replaced by X(s), Y(s) and Z(s), respectively; and y''(t) and z''(t) are replaced by s^2Y(s) and s^2Z(s), respectively, then the following algebraic equations can be formulated:
\begin{cases} s^2X(s) - s - X(s) + Y(s) + Z(s) = 0,\\ X(s) + s^2Y(s) - Y(s) + Z(s) = 0,\\ X(s) + Y(s) + s^2Z(s) - Z(s) = 0. \end{cases}
With the following statements, the algebraic equations can then be solved.
>> syms s X Y Z
eqns=[s^2*X-s-X+Y+Z==0, X+s^2*Y-Y+Z==0, X+Y+s^2*Z-Z==0];
sol=solve(eqns,[X,Y,Z]); sol.X, sol.Y, sol.Z
X(s) = \frac{s^3}{(s^2+1)(s^2-2)}, \qquad Y(s) = Z(s) = -\frac{s}{(s^2+1)(s^2-2)}.
x(t) = \frac{2}{3}\cosh\sqrt{2}\,t + \frac{1}{3}\cos t, \qquad y(t) = z(t) = \frac{1}{3}\cos t - \frac{1}{3}\cosh\sqrt{2}\,t,
from which the signals x(t) and y(t) can be drawn as shown in Figure 2.9.
In subsequent MATLAB versions, less support is provided for string expressions. The focus here is therefore on symbolic expression descriptions and solutions.
Example 2.16. Solve the differential equation in (1.2.1) with computer tools.
Solutions. With the powerful computer tools, the problem which could not be fully
studied by Newton, can be solved easily and in a simple and unified way
y_1(x) = Ce^{x(x+2)/2} - x + 3\sqrt{2\pi}\,\mathrm{erf}\!\left(\sqrt{2}\left(\frac{x}{2}+\frac{1}{2}\right)\right)e^{(x+1)^2/2} + 4,
where C is an arbitrary constant, and erf(⋅) is a special function. Note that in describing
the equation, the equality sign should be expressed as ==, otherwise there may be error
messages. In fact, the solution obtained by Newton is only a particular solution when
y(0) = 0. Taylor series expansion can also be written for the solution obtained above.
>> syms x y(x)
y2=dsolve(diff(y)==1-3*x+y+x^2+x*y, y(0)==0);
y2=expand(taylor(y2,'Order',10))
It can be seen that the last 6 terms are identical to the Newton’s result:
y_2(x) \approx -\frac{x^9}{9072} - \frac{13x^8}{5040} + \frac{x^7}{630} - \frac{x^6}{45} + \frac{x^5}{30} - \frac{x^4}{6} + \frac{x^3}{3} - x^2 + x.
Solutions. The following MATLAB commands can be used to try to solve the original
differential equation. Unfortunately, it is prompted that there is no analytical solution.
Why is there no analytical solution found, if Leibniz found one? According to Leib-
niz’s solution, y was used as the independent variable, and the explicit solution of
x was found. It is not possible to find the analytical solution in the form of y = f (x).
According to Leibniz’s method, if the fact that dy/dx = 1/(dx/dy) is used, the following
statements can be employed to solve the differential equation:
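The original statements are not included in the excerpt; a sketch of the variable-swapped solution, treating y as the independent variable so that dx/dy = −√(a²−y²)/y, is
>> syms a positive
   syms y x(y)
   x1=dsolve(diff(x,y)==-sqrt(a^2-y^2)/y)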
x_1(y) = C - a\ln\!\left(\sqrt{\frac{a^2}{y^2}-1} - a\sqrt{\frac{1}{y^2}}\right) - \sqrt{a^2-y^2}.
Example 2.18. Solve the first-order differential equation in Example 2.4 directly by
using the unified solver.
Solutions. If function dsolve() is called, the symbolic variable x and symbolic func-
tion y(x) should be declared first. A symbolic expression can be used to describe the
differential equation directly. Then the solver can be called to solve it. Note that when
describing the differential equation, since the right side is 0, ==0 can be omitted from
the symbolic expression. The following statements can be written to describe and
solve the differential equation:
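A sketch of such statements (not the author's original listing) is given below; the condition y(0) = 0 matches the lower integration limit used in Example 2.4, and ==0 is omitted since the right-hand side is zero.
>> syms x y(x)
   y=simplify(dsolve(diff(y)+y^2+8*y+15, y(0)==0))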
It is found that the analytical solution is y(x) = -(15e^{2x} - 15)/(5e^{2x} - 3). In fact, although the solution obtained here is apparently different from that obtained in Example 2.4, they are equivalent.
In the earlier versions, strings could be used to describe the differential equations
and initial values. For instance, the following statements could be used and identical
results were found:
In a string description, the short-hand notation D3y can be used to describe the third-
order derivative of y. In this book, string descriptions of differential equations are not
recommended.
Solutions. A positive symbolic variable λ should be declared first. The following com-
mands can be written to describe the differential equations and given initial values,
and then the solution can be found:
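The statement of this example is not reproduced in the excerpt. Judging from the solution below, a consistent reconstruction is the system y1'(t) = y2(t), y2'(t) = −λ y1(t) with y1(0) = a, y2(0) = b; this reconstruction is an assumption, and a sketch of the commands is
>> syms t a b
   syms lambda positive
   syms y1(t) y2(t)
   [y1,y2]=dsolve(diff(y1)==y2, diff(y2)==-lambda*y1, y1(0)==a, y2(0)==b)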
y_1(t) = a\cos\sqrt{\lambda}\,t + \frac{b}{\sqrt{\lambda}}\sin\sqrt{\lambda}\,t, \qquad y_2(t) = b\cos\sqrt{\lambda}\,t - a\sqrt{\lambda}\sin\sqrt{\lambda}\,t.
The function dsolve() discussed earlier can be applied directly in solving high-order
linear differential equations with constant coefficients, or any other form of compli-
cated differential equations. The differential equations and given conditions should be
expressed directly so that the function can be called to find the analytical solutions. In
the earlier versions, strings could be used to describe the differential equations, while
in a future version, the string description might not be supported.
To concisely describe the differential equation and given conditions, intermediate
variables can be defined to record the derivatives of y(t). The definition of the interme-
diate variables will be demonstrated through examples.
Example 2.20. Solve again the differential equation in Example 2.14 with the unified
solver and compare the results.
Solutions. Since function y(x) and the initial values of several derivatives are needed,
some intermediate variables should be defined to describe the derivatives of y(x). For
instance, variable d3y can be used to describe the intermediate function y (x). This
kind of notation is recommended such that the description of the differential equa-
tions become more concise and easy to understand. The following commands can be
used to directly solve the differential equation. The result obtained is exactly the same
as that obtained in Example 2.14. It can be seen that the statements used here are more
concise and easy to validate. The final plotting command yields the same curve as that
shown in Figure 2.8.
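The original listing is not reproduced here; a sketch of the commands described, with intermediate variables for the derivatives, could read
>> syms t y(t)
   u=exp(-5*t)*cos(2*t+1)+5;
   dy=diff(y); d2y=diff(y,2); d3y=diff(y,3);        % intermediate variables
   y=dsolve(diff(y,4)+10*d3y+35*d2y+50*dy+24*y==u,...
            y(0)==3, dy(0)==2, d2y(0)==0, d3y(0)==0);
   fplot(y,[0,10])                                  % same curve as Figure 2.8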
Example 2.21. Assuming that the input signal is u(t) = e−5t cos(2t + 1) + 5, find the
general solution of the following differential equation:
y^{(4)}(t) + 10y'''(t) + 35y''(t) + 50y'(t) + 24y(t) = 5u''(t) + 4u'(t) + 2u(t).
Solutions. Since no initial values of y(t) are involved in this example, intermediate
variables may be declared as in the case of Example 2.20, or they may be ignored.
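A sketch of the commands (the original listing is not shown in this excerpt) is
>> syms t y(t)
   u=exp(-5*t)*cos(2*t+1)+5;
   y=dsolve(diff(y,4)+10*diff(y,3)+35*diff(y,2)+50*diff(y)+24*y==...
            5*diff(u,t,2)+4*diff(u,t)+2*u)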
With the above statements, the general solution of the differential equation can be
found as
y(t) = \frac{5}{12} - \frac{343}{520}e^{-5t}\cos(2t+1) - \frac{547}{520}e^{-5t}\sin(2t+1) + C_1e^{-4t} + C_2e^{-3t} + C_3e^{-2t} + C_4e^{-t},
where Ci are arbitrary constants. If the initial or boundary values are known, they can
be used to automatically solve the algebraic equations so as to find the values of Ci .
The idea is the same as that discussed in calculus courses.
To validate the general solution found for the differential equation, it can be sub-
stituted back into the differential equation, and sometimes function simplify() can
be called to evaluate the error. If the error is zero, the solution is validated. For this
example, it can be seen that the error is zero, which means that the general solution satisfies the original differential equation, no matter what the values of Ci are.
>> err=diff(y,4)+10*diff(y,3)+35*diff(y,2)+50*diff(y)+24*y-...
(5*diff(u,t,2)+4*diff(u,t)+2*u) % solution validation
Example 2.22. In the differential equations studied so far, only real poles appear. The
Symbolic Math Toolbox applies to the cases where there are also complex poles. As-
sume that a differential equation is given by
y^{(5)}(t) + 5y^{(4)}(t) + 12y'''(t) + 16y''(t) + 12y'(t) + 4y(t) = u'(t) + 3u(t).
Assuming that the input signal is sinusoidal, u(t) = sin t, and y(0) = y'(0) = y''(0) = y'''(0) = y^{(4)}(0) = 0, find the analytical solution of the differential equation.
Solutions. With the following commands, the analytical solution of the original dif-
ferential equation can be found. Some intermediate variables should be defined first,
and the equation can be expressed and solved. The result is shown graphically in
Figure 2.10.
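The original listing is not reproduced here; a sketch with intermediate derivative variables and zero initial values is
>> syms t y(t)
   u=sin(t);
   dy=diff(y); d2y=diff(y,2); d3y=diff(y,3); d4y=diff(y,4);
   y=dsolve(diff(y,5)+5*d4y+12*d3y+16*d2y+12*dy+4*y==diff(u)+3*u,...
            y(0)==0, dy(0)==0, d2y(0)==0, d3y(0)==0, d4y(0)==0);
   fplot(y,[0,20])                        % Figure 2.10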
y(t) = e^{-t} - \frac{1}{5}\cos t - \frac{2}{5}\sin t - \frac{4}{5}e^{-t}\cos t + \frac{11}{10}e^{-t}\sin t - \frac{1}{2}te^{-t}\cos t,
that is,
y(t) = e^{-t} - \frac{1}{5}\cos t - \frac{2}{5}\sin t - \left(\frac{t}{2} + \frac{4}{5}\right)e^{-t}\cos t + \frac{11}{10}e^{-t}\sin t.
It can be seen from the curve that when t is relatively large, the curve is almost
sustained oscillation. This is because the first two terms are sine and cosine functions
which are sustained oscillating ones, while the other terms will vanish when t is large.
Example 2.23. Solve again the linear differential equation set in Example 2.15.
Solutions. The following commands can be used directly to solve the differential
equation set and the results are identical to those obtained in Example 2.15:
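The original commands are not reproduced here; a sketch, with the initial values x(0) = 1 and all other initial values zero as inferred from the Laplace substitutions in Example 2.15, is
>> syms t x(t) y(t) z(t)
   dx=diff(x); dy=diff(y); dz=diff(z);
   S=dsolve(diff(x,2)-x+y+z==0, x+diff(y,2)-y+z==0, x+y+diff(z,2)-z==0,...
            x(0)==1, dx(0)==0, y(0)==0, dy(0)==0, z(0)==0, dz(0)==0);
   simplify([S.x, S.y, S.z])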
Solutions. The linear differential equation can also be solved directly with dsolve()
function. The following MATLAB commands can be used to solve the differential equa-
tion:
The direct solution method was demonstrated for linear differential equations with
constant coefficients. If the coefficients are functions of the independent variable,
and the variable is regarded as time, the differential equation is known as a time-
varying one. Time-varying differential equations can also be explored with the solver
dsolve(). Several examples are given next to demonstrate direct solutions of time-
varying differential equations.
Solutions. The differential equation can be expressed directly with a symbolic expres-
sion and solved with the following statements:
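The original statements are not reproduced here; judging from the string form quoted further below, the equation is y''(x) + ay'(x) + (bx + c)y(x) = 0, and a sketch of the symbolic description is
>> syms a b c x y(x)
   y=dsolve(diff(y,x,2)+a*diff(y,x)+(b*x+c)*y==0)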
The analytical solution of the differential equation can be obtained as follows, where
Airy special functions are involved:
y(x) = \frac{C_1}{\sqrt{e^{ax}}}\,\mathrm{Ai}\!\left(-\frac{-a^2 + 4c + 4bx}{4b(1/b)^{1/3}}\right) + \frac{C_2}{\sqrt{e^{ax}}}\,\mathrm{Bi}\!\left(-\frac{-a^2 + 4c + 4bx}{4b(1/b)^{1/3}}\right).
Note that for this particular problem, the following string expression format is not
suitable:
>> y=dsolve('D2y+a*Dy+(b*x+c)*y')
This is because the notation D2y, by default, describes the second-order derivative
of y with respect to t, not x. Therefore, the above command yields the wrong result.
In order to solve it correctly, an ’x’ option should be appended to the function call.
Since in the future versions string descriptions might not be supported, this format is
not recommended in the book.
Example 2.26. Find the analytical solution to the time-varying differential equation
Solutions. For this particular time-varying differential equation, the solver dsolve()
can be used directly to solve the equation
y(x) = C_1\left(x + \frac{3}{2}\right) - 2C_2\sqrt{x + \frac{3}{2}} + C_3(2x + 6)\sqrt{x + \frac{3}{2}}.
When it is substituted back into the original equation, the error is zero.
Solutions. If the independent variable x is regarded as time, the coefficients are all
functions of time. Such differential equations can be regarded as time-varying dif-
ferential equations. For this particular problem, since no initial values are provided,
there is no need to define intermediate variables. The differential equation can be
expressed directly, then with function dsolve() the solution of the time-varying dif-
ferential equation can be found.
The analytical solution obtained is as follows. When it is substituted back to the orig-
inal equation, the error is zero. In fact, since Ci are free variables, 2C1 − C3 can be
combined into a new free constant C0 such that the result can further be simplified
into
y(x) = -\frac{1}{16x}\left(2C_1 - C_3 + 8C_3x + 32C_2x^2 + 8C_3x^2\ln x\right).
Linear time-varying differential equation sets can also be handled directly with the
solver dsolve() in the Symbolic Math Toolbox. It is fine if the analytical solutions can
be found, and it is also normal that the analytical solutions do not exist. In the follow-
ing chapters, numerical solutions of various differential equations will be explored.
The analytical solution is demonstrated through the following example.
Example 2.28. Find the analytical solutions for the following linear time-varying dif-
ferential equation:
Solutions. Before the solution process, the variables y and z should be declared as
functions of x. The following statements can be used to solve the linear differential
equation set:
Function dsolve() can also be used to solve boundary value problems directly. Exam-
ples will be shown next to demonstrate how to solve boundary value problems with
computers.
Example 2.29. Consider the differential equation in Example 2.20. Assume that the boundary values are known as y(0) = 1/2, y'(0) = 1, y(2π) = 0 and y'(2π) = 0. Solve the differential equation.
Solutions. Let us review the solution process in Section 2.3, where Laplace transform
was involved. In Laplace transforms, only initial values are needed. While in this ex-
ample, the values of y(x) at different time instances are expected. Therefore such a
problem cannot be handled with Laplace transform method. It is not necessary to
worry about these trivial things, if the solver dsolve() is used. Therefore the uni-
fied framework of differential equation solution procedures introduced here can be
adopted.
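The original listing is not shown in this excerpt; a sketch of the boundary value solution, with the equation and input of Example 2.20, is
>> syms t y(t)
   u=exp(-5*t)*cos(2*t+1)+5; dy=diff(y);
   y=dsolve(diff(y,4)+10*diff(y,3)+35*diff(y,2)+50*dy+24*y==u,...
            y(0)==1/2, dy(0)==1, y(2*pi)==0, dy(2*pi)==0);
   fplot(y,[0,2*pi])                      % Figure 2.11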
The analytical solution of the equation can be found, however, it is too complicated
to display the result here. The solution curve is shown in Figure 2.11.
In fact, function dsolve() can also be used to directly solve complicated initial value problems. If the conditions are mutually independent, and the
Example 2.30. Consider again the differential equation in Example 2.20, and assume that the given conditions are y(0) = 1/2, y'(π) = 0 and y''(2π) = y'''(2π) = 0. Solve again the differential equation.
Solutions. Let us review the Laplace transform method studied in Section 2.3. Since only initial values can be accepted in the Laplace transform method, while in this example the values of y(x) at different time instances are needed, the Laplace transform cannot be used for solving this kind of problem, while the solver dsolve() can be called directly.
The corresponding solution curve is shown in Figure 2.12. Even though there are many
zero equations, the analytical solution is still far too complicated to display.
tion of the linear state space equations is presented, followed by the direct solution
method. Also matrix differential equations such as Sylvester equations will also be
studied in this section.
Linear state space models are the most widely used linear differential equations in
many fields. They provide an alternative description format for linear differential
equations with constant coefficients. For instance, in control science, linear state
space models are used to describe linear dynamical systems. In this section, the
mathematical form of a state space model is presented, and analytical solution
methods are illustrated.
Since Ae^{-At} = e^{-At}A, the left side happens to be the derivative formula
\frac{d}{dt}\left[e^{-At}x(t)\right] = e^{-At}Bu(t).  (2.5.4)
Computing integrals of both sides with respect to t, the following formula can be
derived directly:
From this formula, the analytical solutions to linear state space equations can be
established.
Theorem 2.12. The analytical solution of the state space equation in Definition 2.18 can
be written as
x(t) = e^{A(t-t_0)}x(t_0) + \int_{t_0}^{t}e^{A(t-\tau)}Bu(\tau)\,d\tau,  (2.5.6)
where eA(t−t0 ) is also known as the state transition matrix, denoted as Φ(t, t0 ) = eA(t−t0 ) ,
which can be used to describe state transition from time t0 to time t.
For the given input signal u(t), matrix integral can be computed to find the ana-
lytical solutions of the differential equations. It can be seen from the formula that the
exponential function of matrices is involved and integrals are evaluated. These prob-
lems can easily be handled with the facilities provided in the Symbolic Math Toolbox.
With the toolbox, (2.5.6) can be evaluated directly with the following command:
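The command itself is not reproduced in this excerpt; assuming that A, B, the input u (as an expression in τ), the initial state x0 and the initial time t0 are already defined as symbolic objects, a sketch is
>> syms t tau
   x=expm(A*(t-t0))*x0+int(expm(A*(t-tau))*B*u,tau,t0,t)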
Definition 2.19. If in Definition 2.18 the external input signal u(t) ≡ 0, the system is driven solely by the initial value x(t_0), and the model reduces to x'(t) = Ax(t).
Example 2.31. Assume that the input signal is u(t) = 2 + 2e^{-3t} sin 2t, and the matrices in the state space model are given as follows. Find the analytical solution of the differential equation if the initial state vector is known as x(0) = [0, 1, 1, 2]^T.
Solutions. The matrices can be entered into MATLAB workspace first, and then ac-
cording to (2.5.6), the analytical solutions of the equation can be found as
Function dsolve() can be used to solve the state space equations directly. The difficulty in using the function lies in constructing a symbolic function matrix X(t). A MATLAB function any_matrix() is written to define any symbolic function matrix immediately. The listing of the function is as follows, and its syntax will be demonstrated later through examples.
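The listing itself is not reproduced here. The following is only a rough sketch of what such a function might look like, reverse-engineered from the calling syntaxes used below; the use of str2sym() and the single-index naming for vectors are assumptions.
function A=any_matrix(nm,s,varargin)
% sketch: build an nm(1)-by-nm(2) matrix of abstract symbolic functions
% s11(v1,v2,...), s12(...), ...; a scalar nm gives a square matrix
if length(nm)==1, nm=[nm,nm]; end
vlist='';                                   % argument list, e.g. 'x,y' or 't'
for k=1:length(varargin), vlist=[vlist,char(varargin{k}),',']; end
vlist(end)=[];
A=sym(zeros(nm));
for i=1:nm(1), for j=1:nm(2)
   if any(nm==1), ix=int2str(max(i,j));     % vectors get a single index, e.g. x1(t)
   else, ix=[int2str(i),int2str(j)]; end    % matrices get double indices, e.g. a11(x,y)
   A(i,j)=str2sym([s,ix,'(',vlist,')']);
end, end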
Example 2.32. Generate the following symbolic function matrices:

    A = [a11(x,y) a12(x,y) a13(x,y) a14(x,y);
         a21(x,y) a22(x,y) a23(x,y) a24(x,y);
         a31(x,y) a32(x,y) a33(x,y) a34(x,y);
         a41(x,y) a42(x,y) a43(x,y) a44(x,y)],
    B = [f11(t) f12(t); f21(t) f22(t); f31(t) f32(t); f41(t) f42(t)].
Solutions. The following commands can be used directly to generate the two sym-
bolic function matrices, and the results are the same, as expected:
>> syms x y t
A=any_matrix(4,'a',x,y), B=any_matrix([4,2],'f',t)
Example 2.33. Solve again the state space equation in Example 2.31.
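A sketch of the general solution pattern follows; it assumes that the matrices A and B, the input u(t), and the initial state vector x0 of Example 2.31 are already in the MATLAB workspace, and that the field names of the returned structure follow the naming produced by any_matrix().
>> syms t; x(t)=any_matrix([4,1],'x',t);     % symbolic state vector x1(t),...,x4(t)
u=2+2*exp(-3*t)*sin(2*t);                    % input signal of Example 2.31
y=dsolve(diff(x)==A*x+B*u, x(0)==x0);        % solve the state space equation
X=[y.x1; y.x2; y.x3; y.x4]                   % collect the four states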
Theorem 2.13. The analytical solution of the matrix Sylvester differential equation X'(t) = AX(t) + X(t)B, X(0) = X_0,[43] is X(t) = e^{At} X_0 e^{Bt}.
Example 2.34. Solve the following Sylvester differential equation:

    X'(t) = [-1 -2 0 -1; -1 -3 -1 -2; -1 1 -2 0; 1 2 1 1] X(t)
            + X(t) [-2 1; 0 -2],    X(0) = [0 -1; 1 1; 1 0; 0 1].
Solutions. Input the relevant matrices into MATLAB workspace first, then Theo-
rem 2.13 can be used to solve the differential equation directly.
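A minimal sketch of the computation implied by Theorem 2.13, including the back-substitution check mentioned below, might be (using the matrices as reconstructed above):
>> syms t
A=[-1 -2 0 -1; -1 -3 -1 -2; -1 1 -2 0; 1 2 1 1];
B=[-2 1; 0 -2]; X0=[0 -1; 1 1; 1 0; 0 1];
X=expm(A*t)*X0*expm(B*t);                  % analytical solution by Theorem 2.13
e1=simplify(diff(X,t)-A*X-X*B)             % should be the 4-by-2 zero matrix
e2=subs(X,t,0)-X0                          % initial condition check, also zero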
Substituting the solution back into the original equation, it can be seen that both the equation and the initial values are satisfied, since the error matrices are both zero.
Function dsolve() can be tried to solve the Sylvester differential equation di-
rectly. The following example is used to demonstrate the solution process.
Example 2.35. Use dsolve() function to solve again the Sylvester differential equa-
tion in Example 2.34.
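A sketch of the direct dsolve() approach is given below; A, B, and X0 are assumed to be in the workspace, and the field names of the returned structure are assumptions based on the naming used by any_matrix().
>> X(t)=any_matrix([4,2],'x',t);               % unknown 4-by-2 function matrix
Y=dsolve(diff(X)==A*X+X*B, X(0)==X0);          % solve the matrix differential equation
X1=[Y.x11 Y.x12; Y.x21 Y.x22; Y.x31 Y.x32; Y.x41 Y.x42]   % reassemble the solution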
Kronecker product, studied in Volume III, can be introduced to convert the matrix Sylvester equation into a standard vector differential equation. With the Kronecker product, (2.5.8) can be converted into

    x'(t) = (I_m ⊗ A + B^T ⊗ I_n) x(t),

where x(t) = vec(X(t)) is the column vector expanded column-wise from the matrix X(t). In this way the original matrix equation is converted into the standard form of first-order explicit differential equations, whose solution can be computed directly through function expm(), or found with the solver dsolve(). After the solution process, function reshape() can be used to convert the vector back into the expected matrix.
Example 2.36. Use Kronecker product to solve again Sylvester differential equation in
Example 2.34.
Solutions. With Kronecker product, the coefficient matrix can be computed, from
which the solution of the original Sylvester differential equation can be found, which
is the same as that in Example 2.34.
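A sketch of this computation, again assuming A, B, and X0 are in the workspace, is:
>> syms t
A0=kron(eye(2),A)+kron(B.',eye(4));   % coefficient matrix of the vectorized equation
x=expm(A0*t)*X0(:);                   % solve the vectorized initial value problem
X2=reshape(x,4,2)                     % convert back to the 4-by-2 matrix solution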
If the solver is employed instead, the following commands can be used, and the solutions obtained are exactly the same:
>> x(t)=any_matrix([8,1],'x',t);
y=dsolve(diff(x)==A0*x, x(0)==X0(:)); % an alternative method
x2=[y.x1 y.x5; y.x2 y.x6; y.x3 y.x7; y.x4 y.x8]
There is no other general method to judge whether a nonlinear differential equation is solvable or not. The only way is to express the differential equation in the standard form shown earlier, and call the solver to try to solve it. The solvability of differential equations can only be probed in this way. Examples are given next to show the solution methods for nonlinear differential equations.
Example 2.37. Some of the low-order differential equations studied earlier are nonlinear; for instance, in Example 2.4, the first-order nonlinear differential equation below was studied. Solve the differential equation again with the solver:

    dy(t)/dt + 8y(t) + y^2(t) = -15,    y(0) = 0.
Solutions. The standard method can be used to describe the differential equation, and then the solver can be called to find the analytical solution:
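A minimal sketch of such a dsolve() call is:
>> syms y(t)
y=dsolve(diff(y)+8*y+y^2==-15, y(0)==0)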
It can be seen that there is no need to apply any special tactics. All the user needs to do is to express the differential equation in a formal way, and then call the solver to find the solution. The analytical solution of the differential equation, if it exists, can be obtained directly.
Example 2.38. Find the analytical solution of the first-order nonlinear differential equation x'(t) = x(t)(1 - x^2(t)).
The analytical solution obtained is as follows. It can be seen that both branches obtained are valid:

    y(x) = C_1 x^2/2 + C_2 x - (C_1^2 + cC_1 - aC_2^2 - bC_2 + k)/(2C_1 a),
    y(x) = C_3 e^{-\sqrt{a} x} - (c + bx)/(2a) + e^{\sqrt{a} x} (b^2 + 4ak)/(16 C_3 a^3).
Example 2.40. Consider the following third-order nonlinear differential equation and find the analytical solution in the interval x ∈ (0.2, π):[58]

    x^5 y'''(x) = 2(x y'(x) - 2y(x)),    y(1) = 1, y'(1) = 0.5, y''(1) = -1.
Solutions. Compared with first- and second-order differential equations, there are rare cases where a third-order differential equation has an analytical solution. In fact, the formal solution method discussed earlier can still be tried. Here, for the given differential equation and initial values, intermediate variables can be defined for the derivatives, and the differential equation can be solved directly, with the solution curve shown in Figure 2.13.
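A sketch of such a call, based on the equation as reconstructed above (the reading of the derivative orders is itself an assumption), is:
>> syms y(x)
dy=diff(y,x); d2y=diff(y,x,2);              % intermediate variables for the conditions
y=dsolve(x^5*diff(y,x,3)==2*(x*dy-2*y), y(1)==1, dy(1)==0.5, d2y(1)==-1);
fplot(y,[0.2,pi])                           % solution curve on the interval of interest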
2.6.2 Nonlinear differential equations where analytical solutions are not available
It can be seen that the differential equation solver dsolve() is rather powerful. This
may give the readers an illusion that it can be applied directly to any differential equa-
tion. The solver can be tried, of course, but for many cases, especially for nonlinear
differential equations, no analytical solutions are available. Several examples are pre-
sented in this section.
Example 2.41. Solve the nonlinear differential equation x'(t) = x(t)(1 - x^2(t)) + 1.
Solutions. In fact, this equation is that of Example 2.38 with a slight modification, namely, 1 has been added to the right-hand side of the equation. The following commands can be tried, yet the solution process is unsuccessful, since the warning message "Warning: Unable to find explicit solution. Returning implicit solution instead" is displayed, meaning that there is no explicit analytical solution to the equation.
Example 2.42. The Dutch physicist Balthasar van der Pol (1889–1959) and his colleagues reported an oscillating circuit in 1927.[69] The corresponding differential equation is often referred to as the van der Pol equation, with the mathematical form

    d^2 y(t)/dt^2 + μ[y^2(t) - 1] dy(t)/dt + y(t) = 0.    (2.6.1)
Use dsolve() to solve this differential equation and see what happens.
Solutions. For many differential equations studied so far, it seems that they can be
studied with the dsolve() solver. It is natural to try to solve the given nonlinear dif-
ferential equation with such a solver.
If the following commands are tried for van der Pol equation, the warning message
“Unable to find explicit solution” is obtained, indicating that the solution process is
unsuccessful.
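A sketch of the attempted call, which only produces the warning mentioned above, is:
>> syms y(t) mu
y=dsolve(diff(y,2)+mu*(y^2-1)*diff(y)+y==0)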
2.7 Exercises
2.1 Solve the following first-order differential equations:[58]
    (1) y'(x) = \sqrt{x(x + 1)}/(\sqrt{x} + \sqrt{1 + x}),
    (2) y'(t) = [(t - sin 2t)/(t^2 + cos 2t)] y(t),  y(0) = 4,
    (3) y'(t) + 3y(t)/(1 + t^2/4)^{3/2} = 2t/(1 + t^2/4)^2,  y(0) = 8.
2.2 Validate the following properties of hypergeometric functions:[71]
    (1) ₁F₁(a; a; x) = e^x,
    (2) ₂F₁(a, 1; 1; z) = 1/(1 - z)^a,
    (3) ₂F₁(1, 1; 2; -z) = (1/z) ln(z + 1),
    (4) ₂F₁(1/2, 1; 3/2; z^2) = (1/(2z)) ln((z + 1)/(1 - z)),
    (5) ₂F₁(1/2, 1/2; 3/2; z^2) = (1/z) arcsin z.
2.3 Solve the following differential equations with the Laplace transform method:

    x^4 y^{(4)}(x) + 14x^3 y'''(x) + 55x^2 y''(x) + 65x y'(x) + 16y(x) = 0.

    If the following conditions are known, y(0) = y(π) = 1, y'(0) = y'(π) = 1, solve the differential equation again and draw the curve of the solution.
2.7 Find the general solution of the following differential equation, and also find the
analytical solution satisfying the conditions x(0) = 1, x(π) = 2 and y(0) = 0:
2.8 Find the analytical solutions of the following time-varying linear differential
equations:
and also find the solution satisfying the boundary conditions y(1) = π and
y(π) = 1.
2.10 Solve the following boundary value problem:
Draw the curve of the solution and see whether the boundary conditions are
satisfied.
2.11 Find the general solutions for the following differential equations:
    (1) x''(t) + 2tx'(t) + t^2 x(t) = t + 1,
    (2) y'(x) + 2xy(x) = x e^{-x^2},
    (3) y'''(t) + 3y''(t) + 3y'(t) + y(t) = e^{-t} sin t.
2.12 Use MATLAB solver to directly tackle the differential equation in Exercise 2.3 and
validate the solution.
2.13 Find the general solution of the following first-order differential equation:
    dy(x)/dx - [2x/(x^2 + 4)] y(x) = (4 + x^2)^3 e^{-2x}.
If it is known that y(0) = 1, solve the equation again.
2.14 Find the analytical solutions of the following nonlinear differential equations:
    (1) y'(x) = y^4(x) cos x + y(x) tan x,
    (2) x y^2(x) y'(x) = x^2 + y^2(x),
    (3) x y'(x) + 2y(x) + x^5 y^3(x) e^x = 0.
2.15 Find the solutions of the following differential equations and draw the (x, y) tra-
jectory:
    x'(t) = [-3 -2 0 -2; 3 2 0 3; 2 2 -1 2; -2 -2 0 -3] x(t) + [1 0; 3 1; 2 1; 1 0] u(t)

    where x(0) = [0, 1, 2, 1]^T, u_1(t) = sin t, and u_2(t) = e^{-t} + cos t.
3 Initial value problems
Analytical solution methods were studied in Chapter 2. It was indicated that there are
many differential equations, especially nonlinear, where analytical solutions are not
available. Numerical methods should be employed to study these differential equa-
tions. From this chapter on, numerical solutions for various differential equations are
presented.
In Section 3.1, the mathematical model of a first-order explicit differential equa-
tion and its initial value problem is presented. In Section 3.2, fixed-step Runge–Kutta
methods and multistep algorithms are introduced, with low-level MATLAB implemen-
tations. Examples are used to assess the accuracy of the algorithms, through compar-
ative studies.
In real applications, fixed-step methods are not actually utilized, since the accu-
racy and efficiency are not satisfactory. Adaptive variable-step methods with preci-
sion monitoring facilities are usually adopted. In Section 3.3, the variable-step solver
provided in MATLAB is demonstrated directly. The solution procedures are proposed
for ordinary differential equations. Examples are used to demonstrate the numerical
solution process. In Section 3.4, a very important step in numerical solution process
– the validation step – is presented and demonstrated through examples. With such
a step, the solutions obtained may be ensured to be valid. High precision algorithms
and fixed-step display of simulation results are also presented.
In the first part of this chapter, some ideas and low-level implementations of sim-
ple numerical algorithms are presented. It may be helpful for the reader to understand
numerical solutions and algorithms. If the readers are only interested in how to use
MATLAB to directly find numerical solutions of initial value problems, it is suggested
to read this chapter from Section 3.3.
In this section, the first-order explicit differential equation format is presented, which
can be regarded as the fundamental basis for numerical algorithms presented later.
where the vector x(t) = [x_1(t), x_2(t), ..., x_n(t)]^T is known as the state vector, and the function vector f(⋅) = [f_1(⋅), f_2(⋅), ..., f_n(⋅)]^T is composed of arbitrary nonlinear functions.
Definition 3.2. If the initial state vector x_0 = x(t_0) = [x_1(t_0), x_2(t_0), ..., x_n(t_0)]^T is known, and the first-order explicit differential equation in Definition 3.1 is to be solved, the problem is regarded as an initial value problem.
Numerical solutions of initial value problems aim at finding the solutions x(t) of
the differential equations over a given interval t ∈ [t0 , tn ] in a numerical format. The
quantity tn is referred to as the terminal time. The numerical solution reliably finds
how the state vector evolves from the given initial state vector x 0 in the predefined
interval t ∈ [t0 , tn ].
It can be seen from the initial value problem model that if the differential equations are provided in the first-order explicit form, numerical methods can be used to find the solutions of the equations. In contrast to analytical methods, numerical methods do not change significantly when the format of the differential equations changes. It will be shown in later chapters that even if the equations are not provided in the first-order explicit form, conversions can be made such that numerical solvers can still be used to solve them.
Two important theorems are presented in the differential equation theory to describe
the existence and uniqueness of the solutions. The two theorems are listed below,[8]
with some necessary illustrations.
Theorem 3.1 (Existence theorem). Assuming that f (t, x(t)) is continuous in the rectan-
gular region a < t < b and c < x(t) < d, and the initial value (t0 , x(t0 )) is also defined in
this region, there exists an ϵ > 0, such that in the interval t0 − ϵ < t < t0 + ϵ, the function
x(t) satisfies (3.1.1).
In simple terms, if f(t, x(t)) is a continuous function, there exist solutions to the initial value problem. However, this does not imply that if the function is not continuous, there are no solutions.
Theorem 3.2 (Uniqueness theorem). Assume that f(t, x(t)) is continuous over the rectangular region a < t < b and c < x(t) < d, and the initial value (t_0, x(t_0)) is also defined in the region. If ∂f/∂x is also a continuous function, there exists an ϵ > 0 such that, in the time interval t_0 - ϵ < t < t_0 + ϵ, if the equation has two solutions x_1(t) and x_2(t), one has x_1(t) = x_2(t). In other words, the solution is unique.
For the initial value problem of ordinary differential equations, Euler's method is the most straightforward. Euler's method is named after the Swiss mathematician Leonhard Euler (1707–1783), who proposed this algorithm in his book "Institutionum calculi integralis" in 1768. It is the oldest numerical algorithm for differential equations.
Although the algorithm appears to be very simple, it is helpful in understanding
other complicated algorithms to be presented later. Therefore, Euler’s method is in-
troduced, and its MATLAB implementation is completed.
Assume that at time t_0 the initial state vector x(t_0) is known. A very small step-size h can be selected, and from the definition of derivatives in (3.1.1),

    lim_{h→0} [x(t_0 + h) - x(t_0)]/[(t_0 + h) - t_0] = f(t_0, x(t_0)).    (3.2.1)

It can be seen that in mathematics there is no step-size h satisfying h → 0. A relatively small step-size h can be chosen instead, and the limit sign in (3.2.1) can be removed, such that x(t) at time t_0 + h can be approximately expressed as

    x̂(t_0 + h) ≈ x(t_0) + hf(t_0, x(t_0)).    (3.2.2)

Strictly speaking, such an approximation yields errors. Therefore the state vector at time t_0 + h can be expressed as

    x(t_0 + h) = x̂(t_0 + h) + R_0 = x(t_0) + hf(t_0, x(t_0)) + R_0.    (3.2.3)
On the other hand, the Taylor series expansion of x(t_0 + h) is

    x(t_0 + h) = x(t_0) + h x'(t_0) + (h^2/2) x''(t_0) + o(h^3).    (3.2.4)

It can be seen by comparing the two expressions that the error in one step is h^2 x''(t_0)/2; accumulated over the whole interval, the algorithm is said to be of accuracy o(h). In practical algorithm presentation, the "^" sign in (3.2.2) can be dropped, and the numerical solution is denoted directly as x_1. Besides, x(t_0 + kh) is simply denoted as x_k.
Theorem 3.3. Assuming that the state vector x_k at time t_k is known, the numerical solution at time t_k + h can be obtained by Euler's method as

    x_{k+1} = x_k + h f(t_k, x_k).    (3.2.5)

An iterative method can be used to evaluate the numerical solution of the differential equation in the interval t ∈ [0, T], which means that the numerical solutions at the time instances t_0 + h, t_0 + 2h, ... can be found one after another.
Based on Euler's method, the following MATLAB function can be written to find numerical solutions of first-order explicit differential equations:
function [t,x]=ode_euler(f,vec,x0)
if length(vec)==3, h=vec(2); t=[vec(1):h:vec(3)]';
else, h=vec(2)-vec(1); t=vec(:); end
n=length(t); x0=x0(:); x=[x0'; zeros(n-1,length(x0))];
for i=1:n-1
   x1=f(t(i),x0); x1=x0+h*x1(:); % iteratively solve the equation for one step
   x(i+1,:)=x1'; x0=x1;          % store and update the solutions
end
Note that the action of (:) in the program is to force a vector into a column vector. This is a fault-tolerance facility in the program: if the vector returned by the function is erroneously written as a row vector through carelessness, the program can still be executed normally.
The syntax of the function is [t,x]=ode_euler(f,vec,x_0), where the differential equation is described by the function handle f. Later, examples are used to show how to express differential equations by function handles. The argument x_0 stores the initial state vector. There are two ways to define the argument vec: one is to specify the actual fixed-step time vector t; the other is to supply the parameters in a vector [t_0, h, t_n], where h is the step-size and (t_0, t_n) is the time interval of interest. The returned argument t is a time vector, while x is a matrix composed of n columns, each corresponding to the numerical values of one state variable, where n is the number of states. Examples are given next to demonstrate the use of Euler's method and its behavior.
Example 3.1. Consider the second-order differential equation studied in Example 2.19. Let λ = 2, a = 1, and b = 0. For simplicity, the differential equation is given again in first-order explicit form, with y_i(t) rewritten as x_i(t):

    x_1'(t) = x_2(t),  x_2'(t) = -2x_1(t),  x_1(0) = 1, x_2(0) = 0.
First find the analytical solution of the differential equation. Besides, select differ-
ent step-sizes and find the numerical solution of the differential equation with Euler’s
method. Assess the impact of the step-size on the computational accuracy.
Solutions. If the method in Example 2.19 is used, the following commands can be
employed to find the analytical solution of the differential equation:
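A sketch of an equivalent dsolve() call for the restated system is:
>> syms x1(t) x2(t)
[x1,x2]=dsolve(diff(x1)==x2, diff(x2)==-2*x1, x1(0)==1, x2(0)==0)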
It is found that the analytical solution is x_1(t) = cos(√2 t) and x_2(t) = -√2 sin(√2 t). Later this solution is used to assess the accuracy of the algorithms.
If numerical algorithms are applied, the equation must first be expressed in MATLAB. If the first-order explicit differential equation is given, a MATLAB or anonymous function should be written to compute the derivative x'(t) from the known t and x(t). The standard anonymous function for this problem is as follows:
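Consistent with the MATLAB function c3mtest() given later, the anonymous function is presumably of the form:
>> f=@(t,x)[x(2); -2*x(1)];   % x1'=x2, x2'=-2*x1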
Now selecting different step-sizes h = 0.001, 0.01, 0.02, and computing the numerical
solutions, it is found that the maximum errors are respectively e1 = 0.0086, e2 =
0.0874, and e3 = 0.1789. Comparisons of the numerical solutions with the analytical
solution are shown in Figure 3.1. It is evident that when the step-size is large, there
exist large discrepancies in the numerical solutions. It can be concluded from the
example that Euler’s method is far from satisfactory.
In the displayed result, the thick curves are composed of several curves, and they are
almost the same. They represent the analytical solution and the numerical solution
at h = 0.001. The other curves correspond to the cases when h = 0.01 and h = 0.02.
It can be seen that the errors are relatively large. When h = 0.001, the difference may
even not be witnessed from the curves, since the maximum error is only e = 0.0086.
If h is increased, the error tends to increase significantly, and the curve deviates from
the theoretical one. Even if h = 0.001, the error is still considered too large under the
double-precision standard. Better algorithms are needed for such problems.
It has been commented that there are two ways to describe the differential equa-
tions. The anonymous function method was demonstrated above. The other method is
to write a genuine MATLAB function. For instance, the function below can be written
function dx=c3mtest(t,x)
dx=[x(2); -2*x(1)]; % use MATLAB function for the differential equation
In this case, the following commands can be used to solve the first-order explicit dif-
ferential equation, and the result is identical to that given above.
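A sketch of such a call (the step-size and interval shown are assumptions) is:
>> [t1,x1]=ode_euler(@c3mtest,[0,0.01,10],[1;0]);
plot(t1,x1)    % numerical solution with the MATLAB function description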
Since the accuracy of the Euler’s method described earlier is too low, better algo-
rithms with higher accuracy are expected. Runge–Kutta methods are a family of widely
used numerical methods. Runge–Kutta methods are named after two German mathe-
maticians Carl David Tolmé Runge (1856–1927) and Martin Wilhelm Kutta (1867–1944),
where Runge proposed the method in 1895, and Kutta proposed a series of computa-
tion formulas of orders less than 4.
Euler’s method discussed earlier is also a special case of these formulas, known
as the first-order Runge–Kutta method. Here, another simple Runge–Kutta method –
second-order Runge–Kutta algorithm – is illustrated.
Theorem 3.4. Assuming that the state vector x_k at time t_k is known, the numerical solution of the differential equation at t_k + h, using the second-order Runge–Kutta algorithm, can be written as

    x_{k+1} = x_k + (h/2)(k_1 + k_2)    (3.2.6)

where the intermediate variables are

    k_1 = f(t_k, x_k),  k_2 = f(t_{k+1}, x_k + h k_1).    (3.2.7)
What is the difference between the notations o(h) and o(h^2)? A simple illustration is given below. If the step-size is selected as h = 0.01, then o(h) means that the cumulative error is about the level of 0.01, while o(h^2) yields a cumulative error at the level of 0.01^2, i. e., 0.0001. It is obvious that for relatively small step-sizes h, the o(h^2) algorithms are evidently superior to the o(h) algorithms. For the same step-size h, one should select o(h^p) algorithms with larger values of p.
Using the framework of the function designed for Euler’s method, the following
MATLAB function can be written for the second-order fixed-step Runge–Kutta algo-
rithm, and the syntax is exactly the same as for ode_euler() function:
function [t,x]=ode_rk2(f,vec,x0)
if length(vec)==3, h=vec(2); t=[vec(1):h:vec(3)]';
else, h=vec(2)-vec(1); t=vec(:); end
n=length(t); x0=x0(:); x=[x0'; zeros(n-1,length(x0))];
for i=1:n-1
   k1=f(t(i),x0); k2=f(t(i+1),x0+h*k1);
   x1=x0+h*(k1(:)+k2(:))/2; x(i+1,:)=x1'; x0=x1;
end
Example 3.2. Solve again the differential equation in Example 3.1 using the second-
order Runge–Kutta method, and assess the accuracy.
Solutions. The following commands can be used to solve the differential equation directly. Again, different step-sizes can be tried, and the maximum errors for the step-sizes are respectively e_1 = 4.0246×10^{-6}, e_2 = 4.0299×10^{-4}, and e_3 = 0.0016, which are significantly smaller than those for Euler's method. A larger step-size of h = 0.1 can also be selected, and the maximum error is then e_4 = 0.0407. Comparisons of the numerical solutions are shown in Figure 3.2. It can be seen that apart from the case h = 0.1, the other results can hardly be distinguished in the plot. Even if a larger step-size of h = 0.2 is selected, the result obtained is still much better than that with Euler's method.
It can be seen that, compared with Euler’s method, the errors are significantly reduced
for the same step-size. For relatively large step-sizes, the error may still be rather large.
Theorem 3.5. With the fourth-order Runge–Kutta algorithm, the states can be recursively computed from

    x_{k+1} = x_k + (h/6)(k_1 + 2k_2 + 2k_3 + k_4)    (3.2.8)

where the four intermediate vectors can be computed from

    k_1 = f(t_k, x_k),
    k_2 = f(t_k + h/2, x_k + h k_1/2),
    k_3 = f(t_k + h/2, x_k + h k_2/2),
    k_4 = f(t_k + h, x_k + h k_3),    (3.2.9)

where h is the step-size, which can be a constant.
It can be shown that the accuracy of the algorithm is o(h^4). We can still use h = 0.01 as an example. If the fourth-order Runge–Kutta algorithm is used, the error may reach the level of 0.01^4, i. e., 10^{-8}. Therefore this algorithm's accuracy is far higher than that of Euler's method and the second-order Runge–Kutta algorithm.
A recursive method can still be used to compute from the initial state vector to find
the states at time instances t0 + h, t0 + 2h, . . . , such that the numerical solutions in the
interval t ∈ [t0 , tn ] can be found.
With the above mathematical formula, the MATLAB solver can be written for the
fourth-order Runge–Kutta algorithm:
function [t,x]=ode_rk4(f,vec,x0)
if length(vec)==3, h=vec(2); t=[vec(1):h:vec(3)]';
else, h=vec(2)-vec(1); t=vec(:); end
n=length(t); x0=x0(:); x=[x0'; zeros(n-1,length(x0))];
for i=1:n-1 % use a loop structure to solve the differential equation
   k1=f(t(i),x0); k2=f(t(i)+h/2,x0+h*k1/2);
   k3=f(t(i)+h/2,x0+h*k2/2); k4=f(t(i+1),x0+h*k3);
   x1=k1+2*k2+2*k3+k4; x1=x0+x1(:)*h/6;
   x(i+1,:)=x1'; x0=x1; % update the initial values and store the solutions
end
Example 3.3. Use the fourth-order Runge–Kutta algorithm to solve again the problem
in Example 3.1, and assess the accuracy.
Solutions. With the fourth-order Runge–Kutta algorithm, and selecting the step-sizes h = 0.001, h = 0.01, h = 0.1, and an even larger one h = 0.2, the maximum errors obtained are respectively e_1 = 4.0712×10^{-13}, e_2 = 4.0307×10^{-9}, e_3 = 4.0756×10^{-5}, and e_4 = 0.00065. It can be seen that with the fourth-order Runge–Kutta algorithm, the accuracy is significantly increased. Even when the step-size is selected as h = 0.2, the algorithm is still applicable, and the error is smaller than for Euler's method with h = 0.001. It can be seen that the efficiency of the algorithm is much superior to those of Euler's and the second-order Runge–Kutta algorithms.
Theorem 3.6. With Gill's algorithm, the states can be recursively computed from

    x_{k+1} = x_k + (h/6)[k_1 + (2 - √2)k_2 + (2 + √2)k_3 + k_4]    (3.2.10)

where the intermediate vectors are

    k_1 = f(t_k, x_k),
    k_2 = f(t_k + h/2, x_k + h k_1/2),
    k_3 = f(t_k + h/2, x_k + (√2 - 1)h k_1/2 + (2 - √2)h k_2/2),
    k_4 = f(t_k + h, x_k - √2 h k_2/2 + (2 + √2)h k_3/2),    (3.2.11)

where h is the step-size, and the error level of Gill's algorithm is o(h^4).
Based on the above algorithm, it is not difficult to write the following MATLAB
implementation:
function [t,x]=ode_gill(f,vec,x0)
if length(vec)==3, h=vec(2); t=[vec(1):h:vec(3)]';
else, h=vec(2)-vec(1); t=vec(:); end
n=length(t); x0=x0(:); x=[x0'; zeros(n-1,length(x0))];
for i=1:n-1
   k1=f(t(i),x0); k2=f(t(i)+h/2,x0+k1*h/2);
   k3=f(t(i)+h/2,x0+(sqrt(2)-1)*h*k1/2+(2-sqrt(2))*h*k2/2);
   k4=f(t(i+1),x0-sqrt(2)*h*k2/2+(2+sqrt(2))*h*k3/2);
   x1=k1+(2-sqrt(2))*k2+(2+sqrt(2))*k3+k4;
   x1=x0+x1(:)*h/6; x(i+1,:)=x1'; x0=x1;
end
Example 3.4. Solve again the problem in Example 3.1 with Gill’s algorithm and assess
the accuracy.
Solutions. Similar to the above cases, the following statements can be tried for various step-sizes, and the maximum errors can be found (the set-up statements and the particular step-sizes shown here are illustrative assumptions):
>> f=@(t,x)[x(2); -2*x(1)]; sq=sqrt(2); t0=0; tn=10; E=[];
for h=[0.001,0.01,0.1,0.2]
   [t1,x1]=ode_gill(f,[t0,h,tn],[1;0]);   % numerical solution with Gill's algorithm
   sol=[cos(sq*t1), -sq*sin(sq*t1)];      % analytical solution for comparison
   e1=norm(sol-x1,inf); E=[E,e1];         % record the maximum error
end
Several numerical algorithms were discussed above, where Euler's method can be regarded as a special case of Runge–Kutta methods, namely the first-order Runge–Kutta algorithm. The second- and fourth-order Runge–Kutta algorithms were also presented. It can be seen from their structures that the two algorithms are similar, and the accuracies are respectively o(h^2) and o(h^4). More generally, a class of mth order Runge–Kutta algorithms can be constructed.
Interpolation ideas are used in the Runge–Kutta algorithms: from the given values, the next function values are computed through the definite integral

    x(t_{k+1}) = x(t_k) + \int_a^b f(t, x(t)) dt,    (3.2.12)

where a = t_k and b = t_{k+1}. The step-size is h = t_{k+1} - t_k. Within the interval (t_k, t_{k+1}), some interpolation points are defined as

    t_{kj} = t_k + α_j h,  j = 1, 2, ..., m.    (3.2.13)

From the specifically selected interpolation points, the mth order Runge–Kutta algorithm is formulated.
Theorem 3.7. The general formula for the mth order Runge–Kutta algorithm is

    x_{k+1} = x_k + \sum_{i=1}^{m} h γ_i k_i    (3.2.14)

    k_i = f(t_k + α_i h, x_k + h \sum_{j=1}^{i-1} β_{ij} k_j),  i = 1, 2, ..., m.    (3.2.15)
It can be seen that the second- and fourth-order Runge–Kutta algorithms, as well as Gill's algorithm, are all special cases of the mth order Runge–Kutta algorithms. The cumulative error of the mth order Runge–Kutta algorithm is o(h^m).
Theorem 3.8. In Runge–Kutta methods, the relationship between the coefficients is described by the Butcher tableau[13]

    α_1 = 0
    α_2    β_21
    α_3    β_31   β_32
    ⋮      ⋮      ⋮      ⋱                                      (3.2.16)
    α_m    β_m1   β_m2   ⋯   β_{m,m-1}
           γ_1    γ_2    ⋯   γ_{m-1}   γ_m

and

    \sum_{i=1}^{m-1} γ_i α_i^{λ-1} = 1/λ,  λ = 1, 2, ..., m - 1,    (3.2.18)

    \sum γ_i β_{ij} α_j = 1/6,   \sum γ_i α_i β_{ij} α_j = 1/8,  ...    (3.2.19)
Butcher invented tree-like symbols to denote combination forms of the coefficient terms. For instance, a single node ∙, a two-node chain, and a three-node branching tree denote the terms for λ = 1, λ = 2, and λ = 3 in (3.2.18).
Example 3.5. Find all the undetermined coefficients for the third-order Runge–Kutta algorithm, and write down the formulas.
Solutions. From Theorem 3.8, the coefficients must satisfy

    β_21 = α_2,   β_31 + β_32 = α_3,
    γ_1 + γ_2 + γ_3 = 1,   γ_2 α_2 + γ_3 α_3 = 1/2,   γ_2 α_2^2 + γ_3 α_3^2 = 1/3,
    γ_3 β_32 α_2 = 1/6.
It can be seen that there are eight undetermined variables and only six equations, so two of them should be assigned artificially. For simplicity, one may assign values to the α_i. For instance, select α_2 = 1/2 and α_3 = 1. With MATLAB symbolic computation, the other coefficients can be solved directly from these algebraic equations:
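A sketch that reproduces the coefficients in the tableau given below is:
>> syms g1 g2 g3 b21 b31 b32
a2=sym(1)/2; a3=sym(1);
S=solve(b21==a2, b31+b32==a3, g1+g2+g3==1, g2*a2+g3*a3==sym(1)/2, ...
        g2*a2^2+g3*a3^2==sym(1)/3, g3*b32*a2==sym(1)/6);
[S.g1,S.g2,S.g3], [S.b21,S.b31,S.b32]   % gamma=[1/6,2/3,1/6], beta21=1/2, beta31=-1, beta32=2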
The resulting third-order formula is

    x_{k+1} = x_k + (h/6)(k_1 + 4k_2 + k_3)

where

    k_1 = f(t_k, x_k),
    k_2 = f(t_k + h/2, x_k + h k_1/2),
    k_3 = f(t_k + h, x_k + h(-k_1 + 2k_2)).
Of course, α_2 and α_3 can be assigned other values. For instance, letting α_2 = 1/2 and α_3 = 3/4, the corresponding algebraic equations can be solved again, giving

    x_{k+1} = x_k + (h/9)(2k_1 + 3k_2 + 4k_3)

where

    k_1 = f(t_k, x_k),
    k_2 = f(t_k + h/2, x_k + h k_1/2),
    k_3 = f(t_k + 3h/4, x_k + 3h k_2/4).
The Butcher tableaus of the two third-order algorithms are

    0                          0
    1/2    1/2                 1/2    1/2
    1      -1     2            3/4    0      3/4
           1/6    2/3    1/6          2/9    1/3    4/9
It may be difficult to derive the formulas for the fourth- or even higher-order Runge–Kutta algorithms. The interested reader may refer to [25, 65], where the fourth-order Runge–Kutta algorithm requires solving 11 algebraic equations with 13 unknowns. Two of the unknowns should be assigned artificially; for instance, one may select α_2 and α_3 and solve for the others.
The Runge–Kutta methods discussed so far are one-step methods; that is, x_{k+1} is computed from the given x_k alone. Although several intermediate variables k_i are computed in the solution process, they can be regarded as interpolations within one step-size h. In real applications, if several earlier points x_0, x_1, ... are known, multistep algorithms can be introduced.
The main idea of a linear multistep method is to express the solution of the differential equation in the form of a linear difference equation, and use a recursive method to find approximate solutions of the differential equations. The so-called linear multistep methods need to determine the weights a_i and b_i, i = 1, 2, ..., m, and use a recursive method to find the numerical solutions of the differential equations. The commonly used ones include the fourth-order Adams–Bashforth and Adams–Moulton algorithms.
Theorem 3.9. The numerical solution with the fourth-order Adams–Bashforth algorithm is

    x_{k+1} = x_k + (h/24)[55f(t_k, x_k) - 59f(t_{k-1}, x_{k-1}) + 37f(t_{k-2}, x_{k-2}) - 9f(t_{k-3}, x_{k-3})].    (3.2.21)
Theorem 3.10. The numerical solution with the fourth-order Adams–Moulton algorithm is

    x_{k+1} = x_k + (h/24)[9f(t_{k+1}, x_{k+1}) + 19f(t_k, x_k) - 5f(t_{k-1}, x_{k-1}) + f(t_{k-2}, x_{k-2})].    (3.2.22)
Since in the Adams–Moulton algorithm the variable x_{k+1} being computed also appears on the right-hand side of the formula, this may cause problems in actual computation. Sometimes a predictor–corrector method is needed to implement the algorithm. This algorithm is not discussed further in this book; the implementation of the fourth-order Adams–Bashforth algorithm is studied instead.
It can be seen from the algorithm that, if x_3 (or, in the Adams–Moulton formula, x_2) can be provided, the algorithm can be started. In actual initial value problems, only the initial state x_0 is given; the other values, such as x_3, are not known, and other methods should be used to determine these starting points. In a classical linear multistep method, Euler's method is used to determine x_1; from x_0 and the newly found x_1, a low-accuracy trapezoidal method is used to approximate x_2, and a three-step method can then be used to find x_3. It is obvious that there may be relatively large errors at the beginning, which may affect all subsequent computations. Therefore this is not a good choice. High-precision algorithms should be introduced to construct the initial points instead. For instance, the fourth-order Runge–Kutta algorithm can be embedded into the solver to compute x_1, x_2, and x_3:
function [t,x]=ode_adams(f,vec,x0)
if length(vec)==3, h=vec(2); t=[vec(1):h:vec(3)]';
else, h=vec(2)-vec(1); t=vec(:); end
[t0,x]=ode_rk4(f,[vec(1),h,vec(1)+3*h],x0);  % start-up points by the 4th order Runge-Kutta algorithm
n=length(t); x=[x; zeros(n-4,length(x0))];
for i=4:n-1 % find the numerical solutions with the multistep algorithm
   k1=f(t(i),x(i,:)); k2=f(t(i-1),x(i-1,:));
   k3=f(t(i-2),x(i-2,:)); k4=f(t(i-3),x(i-3,:));
   x1=55*k1-59*k2+37*k3-9*k4; x1=x(i,:).'+h*x1(:)/24;
   x1=x1(:); x(i+1,:)=x1';
end
Example 3.6. Solve the differential equation in Example 3.1 with Adams–Bashforth
method and assess the accuracy.
Solutions. In contrast to the statements in the previous examples, the following MAT-
LAB commands can be used, and for different step-sizes, the numerical solution can
be found, as well as the errors assessed.
For the same differential equation, different solvers were tried in the previous examples. The accuracy information for the different algorithms is collected in Table 3.1. It can be seen that the fourth-order Runge–Kutta and Gill's algorithms yield almost the same accuracy. Besides, the fourth-order Runge–Kutta algorithm (with o(h^4)) is evidently more accurate than the second-order Runge–Kutta algorithm (with o(h^2)) and Euler's method (with o(h)). Even when a large step-size such as h = 0.2 is used, rather high accuracy can still be witnessed for the o(h^4) algorithms. Although the Adams–Bashforth linear multistep algorithm claims o(h^4) accuracy, its behavior is much worse than that of the fourth-order Runge–Kutta algorithm for this particular problem.
Compared with the one-step methods such as Runge–Kutta methods, the disadvan-
tages are that other methods should be used to compute the first few points, so that the
multistep algorithms can be started. The accuracy of the computed initial points may
impact the total accuracy of the algorithm. In the implementation, the initial points
are evaluated by the fourth-order Runge–Kutta algorithm. From another viewpoint,
multistep methods introduced here are only applicable to fixed-step algorithms, oth-
erwise the previous information cannot be adopted. Variable multistep methods may
have even more complicated structures which are not easy to implement.[32] They are
not discussed further in this book.
the differential equations. Once the order of the algorithm is selected, the way to increase the accuracy is to reduce the step-size h. However, in real applications, the step-size h cannot be reduced indefinitely. There are two reasons for this:
(1) Slow computational speed. For the same time of interest, reducing step-size im-
plies increasing the total number of points in the interval, therefore the speed may
be slow.
(2) Increased cumulative error. No matter how small the step-size is, there will be roundoff errors. If the step-size is reduced, the number of points is increased, and the roundoff errors have more opportunities to accumulate, which implies a larger cumulative error. The roundoff, cumulative, and total errors are illustrated in Figure 3.3.
In this section, variable-step algorithms are mainly discussed. With the variable-
step solver provided in MATLAB, the solutions of ordinary differential equations are
demonstrated through examples. The syntaxes of the solver are addressed so as to
better solve differential equations.
Although fixed-step algorithms are mainly taught in the courses such as numerical
analysis, almost nobody uses them in practice, since there are too many limitations.
If one wants to increase the efficiency in solving differential equations, the following
measures can be considered:
(1) Selecting a suitable step-size. As in the case of Euler’s method, the step-size
should be properly chosen. It should not be too small or too large.
(2) Improving the accuracy of the algorithm. Since the accuracy of Euler’s method
is too low, better algorithms should be selected. For instance, the fourth-order
Runge–Kutta algorithm is a better choice.
72 | 3 Initial value problems
The principles of some of the variable-step algorithms are illustrated in Figure 3.4. If
the state vector x k at time tk is known, the initial step-size h can be used to compute
the state x̃_{k+1} at time t_k + h. On the other hand, the step-size can be reduced by half such that the state vector x̂_{k+1} is evaluated in two half-steps. If the error between the two results, ϵ = ‖x̂_{k+1} - x̃_{k+1}‖, is smaller than the prespecified error tolerance, the step-size can be kept, or even increased; if the error is too large, the step-size should be
reduced, and checked again. An adaptive variable-step algorithm monitors the error
in the solution process, and adjusts the step-size whenever necessary, to ensure fast
speed and high accuracy.
Since sometimes very large step-sizes are allowed in variable-step algorithms, the variable-step algorithms are also more efficient in terms of speed.
Erwin Fehlberg, a researcher working for NASA, improved the classical Runge–Kutta method,[25] such that within each computation step the function f(⋅) is evaluated 6 times, so as to ensure high precision and numerical stability. The efficiency of the algorithm is high, and the accuracy can be controlled by the user; therefore it is widely used in practical applications.
Theorem 3.11. Assuming that the current step-size is h_k, 6 intermediate variables k_i are evaluated as

    k_i = f(t_k + α_i h_k, x_k + \sum_{j=1}^{i-1} β_{ij} k_j),  i = 1, 2, ..., 6    (3.3.1)

where t_k is the current time, and the coefficients α_i, β_{ij}, and γ_i are shown in Table 3.2. The coefficients α_i and β_{ij} are also referred to as Dormand–Prince pairs. The state variable in the next step can be computed from

    x_{k+1} = x_k + \sum_{i=1}^{6} γ_i h_k k_i.    (3.3.2)
Table 3.2: Butcher tableau for the 4/5th order RKF coefficients.

    α_i      β_{ij}
    0
    1/4      1/4
    3/8      3/32       9/32
    12/13    1932/2197  -7200/2197  7296/2197
    1        439/216    -8          3680/513     -845/4104
    1/2      -8/27      2           -3544/2565   1859/4104    -11/40
    γ_i      16/135     0           6656/12825   28561/56430  -9/50      2/55
    γ_i*     25/216     0           1408/2565    2197/4104    -1/5       0
Of course, a fixed-step algorithm can be formulated in this way. In real applications, however, an alternative formula based on the coefficients γ_i* is used to evaluate the solution with another method, and the error is defined as

    ϵ_k = \sum_{i=1}^{6} (γ_i - γ_i*) h_k k_i.    (3.3.3)
The value of the error can be used to update the step-size. The algorithm can thus adaptively change the step-size and ensure that the accuracy requirement is satisfied. In the algorithm, the result obtained with γ_i is of o(h^5) accuracy, while the one used in monitoring the error, i. e., that evaluated with γ_i*, is of o(h^4) accuracy. Therefore the algorithm is also known as the 4/5th order Runge–Kutta–Fehlberg (RKF) algorithm.
It can be seen from the numerical examples studied earlier that the fixed-step algo-
rithms are not really used in practical situations. Examples are given to illustrate dif-
ferential equations with variable-step algorithms. Comparative studies are also made
for existing MATLAB solvers.
Function ode45() is provided in MATLAB to solve first-order explicit differential equations. The variable-step 4/5th order Runge–Kutta–Fehlberg algorithm is adopted. The syntaxes of the function are
    [t,x]=ode45(Fun,[t0,tn],x0),              % direct solution
    [t,x]=ode45(Fun,[t0,tn],x0,options),      % with control options
    [t,x]=ode45(Fun,[t0,tn],x0,options,p1,p2,...,pm)
where, in the last syntax, additional parameters p_1, p_2, ... are allowed. The differential equation can be described by either an anonymous or a MATLAB function, with Fun as its function handle. Examples will be used later to demonstrate the use of function handles. The argument tspan is used to describe the solution interval; normally tspan=[t_0,t_n], and if only one value, t_n, is used, the default initial time is t_0 = 0, with t_n the terminal time. Apart from these, the initial state vector x_0 should be supplied.
Besides, in the function call, tn < t0 is allowed, meaning that backward solutions
of differential equations are possible, where t0 can be regarded as terminal time, and
tn the initial time. Vector x 0 stores the terminal state value. The function can then be
used in solving terminal value problems directly.
Apart from the solver ode45(), other similar solvers such as ode113(), ode15s(), and ode23() are also provided. They share the same syntax, but the internal algorithms of the functions are different.
The key step in differential equation solution is to write a MATLAB function to de-
scribe the first-order explicit differential equation. The interface of the function should
be
    function xd=funname(t,x),              % without additional parameters
    function xd=funname(t,x,p1,p2,...,pm), % with additional parameters
where the scalar t is the time variable, so that time-varying differential equations can also be handled. Even if the equation does not explicitly contain t, the argument must still be present as a placeholder, otherwise errors may occur in the MATLAB solution process. The other input argument x is the state vector. The returned argument xd is the derivative of the state vector.
In some application examples, there are cases where additional parameters
should be assigned. These parameters can be passed to the model with variables
p1 , p2 , . . . , pm , which should be exactly matched. Examples will be given later to
demonstrate in detail the syntaxes of the solvers.
Example 3.7. Consider the well-known Lorenz model with the state space equation
>> f=@(t,x)[-8/3*x(1)+x(2)*x(3);
-10*x(2)+10*x(3);
-x(1)*x(2)+28*x(2)-x(3)];
Function ode45() can then be called to numerically solve the differential equation described by the anonymous function f, and a graphical display of the solution can be produced once the numerical solution is obtained. In the commands, t_n is the terminal time and x_0 is the initial state. The relationship between the states and time is obtained as shown in Figure 3.5.
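A sketch of such commands is given below; the terminal time and the tiny nonzero initial state are assumptions borrowed from Example 3.14.
>> tn=100; x0=[0;0;1e-10];
[t,x]=ode45(f,[0,tn],x0);   % numerical solution of the Lorenz equation
plot(t,x)                   % states versus time, cf. Figure 3.5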
If the state space trajectory of the three states is expected, the following statements
can be used, and the result is shown in Figure 3.6.
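Presumably a statement of the following form was used:
>> plot3(x(:,1),x(:,2),x(:,3))   % phase space trajectory, cf. Figure 3.6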
It can be seen that a complicated nonlinear differential equation with three states can
be solved directly by a few simple statements. Besides, simple scientific visualization
commands can be used to display various results. This is the reason why MATLAB is
selected as the working language in the book.
In fact, the best way to observe the phase space trajectory is by using function
comet3(), where animation is used to show the trajectory dynamically. The reader
only needs to change the last command to comet3(x(:, 1),x(:,2),x(:,3)) to show
the animation.
Solutions. The functions to be solved for are x(t) and y(t), while the standard form requires a state vector x(t). State variables should therefore be introduced; for instance, letting x_1(t) = x(t) and x_2(t) = y(t), the standard form of the differential equations can be written down.
With the standard model, an anonymous function can be written to describe the differential equation, and the relation between x(t) and y(t), also known as the phase plane trajectory, can be drawn as shown in Figure 3.7. The elapsed time for the solution process is 0.00648 seconds, and the number of points computed is 153.
Figure 3.7: Phase plane trajectory (the curves are rather rough).
It can be seen from the example that, if the equation can be expressed by an
anonymous or MATLAB function, the solver ode45() can be called to find the numer-
ical solutions directly. It can be seen that writing a MATLAB function is the key step in
the numerical solution process.
Now let us consider an even more complicated example and see how this example
can be solved with MATLAB.
Example 3.9. Consider the three-body model.[13] The notation (t) is omitted from the
following mathematical model:
    y_1' = y_4,
    y_2' = y_5,
    y_3' = y_6,
    y_4' = 2y_5 + y_1 - μ(y_1 + μ - 1)/\sqrt{(y_2^2 + y_3^2 + (y_1 + μ - 1)^2)^3}
                      - (1 - μ)(y_1 + μ)/\sqrt{(y_2^2 + y_3^2 + (y_1 + μ)^2)^3},
    y_5' = -2y_4 + y_2 - μ y_2/\sqrt{(y_2^2 + y_3^2 + (y_1 + μ - 1)^2)^3}
                       - (1 - μ) y_2/\sqrt{(y_2^2 + y_3^2 + (y_1 + μ)^2)^3},
    y_6' = -μ y_3/\sqrt{(y_2^2 + y_3^2 + (y_1 + μ - 1)^2)^3}
           - (1 - μ) y_3/\sqrt{(y_2^2 + y_3^2 + (y_1 + μ)^2)^3},
where the two larger bodies are the Earth and the Moon, while the smaller one is a satellite. The spatial coordinates of the satellite are expressed by y_1, y_2, and y_3, while y_4, y_5, and y_6 are the projections of its speed onto the axes. Assume that the initial values are y_1(0) = 0.994 and y_5(0) = -2.0015851063790825224, the remaining initial values are zero, and μ = 1/81.45.
Solutions. It can be seen from the first-order explicit differential equations that anonymous functions could be used to express them, and the solver ode45() could then be called to solve the differential equations. However, it is not hard to see from the mathematical model that there are common terms which appear many times. It is better to compute them once and store them in intermediate variables, so that they can be reused. Since anonymous functions do not support intermediate variables, a MATLAB function is used instead to describe the differential equations:
function dy=threebody(t,y)
mu0=1/81.45;
D1=sqrt((y(2)^2+y(3)^2+(y(1)+mu0-1)^2)^3);
D2=sqrt((y(2)^2+y(3)^2+(y(1)+mu0)^2)^3);
dy=[y(4:6);
2*y(5)+y(1)-mu0*(y(1)+mu0-1)/D1-(1-mu0)*(y(1)+mu0)/D2;
-2*y(4)+y(2)-mu0*y(2)/D1-(1-mu0)*y(2)/D2;
-mu0*y(3)/D1-(1-mu0)*y(3)/D2];
Selecting t_0 = 0 and t_n = 40, the following commands can be used to solve the differential equations, and the trajectory of the satellite can be obtained, as shown in Figure 3.8. Unfortunately, the result obtained is not correct. This brings us a new problem: how can the accuracy of a numerical solution be assessed and improved? This question is addressed in Section 3.4.
>> y0=[0.994,0,0,0,-2.0015851063790825224,0]';
[t,y]=ode45(@threebody,[0,40],y0); % solve the differential equations
plot3(y(:,1),y(:,2),y(:,3)) % draw the phase space trajectory
In the solution process, the aim of introducing additional parameters is that, if a dif-
ferential equation contains certain parameters, and different values of the parameters
are to be tried, they can be used as additional parameters, such that there is no need
for the user to modify the model file every time the parameters are changed. Consider
the case in Example 3.7 where there are parameters β, ρ, and σ in the Lorenz equation.
If they are selected as additional parameters, there is no need to change the MATLAB
function for the Lorenz equation each time the parameters are changed.
Example 3.10. Write a MATLAB function containing additional parameters for the
Lorenz equation in Example 3.7. Solve the equation again and compare the results.
Select a new set of parameters β = 2, ρ = 5, and σ = 20. Solve numerically the Lorenz
equation again.
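A sketch of what lorenz1.m might look like is given below; the parameter order (β, ρ, σ) is an assumption.
function dx=lorenz1(t,x,beta,rho,sigma)
% Lorenz equation with the three parameters passed as additional arguments
dx=[-beta*x(1)+x(2)*x(3);
    -rho*x(2)+rho*x(3);
    -x(1)*x(2)+sigma*x(2)-x(3)];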
Then the differential equations can be solved with the following commands. It can
be seen that in the calling command, the variable names need not be the same as in
the function. If the corresponding relationships are the same, the parameters can be
passed to the variables correctly. In the ode45() function call, an empty matrix is used
to indicate that the default control option is used.
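A sketch of such a call (variable names and the solution interval are illustrative assumptions) is:
>> b=8/3; r=10; s=28; tn=100; x0=[0;0;1e-10];
[t,x]=ode45(@lorenz1,[0,tn],x0,[],b,r,s);   % [] selects the default options
plot(t,x)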
With the new MATLAB function to describe the Lorenz equation, the parameters β,
ρ, and σ can be assigned to other values, and the user does not have to modify the
model file lorenz1.m. For instance, selecting β = 2, ρ = 5, and σ = 20, the following
commands can be used to find the numerical solutions directly. The results obtained
are as shown in Figures 3.9 and 3.10.
It can be seen from the results that if the parameters in the chaotic system are changed,
the behavior of the system may also be changed. In this example, chaotic behaviors
are no longer so apparent.
Figure 3.9: The state curves under the new parameters in Lorenz equation.
Figure 3.10: The phase space trajectory of Lorenz equation under new parameters.
If the differential equations are simple, anonymous functions can be used to de-
scribe them. In this case, there is no need to assign additional parameters, since the
variables in MATLAB workspace can be extracted directly in anonymous functions.
The following commands can then be used and identical results can be obtained:
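A sketch of this anonymous-function approach, with illustrative variable names and reusing tn and x0 from above, is:
>> b=2; r=5; s=20;           % parameters taken directly from the workspace
f=@(t,x)[-b*x(1)+x(2)*x(3); -r*x(2)+r*x(3); -x(1)*x(2)+s*x(2)-x(3)];
[t,x]=ode45(f,[0,tn],x0); plot(t,x)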
In real applications, sometimes the use of additional parameters may not be very
convenient, since in certain function calls, additional parameters are not allowed.
For simple problems, anonymous functions can be used to express the differential
equations, and the variables in MATLAB workspace can be extracted directly. There is
no need to use additional parameters.
In other applications, the differential equations may be too complicated to describe with anonymous functions, since intermediate variables are needed; MATLAB functions may then be the only choice. For these cases, a method to avoid additional parameters is demonstrated below.
An anonymous function can be designed as an interface to such a MATLAB func-
tion. Therefore the additional parameters in MATLAB workspace can be passed to the
anonymous function, then to the MATLAB function through the interface. In this case,
there is no need to use additional parameters. An example is given next to demonstrate
the interface manipulations.
Example 3.11. Consider the differential equations model in Example 3.10, where
additional parameters are introduced to describe the differential equations, and file
lorenz1.m is written. Set up the differential equations solution mechanism such that
no additional parameters are needed.
Therefore, the following commands can be used to solve again the differential equa-
tions. The results are identical to those obtained earlier.
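A sketch of the interface mechanism, assuming lorenz1() takes its parameters in the order (β, ρ, σ), is:
>> b=2; r=5; s=20; tn=100; x0=[0;0;1e-10];
f=@(t,x)lorenz1(t,x,b,r,s);   % interface anonymous function wrapping lorenz1()
[t,x]=ode45(f,[0,tn],x0);     % no additional parameters in the solver call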
Further adjustments to the control options can be made in the differential equation
solution process. The control options can be modified with function odeset(). The
variable options thus created is a structured variable. In Table 3.3, some of the com-
monly used members in the structured variable are listed. There are two ways to mod-
ify the members. One is to modify the options with odeset() function, the other is to
directly change the members in the variable options. For instance, if one wants to set
the relative error tolerance to 10^{-7}, the following two methods can be used:
options=odeset('RelTol',1e-7);
options=odeset; options.RelTol=1e-7;
Examples are given next to demonstrate the validation of numerical solutions of dif-
ferential equations.
Table 3.3: Commonly used members of the control options.

    RelTol     Relative error tolerance, with default value 0.001 (i. e., 0.1 % relative error). This value is too large in practice and should be reduced in the solution
    AbsTol     Absolute error tolerance, with default value 10^{-6}. Again, this value is too large and should be decreased
    MaxStep    The maximum allowed step-size
    Mass       The mass matrix in differential–algebraic equations, which should be assigned when describing differential–algebraic equations
    Jacobian   Describes the Jacobian matrix function ∂f/∂x. It should be a function handle describing the Jacobian matrix, so as to speed up the simulation process
    Events     Event response property, used to set the event response function handle
    OutputFcn  Calls a user-defined function after each successful computation step
Example 3.12. It can be seen from Figure 3.7 that the curve is rather rough. It seems
that there is something wrong in the precision setting. Select a more strict control
option and see whether accurate results can be found.
Solutions. The relative error tolerance can be set to a very small value, for instance,
10−10 . The differential equations can be solved again, and the new phase plane trajec-
tory can be obtained as shown in Figure 3.11. It can be seen that the curves are smooth.
It implies that the precision is rather high. No better results may be found by further
reducing the error tolerance.
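A sketch of the pattern is given below; f, tspan, and x0 stand for the model function, time span, and initial state of Example 3.8 and are assumed to be available in the workspace.
>> ff=odeset('RelTol',1e-10);          % tighten the relative error tolerance
tic, [t,x]=ode45(f,tspan,x0,ff); toc   % solve again and measure the elapsed time
plot(x(:,1),x(:,2)), length(t)         % smoother phase plane trajectory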
Since the precision requirement is tightened by a factor of 10^7, is it true that the computational load increases significantly? The elapsed time is 0.02007 seconds, and the number of points is increased to 3 593, so the elapsed time has not grown significantly. It is therefore worthwhile to use this method to find more accurate solutions.
If the error tolerances are set to eps (in fact, the minimum allowed error tolerance for the solvers is 100eps), the elapsed time only increases to 0.0487 seconds, and the number of points is increased to 19 329.
Example 3.13. Solve again the problem in Example 3.9 with better accuracy.
Solutions. Since there is no analytical solution for the original problem, validation
of the solution is not a simple thing. A reliable way is to set tough control options
for solving the differential equations. It can be seen that the two members, AbsTol
and RelTol, are the most important control options regarding the accuracy of the
equations. If necessary, they can be set to the toughest eps. If under such tough con-
trol, accurate solutions can still not be found, it means that ode45() function is not
suitable for solving this problem. Other alternative effective methods should be used
to tackle it.
The two error tolerances can be set to the toughest eps, and the following com-
mands can be used to find a numerical solution of the equation:
>> y0=[0.994,0,0,0,-2.0015851063790825224,0]';
ff=odeset; ff.AbsTol=eps; ff.RelTol=eps;
tic, [t,y]=ode45(@threebody,[0,40],y0,ff); toc
plot3(y(:,1),y(:,2),y(:,3)), length(t) % phase space trajectory
Running the above code, a warning message "Warning: RelTol has been increased to 2.22045 × 10^{-14}" indicates that the error tolerances are not allowed to be set to such small values; they have been set automatically to 2.22045 × 10^{-14}. This is the most accurate solution possible under the double-precision data type. The new three-dimensional phase space trajectory obtained is shown in Figure 3.12. By closely observing the curves it can be seen that the trajectory has a tendency to diverge at the point marked in the figure. If the terminal time is increased, divergent and erroneous results may be obtained. With the length() function, it is found that the number of points computed is 83 485.
In normal cases, the user may use an anonymous or MATLAB function to describe the
differential equations, then call the solver ode45() to find the numerical solutions.
If the user wants to display the intermediate results during the solution process, the
member OutputFcn in the control options can be set to a user-defined function such
that, when the solution at each point is successfully found, the MATLAB mechanism
calls the user-defined function once automatically and the intermediate result can be
handled.
Four existing functions are already provided in MATLAB to routinely handle the intermediate results: odeplot(), odephas2(), odephas3(), and odeprint().
Example 3.14. Consider the Lorenz equation in Example 3.7. Draw dynamically the
phase space trajectory during the solution process.
Solutions. Compared with Example 3.7, no other changes should be made to the so-
lution commands. The only thing to modify is the OutputFcn member of the control
options. It can be set to @odephas3. To ensure the accuracy in the solutions, tough error
tolerances can be set. The following statements can be used to solve the differential
equations. During the solution process, the dynamic phase space trajectory can be
drawn automatically.
>> f=@(t,x)[-8/3*x(1)+x(2)*x(3);
      -10*x(2)+10*x(3); -x(1)*x(2)+28*x(2)-x(3)];
tn=100; x0=[0;0;1e-10];
ff=odeset('RelTol',100*eps,'AbsTol',100*eps,...
          'OutputFcn',@odephas3); % modify several members
[t,x]=ode45(f,[0,tn],x0,ff); % show dynamic phase space plot
To better display the dynamic behavior, MATLAB draws the dynamic trajectory with
markers under the default options, which makes the curve look exaggeratedly thick;
this is controlled by the 'Marker' option in the plotting facilities. If the user is not
satisfied with that, he/she can write his/her own output function. Alternatively, the
low-level command in file odephas3.m can be
modified. More specifically, the following statement can be located with source code
editing interface:
ud.line = plot3(y(1),y(2),y(3),’-o’,’Parent’,TARGET_AXIS);
We can change the option '-o' into '-'. If necessary, other source files, such as
odeplot.m and odephas2.m, can be modified accordingly.
It can be seen from the examples that, although very tough error tolerances are set
in the control options, the simulation results may contain large errors. It seems that
this is beyond the capabilities of the ode45() solvers. More accurate solvers may be
needed.
An 8/7th order variable-step Runge–Kutta solver, ode87(), was developed by the
Russian scholar Vasiliy Govorukhin.[29] The theoretical accuracy may reach o(h⁸). In
each computation step, the model function is called 13 times, but since far fewer steps
are needed, the overall efficiency may well exceed that of the ode45() solver, and
higher accuracy solutions may be achieved. The syntaxes of the solver are exactly the
same as those for the function ode45(). It should be noted that the error tolerances
should not be assigned too small values; otherwise an error message may be returned.
Example 3.15. Solve the three-body problem in Example 3.9 again with the more
accurate solver.
Solutions. The more accurate solver can be called for the problem. The same divergence
phenomenon may happen again, since the code is executed under the same double precision
framework, and the ode45() solver has already found the most accurate solution possible
under that framework. The difference is that the total number of points is significantly
reduced to 2 511, only 3 % of the points needed by solver ode45(). It can be seen that
the efficiency of this solver is much higher than that of ode45().
It should be noted that since function ode87() is written and executed under the
double precision framework, it may not yield a more accurate result than that obtained
with the ode45() solver, if the error tolerances are already set to the toughest values.
Besides, the two solvers can be executed to solve the same problem and the user can
compare whether consistent results can be found. In this way, the solutions can be
cross-validated.
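A minimal sketch of such a cross-validation is shown below, using the three-body model threebody() and the initial vector y0 from Example 3.13; the moderate tolerance of 10⁻¹⁰ chosen for both solvers is an assumption:

>> ff=odeset('RelTol',1e-10,'AbsTol',1e-10);
[t1,y1]=ode45(@threebody,[0,40],y0,ff);  % MATLAB solver
[t2,y2]=ode87(@threebody,[0,40],y0,ff);  % third-party solver with the same syntax
norm(y1(end,:)-y2(end,:))                % discrepancy of the terminal states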
In the ode45() solver provided in MATLAB and the third-party solver such as ode87(),
variable-step algorithms are implemented. This is different from the functions such as
ode_rk2() discussed earlier. Examples are used to show the benefits of variable-step
algorithms. Also, we illustrate how to return fixed-step results using the variable-step
solvers.
Example 3.16. Solve the differential equations in Example 3.8 again, and observe the
changes in the step-sizes in the solution process.
Solutions. Since in the function call of ode45() the time vector t is also returned,
the differences of vector t (each later term minus the preceding one) can be taken,
such that each step-size can be found. The differences can be obtained by calling the
function diff(t). Note that the length of the result is 1 less than that of the t vector.
The error tolerances are set to 100 times the machine precision eps, and the following
commands can be used to solve the differential equations. The step-sizes during the
solution process are shown in Figure 3.13. It can be seen that the step-sizes are changing
over the interval to ensure that the error tolerances are obeyed.
It can be seen that at the initial stage, the smallest step-size was taken, and with min()
function call, it is found that the minimum step-size at this point is 1.859 × 10−4 .
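A minimal sketch of such a step-size study is shown below, assuming that the model f, the initial vector x0 and the terminal time tn of Example 3.8 are already in the workspace:

>> ff=odeset('RelTol',100*eps,'AbsTol',100*eps);
[t,x]=ode45(f,[0,tn],x0,ff);     % solve under tough tolerances
plot(t(1:end-1),diff(t))         % step-sizes selected by the solver
min(diff(t))                     % the minimum step-size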
It has been shown that in the function call of the solver ode45(), the argument
tspan can be assigned a given time vector t, and function ode45() is then called in
variable-step format within each subinterval of vector t. The values of the states at
the instants in vector t can be found. Even if t is an equally spaced vector, it
does not imply that a fixed-step algorithm is used in solving the differential equations.
Solutions. If the argument tspan is set to an equally spaced t vector, with increment
of h = 0.2, and tough error tolerances are also selected, the solutions at the points in
vector t can be found, and marked “o” in Figure 3.14. It can be seen that the points are
exactly located on the analytical solution curves. The maximum error at the samples is
2.6423×10−14 , which is much more accurate than when using the fixed-step algorithms.
Note that also in the returned vectors an equally spaced vector t is found, yet the com-
putation process applies a variable-step algorithm all the time. The results obtained
are accurate and reliable.
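A minimal sketch of such a fixed-point output is given below, again assuming that f, x0 and tn of the example are already in the workspace, with tough tolerances of 100 eps taken as an assumption:

>> t0=0:0.2:tn;                  % equally spaced output instants, h = 0.2
ff=odeset('RelTol',100*eps,'AbsTol',100*eps);
[t,x]=ode45(f,t0,x0,ff);         % variable-step internally, output only at t0
plot(t,x,'o')                    % mark the returned samples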
Fixed-step displays of simulation results are useful and meaningful in some par-
ticular applications. An example is given next to show an application of the fixed-step
display.
Solutions. If function ode45() is called directly, for each value of λ, the numerical ex-
pression can be found. However, since a variable-step mechanism is used, the length
of t and the solution vectors may be different. It may be hard to draw the responses in
three-dimensional surfaces. If a fixed-step display format is adopted, the solutions at
preselected t samples can be found, for each value of λ. Therefore the results can be
stored in a matrix, and the surface of x2 (t) can be drawn, as shown in Figure 3.15.
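A minimal sketch of this idea is shown below; the model function f_lambda(), the grid of λ values, the output instants and the initial vector are all hypothetical placeholders, not taken from the original example:

>> lam=0.1:0.1:1; t0=0:0.1:10;            % hypothetical lambda grid and output instants
x0=[0;0]; X2=zeros(length(lam),length(t0)); % hypothetical initial values, result storage
for i=1:length(lam)
   f=@(t,x)f_lambda(t,x,lam(i));          % f_lambda is a hypothetical model function
   [~,x]=ode45(f,t0,x0);                  % fixed output instants for every lambda
   X2(i,:)=x(:,2)';                       % store x2(t) row by row
end
surf(t0,lam,X2)                           % surface of x2(t) versus t and lambda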
Many test examples for initial value problems are provided at the website mentioned
in [52]. Some of the examples are of good application illustrations. The simplest and
most straightforward example is used here to demonstrate the solution process of
differential equations. The interested readers are recommended to try solving more
complicated problems in the test set.
Solutions. If MATLAB is used to solve this problem, the following statements can be
written to describe the differential equation with an anonymous function. Then the
solver ode45() can be called to solve the differential equation. The solution process is
fast, and it only needs 0.15 seconds to find the numerical solution. The error tolerances
are set to tough values, and 41 521 points are computed. The state space responses are
obtained as shown in Figure 3.16. It can be seen that the results are the same as those
provided in [52].

Figure 3.16: The time domain responses (locally zoomed over 0 ⩽ t ⩽ 5).
>> f=@(t,y)[-1.71*y(1)+0.43*y(2)+8.32*y(3)+0.0007;
      1.71*y(1)-8.75*y(2);
      -10.03*y(3)+0.43*y(4)+0.035*y(5);
      8.32*y(2)+1.71*y(3)-1.12*y(4);
      -1.745*y(5)+0.43*y(6)+0.43*y(7);
      -280*y(6)*y(8)+0.69*y(4)+1.71*y(5)-0.43*y(6)+0.69*y(7);
      280*y(6)*y(8)-1.81*y(7);
      -280*y(6)*y(8)+1.81*y(7)];
y0=[1;0;0;0;0;0;0;0.0057]; % initial values
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
tic, [t,x]=ode45(f,[0,321.8122],y0,ff); toc % options passed to the solver
plot(t,x), xlim([0,5]) % time domain responses
The step-size curve can also be drawn, as shown in Figure 3.17. It can be seen that at the
initial stage, the smallest step-size is 2.9379×10−5 . When t is large, relatively large step-
sizes are selected automatically, so as to speed up the solution process, while ensuring
the accuracy. It can be seen that the tools provided here are of high efficiency.
In the original reference, this differential equation was regarded as a stiff one,
so it may be hard to solve with the usual methods; more on stiff differential equations
will be presented in Chapter 5. Here MATLAB is used to solve the equations
directly, and the solution process is straightforward. In an extremely short period, the
exact solutions of the differential equation can be found, under tough error tolerances.
It can be seen that the capabilities in handling differential equations with MATLAB are
powerful and reliable.
3.5 Exercises
3.1 Use the algebraic equation solution method to validate the coefficients of the
fourth-order Runge–Kutta and Gill’s algorithms.
3.2 Write a MATLAB function to implement the 5th and 6th order Adams–Bashforth
algorithms, and assess their efficiency:
(1) The 5th order Adams–Bashforth algorithm:
    x_{k+1} = x_k + (h/720)[1 901 f(t_k, x_k) − 2 774 f(t_{k−1}, x_{k−1}) + 2 616 f(t_{k−2}, x_{k−2})
              − 1 274 f(t_{k−3}, x_{k−3}) + 251 f(t_{k−4}, x_{k−4})];
(2) The 6th order Adams–Bashforth algorithm:
    x_{k+1} = x_k + (h/1 440)[4 277 f(t_k, x_k) − 7 923 f(t_{k−1}, x_{k−1}) + 9 982 f(t_{k−2}, x_{k−2})
              − 7 298 f(t_{k−3}, x_{k−3}) + 2 877 f(t_{k−4}, x_{k−4}) − 475 f(t_{k−5}, x_{k−5})].
3.4 Consider the following nonlinear differential equation. It is pointed out in [22]
that the equation may have multiple limit cycles. Use numerical solutions to
3.5 Consider the following differential equation system:
    x'(t) = −y(t) − z(t),
    y'(t) = x(t) + a y(t),
    z'(t) = b + (x(t) − c) z(t).
If a = b = 0.2 and c = 5.7, and the initial values are x(0) = y(0) = z(0), draw the
phase space trajectory and its projection on the xy plane. It is suggested to set
a, b, and c as additional parameters. If the parameters are changed to a = 0.2,
b = 0.5, and c = 10, draw the two- and three-dimensional trajectories of the
states.
3.6 Consider the following differential equation:[63]
    y'(t) = tan φ(t),
    v'(t) = −(g sin φ(t) + γ v²(t)) / (v(t) cos φ(t)),
    φ'(t) = −g / v²(t),
where g = 0.032, γ = 0.02. Initial values are y(0) = 0, v(0) = 0.5. If ϕ(0) is
selected respectively as 0.3782 and 9.7456, find the numerical solution.
3.7 Chua circuit is an often mentioned differential equation in chaos theory:[79]
    x'(t) = α[y(t) − x(t) − f(x(t))],
    y'(t) = x(t) − y(t) + z(t),
    z'(t) = −β y(t) − γ z(t),
where
    f(x) = b x + (a − b)(|x + 1| − |x − 1|)/2,
and a < b < 0. Write a MATLAB function to describe the differential equation,
and draw the phase space trajectory for α = 15, β = 20, γ = 0.5, a = −120/7,
b = −75/7. The initial values are x(0) = −2.121304, y(0) = −0.066170, and z(0) =
2.881090.
with the initial values x(0) = 0, y(0) = e, and z(0) = 1. It is known that the
analytical solution is y(x) = exp(cos x²) and z(x) = exp(sin x²). Solve the problem with
different algorithms and assess the speed and accuracy.
3.9 Consider the following differential equation:[16]
    w'(t) = αKw + β y(t) − γ x(t) w(t) − α w(t) z(t),
    x'(t) = −x(t) + β y(t) − γ x(t) w(t) − y(t) z(t)/Ka,
    y'(t) = x(t) − β y(t) + γ x(t) w(t) − y(t) z(t)/Ka,
    z'(t) = αKw + x(t) + α w(t) z(t) − y(t) z(t)/Ka
where x1 (0) = −0.2 and x2 (0) = −0.7. Draw the surface of state variable x1 (t) with
respect to the changes in parameter μ.
3.12 A differential equation set is described as[40]
    x'(t) = s[q y(t) − x(t) y(t) + x(t)(1 − x(t))],
    y'(t) = h[−q y(t) − x(t) y(t) − p z(t)],
    z'(t) = x(t) − z(t)
where h = 8.333, p = 0.3, q = 0.01, and s = 20. The initial values are known
as x(0) = 1, y(0) = 2 and z(0) = 0.6. If tn = 60, solve the differential equation
system.
3.13 FitzHugh–Nagumo model is given by[40]
where a = 3, b = 0.7, and c = 0.8. If the initial values are v(0) = 0.5 and w(0) =
0.2, solve the differential equations with three different methods.
3.14 Consider the following differential equations. Select suitable terminal time to
solve the differential equations, and observe the phase plane trajectory during
the solution process
    x1'(t) = −x2(t) − 10x1²(t) + 5x1(t)x2(t) + x2²(t),
    x2'(t) = x1(t) + x1²(t) − 25x1(t)x2(t)
with the initial values x1 (0) = −0.0914 and x2 (0) = −0.1075.
3.15 Solve the following differential equations:[32]
        C v'(t) = i − iNa(t) − iK(t) − iL(t),
     where
    iNa(t) = gNa m³(t) h(t) (v − VNa),
    iK(t) = gK n⁴(t) (v − VK),
    iL(t) = gL (v − VL),
and the constants are i = 10, C = 1, VNa = 115, VK = 12, VL = 10.599, gNa = 120,
gK = 36, and gL = 0.3. Besides, we know the differential equations
    m'(t) = am(1 − v) − bm m(t),
    n'(t) = an(1 − v) − bn n(t),
    h'(t) = ah(1 − v) − bh h(t),
where
    am = 0.1(25 − v)/(exp(0.1(25 − v)) − 1),    bm = 4 exp(−v/18),
    an = 0.01(10 − v)/(exp(0.01(10 − v)) − 1),  bn = 0.125 exp(−v/80),
    ah = 0.07 exp(−v/20),                       bh = 1/(exp(0.1(30 − v)) + 1),
with initial values V(0) = 0 and m(0) = n(0) = h(0) = 0.5. Assuming that v
takes values in the interval (−80, 80), solve the differential equations for different
values of v, with t ∈ (0, 50).
3.17 At the website [52], many test examples are provided as benchmark problems.
     The differential equation shown in Example 3.19 is only one of them. The readers
     may visit the website to download certain problems and try to solve the differen-
     tial equations, to test whether accurate solutions can be found, and to observe
     the efficiency of the solver.
4 Standard form conversions of ordinary differential
equations
The initial value problem solver studied in Chapter 3 can only be used for solving
differential equations given in the form of first-order explicit differential equations,
whose standard form is
    x'(t) = f(t, x(t)),  x(t0) = x0.                                        (4.0.1)
This is not adequate for the study of ordinary differential equations. Numerical
solvers should be applicable to solve differential equations given in any form. In fact,
an alternative solution pattern is to convert the differential equations to be studied
into the standard form of first-order explicit differential equations. Then a universal
solver such as ode45() can be called to find numerical solutions.
It can be seen from the standard forms and solvers that if a system of ordinary
differential equations is composed of one or more high-order differential equations,
a set of state variables should be selected to convert it first into the first-order ex-
plicit differential equations in standard form. In this chapter, the conversion meth-
ods are introduced for various differential equations. In Section 4.1, the conversion of
a single high-order differential equation is introduced. If differential equations can
be successfully converted into the standard form, solvers such as ode45() can be
called to solve them directly. In Section 4.2, complicated single high-order differen-
tial equations are considered for the conversion into standard form. In Section 4.3,
conversion from high-order differential equation sets is explored. The major objective
is the same as that discussed earlier, that is, to select a set of state variables, and
rewrite the differential equation sets into the first-order explicit differential equations.
In Section 4.4, high-order matrix differential equations are explored, and numerical
algorithms are considered to solve Sylvester and Riccati matrix differential equations.
In Section 4.5, for a class of Volterra integro-differential equations with separable
variables, conversion methods are presented such that numerical solutions can be
found. In fact, solutions of many differential equations will be revisited in the next
chapter.
https://doi.org/10.1515/9783110675252-004
solutions of the differential equations. If the explicit form does not exist for the orig-
inal differential equation, algebraic solution techniques can be employed, such that
the original differential equation can finally be converted into the standard form and
solved numerically.
In this section, some different cases are considered and presented. The final target
is to eventually convert the equations into the standard form so that the numerical
solution can be found.
and the initial values of y(t) and its derivatives are given as y(t0), y'(t0), . . . , y^(n−1)(t0),
a set of state variables can be selected. For instance, let x1(t) = y(t), x2(t) = y'(t), . . . ,
xn(t) = y^(n−1)(t). Therefore, the original high-order differential equation can be con-
verted into the following equivalent standard form
    x1'(t) = x2(t),
    x2'(t) = x3(t),
      ⋮                                                                      (4.1.2)
    xn'(t) = f(t, x1(t), x2(t), . . . , xn(t))
with initial states x1(t0) = y(t0), x2(t0) = y'(t0), . . . , xn(t0) = y^(n−1)(t0). Therefore, func-
tion ode45() can be called to solve the original differential equation directly.
In fact, there are infinitely many ways to select the state variables. The above
mentioned is only one of them. An alternative way to select the state variables is x1(t) =
y^(n−1)(t), x2(t) = y^(n−2)(t), . . . , xn−1(t) = y'(t) and xn(t) = y(t), such that the original
differential equation can be converted into the form
    x1'(t) = f(t, xn(t), xn−1(t), . . . , x1(t)),
    x2'(t) = x1(t),
      ⋮                                                                      (4.1.3)
    xn'(t) = xn−1(t)
with initial state values x1(t0) = y^(n−1)(t0), x2(t0) = y^(n−2)(t0), . . . , xn(t0) = y(t0).
Of course, there are other ways of state variable selection. In real applications, if
there is no other specific request, the first method is recommended.
Example 4.1. For given initial values y(0) = −0.2, y'(0) = −0.7, numerically solve the
following van der Pol equation, and draw the phase plane trajectories for different
values of μ:
    y''(t) + μ(y²(t) − 1)y'(t) + y(t) = 0.
Solutions. It has been demonstrated in Example 2.42 that the differential equation
has no analytical solution. A numerical method is the only choice to study this differ-
ential equation. Since the van der Pol equation is not given in the standard form of the
first-order explicit differential equations, it should be converted to that form so that the
equation can be solved numerically with MATLAB. It is seen from the original equation
that it can be rewritten to find the explicit form of y''(t):
    y''(t) = −μ(y²(t) − 1)y'(t) − y(t).
Selecting a set of state variables x1(t) = y(t), x2(t) = y'(t), the original equation can
easily be converted into the following form:
    x1'(t) = x2(t),   x2'(t) = −μ(x1²(t) − 1)x2(t) − x1(t).
The model can be described by an anonymous function with μ as an additional parameter:
>> f=@(t,x,mu)[x(2);
-mu*(x(1)^2-1)*x(2)-x(1)]; % with additional parameter
It can be seen that there is an extra input argument mu in the model function. When
calling the solver ode45(), this variable must be assigned also in the function call.
Assuming that the initial state vector is x(0) = [−0.2, −0.7]T , the following statements
can be used to solve the differential equation, for μ = 1 and μ = 2, respectively. The
time response plots of the states can be obtained, as shown in Figure 4.1.
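A minimal sketch of such statements is given below; the terminal time of 20 and the error tolerance are assumptions, not taken from the original example:

>> x0=[-0.2; -0.7]; ff=odeset('RelTol',1e-8);  % assumed error tolerance
[t1,x1]=ode45(@(t,x)f(t,x,1),[0,20],x0,ff);    % mu = 1
[t2,x2]=ode45(@(t,x)f(t,x,2),[0,20],x0,ff);    % mu = 2
plot(t1,x1,t2,x2)                              % time responses of the states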
Figure 4.1: Van der Pol equation solutions for different values of μ.
If x1 and x2 are selected as the two axes, the phase plane trajectories can be drawn as
shown in Figure 4.2. It can be seen from the phase plane trajectories that no matter
what the value of μ, the phase portraits may settle down eventually on closed paths.
The closed paths are referred to as limit cycles in differential equation theory.
In this example, the initial values provided are located inside the limit cycles. The
values of the unknown function may continue growing and finally settle down in the
limit cycles in a periodic mode. If the initial positions are located outside of the limit
cycles, the unknown function may continue shrinking, and eventually settle down on
the limit cycles. Limit cycles will be further studied in Chapter 7.
Example 4.2. Solve the van der Pol equation in Example 4.1 without using additional
parameters.
Solutions. If an additional parameter is not used, the value of μ must exist in MATLAB
workspace. Then an anonymous function can be defined for the differential equation.
Since the variables in MATLAB workspace can be accepted directly by the anonymous
function, this method can be used. If the value of μ is changed, the anonymous func-
tion should be defined again. The following commands can be used to solve the differ-
ential equation with different values of μ. The results obtained are identical to those
obtained earlier.
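A minimal sketch of this workspace-variable approach might look as follows, again with an assumed terminal time of 20:

>> mu=1;                                        % mu taken from the workspace
f=@(t,x)[x(2); -mu*(x(1)^2-1)*x(2)-x(1)];
[t1,x1]=ode45(f,[0,20],[-0.2;-0.7]);
mu=2; f=@(t,x)[x(2); -mu*(x(1)^2-1)*x(2)-x(1)]; % redefine f after changing mu
[t2,x2]=ode45(f,[0,20],[-0.2;-0.7]);
plot(t1,x1,t2,x2)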
Example 4.3. Select the state variables differently for the van der Pol equation in Ex-
ample 4.1 and solve it again.
Solutions. In Example 4.1, the states were selected as x1(t) = y(t), x2(t) = y'(t), such
that the original differential equation was converted into standard form. If a new set
of states is selected as x1(t) = y'(t) and x2(t) = y(t), the original equation can be
converted into the following standard form:
    x1'(t) = −μ(x2²(t) − 1)x1(t) − x2(t),   x2'(t) = x1(t).
Therefore, the same solver ode45() can be called to solve the differential equa-
tion. Note that, since the physical meaning of the states is changed, the initial values
are also changed. Care must be taken in solving the problems.
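A minimal sketch with the swapped state selection is shown below, with an assumed terminal time of 20; note the swapped initial vector:

>> mu=1; g=@(t,x)[-mu*(x(2)^2-1)*x(1)-x(2); x(1)]; % x1 = y', x2 = y
x0=[-0.7; -0.2];                                   % initial values are swapped as well
[t,x]=ode45(g,[0,20],x0);
plot(x(:,2),x(:,1))                                % phase plane (y, y')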
Example 4.4. For μ = 1 000 and tn = 3 000, solve the equation in Example 4.1 again,
and see what happens.
Solutions. The solver ode45() can be used to find the numerical solution of the equations.
If the new value of μ is assigned, the terminal time is set to 3 000, and a moderate error
tolerance is chosen, the following commands can be used to solve the van der Pol equation:
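A sketch of the kind of commands described here is shown below; as discussed next, the call is expected to fail, and the tolerance value is an assumption:

>> mu=1000; f=@(t,x)[x(2); -mu*(x(1)^2-1)*x(2)-x(1)];
ff=odeset('RelTol',1e-6);                     % a moderate error tolerance
[t,x]=ode45(f,[0,3000],[-0.2;-0.7],ff);       % eventually aborts (out of memory)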
After long waiting, the error message “Out of memory. Type HELP MEMORY for your
options” appears and the solution process is aborted. Since in the variable-step solu-
tion process, at some points the step-size is selected too small to ensure the expected
accuracy, this may lead to a huge increase in the total number of points in the solution
such that the memory is exhausted. If another solver ode87() is used, similar thing
may happen.
This kind of phenomenon is often caused by the so-called stiff differential equa-
tions, and the concept and manipulation of stiff differential equations will be further
explored in the next chapter.
So far the studied examples were all with constant coefficients. In fact, the standard
form in (4.0.1) fully supports the descriptions of time-varying differential equations.
Examples are used here to show the numerical solutions of time-varying differential
equations.
Example 4.5. Solve the following time-varying differential equation:
    x⁵ y'''(x) = 2(x y'(x) − 2y(x)),   y(1) = 1, y'(1) = 0.5, y''(1) = −1.
Solutions. The following commands can be used to find the analytical solution of the
time-varying differential equation:
>> syms y(x); d1y=diff(y); d2y=diff(d1y); d3y=diff(d2y);
y=dsolve(x^5*d3y==2*(x*d1y-2*y),...
         y(1)==1, d1y(1)==0.5, d2y(1)==-1) % analytical solution
Dividing both sides of the equation by x⁵, the original equation can be converted
to the explicit form of y'''(x):
    y'''(x) = 2(x y'(x) − 2y(x))/x⁵.
To avoid confusion, the independent variable is changed from x to t, and the state
variables are selected as x1(t) = y(t), x2(t) = y'(t), and x3(t) = y''(t). The first-order
explicit differential equations in standard form can be written as
    x1'(t) = x2(t),   x2'(t) = x3(t),   x3'(t) = 2(t x2(t) − 2x1(t))/t⁵,
with the initial state vector x(1) = [1, 0.5, −1]ᵀ.
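A minimal sketch for solving the converted system numerically is shown below; integrating forwards and backwards from t = 1 (where the initial values are given) covers the interval (0.2, π), and the error tolerance is an assumption:

>> f=@(t,x)[x(2); x(3); 2*(t*x(2)-2*x(1))/t^5]; % converted standard form
x0=[1; 0.5; -1];                    % y(1), y'(1), y''(1)
ff=odeset; ff.RelTol=1e-10;         % assumed error tolerance
[tn,xn]=ode45(f,[1,pi],x0,ff);      % forwards from t = 1 to pi
[t1,x1]=ode45(f,[1,0.2],x0,ff);     % backwards from t = 1 to 0.2
t=[t1(end:-1:2); tn]; x=[x1(end:-1:2,:); xn];
plot(t,x)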
It can be seen that the interval t ∈ (0.2, π) avoids the singularity at t = 0, so the solution
obtained is valid.
In the previous example, the behavior of the differential equation at point t = 0 may
be special. In this section, the concept of singularities in differential equations is pre-
sented. Examples are used to demonstrate the behaviors of the differential equations
around their singularities.
Definition 4.1. For linear time-varying differential equations, if the coefficient func-
tions contain singularities, they are referred to as the regular singular points of the
differential equations.
For the differential equation studied in Example 4.5, it can be seen that x = 0,
or t = 0, is the regular singular point of the differential equation. Since the point
is located outside of the solution interval, there was no impact witnessed. Now an
example is given to show the numerical solutions in larger intervals and to illustrate
the impact of the singularities.
Example 4.6. Solve the differential equation in Example 4.5 in the interval t ∈ (0, π),
and see what happens.
Solutions. The statements in the previous example can still be used, by replacing 0.2
with 0. The differential equation solutions can be found, and the state curves can be
drawn, as shown in Figure 4.3. It should be noted that due to the existence of the
singularity at t = 0, the values of the states around this point are NaN. When t is small,
the values of the states tend to infinity. In the plot, only the curves in the range of
y(t) ∈ (−20, 20) are shown.
>> [tn,xn]=ode45(f,[1,pi],x0,ff);   % f, x0 and ff as defined for Example 4.5
[t1,x1]=ode45(f,[1,0],x0,ff);       % now the backward integration reaches t = 0
tm=[t1(end:-1:2); tn]; xm=[x1(end:-1:2,:); xn];
plot(tm,xm), ylim([-10,10])
Solutions. It can be seen from the example that if y(t) = 0, singular behavior may
appear in this differential equation. Unfortunately, since the analytical expression
of function y(t) is not known, the singularity phenomenon cannot be studied theo-
retically. Numerical method is the only way to study the behavior of this differential
equation.
Since the highest-order term in the equation is y^(6)(t), its explicit expression can
be written as
    y^(6)(t) = [−6y'(t)y^(5)(t) − 15y''(t)y^(4)(t) − 10(y'''(t))² + a sin^m λt] / y(t),
which implies y(t) ≠ 0. Introducing the state variables x1(t) = y(t), x2(t) = y'(t), x3(t) =
y''(t), x4(t) = y'''(t), x5(t) = y^(4)(t), and x6(t) = y^(5)(t), the standard form of first-order
explicit differential equations can be written as
    x1'(t) = x2(t),   x2'(t) = x3(t),   x3'(t) = x4(t),   x4'(t) = x5(t),   x5'(t) = x6(t),
    x6'(t) = [−6x2(t)x6(t) − 15x3(t)x5(t) − 10x4²(t) + a sin^m λt] / x1(t).
In fact, there is a hidden problem in the solution process: when finding the explicit
form of y^(6)(t), it was assumed that y(t) ≠ 0. This assumption is needed, otherwise the
numerical solution cannot be found, since it cannot be shown mathematically that
y(t) ≠ 0. Luckily, in the whole solution process there is no warning such as
"Division by zero". It implies that y(t) = 0 did not happen at all in
the solution process. The result obtained is correct.
In other words, although the analytical form of y(t) signal is unknown, and it is
not possible to decide theoretically whether there are singularities, from simulation
it can be concluded that in the solution process y(t) does not equal to zero, and there
are no singularities in the differential equation.
If there are constant parameters in the differential equation, the methods illustrated
earlier can be used to solve the problem, without any difficulties. In fact, there are
many methods to process differential equations with constant parameters. Here a state
augmentation is presented to solve the differential equations with constant parame-
ters.
Assume that the mathematical expression of a differential equation is
    x'(t) = f(t, x(t), a),
where a is a constant parameter. An additional state variable can be introduced, by
denoting xn+1(t) = a. The augmented state space model can be written as
    x̃'(t) = [x'(t); x'ₙ₊₁(t)] = [f(t, x(t), a); 0],                          (4.1.5)
where the initial values of the augmented state are x̃(t0) = [xᵀ(t0), a]ᵀ. The differential
equation can then be solved directly, so that the differential equations with constant
parameters can be handled in this manner.
Example 4.8. Use the state augmentation method to solve the van der Pol equation in
Example 4.1, where μ = 1.
Solutions. In earlier examples, the state variables were selected as x1(t) = y(t) and
x2(t) = y'(t). If one wants to set the constant parameter μ as the augmented state x3(t) =
μ, the augmented state space model of the van der Pol equation can be rewritten as
    x1'(t) = x2(t),
    x2'(t) = −x3(t)(x1²(t) − 1)x2(t) − x1(t),
    x3'(t) = 0,
and the initial values of the new states are x1 (0) = −0.2, x2 (0) = −0.7, and x3 (0) = 1.
With the first-order explicit differential equations standard form, an anonymous func-
tion can be written to express the model, and the solution can then be obtained. The
results obtained are identical to those in Figure 4.2, for μ = 1. It can be seen from the
solution that x3(t) is constant. If one wants to study the case when μ = 2, one can
simply set the last entry in the initial value vector to 2.
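A minimal sketch of the augmented-state solution is given below, with an assumed terminal time of 20:

>> f=@(t,x)[x(2); -x(3)*(x(1)^2-1)*x(2)-x(1); 0]; % augmented model, x3 = mu
x0=[-0.2; -0.7; 1];                 % the last entry is the value of mu
[t,x]=ode45(f,[0,20],x0);
plot(x(:,1),x(:,2))                 % phase plane trajectory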
Suppose the highest order term of the unknown function appears in the square form
    [y^(n)(t)]² = f(t, y(t), y'(t), . . . , y^(n−1)(t)),                     (4.2.1)
and the initial values y(t0), y'(t0), . . . , y^(n−1)(t0) are given. As before, the state variables
x1(t) = y(t), x2(t) = y'(t), . . . , xn(t) = y^(n−1)(t) can be selected first, and taking the square
root of the last term, two different sets of first-order explicit differential equations can be
created
    x1'(t) = x2(t),
      ⋮
    xn−1'(t) = xn(t),                                                        (4.2.2)
    xn'(t) = √(f(t, y(t), y'(t), . . . , y^(n−1)(t)))
and
    x1'(t) = x2(t),
      ⋮
    xn−1'(t) = xn(t),                                                        (4.2.3)
    xn'(t) = −√(f(t, y(t), y'(t), . . . , y^(n−1)(t))).
The two state space models comprise the original differential equation. The initial
values of the states are
    x(t0) = [y(t0), y'(t0), . . . , y^(n−1)(t0)]ᵀ.                           (4.2.4)
It can be seen that the two differential equation systems can both be solved di-
rectly with MATLAB solvers. Therefore, with anonymous or MATLAB functions, the
two equation sets can be described so that they can be solved numerically. Both solu-
tions satisfy the original differential equation.
Example 4.9. Use the numerical method to solve the following differential equation:
    (y''(t))² = 4(t y'(t) − y(t)) + 2y'(t) + 1,   y(0) = 0, y'(0) = 0.1.
Solutions. Taking the square root of the right-hand side of the differential equation,
the explicit forms of the original equation can be written as
    y''(t) = ±√(4(t y'(t) − y(t)) + 2y'(t) + 1).
Selecting the state variables x1(t) = y(t) and x2(t) = y'(t), two differential equations
in standard form can be established. The first is written as
    x1'(t) = x2(t),   x2'(t) = √(4(t x2(t) − x1(t)) + 2x2(t) + 1),
and the second takes the negative square root in the last component.
The following MATLAB commands can be used respectively to solve the two differ-
ential equations, and two solutions can then be found. It should be noted that in the
solution process, tiny imaginary quantities may appear; deviations like this can be
removed with the function real(). The genuine numerical solutions can then be found.
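A minimal sketch of the two solution branches is given below; the terminal time of 2 and the tolerance are assumptions:

>> f1=@(t,x)[x(2); sqrt(4*(t*x(2)-x(1))+2*x(2)+1)];  % the positive branch
f2=@(t,x)[x(2); -sqrt(4*(t*x(2)-x(1))+2*x(2)+1)];    % the negative branch
x0=[0; 0.1]; ff=odeset('RelTol',1e-8);
[t1,x1]=ode45(f1,[0,2],x0,ff); [t2,x2]=ode45(f2,[0,2],x0,ff);
plot(t1,real(x1(:,1)),t2,real(x2(:,1)))              % discard tiny imaginary parts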
If function dsolve() is called, the analytical solutions of the equation can be found,
and they can be superimposed on the numerical solutions, as shown in Figure 4.5.
It can be found that the first model fits the analytical solution perfectly, while the
second has large discrepancies when t is large. If one wants to reduce further the error
tolerance, the numerical solutions may not be found.
The results are the most accurate solutions obtained under the double precision
using this method. This problem will be revisited in Chapter 5, and accurate results
will be found, see Example 5.13.
In real applications there are even differential equations of complicated form. For
instance, the odd power of the highest-order derivative term may exist:
    [y^(n)(t)]^(2k+1) = f(t, x1(t), x2(t), . . . , xn(t)).                   (4.2.5)
    x'(t) = [x2(t), x3(t), . . . , xn(t), (f(t, x1(t), x2(t), . . . , xn(t)))^(1/(2k+1))]ᵀ.   (4.2.6)
The next two examples are created by the author based on a given function y(t) =
e^(−t). If one wants to find the analytical solution in a usual way, the solution cannot
be easily found. Numerical methods can be tried, and compared with the analytical
solution.
Solutions. If the state variables x1(t) = y(t) and x2(t) = y'(t) are selected, it is not hard
to write down the first-order explicit differential equations
    x1'(t) = x2(t),
    x2'(t) = (−3x2(t) sin x1(t) + 3x1(t) sin x2(t) + e^(−3t))^(1/3)
with the initial state vector x 0 = [1, −1]T . Therefore, the following MATLAB commands
can be used to solve this differential equation directly. The solutions for the two state
variables can be found as shown in Figure 4.6. Since the exact solutions are known as
y(t) = e−t , y (t) = −e−t , the maximum error can be estimated as 6.7722 × 10−12 . It can
be seen that the accuracy of the numerical solutions is very high.
>> f=@(t,x)[x(2);
(-3*x(2)*sin(x(1))+3*x(1)*sin(x(2))+exp(-3*t))^(1/3)];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
[t,x]=ode45(f,[0,4],[1; -1],ff); plot(t,x)
norm(x-[exp(-t) -exp(-t)],1) % norm of the error matrix
If there exist nonlinear functions of the highest-order derivative of the unknown func-
tion, the direct method discussed earlier cannot be used to find the first-order ex-
plicit differential equations in standard form. The algebraic equation solution process
should be embedded in the differential equation description. Finally, the numerical
solutions can be found. An example next will be used to demonstrate how to convert
and solve such differential equations.
Solutions. It is not possible to find the explicit form of the function y''(t) in this example.
The state variables can be introduced as usual, x1(t) = y(t) and x2(t) = y'(t). Denoting
p(t) = y''(t), the algebraic equation for p(t) can be established as
    p³(t) + 3p(t) sin x1(t) + 3x2(t) sin p(t) − e^(−3t) = 0.
With the numerical method, the solution p(t) can be found, such that the last term in
the differential equation, x2'(t) = p(t), can be formulated. The first-order explicit form
of the differential equation can be established as
    x'(t) = [x2(t), p(t)]ᵀ.
With the solver ode45(), the numerical solution of the original differential equa-
tion can be found.
The standard form of the differential equations cannot be described by anony-
mous functions, since an algebraic equation solver is embedded. MATLAB function is
the only choice to describe such differential equations.
function dx=c4exode1(t,x)
f=@(p)p^3+3*p*sin(x(1))+3*x(2)*sin(p)-exp(-3*t);
ff=optimset; ff.TolX=eps; ff.TolFun=eps; % note the exact field name TolX
p=fsolve(f,x(1),ff); dx=[x(2); p];
Since the example was created from the known function y(t) = e^(−t), while this analytical
solution cannot be found with the symbolic method in MATLAB with the help of dsolve(),
the numerical solution of the equation is the only choice. The numerical solution can
easily be found with the following statements, and the maximum error is 1.1915 × 10⁻¹⁰.
The elapsed time is 25.92 seconds. The solution process is quite time consuming, since
in each step of the solution process, the algebraic equation is solved once. It can be
seen that the solution method is quite inefficient.
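A minimal sketch of such statements is given below, assuming the same initial values and interval as in Example 4.10; the tolerances are assumptions:

>> ff=odeset('RelTol',1e-8,'AbsTol',1e-8);     % assumed error tolerances
tic, [t,x]=ode45(@c4exode1,[0,4],[1; -1],ff); toc
plot(t,x), norm(x-[exp(-t), -exp(-t)],1)       % compare with the known solution e^(-t)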
Of course, implicit differential equation solvers can also be applied to all the exam-
ples in this section; efficient solvers of this kind will be demonstrated in the next chapter.
In this section, two differential equations are demonstrated and we show how to con-
vert them into first-order explicit differential equations. Suppose the two equations
can be written as the following differential equations:
    x^(m)(t) = f(t, x(t), x'(t), . . . , x^(m−1)(t), y(t), y'(t), . . . , y^(n−1)(t)),
    y^(n)(t) = g(t, x(t), x'(t), . . . , x^(m−1)(t), y(t), y'(t), . . . , y^(n−1)(t)),       (4.3.1)
where each equation may contain the explicit form of the highest-order derivative of
one unknown function. The state variables can still be selected as x1(t) = x(t), x2(t) =
x'(t), . . . , xm(t) = x^(m−1)(t). Then one may continue selecting the state variables as
xm+1(t) = y(t), xm+2(t) = y'(t), . . . , xm+n(t) = y^(n−1)(t). In this case, the original differential
equations can be converted into
    x1'(t) = x2(t),
      ⋮
    xm'(t) = f(t, x1(t), x2(t), . . . , xm+n(t)),
    xm+1'(t) = xm+2(t),                                                      (4.3.2)
      ⋮
    xm+n'(t) = g(t, x1(t), x2(t), . . . , xm+n(t)).
The initial values of the state variables can also be set accordingly. The expected
first-order explicit differential equations can then be established. An example will be
given next to demonstrate the conversion and solution process.
Example 4.12. Assume that the coordinates (x, y) of the Apollo satellite satisfy
    x''(t) = 2y'(t) + x(t) − μ*(x(t) + μ)/r1³(t) − μ(x(t) − μ*)/r2³(t),
    y''(t) = −2x'(t) + y(t) − μ* y(t)/r1³(t) − μ y(t)/r2³(t).
The initial values are x(0) = 1.2, x'(0) = 0, y(0) = 0, and y'(0) = −1.04935751. Solve
the differential equations and draw the (x, y) trajectory of the Apollo satellite.
Solutions. Select a set of state variables as x1(t) = x(t), x2(t) = x'(t), x3(t) = y(t), and
x4(t) = y'(t). The first-order explicit differential equations can be written as
    x1'(t) = x2(t),
    x2'(t) = 2x4(t) + x1(t) − μ*(x1(t) + μ)/r1³(t) − μ(x1(t) − μ*)/r2³(t),
    x3'(t) = x4(t),
    x4'(t) = −2x2(t) + x3(t) − μ* x3(t)/r1³(t) − μ x3(t)/r2³(t),
where
    r1(t) = √((x1(t) + μ)² + x3²(t)),   r2(t) = √((x1(t) − μ*)² + x3²(t)),
with μ = 1/82.45 and μ* = 1 − μ. The initial values of the state variables are x0 =
[1.2, 0, 0, −1.04935751]ᵀ.
Since there are two intermediate variables r1 (t) and r2 (t), a MATLAB function can
be used to compute x (t). The MATLAB function can be written as follows:
function dx=apolloeq(t,x)
mu=1/82.45; mu1=1-mu;
r1=sqrt((x(1)+mu)^2+x(3)^2); r2=sqrt((x(1)-mu1)^2+x(3)^2);
dx=[x(2);
2*x(4)+x(1)-mu1*(x(1)+mu)/r1^3-mu*(x(1)-mu1)/r2^3;
x(4);
-2*x(2)+x(3)-mu1*x(3)/r1^3-mu*x(3)/r2^3]; % describe ODE
>> x0=[1.2;0;0;-1.04935751];
tic, [t,y]=ode45(@apolloeq,[0,20],x0); toc % solution
length(t), plot(y(:,1),y(:,3)) % draw phase plane trajectory
The trajectory of the satellite can be obtained as shown in Figure 4.7. The number of
points computed is 689, and the elapsed time is 0.014 seconds.
In fact, the Apollo satellite trajectory thus found is incorrect. This is because the de-
fault error tolerance, RelTol, is too large, such that the solver ode45() yields the
wrong results. To better solve the problem, the error tolerance should be reduced. For
instance, it can be reduced to 10⁻⁶. The following commands can be used to solve the
differential equations again:
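A minimal sketch of these commands might be:

>> ff=odeset; ff.RelTol=1e-6;        % reduced relative error tolerance
tic, [t,y]=ode45(@apolloeq,[0,20],x0,ff); toc
length(t), plot(y(:,1),y(:,3))       % the corrected trajectory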
The new trajectory obtained is shown in Figure 4.8. The total number of points is
increased to 1 873, and the elapsed time is 0.067 seconds. It can be seen that the tra-
jectory is completely different from that under the default setting. The error tolerance
can further be decreased, however, the trajectory may look almost the same. In a real
solution process, the error tolerance RelTol should be set to different values, and one
may use such a method to validate the result.
With the following MATLAB commands, the minimum step-size of 1.8927 × 10−4 is
witnessed. The step-size curve can be obtained, as shown in Figure 4.9.
The significance of using variable-step algorithm can be seen from the step-size curve.
It is obvious that when accurate solutions are needed, the step-size can be set to small
values; while when the error is already kept small, the step-size can be increased
adaptively, so as to speed up the simulation process. In this case, the efficiency of
the algorithm is increased.
118 | 4 Standard form conversions of ordinary differential equations
It is also seen that most of the time, a large step-size of 0.04 is used. In fixed-step algo-
rithms, no one dares to use such a large step-size in the solution process. To ensure the
accuracy of some points, the small step-size of 2 × 10−4 is taken automatically. In other
words, at these points, extremely small step-sizes are used to ensure that the error is
kept under 10⁻⁶. From a fixed-step viewpoint, if one wants to ensure 10⁻⁶ accuracy, the
step-size should not be selected larger than the minimum step-size obtained above. It
can be seen that the number of computation points is then increased to 10⁵, 56 times
more than for the variable-step algorithm.
The error tolerance can further be set to 100eps, and the new step-sizes can be
drawn in Figure 4.10. It can be seen that the minimum step-size is automatically set to
5.4904 × 10−6 , and the total number of points is 63 053.
>> ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
tic, [t1,y1]=ode45(@apolloeq,[0,20],x0,ff); toc % solve again
min(diff(t1)), length(t1) % find the minimum step-size
plot(t1(1:end-1),diff(t1)) % draw the step-size plot
Figure 4.10: The new step-size curve when error tolerance is changed.
It can be imagined that to ensure such precision, the step-size of 5.4904 × 10−6 should
be used throughout the fixed-step solution process. It may bring too much computa-
tional load for the computers. It can be seen that the variable-step algorithms are far
superior to the fixed-step ones.
Example 4.13. In Example 4.12, a MATLAB function was used to express the differen-
tial equations. Use an anonymous function to describe the differential equations, and
solve them again.
Solutions. In fact, for this type of problem, anonymous functions can be used to de-
scribe the differential equations, since intermediate variables can also be described by
anonymous functions. Therefore there is no longer any need to use MATLAB files to
express differential equation models. An anonymous function can be used to handle
such problems. The following commands can be used to solve the differential equa-
tions again, and the results are exactly the same.
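A minimal sketch of the anonymous-function version is given below; the tolerance of 10⁻⁶ is an assumption:

>> mu=1/82.45; mu1=1-mu;
r1=@(x)sqrt((x(1)+mu)^2+x(3)^2); r2=@(x)sqrt((x(1)-mu1)^2+x(3)^2);
f=@(t,x)[x(2);
   2*x(4)+x(1)-mu1*(x(1)+mu)/r1(x)^3-mu*(x(1)-mu1)/r2(x)^3;
   x(4);
   -2*x(2)+x(3)-mu1*x(3)/r1(x)^3-mu*x(3)/r2(x)^3];
x0=[1.2;0;0;-1.04935751]; ff=odeset; ff.RelTol=1e-6;
[t,y]=ode45(f,[0,20],x0,ff); plot(y(:,1),y(:,3))  % same trajectory as before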
Example 4.14. Consider the seven-body problem. The coordinates (xi (t), yi (t)) of the
seven bodies satisfy the following differential equations:[32]
    xi''(t) = Σ_{j≠i} mj (xj(t) − xi(t))/rij(t),   yi''(t) = Σ_{j≠i} mj (yj(t) − yi(t))/rij(t),
where mi = i, i = 1, 2, . . . , 7, and
    rij(t) = ((xi(t) − xj(t))² + (yi(t) − yj(t))²)^(3/2).
The initial positions and speeds of the bodies are known as x1(0) = 3, x2(0) = 3,
x3(0) = −1, x4(0) = −3, x5(0) = 2, x6(0) = −2, x7(0) = 2, y1(0) = 3, y2(0) = −3, y3(0) = 2,
y4(0) = 0, y5(0) = 0, y6(0) = −4, y7(0) = 4, x6'(0) = 1.75, x7'(0) = −1.5, y4'(0) = −1.25,
y5'(0) = 1, and all the other xi'(0) = yi'(0) = 0. The simulation interval is t ∈ (0, 3).
Solutions. Analyzing the original problem, it is found that rij(t) forms a 7 × 7 sym-
metric matrix R(t). Considering the condition i ≠ j under the sum signs, the diagonal
terms in matrix R(t) should be set to very large values such as 10³⁰. As before, the state
variables can be selected as z1(t) = x(t), z2(t) = y(t), z3(t) = x'(t), z4(t) = y'(t), and
z(t) = [z1(t), z2(t), z3(t), z4(t)]ᵀ. It is not hard to write the following MATLAB function
to express the original differential equations:
function dz=seven_body(t,z)
n=7; M=[1:n]’; x=z(1:n); y=z(n+1:2*n);
[xi,xj]=meshgrid(x,x); [yi,yj]=meshgrid(y,y);
R=(xi-xj).^2+(yi-yj).^2; R=R.^(3/2); R=R+1e30*eye(n);
DX=sum(((xj-xi)./R).*M); DY=sum(((yj-yi)./R).*M);
z0=z(2*n+1:end); dz=[z0(:); DX(:); DY(:)];
The initial state vector can be entered, while the following commands can be used to
initiate the simulation process and draw the trajectories. The trajectories of the seven
bodies are as shown in Figure 4.11. The results are exactly the same as those in [32].
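A minimal sketch of entering the initial state vector and starting the simulation is given below; the error tolerance is an assumption:

>> x0=[3;3;-1;-3;2;-2;2]; y0=[3;-3;2;0;0;-4;4];       % initial positions
dx0=zeros(7,1); dx0(6)=1.75; dx0(7)=-1.5;             % nonzero initial speeds
dy0=zeros(7,1); dy0(4)=-1.25; dy0(5)=1;
z0=[x0; y0; dx0; dy0];
ff=odeset('RelTol',1e-8,'AbsTol',1e-8);
[t,z]=ode45(@seven_body,[0,3],z0,ff);
plot(z(:,1:7),z(:,8:14))                              % trajectories of the seven bodies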
In fact, one may use many loops to describe the differential equations. However, the
method like this is not native to MATLAB programming. The vectorized programming
and matrix computation is a better solution, and more suitable for this kind of prob-
lem.
Normally, functions like comet() cannot be used to draw multiple trajectories in
animation form. The animation technique in Volume I can be used and the follow-
ing commands tried to create the animation display of the seven bodies. A video file
c4stars.avi can then be made, which can be played on any multimedia player.
Example 4.15. Solve high-order linear differential equations with constant coeffi-
cients in Example 2.15 with the numerical method:
Solutions. With the recommended method, the state variables can be selected as
x1(t) = x(t), x2(t) = x'(t), x3(t) = y(t), x4(t) = y'(t), x5(t) = z(t), and x6(t) = z'(t). It is
immediately found that the first-order explicit differential equations can be written as
    x1'(t) = x2(t),
    x2'(t) = x1(t) − x3(t) − x5(t),
    x3'(t) = x4(t),
    x4'(t) = −x1(t) + x3(t) − x5(t),
    x5'(t) = x6(t),
    x6'(t) = −x1(t) − x3(t) + x5(t),
and the initial state vector is x(0) = [1, 0, 0, 0, 0, 0]T . With an anonymous function, the
standardized differential equations can be defined, and the following commands can
be used to solve them. The results are the same as in Figure 2.9.
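A minimal sketch of these commands is given below; the terminal time of 10 is an assumption:

>> f=@(t,x)[x(2); x(1)-x(3)-x(5); x(4); -x(1)+x(3)-x(5); x(6); -x(1)-x(3)+x(5)];
x0=[1;0;0;0;0;0]; ff=odeset('RelTol',100*eps,'AbsTol',100*eps);
[t,x]=ode45(f,[0,10],x0,ff);
plot(t,x(:,[1,3,5]))                 % curves of x(t), y(t) and z(t)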
The methods in Example 2.23 can be used to find the analytical solution of the original
differential equations. The exact values at each point can also be evaluated, from
which the maximum error can be found as 2.9388 × 10−12 . It can be seen that the result
obtained is rather accurate.
Example 4.16. Solve the Apollo satellite equation again with the fixed-step fourth-
order Runge–Kutta algorithm.
It is immediately seen that the result is incorrect. A smaller step-size can be tried.
For instance, selecting the step-size of 0.001, the differential equations can be solved
again. A more accurate trajectory curve can be found, which looks similar to that
shown in Figure 4.8. The elapsed time in this case is 0.84 seconds, 13 times of the
time needed by a variable-step algorithm.
If the solutions using the variable-step algorithm ode45() and under tough error tol-
erance are considered as accurate ones, the errors under fixed-step algorithm can be
obtained, as shown in Figure 4.13. It can be seen that although the trajectory curves
look similar, the actual errors are rather large and not negligible. The errors in the
derivative signal are even as large as 4 in magnitude. The errors like this cannot be
accepted in real applications.
In fact, there may exist very large errors at certain points. Although the shape of the
trajectory looks correct, the results may be considered incorrect. In solving differential
equations, variable-step algorithms are recommended. There is no need to use the
commonly taught fixed-step algorithms in numerical analysis courses. If the tough
error-level of 100eps is expected, it is not possible to find the result with a fixed-step
method. Besides, since there is no monitoring mechanism in the fixed-step algorithm,
it is not possible to select a step-size to ensure certain error bounds. Therefore fixed-
step algorithms are unsuitable for dealing with real problems.
In the previously studied examples, the explicit form of the highest-order derivative
of the unknown function could be found. It was easy to build up the first-order ex-
plicit differential equation model. In this section, complicated differential equation
sets are discussed, where in each equation the highest-order derivatives of two un-
known functions appear simultaneously. Algebraic equation solution must be carried
out in describing the first-order explicit differential equations. Examples are given to
demonstrate the conversion and solving methods.
Solutions. Letting x(t) = [x1 (t), x2 (t)]T , the original differential equations can be ex-
pressed in matrix form
where
If one can show that A(x(t)) is a nonsingular matrix, then the equation can be
converted into the following standard form:
With the MATLAB solvers, the numerical solutions can be found. In fact, there is
no method to strictly show that matrix A(x(t)) is nonsingular, but one may try using the
standard form to solve the problem numerically. If in the solution process there is no
error or warning message indicating that A(x(t)) is singular, it means that in the solution
process A(x(t)) is nonsingular, and the solutions thus obtained are acceptable. If in the
solution process there were warnings like these, the result obtained may not be of
any use, and better solvers should be adopted.
In order to study implicit differential equations, an anonymous function can be
taken to describe the differential equations. Therefore, the following commands can
be used to solve the considered differential equations:
The time responses of the state variables can be drawn as shown in Figure 4.14. It can
be seen that no warning or error messages appeared in the solution process. Therefore
the solution obtained is correct.
Example 4.18. Assuming that the differential equations are given as follows:
convert them to first-order explicit differential equations, and find the numerical so-
lutions.
Solutions. It can be seen that the two equations contain the highest-order derivatives
x''(t) and y''(t) simultaneously, but the state variables can still be selected as x1(t) =
x(t), x2(t) = x'(t), x3(t) = y(t), and x4(t) = y'(t). The target is to eliminate one of the
highest-order derivative terms. For this example, solving the first equation, y''(t) can be
expressed in terms of x''(t) and the lower-order terms; substituting it into the second
equation, x''(t) can be expressed explicitly. Substituting the result back into the y''(t)
expression, y''(t) is also found explicitly.
Summarizing the above, the first-order explicit differential equations can be ob-
tained as
    x1'(t) = x2(t),
    x2'(t) = [2x3(t) + 10 − 2x1(t)x4(t) − 6x1(t)x2(t)x4(t)] / [2x4(t) + 3x2(t)],
    x3'(t) = x4(t),
    x4'(t) = [x3(t) + 5 − x1(t)x4(t) + 2x1(t)x4²(t)] / [2x4(t) + 3x2(t)].
In fact, equations like this cannot be easily solved by manual derivation; the Sym-
bolic Math Toolbox should be adopted to solve the algebraic equations. For con-
venience, denote p1(t) = x''(t) and p2(t) = y''(t). Therefore, p1(t) and p2(t) are, in fact,
x2'(t) and x4'(t). With the following commands, the solutions of the equations can be
found. It can be seen that the results are the same as obtained above.
From the converted standard form, the following commands can be used to describe
the differential equations and find their numerical solutions. It can be seen that for
these particular differential equations, there are no solutions, since the matrix x ob-
tained this way is composed of NaN’s.
>> f=@(t,x)[x(2);
(2*x(3)+10-2*x(1)*x(4)+6*x(1)*x(2)*x(4))/(2*x(4)+3*x(2));
x(4);
(x(3)+5-x(1)*x(4)+2*x(1)*x(4)^2)/(2*x(4)+3*x(2))];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
[t,x]=ode45(f,[0,10],[1; 0; 1; 0],ff);
plot(t,x) % solve differential equation and draw solutions
The examples shown earlier are too simple, since manual conversion may help trans-
form them directly into the standard form. If x(m) (t) and y(n) (t) terms appear simulta-
neously in both equations, corresponding manipulations should be made such that
two algebraic equations can be established. Therefore a MATLAB code for solving the
two algebraic equations can be written. Solving the equations, a MATLAB function
describing the standard differential equations can be constructed, so that the solver
ode45() can be used to solve them directly. In other words, the algebraic equation
solver can be embedded in the differential equation model. Examples will be used to
demonstrate the solution procedures.
Example 4.19. Solve the following differential equation set:
    x''(t) sin y'(t) + (y''(t))² = −2x(t)y(t)e^(−x'(t)) + x(t)x''(t)y'(t),
    x(t)x''(t)y''(t) + cos y''(t) = 3y(t)x'(t)e^(−x(t)).
Solutions. The state variables can still be selected as x1(t) = x(t), x2(t) = x'(t), x3(t) =
y(t), and x4(t) = y'(t). Then x1'(t) = x2(t) and x3'(t) = x4(t).
It is obvious that the method in Example 4.18 cannot be used to solve the alge-
braic equations, since the explicit expressions of x2'(t) and x4'(t) cannot be found. The
numerical method can be used to solve for them from the given x(t).
From the given equations, letting p1(t) = x''(t) and p2(t) = y''(t), the algebraic
equations can be established as
    p1(t) sin x4(t) + p2²(t) + 2x1(t)x3(t)e^(−x2(t)) − x1(t)p1(t)x4(t) = 0,
    x1(t)p1(t)p2(t) + cos p2(t) − 3x3(t)x2(t)e^(−x1(t)) = 0.
The algebraic equation solver can be used to find p1(t) and p2(t); after this, the
results can be assigned to x2'(t) and x4'(t), so that the complete state derivative vector
is obtained.
In this way the first-order explicit differential equation model can be written, and
then a MATLAB solver can be used to solve the differential equations. Such a procedure
is not suitable to implement using anonymous functions, since intermediate variables
are needed, which are not supported for anonymous functions. The following MATLAB
function can be written to describe the differential equation model:
function dy=c4impode(t,x)
dx=@(p)[p(1)*sin(x(4))+p(2)^2+...
   2*x(1)*x(3)*exp(-x(2))-x(1)*p(1)*x(4);
   x(1)*p(1)*p(2)+cos(p(2))-3*x(3)*x(2)*exp(-x(1))];
ff=optimset; ff.Display='off'; ff.TolX=eps;
dx1=fsolve(dx,x([1,3]),ff); % embedded algebraic equation solver
dy=[x(2); dx1(1); x(4); dx1(2)]; % describe differential equations
Note that the continuation notation “...” was used. One has to make sure that oper-
ators are used in front of the continuation signs. To see why, consider the following
erroneous representation:
dx=@(p)[p(1)*sin(x(4))+p(2)^2...
+2*x(1)*x(3)*exp(-x(2))-x(1)*p(1)*x(4);
If the command is written like this, MATLAB may accept the expression on the first line
as one expression, and that on the second line, led by +, as another. Therefore, the
function may be wrongly interpreted, and errors may occur in the function call.
Inside the function, anonymous functions describing the algebraic equations are
defined, with p1 (t) and p2 (t) being the unknowns. The two unknowns can be solved for
using the solver fsolve(). And so p1 (t) and p2 (t) can be found. They can be assigned
to x2 (t) and x4 (t), respectively, such that the first-order explicit differential equations
can be set up. A trick is used in the function, namely, the initial search points p1 (0) = x1
and p2 (0) = x3 are used so as to speed up the algebraic equation solution process.
With the standardized differential equation model, the following commands can
be used to solve the differential equations directly. The time responses of the states
can be obtained as shown in Figure 4.15. The elapsed time is 12.65 seconds, with 2 217
points computed. The whole process is quite time consuming, since in each step of the
differential equation solver, the nonlinear algebraic equation is solved once.
where M, C, and K are n×n matrices, while X and F are n×1 column vectors. Introduc-
ing the state vectors x1(t) = X(t), x2(t) = X'(t), we get x1'(t) = x2(t) and x2'(t) = X''(t).
It can be seen from (4.4.1) that
    x2'(t) = M⁻¹[F u(t) − C x2(t) − K x1(t)].
Now selecting the state vector x(t) = [x1ᵀ(t), x2ᵀ(t)]ᵀ, the state space model can be
established as
    x'(t) = [ x2(t);  M⁻¹[F u(t) − C x2(t) − K x1(t)] ].                     (4.4.3)
It can be seen that the differential equations thus established are already in stan-
dard form for the state vector x(t), therefore, the following MATLAB commands can
be used to solve them directly. An example is given next to demonstrate the solution
process.
where θ = [a, θ1 , θ2 ]T , a is the position of the cart, θ1 and θ2 are respectively the angles
of the upper and lower bars, while the matrices in the inverted pendulum are
For the given parameters in the double pendulum system, mc = 0.85 kg, m1 =
0.04 kg, m2 = 0.14 kg, L1 = 0.1524 m, and L2 = 0.4318 m, solve numerically the equa-
tions and draw the step responses of the signals.
Solutions. It can be seen that the coefficient matrices M(θ1, θ2), C(θ1, θ2), and F(θ1, θ2)
are nonlinear functions of the x vector; for instance, the sine and cosine functions
of θ1 make the original differential equation nonlinear. Introducing the state vectors
x1 = θ, x2 = θ', the new state vector x = [x1ᵀ, x2ᵀ]ᵀ can be composed.
The following MATLAB function can be written to describe the first-order explicit dif-
ferential equations:
function dx=inv_pendulum(t,x,u,mc,m1,m2,L1,L2,g)
M=[mc+m1+m2, (0.5*m1+m2)*L1*cos(x(2)), 0.5*m2*L2*cos(x(3))
(0.5*m1+m2)*L1*cos(x(2)),(m1/3+m2)*L1^2,0.5*m2*L1*L2*cos(x(2))
0.5*m2*L2*cos(x(3)),0.5*m2*L1*L2*cos(x(2)),m2*L2^2/3]; % M matrix
C=[0,-(0.5*m1+m2)*L1*cos(x(5))*sin(x(2)),-0.5*m2*L2*x(6)*sin(x(3))
0, 0, 0.5*m2*L1*L2*x(6)*sin(x(2)-x(3))
0, -0.5*m2*L1*L2*x(5)*sin(x(2)-x(3)), 0]; % C matrix
F=[u; (0.5*m1+m2)*L1*g*sin(x(2)); 0.5*m2*L2*g*sin(x(3))]; % F matrix
dx=[x(4:6); inv(M)*(F-C*x(4:6))]; % compute x (t)
If a step signal is used to excite the system, the following commands can be used to
find the numerical solutions as shown in Figures 4.16 and 4.17:
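A minimal sketch of such commands is shown below; the gravitational constant g = 9.81 m/s², the unit step input and the short simulation interval are assumptions:

>> mc=0.85; m1=0.04; m2=0.14; L1=0.1524; L2=0.4318; g=9.81;
u=1; x0=zeros(6,1); tn=0.5;           % unit step input, assumed interval
f=@(t,x)inv_pendulum(t,x,u,mc,m1,m2,L1,L2,g);
[t,x]=ode45(f,[0,tn],x0);
plot(t,x(:,1:3)), figure, plot(t,x(:,4:6))  % positions and derivative signals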
It should be noted that since the double inverted pendulum system is naturally
unstable, it is meaningless to apply step input to the system in reality. Appropriate
control signals should be applied to stabilize the pendulum system.
Figure 4.17: Step responses of the derivative signals in the double inverted pendulum.
Besides, if the matrices M, C, K, and F are all independent of X(t), the equation
becomes a linear differential equation. Through simple conversions, the following
linear state space equation can be found:
    x'(t) = [0, I; −M⁻¹K, −M⁻¹C] x(t) + [0; M⁻¹F] u(t).                      (4.4.4)
where X(t) is a matrix that can be expanded in the column-wise format as a column
vector, which can then be implemented in MATLAB as x(t) = X(:). With the vector
representation, function X=reshape(x,n,m) can be used to convert it back to an n×m
matrix. The following MATLAB function can be written to express Sylvester differential
equation:
function dx=c4msylv(t,x,A,B)
[n1,m1]=size(A); [n2,m2]=size(B); % sizes of the coefficient matrices
X=reshape(x,n1,n2); dx=A*X+X*B; dx=dx(:); % restore X, evaluate X'(t), expand column-wise
where A and B are additional parameters. With this function, the Sylvester differen-
tial equation can be solved numerically employing solvers. The analytical solutions
obtained in Section 2.5.3 can also be used to assess the accuracy and efficiency of the
numerical solutions. An example is given next to demonstrate the numerical solutions
of Sylvester differential equations.
Example 4.21. Solve numerically the Sylvester differential equation in Example 2.34.
Study the accuracy and efficiency of the method by comparing with the analytical
solution in the example.
Solutions. Inputting the relevant matrices into MATLAB, an anonymous function can
be written as an interface for MATLAB function c4msylv() in order to avoid additional
parameters. The variables A and B in MATLAB workspace can be extracted directly.
Therefore, with the following statements, the numerical solutions of Sylvester matrix
equation can be found. Some of the variables are shown in Figure 4.18. The readers
may find the analytical solution in Example 2.34, and compare the accuracy of the
solution. This is left as an exercise.
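The statements are not reproduced in this excerpt. A minimal sketch is given below, assuming that the matrices A and B and the initial matrix X0 of Example 2.34 are already in the MATLAB workspace; the solution interval and the tolerances are assumptions.

>> f=@(t,x)c4msylv(t,x,A,B); % A and B assumed to be in the workspace from Example 2.34
   ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10; % assumed tolerances
   [t,x]=ode45(f,[0,1],X0(:),ff); % X0 is the assumed name of the initial matrix
   plot(t,x) % time responses of the entries of X(t)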
where B and C are symmetric matrices. It is known that the terminal value P(tn) at time tn is given, and a numerical solution in the time interval (t0, tn) is expected.
To solve such equations numerically, they should be converted into the standard form of first-order explicit differential equations, and then the numerical solvers can be applied. In mathematics, the column-wise vector expansion can be denoted as vec(P(t)); in MATLAB, P(:) can be used for the direct implementation. If the vector is to be transformed back to a matrix, function reshape() can be called.
The following MATLAB function may be written to describe Riccati differential
equation:
function dy=ric_de(t,x,A,B,C)
P=reshape(x,size(A));
Y=A'*P+P*A+P*B*P+C; dy=Y(:); % describe Riccati equation
where the given matrices A, B, and C are fed into the function through additional
parameters. In this way, a solver such as ode45() can be used to find the numerical
solution of Riccati differential equations. Note that, in solvers such as ode45(), the
terminal time is allowed to be smaller than the starting time.
[t,p]=ode45(@ric_de,[t1,0],P1(:),options,A,B,C)
Example 4.22. The matrices of a Riccati differential equation and terminal condition
are given below. Solve numerically the differential equation if
$$A = \begin{bmatrix} 6 & 6 & 17 \\ 1 & 0 & -1 \\ -1 & 0 & 0 \end{bmatrix}, \quad B = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 4 & 2 \\ 0 & 2 & 1 \end{bmatrix}, \quad C = \begin{bmatrix} 1 & 2 & 0 \\ 2 & 8 & 0 \\ 0 & 0 & 4 \end{bmatrix}, \quad P_1(0.5) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 5 \end{bmatrix}.$$
Solutions. The matrices should be input first, then the solver can be called to solve
the differential equation, and the results are shown in Figure 4.19.
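The statements themselves are not reproduced in this excerpt. A sketch following the calling template shown above is given below; the matrices are those of the example statement, while the error tolerances are assumptions.

>> A=[6,6,17; 1,0,-1; -1,0,0]; B=[0,0,0; 0,4,2; 0,2,1]; % matrices of the example
   C=[1,2,0; 2,8,0; 0,0,4]; P1=[1,0,0; 0,3,0; 0,0,5];   % C and the terminal value P1(0.5)
   ff=odeset; ff.RelTol=1e-8; ff.AbsTol=1e-8;           % assumed tolerances
   [t,p]=ode45(@ric_de,[0.5,0],P1(:),ff,A,B,C);         % integrate backwards from t=0.5 to t=0
   plot(t,p)                                            % entries of P(t)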
For this particular example, if additional parameters are not expected in the solution
process, an interface can be designed through an anonymous function, and then the
following commands can be used to solve the differential equation directly. Identical
results can be obtained in this way.
>> f=@(t,x)ric_de(t,x,A,B,C); % interface without additional parameters
[t,p]=ode45(f,[0.5,0],P1(:),ff); % ff is the control option set earlier with odeset()
With the above method, the initial matrix P(0) at time t = 0 can be found. Starting from this initial matrix, the differential equation can be solved again, and the same results are obtained. The maximum error of the terminal matrix recovered in this way is 4.6863 × 10⁻¹². It can be seen that the original problem can be converted precisely into an initial value problem.
It can be seen that in (4.5.3) and (4.5.4), the same integral term appears. Thus it
can be eliminated simply by substituting one equation into another, and the original
integro-differential equation can be converted into an ordinary differential equation,
so that numerical solvers can be used to study the original integro-differential equa-
tions. In this section, the combined method is demonstrated through examples to
solve some Volterra integro-differential equations.
$$x'(t) = 1 - \int_0^t x(s)\,\mathrm{d}s$$
with given initial value x(0) = 0. It is known that the analytical solution is x(t) = sin t. Convert it into an ordinary differential equation.
Solutions. For this simple problem, the manual method can be used directly to find the solution. Substituting t = 0 into the equation, it is found that x′(0) = 1. Taking derivatives of both sides with respect to t, it is seen that

x″(t) = −x(t)

with initial values x(0) = 0 and x′(0) = 1. The differential equation has, in fact, multiple analytical solution expressions, namely x(t) = sin(2kπ + t), where k is an integer. The solution provided, x(t) = sin t, is merely one of them.
with initial values x(0) = x′(0) = 1. It is known that the analytical solution is x(t) = e^t. Assess the accuracy of the numerical solution.
Now taking the first-order derivative of (4.5.5) with respect to t, there must be an identical integral term on the right-hand side of the result. Substituting (4.5.6) into the equation, the right-hand side no longer contains the integral. The left-hand side is x‴(t), and the third-order differential equation is found as

x‴(t) = x(t) + 4e^{−t}x(t) + x′(t) − x″(t) − 4.

The third initial value can also be found, as x″(0) = 4 − 2x′(0) − x(0) = 1.
This is a normal third-order differential equation with three initial values. Selecting the state variables x1(t) = x(t), x2(t) = x′(t), and x3(t) = x″(t), it can be converted into the following first-order explicit differential equation in standard form
$$x'(t) = \begin{bmatrix} x_2(t) \\ x_3(t) \\ x_1(t) + 4\mathrm{e}^{-t}x_1(t) + x_2(t) - x_3(t) - 4 \end{bmatrix}$$
with initial values x 0 = [1, 1, 1]T . With the solver ode45(), the numerical solution can
immediately be found, and the error of the solution is 6.1937×10−14 . The elapsed time is
0.012 seconds. It can be seen that in this way, the Volterra integro-differential equation
can be solved effectively.
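The solution commands are not reproduced in this excerpt; a minimal sketch using the state space form derived above is given below. The solution interval [0, 1] is an assumption.

>> f=@(t,x)[x(2); x(3); x(1)+4*exp(-t)*x(1)+x(2)-x(3)-4]; % converted third-order equation
   ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
   tic, [t,x]=ode45(f,[0,1],[1;1;1],ff); toc
   plot(t,x), norm(x(:,1)-exp(t)) % compare with the analytical solution x(t)=e^t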
In the previous example, the integral term in one equation could be substituted
into the other to effectively eliminate the integral term so that the equation can be
converted into ordinary differential equations. In some particular cases, this action
may lead to singular differential equations. Other methods should be introduced for
such problems. This will be demonstrated by the following example.
$$x'(t) = x(t) + \int_0^t t\,x^2(s)\,\mathrm{d}s - 2\mathrm{e}^{-t} + \frac{t}{2}\left(\mathrm{e}^{-2t} - 1\right) \tag{4.5.7}$$
with given initial value x(0) = 1. It is known that the analytical solution is x(t) = e^{−t}. Solve the integro-differential equation with the numerical method and assess its accuracy.
>> F3=simplify(diff(F2,t))
the following ordinary differential equation can be derived, which no longer has an integral term in it:

x‴(t) = x″(t) + 2t x(t)x′(t) + 2x²(t) − 2e^{−t} − 2e^{−2t} + 2te^{−2t}.
For the third-order differential equation, normally three initial values are needed.
It is known that x(0) = 1. The other two can be derived with the following statements:
It can be seen that x′(0) = x(0) − 2 = −1 and x″(0) = x′(0) + 2 = 1. With the differential
equation and initial values, the following commands can be used to find the numerical
solution. It is found that the error of the numerical solution is 1.4462 × 10−13 , and the
elapsed time is 0.028 seconds. It can be seen that the solution process is effective.
>> f=@(t,x)[x(2:3);
x(3)+2*t*x(1)*x(2)+2*x(1)^2-2*exp(-t)-...
2*exp(-2*t)+2*t*exp(-2*t)]; % describe ODE
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
tic, [t,x]=ode45(f,[0,1],[1;-1;1],ff); toc % solution
plot(t,x), norm(x(:,1)-exp(-t)) % find error
It is worth pointing out that the conversion method introduced here has certain limitations: it cannot be used to handle general Volterra integro-differential equations. Only cases where the kernel is separable can be handled by this method. The interested readers may try other numerical integration methods to compute the kernel integral, and may also study the code in [21].
4.6 Exercises
4.1 Solve the following differential equation and draw the y(t) curve:
y‴(t) + t y(t)y″(t) + t² y′(t)y²(t) = e^{−t y(t)}
where y(0) = 2 and y′(0) = y″(0) = 0.
Is there an analytical solution? When the fixed-step fourth-order Runge–Kutta
algorithm is used to solve the differential equation, what should be the suitable
step-size to ensure expected precision? Use existing MATLAB functions to solve
the problem, and assess the speed and accuracy.
4.2 Find the analytical and numerical solutions of the following differential equa-
tions:
$$(1)\ \begin{cases} x''(t) = -2x(t) - 3x'(t) + \mathrm{e}^{-5t},\\ y''(t) = 2x(t) - 3y(t) - 4x'(t) - 4y'(t) - \sin t; \end{cases}$$
$$(2)\ \begin{cases} x''(t) - 2x(t)z(t)x'(t) = 3t^2x^2(t)y(t),\\ y''(t) - \mathrm{e}^{y(t)}y'(t) = 4t^2x(t)z(t),\\ z''(t) - 2tz'(t) = 2t\mathrm{e}^{x(t)y(t)}. \end{cases}$$
4.6 For the given differential equation model, if u(0) = 1, u′(0) = 2, v′(0) = 2 and v(0) = 1, select a set of states to convert the equation into a first-order explicit differential equation. Solve the equation and draw the u(t) and v(t) trajectories
if
4.7 A system of differential equations is given below,[53] where u1(0) = 45, u2(0) = 30, u3(0) = u4(0) = 0, and g = 9.81. Solve it numerically and draw the time responses of the states:
$$\begin{cases} u_1'(t) = u_3(t),\\ u_2'(t) = u_4(t),\\ 2u_3'(t) + \cos(u_1(t)-u_2(t))\,u_4'(t) = -g\sin u_1(t) - \sin(u_1(t)-u_2(t))\,u_4^2(t),\\ \cos(u_1(t)-u_2(t))\,u_3'(t) + u_4'(t) = -g\sin u_2(t) + \sin(u_1(t)-u_2(t))\,u_3^2(t). \end{cases}$$
$$\begin{cases} s''(t) + 0.042s'(t) + 0.961s(t) = \theta'(t) + 0.063\theta(t),\\ u''(t) + 0.087u'(t) = s''(t) + 0.025s(t),\\ v'(t) = 0.973(u(t) - v(t)),\\ w'(t) = 0.433(v(t) - w(t)),\\ x'(t) = 0.508(w(t) - x(t)),\\ \theta'(t) = -0.396(x(t) - 47.6) \end{cases}$$
where s(0) = s′(0) = u′(0) = θ(0) = 0, u(0) = 50, and v(0) = w(0) = x(0) = 75. If t increases indefinitely, what may be the final limit of v(t)?
4.10 Consider the following double inverted pendulum model:[40]
where Δθ(t) = θ1 (t) − θ2 (t), g = 9.81, m = L = 1, θ1 (0) = π/4, θ2 (0) = 2π/4, and
p1 (0) = p2 (0) = 0. Solve this system of differential equations. Is this differential
equation system a genuine implicit one? Is there a method to convert it into a
first-order explicit differential equation? If there is, solve the equation again and
compare the results.
$$\begin{cases} x''(t) = (\cos\phi(t)\sin\theta(t)\cos\psi(t) + \sin\phi(t)\sin\psi(t))\,U_1/m,\\ y''(t) = (\cos\phi(t)\sin\theta(t)\cos\psi(t) - \sin\phi(t)\sin\psi(t))\,U_1/m,\\ z''(t) = \cos\phi(t)\cos\theta(t)\,U_1/m - g,\\ \phi''(t) = \theta'(t)\psi'(t)(I_{yy} - I_{zz})/I_{xx} + U_2/I_{xx},\\ \theta''(t) = \phi'(t)\psi'(t)(I_{zz} - I_{yy})/I_{yy} + U_3/I_{yy},\\ \psi''(t) = \phi'(t)\theta'(t)(I_{xx} - I_{yy})/I_{zz} + U_4/I_{zz} \end{cases}$$
where the given constants are Ixx = Iyy = 0.0081, Izz = 0.0142, m = 1, g = 9.81, b = 54.2×10⁻⁶, d = 1.1×10⁻⁶, and L = 0.24. Assuming that fi = 2 031.4, i = 1, 2, 3, 4, it is found that ωi = 2πfi. The constants U1 = b(ω1² + ω2² + ω3² + ω4²), U2 = bL(−ω2² + ω4²), U3 = bL(ω2² − ω4²), and U4 = b(−ω1² + ω2² − ω3² + ω4²) can also be found. If the initial values of the states and their first-order derivatives are all zero, solve the differential equations numerically.
4.12 Convert the following differential equations into the standard forms:
4.13 Convert the following differential equations into first-order explicit differential
equations:
$$\begin{cases} p_1'(t) = p_2^2(t)\cos q_1(t)/\sin^3 q_1(t),\\ p_2'(t) = 0,\\ q_1'(t) = 1,\\ q_2'(t) = p_2(t)/\sin^2 q_1(t). \end{cases}$$
4.14 Find the analytical and numerical solutions of the following differential equa-
tions. Draw the trajectory of (x, y), and assess the accuracy of the numerical so-
lutions if
The initial values are given as x(0) = x′(0) = 1 and y(0) = y′(0) = 0.
4.15 Find the numerical solutions of the following implicit differential equations, with x1(0) = 1, x1′(0) = 1, x2(0) = 2 and x2′(0) = 2. Draw the trajectory of the solutions if
$$\begin{cases} x_1''(t)x_2(t)\sin(x_1(t)x_2(t)) + 5x_1'(t)x_2'(t)\cos(x_1^2(t)) + t^2x_1(t)x_2^2(t) = \mathrm{e}^{-x_2^2(t)},\\ x_1(t)x_2''(t) + x_2(t)x_1'(t)\sin(x_1^2(t)) + \cos(x_2''(t)x_2(t)) = \sin t. \end{cases}$$
4.16 Assess the precision of the numerical solution for the Sylvester matrix differential
equation studied in Example 4.21.
4.17 If the initial values are x(0) = 1 and x′(0) = −1, solve the differential equation in (4.5.9) numerically. Observe whether the exact numerical solution can be found, and explain why. It is known that the exact solution is x(t) = e^{−t}.
4.18 Convert the following Volterra integro-differential equations into standard ordi-
nary differential equations,[18] and find their numerical solutions. With the given
analytical solutions, assess the accuracy and efficiency:
$$(1)\ x'(t) = 1 + x(t) - t\mathrm{e}^{-t^2} - 2\int_0^t ts\,\mathrm{e}^{-x^2(s)}\,\mathrm{d}s,\quad x(0) = 0,\ \text{with analytical solution } x(t) = t;$$
$$(2)\ x'(t) = x(t) + \frac{1}{1500}x^2(t) + \frac{1}{3000}\int_{-1}^{t} \mathrm{e}^{2(t-s)}x^2(s)\,\mathrm{d}s,\quad x(0) = 1,\ \text{with analytical solution } x(t) = \mathrm{e}^{-t}.$$
5 Special differential equations
It can be seen from the presentations and examples in the previous two chapters that ordinary differential equations of various forms can normally be converted into first-order explicit differential equations and then solved numerically by MATLAB solvers such as ode45(). It is also shown that solvers such as ode45() may fail, as in the case of stiff equations. Therefore dedicated solvers should be introduced to handle stiff differential equations. Besides, differential-algebraic equations, implicit differential equations, and many other types should be considered, and their solutions should be addressed to fill the gaps left by the ode45() solver.
In Section 5.1, stiffness phenomenon is demonstrated, followed by the introduc-
tion of dedicated solvers for stiff differential equations. Some thoughts on stiffness detection and on the problems of fixed-step methods are also presented. In Section 5.2, the solvers
of implicit differential equations are introduced. The so-called implicit differential
equations are those which cannot be converted into first-order explicit differential
equations. The mathematical formula and consistent initial value computation prob-
lems are introduced. Also implicit equations with multiple solutions are discussed.
In Section 5.3, solutions of differential-algebraic equations are addressed. The semi-
explicit differential-algebraic equations are studied first, then the implicit equation
based methods are discussed for the solution of differential-algebraic equations. In
Section 5.4, switched differential equations are presented. The zero-crossing detection
and event response handling problems are discussed, and nonlinear switched differ-
ential equations are explored. In Section 5.5, linear stochastic differential equations
are considered. A discretization method is proposed for linear stochastic differential
equations.
It can be seen in Chapter 2 that the analytical solutions of linear differential equa-
tions are weighted sums of exponential functions. For stable differential equations,
the exponential functions are decaying. These exponential functions are referred to
as transient solutions. Normally, a time constant is used as a specification to describe how fast the transient solution decays. The definition of the time constant is given next.
Definition 5.1. For a first-order linear differential equation with constant coefficient, u′(t) + λu(t) = 0, the analytical solution is u(t) = Ce^{−λt}, where 1/λ is referred to as the time constant of the differential equation.
Definition 5.2. It has been shown that the analytical solution of a linear differential equation with constant coefficients is a weighted sum of exponential functions. The smallest time constant among the exponential terms can be regarded as the time constant of the system.
In the 1950s, the phenomena and solution methods of stiff differential equations at-
tracted the attention of scholars and researchers in the numerical analysis community.
There were some important international symposia dedicated to the stiff differential
equations problems.[72] Here an example is used to present the stiffness behavior and
its impact on numerical solutions.
Example 5.1. Consider the initial value problem of the differential equation
Find the analytical solution and draw the curve for α = 50.
Solutions. With the foundations studied in Chapter 2, the following statements can
be written to solve this differential equation directly, and draw the curve within the
interval (0, π/2), as shown in Figure 5.1:
$$y_0(x) = \frac{\alpha}{\alpha^2+1}(\sin x + \alpha\cos x) - \frac{\alpha^2}{\alpha^2+1}\mathrm{e}^{-\alpha x}.$$
It can be seen from the analytical solution and curve that the e^{−50x} term vanishes rapidly. Its impact lasts only a very short time; the remaining response is the persistent sinusoidal component. When x increases, the solution of the differential equation is almost a periodic function. Here only a very small interval of x is selected to draw the curve, so the periodic behavior cannot be witnessed in the plot. It can be seen from the analytical solution that the solution is smooth, and when x is relatively large, the solution is almost a cosine function.
In Example 5.1, the time constant of the differential equation is 1/50 = 0.02. If the time constant of the system is small, while the step-size is comparable to or larger than it, there might be oscillations in the numerical results. One can imagine that if the time constant were not 0.02, but a much smaller value, such as 10⁻⁷, the impact of the transient response might not be captured by conventional numerical algorithms. Therefore the correct numerical solution may not be found.
Example 5.2. Use a fixed-step algorithm to solve numerically the differential equation
in Example 5.1, and observe the behavior.
Solutions. Since the original system is already in standard form, an anonymous function can be written to describe the differential equation. The step-sizes h = 0.04 and h = 0.03 can be selected. With Euler's method, the solution can be found, as shown in Figure 5.2. It is obvious that the solutions are incorrect.

>> f=@(x,y)-50*y+50*cos(x); x0=0; % equation and initial value assumed from Example 5.1
   h=0.04; [x1,y1]=ode_euler(f,[0,h,pi/2],x0); % ode_euler() is the fixed-step Euler solver used earlier
   h=0.03; [x2,y2]=ode_euler(f,[0,h,pi/2],x0);
   plot(x1,y1,x2,y2) % draw the solutions for different step-sizes
If the fourth-order Runge–Kutta algorithm is used, with slightly larger step-sizes, there
may also be large errors, as in Figure 5.3; when h = 0.03, the error is large. A further
increase of the step-size may make the numerical process unstable.
On the other hand, since the time constant is 1/50, the stiffness of the differential equation is not very serious. If it were 1/1 000 or even smaller, the frequently used single-step algorithms might also fail. Dedicated solvers for stiff differential equations are expected.
It can be seen from these phenomena that the sudden changes in the solution may mislead the numerical algorithm, such that large errors may occur. The errors keep misleading the subsequent solution process, such that erroneous solutions are found. The differential equation itself does not have these problems. This kind of equation is also known as a stiff differential equation.
It is also said that although such differential equations are referred to as stiff, the equations themselves are not stiff; it is the initial value problems in certain regions that are stiff.[26] Many stiff differential equations are not suitable for solving with function ode45(), since at certain points variable-step algorithms must select extremely small step-sizes so as to satisfy the specified error tolerance. The step-sizes are so small and the number of points selected is so large that computer memory is exhausted, and the solution processes are aborted.
For stiff differential equations, some dedicated solvers are provided, such as the variable-order solver ode15s(), the trapezoidal rule based solver ode23t(), the trapezoidal rule with backward difference formula based solver ode23tb(), and so on. The syntaxes of these functions are the same as that of ode45(). Therefore, if one wants to solve a stiff differential equation, one just needs to substitute the solver ode45() with a stiff solver.
Example 5.3. When the van der Pol equation was discussed earlier, μ was selected as
μ = 1 000, where t ∈ (0, 3 000). Solve this differential equation again.
Solutions. It was pointed out in Example 4.4 that ode45() function cannot be used to
find the numerical solution to this differential equation. Similar to the earlier example,
the following MATLAB commands can be used, and within 2.55 seconds the numerical
solution is found.
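The commands are not reproduced in this excerpt. A minimal sketch is given below, assuming the conventional van der Pol state space form and the initial values y(0) = 2, y′(0) = 0, which are not restated here.

>> mu=1000; f=@(t,x)[x(2); mu*(1-x(1)^2)*x(2)-x(1)]; % van der Pol equation in state space form
   tic, [t,x]=ode15s(f,[0,3000],[2;0]); toc % stiff solver used instead of ode45()
   plot(t,x(:,1)) % the curve of y(t)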
It can be seen that the solution can easily be found with the stiff differential equation
solver. The time responses of the two states can be drawn, as shown in Figures 5.4
and 5.5. It can be seen that there are sudden changes in the solutions x1 (t) and x2 (t)
at certain points. Therefore, when μ = 1 000, the van der Pol equation is a typical stiff differential equation, and dedicated solvers should be used for it. The result is essentially the most accurate numerical solution obtainable under the double-precision framework.
Figure 5.4: Solution y(t) of van der Pol equation for μ = 1 000.
Figure 5.5: Solution y (t) of van der Pol equation for μ = 1 000.
It can be seen that at the same points the curves of y(t) and y′(t) are extremely steep. In fact, if the regions are zoomed in, one can observe gradual changes within the tiny intervals. In other, smooth intervals the step-size can be selected relatively large.
In the solution process, the step-size behaves as shown in Figure 5.6. In order to keep the error small, at certain points the step-size must be set to values as small as 2.1073 × 10⁻⁹, while at other points the step-size may be selected as large as 4. A total of 24 441 points are computed.
Example 5.4. Use stiff differential equation solvers to solve again the problem in Ex-
ample 3.19.
Solutions. Exactly the same code can be used in solving the problem in Example 3.19.
Just replace the solver ode45() by the dedicated stiff differential equation solver
ode15s(). It can be seen that the solution is exactly the same as that in Example 3.19.
No discrepancy can be witnessed from the curves.
>> f=@(t,y)[-1.71*y(1)+0.43*y(2)+8.32*y(3)+0.0007;
1.71*y(1)-8.75*y(2);
-10.03*y(3)+0.43*y(4)+0.035*y(5);
8.32*y(2)+1.71*y(3)-1.12*y(4);
-1.745*y(5)+0.43*y(6)+0.43*y(7);
-280*y(6)*y(8)+0.69*y(4)+1.71*y(5)-0.43*y(6)+0.69*y(7);
280*y(6)*y(8)-1.81*y(7);
-280*y(6)*y(8)+1.81*y(7)];
y0=[1;0;0;0;0;0;0;0.0057]; % set the initial values
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
tic, [t,x]=ode15s(f,[0,321.8122],y0); toc
plot(t,x), xlim([0,5]) % solve differential equation and draw curves
With a stiff differential equation solver, the step-size curve can be drawn as in
Figure 5.7. With the dedicated solver, the elapsed time is only 0.056 seconds, much
smaller than that for ode45(). The number of points computed is only 119. At the
initial stage, the minimum step-size is only 2.6756 × 10−4 , and later, a huge step-size
of 30 is allowed. It can be seen that the dedicated solver is of high efficiency.
In fact, there are many problems which were considered stiff, but under the standards of MATLAB the stiffness is not reflected. Ordinary solvers are suggested for finding the numerical solutions; there is no need to solve such problems with stiff solvers, since accuracy may be sacrificed. Of course, for certain problems where ordinary solvers cannot be used, stiff solvers must be employed, as seen in Example 5.6.
$$y'(t) = \begin{bmatrix} -21 & 19 & -20 \\ 19 & -21 & 20 \\ 40 & -40 & -40 \end{bmatrix} y(t), \qquad y(0) = \begin{bmatrix} 1 \\ 0 \\ -1 \end{bmatrix}.$$
Solve the differential equation with MATLAB.
Solutions. The analytical solution of the equation can be found by directly evaluating the matrix exponential with Symbolic Math Toolbox commands in MATLAB.
>> syms t;
A=[-21,19,-20; 19,-21,20; 40,-40,-40]; % input the matrices
y0=[1; 0; -1];
y=expm(A*t)*y0 % find the analytical solution with matrix exponential
The numerical and analytical solutions of the equation are as shown in Figure 5.8.
The two curves just coincide. The maximum difference between the numerical and
analytical solution is 1.9295 × 10−13 , and the number of points computed is 4 937, with
the elapsed time of 0.0345 seconds.
The following statements can be used to draw the step-size curve, as shown in
Figure 5.9, with the minimum step-size of 4.6476 × 10−6 :
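The statements are not reproduced here. One possible way to draw the step-size curve, assuming t is the time vector returned by the solver, is sketched below.

>> semilogy(t(1:end-1),diff(t)) % step-size actually used between consecutive solution points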
If a stiff solver is used to solve such a problem, the elapsed time is only about 0.023 sec-
onds, and the number of points computed is only 1 540. Compared with the analytical
solution, the norm of the error is 3.4377 × 10⁻¹¹. It can be seen that with the dedicated solvers the number of points is significantly reduced, but at a cost: the error is evidently increased, which means that accuracy is sacrificed. It is shown in this example that for such problems it is not really necessary to introduce a dedicated stiff solver.
For this specific problem, it can be seen that the accuracy of the numerical solution is high, and the solution process is fast. The stiffness issue does not seem to be reflected in the solution process. This is because the variable-step solvers in MATLAB adjust the step-size automatically, according to the assigned error tolerance, so the stiffness is not felt. It is also seen that the difference in time constants is only 40/2 = 20 times; this is a relatively small ratio under the double-precision standard. Therefore ordinary solvers are sufficient in handling this problem.
$$\begin{cases} y_1'(t) = 0.04(1 - y_1(t)) - (1 - y_2(t))y_1(t) + 0.0001(1 - y_2(t))^2,\\ y_2'(t) = -10^4 y_1(t) + 3\,000(1 - y_2(t))^2 \end{cases}$$
where the initial values are y1 (0) = 0 and y2 (0) = 1. If the interval of interest is t ∈
(0, 100), select a suitable algorithm to find the numerical solution of the equations.
Solutions. Based on the given ordinary differential equations, the following anony-
mous function can be written. The following MATLAB commands can be tried:
After 1.49 seconds of waiting, the numerical solution can be found as shown in Fig-
ure 5.10. It can be seen that the ordinary solver ode45() needs too much time, and the
number of points computed is as high as 481 289. Now the dedicated stiff solver is tried
as follows:
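The commands are not reproduced in this excerpt; a minimal sketch is given below, using the equations as written above, with assumed error tolerances.

>> f=@(t,y)[0.04*(1-y(1))-(1-y(2))*y(1)+0.0001*(1-y(2))^2;
            -1e4*y(1)+3000*(1-y(2))^2]; % the two state equations
   ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10; % assumed tolerances
   tic, [t1,y1]=ode45(f,[0,100],[0;1],ff); toc  % ordinary solver, slow
   tic, [t2,y2]=ode15s(f,[0,100],[0;1],ff); toc % stiff solver, much faster
   plot(t2,y2)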
It can be seen that since the precision requirement is rather high, very small step-
sizes must be explored. The actual step-size curve for this problem is as shown in
Figure 5.11. It is seen that during the solution process the step-size is alternating in
an interval around 0.0003. This means that the variable-step method is trying to find
an appropriate step-size for the problem. At the initial stage, the step-size is switching
in a small neighborhood around 2 × 10−5 .
If the ode15s() is used to replace ode45(), the elapsed time is 0.25 seconds, and
the number of points computed is 2 404. It can be seen that the time needed is sig-
nificantly reduced, and the efficiency is boosted by 5.96 times. It can be seen that the
curves are almost identical with those in Figure 5.10. No error can be witnessed from
the curves.
The step-size curve of the new solver can be drawn, using the above code, as shown
in Figure 5.12. Compared with the time response curves, it is seen that, when the time
response is smooth, the step-size of a stiff solver may reach h = 9 ∼ 10. Therefore
the efficiency of the solver is significantly higher than that of the ordinary solvers. To
ensure precision requirements, the initial step-size is selected as small as 8.4294×10−9 .
Luckily, phenomena like this last only a very short period of time and do not affect the efficiency too much.
Example 5.7. Solve again the differential equation in Example 5.3 with a fixed-step
Runge–Kutta algorithm.
The error curves of the state variables are shown in Figure 5.13. It can be seen that the
2-norm of the error signal is 2.6184 × 10−6 .
It is seen from the results that, since the step-size is selected too large, the solution is obviously not precise, and the step-size should be reduced. However, if the step-size is reduced to 10⁻⁷, the total elapsed time increases to 22.21 seconds, more than 500 times that of the ode45() solver, and the 2-norm of the error is still as high as 8.57×10⁻⁸. Therefore variable-step algorithms should be adopted in actual differential equation solutions.
It can be seen from the curves that the differences in the rates of change among the three curves are not very significant, so the system is not really a stiff one under the current standard of the software. In the past, due to limitations of the computing facilities, it was mistakenly considered to be stiff.
In Example 4.19, for instance, we tried to convert the equation into the standard form of (3.1.1), but in each step of the solution process an algebraic equation had to be solved. This is obviously not a good choice for implicit differential equations, since the solution process is extremely slow. In this section, direct methods are introduced for dealing with implicit differential equations.
It can be seen from the standard form that the explicit expression illustrated earlier is no longer needed; it is the implicit expression of the differential equations that needs to be described in MATLAB. This equation format is much simpler and more flexible. It appears from the mathematical model that not only the regular x0, but also the initial value of the first-order derivative is needed, which is not a usual requirement. An example is given here to show how to describe an implicit differential equation with MATLAB.
Example 5.8. Express the differential equation in Example 4.19 in the standard form of
an implicit differential equation. For ease of presentation, the original model is given
below:
$$\begin{cases} x''(t)\sin y'(t) + (y''(t))^2 = -2x(t)y(t)\mathrm{e}^{-x'(t)} + x(t)x''(t)y'(t),\\ x(t)x''(t)y''(t) + \cos y''(t) = 3y(t)x'(t)\mathrm{e}^{-x(t)}. \end{cases}$$
The initial values of the equations are x(0) = y′(0) = 1 and x′(0) = y(0) = 0.
Solutions. Selecting the state variables x1(t) = x(t), x2(t) = x′(t), x3(t) = y(t), and x4(t) = y′(t), the original differential equations can be described by the following implicit differential equations in standard form:
From the given initial values, the initial value vector can be established as x(0) = [1, 0, 0, 1]ᵀ. Therefore the following anonymous function can be established to express the implicit differential equations:
the implicit differential equations:
>> f=@(t,x,xd)[xd(1)-x(2);
xd(2)*sin(x(4))+xd(4)^2+2*x(1)*x(3)*exp(-x(2))-x(1)*xd(2)*x(4);
xd(3)-x(4);
x(1)*xd(2)*xd(4)+cos(xd(4))-3*x(3)*x(2)*exp(-x(1))];
It can be seen that the description is neat and simple. There is no need to convert it
into the form in Example 4.19, with algebraic equation embedded. Simple commands
can be used to solve the implicit differential equations directly.
Comparing the explicit differential equations and the implicit one given here,
there are several differences:
(1) The standard form here can be used to describe complicated implicit differential equations with algebraic equations embedded. Since no algebraic equation has to be solved at each step, the solution efficiency may be boosted.
(2) In standard implicit differential equations, not only x(t0), but also an extra vector x′(t0) is expected. The latter should be found before the solution of the differential equations can be carried out. In fact, observing the standard implicit form in (5.2.1), it can be found that if t0 and x(t0) are substituted into (5.2.1), the consistent x′(t0) can be found directly. For implicit differential equations, this algebraic equation needs to be solved only once, so it will not affect the solution efficiency.
It can be seen from the above presentation that the whole solution process can be
divided into two parts:
(1) The consistent initial value of the first-order derivative x′(t0) can be found using the implicit differential equations.
(2) The entire implicit differential equation can be solved using the given consistent
initial values.
Since the initial values have a significant impact on the accuracy of the solutions,
consistent initial values should be very accurate, otherwise they may yield large er-
rors. The solution procedures of implicit differential equations will be systematically
illustrated next.
used, with algebraic equation solvers embedded. In the current versions, implicit
differential equation solvers can be used directly. Here MATLAB based solvers are
presented.
Implicit differential equations are different from explicit ones. Before the solution process, the initial values of x(t0) and x′(t0) should both be provided. They cannot be arbitrarily assigned: at most n components in them can be assigned independently, while the others should be solved for from the implicit differential equations; otherwise, conflicting initial values may be found.
Here the first step in solving implicit differential equations is introduced. We will show how to use the given implicit differential equation model to find the consistent first-order derivative initial values x′(t0).
It can be seen that, mathematically, by substituting t0 and x(t0) into (5.2.1), the following algebraic equation appears:

F(t0, x(t0), x′(t0)) = 0. (5.2.2)
There are various ways of solving such algebraic equations, and Volume IV of this book series also provides some solvers. If the user does not want to solve the equations manually, the solver decic() provided in MATLAB can be used to find the consistent initial values, with the syntaxes
[x0*, xd0*]=decic(fun,t0,x0,x0F,xd0,xd0F) % find consistent initial values
[x0*, xd0*]=decic(fun,t0,x0,x0F,xd0,xd0F,options)
[x0*, xd0*, f0]=decic(fun,t0,x0,x0F,xd0,xd0F,options)
where fun is the function handle for the implicit differential equation. It can be a
regular MATLAB function or an anonymous function, with input arguments of t, x(t),
and x (t). Examples were used earlier to show the description of implicit differential
equations.
In the solution process, the consistent xd0* should be found first by calling the function decic(). In the function call, the input arguments x0 and xd0 are the initial values; they can be specific values or merely starting points for the algebraic equation solution process. The arguments x0F and xd0F are both n-dimensional column vectors, indicating which values in the two initial value vectors should be retained: if an entry is 1, the corresponding initial value is retained, otherwise it is found with the algebraic equation solver. The total number of 1's in the two vectors should not exceed n. After the solution process, the consistent initial values x0* and xd0* are returned. The argument options is the control option, which can be set with the function odeset(); members such as RelTol can be selected to define the expected precision. It should be noted that in the solution process the RelTol member should not be set to too small a value, otherwise consistent initial values may not be found.
If no error messages appear in the solution process, the returned arguments x0* and xd0* are consistent initial values, and f0 is the norm of the error when the consistent values are substituted back into (5.2.2).
Examples will be given next to demonstrate the consistent initial value computa-
tion methods.
Example 5.9. Find the consistent initial derivative values xd0 for the implicit differential equation in Example 4.19.
Solutions. For this specific example, the whole vector x0 is given. It should be input into the computer, and x0F should be set to a vector of ones, indicating that each element in vector x0 is to be retained. Since xd0 is unknown, it can be chosen randomly, or set in other forms, while the vector xd0F should be set to a zero vector, or an empty vector, indicating that each element in xd0 is to be computed such that consistent initial values can be found. The following commands can be employed to find the consistent initial values:
>> f=@(t,x,xd)[xd(1)-x(2);
xd(2)*sin(x(4))+xd(4)^2+2*exp(-x(2))*x(1)*x(3)-x(1)*xd(2)*x(4);
xd(3)-x(4);
x(1)*xd(2)*xd(4)+cos(xd(4))-3*exp(-x(1))*x(3)*x(2)];
x0=[1;0;0;1]; x0F=ones(4,1); % retain all elements of x0
xd0=rand(4,1); xd0F=zeros(4,1); % the derivative initial values are to be computed
[x0,xd0,f0]=decic(f,0,x0,x0F,xd0,xd0F) % use x0 to determine the consistent xd0
With the above function call, the consistent xd0 = [0, 1.6833, 1, −0.5166]ᵀ can be found, and the norm of the error vector is 1.1102 × 10⁻¹⁶, indicating that the solution is successful.
In fact, the solver decic() provided in MATLAB occasionally fails to find consis-
tent initial values, and returns an error message “Convergence failure in DECIC”. An
outer loop structure can be designed to find the consistent initial values.
function [x0,xd0,f0]=decic_new(f,t0,x0,x0F,varargin)
n=length(x0);
[xd0,xd0F,a,b,tol]=default_vals(...
   {rand(n,1),zeros(n,1),0,1,eps},varargin{:});
while (1)
   xd0=rand(n,1); x0=a+(b-a).*x0;
   try % find consistent values
      [x0,xd0,f0]=decic(f,t0,x0,x0F,xd0,xd0F);
   catch, continue; end
   if abs(f0)<tol, break; end % if found then terminate the loop
end, end
function varargout=default_vals(vals,varargin)
if nargout~=length(vals), error('number of arguments mismatch');
else, nn=length(varargin)+1; % assign default values
   varargout=varargin; for i=nn:nargout, varargout{i}=vals{i}; end
end, end
With consistent initial values, the next step is to solve the implicit differential equa-
tions.
With the consistent x0 and xd0, and the standard form of the implicit differential equations, the solver ode15i() provided in MATLAB can be called to solve them:
[t,x]=ode15i(fun,tspan,x0,xd0,options)
where the definition of tspan is the same as that studied earlier. It can be the interval
[t0 ,tn ], or a user specified time vector t. The control options options is also the same
as those discussed earlier.
Examples are given next to demonstrate the solution process of implicit differen-
tial equations.
Solutions. In Examples 5.8 and 5.9, the standard form of the implicit differential equations and the consistent initial values were found. With the following MATLAB commands, the differential equations can be described with an anonymous function, and
mands, the differential equations can be described with an anonymous function, and
the consistent initial values are found. Finally, the equations can be solved directly.
The total elapsed time is 0.94 seconds, and the number of points computed is 6 334. The efficiency is significantly higher than in Example 4.19, where the elapsed time was 12.65 seconds, with 2 217 points computed, about 13.5 times longer than here.
>> f=@(t,x,xd)[xd(1)-x(2);
xd(2)*sin(x(4))+xd(4)^2+...
2*x(1)*x(3)*exp(-x(2))-x(1)*xd(2)*x(4);
xd(3)-x(4);
x(1)*xd(2)*xd(4)+cos(xd(4))-3*x(3)*x(2)*exp(-x(1))];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
x0=[1,0,0,1]’; x0F=ones(4,1); tic
[x0,xd0,f0]=decic_new(f,0,x0,x0F) % consistent initial value
[t,x]=ode15i(f,[0,2],x0,xd0,ff); toc
plot(t,x), length(t) % solution and plot
In Example 4.19, the main reason for the large time consumption was that in each step of the differential equation solution an algebraic equation was solved, while in the algorithm here the algebraic equation is solved only once, and the information can then be used to solve the implicit differential equation. Therefore the solver here is more efficient.
Example 5.11. Solve the implicit differential equations in Example 5.9 again. Compare the results with those in Example 4.17. For convenience, the original differential equation model is recalled as follows:
Solutions. The standard form of the implicit differential equation can be written as
>> f=@(t,x,xd)[xd(1)*sin(x(1))+xd(2)*cos(x(2))+x(1)-1;
-xd(1)*cos(x(2))+xd(2)*sin(x(1))+x(2)];
x0=[0;0]; x0F=ones(2,1); tic
[x0,xd0,f0]=decic_new(f,0,x0,x0F); % consistent initial values
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
[t,x]=ode15i(f,[0,10],x0,xd0,ff); % solve the equation
plot(t,x); length(t), toc % draw the solution curves
Solutions. At first glance, it seems that this is an explicit differential equation. Yet the last equation contains both of the derivatives y7′(t) and y8′(t). Therefore, in fact, this is an implicit differential equation. If the equation is to be solved, the standard model should be established:
>> f=@(t,y,yd)[yd(1)+1.71*y(1)-0.43*y(2)-8.32*y(3)-0.0007;
yd(2)-1.71*y(1)+8.75*y(2);
yd(3)+10.03*y(3)-0.43*y(4)-0.035*y(5);
yd(4)-8.32*y(2)-1.71*y(3)+1.12*y(4);
yd(5)+1.745*y(5)-0.43*y(6)-0.43*y(7);
yd(6)+280*y(6)*y(8)-0.69*y(4)-1.71*y(5)+0.43*y(6)-0.69*y(7);
yd(7)-280*y(6)*y(8)+1.81*y(7);
yd(8)+yd(7)]; % describe the equation
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
x0=[1;0;0;0;0;0;0;0.0057]; x0F=ones(8,1); tic
The consistent derivative values are y1′(0) = −1.7093 and y2′(0) = 1.7100, with the rest of the first-order derivative values being zero. The norm of the error is 1.4496 × 10⁻¹⁶. It is seen that the initial values are very accurate. The total elapsed time is 0.38 seconds, with the number of points being 2 558. The solutions of the differential equation are as shown in Figure 5.14.
It can be seen from the results that the efficiency of the implicit differential equation solution process is significantly higher than that in the example of Section 4.3.3. Therefore, for implicit differential equations, the implicit differential equation solver is the top choice whenever it is applicable.
When solving for consistent initial values of implicit differential equations, algebraic equations are involved. For these algebraic equations, the uniqueness of the solutions should also be considered: multiple sets of consistent initial values may be found, and each set may yield a numerical solution of the original implicit differential equations. Examples are presented next to find the multiple solutions of implicit differential equations.
Example 5.13. Consider the differential equation in Example 4.9. For convenience, the
differential equation is given below. Solve the following equation:
Solutions. In Example 4.9, attempts were made to derive the explicit form. There were
two branches in the first-order explicit differential equations. Numerical methods were
adopted in solving them. Now the implicit differential equation solver can be used to
solve again the original problem.
Introducing the variables x1(t) = y(t) and x2(t) = y′(t), the standard form of the implicit differential equation can be written as
with x1(0) = 0 and x2(0) = 0.1. With an anonymous function describing the equation, the consistent initial values, and then the numerical solution of the differential equation, can finally be found. The following commands can be used:
>> f=@(t,x,xd)[xd(1)-x(2);
xd(2)^2-4*(t*x(2)-x(1))-2*x(2)-1];
x0=[0; 0.1]; x0F=[1;1]; tic
[x0,xd0,f0]=decic_new(f,0,x0,x0F) % get consistent conditions
[t1,x1]=ode15i(f,[0,1],x0,xd0,ff); toc % ff is the control option set earlier with odeset()
plot(t1,x1(:,1)), length(t1) % draw the solutions
It can be seen that the consistent initial derivative values are x1′(0) = 0.1 and x2′(0) = −1.0954, with the error being 2.2204 × 10⁻¹⁶. From these initial values, after 0.045 seconds, 122 points are computed, and the solution is the lower branch shown in Figure 4.5.
In Example 4.9, the precision of the lower branch is extremely low. The difference
between the numerical and analytical solution can be distinguished easily from the
curves. Here the analytical solution is found again, and compared with the numerical
solutions. The norm of the error vector is reduced to 1.7955 × 10−13 . It can be seen that
the accuracy is boosted in the new solutions.
In fact, when the above code, especially the part that finds the consistent initial values, is repeated, the function decic_new() may return another set of consistent values, x1′(0) = 0.1 and x2′(0) = 1.0954. In the two sets of initial values, the absolute values of x2′(0) are the same, which agrees with the squared x2′(t) term in the equation; the error found is 1.7955 × 10⁻¹³. With such an initial value, the numerical solution of the other branch can be found. From practical computations, it can be seen that the accuracy is much higher than that obtained in the previous sections.
It can be seen that the initial value problems of differential equations may have
multiple solutions. Therefore in the solution process, the consistent value solver
decic_new() should be executed several times. If different initial values can be found,
the differential equations may have multiple solutions. The solutions on different
branches can be obtained from different initial values.
Definition 5.4. The so-called differential-algebraic equation set is such that some of
the equations are given in the form of differential equations, while others are given
by algebraic equations, which can be regarded as the algebraic constraints among the
state variables. This type of equations cannot be solved directly with the differential
equation solvers. The general form of differential-algebraic equations is
where x(t0) = x0. The number of equations in F(t, x(t), x′(t)) is smaller than the number of unknowns, and each of these equations must contain at least one first-order derivative term. The total number of equations in the two sets F(⋅) and G(⋅) equals the number of unknowns.
and the usual methods can be used to solve the resulting differential equation.
For genuine differential-algebraic equations, matrix M(t, x(t)) is singular. Special
methods for this kind of equations should be studied.
Solutions. It can be seen that the last equation is an algebraic one. It can be regarded
as the algebraic constraint among the three state variables. The matrix form of the
differential-algebraic equation can be written as
>> f=@(t,x)[-0.2*x(1)+x(2)*x(3)+0.3*x(1)*x(2);
2*x(1)*x(2)-5*x(2)*x(3)-2*x(2)*x(2);
x(1)+x(2)+x(3)-1]; % right-hand side of the equation
Matrix M can be entered into MATLAB workspace, and the following commands can
be written. The differential-algebraic equation can be solved, and the solutions are
shown in Figure 5.15. The elapsed time of the solution process is 0.51 seconds, and
2 268 points are computed.
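The commands are not reproduced in this excerpt. A minimal sketch is given below, with f as defined above; the mass matrix follows from the equations, while the initial values (chosen consistent with the constraint x1 + x2 + x3 = 1), the tolerances, and the choice of the ode15s() solver are assumptions.

>> M=[1,0,0; 0,1,0; 0,0,0]; % singular mass matrix, the last equation is algebraic
   ff=odeset; ff.Mass=M; ff.RelTol=1e-8; ff.AbsTol=1e-8; % assumed tolerances
   x0=[0.8; 0.1; 0.1]; % assumed initial values satisfying x1+x2+x3=1
   tic, [t,x]=ode15s(f,[0,20],x0,ff); toc
   plot(t,x)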
>> f=@(t,x)[-0.2*x(1)+x(2)*(1-x(1)-x(2))+0.3*x(1)*x(2);
2*x(1)*x(2)-5*x(2)*(1-x(1)-x(2))-2*x(2)*x(2)];
After a transformation like this, the following commands can be used to solve the dif-
ferential equations numerically, so as to find the solutions of the original differential-
algebraic equations. The solutions obtained in this way are exactly the same as those
obtained earlier.
>> fDae=@(t,x)[-0.2*x(1)+x(2)*(1-x(1)-x(2))+0.3*x(1)*x(2);...
2*x(1)*x(2)-5*x(2)*(1-x(1)-x(2))-2*x(2)*x(2)];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps; ff.Mass=M;
x0=[0.8; 0.1]; [t1,x1]=ode45(fDae,[0,20],x0);
plot(t1,x1,t1,1-sum(x1’))
Note that even though the solver ode45() is used here, no error or warning messages
appear.
Example 5.15. Use a differential-algebraic equation solver to solve again the implicit
differential equations studied in Example 5.12.
Solutions. Slightly modifying the last equation, it is easy to find the matrix form of the original differential equations, with the mass matrix

$$M = \begin{bmatrix} 1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&1&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&1&1 \end{bmatrix}.$$
If the mass matrix is fed into MATLAB, the following MATLAB commands can be
used to solve the differential equations directly:
>> f=@(t,y)[-1.71*y(1)+0.43*y(2)+8.32*y(3)+0.0007;
1.71*y(1)-8.75*y(2);
-10.03*y(3)+0.43*y(4)+0.035*y(5);
8.32*y(2)+1.71*y(3)-1.12*y(4);
-1.745*y(5)+0.43*y(6)+0.43*y(7);
-280*y(6)*y(8)+0.69*y(4)+1.71*y(5)-0.43*y(6)+0.69*y(7);
280*y(6)*y(8)-1.81*y(7); 0];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
M=eye(8); M(8,7)=1; ff.Mass=M; x0=[1;0;0;0;0;0;0;0.0057];
tic, [t,x]=ode45(f,[0,10],x0,ff); toc
plot(t,x), length(t) % solve the differential equations
The elapsed time is 0.57 seconds, and the number of points computed is 9 913. It can be
seen from the solution efficiency that the speed of the solution process is slightly lower
than that of the implicit differential equation solver. Therefore, implicit differential
equation solvers are better for this type of problems.
For this particular problem, the mass matrix is clearly singular: there is no way to find M⁻¹ and convert the equation into a first-order explicit differential equation. Therefore the ode45() solver cannot be called in that way to solve the problem:
The numerical solution can be found directly for the differential equation. The total
elapsed time is 0.24 seconds, which is obviously better than that obtained previously.
The number of points computed is 9 913, the same as with the differential-algebraic
equation solver, and more than with the implicit differential equation solver.
$$\begin{cases} k'(t) = \dfrac{1}{20}\big(c(t) - k(t)\big),\\[1mm] c'(t) + \dfrac{1}{15}p'(t) = -\dfrac{1}{75}\big(p(t) - 99.1\big),\\[1mm] M'(t) = \mu(t) - m(t),\\[1mm] \dfrac{p(t)}{P(t)} = 3.35 - 0.075m(t) + 0.001m^2(t),\\[1mm] P^2(t) = 49.58^2 - \left(\dfrac{\mu(t)}{1.2k(t)}\right)^2,\\[1mm] M(t) = 20P(t),\\[1mm] \mu(t) = 15 + 5\tanh(t-10) \end{cases}$$
Solutions. It can be seen from the given model that the dynamical signals k(t), c(t),
p(t), M(t), P(t), m(t), and μ(t) can be regarded as the state variables. If the given signal
μ(t) is substituted into the differential equation, it can be seen that there are 7 state
variables, and there are also 7 equations. Let x1 (t) = k(t), x2 (t) = c(t), x3 (t) = p(t),
x4 (t) = M(t), x5 (t) = m(t), x6 (t) = P(t) and x7 (t) = μ(t). The differential-algebraic
equation can easily be set up as
Unfortunately, the error message “This DAE appears to be of index greater than 1” is displayed and the solution process is aborted. Since the rank of M is 3, and there are 7 state variables, the differential-algebraic equation is of index 4, which cannot be handled by many solvers. Manual methods may be used to reduce the index. For instance, μ(t) is a given function, so this signal can be computed directly without assigning a state to it. Besides, from the simple algebraic relationship M(t) = 20P(t), one redundant state P(t) can be eliminated. Of course, manual index reduction like this can only reduce the index to 2, rather than 1. The semi-explicit differential-algebraic equation solver discussed earlier still cannot be used.
It can be seen from the example that there is a major limitation in the solver that
the equation must have index 1. Later the solutions for differential-algebraic equations
of index larger than 1 will be explored.
$$\begin{cases} p'(t) = u(t),\\ q'(t) = v(t),\\ m u'(t) = -p(t)\lambda(t),\\ m v'(t) = -q(t)\lambda(t) - mg,\\ 0 = p^2(t) + q^2(t) - l^2 \end{cases} \tag{5.3.4}$$
where g = 9.81, m and l are constants. The model describes the motion of a pendulum
of mass m and length l.
This equation is a differential-algebraic equation of index 3. It cannot be solved
directly with solvers such as ode45(). The numerical methods will be explored later.
5.3 Differential-algebraic equations | 175
It can be seen from the previous examples that MATLAB provides a differential-
algebraic equation solver applicable only for index-1 equations, with semi-explicit
form. The solution process has limitations.
Now consider the implicit differential equation shown in (5.2.1). It can be seen from the mathematical model that it does not require every equation to contain derivative terms. Therefore, even if some of the equations degenerate to algebraic constraints, the modeling structure can still be used. In this section, examples are given to explore the solution of differential-algebraic equations with implicit differential equation solvers. Equations with higher indices, or those which cannot be expressed as semi-explicit differential-algebraic equations, are also explored.
Example 5.18. Solve again the differential-algebraic equation in Example 5.14 with an
implicit differential equation solver.
>> f=@(t,x,xd)[xd(1)+0.2*x(1)-x(2)*x(3)-0.3*x(1)*x(2);
xd(2)-2*x(1)*x(2)+5*x(2)*x(3)+2*x(2)^2;
x(1)+x(2)+x(3)-1];
If the implicit differential equation solver ode15i() provided in MATLAB is used, the consistent initial values should be found first. If one selects x0 = [0.8, 0.1, 2]ᵀ and the initial derivative values xd0 = [1, 1, 1]ᵀ, the following MATLAB commands can be issued:
However, the error message “Try freeing 1 fixed component” is returned, indicating that the three values in x0 cannot all be assigned; one of them should be set to a free value. For instance, the initial value of x3 can be set free. The vector x0 can be kept unchanged, and we only need to modify its indicator. The following commands can be used to compute consistent initial values and directly solve the differential-algebraic
equation. For this particular example, the elapsed time is 0.4 seconds and the number
of points is 2 916. They are similar to the previous case using other methods.
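The commands described above might look as follows; this is only a sketch, with f being the anonymous function defined above, while the tolerances and the solution interval are assumptions.

>> x0=[0.8; 0.1; 2]; x0F=[1;1;0]; % the third component of x0 is set free
   xd0=rand(3,1); xd0F=zeros(3,1);
   [x0,xd0,f0]=decic(f,0,x0,x0F,xd0,xd0F); % consistent initial values
   ff=odeset; ff.RelTol=1e-8; ff.AbsTol=1e-8; % assumed tolerances
   tic, [t,x]=ode15i(f,[0,20],x0,xd0,ff); toc
   plot(t,x)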
Example 5.19. Use the differential-algebraic equation solver to solve the implicit dif-
ferential equation in Example 4.17.
Solutions. In Example 4.19, the inverse of matrix A(x) was used directly to convert the model into a first-order explicit differential equation, so that solvers such as ode45() could be used, and the numerical solution of the differential equation could then be found. Since the assumption that A(x) is a nonsingular matrix was made, the solution process was not very rigorous. Therefore, the equation should be solved again without making such an assumption.
It can be seen that two anonymous functions can be used to describe the equation
and the mass matrix:
>> f=@(t,x)[1-x(1);-x(2)];
M=@(t,x)[sin(x(1)),cos(x(2));-cos(x(2)),sin(x(1))];
The result obtained is exactly the same as that shown in Figure 5.15. Since there is no
analytical solution for the original differential equation, there is no method to decide
which solution is more accurate.
Example 5.20. Solve the differential-algebraic equation in Example 5.16 with the im-
plicit differential equation solver.
When the above statements are employed to find the consistent initial values, an error message “Try freeing 4 fixed components” is displayed. Again this shows that the index of the differential-algebraic equation is 4. If the last 4 components of x(0) are set as free values, the following statements can be used to solve the differential-algebraic equation. Note that the differential equation model is a stiff one, and the error tolerance cannot be assigned very small values, otherwise the solution process may not converge. According to [31], the relative tolerance is set to 10⁻⁴.
The consistent initial values are obtained as follows. The curves of k(t) and m(t) are obtained as shown in Figure 5.16. It can be seen that the results are the same as those
Solutions. The idea of index reduction for this problem was discussed in Example 5.16: we should select the state variables
x1(t) = k(t), x2(t) = c(t), x3(t) = p(t), x4(t) = M(t), x5(t) = m(t).
The other two signals need not be selected as states, since they are either a given function or can be expressed simply through other state variables; that is, P(t) = 0.05x4(t) and μ(t) = 15 + 5 tanh(t − 10). Therefore the new implicit differential equation can be written as
The index is reduced to 2. The latter two initial values can be set to free values, so
that the following commands can be issued to solve the differential equation again.
The results are the same as those in Example 5.20. The elapsed time is 2.52 seconds,
similar to that in the previous example.
>> f=@(t,x,xd)[xd(1)-(x(2)-x(1))/20;
      xd(2)+xd(3)/15+(x(3)-99.1)/75;
      xd(4)-15-5*tanh(t-10)+x(5);
      x(3)-0.05*(3.35-0.075*x(5)+0.001*x(5)^2)*x(4);
      -(0.05*x(4))^2+49.58^2-((15+5*tanh(t-10))/(1.2*x(1)))^2];   % implicit model
x0=[0.25; 0.2500145529515559; 99.1;
    734.0477598381585; 9.998240239254898];
x0F=[1;1;1;0;0]; xd0=rand(5,1); x0dF=0*x0F;      % the latter two initial values are free
[x0,xd0]=decic_new(f,0,x0,x0F,xd0,x0dF,1e-14)    % consistent initial values
ff=odeset; ff.AbsTol=1e-10; ff.RelTol=1e-4;
tic, [t,x]=ode15i(f,[0,40],x0,xd0,ff); toc % solve equation
The good news is that when the index is reduced, a smaller relative error tolerance can
be used, although the solution may then become very time-consuming. For instance, if the relative error
tolerance is set to 10⁻⁵, the elapsed time is 19.25 seconds, while if it is set to 10⁻⁶, the
time needed is increased to 209.41 seconds.
The consistent initial values are obtained in the same way. Note that the initial values for the
state variables are the same as those in the previous example, but the initial derivative
vector is different.
Solutions. It was indicated in Example 5.17 that, due to the existence of the p2 (t) +
q2 (t) − l2 = 0 term, the differential equation cannot be converted into a semi-explicit
differential-algebraic equation. Therefore an implicit differential equation solver can
be tried by selecting the state variables x1 (t) = p(t), x2 (t) = q(t), x3 (t) = u(t), x4 (t) =
v(t), and x5 (t) = λ(t). The standard form of the implicit differential equation can be
written as
With the implicit differential equation model, an anonymous function can be writ-
ten to express it, and the consistent initial values can be found with the following
commands:
With the consistent initial values, the following commands can be tried to obtain
more accurate values. But it is found that in the solution process, the value of λ(0) is
not unique.
With the obtained consistent initial values, the differential-algebraic equation can be
solved. For this particular problem, the solution process may fail, since it is singular
at the initial time instance.
It has been mentioned that if the index is larger than 1, some of the solution methods
may fail. An index reduction technique should be introduced to convert the differen-
tial equations. For instance, derivatives can be taken of an algebraic equation, such
that a new differential-algebraic equation can be found. Examples are shown next to
demonstrate the index reduction and solution method.
Solutions. Taking the first-order derivative of the algebraic equation p2 (t) + q2 (t) − l2 =
0 with respect to t, it is found that
p(t)u(t) + q(t)v(t) = 0.
The new equation can be used to replace the original algebraic constraint in
(5.3.4), and the index of the equation is reduced to 2. The p²(t) + q²(t) − l² = 0 constraint
vanishes. If one tries to solve the differential-algebraic equations in this form, this still cannot
be done directly, so the implicit form of the equation is used instead.
With the standard form of the implicit differential equation, the following state-
ments can be used to solve the equation:
It can be seen that the maximum error in the algebraic constraint is 6.1655 × 10−11 . The
time responses of the states obtained are as shown in Figure 5.17. The elapsed time
is 7.23 seconds, and the total number of points is 60 632. It can be seen that by index
reduction, the original problem can be solved, and the results are satisfactory.
Solutions. It is not difficult to write the standard form of the semi-explicit differential-
algebraic equation as follows:
\begin{bmatrix}1&0&0&0&0\\0&1&0&0&0\\0&0&1&0&0\\0&0&0&1&0\\0&0&0&0&0\end{bmatrix} x'(t)=
\begin{bmatrix}x_3(t)\\x_4(t)\\x_1(t)x_5(t)\\x_2(t)x_5(t)+9.81\\x_3^2(t)+x_4^2(t)-x_5(t)-9.81x_2(t)\end{bmatrix}.
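A minimal sketch of the corresponding solution statements is given below. The initial values
are assumptions chosen only to satisfy the algebraic constraint (p(0) = 1, all other states zero),
and the time span is also assumed.

>> M=diag([1 1 1 1 0]);                 % singular mass matrix of the semi-explicit form
f=@(t,x)[x(3); x(4); x(1)*x(5); x(2)*x(5)+9.81;
   x(3)^2+x(4)^2-x(5)-9.81*x(2)];
ff=odeset; ff.Mass=M; ff.RelTol=1e-8;
x0=[1; 0; 0; 0; 0];                     % assumed consistent initial values
[t,x]=ode15s(f,[0,5],x0,ff); plot(t,x)  % assumed time span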
Definition 5.6. The multiple models of the differential equation of switched systems
can be expressed as
Certain switching laws are defined such that the entire model switches among the
subsystems under the control of these laws. With the switched system theory,
a controller can be designed to stabilize the whole system, under certain switching
laws. The subsystems fi(⋅) may individually be unstable.
In this section, numerical solutions of linear switched systems are demonstrated.
Then the concept of zero-crossing detection is proposed. Finally, numerical solutions
of nonlinear switched differential equations are studied.
In theoretical references such as [66, 78, 81], we commonly see demonstrative exam-
ples that allow switching among different subsystems. In this section, a linear switched
differential equation is formulated with physical explanations. Then examples are
used to demonstrate the numerical solution methods.
Definition 5.7. The state space form of a linear switched system model is
Here the physical meaning of the so-called switching law is that if the time and
states satisfy the ith preset condition, the system is switched to the ith subsystem
model. It is not hard to see that if the switching law can be expressed clearly in math-
ematics, anonymous or MATLAB functions can then be used to describe the entire
state space model such that numerical methods can be used to solve switched systems
directly. A simple example is given next to demonstrate the numerical solutions of the
switched linear differential equation.
Example 5.25. Assume that the system model is given by x'(t) = Ai x(t), where

A1 = [0.1, −1; 2, 0.1],   A2 = [0.1, −2; 1, 0.1].
It can be seen that the two subsystems are unstable. If x1 (t)x2 (t) < 0, that is, the
states are located in quadrants II and IV in the phase plane, the coefficient is switched
to subsystem A1 , while if x1 (t)x2 (t) ⩾ 0, that is, if the states are located in quadrants
I and III, the coefficient matrix is switched to subsystem A2 . If the initial states are
x1 (0) = x2 (0) = 5, solve the differential equation.
Solutions. According to the system model and switching laws, it is easy to use an
anonymous function to describe the switched system as
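A minimal sketch of such a description, based on the quadrant conditions stated above, is:

>> A1=[0.1 -1; 2 0.1]; A2=[0.1 -2; 1 0.1];
f=@(t,x)(x(1)*x(2)<0)*A1*x+(x(1)*x(2)>=0)*A2*x;   % switch between subsystems A1 and A2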
Therefore, the following statements can be used to solve the switched differential
equation directly. The time responses of the state variables are obtained as shown in
Figure 5.18.
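A sketch of such statements is given below; the time span is an assumption, since it is
not stated explicitly here.

>> ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;   % tight tolerances
[t,x]=ode45(f,[0,30],[5;5],ff); plot(t,x)         % assumed time span
% plot(x(:,1),x(:,2)) then gives the phase plane trajectory of Figure 5.19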
The phase plane trajectory of the system can also be drawn as shown in Figure 5.19. It
can be seen that although the two subsystems are unstable, under appropriate switch-
ing laws, the entire switched system is stable.
It can be seen from the previous example that the switching conditions of the two
subsystems are respectively x1(t)x2(t) < 0 and x1(t)x2(t) ⩾ 0. The two conditions are
mutually exclusive: if one of them is satisfied, the other is not. Since the signals x1(t)
and x2 (t) are continuous, there must be a certain point when x1 (t)x2 (t) = 0 is satisfied.
This is the so-called zero-crossing point. If the zero-crossing point can be detected
accurately, the simulation process may run correctly. Now let us have a look at the
importance of zero-crossing detection.
Example 5.26. Draw the curve of the function y = |sin t²|, t ∈ (0, π).
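A fixed-grid attempt such as the following sketch (the grid density is an assumption)
reproduces the kind of distortion discussed next:

>> t=0:0.1:pi; y=abs(sin(t.^2));   % a coarse fixed grid
plot(t,y)                          % the dips near the zeros of sin t^2 are cut off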
It is obvious that the values of the function at some particular points are wrong. Where
the function values approach 0, say at points A, B, and C, the function value should
decrease to 0 and then increase gradually, with a sharp turn at a point such as A.
If there is a mechanism to find the exact point when the function
value is 0, then that point can be inserted into vector t such that the correct curve
can be drawn. The method of finding the exact point is the so-called zero-crossing
detection. In variable-step differential equation solvers, if the error tolerance is set
to a small enough value, the zero-crossing detection facilities will be automatically
implemented, while in fixed-step algorithms, zero-crossing detection is not ensured.
Therefore the phenomena in Figure 5.20 may be inevitable, and a variable-step
algorithm with a small error tolerance must be employed in solving switched differen-
tial equations.
The user may select the zero-crossing points of the signal which is to be monitored.
If the user is interested in a particular signal, then the point at which the signal equals
zero is regarded as a zero-crossing point. The case when the signal equals zero can be
regarded as an event. In the differential equation solution process, the user may define
responses to certain events. For instance, the intermediate data at the zero-crossing
points or solution processes may be controlled by setting these events.
If one wants to detect zero-crossing points and make certain responses at such
points, response functions should be written. Response functions are MATLAB
functions. Meanwhile, the Events member of the control options should be set to the function
handle. The event setting and responding will be demonstrated next through exam-
ples.
If an event detection mechanism is used, the syntax of the solver ode45() should
be adjusted to

[t,x,te,xe,ie]=ode45(f,[t0,tn],x0,options)

where te, xe, and ie return respectively the time instances at which the events happen, and the
state values and crossing directions when the events happen.
Example 5.27. Consider the switched system model in Example 5.25. If in the plot
in Figure 5.18, the x1 (t)x2 (t) = 0 event points are needed to be superimposed, the
expression |x1 (t)x2 (t)| = 0 can be set as the condition for the zero-crossing points.
Superimpose all the zero-crossing points on the plot.
Solutions. Under the double precision framework, the condition |x1 (t)x2 (t)| = 0 is too
strict from the numerical viewpoint. Therefore, condition |x1 (t)x2 (t)| < ϵ is usually
used instead, for instance, with ϵ = 10−11 . For this particular problem, if ϵ is selected
too small, the zero-crossing point may not be detected. To better describe the zero-
crossing points, logical expressions are used to describe the values of the zero-crossing
conditions.
If the information at zero-crossing points needs to be intercepted, a response func-
tion in MATLAB is written. The input arguments are t and x, and there can be ad-
ditional parameters. The returned argument is values, returning the values of the
zero-crossing conditions. Besides, other arguments can be returned, such as isterminal,
which can be used to control the solution process, with 1 for termination of the solution
process; argument direction is used to express the direction of the zero-crossing.
Normally it is set to 1. If set to zero, both directions are included in the events,
which means that a zero-crossing event may be detected twice.
function [values,isterminal,direction]=event_fun(t,x)
values=abs(x(1)*x(2))>=1e-11;   % logical condition: 0 only when |x1*x2| is nearly zero
isterminal=0; direction=1;      % do not terminate; detect positive-going crossings
With such a response function, the following commands can be used to solve the
original problem again. The time responses of the states are drawn, superimposed
with zero-crossing points, as shown in Figure 5.21. It can be seen that all 27 zero-
crossing points are detected. The returned vector ie is a vector of 1s, indicating that
zero-crossings in the positive direction are detected. It is as one has expected.
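A minimal sketch of these statements, reusing the switched model f from Example 5.25
(the time span is again an assumption), is:

>> ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
ff.Events=@event_fun;                      % register the zero-crossing response function
[t,x,te,xe,ie]=ode45(f,[0,30],[5;5],ff);   % te, xe, ie describe the detected events
plot(t,x,te,xe,'o')                        % superimpose the zero-crossing points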
In this section, functions such as ode45() are used mainly to solve differential
equations. The simulation method for switched differential equations is not restricted
to linear systems. It can be applied directly to nonlinear switched differential equa-
tions.
Similar to the ideas discussed earlier, a MATLAB function can be used to express
the switched differential equation. Then solvers such as ode45() can be used to solve
these differential equations directly. The examples given next demonstrate the numer-
ical solution process of nonlinear switched differential equations.
The column vector x(t) = [x1 (t), x2 (t), θ(t)]T is composed such that the robot model
may reach the final states at x(tn ) = 0. The following switching laws are considered:
Condition D1 is set to |x3 (t)| > ‖x(t)‖/2. If this condition is satisfied, then the
control law is set to
sgn(α) = {1, if α ⩾ 0; −1, otherwise}
and
y1(t) = θ(t),
y2(t) = x1(t) cos θ(t) + x2(t) sin θ(t),
y3(t) = x1(t) sin θ(t) − x2(t) cos θ(t).
If the initial values are x1 (0) = x2 (0) = 5, θ(0) = π, solve the switched differential
equation.
Solutions. This example is rather complicated. It is not suitable to describe the sys-
tem with anonymous functions. A MATLAB function should be written instead. For
MATLAB functions, the input arguments are t and x. The intermediate variables y1 (t),
y2 (t), and y3 (t) can be computed first. Then from Condition D1 , the control laws can
be implemented with a conditional structure to compute uI (t) and uII (t), and finally
to compose the derivative of the states x (t). Based on the above consideration, the
following MATLAB function can be written:
function dx=c5mwheels(t,x)
c=cos(x(3)); s=sin(x(3));
y=[x(3); x(1)*c+x(2)*s; x(1)*s-x(2)*c];        % intermediate variables y1, y2, y3
if abs(x(3))>norm(x)/2                          % condition D1
   u=[-4*y(2)-6*y(3)/y(1)-y(3)*y(1); -y(1)];    % control law u_I
else
   sgn=-1; if y(2)*y(3)>=0, sgn=1; end
   u=[-y(2)-y(3)*sgn; -sgn];                    % control law u_II
end
dx=[u(1)*c; u(1)*s; u(2)]; % compute the derivatives of the states
It can be seen that the following commands can be used to solve the nonlinear dif-
ferential equations directly. Under the control laws, the phase plane plot obtained is
shown in Figure 5.22. Note that the solution process is rather time consuming (it takes
16.4 seconds).
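A sketch of such statements is shown below; the tolerances and the time span are assumptions.

>> ff=odeset; ff.RelTol=1e-8; ff.AbsTol=1e-8;         % assumed tolerances
tic, [t,x]=ode45(@c5mwheels,[0,20],[5;5;pi],ff); toc  % assumed time span
plot(x(:,1),x(:,2))                                   % phase plane plot of x1 and x2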
It is a pity that, although the author faithfully described the theory and MATLAB
implementations, and different control options were tried, the expected solution could
never be reached.
The so-called discontinuous differential equations are those where certain parame-
ters may have discontinuities or sudden jumps. In normal cases, in order to solve
discontinuous differential equations well, the relative error tolerances should not be
set too small; a value such as 10⁻⁵ is usually adequate. Besides, stiff equation solvers are not rec-
ommended. An example of a discontinuous system is demonstrated next.
where D = 0.1, μ = 4, A = 2, and ω = π. The initial values are y(0) = 3 and y′(0) = 4.
Solutions. Due to the existence of the μ sgn(y′(t)) term, used in modeling the Coulomb
friction, the friction term, which depends on the sign of the velocity y′(t), jumps between −4 and 4.
The jump scale is large such that the differential equation is
discontinuous. Many numerical algorithms cannot be used in handling such jumps.
Now a stiff solver is used to solve the problem. Letting x1(t) = y(t) and x2(t) = y′(t),
the original differential equation can be rewritten into the standard form of first-order
explicit differential equations:

x'(t) = \begin{bmatrix} x_2(t) \\ A\cos\omega t - 2Dx_2(t) - \mu\,\mathrm{sgn}(x_2(t)) - x_1(t) \end{bmatrix}.
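A minimal sketch of the solution statements is given below; the time span is an assumption,
and ode15s() can be tried in place of ode45() for comparison.

>> D=0.1; mu=4; A=2; w=pi;
f=@(t,x)[x(2); A*cos(w*t)-2*D*x(2)-mu*sign(x(2))-x(1)];
ff=odeset; ff.RelTol=1e-5;                % a moderate error tolerance
tic, [t,x]=ode45(f,[0,5],[3;4],ff); toc   % assumed time span
plot(t,x)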
Under the setting here, the number of points computed is as high as 5 211 569. If a
smaller relative error tolerance is adopted, the time needed may further increase, and
the solutions may not even be found.
It can be seen from the result that at t1 = 0.5628, the first jump happens, yet the
solvers ode15s() and ode45() both successfully found the solution at this point. At
the second jump t2 = 2.0352, the solver ode15s() failed and aborted. The ode45()
solver automatically changed into a small step-size mode, until t3 = 2.6281, when the
jump period terminates. Large step-size computation is resumed until the next jump.
The changes of step-size in the entire solution process are as shown in Figure 5.24.
The minimum step-size is 1.9440 × 10−7 . One can clearly see the behavior of the solver
in the jumping regions.
Theorem 5.1. From the given linear stochastic differential equation model, the corre-
sponding transfer function can be written down as
G(s) = \frac{b_0 s^m + b_1 s^{m-1} + \cdots + b_{m-1}s + b_m}{s^n + a_1 s^{n-1} + \cdots + a_{n-1}s + a_n}.   (5.5.2)
In control system theory, the transfer function of a system is defined as the ratio
of the Laplace transforms of the output and input signals. The transfer function can
be regarded as the gain of the system in the s-domain.
Theorem 5.2. If the input signal ξ (t) is Gaussian white noise with zero mean and vari-
ance σ 2 , the output signal is also Gaussian, with zero mean and a variance of
\sigma_y^2 = \frac{\sigma^2}{2\pi j}\int_{-j\infty}^{j\infty} G(s)G(-s)\,ds.   (5.5.3)
In many academic research fields, continuous stochastic simulations are carried out
where the differential equations are driven by random number generators, so as to
find the simulation results. In fact, the method is incorrect. An example is given next
to show the phenomenon.
where the initial value for the equation is y(0) = 0. In the system with a = 1, the input
signal ξ (t) is N(0, σ 2 ) white noise. Find the probability density function of the output
signal.
Solutions. It is known from stochastic differential equation theory that[4] the output
signal y(t) is also Gaussian, with zero mean and variance σy2 = σ 2 /2.
Now let us observe the simulation analysis of such a differential equation. Al-
though derivatives cannot be used mathematically for such equations, in the solution
process, they can be regarded as “derivatives”:
y'(t) = -\frac{1}{a}y(t) + \frac{1}{a}\xi(t).
Let us assume that σ = 1. It is natural to generate signal ξ (t) as a set of random
numbers from N(0, 1), then solve the differential equation. Fixed-step Euler’s method
can be used, and some different step-sizes are selected. Let us observe the simulation
results.
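A minimal sketch of such an experiment is given below; the step-sizes and the simulation
horizon are assumptions.

>> a=1; T=20;                                     % assumed simulation horizon
for h=[0.1 0.01 0.001]
   t=0:h:T; y=zeros(size(t)); e=randn(size(t));   % N(0,1) samples for xi(t)
   for k=1:length(t)-1
      y(k+1)=y(k)+h*(-y(k)/a+e(k)/a);             % Euler step
   end
   disp([h, var(y)])                              % the variance changes with h
end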
It can be seen that different step-sizes h yield different variances of the output signal.
This is obviously wrong, since the theoretical value is a fixed variance of 1/2. It is
obvious that this simulation method has problems.
It can also be shown in theory that with the simulation method, the variance of the
output signal is not a constant. Assume that within each step-size the input signal is
a constant ek, having normal distribution N(0, 1); then, discretizing the system model,
the following model can be obtained:

y_{k+1} = e^{-h/a} y_k + \sigma(1 - e^{-h/a})e_k.

Squaring and taking expectations,
E[y_{k+1}^2] = e^{-2h/a}E[y_k^2] + 2\sigma e^{-h/a}(1 - e^{-h/a})E[e_k y_k] + \sigma^2(1 - e^{-h/a})^2 E[e_k^2].

If the input and output signals are both stationary processes, then E[y_{k+1}^2] =
E[y_k^2] = \sigma_y^2. Since the two signals y_k and e_k are mutually independent, E[y_k e_k] = 0. It
can be shown that

\sigma_y^2 = \frac{\sigma^2(1 - e^{-h/a})^2}{1 - e^{-2h/a}} = \frac{\sigma^2(1 - e^{-h/a})}{1 + e^{-h/a}}.
where A, B, and C are compatible matrices. Signal d(t) is deterministic, while γ(t) is a
Gaussian white signal, satisfying
Defining a variable γc (t) = Bγ(t), it can be shown that γc (t) is also a Gaussian
white noise, satisfying
Assuming that t0 = kh, t = (k + 1)h, where h is the step-size, and that within each
step the deterministic input signal d(t) is a constant for kh ⩽ t ⩽ (k + 1)h, one has
d(t) = d(kh). The discretized form of (5.5.9) can be written as
where F = e^{Ah}, G = \int_0^h e^{A\tau}B\,d\tau, and

\gamma_d(kh) = \int_{kh}^{(k+1)h} e^{A[(k+1)h-\tau]}\gamma_c(\tau)\,d\tau = \int_0^h e^{A\tau}\gamma_c[(k+1)h-\tau]\,d\tau.   (5.5.12)
It can be seen that the matrices F and G are the same as the discretized ones in
the deterministic system. They can be found easily. If the system contains stochastic
196 | 5 Special differential equations
signals, the discretized form is different from that of the deterministic systems. It can
be shown that γd (t) is also a Gaussian white noise, satisfying
where V = \int_0^h e^{At} V_c e^{A^T t}\,dt. With the Taylor series technique, it is found that

V = \int_0^h \sum_{k=0}^{\infty}\frac{R_k(0)}{k!}t^k\,dt = \sum_{k=0}^{\infty} V_k   (5.5.14)
with initial values R0 (0) = R(0) = V c and V 0 = V c h. It is known from singular value
decomposition that matrix V can be written as V = UΓU T , where U is an orthogo-
nal matrix, while Γ is a diagonal matrix whose diagonal elements are nonzero. With
Cholesky decomposition, V = DDT . It is found that γd (kh) = De(kh), where e(kh) is an
n × 1 vector, and e(kh) = [ek , ek+1 , . . . , ek+n−1 ]T , such that each component ek has the
standard normal distribution, i. e., ek ∼ N(0, 1). The recursive discretized form can be
written as
Based on the above algorithm, the following MATLAB function can be written to
discretize linear stochastic differential equations:
function [F,G,D,C]=sc2d(G,sig,T)
G=ss(G); G=balreal(G); Gd=c2d(G,T);            % balanced realization, deterministic discretization
A=G.a; B=G.b; C=G.c; i=1;
F=Gd.a; G=Gd.b; V0=B*sig*B'*T; Vd=V0; V1=Vd;   % first Taylor term V0 = Vc*h
while (norm(V1)>eps)                           % accumulate Taylor terms until negligible
   V1=T/(i+1)*(A*V0+V0*A'); Vd=Vd+V1; V0=V1; i=i+1;
end
[U,S,V0]=svd(Vd); V0=sqrt(diag(S)); Vd=diag(V0); D=U*Vd;   % factorization V = D*D'
The syntax of the function is [F ,G,D,C ]=sc2d(G,σ ,h), where G is the system
model, σ is the covariance matrix of the input signal, h is the sample time, and
(F, G, D, C) are the corresponding matrices in the discretized state space model.
y^{(4)}(t) + 10y^{(3)}(t) + 35y''(t) + 50y'(t) + 24y(t) = \xi^{(3)}(t) + 7\xi''(t) + 24\xi'(t) + 24\xi(t),
and assume that the stochastic differential equation has zero mean. If a white noise
signal is used to excite the system, use the simulation method to find the statistical
properties of the output signal.
Solutions. By Theorem 5.1, the corresponding transfer function of the system is

G(s) = \frac{s^3 + 7s^2 + 24s + 24}{s^4 + 10s^3 + 35s^2 + 50s + 24}.
Assuming that the sample time is selected as h = 0.02 seconds, the following statements
can be used to discretize the model:
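A sketch of such statements is shown below; a unit noise variance σ² = 1 is assumed.

>> G=tf([1 7 24 24],[1 10 35 50 24]);   % transfer function of the system
[F,G1,D,C]=sc2d(G,1,0.02);              % discretize with h=0.02 s, sigma^2=1 assumed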
For the obtained discrete state space model, the following commands can be used
to perform simulation in MATLAB, where the total number of simulation points is
selected as n = 30 000. If this number is small, then statistical analysis results may be
meaningless.
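A minimal sketch of such a simulation is given below; it reuses the matrices from the previous
call and, as a simplification, draws the disturbance samples independently in each step.

>> n=30000; x=zeros(size(F,1),1); y=zeros(1,n);
for k=1:n
   e=randn(size(D,2),1);            % standard normal disturbance samples
   x=F*x+D*e; y(k)=C*x;             % recursive discretized model, no deterministic input
end
v=std(y), plot((0:n-1)*0.02,y)      % standard deviation and time response of the output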
It can be seen that the standard deviation (the ℋ₂ norm) of the output is v = 0.6655. The time response curve is shown
in Figure 5.25. It can be seen that the output signal looks completely random, since the output is
stochastic. Therefore a single curve of the output signal is usually meaningless. Statistical
analysis should be performed instead.
Since the variance of the output signal is known to be v2 , it can be shown in theory
that[4] the probability density function of the output signal is
p(y) = \frac{1}{\sqrt{2\pi}\,v}\,e^{-y^2/(2v^2)}.
Statistical methods can also be used to obtain the histogram of the response data.
Specifically, the range of the output (−2.5, 2.5) is divided equally into subintervals of
width w = 0.2. The total number of points falling into each subinterval can be found,
and then, dividing by nw, the numerical probability density function from the data can
be found and superimposed on the theoretical probability density function curve, as
shown in Figure 5.26. The results from simulation match the theoretical results very
well.
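A sketch of such a statistical comparison, using the simulated signal y and the value v
obtained above, is:

>> w=0.2; xc=-2.4:w:2.4;                 % bin centers covering (-2.5, 2.5)
c=hist(y,xc); bar(xc,c/(n*w)), hold on   % empirical probability density
yy=-2.5:0.01:2.5;
plot(yy,exp(-yy.^2/(2*v^2))/(sqrt(2*pi)*v)), hold off   % theoretical density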
5.6 Exercises
5.1 The simplified dynamic model of a catalytic fluidized bed is[42]
x'(t) = 1.30(y2(t) − x(t)) + 2.13 × 10⁶ k y1(t),
y1'(t) = 1.88 × 10³ (y3(t) − y1(t)(1 + k)),
y2'(t) = 1 752 − 269y2(t) + 267x(t),
y3'(t) = 0.1 + 320y1(t) − 321y2(t)

where x(0) = 761, y1(0) = 0, y2(0) = 600, y3(0) = 0.1, and k = 0.006e^{20.7−15 000/x(t)}.
If t ∈ (0, 100), find the numerical solution of the stiff differential equation.
5.2 Solve the following linear stiff differential equation:[23]
y'(t) = \begin{bmatrix} -2a & a & & & & \\ 1 & -2 & 1 & & & \\ & 1 & -2 & 1 & & \\ & & \ddots & \ddots & \ddots & \\ & & & 1 & -2 & 1 \\ & & & & b & -2b \end{bmatrix} y(t) + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 0 \\ b \end{bmatrix}
where a = 900 and b = 1 000. If the initial vector y(0) is zero and the order of
the coefficient matrix is n = 9, solve the stiff differential equation in the interval
t ∈ (0, 120).
y1'(t) = y2(t),
y2'(t) = y3(t),
y3'(t) = y4(t),
y4'(t) = (y1²(t) − sin y1(t) − Γ⁴) + (y2(t)y3(t)/(y1²(t) + 1) − 4Γ³)
where Γ = 100 and t ∈ (0, 1). Assume that the initial state vector is zero.
5.4 Solve the following stiff differential equation:[33]
x1'(t) = −k1 x1(t) + k2 y(t),
x2'(t) = −k4 x2(t) + k3 y(t),
y'(t) = k1 x1(t) + k4 x2(t) − (k2 + k3)y(t)

where x1(0) = y(0) = 0, x2(0) = 1, and k1 = 8.430327 × 10⁻¹⁰, k2 = 2.9002673 × 10¹¹,
k3 = 2.4603642 × 10¹⁰, k4 = 8.760058 × 10⁻⁶, t ∈ (0, 100). In fact, since this is a
linear differential equation, function dsolve() can be used to find the quasi-
analytical solution. Assess the accuracy of the numerical solutions.
5.6 Solve the following stiff differential equation:[33]
y1'(x) = −0.04y1(x) + 10⁴ y2(x)y3(x),
y2'(x) = 0.04y1(x) − 10⁴ y2(x)y3(x) − 3 × 10⁷ y2²(x),
y3'(x) = 3 × 10⁷ y2²(x)
where y1 (0) = 1, y2 (0) = 0, y3 (0) = 0, and t ∈ [0, 0.3].
5.7 The mathematical model of a chemical reaction is[33]
where A = 7.89 × 10−10 , B = 1.1 × 107 , C = 1.13 × 103 , M = 106 , y1 (0) = 1.76 × 10−3 ,
y2 (0) = y3 (0) = y4 (0) = 0, and t ∈ (0, 1 000). Solve the stiff differential equa-
tion. If the interval is increased to t ∈ (0, 1013 ), solve the differential equation
again.
5.8 Use regular and stiff differential equation solvers to solve the following differen-
tial equation:[32]
y1'(x) = 77.27[y2(x) + y1(x)(1 − 8.375 × 10⁻⁶ y1(x) − y2(x))],
y2'(x) = (1/77.27)[y3(x) − (1 + y1(x))y2(x)],
y3'(x) = 0.161(y1(x) − y3(x))
u1'(t) = u3(t),
u2'(t) = u4(t),
2u3'(t) + cos(u1(t) − u2(t)) u4'(t) = −g sin u1(t) − sin(u1(t) − u2(t)) u4²(t),
cos(u1(t) − u2(t)) u3'(t) + u4'(t) = −g sin u2(t) + sin(u1(t) − u2(t)) u3²(t).
x1'(t) = x3(t) − 2x1(t)y2(t),
x2'(t) = x4(t) − 2x2(t)y2(t),
x3'(t) = −2x1(t)y1(t),
x4'(t) = −g − 2x2(t)y1(t),
x1²(t) + x2²(t) = 1,
x1(t)x3(t) + x2(t)x4(t) = 0,
with known initial values x(0) = [1, 0, 0, 0]T , y(t) = 0. The constant g = 9.81,
and the interval of interest is t ∈ (0, 6). Find the numerical solution of the
differential-algebraic equation.
5.11 Solve the following time-varying differential-algebraic equation:[3]
0 = y1'(t) − y2(t),
0 = (m1 + m2)y2'(t) − m2 L y6'(t) sin y5(t) − m2 L y6²(t) cos y5(t),
0 = y3'(t) − y4(t),
0 = (m1 + m2)y4'(t) + m2 L y6'(t) cos y5(t) − m2 L y6²(t) sin y5(t) + (m1 + m2)g,
0 = y5'(t) − y6(t),
0 = −L y2(t) sin y5(t) − L y4(t) cos y5(t) − L² y6²(t) + gL cos y5(t)
where m1 = m2 = 0.1, L = 1, g = 9.81, y 0 = [0, 4, 2, 20, −π/2, 2]T , and the time
interval is t ∈ (0, 4).
5.13 In Example 5.21, an index 4 differential-algebraic equation has been analyzed,
and the index has been reduced to 2. Try to further reduce the index and
see whether it can be reduced to an index-1 differential-algebraic equation.
Solve the equation and see whether it can be solved with a high efficiency
method.
5.14 At the website [52], many benchmark problems on stiff differential equations
and differential-algebraic equations are provided. The readers may visit that
website to download relevant problems, and test whether the problems can be
solved with MATLAB. What are the accuracy and efficiency when solving these
problems?
5.15 Solve the following linear switched differential equation:
f(x1(t)) = { −4x1(t),      if x1(t) > 0,
             2x1(t),        if −1 ⩽ x1(t) ⩽ 0,
             −x1(t) − 3,    if x1(t) < −1.
A1 = [1, 0; 1, 1],   A2 = [1, 1; 0, 1],   B1 = [1; 0],   B2 = [0; 1],
and the two state feedback vectors are k 1 = [6, 9] and k 2 = [9, 6]. It is known
that the condition to switch from subsystem 1 to 2 is |x1 (t)| = 0.5|x2 (t)|, and
the condition to switch from subsystem 2 to 1 is |x1 (t)| = 2|x2 (t)|. If the initial
states are x 0 = [100, 100]T , solve the switched system and draw the phase plane
trajectory.
5.17 Solve the following discontinuous differential equation,[32] with initial value of
y(0) = 0.3:
y'(t) = { t² + 2y²(t),        if (t + 0.05)² + (y(t) + 0.15)² ⩽ 1,
          2t² + 3y²(t) − 2,   if (t + 0.05)² + (y(t) + 0.15)² > 1.
5.18 Consider a linear feedback control system with unit negative feedback, as shown
in Figure 5.27, where the plant and controller models are given as
[Figure 5.27: a unit negative feedback loop in which the error e(t) = r(t) − y(t) drives the
controller Gc(s) and then the plant G(s), producing the output y(t).]
In Example 1.6, the latent period of a disease was introduced, so that the original
differential equations were changed to delay differential equations. Another exam-
ple is that, in practical engineering applications, sensors are used to measure a cer-
tain signal. If these sensors have their own time delay, the current measured data
may be the actual value of the signal at a previous time. Strictly speaking, this is
also a delay differential equation model. In some applications, the delays may be too
short, and they can be neglected; the delay differential equations can then be
approximated by ordinary differential equation models.
Example 6.1. Consider the delay differential equation with history functions[2]
u″(x) = −(1/16) sin u(x) − (x + 1)u(x − 1) + x

where 0 ⩽ x ⩽ 2. It is also known that

u(x) = x − 1/2, −1 ⩽ x ⩽ 0,   u(2) = −1/2.
Solutions. Since the delay term u(x − 1) exists, the equation here cannot be solved
with the solvers discussed earlier. Special manipulation is needed. Here two cases are
considered for this differential equation.
(1) Let t = x and u1(t) = u(x). For 0 ⩽ t ⩽ 1, the argument t − 1 lies in [−1, 0], so
u(x − 1) = u(t − 1) = (t − 1) − 1/2 = t − 3/2. It can be seen that

u1″(t) = −(1/16) sin u1(t) − (t + 1)(t − 3/2) + t.
(2) Let t = x − 1, that is, x = t + 1, and u2(t) = u(x) = u(t + 1). Then

u″(x) = u2″(t) = −(1/16) sin u(t + 1) − (t + 2)u1(t) + t + 1,

so that the two equations can be written jointly as

u1″(t) = −(1/16) sin u1(t) − (t + 1)(t − 3/2) + t,
u2″(t) = −(1/16) sin u2(t) − (t + 2)u1(t) + t + 1
with boundary conditions u1(0) = −1/2, u2(1) = −1/2, u1(1) = u2(0), and u1′(1) =
u2′(0). It can be seen that in the manually transformed formula, an auxiliary function
is introduced and the delay term is eliminated. A high-order differential equation is
found.
In fact, the manual method can be used to eliminate delay terms, if there are only a
finite number of delay terms, such that ordinary differential equations can be found.
Unfortunately, this kind of manual conversion may be too tedious and error-prone.
A dedicated solver is needed for delay differential equations.
The mathematical forms of delay differential equations with constant delays are intro-
duced in this section. Then a solver provided in MATLAB for delay differential equa-
tions is described, and the solutions of delay differential equations with zero history
information are illustrated.
Definition 6.2. The general form of a delay differential equation with constant delays
is given by
In the mathematical description, x(t) contains the state variables at the current
time instance t, while x(t − τi ) can be understood as the state variable vector τi sec-
onds ago. Assuming that there are m delay constants, τ = [τ1 , τ2 , . . . , τm ], and the
delay constants are given values, based on the information, the values of the first-
order derivative x (t) at time instance t can be computed. Similar to the cases in other
differential equation solutions, the standard form of the delay differential equations
should be written by the user, in the format understandable in MATLAB.
An implicit Runge–Kutta algorithm solver is provided in MATLAB, named
dde23(); the solver can be used to solve delay differential equations with the following
syntax: sol=dde23(f1,τ,f2,[t0,tn],options),
where, as before, options are the controls in the solver, whose initial members can
be extracted with function ddeset(). The function is similar to that in the odeset()
solver. The member names are similar, for instance, AbsTol, RelTol, OutputFcn, and
Events. The definitions are the same as for the differential equation solvers.
In the statements, f1 is a MATLAB function to describe the delay differential equa-
tion, whose syntax will be demonstrated later through examples. Function f2 is used
to express the state variables when t ⩽ t0 . If they are functions, then MATLAB function
handles are used. If they are constants, then constant vectors can be used directly.
The returned argument sol is a structured variable whose sol.x member is the
time vector t and the member sol.y is the matrix x composed of the solutions at
different time instances. The format is different from that used in the solver ode45().
The data in sol.y is arranged in rows. It can be seen that the syntaxes of the functions
are not quite unified. It is expected that in later versions they can be unified.
In describing the delay differential equations, apart from the regular scalar t and
state vector x, a matrix Z is also used as the input argument. The kth column Z(:, k)
stores the values of the state at the delay constant τk , i. e., x(t − τk ).
Examples are used next to demonstrate the solutions of simple delay differential
equations and control parameters, so as to provide useful suggestions for delay differ-
ential equation solutions.
Example 6.2. For the given delay differential equations with constant delays
Solutions. It can be seen that the values of the variables x(t) and y(t) at time instances
t, t −1, and t −0.5 are involved. Therefore, dedicated delay differential equation solvers
are expected. To find the numerical solutions, the differential equations must be first
converted to the standard forms of the first-order explicit differential equations.
A straightforward method for the conversion is to introduce a set of state variables
x1 (t) = x(t), x2 (t) = y(t), and x3 (t) = y (t). Then the first-order explicit differential
equations can be written as follows:
In this equation, two delay constants τ1 = 1 and τ2 = 0.5 are involved. It can be
seen from the first equation that the first delay τ1 corresponds to the first column of the
state matrix Z. Therefore x2 (t − τ1 ) is the first column, second row element of matrix Z,
denoted as Z(2, 1). If the state variable x1 (t − τ2 ) is used, the element Z(1, 2) should
be extracted. Therefore the standard form of the delay differential equation can be
written as
The following anonymous function can be written to describe the delay differen-
tial equations:
>> f=@(t,x,Z)[1-3*x(1)-Z(2,1)-0.2*Z(1,2)^3-Z(1,2);
x(3);
4*x(1)-2*x(2)-3*x(3)];
Since it is known that the history information of the unknown signal x(t) at time
instances t ⩽ 0 is all zero, this kind of problem is also known as a problem with zero
history functions. The zero history information can be described by a zero vector f2 .
Therefore the following statements can be tried to solve the delay differential equa-
tions. Unfortunately, the statements cannot be used in finding the numerical solutions
of the delay differential equations.
It seems that there is no problem in the above statements, yet after an extremely long
period, nothing is found. What causes this kind of phenomenon? In the previous
presentations, the readers became familiar with the error tolerance 100*eps. Unfor-
tunately, such a tight error tolerance cannot be used in solving delay differential equa-
tions. The error tolerances must be increased for delay differential equation solvers.
The setting of the error tolerance and its impact on solution efficiency will be explored
through examples.
Example 6.3. Consider the above example again. Solve the delay differential equation
with different error tolerance and assess the efficiency.
Solutions. With the following MATLAB commands, the numerical solution of the de-
lay differential equation can be found as shown in Figure 6.1:
>> f=@(t,x,Z)[1-3*x(1)-Z(2,1)-0.2*Z(1,2)^3-Z(1,2);
x(3);
4*x(1)-2*x(2)-3*x(3)]; % delay differential equation
ff=ddeset; ee=100*eps; ff.RelTol=ee; ff.AbsTol=ee;
tau=[1 0.5]; x0=zeros(3,1); % setting the delay vector
tic, tx=dde23(f,tau,x0,[0,10],ff); toc % solve the equation
plot(tx.x,tx.y), length(tx.x) % draw the state variables
Note that in the returned variables tx.y, the results are stored in rows. If the signal y(t)
in the original equation is expected, the variable tx.y (2,:) should be used, rather
than tx.y (:,2).
Since the error tolerance is set too tight, the solution process is rather time consum-
ing. The total elapsed time is 89.9 seconds, with 60 006 points computed. Different
tolerance ee values are tried and compared in Table 6.1.
tolerance ee        2.2204×10⁻¹⁴   10⁻¹³    10⁻¹²    10⁻¹¹   10⁻¹⁰   10⁻⁹    10⁻⁸   default
elapsed time (s)    89.9           60.16    11.41    2.84    0.78    0.18    0.10   0.041
number of points    60 006         36 339   16 871   7 833   3 639   1 692   789    70
In the dde23() function call, in order to speed up the computation process, the ex-
pected precision must be reduced. For instance, under the default setting, the elapsed
time is 0.041 seconds, with only 70 points computed. If the obtained curve is superim-
posed on the exact one, no differences can be witnessed.
Although the relationship between the error tolerance and the elapsed time is summarized here,
this information is obtained from a particular example. The specific numbers are not of general
significance, but the trends are meaningful. In normal cases, the value of ee can be set to 10⁻¹⁰
and used to solve the delay differential equations. If the solutions can be found im-
mediately, the error tolerance can be reduced slightly, so as to find solutions of
higher accuracy. If the solutions cannot be obtained in a long time, the error tolerance
should be increased so as to find approximate solutions. In later discussions, if
not specially indicated, the error tolerance is uniformly set to 10⁻¹⁰.
Example 6.4. In a typical feedback control system shown in Figure 6.2, the plant
model G(s) and the controller Gc (s) are described by the following transfer function
models:
Draw the step responses of the system under zero initial values.
Solutions. With the properties of Laplace transform, the transfer function models can
be converted back to differential equations. Since there exists a delay in the plant
model, the corresponding model is a delay differential equation. Let us observe first
the differential equation model from signal u(t) to signal y(t). It is easy to find that the
corresponding delay differential equation model can be written directly as
where δ(t) is the Dirac delta function, named after the British theoretical physi-
cist Paul Adrien Maurice Dirac (1902–1984); it is defined as the derivative of the
step signal. It can be understood as having infinite magnitude at t = 0, while the
function value is zero elsewhere.
Selecting the state variables x1(t) = y(t), x2(t) = y′(t), and x3(t) = u(t), the stan-
dard form of the explicit differential equations can be written as

x'(t) = \begin{bmatrix} x_2(t) \\ x_3(t-1) - 3x_2(t) - 2x_1(t) \\ 1.66\delta(t) - 1.66x_2(t) + 0.91 - 0.91x_1(t) \end{bmatrix}.
For the same problem, Control System Toolbox functions can be used to solve the prob-
lem directly. The two sets of curves coincide.
Note that in the conversion from transfer functions to differential equations, we cannot
forget the δ(t) function term, otherwise wrong results are obtained.
In the previous examples, it is always assumed that when t ⩽ 0, the history function is
x(t) = 0. In real applications, when t ⩽ 0, the state variables are not always so simple.
They can be given constants or functions.
In the solving statement, if x 0 is a constant vector, it means that, when t ⩽ t0 ,
the history values of x are kept as constant x 0 . Care must be taken in understanding
such differential equations. An example is given to demonstrate the constant history
function and its impact on the delay differential equations.
Example 6.5. Consider again the delay differential equations in Example 6.2. If the
history values of the three states are respectively x1 (t) = −1, x2 (t) = 2, and x3 (t) = 0,
when t ⩽ 0, solve the delay differential equations.
Solutions. Compared with the statements in Example 6.2, the only difference is that
the zero vector is changed to vector x 0 . The solutions of the delay differential equations
can be found directly, as shown in Figure 6.4. The elapsed time is 1.24 seconds.
>> f=@(t,x,Z)[1-3*x(1)-Z(2,1)-0.2*Z(1,2)^3-Z(1,2);
x(3);
4*x(1)-2*x(2)-3*x(3)]; % delay differential equations
ff=odeset; ee=1e-10; ff.RelTol=ee; ff.AbsTol=ee;
tau=[1 0.5]; x0=[-1; 2; 0]; % set the delay vector
tic, tx=dde23(f,tau,x0,[0,10],ff); toc % solve the equations
plot(tx.x,tx.y), length(tx.x) % draw the state variables
If the history values are not constants, but rather functions of time, or other state
related functions, anonymous or MATLAB functions can be used to describe the “pre-
history” functions, and express them in function handle f2 . Then the delay differential
equations can be solved numerically. For the users, solving these differential equa-
tions is no more complicated than solving the zero initial value problems, since the
history functions can be described directly. Apart from the history functions, the other
statements are exactly the same. Examples are used next to demonstrate the solutions
of delay differential equations with time-varying history functions.
Example 6.6. Consider again the delay differential equation studied in Example 6.2.
If the history functions of the three state variables are known, as x1 (t) = e2.1t , x2 (t) =
sin t, and x3 (t) = cos t, when t ⩽ 0, solve again the delay differential equations.
Solutions. An anonymous function can be used to depict the history functions when
t ⩽ 0. The following statements can then be used to directly solve the delay differential
equations with history functions, as shown in Figure 6.5. The elapsed time of code
execution is 1.01 seconds, which is similar to that of the previous example, meaning
that the computer burden is not increased.
>> f=@(t,x,Z)[1-3*x(1)-Z(2,1)-0.2*Z(1,2)^3-Z(1,2);
x(3); 4*x(1)-2*x(2)-3*x(3)];
f2=@(t,x)[exp(2.1*t); sin(t); cos(t)]; % describing history functions
lags=[1 0.5]; % specify the delay vector
ff=ddeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
tic, tx=dde23(f,lags,f2,[0,10],ff); toc, plot(tx.x,tx.y)
It can be seen from the examples that if the definition of the history function for
t ⩽ t0 is different, the same delay differential equations may yield completely different
results. Therefore, in solving these delay differential equations, the definition of the
initial values and history functions must be noted. Besides, although the history defini-
tions and the initial values are not the same, the final values of the solutions
of these delay differential equations are virtually the same.
The mathematical form of variable delay differential equations is given first in this
section, then MATLAB solutions of the problems are presented.
x′(t) = f(t, x(t), x(τ1(t, x(t))), x(τ2(t, x(t))), . . . , x(τm(t, x(t))))   (6.2.1)
where 0 ⩽ τi (t, x(t)) ⩽ t are functions of time t, or even functions of the state vari-
ables x(t). It should be noted that the delay should be described as τ1 (t, x(t)), not
t − τ1 (t, x(t)).
Three types of delays are explored in particular, and the solution methods are also
discussed:
(1) Time-dependent delays. If the delays can be expressed as t − τi (t), with τi (t) ⩾ 0,
they are referred to as time-dependent delays.
(2) State dependent delays. If the delays can be written as t−τi (x(t)), with τi (x(t)) ⩾ 0,
they are referred to as state-dependent delays.
(3) Generalized delays. The delays in the general form in Definition 6.3 are referred
to as generalized delays. The delay form is no longer described by t − τi , and it is
a function of x(t) at the time instance τi (t, x(t)). For instance, x(αt) with α ⩽ 1 can
be regarded as a generalized delay.
All these forms of delays can be handled with the MATLAB solver ddesd(), where func-
tion handles are allowed to describe the delays. Of course, function ddesd() can be
used to directly take care of the three types of delays. They can be handled individually
with the solvers in this section.
The syntax of function ddesd() is sol=ddesd(f,fτ,f2,[t0,tn],options),
where f is the function handle for describing the first-order explicit differential equa-
tions; fτ is the function handle for describing the delay functions; f2 is the function
handle for describing history functions. All these functions can be MATLAB or anony-
mous functions.
Examples are given first to demonstrate the solutions of time-dependent delay differential equations with zero history func-
tions. Then the nonzero history function problems are considered.
Example 6.7. If the history functions for each state variables are zero, solve the fol-
lowing variable delay differential equations:
It should be noted that since an anonymous function is used to describe the delays,
the second delay should not be expressed as 0.8. It must be written as t − 0.8, otherwise
erroneous results may be obtained, since 0.8 may be misunderstood as x2(0.8), which
is the state value at time 0.8, not the correct x2 (t − 0.8). Besides, although the delays
are functions of time t, they are independent of states, so when expressing the delay
function, the argument x should still be used to hold the place, otherwise an error
message may be displayed.
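A sketch of the complete solution is given below. The right-hand side is assumed to have the
same structure as the code of Example 6.13, which modifies this example, and the time span is
an assumption.

>> f=@(t,x,Z)[-2*x(2)-3*Z(1,1);
   -0.05*x(1)*x(3)-2*Z(2,2)+2;
   0.3*x(1)*x(2)*x(3)+cos(x(1)*x(2))+2*sin(0.1*t^2)];
tau=@(t,x)[t-0.2*abs(sin(t)); t-0.8];     % the two time-dependent delays
ff=ddeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
sol=ddesd(f,tau,zeros(3,1),[0,10],ff); plot(sol.x,sol.y)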
Now let us see the solutions with nonzero history functions. As illustrated before,
if the history function is expressed in a constant vector x 0 , the machine understands
that when t ⩽ t0 , the values of the state are kept the same as the constants x 0 . If the
history functions are given time domain functions, anonymous or MATLAB functions
should be used to describe history functions directly. Examples are demonstrated next
for such problems.
Example 6.8. For the previous delay differential equations, if the history functions are
given below, solve the equation:
Figure 6.7: Numerical solutions of differential equations with nonzero history functions.
Example 6.9. If the history values for t < 0 are all zero, and only at time t = 0 a
nonzero initial value is defined, solve the differential equations from the previous
example.
Solutions. If the history functions are nonzero only at the time instance t = 0, the
following commands can be used to describe the history functions, where the element-
wise product with the logical expression (t==0) ensures that at time t = 0 the initial
values are returned, while at earlier times the history values are forced to zero.
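A minimal sketch of such a description is shown below, where x0 denotes the assumed nonzero
initial vector; f, tau, and ff are those of the previous examples.

>> x0=[-1; 2; 0];          % assumed nonzero initial vector
f2=@(t,x)x0.*(t==0);       % zero history, the nonzero values are returned only at t=0
sol=ddesd(f,tau,f2,[0,10],ff); plot(sol.x,sol.y)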
Therefore the following MATLAB commands can be used to describe again the differ-
ential equations, and the solutions found are shown in Figure 6.8.
Figure 6.8: Numerical solutions of differential equations with nonzero initial values.
It can be seen that, since the definitions of history values are different, there exist
significant differences in the solutions. The history functions are important and affect
the transient solutions. Therefore, before solving the delay differential equations, the
definition of the history functions must be understood, to avoid ambiguous solutions.
In certain applications, the delay quantities are functions of the states, and such dif-
ferential equations are referred to as state-dependent delay differential equations. In
[34], some examples with state-dependent delays are listed. As introduced before, if
the delay function handles can be constructed, the solver ddesd() can be used in
solving the differential equations directly. By default, the function handles of the delay
quantities are functions of time t and state x(t). Therefore, there is no special treatment
needed to describe the delays. Examples are shown next to demonstrate the solutions
of differential equations with state-dependent delays.
Example 6.10. Solve the following simple delay differential equation with state-
dependent delay:[34]
where, when t ⩽ 0, the history function is given by the following piecewise function:
y(t) = { −1,                     t ⩽ −1,
         3(t + 1)^{1/3}/2 − 1,   −1 < t ⩽ −7/8,
         10t/7 + 1,              −7/8 < t ⩽ 0.
Solutions. The differential equation has one delay term whose mathematical expres-
sion is τ = t − |x(t)|, which is based on the state variable x(t). The delay differential
equation can be easily described with an anonymous function. For history functions,
since they are piecewise functions, an anonymous function can also be used. There-
fore the following statements can be employed to solve the delay differential equation.
The norm of the error is 1.0649 × 10−15 . It can be seen that the accuracy is rather high.
Example 6.11. Consider the delay differential equation in Example 6.7. If t ⩽ 0, the
history functions of the states are respectively x1 (t) = 1 and x2 (t) = x3 (t) = 0. The
If some of the delays cannot be expressed as x(t − τi ), but they can be expressed as
x(g(t)), where g(t) ⩽ t, or using other complicated forms, the problems like this are
known as differential equations with generalized delays. Differential equations with
generalized delays can also be solved directly with function ddesd(). Here examples
are given to demonstrate the solution of differential equations with generalized delays.
x′(t) = [x(t/u(t))]^{u(t)}
where u(t) = (1 + 2t)2 and x(0) = 1, with t ∈ (0, 1). The analytical solution is x(t) = et .
Solutions. For this specific example, since (1 + 2t)² ⩾ 1, we have t/(1 + 2t)² ⩽ t. The
delay here is a generalized one. Besides, since the differential equation does not need the
information for t < 0, the given x(0) is sufficient for the original delay differential
equation.
Anonymous functions can be used directly to describe the differential equation
and the delay term. Then the following commands can be used to find the numerical
solutions. The error can then be evaluated by comparing with the analytical solution, and
its norm is 2.7766 × 10⁻¹⁰. It can be seen that the solution is reliable.
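A minimal sketch of such statements, based on the equation as written above, is:

>> f=@(t,x,Z)Z^((1+2*t)^2);      % x'(t) = [x(t/u(t))]^u(t)
tau=@(t,x)t/(1+2*t)^2;           % the generalized delay t/u(t)
ff=ddeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
sol=ddesd(f,tau,1,[0,1],ff);     % x(0)=1, no earlier history is needed
norm(sol.y-exp(sol.x))           % compare with the analytical solution e^t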
Example 6.13. Slightly modifying the differential equations in Example 6.7, the differ-
ential equations can be described as follows. Solve the following differential equations
with generalized delays, where α = 0.77:
Solutions. It can be seen that the function contains the x2(0.77t) term, not t − 0.77t,
therefore this equation has a generalized delay. The value of the state x2 at time 0.77t is
involved. The following anonymous function can be used to describe the delay terms:
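A sketch consistent with the delay description that reappears later in Example 6.18 is:

>> tau=@(t,x)[t-0.2*abs(sin(t)); 0.77*t];   % the two generalized delays, with alpha=0.77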
With the delay description, the remaining commands are the same as those given
above. The following commands can be used to directly solve the delay differential
equations, with the results shown in Figure 6.10:
>> f=@(t,x,Z)[-2*x(2)-3*Z(1,1);
-0.05*x(1)*x(3)-2*Z(2,2)+2; % explicit form
0.3*x(1)*x(2)*x(3)+cos(x(1)*x(2))+2*sin(0.1*t^2)];
ff=ddeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
sol=ddesd(f,tau,zeros(3,1),[0,10],ff);
plot(sol.x,sol.y) % the solutions and plots
If the delay vectors contain the values of the unknown functions and their derivatives
at future time instances, for example if the value α = 1.1 were used in Example 6.13, so that
future values of the unknowns would be involved, there is no algorithm which can be used
to solve such differential equations. Still the user may use the following expression for
the delay function:
tau=@(t,x)[t-0.2*abs(sin(t)); 1.1*t]
Unfortunately, even function ddesd() is unable to solve this kind of problems, since
the state x(t) at time 1.1t is not previously known. The equations do not have numerical
solutions. The values of x(t) are automatically used to substitute for the values in
x(1.1t).
The previous values of the derivatives are not involved. In delay differential equations
research, if the derivatives of the state variables contain the values from the past, such
differential equations are referred to as neutral-type delay differential equations. In
this section, numerical methods for various neutral-type delay differential equations
are presented.
Definition 6.4. The general form of neutral-type delay differential equations can be
expressed as
x′(t) = f(t, x(t), x(t − τp1), . . . , x(t − τpm), x′(t − τq1), . . . , x′(t − τqk))   (6.3.1)
where the delays of the states and their derivatives are involved. Two vectors τ 1 =
[τp1 , τp2 , . . . , τpm ] and τ 2 = [τq1 , τq2 , . . . , τqk ] are used.
From the given definition, it can be seen that τp and τq can be constant vectors
or functions τp (t, x(t)) and τq (t, x(t)). These functions can be time-dependent, state-
dependent, or generalized. For the mathematical forms of the delays, refer to the presen-
tation given earlier.
Neutral-type delay differential equations can be solved directly with the solver
ddensd(), with the syntax sol=ddensd(f,τ1,τ2,f2,[t0,tn],options).
If the delays in the differential equations are not fixed constants, the format in function
ddesd() can be used to express τ 1 and τ 2 as function handles. They can be described
by anonymous or MATLAB functions.
y′(t) = y(t) + y′(t − 1)
where, when t ⩽ 0, y(t) = 1 and t ∈ (0, 3). The analytical solution of the delay
differential equation is known as
y(t) = { eᵗ,                                        if 0 ⩽ t ⩽ 1,
         eᵗ + (t − 1)e^{t−1},                        if 1 < t < 2,
         eᵗ + e^{t−1} + (t − 2)(t + 2e)e^{t−2}/2,    if 2 ⩽ t ⩽ 3.
Solutions. The following commands can be used to directly describe the neutral-type
delay differential equations. The delay information can also be described, such that
the error of the numerical solutions is found to be 3.2820 × 10−4 . The solutions are
found as shown in Figure 6.11.
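A minimal sketch of such statements is given below; since the delayed value y(t − 1) itself does
not appear in the equation, the first delay argument only serves as a placeholder.

>> f=@(t,y,ydel,ypdel)y+ypdel;     % y'(t) = y(t) + y'(t-1)
ff=ddeset; ff.RelTol=1e-5;
sol=ddensd(f,1,1,1,[0,3],ff);      % state delay, derivative delay, constant history y=1
plot(sol.x,sol.y)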
Figure 6.11: Numerical and analytical solutions of neutral-type delay differential equations.
In the solution process, the warning message “Warning: DDENSD is intended only for mod-
est accuracy. RelTol has been increased to 1e-5.” indicates that the error
tolerance cannot be assigned extremely small values. For this specific problem, the
solution is satisfactory.
where the input signal is u(t) ≡ 1, and the known matrices are
A1 = [−13, 3, −3; 106, −116, 62; 207, −207, 113],
A2 = [0.02, 0, 0; 0, 0.03, 0; 0, 0, 0.04],   B = [0; 1; 2].
Solutions. Since the equation contains simultaneously the terms x′(t) and x′(t − 0.5),
the solver dde23() cannot be used. The solver ddensd() should be used in the solution
process. Here the delay of the state variable is τ 1 = 0.15, while the derivative delay is
τ 2 = 0.5. The following anonymous function can be used to describe the neutral-type
delay differential equation. Then the following commands can be used to solve the
equation, and the curves of the states are obtained as shown in Figure 6.12.
Example 6.16. Consider now the neutral-type delay differential equations with non-
zero history functions[6]
and, for t ⩽ 0, y1 (t) = 0.33 − 0.1t, y2 (t) = 2.22 + 0.1t. In the interval t ∈ (0, 6), find the
numerical solution of the neutral-type delay differential equations.
It can be seen from the syntaxes of function ddensd() that it can be used to directly
solve neutral-type variable delay differential equations. The two delays should be first
described by MATLAB or anonymous functions. Then the function can be used to
directly solve the differential equations. Examples are given next to illustrate the nu-
merical solutions of neutral-type variable delay differential equations.
Example 6.17. The neutral-type delay differential equations in Example 6.16 are
changed here to the following form with variable delays:
y1′(t) = y1(t)(1 − y1(t − |y2(t)|) − 2.9y1′(t − |sin y1(t)|)) − y1²(0.6t)y2(t)/(y1²(t) + 1),
y2′(t) = (y1²(0.6t)/(y1²(t) + 1) − 0.1) y2(t)
where, for t ⩽ 0, y1 (t) = 0.33 − 0.1t and y2 (t) = 2.22 + 0.1t. In the interval t ∈ (0, 6), solve
the neutral-type delay differential equations.
Solutions. It can be seen from the mathematical model that the two delays contained
in the states are t − |y2 (t)| and 0.6t. There is one delay in the derivative, t − | sin y1 (t)|.
Therefore anonymous functions can be used to describe the delays. An anonymous
function can be used to describe also the original differential equation model. The
original problem can be solved with the following MATLAB commands, and the solu-
tion is shown in Figure 6.14. Selecting different solvers and error tolerances, it can be
seen that consistent curves are obtained.
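A sketch of such statements, assuming the model exactly as reconstructed above, is:

>> f=@(t,y,Z,Zd)[y(1)*(1-Z(1,1)-2.9*Zd(1))-Z(1,2)^2*y(2)/(y(1)^2+1);
   (Z(1,2)^2/(y(1)^2+1)-0.1)*y(2)];
tau1=@(t,y)[t-abs(y(2)); 0.6*t];    % delays of the states
tau2=@(t,y)t-abs(sin(y(1)));        % delay of the derivative
f2=@(t)[0.33-0.1*t; 2.22+0.1*t];    % history functions for t<=0
ff=ddeset; ff.RelTol=1e-5;
sol=ddensd(f,tau1,tau2,f2,[0,6],ff); plot(sol.x,sol.y)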
Of course, the function can be used to solve delay differential equations of non-
neutral type. However, there are limitations, since the error tolerance cannot be set to
too small numbers. In this consideration it may not be as good as function ddesd().
Therefore it is not recommended to use this function for non-neutral-type problems.
Example 6.18. Consider again the variable delay differential equation shown in Ex-
ample 6.13, and solve it with the neutral-type solver.
>> f=@(t,x,Z,z)[-2*x(2)-3*Z(1,1);
-0.05*x(1)*x(3)-2*Z(2,2)+2;
0.3*x(1)*x(2)*x(3)+cos(x(1)*x(2))+2*sin(0.1*t^2)];
ff=ddeset; ff.RelTol=1e-5; ff.AbsTol=1e-10;
tau=@(t,x)[t-0.2*abs(sin(t)); 0.77*t]; % describing the delays
sol=ddensd(f,tau,[],zeros(3,1),[0,10],ff);
plot(sol.x,sol.y) % draw solutions
x'(t) = \frac{5}{2}t - \frac{t}{2}e^{4x(t/2)} + t\int_0^t s\,e^{x(s)}\,ds.   (6.4.1)
Solutions. It is known from Example 4.25 that for this system, the integral term cannot
be eliminated by the substitution method. The original equation should be differen-
tiated twice such that the integral term can be removed. The third-order delay differ-
ential equation can be derived. The formulation process can be carried out under the
symbolic framework, so that no manual work is needed.
The original equation is converted into the following third-order delay differential
equation:
x'''(t) = 3te^{x(t)} − 2e^{4x(t/2)}x'(t/2) + t²e^{x(t)}x'(t) − (t/2)e^{4x(t/2)}x''(t/2) − 2te^{4x(t/2)}(x'(t/2))².
1 In the reference, the integrand was erroneously written as se^s. It is modified to se^{x(s)}, otherwise the analytical solution is not x(t) = t².
To solve this equation, three initial values are needed. The given x(0) = 0 is just one of them, and the other two can be found by the following commands:
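A possible sketch of these commands is given below (assumed, not the book's original listing), continuing the symbolic description of (6.4.1); with x(0) = 0 the integral term vanishes at t = 0.

>> syms t s x(t)
   I=int(s*exp(x(s)),s,0,t); f=5*t/2-t/2*exp(4*x(t/2))+t*I;  % x'(t) from (6.4.1)
   x1_0=subs(f,t,0)                                 % x'(0): every term vanishes at t=0
   x2_0=simplify(subs(subs(diff(f,t),t,0),x(0),0))  % x''(0) = 5/2 - exp(0)/2 = 2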
The two other initial values are x'(0) = 0 and x''(0) = 5/2 − e^{4x(0)}/2 = 2.
It can be seen from the delay differential equation that there is a generalized delay τ = t/2. Selecting the state variables as x₁(t) = x(t), x₂(t) = x'(t), and x₃(t) = x''(t), and denoting z(t) = x(t/2), the standard form of the delay differential equation can be written as
x'(t) = [ x₂(t)
          x₃(t)
          3te^{x₁(t)} − 2e^{4z₁(t)}z₂(t) + t²e^{x₁(t)}x₂(t) − te^{4z₁(t)}z₃(t)/2 − 2te^{4z₁(t)}z₂²(t) ].
With the delay differential equation and initial states (in this example, history
information is not used), the following commands can be written to compute the nu-
merical solutions. Compared with the analytical solution, it can be seen that the error
norm is 3.0899 × 10⁻¹¹, and the elapsed time is 0.096 seconds. It can be seen that the
method here is quite efficient.
>> f=@(t,x,Z)[x(2:3);
3*t*exp(x(1))-2*exp(4*Z(1))*Z(2)+t^2*exp(x(1))*x(2)-...
t*exp(4*Z(1))*Z(3)/2-2*t*exp(4*Z(1))*Z(2)^2];
tau=@(t,x)t/2;
ff=ddeset; ff.RelTol=1e-13; ff.AbsTol=1e-13;
tic, sol=ddesd(f,tau,[0;0;2],[0,1],ff); toc
t=sol.x; y=sol.y; plot(t,y), norm(y(1,:)-t.^2)
6.5 Exercises
6.1 Consider the following delay differential equation:[32]
where, when t ⩽ 0, y1 (t) = 5, y2 (t) = 0.1, and y3 (t) = 1. Solve the delay differential
equation for t ∈ (0, 40).
6.2 Solve the following delay differential equation:
y'(t) = by(t − τ)/(1 + yⁿ(t − τ)) − ay(t)
where a = 0.1, b = 0.2, n = 10, and τ = 20, in the interval t ⩽ 1 000.
{ V'(t) = (h₁ − h₂F(t))V(t),
{ C'(t) = ξ(m(t))h₃F(t − τ)V(t − τ) − h₅(C(t) − 1),
{ F'(t) = h₄(C(t) − F(t)) − h₈F(t)V(t)

with

ξ(m(t)) = { 1,               if m(t) ⩽ 0.1,
          { 10(1 − m(t))/9,  if 0.1 < m(t) ⩽ 1
{ y₁'(t) = y₅(t − 1) + y₃(t − 1),
{ y₂'(t) = y₁(t − 1) + y₂(t − 0.5),
{ y₃'(t) = y₃(t − 1) + y₁(t − 0.5),
{ y₄'(t) = y₅(t − 1)y₄(t − 1),
{ y₅'(t) = y₁(t − 1)
where, when t ⩽ 0, y₁(t) = y₄(t) = y₅(t) = e^{t+1}, y₂(t) = e^{t+0.5}, and y₃(t) = sin(t + 1).
The solution interval is t ∈ [0, 1].
6.6 Consider the following complicated delay differential equations:[6]
with H(⋅) being the Heaviside function. When t ⩽ 0, the history functions are
y₁(t) = 5 × 10⁻⁶, y₂(t) = 10⁻¹⁵, and y₃(t) = y₄(t) = y₅(t) = y₆(t) = 0. If t ∈ (0, 300),
solve the delay differential equations.
6.7 Solve the following delay differential equations:[6]
y'(t) = −y(t) + y(t − 20) + (1/20)cos(t/20) + sin(t/20) − sin((t − 20)/20)
where, when t ⩽ 0, y(t) = sin(t/20), and t ∈ (0, 1 000).
6.8 Consider the following delay differential equations:
y'(t) = by(t − τ)/(1 + yⁿ(t − τ)) − ay(t)
where b = 0.2, a = 0.1, n = 10, τ = 20, and t ∈ (0, 1 000). Solve the delay
differential equations and draw the relationship between y(t) and y(t − τ).
6.9 Solve the following variable delay differential equations:[6]
y'(t) = ((t − 1)/t) y(t − ln t − 1)y(t),   t ⩾ 1
where, when 0 ⩽ t ⩽ 1, y(t) = 1.
6.10 Solve the following delay differential equations:[6]
y'(t) = −y(t) + y(t − 2)(2.5 − 1.5y^{2.5}(t − 2)/1000^{2.5})
where 0 ⩽ t ⩽ 40. When −2 ⩽ t ⩽ 0, y(t) = 999.
6.11 Solve the following delay differential equation:[6]
y'(t) = −y(t) + y(α(t)) + (1/20)cos(t/20) + sin(t/20) − sin(α(t)/20)
where 0 ⩽ t ⩽ 1 000 and α(t) = t − 1 + sin t. If −20 ⩽ t ⩽ 0, y(t) = sin(t/20).
6.12 Solve the following delay differential equation:[63]
        { −0.4r(1 − t)y(t),                       0 ⩽ t ⩽ 1 − c,
y'(t) = { −ry(t)(0.4(1 − t) + 10 − e^μ y(t)),     1 − c < t ⩽ 1,
        { −ry(t)(10 − e^μ y(t)),                  1 < t ⩽ 2 − c,
        { −re^μ y(t)(y(t − 1) − y(t)),            2 − c < t
where, when t ⩽ 0, y(t) = 10. The constants are c = 1/√2, r = 0.5, and μ = r/10.
Find the numerical solution of the delay differential equation for 0 ⩽ t ⩽ 10.
6.13 Solve the following delay differential equation:[18]
where, when t ⩽ 0, x(t) = sin t. The analytical solution of the equation is x(t) =
sin t; assess the precision of the numerical solution.
There are two major ways for describing linear differential equations with constant
coefficients. One of them is with high-order linear differential equations (see Defini-
tion 2.15)
The other is through first-order explicit differential equations, also known as state
space equations (see Definition 2.18)
x(t) = e^{A(t−t₀)}x(t₀) + ∫_{t₀}^{t} e^{A(t−τ)}Bu(τ) dτ.    (7.1.3)
Definition 7.1. For given differential equations, if a small change in the input signal
yields a small change in the state variables, the system is stable. In other words, if the
state variables are finite, the differential equation is stable; while if the state variables
tend to infinity, the differential equation is unstable.
Theorem 7.1. If the input signal u(t) is bounded, the necessary and sufficient condition of stability of the linear differential equations with constant coefficients is Re(pᵢ) < 0 for all i = 1, 2, . . . , n, where pᵢ are the eigenvalues of the coefficient matrix A.
It can be seen from today's viewpoint that if the coefficients of the linear differential equations are known, the stability judgement is an easy task, since an advanced calculator or any computer mathematics language can be used to find all the eigenvalues of the coefficient matrix A. The user merely needs to check whether there exists an eigenvalue whose real part is larger than 0. If there is none, the differential equation is stable; otherwise it is unstable, since the corresponding signal tends to infinity when t → ∞, and the differential equation is then divergent. In particular, if there are eigenvalues whose real parts are zero, there are oscillations with constant magnitudes as t → ∞, and the differential equation is then critically stable.
Example 7.1. Judge the stability of the following differential equation:

x'(t) = [  2  1 −2 −1  0 −1
          −1  0  2  1  0  1
           4  0 −5 −2 −1 −1
           2 −3 −2 −2  1 −1
          −5  1  3  1 −2  1
          −4  2  4  3  1  0 ] x(t),    x(0) = [1; 0; 0; 0; 0; 0].
Solutions. It is simple to judge whether this differential equation is stable or not with
direct methods. The coefficient matrix A can be loaded into MATLAB workspace first,
then the eigenvalues can be found. For convenience, symbolic computation is adopted
in computing the eigenvalues.
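A possible sketch of such commands is given below; it is an assumed reconstruction (the matrix entries are taken from the reconstructed statement of Example 7.1), not the book's original listing.

>> A=[2 1 -2 -1 0 -1; -1 0 2 1 0 1; 4 0 -5 -2 -1 -1;
      2 -3 -2 -2 1 -1; -5 1 3 1 -2 1; -4 2 4 3 1 0];
   eig(sym(A))    % eigenvalues computed symbolically; expected: -2, -2, -2, 1, -1±j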
It can be seen that the eigenvalues are −2, −2, −2, 1, −1 ± j. Due to the existence of the eigenvalue 1, which has a positive real part, the differential equation is unstable.
Example 7.2. Judge the stability of the differential equation in Example 7.1 from the
analytical solution or time domain response.
Solutions. In fact, by the method in Chapter 2, it is not hard to compute the analytical
solution as x(t) = eAt x(0).
It can be seen that since the et term exists, when t → ∞, the analytical solution
x(t) tends to infinity, while the terms such as te−2t tend to zero. Therefore the stability
of the differential equation is determined by the existence of eigenvalues with positive
real parts. If there is one such eigenvalue, the differential equation is unstable. The
time domain response of the state variables can be drawn, with the following state-
ments:
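A possible sketch of these statements is given below (assumed, not the book's original listing); it reuses the matrix A from Example 7.1 and the initial state x(0), and the plotting interval is an assumption.

>> syms t; x0=[1;0;0;0;0;0];
   x=expm(sym(A)*t)*x0;      % analytical solution x(t)=e^(At)x(0)
   fplot(x,[0,5])            % time responses of the states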
The curves of the state variables are obtained as shown in Figure 7.1. It can be seen that,
when t increases, some of the states increase as exponential functions. They tend to
infinity quickly. Therefore the differential equation is unstable. It is also seen from the
analytical solution that the states x2 (t) and x4 (t) do not contain the et components,
the two signals are stable, while the whole differential equation is not.
H = [ a₁  a₃  a₅  a₇  ⋯  0
      a₀  a₂  a₄  a₆  ⋯  0
      0   a₁  a₃  a₅  ⋯  0
      0   a₀  a₂  a₄  ⋯  0
      ⋮   ⋮   ⋮   ⋮   ⋱  ⋮
      0   0   0   0   ⋯  aₙ ].                                  (7.1.5)
If the determinants of all the upper-left submatrices of matrix H are all positive, the
differential equation is stable:
Δ₁ = a₁ > 0,    Δ₂ = det[ a₁  a₃; a₀  a₂ ] = a₁a₂ − a₀a₃ > 0,                         (7.1.6)

Δ₃ = det[ a₁  a₃  a₅; a₀  a₂  a₄; 0  a₁  a₃ ] = a₃Δ₂ − a₁(a₁a₄ − a₀a₅) > 0,  . . .    (7.1.7)
The so-defined Hurwitz matrix is a rectangular matrix. Since only the major sub-
matrices are involved, the first n rows are sufficient. The generation of Hurwitz matrix
is demonstrated with examples, with their applications in stability analysis.
Example 7.3. Assess the stability of the differential equation in Example 7.1 with
Routh–Hurwitz criterion.
Solutions. Matrix A can be entered into MATLAB workspace first, then the character-
istic polynomial can be established
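A possible sketch of these statements is given below (assumed, not the book's original listing); charpoly() returns the coefficient vector of the characteristic polynomial.

>> p=double(charpoly(sym(A)))   % A as entered in Example 7.1; coefficients, highest power first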
The characteristic polynomial of matrix A computed with the above statements is

p(s) = s⁶ + 7s⁵ + 18s⁴ + 18s³ − 4s² − 24s − 16.

Of course, since there are sign changes among the polynomial coefficients, it is already sufficient to conclude that the differential equation is unstable. Now, the Hurwitz matrix can be constructed with the following statements:
>> n=length(p)-1; H=zeros(n); d=[];   % initialization (assumed, not in the extracted text)
for i=1:n/2, H(2*i-1,i-1+(2:2:n)/2)=p(2:2:n+1);   % odd rows (assumed reconstruction)
H(2*i,i-1+(2:2:n+2)/2)=p(1:2:n+1);
end, H=H(1:n,1:n) % extract Hurwitz matrix
for i=1:n, d=[d,det(H(1:i,1:i))]; end, d
H = [ 7  18  −24    0    0    0
      1  18   −4  −16    0    0
      0   7   18  −24    0    0
      0   1   18   −4  −16    0
      0   0    7   18  −24    0
      0   0    1   18   −4  −16 ].
It can be seen that there is one negative determinant in d, therefore the differen-
tial equation is unstable. This is consistent with the conclusion in Example 7.1. The
judgement method here is much more complicated than the direct method. It is not
possible to find which of the states are stable, and which are not, with the indirect
method. Therefore indirect methods are not recommended.
Example 7.4. Consider the fourth-order characteristic polynomial s⁴ + ps³ + qs² + rs + h = 0. Show that the stability conditions for the differential equation are:[60]
Solutions. With the following commands, the Hurwitz matrix can be established,
from which the determinants of the four major submatrices can be found. Compared
with the code presented earlier, the zero matrix H is converted to a symbolic one
before the loop structure, otherwise error messages are given. The other statements
are exactly the same as those in the previous example.
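A possible sketch of these commands is given below (assumed, not the book's original listing); the coefficient vector is named pc here to avoid clashing with the symbolic variable p.

>> syms p q r h; pc=[1 p q r h]; n=4; H=sym(zeros(n)); d=[];
   for i=1:n/2, H(2*i-1,i-1+(2:2:n)/2)=pc(2:2:n+1);
   H(2*i,i-1+(2:2:n+2)/2)=pc(1:2:n+1); end
   for i=1:n, d=[d,det(H(1:i,1:i))]; end, d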
H = [ p  r  0  0
      1  q  h  0
      0  p  r  0
      0  1  q  h ].
To ensure that the differential equation is stable, the four determinants in d must be positive. The first three are the same as expected. Now let us have a look at the fourth one, (−hp² + qpr − r²)h. Since the expression inside the parentheses is the third major determinant d₃, the condition d₃h > 0 can be reduced to h > 0.
The definitions of positive and negative definiteness of functions are introduced in this
section first, followed by the definition of Lyapunov function. Finally, the Lyapunov
stability and its criterion are presented.
Definition 7.2. If a scalar function V(x) satisfies V(x) > 0 when x ≠ 0, and V(x) = 0
when x = 0, function V(x) is referred to as a positive definite function; if V(x) ⩾ 0,
then V(x) is a semipositive definite function.
Definition 7.3. If a function V(x) satisfies V(x) < 0 when x ≠ 0, and V(x) = 0 if
x = 0, V(x) is referred to as a negative definite function; if V(x) ⩽ 0, then V(x) is a
seminegative definite function.
Solutions. For any a, b, x1 , and x2 , we have abx1 x2 cos(x1 + x2 ) ⩾ −|ax1 ||bx2 |, therefore,
it is easily seen that V(x1 , x2 ) is a positive definite function:
Definition 7.4. If a given function V(x(t)) is positive definite, and its derivative
dV(x(t))/dt is negative definite, then V(x(t)) is referred to as a Lyapunov function.
Definition 7.5. If a differential equation does not explicitly have the time t term, it
is referred to as an autonomous one. Autonomous differential equations are briefly
denoted as x'(t) = f(x(t)).
In other words, autonomous systems require that the parameters in the model are
all time-invariant. If there exist time-varying parameters, Lyapunov criterion cannot
be used in assessing the stability of the system.
Theorem 7.3. If the function x(t) in Definition 7.4 is the state described by x'(t) = f(x(t)),
then having a Lyapunov function for it is sufficient for the Lyapunov stability of the
autonomous differential equation.
Note that the condition here is just a sufficient one. There is no general method for constructing Lyapunov functions in the Lyapunov stability assessment. This theorem is also known as the second method of Lyapunov, or the Lyapunov direct method. Lyapunov stability assessment relies on the art of Lyapunov function construction. If a Lyapunov function can be constructed, the differential equation is shown to be stable. If a Lyapunov function cannot be constructed, or a positive definite V(x(t)) is composed but, when it is substituted into the differential equation, V'(x(t)) < 0 cannot be shown, this does not necessarily imply that the differential equation is unstable.
Many theorems are based on the autonomous differential equations, that is, t is not
explicitly contained in the differential equations. In real applications, if a time-varying
(or nonautonomous) differential equation is to be studied, a new state can be intro-
duced such that the differential equation can be augmented to become an autonomous
system. An example is given next to show how to convert a time-varying differential
equation into an autonomous one.
v'''(t) = v(t)v'(t) − 2t(v''(t))².
Solutions. In normal cases, the state variables can be selected as x1 (t) = v(t), x2 (t) =
v (t), and x3 (t) = v (t) such that the original equation can be converted into a first-
order explicit differential equation. While such differential equations explicitly con-
tain t, if an autonomous differential equation is expected, which does not explicitly
contain t, an augmented state x4 (t) = t can be introduced such that the following
autonomous differential equation can be established:
x'(t) = [ x₂(t)
          x₃(t)
          x₁(t)x₂(t) − 2x₄(t)x₃²(t)
          1 ]
where x4 (0) = 0, and the initial values of other states are exactly the same as
those defined with ordinary methods. Now the converted differential equation is
an autonomous one.
In this section, simple nonlinear differential equations are used to demonstrate sta-
bility judgement, and validate the assessment with examples.
Consider the following autonomous nonlinear differential equations:
Differentiating this function, and substituting the differential equation into the
results, it is found that
If one is able to show that V'(t) < 0, the differential equations can then be shown to be stable; if V'(t) cannot be proved negative definite, another test function should be chosen. Examples are shown for the stability analysis of nonlinear differential equations.
Solutions. Choosing the positive definite test function according to (7.1.9), the follow-
ing commands can be tried to compute its derivative:
>> syms x1 x2
f=x2-x1*(x1^2+x2^2); g=-x1-x2*(x1^2+x2^2);
Vd=2*x1*f+2*x2*g; simplify(Vd) % find the derivative
Through the MATLAB simplification command, it is found that V'(t) = −2(x₁²(t) + x₂²(t))². It can be seen that V'(t) is negative definite, indicating that the system is stable.
Example 7.8. Show that the following system of differential equations is stable:[47]
{ x₁'(t) = −x₁(t)(x₁²(t) + x₂²(t))(1 − cos ln(x₁²(t) + x₂²(t)) − sin ln(x₁²(t) + x₂²(t))),
{ x₂'(t) = −x₂(t)(x₁²(t) + x₂²(t))(1 − cos ln(x₁²(t) + x₂²(t)) − sin ln(x₁²(t) + x₂²(t))).
Solutions. As in the previous example, the following MATLAB code can be written:
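A possible sketch of this code is given below (assumed, not the book's original listing), again with the test function V = x₁² + x₂².

>> syms x1 x2
   D=x1^2+x2^2;
   f=-x1*D*(1-cos(log(D))-sin(log(D)));
   g=-x2*D*(1-cos(log(D))-sin(log(D)));
   Vd=simplify(2*x1*f+2*x2*g)   % derivative of the test function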
V'(t) = 2(x₁²(t) + x₂²(t))²(√2 sin(π/4 + ln(x₁²(t) + x₂²(t))) − 1).
If it could be shown that the expression in the parentheses is always smaller than zero, then theoretically the system would be stable. In [47] it seems that V'(t) < 0 was proved, but the author of this book is suspicious about the result. Luckily, with the powerful MATLAB graphical facilities provided, the surface of V'(t) can be drawn directly, as shown in Figure 7.2. One may pick a point on the surface and see that the value there is 1.311 > 0, which is sufficient to indicate that V'(t) < 0 does not always hold.
But even if this happens, it is not necessarily the case that the system is unstable. From
the theoretical viewpoint, one can only blame the improper choice of the function
V(t). Other similar functions must be tried. No conclusion can be made so far with
Lyapunov criterion.
Example 7.9. Now let us see a frequently used example of second-order differential
equation in stability related textbooks:
Solutions. If a positive definite function V(t) = x2 (t) + y2 (t) is chosen, the following
commands can be used to compute V (t) directly:
It can be seen that the result is V'(t) = 2(x²(t) + y²(t))(x²(t) + y²(t) − 1), where 2(x²(t) + y²(t)) ⩾ 0 always holds. The final result is determined by the (x²(t) + y²(t) − 1) term. If this term is ⩽ 0, then V'(t) is negative definite such that the original system is stable. Unfortunately, (x²(t) + y²(t) − 1) < 0 does not always hold, because it cannot be guaranteed that the coordinate (x(t), y(t)) always lies inside a unit circle. Therefore, no useful clue can be found regarding the stability of the system.
It can be seen in the earlier examples that, in order to construct a Lyapunov function, the structure and parameters in the examples were deliberately chosen. For instance, if the sign of a term in Example 7.8 is altered, the function V(t) can no longer be constructed. Therefore the direct use of the Lyapunov judgement is quite restricted. For practical stability judgement, simulation methods can be used instead to assess the system stability.
Example 7.10. Consider again Example 7.9. Use the numerical method to assess the
stability of the differential equations.
Solutions. If initial points (x(0), y(0)) are selected randomly in the interval [−5, 5], the differential equation can be solved, and the phase plane trajectories can be drawn as shown in Figure 7.3. It can be seen that the system is unstable. For an unstable system like this, no conclusion could have been drawn by merely using the Lyapunov stability judgement.
>> f=@(t,x)[-x(2)+x(1)*(x(1)^2+x(2)^2-1);
x(1)+x(2)*(x(1)^2+x(2)^2-1)];
ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
for i=1:100, i, x0=-5+10*rand(2,1); % random initial value
[t,x]=ode15s(f,[0,100],x0,ff); line(x(:,1),x(:,2))
end
Example 7.11. Selecting initial values in a large range, assess the stability of the dif-
ferential equations in Example 7.8.
Solutions. The theoretical formulation of the stability test depends heavily on the selection of the test function. Especially if a system is unstable, there is no way to validate that with the Lyapunov criterion. Therefore, the simulation method can be used to assess it. For example, 100 random initial values are generated in [−500, 500] to find the
numerical solutions of differential equations. All the solutions converge to fixed val-
ues, and there is no case where the differential equation is divergent. For the sake of
safety, 10 000 more simulations are carried out, without a single case where the system
is divergent. Therefore it is sufficiently safe to say that the system is stable. In Figure 7.4,
the zoomed responses in the interval (0, 0.0001) are shown, and the simulation region
is (0, 100).
>> D=@(x)(x(1)^2+x(2)^2);
f=@(t,x)[-x(1)*D(x)*(1-cos(log(D(x)))-sin(log(D(x))));
-x(2)*D(x)*(1-cos(log(D(x)))-sin(log(D(x))))];
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
for i=1:100, i, x0=-500+1000*rand(2,1); % large range trial
[t,x]=ode15s(f,[0,100],x0,ff); line(t,x), drawnow
end, xlim([0,0.0001]) % display transient response
Example 7.12. Consider the following differential equations, and assess its stability:
{ x₁'(t) = −4x₁(t) − 2x₁(t)x₂(t) sin|x₁(t)|,
{ x₂'(t) = x₁(t)x₂(t) + 3x₂(t)e^{−x₂(t)}.
>> f=@(t,x)[-4*x(1)-2*x(1)*x(2)*sin(abs(x(1)));
x(1)*x(2)+3*x(2)*exp(-x(2))];
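The remaining statements of this example are not reproduced above; a possible completion, following the pattern of Example 7.10, might look as follows (the initial value range and the time span are assumptions).

ff=odeset; ff.RelTol=1e-8; ff.AbsTol=1e-8;
for i=1:100, x0=-5+10*rand(2,1);       % random initial values
   [t,x]=ode15s(f,[0,10],x0,ff); line(t,x), drawnow
end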
It can be seen that the simulation method is a practical one. Especially when there
is a divergent case found, the assumption that the system is stable can be overturned.
This cannot be established with the ordinary Lyapunov criterion.
In Chapter 4, the phase plane trajectory of the van der Pol equation was demonstrated, where the trajectory starts from a certain point and then settles down on a closed path, along which the motion is periodic. This closed path is known as the limit
cycle of the differential equation. Limit cycle phenomena were first discovered by a
French mathematician Jules Henri Poincaré (1854–1912) and a Swedish mathematician
Ivar Otto Bendixson (1861–1935). Limit cycles stimulated research in many fields such
as physics, chemistry, and biology.
either approaches f1 (y1 (t), y2 (t)) = f2 (y1 (t), y2 (t)) = 0, or a periodic solution, or a limit
cycle, when t tends to infinity.
In this section, examples are used to demonstrate limit cycles through numerical
solutions of differential equations.
with A = 1 and B = 3, and initial values x₁(0) = x₂(0) = 0. Draw the phase plane trajectory. Selecting different initial values, study the limit cycles.
Solutions. The equation was used to describe the dynamic process of a chemical re-
action, known as Brusselator. It can be described by an anonymous function. Then
regular commands can be used to solve the equations directly.
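A possible sketch of these commands is given below (assumed, not the book's original listing); the right-hand side is taken from the Brusselator model as it appears later in Example 7.29, and the time span is an assumption.

>> A=1; B=3; f=@(t,x)[A+x(1)^2*x(2)-(B+1)*x(1); B*x(1)-x(1)^2*x(2)];
   ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
   tic, [t,x]=ode45(f,[0,100],[0;0],ff); toc
   plot(x(:,1),x(:,2))    % phase plane trajectory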
The elapsed time of the solution process is about 0.56 seconds, and the number of
points is 193 241. Since the relative error tolerance is set to a small number, the solution
obtained is of high accuracy.
If the values are set to x1 (0) = 4 and x2 (0) = 0, the numerical solution must be
found again. The phase plane trajectories from the two initial values are obtained as
shown in Figure 7.6. It can be seen that no matter how the initial values are selected,
the final phase plane curves settle down on the same closed-path, that is, the limit
cycle.
Example 7.14. Assume that in Example 7.13 the parameters such as A and B are not
constants. They are dynamic functions of t, satisfying the differential equations[32] be-
low. It was noticed that for the differential equations provided in the reference, the
limit cycles are not observed. Therefore some modifications were made, and the new
model is
{ A'(t) = −k₁A(t),
{ B'(t) = −k₂B(t)X(t),
{ D'(t) = k₂B(t)X(t),
{ E'(t) = k₄X(t),
{ X'(t) = k₁(A(t) + 1) − k₂(B(t) + 4)X(t) + k₃X²(t)Y(t) − k₄X(t),
{ Y'(t) = k₂(B(t) + 4)X(t) − k₃X²(t)Y(t).
Solutions. Selecting state variables x1 (t) = A(t), x2 (t) = B(t), x3 (t) = D(t), x4 (t) = E(t),
x5 (t) = X(t), and x6 (t) = Y(t), the original differential equations can be manually
rewritten into the standard form:
{ x₁'(t) = −k₁x₁(t),
{ x₂'(t) = −k₂x₂(t)x₅(t),
{ x₃'(t) = k₂x₂(t)x₅(t),
{ x₄'(t) = k₄x₅(t),
{ x₅'(t) = k₁(x₁(t) + 1) − k₂(x₂(t) + 4)x₅(t) + k₃x₅²(t)x₆(t) − k₄x₅(t),
{ x₆'(t) = k₂(x₂(t) + 4)x₅(t) − k₃x₅²(t)x₆(t).
The following commands can be used to describe the first-order explicit differ-
ential equations. Then the equations are solved, in 2.98 seconds, with the number
of points being 1 325 665. The new phase plane trajectory is obtained as shown in
Figure 7.7. If the solver is changed to ode15s(), the time is increased to 34.4 seconds.
Example 7.15. In fact, by observing the differential equations in Example 7.14, we notice that the signals D(t) and E(t) do not appear on the right-hand sides of the equations, indicating that they have no impact on the other state signals. Therefore they can be
removed from the differential equations. Solve again the simplified differential equa-
tions.
Solutions. Letting x1 (t) = A(t), x2 (t) = B(t), x3 (t) = X(t), and x4 (t) = Y(t), the new
differential equations can be manually written as
{ x₁'(t) = −k₁x₁(t),
{ x₂'(t) = −k₂x₂(t)x₃(t),
{ x₃'(t) = k₁(x₁(t) + 1) − k₂(x₂(t) + 4)x₃(t) + k₃x₃²(t)x₄(t) − k₄x₃(t),
{ x₄'(t) = k₂(x₂(t) + 4)x₃(t) − k₃x₃²(t)x₄(t).
Solving the simplified equations again, it can be seen that the solutions are exactly
the same as those obtained earlier. The number of points is still the same. Since two
redundant states were removed, the time needed was reduced to 2.83 seconds.
Example 7.16 (Multiple limit cycle problems). Consider the following differential
equations:[22]
where f(r) = r² sin(1/r). Observe the limit cycles under different initial values.
Solutions. Letting x₁(t) = x(t) and x₂(t) = y(t), and substituting the f(⋅) function directly into the equation, the standard form of the first-order explicit differential equations can be written as

x'(t) = [ −x₂(t) + x₁(t)(x₁²(t) + x₂²(t)) sin(1/√(x₁²(t) + x₂²(t)))
           x₁(t) + x₂(t)(x₁²(t) + x₂²(t)) sin(1/√(x₁²(t) + x₂²(t))) ].
The following anonymous function can be used to describe the first-order explicit
differential equations. If two different initial values are selected as (0.1,0.2) and
(0.1,0.01), we can solve the differential equations and then draw the phase plane
trajectories in the same plot, as shown in Figure 7.8. It can be found that the two
curves settle down as two different limit cycles.
>> D=@(x)x(1)^2+x(2)^2;
f=@(t,x)[-x(2)+x(1)*D(x)*sin(1/sqrt(D(x)));
x(1)+x(2)*D(x)*sin(1/sqrt(D(x)))];
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
[t,x]=ode45(f,[0,100],[0.1; 0.2],ff); plot(x(:,1),x(:,2))
[t,x]=ode45(f,[0,100],[0.1; 0.01],ff); line(x(:,1),x(:,2))
In fact, if a smaller initial value is tried, more limit cycles can be found. This
indicates that the same differential equation may have different limit cycles. For this
particular differential equation, there may be infinitely many limit cycles.
Figure 7.8: Different initial values may yield different limit cycles.
It has been indicated that if the real parts of a pair of complex conjugate eigenvalues of a linear differential equation with constant coefficients are zero, then, after the transient response vanishes, the states of the system may oscillate with constant magnitude. The system responses can then be regarded as periodic. Besides,
the limit cycles can also be understood as periodic solutions of nonlinear differential
equations. In fact, many differential equation solutions are periodic. For instance,
such is the multibody system discussed earlier. In this section, examples are proposed
to explore the periodicity of differential equation solutions, and how the period can
be extracted from the numerical solutions.
Example 7.17. Solve the following linear differential equations with constant coeffi-
cients:
y⁽⁴⁾(t) + 2y'''(t) + 3y''(t) + 4y'(t) + 2y(t) = u(t)
where u(t) = 1, y(0) = 2, y'(0) = 1, y''(0) = y'''(0) = 0, and t ∈ (0, 30).
Solutions. It can be seen that this is a linear differential equation with constant coef-
ficients. The analytical solution can be found with function dsolve().
y(t) = (19/9)e⁻ᵗ + (5/3)te⁻ᵗ − (11/18)cos √2t + (13√2/18)sin √2t + 1/2.
It can be seen that the e⁻ᵗ and related terms are transient in the system responses. When t increases, these terms vanish gradually. The remaining ones are oscillating terms, plus the constant 1/2. These terms sustain oscillations of constant magnitude, with period T₀ = 2π/√2.
The analytical solution can be used to directly draw the time domain responses
of the solution y(t), as seen in Figure 7.9. It is seen that when the transient response
vanishes, the output signal is oscillatory with equal magnitudes.
How can we find the period of the signal? Two subintervals, t ∈ (5, 10) and t ∈ (25, 30), can be considered. The maximum value and the corresponding time instant can be measured in each of them. Between the two measured maxima there are five full cycles, so the time difference can be computed and divided by 5 to estimate the period. This may be more accurate than measuring the period of a single cycle.
>> t=0:0.00001:30;
y=(19*exp(-t))/9-(11*cos(2^(1/2)*t))/18+(5*t.*exp(-t))/3+...
(13*2^(1/2)*sin(2^(1/2)*t))/18+1/2;
i01=find(t>=5&t<10); [xm1,i1]=max(y(i01(1):end));
i02=find(t>=25); [xm2,i2]=max(y(i02(1):end));
t1=t(i01(1)+i1), t2=t(i02(1)+i2), T=(t2-t1)/5
hold on, plot(t1,xm1,’ko’,t2,xm2,’ko’)
It can be seen that the time instances at a and b points can be found as t1 = 5.9233
and t2 = 28.1493, the average period is found as T = 4.4452, which is quite close to the
theoretical value of T0 = 2π/√2 = 4.4429.
Linear differential equations may also have their limit cycles. The phase plane
trajectory between y(t) and y (t) can be obtained as shown in Figure 7.10, which is the
limit cycle curve.
Example 7.18. Solve again the differential equations in Example 7.13. Draw the time
responses of the state variables and observe their periodicity.
Two circles are labeled in the figure, to indicate the first maximum points of x2 (t)
values. Now the subintervals t ∈ (0, 10) and t ∈ (40, 50) are used to extract the peak
values and measure the time instances. They are respectively t1 = 7.9917 and t2 =
43.7779, therefore the period found is T = 7.1573.
The so-called chaos means that, for a deterministic system, a tiny difference in the initial values may change the output signal significantly, in a form which looks random. This phenomenon is also known as chaotic behavior, which is not random but deterministic.
Although the observation of this phenomenon may be traced back to Henri Poincaré's works, it first received attention in a paper by the American meteorologist Edward Lorenz.[49] In 1961, when Lorenz was solving differential equations and simply used an initial value of 0.506 instead of 0.506127, completely different results were obtained.[28] He began studying the phenomenon and achieved a series of results. He coined a new term, "butterfly effect", to describe the phenomenon, since the phase space trajectory of his equation looks like a butterfly. The equation was solved in Chapter 3, and the butterfly curve can be seen in Figure 3.6.
What really became a sensation and widely spread was the title of Edward
Lorenz’s plenary talk in 1972 “Does the flap of a butterfly’s wings in Brazil set off
a tornado in Texas?”.[50] Therefore, the butterfly effect became the precursor of chaos.
Several chaotic differential equations like Lorenz and Chua equations are simu-
lated in this section, and the concept of attractors is presented.
{ x'(t) = −σx(t) + σy(t),
{ y'(t) = −x(t)z(t) + rx(t) − y(t),
{ z'(t) = x(t)y(t) − bz(t)
where σ = 10, b = 8/3, and r = 28. The initial values are x(0) = z(0) = 0 and y(0) =
0.01. This example is close to that in Example 3.7. Find the state value at t = 100. If
y(0) = 0.02, solve the differential equation again and observe the impact of the initial
value to the results.
Solutions. If we let x1 (t) = x(t), x2 (t) = y(t) and x3 (t) = z(t), the differential equations
can be described directly and solved.
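A possible sketch of such commands is given below (assumed, not the book's original listing); the tolerances and the time span are assumptions.

>> sig=10; b=8/3; r=28;
   f=@(t,x)[-sig*x(1)+sig*x(2); -x(1)*x(3)+r*x(1)-x(2); x(1)*x(2)-b*x(3)];
   ff=odeset; ff.RelTol=1e-10; ff.AbsTol=1e-10;
   [t1,x1]=ode45(f,[0,100],[0;0.01;0],ff);   % y(0)=0.01
   [t2,x2]=ode45(f,[0,100],[0;0.02;0],ff);   % y(0)=0.02
   plot(t1,x1(:,2),t2,x2(:,2))               % compare the two y(t) curves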
Although the phase space trajectories of the two sets of solutions seem very close, both showing the butterfly-shaped trajectories of Figure 3.6, their progress in time has significant differences, as shown in Figure 7.12. The curves of y(t) are very close in the first 30 seconds; afterwards they are totally different. The terminal values are x₁(100) = [−14.2366, −20.8535, 26.7349]ᵀ and x₂(100) = [8.8711, 13.3504, 20.2361]ᵀ, which are significantly different. It can be seen that a very small difference in the initial values yields significantly different results.
An American scholar Leon Ong Chua (1936–) built a hardware circuit, known as
Chua circuit, to reproduce chaotic behavior. The basic form of the circuit is illustrated
in Figure 7.13(a), with the mathematical model being:[22]
{ i'(t) = v₁(t)/L,
{ v₁'(t) = (v₂(t) − v₁(t))/(C₁R) − i(t)/C₁,
{ v₂'(t) = (v₁(t) − v₂(t))/(C₂R) − v₂(t)/(C₂r)
where r is the nonlinear element shown in Figure 7.13(b). The nonlinear element is
referred to as Chua diode. Later, nonlinear elements of various forms appeared to
model chaotic behavior of different kinds.
Chua circuit opened up a new world for simulating chaotic behaviors with hardware
circuits. Later similar circuits were proposed to model different chaotic behaviors,[7]
including solid circuits and simulation models.
Example 7.20. According to Chua circuit, the dimensionless differential equations are
Solutions. The control parameters can be input into MATLAB workspace, and anony-
mous functions can be used to describe the Chua nonlinear element and state space
equations, so that they can be solved. The phase plane trajectory is obtained as shown in Figure 7.14. Note that since the simulation time interval is large, the selection of the initial values is not that important.
It can be seen from the chaotic behavior shown in Figure 7.14 that the trajectory is
formed around two central points. The two central points are referred to as attractors.
The curve in this example is known as a double-scroll attractor curve. The butterfly
effect in Lorenz equations also has two attractors.
If the following nonlinear function is introduced, the chaotic behavior of an
n-scroll attractor can be witnessed in theory. Note that q and n are not in one-to-one
correspondence:
h(a) = m_{2q−1}a + (1/2)∑_{i=1}^{2q−1} (m_{i−1} − m_i)(|a + c_i| − |a − c_i|).    (7.2.2)
Solutions. The following statements can be used to directly construct function h(a).
Note that the function is suitable for any number of attractors. The vectors m and c
should be written as row and column vectors, respectively, and the length of m is 1
more than that of c. With such a nonlinear function, the 3-scroll attractors in phase
plane are drawn, as seen in Figure 7.15. It can be seen from the curve that the phase
plane trajectory indeed looks like a 3-scroll attractor model.
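A possible sketch of the h(a) construction described above is given below (assumed, not the book's original listing), following the row/column vector convention just mentioned.

>> h=@(a,m,c)m(end)*a+(m(1:end-1)-m(2:end))*(abs(a+c)-abs(a-c))/2;

Here m(end) plays the role of m_{2q−1}, and the vector product realizes the summation in (7.2.2).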
It can be seen from the trajectory that the central point in-between the two outer attractors seems to be covered by the trajectories. In fact, this is not the case, since the plot shown in the figure is the projection onto the x₁(t)–x₂(t) plane. In a real three-dimensional display, as shown in Figure 7.17, the attractors are located at different values of x₃(t), so that the attractors are not covered.
Suppose in the differential equation x'(t) = f(t, x(t)), the input f(t, x(t)) is a periodic
function with period T. Imagine that there is a plane. In each cycle, the trajectory
penetrates the plane once, and leaves a mark on it. A series of such marks can be gen-
erated in this way, to form the Poincaré section. The map is known as Poincaré map.
Poincaré map is named after a French mathematician Jules Henri Poincaré (1854–
1912). An example is introduced to demonstrate Poincaré map.
where the parameters are α = 1, β = 5, δ = 0.02, γ = 8, and ω = 0.5. Draw the Poincaré
section for the differential equation.
Solutions. Duffing equation is named after a German engineer Georg Duffing (1861–
1944), who used it in describing nonlinear oscillation problems. It can be seen that
the period of the input signal is T = 2π/ω. Suppose 100 000 cycles of simulation are
made and, with each cycle, a point in Poincaré section is computed, then the Poincaré
section is obtained as shown in Figure 7.18. Note that the results should be marked by
dots, rather than segments. Otherwise Poincaré section cannot be drawn.
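A sketch of this procedure is given below. It is an assumed reconstruction, not the book's listing: the Duffing equation is written here in the common form y''(t) + δy'(t) + αy(t) + βy³(t) = γ cos ωt, which may differ in detail from the equation used in the book, and only 10 000 cycles are simulated.

>> alp=1; bet=5; del=0.02; gam=8; w=0.5; T=2*pi/w;
   f=@(t,x)[x(2); -del*x(2)-alp*x(1)-bet*x(1)^3+gam*cos(w*t)];
   N=10000; x0=[0;0]; P=zeros(N,2);
   for k=1:N
      [~,x]=ode45(f,[(k-1)*T,k*T],x0); x0=x(end,:).'; P(k,:)=x0.';
   end
   plot(P(:,1),P(:,2),'.')   % one dot on the Poincaré section per forcing period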
Example 7.23. In fact, the beautiful Poincaré section shown in Figure 7.18 is questionable. In the solution process, the function ode45() is called with no control options, that is, the default relative error tolerance RelTol of 10⁻³ is used. This value is too large, and there may be large errors in this problem. Select a reasonable error tolerance and draw the Poincaré section again.
Solutions. If a stricter, yet reliable, error tolerance of 10⁻¹⁰ is used, the Poincaré section can be drawn again. After 593.3 seconds of waiting, the genuine Poincaré section is obtained, as shown in Figure 7.19. Compared with the plot in Figure 7.18, the accurate one does not look beautiful at all and seems disordered, but it is the actual one.
Imagine a particle moving in space. What conditions must be satisfied for the particle to stop moving? When the projections of its speed on all the axes are zero, the particle stops moving. The position where the particle stops moving is referred to as an equilibrium point. Generally speaking, a differential equation may have several equilibria.
How can we find all the equilibria of a given differential equation? Suppose the first-order explicit autonomous differential equation is known as x'(t) = f(x(t)), where x'(t) is the speed of the state variables, or can be understood as the projection of the speed of the particle on each axis. If all the components of the speed equal zero, the particle stops moving, so that an equilibrium point of the particle can be found. Unfortunately, this method applies only to autonomous differential equations. If a differential equation contains t explicitly, the number of equations is smaller than the number of unknowns, and the equilibrium points cannot be found in this way.
Theorem 7.5. The equilibria of an autonomous state equation x'(t) = f(x(t)) can be
found from the following algebraic equation:
f (x(t)) = 0. (7.3.2)
The following examples are used to demonstrate the positions of the equilibria,
and their properties are also illustrated.
Example 7.24. Find the equilibria for the following nonlinear differential equations
with two states:
Solutions. Since the expressions of x'(t) and y'(t) are known, the algebraic equations
can be solved directly to find the equilibria:
A symbolic function solve() in MATLAB can be used to analytically find the so-
lution of the algebraic equations.
>> syms x y;
[x0,y0]=solve(x*(1-x+y)==0, y*(1+x-2*y)==0) % solve algebraic equations
The solutions obtained are x 0 = [0, 1, 0, 3]T , y 0 = [0, 0, 1/2, 2]T , which implies that
there are four equilibria (0, 0), (1, 0), (0, 1/2), and (3, 2).
What is the relationship between the equilibria and solutions? A simulation ex-
periment can be made. In the square 0 ⩽ x, y ⩽ 1, 100 initial points are generated
randomly, and the differential equations are solved such that the phase plane trajec-
tories can be drawn, as shown in Figure 7.20. It can be seen that all the solutions flow
to the equilibrium point (3, 2).
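A possible sketch of this experiment is given below (assumed, not the book's original listing; the simulation time span is an assumption), with the right-hand sides taken from the algebraic equations solved above.

>> f=@(t,x)[x(1)*(1-x(1)+x(2)); x(2)*(1+x(1)-2*x(2))];
   for i=1:100, x0=rand(2,1);         % random initial points in the unit square
      [t,x]=ode45(f,[0,10],x0); line(x(:,1),x(:,2))
   end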
Which initial values may yield solutions flowing to other equilibria? If an initial
point is located on the x axis, the solution may flow to the equilibrium point (1, 0); if
the initial point is on the y axis, the solution may settle down at (0, 1/2). The origin
(0, 0) is an isolated point. Unless the initial point is selected at the origin, all other
positions make the point move away and flow to other equilibria.
It can be seen from the example that if the algebraic equation is analytically solv-
able, function solve() can be used to find all the equilibria in one function call. Un-
fortunately, in real applications, it is not the case. Function vpasolve() can be used,
however, only one solution can be found at a time. If the equation set has multiple
solutions, a dedicated solver provided in Volume IV, more_sols(), can be used to find
all the solutions in one function call.
Example 7.25. Find all the equilibria for the Chua circuit studied in Example 7.20.
With the above statements, three equilibria can be found at (0, 0, 0), (−1.5, 0, 1.5), and
(1.5, 0, −1.5). Compared with Figure 7.13, it can be seen that the latter two equilibria are
the double-scroll attractors. The origin is an isolated equilibrium point.
In fact, if the phase space trajectory of Chua circuit is drawn, and rotated properly,
the effect in Figure 7.21 can be found. In fact, besides the two equilibria on the two
sides, all trajectories avoid the origin. Therefore, the origin is also one of the equilibria.
Example 7.26. Find the equilibria of the 3-scroll attractors circuit in Example 7.21 and
locate the positions of the attractors.
It is somewhat surprising to see that there are altogether seven equilibria found:
at (0, 0, 0), (−5.6354, 0, 5.6354), (5.6354, 0, −5.6354), (−1.3, 0, 1.3), (2.8786, 0, −2.8786),
(−2.8786, 0, 2.8786), and (1.3, 0, −1.3). The following statements can be used to solve
again the differential equations, and draw the phase space trajectories. The equilibria
are all superimposed on the chaotic trajectories, as shown in Figure 7.22. It can be seen
that the leftmost and rightmost ones are unstable equilibria. The second, fourth, and sixth from the left are the 3-scroll attractors, while the third and fifth are isolated equilibria.
The equilibria can further be classified into stable, unstable, and saddle points.
They can be further judged according to the linearized models.
for which an equilibrium point (x0 , y0 ) is known. Selecting a nearby point (x0 +Δx, y0 +
Δy), such that |Δx| ≪ 1 and |Δy| ≪ 1, it can be found that
{ F(x₀ + Δx, y₀ + Δy) = F(x₀, y₀) + ∂F(x₀, y₀)/∂x Δx + ∂F(x₀, y₀)/∂y Δy + ⋯ ,
{ G(x₀ + Δx, y₀ + Δy) = G(x₀, y₀) + ∂G(x₀, y₀)/∂x Δx + ∂G(x₀, y₀)/∂y Δy + ⋯     (7.3.4)
where the higher-order terms of Δx and Δy can be omitted, and the terms F(x0 , y0 ) and
G(x0 , y0 ) can also be omitted, since at the equilibrium point, they are both zeros. It is
not hard to find from (7.3.4) that
{ Δx'(t) ≈ ∂F(x₀, y₀)/∂x Δx(t) + ∂F(x₀, y₀)/∂y Δy(t),
{ Δy'(t) ≈ ∂G(x₀, y₀)/∂x Δx(t) + ∂G(x₀, y₀)/∂y Δy(t).                          (7.3.5)
Letting z1 (t) = Δx(t) and z2 (t) = Δy(t), the linear state space equation can be
written for z(t) as
z'(t) = Jz(t)                                                                  (7.3.6)

where the Jacobian matrix is

J = [ ∂F(x₀, y₀)/∂x    ∂F(x₀, y₀)/∂y
      ∂G(x₀, y₀)/∂x    ∂G(x₀, y₀)/∂y ].                                        (7.3.7)
Theorem 7.6. More generally, the nonlinear first-order explicit differential equations in
(3.1.1) can be linearized into the form of (7.3.6), where
Example 7.27. Find the linearized models of the following differential equations:
Solutions. It can be seen from the model that the equilibrium point of the original
system is (0, 0, 0), so the Jacobian matrix of the linearized model from symbolic com-
putation can be obtained. Also the linearized matrix and the eigenvalues at the equi-
librium point are
>> syms x1 x2 x3
F=[-2*x1+x2-x3-x1^2*exp(x1);
x1-x2-4*x1^3*x2-x3^2;
x1+x2-x3+8*exp(x1)*(x2^2+x3^2)];
A=jacobian(F,[x1;x2;x3]) % Jacobi matrix
A0=subs(A,{x1,x2,x3},{0,0,0}) % linearized model at equilibrium point
double(eig(A0)) % eigenvalues of the linearized model
The Jacobian matrix A of the differential equation and the linearization matrix A0 at
the equilibrium point can be found. The eigenvalues of matrix A0 are −2.4656 and
−0.7672 ± 0.7926j. Since all of them have negative real parts, the linearized model is
stable:
Now observe the numerical solutions of the original nonlinear differential equa-
tions. If an anonymous function is used to describe them, and very small initial values
are selected randomly, numerical solutions can be found. If 100 simulations are made,
the solutions are shown in Figure 7.23. It can be seen that the solutions from some of
the initial values are divergent, therefore the original system of differential equations
is unstable.
>> f=@(t,x)[-2*x(1)+x(2)-x(3)+x(1)^2*exp(x(1));
x(1)-x(2)-4*x(1)^3*x(2)-x(3)^2;
x(1)+x(2)-x(3)+8*exp(x(1))*(x(2)^2+x(3)^2)];
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
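The remaining statements are not reproduced above; a possible completion, following the pattern of the earlier simulation examples, is sketched below (the range of the random initial values and the time span are assumptions).

for i=1:100, x0=0.001*rand(3,1);       % very small random initial values
   [t,x]=ode15s(f,[0,10],x0,ff); line(t,x)
end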
Figure 7.23: Solutions of the nonlinear differential equations from 100 initial values.
Example 7.28. Find the linearized model of the Chua circuit in Example 7.20, and find
the eigenvalues of the matrix at the equilibria.
Solutions. The following commands can be used to compute the linearized model:
Substituting the three equilibria (0, 0, 0), (−1.5, 0, 1.5), and (1.5, 0, −1.5) into the
Jacobian matrix, the corresponding matrices can be found.
>> J1=subs(J,{x1,x2,x3},{0,0,0})
J2=subs(J,{x1,x2,x3},{-1.5,0,1.5})
J3=subs(J,{x1,x2,x3},{1.5,0,-1.5}) % the three equilibrium points
double(eig(J1)), double(eig(J2)), double(eig(J3))
Therefore the three Jacobian matrices are obtained; some of their eigenvalues have positive real parts, so the three linearized models are all unstable.
J₁ = [ 9/7    9   0
        1    −1   1
        0  −71 443/5 000   0 ],

J₂ = J₃ = [ −18/7    9   0
              1     −1   1
              0   −71 443/5 000   0 ].
For this specific problem, the unstable linearized model is not of any value, since
the approximation to the original nonlinear model is not satisfactory, and one is not
likely to see chaotic behavior in the linearized model.
It can be seen from the two examples that stability judgements based on linearized models may yield totally different results from those for the original nonlinear differential equations. This sufficiently indicates that linearized models should not be used to judge the stability of the original nonlinear differential equations. If a nonlinear differential equation is to be assessed, it should be studied directly, for instance with the simulation approach demonstrated earlier.
The computation of equilibria is discussed in this section. The Jacobian matrix com-
putation is also introduced. If one substitutes the information of a certain equilibrium
point into the Jacobian matrix, the system can be approximated around this equilib-
rium point. The eigenvalues of the Jacobian matrix A determine the properties of this
equilibrium point.
If all the eigenvalues have negative real parts, the equilibrium point is stable. The
trajectory converges to the equilibrium point. If some of the eigenvalues have a positive
real part, while others have negative ones, the point is referred to as a saddle point. If
all the eigenvalues have positive real parts, a trajectory may diverge.
In fact, for the three equilibria in Example 7.28, some of the eigenvalues have a
positive real part, while some have negative. Therefore they are all saddle points.
Example 7.29. Consider the differential equation in Example 7.13. If A = 1, observe the
impact of parameter B to the behavior of the differential equation.
Solutions. The Jacobian matrix can be obtained under the symbolic framework, and
the equilibria are found. It is found that x₁₀ = A, x₂₀ = B/A. Substituting these values
into the Jacobian matrix, the eigenvalues of the matrix can be found.
>> syms A B x1 x2
F=[A+x1^2*x2-(B+1)*x1; B*x1-x1^2*x2]; % two given functions
J=jacobian(F,[x1 x2]) % compute Jacobian matrix
[x10,x20]=solve(F,[x1,x2]) % compute the equilibrium
J0=subs(J,{x1,x2},{x10,x20}) % substitute them into Jacobian matrix
e1=simplify(eig(J0)) % compute eigenvalues of Jacobian
J = [ 2x₁x₂ − B − 1    x₁²
      B − 2x₁x₂       −x₁² ],        J₀ = [ B − 1    A²
                                             −B     −A² ].
In other words, although a bifurcation point was found, the point was obtained
from the linearized model, whereas the original differential equation does not have
these bifurcation phenomena. The so-called bifurcation in this example may be the
feature brought by the linearized model. Bifurcation issues are not further discussed
in this book.
7.5 Exercises
7.1 Judge the stability of s⁶ + 6s⁵ + 16s⁴ + 25s³ + 24s² + 14s + 4. Compose a linear differential equation with constant coefficients from the polynomial, and validate the stability by the solution curve.
7.2 Assess the stability of the following linear differential equation, and validate the
solution with the analytical solution and the curve of the solution if
x'(t) = [ −2 −1 −1  1  0
           1 −3 −2  0  1
           2 −2 −1 −2  0
          −1  2 −1 −4  1
          −4  3  1 −1 −3 ] x(t) + [1; 1; −1; 0; −2] u(t)
7.3 Solve the following differential equations:
    x'(t) = y(t),
    y'(t) = x(t) − x³(t) − ϵy(t) + γ cos ωt
where γ = 0.3 and ω = 1. If the parameter ϵ is selected as 0.22 and 0.15, respectively,
solve the differential equation, and observe the phase trajectory.
7.4 In [7], a set of Chua circuit parameters with 10-scroll attractors are given, where
α = 9.35 and β = 11.4. When i is even, mᵢ = −1.4, while when i is odd, mᵢ = −0.6.
Besides, c1 = 1, c2 = 1.9, c3 = 2.6, c4 = 3.75, c5 = 4.75, c6 = 5.85, c7 = 6.46, c8 = 7.5,
and c9 = 8.55. Observe the phase space trajectory with MATLAB, and also note the
attractors.
7.5 Find the equilibria and linearized models of the following differential equation.
Assess the behavior of each equilibrium point if
7.6 Select the initial value x₀ = [eps, 0]ᵀ and solve the differential equation in Example 7.24. Observe to which of the equilibria the trajectory of the differential equation finally converges.
7.7 Find all the equilibria of the 4-scroll attractor in Example 7.21, and observe the
behaviors of each of them.
8 Fractional-order differential equations
It is known that dⁿy/dxⁿ is used to represent the nth order derivative of y with respect
to x. What if n = 1/2? This was the question asked by a French mathematician Guil-
laume François Antoine L’Hôpital (1661–1704) to one of the inventors of calculus, Got-
tfried Wilhelm Leibniz, 300 years ago.[70] This marked the beginning of fractional cal-
culus. Strictly speaking, the term "fractional-order" is a misnomer; more proper names would be "noninteger-order" or "arbitrary-order". The irrational number √2 can be used as
the order, but it is not a fraction. Since the term “fractional-order” is already widely
used in the research community, it is also used in this book, while it really means
arbitrary-order.
Although the fractional calculus research has more than 300 years of history, the
earlier research was concentrated only on theoretical issues. In recent decades, this
research has been introduced into many fields. For instance, in automatic control,
fractional-order control is a relatively new and active research topic. In this
chapter, we concentrate on introducing the definitions and various computing meth-
ods. The solutions of linear and nonlinear fractional-order differential equations are
fully discussed.
A dedicated MATLAB toolbox for fractional calculus and fractional-order control
has been designed[74, 76] by the author of this book. It is named FOTF Toolbox, down-
loadable for free from the following website:[75]
http://cn.mathworks.com/matlabcentral/fileexchange/60874-fotf-toolbox
https://doi.org/10.1515/9783110675252-008
are also presented. The numerical algorithms and solvers, especially those with high
precision, are presented, along with MATLAB implementations.
ᴳᴸt₀D_t^α f(t) = lim_{h→0} (1/h^α) ∑_{j=0}^{[(t−t₀)/h]} (−1)^j (α j) f(t − jh)    (8.1.1)
where t0 is the initial time instance, and we assume that, when t < t0 , f (t) ≡ 0.
The symbol (α j) denotes the binomial coefficient. Its computation methods will be
presented later.
ᴿᴸt₀D_t^{−α} f(t) = (1/Γ(α)) ∫_{t₀}^{t} f(τ)/(t − τ)^{1−α} dτ    (8.1.2)
where α > 0 and t₀ is the initial time instance. If t₀ = 0, a simple notation ᴿᴸD_t^{−α} f(t) is used. If there is no conflict in the definitions, the mark RL can be omitted. Riemann–Liouville definition is one of the commonly used definitions in fractional calculus. Especially, the subscripts on the two sides of D are the lower and upper bounds in the integral expression.[37]
It can be shown that[56] for a very wide category of practical functions, Grünwald–
Letnikov and Riemann–Liouville definitions are completely equivalent. The two defi-
nitions are not distinguished in this book. Caputo and Riemann–Liouville definitions
are different in the order of integral and derivative symbols. The difference and rela-
tionship will be further illustrated next.
If f(t) has a nonzero initial value, then, when α ∈ (0, 1), it is seen by comparing the Caputo and Riemann–Liouville definitions that

ᶜt₀D_t^α f(t) = ᴿᴸt₀D_t^α (f(t) − f(t₀)),    (8.1.6)

from which the relationship between the Caputo and Riemann–Liouville definitions can be derived as

ᶜt₀D_t^α f(t) = ᴿᴸt₀D_t^α f(t) − f(t₀)(t − t₀)^{−α}/Γ(1 − α).    (8.1.7)
If α < 0, it has been indicated that the Riemann–Liouville and Caputo definitions
are exactly the same, so that either of the two can be used.
Some of the properties in fractional calculus are summarized below, without
proofs:[55]
(1) The fractional-order derivative t₀D_t^α f(t) of an analytic function f(t) is analytic in t and α.
(2) If α = n is an integer, the fractional-order and integer-order derivatives are identical, and t₀D_t^0 f(t) = f(t).
(3) The fractional-order operator is linear, that is, for any constants a and b,

t₀D_t^α [af(t) + bg(t)] = a t₀D_t^α f(t) + b t₀D_t^α g(t).    (8.1.9)
Theorem 8.2. Under the Riemann–Liouville definition, the Laplace transform of the fractional-order derivative satisfies

L[ᴿᴸt₀D_t^α f(t)] = s^α L[f(t)] − ∑_{k=0}^{n−1} s^k [ᴿᴸt₀D_t^{α−k−1} f(t)]|_{t=t₀}.    (8.1.11)
Theorem 8.3. The Laplace transform of the fractional-order derivative under the Caputo definition satisfies

L[ᶜt₀D_t^γ f(t)] = s^γ F(s) − ∑_{k=0}^{n−1} s^{γ−k−1} f^{(k)}(t₀).    (8.1.12)

Especially, if the initial values of the function f(t) and its derivatives are all 0, it is found that L[t₀D_t^α f(t)] = s^α L[f(t)].
It can be seen that, in Caputo definition, the initial values of the function and its
integer-order derivatives are involved. This is the case expected in real applications.
While in Riemann–Liouville definition, the initial values of fractional-order deriva-
tives are involved, which are not provided in real applications. Therefore Caputo equa-
tions are more suitable for dynamic system description for systems with nonzero initial
values.
t₀D_t^α f(t) = lim_{h→0} (1/h^α) ∑_{j=0}^{[(t−t₀)/h]} (−1)^j (α j) f(t − jh)
            ≈ (1/h^α) ∑_{j=0}^{[(t−t₀)/h]} w_j^{(α)} f(t − jh)    (8.1.13)

where w_j^{(α)} = (−1)^j (α j) are the coefficients of the polynomial expansion of (1 − z)^α. These coefficients can be computed recursively from

w_0^{(α)} = 1,   w_j^{(α)} = (1 − (α + 1)/j) w_{j−1}^{(α)},   j = 1, 2, . . .    (8.1.14)
j
If the step-size h is small enough, (8.1.13) can be used to directly compute the nu-
merical values. It can be shown that[56] the precision is o(h). Therefore with Grünwald–
Letnikov definition, the following solver can be written to compute the fractional-
order derivatives, based on Grünwald–Letnikov definition, for given functions:
function dy=glfdiff(y,t,gam)
if strcmp(class(y),'function_handle'), y=y(t); end % function handle input
h=t(2)-t(1); w=1; y=y(:); t=t(:); % data stored in column vectors
for j=2:length(t), w(j)=w(j-1)*(1-(gam+1)/(j-1)); end % binomial coefficients
for i=1:length(t), dy(i)=w(1:i)*[y(i:-1:1)]/h^gam; end % derivative values
It can be seen from the presentation that the integrals in Caputo and Grünwald–
Letnikov definitions are exactly the same. Function glfdiff9() can be used to
evaluate the fractional-order integrals directly. If α > 0, the compensations in (8.1.8)
Example 8.1. For the given function f(t) = e⁻ᵗ, find its 0.6th order Caputo derivative. Select different step-sizes and values of p and assess the precision. The analytical expression of the Caputo derivative is y₀(t) = −t^{0.4}E_{1,1.4}(−t).
Solutions. Eα,β (⋅) is referred to as a Mittag-Leffler function with two parameters. It will
be further explained in the next section. Mittag-Leffler function can be numerically
evaluated with function ml_func() provided in the FOTF Toolbox.
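For instance, the analytical solution above can be evaluated with a call of the following form (a small usage sketch; the time vector is an assumption):

>> t=0:0.01:5;
   y0=-t.^0.4.*ml_func([1,1.4],-t);   % y0(t) = -t^0.4 E_{1,1.4}(-t)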
Selecting a step-size of h = 0.01, different values of p can be tried to evaluate the fractional-order derivative. Compared with the analytical solution, the errors are listed in Table 8.1. It can be seen that, when p = 6, the maximum error is as low as 10⁻¹³, many orders of magnitude better than for the other existing algorithms. If the order p is further increased, the accuracy is not likely to improve, due to the limitations of the double-precision data structure. The quality may even deteriorate.
order p          1         2          3          4           5           6           7
maximum error    0.0018    1.19×10⁻⁵  8.89×10⁻⁸  7.07×10⁻¹⁰  5.85×10⁻¹²  3.14×10⁻¹³  7.33×10⁻¹³
If a larger step-size h = 0.1 is selected, for different values of p, the numerical Caputo derivatives can also be evaluated, with the maximum errors listed in Table 8.2. It can be seen that even if a large step-size like this is chosen, the error is still at the 10⁻¹⁰ level if p = 8.
Table 8.2: Maximum errors for different orders p = 3, 4, . . . , 9 (step-size h = 0.1).
If the mathematical form of y(t) or its samples is not known, block diagram
method should be used to find its fractional-order derivatives. The related presenta-
tions will be given in Chapter 9. Besides, in Volume II, the numerical computation and
implementation of high-order fractional-order derivatives are presented. Interested
readers may refer to the related materials, and they are not discussed further here.
$$
\mathrm{e}^{z} = 1 + \frac{z}{1!} + \frac{z^{2}}{2!} + \cdots = \sum_{k=0}^{\infty}\frac{z^{k}}{k!} = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(k+1)}. \tag{8.2.1}
$$
If Γ(k + 1) in the above series is replaced by Γ(αk + 1), the Mittag-Leffler function with one parameter is obtained,
$$
E_{\alpha}(z) = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+1)} \tag{8.2.2}
$$
where α ∈ C, with C being the set of complex numbers. The convergence condition of the infinite series is Re(α) > 0.
If 1 in the above gamma function is replaced by another free constant β, the series
then becomes the Mittag-Leffler function with two parameters.
Definition 8.5. The mathematical form of the Mittag-Leffler function with two param-
eters is
$$
E_{\alpha,\beta}(z) = \sum_{k=0}^{\infty}\frac{z^{k}}{\Gamma(\alpha k+\beta)} \tag{8.2.3}
$$
where α, β ∈ C , and the convergence conditions for the infinite series of z ∈ C are
Re(α) > 0 and Re(β) > 0.
Definition 8.6. More generally, Mittag-Leffler functions with three and four parame-
ters are respectively defined as[64]
$$
E_{\alpha,\beta}^{\gamma}(z) = \sum_{k=0}^{\infty}\frac{(\gamma)_{k}\,z^{k}}{\Gamma(\alpha k+\beta)\,k!}, \qquad
E_{\alpha,\beta}^{\gamma,q}(z) = \sum_{k=0}^{\infty}\frac{(\gamma)_{kq}\,z^{k}}{\Gamma(\alpha k+\beta)\,k!} \tag{8.2.4}
$$
where α, β, γ ∈ C. For any z ∈ C, the convergence conditions for the infinite series are Re(α) > 0, Re(β) > 0, and Re(γ) > 0. It is noted that q ∈ N, where N is the set of positive integers. The symbol $(\gamma)_{k}$ is known as the Pochhammer symbol,
$$
(\gamma)_{k} = \gamma(\gamma+1)(\gamma+2)\cdots(\gamma+k-1) = \frac{\Gamma(k+\gamma)}{\Gamma(\gamma)}. \tag{8.2.5}
$$
y=ml_func(v,t,n,ϵ)
where for Mittag-Leffler function with one parameter, v = α; while for that having
two parameters, v = [α, β]. The input argument v can also be selected as [α, β, γ] and
[α, β, γ, q] for Mittag-Leffler functions with three and four parameters. The argument n
is the integer order of the Mittag-Leffler function. For Mittag-Leffler function computa-
tion, n = 0. The argument ϵ is the error tolerance. The returned argument y is the nth
order derivative of the Mittag-Leffler function at the time vector t.
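For instance, a minimal usage sketch, assuming the FOTF Toolbox is on the MATLAB path and taking the analytical Caputo derivative of Example 8.1 as the target (the time vector below is an arbitrary choice), is

>> t=0:0.01:1; y0=-t.^0.4.*ml_func([1,1.4],-t);  % y0(t) = -t^0.4*E_{1,1.4}(-t)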
where bi and ai are real coefficients, while γi and ηi are the orders. The signal u(t) can
be regarded as the input to the system, while y(t) is the output signal.
Definition 8.9. As a special case, if there exists an order α such that all the orders in the above differential equation are integer multiples of α, the equation is referred to as a commensurate-order differential equation, with base order α.
Definition 8.10. If the commensurate-order differential equation has zero initial val-
ues, and denoting λ = sα , the integer-order transfer function of the operator λ can be
established as
d1 λm + d2 λm−1 + ⋅ ⋅ ⋅ + dm λ + dm+1
G(λ) = . (8.2.8)
c1 λn + c2 λn−1 + ⋅ ⋅ ⋅ + cn λ + cn+1
An important Laplace transform formula is presented first. Then several special cases
are considered, from which the analytical solutions of certain fractional-order differ-
ential equations can be formulated.
$$
\mathscr{L}^{-1}\left[\frac{s^{\alpha\gamma-\beta}}{(s^{\alpha}+a)^{\gamma}}\right] = t^{\beta-1}E_{\alpha,\beta}^{\gamma}(-at^{\alpha}) \tag{8.2.9}
$$
where $E_{\alpha,\beta}^{\gamma}(\cdot)$ is the Mittag-Leffler function with three parameters.
Theorem 8.6. If γ = 1 and αγ − β = −1, that is, if β = α + 1, then (8.2.9) can be written as
$$
\mathscr{L}^{-1}\left[\frac{1}{s(s^{\alpha}+a)}\right] = t^{\alpha}E_{\alpha,\alpha+1}(-at^{\alpha}). \tag{8.2.11}
$$
This formula can be regarded as the analytical solution of the fractional-order sys-
tem 1/(sα + a) driven by a step input signal.
Theorem 8.7. If γ = k is an integer, and αγ = β, that is, if β = αk, then (8.2.9) can be
written as
$$
\mathscr{L}^{-1}\left[\frac{1}{(s^{\alpha}+a)^{k}}\right] = t^{\alpha k-1}E_{\alpha,\alpha k}^{k}(-at^{\alpha}), \tag{8.2.12}
$$
and it can be regarded as the analytical solution of the fractional-order transfer function $1/(s^{\alpha}+a)^{k}$ driven by an impulsive signal.
In the analytical analysis of integer-order linear differential equations, the partial frac-
tion expansion technique plays a very important part. This idea can also be extended
to the analytical solution of commensurate-order systems. In this section, the partial
fraction expansions of commensurate-order systems are presented, and then step and
impulse responses based methods are presented.
Theorem 8.9. If there is a set of distinct poles −pi for λ in the commensurate-order
transfer function, the integer-order transfer function of λ can be expressed by a partial
fraction expansion in the form of
$$
G(\lambda) = \sum_{i=1}^{n}\frac{r_{i}}{\lambda+p_{i}} = \sum_{i=1}^{n}\frac{r_{i}}{s^{\alpha}+p_{i}}. \tag{8.2.14}
$$
If there exist repeated poles at −pi , with multiplicity m, the partial fraction expan-
sion of the relevant part is written as
$$
\frac{r_{i1}}{s^{\alpha}+p_{i}} + \frac{r_{i2}}{(s^{\alpha}+p_{i})^{2}} + \cdots + \frac{r_{im}}{(s^{\alpha}+p_{i})^{m}} = \sum_{j=1}^{m}\frac{r_{ij}}{(s^{\alpha}+p_{i})^{j}}. \tag{8.2.15}
$$
Definition 8.11. For commensurate-order systems with base order α, the partial frac-
tion expansion can be written as
$$
G(s) = \sum_{i=1}^{N}\sum_{j=1}^{m_{i}}\frac{r_{ij}}{(s^{\alpha}+p_{i})^{j}} \tag{8.2.16}
$$
where pi and rij are complex numbers, the multiplicity mi of the ith pole pi is an integer,
and m1 + m2 + ⋅ ⋅ ⋅ + mN = n.
Theorem 8.10. More generally, since commensurate-order transfer functions are factor-
ized in the form defined in Definition 8.11, the analytical solutions for impulsive input
signals can be written as
$$
y_{\delta}(t) = \mathscr{L}^{-1}\left[\sum_{i=1}^{N}\sum_{j=1}^{m_{i}}\frac{r_{ij}}{(s^{\alpha}+p_{i})^{j}}\right]
= \sum_{i=1}^{N}\sum_{j=1}^{m_{i}}r_{ij}\,t^{\alpha j-1}E_{\alpha,\alpha j}^{j}(-p_{i}t^{\alpha}), \tag{8.2.19}
$$
$$
y_{u}(t) = \mathscr{L}^{-1}\left[\sum_{i=1}^{N}\sum_{j=1}^{m_{i}}\frac{r_{ij}}{s(s^{\alpha}+p_{i})^{j}}\right]
= \sum_{i=1}^{N}\sum_{j=1}^{m_{i}}r_{ij}\,t^{\alpha j}E_{\alpha,\alpha j+1}^{j}(-p_{i}t^{\alpha}). \tag{8.2.20}
$$
Example 8.2. Find the analytical solution of the following fractional-order differential equation for a step input:
$$
{}_{0}\mathscr{D}_t^{0.8}y(t) + 0.75\,{}_{0}\mathscr{D}_t^{0.4}y(t) + 0.9y(t) = 5u(t).
$$
Solutions. It can be seen that the base order is α = 0.4. Denoting λ = s^{0.4}, the commensurate-order transfer function model can be derived as follows, which is an integer-order transfer function of λ:
$$
G(s) = \frac{5}{s^{0.8}+0.75s^{0.4}+0.9} \quad\Longrightarrow\quad G(\lambda) = \frac{5}{\lambda^{2}+0.75\lambda+0.9}.
$$
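The partial fraction expansion of G(λ) can be computed, for instance, with the residue() function; a minimal sketch of such a command (an assumed reconstruction, not the original statements) is

>> [r,p,k]=residue(5,[1 0.75 0.9])  % residues and poles of 5/(lambda^2+0.75*lambda+0.9)

With the computed residues and poles, the expansion can be written as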
$$
G(s) = \frac{-2.8689\mathrm{j}}{s^{0.4}+0.3750-0.8714\mathrm{j}} + \frac{2.8689\mathrm{j}}{s^{0.4}+0.3750+0.8714\mathrm{j}}.
$$
From the above expansion and (8.2.20), it is immediately seen that the analytical solution of the step response can be written as
$$
y(t) = -2.8689\mathrm{j}\,t^{0.4}E_{0.4,1.4}\bigl(-(0.3750-0.8714\mathrm{j})t^{0.4}\bigr) + 2.8689\mathrm{j}\,t^{0.4}E_{0.4,1.4}\bigl(-(0.3750+0.8714\mathrm{j})t^{0.4}\bigr).
$$
Example 8.3. Find the analytical and numerical solutions for the impulse response of
the following fractional-order system:
$$
G(s) = \frac{s^{1.2}+3s^{0.4}+5}{s^{1.6}+10s^{1.2}+35s^{0.8}+50s^{0.4}+24}.
$$
Solutions. The base order is α = 0.4. Denoting λ = s^{0.4}, the integer-order transfer function of λ is
$$
G(\lambda) = \frac{\lambda^{3}+3\lambda+5}{\lambda^{4}+10\lambda^{3}+35\lambda^{2}+50\lambda+24}.
$$
With the residue() function in MATLAB, the partial fraction expansion of G(λ) can be found.
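A hedged sketch of the corresponding command, with the numerator and denominator coefficient vectors read off from G(λ), is the following:

>> [r,p,k]=residue([1 0 3 5],[1 10 35 50 24])
% poles at -1, -2, -3, -4, with residues 1/6, 9/2, -31/2 and 71/6, respectively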
With the properties in (8.2.10), the analytical solution of the impulse response can
be written as
$$
y(t) = \frac{71}{6}t^{-0.6}E_{0.4,0.4}(-4t^{0.4}) - \frac{31}{2}t^{-0.6}E_{0.4,0.4}(-3t^{0.4})
+ \frac{9}{2}t^{-0.6}E_{0.4,0.4}(-2t^{0.4}) + \frac{1}{6}t^{-0.6}E_{0.4,0.4}(-t^{0.4}).
$$
Based on the analytical solution formula, the following MATLAB commands can
also be used to evaluate the numerical solutions, as shown in Figure 8.1.
>> t=0.01:0.01:5; t1=t.^0.4;  % time vector assumed; t1 abbreviates t^0.4
y=71/6*t.^(-0.6).*ml_func([0.4,0.4],-4*t1)-...  % leading term reconstructed from the analytical solution
31/2*t.^(-0.6).*ml_func([0.4,0.4],-3*t1)+...
9/2*t.^(-0.6).*ml_func([0.4,0.4],-2*t1)+...
1/6*t.^(-0.6).*ml_func([0.4,0.4],-t1);
plot(t,y) % solution curves of the differential equations
Example 8.4. Solve the following fractional-order differential equation with zero ini-
tial values:
$$
\mathscr{D}^{1.2}y(t) + 5\mathscr{D}^{0.9}y(t) + 9\mathscr{D}^{0.6}y(t) + 7\mathscr{D}^{0.3}y(t) + 2y(t) = u(t)
$$
Solutions. Selecting the base order 0.3 and letting λ = s0.3 , the integer-order transfer
function of λ can be found as
$$
G(\lambda) = \frac{1}{\lambda^{4}+5\lambda^{3}+9\lambda^{2}+7\lambda+2}.
$$
The partial fraction expansion can again be found with the residue() function.
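A sketch of the command is as follows (an assumed reconstruction); note that residue() also handles the repeated pole at −1 with multiplicity 3:

>> [r,p,k]=residue(1,[1 5 9 7 2])  % poles -2 and -1 (multiplicity 3)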
If the input signal u(t) is a unit impulsive signal, the Laplace transform of the
output signal is
$$
Y(s) = G(s) = -\frac{1}{s^{0.3}+2} + \frac{1}{s^{0.3}+1} - \frac{1}{(s^{0.3}+1)^{2}} + \frac{1}{(s^{0.3}+1)^{3}}.
$$
It can be seen from (8.2.10) and (8.2.12) that the analytical solution of the impulse response is
$$
y_{1}(t) = -t^{-0.7}E_{0.3,0.3}(-2t^{0.3}) + t^{-0.7}E_{0.3,0.3}(-t^{0.3}) - t^{-0.4}E_{0.3,0.6}^{2}(-t^{0.3}) + t^{-0.1}E_{0.3,0.9}^{3}(-t^{0.3}).
$$
If the input u(t) is a unit step signal, the Laplace transform of the output signal is
$$
Y(s) = \frac{1}{s}G(s) = -\frac{1}{s(s^{0.3}+2)} + \frac{1}{s(s^{0.3}+1)} - \frac{1}{s(s^{0.3}+1)^{2}} + \frac{1}{s(s^{0.3}+1)^{3}}.
$$
The curves of the step and impulse responses of the output signal can be obtained
as shown in Figure 8.2.
>> t=0:0.002:0.5;
y1=-t.^-0.7.*ml_func([0.3,0.3],-2*t.^0.3)...
+t.^-0.7.*ml_func([0.3,0.3],-t.^0.3)...
-t.^-0.4.*ml_func([0.3,0.6,2],-t.^0.3)...
+t.^-0.1.*ml_func([0.3,0.9,3],-t.^0.3);
y2=-t.^0.3.*ml_func([0.3,1.3],-2*t.^0.3)...
+t.^0.3.*ml_func([0.3,1.3],-t.^0.3)...
-t.^0.6.*ml_func([0.3,1.6,2],-t.^0.3)...
+t.^0.9.*ml_func([0.3,1.9,3],-t.^0.3);
plot(t,y1,t,y2) % impulse and step responses
It is worth mentioning that the analytical solutions have many restrictions. On the one hand, the differential equations must be linear and of commensurate order. On the other hand, only special input signals such as steps and impulses are allowed. Although the theory can be extended to signals such as ramp inputs, the admissible signal types remain very restricted, and for ordinary input signals the method here cannot be used. If the analytical solutions cannot be found, numerical solutions become the only choice. In the remainder of this chapter, numerical methods for fractional-order differential equations are discussed.
If the initial values of u(t), y(t), and their derivatives are all zero, the right-hand side of the expression in (8.2.6) can be equivalently denoted as û(t), so the original differential equation can be simplified as
$$
a_{1}\mathscr{D}_t^{\gamma_{1}}y(t) + a_{2}\mathscr{D}_t^{\gamma_{2}}y(t) + \cdots + a_{n-1}\mathscr{D}_t^{\gamma_{n-1}}y(t) + a_{n}\mathscr{D}_t^{\gamma_{n}}y(t) = \hat{u}(t) \tag{8.3.1}
$$
where û(t) is a linear combination of u(t) and its fractional-order derivatives, and can be evaluated independently in advance.
For simplicity, assume that γ₁ > γ₂ > ⋅⋅⋅ > γₙ₋₁ > γₙ > 0. If the following two special cases appear, transformations must be carried out first.
(1) If the orders are not sorted in this decreasing manner, they must be sorted first.
(2) If there exists a negative order γᵢ, integro-differential equations are involved, which are not easy to solve. A new variable z(t) = 𝒟ₜ^{γₙ}y(t) should be introduced, the original equation transformed into an equation in z(t), and numerical integrals of the result evaluated to find the output signal y(t).
Now consider the general form of the linear fractional-order differential equation in (8.2.6). The algorithm introduced here seeks to find first the equivalent signal û(t) on the right-hand side, and then to solve the differential equation in (8.3.1). The idea is illustrated in Figure 8.3(a). Unfortunately, this idea is not feasible. For instance, if the input is a step signal and there happen to be integer-order derivatives on the right-hand side, the contributions of those terms may be lost and wrong results obtained. Therefore an alternative idea must be employed.
A different idea is adopted in the actual implementation. The original linear problem is tackled equivalently by using u(t) to compute directly an intermediate output ŷ(t), and then composing y(t) as a weighted sum of the fractional-order derivatives of ŷ(t). The idea is illustrated in Figure 8.3(b). Since the original system is linear, it can be divided into two parts, N(s) and 1/D(s), where N(s) and D(s) are pseudopolynomials. In a linear system framework they can be swapped, such that the obtained y(t) is the same. The algorithm is implemented in a MATLAB solver given later.
Figure 8.3: Two equivalent structures: (a) u(t) → N(s) → û(t) → 1/D(s) → y(t); (b) u(t) → 1/D(s) → ŷ(t) → N(s) → y(t).
Based on the above algorithm, the following MATLAB function fode_sol() can be written for solving linear fractional-order differential equations with zero initial values. In the function, W is a matrix whose ith column stores the w coefficients of the ith order in the vector [na, nb].
function y=fode_sol(a,na,b,nb,u,t)
h=t(2)-t(1); D=sum(a./[h.^na]); nT=length(t);
D1=b(:)./h.^nb(:); nA=length(a); vec=[na nb];
y1=zeros(nT,1); W=ones(nT,length(vec));
for j=2:nT, W(j,:)=W(j-1,:).*(1-(vec+1)/(j-1)); end
for i=2:nT
A=[y1(i-1:-1:1)]’*W(2:i,1:nA);
y1(i)=(u(i)-sum(A.*a./[h.^na]))/D;
end
for i=2:nT, y(i)=(W(1:i,nA+1:end)*D1)’*[y1(i:-1:1)]; end
The syntax of the function is y=fode_sol(a,na,b,nb,u,t), where the time and input vectors are provided in t and u.
Example 8.5. If the input signal is u(t) = sin t², find the numerical solution of the following linear fractional-order differential equation:
Solutions. This fractional-order differential equation cannot be solved with the an-
alytical methods discussed earlier, since the input is not a step or impulsive signal.
Therefore the method presented in this section can be used to find the numerical
solutions directly.
It is found from the equations that the vectors a, na , b, and nb can immediately
be found. The equally-spaced time and input vectors can then be computed such that
function fode_sol() can be called to solve the original equation. The results obtained
are shown in Figure 8.4. The two step-sizes, 0.002 and 0.001, are used to cross-validate
the results. The two results are the same, meaning that the solutions are correct.
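A hedged sketch of such a cross-validation is given below; the coefficient vectors a, na, b, nb are placeholders only and must be taken from the actual equation, and the time range is likewise an assumption:

a=[1 5 9 7 2]; na=[1.2 0.9 0.6 0.3 0]; b=1; nb=0;   % placeholder coefficients
t1=0:0.002:5; y1=fode_sol(a,na,b,nb,sin(t1.^2),t1); % step-size 0.002
t2=0:0.001:5; y2=fode_sol(a,na,b,nb,sin(t2.^2),t2); % step-size 0.001
plot(t1,y1,t2,y2) % the two curves should coincide if the solutions are correct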
The closed-form algorithm illustrated above only applies to linear Riemann–Liouville differential equations. Unfortunately, the precision is only o(h). Therefore the algorithm has certain limitations in practical use. In [74], an o(h^p) closed-form algorithm is given.
If the differential equation in (8.3.1) has initial values, another closed-form for-
mula similar to (8.3.3) can be constructed from Grünwald–Letnikov definition. If the
coefficients wj in (8.3.3) are obtained by a high precision algorithm, then the following
closed-form solution can be formulated:
(1) Replace the operator in (8.3.1) with Grünwald–Letnikov operator.
(2) For each order, use a high precision recursive formula to compute wj .
(3) Find the numerical solution yk from (8.3.3).
function y=fode_sol9(a,na,b,nb,u,t,p)
h=t(2)-t(1); n=length(t); vec=[na nb]; u=u(:);
g=double(genfunc(p)); t=t(:); W=[];
for i=1:length(vec), W=[W; get_vecw(vec(i),n,g)]; end
D1=b(:)./h.^nb(:); nA=length(a); y1=zeros(n,1);
W=W.’; D=sum((a.*W(1,1:nA))./[h.^na]);
for i=2:n
A=[y1(i-1:-1:1)]’*W(2:i,1:nA);
y1(i)=(u(i)-sum(A.*a./[h.^na]))/D;
end
for i=2:n, y(i)=(W(1:i,nA+1:end)*D1)’*[y1(i:-1:1)]; end
The necessary condition for using the pth order algorithm is that the first p values of y(t) must be zero or very close to zero, so that the missing first few terms w_j have little impact on the results. For certain differential equations where this necessary condition is not satisfied, the high precision algorithms to be presented next can be used instead.
Example 8.6. Consider the following zero initial value problem:
$$
y'''(t) + {}_{0}^{RL}\mathscr{D}_t^{2.5}y(t) + y(t) = -1 + t - \frac{t^{2}}{2} - t^{0.5}E_{1,1.5}(-t).
$$
Solutions. Selecting a slightly larger step-size h = 0.1, the following commands can
be used to solve the differential equation for different p. It can be seen that the nu-
merical solutions are as shown in Figure 8.5. It is immediately seen that the errors are
rather large in the solutions.
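A possible sketch of such commands is shown below; the time range, and the reference solution used for the error curves, are assumptions:

>> t=0:0.1:5; u=-1+t-t.^2/2-t.^0.5.*ml_func([1,1.5],-t);
a=[1 1 1]; na=[3 2.5 0]; b=1; nb=0;
ya=exp(-t)-1+t-t.^2/2;   % assumed analytical solution, used only for the error curves
for p=1:4, y=fode_sol9(a,na,b,nb,u,t,p); plot(t,abs(y-ya)), hold on, end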
For the four selected values of p, the error curves can be obtained with the above statements, as shown in Figure 8.5. It is seen that the precision for p = 2 is clearly better than that for p = 1 obtained with fode_sol(). Further increasing the order to p = 3 and p = 4, the precision is reduced, since the first four values of y(t) are respectively 0, −0.0002, −0.0013, and −0.0042: the first two are close to zero, but the latter two are relatively large. Therefore the approximation in the solution is not good.
Note that the condition “the initial values of y(t) and their derivatives are suffi-
ciently small” is too strict. Therefore this algorithm is not suitable for large values of
p. The high-precision Caputo equation solver to be presented next is recommended.
$$
a_{1}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{1}}y(t) + a_{2}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{2}}y(t) + \cdots + a_{n}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{n}}y(t) = \hat{u}(t). \tag{8.3.4}
$$
It is worth mentioning that, in the Caputo definition, the initial values are the
values of the signal y(t) and its integer-order derivatives. Therefore this definition is
more suitable for describing practical systems. Further explorations are needed for the
numerical solutions of Caputo equations.
If the differential equation in (8.3.4) has nonzero initial values, an auxiliary function T(t) should be introduced such that the original equation can be mapped into a differential equation of z(t) = y(t) − T(t) with zero initial values, where the initial values of the auxiliary function T(t) are the same as those of the signal y(t), while z(t) has zero initial values.
Substituting the signal y(t) by z(t) + T(t) in (8.3.4), the differential equation of y(t)
can be mapped into an equation of signal z(t), which contains zero initial values:
$$
a_{1}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{1}}z(t) + a_{2}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{2}}z(t) + \cdots + a_{n}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{n}}z(t) = \hat{u}(t) - P(t) \tag{8.3.9}
$$
where the compensation signal P(t) is expressed as
$$
P(t) = \left(a_{1}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{1}} + a_{2}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{2}} + \cdots + a_{n}\,{}_{t_0}^{C}\mathscr{D}_t^{\gamma_{n}}\right)T(t). \tag{8.3.10}
$$
Since the z(t) signal has zero initial values, ${}_{t_0}^{C}\mathscr{D}^{\alpha}z(t) = {}_{t_0}^{GL}\mathscr{D}^{\alpha}z(t)$. Therefore the equivalent signal on the right-hand side is û(t) − P(t). The numerical solutions zₘ of this equation can be obtained from (8.3.3), used for the zero initial value problems.
$$
y_{m} = \frac{1}{\sum_{i=1}^{n} a_{i}/h^{\gamma_{i}}}\left(\hat{u}_{m} - P_{m} - \sum_{i=1}^{n}\frac{a_{i}}{h^{\gamma_{i}}}\sum_{j=1}^{m}w_{j}^{(\gamma_{i})}y_{m-j}\right) + T_{m}. \tag{8.3.11}
$$
Example 8.7. Solve the Caputo differential equation $Ay''(t) + B\,{}_{0}^{C}\mathscr{D}_t^{3/2}y(t) + Cy(t) = C(t+1)$, with y(0) = y'(0) = 1.
Solutions. From the given initial values, an auxiliary function can be written as T(t) =
t + 1. Therefore the original signal can be decomposed as y(t) = z(t) + t + 1, where z(t) is
the signal with zero initial values. The original differential equation can be rewritten
as the Grünwald–Letnikov solution in terms of signal z(t) with zero initial values.
Let us observe again the Caputo definition. Since a 1.5th order derivative of y(t) is taken, the second-order derivative is evaluated first and then a fractional-order integral is applied. This means that the compensating term t + 1 will vanish. The original equation can be rewritten as
$$
Az''(t) + B\,{}_{0}^{C}\mathscr{D}_t^{3/2}z(t) + Cz(t) + C(t+1) = C(t+1).
$$
It can be seen that the C(t + 1) terms on both sides cancel, so the differential equation reduces to
$$
Az''(t) + B\,{}_{0}^{C}\mathscr{D}_t^{3/2}z(t) + Cz(t) = 0.
$$
It can be seen that since z(t) has zero initial values, and there is no external ex-
citation in the above equation, it implies that z(t) ≡ 0. Therefore the solution of the
original equation is y(t) = t + 1. That is, the solution of the equation is independent of
the constants A, B, and C.
The auxiliary function T(t) is a truncated Taylor series expansion of the output signal y(t), and the difference between T(t) and y(t) is very small at the beginning of the simulation process. However, if y(t) is a bounded function while |T(t)| keeps increasing with t, then |z(t)| = |y(t) − T(t)| is also an increasing function. Since the two terms are evaluated separately, they cannot be perfectly cancelled when t is very large, which may lead to huge computational errors for large t. Examples will be shown next to illustrate this kind of phenomenon.
It has been indicated that, if the pth order algorithm is used, the necessary condition is
that the first p values of z(t) = y(t)−T(t) are zero or close to zero. This has been demon-
strated in Example 8.6. Since z(t) here is a zero initial value function, the condition is
satisfied.
Since the order p can be selected independently of the actual highest order q in the Caputo equation, there are two possibilities. One is p ⩽ q, where the necessary conditions are satisfied and high precision solutions can be found directly. The other is p > q, where p equivalent initial values are needed before the high-precision solution can be found. In the latter case, a two-step method is used.
Whatever the value of q in the Caputo equation, the Taylor auxiliary function T(t) is constructed such that
$$
T(t) = \sum_{k=0}^{p-1}c_{k}\frac{t^{k}}{k!}. \tag{8.3.12}
$$
If p ⩽ q, then letting c_k = y^{(k)}(0) for k = 0, 1, . . . , p − 1 completes the first step. If p > q, then c_k = y^{(k)}(0) for k = 0, 1, . . . , q − 1, and the remaining p − q values c_k must be computed. To let T(t) and y(t) have the same initial values, the undetermined constants are still denoted by c_k for q ⩽ k ⩽ p − 1. Therefore the z(t) signal has zero initial values.
For simplicity, function T(t) is rewritten as
$$
T(t) = \sum_{k=0}^{p-1}c_{k}T_{k} \tag{8.3.13}
$$
where T_k = t^k/k!. Since z(t) has zero initial values, the first p values in the interpolation
polynomial are zero, so that the first p initial values of T(t) and y(t) are the same. In
other words, the first p initial values of T(t) satisfy the original Caputo differential
equation. Equation (8.3.13) can be substituted into the original equation such that
$$
\sum_{k=0}^{p-1}c_{k}x_{k}(t) = \hat{u}(t) \tag{8.3.14}
$$
where
$$
x_{k}(t) = \left(a_{1}\,{}_{t_0}^{C}\mathscr{D}_t^{\eta_{1}} + a_{2}\,{}_{t_0}^{C}\mathscr{D}_t^{\eta_{2}} + \cdots + a_{n}\,{}_{t_0}^{C}\mathscr{D}_t^{\eta_{n}}\right)T_{k}(t). \tag{8.3.15}
$$
It can be seen that the first p points can be evaluated directly. In (8.3.14), letting
t = h, 2h, . . . , Kh, where K = p − q, the following linear algebraic equation can be
established:
$$
\begin{bmatrix}
x_{0}(h) & x_{1}(h) & \cdots & x_{p-1}(h)\\
x_{0}(2h) & x_{1}(2h) & \cdots & x_{p-1}(2h)\\
\vdots & \vdots & \ddots & \vdots\\
x_{0}(Kh) & x_{1}(Kh) & \cdots & x_{p-1}(Kh)
\end{bmatrix}
\begin{bmatrix} c_{0}\\ c_{1}\\ \vdots\\ c_{p-1}\end{bmatrix}
=
\begin{bmatrix} \hat{u}(h)\\ \hat{u}(2h)\\ \vdots\\ \hat{u}(Kh)\end{bmatrix}. \tag{8.3.16}
$$
From this equation it can be seen that c_k, 0 ⩽ k ⩽ q − 1, are fixed by the initial values of the equation. The number of remaining unknowns is K, and they can be evaluated directly from the linear algebraic equation in (8.3.16). Therefore the constants c_i, 0 ⩽ i ⩽ p − 1, can be regarded as the new equivalent initial values. With these initial values,
(8.3.13) can be used to build up the Taylor auxiliary function T(t), such that the first p
terms in y(t) can be computed directly from T(t). The above ideas are included in the
following algorithm:
(1) For 0 ⩽ k ⩽ p − 1, construct Tk = t k /k!, and compute xk from (8.3.15).
(2) Letting K = p − q, the coefficient matrix in (8.3.16) can be established with xk .
(3) For the given û(h), û(2h), . . . , û(Kh), the values are obtained from the equivalent input on the right-hand side of (8.3.4).
(4) The first q coefficients c_k are equal to the given initial values. The rest can be found from the linear algebraic equation in (8.3.16), such that all the equivalent initial values are found.
Based on the above considerations, the MATLAB function caputo_ics() can be writ-
ten for finding the first p equivalent initial values, aiming at obtaining high precision
solutions of Caputo differential equations:
function [c,y]=caputo_ics(a,na,b,nb,y0,u,t)
na1=ceil(na); q=max(na1); K=length(t);
p=K+q-1; y0=y0(:); u=u(:); t=t(:); d1=y0./gamma(1:q)’;
I1=1:q; I2=(q+1):p; X=zeros(K,p);
for i=1:p, for k=1:length(a)
if i>na1(k)
X(:,i)=X(:,i)+a(k)*t.^(i-1-na(k))*gamma(i)/gamma(i-na(k));
end, end, end
u1=0; for i=1:length(b), u1=u1+b(i)*caputo9(u,t,nb(i),K-1); end
X(1,:)=[]; u2=u1(2:end)-X(:,I1)*d1; d2=inv(X(:,I2))*u2;
c=[d1;d2]; y=0; for i=1:p, y=y+c(i)*t.^(i-1); end
The syntax of the function is [c,y]=caputo_ics(a,na,b,nb,y0,u,t), where the vectors u and t are the input and time vectors, having length p + 1, from which the value of p is loaded into the function. The returned argument c contains the equivalent initial values, and the vector y returns the first p values of the solution.
If the precision requirement is not very high, that is, p ⩽ q, the given initial values
y 0 are sufficient. If the precision requirements are high, that is, p > q, the given vector
y 0 is not sufficient. The equivalent initial values are, in fact, the high order terms of
Taylor series of y(t). Since the y(t) signal is not known in real applications, high order
terms in its Taylor series can only be found by solving linear algebraic equations.
Example 8.8. Consider the Caputo differential equation
$$
y'''(t) + \frac{1}{16}\,{}_{0}^{C}\mathscr{D}_t^{2.5}y(t) + \frac{4}{5}y''(t) + \frac{3}{2}y'(t) + \frac{1}{25}\,{}_{0}^{C}\mathscr{D}_t^{0.5}y(t) + \frac{6}{5}y(t) = \frac{172}{125}\cos\frac{4t}{5}
$$
with initial values y(0) = 1, y'(0) = 4/5, and y''(0) = −16/25. The analytical solution is y(t) = √2 sin(4t/5 + π/4). For 0 ⩽ t ⩽ 30 and different step-sizes h, try to reconstruct the Taylor series coefficients and validate the results.
Solutions. The following commands can be used to compute the Taylor series coeffi-
cients from the analytical solutions:
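A hedged sketch of such commands, assuming the Symbolic Math Toolbox is available, is

>> syms t; taylor(sqrt(2)*sin(4*t/5+sym(pi)/4),t,'Order',7)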
It can be seen that the first seven terms in the Taylor series are
$$
y(t) = 1 + \frac{4}{5}t - \frac{8}{25}t^{2} - \frac{32}{375}t^{3} + \frac{32}{1\,875}t^{4} + \frac{128}{46\,875}t^{5} - \frac{256}{703\,125}t^{6} + o(t^{6}).
$$
If the first few terms in the Taylor series are to be rebuilt, different values of p and
step-sizes h are tried. The equivalent initial values can be found, and the errors are
given in Table 8.3.
>> a=[1 1/16 4/5 3/2 1/25 6/5]; na=[3 2.5 2 1 0.5 0];
y0=[1 4/5 -16/25]; b=1; nb=0;
h=[0.1,0.05,0.02,0.01,0.005,0.002,0.001];
for i=1:7, for p=1:6
t=[0:h(i):p*h(i)]; u=172/125*cos(4/5*t);
y=sqrt(2)*sin(4*t/5+pi/4); % compute the analytical solution
[ee,yy]=caputo_ics(a,na,b,nb,y0,u,t); err(p,i)=norm(yy-y’);
end, end
It is seen that the errors obtained are all very small. Therefore the equivalent initial
values can be used, where the necessary conditions are satisfied. They can be used in
finding high precision solutions of Caputo differential equations.
The equivalent initial values are, in fact, the first p values of the numerical solution
y(t). Taylor auxiliary function T(t) can be established directly. With the above meth-
ods, the high precision solutions of Caputo differential equations can then be found.
Summarizing the above ideas, the following algorithm is proposed:
(1) Compute the equivalent initial values.
(2) Establish the auxiliary function T(t) from the equivalent initial values. Signal y(t)
is decomposed as T(t) + z(t).
(3) Find the high precision solutions z(t) of the differential equations with zero initial
values.
(4) From y(t) = T(t) + z(t), the high precision solution can be constructed.
function y=fode_caputo9(a,na,b,nb,y0,u,t,p)
T=0; dT=0; t=t(:); u=u(:);
if p>length(y0)
yy0=caputo_ics(a,na,b,nb,y0,u(1:p),t(1:p));
y0=yy0(1:p).*gamma(1:p)’;
elseif p==length(y0)
yy0=caputo_ics(a,na,b,nb,y0,u(1:p+1),t(1:p+1));
y0=yy0(1:p+1).*gamma(1:p+1)’;
end
for i=1:length(y0), T=T+y0(i)/gamma(i)*t.^(i-1); end
for i=1:length(na), dT=dT+a(i)*caputo9(T,t,na(i),p); end
u=u-dT; y=fode_sol9(a,na,b,nb,u,t,p)+T’;
The syntax of the function is y=fode_caputo9(a,na,b,nb,y0,u,t,p). This function can be used to directly solve linear Caputo differential equations with nonzero initial values.
Example 8.9. Solve again the equation in Example 8.8, and assess the precision of the solutions for different orders and step-sizes.
Solutions. Selecting a large step-size h = 0.1 and different orders p, the numerical solutions of the differential equation can be found. Compared with the analytical solution, the errors at different time instances are measured, as shown in Table 8.4. It can be seen that, when p increases, the errors are significantly reduced. The case of p = 6 is an exception: when t is large, the error is also relatively large, although it is still significantly better than with the o(h) algorithm. In this example, p = 5 is the best choice.
If a smaller step-size h = 0.01 is selected, the computational errors for different selections of p are as shown in Table 8.5. It can be seen that the accuracy is, in general, not as good as in the case of step-size h = 0.1. This is because, when the step-size is reduced, the total number of points increases, and the accumulated error also increases. In this example, the total number of points is 3 001, and p = 3 is a good choice.
It is seen that, to avoid the growth of cumulative errors, the step-size should be suitably chosen such that the total number of points does not exceed 1 500, otherwise the cumulative errors may affect the final solutions.
Example 8.10. Consider the zero initial value problem in Example 8.6. In the previous
example, the behavior of the high precision algorithm was not satisfactory. Solve again
the problem with the high precision Caputo equation solver.
Solutions. It was pointed out in Example 8.6 that the high precision algorithm failed
because the necessary conditions of the algorithm were not satisfied. With the new
algorithm, the equivalent initial values are computed, such that the differential equa-
tion can be solved with the new solver. The error curve is shown in Figure 8.6, from
which it is seen that, if p = 4, there is no apparent error.
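A hedged sketch of such a computation, with the data of Example 8.6 and an assumed time range, is

>> a=[1 1 1]; na=[3 2.5 0]; b=1; nb=0; y0=[0 0 0];
h=0.1; t=0:h:5; u=-1+t-t.^2/2-t.^0.5.*ml_func([1,1.5],-t);
y=fode_caputo9(a,na,b,nb,y0,u,t,4);            % p=4
plot(t,y-(exp(-t)-1+t-t.^2/2))                 % error curve against the assumed analytical solution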
Definition 8.15. The general form of a nonlinear fractional-order differential equation is
$$
F\left(t, y(t), {}_{t_0}^{C}\mathscr{D}_t^{\alpha_{1}}y(t), \ldots, {}_{t_0}^{C}\mathscr{D}_t^{\alpha_{n}}y(t)\right) = 0 \tag{8.4.1}
$$
where F(⋅) is a function of time t, unknown y(t), and its fractional-order derivatives. It
is safe to assume that αn > αn−1 > ⋅ ⋅ ⋅ > α2 > α1 > 0.
Definition 8.16. Denoting q = ⌈αn ⌉ in Definition 8.15, the necessary initial values for
the fractional-order differential equation satisfy (8.3.5).
$$
{}_{0}^{C}\mathscr{D}_t^{\alpha}y(t) = f\left(t, y(t), {}_{0}^{C}\mathscr{D}_t^{\alpha_{1}}y(t), \ldots, {}_{0}^{C}\mathscr{D}_t^{\alpha_{n-1}}y(t)\right). \tag{8.4.2}
$$
Let us review the algorithm in Chapter 5. The key point is to introduce a Taylor
auxiliary function T(t), which ensures that the output signal can be decomposed into
y(t) = z(t) + T(t), where z(t) has zero initial values, and T(t) is the Taylor auxiliary
function defined in (8.3.7). The definition is given again here:
$$
T(t) = \sum_{k=0}^{q-1}\frac{y^{(k)}(0)}{k!}t^{k} = \sum_{k=0}^{q-1}\frac{y_{k}}{k!}t^{k}. \tag{8.4.3}
$$
$$
{}_{0}^{C}\mathscr{D}_t^{\alpha}z(t) = f\left(t, z(t), {}_{0}^{C}\mathscr{D}_t^{\alpha_{1}}z(t), \ldots, {}_{0}^{C}\mathscr{D}_t^{\alpha_{n-1}}z(t)\right). \tag{8.4.4}
$$
Besides, since ${}_{0}^{C}\mathscr{D}_t^{\gamma}z(t) = {}_{0}^{RL}\mathscr{D}_t^{\gamma}z(t)$, it is found that
$$
{}_{0}^{RL}\mathscr{D}_t^{\alpha}z(t) = f\left(t, z(t), {}_{0}^{RL}\mathscr{D}_t^{\alpha_{1}}z(t), \ldots, {}_{0}^{RL}\mathscr{D}_t^{\alpha_{n-1}}z(t)\right). \tag{8.4.5}
$$
$$
{}_{0}^{RL}\mathscr{D}_t^{\alpha}z(t) = \hat{f}. \tag{8.4.6}
$$
function [y,t]=nlfep(fun,alpha,y0,tn,h,p,err)
m=ceil(tn/h)+1; t=(0:(m-1))’*h; ha=h.^alpha; z=0;
[T,dT,w,d2]=aux_func(t,y0,alpha,p);
y=z+T(1); dy=zeros(1,d2-1);
for k=1:m-1
zp=z(end); yp=zp+T(k+1); y=[y; yp]; z=[z; zp];
[zc,yc]=repeat_funs(fun,t,y,d2,w,k,z,ha,dT,T);
while abs(zc-zp)>err
yp=yc; zp=zc; y(end)=yp; z(end)=zp;
[zc,yc]=repeat_funs(fun,t,y,d2,w,k,z,ha,dT,T);
end, end
% make the repeatable function as a subfunction
function [zc,yc]=repeat_funs(fun,t,y,d2,w,k,z,ha,dT,T)
for j=1:(d2-1)
dy(j)=w(1:k+1,j+1)’*z((k+1):-1:1)/ha(j+1)+dT(k,j+1);
end, f=fun(t(k+1),y(k+1),dy);
zc=((f-dT(k+1,1))*ha(1)-w(2:k+1,1)’*z(k:-1:1))/w(1,1);
yc=zc+T(k+1);
where the supporting function aux_func() is given below. Its purpose is to avoid repeating code in the corrector algorithm to be presented later. The code is as follows:
function [T,dT,w,d2]=aux_func(t,y0,alpha,p)
an=ceil(alpha); y0=y0(:); q=length(y0); d2=length(alpha);
m=length(t); g=double(genfunc(p));
for i=1:d2, w(:,i)=get_vecw(alpha(i),m,g)’; end
b=y0./gamma(1:q)’; T=0; dT=zeros(m,d2);
for i=1:q, T=T+b(i)*t.^(i-1); end
for i=1:d2
if an(i)==0, dT(:,i)=T;
elseif an(i)<q
for j=(an(i)+1):q
dT(:,i)=dT(:,i)+(t.^(j-1-alpha(i)))*...
b(j)*gamma(j)/gamma(j-alpha(i));
end, end, end
It can be seen that the solutions obtained are not the solutions of the original dif-
ferential equations. They can be regarded as the initial values for the corrector solvers
to be introduced next.
Example 8.11. Find the numerical solution of the following nonlinear fractional-order differential equation:[20]
$$
{}_{0}^{C}\mathscr{D}_t^{1.455}y(t) = -t^{0.1}\frac{E_{1,1.545}(-t)}{E_{1,1.445}(-t)}\,\mathrm{e}^{t}\,y(t)\,{}_{0}^{C}\mathscr{D}_t^{0.555}y(t) + \mathrm{e}^{-2t} - \left[y'(t)\right]^{2}.
$$
Solutions. The analytical solution is y(t) = e^{−t}. In [20], the original form of the equation is given with the two Mittag-Leffler functions described wrongly as E_{1.545}(−t) and E_{1.445}(−t). If the analytical solution y(t) = e^{−t} is substituted back into that equation, it can be seen that the two sides are different. To preserve the analytical solution, the Mittag-Leffler functions should be changed into the two-parameter functions used above.
It can be seen that the order vector is α = [1.455, 0.555, 1] and y₀ = [1, −1]. Introducing the signals d₁(t) = ${}_{0}^{C}\mathscr{D}_t^{0.555}y(t)$ and d₂(t) = y′(t), the standard form of the original equation can be written as follows:
>> f=@(t,y,Dy)-t.^0.1.*ml_func([1,1.545],-t).*exp(t)./...
ml_func([1,1.445],-t).*y.*Dy(:,1)+exp(-2*t)-Dy(:,2).^2;
where the variables t and y are the time and output column vectors. The argument Dy is a matrix whose columns correspond to the fractional-order derivatives of the state space expressions.
The following commands can be used to solve the Caputo differential equation with the predictor solver. The prediction results are shown in Figure 8.7. It can be seen that the predicted solutions are not satisfactory; the accuracy for p = 2 is even worse than for p = 1. The elapsed time is about 0.1 seconds, and the maximum errors are respectively 0.0264 and 0.0364.
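A hedged sketch of the predictor calls (the function handle f is the one defined above) is

>> alpha=[1.455,0.555,1]; y0=[1,-1]; tn=1; h=0.01; err=1e-8;
tic, [yp1,t]=nlfep(f,alpha,y0,tn,h,1,err); toc   % predictor with p=1
tic, [yp2,t]=nlfep(f,alpha,y0,tn,h,2,err); toc   % predictor with p=2
max(abs(yp1-exp(-t))), max(abs(yp2-exp(-t)))     % maximum errors of the predictions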
A better solver, based on a corrector method, is presented in this section. For the same step-size h, the predictor solution y_p can be employed: if it is substituted into the right-hand side of (8.4.2), a single-term differential equation can be established, and with iterative methods the solution of the differential equation can be found. The idea is implemented in the following vectorized algorithm:
(1) Assume that the predictor solution is y_p.
(2) Substituting y_p into (8.4.2), a corrector solution ŷ can be found.
(3) If ‖ŷ − y_p‖ < ϵ, the solution ŷ is accepted; otherwise, let y_p = ŷ and go back to step (2) to continue the iteration process, until the solution is found.
Based on the above algorithm, the following MATLAB function can be written. It can
be used to directly solve explicit Caputo differential equations of any complexity:
function y=nlfec(fun,alpha,y0,yp,t,p,err)
yp=yp(:); t=t(:); h=t(2)-t(1); m=length(t); ha=h.^alpha;
[T,dT,w,d2]=aux_func(t,y0,alpha,p);
[z,y]=repeat_funs(fun,t,yp,T,d2,alpha,dT,ha,w,m,p);
while norm(z)>err, yp=y; z=zeros(m,1);
[z,y]=repeat_funs(fun,t,yp,T,d2,alpha,dT,ha,w,m,p);
end
% the repetitive subfunction
function [z,y]=repeat_funs(fun,t,yp,T,d2,alpha,dT,ha,w,m,p)
for i=1:d2, dyp(:,i)=glfdiff9(yp-T,t,alpha(i),p)’+dT(:,i); end
f=fun(t,yp,dyp(:,2:d2))-dyp(:,1); y=yp; z=zeros(m,1);
for i=2:m, ii=(i-1):-1:1;
z(i)=(f(i)*(ha(1))-w(2:i,1)’*z(ii))/w(1,1); y(i)=z(i)+yp(i);
end
The syntax of the function is y=nlfec(fun,α,y₀,y_p,t,p,ϵ), where the input arguments are almost the same as those of nlfep(). Compared with the above algorithm, it can be seen that the predictor result is used to provide the initial solution for the iterative process. It can be replaced by others, e.g., y_p=ones(size(t)).
Example 8.12. Solve again the Caputo differential equation in Example 8.11.
Solutions. From the predictor results, selecting p = 2, the following commands are used to find the corrector solutions. The maximum error is 3.9337 × 10⁻⁵, and the elapsed time is 2.098 seconds. It can be seen that the efficiency of the algorithm is rather high.
>> f=@(t,y,Dy)-t.^0.1.*ml_func([1,1.545],-t).*exp(t)./...
ml_func([1,1.445],-t).*y.*Dy(:,1)+exp(-2*t)-Dy(:,2).^2;
alpha=[1.455,0.555,1]; y0=[1,-1];
tn=1; h=0.01; err=1e-8; p=1;
tic, [yp1,t]=nlfep(f,alpha,y0,tn,h,p,err); toc
tic, [y2,t]=nlfec(f,alpha,y0,yp1,t,2,err); toc
max(abs(y2-exp(-t))) % compute the maximum error
If a smaller step-size h = 0.001 is used, the following statements can be used to solve the differential equation again. The maximum error is reduced to 3.9716 × 10⁻⁷, and the elapsed time is 3.39 seconds. Further reducing the step-size to h = 0.0001, the maximum error is 6.8851 × 10⁻⁹, with an elapsed time of 80.11 seconds. If one selects p = 3, the maximum error is reduced to 3.8361 × 10⁻⁹, and the elapsed time is 367.06 seconds. Although with p = 3 the precision was improved, the theoretical level of o(h³) was not really reached.
In fact, a larger step-size can still be tried, for instance, h = 0.01. If p = 4, the maximum error obtained is 7.0546 × 10⁻⁹, and the elapsed time is 65.37 seconds. It can be seen that the efficiency is higher than using smaller step-sizes.
Up to now, the nonlinear Caputo differential equation algorithms were all for explicit
differential equations. If implicit equations in Definition 8.15 are involved, implicit
solvers are expected. The general form of nonlinear implicit fractional-order differ-
ential equations is given by
$$
F\left(t, y(t), {}_{t_0}^{C}\mathscr{D}_t^{\alpha_{1}}y(t), \ldots, {}_{t_0}^{C}\mathscr{D}_t^{\alpha_{n}}y(t)\right) = 0 \tag{8.4.7}
$$
and the initial values are yi , i = 0, 1, . . . , ⌈max(αi )⌉ − 1. Signal y(t) can still be decom-
posed as y(t) = z(t) + T(t), where T(t) is the same as defined in (8.4.3). The z(t) signal
has zero initial values. If the differential equation is revised into Riemann–Liouville
definition, the following equations are still satisfied:
$$
F\left(t, y(t), {}_{t_0}^{RL}\mathscr{D}_t^{\alpha_{1}}z(t), \ldots, {}_{t_0}^{RL}\mathscr{D}_t^{\alpha_{n}}z(t)\right) = 0. \tag{8.4.8}
$$
Recalling the matrix method of the pth order algorithm, Riemann–Liouville equa-
tions can be found. The original signal y(t) can be decomposed into u(t) + v(t), where
$$
u(t) = \sum_{k=0}^{p}c_{k}(t-t_{0})^{k} \tag{8.4.9}
$$
and c_k = y_k/k!. Then the α_i th order derivative can be written directly as
$$
{}_{t_0}^{RL}\mathscr{D}_t^{\alpha_{i}}y(t) = \frac{1}{h^{\alpha_{i}}}W^{\alpha_{i}}v + \sum_{k=0}^{p}c_{k}\frac{\Gamma(k+1)}{\Gamma(k+1-\alpha_{i})}(t-t_{0})^{k-\alpha_{i}}. \tag{8.4.10}
$$
For the time vector t = [0, h, 2h, . . . , mh], the following nonlinear algebraic equa-
tion can be composed:
$$
f(t, B_{1}v, B_{2}v, \ldots, B_{n}v) = 0. \tag{8.4.12}
$$
The equation can be solved with the MATLAB solver fsolve(). The following
MATLAB code is written to implement the algorithm, where Bi and matrix du can be
passed to the equation involving f as additional parameters; Bi are described as three-
dimensional arrays:
function [y,t]=nlfode_mat(f,alpha,y0,tn,h,p,yp)
y0=y0(:); alfn=ceil(alpha); m=ceil(tn/h)+1;
Example 8.13. Solve the fractional-order differential equation in Example 8.11 with
matrix methods.
Solutions. The implicit differential equation can be written from the given explicit
one
(du(:,3)+B(:,:,3)*v).^2;
tic, [y1,t]=nlfode_mat(f,alpha,y0,tn,h,1); toc
tic, [y2,t]=nlfode_mat(f,alpha,y0,tn,h,2); toc
max(abs(y1-exp(-t))), max(abs(y2-exp(-t)))
8.5 Exercises
8.1 In Section 8.2, the analytical solutions of step and impulse responses for com-
mensurate-order systems are provided. If the input signal is a ramp function,
u(t) = t, derive the analytical solutions of the responses.
8.2 Consider the single-term differential equation
1 0.3
{ Γ(1.3) t , 0 ⩽ t ⩽ 1,
{
{
C 0.7
0 Dt y(t) ={ 1 2
{
{ t 0.3 − (t − 1)1.3 , t > 1,
{ Γ(1.3) Γ(2.3)
where the initial value is y(0) = 0, and find the numerical solution of the
fractional-order differential equation for t ∈ (0, 10). The analytical solution of
the fractional-order differential equation is given below:
$$
y(t) = \begin{cases} t, & 0\leqslant t\leqslant 1,\\ t-(t-1)^{2}, & t>1.\end{cases}
$$
8.3 Solve the following Caputo differential equation:
$$
y'''(t) + {}_{0}^{C}\mathscr{D}_t^{2.5}y(t) + y''(t) + 4y'(t) + {}_{0}^{C}\mathscr{D}_t^{0.5}y(t) + 4y(t) = 6\cos t,
$$
where the initial values are y(0) = 1, y'(0) = 1, and y''(0) = −1. Evaluate the accuracy and speed of the numerical solution. The analytical solution of the problem is y(t) = √2 sin(t + π/4).
find its numerical solution. If the orders are approximated, for instance, 2.2 is
regarded as 2, while 0.9 is approximated by 1, solve the integer-order differential
equation and observe the differences.
8.5 Solve the linear fractional-order differential equation with zero initial values:
$$
\mathscr{D}^{1.2}y(t) + 5\mathscr{D}^{0.9}y(t) + 9\mathscr{D}^{0.6}y(t) + 7\mathscr{D}^{0.3}y(t) + 2y(t) = u(t)
$$
where the time interval is t ∈ (0, 1), with initial values y(0) = 0 and y'(0) = 0. It is known that the analytical solution of the equation is y(t) = t⁸ − 3t^{4+α/2} + (9/4)t^α. If α = 1.25, assess the speed and accuracy of the MATLAB solver.
8.7 Consider the following nonlinear Caputo equation:
$$
{}_{0}^{C}\mathscr{D}_t^{\sqrt{2}}y(t) = -t^{1.5-\sqrt{2}}\frac{E_{1,3-\sqrt{2}}(-t)}{E_{1,1.5}(-t)}\,\mathrm{e}^{t}\,y(t)\,{}_{0}^{C}\mathscr{D}_t^{0.5}y(t) + \mathrm{e}^{-2t} - \left[y'(t)\right]^{2},
$$
with given initial values y(0) = 1 and y'(0) = −1. If the analytical solution is y(t) = e^{−t}, find the numerical solution of the Caputo equation for t ∈ (0, 2), and assess the precision and elapsed time.
8.8 Solve the following nonlinear fractional-order differential equation with zero initial values, where f(t) = 2t + 2t^{1.545}/Γ(2.545). If the equation is a Caputo one, and y(0) = −1 and y'(0) = 1, solve the differential equation again:
$$
\mathscr{D}^{2}x(t) + \mathscr{D}^{1.455}x(t) + \left[\mathscr{D}^{0.555}x(t)\right]^{2} + x^{3}(t) = f(t).
$$
9 Block diagram solutions of ordinary differential
equations
Initial value problems of various differential equations were systematically introduced in the previous chapters. The solution methods there can be globally referred to as command-driven methods: the entire differential equation is expressed by anonymous or MATLAB functions, and then appropriate solvers are called to find the numerical solutions of the differential equations.
From the numerical analysis viewpoint, the solution pattern like this is perfect.
A direct command can be used to solve the whole differential equation. From the
application viewpoint, there are limitations in the command-driven methods. Since
for large-scale systems the entire system is composed of many independent parts, each
can be by itself a differential equation or implicit differential equation, or composed of
other models. Therefore, to convert such a complicated system into a single standard
model is a very tedious, if not impossible, job. For instance, in an aeroplane control
problem, if a command-driven method is to be used, the internal system may have
mechanical, electrical, electronic components, and may have other nonlinearities and
discrete controllers. A set of state variables must be selected to form a state vector,
and one has to write a set of independent first-order explicit or implicit differential
equations so that the solution process can be invoked. If in the solution process, a com-
ponent is replaced by another one, the state variables should be selected again, and
differential equations must be rewritten. This is an impossible task. A slight careless-
ness may lead to wrong simulation results.
A more practical way to solve this problem is the use of a block diagram based
solution pattern. A large-scale system can be divided into several independent sub-
systems, and these subsystems are interconnected in certain ways. Therefore when
replacing components, a simulation subsystem can be created, without affecting any
other subsystems.
Simulink is an ideal tool for block diagram based modeling and simulation. Details on Simulink will be further presented in Volume VI. In this chapter, Section 9.1 introduces Simulink and provides some essential knowledge about it, which can be regarded as fundamental background for the later materials in this chapter. In Section 9.2, the Simulink modeling methodology for differential equations is proposed. Examples are used to demonstrate how to describe differential equations graphically in Simulink, and how to set the simulation control parameters. In Section 9.3, examples of a variety of differential equations, such
as differential-algebraic equations, delay differential equations, stochastic differential
equations, switched differential equations, and so on, are illustrated. In Section 9.4,
modeling and simulation of fractional-order differential equations are introduced.
https://doi.org/10.1515/9783110675252-009
The Simulink environment was first released by MathWorks in 1990. The original name was SimuLAB; in 1992, it was renamed Simulink, whose two key words – simu and link – stand respectively for simulation and modeling. The environment can be used to represent complicated systems in a block diagram modeling format. Various differential equations can be described graphically with such a powerful tool.
Of course, differential equation modeling and solution is only a small part of Simulink modeling. Simulink also provides a significant number of blocks used in control and many other systems. The multidomain physical modeling facilities enable researchers to build mechanical, motor, power electronic, and communication system simulation models using building block patterns. The Simulink environment
to create simulation models using building block patterns. The Simulink environment
is very powerful. With Simulink and its blocksets, differential equations of arbitrary
complexity can be solved. For the relevant materials the readers are referred to [68] or
[77]. In the latter book, only differential equation related systems are discussed.
In this section, the blocks commonly used in differential equation modeling are briefly introduced, and then an example is used to demonstrate how to create graphical models and solve differential equations with Simulink.
(3) Commonly used input blocks include a Constant block to generate a constant sig-
nal, Sine block to generate a sinusoidal signal, and Step block to generate a step
input signal.
(4) Integrator block can be used to define key signals: its input can be regarded as the first-order derivative of its output signal. If in differential equation modeling the input of the ith integrator is defined as x′ᵢ(t), then its output is xᵢ(t). If a first-order differential equation is to be described, the key signals can be defined by the integrators. For high-order linear differential equations, the Transfer Fcn block can also be used.
(5) Transport Delay block can be used to find the signal value at t − τ. The block is
useful in modeling delay differential equations.
(6) Gain blocks. The related blocks such as Gain, Matrix Gain, and Sliding Gain are useful gain blocks, and the names are self-explanatory. The Gain block is used to amplify signals: if the input signal is u, the output is Ku. A dialog box can be used to change the gain into a matrix gain. The Sliding Gain block allows the user to adjust the gain by dragging a scroll bar with the mouse.
(7) Math computing blocks. Dedicated blocks for arithmetic computations on signals
are provided. Also logical and comparison operation blocks can be used. An alge-
braic equation solver block is provided, and it is useful in modeling differential-
algebraic equations or implicit differential equations.
(8) Math function blocks. Nonlinear function evaluations of signals are supported,
such as trigonometric functions and logarithmic functions.
(9) Signal vectorization blocks. Mux block is used to compose several signals into a
vectorized one, while DeMux block separates a vector signal into individual ones.
If one observes closely the ordinary differential equation models, it is easily seen
that differential equations are used to describe the relationship among time t, un-
known signals xi (t), and their derivatives. Sometimes, even high-order derivatives are
involved. These signals are key ones in differential equation modeling.
Signal t can be provided by the Clock block in the Sources group or on the com-
monly used group in Figure 9.2.
There are two ways of defining unknown signals and their derivatives of arbitrary order. One way is to use n Derivative blocks connected in series, as shown in Figure 9.3. Assuming that the signal flowing into the first block is y(t), the signals y′(t), y″(t), . . . , y⁽ⁿ⁾(t) can then be defined. This method seems simple and straightforward, yet there
are two problems. The first is that numerical differentiation algorithms are not quite
reliable. Therefore high-order differentiation may bring in numerical troubles, and the
numerical stability is questionable; the other is that there is no place to specify the
initial values. Therefore this method is not further considered.
With the key signals, other blocks can be used to manipulate these signals, and finally,
the differential equation model can be established.
In practical differential equation modeling process, the following actions are usually
taken to draw block diagrams:
(1) If two signals are equal, they can be joined. This is usually used to close the simulation loop.
(2) Assuming that signal u(t) is known and a Gain block is connected to it, by double-
clicking the block to enter the value of k, the signal ku(t) can be constructed.
(3) If a nonlinear action on signal v(t) is expected, for instance, the nonlinear signal is
(v(t) + v3 (t)/3) sin v(t), then signal v(t) can be fed into block Fcn. Double-clicking
the block, one can fill (u+u^3/3)*sin(u). Note that the input signal to the Fcn
block is denoted as u. Through this modeling method, the output of the block is
the expected nonlinear function.
(4) Several signals can be fed into the Mux block, and the output of this block is the
vector composed of these inputs. If the vector signal is fed into Demux block, it is
decomposed into scalar signals.
(5) If the u(t) signal is known, and the u(t − d) delayed signal is expected, u(t) can
be fed into Transport Delay block. Double-clicking the block, parameter d can be
filled to the block. The output signal is the expected delayed signal.
(6) If a signal is to be observed, it can be connected to a Scope block or Out block, so
that it can be displayed or returned.
Example 9.1. Consider Lorenz equations in Example 3.7. For convenience, the differ-
ential equations are given again:
where we let β = 8/3, ρ = 10 and σ = 28. The initial values are x1 (0) = x2 (0) = 0, and
x3 (0) = ϵ. Establish a Simulink model.
Solutions. It can be seen from the differential equations that there are six key signals (three pairs) to be defined first: x₁(t) and x′₁(t), x₂(t) and x′₂(t), as well as x₃(t) and x′₃(t). Three integrators are used to define these key signals, as shown in Figure 9.5. Since only first-order derivatives are expected, there is no need to build integrator chains: one integrator can be used to define a signal and its derivative. Besides, by double-clicking an integrator, the initial value of its signal can be specified in the dialog box.
Let us see the first differential equation. There is a term 3x1 (t)/8, which can be ex-
pressed by the x1 (t) signal followed by a Gain block. Double-clicking the block, the
parameter 3/8 can be specified in the dialog box. The output of the block is the ex-
pected 3x1 (t)/8. If x2 (t)x3 (t) signal is also expected, the two signals x2 (t) and x3 (t) can
be fed into a Product block. Then the output of the block is the expected x2 (t)x3 (t).
The two signals constructed above can be added up by feeding them into a Sum block. Note that before adding them, the sign of 3x₁(t)/8 should be altered: double-clicking the Sum block, the signs of the two signals can be edited, and in this case they should be changed to -+. The output of the Sum block is x′₁(t). Since the input terminal of the first integrator is also x′₁(t), the two can be connected directly to form a closed loop. The first equation can be described graphically in this manner, as shown in Figure 9.6.
Similarly, the second and third differential equations can be described graphically,
so that the entire system of differential equations is represented in the model. The
facilities in Simulink can be used to find the numerical solution of the differential
equations.
The modeling process presented here is far too complicated for differential
equations. It is not recommended, and it is only used to illustrate the ideas in Simulink
modeling. For differential equations, a better modeling method will be illustrated
next.
Example 9.2. Compared with Example 3.7, it can be seen that the modeling method above is tedious and error-prone. Is there a better way for differential equations? Establish a Simulink model.
Solutions. A vector integrator is needed in the modeling process. Simply use the Integrator block, double-click it, and specify a vector as its initial values; the block is then automatically changed into a vector integrator. For this particular example, the initial vector [0,0,1e-10] is specified, so that the integrator handles a 3×1 vector, whose output is defined as x(t) = [x₁(t), x₂(t), x₃(t)]ᵀ, and the input then becomes x′(t). The relationship between them can be described by a MATLAB Function block. A MATLAB function as follows can be written:
function dy=lorenz_mod(x)
dy=[-x(1)+x(2)*x(3);
-10*x(2)+10*x(3); -x(1)*x(2)+28*x(2)-x(3)];
Filling in the function name in the block, the Simulink model for the given differential equations can be established as shown in Figure 9.7. Modeling in this way is as simple as using the ode45() solver.
With the Simulink model, clicking the Run icon in the toolbar starts a simulation, and a numerical solution of the equations can be found directly. An alternative way to solve the differential equations is to call the function sim(), with the syntax
[t,x,y]=sim(mod_name,tspan,options)
where the format of t and y is the same as that in function ode45(). The argument x
returns the time response data of the state variables. The actual states x are assigned
internally in Simulink model. The output vector y returns the signals connected into
the outport Out in the model. If the user is interested in a particular signal, the signal
can be connected to an Out block.
In the input arguments, mod_name is the Simulink model name. The suffix of the
model name is slx, and in the earlier versions, the suffix was mdl. The difference is
that the latter is in an ASCII file. The other two input arguments are the same as in
ode45().
In normal cases, it is not necessary to provide options, since the control options
can be assigned in a visual manner in Simulink modeling. The argument tspan can
also be omitted, since it can be assigned in the Simulink model.
Although a simple simulation method was demonstrated for the Simulink model, it is
not recommended in this book, since it was pointed out that the validation process is
a very important issue in differential equation solutions. Control parameters should
be set by the user to ensure that the solution is correct.
The control parameters can be specified with dialog boxes. The user may click the
Simulation → Model Configuration Parameters menu item, or click the button in
the toolbar, such that the dialog box in Figure 9.8 is opened. It can be seen that in the Start time and Stop time edit boxes, the start and stop time of the simulation can be specified. Besides, the Type listbox can be used to select fixed- or variable-step algorithms. Normally, the Variable-step option is recommended, so as to obtain high-efficiency simulation results.
If the Solvers listbox is opened, the simulation algorithms can be selected from it. For
average users, the automatic selection method is recommended. Simulink will select
an algorithm automatically based on the model. If the listbox is opened, the options
ode45, ode15s, as well as others, can be selected.
If the item Solver details is selected, the dialog box is expanded as shown in
Figure 9.9. Options like Relative tolerance, Absolute tolerance, and others are allowed.
After the simulation, the variables tout and yout are by default returned to the MATLAB workspace. The former is the time vector and the latter contains the output
signals. If there is more than one output block used, the output signals are returned
in each column of yout matrix. Of course, this implies that Format is set to Array. If it
is set to the default Dataset, the handling becomes complicated, and is not presented
here.
Since by default States is not checked, the variable xout is not returned after the
simulation process.
Example 9.3. Establish a Simulink model to describe and solve the Lorenz equation
in Example 3.7.
Solutions. For the Simulink model c9mlor3.slx of Lorenz equation in Example 9.2,
two ways can be used to invoke the solution process. For instance, clicking the Run button, two variables are created in the MATLAB workspace, under the names tout and yout. The following commands can be used to draw the simulation results. The results are the same as in Figure 3.5.
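For instance, a minimal plotting sketch is

>> plot(tout,yout)   % draw the simulated output signals against time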
Another simulation method is to employ the following statements; the same results can be found. Note that the curves of y are drawn, not those of x: the arguments x are the states internally assigned by Simulink, and they may not be the same as those expected.
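A hedged sketch of such statements, with an assumed simulation time span, is

>> [t,x,y]=sim('c9mlor3',[0,100]);
plot(t,y)   % draw the output signals y, not the internal states x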
For normal ordinary differential equation sets, there is usually no need to convert them
manually into first-order explicit differential equations, since several sets of integrator
chains are adequate in constructing the unknown functions and their derivatives. Sev-
eral examples are given here to illustrate the modeling process of differential equation
sets.
Example 9.4. Use Simulink to solve the high-order differential equation in Example 4.7. For convenience of presentation, the differential equation is given again

y(t)y^{(6)}(t) + 6y'(t)y^{(5)}(t) + 15y''(t)y^{(4)}(t) + 10[y'''(t)]^2 = 2\sin^3 2t.
Solutions. To set up the simulation model, the explicit form of the highest-order
derivative of y(t) is written. In fact, in Example 4.7 this explicit form was already derived:

y^{(6)}(t) = \frac{1}{y(t)}\left[-6y'(t)y^{(5)}(t) - 15y''(t)y^{(4)}(t) - 10[y'''(t)]^2 + 2\sin^3 2t\right].

Of course, with the method in Example 4.7, converting the equation into first-order
explicit differential equations is one modeling method. A MATLAB function is written
for it. Compared with the anonymous function in Example 4.7, the input argument is
only x, rather than the t and x in the anonymous function.
function dx=c9mode1a(x)
% x(1)..x(6) are y(t) and its first five derivatives;
% the time t is assumed to be supplied as the seventh input channel x(7)
dx=[x(2:6);
(-6*x(2)*x(6)-15*x(3)*x(5)-10*x(4)^2+2*sin(2*x(7))^3)/x(1)];
With this function, the differential equation can be modeled in the framework in Fig-
ure 9.7.
A direct modeling method is demonstrated here. Since the highest-order deriva-
tive is 6, therefore 6 integrators connected serially can be used to define the key sig-
nals. With them, the expressions on the left-hand side of the equal sign can be con-
structed then divided by y(t). The result is the same as the y(6) (t) signal, so both can
be connected, and the closed-loop simulation model can be established as shown in
Figure 9.11.
In the model given above, some modeling details were not presented. For this particular
system, only y(0) is nonzero, while the other initial values are zero. How can these be set? We
should double-click the integrators and fill in the values in the parameter dialog boxes.
The stopping time and error tolerances are assigned in the corresponding dialog box. If
a simulation process is initiated, the same result is obtained as in Example 4.7.
Solutions. It can be seen from the model constructed above that the layout is rather
messy, with too many crossing lines. In fact, in Simulink modeling, if crossing lines show
no solid dot at the crossing point, they are not connected, so one need not worry about
faulty connections.
If neat and concise models are expected, Mux blocks should be used to construct
vectorized signals. Based on the vectorized operations, the new model in Figure 9.12
is constructed. Note that when constructing vectorized signals, the time signal should
be included. For instance, in this example, the time signal is used as the seventh
input. If the simulation process is invoked, the result obtained is exactly the same
as in Example 4.7.
Example 9.6. Construct a Simulink model for the Apollo equations in Example 4.12.
Here four Fcn blocks are used. The two on the right are already displayed, while for
the two on the left, due to the space restrictions, their contents are listed below:
-2*u(2)+u(3)-(1-1/82.45)*u(3)/u(5)^3-1/82.45*u(3)/u(6)^3
2*u(4)+u(1)-(1-1/82.45)*(u(1)+1/82.45)/u(5)^3-...
1/82.45*(u(1)-1+1/82.45)/u(6)^3
For complicated differential equations, the modeling methods introduced thus far are
rather tedious. State space equations should be used instead, together with the framework
in Example 9.2.
Since block Clock can be used in generating the time signal t, time-varying differ-
ential equations can also be modeled with Simulink. Examples are given next to show
the modeling and simulation process of time-varying differential equations.
Example 9.7. Model the time-varying differential equation in Example 4.5 with
Simulink. For convenience, the mathematical model is given again here, where the
independent variable is changed from x to t:

t^5 y'''(t) = 2\bigl(t y'(t) - 2y(t)\bigr),\quad y(1) = 1,\ y'(1) = 0.5,\ y''(1) = -1,\ t \in (1, \pi).

Solutions. From the given differential equation model, the explicit form can be written
directly as follows:

y'''(t) = \frac{2}{t^5}\bigl(t y'(t) - 2y(t)\bigr).
Since the highest order here is three, three integrators in a chain can be used to
define the key signals. A vectorized signal composed of the key signals is then defined.
For convenience, the fourth channel is the time signal t. The simulation system model
shown in Figure 9.14 is constructed. Performing simulation to this model, we may find
the numerical solution of the original differential equations.
In Example 4.5, an attempt was made to perform simulation from the starting time
t0 = 1 to the stopping time tn = 0.2. Unfortunately, in the current Simulink mechanism,
the stopping time is not allowed to be smaller than the initial time, so the solution in this
interval cannot be obtained in this way.
In the solution process, if there is no solution found for a long period of time,
the differential equations are probably stiff. Stiff solvers such as ode15s should be
assigned in the dialog box in Figure 9.8. The stiff solvers can then be used for solving
the differential equations.
The initial values are x1 (0) = 0.8 and x2 (0) = x3 (0) = 0.1.
How can we describe algebraic equations? A Fcn block can be used to describe
one side of the equation, for this particular problem, x1(t) + x2(t) + x3(t) − 1,
and feed it into the Algebraic Constraint block. The output of that block is the signal
which forces its input expression to zero, and it can be connected to any suitable port. It
happens that the Mux block has one idle port, therefore the signal can be connected to it,
and the signal is then x3(t). In this way, the simulation model of the entire differential-
algebraic equations is established. Performing simulation of the model, the numerical
solution is exactly the same as that obtained in Example 5.14.
With the following statements, a warning message “Warning: Block diagram
‘c9mdae1’ contains 1 algebraic loop(s)” is received. This is a normal phenomenon,
since the differential-algebraic equations themselves contain an algebraic loop, so the
message can be neglected. The elapsed time is 0.24 seconds, with the number of computed
points being 1 544. The efficiency is higher than in Example 5.14.
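The statements referred to above are of the following form (a sketch; the simulation interval is assumed to be stored in the model, and the algebraic-loop warning is expected):

>> tic, [t,x,y]=sim('c9mdae1'); toc   % the algebraic-loop warning appears here
   plot(t,y), length(t)               % solution curves and number of computed points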
Example 9.9. Solve the differential-algebraic equations in Example 9.8 again, without
using the constraint block.
Solutions. For this particular problem, it can be seen that x3 (t) = 1 − x1 (t) − x2 (t).
Therefore, from the given signals x1 (t) and x2 (t), the signal x3 (t) can be directly con-
structed. The Mux block can be used to describe the x(t) vector, and the simulation
model is shown in Figure 9.16. The simulation results obtained are the same.
It should be noted that the algebraic loop in the simulation model is eliminated. It can
be seen that the elapsed time and number of points are almost the same as previously.
The key to modeling switched differential equations in Simulink is the use of
switch blocks. Switch blocks are located in the Signal Routing group, where the Switch and
Multiport Switch blocks can be used directly. Here examples are used to demonstrate
the modeling and simulation of switched differential equations.
Example 9.10. Build the switched differential equations in Example 5.25 using
Simulink.
Solutions. A vectorized integrator can be used to define the key signals x(t) and x'(t).
The Fcn block is used to generate the x1(t)x2(t) signal, which is fed into the control port
of the switch. If the control signal is positive, the branch with A2x(t) is connected,
while when the control signal is negative, the other branch is connected. The switched
system is constructed as shown in Figure 9.17. Since zero-crossings are detected auto-
matically in Simulink models, the model can be normally executed, such that correct
numerical solutions are found. The results are the same as those obtained in Exam-
ple 5.25.
In the modeling process, a trick is used. That is, matrices A1 and A2 are automatically
loaded into MATLAB workspace with the model. Specifically, the menu item File →
Model Properties → Model Properties is chosen. Selecting Callback, the dialog box
shown in Figure 9.18 is displayed. In the PreLoadFcn option, the matrices A1 and A2
are assigned. In this case, every time the Simulink model is opened, the parameters
are loaded automatically.
Example 9.11. Consider the nonlinear switched differential equations in Example 5.28.
Use Simulink to simulate the system and draw the control curve.
Solutions. Low-level modeling method is used directly with Simulink for the switched
differential equations, as shown in Figure 9.19. In the model, three Interpreted MATLAB
Function blocks are written to compute respectively y(t), the dynamical model of the
wheeled robot and the switching laws. The contents of the three blocks are:
function y=c9mswi2a(x)
c=cos(x(3)); s=sin(x(3));
y=[x(3); x(1)*c+x(2)*s; x(1)*s-x(2)*c];
function dx=c9mswi2b(x)
u=x(1:2); x=x(3:5);
dx=[u(1)*cos(x(3)); u(1)*sin(x(3)); u(2)];
function u=c9mswi2c(x)
y=x(4:6); x=x(1:3);
if abs(x(3))>norm(x)/2
u=[-4*y(2)-6*y(3)/y(1)-y(3)*y(1); -y(1)];
else
sgn=-1; if y(2)*y(3)>=0, sgn=1; end
u=[-y(2)-y(3)*sgn; -sgn];
end
Executing the model, the numerical solution of the switched differential equa-
tions is obtained. The phase plane trajectory is the same as in Figure 5.22.
Besides, the two control signals can be drawn with

>> plot(t,y(:,[4,5]))

as shown in Figure 9.20. It can be seen that, from a certain point on, the control law
switches all the time, so it is not a practical control strategy.
Example 9.12. Solve again the discontinuous differential equations in Example 5.29
in Simulink.
In the Continuous group in Simulink, various delay blocks are provided, including the
Transport Delay, Variable Transport Delay, and Variable Time Delay blocks.
If a signal is fed into a delay block, the output of the block can be regarded as the delayed
signal. In this section, examples are used to demonstrate the modeling of various delay
differential equations.
Example 9.13. Set up the simulation model for the delay differential equations in Ex-
ample 6.2:
Solutions. It can be seen from the second equation that the transfer function 4/(s^2 +
3s + 2) can be used to describe the relationship between x(t) and y(t). An integrator
can be used to define the key signals x(t) and x'(t). The Simulink model in Figure 9.22
can then be constructed. Executing the model, the results are the same as in Example 6.2.
Example 9.14. Construct the variable delay differential equation in Example 6.7, with
zero initial values:
Solutions. A variable delay signal can be implemented with the Variable Transport Delay
block. This block has two input channels: the first is the original signal, while
the other describes the variable delay, which can be computed from the time signal t; the
output is the variable delay signal. In this example, vectorized signals are still used, including
three state signals, with the 4th, 5th, and 6th channels being, respectively, the constant delay
signal x2(t − 0.8), the variable delay signal x1(t − 0.2|sin t|), and the time t. Therefore the
following MATLAB function can be written to describe the right-hand side of the equations:
function dx=c9mdde2a(x)
dx=[-2*x(2)-3*x(5);
-0.05*x(1)*x(3)-2*x(4)+2;
0.3*x(1)*x(2)*x(3)+cos(x(1)*x(2))+2*sin(0.1*x(6)^2)];
The output signal is then x'(t), which can be linked to the input terminal of the vec-
torized integrator, to form the closed loop. The simulation model in Figure 9.23 can then be
constructed. The simulation results are the same as in Example 6.7.
\[
A_1 = \begin{bmatrix} -13 & 3 & -3 \\ 106 & -116 & 62 \\ 207 & -207 & 113 \end{bmatrix},\quad
A_2 = \begin{bmatrix} 0.02 & 0 & 0 \\ 0 & 0.03 & 0 \\ 0 & 0 & 0.04 \end{bmatrix},\quad
B = \begin{bmatrix} 0 \\ 1 \\ 2 \end{bmatrix}.
\]
Solutions. It is not easy to solve neutral-type delay differential equations, even if the
solver ddensd() is used. Simulink modeling is relatively simple. If a vectorized inte-
grator block is used, the key signals are x(t) and x (t). These signals can be used to
build the delay signals, then construct the simulation model as shown in Figure 9.24.
The solutions obtained for this model are the same as those in Example 6.15.
It is a pity that, compared with the delay differential equation solver ddesd(), delay
terms of the form xi(αt) cannot be handled in Simulink.
If the delay differential equations have nonzero initial values, a similar modeling technique
can be used directly. If, however, the state variables are given functions for t ⩽ t0,
the previous method cannot be used in the modeling, and switch blocks should be used to
set the functions for t ⩽ 0. In this section, examples are used to demonstrate these
problems.
Example 9.16. Consider the delay differential equations in Example 6.8. Since the his-
tory function is nonzero when t ⩽ 0, direct modeling in Simulink is not simple. Use
Simulink to represent the history function, and perform simulation again.
Solutions. The Simulink model in Figure 9.23 is used as the fundamental model.
Based on it, the nonzero history function can be expressed by a switch block, con-
trolled by time t. If time is larger than 0, the output of the integrator is fed into the first
input port of the switch, such that the equation for t > 0 is described. The description
is the same as in the case studied earlier. If t ⩽ 0, the clock block is used to drive an
Interpreted MATLAB Function block to generate nonzero history function, defined as
function x=c9mdde2b(u)
x=[sin(u+1); cos(u); exp(3*u)];
Therefore, when t ⩽ 0, this channel is connected to set the states x(t) as the history
function. With these ideas in mind, the Simulink model is constructed directly as
shown in Figure 9.25. Simulating the system model, the results obtained are the same
as in Figure 6.7.
Example 9.17. Solve in Simulink the neutral-type delay differential equations of Ex-
ample 6.14.
Solutions. It was pointed out in Example 6.14 that function ddensd() can be used
in solving neutral-type delay differential equations, however, the error tolerance
should not be selected too small. Therefore Simulink is used again to solve the
Figure 9.25: Neutral-type delay differential equation with nonzero history functions (file:
c9mdde2x.slx).
differential equations, and see whether more accurate results can be found. From
the given neutral-type differential equations, the simulation model in Figure 9.26 can
be constructed. The following commands can be used to simulate the system, and the
error can also be computed:
>> [t,~,y]=sim('c9mdde4');
z=exp(t)+(t-1).*exp(t-1).*(t>=1 & t<2)+...
(exp(t-1)+(t-2).*(t+2*exp(1)).*exp(t-2)/2).*(t>=2);
norm(z-y), plot(t,y,t,z) % compare with exact values
Figure 9.26: Neutral-type delay differential equations with nonzero history functions (file:
c9mdde4.slx).
Although very tight error tolerances are set in the Simulink model, the norm of the error
found is 0.0014, which is much larger than that in Example 6.14. The error can even be
distinguished in the plot.
used to find the discrete model, so that pseudorandom numbers can be used to drive
the discrete system to find the simulation results. Statistical analysis can be performed
for the simulation results. Unfortunately, this method cannot be used arbitrarily in the
case of nonlinear stochastic differential equations. Generalized simulation methods
for nonlinear stochastic differential equations are expected.
A block Band-Limited White Noise is provided in MATLAB. With such a block,
pseudorandom numbers can be generated to drive the stochastic differential equa-
tions. Fixed-step simulation algorithm should be selected to solve them. The simula-
tion method like this is no longer restricted to linear stochastic differential equations.
An example is given here to solve linear stochastic differential equations, and nonlinear
stochastic differential equations are also handled.
Example 9.18. Solve again the linear stochastic differential equations in Example 5.31
using Simulink:
y^{(4)}(t) + 10y'''(t) + 35y''(t) + 50y'(t) + 24y(t) = \xi'''(t) + 7\xi''(t) + 24\xi'(t) + 24\xi(t).
Solutions. From the given differential equation, the transfer function model can be
established as

G(s) = \frac{s^3 + 7s^2 + 24s + 24}{s^4 + 10s^3 + 35s^2 + 50s + 24}.
With the transfer function model, the simulation model in Figure 9.27 can be es-
tablished. The step-size can be set to T = 0.1, with fixed-step simulation algorithm. The
following commands can be used to solve the differential equations, and the results
are virtually the same as in Example 5.31.
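A sketch of such commands is shown below; the model file name c9mrand1.slx is an assumption, since only Figure 9.27 is referenced in the text.

>> [t,~,y]=sim('c9mrand1');   % assumed model name; see Figure 9.27
   plot(t,y)                  % compare with the results of Example 5.31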
Example 9.19. Consider the block diagram of a nonlinear system shown in Figure 9.28,
where the linear part is described by the transfer function

G(s) = \frac{s^3 + 7s^2 + 24s + 24}{s^4 + 10s^3 + 35s^2 + 50s + 24}.
If the deterministic signal is given by d(t) = 0, draw the probability density func-
tion of signal e(t).
Solutions. With the Band-Limited White Noise block to simulate the white noise input
signal, and selecting fixed-step simulation methods, it is not hard to construct the
simulation model, as shown in Figure 9.29.
Figure 9.29: Simulation model of nonlinear stochastic differential equations (file: c9mrand2.slx).
Simulating the stochastic differential equations, the error signal e(t) can be found. The
probability density function can be obtained in histograms as shown in Figure 9.30.
The approximation block for a fractional-order operator is introduced first in this sec-
tion. Then the modeling techniques under Riemann–Liouville and Caputo definitions
are introduced. Examples are also used to demonstrate fractional-order implicit dif-
ferential equations and delay differential equations.
Theorem 9.1. Assuming that the frequency range of interest is (ωb, ωh), an Nth order
continuous filter can be designed as

\[
G_f(s) = K \prod_{k=1}^{N} \frac{s + \omega_k'}{s + \omega_k},
\tag{9.4.1}
\]

where ω_u = \sqrt{ω_h/ω_b}, and the zeros, poles, and gain are given by ω_k' = ω_b ω_u^{(2k-1-γ)/N}, ω_k = ω_b ω_u^{(2k-1+γ)/N}, and K = ω_h^γ.
Based on the above algorithm, the following function is written to design the
continuous filters directly. If a signal f(t) is fed into such a filter, the output signal can
be regarded as the Riemann–Liouville derivative signal {}^{RL}D_t^{γ} f(t).
function G=ousta_fod(gam,N,wb,wh)
if round(gam)==gam, G=tf('s')^gam; % for integer orders
else, k=1:N; wu=sqrt(wh/wb); % find intermediate frequency
wkp=wb*wu.^((2*k-1-gam)/N); wk=wb*wu.^((2*k-1+gam)/N); % zeros/poles
G=zpk(-wkp,-wk,wh^gam); G=tf(G); % construct Oustaloup filter
end
The syntax of the function is G1=ousta_fod(γ,N,ωb,ωh), where γ is the fractional
order, which can be positive or negative; N is the filter order; ωb and ωh are, respec-
tively, the user-selected lower and upper bounds of the frequency range. The fitting
outside this interval is neglected. Due to the powerful and effective facilities
in MATLAB, a large frequency range such as (10^{-5}, 10^{5}) can be selected, and the order
can be set large, such as N = 25, so that accurate computation can be achieved in solving
fractional-order differential equations.
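As a usage sketch of the function just listed, a ninth-order filter approximating a 0.5th-order differentiator over (10^{-3}, 10^{3}) rad/s can be designed and examined as follows; the chosen order and frequency range are merely illustrative.

>> G=ousta_fod(0.5,9,1e-3,1e3);   % 9th-order Oustaloup filter for a 0.5th-order derivative
   bode(G)                        % check the frequency-domain fitting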
The author of this book has constructed a fractional-order operator block in
Simulink. The block implements many filters, including Oustaloup filter, to approx-
imate fractional-order differentiator or integrator. The block is contained in FOTF
blockset, with many other blocks,[76, 74] and can be used directly. Double-clicking the
block, a dialog box shown in Figure 9.31 is displayed. The parameters of the filter can
be assigned in the block.
With such a block, if a signal is fed into the block, the output of the block is the
Riemann–Liouville derivative or integral of the input signal. The block cannot be used
directly for Caputo derivatives. The computation of Caputo derivatives will be pre-
sented later.
With the modeling strategy of integer-order differential equations, the unknown func-
tion and its derivatives can be constructed with an integrator chain. Oustaloup filters
can be used to compute fractional-order derivatives. Finally, Simulink models can be
established for fractional-order differential equations. In this section, the modeling of
linear and nonlinear fractional-order differential equations is demonstrated.
Example 9.20. Find the numerical solution of the linear fractional-order differential
equation in Example 8.5. For convenience of presentation, the differential equation is
recalled below:
The differential equation can be described by the Simulink model shown in Fig-
ure 9.32. Simulating the model, the numerical solution of the fractional-order differ-
ential equation can be found.
Solutions. Based on the equation, the explicit form of function y(t) can be written as
The simulation model for describing y(t) is shown in Figure 9.33. It can be seen
from the model that the fractional-order derivative signals can be constructed directly
with the filters. The precision of the simulation results mainly depends upon how well the
filters fit the fractional-order derivatives. The selection of the frequency range and the order
of the filter has a certain impact on the solution accuracy. In Figure 9.34, comparisons
are made for different frequency ranges and orders, and the results are virtually the
same. The one with a slightly larger error is from the fitting with ωb = 0.001, ωh = 1 000
and order n = 5. Therefore, for this example, selecting n = 9 and a larger frequency
range gives virtually identical results.
Oustaloup and other filters are normally designed for handling zero initial value
Riemann–Liouville problems. They cannot be used to generate directly the Caputo deriva-
tives of given signals. Alternative methods should be explored to compute Caputo
derivatives. Two important properties are presented first to describe the relationship
between Caputo and Riemann–Liouville derivatives.
The physical interpretation of the theorem is that the γth order Caputo derivative
of y(t) can be constructed by feeding the integer-order derivative y(⌈γ⌉) (t) through the
(⌈γ⌉ − γ)th order Oustaloup integrator.
It can be seen from the idea of key signals in the integrator chain that the integer-
order derivatives of y(t) are usually constructed by the chain. If one wants to compute
the 2.4th order Caputo derivative, the signal y'''(t) is used and fed into the 0.6th order
Oustaloup integrator. The output of the integrator is the expected {}^{C}D_t^{2.4} y(t).
Theorem 9.3. Taking the (⌈γ⌉ − γ)th order Riemann–Liouville derivative of both sides
of (9.4.3), the following relationship can be established:

\[
{}^{RL}_{t_0}D_t^{\lceil\gamma\rceil-\gamma}\bigl[{}^{C}_{t_0}D_t^{\gamma} y(t)\bigr] = y^{(\lceil\gamma\rceil)}(t).
\tag{9.4.4}
\]
The physical interpretation of the theorem is that, for the γth order Caputo derivative
signal {}^{C}_{t_0}D_t^{γ} y(t), taking the (⌈γ⌉ − γ)th order Riemann–Liouville derivative, the integer-
order derivative y^{(⌈γ⌉)}(t) can be found. That is, an integer-order derivative can be con-
structed by feeding the Caputo derivative to an Oustaloup filter.
With the two theorems and suitable processing with the Oustaloup filters, Caputo
derivatives can be constructed so that fractional-order Caputo differential equations
can be established.
The construction of Caputo derivatives is introduced first. With the method and the-
oretic foundations, the integrator chain can be constructed to define the unknown
signal and its integer-order derivatives, then, with appropriate properties, the Caputo
derivatives can also be defined, such that Caputo differential equations can be ex-
pressed in Simulink. Examples are shown here to model and solve two classes of Ca-
puto differential equations.
Example 9.22. Use Simulink to solve the nonlinear Caputo differential equation in
Example 8.11. For convenience, the differential equation is recalled:
Solutions. Since the highest order in the equation is 1.455, q = 2, so two integrators
are needed to define the key signals y(t), y'(t), and y''(t). The initial values are
assigned accordingly to the two integrators. Now, if the key signal {}^{C}_{0}D_t^{0.555} y(t) is ex-
pected, it can be seen from (9.4.3) that the signal y'(t) should be fed into a 0.445th order
Oustaloup integrator, whose output is the expected {}^{C}_{0}D_t^{0.555} y(t) signal.
With these key signals, the right-hand side of the equation can be expressed with
low-level blocks, that is, the {}^{C}_{0}D_t^{1.455} y(t) signal is constructed. Now, the simulation loop
should be closed. It can be seen from (9.4.4) that, if the above signal is fed into a
0.545th order Oustaloup derivative block, the output of the block is y''(t). This signal
happens to be the starting signal in the integrator chain. Therefore the two signals
defining y''(t) can be joined together to construct the closed loop, as shown in Fig-
ure 9.35. For simplicity, the static nonlinear function evaluation regarding t is de-
scribed in an Interpreted MATLAB Fcn block, defined as
Figure 9.35: Simulink model of the nonlinear fractional-order differential equation (file:
c9mfode3.slx).
The following parameters for the Oustaloup filters can be selected as follows. Simu-
lation results can then be obtained. Compared with the given analytical solution, the
maximum error is 5.5179 × 10−7 , and the elapsed time is 0.63 seconds.
Solutions. The implicit Caputo equation can be converted into the standard form

\[
{}^{C}_{0}D_t^{0.2}y(t)\;{}^{C}_{0}D_t^{1.8}y(t) + {}^{C}_{0}D_t^{0.3}y(t)\;{}^{C}_{0}D_t^{1.7}y(t)
+ \frac{t}{8}\left[E_{1,1.8}\!\left(-\frac{t}{2}\right)E_{1,1.2}\!\left(-\frac{t}{2}\right) + E_{1,1.7}\!\left(-\frac{t}{2}\right)E_{1,1.3}\!\left(-\frac{t}{2}\right)\right] = 0.
\]
Based on the above modeling method, the key signals y(t), y'(t), and y''(t) are
defined first. The Caputo derivatives {}^{C}D^{0.2}y(t), {}^{C}D^{0.3}y(t), {}^{C}D^{1.7}y(t), and {}^{C}D^{1.8}y(t) are then
defined. With these key signals, the left-hand side of the standard form given above
can be constructed and fed into the Algebraic Constraint block. The target is to have
the output signal of the block equal to {}^{C}D^{1.8}y(t). To ensure this relationship, a further
0.2th order derivative is taken to yield y''(t). This signal can be joined with the signal
y''(t) in the integrator chain, to close the loop for the implicit Caputo differential
equation.
equation. The model is as shown in Figure 9.36. Since the initial values are defined in
the integrator chain, it is not necessary to consider them elsewhere. The Oustaloup
filter can be used to express fractional-order derivatives. The contents of Interpreted
MATLAB Fcn block is
With the following Oustaloup filter parameters, the numerical solution of the
implicit differential equation can be found, with a maximum error of 3.8182 × 10^{-5}
and an elapsed time of 151.2 seconds. Compared with the other models, the simulation took
much longer. This is because of the algebraic loop in the model: in each simulation step, an
algebraic equation must be solved. In the solution process, a warning message “Block
diagram contains 1 algebraic loop” is displayed; this is unavoidable and can be
neglected.
where y(0) = 1 and y'(0) = −1. If t < 0, y(t) = y'(t) = 0. Find the numerical solution of
the differential equation.
Solutions. For the c9mfode3.slx model given in Figure 9.35, a slight modification is
made such that delay constant can be appended to the key signals, and the model in
Figure 9.37 can be constructed. Simulating the model and trying different parameters
for the Oustaloup filter, the same output curves are found as seen in Figure 9.38.
It is worth pointing out that the fractional-order delay differential equation studied
here has zero initial values. For nonzero constant history functions, the method in
Example 9.16 can be used to model and simulate the system again.
In fact, it is seen from the previous examples that, in theory, the modeling tech-
nique can be used to describe differential equations of any complexity. If the parame-
ters in Oustaloup filter are chosen well, the numerical solutions of any fractional-order
differential equations can be found effectively.
9.5 Exercises
9.1 Consider a simple linear differential equation
with the initial values y(0) = 1, y'(0) = y''(0) = 1/2, y'''(0) = 0.2. Establish the
simulation model with Simulink, and draw the simulation results.
9.2 Consider the above model. Assume that the given differential equation is
changed into a time-varying one:
The initial values are y(0) = 1, y'(0) = y''(0) = 1/2, and y'''(0) = 0.2. Establish
the simulation model with Simulink, and draw the simulation results.
9.3 In Example 9.6, a Simulink model was created for the Apollo differential equa-
tions. Establish a Simulink model with the method in Example 9.2. Compare the
modeling efficiency and results.
9.4 Consider the differential-algebraic equations in Example 5.14:
Assume that the equation has zero initial values when t ⩽ 0. Establish a Simulink
model to describe this equation. Also solve it with function dde23() and com-
pare the results. Draw the curve of signal y(t).
9.7 Establish a Simulink model for the differential equations in Example 4.18 and
compare the simulation results.
9.8 Assume that a Simulink model is as shown in Figure 9.39. Write down its math-
ematical model.
9.9 Construct a Simulink model for the single-term Caputo differential equations in
Problem 8.6.
9.10 Solve the following nonlinear fractional-order differential equation with zero
initial values, where f(t) = 2t + 2t^{1.545}/Γ(2.545). If the equation is a Caputo one,
and y(0) = −1, y'(0) = 1, solve the equation again:

D^2 x(t) + D^{1.455} x(t) + \bigl[D^{0.555} x(t)\bigr]^2 + x^3(t) = f(t).
with y(0) = y'(0) = 1. Show that the solution of the equation is independent of
the values of the constants A, B, and C.
y'''(t) + {}^{C}_{0}D_t^{2.5} y(t) + y''(t) + 4y'(t) + {}^{C}_{0}D_t^{0.5} y(t) + 4y(t) = 6\cos t.

If the initial values are y(0) = 1, y'(0) = 1, y''(0) = −1, and 0 ⩽ t ⩽ 10, the
analytical solution is y(t) = \sqrt{2}\sin(t + π/4).
(3) Fractional-order nonlinear state space model

\[
\begin{cases}
{}^{C}_{0}D_t^{0.5}x(t) = \dfrac{1}{2\Gamma(1.5)}\Bigl([(y(t)-2)(z(t)-3)]^{1/6} + \sqrt{t}\,\Bigr),\\[2mm]
{}^{C}_{0}D_t^{0.2}y(t) = \Gamma(2.2)\,[x(t)-1],\\[2mm]
{}^{C}_{0}D_t^{0.6}z(t) = \dfrac{\Gamma(2.8)}{\Gamma(2.2)}\,[y(t)-2],
\end{cases}
\tag{9.5.1}
\]

where x(0) = 1, y(0) = 2, and z(0) = 3. The analytical solution of the state
space equations is x(t) = t + 1, y(t) = t^{1.2} + 2, z(t) = t^{1.8} + 3, and 0 ⩽ t ⩽ 5 000.
10 Boundary value problems for ordinary differential equations
The ordinary differential equations studied so far were based on the assumption that
the initial values are known, that is, the x0 vector was known. Solutions were then
found for the state variables at other time instances. These differential equations are
known as initial value problems. In practice, some of the state variables may be known
at time t = t0, and some at t = tn. Such problems are the so-called boundary value
problems. Solvers such as ode45() cannot be used to solve boundary value problems
directly. In this chapter, we explore how to solve boundary value problems for differ-
ential equations.
In Section 10.1, the general form of boundary value problems is proposed. Nec-
essary interpretations and comments are made on the mathematical forms. In Sec-
tion 10.2, three low-level algorithms are presented to solve two-point problems of
second-order differential equations. These may be considered as the basis for general
purpose solvers for the description and solution of boundary value problems. If
the readers are interested only in finding the numerical solutions of boundary value
problems using MATLAB, this section can be skipped and the subsequent materials
visited directly.
In Section 10.3, a general purpose solver for two-point boundary value problems
is discussed. Various differential equations are solved with this solver. Complicated
boundary value problems are also studied, including semi-infinite interval boundary
value problems.
For some specific problems, the solver provided in Section 10.3 is limited. Some
problems cannot be solved with it. In Section 10.4, an optimization based solution
pattern is presented. Some of the problems which cannot be solved with other meth-
ods are considered, including the boundary value problems of implicit differential
equations, delay differential equations, and fractional-order differential equations.
Simulink based methods are often adopted, such that the boundary value problems
of differential equations of any complexity can be solved. These solution methods do
not exist in any other literature. With such an idea, boundary value problems in even
more complicated cases can be solved.
https://doi.org/10.1515/9783110675252-010
Definition 10.1. The first-order explicit differential equations with undetermined co-
efficients are given by
where y(t) is the state variable vector while θ is a vector consisting of all the undeter-
mined coefficients. The boundary conditions of the equation are
It can be seen from the formulas that, compared with initial value problems, some
of the initial values of the states and some of the terminal values satisfy the algebraic
equations in (10.1.2). These equations can be given in simple as well as complicated forms.
The problem studied in this chapter is to find the numerical solutions from
the given differential equations and boundary value conditions.
Although the mathematical form in (10.1.1) is introduced, it does not imply that the
differential equations to be studied can only be described in explicit form. In this
chapter, boundary value problems of other forms, such as implicit differential equations,
delay differential equations, and fractional-order differential equations, are also con-
sidered. Differential equations which cannot be described by (10.1.1) will also
be discussed.
From the given values at these two points, the undetermined constants C1 and C2 can
still be determined. The differential equation problem in this case is changed into
a boundary value problem. For second-order nonlinear differential equations, if the
boundary conditions are known, the numerical solution of the differential equations
may or may not be uniquely determined.
Note that (10.2.1) is a special form of (10.1.2). This type of boundary value problems
is considered in the section.
The idea of solving a boundary value problem is that, if we assume that the value
of ŷ'(a) is also known, functions like ode45() can be used to solve the problem. The
resulting solution ŷ(b) will have an error compared with the given y(b). Using the error
information, the initial value ŷ'(a) can be repeatedly modified, until a consistent solution
is found, such that |ŷ(b) − y(b)| < ε. When the consistent initial value ŷ'(a) is found, it
can be used to find directly the solutions of the original boundary value problem. Methods
of this type are also known as shooting methods. In this section, different shooting
methods are introduced so as to illustrate the ideas and methods of solving boundary
value problems for second-order differential equations.
First consider a simple problem of finding the numerical solutions of linear time-
varying second-order differential equations described as boundary value problems.
Generally speaking, second-order linear time-varying differential equations have no
analytical solutions. Numerical solutions are the only way to study such systems.
where p(t), q(t), and f (t) are all given functions. The boundary conditions studied in
this section are given by (10.2.1).
The basic idea of a shooting method is to find the corresponding initial values of
y(a) and y'(a), such that the boundary values in (10.1.2) are satisfied. Then the initial
value problem solvers can be used to solve the boundary value problems.
The shooting method for linear differential equations can be summarized by the
following procedures:
(1) Find the final value y1(b) of the following homogeneous differential equation:

y1''(t) + p(t)y1'(t) + q(t)y1(t) = 0,   y1(a) = 1,  y1'(a) = 0.   (10.2.3)
(2) Find the final value y2(b) from the following initial value problem of a homoge-
neous differential equation:

y2''(t) + p(t)y2'(t) + q(t)y2(t) = 0,   y2(a) = 0,  y2'(a) = 1.   (10.2.4)
(3) Solve the following initial value problem for the inhomogeneous differential equation
and find yp(b):

yp''(t) + p(t)yp'(t) + q(t)yp(t) = f(t),   yp(a) = 0,  yp'(a) = 0.   (10.2.5)

(4) Compute the missing initial slope from the three final values:

m = (γb − γa y1(b) − yp(b)) / y2(b).   (10.2.6)

(5) Solve the initial value problem, and let y(t) be the numerical solution of the orig-
inal boundary value problem:

y''(t) + p(t)y'(t) + q(t)y(t) = f(t),   y(a) = γa,  y'(a) = m.   (10.2.7)
To find the numerical solutions, the first-order explicit differential equations should
be found. Selecting the state variables x1(t) = y(t) and x2(t) = y'(t), the following
standard form can be found, and denoted as f2(t, x(t)):

\[
\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ -q(t)x_1(t) - p(t)x_2(t) + f(t) \end{bmatrix}.
\]
Another auxiliary differential equation, denoted f1(t, x(t)), should be constructed to
describe the homogeneous part:

\[
\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ -q(t)x_1(t) - p(t)x_2(t) \end{bmatrix}.
\]
With the above equations and algorithm, the following MATLAB function can be
written:
function [t,y]=shooting(p,q,f,tspan,x0f,varargin)
ga=x0f(1); gb=x0f(2); % extract the boundaries
f1=@(t,x)[x(2); -q(t)*x(1)-p(t)*x(2)]; % homogeneous equations
f2=@(t,x)f1(t,x)+[0; f(t)]; % inhomogeneous equations
[t,y1]=ode45(f1,tspan,[1;0],varargin{:}); % (10.2.3)
[t,y2]=ode45(f1,tspan,[0;1],varargin{:}); % (10.2.4)
[t,yp]=ode45(f2,tspan,[0;0],varargin{:}); % (10.2.5)
m=(gb-ga*y1(end,1)-yp(end,1))/y2(end,1); % (10.2.6), find initial values
[t,y]=ode45(f2,tspan,[ga;m],varargin{:}); % (10.2.7)
It can be seen that each statement in the function faithfully implements the mathe-
matical description. It is immediately seen that MATLAB has advantages in solving
scientific computing problems.
The syntax of the function is

[t,y]=shooting(p,q,f,tspan,x0,controls)

where p, q, and f are respectively the function handles of p(t), q(t), and f(t); tspan is
the vector of the start and stop times; x0=[γa;γb] contains the boundary values, where
x0 is just a notation, not a vector of initial values. Besides, it can be seen that in solving the
initial value problems, the solver ode45() is used directly, and the “controls” are passed
through varargin to the ode45() solver. If the user thinks that the ode45() solver is not
suitable for certain problems, other solvers can be used instead. Simple examples are
given next to demonstrate the use of the new solver.
Example 10.1. Find the solutions of the following linear differential equations in the
interval [0, π]:
(\sin(t+1) + 2)^2 y''(t) - 2(\cos(t+1) + 2)^2 y'(t) - 16\sin(t+1)\,y(t) = 4e^{-2t}(4\cos(t+1) + 9),
where y(0) = 1 and y(π) = e−2π . It is known that the analytical solution of the differen-
tial equation is y(t) = e−2t .
With the following commands, the solution to the original problem can be found, as
shown in Figure 10.1. It can be seen that the solution y1 (x) satisfies both boundary
value conditions. Besides, in the solution process, the error tolerance is set to be very
strict, so the solution satisfies the original differential equations.
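A sketch of such commands is shown below; the handles p, q, and f follow directly from the equation above, and a strict tolerance setting is passed through to ode45().

>> D=@(t)(sin(t+1)+2)^2;
   p=@(t)-2*(cos(t+1)+2)^2/D(t); q=@(t)-16*sin(t+1)/D(t);
   f=@(t)4*exp(-2*t)*(4*cos(t+1)+9)/D(t);
   ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
   [t,y]=shooting(p,q,f,[0,pi],[1,exp(-2*pi)],ff);
   plot(t,y(:,1)), norm(y(:,1)-exp(-2*t))   % compare with the analytical solution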
It is seen from validating the result that the error norm between the numerical and
analytical solutions is e = 2.0422 × 10−12 . The precision obtained is rather high.
The numerical solution approach for the boundary value problems of linear time-
varying differential equations was presented earlier, and it can be used to effectively
solve certain boundary value problems. In this section, a matrix based numerical
method is presented. We also consider how to increase the effectiveness of the method.
Theorem 10.1. For a set of equally-spaced samples y(x1), y(x2), . . . , y(xn), the o(h^2)
central-point difference method can be used to approximate the terms involving the
first- and second-order derivatives of y(t):

y'(x_i) \approx \frac{y(x_{i+1}) - y(x_{i-1})}{2h},\qquad
y''(x_i) \approx \frac{y(x_{i+1}) - 2y(x_i) + y(x_{i-1})}{h^2}.
The first- and second-order difference formulas can be substituted into (10.2.2) so
that

\left(1 + \frac{h}{2}p(x_i)\right) y(x_{i+1}) + \bigl(-2 + h^2 q(x_i)\bigr) y(x_i) + \left(1 - \frac{h}{2}p(x_i)\right) y(x_{i-1}) = h^2 f(x_i).   (10.2.12)

Introducing the notations

t_i = -2 + h^2 q(x_i),\quad v_i = 1 + \frac{h}{2}p(x_i),\quad w_i = 1 - \frac{h}{2}p(x_i),\quad b_i = h^2 f(x_i),   (10.2.13)
the following tri-diagonal matrix equation can be set up:

\[
\begin{bmatrix}
t_1 & v_1 & & & \\
w_2 & t_2 & v_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & w_{n-2} & t_{n-2} & v_{n-2} \\
 & & & w_{n-1} & t_{n-1}
\end{bmatrix}
\begin{bmatrix} y(x_2) \\ y(x_3) \\ \vdots \\ y(x_{n-1}) \\ y(x_n) \end{bmatrix}
=
\begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_{n-2} \\ b_{n-1} \end{bmatrix},
\tag{10.2.16}
\]
and y(x1 ) = γa , y(xn+1 ) = γb . Solving this matrix equation, the numerical solution of
the differential equations y(xi ) can be found.
The mathematical form of the algorithm is simple, however, a low-precision
central-point finite difference method is adopted, so the accuracy of the function may
not be very high. The theoretical precision is o(h2 ).
The MATLAB implementation of the algorithm is
function [x,y]=fdiff0(p,q,f,tspan,x0f,n)
t0=tspan(1); tn=tspan(2); ga=x0f(1); gb=x0f(2);
x=linspace(t0,tn,n); h=x(2)-x(1); x0=x(1:n-1);
t=-2+h^2*q(x0); b=h^2*f(x0);
p0=p(x0); v=1+h*p0/2; w=1-h*p0/2; % compute the 4 vectors in (10.2.13)
b(1)=b(1)-w(1)*ga; b(n-1)=b(n-1)-v(n-1)*gb; % modified parameters (10.2.14)
A=diag(t)+diag(v(1:end-1),1)+diag(w(2:end),-1); % the tri-diagonal matrix
y=inv(A)*b(:); y=[ga; y; gb]'; % solve the linear algebraic equation
The syntax of the function is very close to that of the shooting() function. The differ-
ences are that the functions p, q, and f are expressed in dot operations. Besides, n is
the number of points to compute. It can be seen that in the algorithm, the points are
equally-spaced, therefore the algorithm may not be practical. Examples are used here
to demonstrate the use of the function.
Example 10.2. Solve again the two-point linear time-varying differential equations in
Example 10.1 with the new algorithm.
Solutions. Try now n = 5 000 to solve the problem. The values of y can be found with
function fdiff0(), and the y(t) curve can be drawn. After 2.72 seconds of waiting,
the maximum error is found to be 4.7675 × 10^{-4}. It can be seen that the algorithm is of
very low efficiency, and the error is too large.
>> p=@(t)-2*(cos(t+1)+2).^2./(sin(t+1)+2).^2;
q=@(t)-16*sin(t+1)./(sin(t+1)+2).^2;
f=@(t)4*exp(-2*t).*(4*cos(t+1)+9)./(sin(t+1)+2).^2;
tic, [t,y]=fdiff0(p,q,f,[0,pi],[1,exp(-2*pi)],5000); toc
y0=exp(-2*t); plot(t,y,t,y0,’--’), norm(y-y0,inf)
Since in the solution process a huge matrix of size 5 000 × 5 000 is used, the efficiency
is not satisfactory. To increase the accuracy, the value of n should also be increased.
When n = 10 000, the elapsed time increases to 27.16 seconds while the error is only
reduced to 2.3836 × 10−4 . Compared with the precision in Example 10.1, the error is too
large. Further increase of n may result in memory problems, and the speed is signifi-
cantly reduced, such that the solutions cannot be achieved on the current computers.
Therefore this algorithm is not recommended for solving boundary value problems of
linear time-varying differential equations.
Of course, with the properties of the tri-diagonal matrix in (10.2.16), sparse matri-
ces can be used to solve the problem. A sparse matrix based method is implemented below;
the modification is to build the matrix A directly in sparse form. Repeating the com-
mands in Example 10.2, even though n is increased to 2 000 000, the solution can be
found in 1.246 seconds. The error norm is 1.1340 × 10^{-6}. It can be seen that the efficiency
of the solver is improved significantly, but the error is still too large, so the solver is still
not of much practical use.
function [x,y]=fdiff(p,q,f,tspan,x0f,n)
t0=tspan(1); tn=tspan(2); ga=x0f(1); gb=x0f(2);
x=linspace(t0,tn,n); h=x(2)-x(1); x0=x(1:n-1);
t=-2+h^2*q(x0); b=h^2*f(x0); i=1:n-2;
p0=p(x0); v=1+h*p0/2; w=1-h*p0/2; % compute the 4 vectors in (10.2.13)
b(1)=b(1)-w(1)*ga; b(n-1)=b(n-1)-v(n-1)*gb; % modified parameters (10.2.14)
A=sparse([i i+1 i n-1],[i+1 i i n-1],[v(1:end-1),w(2:end),t]); % sparse tri-diagonal matrix
y=A\b(:); y=[ga; y(1:end-1); gb]'; % solve the linear algebraic equation
Since an o(h2 ) algorithm is used in the solver, the solution method is not of high ac-
curacy. The o(hp ) high precision algorithm from Volume II can be used. High accu-
racy results may be achieved, however, the programming may become more compli-
cated.
It can be seen from the previous examples that in solving the boundary value
problems of second-order linear time-varying differential equations, the first method
is recommended.
In the previous sections, only linear time-varying differential equations were con-
sidered. However, the two methods cannot be used in handling the general cases in
(10.1.1) and (10.2.1). In this section, numerical solutions of simple boundary value
problems of second-order nonlinear differential equations are explored.
Assuming that the original problem can be converted into the following initial
value problem

y''(t) = F\bigl(t, y(t), y'(t)\bigr),\quad y(a) = γ_a,\ y'(a) = m,

the value of m can be corrected iteratively with the Newton-type formula

m_{i+1} = m_i - \frac{y(b, m_i) - γ_b}{(\partial y/\partial m)(b, m_i)} = m_i - \frac{v_1(b) - γ_b}{v_3(b)},   (10.2.19)
where the new state variables are v1(t) = y(t, m_i), v2(t) = y'(t, m_i), v3(t) = ∂y(t, m_i)/∂m,
and v4(t) = ∂y'(t, m_i)/∂m. The initial value problem of the following auxiliary differ-
ential equations can be posed:
ential equations can be posed:
{
{ v1 (t) = v2 (t),
{
{
{v2 (t) = F(t, v1 (t), v2 (t)),
{
{
(10.2.20)
{v3 (t) = v4 (t),
{
{
{
{
{v (t) = 𝜓 F(t, v (t), v (t))v (t) + 𝜓 F(t, v (t), v (t))v (t)
{
1 2 3 1 2 4
{ 4 𝜓y 𝜓y
function [t,y]=nlbound(funcn,funcv,tspan,x0f,tol)
if nargin==4, tol=100*eps; end
ga=x0f(1); gb=x0f(2); m=1; m0=0;
ff=odeset; ff.RelTol=tol; % set the solution precision
while (norm(m-m0)>tol), m0=m; % shooting with a loop
[t,v]=ode45(funcv,tspan,[ga;m;0;1],ff);
m=m0-(v(end,1)-gb)/(v(end,3)); % implement (10.2.19)
end
[t,y]=ode45(funcn,tspan,[ga;m],ff);
The syntax of the function is [t,y]=nlbound(f1,f2,tspan,x0,tol), where f1 and f2 are
respectively the function handles of the explicit and the auxiliary differential equations.
The argument tol is the relative error tolerance, with a default value of 100eps. Examples
are used next to demonstrate the solution process.
Example 10.3. Solve the boundary value problem of the nonlinear differential equa-
tion

y''(t) = F\bigl(t, y(t), y'(t)\bigr) = 2y(t)y'(t),\quad y(0) = -1,\ y(\pi/2) = 1.
Solutions. The partial derivatives ∂F/∂y = 2y'(t) and ∂F/∂y' = 2y(t) can easily be
found. Substituting them into (10.2.20), the fourth equation, v4'(t) = 2v2(t)v3(t) +
2v1(t)v4(t), can be constructed. Therefore anonymous functions can be used to describe
the original and auxiliary differential equations.
MATLAB commands can be used to solve the original problem, and the solution curves
are as shown in Figure 10.2. It can be seen that the boundary values of x1 (t) satisfy the
given conditions.
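A sketch of such commands is given below; F1 describes the original equation, F2 the auxiliary equations in (10.2.20), and the default tolerance of nlbound() is used.

>> F1=@(t,x)[x(2); 2*x(1)*x(2)];               % original equation
   F2=@(t,v)[v(2); 2*v(1)*v(2);
             v(4); 2*v(2)*v(3)+2*v(1)*v(4)];   % auxiliary equations (10.2.20)
   [t,y]=nlbound(F1,F2,[0,pi/2],[-1,1]);
   plot(t,y(:,1),t,y(:,2),'--')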
For this simple differential equation, the following commands can be used to find the
analytical solution:
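A sketch of such symbolic commands is shown below; dsolve() may return the result in a different but equivalent form, and this is only one possible way to pose the problem.

>> syms y(t)
   ys=dsolve(diff(y,t,2)==2*y*diff(y,t), [y(0)==-1, y(pi/2)==1])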
The analytical solution is y(t) = tan(t − π/4). Therefore, the following commands can
be used to assess whether the solutions are accurate or not. The norm of the error is
found to be 1.3068 × 10−13 , meaning that high precision solutions can be found.
Example 10.4. Solve the boundary value problem for the linear time-varying differen-
tial equation in Example 10.1.

Solutions. Selecting the state variables y1(t) = y(t) and y2(t) = y'(t), the corresponding state
equation can be written as

\[
\mathbf{y}'(t) = \begin{bmatrix} y_2(t) \\ -p(t)y_2(t) - q(t)y_1(t) + f(t) \end{bmatrix}.
\tag{10.2.21}
\]
The partial derivatives are

\frac{\partial F}{\partial y} = -q(t),\qquad \frac{\partial F}{\partial y'} = -p(t),

so the auxiliary differential equations become

\[
\mathbf{v}'(t) = \begin{bmatrix} v_2(t) \\ -p(t)v_2(t) - q(t)v_1(t) + f(t) \\ v_4(t) \\ -q(t)v_3(t) - p(t)v_4(t) \end{bmatrix}.
\]
With the following anonymous functions, the two equations can be described,
and then the boundary value problem can be solved. The norm of the error is 1.3011 ×
10^{-12}, and the elapsed time is 0.14 seconds. It can be seen that the method here is the
most efficient.
>> D=@(t)(sin(t+1)+2)^2;
p=@(t)-2*(cos(t+1)+2)^2/D(t); q=@(t)-16*sin(t+1)/D(t);
f=@(t)4*exp(-2*t)*(4*cos(t+1)+9)/D(t);
F1=@(t,y)[y(2); -p(t)*y(2)-q(t)*y(1)+f(t)]; % original equation
F2=@(t,v)[F1(t,v(1:2));
v(4); -q(t)*v(3)-p(t)*v(4)]; % auxiliary equation
tic, [t,y]=nlbound(F1,F2,[0,pi],[1,exp(-2*pi)],100*eps); toc % strict tolerance
y0=exp(-2*t); plot(t,y,t,y0,'--'), norm(y(:,1)-y0)
Definition 10.4. In a so-called two-point boundary value problem, if the solution
interval is (a, b), the given conditions are specified at the two terminal
points a and b. The forms of the boundary conditions can be different; they can be gen-
erally described as in (10.1.2).
The methods discussed so far contained the solvers from low-level algorithms.
In this section, a more general differential equations solver is presented to deal with
various boundary value problems.
The solver bvp5c() provided in MATLAB is a very good one. It can be used to solve
well many boundary value problems. To solve a complicated boundary value problem
of differential equations, the following procedures are used:
(1) Parameter interpolation. Function bvpinit() can be used to input the initialization
information. Of course, it is not limited to describing the boundary values; other undetermined
coefficients can also be handled. The syntax of the function is

sinit=bvpinit(v,x0,θ0)
(2) Solution of the boundary value problem. The solver can then be called as

sol=bvp5c(fun1,fun2,sinit,options)

where fun1 and fun2 are respectively the function handles of the differential equations
and boundary conditions, which can be anonymous or MATLAB functions. The op-
tions are virtually the same as used in the ode45() solver. The returned sol is a struc-
tured variable, whose member sol.x is the row vector of t values, and sol.y is a matrix whose
rows contain the values of each state variable. The member sol.parameters
returns the undetermined coefficients θ, if any.
Examples are introduced later to demonstrate the syntaxes of the functions, and
also the solution process. If differential equations are to be solved, two MATLAB func-
tions must be written to describe respectively the first-order explicit differential equa-
tions and boundary conditions. The former is exactly the same as for initial value prob-
lems. Another MATLAB function is still needed to describe the boundary conditions.
Details are presented later in examples.
The second-order differential equations studied in Section 10.2 can also be solved with
bvp5c() function. Of course, this function can also be used in handling high-order
differential equations. In this section, the earlier examples are solved again, and the
accuracy and efficiency are assessed.
Example 10.5. Use the bvp5c() solver to solve again the boundary value problem

y''(t) = F\bigl(t, y(t), y'(t)\bigr) = 2y(t)y'(t),\quad y(0) = -1,\ y(\pi/2) = 1.
Solutions. Letting x1(t) = y(t) and x2(t) = y'(t), the first-order explicit differential
equations can be obtained as follows, and anonymous functions can be used to describe
them:

\[
\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ 2x_1(t)x_2(t) \end{bmatrix}.
\]
The boundary conditions are

x1(a) + 1 = 0,   x1(b) − 1 = 0.

Simpler notations are defined in MATLAB, where the two boundary conditions can be
written as

xa(1) + 1 = 0,   xb(1) − 1 = 0.

Here xa is the state variable vector at time a, while xb is the state vector at time b. The notation
“(1)” here means the first component in the state vector. Note that the expression here
is only a notation.
The MATLAB descriptions of the differential equations and the boundary condi-
tions are presented as
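A minimal sketch of such descriptions is the following; the handle names f and g are only illustrative.

>> f=@(t,x)[x(2); 2*x(1)*x(2)];    % the differential equations
   g=@(xa,xb)[xa(1)+1; xb(1)-1];   % the boundary conditions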
With the descriptions on the differential equations and boundary conditions, the fol-
lowing statements can be used to solve the boundary value problem directly. Five in-
terpolation points can be selected, and the results are identical to those in Figure 10.2.
If the analytical solution is used for comparison, it can be seen the norm of the error
is 1.2331 × 10−14 . The solution is indeed the most accurate solution under the double
precision data type.
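A sketch of the solution statements is given below, assuming the anonymous functions f and g defined above; the random initial guess is only one possible choice.

>> sinit=bvpinit(linspace(0,pi/2,5),rand(2,1));   % five interpolation points
   sol=bvp5c(f,g,sinit); plot(sol.x,sol.y)
   norm(sol.y(1,:)-tan(sol.x-pi/4),inf)           % compare with y(t)=tan(t-pi/4)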
Example 10.6. Solve again the problem in Example 10.1 with the provided solver.
Solutions. The first-order explicit differential equations can be found from the origi-
nal differential equation as

\[
\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ -p(t)x_2(t) - q(t)x_1(t) + f(t) \end{bmatrix}.
\]
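A sketch of the corresponding commands is shown below; the number of interpolation points is an assumption.

>> D=@(t)(sin(t+1)+2)^2;
   p=@(t)-2*(cos(t+1)+2)^2/D(t); q=@(t)-16*sin(t+1)/D(t);
   f0=@(t)4*exp(-2*t)*(4*cos(t+1)+9)/D(t);
   f=@(t,x)[x(2); -p(t)*x(2)-q(t)*x(1)+f0(t)];
   g=@(xa,xb)[xa(1)-1; xb(1)-exp(-2*pi)];
   sinit=bvpinit(linspace(0,pi,10),rand(2,1));
   tic, sol=bvp5c(f,g,sinit); toc
   norm(sol.y(1,:)-exp(-2*sol.x))   % compare with the analytical solution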
The elapsed time is 0.76 seconds and the norm of the error is 3.4325 × 10−15 , which is
more accurate than for the other methods. The efficiency is also higher than for the
solvers in Example 10.1.
where the boundary conditions are y1(0) = 2, y2(0) = 0, y1(1) = e^{-λ} + e^{-1}, and y2(1) = −1.
The analytical solution is y1(t) = e^{-λt} + e^{-t} and y2(t) = −t. If λ = 20, find the solution
for t ∈ (0, 1).
Solutions. Selecting the state variables x1(t) = y1(t), x2(t) = y1'(t), x3(t) = y2(t), and
x4(t) = y2'(t), the standard form of the differential equations can be written as

\[
\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ \lambda^2 x_1(t) + x_3(t) + t + (1-\lambda^2)e^{-t} \\ x_4(t) \\ -x_1(t) + e^{x_3(t)} + e^{-\lambda t} \end{bmatrix}.
\]
Besides, assuming that a and b are the terminal points, the boundary conditions
can be written as

x_a(1) - 2 = 0,\quad x_a(3) = 0,\quad x_b(1) - e^{-\lambda} - e^{-1} = 0,\quad x_b(3) + 1 = 0.
>> lam=20;
f=@(t,x)[x(2); lam^2*x(1)+x(3)+t+(1-lam^2)*exp(-t);
x(4); -x(1)+exp(x(3))+exp(-lam*t)];
g=@(xa,xb)[xa(1)-2; xa(3); xb(1)-exp(-lam)-exp(-1); xb(3)+1];
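The solution call itself might then look as follows (a sketch; the initial mesh density shown is an assumption):

   sinit=bvpinit(linspace(0,1,50),rand(4,1));   % assumed initial mesh
   sol=bvp5c(f,g,sinit);
   plot(sol.x,sol.y([1,3],:))                   % y1(t) and y2(t)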
In the solution process, a warning message “Warning: Unable to meet the tolerance
without using more than 2 500 mesh points” is displayed, indicating that the mesh grid
number n is not sufficient. The value of n can be set to 3 000, yet, when solving the
problem again, the warning message persists, and the norm of the error matrix increases
to 0.0013. It is obvious that the result is not accurate, so a warning like this should not
simply be neglected.
where u(1) = u'(1) = u(2) = u'(2) = 0, and the analytical solution is

u(x) = \frac{1}{4}(10\ln 2 - 3)(1 - x) + \frac{1}{2}\left[\frac{1}{x} + (3 + x)\ln x - x\right].
Solutions. It is obvious that differential equations in this form cannot be solved directly;
they should be converted first into the standard form. One way, of course, is to derive the
explicit form manually, but a better way is to manipulate the left-hand side of the equation
with symbolic commands.
Since the interval is x ∈ (1, 2), x ≠ 0, the explicit form can be found immediately
as

u^{(4)}(x) = -\frac{6}{x}u'''(x) - \frac{6}{x^2}u''(x) + \frac{1}{x^3}.
Selecting y1(x) = u(x), y2(x) = u'(x), y3(x) = u''(x), and y4(x) = u'''(x), the standard
form can be derived as

\[
\mathbf{y}'(x) = \begin{bmatrix} y_2(x) \\ y_3(x) \\ y_4(x) \\ -6y_4(x)/x - 6y_3(x)/x^2 + 1/x^3 \end{bmatrix}.
\]
The standard form of the boundary conditions is

y_a(1) = 0,\quad y_a(2) = 0,\quad y_b(1) = 0,\quad y_b(2) = 0.
Calling the following statements, the equations can be solved directly. The elapsed
time is 0.33 seconds, and the error is 2.4773 × 10−15 . The curves of u(x) and u (x) are
shown in Figure 10.4. The u(x) curve is exactly the same as the theoretical one.
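A sketch of such statements is given below; the grid density and the random initial guess are illustrative assumptions.

>> f=@(x,y)[y(2); y(3); y(4); -6*y(4)/x-6*y(3)/x^2+1/x^3];
   g=@(ya,yb)[ya(1); ya(2); yb(1); yb(2)];
   sinit=bvpinit(linspace(1,2,10),rand(4,1));
   tic, sol=bvp5c(f,g,sinit); toc
   u=@(x)(10*log(2)-3)*(1-x)/4+(1./x+(3+x).*log(x)-x)/2;  % analytical solution
   norm(sol.y(1,:)-u(sol.x),inf)
   plot(sol.x,sol.y(1,:),sol.x,sol.y(2,:),'--')            % u(x) and u'(x)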
Solutions with simple boundary conditions were found in the previous presentation.
If the boundary conditions are complicated, such as the one in (10.1.2), the boundary
conditions should be modified carefully into the standard forms, such that the solu-
tion process can be performed. For differential equations to have unique solutions,
the number of boundary conditions must equal the number of unknowns plus the
number of undetermined coefficients. Examples are shown next to demonstrate the
description of complicated boundary conditions.
Example 10.9. Consider again the problem in Example 10.2. If the boundary condi-
tions are changed into the following form:
function bvp5c() can still be used to solve the boundary value problem. Solve it again.
Solutions. If the symbols a and b are used to indicate the terminal points, the bound-
ary conditions under MATLAB notation can be written as
Therefore the boundary condition function f2 can also be described, and the fol-
lowing statements can be used to solve the differential equations. The results are as
shown in Figure 10.5:
If a boundary value problem has undetermined coefficients, for instance, the unde-
termined coefficient λ in Example 10.7, the solver bvp5c() can still be used in solving
the differential equations. The solution of the equations as well as the undetermined
coefficients are found. An extra boundary condition is needed for each undetermined
coefficient. If the number of boundary conditions is not adequate, the differential
equations cannot be solved. In this section, the differential equations with undeter-
mined parameters are studied through examples.
Example 10.10. Consider the problem in Example 10.7. If λ is unknown, five boundary
conditions are needed to solve the problem. Assume that an extra condition y1'(0) = −6
is introduced, and the y1(1) condition is changed to y1(1) = e^{-5} + e^{-1}. Solve again the
boundary value problem.
Solutions. Denoting the undetermined coefficient λ by v, the standard form of the
differential equations becomes

\[
\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ v^2 x_1(t) + x_3(t) + t + (1-v^2)e^{-t} \\ x_4(t) \\ -x_1(t) + e^{x_3(t)} + e^{-vt} \end{bmatrix}.
\]
With this information, the problem is described by anonymous functions, and the solutions of the differential equations can be found. To better solve the original problem, the number of interpolation points, n, is slightly increased. Note that when describing the differential equations and boundary conditions, apart from the fixed arguments, an extra parameter v is used.
For this particular problem, there is about a 50 % chance that the exact solution of the equations is found, that is, λ = 5. The error found is 1.0819 × 10−14, which is the best solution. Sometimes unsatisfactory results are found, for instance, negative values of λ, and sometimes the error message "Unable to solve the collocation equations — a singular Jacobian encountered" is displayed. If this happens, the solver can be called again; the commands should be executed repeatedly until the correct results are found.
In real applications, when such problems are solved and the analytical solution is not known, the code can be executed repeatedly. If the results of several executions are the same, they are probably the genuine solution of the problem.
Example 10.11. For the differential equations given below, find the constants α and β,
and solve the differential equations:
Solutions. The state variables x1(t) = x(t) and x2(t) = y(t) can be selected. Besides, letting v1 = α and v2 = β, the original problem is converted into the standard form
$$x_1'(t) = 4x_1(t) + v_1 x_1(t)x_2(t),\qquad x_2'(t) = -2x_2(t) + v_2 x_1(t)x_2(t).$$
Therefore the following commands can be used to describe the differential equations
and boundary conditions:
>> f=@(t,x,v)[4*x(1)+v(1)*x(1)*x(2);
-2*x(2)+v(2)*x(1)*x(2)];
g=@(xa,xb,v)[xa(1)-2; xa(2)-1; xb(1)-4; xb(2)-2];
The function bvpinit() can be called for the initialization process first, for the time
grids, and parameters α and β. Since there are two initial states and two undeter-
mined coefficients, they can both be set to rand(2,1). With these parameters, func-
tion bvp5c() can be called to determine the parameters α and β and solve the bound-
ary value problem. The results of the two states are obtained as shown in Figure 10.6.
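A sketch of these calls follows; the solution interval and the number of grid points are illustrative assumptions, since only the boundary information is fixed by the problem statement.

>> sinit=bvpinit(linspace(0,1,10),rand(2,1),rand(2,1));  % grid, states, parameters
sol=bvp5c(f,g,sinit);
v=sol.parameters        % estimates of alpha and beta
plot(sol.x,sol.y)       % the two states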
The boundary value problems discussed so far were defined on the interval [a, b],
with a, b being given numbers. In this section, semi-infinite interval boundary value
problems are explored.
Definition 10.5. If the boundary value problem is defined on an infinite interval, that
is, a = −∞ or b = ∞, the boundary value problem is referred to as semi-infinite interval
problem.
For ordinary semi-infinite boundary value problems, there seem to be no good analytical methods in mathematics. The usual way is to select a large value L to replace the infinite terminal point. Of course, when the solutions are found, their curves should be examined to see whether they are flat near L. If they are, the solution is valid; otherwise the selected L may be too small, and the value of L should be increased before trying again. Examples are given below.
Example 10.12. Consider the differential equations in Example 10.1. If the boundary
conditions are y(0) = 1 and y(∞) = 0, solve the differential equations.
Solutions. The state variables y1(t) = y(t) and y2(t) = y'(t) can be selected first, and the state space equation in (10.2.21) can be constructed. Besides, selecting a stopping time of t = L, the boundary conditions can be written as
$$y_a(1) - 1 = 0,\qquad y_b(1) = 0.$$
Letting L = 20, the following statements can be used to approximately solve the problem. The norm of the error between the analytical and numerical solutions is found to be 3.2678 × 10−14. The solutions are shown in Figure 10.7. It can be seen that the selection of the stopping time is successful, since the y(t) curve has become flat at L = 20.
>> D=@(t)(sin(t+1)+2)^2;
p=@(t)-2*(cos(t+1)+2)^2/D(t); q=@(t)-16*sin(t+1)/D(t);
f=@(t)4*exp(-2*t)*(4*cos(t+1)+9)/D(t);
F=@(t,y)[y(2); -p(t)*y(2)-q(t)*y(1)+f(t)];
G=@(ya,yb)[ya(1)-1; yb(1)]; L=20; x0=rand(2,1);
sinit=bvpinit(linspace(0,L,10),x0);
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
sol=bvp5c(F,G,sinit,ff); t=sol.x; y=sol.y;
y0=exp(-2*t); plot(t,y,t,y0,'--'), norm(y(1,:)-y0)
In fact, if the analytical solution is not known, different values of L can be tried, and
one can see whether consistent solutions are found. If they are not consistent, a larger
value of L can be tried, and solutions should be validated again.
For this particular example, if L = 5 is selected, the norm of the error increases to 1.2695 × 10−4. The error is large, indicating that L is not chosen properly and should be increased.
It should be noted that when L = 20, solving the initial value problem from the theoretical initial value x0 = [1, −2]^T may cause problems: the solution is divergent. If L = 10, the forward simulation is successful. Therefore the problem is sensitive to initial values, and in numerical solution studies care must be taken.
>> L=20;
[t,x]=ode45(F,[0,L],[1;-2],ff); plot(t,x(:,1))
It has been indicated earlier that for linear differential equations with a total of m unknown functions and undetermined coefficients, there should be m boundary conditions to uniquely determine the solutions. In theory, the m boundary conditions must be linearly independent. In real problems, it is not always possible to judge whether the conditions are linearly independent or not. Besides, if the m equations are nonlinear algebraic equations, it is hard to ensure the uniqueness of the solutions; sometimes the differential equations have more than one solution.
In this section, a special example is explored. In the example, the initial and
terminal values are not given as fixed numbers. The relationship between them is
known. The problem is referred to as a boundary value problem with floating bounds.
An example is used to show that the solver bvp5c() fails to find solutions. In later
sections, effective methods are introduced.
Example 10.13. Consider the following boundary value problem with floating boundary conditions:
$$u''(t) = \min\bigl(u(t)+2,\ \max(-u(t),\ u(t)-2)\bigr),\qquad t\in(0,2\pi),$$
where u(0) = u(2π) and u'(0) = u'(2π). In [19], the analytical solution of the equation is given as u(t) = sin t, so that u''(t) = −sin t.
Solutions. Selecting the state variables as x1(t) = u(t) and x2(t) = u'(t), the standard form of first-order explicit differential equations is written as
$$\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ \min\bigl(x_1(t)+2,\ \max(-x_1(t),\ x_1(t)-2)\bigr) \end{bmatrix}.$$
With the following commands, the differential equations and boundary condi-
tions are described. Function bvp5c() can be tried to solve the differential equations.
However, the solution found is u(t) ≡ 0, and the number of points found is the same
as the value of N. No matter how many times the commands are tried, the solution is
the same. It should be noted that the analytical solution listed in [19] is not complete.
Indeed, u(t) ≡ 0 is also a solution of the original differential equations. With the func-
tion call here, no other solutions can be found, which implies that for this particular
problem, the general-purpose solver fails.
In Section 10.4, the same problem will be revisited, and we will show that the differential equations might have infinitely many solutions, which the solver bvp5c() fails to find.
In Section 4.5, some integro-differential equations were studied. Some of the equa-
tions could be converted into ordinary differential equations so that MATLAB could
be used. In this section, an example is used to show the transformation and solution
method for some boundary value problems for integro-differential equations.
Example 10.14. Consider the boundary value problem of the following integro-diffe-
rential equation:[18]
$$x'(t) = x(t) - \frac{1}{1500}x^2(t) + \frac{1}{3000}\int_{-1}^{1} e^{2(t-s)}x^2(s)\,\mathrm{d}s \tag{10.3.1}$$
where the boundary values are x(−1) = 1 and x(1) = e². It is known that the exact solution is x(t) = e^{1+t}. Convert the equation into an ordinary differential equation, find its numerical solution, and assess the precision of the numerical solutions.
Solutions. Although the integral here is different from those discussed in Section 4.5, a similar idea can still be used to carry out the manipulation. Moving the function of t out of the integrand, it is found from (10.3.1) that the integral term can be expressed explicitly as
$$\int_{-1}^{1} e^{-2s}x^2(s)\,\mathrm{d}s = \frac{3000}{e^{2t}}\left[x'(t) - x(t) + \frac{1}{1500}x^2(t)\right].$$
Taking the derivative of (10.3.1) with respect to t, and substituting the integral expression back into the result, we obtain
$$x''(t) = \frac{1}{750}x^2(t) - \frac{1}{750}x(t)x'(t) - 2x(t) + 3x'(t).$$
With the differential equation and boundary values, the following statements can
be used to solve the boundary value problem directly. Compared with the exact solu-
tion, it is found that the norm of the error function is 4.0039 × 10−14 , and the elapsed
time is 0.46 seconds. It can be seen that this method can be tried for some integro-
differential equations.
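A sketch of such statements is shown below, converting the derived second-order equation into first-order form and calling bvp5c(); the mesh and tolerance settings are illustrative assumptions.

>> f=@(t,y)[y(2); y(1)^2/750-y(1)*y(2)/750-2*y(1)+3*y(2)];
g=@(ya,yb)[ya(1)-1; yb(1)-exp(2)];        % x(-1)=1, x(1)=e^2
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
sinit=bvpinit(linspace(-1,1,10),rand(2,1));
tic, sol=bvp5c(f,g,sinit,ff); toc
t=sol.x; norm(sol.y(1,:)-exp(1+t))        % compare with the exact x(t)=e^(1+t)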
The unknown initial values can be selected as the decision variables for the optimization problem. Combined with the
idea of a shooting method in Section 10.2, the final value obtained in simulation and
the given final value may have a difference. The error between them can be used as the
objective function. Therefore the boundary value problem can be converted into an
optimization problem. If there is more than one final value, the objective function can
be selected as the sum of absolute values of the errors. Through optimization process,
the consistent initial values are found, from which the initial value problem can be
solved again, yielding the solution of the original boundary value problem.
In this section, a simple boundary value problem is demonstrated first. Then the
differential equations which are not suitable to be solved with the solver bvp5c() will
be explored. These include implicit differential equations and delay differential equa-
tions. Simulation based boundary value problems and the boundary value problems
of fractional-order differential equations are studied.
To demonstrate the conversion from boundary value problems into optimization prob-
lems, a simple boundary value problem is introduced first. For this simple boundary
value problem, the solver bvp5c() may be more effective. There is no need to use the
conversion process for this particular example.
Example 10.15. Solve the problem in Example 10.8 with the optimization method.
Solutions. It can be seen from the boundary values that the initial values of y1 (t) and
y3 (t) are known, whereas the initial values of y2 (t) and y4 (t) are unknown. Therefore
the decision variables can be selected as x1 = y2 (1) and x2 = y4 (1). Then the differential
equations can be solved from the initial values. Of course, the solution obtained in
this way may not be the one we are expecting, since the terminal conditions are not
necessarily satisfied. There might be errors between the expected terminal values and
those obtained by simulation methods. The errors can be used as the objective func-
tion. Therefore the MATLAB function given next can be used to compute this objective
function. Note that to demonstrate solutions, the expected value “0” is also written
into the objective function. In real applications, the term “−0” can be omitted.
function y=c10mbvp(x,f,ff)
y0=[0; x(1); 0; x(2)]; % construct the initial values
[t,ym]=ode45(f,[1,2],y0,ff); % solve differential equations
y=abs(ym(end,1)-0)+abs(ym(end,3)-0); % compute the objective function
The following commands can be used to solve the boundary value problem. The con-
sistent initial values are y2 (1) = 0.017132 and y4 (1) = −0.5. The elapsed time is 1.68
seconds, larger than that in Example 10.8. The norm of the error is 4.3324 × 10−15 . It
can be seen that the method here is effective. The curves obtained are exactly the same
as in Figure 10.4.
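A sketch of these commands is given below, assuming fminsearch() is used as the optimizer together with the objective function c10mbvp() defined above; the tolerances and the random starting point are illustrative.

>> f=@(x,y)[y(2); y(3); y(4); -6*y(4)/x-6*y(3)/x^2+1/x^3];
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
F=@(x)c10mbvp(x,f,ff); opt=optimset; opt.TolX=eps;
tic, x=fminsearch(F,rand(2,1),opt)        % consistent y2(1) and y4(1)
[t,y]=ode45(f,[1,2],[0; x(1); 0; x(2)],ff); toc, plot(t,y(:,1))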
Since there are two terminal conditions, the sum of the absolute errors is used
to define the objective function. In classical studies of optimization, many researchers prefer the sum of the squared errors as the objective function, since it is "differentiable". In fact, such a selection is not quite appropriate here: under the double-precision framework, the error tolerance can be set as small as about 10−16, and if the squared sum is used as the objective function, the solution process may terminate prematurely when the actual error is only about 10−8, so the precision of the solutions may not be as high. In modern optimization solvers, having a "differentiable" objective function is not that important, and if it has no special meaning, this consideration can be neglected.
More meaningful objective functions should be selected, so as to better solve practical
problems.
It can be seen from this example that the boundary value problem of differential
equations can be converted into an optimization problem. Although for this simple
example the efficiency of the optimization based solution method is slightly lower
than that of bvp5c(), the bvp5c() solver has limitations. For instance, implicit
differential equations and delay differential equations with boundary values cannot
be solved at all with the bvp5c() solver. Optimization methods can be used to solve
these differential equations. Several examples are used in this section to explore the
boundary value problems for differential equations which cannot be solved with the
bvp5c() solver.
Boundary value problems of implicit differential equations cannot be solved with the
solvers such as bvp5c(). Under the current version of MATLAB, there is no such solver
for implicit differential equations.
It can be seen from the previous introduction that, no matter what the boundary
conditions, the conversion method discussed above can be used to convert the original
problem into an optimization problem. More specifically, the unknown initial values
can be selected as decision variables to solve the differential equations, while the so-
lution at the other end can be compared with the given one, and the error can be used
as the objective function. Therefore through an optimization technique, the consistent
initial values can be found, from which the numerical solution of the differential equa-
tions can finally be established. In this section, an example is used to demonstrate the
solution of boundary value problems of implicit differential equations.
Example 10.16. Consider the implicit differential equations in Example 5.11. If the
given conditions are x1 (0) = 0 and x2 (10) = 0, solve the differential equations.
function [y,xd0]=c10mimp1(x,f,ff)
x0=[0; x(1)]; x0F=ones(2,1);
[x0,xd0,f0]=decic_new(f,0,x0,x0F); % compute consistent initial values
[t,x]=ode15i(f,[0,10],x0,xd0,ff); y=abs(x(end,2)-0);
Compared with the earlier examples, there are three different points to consider: (1) the function handle f describes an implicit differential equation; (2) the consistent initial values of the first-order derivatives are also required, but this is handled internally, and an extra returned argument is reserved for that vector; (3) the internal call to the solver is different. Whatever the differences, the idea is the same, that is, computing the error between the final values and the expected terminal conditions.
With the following statements, the undetermined initial value x2(0) = 0.06266 can be found. The consistent initial values of the first-order derivatives are x'(0) = [0.06279, 1.00197]^T. The total elapsed time is 24.7711 seconds. The curves obtained
are very close to those shown in Figure 4.14, indicating that the solution process is
successful.
>> f=@(t,x,xd)[xd(1)*sin(x(1))+xd(2)*cos(x(2))+x(1)-1;
-xd(1)*cos(x(2))+xd(2)*sin(x(1))+x(2)];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
F=@(x)c10mimp1(x,f,ff); opt=optimset; opt.TolX=eps;
tic, x=fminsearch(F,rand(1,1),opt); % optimization
[~,xd0]=c10mimp1(x,f,ff) % consistent initial values
[t,y]=ode15i(f,[0,10],[0; x(1)],xd0,ff); toc % solve implicit equations
plot(t,y)
It should be noted that since the final value x2 (10) = 0 is different from that in the
original example, it is only an approximation. Therefore the value of x2 (0) obtained is
not 0, but a small number. The solutions are reasonable. Besides, in each step of the
objective function evaluation, an implicit differential equation is solved once. There-
fore the solution process is rather time consuming. It is a lucky coincidence that the
solution of a boundary value problem for implicit differential equations was found in
an acceptable time.
Example 10.17. Now consider the differential equation with multiple solutions in Example 4.9,
$$[y''(t)]^2 - 4[t\,y'(t) - y(t)] - 2y'(t) - 1 = 0.$$
If the given conditions are y(0) = 0 and y(1) = 1, find the analytical and numerical solutions, assess the errors in the numerical solution, and also the efficiency of the solution process.
Solutions. Since the original differential equation has analytical solutions, the fol-
lowing commands can be used to solve this differential equation directly:
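A possible form of such commands, assuming the implicit equation quoted above and the Symbolic Math Toolbox function dsolve(), is sketched below; whether the closed-form branches are returned may depend on the MATLAB version.

>> syms t y(t)
sols=dsolve((diff(y,2))^2-4*(t*diff(y)-y)-2*diff(y)-1==0, y(0)==0, y(1)==1);
simplify(sols)     % the polynomial branches of the solution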
The analytical solutions of the differential equations can be found, where there are
two branches:
$$y(t) = \frac{t}{2}\left(\frac{1}{4}(\sqrt{11}\pm 1)^2 - 1\right) \mp \frac{t^2}{4}(\sqrt{11}\pm 1) + \frac{t^3}{6} + \frac{t^4}{12}.$$
Selecting x1(t) = y(t) and x2(t) = y'(t), the problem can be written in the implicit first-order form
$$x_1'(t) - x_2(t) = 0,\qquad [x_2'(t)]^2 - 4[t\,x_2(t) - x_1(t)] - 2x_2(t) - 1 = 0,$$
with x1(0) = 0 and x1(1) = 1. This problem cannot be solved with solvers such as bvp5c(); an optimization technique should be introduced to find the solutions. The objective function in Example 5.13 can be embedded into the following MATLAB function:
function [y,xd0]=c10mimp2(x,f,ff)
x0=[0; x(1)]; x0F=ones(2,1);
[x0,xd0,f0]=decic_new(f,0,x0,x0F); % consistent initial values
[t,x]=ode15i(f,[0,1],x0,xd0,ff); y=abs(x(end,1)-1);
With the following statements, the optimization problem can be solved, and the con-
sistent initial value is found, x2 (0) = 1.8292. The elapsed time of the solution process
is about 3.36 seconds. Compared with the analytical solutions, the norm of the error
is 2.4960 × 10−11 . It can be seen that the solution process is feasible.
>> f=@(t,x,xd)[xd(1)-x(2);
xd(2)^2-4*(t*x(2)-x(1))-2*x(2)-1];
ff=odeset; ff.AbsTol=100*eps; ff.RelTol=100*eps;
F=@(x)c10mimp2(x,f,ff); opt=optimset; opt.TolX=eps;
tic, x=fminsearch(F,rand(1,1),opt) % consistent initial values
[~,xd0]=c10mimp2(x,f,ff) % derivative initial values
[t,y]=ode15i(f,[0,1],[0; x(1)],xd0,ff); plot(t,y), toc
y0=t*((sqrt(11)+1)^2/4-1)/2-t.^2/4*(sqrt(11)+1)+t.^3/6+t.^4/12;
norm(y(:,1)-y0)
Executing the above statements repeatedly, another solution can also be found. The
consistent initial value is x2 (0) = 0.1708. Of course, sometimes the above commands
may also yield error messages like “Exiting: Maximum number of function evaluations
has been exceeded – increase MaxFunEvals option. Current function value: 0.202097”.
In fact, the expected objective function value is 0; otherwise the solution process fails. The result x can be substituted into the objective function to see whether the value of the function is 0 or not, so as to judge whether the solution process is successful. If this
error message appears, the statements should be executed again. It is not necessary
to increase the value of MaxFunEvals as prompted.
In Example 6.1, attempts were made to convert a delay differential equation into differential equations with no delays. This method is rather complicated and is not universal, since most delay differential equations cannot be converted like this. Here, an optimization technique can be used to solve boundary value
verted like this. Here, an optimization technique can be used to solve boundary value
problems of delay differential equations. The basic idea is still the same. Consistent
initial values can be found such that the errors in the final values are minimized.
Since there are almost no examples on boundary value problems in delay differential
equations in the literature, the examples here are converted from those in Chapter 6. In
this section, examples are used to demonstrate the numerical solutions of boundary
value problems of delay differential equations.
Example 10.18. Consider the delay differential equations in Example 6.13, with fixed
delays. If the boundary values are x1 (0) = 0, x2 (10) = 0.333, and x1 (10) = x3 (10) + 0.17,
and the history functions are all constant, solve the boundary value problem of the given delay differential equations.
380 | 10 Boundary value problems for ordinary differential equations
Solutions. It can be seen from the boundary conditions that x1 (0) is given, therefore
x2 (0) and x3 (0) are to be solved for with optimization techniques. They can be selected
as decision variables. Since the boundary conditions are x2 (10) = 0.333 and x1 (10) =
x3 (10) + 0.17, the following MATLAB function can be written, to compute the objective
function, which is defined as the sum of the absolute errors in the final values.
function y=c10mdde1(x,f,tau,ff)
x0=[0; x(1); x(2)]; sol=dde23(f,tau,x0,[0,10],ff);
x=sol.y; y=abs(x(2,end)-0.333)+abs(x(1,end)-x(3,end)-0.17);
Therefore the following commands can be used to solve the boundary value problem
for delay differential equations. The elapsed time is 29.35 seconds, and the consistent initial values are x2(0) = −2.7206 and x3(0) = 2.0130. The state curves are as shown
in Figure 10.8.
>> f=@(t,x,Z)[1-3*x(1)-Z(2,1)-0.2*Z(1,2)^3-Z(1,2);
x(3); 4*x(1)-2*x(2)-3*x(3)];
ff=odeset; ff.RelTol=1e-8; ff.AbsTol=1e-8;
tau=[1 0.5]; % set the two delay constants
F=@(x)c10mdde1(x,f,tau,ff); opt=optimset; opt.TolX=eps;
tic, x=fminsearch(F,rand(2,1),opt)
sol=dde23(f,tau,[0; x(:)],[0,10],ff); plot(sol.x,sol.y), toc
Note that, as indicated in Chapter 6, the error tolerances in solving delay differential equations cannot be set to very small values; otherwise the solution process cannot be completed successfully. The error tolerance here is set to 10−8.
Example 10.19. Consider the delay differential equations in Example 6.13, which is
modified as follows:
where α = 0.77. If the boundary values are x1 (0) = x3 (0) = 0 and x3 (10) = 5.5, and
the history functions of the differential equations are all constant, solve the boundary
value problem.
Solutions. The current solvers in MATLAB cannot be used for solving boundary value problems of delay differential equations, so the optimization method should be tried again for this kind of problem. In this specific problem, x1(0) and x3(0) are given, and x2(0) is unknown, so it can be selected as the decision variable. The quantity |x3(10) − 5.5| is used as the objective function. The following MATLAB function can be written to compute the objective function:
function y=c10mdde2(x,f,tau,ff)
x0=[0; x(1); 0]; sol=ddesd(f,tau,x0,[0,10],ff);
x=sol.y; y=abs(x(3,end)-5.5);
Solving this optimization problem, the consistent initial value obtained is x2(0) = 0.0543, and the elapsed time is 8.96 seconds. The curves obtained are very close to those in Figure 6.10.
Function bvp5c() discussed earlier can only be used to solve two-point boundary
value problems, while in real applications, more than two points for the differential
equations may be given. Therefore function bvp5c() cannot be used. An optimization
technique can be introduced to solve multipoint boundary value problems. In this
section, examples are used to demonstrate multipoint boundary value problems.
Example 10.20. It is seen from Example 2.30 that, in the differential equations, the
information at t = 0, π, 2π is known. Such a problem cannot be solved with two-point
solvers. Use an optimization technique to find the solution of the differential equa-
tions.
Solutions. Before solving this problem, the mathematical form of the right-hand side
of the equation for the given input u(t) should be derived. This can be done with the
symbolic commands
Selecting the state variables x1(t) = y(t), x2(t) = y'(t), x3(t) = y''(t), and x4(t) = y'''(t), the first-order explicit differential equations can be constructed as
$$\mathbf{x}'(t) = \begin{bmatrix} x_2(t) \\ x_3(t) \\ x_4(t) \\ u_1(t) - 10x_4(t) - 35x_3(t) - 50x_2(t) - 24x_1(t) \end{bmatrix}.$$
>> u1=@(t)87*exp(-5*t)*cos(2*t+1)+92*exp(-5*t)*sin(2*t+1)+10;
f=@(t,x)[x(2); x(3); x(4);
u1(t)-10*x(4)-35*x(3)-50*x(2)-24*x(1)];
It can be seen from the initial values that only x1(0) is known, so the initial values of the other three states can be used as the decision variables. Besides, the terminal conditions y'(π) = y''(2π) = y'(2π) = 0 can also be written as x2(π) = x3(2π) = x2(2π) = 0.
To implement this multipoint problem, the solution interval can be divided into
two parts, [0, π] and [π, 2π]. The result of the first interval is regarded as the initial
value of the second one. Therefore the following objective function can be written:
function y=c10mmult(x,f,ff)
x0=[1/2; x(:)];                              % initial state vector, x1(0)=1/2 known
[t1,x1]=ode45(f,[0,pi],x0,ff);               % first interval [0,pi]
[t2,x2]=ode45(f,[pi,2*pi],x1(end,:).',ff);   % second interval, piecewise solution
y=abs(x1(end,2)-0)+abs(x2(end,3)-0)+abs(x2(end,2)-0);
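The driving commands may then take a form similar to the sketch below, with the handle f defined above and fminsearch() used for the search (the global solver fminunc_global() mentioned next is an alternative); since the starting point is random, several runs may be needed.

>> ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
F=@(x)c10mmult(x,f,ff); opt=optimset; opt.TolX=eps;
x=fminsearch(F,rand(3,1),opt), F(x)    % consistent initial values and residual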
With repeated trials, or with the global optimization solver fminunc_global() from Volume IV, the consistent initial values are x2(0) = 0.7985, x3(0) = −4.0258, and x4(0) = −9.0637. The value of the objective function is F(x) = 9.1898 × 10−17. From the consistent initial values, the numerical solution of the original boundary value problem can be found.
Note that the above commands should be executed repeatedly, until consistent initial
values are found, such that the value of the objective function is very close to zero.
The numerical solutions of the boundary value problem can alternatively be found
by solving initial value problems. Compared with the analytical solutions, the norm
of the error is 6.6661 × 10−9 . It can be seen that the results are more satisfactory.
In Section 10.3, an example was used to show that the general purpose solver bvp5c()
failed. The characteristics of the example are that the values of the unknowns at the
boundaries are not exactly known. Only the relationship of the boundary values is
known. The solution of these differential equations may not be unique. The solver
cannot be used to find any other solutions satisfying the floating boundary conditions.
Here the optimization based technique is used to solve boundary value problems.
Example 10.21. Solve again the boundary value problem in Example 10.13.
function y=c10mnon(x,f,ff)
x0=x(:); [t,x]=ode45(f,[0,2*pi],x0,ff);
y=abs(x(1,1)-x(end,1))+abs(x(1,2)-x(end,2)); % objective function
With such an objective function, the following commands can be tried directly to solve
the differential equations. Since the problem has infinitely many solutions, each time
the following statements are called, a set of consistent initial values are found. For in-
stance, the consistent initial value vector is x 0 = [0.3224, −0.9271]T . The solutions are
shown in Figure 10.9, superimposed by two broken lines, indicating that the boundary
conditions are also satisfied.
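A sketch of such commands, using the standard form derived above together with the objective function c10mnon(), is given below; the tolerances are illustrative.

>> f=@(t,x)[x(2); min(x(1)+2,max(-x(1),x(1)-2))];
ff=odeset; ff.RelTol=100*eps; ff.AbsTol=100*eps;
F=@(x)c10mnon(x,f,ff); opt=optimset; opt.TolX=eps;
x0=fminsearch(F,rand(2,1),opt)            % a set of consistent initial values
[t,x]=ode45(f,[0,2*pi],x0,ff); plot(t,x)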
If the above code is executed again, since the initial values are randomly chosen, it
is quite probable that another solution is found. Further execution may yield even
more solutions. It can be seen that the solutions are sine and cosine curves in one
cycle, which can be regarded as the left–right translation of the curves in Figure 10.9,
since the boundary conditions are satisfied. It can be seen that the analytical solution
in [19], u(t) = ± sin t, is incomplete, since u(t) = A sin(t + θ) (|A| ⩽ 1) also satisfies
the original boundary value problem. Besides, u(t) ≡ 0, ±2 are also solutions of the
original problem.
If the differential equations are described by Simulink models, simulation-based optimization methods can be used in solving the boundary value problems. Here only an example is given to demonstrate the solution method.
Example 10.22. Use Simulink and the optimization technique to solve again the semi-
infinite interval boundary value problem in Example 10.12. The differential equation
is a linear time-varying one.
Solutions. From the given differential equations, it is not difficult to build up the
Simulink model, as shown in Figure 10.10. The functions p(t), q(t), and f (t) are de-
scribed by Fcn blocks, whose contents are respectively:
-2*(cos(u+1)+2)^2/(sin(u+1)+2)^2
-16*sin(u+1)/(sin(u+1)+2)^2
4*exp(-2*u)*(4*cos(u+1)+9)/(sin(u+1)+2)^2
and the initial value of the left integrator is set to the variable a. This variable can be selected as the decision variable. Set the stopping time to L = 10.
With the Simulink model, the following MATLAB function can be written, to describe
the objective function. The physical meaning is that the value |y(L)| is made as small
as possible (as close as possible to zero, such that the boundary condition is satis-
fied).
function y=c10mbvp1a(x)
assignin('base','a',x)
[t,~,y0]=sim('c10mbvp1'); y=abs(y0(end)); % objective function
The consistent initial value found is −1.999999999999975, very close to the exact value of −2. Computing from this initial value, the solution of the original differential equations can be found.
Since these differential equations are very sensitive to the initial values, a slight error in the initial value may yield huge errors. For this particular example, the norm of the error is 1.2225 × 10−4, with an elapsed time of 7.96 seconds. Although the accuracy is significantly lower than that in Example 10.12, the method presented here may
have broader application fields, since differential equations of any complexity can be
tackled. For instance, we can solve the boundary value problems of fractional-order
differential equations to be discussed next.
To the author’s knowledge, there are no methods capable of solving boundary value
problems for fractional-order differential equations. In this section, a feasible method
is proposed to solve such problems.
An effective block diagram based method is proposed in [74], which can be used
for solving initial value problems of nonlinear fractional-order differential equations.
Therefore, the method can be combined with optimization techniques, such that the
boundary value problems for fractional-order differential equations can be solved.
Solutions. In Example 9.22, a Simulink model c9mfode3.slx was created for the fractional-order differential equation. If the initial value of the y'(t) integrator, the left one in Figure 9.35, is changed from −1 to the variable a, a new model c10mfode.slx can be constructed. This model can be used to solve initial value problems of this fractional-order differential equation.
In normal cases, if a certain a, that is, x2(0), makes the problem singular, the solution process may abort abnormally, and an error message is given. This is not what we are expecting. A try–catch structure can be used: if the solution process yields an error, the objective function is artificially set to a large value, so that the singular points are avoided and the optimization process can be continued. Based on this idea, the following objective function can be written:
function y=c10mfode1(x)
assignin('base','a',x)
try
   [t,~,y0]=sim('c10mfode',1); y=abs(y0(end)-exp(-t(end)));
catch, y=10; end
With the objective function, suitable parameters of the Oustaloup filter are selected, and the following commands can be used to solve the boundary value problem. The elapsed time of the solution process is 69.3 seconds. The consistent initial value is x2(0), that is, a = −0.999998506, which is very close to the theoretical value −1. Starting from this point, the numerical solution can be found, and the norm of the error is 1.3728 × 10−6. It can be seen that the solution process here is successful.
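A sketch of such commands is shown below, assuming the Oustaloup filter parameters required by the model c10mfode.slx have already been placed in the workspace as in Chapter 9; only the optimization call itself is illustrated.

>> F=@c10mfode1; opt=optimset; opt.TolX=1e-8;
tic, a=fminsearch(F,-rand(1,1),opt), toc   % consistent initial value x2(0)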
10.5 Exercises
10.1 Solve the following boundary value problem:[61]
$$y''(t) = -0.05y'(t) - 0.02y^2(t)\sin t + 0.00005\sin t\cos^2 t - 0.05\cos t - 0.0025\sin t$$
where y(0) = y(2π), y'(0) = y'(2π), with the known analytical solution y(t) = 0.05 cos t.
10.2 Solve the following boundary value problem:
$$y^{(4)}(t) = 6e^{-4y(t)} - \frac{12}{(1+t)^4}$$
where y(0) = 0, y(1) = ln 2, y''(0) = −1, and y''(1) = −0.25. The analytical solution is y(t) = ln(1 + t).
10.3 Solve the following boundary value problem:[61]
$$y(x) = \frac{e^{-20}}{1+e^{-20}}e^{20x} + \frac{1}{1+e^{-20}}e^{-20x} - \cos^2 \pi x.$$
10.4 Magnetic monopoles mapped to the (0, 1) interval are described by differential
equations[2]
$$\begin{cases} y_1'(t) = a\,y_1(t)\bigl(y_3(t)-y_1(t)\bigr)/y_2(t),\\ y_2'(t) = -a\bigl(y_3(t)-y_1(t)\bigr),\\ y_3'(t) = \bigl[b - c\bigl(y_3(t)-y_5(t)\bigr) - a\,y_3(t)\bigl(y_3(t)-y_1(t)\bigr)\bigr]/y_4(t),\\ y_4'(t) = a\bigl(y_3(t)-y_1(t)\bigr),\\ y_5'(t) = -c\bigl(y_5(t)-y_3(t)\bigr)/d \end{cases}$$
where a = 100, b = 0.9, c = 1 000, and d = 10. The boundary values are y1 (0) =
y2 (0) = y3 (0) = 1, y4 (0) = −10, and y3 (1) = y5 (1).
10.6 Solve the following boundary value problem:[63]
$$\begin{cases} y_1'(t) = y_2(t),\\ y_2'(t) = y_3(t),\\ y_3'(t) = -(3-n)y_1(t)y_3(t)/2 - n y_2^2(t) + 1 - y_4^2(t) + s y_2(t),\\ y_4'(t) = y_5(t),\\ y_5'(t) = -(3-n)y_1(t)y_3(t)/2 - (n-1)y_2(t)y_4(t) + s\bigl(y_4(t)-1\bigr) \end{cases}$$
where n = −0.1 and s = 0.2. The boundary values are y1 (0) = y2 (0) = y4 (0) =
y2 (b) = 0, and y4 (b) = 1, b = 11.3.
10.7 Solve the following semi-infinite interval boundary value problem:[27]
$$\begin{cases} f'''(t) - R\bigl[(f'(t))^2 - f(t)f''(t)\bigr] + AR = 0,\\ h''(t) + R f(t)h'(t) + 1 = 0,\\ \theta''(t) + P f(t)\theta'(t) = 0 \end{cases}$$
Diffusion equations are also known as heat equations. Consider one case: assume that there is an extremely thin heat-conducting rod whose thickness can be neglected. Both ends of the rod are connected to heat sources with known temperatures, and at the initial time the temperature at each point of the rod is also known. The temperature at position x and time t can be described by a partial differential equation, known as the homogeneous diffusion equation
$$\frac{\partial u(t,x)}{\partial t} = K\frac{\partial^2 u(t,x)}{\partial x^2} \tag{11.1.1}$$
where K is referred to as the diffusion coefficient.
Solutions. If MATLAB is used to show whether an equality holds, the simplest method
is to express the difference between the two sides of the equality using symbolic com-
putation, and then simplify the result. If the result is zero, then the equality is proven.
If it is not zero, the equality does not hold. For this specific problem, the following
commands can be used to directly verify the analytical solution, because the simpli-
fied error is zero.
Similar to the case of ordinary differential equations, boundary and initial conditions are needed for partial differential equations. For the unknown function u(t, x), the boundary conditions can be regarded as the values of the function when x is fixed at x0 or x1, that is, at the two ends of the rod; they are usually functions of t:
Initial conditions are similar to the case of ordinary differential equations. They
are the values of the unknown function at the initial time t = t0 . The difference is that
the initial conditions are usually a function of x.
$$\frac{U_i^{j+1}-U_i^j}{k} = \frac{K}{2}\bigl(D^2U_i^j + D^2U_i^{j+1}\bigr) = \frac{K}{2h^2}\bigl(U_{i-1}^j - 2U_i^j + U_{i+1}^j + U_{i-1}^{j+1} - 2U_i^{j+1} + U_{i+1}^{j+1}\bigr). \tag{11.1.4}$$
$$AU^{j+1} = B^j \tag{11.1.6}$$
$$A = \begin{bmatrix} 1+2r & -r & & & & \\ -r & 1+2r & -r & & & \\ & -r & 1+2r & -r & & \\ & & \ddots & \ddots & \ddots & \\ & & & -r & 1+2r & -r \\ & & & & -r & 1+2r \end{bmatrix}. \tag{11.1.7}$$
$$B^j = \begin{bmatrix} r\bigl(g_0(t_j)+g_0(t_{j+1})\bigr) + (1-2r)U_1^j + rU_2^j \\ rU_1^j + (1-2r)U_2^j + rU_3^j \\ rU_2^j + (1-2r)U_3^j + rU_4^j \\ \vdots \\ rU_{m-2}^j + (1-2r)U_{m-1}^j + rU_m^j \\ rU_{m-1}^j + (1-2r)U_m^j + r\bigl(g_1(t_j)+g_1(t_{j+1})\bigr) \end{bmatrix},\qquad U^{j+1} = \begin{bmatrix} U_1^{j+1} \\ U_2^{j+1} \\ U_3^{j+1} \\ \vdots \\ U_{m-1}^{j+1} \\ U_m^{j+1} \end{bmatrix}. \tag{11.1.8}$$
function [t,x,U]=diffusion_sol1(h,k,t0,x0,n,m,eta,g0,g1,K)
r=k*K/2/h^2; x=x0+(0:m-1)*h; t=t0+(0:n-1)*k;       % space and time grids
U1=eta(x); U1=U1(:); U=U1; v=-r*ones(1,m-1);       % initial condition
A=(1+2*r)*eye(m)+diag(v,1)+diag(v,-1); iA=inv(A);  % full tridiagonal matrix A of (11.1.7)
for j=1:n-1                                        % march forward in time
   B=r*(g0(t(j))+g0(t(j+1)))+(1-2*r)*U1(1)+r*U1(2);
   for i=1:m-2, B(i+1,1)=[r,1-2*r,r]*U1(i:i+2); end
   B(m,1)=r*U1(m-1)+(1-2*r)*U1(m)+r*(g1(t(j))+g1(t(j+1)));
   U1=iA*B; U=[U U1];                              % solve (11.1.6) and record the result
end
Example 11.2. Consider the diffusion equation
$$\frac{\partial u(t,x)}{\partial t} - \frac{1}{\pi^2}\frac{\partial^2 u(t,x)}{\partial x^2} = 0$$
where the boundary conditions are u(t, 0) = u(t, 1) = 0, and the initial condition is u(0, x) = sin πx. The analytical solution of the equation is u(t, x) = e^{−t} sin πx. Selecting x ∈ [0, 1], t ∈ (0, T), with T = 1, solve the partial differential equation and assess the accuracy.
Solutions. It can be seen from the diffusion equation that K = 1/π2 . Selecting the step-
sizes h = k = 0.01, the following MATLAB commands can be written to describe the
boundary and initial conditions, and then solve the diffusion equation. The solution
surface of u(t, x) is obtained, as shown in Figure 11.1.
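A sketch of such commands, using the solver diffusion_sol1() listed above with anonymous functions for the conditions, is shown below; the plotting command is illustrative.

>> eta=@(x)sin(pi*x); g0=@(t)0; g1=@(t)0; K=1/pi^2;  % initial and boundary conditions
h=0.01; k=0.01; m=101; n=101;                        % step-sizes and numbers of points
[t,x,U]=diffusion_sol1(h,k,0,0,n,m,eta,g0,g1,K);
surf(x,t,U.'), shading flat                          % solution surface u(t,x)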
Since the analytical solution is known, and mesh grids are made in vectors t and x,
the maximum error with respect to the exact values is 0.0402. The error surface can
also be obtained, as shown in Figure 11.2. It can be seen that large errors happen at
both boundaries of x, and also when t is large.
Example 11.3. Consider again Example 11.2. The accuracy in the previous example is
relatively low. A natural way is to reduce the step-size so as to increase the accuracy.
Assess the impact of step-size on the accuracy and also efficiency.
Solutions. Selecting different step-sizes k and letting h = k, the elapsed time, ac-
curacy and sizes of the solution matrices are measured, as shown in Table 11.1 (see
Algorithm 1 entries). It can be seen that if the step-size is reduced, the accuracy is
increased, while the number of points and the elapsed time increase significantly.
Therefore it is not suitable to select too small step-sizes to increase the accuracy.
Table 11.1 (excerpt): the step-sizes tested are h = k = 0.01, 0.005, 0.0025, 0.00125, 0.000625, 3.125 × 10−4, and 1.5625 × 10−4, with the corresponding elapsed time, accuracy, and matrix-size entries listed for Algorithms 1 and 2.
function [t,x,U]=diffusion_sol2(h,k,t0,x0,n,m,eta,g0,g1,K)
r=k*K/2/h^2; x=x0+(0:m-1)*h; t=t0+(0:n-1)*k;
U1=eta(x); U1=U1(:); U=U1;
v=-r*ones(1,m-1); i=1:m-1; v1=(1+2*r)*ones(1,m);
A=sparse([i i+1 i m],[i+1 i i m],[v,v,v1]); % create sparse tridiagonal matrix
for j=1:n-1
   B=r*(g0(t(j))+g0(t(j+1)))+(1-2*r)*U1(1)+r*U1(2);
   for i=1:m-2, B(i+1,1)=[r,1-2*r,r]*U1(i:i+2); end
   B(m,1)=r*U1(m-1)+(1-2*r)*U1(m)+r*(g1(t(j))+g1(t(j+1)));
   U1=A\B; U=[U U1]; % backslash operation and record the results
end
Example 11.4. Solve the problem in Example 11.3 again with the sparse matrix based
solver.
Solutions. With almost the same commands as before, the elapsed time and other
information can be found as given in Table 11.1 (see Algorithm 2). It can be seen that,
with a sparse matrix, the accuracy and number of points are exactly the same, but the
elapsed time is significantly reduced. Therefore it is recommended to use such a solver
for diffusion equations.
Even though sparse matrices are introduced, the step-size should not be selected
too small. If possible, high-precision and effective computation methods and tools
should be used.
Slightly extending the diffusion equation in Definition 11.1, a more general form of diffusion equation can be defined. Here inhomogeneous diffusion equations are introduced, and numerical solution methods for them are presented.
From the discretized model in (11.1.4), it is not hard to write down a discretized
version of the inhomogeneous diffusion equation
$$\frac{U_i^{j+1}-U_i^j}{k} = \Psi(t_j,x_i) + f(U_i^j) + \frac{K}{2h^2}\bigl(U_{i-1}^j - 2U_i^j + U_{i+1}^j + U_{i-1}^{j+1} - 2U_i^{j+1} + U_{i+1}^{j+1}\bigr). \tag{11.1.10}$$
Therefore the discretized equation also satisfies the linear algebraic equation in (11.1.6), where the matrix A and the unknown vector are exactly the same as before, and the vector B^j is modified as
$$B^j = \begin{bmatrix} r\bigl(g_0(t_j)+g_0(t_{j+1})\bigr) + (1-2r)U_1^j + rU_2^j + k\Psi_1^j + kf(U_1^j) \\ rU_1^j + (1-2r)U_2^j + rU_3^j + k\Psi_2^j + kf(U_2^j) \\ rU_2^j + (1-2r)U_3^j + rU_4^j + k\Psi_3^j + kf(U_3^j) \\ \vdots \\ rU_{m-2}^j + (1-2r)U_{m-1}^j + rU_m^j + k\Psi_{m-1}^j + kf(U_{m-1}^j) \\ rU_{m-1}^j + (1-2r)U_m^j + r\bigl(g_1(t_j)+g_1(t_{j+1})\bigr) + k\Psi_m^j + kf(U_m^j) \end{bmatrix} \tag{11.1.11}$$
where Ψ(tj, xi) is simply denoted as Ψi^j. Similar to the solver diffusion_sol2(), the following MATLAB code is written for inhomogeneous diffusion equations:
function [t,x,U]=diffusion_sol(h,k,t0,x0,n,m,eta,g0,g1,K,f,Psi)
r=k*K/2/h^2; x=x0+(0:m-1)*h; t=t0+(0:n-1)*k;
U1=eta(x); U1=U1(:); U=U1;
v=-r*ones(1,m-1); i=1:m-1; v1=(1+2*r)*ones(1,m);
A=sparse([i i+1 i m],[i+1 i i m],[v,v,v1]); % create sparse matrix
for j=1:n-1
B=r*(g0(t(j))+g0(t(j+1)))+(1-2*r)*U1(1)+r*U1(2);
for i=1:m-2, B(i+1,1)=[r,1-2*r,r]*U1(i:i+2); end
B(m,1)=r*U1(m-1)+(1-2*r)*U1(m)+r*(g1(t(j))+g1(t(j+1)));
for i=1:m, B(i,1)=B(i,1)+k*Psi(t(j),x(i))+k*f(U1(i)); end
U1=A\B; U=[U U1]; % backslash operation and record the results
end
Example 11.5. Consider an inhomogeneous diffusion equation, for which the bound-
ary conditions are u(t, 0) = u(t, 1) = 0, the initial condition is given by u(0, x) = sin πx,
and also the given functions are Ψ (t, x) = x(1 − x) cos t and f (u) = −2u cos u. Solve this
inhomogeneous diffusion equation.
Solutions. The boundary and initial conditions, as well as the Ψ(t, x) and f(u) functions, are described with anonymous functions. With the following commands, the inhomogeneous diffusion equation can be solved, and the solution surface with contours can be obtained, as shown in Figure 11.3.
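A sketch of such commands is given below; the diffusion coefficient K and the step-sizes are not stated in this excerpt, so K = 1/π² and h = k = 0.01 are assumed purely for illustration.

>> eta=@(x)sin(pi*x); g0=@(t)0; g1=@(t)0;
Psi=@(t,x)x*(1-x)*cos(t); f=@(u)-2*u*cos(u); K=1/pi^2;    % K assumed for illustration
[t,x,U]=diffusion_sol(0.01,0.01,0,0,101,101,eta,g0,g1,K,f,Psi);
surfc(x,t,U.'), shading flat                              % surface with contours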
$$\nabla u = \left[\frac{\partial}{\partial x_1}, \frac{\partial}{\partial x_2}, \ldots, \frac{\partial}{\partial x_n}\right]u. \tag{11.2.3}$$
$$\mathrm{div}(\mathbf{v}) = \left(\frac{\partial}{\partial x_1} + \frac{\partial}{\partial x_2} + \cdots + \frac{\partial}{\partial x_n}\right)\mathbf{v}. \tag{11.2.4}$$
$$\mathrm{div}(c\nabla u) = \left[\frac{\partial}{\partial x_1}\left(c\frac{\partial u}{\partial x_1}\right) + \frac{\partial}{\partial x_2}\left(c\frac{\partial u}{\partial x_2}\right) + \cdots + \frac{\partial}{\partial x_n}\left(c\frac{\partial u}{\partial x_n}\right)\right] \tag{11.2.5}$$
$$\mathrm{div}(c\nabla u) = c\left(\frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \cdots + \frac{\partial^2}{\partial x_n^2}\right)u = c\Delta u \tag{11.2.6}$$
where Δ is the Laplace operator. Therefore, elliptic partial differential equation can
simply be written as
$$-c\left(\frac{\partial^2}{\partial x_1^2} + \frac{\partial^2}{\partial x_2^2} + \cdots + \frac{\partial^2}{\partial x_n^2}\right)u + au = f(t,\mathbf{x}). \tag{11.2.7}$$
$$m\frac{\partial^2 u}{\partial t^2} - \mathrm{div}(c\nabla u) + au = f(t,\mathbf{x}). \tag{11.2.10}$$
If c is a constant, the equation can be simplified as
$$m\frac{\partial^2 u}{\partial t^2} - c\left(\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2}\right) + au = f(t,\mathbf{x}). \tag{11.2.11}$$
It can be seen from the above three types of equations that the difference lies in the order of the derivative of the function u with respect to t. If there is no time derivative, u can be regarded as constant in t, so that the equation resembles the algebraic equation of an ellipse, ax1² + bx2² = c; it is therefore named an elliptic partial differential equation. If the first-order derivative of u is involved, the equation resembles a parabola, y = ax1² + bx2², so it is called a parabolic partial differential equation. If the second-order derivative of u is involved, it resembles y² = ax1² + bx2², and the equation is referred to as a hyperbolic partial differential equation.
The finite element method is used in MATLAB Partial Differential Equation Tool-
box, for handling such equations. In elliptic partial differential equations, the coeffi-
cients c, a, d, and f can be described as any functions, while in other forms, they must
be constants.
$$-c\left(\frac{\partial^2 u}{\partial x_1^2} + \frac{\partial^2 u}{\partial x_2^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2}\right) + au = \lambda du. \tag{11.2.13}$$
Comparing (11.2.13) and (11.2.2), it is found that if the term λdu is moved to the left-hand side, the equation can be converted into an elliptic partial differential equation. Therefore, such an equation is a special case of an elliptic partial differential equation.
$$h\left(\mathbf{x},t,u,\frac{\partial u}{\partial \mathbf{x}}\right)u\Big|_{\partial\Omega} = r\left(\mathbf{x},t,u,\frac{\partial u}{\partial \mathbf{x}}\right) \tag{11.2.14}$$
where ∂Ω represents the boundary of the geometric region of interest. Assuming that the boundary conditions are satisfied, one should specify r and h. They can be constants, or functions of t, x, and even u and ∂u/∂x. For simplicity, let h = 1.
(2) If the values of ∂u(t, x)/∂x at the boundaries are known, the boundary conditions are referred to as Neumann boundary conditions, named after the German mathematician Carl Gottfried Neumann (1832–1925):
$$\left[\frac{\partial}{\partial n}(c\nabla u) + qu\right]_{\partial\Omega} = g \tag{11.2.15}$$
(3) There is another form, a certain combination of u(t, x) and ∂u(t, x)/∂x, the so-called Robin boundary condition, named after the French mathematician Victor Gustave Robin (1855–1897).
(3) Set formula. The geometric region can be regarded as the sets composed of some
fundamental shapes. Set computations such as union, intersection, and differ-
ence are allowed to precisely describe the geometric regions.
(4) Geometric region. In the major part of the interface, the users are allowed to draw
the geometric region with different shapes. Three-dimensional displays are sup-
ported in MATLAB, but new graphics windows will be needed.
In this section, an example is used to show how to use the interface to define the
geometric region. Some ellipses and rectangles can be drawn with the tools as shown
in Figure 11.6. Each shape can be regarded as a set. In Set formula edit box, the formula
can be expressed as (R1+R2+E1)−E2, meaning the union of sets R1, R2, and E1, and the
removal of set E2.
Clicking the ∂Ω button in the toolbar, the geometric region can be obtained. Selecting the menu item Boundary→Remove All Subdomain Borders, the borders between adjacent domains are removed, and the geometric region is automatically drawn, as shown in Figure 11.7.
With the geometric region, the Δ button in the toolbar can be clicked so that a Delaunay triangulation of the geometric region is made, as shown in Figure 11.8. If a denser triangulation is expected, the appropriate button can be clicked, and the new mesh grids are shown in Figure 11.9. It should be noted that the denser the grids, the more accurate the solutions, but the longer the computation time.
Clicking the ∂Ω button in the toolbar, the boundary conditions can be specified; the Dirichlet and Neumann conditions are supported in the interface. Selecting the menu item Boundary→Specify Boundary Conditions, the dialog box shown in Figure 11.10 is opened, in which the boundary conditions can be specified. If the conditions on all the boundaries are expected to be 0, the r edit box can be filled with 0.
When the geometric region and boundary conditions are specified, and the partial
differential equation is described, the = button in the toolbar can be clicked to solve
the partial differential equation directly. An example is given next to demonstrate the
solution process.
Example 11.6. For the geometric region created above, solve the hyperbolic partial differential equation
$$\frac{\mathrm{d}^2 u}{\mathrm{d}t^2} - \frac{\partial^2 u}{\partial x^2} - \frac{\partial^2 u}{\partial y^2} + 2u = 10.$$
Solutions. It can be seen from the given equation that c = 1, a = 2, f = 10, and d = 1.
Clicking the PDE label in the toolbar, a dialog box shown in Figure 11.11 is opened.
Various partial differential equation types are listed on the left. The Hyperbolic item
can be selected, and the parameters in the dialog box can be specified.
To solve this partial differential equation, the equal button in the toolbar can be
clicked, and the solution of the equation can be found, as shown graphically in
Figure 11.12. The pseudocolor in the figure reflects the solution u(x, y). Note that if
time is involved, the u(x, y) values at time t = 0 are displayed. The display at other
time t will be illustrated later.
The boundary conditions can be modified. For instance, in the dialog box shown in
Figure 11.10, the Dirichlet condition is still used, with the values of u on the boundaries
set to 10. This can be set by filling the r edit box by 10. The partial differential equation
can be solved again, and a solution similar to that in Figure 11.12 can be found. The
difference is that the scale on the right has changed to 10∼12.
In the previous example it can be seen that a visual method can be used to input
the partial differential equations and boundary conditions in the pdetool interface,
and such equations can then be solved. The solutions can be displayed in contours,
in pseudocolor. Apart from the pseudocolor display, other display methods are sup-
ported. For instance, the surface or even animation methods are allowed. The display
formats are demonstrated next as an example.
Example 11.7. Consider the partial differential equation and geometric region in Ex-
ample 11.6. The boundary conditions are the same as in that example. Display the
solution with contours.
Solutions. The original problem in the user interface was saved in file c11mpde1.m.
Load the file directly into the user interface. Clicking the button in the toolbar, a
dialog box shown in Figure 11.13 appears. If Contour and Arrows options are clicked,
the contours with arrows will be shown as in Figure 11.14.
Note also that, in the dialog box in Figure 11.13, the Property items all have listboxes.
The first one is the default u, indicating the display is for the solution u(x, y). If the
solutions of other functions are expected, they can be assigned from the listboxes.
If the option Height (3d-plot) is clicked, another figure window appears, and a
three-dimensional surface plot in mesh grid form can be obtained as shown in Fig-
ure 11.15.
Example 11.8. Consider again the partial differential equation and geometric region in
Example 11.6. Display the equation solutions in three-dimensional animation format.
Solutions. The default time vector for the user interface is t=0:10. The solution at the final time t = 10 is shown in Figure 11.12. It can be seen for a hyperbolic partial
differential equation that the solution should also be a function of time. Therefore the
animation format can be used to illustrate solutions in a dynamical way. The hyper-
bolic partial differential equation in Example 11.6 is still used. We illustrate how to
display the dynamic results of time-varying equations.
The menu item Solve →Parameters can be selected, and then a dialog box
in Figure 11.13 appears. In this dialog box, the time interval can be assigned to
0:0.1:4. Also if Animation is clicked, then clicking the Options button, a further
dialog box appears, which allows the user to assign video play speed, with the
default of 6 fps, i. e., 6 frames per second. The solution of the partial differential
equation can be displayed in animation. The user may use the menu item Plot
→Export Movie to export the video into MATLAB workspace, for instance, by saving
as variable M. Function movie(M) can be used to play videos in MATLAB Graphics
window. Command movie2avi(M,’myavi.avi’) can be used to save the animation
into a video file myavi.avi, for later viewing.
Example 11.9. Consider the following partial differential equation:
$$-\mathrm{div}\left(\frac{1}{\sqrt{1+|\nabla u|^2}}\nabla u\right) + (x^2+y^2)u = e^{-x^2-y^2}$$
with zero boundary conditions. Solve this partial differential equation.
Solutions. Observing the equation, it can be seen that this is an elliptic partial differ-
ential equation with
$$c = \frac{1}{\sqrt{1+(\partial u/\partial x)^2+(\partial u/\partial y)^2}},\qquad a = x^2+y^2,\qquad f = e^{-x^2-y^2},$$
and on the boundary u is 0. Using the Partial Differential Equation Toolbox, the dialog
box shown in Figure 11.11 appears. One may select Elliptic item to show the elliptic par-
tial differential equation. In the equation model, fill 1./sqrt(1+ux.^2+uy.^2) in item
c. In a and f items, one should fill in x.^2+y.^2 and exp(-x.^2-y.^2). With Solve →
Parameters menu, a dialog box is displayed, from which the item Use nonlinear solver
should be ticked. The functional coefficients apply only to elliptic partial differential
equations. Clicking the equality sign in the toolbar, the solutions can be found, and
the results are shown in Figure 11.16.
Before discussing the modeling and solution of partial differential equations, the com-
mand createpde() can be used to create a blank partial differential equation object,
with the syntax M=createpde. The name of the object is PDEModel, whose commonly
used members are provided in Table 11.2.
Table 11.2: Commonly used members of the PDEModel object.
PDESystemSize: the number of PDEs N; the default is 1. The function can also be called as createpde(N).
IsTimeDependent: whether the PDE explicitly contains time t; 0 for no, 1 for yes. The member can be set by a command, or set automatically when the coefficients are specified.
Geometry: the geometric region; the default is an empty matrix. It can be set with functions such as geometryFromEdges().
EquationCoefficients: the coefficients of the PDEs, which can be set by functions such as specifyCoefficients().
BoundaryConditions: the boundary conditions, set by applyBoundaryCondition().
InitialConditions: the initial conditions, set by setInitialConditions().
Mesh: the mesh grid, with default empty matrix. It can be set with the function generateMesh().
SolverOptions: control parameters; similar to the case of ODEs, members such as RelativeTolerance and AbsoluteTolerance can be set.
A two-dimensional geometric region can be created graphically with the pdetool user interface and stored in a file. Unfortunately, the geometric domain thus generated cannot be loaded directly into a PDEModel object; relevant MATLAB statements should be issued to define the region again. In this section, the design method is illustrated.
Example 11.10. In Example 11.6, a MATLAB file c11mpde0.m was created. If the file is
opened, it can be seen that inside the function a paragraph
is present. Do not attempt to execute the code, since pde_fig is an internal function,
which cannot be executed in MATLAB directly. It can be seen that two rectangles and
two ellipses are defined. Set operations are also described. With the internal com-
mands, the geometric region can be described.
Four basic shapes are provided in the Partial Differential Equation Toolbox; details of the shapes are described in Table 11.3. Each geometric shape is described by a column vector, with different lengths. If set operations are expected, the shorter vectors should be padded with 0's at the end, such that their lengths are unified.
When the basic shapes are described, all the shape vectors can be assembled into a matrix. Each shape is assigned a string for its name, and set operations are defined in a certain format. Function decsg() can then be called to compute the bounds matrix. The subbounds are still retained in the matrix; if one wants to remove the interior bounds, function csgdel() can be used so that only the outer bounds are extracted. Function pdegplot() can be called to draw the boundaries. The syntaxes of these functions are fixed, and can be better understood after studying the demonstrative commands in the examples.
Example 11.11. Use low-level statements to declare the boundaries in Figure 11.17. The
geometric region is defined by the union of circle C1 and rectangle R, and subtracting
circle C2 .
Solutions. The three basic shapes should be created first, with the parameters speci-
fied in the illustration. With the basic shapes, set operations are defined, and functions
can be called sequentially to extract the bounds. The finalized bounds are as shown in
Figure 11.18. The syntaxes of these functions are fixed, and one may read and compare
with the display. Detailed information may be seen in [51].
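The parameter values of C1, R, and C2 are read from Figure 11.17, which is not reproduced in this excerpt, so the numbers in the following sketch are purely illustrative; the calling sequence of decsg(), csgdel(), and pdegplot() is the point of interest.

>> C1=[1;0;0;1; zeros(6,1)];                  % circle C1: center and radius are illustrative
R=[3;4;-1;1;1;-1;-0.5;-0.5;0.5;0.5];          % rectangle R: corner coordinates are illustrative
C2=[1;0.6;0;0.3; zeros(6,1)];                 % circle C2 to be subtracted (illustrative)
smat=[C1,R,C2]; ns=char('C1','R','C2')';
shape='(C1+R)-C2'; [g,g0]=decsg(smat,shape,ns);
[g1,g10]=csgdel(g,g0);                        % remove the interior bounds
pdegplot(g1,'EdgeLabels','on')                % draw and label the edges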
It can be seen that with the above commands, the boundaries are extracted success-
fully. Also each edge is described and assigned a name. Normally, a circle can be
described by 4 edges. The numbered edges can be used in subsequent boundary con-
dition specifications.
Example 11.12. It seems that the bounds created with the pdetool user interface can-
not be extracted directly. Use the information extracted from the model file in Exam-
ple 11.10, and redefine the boundaries with basic shapes.
pdeellip(-0.023899,0.1534591,0.92201258,0.6415094,0,'E1');
pdeellip(-0.625157,-0.218868,0.46163522,0.384905660,0,'E2');
It can be seen that R1 is a rectangle whose x range is (−1.064, 1.195) and y range is (−0.822, −0.103). It can be described directly by the following statements. The other shapes can also be redefined with commands, so that the boundaries can be recreated as shown in Figure 11.19.
>> R1=[3;4;-1.064;1.195;1.195;-1.064;-0.103;-0.103;-0.822;-0.822];
R2=[3;4;0.158;1.326;1.326;0.158;-0.818;-0.818;0.850;0.850];
E1=[4;-0.0239;0.153;0.922;0.642;0;0;0;0;0];
E2=[4;-0.625;-0.21;0.462;0.385;0;0;0;0;0];
smat=[R1,R2,E1,E2]; ns=char('R1','R2','E1','E2')';
shape='(R1+R2+E1)-E2'; [g,g0]=decsg(smat,shape,ns);
[g1,g10]=csgdel(g,g0); % remove the interior bounds
pdegplot(g1,'EdgeLabels','on','SubdomainLabels','on') % draw bounds
axis([-1.5,1.5,-1,1]), save c11exbnd g1 % save the bounds to a file
From the previously established boundaries of the geometric region, each edge of the
boundary can be assigned individual boundary conditions. From the given PDEModel
object M, the command applyBoundaryCondition() can be called to assign a bound-
ary condition, with the syntaxes
where k is the serial number of the edge, and it can be a vector, so that the boundary
conditions for several edges can be set simultaneously. Vector v is composed of bound-
ary values or dot operations of functions. If a certain boundary edge was not assigned
a boundary condition, the default zero Dirichlet condition is assigned automatically. For instance, if edges 3 and 7 in the M object are expected to be constant 2, the command applyBoundaryCondition(M,'edge',[3,7],'u',2) can be used. Each set of boundary conditions should be set individually by calling the applyBoundaryCondition() function once.
Similar to boundary conditions, the function setInitialConditions(M ,u0 ) can be used to set the initial conditions, where u0 can be a constant or a function handle for the initial values. If m ≠ 0, the initial value of ∂u/∂t should also be assigned, with setInitialConditions(M ,u0 ,ut0 ). Initial value specification will be illustrated later through examples.
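A minimal sketch of both forms is given below; the constant 2 and the function handle are illustrative values only, not taken from the examples:

>> setInitialConditions(M,2);                    % constant initial condition u(0,x,y)=2
   u0=@(R)sin(pi*R.x).*cos(pi*R.y);              % handle built from the region members
   setInitialConditions(M,u0,0);                 % when m~=0, the initial du/dt must also be given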
For the PDEModel object, a standard form of partial differential equations is defined. In this section, this standard form is presented first; then the method of solving partial differential equations with MATLAB is illustrated.
Definition 11.11. The standard partial differential equation expressed for PDEModel
object is given by
m\frac{\partial^2 u}{\partial t^2} + d\frac{\partial u}{\partial t} - \mathrm{div}(c\nabla u) + au = f \qquad (11.4.1)
where m, d, c, a, and f are all referred to as coefficients. They can be constants, zeros,
or known functions of t and u.
It can be seen from the standard form that the elliptic (m = d = 0), parabolic
(m = 0), and hyperbolic (d = 0) partial differential equations are just special cases.
Besides, diffusion equation (m = a = 0) and Poisson equation (m = d = a = 0) are
also special cases.
Example 11.13. Use PDEModel object to express the hyperbolic partial differential
equation in Example 11.6.
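The solution commands are not reproduced in this excerpt. A sketch of how such a model can be expressed is given below; the coefficient values are placeholders only, since the actual m, c, and f come from the equation in Example 11.6:

>> M=createpde;                                  % create an empty PDEModel object
   load c11exbnd; geometryFromEdges(M,g1);       % reuse the boundaries saved in Example 11.12, for illustration
   specifyCoefficients(M,'m',1,'d',0,'c',1,'a',0,'f',10);  % hyperbolic case: m~=0, d=0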
For the coefficients which are not constants, anonymous or MATLAB functions can be used to describe them. The input arguments of such functions are region and state, both of which are structured variables. In the structure region, the members are x, y, and z, carrying the spatial information, while in state the members are u, ux (for ∂u/∂x), uy, uz, and time (for t). Therefore these member names should be used when describing the coefficients, and dot operations should be used in the expressions. An example is given next to show how variable coefficients are described with anonymous functions.
Example 11.14. The elliptic partial differential equation in Example 11.9 is with vari-
able coefficients. Create such a model.
Solutions. For convenience, the variable coefficients in Example 11.9 are rewritten
below:
c = \frac{1}{\sqrt{1+(\partial u/\partial x)^2+(\partial u/\partial y)^2}},\qquad a = x^2+y^2,\qquad f = \mathrm{e}^{-x^2-y^2}.
From the mathematical expressions, anonymous functions for the coefficients a, c, and f can be established, and their handles can then be passed to the partial differential equation object:
>> f=@(region,state)exp(-region.x.^2-region.y.^2);
a=@(region,state)region.x.^2+region.y.^2;
c=@(region,state)1./sqrt(1+state.ux.^2+state.uy.^2);
M=createpde; load c11exbnd; geometryFromEdges(M,g1);
specifyCoefficients(M,’m’,0,’d’,0,’c’,c,’a’,a,’f’,f);
As with the interface-based solution, a Delaunay mesh grid should be generated before the solution process, so that the solutions of the partial differential equations at the mesh grid points can be found. More about Delaunay mesh grids can be found in Volume I. Function generateMesh() can be used to generate the mesh grid, with the syntaxes
m=generateMesh(M )
m=generateMesh(M ,name1,value1,name2,value2, . . . )
The commonly used property is ’Hmax’, which determines the size of the mesh elements. For accurate solutions, it should be set to a very small number, although this in turn slows down the solution process, so a tradeoff should be made in its selection. Normally it can be set to 0.02 or 0.01. Of course, other properties are also allowed, see [51]. With the generated mesh grid, the information in the member Mesh of the object M is updated automatically.
When the mesh grids are generated, the function s=solvepde(M ) can be called to solve the partial differential equations numerically. A structured variable s is returned, with the members NodalSolution, XGradients, and others. The member NodalSolution contains the solution values at the mesh grid points, while XGradients contains the partial derivatives. If the variable t is explicitly contained in the partial differential equations, a time vector t should be generated first, and the command s=solvepde(M ,t ) used to solve the partial differential equations.
It should be pointed out that the eigenvalue-type partial differential equations
cannot be solved with solvepde() function. They should be solved with function
solvepdeeig() instead. Such partial differential equations are not covered in this
book.
The solution u obtained is a collection of the values at the mesh grid points; by default it is not interpolated onto other mesh grids. The solution can be extracted first with the command u=s.NodalSolution, and the dedicated function pdeplot() can then be used to draw it. There are several ways to call function pdeplot(), for instance
pdeplot(M ,’XYData’,u )
pdeplot(M ,’XYData’,u ,’ZData’,u )
pdeplot(M ,’XYData’,u ,’FlowData’,[ux ,uy ])
where in the last one, arrows are added to the results. If the partial differential equations are three-dimensional, function pdeplot3D() can be called to draw the results.
The following examples can be used to demonstrate the partial differential equa-
tion solution process.
Example 11.15. Solve again the hyperbolic partial differential equation studied in Ex-
ample 11.6 with MATLAB commands.
Solutions. The model was in fact established in Example 11.13. With this model, the mesh grids can be generated and the partial differential equation can be solved. Unfortunately, the result obtained here differs from that obtained with the user interface.
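The solution commands are not shown in this excerpt. A minimal sketch, assuming the model M from Example 11.13 already carries the required boundary and initial conditions and using an illustrative time vector, could be:

>> generateMesh(M,'Hmax',0.02);          % generate the Delaunay mesh grid
   t=0:0.1:5;                            % assumed time vector for the hyperbolic problem
   sol=solvepde(M,t);                    % solve the time-dependent equation
   u=sol.NodalSolution;                  % solution values at the mesh grid points
   pdeplot(M,'XYData',u(:,end))          % draw the solution at the final time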
Example 11.16. Solve the elliptic partial differential equation with variable coeffi-
cients studied in Example 11.9.
Solutions. The model for the partial differential equation has been created in Exam-
ple 11.14. From it, the partial differential equation can be solved directly with the fol-
lowing commands. Again the solution obtained is different from that found using the
user interface.
>> f=@(region,state)exp(-region.x.^2-region.y.^2);
a=@(region,state)region.x.^2+region.y.^2;
c=@(region,state)1./sqrt(1+state.ux.^2+state.uy.^2);
M=createpde; load c11exbnd; geometryFromEdges(M,g1);
specifyCoefficients(M,’m’,0,’d’,0,’c’,c,’a’,a,’f’,f);
generateMesh(M,’Hmax’,0.1); sol=solvepde(M); % generate mesh grids and solve
pdeplot(M,’XYData’,sol.NodalSolution) % draw the solutions
Solutions. Comparing with the standard form of the PDEModel object, the coefficients m, d, c, a, and f can be identified from the given partial differential equation. Therefore, the following statements can be used to describe the square geometric region and the coefficients of the partial differential equation. It can also
be seen from the boundary plot that edges 1 and 3 describe the boundaries at y = 0
and y = 1, respectively, while the edges 2 and 4 describe the boundaries at x = 0
and x = 1, respectively. The boundary conditions can be described by appropriate
commands. The initial condition can also be expressed. With this information, the
partial differential equation can be solved, and the solution surface is as shown in
Figure 11.20. The result looks the same as for the analytical solution.
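The original listing is not included in this excerpt; the following is only a sketch of the command pattern on the unit square, where the coefficients, boundary values, and initial condition are placeholders rather than the example's actual data:

>> M=createpde;
   Rg=[3;4;0;1;1;0;0;0;1;1];                       % unit square 0<=x,y<=1
   g=decsg(Rg); geometryFromEdges(M,g);            % square geometric region
   specifyCoefficients(M,'m',0,'d',1,'c',1,'a',0,'f',0);   % placeholder coefficients
   applyBoundaryCondition(M,'edge',[1,3],'u',0);   % edges 1 and 3: y=0 and y=1
   applyBoundaryCondition(M,'edge',[2,4],'u',0);   % edges 2 and 4: x=0 and x=1
   setInitialConditions(M,@(reg)sin(pi*reg.x));    % placeholder initial condition
   generateMesh(M,'Hmax',0.02); t=0:0.05:1;        % mesh grid and assumed time vector
   sol=solvepde(M,t); pdeplot(M,'XYData',sol.NodalSolution(:,end))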
Delaunay triangulation mesh grids are generated automatically for the geometric regions by the Partial Differential Equation Toolbox in MATLAB. The coordinates of the vertices are stored in matrix p, whose two rows are, in fact, the x and y vectors of the vertices. Matrix p can be extracted with the meshToPet() function. Since the analytical solution is known, the exact values at the vertices can be found and compared with the numerical solution obtained, so as to find the maximum error. Under the current setting, the maximum error is 2.3960 × 10−5.
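A hedged sketch of such an error check is shown below; the handle ua standing for the known analytical solution is a placeholder, and the last time sample of the numerical solution is assumed:

>> [p,e,t0]=meshToPet(M.Mesh);               % p is a 2-by-N matrix of vertex coordinates
   x=p(1,:).'; y=p(2,:).';                   % x and y coordinates of the vertices
   ua=@(x,y)sin(pi*x).*exp(-pi^2*y);         % placeholder for the analytical solution
   err=max(abs(ua(x,y)-sol.NodalSolution(:,end)))   % maximum error at the vertices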
In the following code, different values of ’Hmax’ and of the error tolerance ee are probed, and the comparative results obtained are given in Table 11.4, where the combinations (’Hmax’, ee) are used. It can be seen that the relative error tolerance does not contribute much to the accuracy; the choice of 10−6 is sufficient. The parameter ’Hmax’ is a very important factor for both the accuracy and the elapsed time: the smaller the value of ’Hmax’, the smaller the error, and the heavier the cost (the elapsed time and the number of vertices increase significantly). Therefore, in real applications these parameters should be chosen properly, so as to find the numerical solutions effectively.
Table 11.4: comparative results for the parameter combinations (’Hmax’, ee) = (0.1, 10−6), (0.05, 10−6), (0.02, 10−6), (0.01, 10−6), (0.005, 10−6), (0.005, 10−7).
Consider the diffusion equation

\frac{\partial u}{\partial t} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + (1-x)xt.

If zero Dirichlet conditions are assumed on the boundaries, and the initial condition is u(0, x, y) = sin πx/2, solve the diffusion equation and save an animation of the solution into a video file.
Solutions. Compared with the standard form in (11.4.1), it can be seen that c = K = 1,
d = 1, m = a = 0, and f = (1 − x)xt. Therefore, the following commands can be used
to describe the partial differential equation, and the geometric region and conditions.
The equation can then be solved numerically. Animation can then be used to show the
solution and be saved into a video file c11mdiff.avi.
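The original commands are not reproduced here. A hedged sketch of the animation loop is given below; it assumes the PDEModel object M, the time vector t, and the solution sol have been produced as described above, and only the file name c11mdiff.avi is taken from the text:

>> v=VideoWriter('c11mdiff.avi'); open(v);        % open the video file
   for k=1:length(t)                              % loop over the time samples
      pdeplot(M,'XYData',sol.NodalSolution(:,k))  % draw the solution at time t(k)
      drawnow, writeVideo(v,getframe(gcf));       % grab the frame and append it
   end
   close(v)                                       % finish the video file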
Consider next the following coupled partial differential equations:

\frac{\partial u}{\partial t} = 1 + u^2 v - 4.4u + \alpha\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right),\qquad
\frac{\partial v}{\partial t} = 3.4u - u^2 v + \alpha\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right),
where α = 2 × 10−3 . It is known that the initial conditions are u(0, x, y) = 0.5 + y,
v(0, x, y) = 1 + 5x, and the Neumann boundary conditions are zero. Solve the partial
differential equation over the square region 0 ⩽ x, y ⩽ 1, for 0 ⩽ t ⩽ 11.5.
Solutions. It can be seen that there are two partial differential equations. Letting u1 = u and u2 = v and comparing with the standard form, it is seen that m = a = 0, d = 1, and c = 2 × 10−3. The coefficient f = [1 + u1^2 u2 − 4.4u1, 3.4u1 − u1^2 u2]^T, and the initial condition u0 = [0.5 + y, 1 + 5x]^T.
Selecting a step-size of 0.5 for time t, the following statements can be used to construct the square geometric region, describe the partial differential equations, set the conditions, generate the mesh grids, and finally find the numerical solution. Since t = 11.5 is the last sample in vector t, the surfaces of the two solutions at that time are as shown in Figures 11.21(a) and (b).
>> g=decsg([3;4;0;1;1;0;0;0;1;1]);  % square geometric region 0<=x,y<=1
f=@(R,S)[1+S.u(1,:).^2.*S.u(2,:)-4.4*S.u(1,:);  % coefficient f of the two equations
3.4*S.u(1,:)-S.u(1,:).^2.*S.u(2,:)];
M=createpde(2); geometryFromEdges(M,g); % geometric region
specifyCoefficients(M,’m’,0,’d’,1,’c’,2e-3,’a’,0,’f’,f);
applyBoundaryCondition(M,’edge’,1:4,’g’,0); % Neumann condition
u0=@(R)[0.5+R.y; 1+5*R.x]; setInitialConditions(M,u0); % initial
tic, generateMesh(M,’Hmax’,0.01); % generate mesh grids with Hmax
t=0:0.5:11.5; sol=solvepde(M,t); toc % solve PDE
u1=sol.NodalSolution(:,1,:); u2=sol.NodalSolution(:,2,:);
pdeplot(M,’XYData’,u1,’ZData’,u1(:,:,end),’colorbar’,’off’)
shading faceted, grid, figure % draw the surface of u1, then open a new figure
pdeplot(M,’XYData’,u2,’ZData’,u2(:,:,end),’colorbar’,’off’)
shading faceted, grid % draw the surface of u2
If Hmax is selected relatively small, the surface obtained is much smoother than that in [32], and more accurate. The cost is an increase in the elapsed time, which for this problem is about 366.9 seconds. If the precision in the reference is acceptable, Hmax can be set to 0.05, and then the elapsed time is only 12.79 seconds.
11.5 Exercises
11.1 Show that for any real constants C and γ, u(t, x) = e^{−γ²Kt} cos γx + C satisfies the homogeneous diffusion equation.
11.2 High precision o(h^p) numerical differentiation algorithms were presented in Volume II. In Section 11.1, the algorithm with p = 1 is used, and a tri-diagonal coefficient matrix can be established. If larger values of p are taken, derive high precision algorithms for diffusion problems, and solve again the diffusion equations in the examples.
11.3 Solve the extended diffusion equation in (11.1.1), where K = 1, the boundary conditions are u(t, 0) = 0 and u(t, 1) = 0, and the initial condition is u(0, x) = η(x) = sin πx. Find numerical solutions of the diffusion equation for t ∈ (0, 1), and measure the elapsed time for different step-sizes.
11.4 Solve the following diffusion equations, where u_t is the shorthand notation for ∂u/∂t, and u_xx for ∂²u/∂x²:[17]
(a) u_t = u_xx + x, u(0, x) = sin 2x, u(t, 0) = u(t, π) = 0;
(b) u_t = u_xx + 10, u(0, x) = 3 sin x − 4 sin 2x + 5 sin 3x, u(t, 0) = u(t, π) = 0.
11.5 Solve the diffusion equation ut = 3uxx , with the boundary conditions u(t, 0) =
u(t, π) = 0, and the initial condition u(0, x) = x(π − x). The solution is needed
for t ∈ (0, 3). It is known that the analytical solution of the original equation
can be expressed as an infinite series.[17] Assess the accuracy and efficiency if the
analytical solution is
u(t, x) = \frac{8}{\pi}\sum_{k=1}^{\infty}\frac{1}{(2k-1)^3}\,\mathrm{e}^{-3(2k-1)^2 t}\sin(2k-1)x.
11.6 Solve the following system of diffusion equations:

\frac{\partial u_1}{\partial t} = 0.024\frac{\partial^2 u_1}{\partial x^2} - F(u_1-u_2),\qquad
\frac{\partial u_2}{\partial t} = 0.17\frac{\partial^2 u_2}{\partial x^2} + F(u_1-u_2),

where F(x) = e^{5.73x} − e^{−11.46x}. The initial conditions are u_1(x, 0) = 1, u_2(x, 0) = 0, while the boundary conditions are

\frac{\partial u_1}{\partial x}(0, t) = 0,\quad u_2(0, t) = 0,\quad u_1(1, t) = 1,\quad \frac{\partial u_2}{\partial x}(1, t) = 0.
Bibliography
[1] Abramowitz M, Stegun I A. Handbook of Mathematical Functions with Formulas, Graphs and Mathematical Tables. 9th edition. Washington D.C.: United States Department of Commerce, National Bureau of Standards, 1970.
[2] Ascher U M, Mattheij R M M, Russell R D. Numerical Solution of Boundary Value Problems for Ordinary Differential Equations. Philadelphia: SIAM Press, 1995.
[3] Ascher U M, Petzold L R. Computer Methods for Ordinary Differential Equations and
Differential–Algebraic Equations. Philadelphia: SIAM Press, 1998.
[4] Åström K J. Introduction to Stochastic Control Theory. London: Academic Press, 1970.
[5] Bashforth F, Adams J C. An Attempt to Test the Theories of Capillary Action by Comparing the Theoretical and Measured Forms of Drops of Fluid, with an Explanation of the Method of Integration Employed in Constructing the Tables which Give the Theoretical Forms of Such Drops. Cambridge: Cambridge University Press, 1883.
[6] Bellen A, Zennaro M. Numerical Methods for Delay Differential Equations. Oxford: Oxford
University Press, 2003.
[7] Bilotta E, Pantano P. A Gallery of Chua Attractors. Singapore: World Scientific, 2008.
[8] Blanchard P, Devaney R L, Hall G R. Differential Equations. 4th edition. Boston: Brooks/Cole,
2012.
[9] Bogdanov A. Optimal control of a double inverted pendulum on a cart. Technical Report
CSE-04-006, Department of Computer Science & Electrical Engineering, OGI School of Science
& Engineering, OHSU, 2004.
[10] Brown C. Differential Equations — A Modeling Approach. Los Angeles: SAGE Publications,
2007.
[11] Burger M, Gerdts M. A survey on numerical methods for the simulation of initial value problems
with sDAEs. In: Ilchmann A, Reis T, eds., Surveys in Differential–Algebraic Equations IV.
Switzerland: Springer, 2017.
[12] Butcher J C. Coefficients for the study of Runge–Kutta integration processes. Journal of the
Australian Mathematical Society, 1963, 3: 185–201.
[13] Butcher J C. Numerical Methods for Ordinary Differential Equations. 3rd edition. Chichester: Wiley, 2016.
[14] Caputo M, Mainardi F. A new dissipation model based on memory mechanism. Pure and
Applied Geophysics, 1971, 91(8): 134–147.
[15] Chamati H, Tonchev N S. Generalized Mittag-Leffler functions in the theory of finite-size
scaling for systems with strong anisotropy and/or long-range interaction. Journal of Physics
A: Mathematical and General, 2006, 39: 469–478.
[16] Chicone C. An Invitation to Applied Mathematics: Differential Equations, Modeling and
Computation. London: Elsevier, 2018.
[17] Coleman M P. An Introduction to Partial Differential Equations with MATLAB. 2nd edition. Boca
Raton: CRC Press, 2013.
[18] Cryer C W. Numerical methods for functional differential equations. In: Schmitt, K, ed., Delay
and Functional Differential Equations and Their Applications, New York: Academic Press, 1972.
[19] De Coster C. The lower and upper solutions method for boundary value problems. In: Cañada
A, Drábek P and Fonda A, eds., Handbook of Differential Equations — Ordinary Differential
Equations, Volume 1. Amsterdam: Elsevier, 2004.
[20] Diethelm K. The Analysis of Fractional Differential Equations: An Application-Oriented
Exposition Using Differential Operators of Caputo Type. New York: Springer, 2010.
[21] Doshi H. Numerical techniques for Volterra equations. MathWorks File Exchange #23623, 2015.
[22] Enns R H, McGuire G C. Nonlinear Physics with MAPLE for Scientists and Engineers. 2nd edition.
Boston: Birkhäuser, 2000.
[23] Enright W H. Optimal second derivative methods for stiff systems. In: Willoughby R A, ed., Stiff
Differential Systems. New York: Plenum Press, 1974.
[24] Fehlberg E. Classical fifth-, sixth-, seventh- and eighth-order Runge–Kutta formulas with stepsize control. Technical report, Washington D C: NASA Technical Report TR R-287, 1968.
[25] Fehlberg E. Low-order classical Runge–Kutta formulas with stepsize control and their applications to some heat transfer problems. Technical report, Washington D C: NASA Technical Report TR R-315, 1969.
[26] Gear C W. Automatic detection and treatment of oscillatory and/or stiff ordinary differential
equations. In: Hinze J, ed., Numerical Integration of Differential Equations and Large Linear
Systems, Berlin: Springer-Verlag, 1982, 190–206.
[27] Gladwell I. The development of the boundary-value codes in the ordinary differential equations
— Chapter of the NAG library. In: Childs B, eds., Codes for Boundary-Value Problems in
Ordinary Differential Equations. Berlin: Springer-Verlag, 1979.
[28] Gleick J. Chaos — Making a New Science. New York: Penguin Books, 1987.
[29] Govorukhin V. Ode87 integrator, MATLAB Central File ID: #3616, 2003.
[30] Hairer E. A Runge–Kutta method of order 10. Journal of the Institute of Mathematics and Its
Applications, 1978, 21: 47–59.
[31] Hairer E, Lubich C, Roche M. The Numerical Solution of Differential–Algebraic Systems by
Runge–Kutta Methods, Lecture Notes in Mathematics. Berlin: Springer-Verlag, 1980.
[32] Hairer E, Nørsett S P, Wanner G. Solving Ordinary Differential Equations I: Nonstiff Problems.
2nd edition. Berlin: Springer-Verlag, 1993.
[33] Hairer E, Wanner G. Solving Ordinary Differential Equations II: Stiff and Differential–Algebraic
Problems. 2nd edition. Berlin: Springer-Verlag, 1996.
[34] Hartung F, Krisztin T, Walther H-O, Wu J. Functional differential equations with state-dependent
delays: Theory and applications. In: Cañada A, Drábek P and Fonda A, eds., Handbook of
Differential Equations — Ordinary Differential Equations, Volume 3. Amsterdam: Elsevier,
2006.
[35] Hermann M, Saravi M. Nonlinear Ordinary Differential Equations — Analytical Approximation
and Numerical Methods. India: Springer, 2016.
[36] Hethcote H W, Stech H W, van den Driessche P. Periodicity and stability in epidemic models:
A survey. In: Busenberg S N and Cooke K L, eds., Differential Equations and Applications in
Ecology, Epidemics, and Population Problems. New York: Academic Press, 1981.
[37] Hilfer R. Applications of Fractional Calculus in Physics. Singapore: World Scientific, 2000.
[38] Kalbaugh D V. Differential Equations for Engineers — The Essentials. Boca Raton: CRC Press,
2018.
[39] Kermack W O, McKendrick A G. A contribution to the mathematical theory of epidemics.
Proceedings of the Royal Society A, 1927, 115(772): 700–721.
[40] Keskin A Ü. Ordinary Differential Equations for Engineers — Problems with MATLAB Solutions.
Switzerland: Springer, 2019.
[41] Kunkel P, Mehrmann V. Differential–Algebraic Equations — Analysis and Numerical Solution.
Zürich: European Mathematical Society, 2006.
[42] Lapidus L, Aiken R C, Liu Y A. The occurrence and numerical solution of physical and chemical
systems having widely varying time constants. In: Willoughby R A, ed., Stiff Differential
Systems. New York: Plenum Press, 1974.
[43] Laub A J. Matrix Analysis for Scientists and Engineers. Philadelphia: SIAM Press, 2005.
[44] LeVeque R J. Finite Difference Methods for Ordinary and Partial Differential Equations —
Steady-State and Time-Dependent Problems. Philadelphia: SIAM Press, 2007.
[45] Li J C, Chen Y T. Computational Partial Differential Equations using MATLAB. Boca Raton: CRC
Press, 2008.
[46] Li Z G, Soh Y C, Wen C Y. Switched and Impulsive Systems — Analysis, Design, and Applications.
Berlin: Springer, 2005.
[47] Liao X X, Wang L Q, Yu P. Stability of Dynamic Systems. Oxford: Elsevier, 2007.
[48] Liberzon D, Morse A S. Basic problems in stability and design of switched systems. IEEE Control
Systems Magazine, 1999, 19(5): 59–70.
[49] Lorenz E N. Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 1963, 20(2):
130–141.
[50] Lorenz E N. The Essence of Chaos. Seattle: University of Washington Press, 1993.
[51] MathWorks. Partial differential equation toolbox — User’s guide, 2016.
[52] Mazzia F, Iavernaro F. Test set for initial value problem solvers [R/OL]. Technical Report 40, Department of Mathematics, University of Bari and INdAM. http://pitagora.dm.uniba.it/~testset/, 2003.
[53] Moler C B. Numerical Computing with MATLAB. MathWorks Inc, 2004.
[54] Oustaloup A, Levron F, Nanot F, Mathieu B. Frequency band complex non integer differentiator:
characterization and synthesis. IEEE Transactions on Circuits and Systems I: Fundamental
Theory and Applications, 2000, 47(1): 25–40.
[55] Petráš I, Podlubny I, O’Leary P. Analogue Realization of Fractional Order Controllers. TU Košice:
Fakulta BERG, 2002.
[56] Podlubny I. Fractional Differential Equations. San Diego: Academic Press, 1999.
[57] Podlubny I. Matrix approach to discrete fractional calculus. Fractional Calculus and Applied
Analysis, 2000, 3(4): 359–386.
[58] Polyanin A D, Zaitsev V F. Handbook of Ordinary Differential Equations — Exact Solutions,
Methods and Problems. Boca Raton: CRC Press, 2018.
[59] Richardson L F. Arms and Insecurity. Chicago: Quadrangle Books, 1960.
[60] Routh E J. A Treatise on the Stability of a Given State of Motions. London: Cambridge University
Press, 1877.
[61] Scott M R, Watts H A. A systematized collection of codes for solving two-point boundary-value
problems. In: Lapidus L and Schiesser W E, eds., Numerical Methods for Differential Systems
— Recent Developments in Algorithms, Software, and Applications. New York: Academic Press,
1976.
[62] Shampine L F, Allen Jr R C, Pruess S. Fundamentals of Numerical Computing. New York: John
Wiley & Sons, 1997.
[63] Shampine L F, Gladwell I, Thompson S. Solving ODEs with MATLAB. Cambridge: Cambridge
University Press, 2003.
[64] Shukla A K, Prajapati J C. On a generalization of Mittag-Leffler function and its properties.
Journal of Mathematical Analysis and Applications, 2007, 336: 797–811.
[65] Stroud A H. Numerical Quadrature and Solution of Ordinary Differential Equations. New York:
Springer-Verlag, 1974.
[66] Sun Z D, Ge S S. Stability Theory of Switched Dynamical Systems. London: Springer, 2011.
[67] Sun Z Q, Yuan Z R. Control System Computer Aided Design. Beijing: Tsinghua University Press,
1988 (in Chinese).
[68] The MathWorks Inc. Simulink user’s manual, 2007.
[69] Van der Pol B, Van der Mark J. Frequency demultiplication. Nature, 1927, 120: 363–364.
[70] Vinagre B M, Chen Y Q. Fractional calculus applications in automatic control and robotics, 41st
IEEE CDC, Tutorial workshop 2, Las Vegas, 2002.
[71] Waldschmidt M. Transcendence of periods: the state of the art. Pure and Applied Mathematics
Quarterly, 2006, 2(2): 435–463.
[72] Willoughby R A. Stiff Differential Systems. New York: Plenum Press, 1974.
[73] Wuhan University, Shandong University. Computing Methods. Beijing: People’s Education
Press, 1979 (in Chinese).
[74] Xue D Y. Fractional-Order Control Systems — Fundamentals and Numerical Implementations.
Berlin: de Gruyter, 2017.
[75] Xue D Y. FOTF Toolbox. MATLAB Central File ID: #60874, 2017.
[76] Xue D Y. Fractional Calculus and Fractional-Order Control. Beijing: Science Press, 2018 (in
Chinese).
[77] Xue D Y, Chen Y Q. System Simulation Techniques with MATLAB and Simulink. London: Wiley,
2013.
[78] Yang H, Jiang B, Cocquempot V. Stabilization of Switched Nonlinear Systems with Unstable
Modes. Switzerland: Springer, 2014.
[79] Zhang H G, Wang Z L, Huang W. Control Theory of Chaotic Systems. Shenyang: Northeastern
University Press, 2003 (in Chinese).
[80] Zhao C N, Xue D Y. Closed-form solutions to fractional-order linear differential equations.
Frontiers of Electrical and Electronic Engineering in China, 2008, 3(2): 214–217.
[81] Zhao X D, Kao Y G, Niu B, Wu T T. Control Synthesis of Switched Systems. Switzerland: Springer,
2017.
MATLAB function index
Bold page numbers indicate where to find the syntax explanation of the function. The function or model names marked with * are the ones developed by the authors. The items marked with ‡ are those freely downloadable from the internet.
abs 162, 181, 182, 185, 189, 219, 257, 278, 279, c9mimps.slx* 344
298, 302, 305, 308, 329, 344, 375, 378, c9mode1.slx* 323
380, 382, 383 c9mode2.slx* 323
any_matrix* 45, 47, 48 c9mode3.slx* 324
apolloeq* 116–118, 122, 123 c9mrand2.slx* 336
applyBoundaryCondition 415, 419, 421, c9mswi1.slx* 328
422 c9mswi2.slx* 329
assignin 385, 387 c9mswi3.slx* 330
aux_func* 301, 302, 304 caputo9 295
axis 81, 121, 414 caputo9* 278, 279, 297
caputo_ics* 295–297
balreal 196 ceil 295, 301, 302, 306
bar 198, 335 char 45, 413
break 162 charpoly 237
bvp5c 361, 363, 365–368, 370–374 close 121, 421
bvpinit 361, 363, 365–369, 371, 372, 374 comet3 76
continue 162
c10mbvp* 375 contour 396
c10mbvp1.slx* 385, 386
cos 12, 89, 125, 176, 191
c10mbvp1a* 385, 386
create 416
c10mdde1* 380
createpde 411, 416, 418, 419, 421, 422
c10mdde2* 381
csgdel 413, 414
c10mfode.slx* 387
c10mfode1* 387
c10mimp1* 377 dde23 207, 209, 210, 212–214, 380
c10mimp2* 378, 379 ddensd 223, 224–228
c10mmult* 382, 383 ddesd 215, 216–223, 229, 332, 381
c10mnon* 383, 384 decic 161, 162, 175, 177, 179, 180
c2d 196 decic_new* 162, 164, 166, 167, 176–178,
c3mtest * 60 180, 181, 377, 378
c4exode1* 114 decsg 413, 414, 419, 421
c4impode* 128 default_vals* 162, 163
c4msylv* 132, 133 det 238
c5mwheels* 189 diag 196, 225, 355, 394
c9mdae1.slx* 326 diff 14, 33–42, 46–51, 59, 88, 104, 111, 117,
c9mdde1.slx* 331 118, 122, 137, 138, 151, 152, 155, 157, 167,
c9mdde2.slx* 332 228, 251, 358, 365, 374, 378, 382, 383, 392
c9mdde2x.slx* 334 diffusion_sol* 398, 399
c9mdde3.slx* 333 diffusion_sol1* 394–396
c9mdde4.slx* 334 diffusion_sol2* 396, 397
c9mfdde1.slx* 345 double 105, 122, 167, 267, 290, 302, 307, 383
c9mfode1.slx* 339 drawnow 121, 245, 246, 421
c9mfode2.slx* 340 dsolve 32, 33–42, 47–51, 59, 105, 111, 122,
c9mfode3.slx* 342, 343 146, 167, 251, 358, 378, 383
171, 172, 176, 182, 184, 187, 189, 191, 247, sc2d* 196, 197
249, 250, 255, 257–260, 262, 264, 265, set 121
268, 352, 358, 372, 375, 376, 383, 384 setInitialConditions 415, 418, 419, 421,
ode87‡ 104 422
ode_adams* 69 seven_body* 120
ode_euler* 58, 59, 60, 147, 194 shading 419, 421, 422
ode_gill* 64, 65 shooting* 352, 354
odephas2 86 sign 190
odephas3 86, 87 sim 318, 329, 334, 343, 344, 386, 387
odeplot 86 simplify 12, 16, 30, 34–37, 39–41, 48, 122,
odeprint 86 137, 138, 146, 228, 242, 269, 374, 382, 392
ode_rk2* 61, 62 sin 12, 29, 44, 59, 62, 63, 65, 69, 89, 107, 113,
ode_rk4* 63, 122, 123, 148, 157 114, 125, 128, 176, 185, 242, 296, 392, 395
odeset 83, 84, 85, 87–89, 92, 101, 103–106, size 132, 373, 395, 397, 420
109, 111, 113, 114, 117–121, 123, 125, 127, solve 16, 28, 31, 66, 67, 126, 262, 269
128, 131, 133, 134, 137, 139, 149, 151, 153, solvepde 417, 418–422
155, 164, 165, 170–172, 174–178, 180–182, solvepdeeig 417
184, 189, 191 sparse 356, 397, 398
ones 162, 164, 165, 177, 179, 181, 288, 377, specifyCoefficients 416, 418, 419, 421,
378, 394, 396, 398 422
open 121, 421 sqrt 33, 59, 62–64, 79, 111, 116, 119, 198, 296,
open_system 312 335, 338, 379, 416, 418
optimset 114, 128, 376, 377, 379–381, 383, ss 196
384, 386, 387 step 212
ousta_fod* 338 strcmp 277
subplot 255
pdegplot 413, 414 subs 19, 44, 46, 105, 122, 137, 139, 146, 167,
pdeplot 417, 418, 419, 421, 422 229, 267, 269, 374
pdeplot3D 417 sum 120, 171, 288, 290
pdetool 402 surf 90, 395
pi 20, 373, 382–384, 395 surfc 399
plot 18, 75, 78, 80, 81, 85, 88, 89, 92, 102, svd 196
103, 107, 108, 111, 114, 116, 117, 120–123, sym 16, 19, 46, 235, 237, 238
125, 127, 128, 131, 133, 134, 139, 148, 149, sym2poly 296
151, 152, 171, 176, 181, 187, 252, 260 syms 12, 14–16, 20, 28, 29, 33, 35, 38, 45, 50,
plot3 76, 80–82, 85, 87, 88, 265 66, 126, 153
plotyy 123, 227
prod 238, 243, 396, 397
tan 359, 363
rand 162, 177, 179, 244–246, 262, 268, 363, tanh 177
365, 366, 368, 373, 374, 376, 381, 384, 387 taylor 33, 296
randn 194, 197 tf 197
real 111 threebody* 79, 80, 85, 88
reshape 47, 132, 134, 135 tic/toc 85, 92, 104, 114, 119, 122, 128, 137,
residue 284 149, 151, 153, 164, 166, 167, 176, 177, 182,
rewrite 16
ric_de* 134, 135 try/catch 162, 387
rot90 307
varargin 45, 163, 352
save 414 varargout 163
while 162, 302, 304, 358 zeros 58, 61, 63, 64, 69, 162, 177, 197, 209,
writeVideo 121, 421 212, 238, 295, 302, 307, 395, 399
zlim 421
xlim 92, 151, 245 zpk 338
Index
8/7th order Runge–Kutta algorithm 87 band-limited 335, 336
base order 281, 283, 285
Abel–Ruffini theorem 27 benchmark problem 97, 202, 348
absolute error tolerance 84 Bernoulli equations 7
acceleration 3 Bessel differential equation 20–22, 53
accumulative error 58, 61, 65, 71, 298, 299 Bessel function 17, 20–22
Adams algorithm 8 bifurcation 9, 233, 269, 270
Adams–Bashforth algorithm 57, 68–70, 93 binomial coefficients 274
Adams–Mouton algorithm 57, 68, 69 block diagram 198, 203, 279, 308, 311–345,
additional parameter 74, 75, 80–83, 101, 103, 384–386
130, 132, 134, 187, 306 boundary condition 349–351, 360–364,
Airy function 17, 23–25, 38 366–373, 376, 380, 383–385, 391–394,
algebraic constraint 168, 170, 175, 180–182, 325 398, 402, 405–407, 410, 411, 415, 418, 419
algebraic equation 1, 6, 11, 14, 26, 27, 29, 31, boundary value problem 9, 41, 42, 349–387
36, 66–68, 114, 126–128, 159–161, 164, Brusselator 247
166, 168, 180, 262–264, 314, 325, 326, Butcher tableau 66, 67
343, 346, 350, 360, 401 butterfly effect 77, 254, 255, 257
algebraic loop 326, 327, 343
analytical solution 5, 7–51, 55, 59, 75, 83, 85, capacitor 1, 2, 10
89, 95, 101, 104, 105, 107, 110–112, 114, Caputo definition 274–276, 292, 293, 336, 337
121, 129, 132, 133, 136–139, 145–147, 153, Caputo equation 276, 291–294, 299, 300, 343
154, 167, 176, 195, 219–221, 223, 228, 229, chaos 9, 77, 233, 254–259
234–236, 251, 252, 273, 278–284, 286, characteristic polynomial 236–238
287, 290, 296, 303, 308, 344, 348, 350, Cholesky decomposition 196
351, 353, 354, 358, 363–365, 369, 371–373, Chua circuit 94, 256–258, 263, 264, 268
378, 379, 383, 384, 386, 387, 391, 392, Chua diode 94, 256
394, 395, 419, 420, 423 closed-form solution 43, 287–290, 293
animation 76, 120, 407–409, 420 closed-loop 72, 322
anonymous function 59, 60, 74, 75, 77–80, 82, coefficient matrix 47, 130, 184, 199, 234–236,
86, 91, 101, 103, 105, 107, 109, 110, 114, 295, 393, 423
119, 121, 125, 128, 132, 134, 147, 153, 155, column-wise 47, 115, 117, 122, 132, 133, 324,
160, 161, 163–165, 167, 171, 176, 179, 183, 346
184, 189, 209, 211, 213, 215–217, 219, 221, commensurate-order 273, 279–283, 308
223, 225, 226, 247, 250, 256, 264, 267, comparative operation 314
303, 311, 322, 353, 358, 359, 361, 363, 364, complex pole 36
368, 382, 398, 416 consistent initial value 145, 160–164, 166, 167,
Apollo equation 323 175, 177, 179, 180, 351, 375, 377–381,
approximate solution 68, 210 383–385, 387
arbitrary symbolic matrix 45 consistent solution 351, 371
arms race model 5 contour 24, 398, 407, 408
autonomous differential equation 240, 241, control option 80, 83–87, 161, 163, 169, 189,
247, 262, 265 260, 319
auxiliary function 14, 15, 207, 292–295, 297, Control System Toolbox 212
301 convergent 19, 23, 182, 245, 279, 280
corrector algorithm 300, 304, 305
backward difference 149 Coulomb friction 190
Bagley–Torwik equation 293, 347 covariance matrix 195, 196
Hurwitz judgement matrix 237–239 large step-size 62, 70, 73, 93, 118, 148, 278,
hyperbolic 391, 401, 406, 408, 415, 416, 418 290, 298
hypergeometric function 17, 19–21, 23 leakage resistance 3
Legendre differential equation 22, 23
implicit Caputo equation 300, 305–308 Legendre function 17, 22, 23, 53
implicit differential equation 7, 124–127, 129, like term collection 29, 45
158–168, 171, 172, 350, 375–379 limit cycle 9, 93, 102, 103, 233, 246–251, 253,
impulse response 282, 284–286 270
impulsive signal 281–283, 285, 289, 309 linear algebraic equation 294–296, 355, 393,
index 1 173–175, 202 394, 398
index reduction 174, 178, 180–182 linear time invariant 281
inductor 1, 2 logarithmic 314
infinite integral 17 logic expression 187
infinite series 279, 280, 423 logistic function 5
inhomogeneous 12, 14, 15, 20, 28, 29, 351, 397, loop structure 162, 238, 301, 394
398, 420 Lorenz equation 75–77, 80, 81, 86, 254, 255,
initial value 26–28, 34–36, 39, 41, 42, 44, 47, 257, 316, 318, 321
50, 56, 75, 77, 91, 93, 94, 100, 101, 103, lower-triangular matrix 306
109, 110, 115, 116, 127, 136–139, 149, 155, LTI model 281
159, 160, 164, 166–168, 173, 175, 178, 179, lumped parameter 3, 391
189, 190, 193, 196, 206, 214, 218, 229, 241, Lyapunov criterion 240, 242, 244, 246
244, 247, 248, 250, 254, 255, 257, 263, Lyapunov function 233, 239, 240, 243
267, 276, 287, 290–293, 300, 302, 303,
305–308, 315–317, 325, 331, 333, 342, 343,
Malthus’s population model 4
349–352, 361, 372, 374–377, 382, 384–386
mass matrix 84, 169, 172, 176
initial value problem 8, 55–93, 99, 112, 113, 135,
MATLAB function 58–61, 74, 77–82, 93, 94, 101,
146, 168, 349–352, 357, 360–362, 372,
110, 114, 116, 119, 120, 127, 128, 132, 133,
374, 375, 383, 386
139, 161, 170, 183, 186, 188, 189, 196, 207,
integrator chain 315, 316, 321, 323, 339,
208, 213, 215, 217, 223, 280, 288, 290, 295,
341–343
297, 301, 304, 311, 317, 322, 331, 352, 357,
interface 74, 82, 87, 132, 134, 391
361, 362, 375, 378, 380, 381, 385, 394, 416
interpolation 7, 18, 65, 68, 294, 361, 363, 368
MATLAB workspace 44, 46, 82, 101, 103, 132,
intersection 402, 404
170, 235, 237, 256, 263, 321, 328, 409, 414
inverse Laplace transform 26, 30, 31
matrix differential equation 8, 42–48, 99,
inverse matrix 176, 394, 396
129–135
inverted pendulum 130, 131, 141
matrix exponential 153
iteration 58, 301, 304, 305, 357
matrix integral 44
Jacobian matrix 84, 266–270, 369, 370 matrix Sylvester differential equation 46–48
member 83, 85, 86, 161, 169, 186, 207, 208,
kernel 136, 139 361, 411, 416, 417
key signal 314–316, 322–324, 326, 327, 331, mesh grid 365, 395, 402, 405, 408, 409, 411,
332, 341–344 417
Kirchhoff’s law 2 minimum step-size 89, 118, 154, 191
Kronecker product 47 Mittag-Leffler function 273, 278–281, 303
movie making 409
Lagrangian equation 129 mth order Runge–Kutta algorithm 65–68
Laplace operator 400 multi-point 381, 382
Laplace transform 11, 25–28, 30, 41, 42, 193, multi-point boundary value 381–383
211, 276, 279, 281–283, 285, 286 multi-step 8, 55, 57, 68–70
multiple limit cycles 93 phase space 76, 79, 80, 85, 86, 94, 140, 254,
multiple solutions 112, 136, 145, 166–168, 263, 255, 264, 271
378 piecewise function 96, 219, 230
mutual exclusive 185 Pochhammer symbol 280
Poincaré map 233, 246, 259, 260
natural growth rate 4 Poincaré–Bendixson theorem 247
negative definite 239, 241–243 Poisson equation 415
Neumann condition 391, 402, 405, 421 pole 36, 282, 283, 337
Neumann function 21 polynomial 236, 237, 270, 277, 294
neutral-type 9, 205, 223–229, 332–334 polynomial equation 26, 27
Newton–Raphson iterative method 357 population model 4, 5
Newton’s second law 3 positive definite 239–243
nonlinear algebraic equation 128, 306, 372 power series method 6, 10, 194
nonlinear differential equation 3, 8, 11, 12, predator–prey 5, 77, 95
15–17, 25, 48–51, 76, 91–93, 107, 127, 128, predictor solution 300, 304, 305, 307
152, 189, 233, 241, 246, 261, 262, predictor–corrector algorithm 69, 300
265–269, 350, 357–360 probability density function 193, 198, 203, 336,
nonlinear function 56, 113, 130, 256, 257, 314, 337
316, 342, 397, 409 pseudo polynomial 288
nonsingular matrix 124, 125, 169, 176 pseudocolor 407
nonzero history function 212–214, 216, 217, pseudorandom number 197, 335
333, 334
nonzero initial value 11, 25, 29–31, 218, 275, quasi-analytical solution 27, 200
276, 292, 297
norm 105, 154, 157, 158, 161, 162, 166, 167, 219, random initial value 244
221, 229, 334, 354, 356, 358, 359, recursive 8, 62, 64, 68, 70, 196, 277, 290
363–365, 371, 374, 375, 379, 383, 386, 387 regular singular point 106
normal distribution 194 relative error tolerance 83, 84, 179, 190, 191,
numerical analysis 62, 71, 122, 124, 158, 311 247, 260, 330, 358, 420
numerical differentiation 315, 423 repeated pole 282
numerical integral 139 resistor 1, 10
numerical stability 73, 315 Riccati matrix differential equation 99, 129,
133–135
open-loop control 72 Riemann–Liouville definition 274–276,
orthogonal matrix 196 289–291, 306, 336, 337
oscillation 37, 51, 147, 152, 234, 251, 252, 260 rising factorial 280
Oustaloup filter 337–339, 341, 343–345, 387 Rössler equation 94
roundoff error 71
parabolic 391, 401, 415 Routh–Hurwitz criterion 233, 236–239
partial derivative 358, 359, 402, 417 Runge–Kutta–Felhberg algorithm 74, 156
partial differential equation 9, 391–423
partial fraction expansion 16, 28, 273, 282–287 saddle point 265, 269
particular solution 27, 33, 34, 147 sample time 196, 197
PDEModel object 391, 410, 411, 414–416, 418 second-order Runge–Kutta algorithm 60, 61,
periodicity 9, 103, 147, 233, 246, 247, 251–253, 63, 70
259 semi-explicit 145, 169–176, 179, 181, 182
phase plane 77, 84, 86, 93, 96, 101, 102, 140, semi-infinite 349, 370–372, 385, 388
184, 189, 203, 244, 246, 247, 249, 250, semi-negative definite 239
253, 256–258, 262, 329 semi-positive definite 239
separable variables 7, 12, 14–17, 99, 135 subsystem 183–185, 202, 311
set difference 402, 404 sudden jump 190, 191
seven-body 119, 121 switched differential equation 145, 183–191,
shooting method 9, 350–354, 357–360, 375 311, 327–330
Sigmoid function 5 switching law 183–185, 188, 328
Simulink 9, 311–345, 349, 384–386 Sylvester matrix equation 11, 43, 46–48, 129,
single-step 149 132, 133
single-term 301, 304, 308, 309, 346 symbolic expression 32, 34, 38
singular equation 138, 180, 182, 386 Symbolic Math Toolbox 11, 19, 32, 36, 40, 44,
singular matrix 125, 169, 172, 369, 370 126, 153, 266
singular value decomposition 196 symmetrical matrix 119, 133
singularity 26, 105–108
SIR model 6 Taylor auxiliary function 292, 294, 295, 297, 301
SolidWorks 414 Taylor series 33, 58, 196, 279, 293, 296
sparse matrix 356, 391, 396, 397 terminal value 74, 133, 255, 350, 372, 375, 382
special function 11, 17–25, 33, 38, 279 third-order Runge–Kutta algorithm 67
speed 4, 71, 72, 78, 84, 95, 119, 146, 155, 172, three-body model 78, 87
261, 417 time constant 146, 147, 149, 155
stability 9, 233–246, 268–270 time delay 205
standard normal distribution 196 time elapse 77, 85, 114, 116, 117, 122, 128, 137,
state augmentation 108, 109 139, 151, 154, 156, 158, 163, 164, 166, 170,
state space 3, 11, 42–45, 75, 76, 91, 109, 110, 172, 176, 178, 179, 181, 191, 210, 213, 229,
126, 129, 132, 183, 196, 197, 234, 256, 266, 247, 249, 250, 303, 305, 307, 309, 326,
303, 348, 371 327, 343, 356, 360, 363, 366, 374, 375,
state transition matrix 44 377, 378, 380, 381, 386, 387, 396, 397,
state variable 43, 58, 73, 75, 77, 95, 99–101, 420, 422, 423
103, 105, 107–113, 115, 116, 119, 121, 125, time-dependent delay 205, 215, 216, 223
127, 132, 137, 140, 157, 159, 168, 170, 173, time-varying differential equation 5, 11, 38–40,
174, 178, 179, 184, 195, 197, 207, 208, 74, 104–106, 233, 240, 241, 324, 325,
211–213, 215, 216, 219, 222, 223, 225, 229, 351–354, 356, 359, 408
234–236, 241, 248, 253, 262, 311, 326, transfer function 192, 193, 197, 203, 210–212,
333, 349, 350, 352, 357, 359, 361, 362, 281–285, 331, 335
364, 369, 371, 372, 378, 382 transient solution 146, 147, 218, 251, 252
state-dependent delay 205, 214, 215, 219, 220, tri-diagonal matrix 356, 393
223 triangulation 402, 405, 417, 419
step response 130, 211, 283, 284, 286 two-point boundary value 349, 353, 359–361,
step-size 57–59, 61–65, 68–74, 84, 88, 89, 92, 381
104, 117–119, 122, 124, 139, 145, 147–152,
154–156, 158, 185, 191, 193–195, 277, 278, undetermined coefficient 14, 27, 42, 66, 350,
289, 296, 298, 299, 301, 302, 304, 305, 360, 361, 368–370, 372, 389
307, 335, 354, 393–397, 421, 423 union 402, 404, 412
stiff differential equation 9, 93, 104, 145–158 uniqueness 55, 56, 166, 372
stiffness 145, 146, 149, 152–157 unstable 131, 148, 184, 233–238, 240, 242,
stochastic differential equation 191–198, 311, 244–246, 265, 267, 268, 270
334–336 user interface 402–408, 410, 411, 413, 418
stochastic process 191, 192
stop time 302, 307, 319, 322, 325, 352, 371, 385 validation 55, 77, 83, 85, 220, 319
string 32, 34, 35, 38, 39, 409, 412 Van der Pol equation 51, 95, 101–104, 109, 149,
structured variable 83, 208, 361, 416 150, 246