A Quasi-Infinite Horizon Nonlinear Model Predictive Control Scheme with Guaranteed Stability*

H. CHEN† and F. ALLGÖWER‡

Automatica, Vol. 34, No. 10, pp. 1205–1217, 1998
© 1998 Elsevier Science Ltd. All rights reserved. Printed in Great Britain
PII: S0005-1098(98)00073-9    0005-1098/98 $—see front matter

Abstract—We present in this paper a novel nonlinear model predictive control scheme that guarantees asymptotic closed-loop stability. The scheme can be applied to both stable and unstable systems with input constraints. The objective functional to be minimized consists of an integral square error (ISE) part over a finite time horizon plus a quadratic terminal cost. The terminal state penalty matrix of the terminal cost term has to be chosen as the solution of an appropriate Lyapunov equation. Furthermore, the setup includes a terminal inequality constraint that forces the states at the end of the finite prediction horizon to lie within a prescribed terminal region. If the Jacobian linearization of the nonlinear system to be controlled is stabilizable, we prove that feasibility of the open-loop optimal control problem at time t = 0 implies asymptotic stability of the closed-loop system. The size of the region of attraction is only restricted by the requirement for feasibility of the optimization problem due to the input and terminal inequality constraints and is thus maximal in some sense. © 1998 Elsevier Science Ltd. All rights reserved.

Key Words—Nonlinear model predictive control; stability; terminal inequality constraint; terminal cost; quasi-infinite horizon.

* Received 16 October 1996; revised 10 July 1997; received in final form 9 April 1998. This paper was not presented at any IFAC meeting. This paper was accepted for publication in revised form by Associate Editor W. Bequette under the direction of Editor Prof. S. Skogestad. Corresponding author Dr. F. Allgöwer. Tel. +41-1-6323557; Fax +41-1-6321211; E-mail allgower@[Link].
† Institut für Systemdynamik und Regelungstechnik, Universität Stuttgart, Pfaffenwaldring 9, 70550 Stuttgart, Germany. Current address: Department of Electronic Engineering, Jilin University of Technology, 130025 Changchun, People's Republic of China. E-mail chenh@[Link].
‡ Institut für Automatik, Eidg. Techn. Hochschule Zürich, CH-8092 Zürich, Switzerland.

1. INTRODUCTION

The history of model predictive control (MPC), also referred to as moving horizon control or receding horizon control, began with an attempt to use the powerful computer technology to improve the control of processes that are constrained, multivariable and uncertain (Cutler and Ramaker, 1980; Richalet et al., 1978). In the last decade, many formulations have been developed for linear or nonlinear systems (García et al., 1989; Rawlings et al., 1994; Mayne, 1995; van den Boom, 1996; Lee, 1997), which have found successful applications especially in the process industries (Richalet, 1993; Qin and Badgwell, 1996).

In general, the MPC problem is formulated as solving on-line a finite horizon open-loop optimal control problem subject to (linear or nonlinear) system dynamics and constraints involving states and inputs. However, as shown in Bitmead et al. (1990), this general form of MPC does not guarantee closed-loop stability, because a finite horizon criterion is not designed to deliver an asymptotic property such as stability. Closed-loop stability can only be achieved by a suitable tuning of design parameters such as prediction horizon, control horizon and weighting matrices. Therefore, Bitmead et al. (1990) suggested an infinite horizon method (closely related to LQ control), which, however, results in an optimization problem that can generally be solved only for unconstrained linear systems.

For linear systems with constraints, the work of Rawlings and Muske (1993) represents a significant leap forward in MPC theory. They propose a receding horizon control scheme with infinite prediction horizon and finite control horizon. For both stable and unstable systems, nominal closed-loop stability is guaranteed by the feasibility of the constraints, independent of the choice of performance parameters. For other MPC approaches and stability results see, for example, Genceli and Nikolaou (1993) and Polak and Yang (1993).

Mayne and Michalska have contributed some very important issues on the stability of nonlinear receding horizon control. They have shown in Mayne and Michalska (1990) that under some rather strong assumptions, receding horizon control is able to stabilize a class of nonlinear systems with constraints (see also Chen and Shaw, 1982; Keerthi and Gilbert, 1988). The finite horizon constrained optimal control problem is posed as minimizing a standard quadratic objective functional subject to an additional terminal state equality constraint requiring the states to be zero at the end of the finite prediction horizon. The strong assumptions are needed to ensure that the optimal value function is continuously differentiable. Those assumptions are relaxed in Michalska and Mayne
(1991) to ensure merely local Lipschitz continuity of the optimal value function. However, from a computational point of view, an exact satisfaction of the terminal equality constraint requires an infinite number of iterations in the nonlinear case. An approximate satisfaction means that the achieved stability is lost inside the region of approximation. In order to avoid this, they extend their work in Michalska and Mayne (1993) with a terminal inequality constraint such that the states are on the boundary of a terminal region at the end of a variable prediction horizon. They suggest a dual-mode receding horizon control scheme with a local linear state feedback controller inside the terminal region and a receding horizon controller outside the terminal region. Closed-loop control with this scheme is implemented by switching between the two controllers, depending on the states being inside or outside the terminal region.

Yang and Polak (1993) present a moving horizon control scheme that deviates from conventional MPC schemes in that the control horizon is also a minimizer and the whole input sequence is implemented. In this scheme inequality contraction constraints are added so as to ensure the state vector to contract by a prespecified factor before a new optimization begins. Like in the linear case of this scheme (Polak and Yang, 1993), guaranteed stability is achieved when the existence of a solution to the optimization problem at each time is assumed. However, this is a very strong assumption and cannot be guaranteed in general (Mayne, 1995). In analogy to the linear case (Genceli and Nikolaou, 1993), Genceli and Nikolaou (1995) derive an end condition for nonlinear MPC with second-order Volterra models, when the system being controlled is square and stable. The end condition requires the input values at the end of the finite horizon to be equal to the steady-state values corresponding to the setpoint and the steady-state estimates of disturbances. With the end condition, closed-loop stability is achieved under some restrictions not only on prediction and control horizons but also on control move suppressions in the objective functional. This makes an independent specification of control performance difficult. Another method to achieve stability for nonlinear MPC is suggested by Nevistic and Morari (1995), combining state feedback linearization and stability issues of linear MPC with constraints, for feedback linearizable systems. However, because the exact state feedback linearization law is state-dependent and generally nonlinear, the originally linear input constraints are transformed into state-dependent and in general nonlinear constraints. In addition, an originally quadratic cost functional will become an arbitrary nonlinear cost functional in the transformed coordinates.

For discrete nonlinear systems subject to constraints, Keerthi and Gilbert (1988) discuss the moving horizon control problem as an approximation of an infinite horizon optimal feedback control problem. They provide sufficient conditions for the existence of a solution to the general nonlinear program and for closed-loop stability, based on a controllability assumption that is however not easy to verify in the nonlinear case. With terminal equality constraints, Meadows et al. (1995) propose a comparatively easily implementable formulation and discuss the existence and stability conditions.

In this paper, we introduce a quasi-infinite horizon nonlinear MPC scheme that optimizes on-line an objective functional consisting of a finite horizon cost and a terminal cost subject to system dynamics, input constraints and an additional terminal state inequality constraint. The feasibility of the terminal inequality constraint implies that the states at the end of the finite horizon are in a prescribed terminal region. The terminal states are penalized in such a way that the terminal cost bounds the infinite horizon cost of the nonlinear system controlled by a "fictitious" (i.e. not implemented) local linear state feedback. Thus, the proposed nonlinear model predictive controller has a quasi-infinite prediction horizon, but the input profile to be determined on-line is only of finite nature. If the Jacobian linearization of the nonlinear system to be controlled is stabilizable, the unique positive-definite, symmetric solution of an appropriate Lyapunov equation can serve as terminal penalty matrix of the terminal cost, and a neighborhood of the origin serving as terminal region can be determined off-line. Closed-loop asymptotic stability is then guaranteed by the feasibility of the open-loop optimal control problem at time t = 0. As is usual in MPC, the closed-loop control is calculated by solving the optimization problem on-line at each time, no matter whether the states are inside or outside the terminal region. Thus, no switching between controllers is needed. The local linear state feedback is only used to determine a terminal penalty matrix and a terminal region off-line. The contribution of this paper is thus a computationally attractive formulation of nonlinear MPC for which asymptotic stability can be guaranteed. Compared to other nonlinear MPC approaches that also guarantee stability (terminal equality constraint and dual-mode), this approach appears to be more general and computationally more attractive.

The paper is structured as follows: Section 2 describes the mathematical formulation of the proposed quasi-infinite horizon nonlinear MPC problem. Section 3 gives some preliminary results about a region of attraction and a performance bound of the nonlinear system controlled by a local linear state feedback. Based on these results, a procedure for systematically determining a terminal region and a terminal penalty matrix off-line is summarized. In Section 4, asymptotic stability of the proposed nonlinear MPC scheme is discussed and sufficient stability conditions are given. Simulation results for an unstable constrained system are given in Section 5.

2. A QUASI-INFINITE HORIZON NONLINEAR MODEL PREDICTIVE CONTROL SCHEME

The class of systems to be controlled is described by the following general nonlinear set of ordinary differential equations (ODEs):

    \dot{x}(t) = f(x(t), u(t)), \quad x(0) = x_0,    (1)

with state vector x(t) ∈ R^n, input vector u(t) ∈ R^m, and subject to input constraints

    u(t) \in U, \quad \forall t \ge 0.    (2)

It is assumed in this paper that

(A1) f : R^n × R^m → R^n is twice continuously differentiable and f(0, 0) = 0. Thus, 0 ∈ R^n is an equilibrium of the system with u = 0.
(A2) U ⊂ R^m is compact, convex and 0 ∈ R^m is contained in the interior of U.
(A3) System (1) has a unique solution for any initial condition x_0 ∈ R^n and any piecewise continuous and right-continuous u(·) : [0, ∞) → U.

Assumption f(0, 0) = 0 is not very restrictive, since if f(x_s, u_s) = 0, one can always shift the origin of the system to (x_s, u_s). We consider in this paper the state feedback case and thus assume that all states are measurable.

In the following, we describe the problem setup for the quasi-infinite horizon nonlinear MPC scheme introduced in this paper. For a description of the general idea and the principle of nonlinear MPC we refer for example to the excellent papers by Mayne and Michalska (Mayne and Michalska, 1990; Michalska and Mayne, 1993).

We shall first introduce some notations that will be used in this paper. For any vector x ∈ R^n, ‖x‖ denotes the 2-norm and the weighted norm ‖x‖_P is defined by ‖x‖_P^2 := x^T P x, where P is an arbitrary Hermitian, positive-definite matrix. For any Hermitian matrix A, λ_max(A) and λ_min(A) denote the largest and the smallest real part of the eigenvalues of the matrix A, respectively, and ‖A‖ stands for the induced 2-norm of A. In order to distinguish clearly between the system, that evolves in "real" time, and the system model, used to predict the future "within" the controller and evolving in some fictitious time, we denote the internal variables in the controller by a bar (x̄, ū) to indicate that the predicted values need not and will not be the same as the actual values.

For the particular setup considered in this paper, the open-loop optimal control problem at time t with initial state x(t) is formulated as

    \min_{\bar{u}(\cdot)} J(x(t), \bar{u}(\cdot))    (3)

with

    J(x(t), \bar{u}(\cdot)) = \int_t^{t+T_p} \big( \|\bar{x}(\tau; x(t), t)\|_Q^2 + \|\bar{u}(\tau)\|_R^2 \big)\, d\tau + \|\bar{x}(t+T_p; x(t), t)\|_P^2    (4)

subject to

    \dot{\bar{x}} = f(\bar{x}, \bar{u}), \quad \bar{x}(t; x(t), t) = x(t),    (5a)
    \bar{u}(\tau) \in U, \quad \tau \in [t, t+T_p],    (5b)
    \bar{x}(t+T_p; x(t), t) \in \Omega,    (5c)

where Q ∈ R^{n×n} and R ∈ R^{m×m} denote positive-definite, symmetric weighting matrices; T_p is a finite prediction horizon; x̄(· ; x(t), t) is the trajectory of equation (5a) driven by ū(·) : [t, t+T_p] → U (for simplicity of exposition, the control and prediction horizons are chosen to have identical values in this paper). Note the initial condition in equation (5a): The system model used to predict the future in the controller is initialized by the actual system states x(t) at "real" time t.

The objective functional (4) consists of a finite horizon standard cost to specify the desired control performance and a terminal cost to penalize the states at the end of the finite horizon. The terminal state inequality constraint (5c) will force the states at the end of the finite prediction horizon to be in some neighborhood Ω of the origin, called here terminal region. This terminal region Ω will be chosen such that it is invariant for the nonlinear system controlled by a local linear state feedback. The quadratic terminal cost ‖x̄(t+T_p; x(t), t)‖_P^2 bounds the infinite horizon cost of the nonlinear system starting from Ω and controlled by the local linear state feedback, i.e.

    \|\bar{x}(t+T_p; x(t), t)\|_P^2 \ge \int_{t+T_p}^{\infty} \big( \|\bar{x}(\tau; x(t), t)\|_Q^2 + \|\bar{u}(\tau)\|_R^2 \big)\, d\tau, \quad \bar{u} = K\bar{x}, \ \forall \bar{x}(t+T_p; x(t), t) \in \Omega.    (6)

We will show that this allows us to guarantee closed-loop stability. The positive-definite and symmetric terminal penalty matrix P ∈ R^{n×n} together with the terminal region Ω is determined off-line such that the invariance property of Ω holds and the input constraints are satisfied in Ω. If we substitute inequality (6) into equation (4), we can
conclude that the cost functional to be minimized bounds the infinite horizon cost defined by

    J_\infty(x(t), \bar{u}(\cdot)) := \int_t^{\infty} \big( \|\bar{x}(\tau; x(t), t)\|_Q^2 + \|\bar{u}(\tau)\|_R^2 \big)\, d\tau,

where ū(τ) = Kx̄(τ; x(t), t) for τ ≥ t+T_p, i.e. J_∞(x(t), ū(·)) ≤ J(x(t), ū(·)). In this sense, the prediction horizon of the proposed nonlinear predictive controller expands quasi to infinity, hence the name quasi-infinite horizon nonlinear MPC scheme.

An optimal solution to the optimization problem (3)–(5) (existence assumed) will be denoted by ū*(· ; x(t), t, t+T_p) : [t, t+T_p] → U and the corresponding objective value is denoted by J*(x(t), t, t+T_p) := J(x(t), ū*).

The idea behind this setup is to guarantee stability of the closed-loop system with a quasi-infinite horizon objective functional, where the input profile needs to be determined on-line only for a finite horizon. In the sense of MPC, the "open-loop" control can be thought of as having two steps: over a finite horizon, an optimal input profile found by solving the open-loop optimal control problem drives the nonlinear system model into the terminal region; after that, a local linear state feedback control is assumed to steer it to the origin. In the moving horizon implementation, the local linear state feedback controller will never be directly applied to the system. Indeed, the input profile found is applied to the system only until the next measurement becomes available. We assume that this will be the case every δ time-units. So δ denotes the "sampling time" with δ < T_p, and the closed-loop control represented by u*(·) is defined by

    u^*(\tau) := \bar{u}^*(\tau; x(t), t, t+T_p), \quad \tau \in [t, t+\delta].    (7)

Updated with the new measurement, the above optimization problem will be solved again to find a new input profile. Thus, the closed-loop control is obtained by solving the open-loop optimal control problem on-line at each time, no matter whether the states are inside or outside the terminal region. The linear state feedback is only used to determine a terminal penalty matrix P and a terminal region Ω off-line, as described in the next section.

3. PRELIMINARY RESULTS

By a slight modification of the associated content in Michalska and Mayne (1993), we present preliminary results about a region of attraction and a performance bound of the nonlinear system controlled by a local linear state feedback. These results allow us to outline a procedure to systematically determine a terminal region and a terminal penalty matrix, and will be used to prove closed-loop asymptotic stability of the proposed control scheme. Since a terminal region and a terminal penalty matrix can be calculated off-line, variables without a bar will be used in this section.

We consider the Jacobian linearization of the system (1) at the origin

    \dot{x} = Ax + Bu,    (8)

where A := (∂f/∂x)(0, 0) and B := (∂f/∂u)(0, 0). If equation (8) is stabilizable, then a linear state feedback u = Kx can be determined such that A_K := A + BK is asymptotically stable. Consequently, we can state the following lemma.

Lemma 1. Suppose that the Jacobian linearization of the system (1) at the origin is stabilizable. Then,

(a) the following Lyapunov equation

    (A_K + \kappa I)^T P + P (A_K + \kappa I) = -Q^*    (9)

admits a unique positive-definite and symmetric solution P, where Q* = Q + K^T R K ∈ R^{n×n} is positive definite and symmetric; κ ∈ [0, ∞) satisfies

    \kappa < -\lambda_{\max}(A_K).    (10)

(b) There exists a constant α ∈ (0, ∞) specifying a neighborhood Ω_α of the origin in the form of

    \Omega_\alpha := \{ x \in R^n \mid x^T P x \le \alpha \}    (11)

such that
(i) Kx ∈ U for all x ∈ Ω_α, i.e., the linear feedback controller respects the input constraints in Ω_α,
(ii) Ω_α is invariant for the nonlinear system (1) controlled by the local linear feedback u = Kx,
(iii) for any x_1 ∈ Ω_α, the infinite horizon cost

    J_\infty(x_1, u) = \int_{t_1}^{\infty} \big( \|x(t)\|_Q^2 + \|u(t)\|_R^2 \big)\, dt

subject to nonlinear system (1), starting from x(t_1) = x_1 and controlled by the local linear state feedback u = Kx, is bounded from above as follows:

    J_\infty(x_1, u) \le x_1^T P x_1.    (12)

Proof. Since Q* > 0, by the general conditions for the solvability of Lyapunov equations, it is known that the Lyapunov equation (9) has a unique positive-definite and symmetric solution, if the real parts of all eigenvalues of (A_K + κI) are negative. Because of the asymptotic stability of A_K, any constant κ ∈ [0, −λ_max(A_K)) ensures the negativity of the real parts of all eigenvalues of (A_K + κI). Thus, (a) is true.
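Part (a) is directly computable with standard numerical tools. The following sketch checks it for a hypothetical two-state system; the matrices A, B, Q, R below are illustrative assumptions (not from the paper), and LQR is used as one possible way to obtain the stabilizing gain K:

```python
# Numerical sketch of Lemma 1(a): obtain a locally stabilizing gain K,
# pick kappa in [0, -lambda_max(A_K)) as in (10), and solve the Lyapunov
# equation (9): (A_K + kappa*I)^T P + P (A_K + kappa*I) = -Q*,
# with Q* = Q + K^T R K. Toy system matrices, chosen for illustration only.
import numpy as np
from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [0.5, -1.0]])   # unstable Jacobian linearization (toy example)
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)                 # state weighting
R = np.eye(1)                 # input weighting

# Stabilizing feedback u = Kx via LQR (one choice for Step 1 of Section 3)
S = solve_continuous_are(A, B, Q, R)
K = -np.linalg.solve(R, B.T @ S)        # A_K = A + B K is then Hurwitz
A_K = A + B @ K

# Choose kappa satisfying (10), i.e. 0 <= kappa < -lambda_max(A_K)
lam_max = max(np.linalg.eigvals(A_K).real)
kappa = 0.5 * (-lam_max)

# SciPy solves M X + X M^T = C, so use M = (A_K + kappa*I)^T, C = -Q*
Qstar = Q + K.T @ R @ K
M = (A_K + kappa * np.eye(2)).T
P = solve_continuous_lyapunov(M, -Qstar)

# P should be the unique symmetric positive-definite solution of (9)
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
print("eigenvalues of P:", np.linalg.eigvalsh(P))
```

Note the transpose convention: `solve_continuous_lyapunov(a, q)` solves a X + X aᵀ = q, so equation (9) is recovered by passing (A_K + κI)ᵀ as the first argument and −Q* as the second.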
Since the point 0 ∈ R^m is in the interior of U, we can always find, for any fixed P > 0, a constant α_1 ∈ (0, ∞) that specifies a region in the form of (11) such that Kx ∈ U for all x ∈ Ω_{α_1}. Thus, the linear feedback control values satisfy the input constraints in Ω_{α_1}.

Let α ∈ (0, α_1] specify a region in the form of equation (11). Since α ≤ α_1, the input constraints are satisfied in Ω_α and thus (i) is true. In other words, the system can be thought of as being unconstrained in Ω_α.

We differentiate x^T P x along a trajectory of

    \dot{x} = f(x, Kx)    (13)

and obtain

    \frac{d}{dt}\, x(t)^T P x(t) = x(t)^T (A_K^T P + P A_K) x(t) + 2 x(t)^T P \phi(x(t)),    (14)

where φ(x) := f(x, Kx) − A_K x. The term involving φ(x) in the above equation is bounded above as follows:

    x^T P \phi(x) \le \|x^T P\| \cdot \|\phi(x)\| \le \|P\| \cdot L_\phi \cdot \|x\|^2 \le \frac{\|P\| \cdot L_\phi}{\lambda_{\min}(P)} \|x\|_P^2,    (15)

where L_φ := sup{ ‖φ(x)‖ / ‖x‖ | x ∈ Ω_α, x ≠ 0 }. Now we choose an α ∈ (0, α_1] such that in Ω_α

    L_\phi \le \frac{\kappa \, \lambda_{\min}(P)}{\|P\|}.    (16)

Then, inequality (15) leads to

    x^T P \phi(x) \le \kappa \, x^T P x.    (17)

Substituting inequality (17) into equation (14) yields

    \frac{d}{dt}\, x(t)^T P x(t) \le x(t)^T \big( (A_K + \kappa I)^T P + P (A_K + \kappa I) \big) x(t),

which by equation (9) leads to

    \frac{d}{dt}\, x(t)^T P x(t) \le -x(t)^T Q^* x(t).    (18)

Since P > 0 and Q* > 0, inequality (18) implies that the region Ω_α defined by equation (11) is invariant for the nonlinear system (1) controlled by the local linear state feedback u = Kx. Moreover, any trajectory of equation (13) starting in Ω_α stays in Ω_α and converges to the origin.

Then, for any x_1 ∈ Ω_α, integrating inequality (18) from t_1 to ∞ with initial condition x(t_1) = x_1 yields the desired result (12). □

It should be pointed out that if we use the notation introduced in Section 2 for internal variables in the controller and set x_1 = x̄(t+T_p; x(t), t), then inequality (12) is equivalent to inequality (6). The solution P of equation (9) and the region Ω_α in the form of equation (11) are able to serve as a terminal penalty matrix and a terminal region. From the above proof, a procedure can be stated to determine a terminal penalty matrix P and a terminal region Ω_α (preferably as large as possible) off-line such that inequality (6) holds true and the input constraints are satisfied:

Step 1. Solve a control problem based on the Jacobian linearization to get a locally stabilizing linear state feedback gain K.
Step 2. Choose a constant κ ∈ [0, ∞) satisfying inequality (10) and solve the Lyapunov equation (9) to get a positive-definite and symmetric P.
Step 3. Find the largest possible α_1 such that Kx ∈ U for all x ∈ Ω_{α_1}.
Step 4. Find the largest possible α ∈ (0, α_1] such that inequality (16) is satisfied in Ω_α.

Remark 3.1. In Step 4, inequality (16) is not easy to satisfy for an arbitrarily large terminal region Ω_α. Due to a typically small value of λ_min(P)/‖P‖, it is possible that for some systems this inequality can only be met for an extremely small terminal region Ω_α. From the proof of Lemma 1, we know that if inequality (17) holds true for all x ∈ Ω_α, then inequality (18) is also valid. In addition, since φ(x) satisfies φ(x) → 0 and ‖φ(x)‖/‖x‖ → 0 as ‖x‖ → 0, there exists a constant ε > 0 such that inequality (17) holds true for all x with ‖x‖ ≤ ε. Hence, in order to get a less conservative terminal region, we may take a different approach. First, we follow the above procedure until Step 3. Then, we may make iterations of the simple optimization problem

    \max_x \{\, x^T P \phi(x) - \kappa \, x^T P x \mid x^T P x \le \alpha \,\}    (19)

for the chosen κ by reducing α from α_1 until the optimal value of (19) is nonpositive. A discussion of the optimization problem (19) and of finding the maximum α_1 in Step 3 can be found in Michalska and Mayne (1993). If a suitable α is found in this way, it specifies a region Ω_α in the form of (11) in which inequality (17) holds true. In turn, inequality (18) is valid and the results in Lemma 1 hold consequently. This region can then serve as a terminal region.

Remark 3.2. Following the above procedure does not yield a unique terminal region for a given nonlinear system. For the sake of reducing the on-line computational burden, we are interested in determining the largest possible region. This is, however, not an easy task. First, this requires a suitable selection of the stabilizing linear feedback gain K, where many linear control methods can in principle be used. Because of the "optimality" of MPC, the linear optimal control technique (LQR) may be preferentially applied for determining a stabilizing K. Secondly, for a given gain K, an appropriate choice of κ is needed. This will be discussed in Section 5. Moreover, the size of the terminal region depends generally on the nonlinearity of the system to be controlled. The more strongly nonlinear the system is, the smaller the terminal region will be. For linear or some mildly nonlinear systems, the size of the terminal region will only be restricted by the input constraints. This will also be shown in the example in Section 5.

Remark 3.3. If there exists no linear feedback controller that can locally asymptotically stabilize the nonlinear system, Ω_α contracts to the origin. Thus, the terminal inequality constraint (5c) reduces to the terminal equality constraint x(t+T_p) = 0, which is well known to lead to stability (Mayne and Michalska, 1990; Rawlings and Muske, 1993). For a generalization of the proposed approach to systems with non-stabilizable Jacobian linearization see Chen and Allgöwer (1997a).

Remark 3.4. If the system to be controlled is linear, i.e. φ(x) = 0 and L_φ = 0 for all x ∈ R^n, then κ = 0 satisfies equation (16). In turn, equation (9) becomes the Lyapunov equation for linear systems, and equation (12) is satisfied with equality. That means that the following equality:

    \int_t^{t+T_p} \big( \|x(\tau)\|_Q^2 + \|u(\tau)\|_R^2 \big)\, d\tau + \|x(t+T_p)\|_P^2 = \int_t^{\infty} \big( \|x(\tau)\|_Q^2 + \|u(\tau)\|_R^2 \big)\, d\tau

is valid for linear systems. Thus, the model predictive controller has exactly an infinite prediction horizon with only a finite horizon input profile to be determined on-line. For "open-loop" control, the control law beyond the finite horizon would be given by the local linear state feedback u = Kx (cf. Rawlings and Muske (1993) and Muske (1995), where the control beyond the finite horizon is chosen to be zero). A similar result can be found in Scokaert and Rawlings (1996).

4. ASYMPTOTIC STABILITY RESULTS

According to the principle of MPC, the open-loop optimal control problem given by equations (3)–(5) will be solved repeatedly, updated with new measurements. The closed-loop control u*(·) is defined by equation (7), where ū*(· ; x(t), t, t+T_p) : [t, t+T_p] → U is a solution to the optimization problem. In this section, we address the stability property of the closed-loop system

    \dot{x}(t) = f(x(t), u^*(t)).    (20)

To do this, we use the following standard definitions (e.g. Khalil, 1992) and assume for the moment (later it will be shown) that x = 0 is an equilibrium of equation (20).

Definition 1. The equilibrium point x = 0 of equation (20) is stable if for each ε > 0 there exists η(ε) > 0 such that ‖x(0)‖ < η(ε) implies ‖x(t)‖ < ε for all t ≥ 0.

Definition 2. The equilibrium point x = 0 of equation (20) is asymptotically stable if it is stable and η can be chosen such that ‖x(0)‖ < η implies x(t) → 0 as t → ∞.

For the sake of a clear proof, we use in this section the notation for internal controller variables and distinguish between the predicted values in the controller and the actual ones in the "real" system. Thus, x̄(· ; x(t), t) denotes the predicted trajectory of the nonlinear system starting from the actual state x(t) and driven by an open-loop control ū(·), when the prediction is made in the controller at "real" time t.

4.1. Feasibility of the optimization problem

Due to the repeated solution of the optimization problem given by equations (3)–(5), we need its feasibility at each time t ≥ 0. Feasibility of the optimization problem means that there exists at least one (not necessarily optimal) input profile ū(·) : [t, t+T_p] → U such that the generated trajectory of equation (5a) satisfies the terminal inequality constraint (5c), and such that the value of the objective functional (4) is bounded. In the following, we give a lemma on the feasibility of the optimization problem at each time. This lemma follows a standard argument also used, for example, in Genceli and Nikolaou (1993), Michalska and Mayne (1993) and Rawlings and Muske (1993).

Lemma 2. For the nominal system with perfect state measurement and no disturbances, and for a sufficiently small sampling time δ > 0, the feasibility of the open-loop optimal control problem (3) with equation (4) subject to equation (5) at time t = 0 implies its feasibility for all t > 0.

Proof. It is assumed for the moment that, at time t, an optimal solution ū*(· ; x(t), t, t+T_p) : [t, t+T_p] → U to the optimization problem described by equations (3)–(5) exists and is found. When applied in
open loop, this finite horizon optimal input profile value function is non-increasing. This result is cru-
drives the system model (5a) from initial state x(t) cial for the stability proof.
into the terminal region ) along the corresponding
open-loop optimal state trajectory x6 *() ; x(t), t, t#¹ ) ¸emma 3. Suppose that the optimization problem
1
on [t, t#¹ ] with x6 *(t#¹ ; x(t), t, t#¹ )3). is feasible at time t"0. Then, for the unperturbed
1 1 1
In terms of MPC, the closed-loop control u*( ) ) nominal system, for any t50 and q3(t, t#d] the
from time t to t#d is defined by equation (7). optimal value function satisfies
Since, by assumption, there are no disturbances
J*(x(q), q, q#¹ )4J*(x(t), t, t#¹ )!
and we only consider the nominal system, the 1 1
P
state measurement at time t#d is then x(t#d)" q
x6 *(t#d; x(t), t, t#¹ ). Therefore, for solving the (Ex(s)E2 #Eu*(s)E2 ) ds. (22)
1 Q R
open-loop optimal control problem at time t#d t
with initial condition x6 (t#d; x(t#d), t#d)" Proof. From Lemma 2, feasibility of the optimiza-
x(t#d), a candidate input profile u6 ( ) ) on tion problem at each time t'0 is guaranteed by its
[t#d, t#d#¹ ] may be chosen with
1 feasibility at time t"0.
$$
\bar u(\tau)=
\begin{cases}
\bar u^*(\tau;\,x(t),t,t+T_p), & \tau\in[t+\delta,\;t+T_p],\\
K\,\bar x(\tau;\,x(t+\delta),\,t+\delta), & \tau\in[t+T_p,\;t+\delta+T_p],
\end{cases}
\qquad(21)
$$

where $K$ is the local linear state feedback gain used in determining $P$ and $\Omega$ off-line (compare Section 3). From Lemma 1, the terminal region $\Omega$ is invariant for the nonlinear system model controlled with the linear state feedback. Thus, $\bar x^*(t+T_p;\,x(t),t,t+T_p)\in\Omega$ implies $\bar x(t+\delta+T_p;\,x(t+\delta),t+\delta)\in\Omega$, due to the choice (21) for the input profile. In addition, since the input constraints are satisfied in $\Omega$, the input profile (21) is a feasible, though perhaps not optimal, solution to the optimization problem at time $t+\delta$. Obviously, this reasoning remains valid if at time $t$ we start out with a solution that is merely feasible and need not be optimal.

For a numerical implementation, the input profile is in general parameterized in a step-shaped manner. Thus, choosing $\bar u(\tau)=K\bar x(\tau;\,x(t+\delta),t+\delta)$ for $\tau\in[t+T_p,\,t+\delta+T_p]$ as in equation (21) is practically impossible. However, we do have $\bar x^*(t+T_p;\,x(t),t,t+T_p)\in\Omega$. Then, if we choose $\bar u(\tau)=K\bar x(t+T_p;\,x(t+\delta),t+\delta)$ for $\tau\in[t+T_p,\,t+\delta+T_p]$, continuity of the trajectory lets us assume, w.l.o.g., that for a small enough $\delta>0$ the trajectory $\bar x(\cdot\,;\,x(t+\delta),t+\delta)$ stays in $\Omega$ on $[t+T_p,\,t+\delta+T_p]$. The result then follows by induction. $\blacksquare$

Remark 4.1. Lemma 2 indicates that the prediction horizon $T_p$ (a tuning parameter) needs to be chosen such that the optimization problem (3) with equation (4) subject to equation (5) is feasible at time $t=0$.

4.2. Asymptotic stability

Before the asymptotic stability of the closed-loop system (20) is addressed, we show that the optimal value function is nonincreasing along the closed-loop trajectories (Lemma 3).

If, at time $t$, a finite horizon open-loop optimal input profile $\bar u^*(\cdot\,;\,x(t),t,t+T_p):[t,\,t+T_p]\to U$ and the corresponding finite horizon open-loop optimal state trajectory $\bar x^*(\cdot\,;\,x(t),t,t+T_p)$ on $[t,\,t+T_p]$ are given, the optimal value of the objective functional can be written as

$$
J^*(x(t),t,t+T_p)=\int_t^{t+T_p}\bigl(\|\bar x^*(s;\,x(t),t,t+T_p)\|_Q^2+\|\bar u^*(s;\,x(t),t,t+T_p)\|_R^2\bigr)\,ds+\|\bar x^*(t+T_p;\,x(t),t,t+T_p)\|_P^2.\qquad(23)
$$

For any $\tau\in(t,\,t+\delta]$, the closed-loop control is taken in terms of equation (7). For the nominal system without disturbances, the closed-loop state trajectory of the system (1) is then given by

$$
x(s)=\bar x^*(s;\,x(t),t,t+T_p),\qquad s\in[t,\,\tau].\qquad(24)
$$

We now calculate the value of the objective functional for any $\tau\in(t,\,t+\delta]$, if the feasible (suboptimal) input profile

$$
\bar u(s)=
\begin{cases}
\bar u^*(s;\,x(t),t,t+T_p), & s\in[\tau,\;t+T_p],\\
K\,\bar x(s;\,x(t+\delta),\,t+\delta), & s\in[t+T_p,\;\tau+T_p],
\end{cases}
\qquad(25)
$$

is assumed to be applied to the system. We denote the resulting value by $\bar J(x(\tau),\tau,\tau+T_p):=J(x(\tau),\bar u(\cdot))$ with $\bar u(\cdot)$ according to equation (25). The generated finite horizon open-loop state trajectory is the same as the open-loop state trajectory given by the optimization at time $t$, except for the last part on $[t+T_p,\,\tau+T_p]$, i.e.

$$
\bar x(s;\,x(\tau),\tau)=\bar x^*(s;\,x(t),t,t+T_p),\qquad s\in[\tau,\,t+T_p].\qquad(26)
$$

In order to characterize the contribution of the state trajectory on $[t+T_p,\,\tau+T_p]$ to the value function, we use the results in Lemma 1: Since
1212 H. Chen and F. Allgöwer
the feasibility of the optimization problem at time $t$ implies that $\bar x^*(t+T_p;\,x(t),t,t+T_p)\in\Omega$, and on $[t+T_p,\,\tau+T_p]$ the system model is controlled by the linear state feedback (see equation (25)), that part of the state trajectory stays in $\Omega$ and obeys inequality (18). We want to remind the reader that the "real" time is now $\tau\in(t,\,t+\delta]$ and that we discuss the predicted open-loop state trajectory in the controller. In this situation, $x(t)$ and $t$ in inequality (18) have to be replaced by $\bar x(s;\,x(\tau),\tau)$ and $s$, respectively. Then, integrating (18) from $t+T_p$ to $\tau+T_p$ with $\bar x(t+T_p;\,x(\tau),\tau)=\bar x^*(t+T_p;\,x(t),t,t+T_p)$ yields the following relationship:

$$
\|\bar x(\tau+T_p;\,x(\tau),\tau)\|_P^2\le\|\bar x^*(t+T_p;\,x(t),t,t+T_p)\|_P^2-\int_{t+T_p}^{\tau+T_p}\|\bar x(s;\,x(\tau),\tau)\|_{Q^*}^2\,ds.\qquad(27)
$$

Then, the value of the objective functional for any $\tau\in(t,\,t+\delta]$ is

$$
\begin{aligned}
\bar J(x(\tau),\tau,\tau+T_p)
&=\int_\tau^{\tau+T_p}\bigl(\|\bar x(s;\,x(\tau),\tau)\|_Q^2+\|\bar u(s)\|_R^2\bigr)\,ds+\|\bar x(\tau+T_p;\,x(\tau),\tau)\|_P^2\\
&=\int_\tau^{t+T_p}\bigl(\|\bar x^*(s;\,x(t),t,t+T_p)\|_Q^2+\|\bar u^*(s;\,x(t),t,t+T_p)\|_R^2\bigr)\,ds\\
&\quad+\int_{t+T_p}^{\tau+T_p}\|\bar x(s;\,x(\tau),\tau)\|_{Q^*}^2\,ds+\|\bar x(\tau+T_p;\,x(\tau),\tau)\|_P^2,
\end{aligned}
$$

where equations (25) and (26) are used. Because of inequality (27), the above equality becomes

$$
\bar J(x(\tau),\tau,\tau+T_p)\le\int_\tau^{t+T_p}\bigl(\|\bar x^*(s;\,x(t),t,t+T_p)\|_Q^2+\|\bar u^*(s;\,x(t),t,t+T_p)\|_R^2\bigr)\,ds+\|\bar x^*(t+T_p;\,x(t),t,t+T_p)\|_P^2.
$$

Combining it with (23) yields

$$
\bar J(x(\tau),\tau,\tau+T_p)\le J^*(x(t),t,t+T_p)-\int_t^\tau\bigl(\|\bar x^*(s;\,x(t),t,t+T_p)\|_Q^2+\|\bar u^*(s;\,x(t),t,t+T_p)\|_R^2\bigr)\,ds.
$$

It follows from substituting equations (7) and (24) into the above inequality that

$$
\bar J(x(\tau),\tau,\tau+T_p)\le J^*(x(t),t,t+T_p)-\int_t^\tau\bigl(\|x(s)\|_Q^2+\|u^*(s)\|_R^2\bigr)\,ds.\qquad(28)
$$

It is clear, by the optimality of $J^*$, that we have for any $\tau\in(t,\,t+\delta]$,

$$
J^*(x(\tau),\tau,\tau+T_p)\le\bar J(x(\tau),\tau,\tau+T_p)\le J^*(x(t),t,t+T_p)-\int_t^\tau\bigl(\|x(s)\|_Q^2+\|u^*(s)\|_R^2\bigr)\,ds,
$$

as required. $\blacksquare$

Because $Q>0$ and $R>0$, Lemma 3 implies by induction that the optimal value function is nonincreasing. Now we are able to state the asymptotic stability result for the closed-loop system (20).

Theorem 1. Suppose that

(a) assumptions (A1)-(A3) are satisfied,
(b) the Jacobian linearization of the nonlinear system (1) is stabilizable,
(c) the open-loop optimal control problem (3) with equation (4) subject to equation (5) is feasible at time $t=0$.

Then, for a sufficiently small sampling time $\delta>0$ and in the absence of disturbances, the closed-loop system with the model predictive control (7) is nominally asymptotically stable. Let $X\subset\mathbb{R}^n$ denote the set of all initial states for which assumption (c) is satisfied; then $X$ gives a region of attraction for the closed-loop system.

Proof. From Lemma 1, assumption (b) implies that a terminal penalty matrix $P$ and a terminal region $\Omega$ in the form of equation (11) can be found by the procedure in Section 3. According to Lemma 2, for a sufficiently small $\delta>0$, feasibility of the open-loop optimal control problem at each time $t>0$ is guaranteed by assumption (c).

For $x(t)=0$, the optimal solution to the optimization problem is $\bar u^*(\cdot\,;\,0,t,t+T_p):[t,\,t+T_p]\to 0$, i.e. $u^*(\tau)=0$ for all $\tau\in[t,\,t+\delta]$. Due to $f(0,0)=0$, $x=0$ is then an equilibrium of the closed-loop system (20).

Now we define a function $V(x)$ for the closed-loop system (20) as follows:

$$
V(x):=J^*(x,t,t+T_p).\qquad(29)
$$

Then, $V(x)$ has the following properties:

- $V(0)=0$ and $V(x)>0$ for $x\ne 0$,
- $V(x)$ is continuous at $x=0$,
- along the trajectory of the closed-loop system starting from any $x_0\in X$ there is, for $0\le t_1<t_2\le\infty$,

$$
V(x(t_2))-V(x(t_1))\le-\int_{t_1}^{t_2}\|x(t)\|_Q^2\,dt.\qquad(30)
$$
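The shifted initialization used in the proofs of Lemmas 2 and 3, equations (21) and (25), is also the natural warm start for the optimizer at the next sampling instant. A minimal sketch in Python; the array layout and function name are our own, and the appended tail follows the step-parameterized variant discussed after equation (21):

```python
import numpy as np

def shift_warm_start(u_opt, x_terminal_pred, K, n_shift):
    """Feasible initialization for the optimization problem at t + delta.

    Drops the first n_shift (already applied) moves of the previous optimal
    input sequence and appends the local linear feedback u = K x, held
    constant at the predicted terminal state.  Inside the terminal region
    this feedback satisfies the input constraints, so the shifted profile
    remains feasible.
    """
    u_opt = np.atleast_2d(u_opt)                         # shape (N, m)
    u_tail = (K @ np.asarray(x_terminal_pred, dtype=float)).reshape(1, -1)
    return np.vstack([u_opt[n_shift:], np.repeat(u_tail, n_shift, axis=0)])

# Example: horizon of five moves, one input, two states.
u_prev = np.array([[0.3], [0.2], [0.1], [0.0], [-0.1]])
K = np.array([[-1.0, -2.0]])                             # local feedback gain
u_init = shift_warm_start(u_prev, [0.1, 0.1], K, n_shift=1)
# u_init keeps moves 2..5 of u_prev and appends K @ [0.1, 0.1] = -0.3
```

As the text emphasizes, such a shifted profile is only feasible, not optimal, which is exactly what the stability argument requires.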
A quasi-infinite horizon nonlinear model predictive control scheme 1213
The first two properties follow from Lemma A.1 in Chen (1997), and the third property is due to Lemma 3 and $R>0$. As a central consequence, we can invoke the standard argument used, for example, in Hahn (1967) and Khalil (1992) to prove that the equilibrium $x=0$ is stable in the sense of Definition 1. That is, for each $\varepsilon>0$ there exists $\eta(\varepsilon)>0$ such that $\|x(0)\|<\eta(\varepsilon)$ implies $\|x(t)\|<\varepsilon$ for all $t\ge 0$. Moreover, there exists a constant $b\in(0,\infty)$ such that along the closed-loop trajectory one has

$$
V(x(t))\le b,\qquad\forall\,t\ge 0.\qquad(31)
$$

In the following, we show that there exists $\eta>0$ such that $x(t)\to 0$ as $t\to\infty$ for all $\|x(0)\|<\eta$, without having to use a continuous differentiability assumption on $V(x)$. This implies that the equilibrium $x=0$ is asymptotically stable in the sense of Definition 2. Finally, it is shown that $X$ is a region of attraction for the closed-loop system.

We start out with inequality (30) to prove the asymptotic stability. By induction, we have

$$
V(x(\infty))\le V(x(0))-\int_0^\infty\|x(t)\|_Q^2\,dt.\qquad(32)
$$

Due to $V(x(\infty))\ge 0$ and $V(x(0))\le b$, the integral $\int_0^\infty\|x(t)\|_Q^2\,dt$ exists and is bounded. Let $\varepsilon_1<\varepsilon$ be such that $\|x(t)\|\le\varepsilon_1$; then $x(t)$ lies in the compact set $\{\|x\|\le\varepsilon_1\}$ for all $t\in[0,\infty)$. Moreover, $u^*(t)\in U$ for all $t\in[0,\infty)$ with $U$ being compact. Because $f$ is continuous in $x$ and $u$, $f(x(t),u^*(t))$ is bounded for all $t\in[0,\infty)$. Thus, $x(t)$ is uniformly continuous in $t$ on $[0,\infty)$ (Desoer and Vidyasagar, 1975). Consequently, $\|x(t)\|_Q^2$ is uniformly continuous in $t$ on $[0,\infty)$, because $\|x\|_Q^2$ is uniformly continuous in $x$ on the compact set $\{\|x\|\le\varepsilon_1\}$. Due to $Q>0$, it follows from Barbalat's Lemma (Khalil, 1992) that

$$
\|x(t)\|\to 0\quad\text{as }t\to\infty.\qquad(33)
$$

Then, the equilibrium point $x=0$ of the system (20) is asymptotically stable. Clearly,

$$
W_b:=\{x\in X\mid V(x)\le b\}\qquad(34)
$$

is a region of attraction.

Furthermore, for all $x(0)\in X$, there exists a finite time $T$ such that $x(T)\in W_b$. This can be shown by contradiction: Assume that $x(t)\notin W_b$ for all $t\ge T$. It follows from inequality (30) that for all $t\ge T$

$$
V(x(t+\delta))-V(x(t))\le-\int_t^{t+\delta}\|x(\tau)\|_Q^2\,d\tau\le-\delta\,\inf\{\|x\|_Q^2\mid x\notin W_b\}\le-\delta\,c,\qquad(35)
$$

where $c>0$ because of the positive definiteness of $V(x)$. By induction, $V(x(t))\to-\infty$ as $t\to\infty$, which contradicts, however, the fact that $V(x)\ge 0$. Thus, any trajectory of equation (20) starting from $X$ enters $W_b$ in finite time. Then, the asymptotic stability of equation (20) in $X$ follows from the fact that $W_b$ is a region of attraction.

Finally, $X$ also has the property that any closed-loop trajectory starting from $X$ stays in $X$. This can again be proven by contradiction: We assume that the closed-loop trajectory starting from some $x(0)\in X$ has left $X$ at some time $t_1>0$, i.e. $x(t_1)\notin X$. From Lemma 2, we know that the optimization problem at time $t_1$ with initial condition $\bar x(t_1;\,x(t_1),t_1)=x(t_1)$ is feasible. This contradicts the fact that $X$ is the set of all initial states for which assumption (c) is satisfied. Together with the achieved asymptotic stability, $X$ gives a region of attraction for the closed-loop system. $\blacksquare$

Remark 4.2. The given stability conditions are only sufficient, not necessary. The fact that the linearized system is not stabilizable does, of course, not imply that no linear feedback controller exists that can stabilize the nonlinear system locally.

Remark 4.3. When applying this control scheme to practical systems, the numerical optimization employed may not find the globally optimal input profile at every time step, due to real-time computation restrictions or because the optimizer is, for example, caught in a local optimum. Even though optimal performance might be lost this way, stability can nevertheless be guaranteed: the stability guarantee does not depend on the optimality of the solution found but merely on its feasibility, as long as the optimizer finds a feasible solution at time $t=0$ and each optimization problem is initialized with the shifted feasible solution from the previous step.

Remark 4.4. If the nonlinear system is open-loop asymptotically stable, the nonlinear terminal inequality constraint $x(t+T_p)\in\Omega$ can be removed without loss of the achieved stability (Chen and Allgöwer, 1997b).

5. EXAMPLE

5.1. Control problem and simulation results

As an example for demonstrating the proposed control scheme, we consider a system described by the following ODEs:

$$
\dot x_1=x_2+u\,(\mu+(1-\mu)x_1),\qquad(36a)
$$
$$
\dot x_2=x_1+u\,(\mu-4(1-\mu)x_2).\qquad(36b)
$$

This system is a modification of the system used in Mayne and Michalska (1990) in that it is unstable.
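For reference, the example dynamics (36) translate directly into code; a sketch using SciPy, with the input held constant over one sampling interval as in the sampled-data implementation described in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp

MU = 0.5  # parameter value used in the simulations

def f(t, x, u):
    """Right-hand side of the example system (36)."""
    x1, x2 = x
    return [x2 + u * (MU + (1.0 - MU) * x1),
            x1 + u * (MU - 4.0 * (1.0 - MU) * x2)]

# One sampling interval with the input held constant.  With u = 0 the
# system is exactly linear with matrix [[0, 1], [1, 0]] (eigenvalues +1
# and -1), which exhibits the open-loop instability mentioned above:
# from x(0) = (1, 1), the uncontrolled flow is exp(t) * (1, 1).
sol = solve_ivp(f, (0.0, 0.1), [1.0, 1.0], args=(0.0,),
                rtol=1e-9, atol=1e-12)
```

The same right-hand side, called once per sampling interval with the current MPC move, yields the closed-loop simulations of the following subsection.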
$$
U=\{u\in\mathbb{R}\mid -2.0\le u\le 2.0\}.\qquad(37)
$$

$$
Q=\begin{pmatrix}0.5 & 0.0\\ 0.0 & 0.5\end{pmatrix},\qquad R=1.0.\qquad(38)
$$

$$
P=\begin{pmatrix}16.5926 & 11.5926\\ 11.5926 & 16.5926\end{pmatrix}\qquad(40)
$$

is positive definite, symmetric and can be used as a terminal penalty matrix. After that, $\alpha_1=12.5$ is found to specify a region $\Omega_{\alpha_1}$ in the form of equation (11), in which the linear feedback control satisfies the constraint (37). Finally, we find a region $\Omega_\alpha$ defined by equation (11) with $\alpha=0.025$ such that inequality (16) is satisfied. However, this region is very small, because of the small value (0.1774) of $\lambda_{\min}(P)/\|P\|$. From the simple optimization (19) outlined in Remark 3.1, we can derive a less conservative terminal region $\Omega_\alpha$ with $\alpha=0.7$ as follows:

$$
\Omega_\alpha=\{x\in\mathbb{R}^2\mid x^{\mathrm T}Px\le 0.7\}.\qquad(41)
$$

The open-loop optimal control problem described by equations (3)-(5) is solved in discrete time with a sampling time of $\delta=0.1$ time-units and a prediction horizon of $T_p=1.5$ time-units. A few trajectories corresponding to different initial conditions of the unstable constrained system (36) with $\mu=0.5$, controlled by the proposed quasi-infinite horizon nonlinear predictive controller with parameters (38), (40) and (41), are shown in Fig. 1. The solid lines represent closed-loop trajectories; the dashed line is the boundary of the terminal region given by equation (41); the dash-dotted lines denote the predicted open-loop trajectories that are found by solving the optimization problem at time $t=0$ and by solving on-line the optimization problem given by equations (3)-(5) repeatedly. For the chosen prediction horizon $T_p$ and sampling time $\delta$, the optimization problem is feasible at each time. Thus, for the nominal system without disturbances, the stability conditions given in Section 4.2 are all satisfied. The closed-loop trajectories in Fig. 1 are guaranteed to converge to the origin. Figure 2 shows time profiles of the closed-loop system for two selected initial conditions (solid lines and dashed lines, respectively). It can be seen that the input constraint (37) is not violated.

5.2. Discussion of computational burden

The proposed nonlinear MPC scheme has significant computational advantages when compared to other existing MPC approaches. To show this, we compare the proposed controller (case A) to two other predictive controllers (cases B and C):

A: $P$ given by equation (40), terminal inequality constraint $\bar x(t+T_p;\,x(t),t)\in\Omega_\alpha$ with $\Omega_\alpha$ given by equation (41), $T_p=1.5$;
B: $P=0$, no terminal constraint, $T_p=3.5$;
C: $P=0$, terminal equality constraint $\bar x(t+T_p;\,x(t),t)=0$, $T_p=3.5$.

Controller A is designed with the proposed method and has guaranteed stability. For controller B, there is no terminal constraint and the terminal
states are not penalized. This controller corresponds to the nonlinear MPC scheme usually used in applications. Closed-loop stability can only be achieved by tuning the prediction (control) horizon $T_p$. Here, $T_p=3.5$ time-units is the shortest prediction horizon, determined by trial and error, such that the closed-loop system is stable (for $T_p=1.5$ time-units, the closed-loop system is unstable). For controller C, the well-known terminal equality constraint $\bar x(t+T_p;\,x(t),t)=0$ is used. Hence, a terminal state penalty does not make sense. Closed-loop stability is also guaranteed for this controller, if the optimization problem at time $t=0$ is feasible.

Fig. 2. Time profiles for the closed-loop system from two initial conditions.

For a total simulation time of 10 time-units, the elapsed CPU times are shown in Table 1 for some different initial conditions, where controllers A, B and C use the same optimization routine NAG E04UCF (Numerical Algorithms Group, 1993) and the same integration algorithm (Mitchell & Gauthier Associates, 1991) with the same numerical parameters (optimality tolerance, feasibility tolerance, integration step, etc.). The symbol "*" indicates that the optimization problem is not feasible at time $t=0$ for the corresponding initial condition. It is clearly seen that controller A needs significantly less CPU time than controllers B and C. Here, controller B might be treated somewhat unfairly: in practice, one can use techniques such as blocking or confounding to reduce the on-line computation time. However, an important drawback of controller B is that stability can only be achieved by tuning parameters such as the prediction horizon. The big difficulty for controller C is the infeasibility of the optimization problem caused by the terminal equality constraint. Despite the fact that the terminal constraint $\bar x(t+T_p;\,x(t),t)=0$ need only be satisfied with a feasibility tolerance of $10^{-4}$, the optimization problem is not feasible at time $t=0$ for most initial conditions in Table 1. Thus, no stability can be guaranteed. Figure 3 shows two trajectories of the constrained system controlled by controllers A, B and C. We see that there is no big difference in control performance.
1216 H. Chen and F. Allgöwer
Fig. 3. Comparison of nonlinear predictive controllers: A (solid), B (dashed), C (dotted) for two selected initial conditions.

Fig. 4. Terminal region vs. $\mu$: the ellipses from the outside to the inside correspond to $\mu=0.9,\,0.8,\,0.7,\,0.5,\,0.3,\,0.1$.
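The terminal regions plotted in Fig. 4 are sublevel sets of $x^{\mathrm T}Px$; a membership test and the semi-axis lengths of the boundary ellipse follow from an eigendecomposition. A sketch using $P$ from equation (40) and $\alpha=0.7$ from equation (41):

```python
import numpy as np

def in_terminal_region(x, P, alpha):
    """Membership test for Omega_alpha = {x : x^T P x <= alpha}, eq. (41)."""
    x = np.asarray(x, dtype=float)
    return float(x @ P @ x) <= alpha

def ellipse_semi_axes(P, alpha):
    """Semi-axis lengths sqrt(alpha / lambda_i) of the boundary x^T P x = alpha."""
    return np.sqrt(alpha / np.linalg.eigvalsh(P))

P = np.array([[16.5926, 11.5926], [11.5926, 16.5926]])  # equation (40)
alpha = 0.7                                              # equation (41)
# The eigenvalues of P are 5.0 and 28.1852, so the boundary ellipse has
# semi-axes sqrt(0.7 / 5.0) and sqrt(0.7 / 28.1852).
```

Repeating this computation with the $P$ obtained for each value of $\mu$ would reproduce the family of ellipses sketched in Fig. 4.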
penalty matrix $P$ and a terminal region $\Omega$ can be determined off-line in a straightforward manner, essentially involving the solution of a linear stabilization problem and of a Lyapunov equation. This is outlined in the procedure given in the paper.

The main advantage of this scheme is its guaranteed asymptotic stability. In addition, the quasi-infinite horizon nonlinear MPC scheme is computationally more attractive than other known nonlinear MPC schemes that also guarantee asymptotic stability (terminal equality constraint, infinite horizon). This was also demonstrated with the control of the unstable and constrained system in the example.

The results presented in this paper must, however, be viewed only as a further step towards a practical nonlinear MPC theory. As usual, we have assumed that there is no model/plant mismatch, that no disturbances are acting on the system and that the whole state vector can be measured. We do, however, not need to assume that at every step the globally optimal input profile is found numerically: stability only requires feasible solutions to the optimization problem. It should nevertheless be pointed out that the given conditions for nominal asymptotic stability are only sufficient.

Current investigations focus on robustness properties of this control scheme, on a further reduction of the computational burden, and on a generalization of the setup to include more general objective functionals and additional state constraints.

REFERENCES

Bitmead, R. R., M. Gevers and V. Wertz (1990). Adaptive Optimal Control - The Thinking Man's GPC. Prentice-Hall, New York.
Chen, C. C. and L. Shaw (1982). On receding horizon feedback control. Automatica, 18(3), 349-352.
Chen, H. (1997). Stability and Robustness Considerations in Nonlinear Model Predictive Control. Fortschr.-Ber. VDI Reihe 8 Nr. 674, VDI Verlag, Düsseldorf.
Chen, H. and F. Allgöwer (1997a). Nonlinear model predictive control schemes with guaranteed stability. In R. Berber (Ed.), NATO Advanced Study Institute on Nonlinear Model Based Process Control. Kluwer Academic Publishers, Dordrecht, The Netherlands (in press).
Chen, H. and F. Allgöwer (1997b). A quasi-infinite horizon nonlinear predictive control scheme for stable systems: Application to a CSTR. In Proc. IFAC Int. Symp. Adv. Control of Chemical Processes, ADCHEM, Banff, pp. 471-476.
Cutler, C. R. and B. L. Ramaker (1980). Dynamic matrix control - A computer control algorithm. In Proc. Joint Automatic Control Conference, San Francisco, CA.
Desoer, C. A. and M. Vidyasagar (1975). Feedback Systems: Input-Output Properties. Academic Press, New York.
García, C. E., D. M. Prett and M. Morari (1989). Model predictive control: Theory and practice - a survey. Automatica, 25, 335-347.
Genceli, H. and M. Nikolaou (1993). Robust stability analysis of constrained L1-norm model predictive control. AIChE J., 39(12), 1954-1965.
Genceli, H. and M. Nikolaou (1995). Design of robust constrained model-predictive controllers with Volterra series. AIChE J., 41(9), 2098-2107.
Hahn, W. (1967). Stability of Motion. Springer-Verlag, Berlin.
Keerthi, S. S. and E. G. Gilbert (1988). Optimal infinite-horizon feedback laws for a general class of constrained discrete-time systems: Stability and moving-horizon approximations. J. Opt. Theory Appl., 57(2), 265-293.
Khalil, H. K. (1992). Nonlinear Systems. Macmillan, New York.
Lee, J. H. (1997). Recent advances in model predictive control and other related areas. In Y. C. Kantor, C. E. Garcia, B. Carnahan (Eds.), Chemical Process Control - Assessment and New Directions for Research. AIChE Symposium Series, Vol. 93, No. 316, pp. 201-216.
Mayne, D. Q. (1995). Optimization in model based control. In Proc. IFAC Symposium Dynamics and Control of Chemical Reactors, Distillation Columns and Batch Processes, Helsingør, pp. 229-242.
Mayne, D. Q. and H. Michalska (1990). Receding horizon control of nonlinear systems. IEEE Trans. Automat. Control, AC-35(7), 814-824.
Meadows, E. S., M. A. Henson, J. W. Eaton and J. B. Rawlings (1995). Receding horizon control and discontinuous state feedback stabilization. Int. J. Control, 62(5), 1217-1229.
Michalska, H. and D. Q. Mayne (1991). Receding horizon control of nonlinear systems without differentiability of the optimal value function. Syst. Control Lett., 16, 123-130.
Michalska, H. and D. Q. Mayne (1993). Robust receding horizon control of constrained nonlinear systems. IEEE Trans. Automat. Control, AC-38(11), 1623-1633.
Mitchell & Gauthier Associates (1991). ACSL (Advanced Continuous Simulation Language) Reference Manual, Edition 10.0.
Muske, K. R. (1995). Linear model predictive control of chemical processes. PhD thesis, Univ. of Texas at Austin.
Nevistić, V. and M. Morari (1995). Constrained control of feedback-linearizable systems. In Proc. 3rd European Control Conf. ECC'95, Rome, pp. 1726-1731.
Numerical Algorithms Group (1993). NAG Fortran Library, Mark 16.
Polak, E. and T. H. Yang (1993). Moving horizon control of linear systems with input saturation and plant uncertainty. Part 1. Robustness. Int. J. Control, 58(3), 613-638.
Qin, S. J. and T. A. Badgwell (1997). An overview of industrial model predictive control technology. In Y. C. Kantor, C. E. Garcia, B. Carnahan (Eds.), Chemical Process Control - Assessment and New Directions for Research. AIChE Symposium Series, Vol. 93, No. 316, pp. 232-256.
Rawlings, J. B., E. S. Meadows and K. R. Muske (1994). Nonlinear model predictive control: A tutorial and survey. In Proc. IFAC Int. Symp. Adv. Control of Chemical Processes, ADCHEM, Kyoto, Japan.
Rawlings, J. B. and K. R. Muske (1993). The stability of constrained receding horizon control. IEEE Trans. Automat. Control, AC-38(10), 1512-1516.
Richalet, J. (1993). Industrial applications of model based predictive control. Automatica, 29(5), 1251-1274.
Richalet, J., A. Rault, J. L. Testud and J. Papon (1978). Model predictive heuristic control: Application to industrial processes. Automatica, 14, 413-428.
Scokaert, P. O. M. and J. B. Rawlings (1996). Infinite horizon linear quadratic control with constraints. In Proc. 13th IFAC World Congress, Vol. M, San Francisco, pp. 109-114.
van den Boom, T. J. J. (1996). Model based predictive control: Status and perspective. In Symposium on Control, Optimization and Supervision, CESA'96 IMACS Multiconference, Lille, pp. 1-12.
Yang, T. H. and E. Polak (1993). Moving horizon control of nonlinear systems with input saturation, disturbances and plant uncertainty. Int. J. Control, 58(4), 875-903.