MPC 2
Part II
OVERVIEW OF INDUSTRIAL MPC
TECHNIQUES
© 1997 by Jay H. Lee, Jin Hoon Choi, and Kwang Soon Lee
Contents
1 INTRODUCTION TO MODEL PREDICTIVE CONTROL
1.1 BACKGROUND FOR MPC DEVELOPMENT
1.2 WHAT'S MPC
1.3 WHY MPC?
1.3.1 SOME EXAMPLES
1.3.2 SUMMARY
1.4 INDUSTRIAL USE OF MPC: OVERVIEW
1.4.1 MOTIVATION
1.4.2 SURVEY OF MPC USE
1.5 HISTORICAL PERSPECTIVE
1.6 CHALLENGES
1.6.1 MODELING & IDENTIFICATION
1.6.2 INCORPORATION OF STATISTICAL CONCEPTS
1.6.3 NONLINEAR CONTROL
1.6.4 OTHER ISSUES
Chapter 1
INTRODUCTION TO MODEL PREDICTIVE CONTROL
1.1 BACKGROUND FOR MPC DEVELOPMENT
Two main driving forces for a new process control paradigm in the late 70's / early 80's:

1. Energy crisis + global competition + environmental regulations
   ⇓
   - process integration
   - reduced design / safety margins
   - real-time optimization
   - tighter quality control
   ⇓
   higher demands on process control.

2. (Remarkable) advances in microprocessor technology.
[Figure: computer process control. The plant sits between a hold and a sampler, synchronized by a clock; the computer runs the control algorithm with an on-line model in memory and an optimizer.]
Commercial / academic MPC algorithms:
- HIECON (Adersa)
- OPC (Treiber)
- MAC, IMC, GPC, GMC, UPC, ...

Common characteristics:
- model based
- explicit prediction of future system behavior
- explicit consideration of constraints
- use of on-line mathematical programming
- receding horizon control: repeated computation of an open-loop optimal trajectory with feedback update ⇒ implicit feedback control.

Many process control problems involve
- constraints
- competing optimization requirements

MPC provides a systematic, unified solution to problems with these characteristics.
[Figure: blending system. Valve positions set the Additive A and B stock flows into the blending system model; the controlled variables are the additive ratios and the total blend flow.]
Classical Solution:

[Figure: ratio control with selectors. Flow transmitters (FT) on the stock and additive streams feed ratio setpoints through flow controllers (FC); high/low selectors and a valve-position controller with feedback coordinate the Additive A and B flows against the total blended-flow setpoint.]
MPC Solution:

At t = k, solve

$$\min_{u}\ \sum_{i=1}^{p} \left\| \begin{bmatrix} (r_A)_{k+i|k} \\ (r_B)_{k+i|k} \end{bmatrix} - \begin{bmatrix} (r_A)_{\mathrm{ref}} \\ (r_B)_{\mathrm{ref}} \end{bmatrix} \right\|_Q^2 + \left\| q_{k+i|k} - q_{\mathrm{ref}} \right\|_R^2$$

subject to

$$\begin{bmatrix} (u_1)_{\min} \\ (u_2)_{\min} \\ (u_3)_{\min} \end{bmatrix} \le \begin{bmatrix} (u_1)_j \\ (u_2)_j \\ (u_3)_j \end{bmatrix} \le \begin{bmatrix} (u_1)_{\max} \\ (u_2)_{\max} \\ (u_3)_{\max} \end{bmatrix}, \qquad j = 0, \ldots, p-1$$
[Figure: both inputs are constrained to ±0.5.]

- strong interaction
- "wind-up" during saturation
- saturation of one input requires recoordination of the other input
[Figure: classical solution. Two PID loops with anti-windup: C1 drives L (bounded to ±0.5) to track (T1)ref and C2 drives V (bounded to ±0.5) to track (T2)ref on the 2×2 process G with disturbances D1, D2.]
- The T1 controller does not know that V has saturated, and vice versa ⇒ coordination of the other input during the saturation of one input is impossible.
- Mode-switching logic is difficult to design / debug (can you do it?) and causes "bumps", etc.
MPC Solution:

[Figure: a single MPC with the MIMO model and the constraints −0.5 ≤ L ≤ 0.5, −0.5 ≤ V ≤ 0.5 drives both L and V to track (T1)ref and (T2)ref on the process G with disturbances D1, D2.]
At t = k, solve

$$\min_{U_k}\ \sum_{i=1}^{p} \left\| \begin{bmatrix} (T_1)_{k+i|k} \\ (T_2)_{k+i|k} \end{bmatrix} - \begin{bmatrix} (T_1)_{\mathrm{ref}} \\ (T_2)_{\mathrm{ref}} \end{bmatrix} \right\|_Q^2 + \sum_{i=0}^{m-1} \left\| \begin{bmatrix} \Delta L_{k+i|k} \\ \Delta V_{k+i|k} \end{bmatrix} \right\|_R^2$$

with

$$\begin{bmatrix} L_{\min} \\ V_{\min} \end{bmatrix} \le \begin{bmatrix} L_{k+i|k} \\ V_{k+i|k} \end{bmatrix} \le \begin{bmatrix} L_{\max} \\ V_{\max} \end{bmatrix} \qquad \text{for } i = 0, \ldots, m-1$$
- easy to design / debug / reconfigure.
- anti-windup is automatic.
- optimal coordination of the inputs is automatic.
[Figure: classical solution. A motor-driven compressor with speed control (SC); the flow controller (FC) and pressure controller (PC) outputs pass through a low selector (LS) to set the speed setpoint, trading off discharge flow against pressure.]
MPC Solution:

[Figure: compressor model, motor speed → flowrate and pressure.]

At t = k, solve

$$\min_{U_k}\ \sum_{i=1}^{p} \left\| q_{k+i|k} - q_{\mathrm{ref}} \right\|_Q^2 + \sum_{i=0}^{m-1} \left\| \Delta u_{k+i|k} \right\|_R^2$$

with

$$P_{k+i|k} \le P_{\max} \qquad \text{for } i = 1, \ldots, p$$
[Figure: classical solution. A vessel of hot saturated liquid: a PI level controller (LC) and a flow controller (FC) with a low selector (LS) share the valve position vp, controlling the level L and the flow q.]
MPC Solution:

At t = k, solve

$$\min_{U_k}\ \sum_{i=1}^{p} \left\| q_{k+i|k} - q_{\mathrm{ref}} \right\|_Q^2 + \sum_{i=0}^{m-1} \left\| \Delta u_{k+i|k} \right\|_R^2$$

with

$$L_{k+i|k} \ge L_{\min} \qquad \text{for } i = 1, \ldots, p$$
Classical solution:

[Figure: valve-position control of an air header. A valve-position controller (VPC) with a high selector (HS) receives feedback signals from the individual process control loops; the header pressure controller (PC, with transmitter PT) sets the air compressor speed (SC), while the process demands for air draw from the header.]
MPC Solution:

[Figure: air distribution network model, valve positions → air flow rates (primary control variables) and header pressure (secondary control variable).]

At t = k, solve

$$\min_{U_k}\ \sum_{i=1}^{p} \left\| \begin{bmatrix} (q_1)_{k+i|k} \\ \vdots \\ (q_n)_{k+i|k} \end{bmatrix} - \begin{bmatrix} (q_1)_{\mathrm{ref}} \\ \vdots \\ (q_n)_{\mathrm{ref}} \end{bmatrix} \right\|_Q^2 + \sum_{i=1}^{p} \left\| P_{k+i|k} - P_{\min} \right\|_R^2$$

with Q ≫ R and

$$\begin{bmatrix} P_{\min} \\ (u_1)_{\min} \\ \vdots \\ (u_n)_{\min} \end{bmatrix} \le \begin{bmatrix} P_{k+i|k} \\ (u_1)_{k+i|k} \\ \vdots \\ (u_n)_{k+i|k} \end{bmatrix} \le \begin{bmatrix} P_{\max} \\ (u_1)_{\max} \\ \vdots \\ (u_n)_{\max} \end{bmatrix} \qquad \text{for } i = 0, \ldots, m-1$$
[Figure: a distillation column with top and side draws. Pressure control (PC) and reflux drum at the top; reflux-drum level control (LC); tray temperatures (T) and analyzers (A); upper reflux and bottoms reflux duties; feed and bottoms streams.]
Classical Solution: not clear.

[Figure: Heavy-Oil Fractionator. Inputs u1, u2, u3 and disturbances w1, w2; outputs y1, y2 (compositions) and y3, ..., y7 (temperatures).]
$$\min_{U_k}\ \sum_{l=1}^{p} \left\| \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}_{k+l|k} - \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}_{\mathrm{ref}} \right\|_Q^2 + \sum_{i=1}^{m} \left\| \Delta \begin{bmatrix} u_1 \\ u_2 \\ u_3 \end{bmatrix}_{k+i|k} \right\|_R^2$$

with

$$(y_7)_{k+l|k} \ge T_{\min}$$

plus other input constraints.
[Figure: the Tennessee Eastman challenge process. Reactor, condenser, vapor/liquid separator, compressor, and stripper, with PI loops #1 through #4, level/pressure/temperature transmitters (LT, PT, TI), speed control (SC), and composition analyzers (XA through XH) on the recycle, purge, and product streams.]
1.3.2 SUMMARY
Advantages of MPC over Traditional APC
[Figure: investment (%) vs. potential benefit (%). DCS sits at the low end of the curve, followed by constrained multivariable control, real-time optimization, and traditional APC.]
Note that research on optimal control was most vigorous in the 50s and 60s. Most of the results during this period were for open-loop optimal control. For feedback implementation, they hinted at the idea of receding horizon control. However, most of the results were impractical due to the lack of implementation hardware.
Some remarkable results of this period include
- Pontryagin's maximum principle.
- the Hamilton-Jacobi-Bellman equation for optimal feedback control.
- Feldbaum's dual control concept.
Due to the lack of sophisticated hardware, it was highly desirable to derive a closed-form control law that could be implemented with the computational equipment available at reasonable cost. The work of Kalman represents a major achievement in this regard.

Kalman derived analytical solutions to
- the linear quadratic optimal control problem for deterministic systems ⇒ (∞-horizon) LQR
- the same problem for Gaussian stochastic systems ⇒ LQG = LQR + Kalman filter

These solutions were important because they were among the very few analytical solutions to optimal feedback control problems.
However, his work (based on the idea of stage-wise solution using dynamic programming) could not be extended to constrained or nonlinear systems.

In the 70's, Kwon and Pearson discussed the idea of receding horizon control (a cornerstone of MPC) in the context of LQ optimal control, and how to achieve stability with such a control law. However, they did not consider constrained / nonlinear problems and failed to motivate the need for such an approximation.
In the late 70s and early 80s, there were several reports of the successful use of optimal control concepts in the oil industry. For instance, Charlie Cutler reported the success of implementing the so-called Dynamic Matrix Control in Shell Oil refining units.

This started an avalanche of similar algorithms and industrial projects. From here on, process control would never be the same.

The industrial success renewed the academics' enthusiasm for optimal control. Prototypical algorithms were analyzed and improved. Also, connections to Kalman's work and other optimal control theories were brought forth.

Now, MPC is an essential tool of the trade for process control engineers. There are more than a dozen vendors offering commercial software packages and engineering services. There is probably not a single major oil company where MPC is not employed in its new installations or revamps. MPC is also very well understood from a theoretical standpoint.
1.6 CHALLENGES
1.6.1 MODELING & IDENTIFICATION
- Most models for MPC design come from identification rather than fundamental modeling.
- System ID takes up to 80-90% of the cost and time in a typical implementation of a model-based controller.
Current Practice
Model B

$$\frac{1}{10s+1} \begin{bmatrix} 0.867 & -0.875 \\ 1.095 & -1.083 \end{bmatrix}$$

Model C

$$\frac{1}{10s+1} \begin{bmatrix} 0.808 & -0.804 \\ 1.006 & -1.025 \end{bmatrix}$$
[Figure: on-line measurements y feed the model, which produces on-line predictions; the controller uses the predictions to compute the process input u.]
[Figure: on-line measurements y feed the model, which produces on-line predictions; the controller uses the predictions to compute the process input u.]
Chapter 2
DYNAMIC MATRIX CONTROL
Dynamic Matrix Control
- Proposed by C. Cutler at Shell (he later became the President of DMCC).
- Based on a system representation using step response coefficients.
- Currently marketed by AspenTech (under the name DMC-Plus).
- Prototypical of the commercial MPC algorithms used in the process industries.

We will discuss the core features of the algorithm; commercial versions may differ in some details.
[Figure: sampled-data system. Hold and sampler around the plant, synchronized at the sample instants 1, 2, 3, ....]
Impulse Response Model

Assumptions:
- H0 = 0: no immediate effect of the input (one-sample delay).
- ∃ n such that H(n+1) = H(n+2) = ... = 0: "Finite Impulse Response" (reasonable for stable processes).

[Figure: example impulse responses, e.g., a flow control (FC) loop.]
[Figure: superposition of scaled impulse responses from the past inputs v(k−1), ..., v(k−n).]

$$y(k) = H_1\, v(k-1) + H_2\, v(k-2) + \cdots + H_n\, v(k-n)$$
NOTE: the n past inputs (v(k−1), ..., v(k−n)) must be kept in memory.
Step Response Model

Assumptions:
- S0 = 0: no immediate effect of a step input.
- Sn = S(n+1) = S(n+2) = ... (= S∞): equivalent to the "Finite Impulse Response" assumption (the step response settles after n steps).
[Figure: a zero-order-hold signal decomposed into a sum of steps of sizes Δυ(0), Δυ(1), Δυ(2), ....]
As shown above, any z.o.h. signal v(t) can be represented as a sum of steps:

$$v(t) = \sum_{i=0}^{\infty} \Delta v(i)\, S(t-i)$$

where Δv(i) = v(i) − v(i−1) and S(t−i) is a unit step starting at the ith time step.

Using this and superposition,

$$y(k) = S_1\, \Delta v(k-1) + S_2\, \Delta v(k-2) + \cdots + S_n \underbrace{\big(\Delta v(k-n) + \Delta v(k-n-1) + \cdots + \Delta v(0)\big)}_{v(k-n)}$$
More compactly,

$$y(k) = \sum_{i=1}^{n-1} S_i\, \Delta v(k-i) + S_n\, v(k-n)$$
[Figure: superposition of step responses. S1 Δv(k−1) + ... + S(n−1) Δv(k−n+1) + Sn v(k−n) reproduces y(k).]
Note:
1. Δv(k−i) instead of v(k−i) appears in the model.
2. v(k−n) and Δv(k−n+1), ..., Δv(k−2), Δv(k−1) must be stored in memory (the same storage requirement as in the FIR model).
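The impulse- and step-response forms are algebraically equivalent (with S_i = H_1 + ... + H_i). A minimal Python sketch, with hypothetical coefficient values chosen only for illustration:

```python
def fir_output(H, v_past):
    """Impulse-response form: y(k) = H1*v(k-1) + ... + Hn*v(k-n).
    H = [H1, ..., Hn], v_past = [v(k-1), ..., v(k-n)]."""
    return sum(h * v for h, v in zip(H, v_past))

def step_output(S, v_past):
    """Step-response form: y(k) = sum_{i=1}^{n-1} S_i*dv(k-i) + S_n*v(k-n)."""
    n = len(S)
    dv = [v_past[i] - v_past[i + 1] for i in range(n - 1)]  # dv(k-i) = v(k-i) - v(k-i-1)
    return sum(S[i] * dv[i] for i in range(n - 1)) + S[-1] * v_past[-1]

H = [0.5, 0.3, 0.1]          # hypothetical impulse-response coefficients (n = 3)
S = [0.5, 0.8, 0.9]          # their cumulative sums: step-response coefficients
v_past = [1.0, -2.0, 0.5]    # v(k-1), v(k-2), v(k-3)
print(fir_output(H, v_past), step_output(S, v_past))  # both give about -0.05
```

Both forms need only the same n numbers from the past, which is the point of the note above.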
[Figure: "dynamic states" X(k), the memory of past inputs, shown against past/future timelines.]
This choice is not practical since the memory size keeps growing with time.

- For an FIR system, one only has to keep the n past inputs (why?):
  x(k) = [v(k−1), v(k−2), ..., v(k−n)]^T
  With this choice of x(k), we can certainly build the prediction of the future output behavior.
- Since the ultimate purpose of the memory is to predict future outputs, the past may be more conveniently tracked in terms of its effect on the future rather than the past itself. This is discussed next.
[Figure: the past inputs plus a hypothesized future input trajectory.]
[Figure: memory update. The current memory Y⁰(k) and the new input υ(k) produce Y⁰(k+1).]
The memory update is

$$M^0\, Y^0(k) + H\, v(k) = Y^0(k+1)$$

with Y⁰(k) = [y⁰(k/k), y⁰(k+1/k), ..., y⁰(k+n−1/k)]^T, i.e.,

$$\underbrace{\begin{bmatrix} y^0(k+1/k) \\ y^0(k+2/k) \\ \vdots \\ y^0(k+n-1/k) \\ 0 \end{bmatrix}}_{M^0 Y^0(k)} + \underbrace{\begin{bmatrix} H_1 \\ H_2 \\ \vdots \\ H_{n-1} \\ H_n \end{bmatrix}}_{H} v(k) = \underbrace{\begin{bmatrix} y^0(k+1/k+1) \\ y^0(k+2/k+1) \\ \vdots \\ y^0(k+n-1/k+1) \\ y^0(k+n/k+1) \end{bmatrix}}_{Y^0(k+1)}$$

[Figure: M⁰ shifts Y⁰(k) one step forward (appending 0, since the FIR effect of the stored inputs has died out by k+n), and H v(k) adds the effect of the new input v(k).]
Past Effects As Memory

Define Ỹ(k) as the future output deviation due to past input deviations plus the current bias:

$$\tilde{Y}(k) = [\tilde{y}(k/k),\ \tilde{y}(k+1/k),\ \ldots,\ \tilde{y}(k+n-1/k)]^T$$

where ỹ(i/k) = y(i) assuming Δv(k+j) = 0 for j ≥ 0.

Note that ỹ(k+n−1/k) = ỹ(k+n/k) = ... = ỹ(∞/k), thus allowing a finite-dimensional representation of the future output trajectory. This vector can be chosen as the state.
[Figure: update of Ỹ(k). The shifted memory M Ỹ(k) plus the step response scaled by the new move, S Δυ(k), gives Ỹ(k+1).]

Hence,

$$\tilde{Y}(k+1) = \underbrace{\begin{bmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ 0 & 0 & 0 & \cdots & 1 \end{bmatrix}}_{M} \tilde{Y}(k) + \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-1} \\ S_n \end{bmatrix} \Delta v(k)$$

Note that multiplication by M in the above represents a shift operation of a different kind (the last element is repeated).
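The multiplication by M is just a list shift with the last element repeated, followed by adding the scaled step response; the matrix itself never needs to be formed. A Python sketch with hypothetical coefficients:

```python
def update_memory(Y, S, dv):
    """Y~(k+1) = M*Y~(k) + S*dv(k): shift up, repeat the last element,
    then add the step response scaled by the new input move dv(k)."""
    shifted = Y[1:] + [Y[-1]]          # the effect of M (last element repeated)
    return [ys + s * dv for ys, s in zip(shifted, S)]

S = [0.5, 0.8, 0.9]                    # hypothetical step-response coefficients
Y = [0.0, 0.0, 0.0]                    # at rest
Y = update_memory(Y, S, 1.0)           # a unit move: Y becomes the step response
Y = update_memory(Y, S, 0.0)           # no move: the memory shifts and settles
print(Y)                               # [0.8, 0.9, 0.9]
```

Starting from rest, a single unit move reproduces the step response, and subsequent zero moves shift it out while the settled value is held, exactly the behavior the repeated last row of M encodes.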
Again, define Ỹ(k+1) and Ỹ(k) in the same manner as before (now they are (n·n_y)-dimensional vectors). Then,

$$\tilde{Y}(k+1) = M\,\tilde{Y}(k) + S\,\Delta v(k)$$

where

$$M = \begin{bmatrix} 0 & I & 0 & \cdots & 0 \\ 0 & 0 & I & \cdots & 0 \\ \vdots & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & I \\ 0 & 0 & 0 & \cdots & I \end{bmatrix}, \qquad S = \begin{bmatrix} S_1 \\ S_2 \\ \vdots \\ S_{n-1} \\ S_n \end{bmatrix}$$

and I is an n_y × n_y identity matrix. Again, M merely represents the shift operation; such a matrix does not need to be formed explicitly.
[Figure: the receding-horizon idea. Projected outputs approach the target over the prediction horizon while the future input moves are computed over the control horizon.]
- Memory: stores the effect of past inputs on future outputs.
- Predictor: combines the information stored in the memory with the model information to predict the effect of hypothesized future input adjustments on future outputs:

$$(y(k+1|k),\ y(k+2|k),\ \ldots,\ y(k+p|k)) = f\big(I(k),\ (u(k), \ldots, u(k+m-1))\big)$$

where I(k) denotes all the available information at k (stored in the memory).
- Objective function and constraints.
- Optimization program.

User-chosen parameters are the prediction horizon, the control horizon, and the parameters in the objective function and constraints.
- u : manipulated variable
- d : measured / modelled disturbance
- w_y : unmeasured disturbance + model error / bias effect
(continuing the prediction equation, with w_y(k) = w_y(k+1) = ... = 0 as one simple choice)

$$Y(k+1|k) = \cdots + \begin{bmatrix} S_1^u \\ S_2^u \\ \vdots \\ S_p^u \end{bmatrix} \Delta u(k|k) + \begin{bmatrix} 0 \\ S_1^u \\ \vdots \\ S_{p-1}^u \end{bmatrix} \Delta u(k+1|k) + \cdots + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ S_1^u \end{bmatrix} \Delta u(k+p-1|k)$$

$$\quad + \begin{bmatrix} S_1^d \\ S_2^d \\ \vdots \\ S_p^d \end{bmatrix} \Delta d(k) + \begin{bmatrix} 0 \\ S_1^d \\ \vdots \\ S_{p-1}^d \end{bmatrix} \Delta d(k+1|k) + \cdots + \begin{bmatrix} 0 \\ \vdots \\ 0 \\ S_1^d \end{bmatrix} \Delta d(k+p-1|k)$$

$$\quad + \begin{bmatrix} w(k+1|k) \\ w(k+2|k) \\ \vdots \\ w(k+p|k) \end{bmatrix}$$

There are more than a few terms (those marked with (·|k)) on the right-hand side that are not known at time k and must be hypothesized or extrapolated.
In summary,

$$Y_{k+1|k} = \underbrace{\begin{bmatrix} \tilde{y}(k+1/k) \\ \tilde{y}(k+2/k) \\ \vdots \\ \tilde{y}(k+p/k) \end{bmatrix}}_{M_p \tilde{Y}(k)\ \text{(memory)}} + \underbrace{\begin{bmatrix} S_1^d \\ S_2^d \\ \vdots \\ S_p^d \end{bmatrix} \Delta d(k)}_{S^d \Delta d(k)\ \text{(feedforward term)}} + \underbrace{\begin{bmatrix} y(k) - \tilde{y}(k/k) \\ y(k) - \tilde{y}(k/k) \\ \vdots \\ y(k) - \tilde{y}(k/k) \end{bmatrix}}_{I_p\,(y(k) - \tilde{y}(k/k))\ \text{(feedback term)}}$$

$$\quad + \underbrace{\begin{bmatrix} S_1^u & 0 & \cdots & 0 \\ S_2^u & S_1^u & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ S_m^u & S_{m-1}^u & \cdots & S_1^u \\ \vdots & \vdots & & \vdots \\ S_p^u & S_{p-1}^u & \cdots & S_{p-m+1}^u \end{bmatrix}}_{S^u\ \text{(dynamic matrix)}} \underbrace{\begin{bmatrix} \Delta u(k|k) \\ \Delta u(k+1|k) \\ \vdots \\ \Delta u(k+m-1|k) \end{bmatrix}}_{\Delta U(k)\ \text{(future input moves)}}$$

NOTE: More complex (dynamic) extrapolation of the feedback errors is possible. For instance, for ramp disturbances, use

$$\Delta d(k) = \Delta d(k+1|k) = \cdots = \Delta d(k+p-1|k)$$

$$w_y(k+\ell|k) = w_y(k|k) + \ell\,\big(w_y(k|k) - w_y(k-1|k-1)\big)$$
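The dynamic matrix is a (block) Toeplitz arrangement of the step-response coefficients, held at S_n once the response has settled. A SISO Python sketch with hypothetical coefficients:

```python
def dynamic_matrix(S, p, m):
    """p x m dynamic matrix with entry (i, j) = S_{i-j+1}, zero above the
    diagonal; coefficient indices beyond n are clamped to S_n (settled)."""
    def s(idx):                        # idx is the 1-based coefficient index
        return 0.0 if idx <= 0 else S[min(idx, len(S)) - 1]
    return [[s(i - j + 1) for j in range(m)] for i in range(p)]

S = [0.5, 0.8, 0.9]                    # hypothetical step-response coefficients (n = 3)
Su = dynamic_matrix(S, p=4, m=2)
# Su = [[0.5, 0.0],
#       [0.8, 0.5],
#       [0.9, 0.8],
#       [0.9, 0.9]]
```

Column j is simply the step response delayed by j samples, which is exactly the superposition argument used to derive the prediction equation.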
[Figure: ramp extrapolation of the feedback errors, using the increments Δd(k), Δd(k+1|k), Δd(k+2|k) and the slope w(k) − w(k−1).]
$$\min_{\Delta u(\cdot|k)}\ V(k) = \sum_{i=1}^{p} \big(r(k+i|k) - y(k+i|k)\big)^T Q\, \big(r(k+i|k) - y(k+i|k)\big) + \sum_{\ell=0}^{m-1} \Delta u^T(k+\ell|k)\, R\, \Delta u(k+\ell|k)$$

Q and R are weighting matrices; they are typically chosen as diagonal matrices.
2.3.6 CONSTRAINTS

Constraints include
- Input magnitude constraints
- Input rate constraints
- Output magnitude constraints
At t = k, one has

[Figure: past/future horizons. The projected outputs must stay below y_max and the inputs below u_max over the horizon.]
The input magnitude limits become constraints on the moves:

$$\underbrace{u(k-1) + \sum_{i=0}^{\ell} \Delta u(k+i|k)}_{u(k+\ell|k)} \ge u_{\min}, \qquad -u(k-1) - \sum_{i=0}^{\ell} \Delta u(k+i|k) \ge -u_{\max}$$

⇕ (stacking ℓ = 0, ..., m−1)

$$\begin{bmatrix} \begin{bmatrix} I & 0 & \cdots & 0 \\ I & I & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ I & I & \cdots & I \end{bmatrix} \\ -\begin{bmatrix} I & 0 & \cdots & 0 \\ I & I & \ddots & \vdots \\ \vdots & & \ddots & 0 \\ I & I & \cdots & I \end{bmatrix} \end{bmatrix} \begin{bmatrix} \Delta u(k|k) \\ \Delta u(k+1|k) \\ \vdots \\ \Delta u(k+m-1|k) \end{bmatrix} \ge \begin{bmatrix} \begin{bmatrix} u_{\min} - u(k-1) \\ \vdots \\ u_{\min} - u(k-1) \end{bmatrix} \\ -\begin{bmatrix} u_{\max} - u(k-1) \\ \vdots \\ u_{\max} - u(k-1) \end{bmatrix} \end{bmatrix}$$

⇕

$$\begin{bmatrix} I_L \\ -I_L \end{bmatrix} \Delta U(k) \ge \begin{bmatrix} \begin{bmatrix} u_{\min} - u(k-1) \\ \vdots \\ u_{\min} - u(k-1) \end{bmatrix} \\ -\begin{bmatrix} u_{\max} - u(k-1) \\ \vdots \\ u_{\max} - u(k-1) \end{bmatrix} \end{bmatrix}$$
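Multiplying ΔU(k) by the block lower-triangular matrix I_L is just cumulative summation of the moves on top of u(k−1). A small Python sketch (values illustrative):

```python
def positions_from_moves(u_prev, dU):
    """u(k+l|k) = u(k-1) + sum_{i<=l} du(k+i|k): the effect of I_L."""
    out, u = [], u_prev
    for du in dU:
        u += du
        out.append(u)
    return out

print(positions_from_moves(1.0, [0.5, -0.25, 0.125]))  # [1.5, 1.25, 1.375]
```

So the magnitude constraints on u(k+l|k) translate into linear constraints on the moves without forming I_L explicitly.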
Similarly, the output magnitude constraints

$$y(k+j|k) \ge y_{\min}, \qquad -y(k+j|k) \ge -y_{\max}$$

⇕

$$\begin{bmatrix} M_p\tilde{Y}(k) + S^d\,\Delta d(k) + I_p\,(y(k)-\tilde{y}(k/k)) + S^u\,\Delta U(k) \\ -M_p\tilde{Y}(k) - S^d\,\Delta d(k) - I_p\,(y(k)-\tilde{y}(k/k)) - S^u\,\Delta U(k) \end{bmatrix} \ge \begin{bmatrix} Y_{\min} \\ -Y_{\max} \end{bmatrix}$$

where

$$Y_{\min} = \begin{bmatrix} y_{\min} \\ y_{\min} \\ \vdots \\ y_{\min} \end{bmatrix}, \qquad Y_{\max} = \begin{bmatrix} y_{\max} \\ y_{\max} \\ \vdots \\ y_{\max} \end{bmatrix}$$

since

$$Y(k+1|k) = \begin{bmatrix} y(k+1|k) \\ \vdots \\ y(k+p|k) \end{bmatrix} = M_p\tilde{Y}(k) + S^d\,\Delta d(k) + I_p\,(y(k)-\tilde{y}(k/k)) + S^u\,\Delta U(k)$$

⇕

$$\begin{bmatrix} S^u \\ -S^u \end{bmatrix} \Delta U(k) \ge \begin{bmatrix} Y_{\min} - M_p\tilde{Y}(k) - S^d\,\Delta d(k) - I_p\,(y(k)-\tilde{y}(k/k)) \\ -Y_{\max} + M_p\tilde{Y}(k) + S^d\,\Delta d(k) + I_p\,(y(k)-\tilde{y}(k/k)) \end{bmatrix}$$
In summary, we have

$$\begin{bmatrix} I_L \\ -I_L \\ I \\ -I \\ S^u \\ -S^u \end{bmatrix} \Delta U(k) \ge \begin{bmatrix} U_{\min} - \mathbf{1}\,u(k-1) \\ -\big(U_{\max} - \mathbf{1}\,u(k-1)\big) \\ -\Delta U_{\max} \\ -\Delta U_{\max} \\ Y_{\min} - M_p\tilde{Y}(k) - S^d\,\Delta d(k) - I_p\,(y(k)-\tilde{y}(k/k)) \\ -Y_{\max} + M_p\tilde{Y}(k) + S^d\,\Delta d(k) + I_p\,(y(k)-\tilde{y}(k/k)) \end{bmatrix}$$

where U_min, U_max, and ΔU_max stack u_min, u_max, and Δu_max m times; the I and −I rows impose the symmetric rate limits −Δu_max ≤ Δu(k+ℓ|k) ≤ Δu_max.
In our case, at t = k,

$$x:\quad \Delta U(k)$$

$$H:\quad H = (S^u)^T\, Q\, S^u + R$$

$$g:\quad G(k) = 2\,(S^u)^T\, Q\, \big(M_p\tilde{Y}(k) + S^d\,\Delta d(k) + I_p\,(y(k)-\tilde{y}(k/k)) - R_{k+1|k}\big)$$

$$C:\quad C^u = \begin{bmatrix} I_L \\ -I_L \\ I \\ -I \\ S^u \\ -S^u \end{bmatrix}$$

$$c:\quad C(k) = \begin{bmatrix} U_{\min} - \mathbf{1}\,u(k-1) \\ -\big(U_{\max} - \mathbf{1}\,u(k-1)\big) \\ -\Delta U_{\max} \\ -\Delta U_{\max} \\ Y_{\min} - M_p\tilde{Y}(k) - S^d\,\Delta d(k) - I_p\,(y(k)-\tilde{y}(k/k)) \\ -Y_{\max} + M_p\tilde{Y}(k) + S^d\,\Delta d(k) + I_p\,(y(k)-\tilde{y}(k/k)) \end{bmatrix}$$

where R_{k+1|k} is the stacked reference trajectory.
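With the inequality constraints dropped (they are what make a QP solver necessary), the minimizer of x^T H x + G^T x is x = −½ H⁻¹ G. A scalar sketch (p = m = 1) with made-up numbers:

```python
# Scalar case: V = q*(r - y)^2 + r_w*du^2 with prediction y = y0 + s1*du,
# i.e. V = H*du^2 + G*du + const, where H = s1*q*s1 + r_w, G = 2*s1*q*(y0 - r).
s1, q, r_w = 0.5, 1.0, 0.1     # hypothetical model gain and tuning weights
y0, r = 0.0, 1.0               # free response and setpoint
H = s1 * q * s1 + r_w
G = 2 * s1 * q * (y0 - r)
du = -G / (2 * H)              # unconstrained minimizer
print(du, s1 * du)             # the move closes most of the gap to r
```

If a constraint in C^u ΔU ≥ C(k) is violated at this point, a QP solver projects the solution back onto the feasible set; with R = 0 and no constraints the controller would close the gap in one move.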
[Figure: the output constraint y_max enforced over a constraint window from k to k+H_c.]
Note that the above choice of p with the equality constraint amounts to choosing p = ∞. Stability of the closed-loop system is guaranteed under this choice (regardless of the choice of m). The choice of m is not critical for stability; a larger m should result in better performance at the expense of an increased computational requirement.
[Figure: after the last move at k+m−1, the response settles within n more steps, i.e., by k+m+n−1 (N time steps in total).]
The lesson is:
- Use a large enough horizon for the system responses to settle.
- Try to penalize the endpoint error more heavily (if it is not constrained to zero).
in the form of

$$y(\infty|k) = K_s \underbrace{\big(u(\infty|k) - u(k-1)\big)}_{\Delta u_s(k)} + b(k)$$

With only m moves considered,

$$\Delta u_s(k) = \Delta u(k|k) + \Delta u(k+1|k) + \cdots + \Delta u(k+m-1|k)$$

and with the FIR assumption,

$$y(\infty|k) = y(k+m+n-1|k)$$

and K_s = S_n. Hence, the steady-state prediction equation can easily be extracted from the dynamic prediction equation we had earlier.
In terms of the optimization criterion, various choices are possible.

- Most typically, some kind of linear economic criterion is used along with constraints on the inputs and outputs:

$$\min_{\Delta u_s(k)} \left[\, \ell\big(u(\infty|k),\ y(\infty|k)\big) \,\right]$$

In this case, a linear program (LP) results.

- Sometimes, the objective is chosen to minimize the input move size while satisfying various input / output constraints (posed by control requirements and actuator limits, plus those set by the rigorous plant optimizer):

$$\min_{\Delta u_s(k)} \left[\, |\Delta u_s(k)| \,\right]$$

Again, an LP results.

- In pure regulation problems where the setpoint for the output is fixed, one may use

$$\min_{\Delta u_s(k)} \left[\, (r - y(\infty|k))^T Q\, (r - y(\infty|k)) \,\right]$$
The system can be decomposed into the disjoint pairs (u1, u2, u4)-(y1, y2, y4) and u3-y3.
If G34 and G43 have slower dynamics and smaller steady-state gains than the other transfer functions, we may decompose the system as shown in the figure.
Decentralization Options

- Decentralization for both model update and optimization.
- Full model update, but decentralized optimization.
- Full model update, full steady-state optimization (LP), but decentralized dynamic optimization (QP).
The two column vectors of the steady-state gain matrix are collinear. As a result, [y1 y2]^T lies along v1 for any u1 and u2. If the setpoint is given outside v1, it can never be attained.

$$\begin{bmatrix} u_1 \\ u_2 \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
step 2 Using u = G⁺y, check whether the input constraints can be violated for any y within the defined set. If unacceptable, go to the next step.

step 3 The designer takes some of the low-priority outputs out of the CVs. Let the reduced input-output model be y_r = G_r u. Repeat step 2 for the reduced model until the estimated input is acceptable for all possible y_r.
2.4.7 BLOCKING
$$\underbrace{\begin{bmatrix} \Delta u_k \\ \Delta u_{k+1} \\ \vdots \\ \Delta u_{k+m_1-1} \\ \Delta u_{k+m_1} \\ \Delta u_{k+m_1+1} \\ \vdots \\ \Delta u_{k+m_2-1} \\ \Delta u_{k+m_2} \\ \Delta u_{k+m_2+1} \\ \vdots \\ \Delta u_{k+m-1} \end{bmatrix}}_{\Delta U_k} = \underbrace{\begin{bmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ \vdots & \vdots & \vdots \\ 0 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \\ \vdots & \vdots & \vdots \\ 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \\ \vdots & \vdots & \vdots \\ 0 & 0 & 0 \end{bmatrix}}_{B} \underbrace{\begin{bmatrix} \Delta u_{k,1} \\ \Delta u_{k,2} \\ \Delta u_{k,3} \end{bmatrix}}_{\bar{U}_k}$$

- Rather heuristic.
- Many industrial algorithms employ the above technique.
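A sketch of the time-domain blocking transformation in Python (the block boundaries are illustrative):

```python
def blocking_matrix(m, starts):
    """m x len(starts) matrix B: column j has a single 1 at row starts[j],
    so moves are allowed only at the block-start instants."""
    return [[1.0 if i == s else 0.0 for s in starts] for i in range(m)]

def expand_moves(B, ubar):
    """dU_k = B * ubar_k."""
    return [sum(b * u for b, u in zip(row, ubar)) for row in B]

B = blocking_matrix(6, [0, 2, 5])          # m = 6, blocks start at 0, 2, 5
print(expand_moves(B, [1.0, -0.5, 0.25]))  # [1.0, 0.0, -0.5, 0.0, 0.0, 0.25]
```

The optimization is carried out over the three blocked moves instead of six, shrinking the QP at the cost of the heuristic restriction noted above.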
SVD-based Blocking

- The SVD of the pulse response matrix tells us which input directions excite the significant output directions.
- Let the SVD of the truncated pulse response matrix over the control and prediction horizons be

$$H = \begin{bmatrix} W_1 & W_2 \end{bmatrix} \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix} \begin{bmatrix} V_1^T \\ V_2^T \end{bmatrix}$$

where

$$H = \begin{bmatrix} h_1 & 0 & \cdots & 0 \\ h_2 & h_1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ h_m & h_{m-1} & \cdots & h_1 \\ \vdots & \vdots & & \vdots \\ h_p & h_{p-1} & \cdots & h_{p-m+1} \end{bmatrix}$$

- If Σ1 ≫ Σ2, we can choose B = V1 ⇒ approximate ΔU_k as a linear combination of the dominant input principal vectors.
- Considered better than time-domain blocking in that it also provides a structural approximation of MIMO systems.
Q = I, R = 0.01I, p = m = 50, −1 ≤ U_k ≤ 1
$$G_{\mathrm{model}}(s) = \begin{bmatrix} \dfrac{1.5}{10s+1} & \dfrac{2.2}{30s+1} \\[6pt] \dfrac{1.2}{20s+1} & \dfrac{2.6}{10s+1} \end{bmatrix}$$

Q = I, R = I, p = m = 90, no constraints.
Chapter 3
SYSTEM IDENTIFICATION
The quality of model-based control depends entirely on the quality of the model.
Procedure

1. Assume operation at steady state with
   controlled variable (CV): y(t) = y0 for t < t0
   manipulated variable (MV): u(t) = u0 for t < t0
2. Make a step change in u of a specified magnitude Δu:
   u(t) = u0 + Δu for t ≥ t0
3. Measure y(t) at regular intervals:
   y_k = y(t0 + kh) for k = 1, 2, ..., N
   where h is the sampling interval and Nh is the approximate time required to reach steady state.
4. Calculate the step response coefficients from the data:
   s_k = (y_k − y0)/Δu for k = 1, ..., N
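Step 4 is a one-liner; a Python sketch with made-up step-test measurements (Δu = 2, y0 = 1):

```python
def step_coefficients(y, y0, du):
    """s_k = (y_k - y0) / du from the measured step response y[k-1] = y(t0 + k*h)."""
    return [(yk - y0) / du for yk in y]

y = [1.5, 2.0, 2.25, 2.5]   # hypothetical measurements approaching steady state
print(step_coefficients(y, y0=1.0, du=2.0))  # [0.25, 0.5, 0.625, 0.75]
```

The last coefficient approximates the steady-state gain once the response has settled.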
Discussions

1. Choice of sampling period: for modeling, the best h is one such that N = 30 ~ 40.
   Ex: If g(s) = K e^{−ds}/(τs + 1), then the settling time ≈ 4τ + d.
   Therefore, h ≈ (4τ + d)/N = (4τ + d)/40 = 0.1τ + 0.025d.
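The rule of thumb in code (the τ and d values are illustrative):

```python
def sampling_interval(tau, d, N=40):
    """h ~ settling_time / N, with settling_time ~ 4*tau + d
    for a first-order-plus-dead-time process K*exp(-d*s)/(tau*s + 1)."""
    return (4 * tau + d) / N

print(sampling_interval(tau=2.0, d=1.0))   # 9/40 = 0.225
print(sampling_interval(tau=1.0, d=0.0))   # 0.1
```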
108
c 1997 by Jay H. Lee, Jin Hoon Choi, and Kwang Soon Lee
109
c 1997 by Jay H. Lee, Jin Hoon Choi, and Kwang Soon Lee
110
c 1997 by Jay H. Lee, Jin Hoon Choi, and Kwang Soon Lee
111
c 1997 by Jay H. Lee, Jin Hoon Choi, and Kwang Soon Lee
Discussions

1. Select h and N as for the step testing.
2. Usually need a pulse magnitude δu ≫ Δu for an adequate S/N ratio.
3. Multiple experiments are recommended, for the same reason as in the step testing.
4. An appropriate method to detect steady state is required.
5. Theoretically, a pulse is a perfect (unbiased) excitation for linear systems.
Type of Inputs

1. Pseudo-Random Binary Signal (PRBS)
   In MATLAB, u=u0+del*sign(rand(100,1)-0.5);
   or u=mlbs(12);
2. Random Noise
   In MATLAB, u=u0+del*2*(rand(100,1)-0.5);
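A comparable two-level pseudo-random input in Python (a sketch of the idea, not a true shift-register PRBS such as a maximum-length sequence):

```python
import random

def two_level_input(n, u0, delta, seed=0):
    """Random two-level signal u0 +/- delta, cf. the MATLAB one-liner above."""
    rng = random.Random(seed)
    return [u0 + delta * (1.0 if rng.random() < 0.5 else -1.0) for _ in range(n)]

u = two_level_input(100, u0=0.0, delta=0.5)
```

Two-level signals maximize input power for a given amplitude bound, which is why they are preferred for identification under magnitude constraints.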
Consider

$$y_k = h_1 u_{k-1} + h_2 u_{k-2} + \cdots + h_N u_{k-N} + d_k$$

Assume the effects of the initial condition are negligible. Then

$$\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_M \end{bmatrix} = \begin{bmatrix} u_0 & u_{-1} & \cdots & u_{1-N} \\ u_1 & u_0 & \cdots & u_{2-N} \\ \vdots & \vdots & & \vdots \\ u_{M-1} & u_{M-2} & \cdots & u_{M-N} \end{bmatrix} \begin{bmatrix} h_1 \\ h_2 \\ \vdots \\ h_N \end{bmatrix} + \begin{bmatrix} d_1 \\ d_2 \\ \vdots \\ d_M \end{bmatrix}$$

$$y = Uh + d$$

The least squares solution which minimizes

$$(y - Uh)^T (y - Uh) = \sum_{i=1}^{M} \Big( y_i - \sum_{j=1}^{N} h_j u_{i-j} \Big)^2$$

is

$$\hat{h} = (U^T U)^{-1} U^T y$$

In MATLAB, hhat=U\y;
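A self-contained Python sketch of the least-squares FIR fit via the normal equations (a pure-Python stand-in for the backslash solve; the data are synthetic and noise-free, so the true coefficients are recovered exactly):

```python
def lstsq(U, y):
    """Solve (U^T U) h = U^T y by Gaussian elimination with partial pivoting."""
    n = len(U[0])
    A = [[sum(r[i] * r[j] for r in U) for j in range(n)]
         + [sum(r[i] * yi for r, yi in zip(U, y))] for i in range(n)]
    for col in range(n):                       # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    h = [0.0] * n
    for i in reversed(range(n)):               # back substitution
        h[i] = (A[i][n] - sum(A[i][j] * h[j] for j in range(i + 1, n))) / A[i][i]
    return h

# Synthetic data from a known FIR model y_k = 1.5*u_{k-1} - 2.0*u_{k-2}:
u = [1.0, -1.0, 2.0, 0.5, -0.5, 1.5]
U = [[u[k - 1], u[k - 2]] for k in range(2, 6)]   # regressor rows [u_{k-1}, u_{k-2}]
y = [1.5 * row[0] - 2.0 * row[1] for row in U]
print(lstsq(U, y))                                # recovers [1.5, -2.0]
```

With real data, d is nonzero and the recovery is only approximate; the discussions below address when it is still unbiased.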
Discussions

1. Random input testing, if appropriately designed, gives better models than step or pulse testing, since it can excite the low- to high-frequency dynamics of the process equally.
2. If U^T U is singular, the inverse doesn't exist and the identification fails → persistent excitation condition.
$$\rightarrow\ \hat{h} = (U^T U + \lambda I)^{-1} U^T y$$

4. Unbiasedness: If d(·) and/or u(·) is zero-mean and u(i) is uncorrelated with d(j) for all (i, j) pairs (these conditions are easily satisfied), the estimate is unbiased:

$$\hat{h} = (U^T U)^{-1} U^T (Uh + d) = h + (U^T U)^{-1} U^T d$$

Since

$$E\{(U^T U)^{-1} U^T d\} = 0$$

we have

$$E\{\hat{h}\} = h$$
Example

Process: h = [h1, h2, h3, h4, h5, ...] = [1.5, 2.0, 5.5, 0.1, 0, ...]
- Check plots of the data and remove obvious outliers (e.g., points that are impossible with respect to the surrounding data). Fill in by interpolation.
- After modeling, a plot of the actual vs. predicted output (using the measured input and the model equations) may suggest additional outliers. Remove them and redo the modeling, if necessary.
- But don't remove data unless there is a clear justification.

(b) Bias Removal and Normalization

The input/output data are biased by the nonzero steady state and also by disturbance effects. To remove the bias, differences of the input/output data are taken. The differenced data are then conditioned by scaling before being used in identification:

$$y(k) = \big(y_{\mathrm{proc}}(k) - y_{\mathrm{proc}}(k-1)\big)/c_y, \qquad u(k) = \big(u_{\mathrm{proc}}(k) - u_{\mathrm{proc}}(k-1)\big)/c_u \quad \rightarrow \text{identification}$$
- u(k) is processed into y(k) by the process; i.e., y(k) contains the process information. By treating {y(k)} together with {u(k)}, we can extract the process characteristics.
- A multivariable process has directional as well as frequency-dependent characteristics.
[Figure: closed-loop experiment, with u = process input and y = process output.]
- Remember that the excitation input has limited energy, with finite magnitudes over a finite duration. Hence, it is inevitable that the identified model carries biased information about the process.
- Depending on how the input energy is distributed over different frequencies, and also over different input principal directions (for MIMO cases), the identified model may have different characteristics.
- The input should preferably be designed to sufficiently excite the system modes associated with the desired closed-loop performance.
- For a SISO process, the process information near the crossover frequency is most important. The Ziegler-Nichols tuning method (i.e., continuous ...
Procedure

1. Given {y(k−1), y(k−2), ...} and {u(k−1), u(k−2), ...}, the best one-step-ahead output prediction is

$$\hat{y}(k) = -a\,y(k-1) + b\,u(k-1) = \begin{bmatrix} -y(k-1) & u(k-1) \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \phi(k)^T \theta$$
Discussions

- The PEM seeks the parameter which minimizes the prediction error. In case the model has a different structure from the process, the parameter is determined such that the PE is minimized under the structural constraints. This usually leads to a biased estimate, as shown below.
- The process output can be written as
$$\begin{bmatrix} y(1) \\ \vdots \\ y(N) \end{bmatrix} = \begin{bmatrix} \phi(1)^T \\ \vdots \\ \phi(N)^T \end{bmatrix} \theta + \begin{bmatrix} n(1) + c\,n(0) \\ \vdots \\ n(N) + c\,n(N-1) \end{bmatrix} \quad \rightarrow \quad Y_N = \Phi_N\,\theta + V_N$$

Hence,

$$\hat{\theta} = (\Phi_N^T \Phi_N)^{-1} \Phi_N^T (\Phi_N\,\theta + V_N) = \theta + (\Phi_N^T \Phi_N)^{-1} \Phi_N^T V_N$$

Taking expectation gives

$$E\big[\hat{\theta}_{LS}\big] = \theta + \underbrace{E\big\{(\Phi_N^T \Phi_N)^{-1} \Phi_N^T V_N\big\}}_{=\,0\ ?}$$

Now,

$$\Phi_N^T V_N = \begin{bmatrix} -y(0) & -y(1) & \cdots & -y(N-1) \\ u(0) & u(1) & \cdots & u(N-1) \end{bmatrix} \begin{bmatrix} n(1) + c\,n(0) \\ n(2) + c\,n(1) \\ \vdots \\ n(N) + c\,n(N-1) \end{bmatrix}$$

$$= \begin{bmatrix} -y(0)\,(n(1) + c\,n(0)) - y(1)\,(n(2) + c\,n(1)) - \cdots \\ \vdots \end{bmatrix}$$
Procedure

1. Let the estimate of e(k) be ê(k). At k−1, the best one-step-ahead output prediction is

$$\hat{y}(k) = -a\,y(k-1) + b\,u(k-1) + c\,\hat{e}(k-1) = \phi(k)^T \theta$$

2. As in Ex. 1,

$$\hat{\theta} = (\Phi_N^T \Phi_N)^{-1} \Phi_N^T Y_N$$

3. In the above, ê(k) can be obtained by inverting the model equation:

$$\hat{e}(k) = -\hat{c}\,\hat{e}(k-1) + y(k) + \hat{a}\,y(k-1) - \hat{b}\,u(k-1), \qquad \hat{e}(0) = 0$$

Discussions

- To find ê(k), θ̂ must be known; on the other hand, ê(k) is needed to find θ̂ → a nonlinear equation. Back-substitution or another nonlinear solver is required.
- Due to the structural consistency, an unbiased estimate is obtained.
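The step-3 recursion in code: a Python sketch with hypothetical ARMAX(1,1,1) parameters, checking that it recovers a known noise sequence from simulated data:

```python
def residuals(a, b, c, y, u):
    """Invert y(k) = -a*y(k-1) + b*u(k-1) + e(k) + c*e(k-1):
    e^(k) = -c*e^(k-1) + y(k) + a*y(k-1) - b*u(k-1), with e^(0) = 0."""
    e = [0.0]
    for k in range(1, len(y)):
        e.append(-c * e[k - 1] + y[k] + a * y[k - 1] - b * u[k - 1])
    return e

a, b, c = -0.5, 1.0, 0.3               # hypothetical parameters
u = [1.0, -1.0, 0.5, 2.0]
e_true = [0.0, 0.2, -0.1, 0.05]        # noise sequence to be recovered (e(0) = 0)
y = [0.0]
for k in range(1, 4):                  # simulate the ARMAX model
    y.append(-a * y[k - 1] + b * u[k - 1] + e_true[k] + c * e_true[k - 1])
print(residuals(a, b, c, y, u))        # approximately [0.0, 0.2, -0.1, 0.05]
```

In the actual PEM iteration the parameters are re-estimated from these residuals and the recursion is repeated until convergence (the back-substitution mentioned above).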
or

- close the y2 loop and apply an excitation from outside the loop.

G is decomposed as

$$G = \begin{bmatrix} u_1 & u_2 \end{bmatrix} \begin{bmatrix} \sigma_1 & 0 \\ 0 & \sigma_2 \end{bmatrix} \begin{bmatrix} v_1^T \\ v_2^T \end{bmatrix} \quad \text{(SVD)}$$

Here, u1 ⊥ u2, v1 ⊥ v2, and ‖u1‖ = ‖u2‖ = ‖v1‖ = ‖v2‖ = 1.

If σ1 ≫ σ2, the same problem as before arises. To avoid the problem, it is necessary to apply a large input along the weak direction of the process, either in an open-loop or a closed-loop manner.
where σ1 ≫ σ2