
TTK4115 – Study Sheet 2022

1 Matrix operations

Fundamental. Learn this or fail the course.

1.1 Multiplication

Example 1:

[1; 2] · [a b] = [1·a 1·b; 2·a 2·b]

Example 2:

[1 2] · [a; b] = 1·a + 2·b

Example 3:

[1 2; 3 4] · [a b; c d] = [1·a+2·c 1·b+2·d; 3·a+4·c 3·b+4·d]

Example 4:

[1 2 3; 4 5 6; 7 8 9] · [a b c; d e f; g h i]
= [1·a+2·d+3·g  1·b+2·e+3·h  ···; 4·a+5·d+6·g  ···  ···; ···  ···  7·c+8·f+9·i]

Note: Multiply each row of the first matrix with each column of the second. Compatibility requires that the first matrix has the same number of columns as the second matrix has rows.
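
The multiplication rule is easy to sanity-check numerically. The snippet below is a small sketch (an addition, not part of the original sheet) that reproduces Example 3 with NumPy, using arbitrary numbers in place of a, b, c, d, and shows the compatibility rule rejecting mismatched shapes.

import numpy as np

# Example 3: entry (i, j) of the product is row i of the left factor dot column j of the right
left = np.array([[1, 2],
                 [3, 4]])
a, b, c, d = 10, 20, 30, 40            # arbitrary stand-ins for a, b, c, d
right = np.array([[a, b],
                  [c, d]])
print(left @ right)                    # [[1a+2c, 1b+2d], [3a+4c, 3b+4d]]

# Compatibility: columns of the first factor must equal rows of the second
tall = np.array([[1], [2]])            # shape (2, 1)
try:
    tall @ right                       # inner dimensions 1 and 2 do not match
except ValueError as err:
    print("incompatible shapes:", err)
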
1.2 Determinants

For a 2x2 matrix:

Det(A) = Det([a b; c d]) = ad − bc

For a 3x3 matrix:

Det(A) = Det([a b c; d e f; g h i])
       = a · Det([e f; h i]) − b · Det([d f; g i]) + c · Det([d e; g h])

Note: The inverse matrix A^(-1) exists if Det(A) ≠ 0.

1.3 Transpose

A^T = [a b; c d]^T = [a c; b d]

A^T = [a b c; d e f; g h i]^T = [a d g; b e h; c f i]

1.4 Adjugate

Adj(A) = Adj([a b; c d]) = [d −b; −c a]

Tip: The adjugate of a 3x3 matrix is found similarly to how the determinant is calculated. Hold your finger over a position of the transpose of the matrix (A^T) you want to adjugate. The same position of the adjugate matrix is the determinant of the elements not in "line of sight" of the selected position, multiplied by the checkerboard sign (−1)^(i+j). For position (1,1) of A^T this gives Det([e f; h i]), for (2,2) it gives Det([a g; c i]), and for (3,3) it gives Det([a d; b e]).

1.5 Inverse

See Sections 1.2 and 1.4, or Section 11 for the complete formula.

A^(-1) = (1 / Det(A)) · Adj(A)

1.6 Matrix exponential

For a diagonal matrix the matrix exponential is

A = diag(a1, ..., an)  −→  e^(At) = diag(e^(a1·t), ..., e^(an·t))

If the matrix is not diagonal, the matrix exponential can be found by either Cayley-Hamilton or the Laplace method.
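
A quick numerical check of Sections 1.2 to 1.6 (a sketch added here, not part of the original sheet): the adjugate/determinant inverse is compared with numpy.linalg.inv, and the matrix exponential of a diagonal matrix is evaluated with scipy.linalg.expm.

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# 2x2 adjugate: swap the diagonal entries, negate the off-diagonal entries
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])
A_inv = adj / np.linalg.det(A)                 # A^(-1) = Adj(A) / Det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))    # True

# Matrix exponential of a diagonal matrix: exponentiate each diagonal entry
D = np.diag([-1.0, -2.0])
t = 0.5
print(np.allclose(expm(D * t), np.diag(np.exp(np.diag(D) * t))))   # True
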
1.7 Cayley-Hamilton Theorem

The theorem states that a matrix A satisfies its own characteristic polynomial (e.g., aλ² + bλ + c).

Δ(A) = A^n + α1·A^(n−1) + ··· + α(n−1)·A + αn·I = 0

Proof (by example):

A = [1 2; 3 4]  −→  λ² − 5λ − 2 = 0

which gives

Δ(A) = A² − 5A − 2I = [7 10; 15 22] − [5 10; 15 20] − [2 0; 0 2] = 0

This is used as a practical way of calculating f(A):

f(A) = β0·I + β1·A + ··· + β(n−1)·A^(n−1) = h(A)

Example 1:

A = [2 4; 0 1],   f(A) = A^10

Using the eigenvalues (λ) of A with

h(λ) = β0 + β1·λ   and   f(λ) = λ^10

h(2) = β0 + 2·β1 = 2^10
h(1) = β0 + 1·β1 = 1^10

This results in β0 = −1022 and β1 = 1023, meaning

f(A) = β0·I + β1·A = [1024 4092; 0 1] = A^10

Example 2:

A = [0 0; 1 0],   f(A) = e^(At)

Because we have a repeated eigenvalue (λ = 0), we have to use derivatives to get an additional equation to find β1:

h(λ) = β0 + β1·λ   and   f(λ) = e^(λt)

h(0) = β0 + β1·0 = e^(0·t)

df(λ)/dλ = dh(λ)/dλ   −→   t·e^(0·t) = β1

β0 = 1,   β1 = t

Resulting in

f(A) = β0·I + β1·A = [1 0; t 1] = e^(At)
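
The β-coefficient method in Example 1 is easy to verify numerically. The sketch below (an addition, not from the sheet) solves the two eigenvalue conditions for β0 and β1 and checks the result against a direct computation of A^10.

import numpy as np

A = np.array([[2, 4],
              [0, 1]])
lambdas = np.array([2.0, 1.0])                 # eigenvalues of the triangular matrix A

# Solve h(lambda_i) = f(lambda_i), with h(l) = beta0 + beta1*l and f(l) = l**10
V = np.vander(lambdas, N=2, increasing=True)   # rows [1, lambda_i]
beta = np.linalg.solve(V, lambdas**10)
print(beta)                                    # [-1022.  1023.]

f_A = beta[0] * np.eye(2) + beta[1] * A
print(np.allclose(f_A, np.linalg.matrix_power(A, 10)))   # True
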
1.8 Laplace Method

Using the inverse Laplace transform we can find the matrix exponential as shown below:

e^(At) = L^(-1){(sI − A)^(-1)}

Example:

ẋ = [0 1; −1 0] x

Using the equation and the inverse Laplace transform we get

e^(At) = L^(-1){ (1/(s² + 1)) · [s 1; −1 s] } = [cos(t) sin(t); −sin(t) cos(t)]
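
The closed-form result above can be checked numerically; the sketch below (an addition, not from the sheet) compares scipy.linalg.expm(A·t) with the rotation matrix for a sample time t.

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
t = 0.7

rotation = np.array([[ np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])
print(np.allclose(expm(A * t), rotation))   # True
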
1.9 Good to know

• Nullity, or Nul(A), is the number of columns of A minus the rank of A.

• Positive definite: a symmetric n × n matrix is positive definite when all of its eigenvalues are positive, and positive semidefinite when all of its eigenvalues are positive or zero.

• Singularity: an n × n matrix is singular if its determinant is zero, implying that it is not invertible.

• Minimal realization: a state-space model of a transfer function that is both controllable and observable, and has the same input-output behavior as the transfer function.

2 Eigenvalues and Equivalents

2.1 Eigenvalues

Eigenvalues (λ) are scalars that either stretch or compress an eigenvector. They determine the relationship between the individual system state variables, the response of the system to inputs, and the stability of the system. In other words, they are neat little numbers that make the system easier to understand.

Det(A − λI) = 0

Note: For diagonal and triangular matrices, the eigenvalues are the values on the diagonal.

2.2 Eigenvector

Knowing the eigenvalues (λ), the eigenvectors (q) can be found by solving

(A − λI) · [v1; v2] = 0

Example:

A = [0 2; −1 −2]  =⇒  λ = −1 ± i

For λ = −1 + i:

[1−i 2; −1 −1−i] · [v1; v2] = 0

(1 − i)·v1 + 2·v2 = 0
−v1 − (1 + i)·v2 = 0

[v1; v2] = [−1 − i; 1] = q1

The same is then performed for λ = −1 − i to find q2:

q1 = [−1 − i; 1],   q2 = [−1 + i; 1]

The Q-matrix is then found by

Q = [q1 q2] = [−1−i −1+i; 1 1]
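
The eigenvalue/eigenvector example can be reproduced with numpy.linalg.eig, as in the sketch below (an addition, not from the sheet). Note that eig returns unit-norm eigenvectors, so they match q1 and q2 only up to a complex scale factor.

import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -2.0]])

eigvals, Q = np.linalg.eig(A)
print(eigvals)               # -1+1j and -1-1j

# Rescale each eigenvector so its second component is 1, to compare with q = [-1 -/+ i; 1]
Q_rescaled = Q / Q[1, :]
print(Q_rescaled)            # columns approx. [-1-1j; 1] and [-1+1j; 1] (in some order)
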
2.3 Diagonalisation

Using the Q-matrix derived from the eigenvectors of the system, it may be diagonalised by

Λ = Q^(-1) A Q

A matrix on diagonal form only contains values on the matrix diagonal. Not all matrices can be presented as diagonal. Jordan form is therefore more common.

2.4 Jordan Form

A diagonal matrix is actually on Jordan form. However, the term is often used for matrices that do not purely contain values along the matrix diagonal.

J = Q^(-1) A Q

Converting a system to Jordan form is done by

ẋ = Q^(-1)AQ x + Q^(-1)B u
y = CQ x + D u

2.5 Modal Form

If the system has pairs of complex conjugate eigenvalues (e.g., λ = 1 ± i), we prefer to use the modal form, which presents the system with only real numbers.

Assume we arrive at a diagonalised state equation with complex eigenvalues as shown by

Λ = Q^(-1) A Q = [α1 + β1·i  0; 0  α1 − β1·i]

A modal transformation can then be performed using the modal matrix (M), giving the modal form (Λm):

Λm = M^(-1) Λ M

M = [1/2 −i/2; 1/2 i/2],   M^(-1) = [1 1; i −i]

The resulting modal form thus becomes

Λm = [α1 β1; −β1 α1]

2.6 Blocks

In general we have the following structures:

Diagonal block:  [λi 0; 0 λ(i+1)]

Jordan block:    [λi 1; 0 λi]

Modal block:     [αi βi; −βi αi]
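
The transformations in Sections 2.3 to 2.5 can be carried out numerically. The sketch below (an addition, not from the sheet) diagonalises the example system from Section 2.2 and then builds the real modal form from the real and imaginary parts of one eigenvector.

import numpy as np

A = np.array([[0.0, 2.0],
              [-1.0, -2.0]])            # eigenvalues -1 +/- i

eigvals, Q = np.linalg.eig(A)
Lam = np.linalg.inv(Q) @ A @ Q          # complex diagonal form, eigenvalues on the diagonal
print(np.round(Lam, 10))

# Real modal form: use [Re(q), Im(q)] of the eigenvector for the eigenvalue with Im > 0
idx = np.argmax(eigvals.imag)
q = Q[:, idx]
M = np.column_stack([q.real, q.imag])
Lam_m = np.linalg.inv(M) @ A @ M
print(np.round(Lam_m, 10))              # [[-1, 1], [-1, -1]], i.e. [[alpha, beta], [-beta, alpha]]
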

2.7 Controllable Form

The controllable canonical form arranges the coefficients of the transfer function denominator across one row of the system matrix (A).

G(s) = (b0·s^n + b1·s^(n−1) + ··· + b(n−1)·s + bn) / (s^n + a1·s^(n−1) + ··· + a(n−1)·s + an)

ẋ = [−a1 −a2 ··· −a(n−1) −an; 1 0 ··· 0 0; 0 1 ··· 0 0; ⋮ ⋱ ⋮; 0 0 ··· 1 0] x + [1; 0; 0; ⋮; 0] u

y = [b1 − a1·b0   b2 − a2·b0   ···   bn − an·b0] x + b0·u

Example:

G(s) = (s + 3) / (s² + 3s + 2)

which gives

ẋ = [−3 −2; 1 0] x + [1; 0] u
y = [1 3] x
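
To check that the canonical realization really has the stated transfer function, the sketch below (an addition, not from the sheet) builds (A, B, C, D) for G(s) = (s + 3)/(s² + 3s + 2) and converts back with scipy.signal.ss2tf.

import numpy as np
from scipy.signal import ss2tf

# G(s) = (s + 3) / (s^2 + 3s + 2):  a1 = 3, a2 = 2, b0 = 0, b1 = 1, b2 = 3
A = np.array([[-3.0, -2.0],
              [ 1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[1.0, 3.0]])
D = np.array([[0.0]])

num, den = ss2tf(A, B, C, D)
print(num)    # approx. [[0., 1., 3.]]  ->  numerator s + 3
print(den)    # approx. [1., 3., 2.]    ->  denominator s^2 + 3s + 2
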
 
2.8 Observable Form (not in lectures)

The observable canonical form is the companion canonical form where the coefficients of the characteristic polynomial of the system appear explicitly in the rightmost column of the system matrix (A).

G(s) = (b0·s^n + b1·s^(n−1) + ··· + b(n−1)·s + bn) / (s^n + a1·s^(n−1) + ··· + a(n−1)·s + an)

ẋ = [0 0 0 ··· −an; 1 0 0 ··· −a(n−1); 0 1 0 ··· −a(n−2); ⋮ ⋱ ⋮; 0 0 0 1 −a1] x + [bn − an·b0; b(n−1) − a(n−1)·b0; b(n−2) − a(n−2)·b0; ⋮; b1 − a1·b0] u

y = [0 0 ··· 0 1] x + b0·u

The relationship between the observable and controllable realizations is

Ac = Ao^T,   Bc = Co^T,   Cc = Bo^T,   Dc = Do^T

(for the particular state orderings written above, this duality holds only up to a reordering of the states).

2.9 Algebraic Equivalence

Two systems are algebraically equivalent if there exists an invertible matrix (T) that relates them:

Ā = T A T^(-1)

The two systems therefore need to have the same number of states. Algebraic equivalence implies zero-state equivalence, but not the other way around.

2.10 Zero-State Equivalence

Zero-state equivalence means that two systems have the same transfer function, which means that they exhibit the same forced response to every input. The systems do not need to be algebraically equivalent to be zero-state equivalent.

Ĝ(s) = G(s)

3 Conditions and Stability

3.1 Linearity Conditions

The two conditions that need to be placed on the function (F) for the system to be linear are Additivity and Homogeneity. These are called the linearity conditions.

F(x1 + x2) = F(x1) + F(x2)
F(αx) = αF(x)

When a function satisfies both of these conditions, it also satisfies the Superposition Principle:

F(αx1 + βx2) = αF(x1) + βF(x2)

To check for linearity, we evaluate F on the left-hand side of the equations and see if it equals the right-hand side.

Linearity is a useful property in dynamical systems (systems whose state evolves with time over a state space) because it:

• Implies that the time-domain response of a system can be found as a superposition of inputs and initial values, thereby for instance allowing us to separate the zero-state response and the zero-input response.

• Makes it simple to determine stability properties globally.

• Allows global determination of observability and controllability.

• Allows eigenvalue analysis and the ability to design controllers using pole placement or LQR.

3.2 Stability

Stability theory addresses the stability of solutions of differential equations and of trajectories of dynamical systems under small perturbations of initial conditions.

• Instability occurs if one or more poles have positive real parts. We also generally associate repeated poles at zero with the system being unstable.

• Marginal stability occurs when the real part of every pole is non-positive, at least one pole has zero real part, and there are no repeated poles on the imaginary axis.

• Asymptotic stability occurs if all poles have strictly negative real parts.

3.3 BIBO Stability

Bounded-Input-Bounded-Output (BIBO) stability implies that if the input is bounded, the output is also bounded. We differentiate between continuous and discrete systems.

For continuous systems, BIBO stability holds if (and only if):

• For SISO systems, the impulse response is absolutely integrable on [0, ∞).

• For SISO or MIMO systems, every pole of every transfer function has a negative real part.

For discrete systems, BIBO stability holds if (and only if):

• Every pole of every transfer function has magnitude less than 1.

Note: A system can be BIBO-stable despite being internally unstable.

3.4 Lyapunov Stability

If every finite initial state gives a finite response, i.e., a bounded zero-input response, the system is Lyapunov stable. The Continuous-Time Lyapunov Equation is

A^T M + M A = −N

An LTI system is asymptotically stable if there exists a symmetric positive definite matrix M that satisfies the Lyapunov Equation above. Here, N is an arbitrary positive definite matrix.

Example:

ẋ = [−2 −1; 1 0] x + [1; 0] ω,   y = [0 1] x

We choose to define M and N as

M = [m1 m2; m2 m3],   N = [1 0; 0 1]

Using the Lyapunov Equation we get

[−4m1 + 2m2   −2m2 + m3 − m1; −2m2 + m3 − m1   −2m2] = [−1 0; 0 −1]

Solving for m1, m2 and m3 we get

M = [1/2 1/2; 1/2 3/2]

Calculating the eigenvalues of M gives λ = 1.707 and λ = 0.293. The M-matrix is positive definite and the system is therefore asymptotically stable.
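
The Lyapunov example can be verified with scipy.linalg.solve_continuous_lyapunov; the sketch below (an addition, not from the sheet) solves A^T M + M A = −N for N = I and checks that M is positive definite.

import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[-2.0, -1.0],
              [ 1.0,  0.0]])
N = np.eye(2)

# solve_continuous_lyapunov(a, q) solves a X + X a^T = q, so pass a = A^T and q = -N
M = solve_continuous_lyapunov(A.T, -N)
print(M)                          # [[0.5, 0.5], [0.5, 1.5]]
print(np.linalg.eigvalsh(M))      # approx. [0.293, 1.707] -> positive definite
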
3.5 Controllability Matrix

This matrix determines which states of the system are controllable. In order to be able to do whatever we want with the given dynamic system under control input, the system must be controllable.

C = [B  AB  A²B  ···  A^(n−1)B]

3.6 Observability Matrix

Observability is a measure of how well internal states of a system can be inferred from knowledge of its external outputs. This matrix is used to determine which states of the system are observable.

O = [C  CA  CA²  ···  CA^(n−1)]^T
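
A small sketch (an addition, not from the sheet) that builds the controllability and observability matrices for the example system from Section 3.4 and checks their ranks; full rank (= n) means controllable/observable.

import numpy as np

A = np.array([[-2.0, -1.0],
              [ 1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
C = np.array([[0.0, 1.0]])
n = A.shape[0]

ctrb = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])   # [B, AB, ..., A^(n-1)B]
obsv = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])   # [C; CA; ...; CA^(n-1)]

print(np.linalg.matrix_rank(ctrb))   # 2 -> controllable
print(np.linalg.matrix_rank(obsv))   # 2 -> observable
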
4 Discretisation

To compute physical models digitally (computer simulations), they need to be converted from continuous-time to discrete-time systems. This process is called discretisation.

Assuming the CLTI system

ẋ = Ax + Bu
y = Cx + Du

a recursive model is found by

x[k + 1] = Ad x[k] + Bd u[k]
y[k] = Cd x[k] + Dd u[k]

The discretised model can then be obtained using either Exact Discretisation or Euler Discretisation.

4.1 Exact Discretised Model

The exact discretised model is found by replacing Ad, Bd, Cd and Dd as shown below:

x[k + 1] = e^(AT) x[k] + (∫_0^T e^(Aα) dα) B u[k]
y[k] = C x[k] + D u[k]

Example:

ṡ = −0.2s + ω

s[k + 1] = e^(−0.2T) s[k] + (∫_0^T e^(−0.2α) dα) ω[k]
         = e^(−0.2T) s[k] + [−(1/0.2) e^(−0.2α)]_0^T ω[k]
         = e^(−0.2T) s[k] + (1/0.2)(1 − e^(−0.2T)) ω[k]

Note: The time step (T) can then be replaced by any desirable value, e.g., T = 0.5, resulting in

s[k + 1] = 0.9048 s[k] + 0.4758 ω[k]
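
The exact (zero-order hold) discretisation can be computed with scipy.signal.cont2discrete; the sketch below (an addition, not from the sheet) reproduces the scalar example for T = 0.5 and compares it with a forward-Euler discretisation.

import numpy as np
from scipy.signal import cont2discrete

# Scalar example: s_dot = -0.2*s + w
A = np.array([[-0.2]])
B = np.array([[1.0]])
C = np.array([[1.0]])
D = np.array([[0.0]])
T = 0.5

Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), T, method='zoh')
print(Ad, Bd)               # approx. [[0.9048]] [[0.4758]]

# Forward Euler for comparison: x[k+1] = (I + T*A) x[k] + T*B u[k]
Ad_euler = np.eye(1) + T * A
Bd_euler = T * B
print(Ad_euler, Bd_euler)   # [[0.9]] [[0.5]]
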
4.2 Euler Discretised Model

Euler's method proceeds via the definition of the derivative, giving the approximation

ẋ[k] ≈ (x[k + 1] − x[k]) / T

This results in

x[k + 1] = x[k] + T·A x[k] + T·B u[k]

The exact discretised model is often preferred, because Euler discretisation can become unstable at large time steps (T) even though the underlying system is stable. Insufficiently stable systems or large time steps will result in a divergent solution.

4.3 Internal Stability

For a discrete system to be asymptotically stable, the magnitude of every pole of the system needs to be strictly less than 1 (ref. Section 3.3). If a non-repeated pole has magnitude equal to 1, the system is marginally stable.

5 Transfer Functions

Transformation from an LTI state-space model to a transfer function is done by

g(s) = C(sI − A)^(-1) B + D

Note: If two systems have the same transfer function, they are zero-state equivalent (see Section 2.10).

6 Observers

6.1 Continuous Time Observer

ẋ = Ax + Bu
y = Cx

Estimated state values are given by the Luenberger Observer as

d/dt x̂ = A x̂ + B u + L(y − C x̂)

The estimation error is

ė = ẋ − d/dt x̂ = (A − LC) e

With control feedback u = −K x̂:

[ẋ; ė] = [A − BK   BK; 0   A − LC] · [x; e]

Note: We are free to choose poles for the observer without taking the feedback controller into consideration. This is because the block-triangular system matrix implies that its eigenvalues are those of A − BK and A − LC, separately.
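
Observer gains are commonly chosen by pole placement on the dual pair (A^T, C^T); the sketch below (an addition, not from the sheet, with arbitrarily chosen observer poles) uses scipy.signal.place_poles and checks the eigenvalues of A − LC.

import numpy as np
from scipy.signal import place_poles

A = np.array([[-2.0, -1.0],
              [ 1.0,  0.0]])
C = np.array([[0.0, 1.0]])

# Observer design is dual to state feedback: placing the eigenvalues of A^T - C^T*L^T
# also places the eigenvalues of A - L*C. The poles below are an arbitrary choice.
desired_poles = np.array([-4.0, -5.0])
L = place_poles(A.T, C.T, desired_poles).gain_matrix.T

print(L)
print(np.linalg.eigvals(A - L @ C))   # approx. {-4, -5}
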
6.2 Discrete Time Observer

x[k] = Ax[k − 1] + Bu[k − 1]
y[k] = Cx[k]

Analogous to the continuous case, a discrete Luenberger observer is

x̂[k] = A x̂[k − 1] + B u[k − 1] + L(y[k − 1] − C x̂[k − 1])

with estimation error

e[k] = x[k] − x̂[k] = (A − LC) e[k − 1]

which converges if all eigenvalues of A − LC have magnitude less than 1.

7 Linear Quadratic Regulator

Given the system

ẋ = Ax + Bu

with state feedback (u = −Kx), the Cost Function (J) of the LQR becomes

J = ∫_0^∞ (x^T Q x + u^T R u) dt

where Q is symmetric and positive semidefinite, R is symmetric and positive definite, and K = R^(-1) B^T P.

The P-matrix is found through the Riccati Equation

A^T P + P A − P B R^(-1) B^T P + Q = 0

The relative values of the elements of Q and R enforce tradeoffs between the magnitude of the control action and the speed of the response. The equilibrium can be shifted from 0 to x_eq by instead using u = P x_eq − Kx.

Example:

ẋ = a·x(t) + u(t)

For the LQR optimal feedback control u = −k·x, we find the value of k that minimizes the cost function

J = ∫_0^∞ (q·x(t)² + r·u(t)²) dt,   q > 0,   r > 0

The Riccati Equation is used to find p:

2ap + q − p²b²/r = 0   −→   p² − 2arp − qr = 0   (with b = 1)

giving

p = r(a ± √(a² + q/r))

The value of k can then be set to

k = p/r = a + √(a² + q/r)

The system with feedback control becomes

ẋ = (a′ − a − √(a² + q/r)) x(t)

where a′ corresponds to the "real" value of the plant, which may be higher or lower than the nominal a used in the design.

"Worst case" stability is when q → 0 or r → ∞, resulting in the system pole being at a′ − 2a. Assuming that a′ is within ±50% of a, the pole will lie in the interval [−1.5a, −0.5a]. The system is then stable over the entire interval.
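
For matrix-valued problems the Riccati equation is solved numerically; the sketch below (an addition, not from the sheet, using an arbitrary example system and weights) uses scipy.linalg.solve_continuous_are and forms K = R^(-1) B^T P.

import numpy as np
from scipy.linalg import solve_continuous_are

# Arbitrary example system and weights (not from the sheet)
A = np.array([[-2.0, -1.0],
              [ 1.0,  0.0]])
B = np.array([[1.0],
              [0.0]])
Q = np.diag([4.0, 1.0])
R = np.array([[0.25]])

# Solve A^T P + P A - P B R^-1 B^T P + Q = 0, then K = R^-1 B^T P
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

print(K)
print(np.linalg.eigvals(A - B @ K))   # closed-loop poles, all with negative real part
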
7.1 Bryson's Rule

Q_ii = 1 / (x_i,max)²
R_ii = 1 / (u_i,max)²

where x_i,max and u_i,max are the maximum acceptable values of state x_i and input u_i.

Example:

Using limitations of x1 = 0.5, x2 = 1 and u = 2:

Q = [4 0; 0 1],   R = 0.25

Note: This does not guarantee that the control output (u) stays within the desired boundaries. This is because LQR has a feedback gain K ≠ 0 and a controller u = −Kx, which implies linear scaling.

8 Disturbance and White Noise

The Spectral Density Function of the system output (y) can be found by

Sy(jω) = g(jω) g(−jω) Sω(jω)

where g is the system transfer function and Sω is the spectral density of the signal being processed/filtered through the system.

Example:

ẋ = [−2 −1; 1 0] x + [1; 0] w,   y = [0 1] x,   Sω(jω) = α

The transfer function is found to be

g(s) = 1 / (s + 1)²

giving the output spectral density

Sy(jω) = (1/(jω + 1)²) · (1/(−jω + 1)²) · α = α / (ω⁴ + 2ω² + 1)

9 Kalman Filter

9.1 Continuous Time Kalman

The continuous system model is given by

ẋ = Ax + Bu + Gω
y = Cx + v

The Kalman Gain (L) is then given by

L(t) = P(t) C^T R^(-1)

Ṗ = AP + PA^T + GQG^T − P C^T R^(-1) C P

9.2 Discrete Time Kalman

The discrete system model is given by

x[k + 1] = Ax[k] + Bu[k] + ω[k]
y[k] = Cx[k] + v[k]

A priori estimate: "a priori" translates directly to "from before"; the estimate is computed before the measurement at step k has been incorporated. Here, P is the covariance matrix.

x⁻[k] = A x̂[k − 1] + B u[k − 1]
P⁻[k] = A P[k − 1] A^T + Q

The Kalman Gain (L) is calculated by

L[k] = P⁻[k] C^T (C P⁻[k] C^T + R)^(-1)

A posteriori estimate: the best guess for x[k] after incorporation of the measurement y[k]. The Kalman Gain is used as a blending factor.

x̂[k] = x⁻[k] + L[k](y[k] − C x⁻[k])
P[k] = (I − L[k]C) P⁻[k] (I − L[k]C)^T + L[k] R[k] L[k]^T

Example:

Given the system and initial values

x[k + 1] = [1 0.4758; 0 0.9048] x[k] + ω[k]
y[k] = [1 0] x[k] + v[k]

Q = [0.0097 0.0283; 0.0283 0.1133],   r = 0.125

x̂[0] = [0; 0.2],   P[0] = [0.5 0; 0 0.1]

The a priori estimate is found by

x⁻[1] = A x̂[0],   P⁻[1] = A P[0] A^T + Q

x⁻[1] = [0.0952; 0.181],   P⁻[1] = [0.5323 0.0714; 0.0714 0.1952]

The Kalman Gain is then found by

L[1] = P⁻[1] C^T (C P⁻[1] C^T + R)^(-1) = [0.8098; 0.1086]

Lastly, the a posteriori estimate is found by

x̂[1] = x⁻[1] + L[1](y[1] − C x⁻[1]) = [0.1153; 0.1837]

P[1] = [0.1012 0.0136; 0.0136 0.1874]
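
One predict/update cycle of the discrete Kalman filter is easy to code directly; the sketch below (an addition, not from the sheet) reproduces the a priori estimate and the Kalman gain from the example above. The measurement y[1] is not stated explicitly in the sheet, so a hypothetical value y[1] = 0.12 is used for the update, which roughly reproduces the a posteriori numbers above.

import numpy as np

A = np.array([[1.0, 0.4758],
              [0.0, 0.9048]])
C = np.array([[1.0, 0.0]])
Q = np.array([[0.0097, 0.0283],
              [0.0283, 0.1133]])
R = np.array([[0.125]])

x = np.array([[0.0], [0.2]])          # x_hat[0]
P = np.diag([0.5, 0.1])               # P[0]

# Predict (a priori), with u = 0
x_prior = A @ x
P_prior = A @ P @ A.T + Q
print(x_prior.ravel())                # approx. [0.0952, 0.181]
print(P_prior)                        # approx. [[0.5323, 0.0714], [0.0714, 0.1952]]

# Kalman gain
L = P_prior @ C.T @ np.linalg.inv(C @ P_prior @ C.T + R)
print(L.ravel())                      # approx. [0.8098, 0.1086]

# Update (a posteriori) with the hypothetical measurement y[1] = 0.12
y = np.array([[0.12]])
x_post = x_prior + L @ (y - C @ x_prior)
I2 = np.eye(2)
P_post = (I2 - L @ C) @ P_prior @ (I2 - L @ C).T + L @ R @ L.T   # Joseph form
print(x_post.ravel())                 # approx. [0.1153, 0.1837]
print(P_post)
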


9.3 Q&A

Advantages of using a Kalman Filter:

• Works very well for systems that are continuously changing.

• Fast, which makes it ideal for real-time problems and embedded systems.

• Light on memory, as previous states are not needed.

• A disadvantage is that it assumes that both the system and observation models are linear.

Why use a model and estimator instead of direct state measurements?

• Noise is less prominent when using estimators.

• The model can be used to predict future states.

10 Extended Kalman Filter

11 Tables

Inverse of a 3x3 matrix:

[a b c; d e f; g h i]^(-1) = (1/det) · [ei − fh   ch − bi   bf − ce; fg − di   ai − cg   cd − af; dh − eg   bg − ah   ae − bd]

Trigonometric functions:

cot(θ) = 1 / tan(θ)
sec(θ) = 1 / cos(θ)
csc(θ) = 1 / sin(θ)
sin(θ) = opposite / hypotenuse
cos(θ) = adjacent / hypotenuse
tan(θ) = sin(θ) / cos(θ)

Trigonometric identities:

cos(θ) sin(θ) = (1/2) sin(2θ)
cos²(θ) − sin²(θ) = cos(2θ)
sin(α + β) = sin(α) cos(β) + cos(α) sin(β)
cos(α + β) = cos(α) cos(β) − sin(α) sin(β)
sin(α − β) = sin(α) cos(β) − cos(α) sin(β)
cos(α − β) = cos(α) cos(β) + sin(α) sin(β)

Geometric series:

Σ_(k=0)^(n−1) a·r^k = a · (1 − r^n) / (1 − r)

Partial integration:

∫ u dv = u·v − ∫ v du