Section 1. Introduction to Control Theory


I. Introduction to Control Theory


Dr. Karama Khamis
Mechanical and Automotive Engineering Dept.
Technical University of Mombasa
November 20, 2024
Outline In this lecture, we will cover the fundamentals of classical control theory and introduce
the PID control approach as part of the Marine Automation and Control course. This is the first
of the three modules.

Contents

1 Introduction
1.1 Basic Terminology in Control Systems
1.2 Control System Configurations
1.2.1 Open-Loop Control Systems
1.2.2 Closed-Loop Control Systems
1.3 Outline of the Course

2 Mathematical Modeling of Physical Systems
2.1 Modeling of Mechanical Systems
2.2 Modeling of Electrical Systems

3 Linear Time-Invariant Systems
3.1 Linearity, Time-Invariance and Causality
3.2 Transfer Functions
3.2.1 Introduction
3.2.2 Review of Laplace Transforms
3.2.3 Some properties of the Laplace Transform
3.2.4 Inverse Laplace Transform
3.2.5 The Final Value Theorem
3.2.6 Partial Fraction Expansion of F(s)
3.2.7 Obtaining the transfer function of a differential equation model
3.3 Control System Representation Using Block Diagrams
3.3.1 Block diagram algebra

4 Time-Domain Analysis
4.1 Time-domain analysis of a first-order system
4.1.1 Response of a first-order system to a unit step input
4.1.2 Response of a first-order system to a unit ramp input
4.1.3 Response of a first-order system to a unit impulse input
4.2 Time-domain analysis of a second order system
4.2.1 Unit step response of a second order system
4.3 Time-domain performance measures

5 Introduction to PID Controller
5.1 Proportional term
5.2 Integral term
5.3 Derivative term
5.4 Summary
5.5 Designing a PID Controller
5.6 Tuning via Process Models
5.7 Examples
5.7.1 Summary of the effects of PID Controller Gains

6 Exercise

1 Introduction
The field of control systems deals with applying or selecting inputs to a system to make it behave
in a desired way, such as guiding the system’s state or output along a specific trajectory.

Control theory has wide-ranging applications in various fields, including robotics, automotive sys-
tems, aerospace engineering, marine engineering, industrial automation, and even economics and
biology.
Some of the applications of control engineering include:
• Active stabilization systems in marine vessels are employed to reduce the effects of rolling
and pitching caused by waves.
• Autopilot systems in ships use control algorithms to automatically guide the vessel along a
predetermined course.
• Electronic Stability Control (ESC) in automobiles helps drivers to avoid crashes by reducing
the danger of skidding, or losing control as a result of over-steering.
• Manufacturing industries apply control in various forms. For example, Computer Numerical
Control (CNC) machines, widely used in metalworking and precision manufacturing, rely on
feedback control to maintain accurate tool positioning during machining operations.
• Jet fighters use three control surfaces (aileron, rudder and elevator) to perform complex
manoeuvres.
• Unmanned Aerial Vehicles (UAVs) are currently being equipped with control systems to avoid
in-air collisions.

Figure 1: Control System

1.1 Basic Terminology in Control Systems


The following is basic terminology commonly used in control systems, as illustrated in Figure 1.
1. System is an abstract object that accepts inputs and produces outputs in response. Systems
are often composed of smaller interconnected components, leading to behavior that is more
than just the sum of its parts. In the control literature, systems are also commonly referred
to as plants or processes.
2. Dynamical system loosely refers to any system that has an internal state and some dynamics
(i.e., a rule specifying how the state evolves in time). This description applies to a very large
class of systems, from automobiles and aviation to industrial manufacturing plants and the
electrical power grid. The presence of dynamics implies that the behavior of the system
cannot be entirely arbitrary; the temporal behavior of the system’s state and outputs can be
predicted to some extent by an appropriate model of the system.
3. Controlled Variable, y(t) is the quantity or condition that is measured and controlled.
Normally, the controlled variable is the output of the system.
4. Control signal or manipulated variable, u(t) is the quantity or condition that is varied
by the controller so as to affect the value of the controlled variable.
5. Disturbance, d(t) is a signal that tends to adversely affect the value of the output of a
system. A disturbance generated within the system is called internal, while an external
disturbance is generated outside the system and acts as an input.

1.2 Control System Configurations


Control systems may be classified as self-correcting type and non-self-correcting type. The term
self-correcting, as used here, refers to the ability of a system to monitor or measure a variable of
interest and correct it automatically without the intervention of a human whenever the variable
is outside acceptable limits. Depending on configuration, control systems can be categorized into
two main classes:
1. Open-loop control systems
2. Closed-loop (or feedback) control systems.

1.2.1 Open-Loop Control Systems


These are the non-self-correcting type systems, in which the output has no effect on the control
action. In other words, in an open loop control system the output is neither measured nor fed back
for comparison with the input. One practical example is a washing machine. Soaking, washing, and
rinsing in the washer operate on a time basis. The machine does not measure the output signal, that
is, the cleanliness of the clothes. In any open-loop control system the output is not compared with
the reference input. Thus, to each reference input there corresponds a fixed operating condition;
as a result, the accuracy of the system depends on calibration. In the presence of disturbances,
an open-loop control system will not perform the desired task. Open-loop control can be used, in
practice, only if the relationship between the input and output is known and if there are neither
internal nor external disturbances.
Practical Examples of Open Loop Control Systems:
• Automatic Washing Machine – runs for a pre-set time regardless of whether the washing is
complete.

Figure 2: Open-Loop System

• Bread Toaster – runs for the adjusted time regardless of whether the toasting is complete.
• Automatic Tea/Coffee Maker – functions for a pre-adjusted time only.
• Timer-Based Clothes Drier – dries wet clothes for a pre-adjusted time, no matter how dry
the clothes actually become.
• Light Switch – the lamp glows whenever the switch is on, whether or not light is required.
• Volume on Stereo System – the volume is adjusted manually, irrespective of the output
volume level.
Advantages of Open Loop Control System
• Simple in construction and design.
• Economical.
• Easy to maintain.
• Generally stable.
• Convenient to use when the output is difficult to measure.
Disadvantages of Open Loop Control System
• They are inaccurate.
• They are unreliable.
• Any change in output cannot be corrected automatically.

1.2.2 Closed-Loop Control Systems


These are self-correcting systems: they maintain a prescribed relationship between the output
and the reference input by comparing them and using the difference as a means of control. An
example would be a room temperature control system. By measuring the actual
room temperature and comparing it with the reference temperature (desired temperature), the
thermostat turns the heating or cooling equipment on or off in such a way as to ensure that the
room temperature remains at a comfortable level regardless of outside conditions. In practice, the
terms feedback control and closed-loop control are used interchangeably.
Practical Examples of Closed Loop Control System
1. Automatic Electric Iron – Heating elements are controlled by output temperature of the iron.

5
Figure 3: Closed-Loop System

2. Servo Voltage Stabilizer – Voltage controller operates depending upon output voltage of the
system.
3. Water Level Controller– Input water is controlled by water level of the reservoir.
4. Missile Launched & Auto-Tracked by Radar – The direction of the missile is controlled by
comparing the target position with the position of the missile.
5. An Air Conditioner – An air conditioner functions depending upon the temperature of the
room.
6. Cooling System in Car – It operates depending upon the temperature which it controls.
Advantages of Closed Loop Control System
• Closed-loop control systems are more accurate, even in the presence of nonlinearities.
• Any error that arises is corrected automatically due to the presence of the feedback signal.
• They facilitate automation.
• The sensitivity of the system to parameter variations may be made small.
• They are less affected by noise.
Disadvantages of Closed Loop Control System
• They are costlier.
• They are more complicated to design.
• They require more maintenance.
• Feedback can lead to an oscillatory response.
• The overall gain is reduced due to the presence of feedback.
• Stability is a major concern, and more care is needed to design a stable closed-loop system.

1.3 Outline of the Course


The trajectory of the course will be as follows.

• Modeling: Before we can control a system and make it behave in a desired manner, we need to
represent the input-output behavior of the system in a form that is suitable for mathematical
analysis.
• Analysis: Once we understand how to model systems, we need to have a basic understanding
of what the model tells us about the system’s response to input signals. We will also need
to formulate how exactly we want the output to get to its desired value (e.g., how quickly
should it get there, do we care what the output does on the way there, can we be sure that
the output will get there, etc.)
• Design: Finally, once we have analyzed the mathematical model of the system, we will study
ways to design controllers to supply appropriate control (input) signals to the system so that
the output behaves as we want it to.
We will be analyzing systems both in the time-domain (e.g., with differential equations) and in the
frequency domain (e.g., using Laplace transforms).

2 Mathematical Modeling of Physical Systems


To understand and control complex systems, one must obtain quantitative mathematical models of
these systems. Therefore, it is necessary to analyze the relationships between the system variables
and develop a mathematical model. Mathematical models of physical systems are then used to
design and analyze control systems. Since the systems under consideration are dynamic in nature,
the descriptive equations are usually differential equations. If the coefficients of the describing
differential equations are functions of time, the mathematical model is linear time-varying. On
the other hand, if the coefficients are constants, the model is linear time-invariant (LTI).
The differential equations describing an LTI system can be reshaped into different forms for the
convenience of analysis. For transient response or frequency response analysis of single-input-single-
output linear systems, the transfer function representation is convenient. On the other hand, when
the system has multiple inputs and outputs, the vector-matrix notation may be more convenient.
Powerful mathematical tools like Fourier and Laplace transforms are available for linear systems.
Unfortunately, no physical system in nature is perfectly linear, so certain assumptions must always
be made to obtain a linear model. In the presence of strong nonlinearities or distributed-parameter
effects, it is often not possible to obtain linear models.
In summary, the approach to dynamic system modeling can be listed as follows:
1. Define the system and its components.
2. Formulate the mathematical model and the necessary fundamental assumptions based on
basic principles.
3. Obtain the differential equations representing the mathematical model.
4. Solve the equations for the desired output variables.
5. Examine the solutions and the assumptions.
6. If necessary, reanalyze or redesign the system.

2.1 Modeling of Mechanical Systems
Mechanical systems are usually considered to comprise the linear lumped-parameter elements of
stiffness, damping and mass. Mechanical systems are classified into two types:
1. Translational and
2. Rotational.
The motion of the body during translational motion is along a straight line or a curved path;
whereas during rotational motion, the motion of the body is about its own axis.
1. Stiffness in mechanical systems. An elastic element is assumed to provide an extension
proportional to the force (or torque) applied to it. For a translational spring, if x2(t) > x1(t),

Figure 4: Spring

then,
f (t) = k(x2 (t) − x1 (t)) (1)

k, the spring stiffness has units of (N/m)


and for the rotational spring if θ2 (t) > θ1 (t), then

T (t) = k(θ2 (t) − θ1 (t)) (2)

k, the spring stiffness has units of (Nm/rad) and θ is the angular displacement.
2. Damping in mechanical systems. A damping element (sometimes called a dashpot) is
assumed to produce a force (or torque) proportional to the relative velocity across it. For the

Figure 5: Dashpot

translational damper if x2 (t) > x1 (t), then


f(t) = c v(t) = c d(x2(t) − x1(t))/dt    (3)

where c, the damping coefficient, has units of Ns/m.


and for the rotational damper if θ2 (t) > θ1 (t), then
T(t) = c ω(t) = c d(θ2(t) − θ1(t))/dt    (4)

where c, the damping coefficient, has units of Nm·s/rad and ω is the angular velocity.
3. Mass in mechanical systems. The force to accelerate a body is the product of its mass
and acceleration (Newton’s second law). For the translational system

Figure 6: Mass

f(t) = m a(t) = m dv(t)/dt = m d²x(t)/dt²    (5)

For the rotational system

T(t) = I α(t) = I dω(t)/dt = I d²θ(t)/dt²    (6)

In the above equations I is the moment of inertia about the rotational axis and α is the
angular acceleration. When analyzing mechanical systems it is usual to identify all external
forces by the use of a ‘Free-body diagram’, and then apply Newton’s second law of motion in
the form: For the translational system,

Σf (t) = ma(t) (7)

For the rotational system,


ΣM = Iα (8)

Example Find the differential equation relating the displacement x(t) and input force f (t) for the
spring-mass-damper system shown in Figure 7.

Figure 7: Mass-spring-damper system

Solution Draw the free-body diagram as shown in Figure 8:


For the translational system
Σf (t) = ma(t) (9)

Figure 8: Free body diagram

Therefore,
f (t) = fm (t) + fc (t) + fk (t) (10)

f(t) = m d²x(t)/dt² + c dx(t)/dt + k x(t)    (11)
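The equation of motion above can be checked numerically. The sketch below (not part of the original lecture; the values of m, c, k and the constant step force are illustrative assumptions) integrates m ẍ + c ẋ + k x = f with a simple Euler scheme; for a constant force the displacement should settle near the static deflection f/k.

```python
def simulate_msd(m=1.0, c=2.0, k=5.0, f=10.0, dt=1e-4, t_end=20.0):
    """Euler integration of m*x'' + c*x' + k*x = f(t) with a constant force f."""
    x, v = 0.0, 0.0                    # initial displacement and velocity
    for _ in range(int(t_end / dt)):
        a = (f - c * v - k * x) / m    # acceleration from Newton's second law
        x += v * dt
        v += a * dt
    return x

x_final = simulate_msd()
print(x_final)  # should be close to the static deflection f/k = 2.0
```

The same scheme extends directly to the two-mass system of Eqs. 15 and 16 by integrating both displacements in the same loop.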

Example Find the differential equation relating the displacement x1 (t) and x2 (t) for the spring-
mass-damper system shown in Figure 9.

Figure 9: Mass-spring-damper system

Solution Draw the free-body diagram 10:

Figure 10: Free body diagram

For the translational system


Σf (t) = ma(t) (12)

Therefore, for mass m1 ,


fm1 (t) + fc (t) = fk (t) (13)

and for mass m2 ,


fm2 (t) + fk (t) = f (t) (14)

Hence the dynamic equations of the system are given by Eqs. 15 and 16,

m1 d²x1(t)/dt² + c dx1(t)/dt = k(x2(t) − x1(t))    (15)

m2 d²x2(t)/dt² + k(x2(t) − x1(t)) = f(t)    (16)

Exercise A flywheel of moment of inertia I sits in bearings that produce a frictional moment of c
times the angular velocity ω(t) of the shaft as shown in Figure 11. Find the differential equation
relating the applied torque T (t) and angular velocity ω(t).

Figure 11: Flywheel mechanism

2.2 Modeling of Electrical Systems


The basic passive elements of electrical systems are resistance, inductance and capacitance.
The equation governing a resistive element is given by Ohm’s Law,

Figure 12: Resistor

(v1 (t) − v2 (t)) = Ri(t) (17)

for an inductive element, the relationship between voltage and current is given by,

Figure 13: Inductor

di(t)
(v1 (t) − v2 (t)) = L (18)
dt

Figure 14: Capacitor

and for a capacitive element, the electrostatic equation is,

Q(t) = C(v1 (t) − v2 (t)) (19)

Differentiating both sides with respect to t


dQ/dt = i(t) = C d(v1(t) − v2(t))/dt    (20)

Note that if both sides of the equation are integrated then,


(v1(t) − v2(t)) = (1/C) ∫ i(t) dt    (21)

Example Find the differential equation relating v1 (t) and v2 (t) for the RLC network shown in
Figure 15.

Figure 15: RLC network

Solution Using Kirchhoff's Voltage Law,


(v1(t) − v2(t)) = R i(t) + L di(t)/dt    (22)

But
v2(t) = (1/C) ∫ i(t) dt    (23)
or
C dv2(t)/dt = i(t)    (24)
Substituting into the initial equation

v1(t) − v2(t) = RC dv2(t)/dt + LC d²v2(t)/dt²    (25)

It can be rewritten in the standard form of a second-order differential equation as given by Eq. 26,

d²v2(t)/dt² + (R/L) dv2(t)/dt + (1/LC) v2(t) = (1/LC) v1(t)    (26)

3 Linear Time-Invariant Systems


3.1 Linearity, Time-Invariance and Causality
The system is said to be linear if the Principle of Superposition holds. Suppose that the output

Figure 16: System

of the system is y1 (t) in response to input u1 (t) and y2 (t) in response to input u2 (t). Then the
output of the system in response to the input αu1 (t) + βu2 (t) is αy1 (t) + βy2 (t), where α and β
are arbitrary real numbers. Note that this must hold for any inputs u1 (t) and u2 (t). The system
is said to be time-invariant if the output of the system is y(t − τ ) when the input is u(t − τ ) (i.e.,
a time-shifted version of the input produces an equivalent time-shift in the output). The system is
said to be causal if the output at time t depends only on the input up to time t. In particular, this
means that if u(t) = 0 for t < τ, then y(t) = 0 for t < τ.
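The superposition property can be checked numerically on a discretized linear system. The sketch below (an illustration, not from the lecture; the first-order difference equation and the chosen inputs are assumptions) simulates y[k+1] = 0.9 y[k] + 0.1 u[k] and verifies that the response to αu1 + βu2 equals αy1 + βy2.

```python
def simulate(u, a=0.9, b=0.1):
    """Response of the linear difference equation y[k+1] = a*y[k] + b*u[k], y[0] = 0."""
    y, out = 0.0, []
    for uk in u:
        out.append(y)
        y = a * y + b * uk
    return out

n = 50
u1 = [1.0] * n                     # step input
u2 = [0.5 * k for k in range(n)]   # ramp input
alpha, beta = 2.0, -3.0

y1, y2 = simulate(u1), simulate(u2)
y_combo = simulate([alpha * p + beta * q for p, q in zip(u1, u2)])

# Superposition: the combined response matches the combination of responses.
ok = all(abs(yc - (alpha * ya + beta * yb)) < 1e-9
         for yc, ya, yb in zip(y_combo, y1, y2))
print(ok)  # True for a linear system
```

A nonlinear system (e.g., one with a saturation on u) would fail this check, which is one practical way to see why the assumptions of Section 2 matter.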

3.2 Transfer Functions


3.2.1 Introduction
The transfer function of an LTI system is the ratio of the Laplace transform of the output variable
to the Laplace transform of the input variable, assuming zero initial conditions. The following are
some examples of how transfer functions can be determined for dynamic system elements.

Figure 17: Transfer function

G(s) represents the transfer function of the system, mathematically it can be written as shown
below,

G(s) = Y(s)/U(s), with zero initial conditions    (27)

3.2.2 Review of Laplace Transforms
Suppose f (t) is a function of time. In this course, unless otherwise noted, we will only deal with
functions that satisfy f (t) = 0 for t < 0.
Reasons for using Laplace Transforms:
1. It converts differential equations into algebraic equations, which facilitates combining multiple
components of a system to obtain the total dynamic behavior (through addition and multiplication).
2. Insight can be gained from the solution in the transformed (s-) domain; inversion of the
transform is not always required.
The Laplace Transform of the function f (t) is defined as,
L{f(t)} = ∫₀^∞ f(t) e^(−st) dt

This is a function of the complex variable s, so we can write L{f (t)} = F (s).
Example Find the Laplace Transform of f(t) = e^(−at), t ≥ 0, where a ∈ R.
Solution.

F(s) = L{f(t)} = ∫₀^∞ e^(−at) e^(−st) dt = ∫₀^∞ e^(−(s+a)t) dt = [−e^(−(s+a)t)/(s + a)]₀^∞ = 1/(s + a)
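The defining integral can be approximated numerically to sanity-check a table entry. The sketch below (the step size and truncation horizon are arbitrary choices, not prescribed by the lecture) approximates ∫₀^∞ e^(−at) e^(−st) dt for a = 2, s = 3 and compares it with 1/(s + a) = 0.2.

```python
import math

def laplace_numeric(f, s, dt=1e-4, t_end=50.0):
    """Crude rectangle-rule approximation of the Laplace integral of f."""
    total, t = 0.0, 0.0
    while t < t_end:
        total += f(t) * math.exp(-s * t) * dt
        t += dt
    return total

a, s = 2.0, 3.0
approx = laplace_numeric(lambda t: math.exp(-a * t), s)
exact = 1.0 / (s + a)
print(approx, exact)  # approx ≈ 0.2
```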

Example Find the Laplace Transform of the unit step function r(t), defined as

r(t) = 1 if t ≥ 0, and 0 otherwise.

Solution.

R(s) = L{r(t)} = ∫₀^∞ 1 · e^(−st) dt = [−e^(−st)/s]₀^∞ = 1/s

Transforms of other common functions can be obtained from Laplace Transform tables:
Note: The functions f (t) in this table are only defined for t ≥ 0 (i.e., we are assuming that f (t) = 0
for t < 0).

3.2.3 Some properties of the Laplace Transform


1. Linearity. L {αf1 (t) + βf2 (t)} = αF1 (s) + βF2 (s).

14
Time domain          s-domain
f(t), t ≥ 0          F(s)
δ(t)                 1
r(t)                 1/s
e^(−at)              1/(s + a)
t                    1/s²
sin ωt               ω/(s² + ω²)
cos ωt               s/(s² + ω²)
...                  ...

Table 1: Sample Table of Laplace Transforms

2. Time Delay. Consider a delayed signal f(t − λ), λ ≥ 0 (i.e., a shifted version of f(t)).

L{f(t − λ)} = ∫₀^∞ f(t − λ) e^(−st) dt = ∫_λ^∞ f(t − λ) e^(−st) dt
            = ∫₀^∞ f(τ) e^(−s(τ+λ)) dτ    (letting τ = t − λ)
            = e^(−sλ) F(s)
3. Differentiation. L{df/dt} = sF(s) − f(0). More generally,

L{dᵐf/dtᵐ} = sᵐF(s) − s^(m−1) f(0) − s^(m−2) (df/dt)(0) − · · · − (d^(m−1)f/dt^(m−1))(0)

4. Integration. L{∫₀^t f(τ) dτ} = (1/s) F(s).

5. Convolution. The convolution of two signals f1(t) and f2(t) is denoted by

f1(t) ∗ f2(t) = ∫₀^t f1(τ) f2(t − τ) dτ

The Laplace Transform of the convolution of two signals is given by

L{f1(t) ∗ f2(t)} = F1(s) F2(s)

This is a very important property of Laplace Transforms! Convolution of two signals in the time-
domain (which could be hard to do) is equivalent to the multiplication of their Laplace Transforms
in the s domain (which is easy to do).
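The convolution property can be illustrated discretely. Since (1/s)·(1/(s + 1)) is the transform of r(t) ∗ e^(−t), whose time-domain value is ∫₀^t e^(−τ) dτ = 1 − e^(−t), a discrete approximation of the convolution should reproduce that curve. This sketch is an illustration only; the grid spacing is an arbitrary assumption.

```python
import math

dt = 1e-3
n = 5000                                     # simulate out to t = 5
f1 = [1.0] * n                               # unit step r(t)
f2 = [math.exp(-k * dt) for k in range(n)]   # e^(-t)

def convolve_at(f1, f2, k, dt):
    """Discrete approximation of (f1 * f2)(t) at t = k*dt."""
    return sum(f1[j] * f2[k - j] for j in range(k + 1)) * dt

t = 2.0
k = int(t / dt)
approx = convolve_at(f1, f2, k, dt)
exact = 1.0 - math.exp(-t)                   # inverse transform of 1/(s(s+1))
print(approx, exact)
```

The time-domain sum is O(n) per output point; in the s-domain the same operation is a single multiplication, which is the point of the property above.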

3.2.4 Inverse Laplace Transform
Given the Laplace Transform F (s), we can obtain the corresponding time domain function f (t) via
the Inverse Laplace Transform:
f(t) = L⁻¹{F(s)} = (1/2πj) ∫_(σ−j∞)^(σ+j∞) F(s) e^(st) ds

This is quite tedious to apply in practice, and so we will not be using it in this class. Instead, we
can simply use the Laplace Transform tables to obtain the corresponding functions.

Example. Determine the inverse Laplace Transform of the following


F(s) = 1/(s(s + 1))

Solution.

f(t) = L⁻¹{F(s)} = L⁻¹{1/(s(s + 1))} = L⁻¹{1/s − 1/(s + 1)}

Hence, by linearity of the (inverse) Laplace Transform,

f(t) = L⁻¹{1/s} − L⁻¹{1/(s + 1)} = 1 − e^(−t),  t ≥ 0

In this example, the function 1/(s(s + 1)) is converted into a sum of simpler functions, and then
its corresponding time-domain function is read off from the inverse Laplace Transform tables.
This is a general technique for inverting Laplace Transforms.

3.2.5 The Final Value Theorem


Let F (s) be the Laplace Transform of a function f (t). Often, we will be interested in how f (t)
behaves as t → ∞ (this is referred to as the asymptotic or steady state behavior of f (t)). This
information can directly be obtained from F (s) using the Final Value theorem.
Theorem. If all poles of sF (s) are in the open left half plane, then

lim_(t→∞) f(t) = lim_(s→0) sF(s)

Example. Find the final value of

F(s) = (s + 9)/(s² + 7s + 3)

Solution.

f(∞) = lim_(s→0) sF(s) = lim_(s→0) s (s + 9)/(s² + 7s + 3) = 0

Therefore, the final value is 0.
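The theorem can be sanity-checked numerically: evaluate sF(s) at a small s and compare with the long-time value of f(t). The sketch below reuses F(s) = 1/(s(s + 1)) from the inverse-transform example, whose time function 1 − e^(−t) tends to 1; the sample points s = 10⁻⁸ and t = 20 are arbitrary choices.

```python
import math

def F(s):
    return 1.0 / (s * (s + 1.0))

s_small = 1e-8
fvt_limit = s_small * F(s_small)     # approximates lim s->0 of s*F(s)
f_large_t = 1.0 - math.exp(-20.0)    # f(t) = 1 - e^(-t) evaluated at t = 20

print(fvt_limit, f_large_t)  # both close to 1
```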

3.2.6 Partial Fraction Expansion of F (s)
Suppose we have a rational function

F(s) = N(s)/D(s) = (bm s^m + bm−1 s^(m−1) + . . . + b1 s + b0)/(s^n + an−1 s^(n−1) + . . . + a1 s + a0)    (28)

where the ai's and bi's are constant real numbers.


Definition. If m ≤ n, the rational function is called proper. If m < n, it is strictly proper.
By factoring N (s) and D(s), we can write

F(s) = K (s + z1)(s + z2) . . . (s + zm) / ((s + p1)(s + p2) . . . (s + pn))    (29)

Definition. The complex numbers −z1, −z2, . . . , −zm are the roots of N(s) and are called the
zeros of F(s). The complex numbers −p1, −p2, . . . , −pn are the roots of D(s) and are called the
poles of F (s). Knowing the poles of a transfer function tells us what the natural response looks
like, and this is an important part of the overall response to initial conditions and inputs.
The general partial fraction approach is demonstrated using an example.
Example. Perform partial fraction expansion of the following equation
F(s) = s/(s² + 6s + 5)

Solution.

F(s) = s/(s² + 6s + 5) = s/((s + 1)(s + 5)) = A1/(s + 1) + A2/(s + 5)

or

F(s) = (A1(s + 5) + A2(s + 1))/((s + 1)(s + 5)) = ((A1 + A2)s + (5A1 + A2))/((s + 1)(s + 5))

Equating the coefficients of s and the constant terms in the numerator with those of F(s), we get
A1 + A2 = 1 and 5A1 + A2 = 0, which give

A1 = −1/4,  A2 = 5/4

Therefore,

F(s) = (−1/4)/(s + 1) + (5/4)/(s + 5)

Taking the inverse Laplace transform, we get

f(t) = −(1/4) e^(−t) + (5/4) e^(−5t)
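For distinct real poles, each residue can also be found by the cover-up method: Ai = lim_(s→−pi) (s + pi) F(s). A small numerical sketch (an illustration added here, with the poles of the example hard-coded):

```python
def N(s):
    return s  # numerator N(s) = s from the example above

def residue(pole, other_pole):
    """Cover-up method for F(s) = N(s)/((s - pole)(s - other_pole)), distinct poles."""
    return N(pole) / (pole - other_pole)

A1 = residue(-1.0, -5.0)   # coefficient of 1/(s + 1)
A2 = residue(-5.0, -1.0)   # coefficient of 1/(s + 5)
print(A1, A2)  # -0.25 and 1.25, i.e. -1/4 and 5/4
```

This agrees with the coefficient-matching result and is usually faster by hand when the poles are simple.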

3.2.7 Obtaining the transfer function of a differential equation model
1. First order System. It is a system whose dynamic behavior is described by a first order
differential equation.
τ dy(t)/dt + y(t) = k u(t)

Applying the derivative and linearity properties to the dynamical system, the Laplace Transform
of the system becomes,

L{τ dy(t)/dt + y(t)} = L{k u(t)}

τ (sY(s) − y(0)) + Y(s) = k U(s)

(τs + 1) Y(s) = k U(s) + τ y(0)

Which can be rearranged to give the transfer function of a general first order system, assuming
zero initial conditions:

G(s) = Y(s)/U(s) = k/(τs + 1)    (30)

where τ is the time constant, defined as the time required for the output to attain 63.2% of
its steady-state value. Hence, the larger the time constant, the longer the system takes to
settle.
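For G(s) = k/(τs + 1) with a unit step input, the time response (derived in Section 4.1) is y(t) = k(1 − e^(−t/τ)). A quick numerical sketch (the values k = 1, τ = 2 are arbitrary assumptions) confirms the 63.2% property of the time constant:

```python
import math

def first_order_step(t, k=1.0, tau=2.0):
    """Unit step response of G(s) = k/(tau*s + 1), zero initial conditions."""
    return k * (1.0 - math.exp(-t / tau))

tau = 2.0
y_at_tau = first_order_step(tau, tau=tau)
print(y_at_tau)  # ≈ 0.632: the output reaches 63.2% of steady state at t = tau
```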
2. Second order System. It is a system whose dynamic behavior is described by a second
order differential equation.

d²y(t)/dt² + 2ζωn dy(t)/dt + ωn² y(t) = ωn² u(t)

Applying the derivative and linearity properties to the dynamical system, the Laplace Trans-
form becomes,  2 
d y(t) dy(t) 2 2
L + 2ζωn + ω n y(t) = ωn u(t)
dt2 dt

L s2 Y (s) − sy(0) − ẏ(0) + 2ζωn {sY (s) − y(0)} + ωn 2 Y (s) = ωn 2 U (s)




Which can be rearranged to solve for the (transform of the) output Y(s) due to the (transform
of the) input U(s) and the initial conditions as follows:

(s² + 2ζωn s + ωn²) Y(s) = ωn² U(s) + (s + 2ζωn) y(0) + ẏ(0)




The transfer function of a general second order system is given by, again assuming zero initial
conditions:

G(s) = Y(s)/U(s) = ωn²/(s² + 2ζωn s + ωn²)    (31)

where ζ is the damping ratio, a dimensionless quantity describing the decay of oscillations during
a transient response, and ωn is the system's natural frequency, the angular frequency at which
the system tends to oscillate in the absence of damping.
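The unit step response of this system is covered in Section 4.2.1; for the underdamped case (0 < ζ < 1) the standard closed form is y(t) = 1 − (e^(−ζωn t)/√(1 − ζ²)) sin(ωd t + φ), with ωd = ωn√(1 − ζ²) and φ = cos⁻¹ ζ. The sketch below (parameter values ζ = 0.5, ωn = 2 are illustrative assumptions) evaluates the response at the first peak and compares it with the peak-overshoot formula e^(−ζπ/√(1 − ζ²)).

```python
import math

def second_order_step(t, zeta=0.5, wn=2.0):
    """Unit step response of G(s) = wn^2/(s^2 + 2*zeta*wn*s + wn^2), 0 < zeta < 1."""
    wd = wn * math.sqrt(1.0 - zeta**2)   # damped natural frequency
    phi = math.acos(zeta)
    return 1.0 - math.exp(-zeta * wn * t) / math.sqrt(1.0 - zeta**2) * math.sin(wd * t + phi)

zeta, wn = 0.5, 2.0
tp = math.pi / (wn * math.sqrt(1.0 - zeta**2))   # time of the first peak
peak = second_order_step(tp, zeta, wn)
mp_formula = math.exp(-zeta * math.pi / math.sqrt(1.0 - zeta**2))
print(peak - 1.0, mp_formula)  # both give the peak overshoot, about 0.163
```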
3. General case. Consider a general case of a system whose input and output are related via
a differential equation of the form

y^(n) + an−1 y^(n−1) + . . . + a1 ẏ + a0 y = bm u^(m) + bm−1 u^(m−1) + . . . + b1 u̇ + b0 u    (32)


where the ai ’s and bi ’s are constant real numbers.
Taking Laplace Transforms of both sides (assuming that all initial conditions are zero), we
get

(s^n + an−1 s^(n−1) + . . . + a1 s + a0) Y(s) = (bm s^m + bm−1 s^(m−1) + . . . + b1 s + b0) U(s)    (33)
from which we obtain the transfer function


G(s) = Y(s)/U(s) = (bm s^m + bm−1 s^(m−1) + . . . + b1 s + b0)/(s^n + an−1 s^(n−1) + . . . + a1 s + a0)    (34)

The impulse response of this system is given by

g(t) = L⁻¹{G(s)}

3.3 Control System Representation Using Block Diagrams


A block diagram of a system is a pictorial representation of the functions performed by each com-
ponent and of the flow of signals. Such a diagram depicts the interrelationships that exist among
the various components in a control system. Differing from a purely abstract mathematical repre-
sentation, a block diagram has the advantage of indicating more realistically the signal flows of the
actual system. In a block diagram all system variables are linked to each other through functional
blocks. The functional block or simply block is a symbol for the mathematical operation on the
input signal to the block that produces the output. The transfer functions of the components are
usually entered in the corresponding blocks, which are connected by arrows to indicate the direction
of the flow of signals.

The following are the basic elements used in block diagram system representation:
• Rectangular block represents the cause-and-effect relationship between the input and
output of a physical system.
• Arrow represents the direction of signal flow.

• Circle (summing point) represents a summing operation on signals. A negative or positive
sign is placed inside the circle to represent subtraction or addition of the respective signals.
• Takeoff point is a point where the signal is diverted into multiple branches, which carry
the same value to various operators or control elements.

3.3.1 Block diagram algebra


The following simple algebraic manipulation can be performed on the block diagram:
1. Blocks connected in series: Figure 18. Y = (G1 G2 )U

Figure 18: Combining blocks in cascade

2. Blocks connected in parallel: Figure 19. Y = (G1 ± G2 )U

Figure 19: Combining blocks in parallel


 
3. Feedback loop reduction: Figure 20. Y = (G1/(1 ∓ G1G2)) U

Figure 20: Eliminating feedback loop

4. Shifting the pick-off point: Figure 21. Y = G1 U

Figure 21: Moving a pick-off point behind a block

5. Shifting the pick-off point: Figure 22. Y = G1 U


6. Shifting the summation point: Figure 23. Y = G1 (U1 + U2 )

Figure 22: Moving a pick-off point ahead of a block

Figure 23: Moving a summing point behind a block

Figure 24: Moving a summing point ahead of a block

7. Shifting the summation point: Figure 24. Y = G1 U1 + U2


The block diagram of a practical feedback control system is often quite complicated. It may include
several feedback or feed-forward loops, and multiple inputs. By means of systematic block diagram
reduction, every multiple loop linear feedback system may be reduced to canonical form.
The following general steps may be used as a basic approach in the reduction of complicated block
diagrams. Each step refers to specific transformations.
1. Combine all cascade blocks using Transformation 1.
2. Combine all parallel blocks using Transformation 2.
3. Eliminate all minor feedback loops using Transformation 3.
4. Shift summing points to the left and takeoff points to the right of the major loop.
5. Repeat Steps 1 to 4 until the canonical form has been achieved for a particular input.
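Block diagram algebra can also be checked numerically by evaluating transfer functions at complex frequencies s with ordinary complex arithmetic. The sketch below (the particular G1 and G2 are assumptions for illustration) applies the negative-feedback reduction of Transformation 3 to a forward path G1(s) = 1/(s + 1) and feedback path G2(s) = 2:

```python
def G1(s):
    return 1.0 / (s + 1.0)   # forward-path block

def G2(s):
    return 2.0               # feedback-path block (constant gain)

def closed_loop(s):
    """Canonical negative-feedback reduction: Y/U = G1/(1 + G1*G2)."""
    return G1(s) / (1.0 + G1(s) * G2(s))

dc = closed_loop(0.0)        # DC gain: 1/(1 + 1*2) = 1/3
val = closed_loop(1j * 0.5)  # the same reduction holds at any s = j*omega
print(dc, abs(val))
```

Evaluating reduced and unreduced diagrams at a few values of s is a cheap way to confirm that a sequence of block-diagram manipulations has not changed the overall transfer function.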
Example. Reduce the block diagram Figure 25 to canonical form.
Solution. The following steps are used to simplify the given block diagram to canonical form,
• Step 1:
• Step 2:
• Step 3:
• Step 5:

Figure 25: Block diagram of a sample control system

4 Time-Domain Analysis
In time-domain analysis, the time response of a linear dynamic system is obtained as a function
of time, y(t). It is possible to compute the time response if the input and the model of the system
are known. The time response of any linear system has two components:
1. Transient response, which decays exponentially to zero as time increases. It is a function only
of the system's dynamics.
2. Steady-state response. This is the response of the system after the transient component has
decayed, and it is a function of both the system's dynamics and the input.

The total response of the system is always the sum of the transient and steady state component.

Below are the general steps for conducting a time-domain analysis of a control system:
1. Obtain the system’s transfer function, G(s) in the Laplace domain.
2. Identify the type of input signal, U (s) to be applied to the system based on the performance
measure.
3. Determine the system’s Laplace domain response, Y (s) = U (s)G(s).
4. Perform the inverse Laplace transform to determine the time-domain response, y(t) = L−1 {Y (s)}.
Inverse Laplace transform tables can be used, and sometimes partial fraction decomposition
method is employed to simplify the expression before finding the inverse.
5. Evaluate key performance metrics for the system’s time-domain behavior:
• Transient response analysis involves determining the system’s overshoot, settling time,
and rise time. These metrics can then be calculated and compared to desired specifica-
tions.
• The steady-state response analysis involves determining the steady-state error, which is
the difference between the output and the desired final value. This is usually derived
using the final value theorem.
• Stability analysis. For stability in continuous-time systems, all poles must have negative
real parts.
6. Fine-tune the system if necessary. If the system does not meet the desired performance criteria
(e.g., excessive overshoot, slow settling time), you may need to adjust system parameters (e.g.,
damping ratio, controller gains) or modify the system design

4.1 Time-domain analysis of a first-order system


Here, the response of a first-order system is determined using standard test input signals, such
as step, ramp, and impulse inputs, which are commonly used to analyze system behavior in the
time domain. These signals are chosen because they provide useful insights into the system’s
time-domain behavior.

4.1.1 Response of a first-order system to a unit step input.


Let us now obtain the solution for the system response, y(t) to a unit step input, u(t).
Consider the transfer function for the general form of the first-order system,

G(s) = Y(s)/U(s) = k/(τs + 1)

For ease of analysis, the transfer function can be rewritten as follows


A
G(s) =
s+a

23
where,
k 1
A= , a= ,
τ τ

Applying the input,

Y(s) = U(s)G(s) = (1/s) · A/(s + a)
Since it is not listed in the inverse Laplace Transform tables, the partial fraction expansion technique
can be used to expand the function as follows

Y(s) = [A1(s + a) + A2 s]/[s(s + a)] = A1/s + A2/(s + a)

where A1 = A/a and A2 = −A/a.

Therefore,

Y(s) = (A/a)[1/s − 1/(s + a)]

Using the inverse Laplace transform tables,

y(t) = (A/a)(1 − e^(−at))
But A = k/τ and a = 1/τ. Therefore,

y(t) = k(1 − e^(−t/τ))

The plot of the unit step response with respect to time, obtained by letting k = 1 and τ = 0.125, is
shown in Figure 26.

Figure 26: Unit step response
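The closed-form step response above is easy to check numerically. A small sketch, using the lecture's values k = 1 and τ = 0.125, confirms the familiar property that a first-order system reaches about 63.2% of its final value at t = τ:

```python
import math

def first_order_step(t, k=1.0, tau=0.125):
    """Unit-step response y(t) = k*(1 - exp(-t/tau)) of G(s) = k/(tau*s + 1)."""
    return k * (1.0 - math.exp(-t / tau))

# At t = tau the response reaches 1 - e^-1 of its final value k.
print(round(first_order_step(0.125), 4))      # → 0.6321
# After about five time constants the response is essentially settled:
print(round(first_order_step(5 * 0.125), 4))  # → 0.9933
```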

4.1.2 Response of a first-order system to a unit ramp input
Let us now obtain the solution for response y(t) to a unit ramp input u(t).
The transfer function for the general form of the first-order system is

G(s) = Y(s)/U(s) = k/(τs + 1)

Applying the input,

Y(s) = U(s)G(s) = A/[s²(s + a)]

Performing partial fraction expansion,

Y(s) = [A1 s(s + a) + A2(s + a) + A3 s²]/[s²(s + a)] = A1/s + A2/s² + A3/(s + a)

where A1 = −A/a², A2 = A/a and A3 = A/a².

Therefore,

Y(s) = −(A/a²)(1/s) + (A/a)(1/s²) + (A/a²)[1/(s + a)]

Y(s) = (A/a²)[1/(s + a) − 1/s] + (A/a)(1/s²)

Using the inverse Laplace transform tables,

y(t) = (A/a²)(e^(−at) − 1) + (A/a)t

But A = k/τ and a = 1/τ. Therefore,

y(t) = k{t − τ(1 − e^(−t/τ))}

The plot of the unit ramp response with respect to time is shown in Figure 27

Figure 27: Unit ramp response
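As a quick numerical check of the ramp-response formula: for k = 1, the tracking error r(t) − y(t) between the ramp input and the output tends to the time constant τ, which is the steady-state lag visible in Figure 27. A minimal sketch:

```python
import math

def first_order_ramp(t, k=1.0, tau=0.125):
    """Unit-ramp response y(t) = k*(t - tau*(1 - exp(-t/tau)))."""
    return k * (t - tau * (1.0 - math.exp(-t / tau)))

# For k = 1 the steady-state tracking error r(t) - y(t) approaches tau.
t = 10 * 0.125                            # ten time constants
print(round(t - first_order_ramp(t), 6))  # ≈ 0.125 = tau
```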

4.1.3 Response of a first-order system to a unit impulse input


The transfer function for the general form of the first-order system is

G(s) = Y(s)/U(s) = k/(τs + 1)

Applying the input,

Y(s) = U(s)G(s) = 1 × A/(s + a)

Using the inverse Laplace Transform Tables,

y(t) = Ae^(−at)

But A = k/τ and a = 1/τ. Therefore,

y(t) = (k/τ)e^(−t/τ)

The plot of the unit impulse response with respect to time is shown in Figure 28.

4.2 Time-domain analysis of a second order system


Consider a second order system of the form

G(s) = ωn²/(s² + 2ζωn s + ωn²),  ωn > 0    (35)
Where ζ is called the damping ratio, and ωn is called the natural frequency. The characteristic
equation is given by Eq. (36),

s² + 2ζωn s + ωn² = 0,  ωn > 0    (36)

26
Figure 28: Unit impulse response
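A simple numerical check of the impulse-response formula: its integral over time should equal the DC gain k. The sketch below (values k = 1, τ = 0.125 as in the earlier plots) approximates the integral with a Riemann sum:

```python
import math

def first_order_impulse(t, k=1.0, tau=0.125):
    """Unit-impulse response y(t) = (k/tau)*exp(-t/tau)."""
    return (k / tau) * math.exp(-t / tau)

# The impulse response integrates to k (the DC gain). Riemann-sum check
# over 0..2 seconds, i.e. 16 time constants:
dt, total = 1e-4, 0.0
for i in range(20000):
    total += first_order_impulse(i * dt) * dt
print(round(total, 3))   # → 1.0
```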

The poles of Eq. (36) are obtained using the quadratic formula as s = −ζωn ± ωn√(ζ² − 1). A second-
order system has two poles, from the two roots of its second-order characteristic polynomial. The
roots of polynomials with real coefficients may be real, imaginary or complex; complex roots
always occur in complex-conjugate pairs. The location of these poles in the complex plane will
vary based on the value of ζ. We analyze three different cases:

1. The poles are complex conjugates with negative real parts if ζ² − 1 < 0, i.e. 0 ≤ ζ < 1. In
this case, the system response is said to be under-damped, and the system poles are given
by s1,2 = σ ± jωd, where σ = −ζωn and ωd = ωn√(1 − ζ²).

Figure 29: Effect of pole location on system response for case 1.

2. The poles are negative, real and equal if ζ² − 1 = 0, which occurs when ζ = 1. This
results in a critically-damped system response, and the system poles are given by
s1,2 = −ζωn = −ωn.

3. The poles are negative, real and unequal if ζ² − 1 > 0, which occurs when ζ > 1. This
results in an over-damped system response, and the system poles are given by
s1,2 = −ζωn ± ωn√(ζ² − 1).

Figure 30: Effect of pole location on system response for case 2.

Figure 31: Effect of pole location on system response for case 3.

4.2.1 Unit step response of a second order system


Let us now obtain the solution for response y(t) to a unit step input u(t).
Therefore, using Eq. (35) we can determine the system response for the three different cases
depending on the value of ζ, as given below.

Y(s) = (1/s) · ωn²/(s² + 2ζωn s + ωn²),  ωn > 0    (37)

1. Under-damped case (0 ≤ ζ < 1). Eq. (37) can be written as

Y(s) = 1/s − (s + 2ζωn)/(s² + 2ζωn s + ωn²)    (38)

Y(s) = 1/s − {(s + ζωn)/[(s + ζωn)² + ωn²(1 − ζ²)] + ζωn/[(s + ζωn)² + ωn²(1 − ζ²)]}    (39)

Also, since ωd = ωn√(1 − ζ²),

Y(s) = 1/s − {(s + ζωn)/[(s + ζωn)² + ωd²] + [ζ/√(1 − ζ²)]·ωd/[(s + ζωn)² + ωd²]}    (40)

Therefore, the output response is obtained by taking the inverse Laplace transform of Eq. (40):

y(t) = 1 − e^(−ζωn t){cos ωd t + [ζ/√(1 − ζ²)] sin ωd t}    (41)

2. Critically-damped case (ζ = 1). Based on Eq. (37), since the damping ratio ζ = 1, the roots
are real and equal. Hence, the output response Y(s) is given by

Y(s) = ωn²/[s(s + ωn)²] = 1/s − ωn/(s + ωn)² − 1/(s + ωn)    (42)

Therefore, the response y(t) is given as

y(t) = 1 − ωn t e^(−ωn t) − e^(−ωn t)    (43)

3. Over-damped case (ζ > 1). From Eq. (37), the system poles are given by
s1,2 = −ζωn ± ωn√(ζ² − 1). Hence, Y(s) can be written as follows:

Y(s) = (1/s) · ωn²/{[s + ζωn + ωn√(ζ² − 1)][s + ζωn − ωn√(ζ² − 1)]}    (44)

By partial fraction expansion,

Y(s) = 1/s − {A1/[s + ζωn + ωn√(ζ² − 1)] + A2/[s + ζωn − ωn√(ζ² − 1)]}    (45)

Therefore, the response y(t) is given as

y(t) = 1 − [A1 e^(−(ζωn + ωn√(ζ² − 1))t) + A2 e^(−(ζωn − ωn√(ζ² − 1))t)]    (46)

for some constants A1 and A2. This response has no oscillations; a sample response is shown
in Figure 32.
As we can see from the responses for various values of ζ, a larger value of ζ corresponds to fewer
oscillations and less overshoot, and thus ζ is said to represent the amount of damping in the re-
sponse (a higher value of ζ corresponds to more damping).
The quantity ωn is called the undamped natural frequency, since it represents the location of the
pole on the imaginary axis when there is no damping (i.e., when ζ = 0). The quantity ωd is called
the damped natural frequency, since it represents the imaginary part of the pole when there is
damping. When ζ = 0, we have ωd = ωn .

The step response of the second order system will have different characteristics, depending on the
location of the poles (which are a function of the values ζ and ωn ). We will now introduce some
measures of performance for these step responses.

Figure 32: Second order system response

4.3 Time-domain performance measures


Performance of a second order system is often specified in terms of the characteristics of the step
response.
1. Delay time, td is the time required for the response to reach half of its final value for the
very first time.
2. Rise time, tr is the time required for a unit step response to rise from 10% to 90%,
5% to 95%, or 0% to 100% of its final value. For under-damped second order systems, the 0%
to 100% rise time is normally used. For over-damped systems, the 10% to 90% rise time is
commonly used.

tr = (π − β)/ωd (47)

where β = tan⁻¹(ωd/σ), ωd = ωn√(1 − ζ²), σ = −ζωn

3. Peak time, tp is the time required for the response to reach the first peak of the overshoot.

tp = π/(ωn√(1 − ζ²))    (48)

4. Maximum overshoot, Mp is the maximum peak value of the response curve measured from
unity. If the final steady-state value of the response differs from unity, then it is common to
use the maximum percent overshoot.
Mp = e^(−ζπ/√(1 − ζ²))    (49)

5. Settling time, ts is defined as the time for the response to reach, and stay within, 2% of its
final value.

• Within 2%

ts = 4/(ζωn)    (50)

• Within 5%
ts = 3/(ζωn)    (51)

Figure 33: Time-domain performance measures
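The formulas for tp and Mp can be cross-checked against the under-damped step response of Eq. (41): evaluating the response at the peak time should give exactly 1 + Mp. A minimal sketch (the values ζ = 0.5, ωn = 2 rad/s are arbitrary illustrative choices):

```python
import math

def step_response(t, zeta, wn):
    """Under-damped unit-step response, Eq. (41), valid for 0 <= zeta < 1."""
    wd = wn * math.sqrt(1 - zeta**2)
    return 1 - math.exp(-zeta * wn * t) * (
        math.cos(wd * t) + zeta / math.sqrt(1 - zeta**2) * math.sin(wd * t))

def peak_time(zeta, wn):   # Eq. (48)
    return math.pi / (wn * math.sqrt(1 - zeta**2))

def overshoot(zeta):       # Eq. (49)
    return math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))

zeta, wn = 0.5, 2.0
tp = peak_time(zeta, wn)
# Both lines print the same value (≈ 0.163): the response at tp sits Mp above 1.
print(round(step_response(tp, zeta, wn) - 1, 4))
print(round(overshoot(zeta), 4))
```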

Performance objectives for a control system are often specified in terms of settling time, which
measures the speed of the system’s response, and the damping ratio or natural frequency, which
relate to the system’s transient behavior. The damping ratio influences overshoot and the decay
rate of oscillations, while the natural frequency determines the speed of oscillatory responses when
tracking a reference input or following a desired trajectory.
The relationships between pole locations and corresponding time-domain response characteristics
form the foundation for designing control systems. By strategically placing poles in the left half
of the complex plane, with careful consideration of their real and imaginary components, control
systems can be tailored to achieve desired performance characteristics, such as appropriate damping,
fast settling time, and minimal overshoot.
Example. Given the pole plot shown in Figure 35, find ζ, ωn , tp , Mp , and ts .
Solution. The pole lies at s = −3 + j7. The damping ratio is given by ζ = cos θ =
cos(tan⁻¹(7/3)) = 0.394. The natural frequency, ωn, is the radial distance from the origin to
the pole, or ωn = √(7² + 3²) = 7.616. The

Figure 34: System’s natural frequency and damping ratio

Figure 35: Pole location of a second order system

peak time is
tp = π/(ωn√(1 − ζ²)) = π/7 = 0.449 s

also,

Mp = e^(−ζπ/√(1 − ζ²)) = 0.26

and

ts = 4/(ζωn) = 1.333 s
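The worked numbers can be verified with a few lines (an illustrative sketch; variable names are mine):

```python
import math

# Pole at s = -3 + 7j (from Figure 35): sigma = 3, wd = 7.
sigma, wd = 3.0, 7.0
wn = math.hypot(sigma, wd)      # radial distance from the origin
zeta = sigma / wn               # cos(theta)
tp = math.pi / wd               # peak time, Eq. (48)
Mp = math.exp(-zeta * math.pi / math.sqrt(1 - zeta**2))   # Eq. (49)
ts = 4 / sigma                  # 2% settling time, Eq. (50)

print(round(zeta, 3), round(wn, 3), round(tp, 3), round(Mp, 2), round(ts, 3))
# → 0.394 7.616 0.449 0.26 1.333
```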

5 Introduction to PID Controller
A proportional-integral-derivative (PID) controller is a feedback mechanism widely used in in-
dustrial control systems. It minimizes the error between a measured process variable (controlled
variable) and a desired set-point by calculating an appropriate corrective action to achieve the
desired system performance.

The PID controller, also known as a three-term controller, involves the proportional, integral, and
derivative terms. The proportional term determines the reaction to the current error, the integral
term responds to the sum of recent errors, and the derivative term reacts to the rate at which the
error is changing. The weighted sum of these three terms is used to adjust the process through a
control element, such as the position of a control valve or the power supply to a heating element.
Figure 36 is a simplified representation of the PID controller.

Figure 36: PID Controller

5.1 Proportional term


The proportional term makes a change to the output, y(t) that is proportional to the current error
value. The proportional response can be adjusted by multiplying the error, e(t) by a constant Kp ,
called the proportional gain as given by Eq. (52)

P = Kp e(t) (52)

Where, Kp , is the proportional gain, a tuning parameter.


A high proportional gain results in a large output change for a given error. However, if the gain is
too high, the system can become unstable. Conversely, a low proportional gain results in a smaller
output response to a large error, leading to a less responsive (or sensitive) controller. If the gain is
too low, the control action may be insufficient to effectively respond to system disturbances.

5.2 Integral term
The contribution from the integral term is proportional to both the magnitude of the error and
the duration of the error. Summing the instantaneous error over time (integrating the error)
gives the accumulated offset that should have been corrected previously. The accumulated error
is then multiplied by the integral gain and added to the controller output. The magnitude of the
contribution of the integral term to the overall control action is determined by the integral gain,
Ki . The integral term is given by Eq. (53)

I = Ki ∫₀ᵗ e(τ)dτ    (53)

Where, Ki is the integral gain, a tuning parameter.


The integral term (when added to the proportional term) accelerates the movement of the process
towards set-point and eliminates the residual steady-state error that occurs with a proportional
only controller. However, since the integral term is responding to accumulated errors from the
past, it can cause the present value to overshoot the set-point value (cross over the set-point).

5.3 Derivative term


The rate of change of the process error is calculated by determining the slope of the error over time
(i.e. its first derivative with respect to time) and multiplying this rate of change by the derivative
gain Kd. The magnitude of the contribution of the derivative term to the overall control action is
determined by the derivative gain, Kd. The derivative term is given by Eq. (54)

D = Kd · de(t)/dt    (54)

Where, Kd is the derivative gain, a tuning parameter.

The derivative control term is used to reduce the overshoot caused by the integral component and
improve the stability. However, differentiating a signal amplifies noise, making the derivative term
highly sensitive to noise in the error signal. If the noise and derivative gain are large enough, this
can cause the process to become unstable.

5.4 Summary
The PID control model involves the summation of the three control terms, where the output, u(t),
is the control signal or manipulated variable from the controller. Hence, the final form of the PID
controller is given by Eq. (55)

u(t) = Kp e(t) + Ki ∫₀ᵗ e(τ)dτ + Kd · de(t)/dt    (55)

where e(t) = r(t) − y(t) is the error between the desired set-point, r(t), and the measured process
variable (controlled variable), y(t).

In some applications, only one or two modes of a PID controller may be used to provide the
appropriate system control. In the absence of the respective control actions, a PID controller may
be referred to as a PI, PD, P, or I controller. PI controllers are particularly common, as derivative
action is sensitive to measurement noise, and the absence of an integral term may prevent the
system from reaching its target value.

5.5 Designing a PID Controller


There are four main steps to PID controller design:
1. Understand capabilities of PID components (as summarized previously)
2. Understand the strengths and weaknesses of the system’s existing performance (perform sim-
ple time-domain analysis, often step response test)
3. Specify desired closed-loop performance – general considerations and design specifications
4. Select the PID controller coefficients – may be done manually or automatically

5.6 Tuning via Process Models


A more effective approach to controller tuning is to use system knowledge, such as transfer functions
and formulas for time responses to reference inputs. Once the response requirements have been
specified, the control system designer can calculate the required values of Kp , Ki and Kd which
together constitute K(s). The design specifications are normally specified in terms of decay rate
of the transient response or settling time, steady-state error, disturbance rejection, and level of
overshoot.
In order to understand and assess the effects of different controller terms, we will consider a standard
transfer function block diagram which has a reference input and a disturbance signal as shown in
Figure 37

Figure 37: Closed Loop System

The control system performance can be assessed in terms of stability, reference tracking and
disturbance rejection.

Y (s) = D(s) + Gp (s)U (s), U (s) = K(s)E(s), E(s) = R(s) − Y (s) (56)

∴ Y (s) = D(s) + Gp (s)K(s)(R(s) − Y (s)) (57)

Hence, the overall system response is obtained as follows,

Y(s) = [Gp(s)K(s)/(1 + Gp(s)K(s))] R(s) + [1/(1 + Gp(s)K(s))] D(s)    (58)

For stability we analyze the roots of the characteristic equation

1 + Gp(s)K(s) = 0    (59)

For reference tracking we analyze the closed-loop transfer function

G(s) = Gp(s)K(s)/(1 + Gp(s)K(s))    (60)

and for disturbance rejection we analyze the disturbance transfer function

S(s) = 1/(1 + Gp(s)K(s))    (61)

Three properties of interest:


1. Closed-loop system stability, i.e. make sure that the roots of 1 + Gp(s)K(s) = 0 lie in the LHP
of the s-plane.
2. Reference tracking performance i.e. look at shape of time response (speed, overshoot) and
steady-state error.
3. Disturbance rejection performance i.e. look at speed of response, size of peak disturbance
and make sure no steady-state error due to disturbance.

5.7 Examples
Example. Figure 38 shows a process which is being controlled in a closed-loop system with a
unity feedback system. The process is to be controlled using the proportional term so that it has
a damping ratio of 0.5. Determine the proportional gain, Kp

Figure 38: Proportional control

Solution. Given the open-loop transfer function,

Kp/[s(s + 10)]

Figure 39: Effect of Kp on the pole location

Therefore, the closed-loop transfer function becomes


G(s) = Kp/(s² + 10s + Kp)

comparing it with the standard second order transfer function,


G(s) = ωn²/(s² + 2ζωn s + ωn²)

the following can be determined


2ζωn = 10,  ωn² = Kp

for given value of the damping ratio, ζ = 0.5


Since ωn = 10/(2ζ) = 5/ζ, we obtain Kp = ωn² = 25/0.5² = 100.

Figure 39 shows the effect of varying the gain Kp on the overall stability of the system. It can be
seen that as Kp increases the closed-loop poles move away from the imaginary axis of the s-plane.
Hence the system is stable for positive values of Kp and the response speeds up with increasing Kp.
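The design calculation above can be confirmed in a few lines (an illustrative sketch):

```python
import math

# Closed loop: G(s) = Kp / (s^2 + 10*s + Kp); match 2*zeta*wn = 10, wn^2 = Kp.
zeta = 0.5
wn = 10 / (2 * zeta)     # natural frequency = 10 rad/s
Kp = wn**2
print(Kp)                # → 100.0
# Resulting closed-loop poles: s = -5 ± j*wn*sqrt(1 - zeta^2)
print(round(wn * math.sqrt(1 - zeta**2), 3))   # → 8.66
```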
Example. Consider a position control system with low damping where the system output is given
in mm and the process input is in volts. Let the system be modeled by the following transfer
function
G1(s) = 2.5/(s² + 0.1s + 0.25)

It is required to control the system behavior using a PD controller and unity feedback. Calculate
the gains of a PD controller which give a closed-loop damping ratio of 0.5 and a closed-loop natural
frequency of 4 rad/s.
Solution. The closed-loop transfer function is given by

G(s) = K(s)G1(s)/(1 + K(s)G1(s))

but, K(s) = Kp + Kd s hence, the closed-loop transfer function becomes

G(s) = 2.5(Kp + Kd s)/[s² + (0.1 + 2.5Kd)s + (0.25 + 2.5Kp)]

Comparing the denominator of the closed-loop transfer function with the standard second-order
form (s² + 2ζωn s + ωn²), the coefficient terms can be determined as follows:

ωn² = 0.25 + 2.5Kp,  2ζωn = 0.1 + 2.5Kd

From the design requirements, in order for the closed-loop natural frequency to be 4 rad/s we
require 4² = 0.25 + 2.5Kp, which gives a proportional controller gain of Kp = 6.3. In order for
the closed-loop damping ratio to equal 0.5 we require 2 × 0.5 × 4 = 0.1 + 2.5Kd, which gives
a derivative controller gain of Kd = 1.56.
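The gain calculation can again be checked directly (an illustrative sketch):

```python
# PD design for G1(s) = 2.5/(s^2 + 0.1*s + 0.25), target zeta = 0.5, wn = 4.
zeta, wn = 0.5, 4.0
Kp = (wn**2 - 0.25) / 2.5          # from wn^2 = 0.25 + 2.5*Kp
Kd = (2 * zeta * wn - 0.1) / 2.5   # from 2*zeta*wn = 0.1 + 2.5*Kd
print(round(Kp, 2), round(Kd, 2))  # → 6.3 1.56
```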
Example. Consider the following transfer function which represents a heating system as shown in
Figure 40:

G1(s) = 0.3/(2s + 1)

The input signal U (s) is the power in kW from the heater and the output signal Y (s) is the resulting
temperature. The time constant τ of the system is 2 hours.

Figure 40: Integral control

The system behavior is to be controlled using an integral controller with a unity feedback. Assume
that the reference signal is a unit step input and the disturbance is given by d(t) = 0.5.
Solution. Forward path transfer function, Go (s):

Go(s) = (Ki/s)G1(s) = 0.3Ki/[s(2s + 1)]    (62)

Hence, the closed-loop transfer function, G(s) is given by

G(s) = G1(s)K(s)/(1 + G1(s)K(s)) = 0.3Ki/(2s² + s + 0.3Ki)    (63)

• Closed-loop stability and response:
Two closed-loop poles at s1,2 = −0.25 ± 0.25√(1 − 2.4Ki). The system is stable for all positive
values of Ki.
Therefore, over-damped, critically damped and under-damped behavior is possible. Response
speed improves as Ki increases. Steady-state error is zero for all cases.
• Reference tracking performance:
Output response to reference input

Y(s) = G(s)U(s) = [G1(s)K(s)/(1 + G1(s)K(s))]U(s)

Applying the final value theorem gives

yss = lim(s→0) sY(s) = lim(s→0) s·[G1(s)K(s)/(1 + G1(s)K(s))]·U(s)

For the above system,

yss = lim(s→0) s·[0.3Ki/(2s² + s + 0.3Ki)]·(1/s) = 0.3Ki/0.3Ki = 1

• Disturbance rejection performance:


Output response to disturbance input
Y(s) = S(s)D(s) = [1/(1 + G1(s)K(s))]D(s)

Applying the final value theorem gives

yss = lim(s→0) sY(s) = lim(s→0) s·[1/(1 + G1(s)K(s))]·D(s)

For the above system,

yss = lim(s→0) s·[(2s² + s)/(2s² + s + 0.3Ki)]·(0.5/s) = 0/0.3Ki = 0

Response speed improves as Ki increases but can cause overshoot, whereas the steady-state
error is always zero due to the integral term. The presence of integral term in a controller
eliminates constant offset signals for both reference tracking and disturbance rejection.
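These steady-state conclusions can be sanity-checked by simulating the loop with forward-Euler integration. The sketch below is illustrative; Ki = 2, the step size and the simulation length are arbitrary choices, not from the source:

```python
# Euler simulation of the heating loop: plant 2*y' + y = 0.3*u, output
# disturbance d = 0.5, integral controller u = Ki * ∫e dt, reference r = 1.
Ki, r, d, dt = 2.0, 1.0, 0.5, 0.001
y_plant, integral = 0.0, 0.0
for _ in range(int(60 / dt)):            # simulate 60 time units
    y = y_plant + d                      # measured output (disturbance added at the output)
    e = r - y
    integral += e * dt
    u = Ki * integral
    y_plant += dt * (0.3 * u - y_plant) / 2.0   # plant: y' = (0.3*u - y)/2
print(round(y_plant + d, 3))             # → 1.0 (zero steady-state error)
```

The output settles at the reference despite the constant disturbance, matching the final-value-theorem results above.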

5.7.1 Summary of the effects of PID Controller Gains


1. Increasing the proportional gain Kp :
• Faster speed of response.
• Improved reference tracking.
• Improved disturbance rejection.

• Can lead to overshoots when controlling second-order and higher systems.
• Physical limitations restrict how large the gain can be made.
2. Increasing integral gain Ki :
• Faster speed of response.
• Can lead to overshoots even when controlling first-order systems.
• Can cause instability
• Perfect reference tracking.
• Perfect disturbance rejection.
3. Increasing derivative gain Kd :
• Increases level of damping.
• No effect on steady-state reference tracking or disturbance rejection.

6 Exercise
Assignments and class quizzes will be provided separately.

The recommended textbooks for this unit chapter are Control Systems: Theory and Applications
[1] and Control Systems Engineering [2].

References
[1] Smarajit Ghosh et al. Control Systems: Theory and Applications. Pearson Education India,
2004.

[2] Norman S Nise. Control systems engineering. John Wiley & Sons, 2020.
