PID Control

Raul Rojas

Institut für Informatik
Freie Universität Berlin
Takustr. 9, 14195 Berlin, Germany

1 The Control Problem

Given a system or process whose behavior can be controlled with one or more
input signals, the problem arises of inducing a desired state or response in an
optimal way. "Optimal" could refer here to obtaining the desired system response
in minimal time, or with a minimum number of oscillations, or without
overshooting, etc.

In this chapter we discuss the control problem from the point of view of a system
with feedback. Historically, many feedback-controlled systems were built
before control theory was developed in the 19th and 20th centuries. Control theory
is now a fairly mathematical subject, in which processes are described in
the time or frequency domain using special transformations, most notably the
Laplace transform. Here we will deal with a special case of a controller,
the PID controller, which is simple but nevertheless the most popular choice for
system control.

If we had an analytical model of a process, valid for a broad range of environmental
conditions, we could compute the optimal input in advance, or even the
optimal sequence of input signals needed to obtain a desired behavior. Let us assume
that the response of the system to an input x is given by the functional
model f(x). Then the required input for the response y is just f^{-1}(y). Having
an "inverse model" of the system allows us to apply the control variable to the
system directly, even without a feedback signal. More frequently, however,
although we may have detailed knowledge of the system, it is very difficult to
compute the optimal input in advance due to environmental disturbances and
noise. We then proceed using a kind of "trial and error" approach, testing with
one input signal and adjusting its value according to the measured error. The
corrections are not made blindly, but are based on the observed past behavior of the
system, as we show below.

When a system is controlled without looking at the actual behavior or output,
we have an open-loop system (open loop because the possible feedback loop has
not been established). Open-loop systems are simpler and can be adequate in
some cases. A closed-loop system is one in which a measurement of the system's
state is compared to the desired goal, and the difference can then be used to
make corrections.

Fig. 1 shows a diagram of a system prepared for closed-loop control. The controller,
nowadays almost always a microprocessor, receives the desired output
value as well as the measured error of the system's response (initialized to zero at
the beginning). The controller produces a control signal which drives the robot's
actuators. This, together with environmental disturbances, produces the actual
system output. The output can be monitored using sensors, possibly affected by
noise. The measurement is then compared inside the controller with the desired
response, and the controller produces a corrected control signal. The controller is
activated many times, very rapidly in comparison with the inertia of the process.

Fig. 1. A generic model of control with feedback

The main question is therefore what the structure of the controller box should be.
A very popular strategy for driving such closed-loop systems is the PID controller.
The letter P stands here for proportional, I for integral, and D for
differential.¹ The letters P, I, and D refer also to the three gain constants used
in PID controllers.

¹ N. Minorsky is usually credited as having been the first to use and describe a PID
controller, in this case for the steering of ships.

In what follows we take a very simple example, a DC motor, and discuss what
happens when a desired rotational speed must be reached. We assume that the controller
does not contain any analytical model of the motor which could allow it to set
the input voltage a priori according to the desired speed. We only assume that
we can make the voltage lower or higher (between −1 and 1 Volt) in fractional
steps, and that we can monitor the instantaneous rotational speed of the motor.
A mass with moment of inertia I is fixed to the motor's rotor.

In a DC motor the relationship between the torque and the voltage is linear. In
the examples below we use the following relationship between torque D, voltage
V, and angular velocity ω:

D = a(V − bω)

where a and b are constants specific to the motor. The term bω is the braking
voltage produced by the DC motor, which also behaves as a generator when it is
running at angular velocity ω. When the DC motor reaches its maximum
speed, the braking voltage is equal to the maximum voltage and the torque is
zero. The constant a converts from units of voltage to units of torque.
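
To make the model concrete, here is a minimal simulation sketch (in Python; the constants a, b, I and the time step are illustrative assumptions, not values from the text) that integrates the torque over small time steps:

    # Minimal simulation sketch of the DC motor model D = a*(V - b*omega).
    # The constants a, b, inertia and dt are illustrative assumptions.
    def simulate_motor(voltages, a=1.0, b=2.0, inertia=0.5, dt=0.004):
        # Returns the angular velocity after each applied voltage.
        omega = 0.0
        trajectory = []
        for v in voltages:
            torque = a * (v - b * omega)      # D = a(V - b*omega)
            omega += (torque / inertia) * dt  # angular acceleration = D / I
            trajectory.append(omega)
        return trajectory

With a constant voltage V the simulated speed approaches V/b, the point where the braking voltage cancels the applied voltage and the torque becomes zero.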

Fig. 2. A classical PID controller. The input to the controller is the reference signal
minus the system’s output.

When the mass m attached to the rotor is small (i.e. I is small) the motor
responds to high voltages with a “jump-start”. When I is large, it takes much
more time to accelerate the DC motor. The PID controller should take charge
of minimizing the acceleration time, while at the same time avoiding control
oscillations, which are very common in robots or any process in which we try to
reach the steady state in minimal time. In what follows we first take a look at a
purely proportional controller, then a PI controller, and finally a PID controller.

Fig. 2 shows a schematic of a discrete, bare-bones PID controller. We denote
time by t. Time advances in discrete steps. The controller receives as input
not the reference signal r(t) itself, but the difference between the current state of the
system, given by s(t), and the reference signal r(t). This we call the error e(t).
From the input e(t) the controller generates a corrected control signal c(t + 1),
which drives the system and produces a new output s(t + 1) and a new error, which is
fed back to the controller box.

It is very important to notice that in this simple feedback loop the original
value of the reference signal r(t) is not directly visible to the controller. In some
variations of PID control this restriction is removed.

The classical textbook version of the control signal generated by a PID controller
reads as follows:

c(t) = Kp ( e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt )        (1)

where Kp, Kp/Ti and Kp Td are the proportional, integral, and differential control
gains, as we discuss below. Sometimes the equation is written in the simpler form

c(t) = Kp e(t) + Ki ∫₀ᵗ e(τ) dτ + Kd de(t)/dt        (2)
where Kp , Ki and Kd are the proportional, integral and differential gains. Eq. 1
has a form which allows us to associate an intuitive meaning with the constants
Ti and Td , as explained below.

The formulas describe a continuous PID controller in which the process is running
continuously and the feedback loop is so fast as to seem continuous (for
example, using an analog circuit). The integral of the instantaneous error e(τ) up to
the current time, divided by Ti, is a measure of how far off the system has been from the
desired response in the past, and the derivative of the error is a measure of how
fast the error is changing. The derivative of the error, multiplied by a time step
Td, can be used as a predictor of the future error. A PID controller, therefore,
adjusts the control signal according to the current error, a measure of the error
in the past, and the predicted error in the near future.

 
c(t) = Kp ( e(t) + (1/Ti) ∫₀ᵗ e(τ) dτ + Td de(t)/dt )        (3)

where, inside the parentheses, the first term is the current error, the second the
average past error, and the third the (predicted) future error.

If we take Ti = t and Td = 1, then the current error is e(t), the I-term is the
average error in the past, and the D-term is the predicted error in the next time
step. All three enter the computation. Their sum is multiplied by the constant
Kp and used as the control signal. Variations in the values of Ti and Td provide
different weighted averages of past, current, and future error.

According to surveys of the control methods most popular in industry,
PID controllers account for the bulk of all devices used in process control [3].
They are popular because they are simple, easy to analyze, and because there is
already a rich literature dealing with them.

2 Proportional Controller

A P-controller (proportional controller) runs in a closed loop which, in the case
of robots, can be iterated up to several thousand times per second. During the
control loop there is a difference at time t between the desired system state r(t)
(the reference signal) and the current system state s(t). Therefore, the error
computed in the controller at time t is e(t) = r(t) − s(t).

Fig. 3. Rotational velocity of a DC motor. The voltage is adjusted with a P controller.
The curves show the results for gains Kp equal to 0.5, 1, 1.5, 2. The maximum voltage
is 1.

Let us denote by c(t) the control signal applied at time t. In the discrete case,
the control signal for the next control iteration can be computed as

c(t + 1) = Kp · e(t),

where Kp is a proportionality constant which transforms from state units to control
units, and which determines how large the control signal should be.

In the case of the DC motor described above, we can experiment with several
control signals. Starting from c(0) = 0, for a desired fixed angular velocity r(·) =
0.5, for given motor parameters and a fixed mass with moment of inertia I, we
obtain the results shown in Fig. 3. The graph shows how the rotational velocity
increases until the system comes close to the desired rotational speed. The motor
does not reach the desired velocity of 0.5 because, should the error e(t) fall to
zero, the control variable c(t + 1) (in this case the voltage) would also fall
to zero. The system reaches the steady state when a certain error e(t) is such
that the computed control variable c(t + 1) produces the error e(t + 1) = e(t).
The error stabilizes at a non-zero level which produces a constant control signal.
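
The behavior just described can be reproduced with a small sketch (the motor constants and the gain below are illustrative assumptions, not the parameters behind Fig. 3):

    # P-controller driving the DC motor model toward a target speed of 0.5.
    # Motor constants, gain and the clipping limit are illustrative assumptions.
    def p_control(r=0.5, Kp=1.5, a=1.0, b=2.0, inertia=0.5, dt=0.004, steps=2000):
        omega, c = 0.0, 0.0
        for _ in range(steps):
            torque = a * (c - b * omega)       # motor model D = a(V - b*omega)
            omega += (torque / inertia) * dt
            e = r - omega                      # error e(t) = r(t) - s(t)
            c = max(-1.0, min(1.0, Kp * e))    # c(t+1) = Kp*e(t), clipped to +-1 V
        return omega                           # settles below r for any finite Kp

Running this with increasing values of Kp shows the steady state creeping toward, but never reaching, the reference value.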

In the example shown in Fig. 3 the controller produces at the beginning voltages
higher than 1 Volt. In such cases the control signal has become saturated and we
clip it at 1 Volt. This introduces a further complication in the control process,
a nonlinearity which in this case is not so important, but which can afflict the
control of other kinds of systems.

Since the motor does not reach the angular velocity 0.5, we can try to force it
to reach that speed, but only at the expense of increasing the proportional gain
excessively. The rotational speed would then oscillate around an equilibrium
value which is still different from 0.5.

3 PI Controller

The solution to the problem of the system settling on a steady state s(t) which is
different from the desired state r(t) is to add another term to the computation of
the control signal. An integral controller adds up the error during the successive
control cycles (as shown in Eq. 1). This integrated error can then be used to
adjust the control variable c(t). If, for example, the error has been positive for
many past cycles, the correction to the control variable should be increased more
drastically.

The control expression used in a PI controller is of the form

c(t + 1) = Kp e(t) + (Kp/Ti) Σ₀ᵗ e(i)

where Ti is the integration time constant, and Σ₀ᵗ e(i) is the discrete sum of
the past errors from i = 0 up to time t. Adjusting Ti for a given Kp, we obtain different
weights for the integral of the past error. The factors Ti/(1 + Ti) and 1/(1 + Ti)
are the relative weights of the P-term and the I-term before they are multiplied
by the constant Kp.
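
As a rough sketch (again with assumed constants, not the values behind Fig. 4), the PI update only adds a running sum of the error to the proportional term:

    # PI controller: proportional term plus accumulated (integrated) error.
    # Motor constants, Kp and Ti are illustrative assumptions.
    def pi_control(r=0.5, Kp=1.0, Ti=50.0, a=1.0, b=2.0, inertia=0.5,
                   dt=0.004, steps=2000):
        omega, c, error_sum = 0.0, 0.0, 0.0
        for _ in range(steps):
            torque = a * (c - b * omega)
            omega += (torque / inertia) * dt
            e = r - omega
            error_sum += e                        # discrete integral of the error
            c = Kp * e + (Kp / Ti) * error_sum    # c(t+1) = Kp e(t) + (Kp/Ti) * sum
            c = max(-1.0, min(1.0, c))            # saturate at +-1 V
        return omega                              # settles at r, after overshooting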

Fig. 4. Proportional and integral control of a DC motor

Fig. 4 shows the result of using the integral of the error in the computation
of the control signal. Now the velocity of the motor overshoots, but falls back
to the desired value of 0.5 gently or with several oscillations, depending on the
proportionality constant. In this example Kp takes the values 0.5, 1, 1.5 and 2.
The value of Ti was set to .

It is clear why the velocity oscillates. Since the accumulated error is positive
and large when the motor reaches the desired speed 0.5, the system has to
accumulate negative error in order to reduce the total accumulated error. When
the accumulation of negative error is too fast, a new cycle of accumulation of
positive error can be needed, and so on.

The I-term introduces a kind of inertia in the controller. Even when e(t) = 0,
the control signal does not become 0 instantly. This is similar to what would
happen if we computed the control signal at time t + 1 as

c(t + 1) := c(t) + Kp e(t).

If the control signal, for example, becomes too high, it cannot be brought down
quickly when Kp is low. Many correction cycles have to accumulate in order
to make c smaller.

One problem related to the I-term is the fact that a large integrated error in the
past can lead to a large I-term. The control signal can become saturated if it is
larger than the maximal possible control value. In that case some PID algorithms
stop the integration of past errors so that the I-term does not continue growing.
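
A common form of this safeguard is conditional integration: the error sum is frozen whenever the computed control signal had to be clipped. A minimal sketch (the function name and the saturation limit of ±1 are our assumptions):

    # Conditional integration (simple anti-windup): the error is only added to
    # the running sum while the control signal stays inside its allowed range.
    def pi_step_anti_windup(e, error_sum, Kp=1.0, Ti=50.0, c_max=1.0):
        c = Kp * e + (Kp / Ti) * (error_sum + e)   # tentative control signal
        if abs(c) < c_max:
            error_sum += e                         # integrate only when unsaturated
        else:
            c = max(-c_max, min(c_max, c))         # clip and freeze the integral
        return c, error_sum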

4 PID Controller

The effect of the integral term is to eliminate the systematic offset from the
desired response r(t).

However, the introduction of the integral term can lead to undesirable oscillations
and makes the controller act “harder”. This problem can be ameliorated using
a differential term in the computation of the control signal.

For the differential term we use the discrete derivative of the error signal, that
is, the difference e(t) − e(t − 1). If the derivative is large, that is, if the error is
changing rapidly, we can moderate the control signal by the amount
Kp Td (e(t) − e(t − 1)), which is negative when the error is falling. The control signal at t + 1 then becomes

c(t + 1) = Kp ( e(t) + (1/Ti) Σ₀ᵗ e(i) + Td (e(t) − e(t − 1)) ).

Fig. 5. PID control of a DC motor. The speed overshoots but less than with a pure PI
controller.

Using the formula above, we are mixing three terms. The relative weights of the
P-term, I-term, and D-term are, respectively, Ti/(Ti + 1 + Ti Td), 1/(Ti + 1 + Ti Td),
and Ti Td/(Ti + 1 + Ti Td).
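
Put together, a bare-bones discrete PID update for the motor example might look like the following sketch (gains and motor constants are illustrative assumptions, not tuned values from the text):

    # Discrete PID controller for the DC motor model, following
    # c(t+1) = Kp * ( e(t) + (1/Ti)*sum(e) + Td*(e(t) - e(t-1)) ).
    def pid_control(r=0.5, Kp=1.0, Ti=50.0, Td=5.0, a=1.0, b=2.0,
                    inertia=0.5, dt=0.004, steps=2000):
        omega, c = 0.0, 0.0
        error_sum, e_prev = 0.0, 0.0
        for _ in range(steps):
            torque = a * (c - b * omega)          # motor model D = a(V - b*omega)
            omega += (torque / inertia) * dt
            e = r - omega
            error_sum += e
            c = Kp * (e + error_sum / Ti + Td * (e - e_prev))
            c = max(-1.0, min(1.0, c))            # clip to the +-1 V range
            e_prev = e
        return omega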

Finding adequate combinations of the different gains for the process can be
difficult and is error prone. Therefore, automatic methods of gain adjustment
are needed.

A difficulty associated with the D-term is that system or measurement noise
can lead to high derivatives. Whenever there is a discontinuity in the reference
signal r(t) there is also a large derivative. Therefore, some PID controllers use a
low-pass filter to smooth the derivative of the error signal. Mathematically this
amounts to computing a moving weighted average of the derivative of the error
before it is used in the D-term of the PID controller.
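
One simple way to realize this is an exponentially weighted moving average of the raw difference, sketched below (the smoothing factor alpha is an illustrative assumption):

    # Exponentially weighted moving average of the error derivative, used to
    # smooth the D-term. An alpha close to 1 means little smoothing.
    def smoothed_derivative(e, e_prev, d_filtered, alpha=0.2):
        d_raw = e - e_prev                              # raw discrete derivative
        return alpha * d_raw + (1.0 - alpha) * d_filtered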

5 Tuning a PID controller

There are several recipes for tuning a PID controller, that is, for finding adequate
control gains for the controller constants Kp , Ti and Td . One possibility is to
examine many possible combinations of the constants using a simulator of the
system. But if the system is well understood, so that a good simulation can be
programmed, it is also probably feasible to compute the PID constants directly
from the mathematics of the system.

We have tuned our PID controller using Reinforcement Learning. More about
these experiments in another chapter.

One simple alternative for tuning a PID controller is the Ziegler-Nichols method.
It works as follows: at first the controller is purely proportional (Ti = ∞, Td = 0).
The gain is increased until the system starts to oscillate. The proportional gain
at this point is Ku and the period of oscillation is Tu. From these values we set the
PID parameters according to Table 1. The predicted period of the closed-loop
system, Tp, is also given in the table.

Controller   Kp       Ti        Td         Tp
P            0.5 Ku   -         -          Tu
PI           0.4 Ku   0.8 Tu    -          1.4 Tu
PID          0.6 Ku   0.5 Tu    0.125 Tu   0.85 Tu

Table 1. PID Parameters
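
Once Ku and Tu have been measured, the table translates directly into code; the following sketch (the function name is ours) returns the gains for the chosen controller type, with None where Table 1 leaves an entry empty:

    # Ziegler-Nichols gains from the ultimate gain Ku and oscillation period Tu,
    # following Table 1.
    def ziegler_nichols(Ku, Tu, controller="PID"):
        rules = {
            "P":   (0.5 * Ku, None,     None),
            "PI":  (0.4 * Ku, 0.8 * Tu, None),
            "PID": (0.6 * Ku, 0.5 * Tu, 0.125 * Tu),
        }
        return rules[controller]   # (Kp, Ti, Td)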

6 Our PID controller

In the case of our robots, the controller has been implemented as a program
running on an HC-12 microprocessor. Our robots have a platform with four
wheels, as shown in Fig. 6. For a discussion of the mathematical relationships
between motor torques and speeds, and robot forces and speeds, see [4]. Fig. 7
shows a diagram of our control software.

Fig. 6. Platform of a small size robot with four motors

The electronics receives a reference signal consisting of three parts: the desired target
velocity vx^t along the x axis (in the reference frame of the robot), the desired target
velocity vy^t along the y axis, and the desired target angular velocity ω^t. The controller
consists of three independent PID controllers, one for each dimension. The error
signals for vx, vy and ω are handled separately. From the robot we obtain the
motor pulses and from them we can compute the current velocities vx and vy,
as well as the angular velocity ω. From these, the three PID controllers produce
the control signals fx, fy, and τ, which are the desired forces needed to drive
the robot (the forces are proportional to the desired accelerations).

The control signals are then transformed independently into the desired motor
torques t1x, t2x, t3x, t4x (for our four motors) for the fx signal, into t1y, t2y, t3y, t4y for
the fy signal, and into t1ω, t2ω, t3ω, t4ω for the τ signal. We add up the individual
contributions for all motors, so that t1 = t1x + t1y + t1ω, and similarly for t2, t3 and
t4.
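
A rough sketch of this structure (class and variable names are ours, and the 4-by-3 mixing matrix that maps the three control signals onto the four motors is a placeholder, not the real wheel geometry):

    # Three independent PID controllers (for vx, vy, omega) whose outputs
    # fx, fy, tau are mixed into four motor torques t1..t4.
    class SimplePID:
        def __init__(self, Kp, Ti, Td):
            self.Kp, self.Ti, self.Td = Kp, Ti, Td
            self.error_sum, self.e_prev = 0.0, 0.0

        def step(self, e):
            self.error_sum += e
            c = self.Kp * (e + self.error_sum / self.Ti
                           + self.Td * (e - self.e_prev))
            self.e_prev = e
            return c

    def control_step(ref, state, pids, mix):
        # ref and state are (vx, vy, omega); pids is a list of three SimplePID.
        forces = [pid.step(r - s) for pid, r, s in zip(pids, ref, state)]
        # each motor torque is a weighted sum of the three control signals
        return [sum(w * f for w, f in zip(row, forces)) for row in mix]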

Fig. 7. Diagram of the control software

We next test if the motor torques can be provided and if the system is not
saturated.

From the scaled motor torques and the current motor speeds (given by the motor
pulse readings m1, m2, m3, m4) we compute the necessary PWM signals for each
of the four motors.

Our PID controller runs at 250 Hz, with a cycle time of 4 ms. New commands
arrive with a frequency of 50 Hz, so that for any new command we have at least
5 PID cycles for adjusting the motors. Since commands (the reference signals)
do not really change 50 times per second, but are kept constant over several
transmission cycles by the high-level control, we actually have several periods of
5 cycles for the PID controller to work and the system to settle down.

References

1. M. Horn and N. Dourdoumas, Regelungstechnik, Pearson Studium, Munich, 2004.
2. N. Minorsky, "Directional Stability and Automatically Steered Bodies," J. Am.
Soc. Nav. Eng., Vol. 34, p. 280, 1922.
3. K. J. Åström and R. M. Murray, Feedback Systems – An Introduction for Scientists
and Engineers, 2005.
4. R. Rojas, "Omnidirectional Control," www.fu-fighters.de.
