PID controller
From Wikipedia, the free encyclopedia
A proportional–integral–derivative controller (PID controller or three-term
controller) is a control loop mechanism employing feedback that is widely used in
industrial control systems and a variety of other applications requiring
continuously modulated control. A PID controller continuously calculates an error
value e(t) as the difference between a desired setpoint (SP) and a measured process
variable (PV) and applies a correction based on proportional, integral, and
derivative terms (denoted P, I, and D respectively), hence the name.
The first theoretical analysis and practical application of PID was in the field of
automatic steering systems for ships, developed from the early 1920s onwards. It
was then used for automatic process control in the manufacturing industry, where it
was widely implemented in pneumatic and then electronic controllers. Today the PID
concept is used universally in applications requiring accurate and optimized
automatic control.
Fundamental operation
A block diagram of a PID controller in a feedback loop. r(t) is the desired process
value or setpoint (SP), and y(t) is the measured process value (PV).
The distinguishing feature of the PID controller is the ability to use the three
control terms of proportional, integral and derivative influence on the controller
output to apply accurate and optimal control. The block diagram on the right shows
the principles of how these terms are generated and applied. It shows a PID
controller, which continuously calculates an error value e(t) as the difference between a desired setpoint {\displaystyle {\text{SP}}=r(t)} and a measured process variable {\displaystyle {\text{PV}}=y(t)}:
{\displaystyle e(t)=r(t)-y(t),}
and applies a correction based on proportional, integral, and derivative terms. The controller attempts to minimize the error over time by adjustment of a control variable u(t), such as the opening of a control valve, to a new value determined by a weighted sum of the control terms.
In this model:
Control action – The mathematical model and practical loop above both use a direct
control action for all the terms, which means an increasing positive error results
in an increasing positive control output correction. The system is called reverse
acting if it is necessary to apply negative corrective action. For instance, if the valve in a flow loop were 100–0% open for 0–100% control output, the controller action would have to be reversed. Some process control schemes and
final control elements require this reverse action. An example would be a valve for
cooling water, where the fail-safe mode, in the case of loss of signal, would be
100% opening of the valve; therefore 0% controller output needs to cause 100% valve
opening.
Mathematical form
The overall control function
{\displaystyle u(t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +K_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}},}
where
{\displaystyle K_{\text{p}}}, {\displaystyle K_{\text{i}}}, and {\displaystyle K_{\text{d}}}, all non-negative, denote the coefficients for the proportional, integral, and derivative terms respectively (sometimes denoted P, I, and D).
Equivalently, the standard form expresses the controller in terms of the proportional gain {\displaystyle K_{\text{p}}}, an integral time {\displaystyle T_{\text{i}}}, and a derivative time {\displaystyle T_{\text{d}}} (discussed further below):
{\displaystyle u(t)=K_{\text{p}}\left(e(t)+{\frac {1}{T_{\text{i}}}}\int _{0}^{t}e(\tau )\,\mathrm {d} \tau +T_{\text{d}}{\frac {\mathrm {d} e(t)}{\mathrm {d} t}}\right)}
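For illustration, the parallel form above can be realized directly in software. The following minimal Python sketch (the class name, fixed sampling period dt, and numerical approximations are illustrative assumptions; the discrete implementation and pseudocode sections later in the article treat this more carefully):

class ParallelPID:
    """Minimal sketch of the parallel-form law u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                    # rectangular approximation of the integral term
        derivative = (error - self.prev_error) / self.dt    # backward difference for the derivative term
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative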
Selective use of control terms
Although a PID controller has three control terms, some applications need only one
or two terms to provide appropriate control. This is achieved by setting the unused
parameters to zero and is called a PI, PD, P or I controller in the absence of the
other control actions. PI controllers are fairly common in applications where
derivative action would be sensitive to measurement noise, but the integral term is
often needed for the system to reach its target value.
Applicability
The use of the PID algorithm does not guarantee optimal control of the system or
its control stability (see § Limitations, below). Situations may occur where there
are excessive delays: the measurement of the process value is delayed, or the
control action does not apply quickly enough. In these cases lead–lag compensation
is required to be effective. The response of the controller can be described in
terms of its responsiveness to an error, the degree to which the system overshoots
a setpoint, and the degree of any system oscillation. But the PID controller is
broadly applicable since it relies only on the response of the measured process
variable, not on knowledge or a model of the underlying process.
History
Early PID theory was developed by observing the actions of helmsmen in keeping a
vessel on course in the face of varying influences such as wind and sea state.
Pneumatic PID (three-term) controller. The magnitudes of the three terms (P, I and
D) are adjusted by the dials at the top.
Origins
Continuous control, before PID controllers were fully understood and implemented,
has one of its origins in the centrifugal governor, which uses rotating weights to
control a process. This was invented by Christiaan Huygens in the 17th century to
regulate the gap between millstones in windmills depending on the speed of
rotation, and thereby compensate for the variable speed of grain feed.[2][3]
With the invention of the low-pressure stationary steam engine there was a need for
automatic speed control, and James Watt’s self-designed "conical pendulum"
governor, a set of revolving steel balls attached to a vertical spindle by link
arms, came to be an industry standard. This was based on the millstone-gap control
concept.[4]
About this time, the invention of the Whitehead torpedo posed a control problem
that required accurate control of the running depth. Use of a depth pressure sensor
alone proved inadequate, and a pendulum that measured the fore and aft pitch of the
torpedo was combined with depth measurement to become the pendulum-and-hydrostat
control. Pressure control provided only a proportional control that, if the control
gain was too high, would become unstable and go into overshoot with considerable
instability of depth-holding. The pendulum added what is now known as derivative
control, which damped the oscillations by detecting the torpedo dive/climb angle
and thereby the rate-of-change of depth.[6] This development (named by Whitehead as
"The Secret" to give no clue to its action) was around 1868.[7]
It was not until 1922, however, that a formal control law for what we now call PID
or three-term control was first developed using theoretical analysis, by Russian
American engineer Nicolas Minorsky.[9] Minorsky was researching and designing
automatic ship steering for the US Navy and based his analysis on observations of a
helmsman. He noted the helmsman steered the ship based not only on the current
course error but also on past error, as well as the current rate of change;[10]
this was then given a mathematical treatment by Minorsky.[4] His goal was
stability, not general control, which simplified the problem significantly. While
proportional control provided stability against small disturbances, it was
insufficient for dealing with a steady disturbance, notably a stiff gale (due to
steady-state error), which required adding the integral term. Finally, the
derivative term was added to improve stability and control.
Trials were carried out on the USS New Mexico, with the controllers controlling the
angular velocity (not the angle) of the rudder. PI control yielded sustained yaw
(angular error) of ±2°. Adding the D element yielded a yaw error of ±1/6°, better
than most helmsmen could achieve.[11]
The Navy ultimately did not adopt the system due to resistance by personnel.
Similar work was carried out and published by several others in the 1930s.
Industrial control
Proportional control using nozzle and flapper high gain amplifier and negative
feedback
The wide use of feedback controllers did not become feasible until the development
of wideband high-gain amplifiers to use the concept of negative feedback. This had
been developed in telephone engineering electronics by Harold Black in the late
1920s, but not published until 1934.[4] Independently, Clesson E Mason of the
Foxboro Company in 1930 invented a wide-band pneumatic controller by combining the
nozzle and flapper high-gain pneumatic amplifier, which had been invented in 1914,
with negative feedback from the controller output. This dramatically increased the
linear range of operation of the nozzle and flapper amplifier, and integral control
could also be added by the use of a precision bleed valve and a bellows generating
the integral term. The result was the "Stabilog" controller which gave both
proportional and integral functions using feedback bellows.[4] The integral term
was called Reset.[12] Later the derivative term was added by a further bellows and
adjustable orifice.
From about 1932 onwards, the use of wideband pneumatic controllers increased
rapidly in a variety of control applications. Air pressure was used for generating
the controller output, and also for powering process modulating devices such as
diaphragm-operated control valves. They were simple low maintenance devices that
operated well in harsh industrial environments and did not present explosion risks
in hazardous locations. They were the industry standard for many decades until the
advent of discrete electronic controllers and distributed control systems (DCSs).
With these controllers, a pneumatic industry signaling standard of 3–15 psi (0.2–
1.0 bar) was established, which had an elevated zero to ensure devices were working
within their linear characteristic and represented the control range of 0-100%.
In the 1950s, when high gain electronic amplifiers became cheap and reliable,
electronic PID controllers became popular, and the pneumatic standard was emulated
by 10-50 mA and 4–20 mA current loop signals (the latter became the industry
standard). Pneumatic field actuators are still widely used because of the
advantages of pneumatic energy for control valves in process plant environments.
Showing the evolution of analog control loop signaling from the pneumatic to the
electronic eras
Current loops used for sensing and control signals. A modern electronic "smart"
valve positioner is shown, which will incorporate its own PID controller.
Most modern PID controls in industry are implemented as computer software in DCSs,
programmable logic controllers (PLCs), or discrete compact controllers.
Proportional
The obvious method is proportional control: the motor current is set in proportion
to the existing error. However, this method fails if, for instance, the arm has to
lift different weights: a greater weight needs a greater force applied for the same
error on the down side, but a smaller force if the error is low on the upside.
That's where the integral and derivative terms play their part.
Integral
An integral term increases action in relation not only to the error but also the
time for which it has persisted. So, if the applied force is not enough to bring
the error to zero, this force will be increased as time passes. A pure "I"
controller could bring the error to zero, but it would be both slow reacting at the
start (because the action would be small at the beginning, depending on time to get
significant) and brutal at the end (the action increases as long as the error is
positive, even if the error has started to approach zero).
Applying too much integral when the error is small and decreasing will lead to
overshoot. After overshooting, if the controller were to apply a large correction
in the opposite direction and repeatedly overshoot the desired position, the output
would oscillate around the setpoint in either a constant, growing, or decaying
sinusoid. If the amplitude of the oscillations increases with time, the system is
unstable. If they decrease, the system is stable. If the oscillations remain at a
constant magnitude, the system is marginally stable.
Derivative
A derivative term does not consider the magnitude of the error (meaning it cannot
bring it to zero: a pure D controller cannot bring the system to its setpoint), but
the rate of change of error, trying to bring this rate to zero. It aims at
flattening the error trajectory into a horizontal line, damping the force applied,
and so reduces overshoot (error on the other side because of too great applied
force).
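The interplay of the three terms can be illustrated with a toy simulation: a unit mass with viscous friction and a constant load force stands in for the robotic arm example above. The plant, gains, and load below are arbitrary assumptions chosen only to show proportional offset, integral correction, and derivative damping.

# Toy Python simulation of P, PI, and PID control of a damped unit mass under a
# constant load force (Euler integration; all values are illustrative).
def simulate(kp, ki, kd, t_end=10.0, dt=0.01, setpoint=1.0, load=-2.0):
    x, v = 0.0, 0.0                       # position and velocity of the mass
    integral, prev_error = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        error = setpoint - x
        integral += error * dt
        derivative = (error - prev_error) / dt
        prev_error = error
        u = kp * error + ki * integral + kd * derivative
        a = u + load - 1.0 * v            # unit mass, viscous friction, constant load
        v += a * dt
        x += v * dt
    return x                              # position reached at the end of the run

print("P   only:", simulate(kp=8.0, ki=0.0, kd=0.0))   # settles with a steady-state offset
print("PI      :", simulate(kp=8.0, ki=4.0, kd=0.0))   # offset removed, but the response is oscillatory
print("PID     :", simulate(kp=8.0, ki=4.0, kd=4.0))   # derivative action damps the oscillation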
Control damping
In the interest of achieving a controlled arrival at the desired position (SP) in a
timely and accurate way, the controlled system needs to be critically damped. A
well-tuned position control system will also apply the necessary currents to the
controlled motor so that the arm pushes and pulls as necessary to resist external
forces trying to move it away from the required position. The setpoint itself may
be generated by an external system, such as a PLC or other computer system, so that
it continuously varies depending on the work that the robotic arm is expected to
do. A well-tuned PID control system will enable the arm to meet these changing
requirements to the best of its capabilities.
Response to disturbances
If a controller starts from a stable state with zero error (PV = SP), then further
changes by the controller will be in response to changes in other measured or
unmeasured inputs to the process that affect the process, and hence the PV.
Variables that affect the process other than the MV are known as disturbances.
Generally, controllers are used to reject disturbances and to implement setpoint
changes. A change in load on the arm constitutes a disturbance to the robot arm
control process.
Applications
In theory, a controller can be used to control any process that has a measurable
output (PV), a known ideal value for that output (SP), and an input to the process
(MV) that will affect the relevant PV. Controllers are used in industry to regulate
temperature, pressure, force, feed rate,[15] flow rate, chemical composition
(component concentrations), weight, position, speed, and practically every other
variable for which a measurement exists.
Controller theory
This section describes the parallel or non-interacting form of the PID controller.
For other forms please see § Alternative nomenclature and forms.
The PID control scheme is named after its three correcting terms, whose sum
constitutes the manipulated variable (MV). The proportional, integral, and
derivative terms are summed to calculate the output of the PID controller. Defining
u(t) as the controller output, the final form of the PID algorithm is
{\displaystyle u(t)=\mathrm {MV} (t)=K_{\text{p}}e(t)+K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau +K_{\text{d}}{\frac {de(t)}{dt}},}
where
{\displaystyle K_{\text{p}}} is the proportional gain, a tuning parameter,
{\displaystyle K_{\text{i}}} is the integral gain, a tuning parameter,
{\displaystyle K_{\text{d}}} is the derivative gain, a tuning parameter,
{\displaystyle e(t)=\mathrm {SP} -\mathrm {PV} (t)} is the error (SP is the setpoint, and PV(t) is the process variable),
t is the time or instantaneous time (the present),
τ is the variable of integration (takes on values from time 0 to the present t).
Equivalently, the transfer function in the Laplace domain of the PID controller is
{\displaystyle L(s)=K_{\text{p}}+K_{\text{i}}/s+K_{\text{d}}s,}
where s is the complex frequency.
Proportional term
Response of PV to step change of SP vs time, for three values of Kp (Ki and Kd held
constant)
The proportional term produces an output value that is proportional to the current
error value. The proportional response can be adjusted by multiplying the error by
a constant Kp, called the proportional gain constant.
{\displaystyle P_{\text{out}}=K_{\text{p}}e(t).}
A high proportional gain results in a large change in the output for a given change
in the error. If the proportional gain is too high, the system can become unstable
(see the section on loop tuning). In contrast, a small gain results in a small
output response to a large input error, and a less responsive or less sensitive
controller. If the proportional gain is too low, the control action may be too
small when responding to system disturbances. Tuning theory and industrial practice
indicate that the proportional term should contribute the bulk of the output
change.[citation needed]
Steady-state error
The steady-state error is the difference between the desired final output and the
actual one.[16] Because a non-zero error is required to drive it, a proportional
controller generally operates with a steady-state error.[a] Steady-state error
(SSE) is proportional to the process gain and inversely proportional to
proportional gain. SSE may be mitigated by adding a compensating bias term to both the setpoint and the output, or corrected dynamically by adding an integral term.
Integral term
Response of PV to step change of SP vs time, for three values of Ki (Kp and Kd held
constant)
The contribution from the integral term is proportional to both the magnitude of
the error and the duration of the error. The integral in a PID controller is the
sum of the instantaneous error over time and gives the accumulated offset that
should have been corrected previously. The accumulated error is then multiplied by
the integral gain (Ki) and added to the controller output.
{\displaystyle I_{\text{out}}=K_{\text{i}}\int _{0}^{t}e(\tau )\,d\tau .}
The integral term accelerates the movement of the process towards setpoint and
eliminates the residual steady-state error that occurs with a pure proportional
controller. However, since the integral term responds to accumulated errors from
the past, it can cause the present value to overshoot the setpoint value (see the
section on loop tuning).
Derivative term
Response of PV to step change of SP vs time, for three values of Kd (Kp and Ki held
constant)
The derivative of the process error is calculated by determining the slope of the error over time and multiplying this rate of change by the derivative gain Kd, which sets the magnitude of the derivative term's contribution to the overall control action.
{\displaystyle D_{\text{out}}=K_{\text{d}}{\frac {de(t)}{dt}}.}
Derivative action predicts system behavior and thus improves settling time and
stability of the system.[17][18] An ideal derivative is not causal, so implementations of PID controllers include additional low-pass filtering of the derivative term to limit the high-frequency gain and noise. Derivative action is
seldom used in practice though – by one estimate in only 25% of deployed
controllers[citation needed] – because of its variable impact on system stability
in real-world applications.
Loop tuning
Tuning a control loop is the adjustment of its control parameters (proportional
band/gain, integral gain/reset, derivative gain/rate) to the optimum values for the
desired control response. Stability (no unbounded oscillation) is a basic
requirement, but beyond that, different systems have different behavior, different
applications have different requirements, and requirements may conflict with one
another.
Even though there are only three parameters and it is simple to describe in
principle, PID tuning is a difficult problem because it must satisfy complex
criteria within the limitations of PID control. Accordingly, there are various
methods for loop tuning, and more sophisticated techniques are the subject of
patents; this section describes some traditional, manual methods for loop tuning.
Designing and tuning a PID controller appears to be conceptually intuitive, but can
be hard in practice, if multiple (and often conflicting) objectives, such as short
transient and high stability, are to be achieved. PID controllers often provide
acceptable control using default tunings, but performance can generally be improved
by careful tuning, and performance may be unacceptable with poor tuning. Usually,
initial designs need to be adjusted repeatedly through computer simulations until
the closed-loop system performs or compromises as desired.
Some processes have a degree of nonlinearity, so parameters that work well at full-
load conditions do not work when the process is starting up from no load. This can
be corrected by gain scheduling (using different parameters in different operating
regions).
Stability
If the PID controller parameters (the gains of the proportional, integral and
derivative terms) are chosen incorrectly, the controlled process input can be
unstable; i.e., its output diverges, with or without oscillation, and is limited
only by saturation or mechanical breakage. Instability is caused by excess gain,
particularly in the presence of significant lag.
Generally, stabilization of response is required and the process must not oscillate
for any combination of process conditions and setpoints, though sometimes marginal
stability (bounded oscillation) is acceptable or desired.[citation needed]
Mathematically, the origins of instability can be seen in the Laplace domain. The closed-loop transfer function is
{\displaystyle H(s)={\frac {K(s)G(s)}{1+K(s)G(s)}},}
where K(s) is the PID transfer function and G(s) is the plant transfer function. A system is unstable where the closed-loop transfer function diverges for some s.[19] This happens in situations where K(s)G(s) = −1, typically when |K(s)G(s)| = 1 with a 180-degree phase shift. Stability is guaranteed when |K(s)G(s)| < 1 for frequencies that suffer high phase shifts. A more general formalism of this effect is known as the Nyquist stability criterion.
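This stability condition can be checked numerically for a given plant by finding the roots of the characteristic equation 1 + K(s)G(s) = 0. The sketch below uses a hypothetical third-order plant G(s) = 1/(s + 1)^3; the plant and gain values are assumptions chosen for illustration only.

# Numerical stability check: the closed loop is stable when every root of
# 1 + K(s)G(s) = 0 has a negative real part, for K(s) = Kp + Ki/s + Kd*s and
# the assumed plant G(s) = 1/(s + 1)^3.
import numpy as np

def closed_loop_poles(kp, ki, kd):
    # 1 + K(s)G(s) = 0  =>  s(s+1)^3 + Kd*s^2 + Kp*s + Ki = 0
    # s(s+1)^3 = s^4 + 3s^3 + 3s^2 + s
    coeffs = [1.0, 3.0, 3.0 + kd, 1.0 + kp, ki]
    return np.roots(coeffs)

for gains in [(2.0, 1.0, 1.0), (20.0, 10.0, 1.0)]:
    poles = closed_loop_poles(*gains)
    stable = all(p.real < 0 for p in poles)
    print(gains, "stable" if stable else "unstable", np.round(poles, 3))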
Optimal behavior
The optimal behavior on a process change or setpoint change varies depending on the
application.
The choice of method depends largely on whether the loop can be taken offline for
tuning, and on the response time of the system. If the system can be taken offline,
the best tuning method often involves subjecting the system to a step change in
input, measuring the output as a function of time, and using this response to
determine the control parameters.[citation needed]
Ziegler–Nichols method
In this heuristic method, the integral and derivative gains are first set to zero, and the proportional gain is increased until it reaches the ultimate gain Ku, at which the output of the loop starts to oscillate with period Tu. Ku and Tu are then used to set the gains:

Control type    Kp        Ki            Kd
P               0.50 Ku   —             —
PI              0.45 Ku   0.54 Ku/Tu    —
PID             0.60 Ku   1.2 Ku/Tu     3 Ku Tu/40

These gains apply to the ideal, parallel form of the PID controller. When applied to the standard PID form, only the integral and derivative gains Ki and Kd are dependent on the oscillation period Tu.
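A minimal Python sketch of applying the table, assuming the ultimate gain Ku and oscillation period Tu have already been measured (the function name and example values are illustrative):

# Ziegler–Nichols tuning from the ultimate gain Ku and oscillation period Tu,
# following the table above (ideal, parallel form).
def ziegler_nichols(ku, tu, control_type="PID"):
    rules = {
        "P":   (0.50 * ku, 0.0,            0.0),
        "PI":  (0.45 * ku, 0.54 * ku / tu, 0.0),
        "PID": (0.60 * ku, 1.2 * ku / tu,  3.0 * ku * tu / 40.0),
    }
    kp, ki, kd = rules[control_type]
    return kp, ki, kd

print(ziegler_nichols(ku=2.0, tu=0.5))   # example values for Ku and Tu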
Cohen–Coon parameters
This method was developed in 1953 and is based on a first-order + time delay model.
Similar to the Ziegler–Nichols method, a set of tuning parameters was developed to yield a closed-loop response with a decay ratio of 1/4. Arguably the biggest problem with these parameters is that a small change in the process parameters could potentially cause a closed-loop system to become unstable.
Relay (Åström–Hägglund) method
Published in 1984 by Karl Johan Åström and Tore Hägglund,[24] the relay method
temporarily operates the process using bang-bang control and measures the resultant
oscillations. The output is switched (as if by a relay, hence the name) between two
values of the control variable. The values must be chosen so the process will cross
the setpoint, but they need not be 0% and 100%; by choosing suitable values,
dangerous oscillations can be avoided.
As long as the process variable is below the setpoint, the control output is set to
the higher value. As soon as it rises above the setpoint, the control output is set
to the lower value. Ideally, the output waveform is nearly square, spending equal
time above and below the setpoint. The period and amplitude of the resultant
oscillations are measured, and used to compute the ultimate gain and period, which
are then fed into the Ziegler–Nichols method.
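A simplified Python sketch of the relay experiment on a simulated first-order-plus-dead-time process is shown below. The process model, relay levels, and the describing-function estimate Ku ≈ 4d/(πa) (with d the relay half-amplitude and a the oscillation half-amplitude) are assumptions made for illustration; a practical auto-tuner must also handle noise and asymmetric limit cycles.

import math
from collections import deque

def relay_experiment(setpoint=1.0, low=0.0, high=2.0, dt=0.01, t_end=60.0,
                     kp_process=1.0, tau=5.0, theta=1.0):
    delay = deque([0.0] * int(theta / dt))            # dead-time buffer for the process input
    pv = 0.0
    outputs, switch_times = [], []
    u = high
    for step in range(int(t_end / dt)):
        u_prev = u
        u = high if pv < setpoint else low            # relay (bang-bang) action
        if u != u_prev:
            switch_times.append(step * dt)
        delay.append(u)
        u_delayed = delay.popleft()
        pv += dt * (kp_process * u_delayed - pv) / tau  # first-order process response
        outputs.append(pv)
    # period from the last two switching instants (assumes a roughly symmetric limit
    # cycle); amplitude from the tail of the response once the oscillation has settled
    tu = 2.0 * (switch_times[-1] - switch_times[-2])
    tail = outputs[len(outputs) // 2:]
    a = (max(tail) - min(tail)) / 2.0
    d = (high - low) / 2.0
    ku = 4.0 * d / (math.pi * a)                      # describing-function estimate of the ultimate gain
    return ku, tu

print(relay_experiment())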
Alternatively, the process can be characterized by a first-order-plus-dead-time model obtained from a step test. The transfer function for a first-order process with dead time is
{\displaystyle y(s)={\frac {k_{p}e^{-\theta s}}{\tau _{p}s+1}}u(s),}
where kp is the process gain, τp is the time constant, θ is the dead time, and u(s)
is a step change input. Converting this transfer function to the time domain
results in:
{\displaystyle y(t)=k_{p}\Delta u\left(1-e^{-(t-\theta )/\tau _{p}}\right),\qquad t\geq \theta .}
It is important when using this method to apply a large enough step change input
that the output can be measured; however, too large of a step change can affect the
process stability. Additionally, a larger step change ensures that the output does
not change due to a disturbance (for best results, try to minimize disturbances
when performing the step test).
One way to determine the parameters for the first-order process is using the 63.2%
method. In this method, the process gain (kp) is equal to the change in output
divided by the change in input. The dead time (θ) is the amount of time between
when the step change occurred and when the output first changed. The time constant
(τp) is the amount of time it takes for the output to reach 63.2% of the new
steady-state value after the step change. One downside to using this method is that
it can take a while to reach a new steady-state value if the process has large time
constants.[26]
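A Python sketch of the 63.2% method applied to recorded step-response data (the argument names t, y, du, and t_step are hypothetical, and a monotonically increasing response is assumed):

# Fit a first-order-plus-dead-time model to step-response samples.
# t, y: lists of time stamps and process output samples
# du: size of the input step; t_step: time at which the step was applied
def fit_foptd(t, y, du, t_step):
    y0, y_inf = y[0], y[-1]                      # initial and new steady-state values
    kp = (y_inf - y0) / du                       # process gain: output change / input change
    # dead time: first time the output moves noticeably from its initial value
    threshold = y0 + 0.02 * (y_inf - y0)
    t_start = next(ti for ti, yi in zip(t, y) if yi >= threshold)
    theta = t_start - t_step
    # time constant: time to reach 63.2% of the total change, measured from the end of the dead time
    y_63 = y0 + 0.632 * (y_inf - y0)
    t_63 = next(ti for ti, yi in zip(t, y) if yi >= y_63)
    tau = t_63 - t_start
    return kp, theta, tau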
Tuning software
Most modern industrial facilities no longer tune loops using the manual calculation
methods shown above. Instead, PID tuning and loop optimization software are used to
ensure consistent results. These software packages gather data, develop process
models, and suggest optimal tuning. Some software packages can even develop tuning
by gathering data from reference changes.
Mathematical PID loop tuning induces an impulse in the system and then uses the
controlled system's frequency response to design the PID loop values. In loops with
response times of several minutes, mathematical loop tuning is recommended, because
trial and error can take days just to find a stable set of loop values. Optimal
values are harder to find. Some digital loop controllers offer a self-tuning
feature in which very small setpoint changes are sent to the process, allowing the
controller itself to calculate optimal tuning values.
Another approach calculates initial values via the Ziegler–Nichols method, and uses
a numerical optimization technique to find better PID coefficients.[27]
Other formulas are available to tune the loop according to different performance
criteria. Many patented formulas are now embedded within PID tuning software and
hardware modules.[28]
Advances in automated PID loop tuning software also deliver algorithms for tuning
PID loops in a dynamic or non-steady-state (NSS) scenario. The software models the dynamics of a process through a disturbance and calculates PID control parameters in response.[29]
Limitations
While PID controllers are applicable to many control problems, and often perform
satisfactorily without any improvements or only coarse tuning, they can perform
poorly in some applications and do not in general provide optimal control. The
fundamental difficulty with PID control is that it is a feedback control system,
with constant parameters, and no direct knowledge of the process, and thus overall
performance is reactive and a compromise. While PID control is the best controller when no model of the process is available, better performance can be obtained by explicitly modeling the process, without resorting to an observer.
PID controllers, when used alone, can give poor performance when the PID loop gains
must be reduced so that the control system does not overshoot, oscillate or hunt
about the control setpoint value. They also have difficulties in the presence of nonlinearities, may trade off regulation versus response time, do not react to changing process behavior (say, after the process has warmed up), and have lag in responding to large disturbances.
Integral windup
Further information: Integral windup
One common problem resulting from the ideal PID implementations is integral windup.
Following a large change in setpoint the integral term can accumulate an error
larger than the maximal value for the regulation variable (windup), thus the system
overshoots and continues to increase until this accumulated error is unwound. This
problem can be addressed by:
Disabling the integration until the PV has entered the controllable region
Preventing the integral term from accumulating above or below pre-determined bounds
Back-calculating the integral term to constrain the regulator output within
feasible bounds.[31]
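A minimal Python sketch of the second remedy, clamping the integral accumulator to pre-determined bounds (the class name, limits, and clamping strategy are illustrative assumptions):

class ClampedPID:
    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        # clamp the accumulator so Ki * integral alone cannot exceed the output range
        hi = self.out_max / self.ki if self.ki else 0.0
        lo = self.out_min / self.ki if self.ki else 0.0
        self.integral = max(lo, min(hi, self.integral))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(self.out_min, min(self.out_max, u))   # saturate the final output as well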
Overshooting from known disturbances
For example, a PID loop is used to control the temperature of an electric
resistance furnace where the system has stabilized. Now when the door is opened and
something cold is put into the furnace the temperature drops below the setpoint.
The integral function of the controller tends to compensate for the error by introducing another error in the positive direction. This overshoot can be avoided by freezing the integral function after the door is opened, for the time the control loop typically needs to reheat the furnace.
PI controller
A PI controller is a special case of the PID controller in which the derivative of the error is not used. The controller output is given by
{\displaystyle K_{P}\Delta +K_{I}\int \Delta \,dt,}
where Δ is the error or deviation of actual measured value (PV) from the setpoint (SP):
{\displaystyle \Delta =SP-PV.}
A PI controller can be modelled easily in software such as Simulink or Xcos using a
"flow chart" box involving Laplace operators:
{\displaystyle C={\frac {G(1+\tau s)}{\tau s}}}
where
{\displaystyle G=K_{P}} = proportional gain
{\displaystyle {\frac {G}{\tau }}=K_{I}} = integral gain
Setting a value for G is often a trade-off between decreasing overshoot and increasing settling time.
The lack of derivative action may make the system more steady in the steady state
in the case of noisy data. This is because derivative action is more sensitive to
higher-frequency terms in the inputs.
Deadband
Many PID loops control a mechanical device (for example, a valve). Mechanical
maintenance can be a major cost and wear leads to control degradation in the form
of either stiction or backlash in the mechanical response to an input signal. The
rate of mechanical wear is mainly a function of how often a device is activated to
make a change. Where wear is a significant concern, the PID loop may have an output
deadband to reduce the frequency of activation of the output (valve). This is
accomplished by modifying the controller to hold its output steady if the change
would be small (within the defined deadband range). The calculated output must
leave the deadband before the actual output will change.
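A Python sketch of such an output deadband (the names and deadband width are illustrative):

# The controller output is only sent to the valve when it differs from the last
# transmitted value by more than the deadband width.
def apply_deadband(new_output, last_sent_output, deadband=0.5):
    if abs(new_output - last_sent_output) <= deadband:
        return last_sent_output   # hold the valve where it is; the change is too small
    return new_output             # the change is large enough to justify moving the valve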
Setpoint ramping
In this modification, the setpoint is gradually moved from its old value to a newly
specified value using a linear or first-order differential ramp function. This
avoids the discontinuity present in a simple step change.
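A Python sketch of a linear setpoint ramp (the names and rate limit are illustrative):

# The working setpoint moves toward the target by at most max_rate * dt per control
# cycle instead of jumping in a single step.
def ramp_setpoint(current_sp, target_sp, max_rate, dt):
    step = max_rate * dt
    if target_sp > current_sp:
        return min(current_sp + step, target_sp)
    return max(current_sp - step, target_sp)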
Derivative of the process variable
In this case the PID controller measures the derivative of the measured process
variable (PV), rather than the derivative of the error. This quantity is always
continuous (i.e., never has a step change as a result of changed setpoint). This
modification is a simple case of setpoint weighting.
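A Python sketch of this modification, with the derivative computed from the process variable instead of the error (the names are illustrative; note the sign change, since de/dt = −d(PV)/dt while the setpoint is constant):

class PIDDerivativeOnPV:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_pv = None

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        d_pv = 0.0 if self.prev_pv is None else (pv - self.prev_pv) / self.dt
        self.prev_pv = pv
        # a setpoint step does not produce a derivative "kick" because only PV is differentiated
        return self.kp * error + self.ki * self.integral - self.kd * d_pv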
Setpoint weighting
Setpoint weighting adds adjustable factors (usually between 0 and 1) to the
setpoint in the error in the proportional and derivative element of the controller.
The error in the integral term must be the true control error to avoid steady-state
control errors. These two extra parameters do not affect the response to load
disturbances and measurement noise and can be tuned to improve the controller's
setpoint response.
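A Python sketch of setpoint weighting with a proportional weight b and a derivative weight c (the names and default weights are illustrative assumptions):

class WeightedPID:
    def __init__(self, kp, ki, kd, dt, b=0.5, c=0.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.b, self.c = b, c
        self.integral = 0.0
        self.prev_ed = 0.0

    def update(self, sp, pv):
        e_p = self.b * sp - pv   # weighted error for the proportional term
        e_i = sp - pv            # true error for the integral term (avoids steady-state offset)
        e_d = self.c * sp - pv   # weighted error for the derivative term
        self.integral += e_i * self.dt
        derivative = (e_d - self.prev_ed) / self.dt
        self.prev_ed = e_d
        return self.kp * e_p + self.ki * self.integral + self.kd * derivative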
Feed-forward
The control system performance can be improved by combining the feedback (or
closed-loop) control of a PID controller with feed-forward (or open-loop) control.
Knowledge about the system (such as the desired acceleration and inertia) can be
fed forward and combined with the PID output to improve the overall system
performance. The feed-forward value alone can often provide the major portion of
the controller output. The PID controller primarily has to compensate for whatever
difference or error remains between the setpoint (SP) and the system response to
the open-loop control. Since the feed-forward output is not affected by the process
feedback, it can never cause the control system to oscillate, thus improving the
system response without affecting stability. Feed forward can be based on the
setpoint and on extra measured disturbances. Setpoint weighting is a simple form of
feed forward.
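A Python sketch of combining a feed-forward term with feedback, here for a hypothetical motor-speed loop. The feed-forward gain kff stands for whatever model-based estimate of the required drive per unit of setpoint is available; the feedback part only corrects the residual error.

class FeedforwardPI:
    def __init__(self, kp, ki, kff, dt):
        self.kp, self.ki, self.kff, self.dt = kp, ki, kff, dt
        self.integral = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.integral += error * self.dt
        u_ff = self.kff * setpoint                         # open-loop, model-based portion
        u_fb = self.kp * error + self.ki * self.integral   # closed-loop correction
        return u_ff + u_fb                                 # feed-forward cannot cause oscillation; it ignores PV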
Bumpless operation
PID controllers are often implemented with a "bumpless" initialization feature that
recalculates the integral accumulator term to maintain a consistent process output
through parameter changes.[32] A partial implementation is to store the integral
gain times the error rather than storing the error and postmultiplying by the
integral gain, which prevents discontinuous output when the I gain is changed, but
not the P or D gains.
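A Python sketch of the partial implementation described above, in which the accumulator stores the integral gain times the error (the names are illustrative):

class BumplessIGainPID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_term = 0.0                 # accumulates Ki * e * dt directly
        self.prev_error = 0.0

    def update(self, setpoint, pv):
        error = setpoint - pv
        self.i_term += self.ki * error * self.dt   # changing Ki later does not rescale past accumulation
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.i_term + self.kd * derivative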
Other improvements
In addition to feed-forward, PID controllers are often enhanced through methods
such as PID gain scheduling (changing parameters in different operating
conditions), fuzzy logic, or computational verb logic.[33][34] Further practical
application issues can arise from instrumentation connected to the controller. A
high enough sampling rate, measurement precision, and measurement accuracy are
required to achieve adequate control performance. Another method for improving the PID controller is to increase its degrees of freedom by using fractional-order calculus: the orders of the integrator and differentiator add flexibility to the controller.[35]
Cascade control
One distinctive advantage of PID controllers is that two PID controllers can be
used together to yield better dynamic performance. This is called cascaded PID
control. Two controllers are in cascade when they are arranged so that one
regulates the set point of the other. A PID controller acts as outer loop
controller, which controls the primary physical parameter, such as fluid level or
velocity. The other controller acts as inner loop controller, which reads the
output of outer loop controller as setpoint, usually controlling a more rapid
changing parameter, flowrate or acceleration. It can be mathematically
proven[citation needed] that the working frequency of the controller is increased
and the time constant of the object is reduced by using cascaded PID controllers.
[vague].
In a temperature-controlled heated tank, for example, the proportional, integral, and differential terms of the two controllers will be very different. The outer PID controller has a long time constant – all the water in the tank needs to heat up or cool down. The inner loop responds much more
quickly. Each controller can be tuned to match the physics of the system it
controls – heat transfer and thermal mass of the whole tank or of just the heater –
giving better total response.[36][37]
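A Python sketch of such a cascade: an outer temperature controller produces the setpoint for an inner flow controller, which drives the valve. A minimal PI update is defined inline; the gains and loop rates are illustrative only, and in practice the outer loop usually executes more slowly than the inner loop.

def make_pi(kp, ki, dt):
    state = {"integral": 0.0}
    def update(sp, pv):
        error = sp - pv
        state["integral"] += error * dt
        return kp * error + ki * state["integral"]
    return update

outer = make_pi(kp=2.0, ki=0.1, dt=1.0)    # slow loop: temperature
inner = make_pi(kp=5.0, ki=1.0, dt=0.1)    # fast loop: flow

def cascade_step(temp_sp, temp_pv, flow_pv):
    flow_sp = outer(temp_sp, temp_pv)      # outer-loop output becomes the inner setpoint
    return inner(flow_sp, flow_pv)         # inner loop manipulates the valve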
Standard versus parallel (ideal) form
The form of the PID controller most often encountered in industry, and the one most relevant to tuning algorithms, is the standard form. In this form the {\displaystyle K_{p}} gain is applied to the {\displaystyle I_{\mathrm {out} }} and {\displaystyle D_{\mathrm {out} }} terms, yielding:
{\displaystyle u(t)=K_{p}\left(e(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau +T_{d}{\frac {d}{dt}}e(t)\right)}
where
{\displaystyle T_{i}} is the integral time
{\displaystyle T_{d}} is the derivative time
In this standard form, the parameters have a clear physical meaning. In particular,
the inner summation produces a new single error value which is compensated for
future and past errors. The proportional error term is the current error. The
derivative component attempts to predict the error value at {\displaystyle T_{d}} seconds (or samples) in the future, assuming that the loop control remains unchanged. The integral component adjusts the error value to compensate for the sum of all past errors, with the intention of completely eliminating them in {\displaystyle T_{i}} seconds (or samples). The resulting compensated single error value is then scaled by the single gain {\displaystyle K_{p}} to compute the control variable.
In the parallel form, shown in the controller theory section,
{\displaystyle u(t)=K_{p}e(t)+K_{i}\int _{0}^{t}e(\tau )\,d\tau +K_{d}{\frac {d}{dt}}e(t),}
the gain parameters are related to the parameters of the standard form through
{\displaystyle K_{i}=K_{p}/T_{i}} and {\displaystyle K_{d}=K_{p}T_{d}}. This parallel form, where the parameters are
treated as simple gains, is the most general and flexible form. However, it is also
the form where the parameters have the weakest relationship to physical behaviors
and is generally reserved for theoretical treatment of the PID controller. The
standard form, despite being slightly more complex mathematically, is more common
in industry.
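The conversion between the two forms follows directly from the relations above; a small Python sketch (the function names are illustrative):

# Convert between the standard form (Kp, Ti, Td) and the parallel form (Kp, Ki, Kd),
# using Ki = Kp/Ti and Kd = Kp*Td.
def standard_to_parallel(kp, ti, td):
    return kp, kp / ti, kp * td       # (Kp, Ki, Kd)

def parallel_to_standard(kp, ki, kd):
    return kp, kp / ki, kd / kp       # (Kp, Ti, Td)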
Basing the proportional and derivative action on the measured process variable PV, rather than on the error, so that only the integral action responds to setpoint changes, gives
{\displaystyle \mathrm {MV(t)} =K_{p}\left(-PV(t)+{\frac {1}{T_{i}}}\int _{0}^{t}e(\tau )\,d\tau -T_{d}{\frac {d}{dt}}PV(t)\right).}
King[38] describes an effective chart-based method.
Laplace form
Sometimes it is useful to write the PID regulator in Laplace transform form:
{\displaystyle G(s)=K_{p}+{\frac {K_{i}}{s}}+K_{d}s={\frac {K_{d}s^{2}+K_{p}s+K_{i}}{s}}}
Having the PID controller written in Laplace form and having the transfer function
of the controlled system makes it easy to determine the closed-loop transfer
function of the system.
Series/interacting form
Another representation of the PID controller is the series, or interacting form
{\displaystyle G(s)=K_{c}\left({\frac {1}{\tau _{i}s}}+1\right)\left(\tau _{d}s+1\right)}
where the parameters are related to the parameters of the standard form through
{\displaystyle K_{p}=K_{c}\cdot \alpha }, {\displaystyle T_{i}=\tau _{i}\cdot \alpha }, and {\displaystyle T_{d}={\frac {\tau _{d}}{\alpha }}}, with
{\displaystyle \alpha =1+{\frac {\tau _{d}}{\tau _{i}}}.}
This form essentially consists of a PD and PI controller in series. As the integral is required to calculate the controller's bias, this form provides the ability to track an external bias value, which is needed for proper implementation of multi-controller advanced control schemes.
Discrete implementation
The analysis for designing a digital implementation of a PID controller in a
microcontroller (MCU) or FPGA device requires the standard form of the PID
controller to be discretized.[39] Approximations for first-order derivatives are
made by backward finite differences.
u(t) and e(t) are discretized with a sampling period Δt, and k is the sample index.
Differentiating both sides of the control function with respect to time gives
{\displaystyle {\dot {u}}(t)=K_{p}{\dot {e}}(t)+K_{i}e(t)+K_{d}{\ddot {e}}(t).}
{\displaystyle {\dot {f}}(t_{k})={\dfrac {df(t_{k})}{dt}}={\dfrac {f(t_{k})-f(t_{k-1})}{\Delta t}}}
So,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\dot {e}}(t_{k})-{\dot {e}}(t_{k-1})}{\Delta t}}}
Applying backward difference again gives,
{\displaystyle {\frac {u(t_{k})-u(t_{k-1})}{\Delta t}}=K_{p}{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}+K_{i}e(t_{k})+K_{d}{\frac {{\frac {e(t_{k})-e(t_{k-1})}{\Delta t}}-{\frac {e(t_{k-1})-e(t_{k-2})}{\Delta t}}}{\Delta t}}}
By simplifying and regrouping terms of the above equation, an algorithm for an
implementation of the discretized PID controller in a MCU is finally obtained:
{\displaystyle u(t_{k})=u(t_{k-1})+\left(K_{p}+K_{i}\Delta t+{\dfrac {K_{d}}{\Delta t}}\right)e(t_{k})+\left(-K_{p}-{\dfrac {2K_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {K_{d}}{\Delta t}}e(t_{k-2})}
or:
{\displaystyle u(t_{k})=u(t_{k-1})+K_{p}\left[\left(1+{\dfrac {\Delta t}{T_{i}}}+{\dfrac {T_{d}}{\Delta t}}\right)e(t_{k})+\left(-1-{\dfrac {2T_{d}}{\Delta t}}\right)e(t_{k-1})+{\dfrac {T_{d}}{\Delta t}}e(t_{k-2})\right]}
with {\displaystyle T_{i}=K_{p}/K_{i}} and {\displaystyle T_{d}=K_{d}/K_{p}}.
Pseudocode
Here is simple, explicit pseudocode that can be easily understood:
Kp - proportional gain
Ki - integral gain
Kd - derivative gain
dt - loop interval time (assumes reasonable scale)[c]
previous_error := 0
integral := 0
loop:
error := setpoint − measured_value
proportional := error
integral := integral + error × dt
derivative := (error − previous_error) / dt
output := Kp × proportional + Ki × integral + Kd × derivative
previous_error := error
wait(dt)
goto loop
Here is a more complicated and much less explicit software loop that implements a
PID algorithm:
A0 := Kp + Ki*dt + Kd/dt
A1 := -Kp - 2*Kd/dt
A2 := Kd/dt
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
output := output + A0 * error[0] + A1 * error[1] + A2 * error[2]
wait(dt)
goto loop
Here, Kp is a dimensionless number, Ki is expressed in s^{-1}, and Kd is expressed in s. When performing a regulation where the actuator and the measured value are not in the same unit (e.g. temperature regulation using a motor controlling a valve), Kp, Ki and Kd may be corrected by a unit conversion factor. It may also be useful to use Ki in its reciprocal form (integration time). The above implementation also allows an I-only controller, which may be useful in some cases.
In the real world, this is D-to-A converted and passed into the process under
control as the manipulated variable (MV). The current error is stored elsewhere for re-use in the next differentiation; the program then waits until dt seconds have passed since the start, and the loop begins again, reading in new values for the PV and the setpoint and calculating a new value for the error.[40]
Note that for real code, the use of "wait(dt)" might be inappropriate because it
doesn't account for time taken by the algorithm itself during the loop, or more
importantly, any preemption delaying the algorithm.
A typical workaround is to filter the derivative action using a low-pass filter of time constant {\displaystyle \tau _{d}/N}, where {\displaystyle 3\leq N\leq 10}.
A variant of the above algorithm using an infinite impulse response (IIR) filter
for the derivative:
A0 := Kp + Ki*dt
A1 := -Kp
error[2] := 0 // e(t-2)
error[1] := 0 // e(t-1)
error[0] := 0 // e(t)
output := u0 // Usually the current value of the actuator
A0d := Kd/dt
A1d := -2.0*Kd/dt
A2d := Kd/dt
N := 5
tau := Kd / (Kp*N) // IIR filter time constant
alpha := dt / (2*tau)
d0 := 0
d1 := 0
fd0 := 0
fd1 := 0
loop:
error[2] := error[1]
error[1] := error[0]
error[0] := setpoint − measured_value
// PI
output := output + A0 * error[0] + A1 * error[1]
// Filtered D
d1 := d0
d0 := A0d * error[0] + A1d * error[1] + A2d * error[2]
fd1 := fd0
fd0 := ((alpha) / (alpha + 1)) * (d0 + d1) - ((alpha - 1) / (alpha + 1)) * fd1
output := output + fd0
wait(dt)
goto loop
See also
Control theory
Active disturbance rejection control
Notes
The only exception is where the target value is the same as the value obtained
when the controller output is zero.
A common assumption often made for Proportional-Integral-Derivative (PID) control
design, as done by Ziegler and Nichols, is to take the integral time constant to be
four times the derivative time constant. Although this choice is reasonable,
selecting the integral time constant to have this value may have had something to
do with the fact that, for the ideal case with a derivative term with no filter,
the PID transfer function consists of two real and equal zeros in the numerator.
[20]
Note that for very small intervals (e.g. a 60 Hz loop, dt ≈ 0.0167 seconds), the resulting derivative value will be
extremely large, and orders of magnitude larger than the proportional or integral
components. Adjusting this value for the derivative (e.g. multiplying by 1000) or
changing the division to multiplication is likely to yield the intended results.
This holds true for all pseudocode presented here.
References
Araki, M. (2009). "CONTROL SYSTEMS, ROBOTICS AND AUTOMATION – Volume VII - PID
Control" (PDF). Japan: Kyoto University.
Hills, Richard L (1996), Power From the Wind, Cambridge University Press
Richard E. Bellman (December 8, 2015). Adaptive Control Processes: A Guided Tour.
Princeton University Press. ISBN 9781400874668.
Bennett, Stuart (1996). "A brief history of automatic control" (PDF). IEEE Control
Systems Magazine. 16 (3): 17–25. doi:10.1109/37.506394. Archived from the original
(PDF) on 2016-08-09. Retrieved 2014-08-21.
Maxwell, J. C. (1868). "On Governors" (PDF). Proceedings of the Royal Society.
100.
Newpower, Anthony (2006). Iron Men and Tin Fish: The Race to Build a Better
Torpedo during World War II. Praeger Security International. ISBN 978-0-275-99032-
9. p. citing Gray, Edwyn (1991), The Devil's Device: Robert Whitehead and the
History of the Torpedo, Annapolis, MD: U.S. Naval Institute, p. 33.
Sleeman, C. W. (1880), Torpedoes and Torpedo Warfare, Portsmouth: Griffin & Co.,
pp. 137–138, which constitutes what is termed as the secret of the fish torpedo.
"A Brief Building Automation History". Archived from the original on 2011-07-08.
Retrieved 2011-04-04.
Minorsky, Nicolas (1922). "Directional stability of automatically steered bodies".
Journal of the American Society for Naval Engineers. 34 (2): 280–309.
doi:10.1111/j.1559-3584.1922.tb04958.x.
Bennett 1993, p. 67
Bennett, Stuart (June 1986). A history of control engineering, 1800-1930. IET. pp.
142–148. ISBN 978-0-86341-047-5.
Shinskey, F Greg (2004), The power of external-reset feedback (PDF), Control
Global
Neuhaus, Rudolf. "Diode Laser Locking and Linewidth Narrowing" (PDF). Retrieved
June 8, 2015.
"Position control system" (PDF). Hacettepe University Department of Electrical and
Electronics Engineering. Archived from the original (PDF) on 2014-05-13.
Kebriaei, Reza; Frischkorn, Jan; Reese, Stefanie; Husmann, Tobias; Meier, Horst;
Moll, Heiko; Theisen, Werner (2013). "Numerical modelling of powder metallurgical
coatings on ring-shaped parts integrated with ring rolling". Material Processing
Technology. 213 (1): 2015–2032. doi:10.1016/j.jmatprotec.2013.05.023.
Lipták, Béla G. (2003). Instrument Engineers' Handbook: Process control and
optimization (4th ed.). CRC Press. p. 108. ISBN 0-8493-1081-4.
"Introduction: PID Controller Design". University of Michigan.
Tim Wescott (October 2000). "PID without a PhD" (PDF). EE Times-India.
Bechhoefer, John (2005). "Feedback for Physicists: A Tutorial Essay On Control".
Reviews of Modern Physics. 77 (3): 783–835. Bibcode:2005RvMP...77..783B. CiteSeerX
10.1.1.124.7043. doi:10.1103/revmodphys.77.783.
Atherton, Drek P (December 2014). "Almost Six Decades in Control Engineering".
IEEE Control Systems Magazine. 34 (6): 103–110. doi:10.1109/MCS.2014.2359588. S2CID
20233207.
Li, Y., et al. (2004) CAutoCSD - Evolutionary search and optimisation enabled
computer automated control system design, Int J Automation and Computing, vol. 1,
No. 1, pp. 76-88. ISSN 1751-8520.
Kiam Heong Ang; Chong, G.; Yun Li (2005). "PID control system analysis, design,
and technology" (PDF). IEEE Transactions on Control Systems Technology. 13 (4):
559–576. doi:10.1109/TCST.2005.847331. S2CID 921620.
Jinghua Zhong (Spring 2006). "PID Controller Tuning: A Short Tutorial" (PDF).
Archived from the original (PDF) on 2015-04-21. Retrieved 2011-04-04.
Åström, K.J.; Hägglund, T. (July 1984). "Automatic Tuning of Simple Regulators".
IFAC Proceedings Volumes. 17 (2): 1867–1872. doi:10.1016/S1474-6670(17)61248-5.
Hornsey, Stephen (29 October 2012). "A Review of Relay Auto-tuning Methods for the
Tuning of PID-type Controllers". Reinvention. 5 (2).
Bequette, B. Wayne (2003). Process Control: Modeling, Design, and Simulation.
Upper Saddle River, New Jersey: Prentice Hall. p. 129. ISBN 978-0-13-353640-9.
Heinänen, Eero (October 2018). A Method for automatic tuning of PID controller
following Luus-Jaakola optimization (PDF) (Master's Thesis ed.). Tampere, Finland:
Tampere University of Technology. Retrieved Feb 1, 2019.
Li, Yun; Ang, Kiam Heong; Chong, Gregory C.Y. (February 2006). "Patents, software,
and hardware for PID control: An overview and analysis of the current art" (PDF).
IEEE Control Systems Magazine. 26 (1): 42–54. doi:10.1109/MCS.2006.1580153. S2CID
18461921.
Soltesz, Kristian (January 2012). On Automation of the PID Tuning Procedure
(Licentiate theis). Lund university. 847ca38e-93e8-4188-b3d5-8ec6c23f2132.
Li, Y. and Ang, K.H. and Chong, G.C.Y. (2006) PID control system analysis and
design - Problems, remedies, and future directions. IEEE Control Systems Magazine,
26 (1). pp. 32-41. ISSN 0272-1708
Cooper, Douglas. "Integral (Reset) Windup, Jacketing Logic and the Velocity PI
Form". Retrieved 2014-02-18.
Cooper, Douglas. "PI Control of the Heat Exchanger". Practical Process Control by
Control Guru. Retrieved 2014-02-27.
Yang, T. (June 2005). "Architectures of Computational Verb Controllers: Towards a
New Paradigm of Intelligent Control". International Journal of Computational
Cognition. 3 (2): 74–101. CiteSeerX 10.1.1.152.9564.
Liang, Yilong; Yang, Tao (2009). "Controlling fuel annealer using computational
verb PID controllers". Proceedings of the 3rd International Conference on Anti-
Counterfeiting, Security, and Identification in Communication. Asid'09: 417–420.
ISBN 9781424438839.
Tenreiro Machado JA, et al. (2009). "Some Applications of Fractional Calculus in
Engineering". Mathematical Problems in Engineering. 2010: 1–34.
doi:10.1155/2010/639801. hdl:10400.22/4306.
VanDoren, Vance (August 17, 2014). "Fundamentals of cascade control: Sometimes two controllers can do a better job of keeping one process variable where you want it".
"The Benefits of Cascade Control". Watlow. September 22, 2020.
King, Myke (2011). Process Control: A Practical Approach. Wiley. pp. 52–78. ISBN
978-0-470-97587-9.
"Discrete PI and PID Controller Design and Analysis for Digital Implementation".
Scribd.com. Retrieved 2011-04-04.
"PID process control, a "Cruise Control" example". CodeProject. 2009. Retrieved 4
November 2012.
Bequette, B. Wayne (2006). Process Control: Modeling, Design, and Simulation.
Prentice Hall PTR. ISBN 9789861544779.
Further reading
Liptak, Bela (1995). Instrument Engineers' Handbook: Process Control. Radnor,
Pennsylvania: Chilton Book Company. pp. 20–29. ISBN 978-0-8019-8242-2.
Tan, Kok Kiong; Wang Qing-Guo; Hang Chang Chieh (1999). Advances in PID Control.
London, UK: Springer-Verlag. ISBN 978-1-85233-138-2.
King, Myke (2010). Process Control: A Practical Approach. Chichester, UK: John
Wiley & Sons Ltd. ISBN 978-0-470-97587-9.
Van Doren, Vance J. (July 1, 2003). "Loop Tuning Fundamentals". Control
Engineering.
Sellers, David. "An Overview of Proportional plus Integral plus Derivative Control
and Suggestions for Its Successful Application and Implementation" (PDF). Archived
from the original (PDF) on March 7, 2007. Retrieved 2007-05-05.
Graham, Ron; Mike McHugh (2005-10-03). "FAQ on PID controller tuning". Mike McHugh.
Archived from the original on February 6, 2005. Retrieved 2009-01-05.
Aidan O'Dwyer (2009). Handbook of PI and PID Controller Tuning Rules (PDF) (3rd
ed.). Imperial College Press. ISBN 978-1-84816-242-6.
External links
PID tuning using Mathematica
PID tuning using Python
Principles of PID Control and Tuning
Introduction to the key terms associated with PID Temperature Control
PID tutorials
PID Control in MATLAB/Simulink and Python with TCLab
What's All This P-I-D Stuff, Anyhow? Article in Electronic Design
Shows how to build a PID controller with basic electronic components (pg. 22)
PID Without a PhD
PID Control with MATLAB and Simulink
PID with single Operational Amplifier
Proven Methods and Best Practices for PID Control
PID Tuning Guide: A Best-Practices Approach to Understanding and Tuning PID
Controllers
Michael Barr (2002-07-30), Introduction to Closed-Loop Control, Embedded Systems
Programming, archived from the original on 2010-02-09
Jinghua Zhong, Mechanical Engineering, Purdue University (Spring 2006). "PID
Controller Tuning: A Short Tutorial" (PDF). Archived from the original (PDF) on
2015-04-21. Retrieved 2013-12-04.