International Journal of Engineering Research & Technology (IJERT)
ISSN: 2278-0181
Vol. 1 Issue 3, May - 2012
Adaptive Filter Analysis for System
Identification Using Various Adaptive
Algorithms
Ms. Kinjal Rasadia, Dr. Kiran Parmar
Abstract— This paper presents an analysis of various adaptive algorithms, such as LMS, NLMS, Leaky LMS, Sign-Sign, Sign-Error, and RLS, for system identification. The problem of obtaining a model of a system from input and output measurements is called the system identification problem. Using an adaptive filter, we can find a mathematical model of an unknown system based on input and output measurements, and analyze the effect of algorithm parameters such as the filter order, step size, leakage factor, normalized step size, and forgetting factor. It has been found that RLS converges faster than the other algorithms, but for practical implementation LMS is preferable: its complexity is lower than that of RLS because it requires fewer floating-point operations. As the filter order increases, the magnitude response of the adaptive filter approaches the response of the unknown system and the mean square error is also reduced.
Index Terms— Convergence speed, Least mean square (LMS), Mean square error, Normalized LMS, System identification
1. INTRODUCTION
Digital signal processing systems are attractive due to their low cost, reliability, accuracy, small physical size, and flexibility. The coefficients of an adaptive filter are continuously and automatically adapted to the given signal in order to obtain the desired response and improve performance.

Figure 1. Adaptive filter configuration

Figure 1 shows the basic adaptive filter configuration, where x(k) is the input signal, y(k) is the filter output, d(k) is the desired signal, and e(k) is the error signal. The main objective of the adaptive filter is to minimize the error signal. Here, the FIR filter structure and different algorithmic methods are used to represent the complete adaptive filter specification. Three main specifications are required for designing an adaptive filter: the algorithm, the filter structure, and the application. A number of structures exist, but the FIR filter structure is widely used because of its stability. Adaptive filters have been successfully applied in such diverse fields as communications, radar, sonar, seismology, and biomedical engineering. Although these applications are quite different in nature, they share one basic common feature: an input vector and a desired response are used to compute an estimation error, which is in turn used to control the values of a set of adjustable filter coefficients. The essential difference between the various applications of adaptive filtering, however, arises in the manner in which the desired response is extracted.

2. ALGORITHMS
The algorithm is the procedure used to adjust the adaptive filter coefficients in order to minimize a prescribed criterion, i.e. the error signal. Most reported developments and applications use the FIR filter with the LMS algorithm, because it is relatively simple to design and implement. Many adaptive algorithms can be viewed as approximations of the Wiener filter. As shown in Figure 1, the adaptive algorithm uses the error signal

e(k) = d(k) − y(k)    (1)

to update the filter coefficients in order to minimize a predetermined criterion. The most widely used criterion is the mean square error (MSE), defined as

ξ = E[e²(k)]    (2)

The most widely used algorithm is LMS (Least Mean Square), because it is relatively simple to design and implement. A set of LMS-type algorithms is obtained by modifying the LMS algorithm [5]; the motivation for each is a practical consideration such as faster convergence, simplicity of implementation, or robustness of operation. The mean square error behavior, convergence, and steady-state analysis of different adaptive algorithms are analyzed in [2]-[4]. The LMS algorithm requires only 2L multiplications and additions and is the most efficient adaptive algorithm in terms of computation and storage requirements. Its complexity is much lower than that of other adaptive algorithms such as the Kalman and recursive least squares algorithms.
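To make the configuration of Figure 1 and equations (1)-(2) concrete, the following minimal Python/NumPy sketch computes the filter output y(k) = wᵀx(k), the error of equation (1), and a sample estimate of the MSE criterion of equation (2) for a fixed weight vector; the function name and its arguments are illustrative and not taken from the paper. The adaptation of w itself is the subject of the algorithms below.

```python
import numpy as np

def filter_and_error(w, x, d):
    """FIR output y(k) = w^T x(k), error e(k) = d(k) - y(k) (eq. 1),
    and a sample estimate of the MSE criterion xi = E[e^2(k)] (eq. 2)."""
    L, N = len(w), len(x)
    y, e = np.zeros(N), np.zeros(N)
    for k in range(N):
        # tap-input vector x(k) = [x(k), x(k-1), ..., x(k-L+1)], zero-padded at start-up
        xk = np.zeros(L)
        n = min(k + 1, L)
        xk[:n] = x[k::-1][:n]
        y[k] = w @ xk
        e[k] = d[k] - y[k]
    return y, e, np.mean(e ** 2)
```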
2.1 LMS Algorithm
The LMS algorithm is a method of estimating the gradient vector from instantaneous values. It changes the filter tap weights so that e(k) is minimized in the mean-square sense. The conventional LMS algorithm is a stochastic implementation of the steepest descent algorithm:

e(k) = d(k) − w(k)ᵀ x(k)    (3)

The coefficient updating equation is

w(k+1) = w(k) + μ x(k) e(k),    (4)

where μ is an appropriate step size, to be chosen as 0 < μ < 0.2 for convergence of the algorithm. Larger step sizes make the coefficients fluctuate wildly and eventually become unstable [6].
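A minimal Python/NumPy sketch of the LMS recursion (3)-(4) is given below. The step size μ = 0.01 and filter length L = 50 follow the simulations of Section 4, while the example "unknown system" h and the random seed are purely illustrative placeholders.

```python
import numpy as np

def lms(x, d, L=50, mu=0.01):
    """Conventional LMS: w(k+1) = w(k) + mu * x(k) * e(k)  (equations (3)-(4))."""
    N = len(x)
    w = np.zeros(L)                     # adaptive filter weights w(k)
    e = np.zeros(N)                     # error signal e(k)
    for k in range(L, N):
        xk = x[k - L + 1:k + 1][::-1]   # tap-input vector x(k)
        e[k] = d[k] - np.dot(w, xk)     # e(k) = d(k) - w^T(k) x(k), equation (3)
        w = w + mu * xk * e[k]          # coefficient update, equation (4)
    return w, e

# Illustrative usage: identify an arbitrary FIR "unknown system" h (not from the paper)
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 1000)            # white input, uniform on [-1, 1]
h = np.array([0.5, -0.3, 0.2])          # placeholder unknown system
d = np.convolve(x, h)[:len(x)]          # desired signal = unknown-system output
w, e = lms(x, d)
```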
The most important members of the family of simplified LMS algorithms are the following.

2.2 Normalized LMS (NLMS) Algorithm
The normalized LMS algorithm is expressed as

w(k+1) = w(k) + 2 μ(k) e(k) x(k),    (5)

μ(k) = α / [(m+1) P_x(k)],    (6)

where μ(k) is a time-varying step size normalized by L = (m+1) and by the power P_x(k) of the signal x(k), and 0 < α < 1 [3]-[4].

2.3 Leaky LMS Algorithm
Insufficient spectral excitation of the LMS algorithm may result in divergence of the adaptive coefficients. Divergence can be avoided by using a leakage mechanism during the coefficient adaptation process. The leaky LMS algorithm is expressed as

w(k+1) = ν w(k) + μ e(k) x(k),    (7)

where ν is the leakage factor, with 0 ≪ ν < 1.
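The two variants above differ from LMS only in how the step applied to the weights is scaled. The sketch below follows equations (5)-(7); the values of α, ν, and the small constant eps added to avoid division by zero are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def nlms(x, d, L=50, alpha=0.5, eps=1e-8):
    """Normalized LMS, equations (5)-(6): mu(k) = alpha / ((m+1) * P_x(k)),
    with L playing the role of m+1."""
    N = len(x)
    w, e = np.zeros(L), np.zeros(N)
    for k in range(L, N):
        xk = x[k - L + 1:k + 1][::-1]
        px = np.mean(xk ** 2)                  # instantaneous power estimate P_x(k)
        mu_k = alpha / (L * px + eps)          # time-varying, normalized step size
        e[k] = d[k] - np.dot(w, xk)
        w = w + 2 * mu_k * e[k] * xk           # equation (5)
    return w, e

def leaky_lms(x, d, L=50, mu=0.01, nu=0.999):
    """Leaky LMS, equation (7): w(k+1) = nu * w(k) + mu * e(k) * x(k)."""
    N = len(x)
    w, e = np.zeros(L), np.zeros(N)
    for k in range(L, N):
        xk = x[k - L + 1:k + 1][::-1]
        e[k] = d[k] - np.dot(w, xk)
        w = nu * w + mu * e[k] * xk            # leakage pulls the weights gently toward zero
    return w, e
```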
2.4 Signed LMS Algorithm
This algorithm is obtained from the conventional LMS recursion by replacing e(k) with its sign. This leads to the following recursion:

w(k+1) = w(k) + μ x(k) sgn{e(k)}    (8)

2.5 Signed-Regressor Algorithm (SRLMS)
The signed-regressor algorithm is obtained from the conventional LMS recursion by replacing the tap-input vector x(k) with the vector sgn{x(k)}. Consider a signed-regressor LMS based adaptive filter that processes an input signal x(k) and generates the output y(k) according to

w(k+1) = w(k) + μ sgn{x(k)} e(k)    (9)

2.6 Sign-Sign Algorithm (SSLMS)
This algorithm is obtained by combining the signed-regressor and sign recursions, resulting in the following recursion:

w(n+1) = w(n) + μ sgn{x(n)} sgn{e(n)}    (10)
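The three sign recursions (8)-(10) can be collected in a single routine, sketched below; the variant names are chosen here for readability, and the step size is an illustrative default.

```python
import numpy as np

def sign_lms(x, d, L=50, mu=0.01, variant="sign-sign"):
    """Sign-type LMS recursions of equations (8)-(10).

    variant: "sign-error"     -> w += mu * x(k) * sgn(e(k))        (8)
             "sign-regressor" -> w += mu * sgn(x(k)) * e(k)        (9)
             "sign-sign"      -> w += mu * sgn(x(k)) * sgn(e(k))   (10)
    """
    N = len(x)
    w, e = np.zeros(L), np.zeros(N)
    for k in range(L, N):
        xk = x[k - L + 1:k + 1][::-1]
        e[k] = d[k] - np.dot(w, xk)
        if variant == "sign-error":
            w = w + mu * xk * np.sign(e[k])
        elif variant == "sign-regressor":
            w = w + mu * np.sign(xk) * e[k]
        else:  # sign-sign: the update needs no data multiplications in fixed-point hardware
            w = w + mu * np.sign(xk) * np.sign(e[k])
    return w, e
```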
2.7 Recursive Least Squares (RLS)
The RLS method typically converges much faster than the LMS method, but at the cost of more computational effort per iteration. Derivations of these results can be found in the reference books [7]-[9]. Unlike the LMS method, which asymptotically approaches the optimal weight vector using a gradient-based search, the RLS method attempts to find the optimal weights at each iteration. The expression for the RLS method is

w(k) = …    (11)

The design parameters associated with the RLS method are the forgetting factor 0 < λ ≤ 1, the regularization parameter δ > 0, and the transversal filter order m ≥ 0. The required filter order depends on the application.
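Equation (11) did not survive extraction, so the sketch below uses the standard exponentially weighted RLS recursion with forgetting factor λ and regularization δ, consistent with the design parameters listed above; it is a common textbook form, not necessarily the exact expression used by the authors.

```python
import numpy as np

def rls(x, d, L=50, lam=0.99, delta=0.01):
    """Standard exponentially weighted RLS (forgetting factor lam, regularization delta).
    This is a textbook form standing in for the paper's equation (11)."""
    N = len(x)
    w, e = np.zeros(L), np.zeros(N)
    P = np.eye(L) / delta                # inverse correlation matrix, P(0) = I / delta
    for k in range(L, N):
        xk = x[k - L + 1:k + 1][::-1]
        pi = P @ xk
        g = pi / (lam + xk @ pi)         # gain vector
        e[k] = d[k] - w @ xk             # a priori error
        w = w + g * e[k]                 # weight update toward the least-squares solution
        P = (P - np.outer(g, pi)) / lam  # update of the inverse correlation matrix
    return w, e
```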
3. ADAPTIVE FILTER APPLICATION: SYSTEM IDENTIFICATION
Mathematical models of physical phenomena allow analysis and design techniques to be applied effectively to practical problems. In many instances, a mathematical model can be developed from the underlying physical principles and from an understanding of the components of the system and how they are interconnected. In some cases, however, this approach is less effective, because the physical system or phenomenon is too complex or is not well understood. In these cases, we have to build the mathematical model from measurements of the input and output. Typically, we assume that the unknown system can be modelled as a linear time-invariant system. The problem of obtaining a model of a system from input and output measurements is called the system identification problem [9].

Adaptive filters are highly effective for performing system identification using the configuration shown in Figure 2.

Figure 2. System identification

To illustrate the entire algorithm, consider the system identification problem shown in Figure 2. Let the system to be identified have the transfer function

H(z) =

Here the input x(k) consists of N = 1000 samples of white noise uniformly distributed over [-1, 1]. The effectiveness of the adaptive filter can be assessed by comparing the magnitude response of the system, H(z), with the magnitude response of the adaptive filter, W(z), obtained from the final steady-state weights, w(N-1). Note that this holds in spite of the fact that H(z) is an IIR filter with six poles and six zeros, while the steady-state adaptive filter is an FIR filter with different specifications [10].
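The experiment can be reproduced in outline with the sketch below. Because the coefficients of the paper's H(z) did not survive extraction, an arbitrary stable sixth-order IIR filter is used as a stand-in for the unknown system; the input, filter order, number of runs, and averaging follow the description in Sections 3 and 4, and `update` can be any of the routines sketched earlier (e.g. `lms`).

```python
import numpy as np
from scipy.signal import butter, lfilter

def identify(update, runs=500, N=1000, L=50):
    """Average the squared error e^2(k) of an adaptive algorithm over independent runs.
    `update(x, d, L=...)` is any of the update routines sketched above."""
    # Stand-in unknown system: an arbitrary 6th-order IIR filter (the paper's H(z) is not given here)
    b, a = butter(6, 0.25)
    mse = np.zeros(N)
    for _ in range(runs):
        x = np.random.uniform(-1, 1, N)   # white input, uniform on [-1, 1]
        d = lfilter(b, a, x)              # desired signal = output of the unknown system
        _, e = update(x, d, L=L)
        mse += e ** 2
    return mse / runs                     # averaged learning curve, cf. Figure 3

# learning_curve = identify(lms)          # e.g. with the LMS routine of Section 2.1
```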
4. SIMULATION RESULTS
This section presents the results of MATLAB simulations investigating the performance of the various adaptive algorithms. The principal means of comparison is the steady-state error of the algorithms, which depends on parameters such as the step size, the filter length, and the number of iterations, and which determines how well the unknown system is identified. Here the system is identified using the different adaptive algorithms LMS, NLMS, Leaky LMS, sign-data LMS, sign-error LMS, sign-sign LMS, and RLS. All simulation plots are averaged over 500 independent runs, with filter order m = 50.

Figure 3. Plots of MSE using (a) the LMS method, (b) the NLMS method, and (c) the Leaky LMS method, with μ = 0.01 (continued below).

From the simulation results shown in Figure 3 we see that NLMS converges faster than LMS, and that Leaky LMS behaves much like LMS but has a higher excess MSE. The update equation of the sign-sign LMS algorithm requires no multiplications. The sign-sign LMS and sign-error LMS methods are not well suited to DSP filter applications: these simplified LMS variants are designed for VLSI or ASIC implementations to save multiplications, and are used, for example, in adaptive differential pulse code modulation for speech compression. However, when such an algorithm is implemented on a DSP processor with a pipelined architecture and parallel hardware multipliers, the throughput is lower than that of the standard LMS algorithm, because the determination of signs can break the instruction pipeline and therefore severely reduce the execution speed.
Figure 3 (continued). Plots of MSE using (d) the sign data LMS method, (e) the sign error LMS method, (f) the sign sign LMS method, and (g) the RLS method, with μ = 0.01.

Figure 4 shows the plots of convergence speed for the different adaptive algorithms. We can see that the RLS method converges faster than the other methods, while the sign-sign and sign-error methods take many more samples to converge to the minimum MSE. The results in [1] show that the performance of the signed-data LMS algorithm is superior to that of the conventional LMS algorithm, and that the performance of the signed LMS and sign-sign LMS based realizations is comparable to that of LMS-based filtering techniques in terms of signal-to-noise ratio and computational complexity.

Figure 4. Plots of convergence speed using (a) the LMS method and (b) the NLMS method, with μ = 0.01 (continued below).
Figure 4 (continued). Plots of convergence speed using (c) the Leaky LMS method, (d) the sign data LMS method, (e) the sign error LMS method, (f) the sign sign LMS method, and (g) the RLS method, with μ = 0.01.

Table 1. MSE and convergence (C) of the different algorithms for two step sizes.

Method          | MSE (μ = 0.01) | C (μ = 0.01) | MSE (μ = 0.004) | C (μ = 0.004)
LMS             | 0.0870         | 450          | 0.1967          | 900
NLMS            | 0.0170         | 400          | 0.0170          | 400
Leaky LMS       | 0.0896         | 600          | 0.2076          | 1000
Sign data LMS   | 0.0630         | 400          | 0.1257          | 700
Sign error LMS  | 0.6216         | 2300         | 0.8309          | 4000
Sign sign LMS   | 0.4732         | 1500         | 0.7469          | 3000
RLS             | 1.4443e-004    | 30           | 1.4443e-004     | 30

Table 1 shows the relation between the MSE and the convergence speed for the different algorithms using two different values of μ. It shows that for the smaller value of μ the MSE is higher and the convergence time is also longer.

M = μ (L+1) P(x)    (12)

where M is the misadjustment factor, P(x) is the power of the input signal, and L is the filter length.
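As a quick check of equation (12): for the zero-mean uniform input on [-1, 1] used in Section 3, the input power is P(x) = 1/3, and with L = 50 the predicted misadjustment for the two step sizes used above matches the values reported in Table 2 below. The value P(x) = 1/3 is derived from the stated input distribution, not quoted from the paper.

```python
# Misadjustment M = mu * (L + 1) * P_x, equation (12)
L = 50
P_x = (1 - (-1)) ** 2 / 12                    # power of zero-mean uniform noise on [-1, 1]: 1/3
for mu in (0.01, 0.004):
    print(mu, round(mu * (L + 1) * P_x, 3))   # -> 0.17 and 0.068, as in Table 2
```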
Table 2. Misadjustment M for two step sizes.

μ     | M
0.01  | 0.17
0.004 | 0.068

Table 2 shows the relation between the excess MSE and the step size, where M denotes the misadjustment factor. The larger the step size, the larger M is. This means that even after the algorithm has converged to the minimum MSE, an excess MSE remains, caused by the noisy gradient estimate, so the error is not zero at the minimum MSE. There is therefore always a tradeoff between convergence speed and steady-state accuracy.

5. CONCLUSION
We have studied and analyzed different adaptive algorithms for system identification. The LMS algorithm is useful for practical implementation. The RLS method is faster than the LMS methods but requires a larger number of floating-point operations: LMS requires about m (= 50) flops, whereas RLS requires about 3m² (= 7500) flops. The normalized LMS, leaky LMS, sign-data, sign-error, and sign-sign LMS methods are modified versions of the LMS method, used according to the requirements of the application. The sign-error and sign-sign LMS methods have a larger MSE and take much longer to converge. There is always a tradeoff between convergence speed and steady-state accuracy.

References
[1] Mohammad Zia Ur Rahman, Rafi Ahamed Shaik and D. V. Rama Koti Reddy, "Noise Cancellation in ECG Signals using Computationally Simplified Adaptive Filtering Techniques: Application to Biotelemetry", Signal Processing: An International Journal, 3(5), November 2009.
[2] Allan Kardec Barros and Noboru Ohnishi, "MSE Behavior of Biomedical Event-Related Filters", IEEE Transactions on Biomedical Engineering, 44(9), September 1997.
[3] Ahmed I. Sulyman and Azzedine Zerguine, "Convergence and Steady-State Analysis of a Variable Step-Size Normalized LMS Algorithm", IEEE, 2003.
[4] S. C. Chan, Z. G. Zhang, Y. Zhou and Y. Hu, "A New Noise-Constrained Normalized Least Mean Squares Adaptive Filtering Algorithm", IEEE, 2008.
[5] Syed Zahurul Islam, Syed Zahidul Islam, Razali Jidin and Mohd. Alauddin Mohd. Ali, "Performance Study of Adaptive Filtering Algorithms for Noise Cancellation of ECG Signal", IEEE, 2009.
[6] Syed Zahurul Islam, Syed Zahidul Islam, Razali Jidin and Mohd. Alauddin Mohd. Ali, "Performance Study of Adaptive Filtering Algorithms for Noise Cancellation of ECG Signal", IEEE, 2009.
[7] Moonen and Proudler, "An Introduction to Adaptive Signal Processing", McGraw Hill, second edition, 2000.
[8] S. Haykin, Adaptive Filter Theory, 4th edition, Prentice Hall, 2002.
[9] Sen M. Kuo and Woon-Seng Gan, "Digital Signal Processors", 2005.
[10] Robert J. Schilling and Sandra L. Harris, "Fundamentals of Digital Signal Processing", 2009.

Ms. Kinjal N. Rasadia is currently pursuing an M.E. at L. D. College of Engineering & Technology, Ahmedabad. E-mail: [email protected]
Co-author Dr. Kiran Parmar is currently working as an Associate Professor & Head, EC Engineering Department, L. D. College, Ahmedabad.