
Mathematical and Computer Modelling 53 (2011) 1074–1083


Identification for the second-order systems based on the step response✩


Lei Chen a, Junhong Li b,c, Ruifeng Ding a,c,∗

a Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, 214122 Wuxi, PR China
b School of Electrical Engineering, Nantong University, 226019 Nantong, PR China
c School of IoT Engineering, Jiangnan University, 214122 Wuxi, PR China

Article info

Article history: Received 2 September 2010; received in revised form 18 November 2010; accepted 18 November 2010.

Keywords: Step responses; System modelling; Parameter estimation; Numerical simulation

Abstract

Based on the step response data, an identification method is presented to estimate the parameters of second-order inertial systems. Using special data points, the transcendental equations are changed into algebraic equations which are easy to solve for computing the parameters of the transfer function models. The numerical examples indicate that the proposed approaches can estimate the parameters of the systems.

© 2010 Elsevier Ltd. All rights reserved.

1. Introduction

A system may be described by the mapping (i.e., the transfer function) from its input to its output. For a linear system, the transfer function is defined as the ratio of the Laplace transform of the output to that of the input, and it does not depend on the particular input signal. The so-called system identification or parameter estimation is to identify/estimate the parameters of systems or transfer function models from available input–output data.
Parameter estimation has important applications in system modelling, signal processing, filtering, and adaptive control [1–8]. Systems are divided into continuous-time systems and discrete-time systems. Many identification methods address parameter estimation problems for discrete-time systems, e.g., the least squares algorithms [9,10], the stochastic gradient algorithms [11–13], the multi-innovation algorithms [14–25], the auxiliary model based identification algorithms [26–31], the hierarchical identification algorithms [32–37], the iterative algorithms [38–41], which can also be used to find iterative solutions of matrix equations [42–49], and the identification algorithms for non-stationary or nonlinear systems [50–54].
If the input of a continuous-time system is taken as a step signal, the corresponding output is called the step response.
Identification of continuous-time linear systems may be defined as estimating the parameters of the transfer function
models obtained by means of the Laplace transform. In the areas of continuous-time system identification, Wang, Guo and
Zhang presented a direct identification approach of continuous-time delay systems from the step response [55]; Bi et al.
studied robust identification of a first-order plus dead-time model from the step response [56]; and Ahmed, Huang and Shah considered identification from step responses with transient initial conditions [57]. This paper studies identification methods for the parameters of transfer functions from step response data; such methods are referred to as classical identification methods.

✩ This work was supported in part by the National Natural Science Foundation of China.
∗ Corresponding author at: Key Laboratory of Advanced Process Control for Light Industry (Ministry of Education), Jiangnan University, 214122 Wuxi,
PR China.
E-mail addresses: [email protected] (L. Chen), [email protected] (J. Li), [email protected], [email protected] (R. Ding).

0895-7177/$ – see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.mcm.2010.11.070

This paper is organized as follows. Section 2 introduces some preliminary facts related to the Laplace transform. Section 3
derives the transfer function models from linear differential equations with constant coefficients. Sections 4 and 5 study the
parameter identification methods for second-order systems. Section 6 provides several examples to illustrate the proposed
methods. Section 7 concludes the paper.

2. Preliminary facts

The basic facts of this section can be found in many textbooks and their proofs are omitted.
The Laplace transform is used for solving differential and integral equations. In physics and engineering, it is used for
the analysis of linear time-invariant systems such as electrical circuits and mechanical systems. In this analysis, the Laplace
transform is often interpreted as a transformation from the time-domain, in which inputs and outputs are functions of time,
to the frequency-domain. The following briefly introduces the definition and properties of the Laplace transform.
For a real function f (t ) of time t over the interval [0, +∞), its Laplace transform is defined by
F(s) := \int_0^{+\infty} f(t) e^{-st} \, dt,
which is simply denoted by
F (s) = L [f (t )],
where s is the Laplace operator (a complex variable with a sufficiently large real part), L is the symbol of the Laplace transform, and f(t) is the original function. A necessary condition for the existence of the integral is that f(t) be integrable on [0, +\infty).
The inverse Laplace transform is given by the following complex integral,

f(t) = \frac{1}{2\pi j} \int_{c - j\infty}^{c + j\infty} F(s) e^{st} \, ds,

which is denoted by

f(t) = L^{-1}[F(s)],

where L^{-1} is the symbol of the inverse Laplace transform and c is a sufficiently large positive real number.
In the analysis of control systems, one often uses several typical input signals. For example, the step function f (t ) with
amplitude U is defined by
U, t ⩾ 0,

f (t ) = (1)
0, t < 0.
When U = 1, f (t ) is called the unit step function, denoted by 1(t ).
Several typical functions’ Laplace transforms are as follows:
L[1(t)] = \int_0^{+\infty} 1(t) e^{-st} \, dt = \int_0^{+\infty} e^{-st} \, dt = \frac{1}{s},

L[e^{-at}] = \int_0^{+\infty} e^{-at} e^{-st} \, dt = \frac{1}{s + a}.
The Laplace transform has a number of properties that make it useful for analyzing linear dynamical systems. The most significant advantage is that differentiation and integration become multiplication and division by s, respectively (similarly to how logarithms change multiplication of numbers into addition of their logarithms). The transform turns integral and differential equations into polynomial equations, which are much easier to solve. Once solved, the inverse Laplace transform returns the solution to the time domain.
Given the functions f1 (t ) and f2 (t ), and their respective Laplace transforms F1 (s) = L [f1 (t )] and F2 (s) = L [f2 (t )], or
f1 (t ) = L −1 [F1 (s)] and f2 (t ) = L −1 [F2 (s)]. Let c1 and c2 be constants. Then we have the linear property:
L [c1 f1 (t ) + c2 f2 (t )] = c1 L [f1 (t )] + c2 L [f2 (t )] = c1 F1 (s) + c2 F2 (s).
Assuming that F(s) = L[f(t)], we have the differential property of the Laplace transform:

L\!\left[\frac{df(t)}{dt}\right] = sF(s) - f(0).    (2)
The final value theorem of the Laplace transform indicates that
f(\infty) = \lim_{t \to +\infty} f(t) = \lim_{s \to 0} sF(s).
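These transform pairs and properties are easy to verify symbolically. The following short Python/SymPy sketch (not part of the original paper; SymPy and the chosen test signals are my own assumptions) checks the two pairs above, the differentiation property (2) for f(t) = e^{-at}, and the final value theorem for f(t) = 1 - e^{-at}:

import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)

# Transform pairs quoted above: L[1(t)] = 1/s and L[e^{-a t}] = 1/(s + a)
print(sp.laplace_transform(sp.Heaviside(t), t, s, noconds=True))   # 1/s
print(sp.laplace_transform(sp.exp(-a*t), t, s, noconds=True))      # 1/(a + s)

# Differentiation property (2) for f(t) = e^{-a t}: L[f'] = s F(s) - f(0)
F = 1/(s + a)
print(sp.simplify(sp.laplace_transform(sp.diff(sp.exp(-a*t), t), t, s,
                                       noconds=True) - (s*F - 1)))  # 0

# Final value theorem for f(t) = 1 - e^{-a t}: lim_{t->oo} f = lim_{s->0} s F(s)
F2 = sp.laplace_transform(1 - sp.exp(-a*t), t, s, noconds=True)
print(sp.limit(s*F2, s, 0))                                         # 1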

3. The transfer function models

A linear dynamical system can be described by a time-invariant linear differential equation:


y^{(n)}(t) + a_1 y^{(n-1)}(t) + \cdots + a_{n-1} y^{(1)}(t) + a_n y(t) = b_1 u^{(n-1)}(t) + b_2 u^{(n-2)}(t) + \cdots + b_{n-1} u^{(1)}(t) + b_n u(t),    (3)

where u(t) and y(t) denote the input and output of the system, t represents the time variable,

y^{(i)}(t) = \frac{d^i y(t)}{dt^i}, \quad y^{(2)}(t) = y''(t), \quad y^{(1)}(t) = \dot y(t) = y'(t), \quad y^{(0)}(t) = y(t),

u^{(i)}(t) = \frac{d^i u(t)}{dt^i}, \quad u^{(2)}(t) = u''(t), \quad u^{(1)}(t) = \dot u(t) = u'(t), \quad u^{(0)}(t) = u(t),

and a_i and b_i are the parameters of this system.
Under zero initial conditions, taking the Laplace transform of both sides of (3) and using the differential property in (2) gives

(s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n) Y(s) = (b_1 s^{n-1} + b_2 s^{n-2} + \cdots + b_{n-1} s + b_n) U(s),


where Y (s) and U (s) are the Laplace transforms of y(t ) and u(t ):

Y (s) := L [y(t )], U (s) := L [u(t )].

Hence, we have the transfer function of the system:

G(s) := \frac{Y(s)}{U(s)} = \frac{b_1 s^{n-1} + b_2 s^{n-2} + \cdots + b_{n-1} s + b_n}{s^n + a_1 s^{n-1} + \cdots + a_{n-1} s + a_n},    (4)

which can be equivalently rewritten as

G(s) = \frac{K(T_{n+1} s + 1)(T_{n+2} s + 1) \cdots (T_{2n-1} s + 1)}{(T_1 s + 1)(T_2 s + 1) \cdots (T_n s + 1)}.

The gain is K := b_n / a_n, and the time constants satisfy T_i \geqslant 0 for stable systems.
Assume that u(t) is a step function with amplitude U, i.e., u(t) = U for t \geqslant 0 and u(t) = 0 for t < 0. Its Laplace transform is

U(s) = L[u(t)] = \frac{U}{s}.

The Laplace transform of the output y(t ) is

Y (s) = G(s)U (s).

According to the final value theorem, we have

y(\infty) = \lim_{s \to 0} s Y(s) = \lim_{s \to 0} s G(s) U(s) = KU,    (5)

or

K = \frac{y(\infty)}{U}.

This implies that the gain K equals the ratio of the steady-state output y(\infty) to the amplitude U of the input of the system. Without loss of generality, let U = 1.
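As a quick illustration of (5) (a sketch only; the concrete plant is borrowed from Example 1 in Section 6), the gain can be read off from the steady-state value of the step response:

import sympy as sp

s = sp.symbols('s')
K, U = 3, 4                              # true gain and step amplitude
G = K*(10*s + 1)/(6*s + 1)**2            # plant of Example 1 (Section 6)
Y = G*U/s                                # Y(s) = G(s) U(s) for a step of amplitude U
y_inf = sp.limit(s*Y, s, 0)              # final value theorem, Eq. (5)
print(y_inf, y_inf/U)                    # 12 and 3, i.e., y(inf) = K U and K = y(inf)/U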

4. The second-order systems with the same poles

Consider a second-order system with the same poles plus a zero, whose transfer function is given by

G(s) = \frac{K(T_2 s + 1)}{(T_1 s + 1)^2}, \qquad T_1 \ne T_2.    (6)
Here, the time constants T1 and T2 and the system gain K are the parameters to be identified from the input–output data
{u(t ), y(t )}.
When the input is taken as a unit step function, i.e., u(t ) = 1 for t ⩾ 0, the Laplace transform of the system output is
Y(s) = G(s)U(s) = \frac{K(T_2 s + 1)}{(T_1 s + 1)^2} \cdot \frac{1}{s}
     = K\left[\frac{1}{s} - \frac{T_1}{T_1 s + 1} - \frac{T_1 - T_2}{T_1^2} \cdot \frac{T_1^2}{(T_1 s + 1)^2}\right].    (7)

Fig. 1. The step response curve.

Its inverse Laplace transform (i.e., the system response) is given by

y(t) = L^{-1}[Y(s)] = K\left[1 - e^{-t/T_1} - \frac{T_1 - T_2}{T_1^2}\, t e^{-t/T_1}\right]
     = K\left[1 - e^{-t/T_1} + \beta t e^{-t/T_1}\right],    (8)

where

\beta := -\frac{T_1 - T_2}{T_1^2}.    (9)
From (8), we can see that K = y(∞).
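The partial-fraction step from (7) to (8) can be checked symbolically; the following SymPy sketch (an illustrative check, not from the paper) confirms that the Laplace transform of (8) reproduces Y(s):

import sympy as sp

t, s = sp.symbols('t s', positive=True)
K, T1, T2 = sp.symbols('K T1 T2', positive=True)

beta = -(T1 - T2)/T1**2                                  # Eq. (9)
y = K*(1 - sp.exp(-t/T1) + beta*t*sp.exp(-t/T1))         # Eq. (8)
Y = K*(T2*s + 1)/((T1*s + 1)**2 * s)                     # Y(s) of Eq. (7)

# The transform of (8) should coincide with Y(s); the difference simplifies to 0.
print(sp.simplify(sp.laplace_transform(y, t, s, noconds=True) - Y))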
We can take two points (t1 , y(t1 )) and (t2 , y(t2 )) on the step response curve in Fig. 1 and substitute them into (8) and
then get two equations

y(t_1) = K[1 - e^{-t_1/T_1} + \beta t_1 e^{-t_1/T_1}],
y(t_2) = K[1 - e^{-t_2/T_1} + \beta t_2 e^{-t_2/T_1}].

These are transcendental equations, so the solution for T1 and T2 cannot be obtained by purely algebraic means. In order to avoid the transcendental equations, we take the two special points (t1, y(t1)) and (2t1, y(2t1)) on the step response curve, substitute them into (8), and obtain the two equations:

y(t_1) = K[1 - e^{-t_1/T_1} + \beta t_1 e^{-t_1/T_1}],
y(2t_1) = K[1 - e^{-2t_1/T_1} + 2\beta t_1 e^{-2t_1/T_1}],

or

y(t_1) = K(1 - \alpha + \beta\alpha t_1),
y(2t_1) = K(1 - \alpha^2 + 2\beta\alpha^2 t_1),    (10)

where

\alpha := e^{-t_1/T_1}.    (11)

Let
k_1 := \frac{y(t_1)}{K} - 1, \qquad k_2 := \frac{y(2t_1)}{K} - 1.    (12)

Eq. (10) gives

-\alpha + \beta\alpha t_1 = k_1,    (13)
-\alpha^2 + 2\beta\alpha^2 t_1 = k_2.    (14)

Squaring both sides of (13) and adding (14) gives (\beta\alpha t_1)^2 = k_1^2 + k_2, i.e.,

\beta\alpha t_1 = \pm\sqrt{k_1^2 + k_2}.

Since \alpha > 0, we take the positive root; substituting it into (13) then gives

\alpha = -k_1 + \sqrt{k_1^2 + k_2}.    (15)

Substituting (15) into (13) gives

\beta = \frac{\sqrt{k_1^2 + k_2}}{\alpha t_1}.    (16)
Substituting (15) and (16) into (11) and (9) gives the estimates

\hat T_1 = -\frac{t_1}{\ln \alpha}, \qquad \hat T_2 = \beta \hat T_1^2 + \hat T_1.

To enhance the estimation accuracy, we may take the pairs (t_2, y(t_2)) and (2t_2, y(2t_2)); (t_3, y(t_3)) and (2t_3, y(2t_3)); ...; (t_N, y(t_N)) and (2t_N, y(2t_N)), and repeat the above procedure to obtain a series of estimates \hat T_{ij} of T_i (i = 1, 2; j = 1, 2, \ldots, N). We then take their averages

\hat T_1 = \frac{\hat T_{11} + \hat T_{12} + \cdots + \hat T_{1N}}{N} \qquad \text{and} \qquad \hat T_2 = \frac{\hat T_{21} + \hat T_{22} + \cdots + \hat T_{2N}}{N}

as the estimates of T_1 and T_2.
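To make the procedure concrete, here is a minimal Python sketch of the single-pair estimator of Eqs. (10)–(16); it is not the authors' code, and the function name estimate_same_pole and the self-check built from Example 1's system (Section 6) are illustrative choices. The data are normalised by y(\infty) = KU so that a non-unit step amplitude U is handled as well.

import math

def estimate_same_pole(t1, y_t1, y_2t1, y_inf, U=1.0):
    """Estimate K, T1, T2 of G(s) = K(T2 s + 1)/(T1 s + 1)^2 from one pair of
    step-response samples y(t1), y(2 t1); the step has amplitude U."""
    K = y_inf / U                        # gain from the final value, Eq. (5)
    k1 = y_t1 / y_inf - 1.0              # Eq. (12); dividing by y_inf = K*U
    k2 = y_2t1 / y_inf - 1.0             # handles a non-unit step amplitude
    root = math.sqrt(k1 * k1 + k2)       # beta*alpha*t1 = +sqrt(k1^2 + k2)
    alpha = -k1 + root                   # Eq. (15)
    beta = root / (alpha * t1)           # Eq. (16)
    T1 = -t1 / math.log(alpha)           # invert alpha = exp(-t1/T1), Eq. (11)
    T2 = beta * T1**2 + T1               # invert Eq. (9)
    return K, T1, T2

# Self-check on noise-free samples of Example 1's system (K=3, T1=6, T2=10, U=4):
def y_true(t, K=3.0, T1=6.0, T2=10.0, U=4.0):
    beta = -(T1 - T2) / T1**2
    return K * U * (1.0 - math.exp(-t/T1) + beta * t * math.exp(-t/T1))

print(estimate_same_pole(4.0, y_true(4.0), y_true(8.0), y_inf=12.0, U=4.0))
# -> approximately (3.0, 6.0, 10.0)

Averaging the results obtained from several pairs (t_j, 2t_j), as described above, gives the final estimates \hat T_1 and \hat T_2.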

5. The second-order systems with different poles

Consider a second-order system with different poles plus a zero:

G(s) = \frac{K(T_3 s + 1)}{(T_1 s + 1)(T_2 s + 1)}, \qquad T_1 < T_2, \; T_3 \ne T_1, \; T_3 \ne T_2,    (17)
where K is the gain, and T1 , T2 and T3 are the time constants.
Let u(t ) be a unit step function. We have
Y(s) = \frac{K(T_3 s + 1)}{(T_1 s + 1)(T_2 s + 1)} \cdot \frac{1}{s}.
The step response is

y(t) = K\left[1 + \frac{T_3 - T_1}{T_1 - T_2} e^{-t/T_1} - \frac{T_3 - T_2}{T_1 - T_2} e^{-t/T_2}\right].    (18)
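As with (8), Eq. (18) can be verified symbolically; the SymPy sketch below (illustrative, not from the paper) checks that the Laplace transform of (18) equals K(T_3 s + 1)/[(T_1 s + 1)(T_2 s + 1) s]:

import sympy as sp

t, s = sp.symbols('t s', positive=True)
K, T1, T2, T3 = sp.symbols('K T1 T2 T3', positive=True)

y = K*(1 + (T3 - T1)/(T1 - T2)*sp.exp(-t/T1)
        - (T3 - T2)/(T1 - T2)*sp.exp(-t/T2))            # Eq. (18)
Y = K*(T3*s + 1)/((T1*s + 1)*(T2*s + 1)*s)              # Y(s) for a unit step

# The Laplace transform of (18) should equal Y(s); the difference simplifies to 0.
print(sp.simplify(sp.laplace_transform(y, t, s, noconds=True) - Y))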

Notice that \hat K = y(\infty). Taking three points (t_1, y(t_1)), (2t_1, y(2t_1)) and (3t_1, y(3t_1)) on the step response curve in Fig. 2 and substituting them into (18) give

y(t_1) = K\left[1 + \frac{T_3 - T_1}{T_1 - T_2} e^{-t_1/T_1} - \frac{T_3 - T_2}{T_1 - T_2} e^{-t_1/T_2}\right],

y(2t_1) = K\left[1 + \frac{T_3 - T_1}{T_1 - T_2} e^{-2t_1/T_1} - \frac{T_3 - T_2}{T_1 - T_2} e^{-2t_1/T_2}\right],

y(3t_1) = K\left[1 + \frac{T_3 - T_1}{T_1 - T_2} e^{-3t_1/T_1} - \frac{T_3 - T_2}{T_1 - T_2} e^{-3t_1/T_2}\right].

Let
\alpha_1 := e^{-t_1/T_1}, \qquad \alpha_2 := e^{-t_1/T_2}, \qquad \beta := \frac{T_3 - T_1}{T_1 - T_2}.    (19)

Then we have

y(t_1) = K[1 + \beta\alpha_1 - (1 + \beta)\alpha_2],
y(2t_1) = K[1 + \beta\alpha_1^2 - (1 + \beta)\alpha_2^2],
y(3t_1) = K[1 + \beta\alpha_1^3 - (1 + \beta)\alpha_2^3].


Fig. 2. The step response curve.

Let
k_1 := \frac{y(t_1)}{K} - 1, \qquad k_2 := \frac{y(2t_1)}{K} - 1, \qquad k_3 := \frac{y(3t_1)}{K} - 1.
Hence, we have the following equations:
\beta(\alpha_1 - \alpha_2) - \alpha_2 = k_1,
\beta(\alpha_1^2 - \alpha_2^2) - \alpha_2^2 = k_2,
\beta(\alpha_1^3 - \alpha_2^3) - \alpha_2^3 = k_3,

or

\beta(\alpha_1 - \alpha_2) = k_1 + \alpha_2,
\beta(\alpha_1^2 - \alpha_2^2) = k_2 + \alpha_2^2,    (20)
\beta(\alpha_1^3 - \alpha_2^3) = k_3 + \alpha_2^3.

Eliminating β gives
k1 (α1 + α2 ) + α1 α2 = k2 ,
k1 [(α1 + α2 )2 − α1 α2 ] + α1 α2 (α1 + α2 ) = k3 .
Thus, we have
\alpha_1 \alpha_2 = \frac{k_2^2 - k_1 k_3}{k_1^2 + k_2}, \qquad \alpha_1 + \alpha_2 = \frac{k_3 + k_1 k_2}{k_1^2 + k_2}.
Since T_1 < T_2, we have \alpha_1 < \alpha_2, and

\alpha_1 = \frac{k_1 k_2 + k_3 - \sqrt{b}}{2(k_1^2 + k_2)},    (21)

\alpha_2 = \frac{k_1 k_2 + k_3 + \sqrt{b}}{2(k_1^2 + k_2)},    (22)

where

b := 4k_1^3 k_3 - 3k_1^2 k_2^2 - 4k_2^3 + k_3^2 + 6k_1 k_2 k_3.    (23)


Substituting \alpha_1 and \alpha_2 into the first equation of (20) gives

\beta = \frac{k_1 + \alpha_2}{\alpha_1 - \alpha_2} = -\frac{2k_1^3 + 3k_1 k_2 + k_3 + \sqrt{b}}{2\sqrt{b}}.    (24)
Substituting (21)–(24) into (19) gives the estimates of the time constants T_1, T_2 and T_3 as follows:

\hat T_1 = -\frac{t_1}{\ln \alpha_1}, \qquad \hat T_2 = -\frac{t_1}{\ln \alpha_2}, \qquad \hat T_3 = \beta(\hat T_1 - \hat T_2) + \hat T_1.

Fig. 3. The step responses.

Table 1
The step response data.
t (s)   y(t)     t (s)   y(t)     t (s)   y(t)
0       0.00     12      12.54    24      12.37
2       5.31     14      12.65    26      12.30
4       8.58     16      12.65    28      12.24
6       10.53    18      12.60    ···     ···
8       11.65    20      12.52    70      12.00
10      12.25    22      12.44

To enhance the estimation accuracy, we may take the triples (t_2, y(t_2)), (2t_2, y(2t_2)) and (3t_2, y(3t_2)); (t_3, y(t_3)), (2t_3, y(2t_3)) and (3t_3, y(3t_3)); ...; (t_N, y(t_N)), (2t_N, y(2t_N)) and (3t_N, y(3t_N)), and repeat the above procedure to obtain a series of estimates \hat T_{ij} of T_i. We then take their averages

\hat T_i = \frac{\hat T_{i1} + \hat T_{i2} + \cdots + \hat T_{iN}}{N}, \quad i = 1, 2, 3,

as the estimates of T_i.
This method of determining the parameters of second-order systems can be extended to higher-order systems.
Regarding the choice of t_1, the three instants t_1, 2t_1 and 3t_1 should generally be chosen within the dynamic (transient) part of the response rather than the steady-state part, in order to enhance the accuracy of the parameter estimates.
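The following Python sketch implements the single-triple estimator of Eqs. (19)–(24); it is not the authors' code, and the function name estimate_diff_poles and the self-check built from Example 2's system below are illustrative. Note that with rounded or noisy data the discriminant (and hence \sqrt{b}) can become very small, which is another reason to average over several values of t_1 as described above.

import math

def estimate_diff_poles(t1, y_t1, y_2t1, y_3t1, y_inf, U=1.0):
    """Estimate K, T1, T2, T3 of G(s) = K(T3 s + 1)/((T1 s + 1)(T2 s + 1))
    from the step-response samples y(t1), y(2 t1), y(3 t1); amplitude U."""
    K = y_inf / U
    k1 = y_t1 / y_inf - 1.0              # normalising by y_inf = K*U matches
    k2 = y_2t1 / y_inf - 1.0             # the unit-step definitions of k1, k2, k3
    k3 = y_3t1 / y_inf - 1.0
    d = k1 * k1 + k2
    ssum = (k1 * k2 + k3) / d            # alpha1 + alpha2
    prod = (k2 * k2 - k1 * k3) / d       # alpha1 * alpha2
    root = math.sqrt(ssum**2 - 4.0*prod) # = sqrt(b)/(k1^2 + k2), cf. Eq. (23)
    alpha1 = (ssum - root) / 2.0         # Eq. (21): alpha1 < alpha2 since T1 < T2
    alpha2 = (ssum + root) / 2.0         # Eq. (22)
    beta = (k1 + alpha2) / (alpha1 - alpha2)   # first equation of (20)
    T1 = -t1 / math.log(alpha1)
    T2 = -t1 / math.log(alpha2)
    T3 = beta * (T1 - T2) + T1
    return K, T1, T2, T3

# Self-check on noise-free samples of Example 2's system (K=16, T1=25, T2=30, T3=45, U=5):
def y_true(t, K=16.0, T1=25.0, T2=30.0, T3=45.0, U=5.0):
    return K * U * (1 + (T3 - T1)/(T1 - T2)*math.exp(-t/T1)
                      - (T3 - T2)/(T1 - T2)*math.exp(-t/T2))

t1 = 10.0
print(estimate_diff_poles(t1, y_true(t1), y_true(2*t1), y_true(3*t1),
                          y_inf=80.0, U=5.0))
# -> approximately (16.0, 25.0, 30.0, 45.0)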

6. Examples

Example 1. Consider the following second-order system:


G(s) = \frac{3(10s + 1)}{(6s + 1)^2}.
In the simulation, the input is taken as a step signal with amplitude 4 and, to account for rounding errors (i.e., noise), the step response data in Table 1 are rounded to 2 decimal places.

From Table 1, we have y(\infty) = 12 and the gain \hat K = y(\infty)/U = 3.


We take t_1 = 4 s, t_2 = 6 s and t_3 = 8 s, and apply the proposed method to the data in Table 1 to determine the parameters of this example system; the estimated transfer function is

\hat G(s) = \frac{3(9.98s + 1)}{(5.98s + 1)^2}.
The step responses of G(s) and Ĝ(s) are shown in Fig. 3.
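For reference, feeding the single pair at t_1 = 4 s from Table 1 into the estimate_same_pole sketch given at the end of Section 4 (an illustrative helper, not the authors' code) already lands close to the averaged result:

# Table 1 pair at t1 = 4 s: y(4) = 8.58, y(8) = 11.65, with y(inf) = 12 and U = 4
print(estimate_same_pole(4.0, 8.58, 11.65, y_inf=12.0, U=4.0))
# -> roughly (3.0, 6.0, 10.0); averaging over t1 = 4, 6 and 8 s gives the
#    values 5.98 and 9.98 used above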

Example 2. Consider the following second-order system:


G(s) = \frac{16(45s + 1)}{(25s + 1)(30s + 1)}.

Table 2
The step response data.
t (s)   y(t)     t (s)   y(t)     t (s)   y(t)
0       0.00     30      71.91    60      83.45
5       21.16    35      75.83    65      83.73
10      37.47    40      78.66    70      83.81
15      49.95    45      80.66    ···     ···
20      59.43    50      82.02    400     80.00
25      66.58    55      82.91

Table 3
The step response data.
t (s)   y(t)       t (s)   y(t)       t (s)   y(t)
0       0.17491    12      10.07839   24      10.22430
2       4.23428    14      10.23160   26      10.24584
4       6.82100    16      9.96224    28      10.20419
6       8.24238    18      10.11159   ···     ···
8       9.24654    20      10.33812   50      10.02477
10      9.66473    22      10.19159

Fig. 4. The step responses.

In the simulation, the input is taken as a step signal with amplitude 5, and the step response data, rounded to 2 decimal places, are shown in Table 2. From Table 2, we have y(\infty) = 80 and the gain \hat K = y(\infty)/U = 16.
We take t_1 = 10 s, t_2 = 15 s and t_3 = 20 s, and apply the proposed method to the data in Table 2 to determine the parameters of this example system; the estimated transfer function is

\hat G(s) = \frac{16(45.71s + 1)}{(24.58s + 1)(31.06s + 1)}.
The step responses of G(s) and Ĝ(s) are shown in Fig. 4.

Example 3. Consider the following second-order system:


G(s) = \frac{2(8s + 1)}{(5s + 1)(6s + 1)}.
In the simulation, the input is taken as a step signal with amplitude 5, the measured output y(t) is contaminated by a white noise v(t) with zero mean and variance \sigma^2 = 0.10^2, and the step response data are shown in Table 3.

From Table 3, we have y(\infty) = 10.02477 and the gain \hat K = y(\infty)/U = 2.00495.
We use the data at t_1 = 4 s, t_2 = 6 s and t_3 = 8 s in Table 3 and apply the proposed method to determine the parameters of this example system; the estimated transfer function is

\hat G(s) = \frac{2.00495(8.42536s + 1)}{(5.38064s + 1)(5.89253s + 1)}.
The step responses of G(s) and Ĝ(s) are shown in Fig. 5.

Fig. 5. The step responses.

From Figs. 3–5, the step responses of the estimated transfer functions are very close to those of the example systems, which implies that the estimated models capture the systems' dynamics.

7. Conclusions

This paper uses an algebraic method to estimate the parameters of the transfer function models of second-order systems from step response data, avoiding the difficulty of solving transcendental equations. The numerical examples show that the proposed methods are effective for identifying second-order systems.

References

[1] D.Q. Wang, F. Ding, Input–output data filtering based recursive least squares identification for CARARMA systems, Digital Signal Processing 20 (4)
(2010) 991–999.
[2] M. Kohandel, S. Sivalogana, G. Tenti, Estimation of the quasi-linear viscoelastic parameters using a genetic algorithm, Mathematical and Computer
Modelling 47 (3–4) (2008) 266–270.
[3] C. Mocenni, E. Sparacino, A. Vicino, J.P. Zubelli, Mathematical modelling and parameter estimation of the Serra da Mesa basin, Mathematical and
Computer Modelling 47 (7–8) (2008) 765–780.
[4] J.L. Figueroa, S.L. Biagiola, O.E. Agamennoni, An approach for identification of uncertain Wiener systems, Mathematical and Computer Modelling 48
(1–2) (2008) 305–315.
[5] X.G. Liu, J. Lu, Least squares based iterative identification for a class of multirate systems, Automatica 46 (3) (2010) 549–554.
[6] F. Ding, T. Chen, Least squares based self-tuning control of dual-rate systems, International Journal of Adaptive Control and Signal Processing 18 (8)
(2004) 697–714.
[7] F. Ding, T. Chen, Z. Iwai, Adaptive digital control of Hammerstein nonlinear systems with limited output sampling, SIAM Journal on Control and
Optimization 45 (6) (2006) 2257–2276.
[8] F. Ding, T. Chen, A gradient based adaptive control algorithm for dual-rate systems, Asian Journal of Control 8 (4) (2006) 314–323.
[9] F. Ding, T. Chen, Performance bounds of the forgetting factor least squares algorithm for time-varying systems with finite measurement data, IEEE
Transactions on Circuits and Systems I: Regular Papers 52 (3) (2005) 555–566.
[10] J.R. Deller, M. Nayeri, S.F. Odeh, Least-squares identification with error bounds for real-time signal processing and control, Proceedings of the IEEE 81
(6) (1993) 815–849.
[11] F. Ding, P.X. Liu, H.Z. Yang, Parameter identification and intersample output estimation for dual-rate systems, IEEE Transactions on Systems, Man, and
Cybernetics, Part A: Systems and Humans 38 (4) (2008) 966–975.
[12] F. Ding, G. Liu, X.P. Liu, Partially coupled stochastic gradient identification methods for non-uniformly sampled systems, IEEE Transactions on
Automatic Control 55 (8) (2010) 1976–1981.
[13] J. Ding, Y. Shi, H.G. Wang, F. Ding, A modified stochastic gradient based parameter estimation algorithm for dual-rate sampled-data systems, Digital
Signal Processing 20 (4) (2010) 1238–1249.
[14] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[15] F. Ding, P.X. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises,
Signal Processing 89 (10) (2009) 1883–1890.
[16] L.L. Han, F. Ding, Multi-innovation stochastic gradient algorithms for multi-input multi-output systems, Digital Signal Processing 19 (4) (2009)
545–554.
[17] F. Ding, Several multi-innovation identification methods, Digital Signal Processing 20 (4) (2010) 1027–1039.
[18] D.Q. Wang, F. Ding, Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error
systems, Digital Signal Processing 20 (3) (2010) 750–762.
[19] J.B. Zhang, F. Ding, Y. Shi, Self-tuning control based on multi-innovation stochastic gradient parameter estimation, Systems & Control Letters 58 (1)
(2009) 69–75.
[20] L.L. Han, F. Ding, Identification for multirate multi-input systems using the multi-innovation identification theory, Computers & Mathematics with
Applications 57 (9) (2009) 1438–1449.
[21] F. Ding, H.B. Chen, M. Li, Multi-innovation least squares identification methods based on the auxiliary model for MISO systems, Applied Mathematics
and Computation 187 (2) (2007) 658–668.
[22] Y.J. Liu, Y.S. Xiao, X.L. Zhao, Multi-innovation stochastic gradient algorithm for multiple-input single-output systems using the auxiliary model, Applied
Mathematics and Computation 215 (4) (2009) 1477–1483.
[23] L. Xie, Y.J. Liu, H.Z. Yang, F. Ding, Modeling and identification for non-uniformly periodically sampled-data systems, IET Control Theory & Applications
4 (5) (2010) 784–794.

[24] F. Ding, P.X. Liu, G. Liu, Multi-innovation least squares identification for linear and pseudo-linear regression models, IEEE Transactions on Systems,
Man, and Cybernetics, Part B: Cybernetics 40 (3) (2010) 767–778.
[25] Y.J. Liu, L. Yu, F. Ding, Multi-innovation extended stochastic gradient algorithm and its performance analysis, Circuits, Systems and Signal Processing
29 (4) (2010) 649–667.
[26] F. Ding, T. Chen, Combined parameter and output estimation of dual-rate systems using an auxiliary model, Automatica 40 (10) (2004) 1739–1748.
[27] F. Ding, T. Chen, Identification of dual-rate systems based on finite impulse response models, International Journal of Adaptive Control and Signal
Processing 18 (7) (2004) 589–598.
[28] F. Ding, T. Chen, Parameter estimation of dual-rate stochastic systems by using an output error method, IEEE Transactions on Automatic Control 50
(9) (2005) 1436–1441.
[29] Y.J. Liu, L. Xie, F. Ding, An auxiliary model based recursive least squares parameter estimation algorithm for non-uniformly sampled multirate systems,
Proceedings of the Institution of Mechanical Engineers, Part I: Journal of Systems and Control Engineering 223 (4) (2009) 445–454.
[30] L.L. Han, J. Sheng, F. Ding, Y. Shi, Auxiliary model identification method for multirate multi-input systems based on least squares, Mathematical and
Computer Modelling 50 (7–8) (2009) 1100–1106.
[31] F. Ding, J. Ding, Least squares parameter estimation with irregularly missing data, International Journal of Adaptive Control and Signal Processing 24
(7) (2010) 540–553.
[32] F. Ding, T. Chen, Hierarchical gradient-based identification of multivariable discrete-time systems, Automatica 41 (2) (2005) 315–325.
[33] F. Ding, T. Chen, Hierarchical least squares identification methods for multivariable systems, IEEE Transactions on Automatic Control 50 (3) (2005)
397–402.
[34] F. Ding, T. Chen, Hierarchical identification of lifted state-space models for general dual-rate systems, IEEE Transactions on Circuits and Systems I:
Regular Papers 52 (6) (2005) 1179–1187.
[35] H.Q. Han, L. Xie, F. Ding, X.G. Liu, Hierarchical least squares based iterative identification for multivariable systems with moving average noises,
Mathematical and Computer Modelling 51 (9–10) (2010) 1213–1220.
[36] F. Ding, L. Qiu, T. Chen, Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems, Automatica 45 (2) (2009)
324–332.
[37] L.L. Xiang, L.B. Xie, R.F. Ding, Hierarchical least squares algorithms for single-input multiple-output systems based on the auxiliary model,
Mathematical and Computer Modelling 52 (5–6) (2010) 918–924.
[38] F. Ding, P.X. Liu, G. Liu, Gradient based and least-squares based iterative identification methods for OE and OEMA systems, Digital Signal Processing
20 (3) (2010) 664–677.
[39] Y.J. Liu, D.Q. Wang, F. Ding, Least-squares based iterative algorithms for identifying Box–Jenkins models with finite measurement data, Digital Signal
Processing 20 (5) (2010) 1458–1467.
[40] D.Q. Wang, G.W. Yang, R.F. Ding, Gradient-based iterative parameter estimation for Box–Jenkins systems, Computers & Mathematics with Applications
60 (5) (2010) 1200–1208.
[41] F. Ding, T. Chen, Identification of Hammerstein nonlinear ARMAX systems, Automatica 41 (9) (2005) 1479–1489.
[42] F. Ding, T. Chen, Gradient based iterative algorithms for solving a class of matrix equations, IEEE Transactions on Automatic Control 50 (8) (2005)
1216–1221.
[43] F. Ding, T. Chen, Iterative least squares solutions of coupled Sylvester matrix equations, Systems & Control Letters 54 (2) (2005) 95–107.
[44] F. Ding, T. Chen, On iterative solutions of general coupled matrix equations, SIAM Journal on Control and Optimization 44 (6) (2006) 2269–2284.
[45] F. Ding, P.X. Liu, J. Ding, Iterative solutions of the generalized Sylvester matrix equations by using the hierarchical identification principle, Applied
Mathematics and Computation 197 (1) (2008) 41–50.
[46] L. Xie, J. Ding, F. Ding, Gradient based iterative solutions for general linear matrix equations, Computers & Mathematics with Applications 58 (7) (2009)
1441–1448.
[47] J. Ding, Y.J. Liu, F. Ding, Iterative solutions to matrix equations of the form A_i X B_i = F_i, Computers & Mathematics with Applications 59 (11) (2010) 3500–3507.
[48] F. Ding, Transformations between some special matrices, Computers & Mathematics with Applications 59 (8) (2010) 2676–2695.
[49] L. Xie, Y.J. Liu, H.Z. Yang, Gradient based and least squares based iterative algorithms for matrix equations AXB + CX^T D = F, Applied Mathematics and Computation 217 (5) (2010) 2191–2199.
[50] F. Ding, Y. Shi, T. Chen, Performance analysis of estimation algorithms of non-stationary ARMA processes, IEEE Transactions on Signal Processing 54
(3) (2006) 1041–1053.
[51] F. Ding, Y. Shi, T. Chen, Auxiliary model based least-squares identification methods for Hammerstein output-error systems, Systems & Control Letters
56 (5) (2007) 373–380.
[52] D.Q. Wang, F. Ding, Extended stochastic gradient identification algorithms for Hammerstein–Wiener ARMAX systems, Computers & Mathematics
with Applications 56 (12) (2008) 3157–3164.
[53] D.Q. Wang, Y.Y. Chu, F. Ding, Auxiliary model-based RELS and MI-ELS algorithms for Hammerstein OEMA systems, Computers & Mathematics with
Applications 59 (9) (2010) 3092–3098.
[54] F. Ding, P.X. Liu, G. Liu, Identification methods for Hammerstein nonlinear systems, Digital Signal Processing (2010) doi:10.1016/j.dsp.2010.06.006.
[55] Q.G. Wang, X. Guo, Y. Zhang, Direct identification of continuous time delay systems from step response, Journal of Process Control 11 (2001) 531–542.
[56] Q. Bi, W. Cai, E.L. Lee, Q.G. Wang, C.C. Hang, Y. Zhang, Robust identification of first-order plus dead-time model from step response, Control Engineering
Practice 7 (1) (1999) 71–77.
[57] S. Ahmed, B. Huang, S.L. Shah, Identification from step responses with transient initial conditions, Journal of Process Control 18 (2) (2008) 121–130.
