14th European Signal Processing Conference (EUSIPCO 2006), Florence, Italy, September 4-8, 2006, copyright by EURASIP

BLIND ESTIMATION OF REVERBERATION TIME IN OCCUPIED ROOMS

Yonggang Zhang†, Jonathon A. Chambers†, Francis F. Li∗, Paul Kendrick‡, Trevor J. Cox‡
† The Centre of Digital Signal Processing, Cardiff School of Engineering
Cardiff University, Cardiff CF24 0YF, UK. email: [email protected], [email protected]
∗ Department of Computing and Mathematics
Manchester Metropolitan University, Manchester M1 5GD, UK. email: [email protected]
‡ School of Acoustics and Electronic Engineering
University of Salford, Salford M5 4WT, UK. email: [email protected], [email protected]

ABSTRACT

A new framework is proposed in this paper to solve the reverberation time (RT) estimation problem in occupied rooms. In this framework, blind source separation (BSS) is combined with an adaptive noise canceller (ANC) to remove the noise from the passively received reverberant speech signal. A polyfit preprocessing step is then used to extract the free decay segments of the speech signal. RT is extracted from these segments with a maximum-likelihood (ML) based method. An easy, fast and consistent method to calculate the RT via the ML estimation is also described. This framework provides a novel method for blind RT estimation with robustness to the ambient noise within an occupied room and extends the ML method for RT estimation from noise-free cases to more realistic situations. Simulation results show that the proposed framework can provide a good estimate of the RT in simulated low-RT occupied rooms.

[Figure 1: Proposed blind RT estimation framework for occupied rooms. The unobservable sources, noise s1(n) and speech s2(n), reach the microphones through the room impulse responses h11(n), h21(n), h12(n) and h22(n), giving the observable mixtures x1(n) and x2(n). BSS produces the estimates ŝ1(n) and ŝ2(n); the ANC uses ŝ1(n) as a reference to clean x1(n), yielding ŷ12(n), which passes through the polyfit stage to give z(n) and then to the ML RT estimator.]

1. INTRODUCTION

Room reverberation time is a very important parameter that quantifies the acoustic quality of a room [1]. It is defined as the time taken by a sound to decay 60 dB below its initial level after the source has been switched off. Many methods have been proposed in recent years to estimate the RT [2][3][4][5]. The maximum-likelihood (ML) estimation method proposed in [5], which utilizes a passively received speech signal, has received a lot of attention due to its simplicity and efficiency. In this method, an exponentially damped Gaussian white noise model is used to describe the diffuse reverberation tail. ML estimation is then performed on segments of the speech signal to measure the time constant of the decay, and the most likely RT is identified from a series of estimates by using an order-statistic filter. As shown by its authors, the method provides reliable RT estimates in a noise-free environment. When estimating the RT in noisy environments such as occupied rooms, where much noise is generated by the occupants, the method can in effect only use the portion of the decay curve between its initial maximum and the point where it intersects the background noise. When the noise is large, for example comparable with the excitation speech signal, the results will be contaminated or even incorrect. This method is therefore limited by the noise level and is not suitable for occupied rooms.

To make the ML RT estimation method more robust and accurate, an intuitive approach is to remove the unknown noise signal from the received speech signal as far as possible before RT estimation. A powerful tool for extracting a noise interference signal from a mixture of signals is the convolutive BSS method [6]. Given two spatially distinct observations, BSS can separate the mixed signals into two independent signals: one mainly consists of the excitation speech signal plus a residue of noise, and the other contains mostly the noise signal. Using this estimated noise signal as a reference, the noise contained in the received speech signal can then be removed by an ANC. Our new framework is motivated by this combination of BSS and ANC. Its stages, operating in an occupied room, are shown in Fig. 1. The signal s1(n), which is assumed to be the noise signal in this work, is independent of the excitation speech signal s2(n). The passively received signals x1(n) and x2(n) are modelled as convolutive mixtures of s1(n) and s2(n), where the room impulse response h_ji(n) is the impulse response from source i to microphone j. BSS is used first to obtain the estimated excitation speech signal ŝ2(n) and the estimated noise signal ŝ1(n). The estimated noise signal ŝ1(n) then serves as the reference signal for the ANC, which removes the noise component from x1(n). The output of the ANC, ŷ12(n), is an estimate of the noise-free reverberant speech signal y12(n). Compared with x1(n), it crucially retains the reverberant structure of the speech signal while containing a much lower level of noise, and is therefore better suited to estimating the RT of the occupied room. To remove the guesswork of window length selection in the ML method and to reduce the variance of the RT estimates, we use an overlapping polyfit method as a preprocessing step. The decay segments in z(n), which contain most of the free decay samples of the reverberant speech signal, are extracted by this preprocessing. Then the ML estimation method is performed only on these decay segments. Based on the idea of bisection [5], a new method to calculate the RT within the ML estimation is also provided; compared with other calculation methods it has some advantages, as will be discussed later.

The following section introduces the BSS process. The ANC is described in Section 3 and the polyfit preprocessing in Section 4. Section 5 describes the ML method and introduces a bisection algorithm for it. Simulation results are given in Section 6, and Section 7 summarizes the paper.
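Before the individual stages are described, the overall signal flow of Fig. 1 can be summarised by the short sketch below. It is a sketch only, not the authors' implementation: the four helper names are hypothetical placeholders for the four stages, three of which (anc_vss_lms, extract_decay_segments, estimate_rt_bisection) are sketched after the corresponding sections, while separate_sources stands for the complete convolutive BSS stage of Section 2 and is not sketched in full. The use of a median as the final order-statistic step is also an assumption.

```python
# Minimal sketch of the Fig. 1 signal flow, assuming 8 kHz mono recordings
# x1, x2 from two closely spaced microphones.  All helper names are
# hypothetical; they refer to the per-stage sketches added in this document,
# not to published code.
import numpy as np

def blind_rt_estimate(x1: np.ndarray, x2: np.ndarray, fs: int = 8000) -> float:
    """Return a single RT (T60) estimate in seconds from two noisy mixtures."""
    s1_hat, s2_hat = separate_sources(x1, x2)             # BSS: noise and speech estimates
    y12_hat = anc_vss_lms(primary=x1, reference=s1_hat)   # ANC: noise-reduced reverberant speech
    segments = extract_decay_segments(y12_hat, fs)        # polyfit: free-decay segments z(n)
    rts = [estimate_rt_bisection(z, fs) for z in segments]  # ML + bisection per segment
    # The paper identifies the most likely RT from the set of estimates
    # (the first dominant peak of a histogram); the median is used here
    # only as a simple stand-in for that order-statistic step.
    return float(np.median(rts))
```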
2. BLIND SOURCE SEPARATION

As shown in Fig. 1, the goal of BSS is to extract the estimated noise signal ŝ1(n) from the received mixture signals x1(n) and x2(n). If we assume that the room environment is time invariant, the received mixtures x1(n) and x2(n) can be modelled as weighted sums of convolutions of the source signals s1(n) and s2(n). Assuming that N sources are recorded by M microphones (here M = N = 2), the equation that describes this convolutive mixing process is

x_j(n) = Σ_{i=1}^{N} Σ_{p=0}^{P−1} s_i(n − p) h_ji(p)    (1)

where s_i(n) is the signal from source i, x_j(n) is the signal received by microphone j, and h_ji(n) is the P-point impulse response from source i to microphone j. Using a T-point windowed discrete Fourier transform (DFT), the time-domain signal x_j(n) can be converted into the time-frequency domain signal X_j(ω, n), where ω is a frequency index and n is a frame index. For each frequency bin we have

X(ω, n) = H(ω) S(ω, n)    (2)

where S(ω, n) = [S_1(ω, n), ..., S_N(ω, n)]^T and X(ω, n) = [X_1(ω, n), ..., X_M(ω, n)]^T are the time-frequency representations of the source signals and the observed signals respectively, and (·)^T denotes vector transpose. The separation is performed by an unmixing matrix W(ω) in each frequency bin ω:

Ŝ(ω, n) = W(ω) X(ω, n)    (3)

where Ŝ(ω, n) = [Ŝ_1(ω, n), ..., Ŝ_N(ω, n)]^T is the time-frequency representation of the estimated source signals and W(ω) is the frequency-domain unmixing matrix. W(ω) is determined so that Ŝ_1(ω, n), ..., Ŝ_N(ω, n) become mutually independent. Exploiting the nonstationarity of the speech signal, we define the cost function

J(W(ω)) = arg min_W Σ_{ω=1}^{T} Σ_{k=1}^{K} F(W)(ω, k)    (4)

where K is the number of signal segments and F(W)(ω, k) is defined as

F(W)(ω, k) = ||R_Ŝ(ω, k) − diag[R_Ŝ(ω, k)]||_F²    (5)

where R_Ŝ(ω, k) is the autocorrelation matrix of the separated signals, || · ||_F² denotes the squared Frobenius norm, and k is the block index. The separation problem is thereby converted into a joint diagonalization problem. Obviously, the trivial solution W(ω) = 0 would minimize F(W)(ω, k); to avoid this, constraints must be placed on the unmixing matrix. In [6] a penalty function is added to convert the constrained optimization problem into an unconstrained one. The cost function of penalty-function-based joint diagonalization is

J(W(ω)) = arg min_W Σ_{ω=1}^{T} Σ_{k=1}^{K} F(W)(ω, k) + λ g(W)(ω, k)    (6)

where λ is the penalty weight factor and g(W)(ω, k) is a penalty function based on a constraint on the unmixing matrix. Using a gradient-based descent method, the unmixing matrix can be calculated after several iterations from equation (6). The separated signals ŝ1(n) and ŝ2(n) are then obtained from (3) after applying an inverse DFT.
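As an illustration of how (3)-(5) can be evaluated in practice, the sketch below computes the block-wise joint-diagonalization cost from STFT data and applies the per-bin unmixing matrices. It is a sketch under stated assumptions only: the STFT tensor layout is assumed, and the penalty term g(W) of (6) and the gradient-descent update from [6] are omitted.

```python
# Sketch of the frequency-domain BSS quantities in (3)-(5), assuming the
# mixtures are already in STFT form X[f, m, t] (f: frequency bin,
# m: microphone, t: frame).  Only the joint-diagonalization cost and the
# separation step (3) are shown; the penalty term and optimizer are omitted.
import numpy as np

def offdiag_cost(W: np.ndarray, X: np.ndarray, n_blocks: int) -> float:
    """Sum over bins and blocks of ||R_S(w,k) - diag(R_S(w,k))||_F^2, eqs. (4)-(5)."""
    n_bins, _, n_frames = X.shape
    block = n_frames // n_blocks
    cost = 0.0
    for f in range(n_bins):
        S = W[f] @ X[f]                                # eq. (3) in bin f
        for k in range(n_blocks):
            Sk = S[:, k * block:(k + 1) * block]
            R = (Sk @ Sk.conj().T) / block             # block autocorrelation R_S(f, k)
            E = R - np.diag(np.diag(R))                # off-diagonal part
            cost += float(np.sum(np.abs(E) ** 2))      # squared Frobenius norm
    return cost

def apply_unmixing(W: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Apply the per-bin unmixing matrices: S_hat[f] = W[f] X[f], eq. (3)."""
    return np.einsum('fij,fjt->fit', W, X)
```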
3. ADAPTIVE NOISE CANCELLER

After BSS we obtain the estimated noise signal ŝ1(n). This signal is then used as the reference signal in the ANC stage to remove the noise component from the received signal x1(n). A new variable step size LMS algorithm which is suitable for speech processing is used in the ANC. The step-size update can be formulated as follows:

e(n) = x1(n) − ŝ1^T(n) w(n)    (7)

g(n) = e(n) ŝ1(n) / √( L [σ̂_e²(n) + σ̂_s²(n)] )    (8)

p(n) = β p(n − 1) + (1 − β) g(n)    (9)

µ(n + 1) = α µ(n) + γ ||p(n)||_F²    (10)

where µ(n) is the variable step size, ŝ1(n) = [ŝ1(n), ..., ŝ1(n − L + 1)]^T, w(n) is the weight vector of the adaptive filter, L is the filter length, σ̂_e²(n) and σ̂_s²(n) are estimates of the temporal error energy and the temporal input energy, 0 < α < 1, 0 < β < 1, γ > 0, g(n) is the square-root-normalized gradient vector, and p(n) is a smoothed version of g(n). The recursion for the filter weight vector is

w(n + 1) = w(n) + µ(n) e(n) ŝ1(n) / ( L [σ̂_e²(n) + σ̂_s²(n)] )    (11)

The square-root-normalized gradient vector g(n) in (8) provides a robust measure of the adaptation process. The first-order averaging operation in (9) removes the disturbance introduced by the target signal. The variable step size µ(n) in (10) is adapted to obtain a fast convergence rate during the early stage of adaptation and a small misadjustment after the algorithm converges. The weight vector adaptation in (11) is based on the sum method in [7], which is designed to minimize the steady-state mean square error. Equations (7)-(11) define a new variable step size LMS algorithm for the ANC stage. The output signal of the ANC, ŷ12(n), should then be a good estimate of the noise-free reverberant speech signal y12(n). Next, estimation of the RT from ŷ12(n) must be considered.
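A minimal sketch of the recursions (7)-(11) follows. The filter length and the α, β, γ values are taken from the simulation section; the exponential averaging used for the energy estimates σ̂_e²(n) and σ̂_s²(n) and the initial step size are assumptions, since the paper does not specify them.

```python
import numpy as np

def anc_vss_lms(primary: np.ndarray, reference: np.ndarray, L: int = 500,
                alpha: float = 0.99, beta: float = 0.9999, gamma: float = 200.0,
                mu0: float = 1e-3, avg: float = 0.99, eps: float = 1e-10) -> np.ndarray:
    """Variable step-size LMS ANC, eqs. (7)-(11), as a sketch.

    primary   : x1(n), noisy reverberant speech (desired signal of the ANC)
    reference : s1_hat(n), noise estimate from BSS
    Returns the error signal e(n), which serves as y12_hat(n).
    The exponential averaging factor `avg` and the initial step size `mu0`
    are assumed values not given in the paper.
    """
    n = len(primary)
    w = np.zeros(L)                 # adaptive filter weights
    p = np.zeros(L)                 # smoothed gradient, eq. (9)
    mu = mu0
    sig_e2 = sig_s2 = eps           # temporal error/input energy estimates
    e = np.zeros(n)                 # first L samples left at zero
    for i in range(L, n):
        s_vec = reference[i - L + 1:i + 1][::-1]               # [s(i), ..., s(i-L+1)]
        e[i] = primary[i] - w @ s_vec                          # eq. (7)
        sig_e2 = avg * sig_e2 + (1 - avg) * e[i] ** 2          # assumed energy estimator
        sig_s2 = avg * sig_s2 + (1 - avg) * reference[i] ** 2
        denom = L * (sig_e2 + sig_s2) + eps
        g = e[i] * s_vec / np.sqrt(denom)                      # eq. (8)
        p = beta * p + (1 - beta) * g                          # eq. (9)
        w = w + mu * e[i] * s_vec / denom                      # eq. (11), uses mu(n)
        mu = alpha * mu + gamma * float(p @ p)                 # eq. (10), gives mu(n+1)
    return e
```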
4. POLYFIT PREPROCESSING

In this stage, the input signal is the estimated noise-free reverberant speech signal ŷ12(n). The overlapping polyfit method is used to extract the decay segments of this signal, and the output is the signal z(n), which contains the decay segments of ŷ12(n). In accordance with the ML estimation of RT, we use the same exponentially damped Gaussian white noise model as in [5]. The mathematical formulation is

ŷ12(n) = a(n) v(n)    (12)

where v(n) is an i.i.d. term with normal distribution N(0, σ) and a(n) is a time-varying envelope term. Let a single decay rate τ describe the damping of the sound envelope during free decay; then the sequence a(n) is uniquely determined by

a(n) = exp(−n/τ) = a^n    (13)

where

a = exp(−1/τ)    (14)

In this stage, we first use a moving window with an appropriate length and shift to obtain overlapping speech frames. From the model of the reverberant speech tail in (12), which is assumed to hold in each frame, the logarithm of the envelope of a free decay segment is a line with negative slope. Because in reality the RT should lie within a reasonable span, for example 0 s to 3 s, such a slope should lie within a corresponding range. A polyfit operation is therefore performed on each frame to extract the slope. By discarding the frames whose slopes fall outside this range, the speech signal is divided into several continuous decay segments. The longest segments contained in z(n) should contain the most likely free decay segments of the speech signal.

As a preprocessing stage for the ML RT estimation method this has several advantages. First, it provides the window length for the ML RT estimation automatically. Although it was found in [5] that increasing the window length reduces the variability of the estimates, the window length is limited by the duration and occurrence of the gaps between sound segments, so its choice is a trade-off between the accuracy and the variance of the estimated RTs. After the polyfit preprocessing, the window length of the ML estimation must be less than the length of the extracted signal segment and is chosen automatically according to the segment length; simulation results show that half of the segment length is a good choice. Secondly, the variance of the RT estimates is reduced, because most samples of these segments are in agreement with the damped Gaussian model, as will be confirmed in the later simulations.
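A sketch of this overlapping-window polyfit gating is given below. The 400-sample window (0.05 s at 8 kHz) and 10-sample shift follow the simulation section; using log|ŷ12(n)| as the envelope, the exact slope bounds and the number of segments returned are assumptions about details the text leaves open.

```python
import numpy as np

def extract_decay_segments(y, fs=8000, win=400, shift=10,
                           t60_min=0.1, t60_max=3.0, n_keep=10, eps=1e-12):
    """Overlapping-window polyfit preprocessing of Section 4 (a sketch).

    Frames whose fitted log-envelope slope corresponds to a free decay with
    t60_min <= T60 <= t60_max are kept, consecutive kept frames are merged,
    and the n_keep longest segments are returned.
    """
    t = np.arange(win) / fs
    # A free decay exp(-t/tau) has log-envelope slope -1/tau = -6.91/T60
    # nepers per second, so the RT span bounds the admissible slope.
    slope_min, slope_max = -6.91 / t60_min, -6.91 / t60_max
    starts = list(range(0, len(y) - win, shift))
    keep = []
    for s in starts:
        logenv = np.log(np.abs(y[s:s + win]) + eps)    # assumed envelope: log|y(n)|
        slope = np.polyfit(t, logenv, 1)[0]            # first-order polyfit per frame
        keep.append(slope_min <= slope <= slope_max)
    # Merge runs of consecutive admissible frames into continuous decay segments.
    segments, run_start, run_end = [], None, None
    for flag, s in zip(keep, starts):
        if flag:
            run_start = s if run_start is None else run_start
            run_end = s + win
        elif run_start is not None:
            segments.append((run_start, run_end))
            run_start = None
    if run_start is not None:
        segments.append((run_start, run_end))
    segments.sort(key=lambda ab: ab[1] - ab[0], reverse=True)
    return [y[a:b] for a, b in segments[:n_keep]]
```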
5. ML RT ESTIMATION METHOD

The ML estimation method is then performed on the chosen segments z(n). From the definition of RT and the signal model, the relationship between the RT and the decay rate τ is [5]

T60 = −3τ / log10(exp(−1)) = 6.91τ    (15)

The decay rate τ is extracted by the ML estimation method. Denoting the N-dimensional vectors of z(n) and a(n) (the same as those in (13) and (14)) by z and a, where N is the estimation window length, we obtain the log-likelihood function

E{L(z; a, σ)} = −(N(N − 1)/2) ln(a) − (N/2) ln(2πσ²) − (1/(2σ²)) Σ_{n=1}^{N} a^{−2n} z²(n)    (16)

where σ is the initial power of the signal. With this function the parameters a and σ can be estimated using an ML approach. From each segment we obtain a series of estimates of RT, and all estimates are used to identify the most likely RT of the room.
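The sketch below evaluates (16) for a candidate decay parameter a. Replacing σ² by its ML value for the current a (obtained by setting the derivative of (16) with respect to σ to zero) is one way to organise the joint maximisation; treating it this way is an assumption, not a statement of the authors' exact procedure.

```python
import numpy as np

def sigma2_ml(z: np.ndarray, a: float) -> float:
    """ML estimate of sigma^2 for a given decay parameter a: (1/N) sum a^(-2n) z^2(n)."""
    n = np.arange(1, len(z) + 1)
    return float(np.mean(a ** (-2.0 * n) * z ** 2))

def log_likelihood(z: np.ndarray, a: float, sigma2=None) -> float:
    """Log-likelihood of eq. (16) for the damped Gaussian model z(n) = a^n v(n)."""
    N = len(z)
    n = np.arange(1, N + 1)
    if sigma2 is None:
        sigma2 = sigma2_ml(z, a)     # profile sigma out (assumed organisation)
    return float(-0.5 * N * (N - 1) * np.log(a)
                 - 0.5 * N * np.log(2.0 * np.pi * sigma2)
                 - 0.5 * np.sum(a ** (-2.0 * n) * z ** 2) / sigma2)
```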
By considering the relationship between a and the decay rate τ, we propose a new bisection method defined with respect to the RT, rather than with respect to a as in [5]. The range of the RT is set between 0.1 s and 3 s. As the time constant is not required to be arbitrarily precise, the accuracy is limited to 10 ms in our method. The update of our bisection method is as follows:

i) Initialization:

    T60_min = 0.1;  T60_max = 3;  accuracy = 0.01;
    iter = log2((T60_max − T60_min)/accuracy)

where accuracy is the accuracy of the RT estimate and iter is the number of iterations.

ii) Iteration:

    T(i) = (T60_min + T60_max)/2
    a(i) = exp(−6.91/T(i))
    g(i) = ∂L(z; a, σ)/∂a evaluated at a(i)
    if g(i) > 0 then T60_min = T(i)
    if g(i) < 0 then T60_max = T(i)

As the authors point out in [5], the disadvantage of the bisection method is that it works poorly in regions near the true value of a. From (14) and (15) we know that a is not a linear transform of the RT, so our bisection on the RT is in effect a non-uniform bisection with respect to a. Compared with the fast block algorithm proposed in [8], our algorithm has a number of advantages:
1. No step size needs to be selected.
2. No initial value of a is needed.
3. It always converges, and it converges quickly within a fixed number of steps.
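A sketch of this bisection is given below. The derivative ∂L/∂a is taken analytically from (16), with σ² set to its ML value for the current a. Note that when T60 is given in seconds and n counts samples, the corresponding per-sample decay is exp(−6.91/(T60·fs)); the sampling rate is left implicit in the update rule above, so making it explicit here is a small assumption.

```python
import numpy as np

def estimate_rt_bisection(z: np.ndarray, fs: int = 8000,
                          t60_min: float = 0.1, t60_max: float = 3.0,
                          accuracy: float = 0.01) -> float:
    """Bisection on T60 using the sign of dL/da, as described in Section 5."""
    N = len(z)
    n = np.arange(1, N + 1)
    z2 = z ** 2
    n_iter = int(np.ceil(np.log2((t60_max - t60_min) / accuracy)))
    for _ in range(n_iter):
        T = 0.5 * (t60_min + t60_max)
        a = np.exp(-6.91 / (T * fs))                   # per-sample decay for T seconds
        sigma2 = np.mean(a ** (-2.0 * n) * z2)         # ML sigma^2 for this a
        # dL/da from (16): -N(N-1)/(2a) + (1/sigma^2) * sum n a^(-2n-1) z^2(n)
        grad = (-0.5 * N * (N - 1) / a
                + np.sum(n * a ** (-2.0 * n - 1.0) * z2) / sigma2)
        if grad > 0:
            t60_min = T    # likelihood still increasing with a: true T60 is larger
        else:
            t60_max = T
    return 0.5 * (t60_min + t60_max)
```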
6. SIMULATION

In this section we examine the performance of the proposed framework. The flow chart of the simulations is shown in Fig. 1. The occupied room and its impulse responses h_ji between source i and microphone j are simulated with an image room model [9]. The room size is set to 10 × 10 × 5 m³ and the reflection coefficient is set to 0.7, in rough correspondence with the actual room. The RT of this room measured by Schroeder's method [2] is 0.27 s. The excitation speech signal and the noise signal are two anechoic 40-second male speech signals with a sampling frequency of 8 kHz, scaled to make the signal-to-noise ratio (SNR) 0 dB over the whole observation.
The positions of these two sources are set to [1 m, 3 m, 1.5 m] and [3.5 m, 2 m, 1.5 m], and the positions of the two microphones are set to [2.45 m, 4.5 m, 1.5 m] and [2.55 m, 4.5 m, 1.5 m], respectively.
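The 0.27 s reference value quoted above is obtained by applying Schroeder's method [2] to the simulated impulse response. For completeness, a minimal sketch of Schroeder backward integration is given below; it assumes the impulse response is available, and the −5 dB to −25 dB fitting range is an assumed choice since the text does not state the evaluation range used.

```python
import numpy as np

def schroeder_rt(h: np.ndarray, fs: int, fit_db=(-5.0, -25.0)) -> float:
    """Reverberation time from an impulse response via Schroeder backward
    integration [2].  The energy decay curve is fitted over fit_db (an
    assumed range) and extrapolated to -60 dB."""
    edc = np.cumsum(h[::-1] ** 2)[::-1]               # backward-integrated energy
    edc_db = 10.0 * np.log10(edc / edc[0] + 1e-30)    # energy decay curve in dB
    hi, lo = fit_db
    idx = np.where((edc_db <= hi) & (edc_db >= lo))[0]
    t = idx / fs
    slope, intercept = np.polyfit(t, edc_db[idx], 1)  # dB per second (slope < 0)
    return -60.0 / slope                              # time to decay by 60 dB
```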

As shown in Fig. 1, BSS is performed first to extract the estimated noise signal ŝ1. This signal contains mostly the noise signal and a low level of the desired speech signal. To evaluate the BSS performance we use a noise-to-signal ratio (NSR), defined as the energy ratio between the noise component and the speech component contained in ŝ1. The NSR of ŝ1 in this simulation is 38 dB, so ŝ1 has a strong correlation with the noise signal s1 and only a slight correlation with the speech signal. This signal is then used in the ANC as the reference signal. The filter length of the ANC is set to 500 and the parameters α, β, γ are set to 0.99, 0.9999 and 200 respectively. The last 1000 samples of the filter coefficients are used to measure the steady-state performance. The output signal of the ANC contains two components, the reverberant speech signal and the residue of the noise signal; the signal-to-noise ratio (SNR) between these two components is 43 dB. The first approximately 10 s of this signal is used to estimate the RT. We plot the first approximately 10 s of the received signal x1 and of the ANC output ŷ12 in Fig. 2(a) and Fig. 2(b) respectively. It is easy to see that after BSS and ANC the noise contained in x1 is greatly reduced.

[Figure 2: The received mixture signal, the output of the ANC and the segments extracted by the polyfit process: (a) received mixture signal; (b) output signal of the ANC; (c) extracted segments by the polyfit process. Each panel shows amplitude against sample number.]

First we estimate the RT by applying the ML RT estimation method [5] to the whole output signal of the ANC. According to the analysis and simulations in [5], the window length is set to 1200, which is approximately equal to 4τ and therefore a good choice of window length. The results are shown in Fig. 3(a).

Then the polyfit process is performed to extract the free decay segments. The window length of our polyfit method is set to 400 samples (0.05 s) and the shift is set to 10 samples. Ten segments extracted by the polyfit stage are shown in Fig. 2(c). Note that three segments are joined in the figure in the interval 75,000 to 80,000.

Finally, two experiments are performed to demonstrate the two advantages of the polyfit process analyzed in Section 4. In the first experiment, the extracted signal at the output of the polyfit process is used to estimate the RT with the ML method and a window length of 1200; the results are shown in Fig. 3(b). The second experiment is the same as the first except that the window length of the ML method is decided automatically, as half of the segment length; the results are shown in Fig. 3(c).

[Figure 3: Histograms of the RT estimates: (a) ML method applied to the whole output signal of the ANC with a window length of 1200 (first peak at 0.29 s); (b) ML method applied to the output of the polyfit process with a window length of 1200 (first peak at 0.35 s); (c) ML method applied to the output of the polyfit process with an automatically decided window length (first peak at 0.3 s).]

Comparing Fig. 3(a) with Fig. 3(b), we can see that the variance of the RT estimates is greatly reduced by using the polyfit process, in which most decay samples are extracted. The first peak in Fig. 3(a) is at 0.29 s, but it is not clear and the variance of the RT estimates is very large. In Fig. 3(b) the variance of the RT estimates is greatly reduced and the first peak is at 0.35 s. Although both results are larger than the theoretical RT of 0.27 s, due to the lack of sharp transients in the clean speech, the bias of the model in the ML method and the influence of the interference, they are reasonable and acceptable for most applications.

Comparing Fig. 3(c) with Fig. 3(b), we can see that the variances of the RT estimates in the two figures are comparable. The first peak in Fig. 3(c) is at 0.3 s, which is also a reasonable and acceptable result. We can therefore conclude that the performance of the ML method with the automatically decided window length is comparable with, if not better than, that of the ML method with a well-chosen window length.

From all the simulations above we can see that the combination of BSS and ANC can remove much of the noise signal whilst retaining the key reverberant structure, making RT estimation in a high-noise environment possible. Furthermore, the polyfit process added before the ML RT estimation reduces the variance of the results and removes the 'guess' work in choosing the window length of the ML RT estimation method.
We have performed other experiments in which one of the speech signals in the previous simulations is replaced by a white noise signal, as a simulated interference in the occupied room, and similar estimation results are obtained. However, limited by the room model and by the performance of frequency-domain BSS, this framework is designed to estimate the RT of occupied rooms whose RT is less than 0.3 s. Nonetheless, as shown by the simulations above, reliable RT estimates can be extracted with this framework in a highly noisy occupied room, something that has not previously been possible.

7. CONCLUSION
This paper proposes a new framework for blind RT estimation in occupied rooms. In this framework, BSS is combined with an ANC to remove the noise from the received speech signal, and a polyfit stage is added to improve the performance of the ML RT estimation method. A bisection method is used within the ML method which provides several advantages over the previous calculation method. Simulation results show that the noise is largely removed from the reverberant speech signal and that the performance of the framework is good in a simulated occupied room with a low RT. Given the motivation of our framework, BSS and ANC can potentially be used as a preprocessing stage in many reverberation time estimation methods. Although the mixing model used in this paper is not suitable for every application, the framework provides a new way to overcome the noise disturbance in RT estimation. However, limited by the performance of convolutive BSS, it is currently only appropriate for the low-RT case. Future work will focus on the theoretical analysis of this blind RT estimation framework and on the improvement of its stages, especially the improvement of convolutive BSS in long-reverberation environments.

REFERENCES

[1] H. Kuttruff, Room Acoustics, 4th ed., Spon Press, London, 2000.

[2] M. R. Schroeder, "New method for measuring reverberation time," J. Acoust. Soc. Am., vol. 37, pp. 409–412, 1965.

[3] ISO 3382, "Acoustics - Measurement of the reverberation time of rooms with reference to other acoustical parameters," International Organization for Standardization, 1997.

[4] T. J. Cox, F. Li and P. Darlington, "Extracting room reverberation time from speech using artificial neural networks," J. Audio Eng. Soc., vol. 49, pp. 219–230, 2001.

[5] R. Ratnam, D. L. Jones, B. C. Wheeler, W. D. O'Brien Jr., C. R. Lansing and A. S. Feng, "Blind estimation of reverberation time," J. Acoust. Soc. Am., vol. 114, no. 5, pp. 2877–2892, Nov. 2003.

[6] W. Wang, S. Sanei and J. A. Chambers, "Penalty function-based joint diagonalization approach for convolutive blind separation of nonstationary sources," IEEE Trans. Signal Processing, vol. 53, no. 5, pp. 1654–1669, May 2005.

[7] J. E. Greenberg, "Modified LMS algorithms for speech processing with an adaptive noise canceller," IEEE Trans. Speech and Audio Processing, vol. 6, no. 4, pp. 338–351, July 1998.

[8] R. Ratnam, D. L. Jones and W. D. O'Brien Jr., "Fast algorithms for blind estimation of reverberation time," IEEE Signal Processing Letters, vol. 11, no. 6, pp. 537–540, June 2004.

[9] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Am., vol. 65, pp. 943–950, Apr. 1979.
