
Signal Processing 152 (2018) 1–12

Real-valued root-MUSIC for DOA estimation with reduced-dimension EVD/SVD computation

Feng-Gang Yan a,b, Liu Shuai a, Jun Wang a,∗, Jun Shi b,∗, Ming Jin a

a School of Information and Electrical Engineering, Harbin Institute of Technology at Weihai, Weihai 264209, China
b Communication Research Center, Harbin Institute of Technology, Harbin 150001, China

∗ Corresponding authors. E-mail addresses: [email protected] (J. Wang), [email protected] (J. Shi).

https://doi.org/10.1016/j.sigpro.2018.05.009

Article history: Received 3 December 2017; Revised 9 May 2018; Accepted 10 May 2018; Available online 22 May 2018

Keywords: Direction-of-arrival (DOA) estimation; Root multiple signal classification (root-MUSIC); Reduced-dimension EVD/SVD; Uniform linear array (ULA); Real-valued computation; Bisymmetric structure

Abstract

A novel real-valued formulation of the popular root multiple signal classification (root-MUSIC) direction of arrival (DOA) estimation technique with substantially reduced computational complexity is developed. The proposed real-valued root-MUSIC (RV-root-MUSIC) algorithm reduces the computational burden mainly in three aspects. First, it exploits the eigenvalue decomposition or the singular value decomposition (EVD/SVD) of a real-valued covariance matrix to extract a real-valued noise subspace, which reduces the complexity by a factor of about four as compared to root-MUSIC. Next, based on the bisymmetric or anti-bisymmetric structure of the real-valued covariance matrix, the real-valued EVD/SVD in RV-root-MUSIC is optimized to be performed equivalently on two sub-matrices with reduced dimensions of about half size, which further reduces the complexity by another factor of about four as compared to most state-of-the-art real-valued estimators, including unitary root-MUSIC (U-root-MUSIC). Finally, the eigenvectors and singular vectors of those sub-matrices are found to have centrosymmetrical or anti-centrosymmetrical structures, while the roots of RV-root-MUSIC are proven to appear in conjugate pairs of the form a + jb, a − jb, which also allows fast coefficient computation and real-valued rooting using Bairstow's method. Numerical simulations illustrate that, with significantly reduced complexity, the proposed technique is able to provide good root mean square errors (RMSEs) close to the Cramér–Rao Lower Bound (CRLB).

© 2018 Elsevier B.V. All rights reserved.

1. Introduction

In many fields such as radar, sonar, passive localization and wireless communication, there is a need to determine the directions of arrival (DOAs) of different signals impinging from distinct directions on an array of spatially distributed sensors or antennas [1]. Over several decades, this topic has been extensively studied and numerous algorithms have been proposed. Since the majority of traditional spectral search-based methods such as multiple signal classification (MUSIC) [2], maximum likelihood (ML) [3], subspace fitting [4] and Min-Norm [5] involve high computational complexities, reducing the computational burden of those estimators has become one of the bottleneck problems in the literature [6–11].

One of the most representative approaches to complexity reduction is to exploit a specific array structure to simplify the problem formulation. It is well known that, by using the Vandermonde structure of the steering vector of a uniform linear array (ULA), the time-consuming spectral search in MUSIC can be equivalently transformed into a simple polynomial rooting step by the popular root-MUSIC algorithm [12]. Taking advantage of the shift-invariant array geometry, DOAs can also be found with low complexity by the well-known estimation of signal parameters via rotational invariance techniques (ESPRIT) [13]. Although root-MUSIC and ESPRIT require no spectral search as compared to MUSIC, conventional versions of these classical algorithms were originally proposed based on complex-valued computations, and consequently there is a demand for algorithms with more efficient real-valued computations.

Following this idea, a unitary transformation [14] as well as a forward/backward averaging [15,16] technique has been proposed to realize real-valued computations. These techniques were investigated for the unitary MUSIC algorithm [17] and extended progressively to a variety of approaches including unitary root-MUSIC (U-root-MUSIC) [18], unitary ESPRIT [19], the unitary method of direction-of-arrival estimation [20] and the unitary matrix pencil [21]. With the special structure of centro-symmetrical arrays (CSAs), those algorithms transform the complex array covariance matrix (ACM) into a real one. It has been proven that this real matrix is symmetrical, and hence eigenvalue decomposition (EVD) and singular value decomposition (SVD) operations can be implemented with real-valued computations [22]. Since one multiplication between two complex variables requires about four times the cost of one between two real variables, unitary algorithms can reduce the complexity by a factor of about four. Besides, it has been found that unitary methods also show improved accuracy as compared to their complex-valued versions [23].
Despite their increased accuracy at reduced cost, almost all of the state-of-the-art unitary transformation-based methods are suitable only for CSAs [24]. To extend real-valued DOA estimation with no dependence on the array structure, we have proposed in [25] a real-valued MUSIC (RV-MUSIC) algorithm for arbitrary array geometries. The basic idea is to exploit a real-valued EVD/SVD computation on either the real part of the ACM (R-ACM) or the imaginary part of the ACM (I-ACM) to obtain a real-valued noise subspace. However, it is worth noting that almost all of the existing real-valued methods, including RV-MUSIC, still involve an EVD/SVD step on an M × M ACM or on a transformed ACM of the same size, where M is the number of sensors. Generally, this EVD/SVD computation requires about O(M^3) flops [26]. When massive arrays are used [27–29], M can be a very large number and this term of high complexity is still unacceptable.

As a further development of the previous works in [24,25], we propose in this paper a new real-valued root-MUSIC (RV-root-MUSIC) algorithm. We show that under the geometry of a ULA, the R-ACM is a bisymmetric matrix while the I-ACM is an anti-bisymmetric matrix. Based on this mathematical fact, we provide an in-depth analysis in two cases with respect to the parity of M to prove that both the EVD/SVD of the R-ACM and the SVD of the I-ACM can be equivalently computed on two sub-matrices with reduced dimensions of about half size. We show that the polynomial coefficients in RV-root-MUSIC are real, and that they can be efficiently computed using the centrosymmetrical or anti-centrosymmetrical eigenvectors and singular vectors of those sub-matrices. Thanks to the real coefficients, we show that the roots of RV-root-MUSIC appear in conjugate pairs of the form a + jb, a − jb, which enables us to exploit Bairstow's method [17,30] for fast rooting with real-valued computations. We finally conduct numerical simulations to verify that, with substantially reduced complexity, the proposed approach is able to provide satisfactory performance close to the Cramér–Rao Lower Bound (CRLB) [31].

Notations. Throughout the paper, matrices and vectors are denoted by upper- and lower-case boldface letters, respectively. Complex- and real-valued vectors and matrices are distinguished by single-bar and double-bar boldface letters, respectively, and detailed mathematical notations are defined in Table 1.

Table 1
Mathematical notations.

(·)^T | Transpose
(·)^* | Conjugation
(·)^H | Hermitian transpose
∩ | Intersection
∠ | Phase angle
|·| | Matrix determinant
‖·‖ | Frobenius norm
E[·] | Mathematical expectation
Re(·) | Real part of the embraced element
Im(·) | Imaginary part of the embraced element
0 | Zero matrix (vector)
I_m | m × m identity matrix
J_m | m × m exchange matrix
diag{·} | Diagonal matrix composed of the embraced elements
a_i | ith column of matrix A
γ(i) | ith element of vector γ
γ(i:j) | Sub-vector composed of the ith to jth elements of vector γ

Definitions. The following definitions are made in the paper.

1). Null space. For an m × n matrix A, the null space of A is denoted by null(A), given by

null(A) ≜ {β ∈ C^{n×1} | Aβ = 0}.   (1)

2). Column space. For an m × n matrix A, the column space of A is denoted by span(A), given by

span(A) ≜ {γ ∈ C^{m×1} | γ = Σ_{i=1}^{n} k_i a_i, k_i ∈ C}.   (2)

3). Bisymmetric matrix. A bisymmetric matrix is a square matrix symmetric about both of its main diagonals, i.e., an m × m matrix A is bisymmetric if it satisfies

A^T = A   (3.1)
J_m A J_m = A.   (3.2)

4). Anti-bisymmetric matrix. An anti-bisymmetric matrix is a square matrix which is anti-symmetric about both of its main diagonals, i.e., an m × m matrix A is anti-bisymmetric if it satisfies

A^T = −A   (4.1)
J_m A J_m = −A.   (4.2)

5). Centrosymmetrical vector. An m × 1 vector a is a centrosymmetrical vector if it satisfies

J_m a = a.   (5)

6). Anti-centrosymmetrical vector. An m × 1 vector a is an anti-centrosymmetrical vector if it satisfies

J_m a = −a.   (6)

2. Signal model and literature review

2.1. Signal model

Assume L uncorrelated narrow-band signals s_l(t), l ∈ [1, L], with unknown DOAs θ ≜ [θ_1, θ_2, ..., θ_L] simultaneously impinge from the far field on a ULA composed of M antenna elements with inter-sensor spacing d, where L is assumed to be known in advance [2–23]. The inter-sensor spacing d is assumed to satisfy the half-wavelength constraint d ≤ μ/2 to avoid the phase ambiguity caused by the multi-valued property of the sine function, where μ is the wavelength of the narrow-band sources. In most subspace-based high-resolution DOA estimators [32], M is generally assumed to be larger than L [2–23]. In the proposed technique, it is assumed that M > 2L, which is reasonable for large ULAs [25,27–29]. Let the first sensor be the reference point; the array output at snapshot t, t ∈ [1, T], can be expressed as

x(t) = A s(t) + n(t),   (7)

where A = [a(θ_1), a(θ_2), ..., a(θ_L)] is the M × L array manifold matrix with each column

a(θ) = [1, e^{j(2π/μ)d sin θ}, ..., e^{j(2π/μ)d(M−1) sin θ}]^T   (8)

denoting the M × 1 steering vector, where j ≜ √−1. In addition, s(t) is the L × 1 signal vector, n(t) ∼ CN(0, σ_n² I_M) is the M × 1 additive noise vector and σ_n² is the noise power.

The M × M forward-only ACM is given by

R = E[x(t) x^H(t)] = A R_s A^H + σ_n² I_M,   (9)

where R_s ≜ E[s(t) s^H(t)] is the L × L signal covariance matrix. As the theoretical ACM is unavailable, we can compute an estimated ACM (EACM) from T snapshots of data as

R̂ = (1/T) Σ_{t=1}^{T} x(t) x^H(t).   (10)
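As a concrete illustration of the model in Eqs. (7)–(10), the following minimal NumPy sketch generates ULA snapshots and forms the forward-only EACM. The helper names, source powers and noise level are hypothetical example values of mine, not taken from the paper.

```python
import numpy as np

def ula_steering(M, d_over_lambda, theta_deg):
    """Eq. (8): M x 1 steering vector of a ULA for a DOA given in degrees."""
    theta = np.deg2rad(theta_deg)
    m = np.arange(M)
    return np.exp(1j * 2 * np.pi * d_over_lambda * m * np.sin(theta))

def simulate_snapshots(M, doas_deg, T, snr_db, d_over_lambda=0.5, rng=None):
    """Eq. (7): x(t) = A s(t) + n(t) for L uncorrelated unit-power sources."""
    rng = np.random.default_rng(rng)
    A = np.stack([ula_steering(M, d_over_lambda, th) for th in doas_deg], axis=1)
    L = A.shape[1]
    s = (rng.standard_normal((L, T)) + 1j * rng.standard_normal((L, T))) / np.sqrt(2)
    sigma_n = 10 ** (-snr_db / 20)          # unit-power sources, so SNR sets the noise
    n = sigma_n * (rng.standard_normal((M, T)) + 1j * rng.standard_normal((M, T))) / np.sqrt(2)
    return A @ s + n

def sample_acm(X):
    """Eq. (10): forward-only estimated ACM (EACM) from T snapshots."""
    T = X.shape[1]
    return X @ X.conj().T / T

# Example: M = 10 sensors, two sources at 20 and 23 degrees, T = 100 snapshots.
X = simulate_snapshots(M=10, doas_deg=[20.0, 23.0], T=100, snr_db=10, rng=0)
R_hat = sample_acm(X)
```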
The EVD/SVDs of the theoretical ACM Eq. (9) and the EACM Eq. (10) can be defined in a standard way as

R = V Λ V^H = V_S Λ_S V_S^H + V_N Λ_N V_N^H   (11.1)

R̂ = V̂ Λ̂ V̂^H = V̂_S Λ̂_S V̂_S^H + V̂_N Λ̂_N V̂_N^H,   (11.2)

respectively, where the subscripts S and N stand for the signal and noise subspaces, respectively.

2.2. Literature review

Based on the orthogonality between the signal and noise subspaces, the standard MUSIC [2] suggests searching for the L minima of the following cost function to obtain the source DOAs

f_MUSIC(θ) ≜ ‖a^H(θ) V̂_N‖².   (12)

Despite its high complexity, standard MUSIC has the advantage of an easy implementation with arbitrary array geometries [6]. For the special structure of a ULA, the steering vector Eq. (8) can be further written as [12]

a(θ) = p(z) ≜ [1, z, z², ..., z^{M−1}]^T,   (13)

where z ≜ e^{j(2π/μ)d sin θ}. The search step in f_MUSIC(θ) can be replaced by solving the polynomial

f_root-MUSIC(z) ≜ p^T(z^{−1}) V̂_N V̂_N^H p(z)   (14)

for the L roots ẑ_l, l = 1, 2, ..., L, which are located closest to the unit circle, and the DOAs are estimated by [12,18,22]

θ̂_l = arcsin( (μ/(2πd)) ∠ẑ_l ),  l = 1, 2, ..., L.   (15)

Because R is generally a complex matrix, all the computations in MUSIC and root-MUSIC are complex-valued.
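For reference, a minimal NumPy sketch of the complex-valued root-MUSIC baseline of Eqs. (13)–(15) is given below; it assumes d = μ/2 and reuses the hypothetical sample_acm/simulate_snapshots helpers sketched above, and is an illustration rather than the paper's implementation.

```python
import numpy as np

def root_music(R_hat, L, d_over_lambda=0.5):
    """Complex-valued root-MUSIC: roots of Eq. (14), DOAs from Eq. (15)."""
    M = R_hat.shape[0]
    # Noise subspace from the EVD of the Hermitian EACM, Eq. (11.2).
    eigval, eigvec = np.linalg.eigh(R_hat)          # ascending eigenvalues
    Vn = eigvec[:, :M - L]                          # M - L noise eigenvectors
    C = Vn @ Vn.conj().T
    # Coefficient of z^k in p^T(1/z) Vn Vn^H p(z) is the sum of the k-th diagonal of C.
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)
    # Keep roots inside the unit circle and pick the L closest to it.
    roots = roots[np.abs(roots) < 1.0]
    closest = roots[np.argsort(1.0 - np.abs(roots))[:L]]
    # Eq. (15): map the phase angles back to DOAs.
    doas = np.arcsin(np.angle(closest) / (2 * np.pi * d_over_lambda))
    return np.sort(np.rad2deg(doas))

# Usage with the snapshots generated earlier:
# print(root_music(R_hat, L=2))
```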
To realize real-valued computation, the U-root-MUSIC algorithm [18] exploits the forward/backward averaging EACM [15,16]

R̂_FB = (1/2)(R̂ + J_M R̂^* J_M),   (16)

together with a unitary transformation technique [14–23] to obtain a symmetrical real matrix

R̃ = Re(C^H R̂_FB C),   (17)

where C is a complex unitary matrix, given by

C = (1/√2) [I_k, jI_k; J_k, −jJ_k],  M = 2k   (18.1)

C = (1/√2) [I_k, 0, jI_k; 0, √2, 0; J_k, 0, −jJ_k],  M = 2k + 1.   (18.2)

By performing a real-valued EVD/SVD on R̃ as

R̃ = S̃ Λ̃_S S̃^H + G̃ Λ̃_N G̃^H,   (19)

one obtains the U-root-MUSIC polynomial

f_U-root-MUSIC(z) ≜ p̃^T(z^{−1}) G̃ G̃^T p̃(z),   (20)

where p̃(z) ≜ C^H p(z). Although G̃ is real, the coefficients in f_U-root-MUSIC(z) are complex, and hence the polynomial rooting in U-root-MUSIC still involves complex computations.
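A minimal NumPy sketch of the forward/backward averaging of Eq. (16) and the unitary transformation of Eqs. (17)–(18) follows; the helper names are mine, not the paper's.

```python
import numpy as np

def exchange_matrix(m):
    """J_m: m x m exchange (flip) matrix."""
    return np.eye(m)[::-1]

def fb_average(R_hat):
    """Eq. (16): forward/backward averaged EACM, which is centro-Hermitian."""
    M = R_hat.shape[0]
    J = exchange_matrix(M)
    return 0.5 * (R_hat + J @ R_hat.conj() @ J)

def unitary_matrix(M):
    """Eq. (18.1)/(18.2): sparse unitary matrix C used by U-root-MUSIC."""
    k = M // 2
    Ik, Jk = np.eye(k), exchange_matrix(k)
    if M % 2 == 0:
        C = np.block([[Ik, 1j * Ik], [Jk, -1j * Jk]])
    else:
        z = np.zeros((k, 1))
        C = np.block([[Ik, z, 1j * Ik],
                      [z.T, np.sqrt(2) * np.ones((1, 1)), z.T],
                      [Jk, z, -1j * Jk]])
    return C / np.sqrt(2)

def unitary_real_matrix(R_hat):
    """Eq. (17): real symmetric matrix whose EVD needs only real arithmetic."""
    C = unitary_matrix(R_hat.shape[0])
    return np.real(C.conj().T @ fb_average(R_hat) @ C)
```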
We have proposed an RV-MUSIC algorithm for arbitrary array geometries in [25]. The key idea of the RV-MUSIC algorithm is shown in Fig. 1, in which

span(V̄_N) ≜ span(V_N) ∩ span(V_N^*)

is a real subspace, denoting the noise intersection [25].

Fig. 1. EVD/SVD on ACM, conjugate ACM, R-ACM and I-ACM.

We have shown that the RV-MUSIC spectrum

f_RV-MUSIC(θ) ≜ ‖a^H(θ) V̄_N‖²   (21)

is able to provide similar performance with complexity comparable to that of unitary MUSIC. In addition, both the EVD/SVD of the R-ACM and the SVD of the I-ACM are performed with only real-valued computations as

Re(R) = V X V^T = V_S X_S V_S^T + V_N X_N V_N^T   (22.1)

Im(R) = M Y N^T = M_S Y_S N_S^T + M_N Y_N N_N^T,   (22.2)

where

V_S = [v_1, v_2, ..., v_{2L}],  V_N = [v_{2L+1}, v_{2L+2}, ..., v_M],
X_S = diag{ψ_1, ψ_2, ..., ψ_{2L}},  X_N = diag{ψ_{2L+1}, ψ_{2L+2}, ..., ψ_M},
M_S = [m_1, m_2, ..., m_{2L}],  M_N = [m_{2L+1}, m_{2L+2}, ..., m_M],
N_S = [n_1, n_2, ..., n_{2L}],  N_N = [n_{2L+1}, n_{2L+2}, ..., n_M],
Y_S = diag{χ_1, χ_2, ..., χ_{2L}},  Y_N = diag{χ_{2L+1}, χ_{2L+2}, ..., χ_M},

and ψ and v stand for the eigenvalues and eigenvectors of Re(R), respectively, while χ, m and n stand for the singular values and the left and right singular vectors of Im(R), respectively. It should be noted that there are 2L significant eigenvalues and 2L significant singular values for Re(R) and Im(R), respectively [6,25]. It should also be noted that the right singular vectors of Im(R) span the same subspace as the noise subspace of Re(R), such that [6,25]

span(N_N) = span(V_N).   (23)

Based on the above brief literature review, one can conclude that both conventional complex-valued methods [2–5,12,13] and state-of-the-art real-valued algorithms [14–25] still involve an EVD/SVD computation step on an M × M matrix, which generally requires about O(M^3) flops [26]. When large arrays are used [27–29], M can be a big number, and this cost may be computationally very expensive. Therefore, there is a need for DOA estimators demanding less EVD/SVD computation. In the section that follows, we show efficient methods for EVD/SVD computations on sub-matrices with reduced dimensions.
3. Reduced-dimension EVD/SVD computation

Because R_s is diagonal (since the signals are uncorrelated), one can prove that R is centro-Hermitian, such that [14–22]

J_M R J_M = R^*   (24.1)
J_M R^* J_M = R.   (24.2)

Using Re(R) = (1/2)(R + R^*) and Eq. (9), we can easily verify that Re^T(R) = Re(R). In addition, by using (24), we also obtain

J_M Re(R) J_M = Re(R).   (25)

Similarly, by using Im(R) = (1/(2j))(R − R^*), Eq. (9) and Eq. (24), we can verify that Im^T(R) = −Im(R) and

J_M Im(R) J_M = −Im(R).   (26)

According to Eq. (3) and Eq. (4), Re(R) is a bisymmetric matrix while Im(R) is an anti-bisymmetric matrix. Based on these characteristics, we investigate reduced-dimension computations for the EVD/SVD of Re(R) and the SVD of Im(R) with respect to the parity of M in two cases.

3.1. The number of sensors is even such that M = 2k

First, let us consider the EVD of Re(R). With M = 2k, we can divide Re(R) into four k × k sub-matrices as

Re(R) = [Re(R)_11, Re(R)_12; Re(R)_21, Re(R)_22].   (27)

Since Re^T(R) = Re(R), we have Re(R)_12 = Re^T(R)_21. Inserting Eq. (27) into Eq. (25) and using Re^T(R) = Re(R) and J_k² = I_k, it is easy to obtain

Re(R)_22 = J_k Re(R)_11 J_k   (28-1)
J_k Re(R)_21 = Re^T(R)_21 J_k.   (28-2)

Using (28-1) and Re(R)_12 = Re^T(R)_21, Eq. (27) can be simplified with only two k × k independent sub-matrices Re(R)_11 and Re(R)_21 as

Re(R) = [Re(R)_11, Re^T(R)_21; Re(R)_21, J_k Re(R)_11 J_k].   (29)

By introducing two k × k sub-matrices B_1 and B_2 as

B_1 ≜ Re(R)_11 − J_k Re(R)_21   (30.1)
B_2 ≜ Re(R)_11 + J_k Re(R)_21,   (30.2)

we obtain the following theorem, which indicates that in the case M = 2k, the EVD/SVD of Re(R) can be computed equivalently from those of B_1 and B_2.

Theorem 1. Let the EVD/SVD of the R-ACM be defined in Eq. (22.1), and let the EVD/SVD of B_1 and that of B_2 be defined in a standard way as

B_1 = V_1 X_1 V_1^T = V_{S,1} X_{S,1} V_{S,1}^T + V_{N,1} X_{N,1} V_{N,1}^T   (31.1)
B_2 = V_2 X_2 V_2^T = V_{S,2} X_{S,2} V_{S,2}^T + V_{N,2} X_{N,2} V_{N,2}^T.   (31.2)

With M = 2k, the 2k eigenvalues of Re(R) are composed of the k eigenvalues of B_1 and those of B_2, such that

X = diag{X_1, X_2}.   (32)

In addition, the eigenvectors of Re(R) can be jointed from those of B_1 and B_2 as

V_S = [V_{S,1}, V_{S,2}; −J_k V_{S,1}, J_k V_{S,2}]   (33.1)
V_N = [V_{N,1}, V_{N,2}; −J_k V_{N,1}, J_k V_{N,2}].   (33.2)

Proof. See Appendix A. □

Recalling the definitions in Eq. (5) and Eq. (6) and observing (33), one can immediately conclude the following corollary for the eigenvectors of Re(R).

Corollary 1. In the case M = 2k, the first k eigenvectors of Re(R) are anti-centrosymmetrical while the last k eigenvectors of Re(R) are centrosymmetrical. □
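Theorem 1 can be exercised numerically with the NumPy sketch below: it builds B_1 and B_2 from Eq. (30), takes their k × k real EVDs and stacks the eigenvectors as in Eq. (33). It is a sketch of mine that assumes a bisymmetric input, e.g. Re(R) of the theoretical ACM or of the FB-averaged EACM of Eq. (16).

```python
import numpy as np

def split_evd_even(Rr, L):
    """Reduced-dimension EVD of the bisymmetric R-ACM Re(R) for M = 2k (Theorem 1)."""
    M = Rr.shape[0]
    k = M // 2
    Jk = np.eye(k)[::-1]
    R11, R21 = Rr[:k, :k], Rr[k:, :k]
    B1 = R11 - Jk @ R21                      # Eq. (30.1)
    B2 = R11 + Jk @ R21                      # Eq. (30.2)
    x1, V1 = np.linalg.eigh(B1)              # two k x k real EVDs
    x2, V2 = np.linalg.eigh(B2)
    # Eq. (33): eigenvectors of Re(R) assembled from those of B1 and B2
    # (anti-centrosymmetric from B1, centrosymmetric from B2, cf. Corollary 1).
    U1 = np.vstack([V1, -Jk @ V1]) / np.sqrt(2)
    U2 = np.vstack([V2,  Jk @ V2]) / np.sqrt(2)
    eigvals = np.concatenate([x1, x2])       # Eq. (32)
    eigvecs = np.hstack([U1, U2])
    # Real noise matrix: the M - 2L eigenvectors paired with the smallest eigenvalues.
    order = np.argsort(eigvals)
    V_N = eigvecs[:, order[:M - 2 * L]]
    return eigvals, eigvecs, V_N
```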
Next, let us consider the SVD of Im(R). With M = 2k, we can divide Im(R) into four k × k sub-matrices as

Im(R) = [Im(R)_11, Im(R)_12; Im(R)_21, Im(R)_22].   (34)

In a similar way, we insert Eq. (34) into Eq. (26) and use Im^T(R) = −Im(R) as well as J_k² = I_k to obtain

Im(R)_22 = −J_k Im(R)_11 J_k   (35-1)
J_k Im(R)_21 = Im^T(R)_21 J_k.   (35-2)

Thus, Eq. (34) is simplified with only two k × k independent sub-matrices Im(R)_11 and Im(R)_21 as

Im(R) = [Im(R)_11, −Im^T(R)_21; Im(R)_21, −J_k Im(R)_11 J_k].   (36)

By defining two k × k matrices D_1 and D_2 as

D_1 ≜ Im(R)_11 − J_k Im(R)_21   (37.1)
D_2 ≜ Im(R)_11 + J_k Im(R)_21,   (37.2)

we obtain the following theorem, which shows that in the case M = 2k, the SVD of Im(R) can also be computed equivalently from those of D_1 and D_2.

Theorem 2. Let the SVD of the I-ACM be defined in Eq. (22.2), and let D_1 and D_2 be defined as in (37), with their SVDs given by

D_1 = M_1 Y_1 N_1^T = M_{S,1} Y_{S,1} N_{S,1}^T + M_{N,1} Y_{N,1} N_{N,1}^T   (38.1)
D_2 = M_2 Y_2 N_2^T = M_{S,2} Y_{S,2} N_{S,2}^T + M_{N,2} Y_{N,2} N_{N,2}^T.   (38.2)

With M = 2k, the 2k singular values of Im(R) are composed of the k singular values of D_1 and those of D_2, and we have

Y = diag{Y_1, Y_2}.   (39)

In addition, the singular vectors of Im(R) can be jointed from those of D_1 and D_2 as

M_S = [M_{S,1}, M_{S,2}; J_k M_{S,1}, −J_k M_{S,2}],  N_S = [N_{S,1}, N_{S,2}; −J_k N_{S,1}, J_k N_{S,2}]   (40.1)
M_N = [M_{N,1}, M_{N,2}; J_k M_{N,1}, −J_k M_{N,2}],  N_N = [N_{N,1}, N_{N,2}; −J_k N_{N,1}, J_k N_{N,2}].   (40.2)

Proof. See Appendix B. □

One should note that there are L significant singular values for both D_1 and D_2, and these 2L singular values compose the 2L significant singular values of Im(R). Also note that the right singular vectors of Im(R) span the same subspace as the noise subspace of Re(R), as shown in Eq. (23). Therefore, we observe from (40.2) that the real noise intersection N_N can be computed by reduced-dimension SVDs on D_1 and D_2. According to the definitions in Eq. (5) and Eq. (6) and observing (40), one can immediately conclude the following corollary for the singular vectors of Im(R).

Corollary 2. In the case M = 2k, the first k left singular vectors of Im(R) are centrosymmetrical while the last k left singular vectors of Im(R) are anti-centrosymmetrical. Conversely, the first k right singular vectors of Im(R) are anti-centrosymmetrical while the last k right singular vectors of Im(R) are centrosymmetrical.
3.2. The number of sensors is odd such that M = 2k + 1

First, let us investigate the EVD of Re(R). Using Re^T(R) = Re(R), we can divide Re(R) with M = 2k + 1 as

Re(R) = [Re(R)_11, d_1, Re^T(R)_21; d_1^T, a_1, e_1^T; Re(R)_21, e_1, Re(R)_22],   (41)

where the diagonal blocks are of sizes k × k, 1 × 1 and k × k, respectively, and a_1 is a scalar. By inserting Eq. (41) into Eq. (25) and using the facts Re(R)_22 = J_k Re(R)_11 J_k and J_k Re(R)_21 = Re^T(R)_21 J_k (as already shown in (28)) as well as e_1 = J_k d_1, we can remove the two dependent elements Re(R)_22 and e_1 and simplify Eq. (41) as

Re(R) = [Re(R)_11, d_1, Re^T(R)_21; d_1^T, a_1, d_1^T J_k; Re(R)_21, J_k d_1, J_k Re(R)_11 J_k].   (42)

To simplify the notation, we reuse the definitions of B_1 and B_2 in (30), and further define a (k + 1) × (k + 1) matrix

B_3 = [a_1, √2 d_1^T; √2 d_1, B_2].   (43)

The following theorem indicates that in the case M = 2k + 1, the EVD/SVD of Re(R) can also be computed equivalently from the sub-matrices B_1 and B_3.

Theorem 3. Let Re(R) be divided as in Eq. (42), and let B_1 and B_3 be defined as in (30) and Eq. (43), respectively. Let the EVD/SVD of B_1 be defined as in Eq. (31.1) and let that of B_3 be defined as

B_3 = V_3 X_3 V_3^T = V_{S,3} X_{S,3} V_{S,3}^T + V_{N,3} X_{N,3} V_{N,3}^T.   (44.1)

With M = 2k + 1, the 2k + 1 eigenvalues of Re(R) are composed of the k eigenvalues of B_1 and the k + 1 eigenvalues of B_3, and we have

X = diag{X_1, X_3}.   (45)

In addition, the eigenvectors of Re(R) can be jointed from those of B_1 and B_3 as

V_S = [V_{S,1}, V_{S,3}(2:k+1); 0, √2 V_{S,3}(1); −J_k V_{S,1}, J_k V_{S,3}(2:k+1)]   (46-1)
V_N = [V_{N,1}, V_{N,3}(2:k+1); 0, √2 V_{N,3}(1); −J_k V_{N,1}, J_k V_{N,3}(2:k+1)],   (46-2)

where V_{N,3}(1) and V_{S,3}(1) are two 1 × (k + 1) vectors denoting the first rows of V_{N,3} and V_{S,3}, respectively, and V_{N,3}(2:k+1) and V_{S,3}(2:k+1) are two k × (k + 1) matrices composed of the last k rows of V_{N,3} and V_{S,3}, respectively.

Proof. By defining

P_3 ≜ (1/√2) [I_k, 0, I_k; 0, √2, 0; −J_k, 0, J_k]   (47.1)

Q_2 ≜ diag{V_1, V_3}   (47.2)

T_4 ≜ P_3 Q_2 = (1/√2) [V_1, V_3(2:k+1); 0, √2 V_3(1); −J_k V_1, J_k V_3(2:k+1)],   (47.3)

we can find that T_4^T Re(R) T_4 is a diagonal matrix, and the remaining proof is similar to that of Theorem 1. □

It should be noted that, with M = 2k + 1, B_1 has an EVD/SVD expression of the same form as that with M = 2k in Eq. (31.1). In addition, with M = 2k + 1, B_1 has k eigenvalues in total while B_3 has k + 1. However, the numbers of significant eigenvalues of B_1 and B_3 are the same, both given by L.

Based on the definitions in Eq. (5) and Eq. (6), the following corollary holds in the case M = 2k + 1 for the eigenvectors of Re(R).

Corollary 3. In the case M = 2k + 1, the first k eigenvectors of Re(R) are anti-centrosymmetrical while the last k + 1 eigenvectors are centrosymmetrical. In addition, the center elements of the first k eigenvectors of Re(R) are zeros. □
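For odd M = 2k + 1, the construction of the sub-matrices in Theorem 3 can be sketched in NumPy as below; it only builds B_1 and B_3 from Eqs. (30) and (43) and checks the eigenvalue relation (45). The helper is mine and assumes a bisymmetric input (e.g. Re(R̂_FB)).

```python
import numpy as np

def split_evd_odd(Rr):
    """Reduced-dimension EVD of the bisymmetric Re(R) for M = 2k + 1 (Theorem 3)."""
    M = Rr.shape[0]
    k = M // 2                                   # M = 2k + 1
    Jk = np.eye(k)[::-1]
    R11 = Rr[:k, :k]
    d1 = Rr[:k, k]                               # k x 1 centre column block of Eq. (41)
    a1 = Rr[k, k]                                # scalar centre element
    R21 = Rr[k + 1:, :k]
    B1 = R11 - Jk @ R21                          # Eq. (30.1)
    B2 = R11 + Jk @ R21                          # Eq. (30.2)
    # Eq. (43): (k+1) x (k+1) matrix B3 built from a1, d1 and B2.
    B3 = np.block([[np.array([[a1]]), np.sqrt(2) * d1[None, :]],
                   [np.sqrt(2) * d1[:, None], B2]])
    x1 = np.linalg.eigvalsh(B1)
    x3 = np.linalg.eigvalsh(B3)
    return np.sort(np.concatenate([x1, x3]))     # Eq. (45): eigenvalues of Re(R)

# Sanity check against the full-size EVD (expects a bisymmetric Re(R), e.g. from
# the FB-averaged EACM with an odd number of sensors):
# Rr = np.real(fb_average(R_hat))
# print(np.max(np.abs(split_evd_odd(Rr) - np.sort(np.linalg.eigvalsh(Rr)))))
```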
Finally, let us investigate the SVD of Im(R) with M = 2k + 1, for which the following corollary about the singular values of Im(R) is needed.

Corollary 4. In the case M = 2k + 1, Im(R) must contain at least one zero singular value.

Proof. Using the fact Im^T(R) = −Im(R), we have

|Im(R)| = |Im^T(R)| = |−Im(R)| = (−1)^{2k+1} |Im(R)| = −|Im(R)|,   (48)

which indicates |Im(R)| = 0. Defining F ≜ Im^T(R) Im(R), we have

|F| = |Im^T(R)| · |Im(R)| = |Im(R)|² = 0.   (49)

Hence, F contains at least one zero eigenvalue. As the singular values of Im(R) are the square roots of the eigenvalues of F [26], Im(R) accordingly contains at least one zero singular value. □

Using Im^T(R) = −Im(R), we can divide Im(R) as

Im(R) = [Im(R)_11, −d_2, −Im^T(R)_21; d_2^T, a_2, −e_2^T; Im(R)_21, e_2, Im(R)_22],   (50)

where the diagonal blocks are of sizes k × k, 1 × 1 and k × k, respectively, and a_2 is a scalar. Similarly, by inserting Eq. (50) into Eq. (26) and using the facts Im^T(R) = −Im(R), Im(R)_22 = −J_k Im(R)_11 J_k and J_k Im(R)_21 = Im^T(R)_21 J_k (as already shown in (35)) as well as a_2 = 0 and e_2 = J_k d_2, we can remove the three dependent elements a_2, e_2 and Im(R)_22 and simplify Eq. (50) as

Im(R) = [Im(R)_11, −d_2, −Im^T(R)_21; d_2^T, 0, −d_2^T J_k; Im(R)_21, J_k d_2, −J_k Im(R)_11 J_k].   (51)

With D_1 and D_2 defined in (37), we further define a (k + 1) × k matrix D_3 and a k × (k + 1) matrix D_4 as follows

D_3 = [D_2; √2 d_2^T]   (52.1)

D_4 = [−√2 d_2, D_1].   (52.2)

We obtain the following theorem, indicating that in the case M = 2k + 1, the SVD of Im(R) can also be computed equivalently on the sub-matrices D_3 and D_4.

Theorem 4. Let Im(R) be divided as in Eq. (51) and let D_3 and D_4 be defined as in (52), with their SVDs given by

D_3 = M_3 Y_3 N_3^T = M_{S,3} Y_{S,3} N_{S,3}^T + M_{N,3} Y_{N,3} N_{N,3}^T   (53.1)
D_4 = M_4 Y_4 N_4^T = M_{S,4} Y_{S,4} N_{S,4}^T + M_{N,4} Y_{N,4} N_{N,4}^T.   (53.2)

With M = 2k + 1, the 2k + 1 singular values of Im(R) are composed of the k singular values of D_3, zero, and the k singular values of D_4, such that

Y = diag{Y_3, 0, Y_4}.   (54)

In addition, the singular vectors of Im(R) can be jointed from those of D_3 and D_4 as

M_S = [M_{S,3}(2:k+1), M_{S,4}; √2 M_{S,3}(1), 0; J_k M_{S,3}(2:k+1), −J_k M_{S,4}],
N_S = [N_{S,3}, N_{S,4}(2:k+1); 0, √2 N_{S,4}(1); −J_k N_{S,3}, J_k N_{S,4}(2:k+1)]   (55-1)

M_N = [M_{N,3}(2:k+1), M_{N,4}; √2 M_{N,3}(1), 0; J_k M_{N,3}(2:k+1), −J_k M_{N,4}],
N_N = [N_{N,3}, N_{N,4}(2:k+1); 0, √2 N_{N,4}(1); −J_k N_{N,3}, J_k N_{N,4}(2:k+1)].   (55-2)

Proof. Similarly, by defining

P_4 ≜ (1/√2) [I_k, 0, I_k; 0, √2, 0; J_k, 0, −J_k]   (56)

L_2 ≜ diag{M_3, M_4}   (57)

K_2 ≜ diag{N_3, N_4}   (58)

T_5 ≜ P_4 L_2 = (1/√2) [M_3(2:k+1), M_4; √2 M_3(1), 0; J_k M_3(2:k+1), −J_k M_4]   (59)

T_6 ≜ P_3 K_2 = (1/√2) [N_3, N_4(2:k+1); 0, √2 N_4(1); −J_k N_3, J_k N_4(2:k+1)],   (60)

we can verify that T_5^T Im(R) T_6 is diagonal, and the remaining proof is similar to that of Theorem 2. □

Based on Theorem 4, the real noise intersection N_N can be equivalently computed by reduced-dimension SVDs on D_3 and D_4, which is shown in (55-2). Noting the dimensions of the matrices M_3, N_3, M_4 and N_4, we conclude from (55) the following corollary for the singular vectors of Im(R).

Corollary 5. In the case M = 2k + 1, the first k + 1 left singular vectors of Im(R) are centrosymmetrical while the last k left singular vectors of Im(R) are anti-centrosymmetrical. Conversely, the first k right singular vectors of Im(R) are anti-centrosymmetrical while the last k + 1 right singular vectors are centrosymmetrical. □

Until now, we have shown that the EVD/SVD of the theoretical Re(R) and the SVD of the theoretical Im(R) can be equivalently computed on sub-matrices with reduced dimensions. In practice, the theoretical R is unavailable, and we can obtain only R̂ from T snapshots of observed data according to Eq. (10). Because T < ∞, the centro-Hermitian character does not hold for R̂, and we generally have

J_M R̂ J_M ≠ R̂^*   (61.1)
J_M R̂^* J_M ≠ R̂.   (61.2)

Consequently, Re(R̂) is no longer bisymmetric and Im(R̂) is no longer anti-bisymmetric, such that

J_M Re(R̂) J_M ≠ Re^T(R̂)   (62.1)
J_M Im(R̂) J_M ≠ −Im^T(R̂).   (62.2)

Fortunately, by using J_M² = I_M, we obtain from Eq. (16) that

J_M R̂_FB J_M = (1/2)(J_M R̂ J_M + R̂^*) = R̂_FB^*   (63.1)
J_M R̂_FB^* J_M = (1/2)(J_M R̂^* J_M + R̂) = R̂_FB.   (63.2)

Therefore, R̂_FB is a centro-Hermitian matrix. In fact, it has been shown in [15] that R̂_FB is centro-Hermitian for an arbitrary matrix R_s, and R̂_FB has been used in various algorithms [14–22]. Using (63), one can easily verify

Re^T(R̂_FB) = Re(R̂_FB)   (64.1)
J_M Re(R̂_FB) J_M = Re(R̂_FB),   (64.2)

and

Im^T(R̂_FB) = −Im(R̂_FB)   (65.1)
J_M Im(R̂_FB) J_M = −Im(R̂_FB).   (65.2)

Therefore, Re(R̂_FB) is bisymmetric while Im(R̂_FB) is anti-bisymmetric. This enables us, in practice, to use R̂_FB instead of R̂ to compute the noise matrix V̄_N with the reduced-dimension EVD/SVD computations discussed above.
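The structural properties (64)–(65), which make the reduced-dimension computations applicable to finite-sample data, can be checked with the short NumPy sketch below (the function name is mine, not the paper's).

```python
import numpy as np

def check_fb_structure(R_hat, tol=1e-10):
    """Verify Eqs. (64)-(65): Re(R_FB) is bisymmetric, Im(R_FB) is anti-bisymmetric."""
    M = R_hat.shape[0]
    J = np.eye(M)[::-1]
    R_fb = 0.5 * (R_hat + J @ R_hat.conj() @ J)       # Eq. (16)
    Rr, Ri = np.real(R_fb), np.imag(R_fb)
    return {
        "Re symmetric (64.1)":         np.allclose(Rr, Rr.T, atol=tol),
        "Re persymmetric (64.2)":      np.allclose(J @ Rr @ J, Rr, atol=tol),
        "Im anti-symmetric (65.1)":    np.allclose(Ri, -Ri.T, atol=tol),
        "Im anti-persymmetric (65.2)": np.allclose(J @ Ri @ J, -Ri, atol=tol),
    }

# All four conditions hold for any EACM after forward/backward averaging:
# print(check_fb_structure(R_hat))
```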

L2  diag{M3 , M4 } (57) 4.1. Algorithm description

With the real noise intersection matrix V computed with


N
K2  diag{N3 , N4 } (58) reduced-dimension EVD/SVD computations, we can use the idea
of polynomial rooting to obtain the following RV-root-MUSIC cost
  function
M √3 (2 : k + 1 ) M4  
1 fRV−root−MUSIC (z )  pT z−1 V T p(z ).
N V (66)
T5  P4 L2 = √ 2 · M3 ( 1 ) 0 (59) N
2 J M (2 : k + 1 ) −Jk M4
k 3 Similar to root-MUSIC and U-root-MUSIC, we can find signal DOAs
by inserting the roots of fRV−root−MUSIC (z ) that lie closest to the
 
N3 N√4 (2 : k + 1 ) unit circle into Eq. (15). It is worth noting that the roots of
1
T6  P3 K2 = √ 0 2 · N4 (1 ) , (60) fRV−root−MUSIC (z ) have an important property different from those
2 −J N Jk N4 (2 : k + 1 ) of root-MUSIC and U-root-MUSIC, which is given by the following
k 3
theorem.
we can verify that TT5 Im(R )T6 is diagonalized, and the remaining
proof is similar to Theorem 2.  Theorem 5. The roots of fRV−root−MUSIC (z ) appear in both
conjugate- and conjugate reciprocal-pairs, that is, if z0 is a root
Based on theorem 4, the real noise intersection NN can be
of fRV−root−MUSIC (z ), then z0∗ , z"0  1/z0∗ and z"0∗ = 1/z0 are roots of
computed by reduced-dimension SVDs on D3 and D4 equivalently,
fRV−root−MUSIC (z ) as well.
which is shown in (55.2). Noting the dimensions of matrices M3 ,
N3 , M4 and N4 , we conclude from (55) the following corollary for Proof. See Appendix C. 
the singular vectors of Im(R).
Theorem 5 indicates that the roots of fRV−root−MUSIC (z ) has
Corollary 5. In the case M = 2k + 1, the first k + 1 left singular vec- a different distribution form from those of froot-MUSIC (z) and
tors of Im(R) are centrosymmetrical while the last k left singular vec- fU−root−MUSIC (z ), which is shown in Figs. 2 and 3 for more clear
tors of Im(R) are anti-centrosymmetrical. Oppositely, the first k right illustrations. In the figures, a ULA composed of M = 6 sensors
singular vectors of Im(R) are anti-centrosymmetrical while the last are exploited to estimate L = 2 sources at θ1 = 10◦ and θ2 = 30◦ ,
k + 1 right singular vectors are centrosymmetrical.  where the signal-to-noise ratio (SNR) is set as SNR = 0 dB and the
Fig. 2. Roots distribution for the root-MUSIC and U-root-MUSIC techniques.

Fig. 3. Roots distribution for the proposed RV-root-MUSIC algorithm.

As seen from Fig. 2, the roots of f_root-MUSIC(z) and f_U-root-MUSIC(z) appear only in conjugate-reciprocal pairs [12,18]. However, it can be concluded from Fig. 3 that the roots of f_RV-root-MUSIC(z) appear in both conjugate and conjugate-reciprocal pairs.

Suppose that θ_l ∈ θ is one of the signal DOAs. It can be concluded from Theorem 5 that z_l = e^{j(2π/μ)d sin θ_l} is one root of f_RV-root-MUSIC(z) that lies closest to the unit circle. In addition, there must be another three corresponding roots z_l^*, 1/z_l and 1/z_l^* in f_RV-root-MUSIC(z) that also lie closest to the unit circle. Due to the double orthogonality between the noise intersection V̄_N and both the original steering vector a(θ_l) and the conjugate steering vector a^*(θ_l) = a(−θ_l) [25], among the four roots, z_l and 1/z_l^* are associated with θ_l while z_l^* and 1/z_l are associated with −θ_l. Combining these facts, we can see that for the L DOAs θ there are L roots z_l, l ∈ [1, L], in f_RV-root-MUSIC(z) that lie closest to the unit circle and are associated with θ. In addition, there are another L roots z_l^*, l ∈ [1, L], in f_RV-root-MUSIC(z) that also lie closest to the unit circle and are associated with −θ. Therefore, we need to select between θ and −θ for correct DOA estimation without ambiguity.

Due to its low computational complexity, the conventional beamformer (CBF) [33] is exploited here to solve the ambiguity problem for the proposed technique. Since the steering vector a(θ) belongs to the signal subspace only at the true DOAs θ, the CBF spectral amplitudes corresponding to −θ must be much smaller than those associated with θ. On the other hand, as the number of true DOAs, i.e., L, is known in advance, the L true DOAs can be easily selected between θ̂ and −θ̂ by

θ̂ = [θ̂_1, θ̂_2, ..., θ̂_L] = arg max_{θ̃ ∈ {θ̂, −θ̂}} f_CBF(θ̃),   (67)

where the CBF spectrum is given by [33]

f_CBF(θ) ≜ a^H(θ) R̂_FB a(θ).

In summary, the proposed RV-root-MUSIC algorithm structures for DOA estimation with reduced-dimension EVD/SVD computations in the cases M = 2k and M = 2k + 1 are shown in Figs. 4 and 5, respectively.

Fig. 4. Algorithm structure of RV-root-MUSIC with M = 2k, in which solid lines and boxes describe the proposed algorithm, dashed lines and boxes are provided for comparison, and the corresponding equation numbers are given on top of the lines to indicate the detailed implementation steps.

Fig. 5. Algorithm structure of RV-root-MUSIC with M = 2k + 1, in which solid lines and boxes describe the proposed algorithm, dashed lines and boxes are provided for comparison, and the corresponding equation numbers are given on top of the lines to indicate the detailed implementation steps.

Remark 1. Since the proposed method needs to select between θ̂ and −θ̂ for the true DOAs, the disambiguation may be difficult if two candidate angles lie close to each other. This is because CBF does not provide super-resolution capability, and hence the accuracy of the proposed method may deteriorate in such cases. To solve this problem, we suggest using standard MUSIC around the angles estimated by the proposed method to enhance the accuracy.

Remark 2. The ULA structure is used to apply the forward/backward averaging technique that makes the R-ACM bisymmetric, and this limits the proposed method in some sense. However, many techniques, such as array interpolation (AI) [34] and beamspace transformation [35], have been reported to extend array geometries, and the new algorithm can be extended to arbitrary array configurations by exploiting these methods. □
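Putting the pieces together, the following NumPy sketch mirrors the algorithm structure of Figs. 4 and 5 at a high level: it obtains a real noise matrix from Re(R̂_FB) (here, for brevity, by a full-size real EVD rather than the reduced-dimension split of Section 3), forms the real coefficients of Eq. (66), roots the polynomial, maps the roots to angles by Eq. (15), and resolves the θ/−θ ambiguity with the CBF test of Eq. (67). The helper names, the use of numpy.roots instead of Bairstow's method, and the pairing logic are simplifications of mine, not the authors' implementation.

```python
import numpy as np

def rv_root_music(R_hat, L, d_over_lambda=0.5):
    M = R_hat.shape[0]
    J = np.eye(M)[::-1]
    R_fb = 0.5 * (R_hat + J @ R_hat.conj() @ J)              # Eq. (16)
    # Real noise matrix from the real part of the FB-averaged EACM, cf. Eq. (22.1).
    psi, V = np.linalg.eigh(np.real(R_fb))                   # ascending eigenvalues
    Vn = V[:, :M - 2 * L]
    C = Vn @ Vn.T                                            # real, so coefficients are real
    coeffs = np.array([np.trace(C, offset=k) for k in range(M - 1, -M, -1)])
    roots = np.roots(coeffs)                                 # Eq. (66)
    roots = roots[np.abs(roots) < 1.0]
    cand = roots[np.argsort(1.0 - np.abs(roots))[:2 * L]]    # 2L roots: theta and -theta
    angles = np.rad2deg(np.arcsin(np.angle(cand) / (2 * np.pi * d_over_lambda)))
    # Eq. (67): for each +/- candidate pair keep the angle with the larger CBF response.
    doas = []
    for th in angles[angles >= 0]:
        pair = np.array([th, -th])
        power = [np.real(a.conj() @ R_fb @ a)
                 for a in (ula_steering(M, d_over_lambda, t) for t in pair)]
        doas.append(pair[int(np.argmax(power))])
    return np.sort(np.array(doas))[:L]

# Example: print(rv_root_music(R_hat, L=2))
```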

4.2. Complexity reduction

We stress that the proposed method has a significant low-complexity advantage over most state-of-the-art techniques, including root-MUSIC and U-root-MUSIC. The reduced complexity mainly results from the following three aspects:

1). The new algorithm exploits a real-valued EVD/SVD to calculate the real noise matrix V̄_N, which obtains a preliminary complexity reduction by a first factor of about four as compared to using complex-valued EVD/SVD computations to calculate the complex noise matrix V̂_N. Moreover, the real-valued EVD/SVD computations are optimized to be performed on the sub-matrices B_1, B_2, B_3, D_1, D_2, D_3 and D_4, all with reduced dimensions of about half size (as shown in (30), (37), (43) and (52)). This provides a further complexity reduction by a second factor of about four as compared to state-of-the-art real-valued methods. Combining these facts, the complexity of the subspace decomposition step in RV-root-MUSIC is reduced by a factor of about sixteen as compared to root-MUSIC, while it is reduced by a factor of about four as compared to U-root-MUSIC.

2). Since all the coefficients in f_RV-root-MUSIC(z) are real, the roots of f_RV-root-MUSIC(z) must appear in conjugate pairs of the form a + jb, a − jb. Consequently, we can exploit Bairstow's method [17,30] to find the roots of f_RV-root-MUSIC(z). This means that the polynomial rooting step in RV-root-MUSIC can also be implemented with real-valued computations (a real-arithmetic sketch of such a quadratic-factor iteration is given after this list). Note that both root-MUSIC and U-root-MUSIC involve complex-valued computations in the polynomial rooting step, since the coefficients in both f_root-MUSIC(z) and f_U-root-MUSIC(z) are complex and their roots do not appear in conjugate pairs of the form a + jb, a − jb. Therefore, the complexity of the polynomial rooting step in RV-root-MUSIC is reduced by a factor of about four as compared to both root-MUSIC and U-root-MUSIC.
3). According to Eq. (C.2), the coefficients of z^k in the polynomial f_RV-root-MUSIC(z), namely C_k, k ∈ [0, M − 1], can be computed from the product V̄_N V̄_N^T. Because the columns of V̄_N are either centrosymmetrical or anti-centrosymmetrical (as shown by Corollaries 1–3 and Corollary 5), we need to compute only half of the product V̄_N V̄_N^T. On the other hand, to compute the coefficients of z^k, root-MUSIC needs to compute V̂_N V̂_N^H (as shown in Eq. (14)) while U-root-MUSIC needs to compute C^* G̃ G̃^T C^H (as shown in Eq. (20)), both of which involve complex-valued computations. Therefore, the complexity of the coefficient computation step in RV-root-MUSIC is reduced by a factor of about eight as compared to both root-MUSIC and U-root-MUSIC.
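The sketch below is a generic, textbook-style Bairstow iteration in NumPy, illustrating the kind of real-arithmetic quadratic-factor extraction referred to in item 2) above; it is not the authors' implementation, the initial guess is a crude heuristic of mine, and convergence safeguards are minimal.

```python
import numpy as np

def bairstow(coeffs, tol=1e-10, max_iter=200):
    """Real-coefficient polynomial rooting by repeated quadratic-factor extraction.

    coeffs: ascending powers, coeffs[i] is the coefficient of x**i.
    Returns the roots as a complex array (conjugate pairs come out together).
    """
    a = np.array(coeffs, dtype=float)
    roots = []
    while len(a) > 3:                            # degree >= 3: extract x^2 - r x - s
        n = len(a) - 1
        r, s = -a[n - 1] / a[n], -a[n - 2] / a[n]
        for _ in range(max_iter):
            b = np.zeros(n + 1); c = np.zeros(n + 1)
            b[n], b[n - 1] = a[n], a[n - 1] + r * a[n]
            for i in range(n - 2, -1, -1):       # synthetic division by x^2 - r x - s
                b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
            c[n], c[n - 1] = b[n], b[n - 1] + r * c[n]
            for i in range(n - 2, 0, -1):        # second division gives the Jacobian
                c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
            det = c[2] * c[2] - c[3] * c[1]
            if det == 0:
                r, s = r + 1.0, s + 1.0          # nudge away from a singular Jacobian
                continue
            dr = (-b[1] * c[2] + b[0] * c[3]) / det
            ds = (-b[0] * c[2] + b[1] * c[1]) / det
            r, s = r + dr, s + ds
            if abs(dr) + abs(ds) < tol:
                break
        disc = r * r + 4.0 * s
        roots += list((r + np.array([1, -1]) * np.sqrt(complex(disc))) / 2.0)
        a = b[2:]                                # deflate by the extracted quadratic
    if len(a) == 3:                              # remaining quadratic factor
        disc = a[1] * a[1] - 4.0 * a[2] * a[0]
        roots += list((-a[1] + np.array([1, -1]) * np.sqrt(complex(disc))) / (2.0 * a[2]))
    elif len(a) == 2:                            # remaining linear factor
        roots.append(-a[0] / a[1])
    return np.array(roots)
```

As a usage note, the coefficient vector built for numpy.roots in the earlier sketches is ordered highest power first, so it would be passed here reversed, e.g. bairstow(coeffs[::-1]).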

Based on the above analysis, we compare the complexity of the different algorithms in Table 2, where the complexity is given in terms of real-valued flops for each implementation step. It should be noted that polynomial rooting can be performed by an EVD/SVD of a companion matrix using the Arnoldi iteration method [26,30], which has the same complexity as an EVD/SVD. Also note that both the FB averaging Eq. (16) and the unitary transformation Eq. (17) require no additional flops, since they involve only element exchange and addition operations. Therefore, the proposed method costs only additional flops for the ambiguity-solving problem in Eq. (67), which is given by 4 × O[(M² + M)L]. It is seen from the table that the proposed technique has a substantially lower complexity than the other two methods.

Table 2
Complexity comparison among the different algorithms in terms of real-valued flops for each implementation step.

Algorithm | EACM computation | Additional transformation | EVD/SVD | Polynomial coefficient computation | Polynomial rooting
root-MUSIC | 4 × O(M²T) | – | 4 × O(M³) | 4 × O[M²(M − L)] | 4 × O(M³)
U-root-MUSIC | 4 × O(M²T) | – | O(M³) | 4 × O[M²(M − L)] | 4 × O(M³)
RV-root-MUSIC | 4 × O(M²T) | 4 × O[(M² + M)L] | 1/4 × O(M³) | 1/2 × O[M²(M − 2L)] | O(M³)
5. Numerical simulations

Numerical simulations with 500 Monte Carlo trials are conducted on a ULA to assess the performance of the proposed estimator and to verify the theoretical analysis. Throughout the simulations, the root mean square error (RMSE) is defined as

RMSE ≜ 10 log_10 √( (1/500) Σ_{i=1}^{500} (θ̂_i − θ)² ),   (68)

in which θ̂_i stands for the ith estimate of θ. For L sources, the SNR is defined as

SNR ≜ 10 log_10 (P_avg / σ_n²)  [dB],   (69)

where P_avg = (1/L) Σ_{l=1}^{L} P_l denotes the average power of all sources, and P_l = E|s_l(t)|² is the power of the lth source, l = 1, 2, ..., L.

In the first simulation, we verify the correctness of the proposed reduced-dimension EVD/SVD computation methods described in Section 3.
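The RMSE and SNR definitions of Eqs. (68)–(69) translate into the short Monte-Carlo driver sketched below; it reuses the hypothetical simulate_snapshots/sample_acm/rv_root_music helpers from the earlier sketches and, for brevity, averages the error over all L sources rather than reporting a per-angle RMSE.

```python
import numpy as np

def rmse_db(estimates, true_doas):
    """Eq. (68): 10*log10 of the root mean square estimation error (in degrees)."""
    err = np.asarray(estimates) - np.asarray(true_doas)   # shape: trials x L
    return 10 * np.log10(np.sqrt(np.mean(err ** 2)))

def monte_carlo_rmse(M, true_doas, T, snr_db, trials=500):
    est = []
    for i in range(trials):
        X = simulate_snapshots(M, true_doas, T, snr_db, rng=i)
        est.append(rv_root_music(sample_acm(X), L=len(true_doas)))
    return rmse_db(np.array(est), true_doas)

# Example: RMSE at SNR = 10 dB for the first scenario of this section.
# print(monte_carlo_rmse(M=10, true_doas=[20.0, 23.0], T=100, snr_db=10))
```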
To this end, we define

d_1 ≜ ‖[I_M − V̄_N(1) V̄_N^T(1)] · V̄_N(2)‖²   (70-1)
d_2 ≜ ‖[I_M − N̄_N(1) N̄_N^T(1)] · N̄_N(2)‖²,   (70-2)

where V̄_N(1) and V̄_N(2) denote the noise matrices computed by the high-dimension EVD/SVD Eq. (22.1) and by the low-dimension EVD/SVD given in (33.2) (with M = 2k) or (46-2) (with M = 2k + 1), respectively, and N̄_N(1) and N̄_N(2) denote the noise matrices computed by the high-dimension SVD Eq. (22.2) and by the low-dimension SVD (40.2) (with M = 2k) or (55-2) (with M = 2k + 1), respectively.

Fig. 6. Noise matrix similarity versus the SNR, M = 8 sensors, L = 2 signals at θ_1 = 30° and θ_2 = 40°, T = 100 snapshots.

Fig. 7. Noise matrix similarity versus the number of snapshots, M = 9 sensors, L = 2 signals at θ_1 = 30° and θ_2 = 40°, SNR = 0 dB.

We evaluate d_1 and d_2 for an even M in Fig. 6 and for an odd M in Fig. 7, respectively, where the simulation parameters are given in the figure captions. It is clearly seen from the two figures that

d_1 ≈ d_2 ≈ 0.   (71)

Because d_1 and d_2 reflect the similarity between the noise matrices computed by the high- and reduced-dimension EVD/SVD methods, we conclude from the two figures that the proposed reduced-dimension EVD/SVD computation methods described in Section 3 are correct.
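The similarity measures d_1 and d_2 of Eq. (70) amount to projecting one noise basis onto the orthogonal complement of the other. A NumPy sketch, assuming both noise matrices have orthonormal columns, is given below; the function name is mine.

```python
import numpy as np

def subspace_similarity(Vn_full, Vn_reduced):
    """Eq. (70): squared norm of the part of one noise basis outside the other's span."""
    M = Vn_full.shape[0]
    P_perp = np.eye(M) - Vn_full @ Vn_full.T     # projector onto the orthogonal complement
    return np.linalg.norm(P_perp @ Vn_reduced) ** 2

# d1 compares the noise matrix from the full-size real EVD (Eq. (22.1)) with the one
# assembled from B1/B2 or B1/B3; d2 does the same for the SVD-based noise matrices.
# Values close to zero, as in Eq. (71), confirm the reduced-dimension computations.
```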
In the second simulation, we compare the proposed RV-root-MUSIC technique with some popular state-of-the-art methods, including MUSIC, ESPRIT, root-MUSIC and U-root-MUSIC, in terms of RMSE performance. For a fair comparison reference, the unconditional CRLB [31] is also included.

In the first example, we assume L = 2 uncorrelated sources at θ_1 = 20° and θ_2 = 23°, and set the number of sensors to an even number M = 10. First, we plot in Fig. 8 the RMSEs of the different algorithms as functions of the SNR, where the number of snapshots is fixed as T = 100 while the SNR varies over a wide range from SNR = −10 dB to SNR = 40 dB. Next, we plot in Fig. 9 the RMSEs of the different algorithms as functions of the number of snapshots, where the SNR is fixed as SNR = 10 dB and the number of snapshots varies over a wide range from T = 2³ to T = 2¹².

Fig. 8. RMSE versus the SNR with respect to θ_1 = 20°, M = 10 sensors, T = 100 snapshots, L = 2 sources at θ_1 = 20° and θ_2 = 23°.

Fig. 9. RMSE versus the number of snapshots with respect to θ_2 = 23°, M = 10 sensors, SNR = 10 dB, L = 2 sources at θ_1 = 20° and θ_2 = 23°.

It can be concluded from Figs. 8 and 9 that the proposed method provides performance similar to MUSIC, and that it is much better than ESPRIT. It is also seen from the two figures that root-MUSIC and U-root-MUSIC slightly outperform RV-root-MUSIC at very low SNRs and very small T (SNR < 0 dB as shown in Fig. 8 and T < 100 as shown in Fig. 9). As the SNR and T increase, the RMSEs of RV-root-MUSIC decrease dramatically and the proposed method performs similarly to the other two methods. At mild SNRs and T (SNR ≥ −5 dB as shown in Fig. 8 and T ≥ 100 as shown in Fig. 9), all three polynomial rooting-based estimators and MUSIC provide good RMSEs tending to the CRLB.

In the second example, we assume L = 3 uncorrelated sources located at θ_1 = 20°, θ_2 = 23° and θ_3 = 30°, and set the number of sensors to an odd number M = 11 to further compare the performances of the above five algorithms. Similarly to the first example, we plot the RMSEs as functions of the SNR and of the number of snapshots in Figs. 10 and 11, respectively. Detailed simulation parameters are given in the captions of Figs. 10 and 11.
Fig. 10. RMSE versus the SNR with respect to θ_1 = 20°, M = 11 sensors, T = 100 snapshots, L = 3 sources at θ_1 = 20°, θ_2 = 23° and θ_3 = 30°.

Fig. 11. RMSE versus the number of snapshots with respect to θ_2 = 23°, M = 11 sensors, SNR = 10 dB, L = 3 sources at θ_1 = 20°, θ_2 = 23° and θ_3 = 30°.

It is clearly seen again from Figs. 10 and 11 that, with much better accuracy than ESPRIT, the proposed algorithm shows a performance very close to the other three algorithms, including MUSIC, root-MUSIC and U-root-MUSIC, over the range of mild SNRs and T (SNR ≥ −5 dB as shown in Fig. 10 and T ≥ 100 as shown in Fig. 11). Considering that RV-root-MUSIC requires only a substantially lower computational complexity, the new technique makes a sufficiently efficient trade-off between complexity and accuracy.

In the last simulation, we investigate the computational efficiency of the proposed RV-root-MUSIC and compare it with the other two polynomial rooting-based methods, root-MUSIC and U-root-MUSIC, in Fig. 12. The efficiency is equivalently examined in terms of the CPU times of the three estimators by running the Matlab codes on a PC with an Intel(R) Core(TM) Duo T5870 2.0 GHz CPU and 1 GB RAM in the same environment. It can be seen from Fig. 12 that RV-root-MUSIC has an obvious efficiency advantage over the other two techniques, and it costs a much lower CPU time than both root-MUSIC and U-root-MUSIC.

Fig. 12. Simulation time versus the number of sensors, SNR = 0 dB, T = 100 snapshots, L = 2 sources at θ_1 = 20° and θ_2 = 30°.

6. Conclusions

We have proposed a real-valued version of root-MUSIC for fast DOA estimation, namely the RV-root-MUSIC algorithm. We have investigated reduced-dimension EVD/SVD computations in detail with respect to the parity of the number of sensors in two cases. We have provided an in-depth theoretical analysis to prove that the eigenvalues of the R-ACM and the singular values of the I-ACM are composed of the eigenvalues and the singular values of the corresponding sub-matrices divided from the R-ACM and the I-ACM, respectively. Accordingly, the eigenvectors of the R-ACM and the singular vectors of the I-ACM can be equivalently jointed from sub-vectors decomposed from the corresponding sub-matrices. In addition, we have shown that the real-valued coefficients of the RV-root-MUSIC polynomial can be efficiently computed using the centrosymmetrical or anti-centrosymmetrical vectors of the sub-matrices. We have further proven that the roots of RV-root-MUSIC appear in conjugate pairs, which allows fast polynomial rooting using Bairstow's method with real-valued computations. As both the EVD/SVD and the polynomial rooting tasks are realized with real-valued computations, we conclude that RV-root-MUSIC is much more efficient than root-MUSIC and U-root-MUSIC. Finally, we have shown by numerical simulations that the proposed approach has a performance similar to root-MUSIC, and hence it makes a substantially efficient trade-off between computational complexity and estimation accuracy.

Acknowledgment

This work is supported by the National Natural Science Foundation of China (61501142), the Science and Technology Program of WeiHai, and the Discipline Construction Guiding Foundation of Harbin Institute of Technology (Weihai) (WH20160107).

Appendix A. Proof of Theorem 1

To prove the theorem, we diagonalize Re(R) with appropriate unitary matrices composed of the sub-matrices B_1 and B_2. To this end, we introduce the following 2k × 2k real matrix

P_1 ≜ (1/√2) [I_k, I_k; −J_k, J_k].   (A.1)

It can easily be verified that P_1^{−1} = P_1^T, and therefore P_1 is a unitary matrix [26]. Using (28-2) and Eq. (29), we have

P_1^T Re(R) P_1 = diag{B_1, B_2}.   (A.2)

Noticing that Re^T(R)_11 = Re(R)_11, we obtain by using J_k^T = J_k and (28-2) that B_1^T = B_1 and B_2^T = B_2. Therefore, both B_1 and B_2 are symmetrical matrices, and their EVD/SVDs require only real-valued computations [26].

Assume that {ψ_{1,i}}_{i=1}^{k} are the k eigenvalues of B_1 with corresponding eigenvectors {b_{1,i}}_{i=1}^{k}, and that {ψ_{2,i}}_{i=1}^{k} are the k eigenvalues of B_2 with corresponding eigenvectors {b_{2,i}}_{i=1}^{k}; then we have

V_1^T B_1 V_1 = diag{ψ_{1,1}, ψ_{1,2}, ..., ψ_{1,k}}   (A.3.1)
V_2^T B_2 V_2 = diag{ψ_{2,1}, ψ_{2,2}, ..., ψ_{2,k}}.   (A.3.2)
Using V_1 and V_2, we define a 2k × 2k real matrix

Q_1 ≜ [V_1, 0; 0, V_2] = diag{V_1, V_2},   (A.4)

with which we further define another 2k × 2k real matrix

T_1 ≜ P_1 Q_1 = (1/√2) [V_1, V_2; −J_k V_1, J_k V_2].   (A.5)

It is straightforward to obtain

T_1^{−1} = Q_1^{−1} P_1^{−1} = Q_1^T P_1^T = T_1^T.   (A.6)

Thus, T_1 is a unitary matrix [26]. Using Eq. (A.2)–Eq. (A.5), we have

T_1^T Re(R) T_1 = Q_1^T P_1^T Re(R) P_1 Q_1 = Q_1^T diag{B_1, B_2} Q_1 = diag{V_1^T B_1 V_1, V_2^T B_2 V_2} = diag{ψ_{1,1}, ..., ψ_{1,k}, ψ_{2,1}, ..., ψ_{2,k}},   (A.7)

which indicates that {ψ_{1,i}}_{i=1}^{k} and {ψ_{2,i}}_{i=1}^{k} are the 2k eigenvalues of Re(R), and we have Eq. (32).

According to Eq. (A.7), the columns of T_1 are the 2k eigenvectors of Re(R). More specifically, the first k eigenvectors of Re(R) are associated with {ψ_{1,i}}_{i=1}^{k} and are given by v_i = [b_{1,i}; −J_k b_{1,i}], i ∈ [1, k], while the last k eigenvectors of Re(R) are associated with {ψ_{2,i}}_{i=1}^{k} and are given by v_{i+k} = [b_{2,i}; J_k b_{2,i}], i ∈ [1, k]. Since the coefficient 1/√2 can be removed without affecting the eigenvectors of Re(R), and since it can be proved that there are L significant eigenvalues for both B_1 and B_2, we obtain (33) immediately. □

Appendix B. Proof of Theorem 2

Similar to the proof of Theorem 1, the primary goal is to choose two unitary matrices for the diagonalization of Im(R). We introduce the following 2k × 2k real matrix

P_2 ≜ (1/√2) [I_k, I_k; J_k, −J_k],   (B.1)

whose inverse is given by P_2^{−1} = P_2^T; hence, P_2 is unitary [26]. Using Eqs. (A.1), (B.1), (36) and J_k Im(R)_21 = Im^T(R)_21 J_k, it can easily be verified that

P_2^T Im(R) P_1 = diag{D_1, D_2}.

Let {χ_{1,i}}_{i=1}^{k} be the k singular values of D_1 and let {χ_{2,i}}_{i=1}^{k} be the k singular values of D_2; we have

M_1^T D_1 N_1 = diag{χ_{1,1}, χ_{1,2}, ..., χ_{1,k}}   (B.2.1)
M_2^T D_2 N_2 = diag{χ_{2,1}, χ_{2,2}, ..., χ_{2,k}}.   (B.2.2)

Using M_1, M_2, N_1 and N_2, we define two 2k × 2k matrices

L_1 ≜ diag{M_1, M_2}   (B.3.1)
K_1 ≜ diag{N_1, N_2},   (B.3.2)

with which we introduce another two 2k × 2k matrices

T_2 ≜ P_2 L_1 = (1/√2) [M_1, M_2; J_k M_1, −J_k M_2]   (B.4.1)
T_3 ≜ P_1 K_1 = (1/√2) [N_1, N_2; −J_k N_1, J_k N_2].   (B.4.2)

Using (B.2.1), (B.2.2) and (B.4), we have

T_2^T Im(R) T_3 = L_1^T P_2^T Im(R) P_1 K_1 = L_1^T diag{D_1, D_2} K_1 = diag{M_1^T D_1 N_1, M_2^T D_2 N_2} = diag{χ_{1,1}, ..., χ_{1,k}, χ_{2,1}, ..., χ_{2,k}},   (B.5)

which indicates that {χ_{1,i}}_{i=1}^{k} and {χ_{2,i}}_{i=1}^{k} are the 2k singular values of Im(R), and we obtain Eq. (39).

In addition, it can be clearly seen from Eq. (B.5) that the columns of T_2 are the 2k left singular vectors of Im(R) and the columns of T_3 are the 2k right singular vectors of Im(R). Since it can be proved that there are L significant singular values for both D_1 and D_2, we obtain (40) from the definitions of T_2 and T_3 in (B.4), which completes the proof. □

Appendix C. Proof of Theorem 5

Let C_k, k = −(M − 1), ..., −1, 0, 1, ..., M − 1, be the coefficient of z^k in p^T(z^{−1}) V̄_N V̄_N^T p(z), and let [V̄_N V̄_N^T]_{s,t} be the element in the sth row and tth column of V̄_N V̄_N^T; we have

f_RV-root-MUSIC(z) = Σ_{k=−(M−1)}^{M−1} C_k z^k = Σ_{s=1}^{M} Σ_{t=1}^{M} z^{−(s−1)} [V̄_N V̄_N^T]_{s,t} z^{t−1} = Σ_{s=1}^{M} Σ_{t=1}^{M} [V̄_N V̄_N^T]_{s,t} z^{t−s},   (C.1)

which implies that

C_k = Σ_{s=1}^{M−k} [V̄_N V̄_N^T]_{s,s+k} = Σ_{s=1}^{M−k} [V̄_N V̄_N^T]_{s+k,s} = C_{−k},  k ≥ 0.   (C.2)

Assume that z_0 is a root of f_RV-root-MUSIC(z). Noting that C_k ∈ R, k = −(M − 1), ..., −1, 0, 1, ..., M − 1, we obtain

0 = Σ_{k=−(M−1)}^{M−1} C_k z_0^k = Σ_{k=−(M−1)}^{M−1} C_k (z_0^*)^k.   (C.3)

Therefore, z_0^* is a root of f_RV-root-MUSIC(z). Moreover, it follows directly from Eq. (C.2) and Eq. (C.3) that

0 = Σ_{k=−(M−1)}^{M−1} C_k (z_0^*)^k = Σ_{k=−(M−1)}^{M−1} C_{−k} (z̆_0)^{−k} = Σ_{k=−(M−1)}^{M−1} C_k (z̆_0)^k,

which indicates that z̆_0 = 1/z_0^* is a root of f_RV-root-MUSIC(z), and consequently z̆_0^* = 1/z_0 is also a root of f_RV-root-MUSIC(z). □

References

[1] H. Krim, M. Viberg, Two decades of array signal processing research: the parametric approach, IEEE Signal Process. Mag. 13 (3) (1996) 67–94.
[2] R.O. Schmidt, Multiple emitter location and signal parameter estimation, IEEE Trans. Antennas Propag. 34 (3) (1986) 276–280.
[3] A.G. Jaffer, Maximum likelihood direction finding of stochastic sources: a separable solution, Proc. ICASSP 5 (1988) 2893–2896.
[4] M. Viberg, B. Ottersten, Sensor array processing based on subspace fitting, IEEE Trans. Signal Process. 39 (5) (1991) 1101–1121.
[5] R. Kumaresan, D.W. Tufts, Estimating the angles of arrival of multiple plane waves, IEEE Trans. Aerosp. Electron. Syst. AES-19 (1) (1983) 134–139.
[6] F.G. Yan, M. Jin, X.L. Qiao, Low-complexity DOA estimation based on compressed MUSIC and its performance analysis, IEEE Trans. Signal Process. 61 (8) (2013) 1915–1930.
[7] F.G. Yan, M. Jin, X.L. Qiao, Source localization based on symmetrical MUSIC and its statistical performance analysis, Sci. China Inf. Sci. 56 (6) (2013) 1–13.
[8] F.G. Yan, B. Cao, J.J. Rong, Y. Shen, M. Jin, Spatial aliasing for efficient direction-of-arrival estimation based on steering vector reconstruction, EURASIP J. Adv. Signal Process. (2016) 121.
[9] A. Khabbazibasmenj, et al., Efficient transmit beamspace design for search-free based DOA estimation in MIMO radar, IEEE Trans. Signal Process. 62 (6) (2014) 1490–1500.
[10] K. Yu, Low-complexity 2D direction-of-arrival estimation for acoustic sensor arrays, IEEE Signal Process. Lett. 23 (12) (2016) 1791–1795.
[11] V.V. Reddy, M. Mubeen, B.P. Ng, Reduced-complexity super-resolution DOA estimation with unknown number of sources, IEEE Signal Process. Lett. 22 (6) (2015) 772–776.
[12] B.D. Rao, et al., Performance analysis of root-MUSIC, IEEE Trans. Acoust., Speech, Signal Process. 37 (1989) 1939–1949.
[13] R. Roy, T. Kailath, ESPRIT-estimation of signal parameters via rotational invariance techniques, IEEE Trans. Signal Process. 37 (7) (1989) 984–995.
[14] K.C. Huarng, C.C. Yeh, A unitary transformation method for angle-of-arrival estimation, IEEE Trans. Signal Process. 39 (1991) 975–977.
[15] D.A. Linebarger, R.D. DeGroat, E.M. Dowling, Efficient direction-finding methods employing forward-backward averaging, IEEE Trans. Signal Process. 42 (8) (1994) 2136–2145.
[16] P. Stoica, M. Jansson, On forward-backward MODE for array signal processing, Digital Signal Process. 7 (4) (1997) 239–252.
[17] J. Selva, Computation of spectral and root MUSIC through real polynomial rooting, IEEE Trans. Signal Process. 53 (5) (2005) 1923–1927.
[18] M. Pesavento, et al., Unitary root-MUSIC with a real-valued eigendecomposition: a theoretical and experimental performance study, IEEE Trans. Signal Process. 48 (5) (2000) 1306–1314.
[19] M. Haardt, F. Romer, Enhancements of unitary ESPRIT for non-circular sources, in: Proc. ICASSP, 2004, pp. 101–104.
[20] A.B. Gershman, P. Stoica, On unitary and forward-backward MODE, Digital Signal Process. 9 (2) (1999) 67–75.
[21] N. Yilmazer, J. Koh, T.K. Sarkar, Utilization of a unitary transform for efficient computation in the matrix pencil method to find the direction of arrival, IEEE Trans. Signal Process. 54 (2006) 175–181.
[22] C. Qian, L. Huang, H.C. So, Improved unitary root-MUSIC for DOA estimation based on pseudo-noise resampling, IEEE Signal Process. Lett. 21 (2) (2014) 140–144.
[23] M. Haardt, J.A. Nossek, Unitary ESPRIT: how to obtain increased estimation accuracy with a reduced computational burden, IEEE Trans. Signal Process. 43 (5) (1995) 1232–1242.
[24] F.G. Yan, Y. Shen, M. Jin, Fast DOA estimation based on a split subspace decomposition on the array covariance matrix, Signal Processing 115 (2015) 1–8.
[25] F.G. Yan, M. Jin, S. Liu, X.L. Qiao, Real-valued MUSIC for efficient direction estimation with arbitrary array geometries, IEEE Trans. Signal Process. 62 (6) (2014) 1548–1560.
[26] G.H. Golub, C.F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, MD, 1996.
[27] R. Cao, B. Liu, F. Gao, X. Zhang, A low-complex one-snapshot DOA estimation algorithm with massive ULA, IEEE Commun. Lett. 21 (5) (2017) 1071–1074.
[28] X. Mestre, M.A. Lagunas, Modified subspace algorithms for DOA estimation with large arrays, IEEE Trans. Signal Process. 56 (2) (2008) 598–614.
[29] G.T. Pham, P. Loubaton, P. Vallet, Performance analysis of spatial smoothing schemes in the context of large arrays, IEEE Trans. Signal Process. 64 (1) (2016) 160–172.
[30] J. Zhuang, W. Li, A. Manikas, Fast root-MUSIC for arbitrary arrays, Electron. Lett. 46 (2) (2010) 174–176.
[31] P. Stoica, A. Nehorai, Performance study of conditional and unconditional direction-of-arrival estimation, IEEE Trans. Acoust., Speech, Signal Process. 38 (9) (1990) 1783–1795.
[32] F.G. Yan, T. Jin, M. Jin, Y. Shen, Subspace-based direction-of-arrival estimation using centro-symmetrical arrays, Electron. Lett. 27 (11) (2016) 1895–1896.
[33] V. Vasylyshyn, Improving the performance of root-MUSIC via pseudo-noise resampling and conventional beamformer, in: Proc. 2011 Microwaves, Radar and Remote Sensing Symposium (MRRS), 2011, pp. 309–312.
[34] P. Hyberg, M. Jansson, B. Ottersten, Array interpolation and DOA MSE reduction, IEEE Trans. Signal Process. 53 (12) (2005) 4464–4471.
[35] C.P. Mathews, M.D. Zoltowski, Eigenstructure techniques for 2-D angle estimation with uniform circular arrays, IEEE Trans. Signal Process. 42 (9) (1994) 2395–2407.
