
Conference Paper, July 2011. DOI: 10.1109/IJCNN.2011.6033628


Proceedings of International Joint Conference on Neural Networks, San Jose, California, USA, July 31 – August 5, 2011
978-1-4244-9637-2/11/$26.00 ©2011 IEEE

A Novel Multilayer Neural Network Model for TOA-Based Localization in Wireless Sensor Networks

Sayed Yousef Monir Vaghefi, Reza Monir Vaghefi

Sayed Yousef Monir Vaghefi is with the School of Computer Science and Information Technology at Royal Melbourne Institute of Technology, Melbourne, Australia (email: [email protected]). Reza Monir Vaghefi is with the Department of Signals and Systems, Chalmers University of Technology, Gothenburg, Sweden (email: vaghefi@student.chalmers.se).

Abstract—A novel multilayer neural network model, called artificial synaptic network, was designed and implemented for single-sensor localization with time-of-arrival (TOA) measurements. In the TOA localization problem, the location of a source sensor is estimated based on its distance from a number of anchor sensors. The measured distance values are noisy, and the estimator should be able to handle different amounts of noise. Three neural network models, namely the proposed artificial synaptic network, a multilayer perceptron network, and a generalized radial basis functions network, were applied to the TOA localization problem. The performance of the models was compared with one another. The efficiency of the models was calculated based on the memory cost. The study results show that the proposed artificial synaptic network has the lowest RMS error and the highest efficiency. The robustness of the artificial synaptic network was compared with that of the least squares (LS) method and the weighted least squares (WLS) method. The Cramer-Rao lower bound (CRLB) of TOA localization was used as a benchmark. The model's robustness in high noise is better than that of the WLS method and remarkably close to the CRLB.

I. INTRODUCTION

Wireless sensor network (WSN) localization is one of the interesting subjects studied in recent years. In this problem, the locations of anchor sensors are known and the location of each source sensor is estimated based on its distance from the anchor sensors. The approximate distance of a source sensor from an anchor sensor is obtained using different measurement methods such as time of arrival (TOA) [1], time difference of arrival [2], and received signal strength (RSS) [3]. Throughout this work we assume that the distances are obtained using TOA measurements; however, the problem can easily be extended to the other methods.

The maximum likelihood (ML) estimator is the optimal estimator when the number of data records is sufficiently large [1], [4]. However, the cost function of the ML estimator is non-linear and non-convex, and finding the global minimum requires convoluted computations. The solution of the ML estimator is computed using iterative optimization methods [1]. Since the cost function has many saddle points and local minima, a good initial guess is necessary to make sure that the algorithm converges to the global minimum. To deal with this behavior of the ML estimator, different methods such as semidefinite programming (SDP) relaxation [5], [6] and multidimensional scaling (MDS) [7] are introduced. In SDP, the cost function is approximated and relaxed to a convex optimization problem and solved with efficient algorithms that do not require any initialization. In MDS, the locations of the sources are estimated using data analysis of the coordinates and distances. It is stated that the MDS approach is very sensitive to measurement noise and not applicable to low-connectivity networks [7].

Linear estimators [8], [9], [10] are also investigated in a number of studies. The model of the TOA localization, which is basically a non-linear problem, is linearized using some approximation. The least squares (LS) method is studied in [9]. The method is simple to implement but cannot handle the measurement noise. The TOA measurement noise is usually modeled as a zero-mean Gaussian random variable with a variance depending mostly on the distances (i.e., the larger measured distances have higher noise than the shorter ones). To improve the performance of the LS method, the weighted least squares (WLS) algorithm [11] is introduced, which can tolerate unequally sized noises. However, since linearization of the TOA localization model is done under the assumption that the measurement noise is sufficiently small, the performance of LS and WLS declines considerably as the measurement noise increases. Different extensions to the LS method such as the constrained weighted least squares (CWLS) method [8] and the corrected least squares method [9] are also introduced. Although these algorithms have better performance in high noise, their computation cost and complexity are higher.

In this project, the TOA-based sensor localization is tackled using artificial neural network models. Shareef et al. [12] have compared the performance of three types of neural networks, namely RBF, MLP, and recurrent neural networks, in sensor localization. Their result shows that the RBF network performs better than the other networks, but it has higher memory and computation costs. On the other hand, the MLP network has the lowest memory and computation costs. Rahman et al. [13] have implemented an MLP network for the WSN localization problem. The network has reached the RMSe of 0.54 meter for a 20 m × 20 m problem. The localization problem in a UWB sensor network is tackled in [14]. A neural network model is introduced; the performance of the model has not been as good as the LS method. In [15], a neural network model is developed for identification of undetected direct paths (UDP) in TOA localization.

The advantage of artificial neural network models is that they are adaptable to different conditions and situations. Indeed, our proposed model is designed to tolerate a specific condition or situation. For instance, if the sensor network is


set up in an indoor environment where the signals from the
sources are blocked and diminished by many objects and the
measurements are subject to high noise, the algorithm can
be trained to deal with the high-noise measurements. On the
other hand, if the connectivity of the sensor network is high
and it is required to have an accurate estimate, the algorithm
can be trained to handle the low-noise measurements.
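The linearized LS and WLS estimators discussed in the introduction can be sketched as follows. This is a minimal illustration of the standard linearization (squaring the range equations and subtracting the first anchor's equation to eliminate the quadratic term), not the exact formulations of [9] or [11]; the anchor layout, source position, and weight choice are illustrative assumptions.

```python
import numpy as np

def linearized_ls(anchors, d, weights=None):
    """Linearized TOA least squares: squaring ||x - a_i|| = d_i and
    subtracting the first anchor's equation gives a linear system in x."""
    a0, rest = anchors[0], anchors[1:]
    A = 2.0 * (rest - a0)                       # rows: 2 (a_i - a_1)^T
    b = (np.sum(rest**2, axis=1) - np.sum(a0**2)
         + d[0]**2 - d[1:]**2)                  # known right-hand side
    if weights is None:                         # plain LS
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x
    W = np.diag(weights)                        # WLS: weight reliable rows more
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = np.array([[20., 20.], [-20., 20.], [-20., -20.], [20., -20.]])
source = np.array([3.0, -7.0])
d = np.linalg.norm(anchors - source, axis=1)    # noise-free ranges
print(linearized_ls(anchors, d))                # recovers [3., -7.] up to floating point
```

With noisy ranges the same solver is used; only the weights change, which is why both LS and WLS degrade once the linearization's small-noise assumption is violated.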
The rest of the paper is organized as follows. In the Networks section, the neural network models are described. In the Data section, the training and test data are presented. In the Experiments and Results section, the models are compared based on their performance and efficiency, and the robustness of the artificial synaptic network (ASN) model is compared with that of the LS and WLS methods. The study results are summarized in the Conclusion section.
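For reference, the CRLB used as a benchmark in the experiments can be computed from the Fisher information of the range measurements. The sketch below assumes an equal, distance-independent noise variance across anchors, a simplification relative to the distance-dependent noise model discussed above; the anchor layout is also an assumption.

```python
import numpy as np

def toa_crlb_rmse(anchors, source, sigma):
    """Root-trace of the inverse Fisher information for TOA ranging
    with i.i.d. zero-mean Gaussian noise of std sigma on each range."""
    diff = source - anchors
    u = diff / np.linalg.norm(diff, axis=1, keepdims=True)  # unit vectors anchor -> source
    J = (u.T @ u) / sigma**2                                # Fisher information matrix
    return float(np.sqrt(np.trace(np.linalg.inv(J))))       # lower bound on position RMSE

anchors = np.array([[20., 20.], [-20., 20.], [-20., -20.], [20., -20.]])
print(toa_crlb_rmse(anchors, np.array([3.0, -7.0]), sigma=1.0))
```

Under this constant-variance model the bound scales linearly with the noise standard deviation, which matches the straight CRLB curves in the robustness figures.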

II. NETWORKS

In this study, the TOA localization problem was considered as a supervised learning problem, in which a neural network is trained with a set of input-output data called the training dataset, and tested with another set of input-output data called the test dataset. Three neural network models, an artificial synaptic network (ASN), a generalized radial basis functions (GRBF) network, and a multilayer perceptron (MLP) network, were applied to the problem. The models were implemented in C#. The performance and efficiency of the models were studied. The best model was identified and then tested for robustness.

The designed ASN model is comprised of a number of artificial synaptic networks, each of them working on a data cluster. The architecture of each network is presented in Fig. 1. The output of each network is computed using

    f(x) = \sum_{i=1}^{n} w_i x_i + w_0,    (1)

where n is the number of inputs, and each weight w_i is constructed in a deeper layer:

    w_i = \sum_{j=1}^{n} w_{ij} x_j + w_{i0},    (2)

where n is the number of inputs.

[Fig. 1. Artificial Synaptic Network (ASN) architecture]

In training, a center is assigned to each ASN by the k-means clustering algorithm. Each training data point goes to the closest network. The error is computed at the output layer of the network. The error is then backpropagated to the deepest layer. The weights at the deepest layer are updated. The weights at the next layers are not updated; they are instead constructed layer by layer from the weights of the deepest layer.

The learning algorithm:
1. Initialize the centers; the output-layer, second-layer, and third-layer weights; and the maximum acceptable error.
2. For each data point:
   a. Compute the Euclidean distance of the point from all the centers.
   b. Assign the closest center to the point.
3. For each center, change the center to the centroid of the data points assigned to that center.
4. If not yet converged, go to step 2.
5. For each data point:
   a. backpropagate(x)
   b. Compute the network's output: networkout(x) = computeoutput(x)
6. Compute the root mean square error:

    rmse = \sqrt{ \frac{\sum_{i=1}^{n} (output_i - networkout_i)^2}{n} },    (3)

where n is the number of training data points.
7. If rmse > maxerror, go to step 5.

The computeoutput(x: the data point) method:
1. Compute the Euclidean distance of x from all the centers.
2. Send x to the network with the closest center.
3. Compute the output of the chosen network:

    w_i = \sum_{j=1}^{n} w_{ij} x_j + w_{i0},    (4)

    networkout(x) = \sum_{i=1}^{n} w_i x_i + w_0,    (5)

where n is the number of inputs.

The backpropagation(x: the data point) method:
1. Compute the output for the data point: computeoutput(x)
2. Backpropagate the error layer by layer:
   a. Output layer: delta1 = output(x) - networkout(x)
   b. Second layer: delta2_i = delta1 * x_i
   c. Third layer: delta3_ij = delta2_i * x_j
3. Update the weights of the deepest layer: layer3weights = layer3weights + eta * delta3

[Fig. 2. Multilayer Perceptron Network architecture]
[Fig. 3. GRBF Network architecture]
[Fig. 4. Training dataset]
[Fig. 5. Test dataset]

The multilayer perceptron (MLP) network implemented for the problem has two hidden layers. The architecture of the network is presented in Fig. 2. The activation function of the hidden neurons is a logistic function. The network was trained using the backpropagation algorithm [16].

The GRBF network [17] consists of a number of radial basis function neurons, each of them working on a center. In the training, the fixed centers of the network are specified using the k-means clustering algorithm. The network is then trained using the gradient descent algorithm. Fig. 3 shows the architecture of the network.

III. DATA

The performance of a neural network model depends on the density of the training data and the complexity of the problem. If the training data is not dense, the network does not have enough information to build the model. Considering the modeling process as a hyperplane reconstruction process, the more complex the hyperplane is, the more data is required for reconstruction.

The datasets were generated in MATLAB. The training dataset was a set of 144 data points evenly distributed over the input space. Fig. 4 presents the training dataset. The red points (diamonds) are the source sensors and the blue points (squares) are the anchors. The test dataset was 300 data points randomly distributed over the input space. The distances of each data point from the anchors were calculated by MATLAB. A random amount of noise was added to the distances. The test dataset is shown in Fig. 5.

IV. EXPERIMENTS AND RESULTS

A model with two outputs is required to estimate the location, x and y, of a sensor. However, since one-output models have less complexity and less training time, two separate one-output models were employed to estimate x and y of a sensor. In the first experiment, the models were trained with the training dataset of Fig. 4. The performance of the models was compared based on the root mean square error (RMSe). The efficiency of the models was compared based on the memory cost. Table I shows the memory cost and RMSe of the models.

TABLE I
THE COMPARISON OF MODELS

Model                                              RMSe    Memory Cost         Iterations
Generalized Radial Basis Functions (GRBF) Network  2.829   4*15 + 15 = 75      99917
Multilayer Perceptron (MLP) Network                3.204   4*7 + 7*7 + 7 = 84  97186
Artificial Synaptic Network (ASN)                  0.2999  3*4*5 + 3*4 = 72    91697

The memory cost was calculated based on the number of memory blocks required to store the centers and synaptic weights. In the GRBF network, there are 15 centers that require 60 memory blocks, and there are 15 synaptic weights that require 15 blocks of memory. The MLP network has 7 neurons in each hidden layer, which means 28 synaptic weights between the input layer and the first hidden layer, 49 synaptic weights between the first and the second hidden layer, and 7 synaptic weights between the second hidden layer and the output layer. The number of synaptic weights in the ASN model is equal to the number of centers multiplied by the number of neurons in the deepest layer. The model has 3 centers and there are 20 neurons in the deepest layer.

In the second experiment, more neurons were added to the MLP network. The RMSe decreased but the training was considerably slow. Fig. 6 shows the RMSe of networks with different numbers of hidden neurons. The number of hidden neurons in the first and second hidden layers is equal. The networks were trained for 100 thousand iterations. By increasing the number of hidden neurons, the RMSe first decreases, but then it increases because networks with more hidden neurons require more iterations to converge. Lower RMSe can be achieved if the learning rate goes up. After changing the number of neurons to 12, the learning rate was increased; as a result, the RMSe went down. To conclude, the MLP network could not achieve better performance than the ASN model, even with more hidden neurons.

[Fig. 6. Influence of the number of hidden neurons on RMSe]

The ASN model had the best performance and efficiency. Compared to the other models, the ASN model converges faster and requires less memory. It has also achieved the lowest RMSe. Therefore, the ASN model was selected for the TOA localization problem.

In the third experiment, the ASN model was trained with a termination condition of RMSe < 0.3 m. The model was then tested on the test dataset. The computed RMSe is 0.335 m. Fig. 7 shows the estimated and true locations of the source nodes. The model was again trained with a termination condition of iterations > 400000. The model reached the RMSe of 0.258 m. The model was then tested with the test dataset. The RMSe of the test dataset is 0.271 m. Thus, in a 40 m × 40 m square area, the location of a source sensor can be estimated with an average error of 27.1 cm.

[Fig. 7. Diamonds depict estimated locations and squares depict true locations.]

In the next experiment, the ASN model was tested for robustness. A total of 300 random locations were selected. For each location, the distance between the location and each anchor was computed. A zero-mean Gaussian noise was added to the distances. A set of 10 different four-distance inputs was computed for each location. A total of 3000 inputs was sent to the ASN model and the RMSe was computed. The experiment was repeated for different Gaussian noises. Two other methods applied to the WSN localization problem, the LS method and the WLS method, were tested using the same procedure. Fig. 8 shows the computed RMSe in the presence of different amounts of noise. As shown in the figure, although the ASN model has achieved lower RMSe than the LS method, it has not been as robust as the WLS method.

[Fig. 8. ASN model with 3 centers]

To improve the robustness of the model, two centers were added to the model. The memory cost increased to 100. The model was trained using the same training dataset. The training stopped with the RMSe of 0.3 m. The model was tested with the same set of test inputs. Fig. 9 presents the robustness of the model. As shown in the figure, the model is almost as robust as the WLS method. However, the WLS method outperforms the model when the standard deviation of noise is lower than 0.5.

[Fig. 9. ASN model with 5 centers]

If the model is less trained, it is more robust. In other words, if the RMSe of the training data is higher, the model's tolerance of noise is higher. To test this theory, the model was trained with the termination condition of RMSe < 0.5 m. The model was then tested on the test dataset. Fig. 10 compares the model's robustness with that of the other methods. As demonstrated by this figure, when the noise standard deviation is higher than or equal to 2, the model has lower RMSe than the WLS method. Conversely, when the noise standard deviation is lower than 2, the model has higher RMSe than the WLS method.

[Fig. 10. Influence of lowering the expected training RMSe]

Another way to improve the robustness is to train the model with noisy data. A zero-mean Gaussian noise with a standard deviation of 2 was added to the training data. The model was trained with the new training dataset. The model was then tested with the same test inputs. Fig. 11 shows the results. When the noise is high, the model outperforms the WLS method. In contrast, when the noise is low, the model fails to compete with the WLS method. Overall, the model's robustness is better than that of the LS method and almost as good as that of the WLS method. If trained with noisy data, the model can outperform the WLS method.

[Fig. 11. 5-center ASN model trained with noisy data]

V. CONCLUSION

In TOA localization, our artificial synaptic network (ASN) model has better performance and efficiency compared to the GRBF and MLP models. The model converges faster and its memory cost is lower. Tested on the training dataset, the model reached the RMSe of 0.258 m. Tried on the test dataset, the model achieved the RMSe of 0.271 m. Therefore, in a 40 m × 40 m square area, the location of a source sensor can be estimated with an average error of 27.1 cm.

The robustness of the ASN model is almost as good as that of the weighted least squares (WLS) method. Adding more centers to the model improves the robustness of the model. However, there is a limit to this improvement. If trained with noisy data, the model can outperform the WLS method. In high noise, the model performs better than the WLS method. On the contrary, in low noise, the model fails to perform as well as the WLS method.

REFERENCES

[1] S. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, 1993.
[2] K. Ho and Y. Chan, "Solution and performance analysis of geolocation by TDOA," IEEE Transactions on Aerospace and Electronic Systems, vol. 29, no. 4, pp. 1311–1322, Oct. 1993.
[3] N. Patwari, A. O. Hero, M. Perkins, N. Correal, and R. O'Dea, "Relative location estimation in wireless sensor networks," IEEE Transactions on Signal Processing, vol. 51, no. 8, pp. 2137–2148, Aug. 2003.
[4] Y.-T. Chan, H. Yau Chin Hang, and P. Chung Ching, "Exact and approximate maximum likelihood localization algorithms," IEEE Transactions on Vehicular Technology, vol. 55, no. 1, pp. 10–16, Jan. 2006.
[5] A. M.-C. So and Y. Ye, "Theory of semidefinite programming for sensor network localization," in Proceedings of the Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA '05). Philadelphia, PA, USA: Society for Industrial and Applied Mathematics, 2005, pp. 405–414. [Online]. Available: http://portal.acm.org/citation.cfm?id=1070432.1070488
[6] C. Meng, Z. Ding, and S. Dasgupta, "A semidefinite programming approach to source localization in wireless sensor networks," IEEE Signal Processing Letters, vol. 15, pp. 253–256, 2008.
[7] K. Cheung and H. So, "A multidimensional scaling framework for mobile location using time-of-arrival measurements," IEEE Transactions on Signal Processing, vol. 53, no. 2, pp. 460–470, Feb. 2005.
[8] K. W. Cheung, H. C. So, W.-K. Ma, and Y. T. Chan, "A constrained least squares approach to mobile positioning: algorithms and optimality," EURASIP Journal on Applied Signal Processing, vol. 2006, pp. 150–150, Jan. 2006. [Online]. Available: http://dx.doi.org/10.1155/ASP/2006/20858
[9] Y. Chan and K. Ho, "A simple and efficient estimator for hyperbolic location," IEEE Transactions on Signal Processing, vol. 42, no. 8, pp. 1905–1915, Aug. 1994.
[10] R. Vaghefi, M. Gholami, and E. Strom, "Bearing-only target localization with uncertainties in observer position," in Proc. IEEE 21st International Symposium on Personal, Indoor and Mobile Radio Communications Workshops (PIMRC Workshops), Sept. 2010, pp. 238–242.
[11] M. Spirito, "On the accuracy of cellular mobile station location estimation," IEEE Transactions on Vehicular Technology, vol. 50, no. 3, pp. 674–685, May 2001.
[12] A. Shareef, Y. Zhu, and M. Musavi, "Localization using neural networks in wireless sensor networks," in Proceedings of the 1st International Conference on MOBILe Wireless MiddleWARE, Operating Systems, and Applications (MOBILWARE '08). Brussels, Belgium: ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), 2007, pp. 4:1–4:7. [Online]. Available: http://portal.acm.org/citation.cfm?id=1361492.1361497
[13] M. Rahman, Y. Park, and K.-D. Kim, "Localization of wireless sensor network using artificial neural network," in Proc. 9th International Symposium on Communications and Information Technology (ISCIT 2009), Sept. 2009, pp. 639–642.
[14] S. Ergut, R. Rao, O. Dural, and Z. Sahinoglu, "Localization via TDOA in a UWB sensor network using neural networks," in Proc. IEEE International Conference on Communications (ICC '08), May 2008, pp. 2398–2403.
[15] M. Heidari, N. Alsindi, and K. Pahlavan, "UDP identification and error mitigation in TOA-based indoor localization systems using neural network architecture," IEEE Transactions on Wireless Communications, vol. 8, no. 7, pp. 3597–3607, July 2009.
[16] S. Saarinen, R. Bramley, and G. Cybenko, "Neural networks, backpropagation, and automatic differentiation," in Automatic Differentiation of Algorithms: Theory, Implementation, and Application, A. Griewank and G. F. Corliss, Eds. Philadelphia, PA: SIAM, 1991, pp. 31–42.
[17] S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Upper Saddle River, NJ, USA: Prentice Hall PTR, 1998.