
Ultrasonic Sensor Network: Target Localization with Passive Self-Localization

Ashutosh Saxena
Department of Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]

CS229 Project Report
Prof. Andrew Ng
http://www.stanford.edu/class/cs229

Abstract

A flying robot is localized using an ultrasonic sensor network distributed on
the ground. This requires the sensor nodes themselves to be localized first. In
the absence of an RF link between the target and the sensors, weighted
Multi-Dimensional Scaling (MDS) with gradient descent is used to localize the
sensors from outlier-free data obtained by fitting a locally linear manifold.
After this initialization, an Extended Kalman Filter (EKF) estimates the robot's
position along with other unknown parameters. Outliers, which are numerous
because of the large distances and noise involved, are detected by checking that
the innovation lies within an expected range. A variable-length observation
vector helps select the good sensor observations from the many available.

1 Introduction
Absolute positioning of flying robots is extremely important for free-range path
tracking, reactive navigation, and automatic control [1]. For flying robots,
localization systems using inertial sensors and GPS suffer in scenarios such as
inverted flight and aerobatic maneuvers [1], while computer-vision-based
localization suffers from a slow update rate.
Deploying an ultrasonic sensor network offers an attractive solution to this
problem, providing a high update rate, high accuracy, and robustness through
redundancy in the sensors. Each sensor is equipped with an ultrasonic receiver,
some computing power, and a wireless link to a central computer. A pulse emitted
from the transmitter on the flying robot is received by some of the sensors,
which report the pulse arrival time to the central computer. In the absence of an
RF link between the ultrasonic transmitter and the sensors, time-of-flight (TOF)
information is not available, and the position is estimated from pulse arrival
times alone.
Ultrasonic sensors have been used for robot localization [2,4,5,6]. Problems
addressed include estimating the variable speed of sound and combining data from
different types of sensors. However, almost none of this work addresses the
following problems: a) very unreliable arrival times, because of the large
distances and noise involved, resulting in a large number of outliers; b) unknown
ultrasonic sensor node locations; c) no RF synchronization between transmitter
and receiver. Exploiting the redundancy of a larger number of sensors allows the
position of the robot to be estimated reliably in these scenarios.

Ashutosh Saxena ([email protected]). Project pursued under Prof. Andrew Ng
([email protected]) in the AI-Robotics Lab.

Figure 1: Receiver (a) and transmitter (b) circuits
Most sensor network algorithms require knowledge of the sensor node locations,
which is not always practical. Network localization algorithms exist, but some of
them need special beacon nodes with known locations, and others need nodes to
communicate with each other using sound and RF signals. Passive sensor network
localization techniques are promising. The Bayesian network approach [6] to
passive localization is not applicable in the practical scenario of single-target
localization, because the pulses are not well dispersed (in fact they lie on a
particular trajectory) and only a small subset of nodes receives each pulse.
We use weighted Multi-Dimensional Scaling (WMDS), minimizing over the pulse
arrival time as well. Outliers are removed by finding a 3-dimensional locally
linear manifold [3] in a subset of the space of arrival times from all the
sensors. Once the sensors are localized, the algorithm switches to Extended
Kalman Filtering (EKF) mode for position estimates of the target. A
variable-length observation vector is used, and consistent innovations from each
subset of the available observed arrival times are used to update the position.

2 Localizer Structure

The localization system consists of M beacons (6 in our specific experiment) and
a transmitter array placed on the flying robot, which sends a 40 kHz ultrasonic
pulse (Figure 1(b)) every 20 ms. Note that there is no RF synchronization between
the flying robot and the sensor nodes. The sensor nodes are located at
$\vec{s}_i$, $i = 1, \ldots, M$. The nodes are placed on uneven ground with
approximately known locations $\vec{s}_i^{\,0}$.

Each sensor node (Figure 1(a)) consists of an ultrasonic receiver whose signal is
amplified, filtered, and rectified using low-noise analog circuitry and fed to a
sensor microprocessor, which uses adaptive pulse-energy and noise-energy
thresholds to detect pulses. The arrival time of each pulse, along with its
energy and width, is reported to the central computer over a wireless network
(using a Crossbow radio).

The large distances involved (about 50 ft) and the noise from the flying robot (a
helicopter) result in poor reception of pulses at the ultrasonic sensors: there
are numerous missing and erroneous pulses. The amplitude of a received pulse
varies greatly because of factors such as transmitter and receiver angle,
shadowing, and multipath interference. Although it has some correlation with the
distance from the transmitter, the pulse amplitude cannot be used as a feature
for distance. The pulse arrival time is accurate (to within 150 µs) and is taken
as the feature vector for observation.

3 System Model
For the nth pulse, the arrival time at sensor $m$ ($m = 1, \ldots, M$) is given by
$$t_m(n) = t_0 + nT + D_m(n)/c \qquad (1)$$
$$D_m(n) = \|x(n) - s_m\| \qquad (2)$$
$$D_{k_1}(n) - D_{k_2}(n) = c\,(t_{k_1}(n) - t_{k_2}(n)) \qquad (3)$$
where $T$ is the beacon firing cycle time (variable), $t_0$ is the firing time of
the first pulse, and $c$ is the speed of sound in air (variable, but it can be
modeled). $D_m(n)$ is the Euclidean distance from sensor $m$ to the target when
the nth pulse was fired. Note that in the third equation, taking the difference
of the distances to two sensors cancels the firing time of the pulse. Thus, with
enough sensors, time-of-flight information is not needed.
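As a concrete illustration of this cancellation, the following minimal NumPy
sketch evaluates equations (1)-(3) for made-up sensor and target positions (the
numerical values are illustrative assumptions, not system parameters):

import numpy as np

# Illustrative check of equations (1)-(3); all positions and constants here are
# example values, not the actual system parameters.
c = 343.0          # assumed speed of sound in air (m/s)
T = 0.020          # beacon firing cycle time (s)
t0 = 0.0137        # unknown firing time of the first pulse (s)

sensors = np.array([[0.0, 0.0, 0.0],
                    [5.0, 0.0, 0.0],
                    [0.0, 5.0, 0.0],
                    [5.0, 5.0, 0.5]])   # example node positions (m)
x_n = np.array([2.0, 3.0, 1.5])         # example target position for pulse n (m)
n = 7

D = np.linalg.norm(sensors - x_n, axis=1)   # equation (2): D_m(n) = ||x(n) - s_m||
t = t0 + n * T + D / c                      # equation (1): arrival time at each sensor

# Equation (3): differencing two sensors' arrival times cancels t0 and n*T, so
# the unknown firing time drops out when enough sensors are available.
assert np.allclose(D[1] - D[0], c * (t[1] - t[0]))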

4 Algorithm
The localization process consists of three parts:

1. Pulse Detection: The noise level and the pulse energy level vary. To handle
this, and to ensure that low-energy pulses are also detected, we use an adaptive
threshold based on the total signal energy and the pulse energy.

Figure 2: Received signal at a sensor from 10 ft

2. Sensor Localization: Given approximate locations of the sensors placed
randomly on the field and unknown positions of the target, the task is to
accurately find the positions of the sensors. There are many outliers in the data
because of false and missing pulses.

3. Realtime Target Localization: Once the sensor node locations have been
determined, the position of the target must be determined in real time. Only some
of the sensors may be giving reliable observations, and it may not be known a
priori whether an observation is reliable.

5 Sensor Node localization


The system equations can be viewed as a 3-dimensional manifold in the
M-dimensional observation space. The intrinsic free variables are the target
position $x = (x, y, z)$. Since the equations are well behaved when linearized,
the trajectory is locally linear. Further, $x$ changes as the target (helicopter)
moves, so the $x(n)$'s are closely spaced. Under these conditions, we fit a
locally linear manifold [3]: for each point, nearest neighbors are selected from
a candidate set based on how well those neighbors reconstruct the point. After
repeating this for every point, an outlier-free manifold is formed. This is a
novel application of the algorithm to removing outliers from sensor data: it
automatically keeps the points for which all the sensor readings are valid and
discards the outliers. The method can also be extended to the case where only a
few sensor readings are valid, by fitting manifolds in the subspace spanned by
the remaining sensors. In this paper, we restrict ourselves to the scenario in
which all sensor readings are valid (Figure 5).
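The sketch below illustrates the flavor of this outlier-removal step under our
own simplifying assumptions (it is not the exact procedure of [3]): each
M-dimensional arrival-time vector is scored by how well its nearest neighbors
reconstruct it linearly, and samples with a large reconstruction error are
discarded. The function names, neighbor count k, and percentile cut-off are all
hypothetical.

import numpy as np

def reconstruction_errors(points, k=8, reg=1e-3):
    # For each arrival-time vector, solve for the weights that best reconstruct
    # it from its k nearest neighbours (as in locally linear embedding) and
    # return the residual norm; samples that cannot be reconstructed well are
    # treated as outliers.
    n = len(points)
    errors = np.zeros(n)
    for i in range(n):
        d = np.linalg.norm(points - points[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]              # k nearest neighbours (skip self)
        Z = points[nbrs] - points[i]               # centred neighbours, shape (k, M)
        G = Z @ Z.T + reg * np.eye(k)              # regularized local Gram matrix
        w = np.linalg.solve(G, np.ones(k))
        w /= w.sum()                               # reconstruction weights sum to one
        errors[i] = np.linalg.norm(points[i] - w @ points[nbrs])
    return errors

# Hypothetical usage: keep only samples lying on the locally linear manifold,
# where arrival is an (N, M) array of arrival-time vectors (one row per pulse).
# err = reconstruction_errors(arrival)
# inliers = arrival[err < np.percentile(err, 80)]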

Figure 3: 1-D manifold in 2-D space for 2 sensors and robot moving along a straight
line

Another way to look at it is as N equations in the unknown fixed variables
$t_0(0), s_{ix}, s_{iy}, s_{iz}$ (a total of $3M + 1$ parameters). There are
$P \le MN$ equations and $3(M-1) + 3N$ unknowns, and the equations can be solved
[6] if $P \ge 3M + 3N - 3$.
Now, once the outliers are removed, we have pairwise distances (shifted by an
additional constant $t_0$) between $S \le N$ valid target positions and the $M$
sensors (not all target positions result in valid readings at all the sensors).
However, we do not have distances between one sensor and another, or between one
target location and another.
We have
$$D(x(k), s_i) = c\,(t_i(k) - t_0(0) - kT) \qquad (4)$$
View $t_0(0)$ as a parameter (with $T$ known). If we estimate this parameter
correctly, we obtain the pairwise distances between each sensor and each robot
location in $\mathbb{R}^3$ by subtracting $t_0$ from the scaled arrival-time
observations $t_i$. If all the sensor positions and the valid target locations
are collected into a single vector of positions in $\mathbb{R}^3$,
$$z = \begin{pmatrix} s \\ x \end{pmatrix},$$
then we have the matrix $D = zz^T$, whose off-diagonal blocks $D_{xs} = xs^T$ are
given (up to the shift $t_0$) by the sensor-target distances, while the diagonal
blocks $D_{ss} = ss^T$ and $D_{xx} = xx^T$ are not known:
$$D = \begin{pmatrix} D_{ss} & D_{sx} \\ D_{xs} & D_{xx} \end{pmatrix}.$$

Now we can apply weighted Multi-Dimensional Scaling (MDS) with $t_0(0)$ as a
parameter, minimizing the error by varying $t_0(0)$ with an iterative method
(gradient descent). An iterative version of weighted MDS is used, with the
weights corresponding to the unknown blocks $D_{ss}$ and $D_{xx}$ set to zero.
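A minimal sketch of this step is given below, assuming NumPy. For simplicity the
outer optimization over $t_0(0)$ is written as a scan over candidate values
rather than a gradient step on $t_0$ itself, and the learning rate, iteration
count, and array layout are assumptions.

import numpy as np

def weighted_stress(sens, targ, dist_hat, mask):
    # Weighted MDS stress over sensor-target pairs only; the unknown
    # sensor-sensor and target-target blocks effectively carry zero weight.
    D = np.linalg.norm(sens[:, None, :] - targ[None, :, :], axis=2)
    return np.sum(mask * (D - dist_hat) ** 2)

def fit_positions(dist_hat, mask, M, S, iters=5000, lr=1e-3, seed=0):
    # Plain gradient descent on the weighted stress over all 3-D positions.
    rng = np.random.default_rng(seed)
    sens, targ = rng.standard_normal((M, 3)), rng.standard_normal((S, 3))
    for _ in range(iters):
        diff = sens[:, None, :] - targ[None, :, :]        # shape (M, S, 3)
        D = np.linalg.norm(diff, axis=2) + 1e-12
        g = (2 * mask * (D - dist_hat) / D)[:, :, None] * diff
        sens -= lr * g.sum(axis=1)                        # d stress / d sensor positions
        targ += lr * g.sum(axis=0)                        # d stress / d target positions
    return sens, targ, weighted_stress(sens, targ, dist_hat, mask)

def localize(t_obs, mask, T, c, t0_grid):
    # Outer search over the firing offset t0 (equation (4)): each candidate t0
    # converts arrival times into distances; keep the fit with the lowest stress.
    # t_obs is an (M, S) array of arrival times; mask marks valid readings.
    M, S = t_obs.shape
    best = None
    for t0 in t0_grid:
        dist_hat = c * (t_obs - t0 - np.arange(S)[None, :] * T)
        sens, targ, err = fit_positions(dist_hat, mask, M, S)
        if best is None or err < best[0]:
            best = (err, t0, sens, targ)
    return best

As in any MDS formulation, the recovered configuration is determined only up to a
rigid transformation; the approximately known node locations $\vec{s}_i^{\,0}$
can be used to anchor it.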

6 Target Localization

We use an Extended Kalman Filter (EKF) to estimate the n-dimensional state of the
system, denoted x, given measurements collected in an m-dimensional vector y.

6.1 State vector for EKF

The state vector consists of the robot position (x, y, z) and the speed of sound c:
$$x = (\, x \quad y \quad z \quad c \,)$$

6.1.1 Variable length Observation vector


The observation vector is given by
$$y = (\, t_2 - t_1 \quad t_3 - t_1 \quad \ldots \quad t_M - t_1 \,)$$
If a particular sensor knows that it has not received the pulse, that sensor is
excluded from the state update: its arrival time is removed from the observation
vector, along with the corresponding rows of the measurement matrices.
The number of arrival times required for estimating the position is 4, so we must
choose at least 4 of the 6 observations. This gives at most
$\sum_{i=4}^{6} \binom{6}{i} = 15 + 6 + 1 = 22$ choices of observation vector.
The following heuristic is used to choose the observation vector (a code sketch
follows the list):

1. Choose an observation only if its amplitude is above a threshold θ1. If the
number of observations chosen, LN, is less than 4, choose the top 4 provided they
are above a threshold θ2, where θ2 < θ1.

2. Reduce the number of observations if the innovation exceeds a threshold θ3.

3. Compute the state covariance matrix with LN observations, and with the best
LN − 1 observations. If the error goes down, compute with the best LN − 2
observations, and continue while LN − i ≥ 4.
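A minimal sketch of the amplitude-based selection (step 1 above) is shown here,
assuming NumPy; the threshold values and the function interface are illustrative
assumptions, not the actual implementation.

import numpy as np

def select_observations(amplitude, theta1, theta2, min_obs=4):
    # Step 1 of the heuristic above: keep sensors whose pulse amplitude exceeds
    # theta1; if fewer than min_obs survive, fall back to the strongest pulses
    # above the weaker threshold theta2 (theta2 < theta1). Returns sensor indices.
    chosen = np.flatnonzero(amplitude >= theta1)
    if len(chosen) < min_obs:
        candidates = np.flatnonzero(amplitude >= theta2)
        order = candidates[np.argsort(amplitude[candidates])[::-1]]
        chosen = order[:min_obs]
    return np.sort(chosen)

# Illustrative example: two sensors are weak, so the fallback threshold kicks in.
amp = np.array([0.9, 0.2, 0.7, 0.1, 0.8, 0.6])
idx = select_observations(amp, theta1=0.65, theta2=0.15)   # -> array([0, 2, 4, 5])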
Figure 4: Variable-length observation vector at (a) Sensor 1 and (b) Sensor 2.
The first sensor's arrival times become spurious after the initial few seconds;
the second sensor's are good almost all of the time.

Figure 5: Numerous false peaks are present. Initially there are no false peaks,
but as the helicopter moves farther away, false peaks are detected.
6.2 State Transition equations

The state transition equation is
$$x(k+1) = \phi\, x(k) + w(k+1) \qquad (5)$$
where $w(k+1)$ is the state noise. The measurement equation is
$$y(k+1) = h(x(k+1)) + v(k+1) \qquad (6)$$
$$h_i(x(k)) = (D_{i+1}(k) - D_1(k))/c \qquad (7)$$
$$i = 1, \ldots, M-1 \qquad (8)$$
where $h$ is a nonlinear function whose nonlinearity is not very severe, so it
can be linearized around the operating point, and $v(k+1)$ is the measurement
noise. The EKF works well even if it is not initialized with good values.
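A sketch of this measurement model and its linearization is given below, assuming
NumPy and the state ordering (x, y, z, c); the analytic Jacobian is a standard
derivation for this model, not code taken from the paper.

import numpy as np

def h(state, sensors):
    # Measurement model of equation (7): predicted arrival-time differences
    # (t_{i+1} - t_1), i = 1..M-1, for state = (x, y, z, c).
    p, c = state[:3], state[3]
    D = np.linalg.norm(sensors - p, axis=1)
    return (D[1:] - D[0]) / c

def H_jacobian(state, sensors):
    # Linearization of h around the current state estimate, used by the EKF.
    p, c = state[:3], state[3]
    diff = p - sensors                                   # shape (M, 3)
    D = np.linalg.norm(diff, axis=1, keepdims=True)
    u = diff / D                                         # unit vectors sensor -> target
    H = np.zeros((len(sensors) - 1, 4))
    H[:, :3] = (u[1:] - u[0]) / c                        # d h_i / d (x, y, z)
    H[:, 3] = -(D[1:, 0] - D[0, 0]) / c ** 2             # d h_i / d c
    return H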

6.3 State and error prediction

The covariance matrices Q and R of the state and measurement noise are estimated
statistically beforehand, to avoid the time delay of online estimation. The
predicted error covariance matrix P is initialized to a large value because of
the uncertainty in the initial state. At each step, the state is predicted and
the predicted error covariance matrix is calculated. The innovation is then
computed, and if it exceeds a certain threshold the observation is rejected; this
threshold may be decreased as time passes. Otherwise, the innovation is added to
the predicted state with the Kalman gain, and the error covariance matrix is
recomputed.
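One filter cycle with this innovation gate might look like the following sketch
(NumPy); the function signature and the use of a simple norm threshold as the
gate are assumptions, not the paper's implementation.

import numpy as np

def ekf_step(x, P, y, phi, Q, R, sensors, h, H_jacobian, gate):
    # One predict/update cycle as described above. `gate` is an assumed scalar
    # threshold on the innovation norm used to reject spurious observations.

    # Predict (equation (5), with phi the state transition matrix).
    x_pred = phi @ x
    P_pred = phi @ P @ phi.T + Q

    # Innovation and its covariance.
    nu = y - h(x_pred, sensors)
    H = H_jacobian(x_pred, sensors)
    S = H @ P_pred @ H.T + R

    # Innovation gating: if the innovation is out of range, reject the
    # observation and keep the prediction.
    if np.linalg.norm(nu) > gate:
        return x_pred, P_pred

    # Update with the Kalman gain and recompute the error covariance.
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ nu
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new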

6.4 Spurious arrival time detection

The algorithm may diverge because of spurious arrival times, which cannot be
detected at the beginning: the error covariance matrices are initially large
because of the uncertainty in the state, and the EKF converges only as samples
arrive. In the presence of spurious samples, this may never happen, so an
algorithm is needed to remove those spurious signals.
Some samples are collected at the beginning, and locally linear neighbors are
clustered together as in [3]. These samples, which have a high probability of
being correct, are fed to the Kalman filter with a weak threshold on the
spurious-sample check. Once the EKF has converged, samples are fed directly to
the EKF.

7 Results
7.1 Simulation Results

The convergence of the sensor localization algorithm was checked in simulation.
It converged with high precision.

7.2 Experimental Results

For target localization, the sensor positions are assumed to be known. The
starting position is also assumed to be known, although the helicopter position
was initialized with a wrong value and the Kalman filter automatically corrected
it.

7.2.1 Sensor Localization


With 2 sensors, the Locally Linear Manifold algorithm was able to remove outliers.
7.2.2 Estimate of 1-D motion

The error was 3.5 cm using 2 ultrasonic sensors, with the robot moving in 1
dimension.

7.2.3 Estimate of 2-D motion

The arrival times were detected, and the spurious arrival times were removed
(Figure 6).

Figure 6: X position of robot moving in a straight line with 3 sensors

7.2.4 Helicopter Localization

The results of helicopter localization are shown in Figure 7. The position is
shown in meters, and the estimated error is about 15 cm.

Figure 7: X, Y, Z position of the helicopter in meters with 6 sensors. Output is
at 55 Hz.
8 Conclusion
This paper proposes a novel 3-dimensional localization system for robots, using
an ultrasonic sensor network with unknown node locations. This makes the system
deployable in scenarios where separate sensor node localization is not possible,
for example when the sensor nodes are dropped by the robot itself. Currently,
offline position estimates with about 15 cm error are available at 25 ft
distances. The present system indicates that accuracies of better than 10 cm may
be achievable in a 400 sq-ft area for distances up to 50 ft. The algorithm and
the system still have to be made robust and operable in real time.
We also demonstrated a new application of locally linear manifolds to removing
outliers, which helps the weighted MDS find the sensor node locations.

References
[1] A. Y. Ng, A. Coates, M. Diel, V. Ganapathi, J. Schulte, B. Tse, E. Berger,
and E. Liang, "Inverted autonomous helicopter flight via reinforcement learning,"
in International Symposium on Experimental Robotics, 2004.
[2] M. R. McCarthy et al., "RF free ultrasonic positioning," in Proc. IEEE ISWC,
2003.
[3] A. Saxena, A. Gupta, and A. Mukerjee, "Non-linear dimensionality reduction by
locally linear isomaps," in Lecture Notes in Computer Science (Proc. 11th
International Conference on Neural Information Processing, ICONIP), 2004.
[4] C.-C. Tsai, "A localization system of a mobile robot by fusing dead-reckoning
and ultrasonic measurements," IEEE Trans. Instrumentation and Measurement,
vol. 47, no. 5, Oct. 1998.
[5] L. Kleeman, "Optimal estimation of position and heading for mobile robots
using ultrasonic beacons and dead-reckoning," IEEE, 1992.
[6] R. Biswas and S. Thrun, "A passive approach to sensor network localization,"
unpublished, 2005.
