
Chapter 6

Best Linear Unbiased Estimate (BLUE)

Motivation for BLUE

Except for the Linear Model case, the optimal MVU estimator might:
1. not even exist
2. be difficult or impossible to find
⇒ Resort to a sub-optimal estimate
BLUE is one such sub-optimal estimate

Idea for BLUE:
1. Restrict the estimate to be linear in the data x
2. Restrict the estimate to be unbiased
3. Find the best such estimate (i.e., the one with minimum variance)

Advantage of BLUE: needs only the 1st and 2nd moments (mean & covariance) of the PDF

Disadvantages of BLUE:
1. Sub-optimal (in general)
2. Sometimes totally inappropriate (see bottom of p. 134)
6.3 Definition of BLUE (Scalar Case)

Observed data: $\mathbf{x} = [x[0]\; x[1]\; \ldots\; x[N-1]]^T$

PDF: $p(\mathbf{x};\theta)$ depends on unknown $\theta$

BLUE constrained to be linear in the data:

$$\hat{\theta}_{BLUE} = \sum_{n=0}^{N-1} a_n x[n] = \mathbf{a}^T\mathbf{x}$$

Choose the $a_n$'s to give: 1. an unbiased estimator, and 2. then minimize the variance.

[Diagram: among all unbiased estimators (linear and nonlinear), the BLUE has the minimum variance within the class of linear unbiased estimators, while the MVUE has the minimum variance over the entire unbiased class. Note: this is not Fig. 6.1.]
6.4 Finding the BLUE (Scalar Case)

1. Constrain to be linear: $\hat{\theta} = \sum_{n=0}^{N-1} a_n x[n]$

2. Constrain to be unbiased: $E\{\hat{\theta}\} = \theta$

Using the linear constraint, unbiasedness requires:

$$\sum_{n=0}^{N-1} a_n E\{x[n]\} = \theta$$

Q: When can we meet both of these constraints?

A: Only for certain observation models (e.g., linear observations)

Finding BLUE for Scalar Linear Observations

Consider the scalar-parameter linear observation:

$$x[n] = \theta s[n] + w[n] \;\Rightarrow\; E\{x[n]\} = \theta s[n]$$

Then for the unbiased condition we need:

$$E\{\hat{\theta}\} = \sum_{n=0}^{N-1} a_n s[n]\,\theta = \theta \;\Rightarrow\; \mathbf{a}^T\mathbf{s} = 1$$

This tells how to choose the weights to use in the BLUE estimator form $\hat{\theta} = \sum_{n=0}^{N-1} a_n x[n]$.

Now… given that these constraints are met… we need to minimize the variance!!

Given that $\mathbf{C}$ is the covariance matrix of $\mathbf{x}$ we have:

$$\mathrm{var}\{\hat{\theta}_{BLUE}\} = \mathrm{var}\{\mathbf{a}^T\mathbf{x}\} = \mathbf{a}^T\mathbf{C}\mathbf{a}$$

(compare: $\mathrm{var}\{aX\} = a^2\,\mathrm{var}\{X\}$)
Goal: minimize $\mathbf{a}^T\mathbf{C}\mathbf{a}$ subject to $\mathbf{a}^T\mathbf{s} = 1$
⇒ Constrained optimization

Appendix 6A: Use a Lagrange multiplier. Minimize

$$J = \mathbf{a}^T\mathbf{C}\mathbf{a} + \lambda(\mathbf{a}^T\mathbf{s} - 1)$$

Set $\partial J/\partial\mathbf{a} = \mathbf{0}$:

$$\mathbf{a} = -\frac{\lambda}{2}\mathbf{C}^{-1}\mathbf{s}$$

Enforce the constraint $\mathbf{a}^T\mathbf{s} = 1$:

$$\mathbf{a}^T\mathbf{s} = -\frac{\lambda}{2}\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s} = 1 \;\Rightarrow\; -\frac{\lambda}{2} = \frac{1}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}} \;\Rightarrow\; \mathbf{a} = \frac{\mathbf{C}^{-1}\mathbf{s}}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}$$

Therefore:

$$\hat{\theta}_{BLUE} = \mathbf{a}^T\mathbf{x} = \frac{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{x}}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}, \qquad \mathrm{var}(\hat{\theta}) = \frac{1}{\mathbf{s}^T\mathbf{C}^{-1}\mathbf{s}}$$

Appendix 6A shows that this achieves a global minimum.
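To make the result concrete, here is a minimal numerical sketch of the scalar BLUE in Python (not from the notes; the signal s, covariance C, and the true θ are made-up illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up setup: x[n] = theta*s[n] + w[n] with known correlated noise covariance C
    N = 8
    theta_true = 2.5
    s = np.ones(N)                                # known signal samples s[n]
    C = 0.5 * np.eye(N) + 0.2 * np.ones((N, N))   # known noise covariance (positive definite)

    # BLUE weights: a = C^{-1} s / (s^T C^{-1} s)
    Cinv_s = np.linalg.solve(C, s)
    a = Cinv_s / (s @ Cinv_s)
    assert np.isclose(a @ s, 1.0)                 # unbiasedness constraint a^T s = 1

    # Apply to one realization (Gaussian used only for convenience; BLUE needs no PDF)
    x = theta_true * s + rng.multivariate_normal(np.zeros(N), C)
    theta_hat = a @ x
    var_blue = 1.0 / (s @ Cinv_s)                 # var(theta_hat) = 1/(s^T C^{-1} s)
    print(theta_hat, var_blue)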


Applicability of BLUE

We just derived the BLUE under the following:
1. Linear observations, but with no constraint on the noise PDF
2. No knowledge of the noise PDF other than its mean and covariance!!

What does this tell us???

BLUE is applicable to linear observations…
But… the noise need not be Gaussian (as was assumed in the Ch. 4 Linear Model)!!!
And all we need are the 1st and 2nd moments of the PDF!!!

But… we'll see in the Example that we can often linearize a nonlinear model!!!

6.5 Vector Parameter Case: Gauss-Markov Theorem

Gauss-Markov Theorem:

If the data can be modeled as linear observations in noise:

$$\mathbf{x} = \mathbf{H}\boldsymbol{\theta} + \mathbf{w}$$

where $\mathbf{H}$ is a known matrix and $\mathbf{w}$ has known mean and covariance $\mathbf{C}$ (the PDF is otherwise arbitrary & unknown), then the BLUE is:

$$\hat{\boldsymbol{\theta}}_{BLUE} = (\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{C}^{-1}\mathbf{x}$$

and its covariance is:

$$\mathbf{C}_{\hat{\theta}} = (\mathbf{H}^T\mathbf{C}^{-1}\mathbf{H})^{-1}$$

Note: If the noise is Gaussian then the BLUE is the MVUE.
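A minimal sketch of the Gauss-Markov estimator in Python (the H, C, and θ values below are made-up for illustration):

    import numpy as np

    rng = np.random.default_rng(1)

    # Made-up linear model: x = H @ theta + w
    N, p = 20, 2
    H = rng.standard_normal((N, p))         # known observation matrix
    C = np.diag(rng.uniform(0.5, 2.0, N))   # known noise covariance (diagonal here)
    theta_true = np.array([1.0, -3.0])
    x = H @ theta_true + rng.multivariate_normal(np.zeros(N), C)

    # BLUE: theta_hat = (H^T C^{-1} H)^{-1} H^T C^{-1} x
    F = H.T @ np.linalg.solve(C, H)         # H^T C^{-1} H (avoid explicit inverse)
    theta_hat = np.linalg.solve(F, H.T @ np.linalg.solve(C, x))
    C_theta = np.linalg.inv(F)              # covariance of the BLUE
    print(theta_hat, np.sqrt(np.diag(C_theta)))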


Ex. 4.3: TDOA-Based Emitter Location

[Figure: a transmitter (Tx) at (xs, ys) emits s(t); receivers Rx1 at (x1, y1), Rx2 at (x2, y2), and Rx3 at (x3, y3) receive the delayed signals s(t − t1), s(t − t2), s(t − t3). Each TDOA defines a hyperbola of possible Tx locations: τ12 = t2 − t1 = constant, τ23 = t3 − t2 = constant.]

TDOA = Time-Difference-of-Arrival

Assume that the ith Rx can measure its TOA ti. (We won't worry about "how" they do that. Also… there are TDOA systems that never actually estimate TOAs!)

Then… from the set of TOAs… compute TDOAs.

Then… from the set of TDOAs… estimate the location (xs, ys).
TOA Measurement Model

Assume measurements of TOAs at N receivers (only 3 shown above): t0, t1, …, tN−1. There are measurement errors.

TOA measurement model:

$$t_i = T_o + R_i/c + \varepsilon_i, \qquad i = 0, 1, \ldots, N-1$$

where
  To = time the signal was emitted
  Ri = range from Tx to Rxi
  c = speed of propagation (for EM waves: c = 3×10^8 m/s)

Measurement noise: the εi are zero-mean with variance σ², independent (but PDF unknown); the variance is determined by the estimator used to estimate the ti's.

Now use Ri = [(xs − xi)² + (ys − yi)²]^{1/2}:

$$t_i = f(x_s, y_s) = T_o + \frac{1}{c}\sqrt{(x_s - x_i)^2 + (y_s - y_i)^2} + \varepsilon_i \qquad \text{(Nonlinear model)}$$
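A small Python sketch simulating this TOA model (the receiver positions, emitter location, To, and σ are made-up illustrative values):

    import numpy as np

    rng = np.random.default_rng(2)

    c = 3e8                                    # speed of propagation (m/s)
    rx = np.array([[-2000.0, 0.0],             # made-up receiver positions (m)
                   [    0.0, 0.0],
                   [ 2000.0, 0.0]])
    tx = np.array([500.0, 3000.0])             # made-up true emitter location (xs, ys)
    To = 1e-3                                  # made-up emission time (s)
    sigma = 1e-8                               # made-up TOA error std dev (s)

    # t_i = To + R_i/c + eps_i  (nonlinear in (xs, ys))
    R = np.linalg.norm(tx - rx, axis=1)        # ranges R_i from Tx to each Rx
    t = To + R / c + sigma * rng.standard_normal(len(rx))
    print(t)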
Linearization of TOA Model

So… we linearize the model so we can apply BLUE.

Assume some rough estimate (xn, yn) is available:

$$x_s = x_n + \delta x_s, \qquad y_s = y_n + \delta y_s$$

with xn, yn known and δxs, δys to be estimated; ⇒ θ = [δxs δys]^T.

Now use a truncated Taylor series to linearize Ri about (xn, yn):

$$R_i \approx R_{n_i} + \underbrace{\frac{x_n - x_i}{R_{n_i}}}_{=A_i}\,\delta x_s + \underbrace{\frac{y_n - y_i}{R_{n_i}}}_{=B_i}\,\delta y_s$$

where $R_{n_i}$ (known) is the range from (xn, yn) to the ith Rx.

Apply to the TOA model:

$$\tilde{t}_i = t_i - \frac{R_{n_i}}{c} = T_o + \frac{A_i}{c}\,\delta x_s + \frac{B_i}{c}\,\delta y_s + \varepsilon_i$$

with $R_{n_i}$, $A_i$, and $B_i$ all known.

Three unknown parameters to estimate: To, δxs, δys. (A numerical check of the linearization follows below.)
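A quick numerical check of this linearization in Python (continuing the made-up geometry from the previous sketch; (xn, yn) is a hypothetical rough estimate):

    import numpy as np

    rx = np.array([[-2000.0, 0.0], [0.0, 0.0], [2000.0, 0.0]])  # made-up Rx positions
    tx = np.array([500.0, 3000.0])             # made-up true location (xs, ys)
    xn, yn = 450.0, 3100.0                     # made-up rough estimate (x_n, y_n)

    Rn = np.linalg.norm(np.array([xn, yn]) - rx, axis=1)  # nominal ranges R_{n_i}
    A = (xn - rx[:, 0]) / Rn                   # A_i = (x_n - x_i)/R_{n_i}
    B = (yn - rx[:, 1]) / Rn                   # B_i = (y_n - y_i)/R_{n_i}

    dx, dy = tx[0] - xn, tx[1] - yn            # true perturbation (delta_xs, delta_ys)
    R_exact = np.linalg.norm(tx - rx, axis=1)
    R_linear = Rn + A * dx + B * dy            # truncated Taylor series
    print(R_exact - R_linear)                  # small residuals -> linearization is good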


TOA Model vs. TDOA Model

Two options now:
1. Use TOA to estimate 3 parameters: To, δxs, δys
2. Use TDOA to estimate 2 parameters: δxs, δys

Generally the fewer parameters the better…
everything else being the same.
But… here "everything else" is not the same:
Options 1 & 2 have different noise models
(Option 1 has independent noise)
(Option 2 has correlated noise)

In practice… we'd explore both options and see which is best.

Conversion to TDOA Model (N−1 TDOAs rather than N TOAs)

TDOAs (note that To cancels in the differences):

$$\tau_i = \tilde{t}_i - \tilde{t}_{i-1} = \underbrace{\frac{A_i - A_{i-1}}{c}}_{\text{known}}\,\delta x_s + \underbrace{\frac{B_i - B_{i-1}}{c}}_{\text{known}}\,\delta y_s + \underbrace{\varepsilon_i - \varepsilon_{i-1}}_{\text{correlated noise}}, \qquad i = 1, 2, \ldots, N-1$$

In matrix form: x = Hθ + w, with

$$\mathbf{x} = [\tau_1\; \tau_2\; \cdots\; \tau_{N-1}]^T, \qquad \boldsymbol{\theta} = [\delta x_s\; \delta y_s]^T$$

$$\mathbf{H} = \frac{1}{c}\begin{bmatrix} A_1 - A_0 & B_1 - B_0 \\ A_2 - A_1 & B_2 - B_1 \\ \vdots & \vdots \\ A_{N-1} - A_{N-2} & B_{N-1} - B_{N-2} \end{bmatrix}, \qquad \mathbf{w} = \begin{bmatrix} \varepsilon_1 - \varepsilon_0 \\ \varepsilon_2 - \varepsilon_1 \\ \vdots \\ \varepsilon_{N-1} - \varepsilon_{N-2} \end{bmatrix} = \mathbf{A}\boldsymbol{\varepsilon}$$

$$\mathbf{C}_w = \mathrm{cov}\{\mathbf{w}\} = \sigma^2\mathbf{A}\mathbf{A}^T \qquad \text{(see book for the structure of matrix } \mathbf{A}\text{)}$$
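A sketch of building H, A, and Cw in Python for this TDOA model (reusing the made-up geometry and nominal point from the earlier sketches):

    import numpy as np

    c = 3e8
    rx = np.array([[-2000.0, 0.0], [0.0, 0.0], [2000.0, 0.0]])  # made-up Rx positions
    xn, yn = 450.0, 3100.0                     # made-up nominal point
    sigma = 1e-8                               # made-up TOA noise std dev (s)
    N = len(rx)

    Rn = np.linalg.norm(np.array([xn, yn]) - rx, axis=1)
    A_i = (xn - rx[:, 0]) / Rn                 # A_i
    B_i = (yn - rx[:, 1]) / Rn                 # B_i

    # H: row i is [(A_i - A_{i-1})/c, (B_i - B_{i-1})/c]
    H = np.column_stack([np.diff(A_i), np.diff(B_i)]) / c
    # A: (N-1) x N differencing matrix, so that w = A @ eps
    A = np.eye(N)[1:] - np.eye(N)[:-1]
    Cw = sigma**2 * (A @ A.T)                  # C_w = sigma^2 A A^T
    print(H, Cw, sep="\n")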
Apply BLUE to TDOA Linearized Model

$$\hat{\boldsymbol{\theta}}_{BLUE} = (\mathbf{H}^T\mathbf{C}_w^{-1}\mathbf{H})^{-1}\mathbf{H}^T\mathbf{C}_w^{-1}\mathbf{x} = \left[\mathbf{H}^T(\mathbf{A}\mathbf{A}^T)^{-1}\mathbf{H}\right]^{-1}\mathbf{H}^T(\mathbf{A}\mathbf{A}^T)^{-1}\mathbf{x}$$

The dependence on σ² cancels out!!!

$$\mathbf{C}_{\hat{\theta}} = (\mathbf{H}^T\mathbf{C}_w^{-1}\mathbf{H})^{-1} = \sigma^2\left[\mathbf{H}^T(\mathbf{A}\mathbf{A}^T)^{-1}\mathbf{H}\right]^{-1}$$

This covariance describes how large the location error is.

Things we can now do:
1. Explore the estimation error covariance for different Tx/Rx geometries
   • Plot error ellipses (see the sketch below)
2. Analytically explore simple geometries to find trends
   • See next chart (more details in book)
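Here is a hedged end-to-end Python sketch that simulates TDOAs, applies the BLUE, and extracts error-ellipse axes from Cθ̂ (all geometry and noise values are made-up, as in the earlier sketches):

    import numpy as np

    rng = np.random.default_rng(3)
    c = 3e8
    rx = np.array([[-2000.0, 0.0], [0.0, 0.0], [2000.0, 0.0]])  # made-up Rx positions
    tx = np.array([500.0, 3000.0])             # made-up true Tx location
    xn, yn = 450.0, 3100.0                     # made-up nominal point
    To, sigma = 1e-3, 1e-8                     # made-up emission time, TOA noise std
    N = len(rx)

    # Linearized-model quantities (as built on the previous chart)
    Rn = np.linalg.norm(np.array([xn, yn]) - rx, axis=1)
    H = np.column_stack([np.diff((xn - rx[:, 0]) / Rn),
                         np.diff((yn - rx[:, 1]) / Rn)]) / c
    A = np.eye(N)[1:] - np.eye(N)[:-1]         # differencing matrix: w = A @ eps
    Cw = sigma**2 * (A @ A.T)

    # Simulate TOAs, shift by the known Rn/c, difference to get TDOAs (To cancels)
    t = To + np.linalg.norm(tx - rx, axis=1) / c + sigma * rng.standard_normal(N)
    tau = np.diff(t - Rn / c)

    # BLUE and its covariance (sigma^2 cancels in the estimate itself)
    F = H.T @ np.linalg.solve(Cw, H)
    theta_hat = np.linalg.solve(F, H.T @ np.linalg.solve(Cw, tau))
    C_theta = np.linalg.inv(F)

    # Error-ellipse axes: eigen-decomposition of C_theta
    evals, evecs = np.linalg.eigh(C_theta)
    print("estimate (dx, dy):", theta_hat, "  true:", tx - np.array([xn, yn]))
    print("1-sigma error-ellipse semi-axes (m):", np.sqrt(evals))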

Apply TDOA Result to Simple Geometry

[Figure: Tx at range R above the middle receiver; three collinear receivers Rx1, Rx2, Rx3 with equal spacing d; α is the angle shown in the figure.]

Then one can show:

$$\mathbf{C}_{\hat{\theta}} = \sigma^2 c^2 \begin{bmatrix} \dfrac{1}{2\cos^2\alpha} & 0 \\[2mm] 0 & \dfrac{3}{2(1-\sin\alpha)^2} \end{bmatrix}$$

Diagonal error covariance ⇒ axis-aligned error ellipse (axes ex, ey).

And… the y-error is always bigger than the x-error.
[Plot: normalized standard deviations σx/(cσ) and σy/(cσ) versus α (degrees, 0 to 90), on a log scale from 10⁻¹ to 10³; the σy curve lies above the σx curve for all α.]

(Same geometry as the previous chart: Tx at range R, receivers Rx1, Rx2, Rx3 with spacing d.)

• Used standard deviations to show the units of x & y
• Normalized by cσ… get actual values by multiplying by your specific cσ value
• For fixed range R: increasing the Rx spacing d improves accuracy
• For fixed spacing d: decreasing the range R improves accuracy
