An Introduction To Signal Detection and Estimation - Second Edition Chapter IV: Selected Solutions
Exercise 1:
a.
b. The posterior density of $\Theta$ given $Y = y$ is

$$w(\theta|y) = \frac{e^{-\theta}}{1 - e^{-|y|}}, \quad 0 \le \theta \le |y|,$$

so that

$$\hat\theta_{MMSE}(y) = \frac{1}{1 - e^{-|y|}}\int_0^{|y|} \theta\, e^{-\theta}\, d\theta = \frac{1 - (1 + |y|)\, e^{-|y|}}{1 - e^{-|y|}}.$$
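As a quick numerical sanity check of this closed form, the posterior mean can be computed by quadrature; a minimal sketch assuming SciPy and the truncated-exponential posterior reconstructed above (the values of y are illustrative):

```python
# Check: the mean of a density proportional to e^{-theta} on [0, |y|]
# should equal (1 - (1 + |y|) e^{-|y|}) / (1 - e^{-|y|}).
import numpy as np
from scipy.integrate import quad

for y in [0.5, 1.0, 3.0]:
    a = abs(y)
    num, _ = quad(lambda t: t * np.exp(-t), 0.0, a)  # integral of theta * e^{-theta}
    den, _ = quad(lambda t: np.exp(-t), 0.0, a)      # normalizing constant
    closed = (1.0 - (1.0 + a) * np.exp(-a)) / (1.0 - np.exp(-a))
    assert np.isclose(num / den, closed)
    print(f"y = {y}: theta_MMSE = {num / den:.6f}")
```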
Exercise 3:
The posterior density of $\Theta$ given $Y = y$ is

$$w(\theta|y) = \frac{\theta^y e^{-\theta}\,\lambda e^{-\lambda\theta}}{\int_0^\infty t^y e^{-t}\,\lambda e^{-\lambda t}\, dt} = \frac{\theta^y\, e^{-(\lambda+1)\theta}\,(\lambda+1)^{y+1}}{y!}, \quad \theta > 0.$$

So:

$$\hat\theta_{MMSE}(y) = \frac{(\lambda+1)^{y+1}}{y!}\int_0^\infty \theta^{y+1}\, e^{-(\lambda+1)\theta}\, d\theta = \frac{y+1}{\lambda+1};$$

$$\hat\theta_{MAP}(y) = \arg\max_{\theta > 0}\left[y\log\theta - (\lambda+1)\theta\right] = \frac{y}{\lambda+1};$$
and $\hat\theta_{ABS}(y)$ solves

$$\int_0^{\hat\theta_{ABS}(y)} w(\theta|y)\, d\theta = \frac{1}{2},$$
which reduces to
$$\sum_{k=0}^{y} \frac{\left[(\lambda+1)\,\hat\theta_{ABS}(y)\right]^k}{k!} = \frac{1}{2}\, e^{(\lambda+1)\,\hat\theta_{ABS}(y)}.$$
Note that the series on the left-hand side is the truncated power series expansion of $\exp\{(\lambda+1)\hat\theta_{ABS}(y)\}$, so that $\hat\theta_{ABS}(y)$ is the value at which this series, truncated at the point $y$, equals half of its untruncated value.
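Since the posterior is a gamma density with shape $y+1$ and rate $\lambda+1$, this condition can be verified numerically: the posterior median must satisfy the truncated-series equation. A minimal sketch assuming SciPy (the values of λ and y are illustrative):

```python
# theta_ABS is the median of a Gamma(shape=y+1, rate=lam+1) posterior.
# Verify it satisfies sum_{k=0}^{y} x^k / k! = e^x / 2, x = (lam+1)*theta_ABS.
import numpy as np
from math import exp, factorial
from scipy.stats import gamma

lam, y = 1.0, 4
theta_abs = gamma.ppf(0.5, a=y + 1, scale=1.0 / (lam + 1))  # posterior median
x = (lam + 1) * theta_abs
lhs = sum(x**k / factorial(k) for k in range(y + 1))  # truncated exp series
assert np.isclose(lhs, exp(x) / 2)
print(theta_abs, lhs, exp(x) / 2)
```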
Exercise 7:
We have
$$p_\theta(y) = \begin{cases} e^{\theta - y} & \text{if } y \ge \theta \\ 0 & \text{if } y < \theta \end{cases}$$

and

$$w(\theta) = \begin{cases} 1 & \text{if } \theta \in (0,1) \\ 0 & \text{if } \theta \notin (0,1). \end{cases}$$

Thus,

$$w(\theta|y) = \frac{e^{\theta - y}}{\int_0^{\min\{1,y\}} e^{\theta - y}\, d\theta} = \frac{e^{\theta}}{e^{\min\{1,y\}} - 1}, \quad 0 < \theta < \min\{1,y\}.$$
From this,

$$\hat\theta_{MMSE}(y) = \frac{\int_0^{\min\{1,y\}} \theta\, e^{\theta}\, d\theta}{e^{\min\{1,y\}} - 1} = \frac{\left[\min\{1,y\} - 1\right] e^{\min\{1,y\}} + 1}{e^{\min\{1,y\}} - 1};$$

$$\hat\theta_{MAP}(y) = \arg\max_{0 \le \theta \le \min\{1,y\}} e^{\theta} = \min\{1,y\};$$

and $\hat\theta_{ABS}(y)$ solves

$$\int_0^{\hat\theta_{ABS}(y)} e^{\theta}\, d\theta = \frac{1}{2}\left(e^{\min\{1,y\}} - 1\right),$$

which gives

$$\hat\theta_{ABS}(y) = \log\left(\frac{e^{\min\{1,y\}} + 1}{2}\right).$$
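These estimates can be cross-checked numerically against the posterior; a minimal sketch assuming SciPy (the values of y are illustrative):

```python
# Verify the closed-form MMSE and ABS estimates against direct
# quadrature/root-finding on the posterior e^theta / (e^m - 1), m = min{1, y}.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

for y in [0.3, 0.8, 2.5]:
    m = min(1.0, y)
    den = np.exp(m) - 1.0
    mmse_quad = quad(lambda t: t * np.exp(t), 0.0, m)[0] / den
    mmse_closed = ((m - 1.0) * np.exp(m) + 1.0) / den
    abs_closed = np.log((np.exp(m) + 1.0) / 2.0)
    abs_root = brentq(lambda t: (np.exp(t) - 1.0) / den - 0.5, 0.0, m)
    assert np.isclose(mmse_quad, mmse_closed) and np.isclose(abs_root, abs_closed)
    print(f"y = {y}: MMSE = {mmse_closed:.4f}, MAP = {m:.4f}, ABS = {abs_closed:.4f}")
```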
Exercise 8:
a. We have
$$w(\theta|y) = \frac{e^{\theta - y}\, e^{-\theta}}{\int_0^y e^{\theta - y}\, e^{-\theta}\, d\theta} = \frac{e^{-y}}{y\, e^{-y}} = \frac{1}{y}, \quad 0 \le \theta \le y,$$

and $w(\theta|y) = 0$ otherwise. That is, given $Y = y$, $\Theta$ is uniformly distributed on the interval $[0, y]$. From this we have immediately that $\hat\theta_{MMSE}(y) = \hat\theta_{ABS}(y) = y/2$.
b. We have
$$MMSE = E\left\{\mathrm{Var}(\Theta|Y)\right\}.$$

Since $w(\theta|y)$ is uniform on $[0,y]$, $\mathrm{Var}(\Theta|Y) = Y^2/12$. The marginal density of $Y$ is

$$p(y) = \int_0^y e^{-y}\, d\theta = y\, e^{-y}, \quad y > 0,$$

from which

$$MMSE = \frac{E\{Y^2\}}{12} = \frac{1}{12}\int_0^\infty y^3 e^{-y}\, dy = \frac{3!}{12} = \frac{1}{2}.$$
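Since $p_\theta(y) = e^{\theta - y}$, $y \ge \theta$, corresponds to $Y = \Theta + N$ with $N \sim \mathrm{Exp}(1)$ independent of $\Theta \sim \mathrm{Exp}(1)$, the result $MMSE = 1/2$ is easy to confirm by simulation (a minimal sketch assuming NumPy):

```python
# Monte Carlo check that the conditional-mean estimate Y/2 achieves
# mean-square error 1/2 when Theta ~ Exp(1) and Y = Theta + N, N ~ Exp(1).
import numpy as np

rng = np.random.default_rng(0)
theta = rng.exponential(size=1_000_000)
y = theta + rng.exponential(size=theta.size)
print(np.mean((theta - y / 2.0) ** 2))  # approximately 0.5
```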
c. In this case,
$$p_\theta(y)\, w(\theta) = \exp\left\{-\sum_{k=1}^n y_k + (n-1)\,\theta\right\}, \quad 0 < \theta < \min\{y_1, \ldots, y_n\},$$

from which

$$\hat\theta_{MAP}(y) = \arg\max_{0 < \theta < \min\{y_1,\ldots,y_n\}} \exp\left\{-\sum_{k=1}^n y_k + (n-1)\,\theta\right\} = \min\{y_1, \ldots, y_n\}.$$
Exercise 13:
a. We have
$$p_\theta(y) = \theta^{T(y)}\,(1-\theta)^{n - T(y)},$$

where

$$T(y) = \sum_{k=1}^n y_k.$$
Rewriting this as
$$p_\theta(y) = C(\theta)\, e^{x\, T(y)}$$

with $x = \log(\theta/(1-\theta))$ and $C(\theta) = (1-\theta)^n$, we see from the Completeness Theorem for Exponential Families that $T(y)$ is a complete sufficient statistic for $x$, and hence for $\theta$ (assuming $\theta$ ranges throughout $(0,1)$). Thus, any unbiased function of $T$ is an MVUE for $\theta$. Since $E_\theta\{T(Y)\} = n\theta$, such an estimate is given by

$$\hat\theta_{MV}(y) = \frac{T(y)}{n} = \frac{1}{n}\sum_{k=1}^n y_k.$$
b.
Since the MLE equals the MVUE, we have immediately that $E_\theta\{\hat\theta_{ML}(Y)\} = \theta$. The variance of $\hat\theta_{ML}(Y)$ is easily computed to be $\theta(1-\theta)/n$.
c. We have
$$\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y) = -\frac{T(Y)}{\theta^2} - \frac{n - T(Y)}{(1-\theta)^2},$$

from which

$$I_\theta = E_\theta\left\{-\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \frac{n}{\theta} + \frac{n}{1-\theta} = \frac{n}{\theta(1-\theta)}.$$

The CRLB is thus

$$\mathrm{CRLB} = \frac{1}{I_\theta} = \frac{\theta(1-\theta)}{n} = \mathrm{Var}_\theta\left(\hat\theta_{ML}(Y)\right).$$
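This agreement is easy to see in simulation; a minimal sketch assuming NumPy (parameter values are illustrative):

```python
# Empirical variance of the Bernoulli sample mean versus the CRLB
# theta (1 - theta) / n, which it attains (it is the MVUE).
import numpy as np

rng = np.random.default_rng(1)
theta, n, trials = 0.3, 50, 200_000
est = rng.binomial(n, theta, size=trials) / n  # T(y)/n for each trial
print(est.var(), theta * (1.0 - theta) / n)    # the two agree closely
```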
Exercise 15:
We have
$$p_\theta(y) = \frac{e^{-\theta}\,\theta^y}{y!}, \quad y = 0, 1, \ldots.$$

Thus

$$\frac{\partial}{\partial\theta}\log p_\theta(y) = \frac{\partial}{\partial\theta}\left(-\theta + y\log\theta\right) = -1 + \frac{y}{\theta},$$

and

$$\frac{\partial^2}{\partial\theta^2}\log p_\theta(y) = -\frac{y}{\theta^2} < 0.$$

So

$$\hat\theta_{ML}(y) = y.$$
Since $Y$ is Poisson, we have $E_\theta\{\hat\theta_{ML}(Y)\} = \mathrm{Var}_\theta(\hat\theta_{ML}(Y)) = \theta$. So $\hat\theta_{ML}$ is unbiased.
Fisher's information is given by
$$I_\theta = E_\theta\left\{-\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \frac{E_\theta\{Y\}}{\theta^2} = \frac{1}{\theta}.$$

So the CRLB is $\theta$, which equals $\mathrm{Var}_\theta(\hat\theta_{ML}(Y))$. (Hence, the MLE is an MVUE in this case.)
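Again, this is simple to confirm by simulation (a sketch assuming NumPy; the value of θ is illustrative):

```python
# For Poisson(theta), theta_ML(y) = y: its mean and variance should both
# equal theta, the latter matching the CRLB.
import numpy as np

rng = np.random.default_rng(2)
theta = 4.0
y = rng.poisson(theta, size=1_000_000)
print(y.mean(), y.var())  # both approximately theta
```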
Exercise 20:
a. Note that $Y_1, Y_2, \ldots, Y_n$ are independent, with $Y_k$ having the $N(0,\, 1 + \theta s_k^2)$ distribution. Thus,

$$\log p_\theta(y) = -\frac{1}{2}\sum_{k=1}^n\left[\log\left(2\pi(1 + \theta s_k^2)\right) + \frac{y_k^2}{1 + \theta s_k^2}\right],$$

so that

$$\frac{\partial}{\partial\theta}\log p_\theta(y) = -\frac{1}{2}\sum_{k=1}^n\left[\frac{s_k^2}{1 + \theta s_k^2} - \frac{y_k^2\, s_k^2}{(1 + \theta s_k^2)^2}\right] = \frac{1}{2}\sum_{k=1}^n \frac{s_k^2\left(y_k^2 - 1 - \theta s_k^2\right)}{(1 + \theta s_k^2)^2},$$

and $\hat\theta_{ML}(y)$ is obtained by setting this derivative to zero.
b.

$$I_\theta = E_\theta\left\{-\frac{\partial^2}{\partial\theta^2}\log p_\theta(Y)\right\} = \sum_{k=1}^n\left[\frac{s_k^4\, E_\theta\{Y_k^2\}}{(1 + \theta s_k^2)^3} - \frac{s_k^4}{2(1 + \theta s_k^2)^2}\right] = \frac{1}{2}\sum_{k=1}^n \frac{s_k^4}{(1 + \theta s_k^2)^2}.$$

So the CRLB is

$$\mathrm{CRLB} = \frac{2}{\sum_{k=1}^n \frac{s_k^4}{(1 + \theta s_k^2)^2}}.$$

In the case $s_k = 1$ for all $k$, the likelihood equation of Part a. yields

$$\hat\theta_{ML}(y) = \frac{1}{n}\sum_{k=1}^n y_k^2 - 1,$$

and the CRLB reduces to $2(1+\theta)^2/n$. Thus, the bias of the MLE is 0 and the variance of the MLE equals the CRLB. (Hence, the MLE is an MVUE in this case.)
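For the $s_k = 1$ case, the unbiasedness and efficiency claims can be checked by simulation (a sketch assuming NumPy; θ and n are illustrative):

```python
# s_k = 1 for all k: Y_k ~ N(0, 1 + theta), theta_ML = mean(y^2) - 1.
# Its empirical mean should be theta and its variance 2 (1 + theta)^2 / n.
import numpy as np

rng = np.random.default_rng(3)
theta, n, trials = 2.0, 100, 100_000
y = rng.normal(scale=np.sqrt(1.0 + theta), size=(trials, n))
est = (y**2).mean(axis=1) - 1.0
print(est.mean(), est.var(), 2.0 * (1.0 + theta) ** 2 / n)
```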
Exercise 22:
a. Note that
$$p_\theta(y) = \theta^{-n}\left[\prod_{k=1}^n \frac{f(y_k)}{F(y_k)}\right]\exp\left\{\frac{1}{\theta}\sum_{k=1}^n \log F(y_k)\right\},$$

which implies that the statistic $\sum_{k=1}^n \log F(y_k)$ is a complete sufficient statistic for $\theta$ via the Completeness Theorem for Exponential Families. We have

$$E_\theta\left\{\sum_{k=1}^n \log F(Y_k)\right\} = n\, E_\theta\{\log F(Y_1)\} = n\int \log F(y_1)\,\frac{1}{\theta}\,[F(y_1)]^{(1-\theta)/\theta}\, f(y_1)\, dy_1.$$

Noting that $d\log F(y_1) = \frac{f(y_1)}{F(y_1)}\, dy_1$, and that $[F(y_1)]^{1/\theta} = \exp\{\log F(y_1)/\theta\}$, we can make the substitution $x = -\log F(y_1)$ to yield

$$E_\theta\left\{\sum_{k=1}^n \log F(Y_k)\right\} = -\frac{n}{\theta}\int_0^\infty x\, e^{-x/\theta}\, dx = -n\theta.$$
Thus, we have $E_\theta\{\hat\theta_{MV}(Y)\} = \theta$ for

$$\hat\theta_{MV}(y) = -\frac{1}{n}\sum_{k=1}^n \log F(y_k).$$

b. [Correction: Note that, for the given prior, the prior mean should be $E\{\Theta\} = \frac{c}{m-1}$.]
It is straightforward to see that $w(\theta|y)$ is of the same form as the prior, with $c$ replaced by $c - \sum_{k=1}^n \log F(y_k)$, and $m$ replaced by $n + m$. Thus, by inspection,

$$E\{\Theta|Y\} = \frac{c - \sum_{k=1}^n \log F(Y_k)}{m + n - 1},$$
which was to be shown. [Again, the necessary correction has been made.]
c. In this example, the prior and posterior distributions have the same form. The only
change is that the parameters of that distribution are updated as new data is observed.
A prior with this property is said to be a reproducing prior. The prior parameters, $c$ and $m$, can be thought of as coming from an earlier sample of size $m$. As $n$ becomes large compared to $m$, the importance of these prior parameters in the estimate diminishes. Note that $\sum_{k=1}^n \log F(Y_k)$ behaves like $n\, E\{\log F(Y_1)\}$ for large $n$. Thus, with $n \gg m$, the estimate is approximately given by the MVUE of Part a. Alternatively, with $m \gg n$, the estimate is approximately the prior mean, $c/(m-1)$. Between these two extremes, there is a balance between prior and observed information.
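Part a's unbiasedness claim can be checked by simulation for a concrete choice of $F$; a minimal sketch assuming NumPy, taking $F$ to be the uniform cdf on $[0,1]$ (so that $Y_k$ can be generated as $U^\theta$ with $U$ uniform):

```python
# With F(y) = y on [0, 1], Y_k = U_k**theta has cdf y^(1/theta), matching
# p_theta.  Check E{ -(1/n) * sum log F(Y_k) } = theta.
import numpy as np

rng = np.random.default_rng(4)
theta, n, trials = 0.7, 20, 100_000
y = rng.uniform(size=(trials, n)) ** theta
est = -np.log(y).mean(axis=1)  # theta_MV for each trial
print(est.mean())              # approximately theta (unbiased)
```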
Exercise 23:
a. The log-likelihood function is
$$\log p(y|A, \phi) = -\frac{1}{2\sigma^2}\sum_{k=1}^n\left[y_k - A\sin\left(\frac{\pi k}{2} + \phi\right)\right]^2 - \frac{n}{2}\log\left(2\pi\sigma^2\right).$$
Differentiating with respect to $A$ and $\phi$ and setting the results to zero yields the likelihood equations

$$\sum_{k=1}^n\left[y_k - A\sin\left(\frac{\pi k}{2} + \phi\right)\right]\sin\left(\frac{\pi k}{2} + \phi\right) = 0$$

and

$$A\sum_{k=1}^n\left[y_k - A\sin\left(\frac{\pi k}{2} + \phi\right)\right]\cos\left(\frac{\pi k}{2} + \phi\right) = 0.$$

Solving these equations gives

$$\hat A_{ML} = 2\sqrt{y_c^2 + y_s^2} \quad\text{and}\quad \hat\phi_{ML} = \tan^{-1}\left(\frac{y_c}{y_s}\right),$$
where
$$y_c = \frac{1}{n}\sum_{k=1}^n y_k\cos\left(\frac{\pi k}{2}\right) = \frac{1}{n}\sum_{k=1}^{n/2}(-1)^k\, y_{2k}$$

and

$$y_s = \frac{1}{n}\sum_{k=1}^n y_k\sin\left(\frac{\pi k}{2}\right) = \frac{1}{n}\sum_{k=1}^{n/2}(-1)^{k+1}\, y_{2k-1},$$

with $n$ taken to be even.
b. Maximizing the posterior density of $A$ yields

$$\hat A_{MAP}(y) = \frac{\hat A_{ML}}{2(1+\epsilon)}\left[1 + \sqrt{1 + \frac{8\sigma^2(1+\epsilon)}{n\,\hat A_{ML}^2}}\right],$$

where $\epsilon = \frac{2\sigma^2}{n\sigma_A^2}$, with $\sigma_A^2$ the parameter of the Rayleigh prior on $A$.
c. Note that, as $\sigma_A \to \infty$ (i.e., as the prior diffuses), the MAP estimate of $A$ does not approach the MLE of $A$. However, as $n \to \infty$, the MAP estimate does approach the MLE.
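The limiting behavior in Part c can be seen numerically with the formulas above (a sketch assuming NumPy; all parameter values, including the prior parameter sigma_A, are illustrative):

```python
# Compare A_ML with A_MAP for growing n: the gap closes as n -> infinity.
import numpy as np

rng = np.random.default_rng(5)
A, phi, sigma, sigma_A = 2.0, 0.6, 1.0, 3.0
for n in [8, 64, 4096]:
    k = np.arange(1, n + 1)
    y = A * np.sin(np.pi * k / 2 + phi) + rng.normal(scale=sigma, size=n)
    yc = np.mean(y * np.cos(np.pi * k / 2))
    ys = np.mean(y * np.sin(np.pi * k / 2))
    a_ml = 2.0 * np.hypot(yc, ys)
    eps = 2.0 * sigma**2 / (n * sigma_A**2)
    a_map = a_ml / (2.0 * (1.0 + eps)) * (
        1.0 + np.sqrt(1.0 + 8.0 * sigma**2 * (1.0 + eps) / (n * a_ml**2))
    )
    print(n, round(a_ml, 4), round(a_map, 4))  # a_map -> a_ml as n grows
```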