Statistics 426: Final Exam, Fall 2023

Moulinath Banerjee
December 15, 2023

Announcement: The exam carries 80 points but the maximum number of points you can
score is 65.
1. Let X1, X2, ..., Xn be i.i.d. distributed as

f(x, θ) = (3x²/θ³) · 1(0 < x ≤ θ).

Let θ0 denote the true parameter. Find the MLE of θ and show that it converges in
probability to θ0. [Hint: Calculate the probability that θ̂_MLE > θ0 − ε where ε > 0.]
(6 + 6 = 12)

2. Let (Xi, Yi), i = 1, ..., n, be i.i.d. random vectors, each distributed like (X, Y)
where X ∼ Exp(θ) and Y | X ∼ Poisson(θX). Let θ0 be the true data-generating
parameter.
(a) Show that the density of any (Xi, Yi) pair is given by:

f((x, y), θ) = θ e^(−2θx) (θx)^y / y!.

(b) Show that the MLE of θ0 is:

θ̂_ML = (Ȳ + 1) / (2X̄).

Calculate E(Y), E(X) and use these to show that θ̂_ML converges in probability to θ0.
(c) Find I(θ) and an asymptotic level 1 − α confidence interval for θ0. (4 + (7 + 7) +
(5 + 5) = 28 points)

3. Consider a hypothesis testing problem for an observation X from a distribution. Under
H0, X ∼ f0(x) where f0(x) = C x² · 1(−1 ≤ x ≤ 1), whilst under H1, X ∼ f1(x) where
f1(x) = C′ (1 − x⁴) · 1(−1 ≤ x ≤ 1). Here C, C′ are readily calculable constants so that
f0 and f1 integrate to 1, and all answers can be written in terms of these constants.
(a) Show that φ(X), the most powerful test based on the Neyman-Pearson lemma,
rejects when |X| ≤ κ, for some number κ > 0.
(b) Find κ such that the test φ(X) has level α.
(c) Calculate the power of this level-α test. (8 + 6 + 6 = 20 points)

4. Consider a Bayesian setting where the parameter space is {θ0, θ1}, with 0 < θ0, θ1 < 1.
Let Θ be a random variable such that P(Θ = θ0) = p0 and P(Θ = θ1) = p1 with
p0 + p1 = 1.
We observe Bernoulli random variables (X1, X2, ..., Xn) which are i.i.d. Bernoulli(θ0)
conditionally, given Θ = θ0, whilst given Θ = θ1, (X1, X2, ..., Xn) are conditionally
i.i.d. Bernoulli(θ1).

(a) Write down the joint probability mass function of (X1, X2, ..., Xn, Θ), i.e.
P(Θ = θi, X1 = x1, ..., Xn = xn) for i = 0, 1; and compute the posterior p.m.f. of Θ
given (X1, ..., Xn), i.e. P(Θ = θi | X1 = x1, ..., Xn = xn).
(b) A statistician will pick θ1 as the true data-generating parameter if the ratio of the
posterior probability of θ1 to that of θ0 is adequately large, say greater than T. Show
that the criterion can be written as: pick θ1 if

X̄ log(θ1/θ0) + (1 − X̄) log((1 − θ1)/(1 − θ0)) > (1/n) [log(p0/p1) + log T].

If θ1 is indeed the true value that generates the data, show that the probability of
picking θ1 goes to 1 as n → ∞ when p0 = p1 = 1/2 and θ1 = 1/2 and θ0 = 1/4. (10 +
10 = 20 points)

Solutions

1. The likelihood is

L(θ; x1, ..., xn) = ∏ᵢ (3xᵢ²/θ³) · 1(0 < xᵢ ≤ θ),

which is positive only when θ ≥ max(x1, ..., xn). On that range the log-likelihood

ℓ(θ) = Σᵢ ln(3xᵢ²) − 3n ln θ

is strictly decreasing in θ (ℓ′(θ) = −3n/θ < 0), so there is no interior critical point and the likelihood is maximized at the smallest admissible value:

θ̂_MLE = max(X1, X2, ..., Xn).

Consistency: let Mn = max(X1, X2, ..., Xn). Since Mn ≤ θ0 always, it suffices to show that P(|θ̂_MLE − θ0| > ε) = P(Mn ≤ θ0 − ε) → 0 as n → ∞ for every ε > 0. The CDF of Mn is

P(Mn ≤ y) = [P(X ≤ y)]ⁿ = (y/θ0)^(3n), 0 ≤ y ≤ θ0,

so

P(Mn ≤ θ0 − ε) = ((θ0 − ε)/θ0)^(3n) → 0 as n → ∞,

since (θ0 − ε)/θ0 < 1. Hence P(|θ̂_MLE − θ0| > ε) → 0 as n → ∞, and θ̂_MLE = max(X1, X2, ..., Xn) is a consistent estimator of θ0.
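As a quick numerical sanity check (my own sketch, not part of the exam), the consistency claim can be simulated. Sampling uses the inverse CDF: F(x) = (x/θ0)³ on (0, θ0), so X = θ0 · U^(1/3) for U ∼ Uniform(0, 1); the value θ0 = 2 is an illustrative choice.

import numpy as np

rng = np.random.default_rng(0)
theta0 = 2.0  # illustrative true parameter
for n in (10, 100, 1000, 10000):
    # F(x) = (x/theta0)^3, so X = theta0 * U^(1/3)
    x = theta0 * rng.uniform(size=n) ** (1 / 3)
    print(n, x.max())  # the MLE max(X_i) creeps up toward theta0 = 2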
2(a). Since X ∼ Exp(θ), fX(x; θ) = θ e^(−θx) for x > 0, and since Y | X ∼ Poisson(θX),

P(Y = y | X = x) = e^(−θx) (θx)^y / y!, y ∈ {0, 1, 2, ...}.

Multiplying the conditional p.m.f. by the marginal density,

f((x, y), θ) = f_{Y|X}(y | x; θ) · fX(x; θ) = [e^(−θx) (θx)^y / y!] · θ e^(−θx) = θ e^(−2θx) (θx)^y / y!.
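A quick check (my own, not exam content) that this is a valid joint density: summing it over y should recover the Exp(θ) marginal θ e^(−θx), since Σ_y e^(−θx)(θx)^y/y! = 1. Numerically, with an illustrative θ:

import numpy as np
from math import factorial

theta = 1.3                                   # illustrative value
for x in (0.3, 1.0, 2.5):
    total = sum(theta * np.exp(-2 * theta * x) * (theta * x) ** y / factorial(y)
                for y in range(100))          # sum the joint density over y
    print(total, theta * np.exp(-theta * x))  # the two columns should agree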
2(b). The likelihood is

L(θ; (xi, yi), i = 1, ..., n) = ∏ᵢ θ e^(−2θxᵢ) (θxᵢ)^(yᵢ) / yᵢ!,

so the log-likelihood is

ℓ(θ) = Σᵢ [ln θ + yᵢ ln(θxᵢ) − 2θxᵢ − ln yᵢ!]
     = n ln θ + (Σᵢ yᵢ) ln θ + Σᵢ yᵢ ln xᵢ − 2θ Σᵢ xᵢ − Σᵢ ln yᵢ!.

Setting the derivative to zero,

ℓ′(θ) = n/θ + (Σᵢ yᵢ)/θ − 2 Σᵢ xᵢ = 0 ⟹ θ̂_ML = (Σᵢ yᵢ + n) / (2 Σᵢ xᵢ) = (Ȳ + 1) / (2X̄),

where Ȳ = (1/n) Σᵢ Yᵢ and X̄ = (1/n) Σᵢ Xᵢ; since ℓ″(θ) = −(n + Σᵢ yᵢ)/θ² < 0, this is indeed the maximizer.

Consistency: E(Y) = E(E(Y | X)) = E(θ0 X) = θ0 E(X) = θ0 · (1/θ0) = 1, so Ȳ → 1 in probability as n → ∞ by the law of large numbers. Since X ∼ Exp(θ0), E(X) = 1/θ0, so X̄ → 1/θ0 in probability. Hence

θ̂_ML = (Ȳ + 1) / (2X̄) → (1 + 1) / (2/θ0) = θ0 in probability.
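The limit can likewise be checked by simulation; a minimal sketch with an illustrative θ0 = 1.5 (note that NumPy's exponential sampler takes the scale 1/θ, not the rate):

import numpy as np

rng = np.random.default_rng(1)
theta0, n = 1.5, 100_000
x = rng.exponential(scale=1 / theta0, size=n)  # X ~ Exp(theta0)
y = rng.poisson(theta0 * x)                    # Y | X ~ Poisson(theta0 * X)
print((y.mean() + 1) / (2 * x.mean()))         # close to theta0 = 1.5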

2(c). We have the Fisher information I(θ) = −E[ℓ″(θ)] for a single pair. The log-density of one pair is

ℓ(θ) = (1 + y) ln θ + y ln x − 2θx − ln y!,

so ℓ′(θ) = (1 + y)/θ − 2x and ℓ″(θ) = −(1 + y)/θ². Hence

I(θ) = E[(1 + Y)/θ²] = (1 + E(Y))/θ² = 2/θ².

By the asymptotic normality of the MLE, θ̂_ML ≈ N(θ0, 1/(n I(θ0))) = N(θ0, θ0²/(2n)), so a level 1 − α asymptotic confidence interval for θ0 is

θ̂_ML ± z_(α/2) · θ̂_ML / √(2n).
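A minimal coverage check for this interval (my own sketch, with z_(0.025) hard-coded as 1.96 and an illustrative θ0):

import numpy as np

rng = np.random.default_rng(2)
theta0, n, reps, z = 1.5, 500, 2000, 1.96
cover = 0
for _ in range(reps):
    x = rng.exponential(scale=1 / theta0, size=n)
    y = rng.poisson(theta0 * x)
    th = (y.mean() + 1) / (2 * x.mean())       # the MLE from part (b)
    half = z * th / np.sqrt(2 * n)             # half-width using I(theta) = 2/theta^2
    cover += (th - half <= theta0 <= th + half)
print(cover / reps)  # should be close to 0.95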
3(a). First the constants. For f0:

∫_{−1}^{1} C x² dx = 1 ⟹ C · (2/3) = 1 ⟹ C = 3/2.

For f1:

∫_{−1}^{1} C′ (1 − x⁴) dx = 1 ⟹ C′ (2 − 2/5) = 1 ⟹ C′ = 5/8.

So

f0(x) = (3/2) x² · 1(−1 ≤ x ≤ 1), f1(x) = (5/8)(1 − x⁴) · 1(−1 ≤ x ≤ 1).

The likelihood ratio is

λ(x) = f1(x)/f0(x) = (5/12) (1 − x⁴)/x² = (5/12) (1/x² − x²),

which is strictly decreasing in |x| on (0, 1]. So rejecting H0 when λ(X) ≥ k is equivalent to rejecting when |X| ≤ κ for some number κ > 0, and by the Neyman-Pearson lemma this test φ(X) is most powerful.
3(b). The level is

α = P(reject H0 | H0 true) = P(|X| ≤ κ) with X ∼ f0
  = ∫_{−κ}^{κ} (3/2) x² dx = [x³/2]_{−κ}^{κ} = κ³.

Hence κ = α^(1/3).
3(c). The power is

P(reject H0 | H1 true) = P(|X| ≤ κ) with X ∼ f1
  = ∫_{−κ}^{κ} (5/8)(1 − x⁴) dx = (5/8)(2κ − 2κ⁵/5) = (5κ − κ⁵)/4
  = (5α^(1/3) − α^(5/3))/4.
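Both formulas can be verified by Monte Carlo (my own sketch, with an illustrative α = 0.05): f0 is sampled by inverse CDF (F0(x) = (x³ + 1)/2, so X = sign(U)|U|^(1/3) for U ∼ Uniform(−1, 1)), and f1 by rejection from Uniform(−1, 1), accepting with probability 1 − x⁴.

import numpy as np

rng = np.random.default_rng(3)
alpha = 0.05
kappa = alpha ** (1 / 3)

u = rng.uniform(-1, 1, size=1_000_000)
x0 = np.sign(u) * np.abs(u) ** (1 / 3)       # X ~ f0 via inverse CDF
print((np.abs(x0) <= kappa).mean(), alpha)   # empirical level vs alpha

y = rng.uniform(-1, 1, size=2_000_000)
keep = rng.uniform(size=y.size) <= 1 - y**4  # rejection sampling for f1
x1 = y[keep]
print((np.abs(x1) <= kappa).mean(), (5 * kappa - kappa**5) / 4)  # power vs closed form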
4(a). For i = 0, 1, the joint p.m.f. is

P(Θ = θi, X1 = x1, ..., Xn = xn) = P(Θ = θi) · P(X1 = x1, ..., Xn = xn | Θ = θi).

Since (X1, X2, ..., Xn) are i.i.d. Bernoulli(θi) given Θ = θi,

P(X1 = x1, ..., Xn = xn | Θ = θi) = ∏ⱼ P(Xⱼ = xⱼ | Θ = θi) = θi^(Σⱼ xⱼ) (1 − θi)^(n − Σⱼ xⱼ),

so

P(Θ = θi, X1 = x1, ..., Xn = xn) = pi θi^(Σⱼ xⱼ) (1 − θi)^(n − Σⱼ xⱼ).

Posterior probability: by Bayes' theorem,

P(Θ = θi | X1 = x1, ..., Xn = xn) = P(Θ = θi, X1 = x1, ..., Xn = xn) / P(X1 = x1, ..., Xn = xn),

where P(X1 = x1, ..., Xn = xn) = p0 θ0^(Σⱼ xⱼ)(1 − θ0)^(n − Σⱼ xⱼ) + p1 θ1^(Σⱼ xⱼ)(1 − θ1)^(n − Σⱼ xⱼ). Hence

P(Θ = θi | X1 = x1, ..., Xn = xn)
  = pi θi^(Σⱼ xⱼ) (1 − θi)^(n − Σⱼ xⱼ) / [p0 θ0^(Σⱼ xⱼ) (1 − θ0)^(n − Σⱼ xⱼ) + p1 θ1^(Σⱼ xⱼ) (1 − θ1)^(n − Σⱼ xⱼ)].
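A small helper (an illustration of the closed form above, not exam content) that evaluates this posterior directly; the function name and example values are my own:

import numpy as np

def posterior(x, theta0, theta1, p0):
    """Return (P(Theta=theta0 | x), P(Theta=theta1 | x)) for 0/1 data x."""
    s, n = sum(x), len(x)
    w0 = p0 * theta0**s * (1 - theta0) ** (n - s)        # joint weight for theta0
    w1 = (1 - p0) * theta1**s * (1 - theta1) ** (n - s)  # joint weight for theta1
    return w0 / (w0 + w1), w1 / (w0 + w1)

print(posterior([1, 0, 1, 1], theta0=0.25, theta1=0.5, p0=0.5))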
4(b). The statistician will pick θ1 if

P(Θ = θ1 | X1 = x1, ..., Xn = xn) / P(Θ = θ0 | X1 = x1, ..., Xn = xn) > T.

The normalizing constant from part (a) cancels in this ratio, so the criterion is

[p1 θ1^(Σⱼ xⱼ) (1 − θ1)^(n − Σⱼ xⱼ)] / [p0 θ0^(Σⱼ xⱼ) (1 − θ0)^(n − Σⱼ xⱼ)] > T.

Taking logarithms,

log(p1/p0) + (Σⱼ xⱼ) log(θ1/θ0) + (n − Σⱼ xⱼ) log((1 − θ1)/(1 − θ0)) > log T.

Dividing through by n and writing X̄ = (1/n) Σⱼ Xⱼ, the criterion becomes: pick θ1 if

X̄ log(θ1/θ0) + (1 − X̄) log((1 − θ1)/(1 − θ0)) > (1/n) [log(p0/p1) + log T].

Now suppose θ1 is the true data-generating value, with p0 = p1 = 1/2, θ1 = 1/2 and θ0 = 1/4. Then log(p0/p1) = 0, so the right-hand side is (log T)/n → 0 as n → ∞. By the law of large numbers, X̄ → θ1 = 1/2 in probability, so the left-hand side converges in probability to

(1/2) log(2) + (1/2) log((1/2)/(3/4)) = (1/2) log 2 + (1/2) log(2/3) = (1/2) log(4/3) > 0.

Since the left-hand side converges to a strictly positive constant while the right-hand side tends to 0, the probability that the criterion holds, i.e. the probability of picking θ1, goes to 1 as n → ∞.
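This limiting behavior is easy to see in simulation; a minimal sketch with the stated values and an illustrative threshold T = 10:

import numpy as np

rng = np.random.default_rng(4)
theta0, theta1, T, reps = 0.25, 0.5, 10.0, 2000
for n in (10, 50, 200, 1000):
    x = rng.binomial(1, theta1, size=(reps, n))  # data generated under theta1
    xbar = x.mean(axis=1)
    lhs = (xbar * np.log(theta1 / theta0)
           + (1 - xbar) * np.log((1 - theta1) / (1 - theta0)))
    rhs = np.log(T) / n                          # log(p0/p1) = 0 when p0 = p1
    print(n, (lhs > rhs).mean())                 # fraction picking theta1 -> 1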