
Minimum Variance Estimator


An estimator that constrains the bias to be zero and then minimizes the variance is known as a minimum variance estimator.
In statistics, a minimum-variance unbiased estimator (MVUE), or uniformly minimum-variance unbiased estimator (UMVUE), is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter.

Theorem:
If the statistic \( t \) is such that \( \frac{\partial \log L}{\partial \theta} \) can be expressed in the form
\[
\frac{\partial \log L}{\partial \theta} = A(\theta)\,(t - \theta),
\]
then \( t \) is an MVB (Minimum Variance Bound) unbiased estimator of \( \theta \) with variance \( \dfrac{1}{A(\theta)} \).

Proof:
Suppose \( E(t) = \theta + b(\theta) \), where \( b(\theta) \) is the bias of \( t \) and is a differentiable function of \( \theta \). Then the MVB is of the following form:
\[
\operatorname{var}(t) \ge \frac{\left[1 + b'(\theta)\right]^2}{-E\left(\dfrac{\partial^2 \log L}{\partial \theta^2}\right)} \quad \ldots \; (1)
\]

Condition under which the MVB is attained

For an unbiased estimator \( t \) of \( \tau(\theta) \),
\[
\operatorname{var}(t) \ge \frac{\left[\tau'(\theta)\right]^2}{-E\left(\dfrac{\partial^2 \log L}{\partial \theta^2}\right)} \quad \ldots \; (2)
\]

Again, we know that the squared correlation between \( t \) and \( \frac{\partial \log L}{\partial \theta} \) cannot exceed one:
\[
\rho^2 = \frac{\left[\operatorname{cov}\left(t,\, \dfrac{\partial \log L}{\partial \theta}\right)\right]^2}{\operatorname{var}(t)\,\operatorname{var}\left(\dfrac{\partial \log L}{\partial \theta}\right)} \le 1
\]
\[
\Rightarrow \left[\operatorname{cov}\left(t,\, \frac{\partial \log L}{\partial \theta}\right)\right]^2 \le \operatorname{var}(t)\,\operatorname{var}\left(\frac{\partial \log L}{\partial \theta}\right)
\]
The bound is attained when equality holds, i.e. when \( t \) and \( \frac{\partial \log L}{\partial \theta} \) are linearly related. Therefore, we can write


\[
\frac{\partial \log L}{\partial \theta} - E\left(\frac{\partial \log L}{\partial \theta}\right) = A\left(t - \tau(\theta)\right)
\]
\[
\Rightarrow \frac{\partial \log L}{\partial \theta} = A\left(t - \tau(\theta)\right), \qquad \text{since } E\left(\frac{\partial \log L}{\partial \theta}\right) = 0,
\]
where \( A \) is independent of the \( x \)'s but may be a function of \( \theta \). Then we can write (the equality case of the Cramer-Rao lower bound)
\[
\frac{\partial \log L}{\partial \theta} = A(\theta)\left(t - \tau(\theta)\right) \quad \ldots \; (3)
\]
We have
\[
\operatorname{var}\left(\frac{\partial \log L}{\partial \theta}\right) = A(\theta)^2\, \operatorname{var}(t).
\]
Since \( -E\left(\frac{\partial^2 \log L}{\partial \theta^2}\right) = \operatorname{var}\left(\frac{\partial \log L}{\partial \theta}\right) \), equality in equation (2) gives
\[
\operatorname{var}(t) = \frac{\left[\tau'(\theta)\right]^2}{\operatorname{var}\left(\dfrac{\partial \log L}{\partial \theta}\right)}
\]
\[
\Rightarrow \operatorname{var}(t) = \frac{\left[\tau'(\theta)\right]^2}{A(\theta)^2\, \operatorname{var}(t)}
\]
\[
\Rightarrow \left[\operatorname{var}(t)\right]^2 = \frac{\left[\tau'(\theta)\right]^2}{A(\theta)^2}
\]
\[
\Rightarrow \operatorname{var}(t) = \frac{\tau'(\theta)}{A(\theta)}
\]
Thus \( t \) is an MVB estimator of \( \tau(\theta) \) with variance \( \dfrac{\tau'(\theta)}{A(\theta)} \). If \( \tau(\theta) = \theta \), then the variance is \( \dfrac{1}{A(\theta)} \), and from equation (3) we have
\[
\frac{\partial \log L}{\partial \theta} = A(\theta)\,(t - \theta).
\]

Note: If the frequency function is not of the form \( \frac{\partial \log L}{\partial \theta} = A(\theta)\,(t - \theta) \), there may still be an estimator of \( \tau(\theta) \) whose variance is, uniformly in \( \theta \), smaller than that of any other estimator; such an estimator is called an MVE (Minimum Variance Estimator).

Note: The efficiency of an estimator is given by
\[
\text{efficiency} = \frac{\text{MVB}}{\text{variance of the given estimator}}.
\]
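To make this ratio concrete, here is a minimal sketch (my own illustration, assuming the standard library only): for \( N(\mu, \sigma^2) \) the MVB for estimating \( \mu \) is \( \sigma^2/n \); the sample median is also unbiased for \( \mu \), and its simulated efficiency should come out near the asymptotic value \( 2/\pi \approx 0.637 \).

```python
import random
import statistics

# Hypothetical example (not from the notes): efficiency of the sample
# median relative to the MVB sigma^2/n for a normal population.
random.seed(0)
n, reps, sigma = 25, 4000, 1.0
medians = []
for _ in range(reps):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    medians.append(statistics.median(sample))

mvb = sigma**2 / n                     # Cramer-Rao bound for mu
var_median = statistics.pvariance(medians)
efficiency = mvb / var_median          # roughly 2/pi ~ 0.637 asymptotically
print(round(efficiency, 2))
```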
Problem 1: If \( x_1, x_2, \ldots, x_n \) are drawn from the population \( N(\mu, \sigma^2) \), where \( \sigma^2 \) is known, find the MVB estimator for \( \mu \) and also find its variance.

Solution: Since \( X \sim N(\mu, \sigma^2) \), the p.d.f. is
\[
f(x;\, \mu, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x-\mu}{\sigma}\right)^2}; \quad -\infty < x < \infty,\ -\infty < \mu < \infty,\ \sigma^2 > 0
\]
\[
\Rightarrow L = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2}
\]
\[
\Rightarrow \log L = -\frac{n}{2}\log\left(2\pi\sigma^2\right) - \frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i - \mu)^2
\]
\[
\Rightarrow \frac{\partial \log L}{\partial \mu} = \frac{1}{\sigma^2}\sum_{i=1}^{n}(x_i - \mu)
= \frac{1}{\sigma^2}\left(\sum_{i=1}^{n} x_i - n\mu\right)
= \frac{1}{\sigma^2}\left(n\bar{x} - n\mu\right)
= \frac{n}{\sigma^2}\left(\bar{x} - \mu\right)
\]
Hence \( \bar{x} \) is an MVB unbiased estimator for \( \mu \), with \( A(\theta) = \dfrac{n}{\sigma^2} \) and
\[
\operatorname{var}(\hat{\mu}) = \operatorname{var}(\bar{x}) = \frac{1}{A(\theta)} = \frac{\sigma^2}{n}.
\]
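The conclusion above can be checked numerically; this is a simulation sketch of my own (parameter values are arbitrary), verifying that the sample mean's variance matches \( \sigma^2/n \):

```python
import random
import statistics

# Editor's check: for N(mu, sigma^2) with sigma^2 known, the MVB for
# estimating mu is sigma^2/n, and the sample mean should attain it.
random.seed(1)
mu, sigma, n, reps = 2.0, 3.0, 20, 5000
means = []
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.fmean(sample))

mvb = sigma**2 / n                     # 9/20 = 0.45
var_xbar = statistics.pvariance(means)
print(var_xbar)                        # close to mvb
```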

Problem 2: If \( x_1, x_2, \ldots, x_n \) are drawn from the population \( N(0, \sigma^2) \), where \( \sigma^2 \) is unknown, find the MVB estimator for \( \sigma^2 \) and also find its variance.

Solution: Since \( X \sim N(0, \sigma^2) \), the p.d.f. is
\[
f(x;\, \sigma^2) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-\frac{1}{2}\left(\frac{x}{\sigma}\right)^2}; \quad -\infty < x < \infty,\ \sigma^2 > 0
\]
\[
\Rightarrow L = \left(\frac{1}{\sigma\sqrt{2\pi}}\right)^{n} e^{-\frac{1}{2\sigma^2}\sum_{i=1}^{n} x_i^2}
\]
\[
\Rightarrow \log L = -\frac{n}{2}\log(2\pi) - \frac{n}{2}\log \sigma^2 - \frac{1}{2\sigma^2}\sum_{i=1}^{n} x_i^2
\]
\[
\Rightarrow \frac{\partial \log L}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum_{i=1}^{n} x_i^2
= \frac{n}{2\sigma^4}\left(\frac{1}{n}\sum_{i=1}^{n} x_i^2 - \sigma^2\right)
\]
Hence \( \dfrac{1}{n}\sum_{i=1}^{n} x_i^2 \) is an MVB unbiased estimator for \( \sigma^2 \), with \( A(\theta) = \dfrac{n}{2\sigma^4} \) and
\[
\operatorname{var}(\hat{\sigma}^2) = \frac{1}{A(\theta)} = \frac{2\sigma^4}{n}.
\]
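Again the result invites a quick numerical check (my own sketch; the chosen \( \sigma^2 \), \( n \), and repetition count are arbitrary): \( \frac{1}{n}\sum x_i^2 \) should be unbiased for \( \sigma^2 \) with variance \( 2\sigma^4/n \).

```python
import random
import statistics

# Editor's check: for N(0, sigma^2) the MVB estimator of sigma^2 is
# (1/n) * sum(x_i^2), with variance 2*sigma^4/n.
random.seed(2)
sigma2, n, reps = 4.0, 30, 6000
estimates = []
for _ in range(reps):
    sample = [random.gauss(0.0, sigma2**0.5) for _ in range(n)]
    estimates.append(sum(x * x for x in sample) / n)

mvb = 2 * sigma2**2 / n                # 32/30 ~ 1.07
print(statistics.fmean(estimates))     # close to sigma2 (unbiased)
print(statistics.pvariance(estimates)) # close to mvb
```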

Problem 3: A random sample \( x_1, x_2, \ldots, x_n \) is drawn from \( P(\lambda) \). Find the MVB estimator for \( \lambda \) and also find its variance.
Solution: Since \( X \sim P(\lambda) \), the p.m.f. is
\[
f(x;\, \lambda) = \frac{e^{-\lambda}\lambda^{x}}{x!}; \quad x = 0, 1, 2, \ldots,\ \lambda > 0
\]
\[
\Rightarrow L = \frac{e^{-n\lambda}\,\lambda^{\sum x_i}}{x_1!\, x_2!\, \cdots\, x_n!}
\]
\[
\Rightarrow \log L = -n\lambda + \sum_{i=1}^{n} x_i \log \lambda - \sum \log x_i!
\]
\[
\Rightarrow \frac{\partial \log L}{\partial \lambda} = -n + \frac{\sum x_i}{\lambda}
= \frac{n\bar{x}}{\lambda} - n
= \frac{n}{\lambda}\left(\bar{x} - \lambda\right)
\]
Hence \( \bar{x} \) is an MVB unbiased estimator for \( \lambda \), with \( A(\theta) = \dfrac{n}{\lambda} \) and
\[
\operatorname{var}(\hat{\lambda}) = \frac{1}{A(\theta)} = \frac{\lambda}{n}.
\]
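A simulation sketch of my own (the helper `poisson_sample` is a hypothetical name, and Knuth's multiplicative method is used only because the standard library lacks a Poisson sampler) confirms \( \operatorname{var}(\bar{x}) \approx \lambda/n \):

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    # Knuth's multiplicative method; adequate for small lambda.
    limit = math.exp(-lam)
    k, prod = 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

# Editor's check: for P(lambda), A(theta) = n/lambda, so var(xbar) = lambda/n.
rng = random.Random(3)
lam, n, reps = 2.5, 40, 5000
means = [statistics.fmean(poisson_sample(lam, rng) for _ in range(n))
         for _ in range(reps)]

mvb = lam / n                          # 2.5/40 = 0.0625
print(statistics.fmean(means))         # close to lam
print(statistics.pvariance(means))     # close to mvb
```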
Problem 4: A random sample \( x_1, x_2, \ldots, x_n \) is drawn from \( b(1, p) \). Find the MVB estimator for \( p \) and also find its variance.
Solution: Since \( X \sim b(1, p) \), the p.m.f. is
\[
f(x;\, p) = p^{x}(1-p)^{1-x}; \quad x = 0, 1,\ 0 < p < 1
\]
\[
\Rightarrow L = p^{\sum x_i}\,(1-p)^{n - \sum x_i}
\]
\[
\Rightarrow \log L = \sum x_i \log p + \left(n - \sum_{i=1}^{n} x_i\right)\log(1-p)
\]
\[
\Rightarrow \frac{\partial \log L}{\partial p} = \frac{\sum x_i}{p} - \frac{n - \sum_{i=1}^{n} x_i}{1-p}
= \frac{n\bar{x}}{p} - \frac{n - n\bar{x}}{1-p}
= \frac{n\bar{x} - pn\bar{x} - np + pn\bar{x}}{p(1-p)}
= \frac{n}{p(1-p)}\left(\bar{x} - p\right)
\]
Hence \( \bar{x} \) is an MVB unbiased estimator for \( p \), with \( A(\theta) = \dfrac{n}{p(1-p)} \) and
\[
\operatorname{var}(\hat{p}) = \frac{1}{A(\theta)} = \frac{p(1-p)}{n}.
\]
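Finally, a simulation sketch of my own (arbitrary \( p \) and \( n \)) checks that the sample proportion's variance matches \( p(1-p)/n \):

```python
import random
import statistics

# Editor's check: for Bernoulli b(1, p), A(theta) = n/(p*(1-p)),
# so var(xbar) should equal p*(1-p)/n.
random.seed(4)
p, n, reps = 0.3, 50, 6000
means = []
for _ in range(reps):
    sample = [1 if random.random() < p else 0 for _ in range(n)]
    means.append(sum(sample) / n)

mvb = p * (1 - p) / n                  # 0.21/50 = 0.0042
print(statistics.fmean(means))         # close to p
print(statistics.pvariance(means))     # close to mvb
```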
