
FUNDAMENTALS OF TESTING STATISTICAL HYPOTHESES

NET/JRF/IAS/ISS/JAM/GATE/STATISTICAL INFERENCE

A. SANTHAKUMARAN
About the Author
A. Santhakumaran received his Ph.D. in Mathematics - Statistics from the Ramanujan Institute for Advanced Study in Mathematics, University of Madras. He has rich experience in teaching and research. He has held positions as Associate Professor and Head of the Department of Statistics at Salem Sowdeswari College, Salem, and Professor of Mathematics at the Indian Institute of Food Processing Technology, Thanjavur, Tamil Nadu. He has published research papers in Queuing Theory, Statistical Quality Control, Neural Networks, Fuzzy Statistics and Food Processing. He is the author of the book Mathematical Statistics of Probability Models.
A FEW REVIEWS OF THE FIRST EDITION

This is one of the best books of its kind. I was quite late in starting to read it; I received the book from my dad and would encourage others to read it to understand the subject.
· · · · · · Major Thompson

It really is full of knowledge and wisdom, developed in an exceptionally easy style. Reading through this publication has genuinely altered, in my opinion, the way I look at the subject.
· · · · · · Dr. Aexa Rogahn
DISTINCTIVE FEATURES

• The objectives are to motivate students to take an active interest in the material, to build their power of understanding, to encourage independent thinking, and to lead them to apply the knowledge creatively in multi-disciplinary sciences

• Care has been taken to provide conceptual clarity, simplicity and up-to-date material for current needs

• Properly graded and solved problems to illustrate each concept and procedure are presented in the text

• Multiple choice questions are specially solved for UGC NET/JRF/GATE examinations in each chapter

• The purpose of the book is different from that of traditional course materials

• The book is intended to serve as a textbook for a one-semester course on Statistical Inference for undergraduate and postgraduate Statistics students of Indian universities and other applicable sciences, allied statistical courses, mathematical sciences and various UGC competitive examinations
TO
ALL MY TEACHERS
PREFACE TO THE SECOND EDITION

The first edition of Fundamentals of Testing Statistical Hypotheses appeared in 2001 and has gone through many printings since then. So it was with a degree of reverence and some caution that we undertook this revision. Our guiding principle was to make changes only where necessary to bring the text in line with developments in the subject. Also, more emphasis is placed on UGC examination questions. Overall we believe that the main purpose of the first edition, to present a modern introduction to tests of statistical hypotheses, and the features that made the first edition such a great success have been preserved. We hope that this edition can serve a broader range of students in multi-disciplinary sciences.
A. Santhakumaran
PREFACE TO THE FIRST EDITION

The book provides conceptual clarity, simplicity and up-to-date material. Properly graded and solved problems to illustrate each concept and procedure are presented in the text. Normal curve areas and ordinates are obtained using the C language.
The book is intended to serve as a course text for undergraduate and postgraduate Statistics students of Indian universities and other applicable sciences, allied statistical courses, mathematical sciences and various competitive examinations like ISS, UGC, SLET, etc.
My special thanks to the Correspondent and Secretary of Salem Sowdeswari College, Salem, and to my colleagues for their enthusiastic and unstinted support in publishing this book. I thank M/s Atlantic Publishers and Distributors, New Delhi, for readily accepting to print this book. Finally, I wish to express my gratitude to all my teachers, under whose influence I have come to appreciate statistics as the science of a winding and twisting network connecting Mathematics, Scientific Philosophy, Computer Software and other intellectual sources of the millennium.
A. Santhakumaran
CONTENTS

1. Governing Basic Principles
   1.1 Introduction
   1.2 Test procedure
   1.3 Non-randomized test
   1.4 Randomized test
   1.5 Presentation of tests
   1.6 Choosing a test
   1.7 Most powerful test
   1.8 Most powerful test is not unique
   1.9 Monotone class of tests
   1.10 Geometrical representation of tests
   1.11 Optimum test
   1.12 Uniformly most powerful test
   Problems

2. Method of Obtaining the Most Powerful Test
   2.1 Introduction
   2.2 Neyman-Pearson Lemma
   Problems

3. Applications of Neyman-Pearson Lemma
   3.1 Introduction
   3.2 Monotone likelihood ratio property
   3.3 Exponential family of distributions
   3.4 Method of obtaining UMP test
   3.5 MLR of location family of distributions
   3.6 Locally most powerful test
   3.7 Optimum property of LMP test
   Problems

4. Applications of Generalized Neyman-Pearson Lemma
   4.1 Introduction
   4.2 Generalized Neyman-Pearson Lemma
   4.3 Unbiased test
   4.4 Uniformly most powerful unbiased test
   4.5 Locally most powerful unbiased test
   Problems

5. Neyman-Structure and Similar Tests
   5.1 Introduction
   5.2 Similar test
   5.3 Construction of similar test
   5.4 Neyman-Structure test
   Problems

Notation
Bibliography
Subject Index


1. GOVERNING BASIC PRINCIPLES

1.1 Introduction

Let X be an observable random variable with distribution P ∈ P, where P is a family of distributions. Let P0 ⊂ P and P1 = P − P0, so that P0 ∩ P1 = ∅ and P0 ∪ P1 = P.
A statement that the distribution P of the random variable X belongs to P0 is called the hypothesis H0. A competitor to the hypothesis H0 is called an alternative H1. Deciding, on the basis of the observed value x of X, whether H0 : P ∈ P0 or H1 : P ∈ P1 holds is called testing of a statistical hypothesis.
If P0 is a singleton set, H0 is called a simple hypothesis. If P0 consists of more than one distribution, then H0 is called a composite hypothesis. The alternative hypothesis H1 is likewise classified as simple or composite.
If P is an indexed family of distributions, i.e., P = {Pθ, θ ∈ Ω} where for each fixed θ the form of Pθ is completely specified and distinct values of θ give distinct distributions, then P is called a parametric family of distributions. Ω is known as the parameter space and any element of Ω is a parameter.

1.2 Test Procedure

A test is a rule of action needed to make a decision between the hypothesis H0 and the alternative H1. A test tells us whether to accept H0 or to reject H0 (in favour of H1) for any given value x of X. Tests are of two kinds: randomized and non-randomized.

1.3 Non-Randomized test

A non-randomized test is described by the set of all possible sample observations x of X for which H0 is rejected; thereby the sample space is partitioned into two sets C and C′ such that C ∪ C′ = X (the sample space) and C ∩ C′ = ∅. If the value x of X falls into the set C, H0 is rejected in favour of H1. C is called the rejection region and C′ the acceptance region.

1.4 Randomized test

A randomized test is given by a function φ(x) defined


on the sample space, i.e., φ(x) : X → [0, 1]. For any value
x of X, the randomized test chooses from two decisions, re-
jection or acceptance with probabilities φ(x) and 1 − φ(x)
respectively. If the value of X is x, a random experiment
is performed for two possible outcomes R and R̄ with the
probabilities φ(x) and 1 − φ(x) respectively. If in this exper-
iment R occurs, the hypothesis H0 is rejected in favour of
H1 , otherwise H0 is accepted. Thus φ(x) is the probability
of rejecting H0 when X = x. The function φ(x) is called a
critical or a test function
Example 1.1 To test the hypothesis H0 that a coin is unbiased against the alternative that it has a positive bias towards heads, the following procedure is suggested. Throw the coin thrice; if there are three heads, reject H0, and if there is not more than one head, accept it. If there are just two heads, throw the coin another three times and reject H0 if and only if all three additional throws show heads.
If X denotes the number of heads in the first three throws, then the test φ is

    φ(x) = 1    if x = 3
         = 1/8  if x = 2
         = 0    otherwise

The probability of rejecting H0 is given by

    α = EH0[φ(X)]
      = 1 × PH0{X = 3} + (1/8) × PH0{X = 2} + 0 × PH0{X = 0 or 1}
      = 1 × 1/8 + (1/8) × (3/8) + 0 = 11/64
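As a quick numerical check, the size of this randomized test can be computed directly from the binomial pmf; the short Python sketch below is illustrative only and not part of the test procedure itself.

    from math import comb

    def binom_pmf(k, n, p):
        # P{X = k} for X ~ b(n, p)
        return comb(n, k) * p**k * (1 - p)**(n - k)

    # randomized test of Example 1.1: phi(3) = 1, phi(2) = 1/8, phi(x) = 0 otherwise
    phi = {3: 1.0, 2: 1/8}
    alpha = sum(phi.get(x, 0.0) * binom_pmf(x, 3, 0.5) for x in range(4))
    print(alpha, 11/64)          # both equal 0.171875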
Note The class of all randomized tests is a larger class that
contains all non - randomized tests

1.5 Presentation of Tests

A non-randomized test with rejection region C can be expressed as a test function φ(x) taking the two values 0 and 1:

    φ(x) = 1  if x ∈ C
         = 0  otherwise

Note A randomized test φ satisfies 0 ≤ φ(x) ≤ 1, whereas a non-randomized test takes only the two values 0 and 1. The best test in the class of all randomized procedures is at least as good as the best non-randomized procedure. The introduction of randomized procedures leads to a deeper mathematical theory of testing hypotheses. While carrying out a test one may arrive at a correct decision or commit one of two types of error:

Type 1 error: Rejecting H0 when H0 is true
Type 2 error: Accepting H0 when H0 is false

1.6 Choosing a Test

For a good test the probabilities of both the type 1 and the type 2 error should be small. Hence it is natural to aim at a test for which the probabilities of the two types of error are minimum. But it is not possible to minimize both these probabilities simultaneously for a fixed number of observations. Usually a test that minimizes the probability of type 1 error actually maximizes the probability of type 2 error, and vice versa.
According to Neyman and Pearson, there is a discrepancy between the type 1 and type 2 errors which is due to a corresponding asymmetry between the hypothesis H0 and the alternative H1. In testing, the hypothesis H0 is well formulated and one does not want to reject H0 unless there is strong evidence against it. Thus the type 1 error is more important than the type 2 error, and on the importance of the type 1 error Neyman and Pearson laid the foundation of testing statistical hypotheses.
In testing a hypothesis, one requires the probability of type 1 error to be below a desired level and tries to minimize the probability of type 2 error within this restricted class of tests. That is, one assigns a bound to the probability of incorrectly rejecting H0 when H0 is true and tries to minimize the probability of type 2 error subject to this upper bound on the probability of committing a type 1 error.
This situation in testing hypotheses is like the problem of solving a single equation x + y = c in two variables: to solve it, one must fix one of the variables at a known value.
Thus one selects a number α between 0 and 1, called the level of significance, and imposes the condition on the test φ that

    Eθ[φ(X)] ≤ α  for all θ ∈ ΩH0,   i.e.,   Pθ{X ∈ C} ≤ α  for all θ ∈ ΩH0

One then desires to minimize Pθ{X ∈ C′} for θ ∈ ΩH1, or equivalently to maximize Pθ{X ∈ C} for θ ∈ ΩH1, subject to Pθ{X ∈ C} ≤ α for all θ ∈ ΩH0.
From this one defines

    sup{θ ∈ ΩH0} Eθ[φ(X)] = sup{θ ∈ ΩH0} Pθ{X ∈ C}

This is called the size of the test φ, or the size of the critical region C. Further,

    βφ(θ) = Eθ[φ(X)] = Pθ{X ∈ C}  for all θ ∈ Ω

where C is the critical region, is called the power function of the test φ. If θ ∈ ΩH0, then βφ(θ) is the probability of type 1 error of the test φ; if θ ∈ ΩH1, βφ(θ) is the power of the test φ.
Example 1.2 Let P = {U(0, 2), U(3, 4)} be a family of uniform distributions. Consider the test of hypothesis

    H0 : P ∈ P0 = {U(0, 2)}  vs  H1 : P ∈ P1 = {U(3, 4)}

Under H0, the observable random variable X takes values in C′ = (0, 2); under H1, X takes values in C = (3, 4). When the observed value x of X lies in C, one rejects H0, and if x lies in C′, one accepts H0. The occurrence of a value in C is critical to H0 (in favour of H1), so C is called the critical region and C′ the acceptance region. The probability of type 1 error is β(P0) = 0, P0 ∈ P0, since PH0{X ∈ C} = 0, and the power of the test is β(P1) = 1, P1 ∈ P1, since PH1{X ∈ C} = 1.
Example 1.3 Denote P = {p0, p1}, where

    p0(x) = (2/π) 1/(1 + x²)  for −∞ < x < 0,  and 0 otherwise

and

    p1(x) = √(2/π) e^{−x²/2}  for 0 < x < ∞,  and 0 otherwise

Let H0 : X ∼ p0 vs H1 : X ∼ p1. Here one chooses C = (0, ∞) and C′ = (−∞, 0). Thus the probability of type 1 error is

    PH0{X ∈ C} = ∫_0^∞ p0(x) dx = 0

and the power of the test is

    PH1{X ∈ C} = ∫_0^∞ √(2/π) e^{−x²/2} dx = 1

Example 1.4 Let X be binomially distributed with parameters n = 5 and p. To test H0 : p = 1/4 vs H1 : p = 1/2, consider the test function

    φ(x) = 0.3 if x = 0
         = 0.2 if x = 1
         = 0   otherwise

Find the power and the probability of type 1 error of the test function φ. The probability of type 1 error is

    α = EH0[φ(X)]
      = 0.3 × PH0{X = 0} + 0.2 × PH0{X = 1}
      = 0.3 × (3/4)^5 + 0.2 × 5 × (1/4)(3/4)^4
      = 0.15029

The power of the test is

    β = EH1[φ(X)]
      = 0.3 × PH1{X = 0} + 0.2 × PH1{X = 1}
      = 0.3 × (1/2)^5 + 0.2 × 5 × (1/2)^5
      = 0.0406
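The two values can be verified in the same way as before (an illustrative sketch only).

    from math import comb

    def binom_pmf(k, n, p):
        return comb(n, k) * p**k * (1 - p)**(n - k)

    phi = {0: 0.3, 1: 0.2}                                  # test function of Example 1.4
    alpha = sum(w * binom_pmf(x, 5, 0.25) for x, w in phi.items())
    beta  = sum(w * binom_pmf(x, 5, 0.50) for x, w in phi.items())
    print(round(alpha, 5), round(beta, 5))                  # 0.15029 and 0.04063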

Example 1.5 To test H0 : θ = 1 vs H1 : θ = 2 with the pdf

    pθ(x) = θ x^{θ−1}  for 0 ≤ x ≤ 1,  and 0 otherwise

from which two random observations x1 and x2 are available, find the size and the power of the test function given by

    φ(x1, x2) = 1 if x1 x2 ≥ 3/4
              = 0 otherwise

The curve x1 x2 = 3/4 is shown in Figure 1.1; the shaded portion is the critical region.

Figure 1.1 Critical region: the part of the unit square above the curve x1 x2 = 3/4

The size of the test is

    α = EH0[φ(X1, X2)]
      = PH0{X2 ≥ 3/(4X1)}
      = ∫_{3/4}^{1} ∫_{3/(4x1)}^{1} dx2 dx1
      = ∫_{3/4}^{1} (1 − 3/(4x1)) dx1
      = 1/4 + (3/4) log(3/4)

The power of the test is

    β = EH1[φ(X1, X2)]
      = ∫_{3/4}^{1} ∫_{3/(4x1)}^{1} 4 x1 x2 dx2 dx1
      = ∫_{3/4}^{1} 2 x1 (1 − 9/(16 x1²)) dx1
      = 7/16 + (9/8) log(3/4)
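The two closed forms can be checked by numerical integration over the critical region; the sketch below (assuming scipy.integrate is available) is for illustration only.

    from math import log
    from scipy.integrate import dblquad

    # size: joint density 1 on the unit square under H0
    alpha, _ = dblquad(lambda x2, x1: 1.0, 3/4, 1, lambda x1: 3/(4*x1), lambda x1: 1)
    # power: joint density 4*x1*x2 under H1
    beta, _ = dblquad(lambda x2, x1: 4*x1*x2, 3/4, 1, lambda x1: 3/(4*x1), lambda x1: 1)

    print(alpha, 1/4 + (3/4) * log(3/4))    # both ~0.0342
    print(beta, 7/16 + (9/8) * log(3/4))    # both ~0.1139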

Example 1.6 The hypothesis to test is H0 : θ = θ0 vs H1 : θ = θ1 (≠ θ0) for the probability density function

    pθ(x) = θ/x²  for θ < x < ∞, θ > 0,  and 0 otherwise

Choose a random sample of size n = 1 and level α. Here X takes values in θ < x < ∞ whether H0 or H1 is true. Since pθ(x) = θ/x² decreases as x → ∞, if one chooses the critical region C = {x | x < c}, then C has less probability under H0 than under H1, whereas the region {x | x > c} has more probability under H0 than under H1. Thus when X lies in C = {x | x < c}, H0 is rejected in favour of H1; that is, C can be a desirable critical region of size α:

    α = PH0{X ∈ C} = PH0{X < c} = ∫_{θ0}^{c} (θ0/x²) dx = 1 − θ0/c
    ⇒ c = θ0/(1 − α)

The desirable critical region is C = {x | x < θ0/(1 − α)}. Further, the power of the test is

    β = PH1{X ∈ C} = PH1{X < θ0/(1 − α)}
      = ∫_{θ1}^{θ0/(1−α)} (θ1/x²) dx
      = 1 − θ1/θ0 + (θ1/θ0) α ≠ 1

Also β ≥ α if θ1 ≤ θ0.

By enlarging the critical region the experimenter gains power, but the chance of committing a type 1 error also becomes larger. In altering the critical region one therefore trades off the best and poorest effects, which give the maximum and minimum probability of type 1 error among the class of all critical regions. In practice one also considers the smallest significance level α̂ at which the hypothesis H0 would be rejected for the given observation X = x.
Now the problem is to select a test function φ so as to maximize the power βφ(θ) = Eθ[φ(X)] for all θ ∈ ΩH1 subject to the condition that

    Eθ[φ(X)] ≤ α  for all θ ∈ ΩH0,   where   Eθ[φ(X)] = ∫ φ(x) dPθ(x)  for all θ ∈ Ω

and Pθ is the distribution of X.

1.7 Most Powerful Test

One cannot decide by definition alone that one test is better than another; one may, however, weigh the relative cost of each type of error. In the Neyman-Pearson approach, statisticians do not weigh the relative costs of the two errors, because such a comparison between tests is subjective, whereas in the Bayesian approach statisticians compare the relative costs of the two errors using a loss function.

Definition A test φ is said to be the Most Powerful (MP) level α test for testing the hypothesis H0 : θ ∈ ΩH0 vs a particular alternative H1 : θ = θ1 (θ1 ∈ ΩH1) if
(i) Eθ[φ(X)] ≤ α for all θ ∈ ΩH0, and
(ii) βφ(θ1) = Eθ1[φ(X)] is maximum among all φ's satisfying (i).

Note If ΩH1 contains only one point, the test is simply called the MP test. If ΩH1 contains more than one point, then for each θ ∈ ΩH1 one obtains an MP test against that particular alternative.

1.8 Most powerful test is not unique

Let us consider the following pmf's p0 and p1 for the observations 0, 1, 2 of the random variable X:

    x    0     1     2
    p0   0.1   0.4   0.5
    p1   0.02  0.08  0.9

One can show that there are uncountably many MP tests with level α = 0.6 for testing H0 : X ∼ p0 vs H1 : X ∼ p1. Define

    φ1(x) = 1   if x = 0, 2
          = γ1  if x = 1
          = 0   otherwise

To find the value of γ1 with level α = 0.6, consider

    EH0[φ1(X)] = PH0{X = 0} + PH0{X = 2} + γ1 PH0{X = 1}
    0.6 = 0.1 + 0.5 + 0.4 γ1  ⇒  γ1 = 0

The MP test φ1 is

    φ1(x) = 1 if x = 0, 2
          = 0 otherwise

with power βφ1 = EH1[φ1(X)] = 0.92.
Again define

    φ2(x) = 1   if x = 2
          = γ2  if x = 1
          = 0   otherwise

so that 0.6 = PH0{X = 2} + γ2 PH0{X = 1}, i.e., 0.6 = 0.5 + 0.4 γ2 ⇒ γ2 = 1/4.

The other MP test φ2 is

    φ2(x) = 1    if x = 2
          = 1/4  if x = 1
          = 0    otherwise

with power βφ2 = EH1[φ2(X)] = 0.92.
Again,

    φ(x) = λ φ1(x) + (1 − λ) φ2(x),  0 ≤ λ ≤ 1,

is also a test function, i.e., a convex combination of the test functions φ1 and φ2 is again a test function. Thus the MP test is not unique. The power of the test φ is βφ = βφ1 = βφ2 = 0.92. Here the test functions φ1 and φ2 are different but their powers are the same.
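A direct computation confirms that φ1, φ2 and any convex combination of them all have level 0.6 and power 0.92; the sketch below is illustrative only.

    p0 = {0: 0.10, 1: 0.40, 2: 0.50}
    p1 = {0: 0.02, 1: 0.08, 2: 0.90}

    phi1 = {0: 1.0, 1: 0.00, 2: 1.0}
    phi2 = {0: 0.0, 1: 0.25, 2: 1.0}
    lam  = 0.3                                   # any value in [0, 1]
    phi  = {x: lam * phi1[x] + (1 - lam) * phi2[x] for x in p0}

    size  = lambda t: sum(t[x] * p0[x] for x in p0)
    power = lambda t: sum(t[x] * p1[x] for x in p1)
    for t in (phi1, phi2, phi):
        print(size(t), power(t))                 # 0.6 and 0.92 for all three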

1.9 Monotone class of tests

Let us consider the testing problem H0 : X ∼ p0 vs H1 : X ∼ p1. The pmf's p0 and p1 are specified as

    x    0     1     2
    p0   0.1   0.2   0.7
    p1   0.3   0.4   0.3

Now choose a test function

    φ1(x) = 1 if x = 1
          = 0 otherwise

Then α = EH0[φ1(X)] = 0.2 and βφ1 = EH1[φ1(X)] = 0.4, so φ1 is a test function with level α = 0.2. Again define a test

    φ2(x) = 1    if x = 0
          = 1/2  if x = 1
          = 0    otherwise

so that α = EH0[φ2(X)] = 0.2 and βφ2 = EH1[φ2(X)] = 0.3 + (1/2) × 0.4 = 0.5. Thus φ2 is also a test function with level α = 0.2. One can easily see that φ1 forms a sub-class of φ2 with the same level α = 0.2, i.e., φ1 ⊆ φ2, and the power of φ1 is less than the power of φ2. The test φ2 has more power, since the critical region of φ2 is larger.

1.10 Geometrical representation of tests

Consider the case in which both H0 and H1 are simple, so that ΩH0 = {θ0} and ΩH1 = {θ1}, i.e., H0 : θ = θ0 vs H1 : θ = θ1. Let C be the set of all points (α, β) for which Eθ0[φ(X)] = α and Eθ1[φ(X)] = β for some test φ. For any test φ the point (α, β) ∈ C, and conversely any point (α, β) in C corresponds to a test φ such that Eθ0[φ(X)] = α and Eθ1[φ(X)] = β. Thus the class of all tests can be identified with the set C. One can easily see from Figure 1.2 that
(i) the points (0, 0) and (1, 1) belong to C, since φ(x) ≡ 1 gives (1, 1) ∈ C and φ(x) ≡ 0 gives (0, 0) ∈ C (the set C shrinks to the line segment joining these two points only when ∫ φ p0(x) dx = ∫ φ p1(x) dx for every φ, which forces p0(x) = p1(x) for all x);
(ii) C is a convex set: for (α1, β1), (α2, β2) in C and 0 ≤ λ ≤ 1,

    λ(α1, β1) + (1 − λ)(α2, β2) = (λα1 + (1 − λ)α2, λβ1 + (1 − λ)β2) ∈ C,

which holds because, φ1 and φ2 being test functions, φ = λφ1 + (1 − λ)φ2 is also a test function, so the line segment joining the two points is contained in C;
(iii) C is symmetric about (1/2, 1/2), in the sense that if (α, β) ∈ C then (1 − α, 1 − β) ∈ C, for if a test φ corresponds to (α, β), then the test 1 − φ corresponds to (1 − α, 1 − β); and
(iv) C is closed.
The points (0, 1) and (1, 0) need not be elements of C, and (0, 1) ∉ C ⇒ (1, 0) ∉ C; a point (α, β) in C close to (0, 1) corresponds to a good test. From the geometric representation one can see that there is no point in C which simultaneously minimizes the α-coordinate and maximizes the β-coordinate. If one fixes a level α = α0 and considers all tests of level α0, these are represented by the points of C whose abscissa is ≤ α0. The MP tests among these (whose existence follows from the fact that C is closed) correspond to the point on the upper boundary of C with abscissa α0. This is the only point corresponding to an MP level α0 test unless there exists a point (α, 1) in C with α < α0 (Figure 1.2). This justifies the statement that one cannot control both types of error simultaneously.

Figure 1.2 The convex set C of points (α, β), its centre of symmetry (1/2, 1/2), and the best test of level α0 on the upper boundary

Example 1.7 Let H0 : X ∼ U(0, 1) vs H1 : X ∼ U(1/4, 3/4), with the critical region C represented in Figure 1.3. The test φ is not unique; it is given by

    φ(x) = 0  if 0 ≤ x ≤ 1/4
         = γ  if 1/4 < x < 1
         = 1  if 1 ≤ x ≤ 3/2

The test φ is the MP test with size

    α = EH0[φ(X)] = ∫_{1/4}^{1} γ dx = (3/4) γ,   0 ≤ γ ≤ 1

If γ = 0.1 then α = 0.075 and β = 0.4454; if γ = 0.2 then α = 0.15 and β = 0.49, etc. The MP level α test is found either as a critical region C which is a subset of the sample space, or as a test φ such that

    β = EH1[φ(X)] is maximized subject to EH0[φ(X)] ≤ α

Figure 1.3 The convex set of points (α, β) for Example 1.7

1.11 Optimum test

A traveller wants to reach a destination at a given distance in the shortest possible time by train. He must choose the fastest mode of transport; hence he opts for the fastest train if more than one train is available. This gives the optimum for the traveller. Analogous to this is the problem of the optimum test.
An optimum test is different from an MP test, since a test φ1 is better than a test φ2 if
(i) α1 ≤ α2 and
(ii) β1 ≥ β2,
with strict inequality holding at at least one point, where αi and βi, i = 1, 2, are the size and the power of the test φi respectively.

Note The MP test is proposed under homogeneous conditions, whereas the optimum test is considered in heterogeneous situations; this can easily be seen in Example 1.9.
Consider the test of H0 : X ∼ p0 vs H1 : X ∼ p1 with level α = 0.02. The pmf's p0 and p1 are specified as below:

    x    0     1
    p0   0.4   0.6
    p1   0.2   0.8

Define φ(0) = γ1 and φ(1) = γ2, where 0 ≤ γ1, γ2 ≤ 1. Thus the problem is

    Maximize  β = EH1[φ(X)] = 0.2 γ1 + 0.8 γ2                  (1.1)
    subject to  EH0[φ(X)] = 0.4 γ1 + 0.6 γ2 ≤ 0.02             (1.2)
    and  γ1, γ2 ≥ 0

By the graphical method of solving the LPP, Figure 1.4 shows the solution region of equation (1.2), and Table 1.1 gives the optimum solution of equations (1.1) and (1.2).

Figure 1.4 Solution region of the constraint (1.2), bounded by the axes and the line 0.4 γ1 + 0.6 γ2 = 0.02

Table 1.1 Graphical solutions

    (γ1, γ2)      β
    (0, 0)        0
    (0, 1/30)     0.027
    (1/20, 0)     0.01

There are two candidate tests with level α = 0.02. They are

    φ1(x) = 1/30 if x = 1
          = 0    if x = 0

and

    φ2(x) = 0.05 if x = 0
          = 0    if x = 1

The power of the test φ1 is βφ1 = 0.027 and that of φ2 is βφ2 = 0.01. Since βφ1 ≥ βφ2, the test φ1 is the optimum test.
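Because the feasible region of (1.2) is a triangle, the optimum can also be read off by scanning its vertices; the short sketch below is illustrative only.

    # vertices of {0.4*g1 + 0.6*g2 <= 0.02, g1 >= 0, g2 >= 0}
    vertices = [(0.0, 0.0), (1/20, 0.0), (0.0, 1/30)]

    power = lambda g1, g2: 0.2 * g1 + 0.8 * g2   # objective (1.1)
    best = max(vertices, key=lambda v: power(*v))
    print(best, power(*best))                    # (0.0, 0.0333...) with beta ~ 0.027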
Thus the test φ1 maximizes the power against a particular alternative in H1. An additional principle has to be introduced to define what is meant by an optimum test. If H1 contains only one distribution, i.e., if one is concerned with a simple alternative H1, the problem is to select a test function φ so as to have maximum power βφ = EH1[φ(X)] subject to EH0[φ(X)] ≤ α. If H1 contains more than one distribution, i.e., if one is concerned with a composite alternative H1, and the same test maximizes the power for all alternative distributions in H1, then such a test is known as a Uniformly Most Powerful (UMP) test.

1.12 Uniformly Most Powerful test

A test φ is said to be the UMP level α test for testing the hypothesis H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1 if
(i) Eθ[φ(X)] ≤ α for all θ ∈ ΩH0, and
(ii) βφ(θ) = Eθ[φ(X)] is maximum for every θ ∈ ΩH1 among all φ's satisfying (i).
If ΩH0 and ΩH1 are both composite, the problem is to find the UMP test for testing H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1. For a simple alternative, the UMP test is just the MP test. Further, the UMP test does not always exist, and its existence needs some more restrictions on the class of tests φ satisfying (i).

• βφ(θ) is the probability that the test φ rejects H0 when the random sample comes from the distribution of X specified by θ, whether under H0 or under H1

• For a UMP test, the power function β(θ) is strictly increasing at all points θ ∈ Ω for which β(θ) < 1

• If H0 : θ < θ0 vs H1 : θ ≥ θ0, or H0 : θ > θ0 vs H1 : θ ≤ θ0, then for any θ in the hypothesis region (θ < θ0 or θ > θ0 respectively) the UMP test φ minimizes β(θ) among all tests satisfying Eθ0[φ(X)] = α

Example 1.8 Consider three probability distributions on {x | x = 0, 1} with pmf's p0, p1 and p2 given below:

    x    p0    p1    p2
    0    0.2   0.3   0.4
    1    0.8   0.7   0.6

Let H0 : X ∼ p0 or p1 vs H1 : X ∼ p2. Find the optimum test of level 0.01.
Define φ(0) = γ1 and φ(1) = γ2, where 0 ≤ γ1, γ2 ≤ 1:

    Maximize  β = EH1[φ(X)] = 0.4 γ1 + 0.6 γ2
    subject to  EH0[φ(X)] ≤ 0.01

Using the simplex method to solve the LPP, one has to

    Maximize  β = 0.4 γ1 + 0.6 γ2
    subject to  0.2 γ1 + 0.8 γ2 ≤ 0.01
                0.3 γ1 + 0.7 γ2 ≤ 0.01
    where γ1 ≥ 0, γ2 ≥ 0

The MP test is given by

    φ(x) = 0.0333 if x = 0
         = 0      if x = 1

The power of the test is β = 0.4 × 0.0333 = 0.0133.
Example 1.9 Define the pmf's p0, p1, p2 and p3 on the observations {x | x = −1, 1} as given below:

    x     p0    p1    p2    p3
    −1    0.2   0.3   0.4   0.7
    1     0.8   0.7   0.6   0.3

Let H0 : X ∼ p0 or p1 vs H1 : X ∼ p2 or p3. Find the optimum test of level α = 0.01 for X ∼ p0 and α = 0.05 for X ∼ p1 under H0.
Define φ(−1) = γ1 and φ(1) = γ2, where 0 ≤ γ1, γ2 ≤ 1:

    Maximize  β = E_{p2 or p3}[φ(X)]
    subject to  E_{p0}[φ(X)] ≤ 0.01
                E_{p1}[φ(X)] ≤ 0.05

i.e.,

    Maximize  β = 1.1 γ1 + 0.9 γ2
    subject to  0.2 γ1 + 0.8 γ2 ≤ 0.01
                0.3 γ1 + 0.7 γ2 ≤ 0.05

Using the simplex method, the solution of the LPP is γ1 = 0.05 and γ2 = 0. Hence the optimum test is

    φ(x) = 0.05 if x = −1
         = 0    if x = 1

and the power of the test is β = 0.055.
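The same LPP can be handed to a standard solver; the following minimal sketch uses scipy.optimize.linprog (an assumption of this illustration, not a tool used in the text) and reproduces γ1 = 0.05, γ2 = 0 and β = 0.055.

    from scipy.optimize import linprog

    # Example 1.9: maximize 1.1*g1 + 0.9*g2 subject to the two level constraints.
    # linprog minimizes, so the objective is negated.
    c = [-1.1, -0.9]
    A_ub = [[0.2, 0.8],      # E_p0[phi] <= 0.01
            [0.3, 0.7]]      # E_p1[phi] <= 0.05
    b_ub = [0.01, 0.05]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1), (0, 1)], method="highs")
    print(res.x, -res.fun)   # approximately [0.05, 0.], beta = 0.055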


Example 1.10 Let X ∼ b(3, p). Consider the testing problem H0 : p = p0 ∈ {0.1, 0.3, 0.5} vs H1 : p = p1 ∈ {0.6, 0.7}. Using the simplex method, examine whether a UMP test with level 0.05 exists for the hypothesis.
Define φ(i) = γi, i = 0, 1, 2, 3, with 0 ≤ γi ≤ 1.

    Maximize  β = Σ_{i=0}^{3} γi [3!/(i!(3−i)!)] (0.6)^i (0.4)^{3−i}
    subject to  Σ_{i=0}^{3} γi [3!/(i!(3−i)!)] p0^i (1 − p0)^{3−i} ≤ 0.05,  p0 = 0.1, 0.3, 0.5

and also

    Maximize  β = Σ_{i=0}^{3} γi [3!/(i!(3−i)!)] (0.7)^i (0.3)^{3−i}
    subject to the same constraints.

That is, maximize β = 0.064 γ0 + 0.288 γ1 + 0.432 γ2 + 0.216 γ3 subject to

    0.729 γ0 + 0.243 γ1 + 0.027 γ2 + 0.001 γ3 ≤ 0.05
    0.343 γ0 + 0.441 γ1 + 0.189 γ2 + 0.027 γ3 ≤ 0.05
    0.125 γ0 + 0.375 γ1 + 0.375 γ2 + 0.125 γ3 ≤ 0.05

where 0 ≤ γ0, γ1, γ2, γ3 ≤ 1.

Using the simplex method to solve the LPP, there are two optimum tests.
(i) The optimum test under the alternative H1 : p = 0.6 is

    φ1(x) = 0.4 if x = 3
          = 0   otherwise

The power of the test φ1 is β = 0.0864 under the alternative H1 : p = 0.6.
(ii) The optimum test under the alternative H1 : p = 0.7 is

    φ2(x) = 0.1333 if x = 2
          = 0      otherwise

The power of the test φ2 is β = 0.0588 under the alternative H1 : p = 0.7.
The UMP level 0.05 test of the hypothesis is φ1.
Example 1.11 It is decided to accept the hypothesis H0 : p(x) = 1/2, −1 ≤ x ≤ 1, whenever x² − x ≥ 3/4. What is the probability of rejecting H0 when H0 is true?

    x² − x = 3/4  ⇒  x = 3/2 or x = −1/2

Under H0, the acceptance region is (−1, −1/2) and the critical region is (−1/2, 1). The probability of rejecting H0 when H0 is true is

    α = ∫_{−1/2}^{1} (1/2) dx = 0.75

Example 1.12 Consider the test function

    φ(x1, x2) = 1    if x1 + x2 > 1
              = 1/4  if x1 + x2 = 1
              = 0    if x1 + x2 < 1

If the observed values are x1 = −3 and x2 = 4, would you reject or accept H0?
Here the sum of the observed values is 1, which lies on the boundary line x1 + x2 = 1, so the observations alone are inconclusive: the randomized test rejects H0 with probability 1/4, by performing an auxiliary random experiment, and accepts it otherwise.
Example 1.13 Consider the problem of testing H0 : θ = 1 vs H1 : θ = 1/2, where θ is the mean of a Poisson distribution. Let X and Y be a random sample drawn from the Poisson distribution. Consider the following test procedure: reject H0 if either X = 0, or X = 1 and X + Y ≤ 2; otherwise accept H0. Find the size and power of the test.
Define the test function φ as

    φ(x, y) = 1 if x = 0, or x = 1 and x + y ≤ 2
            = 0 otherwise

The Poisson distribution of the random variable X with parameter θ is P{X = x} = e^{−θ} θ^x / x!, x = 0, 1, 2, 3, · · ·. Here the hypothesis to test is H0 : θ = 1 vs H1 : θ = 1/2.
Size of the test:

    α = EH0[φ(X, Y)]
      = PH0{X = 0 or (X = 1 and X + Y ≤ 2)}
      = PH0{X = 0} + PH0{X = 1, Y ≤ 1}
      = PH0{X = 0} + PH0{X = 1} PH0{Y = 0 or Y = 1}
      = e^{−1} + e^{−1}(e^{−1} + e^{−1})
      = e^{−1} + 2e^{−2}

Power of the test:

    β = EH1[φ(X, Y)]
      = PH1{X = 0} + PH1{X = 1} PH1{Y ≤ 1}
      = e^{−1/2} + (1/2)e^{−1/2}[e^{−1/2} + (1/2)e^{−1/2}]
      = e^{−1/2} + (1/2)e^{−1} + (1/4)e^{−1}
      = e^{−1/2} + (3/4)e^{−1}
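A short numerical check of these two expressions (an illustrative Python sketch only):

    from math import exp, factorial

    def pois(k, mean):
        # Poisson pmf P{X = k}
        return exp(-mean) * mean**k / factorial(k)

    def rejection_prob(mean):
        # P{X = 0} + P{X = 1, Y <= 1} for independent X, Y with the same mean
        return pois(0, mean) + pois(1, mean) * (pois(0, mean) + pois(1, mean))

    print(rejection_prob(1.0), exp(-1) + 2 * exp(-2))        # size: both ~0.6386
    print(rejection_prob(0.5), exp(-0.5) + 0.75 * exp(-1))   # power: both ~0.8824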

Problems

1.1 What is a randomized test? How is it used for testing a



hypothesis?

1.2 Distinguish between (a) probability of type 1 error, (b)


level of significance and (c) size of a test

1.3 Let X be binomially distributed with parameters n = 3 and p. Let H0 : p = 1/4 vs H1 : p = 1/2. Consider the test function

    φ(x) = 0.3 if x = 0
         = 0.2 if x = 1
         = 0   otherwise

Find the power and the probability of type 1 error of the test function φ.      Ans: α = 0.21, β = 0.1

1.4 What do you understand by randomized test procedure?


Explain the need for such tests with examples

1.5 Distinguish between a randomized and a non-


randomized test. Where would you recommend them?

1.6 To test H0 : θ = 1 vs H1 : θ = 2 with reference to the pdf

    p(x) = θ x^{θ−1} if 0 ≤ x ≤ 1,  and 0 otherwise

from which two random observations x1 and x2 are available, find the size of the test

    φ(x1, x2) = 1 if x1 x2 ≥ 3/4
              = 0 otherwise

Ans: α = 1/4 + (3/4) log(3/4)

1.7 Find a test function φ which maximizes ∫_0^1 φ(x) 3x² dx subject to the two conditions (i) ∫_0^1 φ(x) dx = 0.1 and (ii) ∫_0^1 φ(x) 2x dx = 0.19

    Ans: φ(x) = 1 if 9/10 ≤ x ≤ 1,  and 0 otherwise

1.8 Define the terms (a) size and power of a test and (b)
level of significance

1.9 Define the two kinds of errors and the power of a test. Which error is minimized in a statistical test? Why not both the errors?

1.10 Distinguish (a) simple and (b) composite hypothesis


giving illustrative examples

1.11 Define (a) critical function (b) power function and (c)
randomized test

1.12 Explain (a) probability of type 1 error (b) level of sig-


nificance and (c) size of the test

1.13 What is a randomized test? How is it used for testing


a hypothesis? Further show that the size of the MP
randomized test of level α will be α unless the power is
unity in the context of testing a simple H0 vs a simple
alternative H1

1.14 Let X be a random variable with pmf

    pθ(x) = [2!/(x!(2−x)!)] θ^x (1 − θ)^{2−x} if x = 0, 1, 2,  and 0 otherwise

Let H0 : θ = 1/2 vs H1 : θ = 3/4. Let the critical region be C = {1, 2}. Find the probability of type 1 error and the power associated with C.      Ans: α = 3/4, β = 15/16

1.15 A sample of size n = 1 is taken from Poisson(λ). Let H0 : λ = 1 vs H1 : λ = 2. Consider the test

    φ(x) = 1 if x > 3
         = 0 otherwise

Find the probability of type 1 error and the power of the test for λ = 2.      Ans: α = 0.02, β = 0.64

1.16 Consider the testing problem

    H0 :  x    0     1        H1 :  x    0     1
          p    0.5   0.5            p    0.4   0.6

Determine the class of all test functions of level 0.01.

    Ans: φ1(x) = 0.02 if x = 0, 0 otherwise;   φ2(x) = 0.02 if x = 1, 0 otherwise

1.17 Consider three probability distributions specified as follows:

    H0 :  x    1     2     3      or   x    1     2     3
          p1   0.2   0.6   0.2         p2   0.4   0.6   0

    vs H1 :  x    1     2     3
             p3   0.5   0.5   0

Examine whether the UMP test of level α = 0.05 exists and, if so, find it by using the simplex method of LPP.

    Ans: φ(x) = 0.125 if x = 1,  0 if x = 2, 3;   β = 0.0625

1.18 Let us consider the testing problem

    H0 :  x    0     1
          p0   0.2   0.8

    H1 :  x    0     1      or    x    0     1
          p1   0.7   0.3          p2   0.1   0.9

Determine the MP test of level α = 0.02 by using the graphical method of LPP.

    Ans: φ(x) = 0.1 if x = 0,  0 if x = 1;   β = 0.08

1.19 Consider four probability distributions specified as be-


low

x p0 p1 p2 p3
1 0.05 0.15 0.20 0.25
2 0.95 0.85 0.80 0.75

Let H0 : X ∼ p0 or p1 vs H1 : X ∼ p2 or p3. Determine the MP level α = 0.05 test by using the LPP.

    Ans: φ(x) = 0.3333 if x = 1,  0 if x = 2;   β = 0.15

1.20 Let X ∼ b(3, p). Let H0 : p = p0 ∈ (0.1, 0.3, 0.2) vs H1 : p = p1 ∈ (0.6, 0.8). Determine the UMP test of level α = 0.01 using LPP.

    Ans: φ(x) = 0.3704 if x = 3,  0 if x = 0, 1, 2;   β = 0.2681

1.21 Let X have a (truncated) Poisson distribution with parameter θ of the form

    P{X = x} = [e^{−θ} θ^x / x!] c(θ),  x = 0, 1, 2, 3,  and 0 otherwise

where c(θ) = [Σ_{x=0}^{3} e^{−θ} θ^x / x!]^{−1}. Let H0 : θ = 1 or 2 vs H1 : θ = 3. Find the MP test of level α = 0.01 using the LPP.

    Ans: φ(x) = 0.05 if x = 3,  0 if x = 0, 1, 2;   β = 0.02

1.22 Let X have a (truncated) Poisson distribution with parameter θ of the form

    P{X = x} = [e^{−θ} θ^x / x!] c(θ),  x = 0, 1, 2, 3,  and 0 otherwise

where c(θ) = [Σ_{x=0}^{3} e^{−θ} θ^x / x!]^{−1}. Let H0 : θ = 1 or 2 vs H1 : θ = 3. Find the MP test of level α = 0.05 using the LPP.

    Ans: φ(x) = 0.24 if x = 3,  0 if x = 0, 1, 2;   β = 0.08

1.23 For the following statements, which of them is true? (a)


The number of feasible solutions is infinite and hence
the number of basic feasible solutions is also infinite
(b) A linear programming problem may have no feasible
solution
(c) If one can get two optimal solutions, one can obtain an infinite number of optimal solutions
(d) All the above      Ans: (a)

1.24 Considering the following statements, which of them is


true?
(a) The number of feasible solutions is infinite and
hence the number of basic feasible solutions is finite
(b) A linear programming problem may have no feasi-
ble solution
(c) The number of optimal solutions when they exist is
finite
(d) All the above Ans: (c)

1.25 Consider the following LPP:

    Maximize Z = x1 + (5/2) x2 subject to
    5x1 + 3x2 ≤ 15
    −x1 + x2 ≤ 1
    2x1 + 5x2 ≤ 10,   x1, x2 ≥ 0

Which of the statements is true?
(a) The problem has no feasible solution
(b) It has infinitely many optimal solutions
(c) The LPP has a unique optimal solution
(d) The problem has an unbounded solution      Ans: (b)

1.26 Consider the problem,


Maximize Z = 2y1 + 3y2 + 5y3 + 4y4 subject to

y1 + y2 ≤ 1

y2 + y3 ≤ 1

y3 + y4 ≤ 1

y4 + y1 ≤ 1 yi ≥ 0 ∀ i = 1, 2, 3, 4

Which of the following statements about the optimum value are true? The optimum value is
(a) equal to 8
(b) between 8 and 9
(c) greater than or equal to 7
(d) less than or equal to 7      Ans: (c) and (d)

1.27 Let x1 , x2 , x3 , x4 be an optimal solution to the problem


of minimizing Z = x1 + x2 + x3 + x4 subject to

x1 + x2 ≥ 300

x2 + x3 ≥ 500

x3 + x4 ≥ 400

x4 + x1 ≥ 200, ∀ xi ≥ 0, i = 1, 2, 3, 4

Which of the following are not possible values for any


xi ? i = 1, 2, 3, 4
(a) 300 (b) 400 (c) 500 (d) 600 Ans: (b) , (c) and (d)

1.28 Maximize Z = 3x + 4y subject to

    x ≥ 0, y ≥ 0, x ≤ 3
    (1/2)x + y ≤ 4
    x + y ≤ 5

Which among the following are true?
(a) The optimal value is 19
(b) The optimal value is 18
(c) (3, 2) is an extreme point of the feasible region
(d) (3, 5/2) is an extreme point of the feasible region
Ans: (b) and (c)

1.29 Consider the LPP , maximize Z = 3x + 5y subject to

x + 5y ≤ 10

2x + 2y ≤ 5, x ≥ 0, y ≥ 0

Which of the following are true?
(a) The LPP does not admit any feasible solution
(b) There exists a unique optimal solution to the LPP
(c) There exists a unique optimal solution to the dual problem
(d) The dual problem has an unbounded solution
Ans: (b) and (c)

1.30 For the following statements, which of them are true?
(a) The NPL gives the MP test for a simple hypothesis vs a simple alternative, if it exists
(b) The MP test always gives power greater than its size
(c) The NPL gives the MP test for a simple hypothesis vs a composite alternative
(d) The MP test is not unique, if it exists
Ans: (a), (b) and (d)

1.31 Consider the test function

    φ(x1, x2) = 1    if x1 + x2 > 1
              = 1/4  if x1 + x2 = 1
              = 0    if x1 + x2 < 1

If the observed values are x1 = −2 and x2 = 4, then you would
(a) reject H0
(b) accept H1
(c) reject both H0 and H1
(d) accept both H0 and H1      Ans: (a)

1.32 Which of the following statements are correct?


(a) If a test is significant at 5% level, then the proba-
bility of the null hypothesis being true is at least 0.05
(b) If a test is significant at 1% level, then the value of
the test statistics must be quite large
(c) If the probability value of a test statistics is 0.15
then the test is insignificant at 10% level
(d) If the sample mean based on a random sample of
size 1000 from a population turns out to be 0.003, then
the hypothesis that the population mean is 0 should be
rejected Ans: (c)

1.33 Let X1 and X2 be two random variables distributed as


the uniform distribution on (θ, θ + 1), −∞ < θ < ∞ for

testing H0 : θ = 0 vs H1 : θ > 0
(a) The test function φ which rejects H0 if and only if
X1 + X2 > C is UMP of its size
(b) The test function φ which rejects H0 if and only if
X1 + X2 > (0.05)2 is of size 0.05
(c) The test function φ which rejects H0 if and only if
X1 + X2 > 0.1 is of size 0.05
(d) None of these Ans: (d)

1.34 Type 1 error is defined as


(a) reject H0 when H0 is true
(b) reject H0 when H0 is false
(c) accept H0 when H0 is true
(d) accept H0 when H0 is false Ans: (a)

1.35 The power of a test is defined as the probability of


(a) rejecting H0 when H1 is true
(b) rejecting H0 when H0 is true
(c) rejecting H1 when H0 is true
(d) accepting H0 when H1 is true Ans: (a)

1.36 If a test function is of the form

    φ(x) = 1 if x > c
         = γ if x = c, where 0 < γ < 1
         = 0 otherwise

then the test is
(a) a randomized test
(b) non-randomized
(c) neither randomized nor non-randomized
(d) both randomized and non-randomized      Ans: (a)

1.37 If the test function is

    φ(x) = γ if x > c, where 0 < γ < 1
         = 0 otherwise

then the test is
(a) randomized
(b) non-randomized
(c) neither randomized nor non-randomized
(d) all the above      Ans: (a)

1.38 Which of the following statements are true?
(a) A test is non-randomized when the distribution function of a random variable is continuous
(b) A test is non-randomized when the distribution function of a random variable is discrete
(c) A test is non-randomized when the distribution function of the random variable p1(X)/p0(X) is continuous
(d) A test is non-randomized when the distribution function of the random variable p1(X)/p0(X) is discrete
Ans: (c)

1.39 In the MP level α test, the power of the test β satisfies


(a) β ≥ α
(b) β < α
(c) β > α
(d) β + α = 1 Ans:(a)

1.40 Which of the following statements are true?
(a) Every randomized test procedure can be thought of as a non-randomized procedure
(b) Every randomized test procedure can be thought of as a randomized procedure
(c) Every randomized test procedure can be thought of as equivalent to a non-randomized procedure
(d) All the above      Ans: (b)

1.41 Consider the problem of testing H0 : θ = 1 vs H1 : θ = 1/2, where θ is the mean of a Poisson random variable. Let X and Y be a random sample from the Poisson(θ) distribution. Consider the following test procedure: reject H0 if either X = 0, or X = 1 and X + Y ≤ 2; otherwise accept H0. Which of the following are true?
(a) P{type 1 error} = e^{−1} + 2e^{−2}
(b) P{type 2 error} = 1 − (3/4)e^{−1} − e^{−1/2}
(c) Size of the test is e^{−1} + 2e^{−2}
(d) Power of the test is (3/4)e^{−1} + e^{−1/2}
Ans: (a), (b), (c) and (d)
2. METHOD OF OBTAINING THE MOST POWERFUL TEST

2.1 Introduction

A hawker wants to purchase commodities with a limited capacity of Rs. 25.00. The commodities available in a market, with their costs, are given in Table 2.1.

Table 2.1 Costs of commodities

    Commodity      1   2    3   4    5    6    7    8    9    10
    Cost ci Rs.    5   10   6   20   25   30   40   50   35   12

He wishes to sell them on the streets and to make the maximum profit out of this business. Assume that the profits for selling the commodities are as in Table 2.2.

Table 2.2 Profits for selling the commodities

    Commodity      1   2    3   4    5    6    7    8    9    10
    Profit pi Rs.  2   4    3   8    10   5    4    1    6    1

Obviously the best choice is to purchase those commodities, within the given limit, for which the profit is maximized subject to the cost constraint Σ ci ≤ 25.00, the sum running over the purchased commodities. He decides to purchase those commodities for which pi/ci (profit per unit cost) is large. For this example, the possible sets of commodities satisfying the cost constraint and the corresponding sums of pi/ci are tabulated in Table 2.3.

Table 2.3 Profit per unit cost for the sets of commodities

    Set                  {1}      {2}      {3}        {4}
    Profit/unit cost     0.4      0.4      0.5        0.4
    Set                  {5}      {10}     {1, 2}     {1, 3}
    Profit/unit cost     0.4      0.08     0.8        0.9
    Set                  {1, 4}   {2, 3}   {1, 2, 3}  {1, 10}
    Profit/unit cost     0.8      0.9      1.3        0.48
    Set                  {2, 10}  {3, 10}  {4, 10}    {1, 3, 10}
    Profit/unit cost     0.48     0.58     0.48       0.98

The hawker obtains the largest value, 1.3, within the given capacity of Rs. 25.00 by purchasing the commodity set {1, 2, 3}. The method of finding the best test is similar: the critical region should include those values x of X for which the ratio p1(x)/p0(x) is large, where p1(x) is the pdf of X under H1 and p0(x) is the pdf of X under H0. First consider the case in which both H0 and H1 are simple, i.e., ΩH0 = {θ0}, ΩH1 = {θ1}. For testing H0 : θ = θ0 vs H1 : θ = θ1, the Neyman-Pearson Lemma (NPL) is now stated.
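The hawker's choice can be checked by brute force; the short sketch below (illustrative Python, using the data of Tables 2.1 and 2.2) scans every affordable set of commodities and reports the one with the largest total profit per unit cost, which is {1, 2, 3} with value 1.3, exactly as in Table 2.3.

    from itertools import combinations

    cost   = {1: 5, 2: 10, 3: 6, 4: 20, 5: 25, 6: 30, 7: 40, 8: 50, 9: 35, 10: 12}
    profit = {1: 2, 2: 4,  3: 3, 4: 8,  5: 10, 6: 5,  7: 4,  8: 1,  9: 6,  10: 1}

    best, best_value = None, -1.0
    for r in range(1, len(cost) + 1):
        for s in combinations(cost, r):
            if sum(cost[i] for i in s) <= 25:                # capacity constraint
                value = sum(profit[i] / cost[i] for i in s)  # entries of Table 2.3
                if value > best_value:
                    best, best_value = s, value
    print(best, round(best_value, 2))                        # (1, 2, 3) 1.3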

2.2 Neyman-Pearson Lemma

Statement Let X be a random variable with two possible distributions P0 and P1, and let p0 and p1 be their respective densities. The problem is to test H0 : X ∼ P0 vs H1 : X ∼ P1.

(a) Sufficiency
If a test φ is of the form

    φ(x) = 1 if p1(x) > k p0(x)
         = 0 if p1(x) < k p0(x)   for some k ≥ 0,

and EH0[φ(X)] = α, then φ is the MP level α test for testing H0 vs H1.

(b) Existence
For every 0 < α < 1, there exist k ≥ 0 and 0 ≤ γ ≤ 1 such that the test φ defined by

    φ(x) = 1 if p1(x) > k p0(x)
         = γ if p1(x) = k p0(x)
         = 0 if p1(x) < k p0(x)

satisfies EH0[φ(X)] = α.

(c) Necessity
If a test φ is the MP level α test for testing H0 vs H1, then it has the form (a) [except perhaps for values x of X in a set of probability zero under both H0 and H1].
Proof (a) Sufficiency part
Let X be a random variable with sample space X. Let φ be a test as in (a) and let φ* : X → [0, 1] be another test for which EH0[φ*(X)] ≤ EH0[φ(X)] = α. It is to be shown that βφ ≥ βφ*. Define

    C+ = {x | φ(x) − φ*(x) > 0}  and  C− = {x | φ(x) − φ*(x) < 0}

If x ∈ C+, then φ(x) > φ*(x) ≥ 0, so φ(x) > 0 and hence x ∉ {x | p1(x) < k p0(x)}, i.e., p1(x) ≥ k p0(x). If x ∈ C−, then φ(x) < φ*(x) ≤ 1, so φ(x) < 1 and hence p1(x) ≤ k p0(x). In either case

    [φ(x) − φ*(x)][p1(x) − k p0(x)] ≥ 0  for all x

    ∴ ∫ [φ(x) − φ*(x)][p1(x) − k p0(x)] dµ(x) ≥ 0

    ⇒ EH1[φ(X)] − EH1[φ*(X)] ≥ k EH0[φ(X)] − k EH0[φ*(X)] ≥ kα − kα = 0

    ⇒ βφ ≥ βφ*
(b) Existence part
Define α(c) = PH0{p1(X)/p0(X) > c}, the tail probability of the random variable p1(X)/p0(X) under H0. Then 1 − α(c) is the distribution function of p1(X)/p0(X) under H0, and hence

• it is non-negative

• it is non-decreasing

• it is right continuous

• α(−∞) = 1 and α(+∞) = 0

Using the function α(c), for a given 0 < α < 1 it is possible to choose a c0 such that α(c) ≥ α for c < c0 and α(c) ≤ α for c > c0; this is the problem of a Dedekind cut at c0. By Dedekind's theorem there exist a greatest lower bound of the set {c | α(c) ≤ α} and a least upper bound of the set {c | α(c) > α}; these two values are equal, and the common value c0 belongs to one of the two classes. Constructing a sequence {cn(1)} from the set {c | α(c) > α} such that cn(1) ↑ c0 as n → ∞ gives α(cn(1)) > α for all n, so that lim α(cn(1)) = α(c0 − 0) ≥ α. Similarly there exists a sequence {cn(2)} from the set {c | α(c) ≤ α} such that cn(2) ↓ c0, and by right continuity lim α(cn(2)) = α(c0) ≤ α. Thus there exists a c0 such that, for the given α (0 < α < 1),

    α(c0) ≤ α ≤ α(c0 − 0)

Consider the test function

    φ(x) = 1                                 if p1(x) > c0 p0(x)
         = [α − α(c0)]/[α(c0 − 0) − α(c0)]   if p1(x) = c0 p0(x)
         = 0                                 if p1(x) < c0 p0(x)

Such a test function φ has the form (a).

If F, the distribution function of p1(X)/p0(X) under H0, is not continuous at c0, then PH0{p1(X) = c0 p0(X)} = F(c0) − F(c0 − 0) = α(c0 − 0) − α(c0) ≠ 0, and

    EH0[φ(X)] = 1 × PH0{p1(X) > c0 p0(X)}
                + {[α − α(c0)]/[α(c0 − 0) − α(c0)]} × PH0{p1(X) = c0 p0(X)}
                + 0 × PH0{p1(X) < c0 p0(X)}
              = α(c0) + {[α − α(c0)]/[α(c0 − 0) − α(c0)]} [α(c0 − 0) − α(c0)]
              = α

Suppose α(c0 − 0) = α(c0); then PH0{p1(X) = c0 p0(X)} = 0. This point can be excluded from the sample space, and in such a situation φ takes only the two values 1 and 0, i.e., φ is a non-randomized test. So if the distribution of p1(X)/p0(X) is continuous at c0, no randomization is needed for testing H0 vs H1.
(c) Necessity part
Let φ* be an MP level α test for testing H0 vs H1; it is to be shown that φ* = φ a.e. µ, where φ is given by (a). Define C+ = {x | φ(x) − φ*(x) > 0} ⊂ {x | p1(x) − k p0(x) ≥ 0} and C− = {x | φ(x) − φ*(x) < 0} ⊂ {x | p1(x) − k p0(x) ≤ 0}. Let C be the subset of the sample space on which φ and φ* differ and the likelihood ratio is away from the boundary, C = (C+ ∪ C−) ∩ {x | p1(x) − k p0(x) ≠ 0}. On both C+ and C− the integrand (φ − φ*)(p1 − k p0) is strictly positive, so

    ∫_X (φ − φ*)(p1 − k p0) dµ = ∫_C (φ − φ*)(p1 − k p0) dµ > 0   if µ(C) ≠ 0

Since EH0[φ(X)] = α ≥ EH0[φ*(X)], this would give βφ − βφ* ≥ k(EH0[φ(X)] − EH0[φ*(X)]) + ∫_C (φ − φ*)(p1 − k p0) dµ > 0, i.e., φ would have strictly larger power than φ*, contradicting the assumption that φ* is an MP test. To avoid the contradiction,

    (φ − φ*)(p1 − k p0) = 0  a.e. µ,  i.e.,  µ[(C+ ∪ C−) ∩ {x | p1(x) − k p0(x) ≠ 0}] = 0

so φ* = φ a.e. µ except possibly on the set {x | p1(x) = k p0(x)}, i.e., φ* has the form (a).


Fact The power β of the MP level α test satisfies β ≥ α.
Corollary If φ is the MP level α test for testing H0 vs H1 as in the NPL, then its power β > α whenever P0 ≠ P1.
Proof Let φ be the MP level α test, so EH0[φ(X)] = α and EH1[φ(X)] ≥ EH1[φ*(X)] for every critical function φ* satisfying EH0[φ*(X)] ≤ α. Consider the trivial test φ* ≡ α; then EH0[φ*(X)] = α and EH1[φ*(X)] = α, so φ* belongs to the class {φ | EH0[φ(X)] ≤ α} and on comparison β = EH1[φ(X)] ≥ EH1[φ*(X)] = α, which proves the Fact.
Suppose now that β = α. Then φ* ≡ α is also an MP level α test, and by the necessity part of the NPL φ = φ* a.e. µ outside the set {x | p1(x) = k p0(x)}. Since φ* ≡ α takes a value strictly between 0 and 1, this forces µ{x | p1(x) − k p0(x) ≠ 0} = 0, i.e., p1(x) = k p0(x) a.e. µ. Integrating, ∫ p1 dµ = 1 and k ∫ p0 dµ = k, so k = 1 and p1 = p0 a.e. µ, i.e., P1 = P0. Thus β = α implies P1 = P0, and hence P1 ≠ P0 implies β > α.
Example 2.1 Let H0 be the hypothesis that a bank manager is in a good mood and H1 the hypothesis that he is in a bad mood, based on the responses x1, x2, x3, x4 or x5 of the manager on being told that an officer is coming to the bank late by 0, 5, 10, 15 and 20 minutes respectively. One wants to test the mood of the manager with the help of the MP test for the hypothesis H0 vs H1 with level α = 0.05. The distributions of the responses under H0 and H1 are given in Table 2.4.

Table 2.4 Distributions of responses

    Time x    x1 = 0   x2 = 5   x3 = 10   x4 = 15   x5 = 20
    H0  p0    0.60     0.20     0.15      0.05      0.00
    H1  p1    0.00     0.10     0.20      0.30      0.40

Use the NPL to obtain the MP test. State whether it is randomized or non-randomized. Calculate the power of the test.
Let φ be the MP test for testing H0: the manager is in a good mood vs H1: the manager is not in a good mood:

    φ(x) = 1 if p1(x)/p0(x) > c
         = γ if p1(x)/p0(x) = c
         = 0 if p1(x)/p0(x) < c

where c > 0 is chosen such that EH0[φ(X)] = 0.05. Calculate the ratio p1(x)/p0(x) for all x; Table 2.5 shows the likelihood ratios.

Table 2.5 Likelihood ratios

    x            x1 = 0   x2 = 5   x3 = 10   x4 = 15   x5 = 20
    p1(x)/p0(x)  0        1/2      4/3       6         ∞

It can be seen that p1(x)/p0(x) is increasing in x, so the rejection rule p1(x)/p0(x) > c is equivalent to x exceeding a cut-off. The first choice would be the constant c = ∞, which corresponds to α = 0; but α = 0 or α = 1 is ruled out in practical situations. If α = 0 it means that with certainty the manager is in a good mood, and if α = 1 that with certainty he is not. For α = 0 the test function is

    φ(x) = 1 if p0(x) = 0
         = 0 if p0(x) > 0

and for α = 1

    φ(x) = 1 if p1(x) > 0
         = 0 if p1(x) = 0

Now one can choose c = 6 and define the test function

    φ(x) = 1 if p1(x)/p0(x) > 6
         = γ if p1(x)/p0(x) = 6
         = 0 if p1(x)/p0(x) < 6

which is equivalent to choosing the test function

    φ(x) = 1 if x = 20
         = γ if x = 15
         = 0 if x = 0, 5, 10

where γ is determined by EH0[φ(X)] = 0.05, i.e.,

    EH0[φ(X)] = 1 × PH0{X = 20} + γ PH0{X = 15} = 0.05  ⇒  γ = 1

Thus the MP test is

    φ(x) = 1 if x = 15, 20
         = 0 if x = 0, 5, 10

The power of the test is β = EH1[φ(X)] = 0.30 + 0.40 = 0.70. The manager's mood is out when the officers come late by 15 minutes or more; on the basis of these responses the test detects the bad mood 70% of the time. The test is a non-randomized test, since φ takes only the two values 0 and 1.
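The Neyman-Pearson construction for this finite example can be mechanized; the sketch below (illustrative Python, not a prescribed algorithm of the text) orders the sample points by likelihood ratio and fills the rejection region until the level 0.05 is exhausted.

    support = [0, 5, 10, 15, 20]
    p0 = {0: 0.60, 5: 0.20, 10: 0.15, 15: 0.05, 20: 0.00}
    p1 = {0: 0.00, 5: 0.10, 10: 0.20, 15: 0.30, 20: 0.40}
    alpha = 0.05

    # order sample points by decreasing likelihood ratio p1/p0 (infinite where p0 = 0)
    ratio = lambda x: float("inf") if p0[x] == 0 else p1[x] / p0[x]
    order = sorted(support, key=ratio, reverse=True)

    phi, used = {x: 0.0 for x in support}, 0.0
    for x in order:
        if used + p0[x] <= alpha + 1e-12:   # whole point fits inside the level
            phi[x], used = 1.0, used + p0[x]
        else:                               # randomize on the boundary point
            phi[x] = (alpha - used) / p0[x]
            break

    print(phi)                                        # phi(15) = phi(20) = 1, else 0
    print(sum(phi[x] * p1[x] for x in support))       # power 0.70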
Note The NPL gives a method of constructing the MP test for testing a simple hypothesis against a simple alternative. However, when the hypothesis or the alternative is composite, it gives no direct method of constructing the UMP test; in such a situation one tests the simple hypothesis against each simple alternative in turn.
Example 2.2 Assume X ∼ b(n, θ), 0 < θ < 1. Let the statistical hypothesis be H0 : θ = θ0 vs H1 : θ = θ1 (θ1 > θ0). Fix 0 < α < 1. Obtain the MP level α test and the power of the test.
The pmf of the binomial distribution is

    pθ(x) = [n!/(x!(n − x)!)] θ^x (1 − θ)^{n−x},  x = 0, 1, 2, · · · , n,  0 < θ < 1,  and 0 otherwise

    p1(x)/p0(x) = [θ1(1 − θ0)/(θ0(1 − θ1))]^x [(1 − θ1)/(1 − θ0)]^n = a^x b

where a = θ1(1 − θ0)/[θ0(1 − θ1)] > 1 if θ1 > θ0 and b = [(1 − θ1)/(1 − θ0)]^n > 0. Since a^x b is increasing in x, the condition a^x b > k is equivalent to x > c. The MP test for the hypotheses is given by

    φ(x) = 1 if x > c
         = γ if x = c
         = 0 otherwise

where c and γ are determined by Eθ0[φ(X)] = α, i.e., PH0{X > c} + γ PH0{X = c} = α:

    Σ_{x=c+1}^{n} [n!/(x!(n − x)!)] θ0^x (1 − θ0)^{n−x} + γ [n!/(c!(n − c)!)] θ0^c (1 − θ0)^{n−c} = α

The power of the test is

    βφ(θ1) = Σ_{x=c+1}^{n} [n!/(x!(n − x)!)] θ1^x (1 − θ1)^{n−x} + γ [n!/(c!(n − c)!)] θ1^c (1 − θ1)^{n−c}

Note If θ1 < θ0, then a < 1 and the likelihood ratio is decreasing in x, so the inequality sign in the MP test function φ is reversed.
Example 2.3 A foundry produces steel castings used in the automotive industry. One wishes to test the hypothesis that the fraction non-conforming, or fallout, from this process is 20%. In a random sample of 5 castings, 2 were found to be non-conforming. The foundry producer wants to know whether the fraction of non-conforming steel castings is 20%, with level α = 0.05.
The number of non-conforming steel castings X ∼ b(5, p). Here H0 : p0 = 0.2 vs H1 : p1 = 0.4, and p1 > p0. As in Example 2.2, p1(x)/p0(x) is increasing in x, so the MP test is given by

    φ(x) = 1 if x > c
         = γ if x = c
         = 0 otherwise

For determining the constant c, the ratio p1(x)/p0(x) is given in Table 2.6.

Table 2.6 Likelihood ratios

    x    p0(x)     p1(x)     p1(x)/p0(x)
    0    0.32768   0.07776   0.2373
    1    0.40960   0.25920   0.6328
    2    0.20480   0.34560   1.6875
    3    0.05120   0.23040   4.5000
    4    0.00640   0.07680   12.0000
    5    0.00032   0.01024   32.0000

The MP level α = 0.05 test is obtained by the NPL. Since p1(x)/p0(x) is increasing in x, first choose c = 32 ⇔ x = 5. The corresponding test is

    φ1(x) = 1  if x > 5
          = γ1 if x = 5
          = 0  otherwise

where γ1 is determined by EH0[φ1(X)] = 0.05, i.e., PH0{X > 5} + γ1 PH0{X = 5} = 0.05, giving γ1 × 0.00032 = 0.05 ⇒ γ1 = 156.25 > 1. Since 0 ≤ γ1 ≤ 1 is required, φ1 is not a test function.
Choose c = 12 ⇔ x = 4; the corresponding test function is

    φ2(x) = 1  if x > 4
          = γ2 if x = 4
          = 0  otherwise

Here also γ2 = (0.05 − 0.00032)/0.0064 ≈ 7.76 > 1, so φ2 is not a test function.
Again choose c = 4.5 ⇔ x = 3; the test function is

    φ3(x) = 1  if x > 3
          = γ3 if x = 3
          = 0  otherwise

From γ3 × 0.0512 + 0.0064 + 0.00032 = 0.05 we get γ3 = 0.8453. Hence the MP level α = 0.05 test is

    φ3(x) = 1      if x = 4, 5
          = 0.8453 if x = 3
          = 0      otherwise

The power of the test is

    β = EH1[φ3(X)]
      = PH1{X = 4 or 5} + 0.8453 × PH1{X = 3}
      = 0.28

Decision The hypothesis H0 is not rejected, since the sample has only 2 non-conforming steel castings out of 5, which is less than 3. The fraction of non-conforming steel castings in the production process is taken to be 20%.
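The search for the cut-off c and the randomization constant γ in Examples 2.2 and 2.3 can be written once and reused; the following sketch (assuming scipy.stats is available) reproduces c = 3, γ = 0.8453 and β ≈ 0.28 for n = 5, θ0 = 0.2, θ1 = 0.4 and α = 0.05.

    from scipy.stats import binom

    def mp_binomial_test(n, theta0, theta1, alpha):
        # Randomized MP test of H0: theta = theta0 vs H1: theta = theta1 (> theta0);
        # rejects for large x, randomizing at the cut-off c.
        for c in range(n, -1, -1):
            tail = binom.sf(c, n, theta0)                 # P_H0{X > c}
            if tail <= alpha:
                gamma = (alpha - tail) / binom.pmf(c, n, theta0)
                if gamma <= 1:
                    power = binom.sf(c, n, theta1) + gamma * binom.pmf(c, n, theta1)
                    return c, gamma, power
        return None

    print(mp_binomial_test(5, 0.2, 0.4, 0.05))            # (3, 0.845..., 0.281...)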
Example 2.4 Let X1, X2, X3, · · · , Xn be iid Poisson random variables with parameter θ (> 0). Obtain the MP level α test for testing H0 : θ = θ0 vs H1 : θ = θ1 (θ1 > θ0). Also find the power of the test.
The pmf of each Xi, i = 1, 2, 3, · · · , n, is

    pθ(xi) = e^{−θ} θ^{xi} / xi!,  xi = 0, 1, 2, · · ·,  and 0 otherwise

Consider

    pθ1(x)/pθ0(x) = pθ1(x1, x2, · · · , xn)/pθ0(x1, x2, · · · , xn)
                  = e^{−n(θ1 − θ0)} (θ1/θ0)^{Σ_{i=1}^{n} xi}
                  = b a^t,  which is increasing in t, so b a^t > k ⇔ t > c,

since a = θ1/θ0 > 1 and b = e^{−n(θ1 − θ0)} > 0 when θ1 > θ0, where t = Σ_{i=1}^{n} xi. So the MP level α test is

    φ(t) = 1 if t > c
         = γ if t = c
         = 0 otherwise

where c and γ are determined by EH0[φ(T)] = α with T = Σ_{i=1}^{n} Xi, i.e., PH0{T > c} + γ PH0{T = c} = α:

    Σ_{t=c+1}^{∞} e^{−nθ0} (nθ0)^t / t! + γ e^{−nθ0} (nθ0)^c / c! = α

The power of the test is

    β = EH1[φ(T)] = PH1{T > c} + γ PH1{T = c}
      = Σ_{t=c+1}^{∞} e^{−nθ1} (nθ1)^t / t! + γ e^{−nθ1} (nθ1)^c / c!

Note If θ1 < θ0, then a < 1 and the likelihood ratio is decreasing in t, so the inequality sign in the test function of Example 2.4 is reversed.
Example 2.5 A random sample of three office copiers yields x1 = 3, x2 = 1, x3 = 4 non-conformities per unit respectively. The manufacturer wishes to know whether the copiers have two non-conformities per unit. But from previous experience he assumes that the copiers have one non-conformity per unit. Assume that the number of non-conformities per unit of each copier follows a Poisson distribution with parameter θ. Obtain the MP level α = 0.05 test and the power of the test
Here H0 : θ = 1 vs H1 : θ = 2 and θ1 > θ0. As in Example 2.4, pθ1(t)/pθ0(t) ↑ t, so the MP level α = 0.05 test is given by

    φ(t) = 1   if t > c
         = γ   if t = c
         = 0   otherwise

where c and γ are determined by EH0[φ(T)] = 0.05 and T = Σ_{i=1}^3 Xi ∼ P(3θ). The ratios pθ1(t)/pθ0(t) are calculated in Table 2.7

Table 2.7 Likelihood ratios

t     pθ0(t)   pθ1(t)   pθ1(t)/pθ0(t)
0     0.049    0.002    0.0408
1     0.150    0.015    0.1000
2     0.224    0.044    0.1964
3     0.224    0.090    0.4017
4     0.168    0.134    0.7976
5     0.101    0.160    1.5840
6     0.050    0.161    3.2200
7     0.022    0.137    6.2270
8     0.008    0.104    13.000
9     0.001    0.069    69.000
10    -        0.041    -
11    -        0.022    -
12    -        0.012    -
13    -        0.001    -
Since pθ1(t)/pθ0(t) ↑ t, choose the constant c = 69 ⇔ t = 9 and define the test function

    φ1(t) = 1    if t > 9
          = γ1   if t = 9
          = 0    otherwise

For this test φ1, γ1 = 50 > 1, so φ1 is not a test function.
Again choose c = 3.22 ⇔ t = 6, and define the test function

    φ2(t) = 1    if t > 6
          = γ2   if t = 6
          = 0    otherwise

Here γ2 = 0.019/0.050 = 0.38 < 1, so φ2 is a test function. Thus the MP level α = 0.05 test is

    φ2(t) = 1      if t > 6
          = 0.38   if t = 6
          = 0      otherwise

The power of the test is β = 0.447

Decision The hypothesis H0 : θ = 1 is rejected in favour of H1 : θ = 2, since t = x1 + x2 + x3 = 8, which is greater than 6. Thus the three copiers yield more than one non-conformity per unit with probability 0.447
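A minimal numerical sketch of this randomized Poisson test (not from the original text; scipy is assumed, and the variable names are illustrative) is given below. Note that with exact Poisson probabilities the randomization constant comes out near 0.33; the 0.38 in the text follows from the rounded values of Table 2.7.

from scipy.stats import poisson

theta0, theta1, n, alpha = 1.0, 2.0, 3, 0.05

# T = X1 + X2 + X3 is Poisson(n*theta); the ratio p_theta1(t)/p_theta0(t)
# increases in t, so reject for large t and randomize at the boundary.
c = min(t for t in range(50) if poisson.sf(t, n * theta0) <= alpha)
gamma = (alpha - poisson.sf(c, n * theta0)) / poisson.pmf(c, n * theta0)

power = poisson.sf(c, n * theta1) + gamma * poisson.pmf(c, n * theta1)
print(c, round(gamma, 2), round(power, 3))   # about: 6, 0.33, 0.45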
Example 2.6 Let X1 , X2 , X3 , · · · , Xn be a random sample
drawn from the normal distribution with mean θ and known
variance σ 2 . Find the MP level α test for testing the hypoth-
esis H0 : θ = θ0 vs H1 : θ = θ1 , (θ1 > θ0 ). Also find the power
of the test
The pdf of the random sample Xi, i = 1, 2, 3, · · · , n from independent and identically distributed normal random variables is

    pθ(xi) = (1/(√(2π) σ)) e^{−(xi−θ)²/(2σ²)}   if −∞ < xi < ∞
           = 0                                    otherwise

    pθ1(x)/pθ0(x) = pθ1(x1, x2, · · · , xn)/pθ0(x1, x2, · · · , xn)
                  = e^{(1/σ²) Σ xi (θ1−θ0) − (n/(2σ²))(θ1²−θ0²)}
                  = e^{(n x̄/σ²)(θ1−θ0)} e^{−(n/(2σ²))(θ1²−θ0²)}
                  = a e^{b x̄} ↑ x̄ ⇒ x̄ > c

since b > 0 and a > 0 if θ1 > θ0, where a = e^{−(n/(2σ²))(θ1²−θ0²)} and b = (n/σ²)(θ1 − θ0)
The MP level α test is

    φ(x̄) = 1   if x̄ > c
          = 0   otherwise

where c is determined by EH0[φ(X̄)] = α

i.e., PH0{X̄ > c} = α, and X̄ ∼ N(θ0, σ²/n) under H0

    α = PH0{ (X̄ − θ0)/(σ/√n) > (c − θ0)/(σ/√n) }
      = ∫_{(c−θ0)√n/σ}^∞ p(z) dz   where Z = (X̄ − θ0)/(σ/√n) ∼ N(0, 1)

    zα = (c − θ0)/(σ/√n)  ⇒  c = zα σ/√n + θ0

where zα is the ordinate corresponding to the upper tail area α. The MP level α test is

    φ(x̄) = 1   if x̄ > zα σ/√n + θ0
          = 0   otherwise

The power of the test is β = PH1{X̄ > zα σ/√n + θ0} where X̄ ∼ N(θ1, σ²/n) under H1, i.e., β = ∫_{zα − (θ1−θ0)√n/σ}^∞ p(z) dz
Example 2.7 The internal pressure strength of glass bottles used to package a carbonated beverage is an important quality characteristic. The bottler wants to know whether the mean pressure strength is 172 psi. From previous experience he knows that the pressure strength follows a normal distribution with mean θ and standard deviation 4 psi. The glass manufacturer submits lots of these bottles and is interested in testing the hypothesis H0 : θ = 170 vs H1 : θ = 172 at level 5%. A random sample of 25 bottles is selected and the bottles are placed on a hydrostatic pressure testing machine that increases the pressure in each bottle until it fails. The observed strengths (psi) are 173.5, 170.5, 171.5, 172.5, 172.0, 173.5, 170.5, 171.5, 172.0, 172.5, 174.5, 170.5, 169.0, 175.0, 171.0, 170.0, 170.5, 176.0, 170.0, 173.5, 172.0, 172.75, 171.25, 171.25, 175.75
The hypothesis is H0 : θ = 170 (= θ0) vs H1 : θ = 172 (= θ1)
The random sample Xi, i = 1, 2, 3, · · · , 25 are the pressure strengths of the glass bottles, i.e., each Xi ∼ N(θ, σ² = 16). As in Example 2.6, pθ1(x̄)/pθ0(x̄) ↑ x̄. The MP test is

    φ(x̄) = 1   if x̄ > c
          = 0   otherwise

where c is given by

    0.05 = EH0[φ(X̄)]
         = PH0{X̄ > c}
         = PH0{ (X̄ − θ0)/(σ/√n) > (c − 170)/(4/5) }
         = ∫_{(c−170)/0.8}^∞ p(z) dz   where Z ∼ N(0, 1)

Since ∫_{1.65}^∞ p(z) dz = 0.05,

    (c − 170)/0.8 = 1.65,  i.e., c = 1.65 × 0.8 + 170 = 171.32

The MP level 0.05 test is

    φ(x̄) = 1   if x̄ > 171.32
          = 0   otherwise

The power of the test is

    β = EH1[φ(X̄)]
      = PH1{X̄ > 171.32}
      = PH1{ (X̄ − θ1)/(σ/√n) > (171.32 − 172.00)/(4/5) }
      = ∫_{−0.85}^∞ p(z) dz = 0.80

Decision The sample mean x̄ = 4303/25 = 172.12 psi is greater than 171.32 psi. The hypothesis H0 : θ = 170 is rejected in favour of H1 : θ = 172. The experimenter concludes that the mean pressure strength of the bottles exceeds 170 psi; the mean pressure strength is 172 psi with probability (power) 80%
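A small numerical sketch of this calculation (not from the original text; Python with scipy is assumed, and variable names are illustrative) recomputes the cutoff, the decision and the power for the bottle data.

from scipy.stats import norm

data = [173.5, 170.5, 171.5, 172.5, 172.0, 173.5, 170.5, 171.5, 172.0,
        172.5, 174.5, 170.5, 169.0, 175.0, 171.0, 170.0, 170.5,
        176.0, 170.0, 173.5, 172.0, 172.75, 171.25, 171.25, 175.75]

theta0, theta1, sigma, n, alpha = 170.0, 172.0, 4.0, 25, 0.05
se = sigma / n ** 0.5                      # 0.8

c = theta0 + norm.ppf(1 - alpha) * se      # about 171.32
xbar = sum(data) / n                       # 172.12
power = norm.sf((c - theta1) / se)         # about 0.80

print(round(c, 2), round(xbar, 2), xbar > c, round(power, 2))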
Example 2.8 Let X1, X2, X3, · · · , Xn be an iid random sample drawn from a normal distribution with known mean θ and unknown variance σ². Find the MP test for testing H0 : σ² = σ0² vs H1 : σ² = σ1² (σ1² > σ0²). Also find the power of the test
The pdf of the random sample Xi, i = 1, 2, 3, · · · , n is

    pσ²(xi) = (1/(√(2π) σ)) e^{−(xi−θ)²/(2σ²)},   i = 1, 2, 3, · · · , n

    pσ1²(x)/pσ0²(x) = pσ1²(x1, x2, x3, · · · , xn)/pσ0²(x1, x2, x3, · · · , xn)
                    = (σ0/σ1)^n e^{(1/2)(1/σ0² − 1/σ1²) Σ(xi−θ)²}
                    = a e^{b t} ↑ t ⇒ t > c

where a = (σ0/σ1)^n > 0, b = (1/2)(1/σ0² − 1/σ1²) > 0 if σ1² > σ0², and t = Σ(xi − θ)²

The MP level α test is

    φ(t) = 1   if t > c
         = 0   otherwise

and c is determined by

    EH0[φ(X)] = α
    PH0{T > c} = α
    PH0{ Σ(Xi − θ)²/σ0² > c/σ0² } = α
    ∫_{c/σ0²}^∞ pn(χ²) dχ² = α

If χα² is the ordinate corresponding to the upper tail area α, where Σ_{i=1}^n (Xi − θ)²/σ0² ∼ χ² distribution with n degrees of freedom, i.e., ∫_{χα²}^∞ pn(χ²) dχ² = α, then c/σ0² = χα² ⇒ c = σ0² χα²
Thus the MP level α test is

    φ(t) = 1   if Σ(xi − θ)² > σ0² χα²
         = 0   otherwise
The power of the test is

    β = EH1[φ(T)]
      = PH1{ Σ(Xi − θ)² > σ0² χα² }
      = PH1{ Σ(Xi − θ)²/σ1² > (σ0²/σ1²) χα² }
      = ∫_{(σ0²/σ1²) χα²}^∞ pn(χ²) dχ²

Note The MP level α test for testing H0 : σ² = σ0² vs H1 : σ² = σ1² (σ1² < σ0²) is

    φ(t) = 1   if Σ(xi − θ)² < σ0² χ²_{(1−α)}
         = 0   otherwise

where χ²_{(1−α)} is obtained from the equation

    ∫_{χ²_{(1−α)}}^∞ pn(χ²) dχ² = 1 − α,   since ∫_0^{χ²_{(1−α)}} pn(χ²) dχ² = α

The power of the test is

    β = PH1{ Σ(Xi − θ)²/σ1² < (σ0²/σ1²) χ²_{(1−α)} }
      = ∫_0^{(σ0²/σ1²) χ²_{(1−α)}} pn(χ²) dχ²
Example 2.9 The tensile strength of a synthetic fiber is an important quality characteristic that is of interest to the manufacturer. From past experience, the manufacturer is willing to assume that tensile strength is approximately normally distributed with mean tensile strength 50 psi and unknown standard deviation σ. A random sample of 16 fiber specimens is selected, and their tensile strengths are determined. The sample data are in Table 2.8
Table 2.8 Tensile strength

Specimen      1      2      3      4      5      6
Strength psi  48.89  52.07  49.29  51.66  52.16  49.72
Specimen      7      8      9      10     11     12
Strength psi  48.00  49.96  49.20  48.10  47.90  46.94
Specimen      13     14     15     16     -      -
Strength psi  51.76  50.75  49.86  51.57  -      -

The producer wants to know whether the variance of the tensile strength of the synthetic fiber is 4 psi. But from previous experience, he assumes that σ² = 2.75 psi, with level α = 0.05
Let Xi, i = 1, 2, 3, · · · , 16 be the tensile strengths of the synthetic fiber. Each Xi ∼ N(50, σ²). The hypothesis to test is H0 : σ0² = 2.75 vs H1 : σ1² = 4 and σ1² > σ0². As in Example 2.8, pσ1²(x)/pσ0²(x) ↑ t ⇒ t > c, where t = Σ(xi − θ)² and mean θ = 50
The MP level α = 0.05 test is

    φ(t) = 1   if Σ(xi − 50)² > 72.325
         = 0   otherwise

Since ∫_{c/σ0²}^∞ p16(χ²) dχ² = 0.05 ⇒ c/2.75 = 26.3, i.e., c = 72.325, where ∫_{26.3}^∞ p16(χ²) dχ² = 0.05
The power of the test is

    β = ∫_{18.08}^∞ p16(χ²) dχ² = 0.33

Decision The hypothesis H0 : σ² = 2.75 is not rejected, since the sample sum of squared deviations from the mean θ = 50 is Σ_{i=1}^{16} (xi − 50)² = 41.69 < 72.325. Thus the experimenter concludes that the variance of the tensile strength of the synthetic fiber is σ² = 2.75 psi
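The chi-square cutoff and power for Example 2.9 can be reproduced with the following minimal sketch (not part of the original text; scipy is assumed, variable names are illustrative).

from scipy.stats import chi2

data = [48.89, 52.07, 49.29, 51.66, 52.16, 49.72, 48.00, 49.96,
        49.20, 48.10, 47.90, 46.94, 51.76, 50.75, 49.86, 51.57]

theta, sigma0_sq, sigma1_sq, alpha, n = 50.0, 2.75, 4.0, 0.05, 16

t = sum((x - theta) ** 2 for x in data)          # about 41.69
c = sigma0_sq * chi2.ppf(1 - alpha, df=n)        # 2.75 * 26.296, about 72.3
power = chi2.sf(c / sigma1_sq, df=n)             # about 0.33

print(round(t, 2), round(c, 2), t > c, round(power, 2))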
Example 2.10 The hypothesis to test is H0 : θ = 1 vs H1 : θ = 0 for a single observation x of a random variable X with pdf

    pθ(x) = 2xθ + 1 − θ   if 0 < x < 1
          = 0             otherwise

Find the power of the MP level α test.
Under H0 : p0(x) = 2x and under H1 : p1(x) = 1

Consider p1(x)/p0(x) = 1/(2x) ↓ x ∈ (0, 1) ⇒ x < c

So the MP level α test is

    φ(x) = 1   if x < c
         = 0   otherwise

where c is determined by

    α = EH0[φ(X)] = PH0{X < c} = ∫_0^c 2x dx = c²
    ⇒ c = √α

The MP level α test is

    φ(x) = 1   if x < √α
         = 0   otherwise

The power of the test is

    β = EH1[φ(X)] = ∫_0^{√α} dx = √α
Example 2.11 Use the NPL to obtain the MP test for testing H0 : X ∼ N(0, σ² = 1/2) vs H1 : X ∼ Cauchy density in standard form, at level 0.05. Find the power of the test
Under H0 : X ∼ p0(x) = (1/√π) e^{−x²} vs H1 : X ∼ p1(x) = (1/π) 1/(1 + x²)

Consider p1(x)/p0(x) > k
    ⇒ e^{x²}/(1 + x²) > c   where c = √π k

Consider the function

    f(y) = e^y/(1 + y)   where y = x²
    f′(y) = y e^y/(1 + y)² > 0 if y > 0

f′(y) is positive if y > 0 and f(0) = 1. Here f(y) is a monotonic increasing function of y

i.e., e^{x²}/(1 + x²) ↑ x²,  i.e., e^{x²}/(1 + x²) ↑ |x|

The best critical region is of the form of the two extreme tails of N(0, 1/2). By the NPL, the MP test is

    φ(x) = 1   if |x| ≥ c
         = 0   otherwise

or  φ(x) = 1   if x < −c or x > c
         = 0   otherwise
c is determined by

    EH0[φ(X)] = 0.05
    ∫_{−∞}^{−c} p0(x) dx + ∫_c^∞ p0(x) dx = 0.05 = 0.025 + 0.025
    ∫_{−∞}^{−c} p0(x) dx = 0.025  and  ∫_c^∞ p0(x) dx = 0.025

i.e., ∫_c^∞ (√2/√(2π)) e^{−x²} dx = 0.025

    ∫_{√2 c}^∞ (1/√(2π)) e^{−z²/2} dz = 0.025        (2.1)

From the normal table

    ∫_{1.96}^∞ p(z) dz = 0.025  where Z ∼ N(0, 1)    (2.2)

Comparing equations (2.1) and (2.2) gives √2 c = 1.96 ⇒ c = 1.386. The MP level α = 0.05 test is

    φ(x) = 1   if x < −1.386 or x > 1.386
         = 0   otherwise

The power of the test is

    β = EH1[φ(X)]
      = ∫_{−∞}^{−1.386} p1(x) dx + ∫_{1.386}^∞ p1(x) dx
      = (1/π) ∫_{−∞}^{−1.386} 1/(1+x²) dx + (1/π) ∫_{1.386}^∞ 1/(1+x²) dx
      = 1 − (1/π) ∫_{−1.386}^{1.386} 1/(1+x²) dx
      = 1 − (2/π) tan⁻¹(1.386)   since tan(−θ) = − tan(θ)
      = 1 − (2/π)(π/180)(54.19°)
      = 1 − 0.6021 = 0.3979
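The cutoff and power of Example 2.11 can also be obtained directly, as in the following sketch (not from the original text; scipy is assumed, variable names are illustrative).

import math
from scipy.stats import norm, cauchy

alpha = 0.05

# Under H0, X ~ N(0, 1/2); the equal-tail rejection region is |x| > c,
# so c is the upper 2.5% point of a normal with sd = 1/sqrt(2).
c = norm.ppf(1 - alpha / 2, loc=0, scale=1 / math.sqrt(2))   # about 1.386

# Power under H1: standard Cauchy probability of |X| > c.
power = 2 * cauchy.sf(c)                                     # about 0.398
print(round(c, 3), round(power, 4))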
Example 2.12

Let  p0(x) = 6x       for 0 ≤ x ≤ 1/2
           = 2 − 2x   for 1/2 ≤ x ≤ 1

and  p1(x) = 1/2   if 0 ≤ x ≤ 2
           = 0     otherwise

H0 specifies the function p0(x) and H1 specifies the function p1(x). Find the MP test of size α = 0.05 for testing H0 vs H1 based on a single observation. Also obtain the power of the test
Consider p1(x)/p0(x) in the interval 0 < x < 1/2

    p1(x)/p0(x) = 1/(12x) ↓ x ⇒ x < c1

When 1/2 < x < 1,  p1(x)/p0(x) = 1/(4(1 − x)) ↑ x ⇒ x > c2

Thus p1(x)/p0(x) is large near both ends of (0, 1), so the rejection region is of two-tailed form
∴ The MP test is

    φ(x) = 1   if x < c1 or x > c2
         = 0   otherwise
where c1 and c2 are determined by two equal tails

i.e., 0.05 = EH0[φ(X)]
    0.025 + 0.025 = PH0{X < c1} + PH0{X > c2}
                  = ∫_0^{c1} 6x dx + ∫_{c2}^1 (2 − 2x) dx

    0.025 = ∫_0^{c1} 6x dx  ⇒  c1 = √(0.025/3) = 0.0913

    0.025 = ∫_{c2}^1 (2 − 2x) dx  ⇒  c2² − 2c2 + 0.975 = 0

    c2 = (2 ± √0.1)/2 = 0.8419 or 1.1581

The value c2 = 0.8419 is taken and c2 = 1.1581 is deleted, since the pdf p0(x) has support {x | 0 ≤ x ≤ 1}. The MP level α = 0.05 test is

    φ(x) = 1   if x < 0.0913 or x > 0.8419
         = 0   otherwise

The power of the test is

    β = EH1[φ(X)]
      = ∫_0^{0.0913} (1/2) dx + ∫_{0.8419}^2 (1/2) dx = 0.6249
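The two cutoffs and the power of Example 2.12 follow from closed forms, as this short sketch (not from the original text; variable names are illustrative) shows.

import math

alpha = 0.05

# Equal-tail cutoffs under H0: lower tail 3*c1**2 = alpha/2,
# upper tail (1 - c2)**2 = alpha/2.
c1 = math.sqrt(alpha / 2 / 3)            # about 0.0913
c2 = 1 - math.sqrt(alpha / 2)            # about 0.8419

# Power under H1 (uniform density 1/2 on [0, 2]).
power = 0.5 * c1 + 0.5 * (2 - c2)        # about 0.625
print(round(c1, 4), round(c2, 4), round(power, 4))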
0 2 0.8419 2
1
Example 2.13 Find the MP test of size 10 based on a
single observation x of X for testing H0 : X ∼ p0 (x) vs
H1 : X ∼ p1 (x) where

 2 1
if 0 < x < 4
p0 (x) =
 2 if 1
≤x<1
3 4

1 1

2 if 0 < x < 2
and p1 (x) =
3 1

2 if 2 ≤x<1
Most powerful test 64

p1 (x) 1
Consider p0 (x) in the interval 0 < x < 4

p1 (x)
i.e., = 0.25
p0 (x)
p1 (x)
when 14 ≤ x < 12 ⇒ = 0.75
p0 (x)
p1 (x)
when 12 ≤ x < 1 ⇒ = 2.25
p0 (x)

p1 (x)
Thus ↑ x ⇒x>c
p0 (x)
The MP level α = 0.1 is

 1 if x > c
φ(x) =
 0 otherwise
R1 2
where 0.1 = c 3 dx ⇒ c = 0.85. The MP level α = 0.1 test is

 1 if x > 0.85
φ(x) =
 0 otherwiase

The power of the test is β = 0.225


Problems

2.1 Show that the size of the MP randomized test of level


α will be α unless the power is unity in the context of
testing a simple H0 vs simple H1

2.2 Show that any test φ(x) of the form

    φ(x) = 1   if p1(x) > k p0(x)
         = γ   if p1(x) = k p0(x)
         = 0   if p1(x) < k p0(x)

for some k ≥ 0 and 0 < γ < 1 is the MP test of size α for testing H0 : θ = θ0 vs H1 : θ = θ1. If k = ∞, then the MP test is given by

    α = 0 ⇒ φ(x) = 1   if p0(x) = 0
                 = 0   if p1(x) > 0

    and α = 1 ⇒ φ(x) = 1   if p1(x) > 0
                      = 0   if p0(x) = 0

2.3 Find the MP test to test H0 : θ = 2 vs H1 : θ = 1 using


a random observation x of X with size α = 0.05 from

    pθ(x) = θ/x²   if 0 < θ ≤ x < ∞
          = 0      otherwise

Also find the power of the test

    Ans: φ(x) = 1   if x < 2.10526
              = 0   otherwise        and β = 0.75
 0 otherwise

2.4 State the Neyman- Pearson Lemma for randomized tests


and prove the sufficient part of the Lemma

2.5 Show that the power of the MP test is greater than α if


the size of the test is α

2.6 Let φ be a size α test of the Neyman-Pearson type for simple H0 vs simple H1. Let k(α) denote the value of k such that

    φ(x) = 1   if p1(x) > k p0(x)
         = 0   if p1(x) < k p0(x)

Show that if α1 < α2, then k(α2) ≤ k(α1)

2.7 State Neyman Pearson fundamental Lemma. Prove the


existence of the test given by the Lemma
2.8 Let  p(x) = 6x       for 0 ≤ x < 1/2
              = 2 − 2x   for 1/2 ≤ x ≤ 1

    and  g(x) = 1/2   for 0 ≤ x ≤ 2
              = 0     otherwise

H0 specifies the function p(x) and H1 specifies the frequency function g(x). Find the best test of size α = 0.1 for testing H0 vs H1, based on a single observation x on X. Also obtain the power of the test

    Ans: φ(x) = 1   if x < 0.1291 or x > 0.7764
              = 0   otherwise        β = 0.6764

2.9 Obtain the MP randomized test of size 0.25 for testing H0 : θ = 0.4 vs H1 : θ = 0.6, θ being the parameter of a Bernoulli distribution from which three observations are available

    Ans: φ(x) = 1        if x = 3
              = 0.6458   if x = 2
              = 0        if x = 0, 1        β = 0.5

2.10 Let X1 , X2 , X3 , · · · , Xn be the iid random sample


drawn from a normal distribution with mean θ and vari-
ance σ 2 . Let H0 : θ = 0, σ 2 = σ12 vs H1 : θ = θ1 , σ 2 =
σ12 where σ12 is specified. Obtain the MP level α test
for testing H0 vs H1

2.11 Find the MP level α = 0.05 test based on a single


observation x on X for testing H0 : X ∼ p0 (x) vs H1 :
X ∼ p1(x) where

    p0(x) = 2     if 0 < x < 1/4
          = 2/3   if 1/4 ≤ x < 1

    and p1(x) = 1/2   if 0 < x < 1/2
              = 3/2   if 1/2 ≤ x < 1

    Ans: φ(x) = 1   if x > 0.925
              = 0   otherwise        β = 0.1125

2.12 To test H0 : θ = 1 vs H1 : θ = 0, a single observation x of X from

    p(x) = 2θx + 1 − θ   if 0 < x < 1
         = 0             otherwise

is used. Find the power of the MP level α = 0.05 test        Ans: β = 0.2236

2.13 If a sufficient statistic T (X) exists for the family


{pθ , θ ∈ Ω}, θ = {θ0 , θ1 }, then find the Neyman Pearson
MP test is a function of T (X)

2.14 A random sample of size n is available from a uniform distribution over (0, θ) to test the hypothesis H0 : θ = θ0 vs H1 : θ = θ1 (> θ0). Show that the critical functions φ1(x) and φ2(x) given below are MP level α tests for testing H0 vs H1, where Yn is the maximum of the observations

    φ1(x) = 1   if yn > θ0 (1 − α)^{1/n}
          = 0   otherwise

    and φ2(x) = 1   if yn > θ0
              = α   if yn ≤ θ0

Explain why the necessity part of the NPL is not contradicted

2.15 Testing a simple hypothesis H0 against a simple alternative H1, let the powers of the MP tests at levels α and α′ be β and β′ respectively. Then always
(a) β ≥ β′   (b) β ≤ β′   (c) β = β′   (d) β ≠ β′        Ans: (c)

2.16 A test is one sided or two sided depends on


(a) an alternative hypothesis
(b) a composite hypothesis
(c) a simple hypothesis
p1 (x)
(d) the ratio of p0 (x) Ans: (d)

2 .17 Neyman Pearson Lemma gives the method of testing


(a) a simple hypothesis vs a simple alternative
(b) a simple hypothesis vs a composite alternative
(c) a composite hypothesis vs a composite alternative
(d) a composite hypothesis vs a simple hypothesis
Ans:(a)

2.18 If the distribution function under H0 is P0 and under


H1 is P1 and they are equal, then the MP level α test
with power β satisfies
(a) α ≠ β   (b) α > β   (c) β > α   (d) α = β        Ans: (d)

2.19 A family of distributions P = {Pθ , θ ∈ Ω}, is an single


parameter regular exponential family , then the NPL
MP test is a function of
(a) the sufficient statistic
(b) an order statistic
(c) the first order statistic
(d) the maximum order statistic Ans: (a)

2.20 In a foot ball league, the goals scored by home teams


over 30 matches have the following frequency distribu-
tion.

Number
of goals 0 1 2 3 4 5
Frequency 92 121 91 50 19 7

The average goals scored by home team is 1.49. We


want to test H0 : Goal distribution is Poisson based
on observations the value of the χ2 statistic for good-
ness of fit is 1.27. Given χ20.05,6 = 1.64 , χ20.05,5 = 1.15
χ20.05,6 = 12.59 and χ20.95,5 = 11.07, which of the follow-
ing are true?
(a) H0 is rejected at 5% level of significance
(b) χ2 statistic has 5 degrees of freedom under H0 , the
maximum likelihood estimate of the rate parameter of
Poisson is 1.49
(c) Under H0, in a game, the MLE of the probability that the home team will score at most one goal is 2.49e^{−1.49}
(d) H0 is accepted at 5% level of significance


Ans: (a) and (c)

2.21 Let {X1, X2, · · · , Xn} be a random sample from the distribution with pdf p(x). Consider the following testing problem: H0 : p0(x) = (1/√(2π)) e^{−x²/2}, −∞ < x < ∞ vs H1 : p1(x) = (1/π) 1/(1 + x²), −∞ < x < ∞. Using the NPL, which of the following statements are true?
(a) The rejection region is Σ_{i=1}^n (Xi² − 1)² ≤ c
(b) The rejection region is a function of |X1|, |X2|, · · · , |Xn|
(c) The rejection region is a function of X1², X2², · · · , Xn²
(d) The rejection region is Σ_{i=1}^n (|Xi| − 1)² ≥ c
Ans: (b) and (c)

2.22 In the content of testing of statistical hypotheses which


one of the following statement is true?
(a) When testing a simple hypothesis H0 vs simple H1
the likelihood ratio test leads to the MP test
(b) When testing a simple hypothesis H0 vs simple H1
P {Rejecting H0 | H0 is true}
+ P { Accepting H0 | H1 is true} = 1
(c) For testing a simple hypothesis H0 vs simple H1
randomized test is used to achieve the desired level of
the power of the test
(d) UMP tests for testing a simple hypothesis H0 vs
composite H1 always exist Ans: (a)

2.23 Suppose X and Y are two independent exponential ran-


dom variables with mean θ and 2θ respectively where

θ is unknown. Which of the following statements are


true?
(a) Right tailed test based on X + 2Y is UMP for test-
ing H0 : θ = 1 vs H1 : θ < 1
(b) Left tailed test based on 2X + Y is UMP for testing
H0 : θ = 1 vs H1 : θ < 1
(c) UMP test does not exist for testing H0 : θ = 1 vs
H1 : θ 6= 1
(d) all the above Ans: (d)

2.24 Suppose X has pdf pθ (x) where θ ∈ {0, 1}. Also


p0(x) = 1 if 0 < x < 1
and p1(x) = 1/(2√x) if 0 < x < 1
For a single observation x on X,to test H0 : θ = 0 vs
H1 : θ = 1 at level α, 0 < α < 1, the MP test
(a) rejects H0 if x > 1 − α
(b) rejects H0 if x < α

(c) rejects H0 if x < α

(d) has power α Ans : (b) and (d)

2.25 For a random variable X with E[X] > 0, the coeffi-


σx
cient of variation ρ is defined as ρ = E[X] where σx2 is
the variance of X. Suppose X1 , X2 , · · · , Xn are inde-
pendent samples from a normal population with mean
2 and unknown coefficient of variation ρ. It is desired
to test H0 : ρ ≤ 5 vs H1 : ρ > 5. The likelihood ratio
test is of the form reject H0 if
(a) Σ(Xi − 2)² > c
(b) Σ(Xi − 2)² < c
(c) Σ(Xi − X̄)² > c
(d) Σ(Xi − X̄)² < c        Ans: (a)

2.26 Suppose in a one-way analysis of variance model, the sum of squares of all the group means is 0 (assume that all the observations are not the same). Then the value of the usual F-statistic for testing the equality of means is
(a) undefined   (b) 0   (c) 1   (d) ∞        Ans: (b)

2.27 Let there be a single observation x on X from a Cauchy distribution with pdf

    pθ(x) = (1/π) 1/(1 + (x − θ)²)   if −∞ < x < ∞, θ ∈ (−∞, ∞)
          = 0                         otherwise

For testing H0 : θ = −1 vs H1 : θ = 0, the following test function is used

    φ(x) = 1   if x/√(1 + x²) > c
         = 0   otherwise

What is the value of c so that the power of the test is 0.5?
(a) A solution of tan⁻¹ √(c/(1 + c)) = π/3
(b) π/4   (c) π/6   (d) tan⁻¹(1/2)        Ans: (a)

2.28 Suppose the pmf of a random variable X under the parameters θ = θ0 and θ = θ1 (≠ θ0) is given by

    x         0      1      2      3
    pθ0(x)    0.01   0.04   0.5    0.45
    pθ1(x)    0.02   0.08   0.4    0.5

Define a test φ such that

    φ(x) = 1   if x = 0, 1
         = 0   if x = 2, 3

For testing H0 : θ = θ0 vs H1 : θ = θ1 the test φ is
(a) a MP test at level α = 0.05
(b) a likelihood ratio test at level α = 0.05
(c) an unbiased test
(d) a test of size 0.05        Ans: (a), (b), (c) and (d)

2.29 Consider the problem of testing H0 : X ∼ Normal with


1
mean 0 and variance 2 vs H1 : X ∼ Cauchy( 0, 1) Then
for testing H0 vs H1 the MP level α test
(a) Does not exist
(b) Reject H0 iff |x| > c1 , where c1 is such that the test
is of size α
(c) Reject H0 iff |x| < c2 , where c2 is such that the test
is of size α
(d) Rejects H0 iff |x| < c3 or |x| > c4 , c3 < c4 where c3
and c4 are such that the test is of size α Ans: (b)

2.30 To test the equality of effects of 10 schools against all alternatives, we take a random sample of 5 students from each school, note their marks in a common examination and calculate the F statistic. The between sum of squares and total sum of squares are found to be 180 and 500 respectively. What is the p value for the standard F test?
(a) P {F4,45 ≥ 1.5}
(b) P {F9,40 ≥ 1.6}
(c) P {F9,40 ≥ 2.5}
(d) P {F4,45 ≥ 3.6} Ans : (c)

2.31 Let X1 , X2 , · · · , Xn be a random sample from an un-


known continuous distribution F with median θ. Let
Tn count the number of i for which Xi > 0, i =
1, 2, 3, · · · , n. Consider the problem of testing H0 : θ =
0 vs H1 : θ = −1 based on the test statistic Tn . Which
of the following are true?
(a) The distribution of Tn is independent of F under H1

(b) Left tailed test based on Tn is consistent against H1

(c) Left tailed test based on Tn is unbiased against H1


(d) Left tailed test based on Tn has the p value P {Tn ≤
0} under H1 Ans: (c)

2.32 Suppose pθi (xi ) ∼ N (θi , σi2 ), i = 1, 2 are independently


distributed. Under the prior distribution θ1 and θ2 are
iid N (µ, τ 2 ) where σ 2 , µ and τ 2 are known.Then which
of the following is true about the marginal distributions
of X1 and X2 ?
(a) X1 and X2 are iid N (µ, τ 2 + σ 2 )
(b) X1 and X2 are not normally distributed
(c) X1 and X2 are not iid N (µ, τ 2 + σ 2 )
(d) X1 and X2 are normally distributed but not iden-
tically distributed Ans: (a)
3. APPLICATIONS OF NEYMAN PEARSON LEMMA

3.1 Introduction

Most parametric families of distributions depend on one or more continuous parameters. For a one sided hypothesis H0 : θ ≤ θ0 or H0 : θ ≥ θ0, the MP test of H0 vs H1 in general depends on the particular value of the alternative, and so it need not be a UMP test. A UMP test does exist if the family possesses the Monotone Likelihood Ratio (MLR) property. A few examples of families with MLR that admit UMP one sided tests are given below
In the NPL, the MP test depends on the ratio pθ1(x)/pθ0(x). If this ratio is an increasing function of t(x), then the optimum critical region {pθ1(x)/pθ0(x) > k} can be expressed as {t(x) > c}, which is free of the value of the alternative hypothesis H1, and hence there exists a UMP level α test for testing H0 : θ ≤ θ0 vs H1 : θ > θ0. In general the UMP test does not exist; it exists when the additional assumption of MLR is satisfied. Thus UMP one sided tests exist only for the few families of distributions which have the MLR property

3.2 Monotone Likelihood Ratio Property

In many statistical decision problems, the observations can be summarized in a single sufficient statistic, such that the likelihood ratio for any two distributions in the family under consideration is a monotone function of that statistic. The MLR property is given below
Definition A density or mass function pθ(x) of Pθ is said to have MLR with respect to a real valued function t(x) = t if, for θ0 < θ1, pθ1(x)/pθ0(x) is a non-decreasing function of t

i.e., pθ1(x)/pθ0(x) = g(t) ↑ t,  that is, g(t1) ≤ g(t2) whenever t1 ≤ t2

3.3 Exponential Family of Distributions

A family {pθ(x), θ ∈ Ω} of probability mass or density functions of the form

    pθ(x) = c(θ) e^{Q(θ) t(x)} h(x)   if a < x < b
          = 0                          otherwise

is said to be a regular exponential family of probability functions, if

• the range of the family of distributions a < x < b is independent of the parameter θ

• Q(θ) is a non-trivial continuous function of θ

• t(x) is a non-trivial function of x

• h(x) is a continuous function of x in a < x < b

Definition If θ is a single value of the parameter space, then pθ(x) = c(θ) e^{Q(θ) t(x)} h(x) is a single parameter exponential family
Theorem 3.1 A one parameter exponential family with mass or density function of the form

    pθ(x) = c(θ) e^{Q(θ) t(x)} h(x)

has the MLR property provided Q(θ) is a strictly monotonic function of θ
Proof Assume Q(θ) ↑ θ, i.e., Q(θ1) ≤ Q(θ2) for θ1 ≤ θ2

    pθ2(x)/pθ1(x) = [c(θ2) e^{Q(θ2) t(x)} h(x)] / [c(θ1) e^{Q(θ1) t(x)} h(x)]
                  = (c(θ2)/c(θ1)) e^{[Q(θ2) − Q(θ1)] t(x)}
                  = g(t) ↑ t   if Q(θ2) − Q(θ1) ≥ 0

Assume Q(θ) ↓ θ, i.e., Q(θ1) ≥ Q(θ2) for θ1 ≤ θ2

    pθ2(x)/pθ1(x) = (c(θ2)/c(θ1)) e^{[Q(θ2) − Q(θ1)] t(x)}
                  = g(t) ↓ t   if Q(θ2) − Q(θ1) ≤ 0

Thus pθ(x) has the MLR property
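As a quick numerical illustration (not from the original text; scipy is assumed and the chosen parameter values are arbitrary), the sketch below checks the MLR property for a binomial family, which is a one parameter exponential family with t(x) = x.

from scipy.stats import binom

n, theta1, theta2 = 10, 0.3, 0.6          # theta1 < theta2, illustrative values

ratios = [binom.pmf(x, n, theta2) / binom.pmf(x, n, theta1)
          for x in range(n + 1)]

# MLR in t(x) = x: the likelihood ratio should be non-decreasing in x.
print(all(r2 >= r1 for r1, r2 in zip(ratios, ratios[1:])))   # True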

3.4 Method of obtaining UMP tests

A condition under which the UMP test exists is that the family of distributions being considered possesses the MLR property. Karlin and Rubin have proposed the method of obtaining the UMP test. The Karlin-Rubin Theorem 3.2 is an extension of the Neyman-Pearson Lemma to composite hypotheses
Theorem 3.2 Let θ be a real parameter and let the random variable X have probability density function pθ(x) with MLR in t(x). For testing H0 : θ ≤ θ0 vs H1 : θ > θ0, there exists a UMP test which is given by

    φ(t) = 1   if t(x) > c
         = γ   if t(x) = c
         = 0   otherwise
where c and γ are determined by Eθ0[φ(T)] = α

Proof Consider an auxiliary problem by fixing θ1 > θ0 and testing H0′ : θ = θ0 vs H1′ : θ = θ1
Applying the NPL to H0′ vs H1′, the MP test rejects for large values of t(x), since pθ1(x)/pθ0(x) ↑ t(x); i.e., it rejects when t(x) > c
By the existence part of the NPL the MP test is given by

    φ(t) = 1   if t(x) > c
         = γ   if t(x) = c
         = 0   otherwise

where c and γ are determined by Eθ0[φ(T)] = α

Now to show that the test function φ is UMP, i.e., to show that among all tests with Eθ[φ(X)] ≤ α for θ ∈ ΩH0, this test maximizes Eθ[φ(X)] for every θ ∈ ΩH1
From the corollary of the NPL, it is clear that β(θ1) > β(θ0) whenever θ1 > θ0 and β(θ) < 1
By repeating these arguments for any θ′ < θ′′,
⇒ β(θ′′) > β(θ′) whenever θ′ < θ′′
⇒ β(θ) is a strictly increasing function of θ when β(θ) < 1
Since β(θ) ↑ θ, the test φ satisfies Eθ[φ(T)] ≤ α for θ ≤ θ0
The class of tests satisfying Eθ[φ(T)] ≤ α for all θ ≤ θ0 is contained in the class satisfying Eθ0[φ(T)] ≤ α
The given test φ maximizes Eθ1[φ(T)] = β(θ1) within the wider class {φ : Eθ0[φ(T)] ≤ α}, and hence it also maximizes β(θ1) subject to Eθ[φ(T)] ≤ α for θ ≤ θ0
Since the test φ does not involve θ1, this holds ∀ θ1 ∈ ΩH1
∴ φ is the UMP test for testing H0 vs H1
Note To test H0 : θ ≥ θ0 vs H1 : θ < θ0, the inequality sign in the test function φ is reversed
Corollary 3.1 Let θ be a real parameter, and let X have probability density or mass function

    pθ(x) = c(θ) e^{Q(θ) t(x)} h(x)

where Q(θ) is strictly monotone. Then there exists a UMP test φ for testing H0 : θ ≤ θ0 vs H1 : θ > θ0
Case (i) If Q(θ) ↑ θ, the UMP level α test is given by

    φ(t) = 1   if t(x) > c
         = γ   if t(x) = c
         = 0   otherwise

where c and γ are determined by Eθ0[φ(T)] = α
Case (ii) If Q(θ) ↓ θ, the UMP level α test is given by

    φ(t) = 1   if t(x) < c
         = γ   if t(x) = c
         = 0   otherwise

where c and γ are determined by Eθ0[φ(T)] = α


Proof: Case (i)
Suppose Q(θ) is increasing in θ and θ1 > θ0; then

    pθ1(x)/pθ0(x) = (c(θ1)/c(θ0)) e^{[Q(θ1) − Q(θ0)] t(x)}
    ⇒ pθ1(x)/pθ0(x) ↑ t(x),  since Q(θ1) − Q(θ0) > 0 for θ1 > θ0

∴ The UMP level α test is given by

    φ(t) = 1   if t(x) > c
         = γ   if t(x) = c
         = 0   otherwise

where c and γ are determined by Eθ0[φ(T)] = α

Case (ii)
Suppose Q(θ) is a decreasing function of θ and θ1 < θ0; then

    pθ0(x)/pθ1(x) = (c(θ0)/c(θ1)) e^{−[Q(θ1) − Q(θ0)] t(x)}

Since Q(θ) ↓ θ ⇒ Q(θ1) > Q(θ0) for θ1 < θ0
⇒ pθ0(x)/pθ1(x) ↑ −t(x) if Q(θ1) − Q(θ0) > 0

∴ The UMP level α test is given by

    φ(t) = 1   if −t(x) > c′
         = γ   if −t(x) = c′
         = 0   otherwise

or

    φ(t) = 1   if t(x) < c
         = γ   if t(x) = c   where c = −c′
         = 0   otherwise

where c and γ are determined by Eθ0[φ(T)] = α and c = −c′

3.5 MLR of a location family of distributions

Suppose that f(x − θ) is the pdf of the random variable X ∀ x ∈ ℜ. Let pθ(x) = f(x − θ), θ ∈ ℜ. Then pθ(x) has MLR in x if and only if f is log-concave. First prove that pθ(x) has MLR in x if f is log-concave. Note that MLR holds if and only if

    f(x1 − θ2)/f(x1 − θ1) ≤ f(x2 − θ2)/f(x2 − θ1)   for all x1 < x2, θ1 < θ2

This holds if and only if

    log f(x1 − θ2) − log f(x1 − θ1) ≤ log f(x2 − θ2) − log f(x2 − θ1)

i.e., log f(x2 − θ1) + log f(x1 − θ2) ≤ log f(x1 − θ1) + log f(x2 − θ2)   (3.1)

Let t = (x2 − x1)/(x2 − x1 + θ2 − θ1), so that

    x1 − θ1 = t(x1 − θ2) + (1 − t)(x2 − θ1)
    x2 − θ2 = (1 − t)(x1 − θ2) + t(x2 − θ1)

Thus log-concavity of f implies that

    log f(x1 − θ1) ≥ t log f(x1 − θ2) + (1 − t) log f(x2 − θ1)      (3.2)
    log f(x2 − θ2) ≥ (1 − t) log f(x1 − θ2) + t log f(x2 − θ1)      (3.3)

Adding (3.2) and (3.3) gives (3.1). Thus pθ(x) has the MLR property
Conversely, assume pθ(x) has the MLR property; to prove that f is log-concave, let a < b be any real numbers, and set

    x1 − θ2 = a,  x2 − θ1 = b  and  x1 − θ1 = x2 − θ2

Then x1 − θ1 = x2 − θ2 = (a + b)/2 and (3.1) becomes

    log f(a) + log f(b) ≤ 2 log f((a + b)/2)   ∀ a, b ∈ ℜ

⇒ log f is (midpoint) concave, i.e., f(x − θ) is log-concave
Example 3.1 Show that no UMP test exists for testing H0 : θ = θ0 vs H1 : θ ≠ θ0 in a one parameter exponential family of distributions
The one parameter exponential family of densities is

    pθ(x) = c(θ) e^{Q(θ) t(x)} h(x)

Without loss of generality, assume that Q(θ) = θ, since Q(θ) is monotonic in θ. Let H1′ : θ = θ′ (≠ θ0)
By the NPL, the MP level α test for testing H0′ : θ = θ0 vs H1′ : θ = θ′ (≠ θ0) has the form

    φ(x) = 1   if pθ′(x)/pθ0(x) > k
         = γ   if pθ′(x)/pθ0(x) = k
         = 0   otherwise

where k and γ are determined so that Eθ0[φ(X)] = α

Now φ(x) = 1 ⇒ c(θ′) e^{θ′ t(x)} / (c(θ0) e^{θ0 t(x)}) > k
         ⇒ (θ′ − θ0) t(x) > c,   where c = log k + log[c(θ0)/c(θ′)]

If θ′ > θ0 and φ(x) = 1, then t(x) > c1, and if θ′ < θ0 and φ(x) = 1, then t(x) < c2
Thus the form of φ(x) depends on θ′, so the MP test depends on the particular alternative. Hence the UMP test does not exist for H0 : θ = θ0 vs H1 : θ ≠ θ0
Example 3.2 A production process constitutes independent trials with constant probabilities θ and 1 − θ of an item being defective (recorded as 1) and non-defective (recorded as 0) respectively. The number of defectives X in a sample of size n is distributed as b(n, θ). Find the UMP level α test for testing the hypothesis H0 : θ ≥ θ0 vs H1 : θ < θ0. Also find the power function. The pmf of the binomial distribution is

    pθ(x) = (n!/(x!(n − x)!)) θ^x (1 − θ)^{n−x}   if x = 0, 1, 2, 3, · · · , n
          = 0                                      otherwise

i.e., pθ(x) = (1 − θ)^n e^{x log[θ/(1−θ)]} n!/(x!(n − x)!)
            = c(θ) e^{Q(θ) t(x)} h(x)

where c(θ) = (1 − θ)^n, Q(θ) = log[θ/(1 − θ)], t(x) = x and h(x) = n!/(x!(n − x)!). Further e^{−Q(θ)} = f(θ) = (1 − θ)/θ = 1/θ − 1 and f′(θ) = −1/θ² < 0. Thus f(θ) ↓ θ, i.e., Q(θ) ↑ θ, so the family has MLR in x. Since the alternative is H1 : θ < θ0, the UMP level α test rejects for small values of t(x) = x:

    φ(x) = 1   if x < c
         = γ   if x = c
         = 0   otherwise

where c and γ are determined by Eθ0[φ(X)] = α

i.e., Pθ0{X < c} + γ Pθ0{X = c} = α

    Σ_{x=0}^{c−1} (n!/(x!(n − x)!)) θ0^x (1 − θ0)^{n−x} + γ (n!/(c!(n − c)!)) θ0^c (1 − θ0)^{n−c} = α

The power function is

    β(θ) = EH1[φ(X)]
         = Pθ{X < c} + γ Pθ{X = c}   ∀ θ < θ0
         = Σ_{x=0}^{c−1} (n!/(x!(n − x)!)) θ^x (1 − θ)^{n−x} + γ (n!/(c!(n − c)!)) θ^c (1 − θ)^{n−c}

Example 3.3 Let X1, X2, · · · , Xn be a random sample from a geometric distribution with parameter θ. The hypothesis to test is H0 : θ ≤ θ0 vs H1 : θ > θ0. Find the UMP level α test and the power function
The pmf of the geometric random sample Xi, i = 1, 2, 3, · · · , n is

    pθ(xi) = θ(1 − θ)^{xi}   if xi = 0, 1, 2, 3, · · · , 0 < θ < 1
           = 0               otherwise

Consider the likelihood of X1, X2, · · · , Xn

    pθ(x) = Π_{i=1}^n θ(1 − θ)^{xi}
          = θ^n (1 − θ)^{Σ xi}
          = θ^n e^{log(1−θ) Σ xi}
          = c(θ) e^{Q(θ) t(x)} h(x)

where c(θ) = θ^n, Q(θ) = log(1 − θ), and t(x) = Σ_{i=1}^n xi
Q′(θ) = −1/(1 − θ) < 0 ⇒ Q(θ) ↓ θ
∴ The UMP test is

    φ(t) = 1   if t(x) < c
         = γ   if t(x) = c
         = 0   otherwise

where c and γ are determined by Eθ0[φ(T)] = α
T = Σ_{i=1}^n Xi ∼ negative binomial distribution with

    pθ(t) = ((n + t − 1)!/((n − 1)! t!)) θ^n (1 − θ)^t   if t = 0, 1, 2, 3, · · ·
          = 0                                            otherwise

    Eθ0[φ(T)] = α
    Pθ0{T < c} + γ Pθ0{T = c} = α
    Σ_{t=0}^{c−1} ((n + t − 1)!/((n − 1)! t!)) θ0^n (1 − θ0)^t + γ ((n + c − 1)!/((n − 1)! c!)) θ0^n (1 − θ0)^c = α

The power function of the test is

    β(θ) = EH1[φ(T)]
         = Pθ{T < c} + γ Pθ{T = c}   ∀ θ > θ0
         = Σ_{t=0}^{c−1} ((n + t − 1)!/((n − 1)! t!)) θ^n (1 − θ)^t + γ ((n + c − 1)!/((n − 1)! c!)) θ^n (1 − θ)^c
Example 3.4 Find the UMP test for testing H0 : D ≤ 2 vs H1 : D > 2 for the hypergeometric distribution with parameters (N = 20, D) based on a sample of size 5 at level α = 0.05. Also find the power at D = 5
The pmf of the hypergeometric distribution is

    PD{X = x} = C(D, x) C(N − D, n − x) / C(N, n)   if x = 0, 1, 2, · · · , min(n, D)
              = 0                                    otherwise

where N is the population size, n is the sample size and D is the number of defectives in the population. Consider the likelihood ratio

    pD+1(x)/pD(x) = [C(D + 1, x) C(N − D − 1, n − x)] / [C(D, x) C(N − D, n − x)]
                  = (D + 1)(N − D − n + x) / ((D + 1 − x)(N − D)) ↑ x

Here D = 2, N = 20, n = 5, x = 0, 1, 2
When x = 0 ⇒ p3(0)/p2(0) = 13/18 = 0.7222
When x = 1 ⇒ p3(1)/p2(1) = 7/6 = 1.1667
When x = 2 ⇒ p3(2)/p2(2) = 5/2 = 2.5000
Thus p3(x)/p2(x) ↑ x. ∴ The UMP test is

    φ(x) = 1   if x > c
         = γ   if x = c
         = 0   otherwise

where c and γ are determined by EH0[φ(X)] = 0.05
Since p3(x)/p2(x) ↑ x, choose c = 2 (the largest value X can take under H0, since x is an integer). The UMP test is given by

    φ(x) = 1   if x > 2
         = γ   if x = 2
         = 0   otherwise

where γ is determined by EH0[φ(X)] = 0.05

    PH0{X > 2} + γ PH0{X = 2} = 0.05

Under H0 (D = 2), PH0{X > 2} = 0 and PH0{X = 2} = 0.05265, so γ = 0.05/0.05265 = 0.95003 and 0 ≤ γ ≤ 1
∴ φ is a test function. The UMP level α = 0.05 test is

    φ(x) = 1         if x > 2
         = 0.95003   if x = 2
         = 0         otherwise

The power of the test at D = 5 is P5{X > 2} + 0.95003 × P5{X = 2} ≈ 0.35
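These hypergeometric probabilities are easy to check numerically. The sketch below (not from the original text; scipy is assumed and the variable names are illustrative) reproduces the randomization constant and the power at D = 5.

from scipy.stats import hypergeom

N, n, D0, D1, alpha = 20, 5, 2, 5, 0.05

# Under H0 (D = 2), X cannot exceed 2, so the test rejects with
# probability gamma when x = 2 and always when x > 2.
p_eq2 = hypergeom.pmf(2, N, D0, n)          # about 0.0526
gamma = alpha / p_eq2                       # about 0.95

power = (hypergeom.sf(2, N, D1, n)          # P_{D=5}(X > 2)
         + gamma * hypergeom.pmf(2, N, D1, n))
print(round(gamma, 4), round(power, 3))     # about 0.95, 0.35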


Example 3.5 Let X1 , X2 , X3 , · · · , X15 be iid with exponen-
tial pdf 
 θe−θx if x > 0, θ > 0
pθ (x) =
 0 otherwise
The hypothesis is to test that H0 : θ ≤ 2 vs H1 : θ > 2. Find
the UMP level α = 0.05 test and sketch the power curve
Assume H0 : θ ≤ θ0 vs H1 : θ > θ0 . Consider the
likelihood of X1 , X2 , · · · , Xn

pθ (x) = pθ (xi , x2 , · · · , xn )
P
= θn e−θ xi

= c(θ)eQ(θ)t(x) h(x)

where c(θ) = θn , Q(θ) = −θ, t(x) =


P
xi and h(x) = 1
Q(θ) ↓ θ ∀ θ > θ0 . The UMP level α test is

 1 if t < c
φ(t) =
 0 otherwise

where c is determined by Eθ0 [φ(X)] = α


Let T = Σ_{i=1}^n Xi ∼ G(n, 1/θ) and its pdf is

    pθ(t) = (θ^n/Γ(n)) e^{−θt} t^{n−1},   0 < t < ∞

Let 2θt = y, so dt = dy/(2θ)

    ∴ p(y) = (θ^n/Γ(n)) e^{−y/2} (y/(2θ))^{n−1} (1/(2θ))
           = (1/(2^n Γ(n))) e^{−y/2} y^{n−1}

This is the pdf of the χ² distribution with 2n degrees of freedom

    α = Eθ0[φ(T)] = Pθ0{T < c}
      = ∫_0^c (θ0^n/Γ(n)) e^{−θ0 t} t^{n−1} dt
      = ∫_0^{2cθ0} (1/(2^n Γ(n))) e^{−y/2} y^{n−1} dy

From the χ² table with 2n = 30 degrees of freedom, 18.493 is the ordinate corresponding to the lower tail area 0.05,

i.e., ∫_0^{18.493} p(χ²) dχ² = 0.05 at 30 degrees of freedom, where n = 15 and θ0 = 2

∴ 2cθ0 = 18.493 ⇒ c = 4.6232. The UMP test is

    φ(t) = 1   if t < 4.6232
         = 0   otherwise

The power function is

    β(θ) = EH1[φ(T)]
         = Pθ{T < 4.6232}   ∀ θ ≥ 2
         = ∫_0^{4.6232} (θ^n/Γ(n)) e^{−θt} t^{n−1} dt   ∀ θ ≥ 2
         = ∫_0^{2 × 4.6232 θ} (1/(2^n Γ(n))) e^{−y/2} y^{n−1} dy

Using the χ² table for different values of θ ≥ 2, the values are given in Table 3.1

Table 3.1 UMP test power function values

θ      2.0    2.2    2.5    2.7    3.1    3.6
β(θ)   0.05   0.10   0.20   0.30   0.50   0.70
θ      3.9    4.4    4.7    5.2    5.5    6.0
β(θ)   0.80   0.90   0.95   0.98   0.99   1.0

Figure 3.1 is the power curve of the UMP test

Figure 3.1 UMP test power curve
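The cutoff and a few power values of Example 3.5 can be reproduced with the following sketch (not part of the original text; scipy is assumed, variable names are illustrative).

from scipy.stats import chi2, gamma

n, theta0, alpha = 15, 2.0, 0.05

# T = sum(X_i); 2*theta*T ~ chi-square with 2n df, so the cutoff solves
# P_{theta0}(T < c) = alpha.
c = chi2.ppf(alpha, df=2 * n) / (2 * theta0)        # about 4.62

def power(theta):
    # P_theta(T < c) with T ~ Gamma(shape=n, scale=1/theta)
    return gamma.cdf(c, a=n, scale=1 / theta)

print(round(c, 4), round(power(2.0), 2), round(power(3.1), 2), round(power(6.0), 2))
# about 4.6232, 0.05, 0.5, 1.0 (compare Table 3.1)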

Example 3.6 Obtain the UMP test at level α for testing H0 : θ ≤ θ0 vs H1 : θ > θ0 for the pdf

    pθ(x) = 1/θ   if 0 < x < θ
          = 0     otherwise

based on a random sample of size n
Let X1, X2, X3, · · · , Xn be a random sample of size n and write θ̂(x) = max{x1, x2, · · · , xn}. Take θ1 ∈ (0, θ0) and θ2 ∈ (θ0, θ̂(x)), so that θ1 < θ2, and consider the likelihood ratio

    pθ2(x)/pθ1(x) = (θ1/θ2)^n   if θ̂(x) ≤ θ1,   and = ∞ if θ1 < θ̂(x) ≤ θ2

so the ratio is non-decreasing in θ̂(x) and the family has the MLR property in θ̂(x)
For illustration, take the hypothesis H0 : θ ≤ 2 vs H1 : θ > 2, a sample of size n = 4 and random sample observations {x1 = 2, x2 = 3, x3 = 4, x4 = 5}. Here θ̂(x) = max{2, 3, 4, 5} = 5, so that θ1 ∈ (0, 2) and θ2 ∈ (2, 5). For fixed θ0 = 2 and θ1 < θ2, choose θ1 = 2; then (θ1/θ2)^n = (2/5)^4 = 0.0256. Similarly for n = 3, n = 2 the ratios are given in Table 3.2

Table 3.2 MLR ratio values

n    θ1   Observations     θ2   (θ1/θ2)^n
4    2    {2, 3, 4, 5}     5    0.0256
3    2    {2, 3, 4}        4    0.1250
2    2    {2, 3}           3    0.4444

Thus pθ(x1, x2, · · · , xn) has the MLR property. ∴ There exists a UMP level α test

    φ(t) = 1   if t > c
         = 0   otherwise

where c is determined by EH0[φ(T)] = α. Denote T = max{X1, X2, X3, · · · , Xn}. The pdf of T is given by

    pθ(t) = n t^{n−1}/θ^n   if 0 < t < θ
          = 0               otherwise

    Eθ0[φ(T)] = α,  Pθ0{T > c} = α
    ∫_c^{θ0} (n/θ0^n) t^{n−1} dt = α  ⇒  c = θ0 (1 − α)^{1/n}

    φ(t) = 1   if t > θ0 (1 − α)^{1/n}
         = 0   otherwise

The power function is

    β(θ) = EH1[φ(T)]
         = Pθ{T > θ0 (1 − α)^{1/n}}   ∀ θ ≥ θ0
         = ∫_{θ0 (1−α)^{1/n}}^θ (n/θ^n) t^{n−1} dt
         = 1 − (θ0/θ)^n (1 − α),   ∀ θ ≥ θ0

Example 3.7 Let X be a random variable with pmf

    pθ(x) = θ^x (1 − θ)^{1−x}   if x = 0, 1
          = 0                   otherwise

The hypothesis to test is H0 : θ = 1/4 vs H1 : θ > 1/4, by taking a random sample of size n = 10 from a production process and rejecting H0 : θ = 1/4 iff Σ_{i=1}^{10} xi > 8. Find the size and power function of the test
Denote T = Σ_{i=1}^{10} Xi ∼ b(10, θ) with pmf

    pθ(t) = C(10, t) θ^t (1 − θ)^{10−t}   if t = 0, 1, 2, · · · , 10
          = 0                              otherwise

The UMP test function φ(t) is given by

    φ(t) = 1   if t > 8
         = 0   otherwise

The size of the test is

    α = EH0[φ(T)]
      = PH0{T > 8}
      = PH0{T = 9} + PH0{T = 10}
      = C(10, 9) (1/4)^9 (3/4) + (1/4)^{10} = 31/4^{10}

The UMP test power function is

    β(θ) = EH1[φ(T)]
         = PH1{T > 8}
         = PH1{T = 9} + PH1{T = 10}
         = θ^9 (10 − 9θ),   for θ ≥ 1/4

Example 3.8 Let X1, X2, X3, · · · , X25 be a random sample of size n = 25 from a normal distribution N(θ, 100). Find the UMP level α = 0.1 test for testing the statistical hypothesis H0 : θ = 75 vs H1 : θ > 75. Also find the power of the test at θ = 80 and draw the power curve
The normal pdf with mean θ and σ² = 100 is given by

    pθ(x) = (1/(10√(2π))) e^{−(x−θ)²/200}   if −∞ < x < ∞
          = 0                                otherwise

The joint pdf of X1, X2, X3, · · · , X25 is

    pθ(x) = (1/(10√(2π)))^{25} e^{−(1/200) Σ (xi−θ)²}
          = (1/(10√(2π)))^{25} e^{−(1/200) Σ xi²} e^{(1/4) x̄ θ} e^{−(1/8) θ²}
          = c(θ) e^{Q(θ) t(x)} h(x)

where t(x) = x̄, c(θ) = e^{−(1/8) θ²} and h(x) = (1/(10√(2π)))^{25} e^{−(1/200) Σ xi²}

    Q(θ) = θ/4 ↑ θ

The UMP test is given by

    φ(t) = 1   if t > c
         = 0   otherwise
where c is determined by

    α = EH0[φ(T)]
      = PH0{X̄ > c}
      = PH0{ (X̄ − θ0)/(σ/√n) > (c − θ0)/(σ/√n) }
      = PH0{ Z > (c − θ0)√n/σ }
      = ∫_{(c−θ0)√n/σ}^∞ p(z) dz                          (3.4)

where Z = (X̄ − θ0)/(σ/√n) ∼ N(0, 1)
From the normal table, 1.28 is the ordinate value corresponding to the upper tail area 0.1

i.e., ∫_{1.28}^∞ p(z) dz = 0.1                            (3.5)

Equations (3.4) and (3.5) give (c − θ0)√n/σ = 1.28
⇒ c = θ0 + 1.28 σ/√n. Here θ0 = 75, σ = 10 and n = 25
∴ c = 75 + 1.28 × 2 = 77.56. The UMP level α = 0.1 test is

    φ(x̄) = 1   if x̄ > 77.56
          = 0   otherwise

The power function is

    β(θ) = EH1[φ(X̄)]
         = PH1{X̄ > θ0 + zα σ/√n}

where zα is given by ∫_{zα}^∞ p(z) dz = α (using the normal table)

         = PH1{ (X̄ − θ)/(σ/√n) > zα − (θ − θ0)√n/σ }   for θ ≥ 75
         = ∫_{zα − (θ−θ0)√n/σ}^∞ p(z) dz
         = ∫_{(77.56−θ)/2}^∞ p(z) dz

where zα = 1.28, θ0 = 75, σ = 10 and n = 25
Figure 3.2 shows the power curve of the UMP level α = 0.1 test for the hypothesis H0 : θ = 75 vs H1 : θ > 75

Figure 3.2 UMP test power curve

The UMP test power function values are given in Table 3.3

Table 3.3 UMP test power function values

θ      75.0   75.5   76.0   76.5   77.0   77.5
β(θ)   0.10   0.15   0.22   0.30   0.40   0.49
θ      78.0   78.5   79.0   79.5   80.0   81.5
β(θ)   0.59   0.68   0.76   0.83   0.88   0.98

Example 3.9 Let X1, X2, X3, · · · , Xn be a random sample from a normal distribution N(θ, 16). Find the sample size n and the UMP test of H0 : θ = 25 vs H1 : θ < 25 with power function β(θ) such that approximately β(25) = 0.10 and β(23) = 0.90
The joint pdf of the normal distribution with parameter θ and variance σ² = 16 is

    pθ(x) = (1/(4√(2π)))^n e^{−(1/32) Σ (xi−θ)²}
          = c(θ) e^{Q(θ) t(x)} h(x)

where Q(θ) = nθ/16, t(x) = x̄, c(θ) = e^{−nθ²/32} and h(x) = (1/(4√(2π)))^n e^{−(1/32) Σ xi²}
Q(θ) ↑ θ; since the alternative is θ < 25, the UMP test rejects for small values of x̄:

    φ(x̄) = 1   if x̄ < c
          = 0   otherwise

To find n and c subject to β(25) = 0.10 and β(23) = 0.90:

    β(θ) = Eθ[φ(X̄)]
         = Pθ{X̄ < c}
         = Pθ{ (X̄ − θ)/(σ/√n) < (c − θ)/(σ/√n) }
         = Pθ{ Z < (c − θ)√n/σ }   where Z = (X̄ − θ)/(σ/√n)
         = Φ((c − θ)√n/σ)

Given β(25) = 0.10,

    ∫_{−∞}^{(c−25)√n/σ} p(z) dz = 0.10                    (3.6)
    But ∫_{−∞}^{−1.28} p(z) dz = 0.10                     (3.7)

From equations (3.6) and (3.7)

    (c − 25)√n/σ = −1.28                                  (3.8)

and β(23) = 0.90, i.e., Φ((c − 23)√n/σ) = 0.90

    (c − 23)√n/σ = 1.28                                   (3.9)

Solving equations (3.8) and (3.9) with σ = 4 gives n = 26.265, i.e., n = 26 or 27, and c = 24.001. The UMP test is

    φ(x̄) = 1   if x̄ < 24.001
          = 0   otherwise
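The pair of equations (3.8) and (3.9) is easy to solve numerically, as in the following sketch (not part of the original text; scipy is assumed and the slightly more precise normal quantile 1.2816 is used, so the results differ from the text only in rounding).

from scipy.stats import norm

sigma, theta0, theta1 = 4.0, 25.0, 23.0
z = norm.ppf(0.90)                  # about 1.2816; the text rounds to 1.28

# (c - 25)*sqrt(n)/sigma = -z  and  (c - 23)*sqrt(n)/sigma = +z.
# Subtracting: 2*sqrt(n)/sigma = 2z, so sqrt(n) = z*sigma.
n = (z * sigma) ** 2                # about 26.3, so take n = 26 or 27
c = theta0 - z * sigma / n ** 0.5   # about 24.0

print(round(n, 3), round(c, 3))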
Example 3.10 A sample of size 20 is drawn from a normal distribution with known mean θ and unknown variance σ². The hypothesis to test is H0 : σ² = 5 vs H1 : σ² > 5. Construct the power curve at the 5% level of significance and also find the power at σ² = 10
The pdf of the random sample X1, X2, X3, · · · , Xn from N(θ, σ²) for testing the hypothesis H0 : σ² = σ0² vs H1 : σ² > σ0² is

    pσ(x) = (1/(√(2π) σ))^n e^{−(1/(2σ²)) Σ (xi−θ)²}
          = c(σ) e^{Q(σ) t(x)} h(x)

where c(σ) = (1/(√(2π) σ))^n, Q(σ) = −1/(2σ²) ↑ σ, t(x) = Σ(xi − θ)² and h(x) = 1. This is a one parameter exponential family of distributions. The UMP level α test is

    φ(t) = 1   if t(x) > c
         = 0   otherwise

where c is determined by EH0[φ(X)] = 0.05

i.e., PH0{ Σ_{i=1}^n (Xi − θ)² > c } = 0.05
    PH0{ Σ(Xi − θ)²/σ0² > c/σ0² } = 0.05
    ∫_{c/σ0²}^∞ pn(χ²) dχ² = 0.05                         (3.10)

where pn(χ²) is the pdf of the chi-square distribution with n degrees of freedom. From the χ² table

    ∫_{31.41}^∞ p20(χ²) dχ² = 0.05                        (3.11)

Comparing (3.10) and (3.11) gives c/σ0² = 31.41
∴ c = 5 × 31.41 = 157.05. The UMP level α = 0.05 test is

    φ(t) = 1   if Σ_{i=1}^{20} (xi − θ)² > 157.05
         = 0   otherwise

The power function is

    β(σ) = EH1[φ(T)]
         = PH1{ Σ_{i=1}^{20} (Xi − θ)² > 157.05 }
         = PH1{ Σ_{i=1}^{20} (Xi − θ)²/σ² > 157.05/σ² }
         = ∫_{157.05/σ²}^∞ p20(χ²) dχ²   ∀ σ² ≥ 5

The power of the test at σ² = 10 is β(10) = 0.75. Table 3.4 shows the power function values for different values of σ² ≥ 5

Table 3.4 UMP test power function values

σ²     5.0    5.5    6.0    6.9    8.1    9.7
β(σ)   0.05   0.10   0.20   0.30   0.50   0.70
σ²     10     10.8   12.6   14.5   17.0   19.0
β(σ)   0.75   0.80   0.90   0.95   0.98   0.99

The power curve for testing the hypothesis H0 : σ² = 5 vs H1 : σ² > 5 is shown in Figure 3.3

Figure 3.3 UMP test power curve

Example 3.11 Find the UMP test for testing H0 : θ ≤ θ0 vs H1 : θ > θ0 for the parameter θ of the exponential distribution with density function

    pθ(x) = (1/θ) e^{−x/θ}   if x > 0
          = 0                otherwise

based on a sample of size n with level α. Also find the power function
The joint pdf of the random sample of size n is

    pθ(x) = (1/θ^n) e^{−(1/θ) Σ xi}
          = c(θ) e^{Q(θ) t(x)} h(x)

where c(θ) = 1/θ^n, Q(θ) = −1/θ ↑ θ, t(x) = Σ xi and h(x) = 1. ∴ The UMP test with level α is

    φ(t) = 1   if t > c
         = 0   otherwise

where c is determined by EH0[φ(T)] = α
T = Σ Xi ∼ G(n, θ) and its pdf is

    p(t) = (1/(θ^n Γ(n))) e^{−t/θ} t^{n−1},   0 < t < ∞

If y = 2t/θ, then

    p(y) = (1/(2^n Γ(n))) e^{−y/2} y^{n−1},   0 < y < ∞
Thus p(y) is the pdf of the χ² distribution with 2n degrees of freedom, i.e., Y = 2T/θ ∼ χ²_{2n}.

    PH0{T > c} = Pθ0{ 2T/θ0 > 2c/θ0 } = Pθ0{ Y > 2c/θ0 }
    α = ∫_{2c/θ0}^∞ (1/(2^n Γ(n))) e^{−y/2} y^{n−1} dy     (3.12)

If χα² is the ordinate corresponding to the upper tail area α,

i.e., α = ∫_{χα²}^∞ p2n(χ²) dχ²                            (3.13)

Comparing (3.12) and (3.13) gives 2c/θ0 = χα² ⇒ c = θ0 χα²/2. The UMP test is

    φ(t) = 1   if t > θ0 χα²/2
         = 0   otherwise

The power function is

    β(θ) = EH1[φ(T)]
         = PH1{ T > θ0 χα²/2 }
         = Pθ{ 2T/θ > (θ0/θ) χα² }   ∀ θ ≥ θ0
         = Pθ{ Y > (θ0/θ) χα² }   ∀ θ ≥ θ0
         = ∫_{(θ0/θ) χα²}^∞ (1/(2^n Γ(n))) e^{−y/2} y^{n−1} dy   ∀ θ ≥ θ0

Example 3.12 Let X1, X2, X3, · · · , Xn be a random sample drawn from the normal distribution with mean θ and known variance σ². The hypothesis to test is H0 : θ = θ0 vs H1 : θ ≠ θ0. Show that for any α, 0 < α < 1, there does not exist a UMP level α test
First consider testing H0 : θ = θ0 vs H1 : θ = θ1 (> θ0). As in Example 3.8, the MP level α test is

    φ1(x̄) = 1   if x̄ > c1
           = 0   otherwise

where c1 is determined by

    α = EH0[φ1(X̄)]
      = Pθ0{ (X̄ − θ0)/(σ/√n) > (c1 − θ0)/(σ/√n) }
      = ∫_{(c1−θ0)√n/σ}^∞ p(z) dz                          (3.14)

where Z = (X̄ − θ0)/(σ/√n) ∼ N(0, 1)
From the normal table zα is the upper ordinate corresponding to the area α,

i.e., ∫_{zα}^∞ p(z) dz = α                                 (3.15)

Comparing equations (3.14) and (3.15) gives zα = (c1 − θ0)√n/σ
⇒ c1 = zα σ/√n + θ0. Thus the MP level α test for H0 : θ = θ0 vs H1 : θ = θ1 (> θ0) is

    φ1(x̄) = 1   if x̄ > zα σ/√n + θ0
           = 0   otherwise

The power of the test φ1 is

    βφ1(θ) = EH1[φ1(X̄)]
           = PH1{ X̄ > zα σ/√n + θ0 }
           = Pθ{ (X̄ − θ)/(σ/√n) > zα − (θ − θ0)√n/σ }   ∀ θ ≥ θ0
           = ∫_{zα − (θ−θ0)√n/σ}^∞ p(z) dz   ∀ θ ≥ θ0

The MP level α critical region c1 = {x | x̄ > zα σ/√n + θ0} is the same for every alternative H1 : θ > θ0. ∴ φ1 is the UMP level α test for testing H0 : θ = θ0 vs H1 : θ > θ0. Similarly the MP level α test for testing H0 : θ = θ0 vs H1 : θ = θ2 (< θ0) is given by

    φ2(x̄) = 1   if x̄ < c2
           = 0   otherwise
 0 otherwise

where c2 is determined by α = EH1 [φ2 (X̄)].



 1 if x̄ < −zα √σ + θ0
n
i.e., φ2 (x̄) =
 0 otherwise

The power function of the test φ2 is



n
Z −zα −(θ−θ0 ) σ
βφ2 (θ) = p(z)dz ∀ θ < θ0
−∞

The critical region c2 = {x | x̄ < −zα √σn + θ0 } is same for all


the alternative hypothesis H1 : θ < θ0 .. . φ2 is the UMP level
α test for testing the hypothesis H0 : θ = θ0 vs H1 : θ < θ0
For the hypothesis H0 : θ = θ0 vs H1 : θ = θ1 or θ = θ2 (θ1 6=
θ2 6= θ0 ), the MP level α test is

 1 if x̄ < c3 or x̄ > c4
φ3 (x̄) =
 0 otherwise

where c3 and c4 are determined by

α = EH0 [φ3 (X̄)]

= PH0 {X̄ < c3 or X̄ > c4 }


   
c3 − θ0 c4 − θ0
= P θ0 Z < √ + P θ0 Z > √
σ/ n σ/ n
Considering an equal-tails test,

    α/2 + α/2 = ∫_{−∞}^{(c3−θ0)√n/σ} p(z) dz + ∫_{(c4−θ0)√n/σ}^∞ p(z) dz

Choose α/2 = ∫_{−∞}^{(c3−θ0)√n/σ} p(z) dz                  (3.16)
and  α/2 = ∫_{(c4−θ0)√n/σ}^∞ p(z) dz                        (3.17)

From equation (3.16), −z_{α/2} = (c3 − θ0)√n/σ, where −z_{α/2} is obtained from ∫_{−∞}^{−z_{α/2}} p(z) dz = α/2
⇒ c3 = −z_{α/2} σ/√n + θ0
From equation (3.17), z_{α/2} = (c4 − θ0)√n/σ, where z_{α/2} is obtained from ∫_{z_{α/2}}^∞ p(z) dz = α/2
⇒ c4 = z_{α/2} σ/√n + θ0
The power function for θ ∈ ΩH1 is

    βφ3(θ) = ∫_{−∞}^{−z_{α/2} − (θ−θ0)√n/σ} p(z) dz + ∫_{z_{α/2} − (θ−θ0)√n/σ}^∞ p(z) dz

The critical region for testing the hypothesis H0 : θ = θ0 vs H1 : θ ≠ θ0 is {x | x̄ < −z_{α/2} σ/√n + θ0 or x̄ > z_{α/2} σ/√n + θ0}. This region does not maximize the power simultaneously for all alternative values θ ≠ θ0 under any split α1 + α2 = α. ∴ The UMP level α test does not exist for testing the hypothesis H0 : θ = θ0 vs H1 : θ ≠ θ0
Example 3.13 Let X1, X2, X3, · · · , Xn be a random sample from the pdf of N(θ, σ² = 4). Draw the power curves for the following hypotheses with level α = 0.05:
(i) H0 : θ = 10 vs H1 : θ > 10
(ii) H0 : θ = 10 vs H1 : θ < 10
(iii) H0 : θ = 10 vs H1 : θ ≠ 10
As in Example 3.12, the UMP level α = 0.05 test for the hypothesis H0 : θ = 10 vs H1 : θ > 10 is given by

    φ1(x̄) = 1   if x̄ > 10.66
           = 0   otherwise

Here θ0 = 10, n = 25, σ = 2 and zα = 1.65, i.e., ∫_{1.65}^∞ p(z) dz = 0.05 (using the normal table). The power function of the test of H0 : θ = 10 vs H1 : θ > 10 is

    βφ1(θ) = ∫_{26.65−2.5θ}^∞ p(z) dz   ∀ θ > 10

The power function βφ1 is evaluated for different values θ ≥ 10 and the values are given in Table 3.5

Table 3.5 UMP right tail test power function values

θ        10     10.2   10.4   10.6   10.8   11.0   11.2   11.4   11.6
βφ1(θ)   0.05   0.13   0.26   0.44   0.64   0.80   0.91   0.97   0.99

Figure 3.4 shows the UMP right tail test power curve

Figure 3.4 UMP right tail test power curve

As in Example 3.12, the UMP level α = 0.05 test for the hypothesis H0 : θ = 10 vs H1 : θ < 10 is given by

    φ2(x̄) = 1   if x̄ < 9.34
           = 0   otherwise

Here θ0 = 10, n = 25, σ = 2, α = 0.05 and zα = −1.65, i.e., ∫_{−∞}^{−1.65} p(z) dz = 0.05 (using the normal table). The power function of the test of H0 : θ = 10 vs H1 : θ < 10 is given by

    βφ2(θ) = ∫_{−∞}^{23.35−2.5θ} p(z) dz   ∀ θ < 10

The power function values of φ2 for θ ≤ 10 are given in Table 3.6

Table 3.6 UMP left tail test power function values

θ        10     9.8    9.6    9.4    9.2    9.0    8.8    8.6    8.4
βφ2(θ)   0.05   0.13   0.26   0.44   0.64   0.80   0.91   0.97   0.99

Figure 3.5 shows the UMP left tail test power curve

Figure 3.5 UMP left tail test power curve

As in Example 3.12, the test for the hypothesis H0 : θ = 10 vs H1 : θ ≠ 10 with level α = 0.05 is given by

    φ3(x̄) = 1   if x̄ < 9.216 or x̄ > 10.784
           = 0   otherwise

Here θ0 = 10, α/2 = 0.025, z_{α/2} = 1.96 and σ = 2. The power function is

    βφ3(θ) = ∫_{−∞}^{23.04−2.5θ} p(z) dz + ∫_{26.96−2.5θ}^∞ p(z) dz   ∀ θ ≠ 10

The power function values of the test φ3 are given in Table 3.7

Table 3.7 Two tails (sided) test power function values

θ        10     10.2   10.4   10.6   10.8   11.0   11.2   11.4   11.6
βφ3(θ)   0.05   0.08   0.17   0.32   0.52   0.80   0.85   0.94   0.98
θ        10     9.8    9.6    9.4    9.2    9.0    8.8    8.6    8.4
βφ3(θ)   0.05   0.08   0.17   0.32   0.52   0.80   0.85   0.94   0.98

Figure 3.6 shows the two tails (sided) test power curve

Figure 3.6 Two tails test power curve
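The three power functions of Example 3.13 can be reproduced as below (a minimal sketch, not from the original text; scipy is assumed and function names are illustrative). The slightly more precise normal quantile 1.645 is used, so the values agree with the tables only up to rounding.

from scipy.stats import norm

theta0, sigma, n, alpha = 10.0, 2.0, 25, 0.05
se = sigma / n ** 0.5                       # 0.4

def power_right(theta):                     # H1: theta > 10
    return norm.sf(norm.ppf(1 - alpha) - (theta - theta0) / se)

def power_left(theta):                      # H1: theta < 10
    return norm.cdf(-norm.ppf(1 - alpha) - (theta - theta0) / se)

def power_two_sided(theta):                 # H1: theta != 10
    z = norm.ppf(1 - alpha / 2)
    shift = (theta - theta0) / se
    return norm.cdf(-z - shift) + norm.sf(z - shift)

print(round(power_right(10.8), 2),          # about 0.64 (Table 3.5)
      round(power_left(9.2), 2),            # about 0.64 (Table 3.6)
      round(power_two_sided(10.8), 2))      # about 0.52 (Table 3.7)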

Example 3.14 Sixteen items are put on test and the failure times (in hours) are 13.4, 14.2, 28.8, 29.0, 29.8, 33.0, 37.8, 39.6, 43.4, 49.8, 54.8, 58.2, 67.4, 70.2, 91.2, 44.04. Assume the failure time distribution to be N(θ, σ² = 16). Obtain the UMP level α = 0.05 test for the failure times coming from the normal population with mean failure time H0 : θ ≤ 40 vs H1 : θ > 40
The hypothesis to test is H0 : θ ≤ 40 vs H1 : θ > 40
The joint pdf of the random sample X1, X2, X3, · · · , Xn from N(θ, 16) is

    pθ(x) = (1/(√(2π) σ))^n e^{−(1/(2σ²)) Σ (xi−θ)²}   if −∞ < xi < ∞
          = 0                                           otherwise

As in Example 3.12, the UMP level α = 0.05 test is given by

    φ(x̄) = 1   if x̄ > 41.65
          = 0   otherwise

where α = 0.05, zα = 1.65, σ = 4, n = 16 and θ0 = 40. The sample mean x̄ = 44.04, which is more than 41.65. The hypothesis H0 : θ ≤ 40 is rejected, i.e., the sample belongs to the normal population with mean θ > 40 and σ² = 16. The decision is that the average time to failure is 44.04 hours in the long run with probability 0.99
The power function is

    β(θ) = EH1[φ(X̄)]
         = PH1{X̄ > 41.65}
         = Pθ{ (X̄ − θ)/(σ/√n) > (41.65 − θ)/(σ/√n) }   ∀ θ ≥ 40
         = ∫_{41.65−θ}^∞ p(z) dz   ∀ θ > 40

Table 3.8 shows the UMP test power function values for testing the hypothesis H0 : θ ≤ 40 vs H1 : θ > 40

Table 3.8 UMP test power function values

θ      40.0   40.5   41.0   41.5   42.0   42.5   43.0   43.5   44.04
β(θ)   0.05   0.12   0.26   0.44   0.63   0.80   0.91   0.96   0.99

3.6 Locally Most Powerful Test

For testing the hypothesis H0 : θ ≤ θ0 vs H1 : θ > θ0 or H0 : θ > θ0 vs H1 : θ ≤ θ0, there exists a UMP or UMPU test when the distribution of the random variable X belongs to a one parameter exponential family or has the MLR property. We now look for optimal tests when the distribution of X possesses neither of these properties
Consider the problem of testing H0 : θ ≤ θ0 vs H1 : θ > θ0. Assume that the power function of a test admits a first order continuous derivative

    i.e., βφ′(θ) = ∫ φ(x) (∂pθ(x)/∂θ) dx

Let D be the class of all tests satisfying Eθ0[φ(X)] = α, i.e., D = {φ | Eθ0[φ(X)] = α}
The test φ0 is a LMP test if βφ0′(θ0) ≥ βφ′(θ0) ∀ φ ∈ D
Thus a LMP test maximizes the slope βφ′(θ0) at θ = θ0 among all tests with the same size at θ = θ0
3.7 Optimum Property of the LMP Test

Consider

    βφ0′(θ0) − βφ′(θ0) ≥ 0

i.e., (∂/∂θ) βφ0(θ) |_{θ=θ0} − (∂/∂θ) βφ(θ) |_{θ=θ0} ≥ 0
    (∂/∂θ) [βφ0(θ) − βφ(θ)] |_{θ=θ0} ≥ 0

Let ψ(θ) = βφ0(θ) − βφ(θ); then (∂/∂θ) ψ(θ) |_{θ=θ0} ≥ 0
⇒ there exists an interval (θ0 − ε, θ0 + ε), ε > 0, on which ψ(θ) ↑ θ
⇒ since ψ(θ0) = 0, ψ(θ) ≥ 0, i.e., βφ0(θ) ≥ βφ(θ), ∀ θ ∈ (θ0, θ0 + ε). Thus the LMP test φ0 has maximum power in a neighbourhood of θ0
A LMP test may be found with the help of the NPL. According to this, φ0 is the LMP test at θ = θ0 if

    φ0(x) = 1   if (∂/∂θ) pθ(x) |_{θ=θ0} > k pθ0(x)
          = γ   if (∂/∂θ) pθ(x) |_{θ=θ0} = k pθ0(x)
          = 0   if (∂/∂θ) pθ(x) |_{θ=θ0} < k pθ0(x)

where γ and k are determined by Eθ[φ0(X)] = α for θ ∈ ΩH0, or equivalently

    φ0(x) = 1   if (∂/∂θ) log pθ(x) |_{θ=θ0} > k
          = γ   if (∂/∂θ) log pθ(x) |_{θ=θ0} = k
          = 0   if (∂/∂θ) log pθ(x) |_{θ=θ0} < k
Example 3.15 Find the LMP level α test for testing the hypothesis H0 : θ ≤ 0 vs H1 : θ > 0 of the pdf

    pθ(x) = (1/π) 1/(1 + (x − θ)²)   if −∞ < x < ∞
          = 0                         otherwise

by taking a single sample observation x of X
For θ < θ′, pθ′(x)/pθ(x) = (1 + (x − θ)²)/(1 + (x − θ′)²) → 1 as |x| → ∞, so the ratio is not monotone in x and the MLR property is not satisfied for the Cauchy distribution. The LMP level α test is given by

    φ0(x) = 1   if 2x/(1 + x²) > k
          = 0   otherwise

where (∂/∂θ) log pθ(x) |_{θ=θ0=0} = 2x/(1 + x²) and k is determined by Eθ0[φ0(X)] = α
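For a given α, the constant k of Example 3.15 can be found numerically, as in this sketch (not from the original text; scipy is assumed and α = 0.05 is chosen only for illustration). It uses the fact that, for 0 < k < 1, the region {2x/(1+x²) > k} is the interval between the two roots of kx² − 2x + k = 0.

from scipy.optimize import brentq
from scipy.stats import cauchy

alpha = 0.05

def size(k):
    # Rejection region {x : 2x/(1+x**2) > k} for 0 < k < 1 is (x1, x2),
    # the roots of k*x**2 - 2*x + k = 0; evaluate its probability under
    # theta = 0 (standard Cauchy).
    disc = (1 - k * k) ** 0.5
    x1, x2 = (1 - disc) / k, (1 + disc) / k
    return cauchy.cdf(x2) - cauchy.cdf(x1)

k = brentq(lambda k: size(k) - alpha, 1e-6, 1 - 1e-6)
print(round(k, 4), round(size(k), 4))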
Example 3.16 Construct the LMP level α test, if any, for testing H0 : σ = σ0 vs H1 : σ > σ0 on the basis of n observations drawn from N(θ, σ²) where θ is known.
The joint pdf of the random sample X1, X2, X3, · · · , Xn is

    pσ(x) = (1/(√(2π) σ))^n e^{−(1/(2σ²)) Σ (xi−θ)²}

    log pσ(x) = −(n/2) log 2π − (n/2) log σ² − (1/(2σ²)) Σ (xi − θ)²
    (∂/∂σ) log pσ(x) = −n/σ + (1/σ³) Σ (xi − θ)²

The LMP level α test for testing H0 : σ = σ0 is given by

    φ0(t) = 1   if (1/σ0³) Σ(xi − θ)² − n/σ0 > k
          = 0   otherwise

Let c = σ0³ (k + n/σ0); then the LMP level α test is given by

    φ0(t) = 1   if Σ(xi − θ)² > c
          = 0   otherwise

where c is determined by

    EH0[φ0(T)] = α
    PH0{ Σ(Xi − θ)² > c } = α
    PH0{ Σ(Xi − θ)²/σ0² > c/σ0² } = α
    ∫_{c/σ0²}^∞ pn(χ²) dχ² = α                             (3.18)

and ∫_{χα²}^∞ pn(χ²) dχ² = α (using the χ² table)           (3.19)

Equations (3.18) and (3.19) give c/σ0² = χα² ⇒ c = σ0² χα². The LMP level α test is

    φ0(t) = 1   if Σ(xi − θ)² > σ0² χα²
          = 0   otherwise

Problems

3.1 Stating the results used, obtain the UMP level α test for
testing H0 : θ < θ0 vs H1 : θ > θ0 , using a random sample
of size n from pθ (x) = θe−θx , x > 0, θ > 0

3.2 What is a distribution said to have the MLR property? Also


show that, for such distributions UMP test exists for testing
one sided hypothesis

3.3 What is a family of distribution said to process the MLR


property? Examine whether the rectangular distribution in
(0, θ) possesses this property

3.4 Starting the necessary results to obtain the UMP test for
testing θ ≤ θ0 vs θ > θ0 , using a random sample of size n
from a Poisson distribution with parameter θ

3.5 Show that UMP tests for testing one sided hypothesis for the
family of distributions with MLR property

3.6 Define UMP test, State and establish a sufficient condition for
existence of the UMP test for testing one sided hypothesis
against an one sided alternative in the case of exponential
families

3.7 Show that for a family having MLR property, there exists
UMP test for testing one sided hypothesis against one sided
alternative
3.8 Define the MLR property. Verify whether the following family has the MLR property

    pθ(x) = 2(1 − θ)   if 0 < x < 1/2
          = 2θ         if 1/2 ≤ x < 1, 0 < θ < 1
          = 0          otherwise

    Ans: For θ < θ′ and 0 < θ < 1, pθ′(x)/pθ(x) = (1 − θ′)/(1 − θ) < 1 when 0 < x < 1/2 and pθ′(x)/pθ(x) = θ′/θ > 1 when 1/2 ≤ x < 1, so the ratio is non-decreasing in x and the family has MLR in x.

3.9 When do you say that the real parameter family of densities
pθ (x) to have MLR ? Explain

3.10 If X has a distribution belonging to one parameter exponen-


tial family. Show that there exists a UMP test of level α for
testing H0 : θ ≤ θ0 vs H1 : θ > θ0 .State the property of the
power function of this test
3.11 Let X ∼ pθ(x) = (1/θ) e^{−x/θ}, θ > 0. Obtain the UMP test of size α for testing H0 : θ ≤ 1 vs H1 : θ > 1 based on a single observation. Compute the power function

3.12 Obtain the UMP test, if one such exists, for testing the hy-
1 1
pothesis H0 : θ ≤ 4 vs H1 : θ > 4 using a random sample
from the uniform distribution over (0, 1)
1
3.13 Examine whether a UMP test exists for testing H0 : θ ≤ 2
1
vs H1 : θ > 2 given n independent observations on a random
variable X with pdf



 θ if 0 < x < 1

pθ (x) = 1−θ if 1 < x < 2, 0 < θ < 1



 0 otherwise

Describe the test procedure if it exists, otherwise explain


why it fails to exist

3.14 Show that the family of Cauchy densities does not possess
MLR property

3.15 If pθ (x) is a family with MLR property, show that there exists
a UMP test of H0 : θ ≤ θ0 vs H1 : θ > θ0

3.16 Prove that no UMP test exists for testing H0 : θ = 0 vs


H1 : θ 6= 0 in a normal population N (θ, 1)

3.17 For pθ(x) = θ^x (1 − θ)^{1−x}, x = 0, 1, 0 < θ < 1, find a UMP
level α test for testing H0 : θ ≤ θ0 vs H1 : θ > θ0 based
on n observations

3.18 Let X ∼ pθ(x), where the family has MLR in T(X). Show
that the following test is a UMP level α test for H0 : θ ≤ θ0
vs H1 : θ > θ0



 1 if t(x) > c

φ(t) = γ if t(x) = c



 0 otherwise

3.19 Let X be the life time of a light bulb manufactured by a
certain company. Suppose X has density function
pλ(x) = (1/λ)e^{−x/λ}, x > 0, λ > 0. Find the UMP test of H0 : λ ≤ λ0 vs
H1 : λ > λ0 based on a sample of size n with level α.

3.20 What are LMP tests? When are they preferred? Let
X1, X2, X3, · · · , Xn be a random sample from a distribution
with density

pθ(x) = θ e^{−(x−1)²/2}/√(2π) + (1 − θ) e^{−x²/2}/√(2π),  0 ≤ θ < 1, x ∈ ℜ
Derive the LMP test for testing H0 : θ = 0 vs H1 : θ > 0

3.21 Describe LMP test. How will you derive the same?

3.22 Construct LMP test, if any for testing H0 : σ = 4 vs H1 :


σ > 4 on the basis of 100 observations drawn from N (1, σ 2 )
population

3.23 For testing H0 : µ = 0 vs H1 : µ > 0 on the basis of a random


sample X1 , X2 , X3 , · · · , Xn a proposed test rejects H0 if and

only if the number of positive observation is too large. The


test is
(a) an UMP for the problem
(b) not an UMP but unbiased for the problem
(c) not an UMP but unbiased for the problem
(d) not an UMP and not an unbiased for the problem.
Ans: (a)

3.24 Let Pθ , θ ∈ Ω = {θ0 , θ1 }. Neyman-Pearson MP test is based


on the hypothesis
(a) H0 : θ ≤ θ0 vs H1 : θ > θ0
(b) H0 : θ1 < θ < θ2 vs H1 : θ ≤ θ1 or θ ≥ θ2
(c) H0 : θ = θ0 vs H1 : θ = θ1
(d) H0 : θ ≤ θ1 or θ > θ2 vs H1 : θ1 < θ < θ2    Ans: (c)

3.25 Let X1 , X2 , · · · , Xn be a random sample drawn from Uniform


distribution U (0, θ) for testing the hypothesis H0 : θ < θ0
vs H1 : θ > θ0 .Which of the following statements are true?
(a) The test is an UMP level α test
(b) The family of distributions has the MLR property
(c) The test is a MP level α test
(d) The test is an unbiased test with level α
Ans: (a) , (b) , (c) and (d)

3.26 The hypothesis to be tested is H0 : θ ≤ 0 vs H1 : θ > 0 for
the pdf pθ(x) = (1/π) · 1/[1 + (x − θ)²], −∞ < x < ∞. A sample
observation x of X is drawn from the population. Which of
the following statements is true? The LMP test rejects the
hypothesis H0 when the rejection region is
(a) 2x/(1 + x²) > k (> 0)
(b) 2x/(1 + x²) < k (> 0)
(c) 1/(1 + x²) > k (> 0)
(d) 1/(1 + x²) < k (> 0)    Ans: (a)

3.27 Let X1, X2, X3, · · · , Xn be a random sample drawn from a
normal distribution N(θ, σ²) where θ is known. For testing
the hypothesis H0 : σ² = σ0² vs H1 : σ² > σ0², which of the
following statements is true? The LMP test rejects H0 when
the rejection region is
(a) Σ_{i=1}^{n} (xi − θ)² > c
(b) Σ_{i=1}^{n} (xi − θ)² < c
(c) Σ_{i=1}^{n} (xi² − θ)² > c
(d) Σ_{i=1}^{n} (xi² − θ)² < c    Ans: (a)

3.28 Let a family {Pθ , θ ∈ Ω} admits a sufficient statistic. For


testing the hypothesis H0 vs H1 the tests are functions of
the sufficient statistic. If φ is a test function and T (X) is a
sufficient statistic, then
(a) E[φ(X) | T (X) = t] is a test function
(b) 0 ≤ E[φ(X) | T (x) = t] ≤ 1
(c) Both test function φ(x) and E[φ(X) | T (X) = t] have
the same power
(d) The test function φ(x) and E[φ(X) | T (X) = t] are
different Ans: (a) , (b) and (c)

3.29 For testing H0 : θ ≥ θ0 vs H1 : θ < θ0 and H0 : θ ≤ θ0 vs
H1 : θ > θ0, which of the following statements are true?
(a) A UMP test exists for every family of distributions
(b) A UMP test exists for a family of distributions belonging
to a one parameter exponential family
(c) A UMP test exists for a family of distributions having the MLR
property
(d) An LMP test exists when the test has a power func-
tion which admits a first order continuous derivative
Ans: (b), (c) and (d)

3.30 Let X1, X2, X3, · · · , Xn be a random sample from a distribu-
tion having probability density pθ(x) = θx^{θ−1}, 0 < x < 1
and θ > 0. The set {x1, x2, · · · , xn : Σ_{i=1}^{n} log xi ≥ c}, where
c is a constant, is a UMP critical region for testing H0 vs H1 when
(a) H0 : θ = 1/2 vs H1 : θ > 1
(b) H0 : θ = 1 vs H1 : θ ≥ 4
(c) H0 : θ = 4 vs H1 : θ ≤ 1
(d) H0 : θ = 4 vs H1 : θ ≠ 1    Ans: (a) and (b)

3.31 A data set gave a 95% confidence interval (2.5, 3.6) for the
mean µ of a normal population with known variance. Let
µ0 < 2.5 be a fixed number. If we use the same data to
test H0 : µ = µ0 vs H1 : µ ≠ µ0, which of the following
statements are true?
(a) H0 would be necessarily rejected at α = 0.1
(b) H0 would be necessarily rejected at α = 0.025
(c) For α = 0.1, the information is not enough to draw a
conclusion
(d) For α = 0.025, the information is not enough to draw a
conclusion    Ans: (a) and (d)

3.32 let X1 , X2 , · · · , Xn be a random sample from N (µ, σ 2 ), where


µ and σ 2 are unknown. Consider a problem of testing
H0 : µ = 2 vs H1 : µ > 2. Suppose the observed values
of x1 , x2 , · · · , x7 are 1.2 , 1.3, 1.7, 1.8, 2.1, 2.3, 2.7. If we use
the uniformly most powerful test , which of the following is
true?
(a) H0 is accepted both at 5% and 1% levels of significance
(b) H0 is rejected both at 5% and 1% levels of significance
(c) H0 is rejected at 5% level of significance but accepted at
1% level of significance
(d) H0 is rejected at 1% level of significance but accepted at
5% level of significance Ans: (a)

3.33 Let X1, X2, X3, · · · , Xn be a random sample from

pλ(x) = 2λx e^{−λx²}  if x > 0
        0             otherwise

Here λ > 0 is an unknown parameter. It is desired to test the
hypothesis H0 : λ ≤ 1 vs H1 : λ > 1 at level α. Then which of
the following are true?
(a) UMP test is of the form Σ_{i=1}^{n} xi < cn with cn < cn+1 ∀ n
(b) UMP test is of the form Σ_{i=1}^{n} xi² < dn with dn < dn+1 ∀ n
(c) UMP test is of the form Σ_{i=1}^{n} xi < cn with cn+1 < cn ∀ n
(d) UMP test is of the form Σ_{i=1}^{n} xi² < dn with dn+1 < dn ∀ n    Ans: (b)

3.34 Let X1 , X2 , · · · , Xn be iid N (µ, 1). It is proposed to test


H0 : µ = 0 vs H1 : µ > 0. Let pn(µ, α) denote the power of
the UMP test at µ of size α based on sample size n. Then
which of the following statements are correct?
(a) lim_{n→∞} pn(µ, α) = 1 ∀ µ > 0, ∀ α > 0
(b) lim_{µ→0} pn(µ, α) = α ∀ n ≥ 1, ∀ α > 0
(c) lim_{α→0} pn(µ, α) = 0 ∀ n ≥ 1, ∀ µ > 0
(d) lim_{α→1} pn(µ, α) = 0 ∀ n ≥ 1, ∀ µ > 0
Ans: (a), (b) and (c)
4. APPLICATIONS OF GENERALIZED NEYMAN PEARSON LEMMA

4.1 Introduction

The hypothesis to be tested is H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1. If ΩH0 consists of a finite number of points and ΩH1 has only one point, then the Generalized Neyman Pearson Lemma (GNPL) always gives the MP level α test, whose form is also given by the GNPL. If ΩH0 and ΩH1 are not finite, then there is no general method of constructing the UMP test; in fact a UMP test does not exist in many situations. But for certain two sided hypotheses of the form H0 : θ ≤ θ1 or θ ≥ θ2 (θ1 < θ2), with reference to the one parameter exponential family of distributions, there exists a UMP test.

There is no UMP test for testing the statistical hypotheses H0 : θ1 < θ < θ2 vs H1 : θ < θ1 or θ > θ2 and H0 : θ = θ0 vs H1 : θ ≠ θ0, even for an MLR family. In such cases one restricts attention to a smaller class of tests, such as unbiased tests. For testing these two sided hypotheses, we now state the sufficient condition of the GNPL.

4.2 Generalized Neyman Pearson Lemma

Hypothesis testing is a key component of statistical analysis, and many of the composite hypotheses used in testing can be handled through the GNPL.

Sufficient condition Let p1(x), p2(x), · · · , pm+1(x) be functions defined on a set Ω ⊆ ℜ^{m+1}. Let c1, c2, · · · , cm be constants. Then, for maximizing ∫ φ(x) pm+1(x) dx subject to 0 ≤ φ(x) ≤ 1 and ∫ φ(x) pi(x) dx = ci, i = 1, 2, 3, · · · , m, a sufficient condition is that φ has the form

φ(x) = 1  if pm+1(x) > Σ_{i=1}^{m} ki pi(x)
       0  if pm+1(x) < Σ_{i=1}^{m} ki pi(x)

for some k1, k2, · · · , km such that ∫ φ(x) pi(x) dx = ci, i = 1, 2, · · · , m.
Theorem 4.1 For testing the hypothesis H0 : θ ≤ θ1 or θ ≥ θ2 (θ1 < θ2) vs H1 : θ1 < θ < θ2 in a one parameter exponential family, there exists a UMP test given by

φ(t) = 1   if c1 < t < c2
       γi  if t = ci, i = 1, 2
       0   otherwise

where the c's and γ's are determined by Eθ1[φ(T)] = Eθ2[φ(T)] = α

Proof Restrict attention to tests based on the sufficient statistic T = T(X); for the one parameter exponential family the distribution of T has the form

pθ(t) = c(θ) e^{Q(θ)t} dν(t)

Assume Q(θ) is strictly increasing in θ. Consider first the hypotheses H0′ : θ = θ1 or θ = θ2 (θ1 < θ2) vs H1′ : θ = θ′, where θ′ ∈ (θ1, θ2). By the GNPL, the test function φ for this pair of hypotheses is

φ(t) = 1  if c(θ′) e^{Q(θ′)t} > k1 c(θ1) e^{Q(θ1)t} + k2 c(θ2) e^{Q(θ2)t}
       0  otherwise

The rejection inequality may be written as

1 > [k1 c(θ1)/c(θ′)] e^{[Q(θ1) − Q(θ′)]t} + [k2 c(θ2)/c(θ′)] e^{[Q(θ2) − Q(θ′)]t}
1 > a1 e^{b1 t} + a2 e^{b2 t}

where b1 = Q(θ1) − Q(θ′) < 0 and b2 = Q(θ2) − Q(θ′) > 0

Let g(t) = a1 e^{b1 t} + a2 e^{b2 t}; then the test φ becomes

φ(t) = 1  if g(t) < 1
       0  if g(t) > 1

Case (i) If a1 ≤ 0 and a2 ≤ 0, then g(t) = a1 e^{b1 t} + a2 e^{b2 t} < 1 holds ∀ t. In this case H0 : θ ≤ θ1 or θ ≥ θ2 is always rejected, since the entire sample space is the critical region, which is of no interest for testing the hypothesis.

Case (ii) If a1 ≥ 0 and a2 ≤ 0, then g′(t) = a1 b1 e^{b1 t} + a2 b2 e^{b2 t} ≤ 0 ∀ t, so g(t) is a monotonic decreasing function of t.

∴ The test has a one sided critical region of the form shown in Figure 4.1.

Figure 4.1 Left tail critical region

The test function is

φ(t) = 1  if t < c1
       0  if t > c1

For such a test the power function is monotone in θ and hence the test does not satisfy the two sided conditions

Eθ1[φ(T)] = Eθ2[φ(T)] = α

Thus case (ii) is ruled out.


Case (iii) If a1 ≤ 0 and a2 ≥ 0, then

g′(t) = a1 b1 e^{b1 t} + a2 b2 e^{b2 t} ≥ 0 ∀ t ⇒ g(t) ↑ t

i.e., g(t) is a monotonic increasing function of t.

∴ The test has a one sided critical region of the form shown in Figure 4.2.

Figure 4.2 Right tail critical region

The test function is

φ(t) = 1  if t > c2
       0  if t < c2

The power function is again monotone and hence the test does not satisfy the two sided conditions

Eθ1[φ(T)] = Eθ2[φ(T)] = α

Hence case (iii) is not possible.

Case (iv) Assume a1 ≥ 0 and a2 ≥ 0. For the minimum value of the function g(t),

g′(t) = 0 and g″(t) > 0
i.e., a1 b1 + a2 b2 e^{(b2 − b1)t} = 0
⇒ e^{(b2 − b1)t} = c > 0, where c = −(a1 b1)/(a2 b2)

∴ t = (log c)/(b2 − b1) and g″(t) = a1 b1² e^{b1 t} + a2 b2² e^{b2 t} > 0 ∀ t
Thus g(t) is a convex function in t. It has minimum value
say at t0 = (log c)/(b2 − b1). The test function φ for the hypotheses H0′ : θ = θ1 or θ = θ2 vs H1′ : θ = θ′ ∈ (θ1, θ2) therefore has the form shown in Figure 4.3.

Figure 4.3 Two sided critical region

The test function is

φ(t) = 1   if c1 < t < c2
       γi  if t = ci, i = 1, 2
       0   otherwise

where ci and γi, i = 1, 2 are determined by

Eθ1[φ(T)] = Eθ2[φ(T)] = α

The above test function does not depend on θ′ and hence it is UMP for testing H0 : θ ≤ θ1 or θ ≥ θ2 vs H1′ : θ = θ′ ∈ (θ1, θ2); i.e., the test has a two sided critical region which is the same for all θ′ ∈ (θ1, θ2).
Now, to prove that φ is the UMP test for testing H0 : θ ≤ θ1 or θ ≥ θ2 vs H1 : θ1 < θ < θ2, it is enough to prove that Eθ[φ(T)] is maximum for all θ ∈ ΩH1 subject to Eθ[φ(T)] ≤ α ∀ θ ∈ ΩH0. To do this one repeats the method, choosing θ′ < θ1 and minimizing Eθ′[φ(T)] subject to Eθ1[φ(T)] = α,

i.e., φ(t) = 1  if pθ1(x)/pθ′(x) > k ⇒ t(x) > c2
            0  otherwise

Similarly one minimizes Eθ′[φ(T)] for θ′ > θ2 subject to Eθ2[φ(T)] = α,

i.e., φ(t) = 1  if pθ2(x)/pθ′(x) > k ⇒ t(x) < c1
            0  otherwise

Thus the test function φ minimizes Eθ[φ(T)] for θ < θ1 or θ > θ2 subject to Eθ1[φ(T)] = Eθ2[φ(T)] = α. Consider the test φ*(t) ≡ α and compare it with the test function φ:

Eθ[φ(T)] ≤ Eθ[φ*(T)] = α for θ < θ1
and Eθ[φ(T)] ≤ α for θ > θ2

Hence the test φ is the UMP test for the two sided hypothesis H0 : θ ≤ θ1 or θ ≥ θ2 vs H1 : θ1 < θ < θ2.
Example 4.1 Let X be a random sample drawn from a binomial distribution with parameters θ and n = 2. Derive the UMP level α = 1/27 test for testing the hypothesis H0 : θ ≤ 1/3 or θ ≥ 1/2 vs H1 : 1/3 < θ < 1/2. Also draw the power curve.

The pmf of the random variable X is

pθ(x) = [2!/(x!(2 − x)!)] θ^x (1 − θ)^{2−x},  x = 0, 1, 2
      = [2!/(x!(2 − x)!)] (1 − θ)² [θ/(1 − θ)]^x
      = c(θ) e^{x log(θ/(1−θ))} h(x)
where c(θ) = (1 − θ)², Q(θ) = log[θ/(1 − θ)] and h(x) = 2!/(x!(2 − x)!).

pθ(x) is a one parameter exponential family with sufficient statistic t(x) = x. There exists a UMP level α = 1/27 test φ1 given by

φ1(x) = 1   if c1 < x < c2
        γ1  if x = c1
        γ2  if x = c2
        0   otherwise

where c1, c2 and 0 ≤ γ1, γ2 ≤ 1 are found from Eθ1[φ1(X)] = Eθ2[φ1(X)] = 1/27. By trial and error choose c1 = 1 and c2 = 2, and define the test function φ1 as

φ1(x) = 1   if 1 < x < 2
        γ1  if x = 1
        γ2  if x = 2
        0   otherwise

so that Eθ1=1/3[φ1(X)] = 1/27 and Eθ2=1/2[φ1(X)] = 1/27, i.e.,

γ1 Pθ1=1/3{X = 1} + γ2 Pθ1=1/3{X = 2} = 1/27
and γ1 Pθ2=1/2{X = 1} + γ2 Pθ2=1/2{X = 2} = 1/27

4γ1 + γ2 = 1/3
and 4γ1 + 2γ2 = 8/27

Solving the above two equations gives γ2 = −1/27 < 0 and γ1 = 5/54. Since γ2 < 0, φ1 is not a test function.
Again choose c1 = 0 and c2 = 1, and define φ2 as

φ2(x) = 1   if 0 < x < 1
        γ1  if x = 0
        γ2  if x = 1
        0   otherwise

so that Eθ1=1/3[φ2(X)] = 1/27 and Eθ2=1/2[φ2(X)] = 1/27, i.e.,

γ1 Pθ1=1/3{X = 0} + γ2 Pθ1=1/3{X = 1} = 1/27
and γ1 Pθ2=1/2{X = 0} + γ2 Pθ2=1/2{X = 1} = 1/27

4γ1 + 4γ2 = 1/3
and γ1 + 2γ2 = 4/27

Solving the above two equations gives γ2 = 7/108 and γ1 = 1/54. Since 0 ≤ γ1, γ2 ≤ 1, φ2 is a test function. Thus the UMP level α = 1/27 test is

φ2(x) = 1/54   if x = 0
        7/108  if x = 1
        0      otherwise

The power function of the test φ2 is

βφ2(θ) = Eθ[φ2(X)],  for all θ ∈ (1/3, 1/2)
       = (1/54) Pθ{X = 0} + (7/108) Pθ{X = 1}
       = (1/54)(1 − θ)(1 + 6θ),  for all θ ∈ (1/3, 1/2)

The power function values of the test φ2 are in Table 4.1, and Figure 4.4 shows the UMP two sided test power curve.

Figure 4.4 UMP two sided test power curve

Table 4.1 UMP two sided test power function values


θ 0.33 0.34 0.35 0.36 0.37 0.38
β(θ) 0.0370 0.0372 0.0373 0.0375 0.0376 0.0377
θ 0.39 0.40 0.41 0.42 0.43 0.44
β(θ) 0.0377 0.0378 0.0378 0.0378 0.0378 0.0378
θ 0.45 0.46 0.47 0.48 0.49 0.50
β(θ) 0.0377 0.0376 0.0375 0.0373 0.0372 0.0370
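The constants γ1, γ2 and the power values in Table 4.1 can be checked numerically; a minimal Python sketch (the choice c1 = 0, c2 = 1 and the level 1/27 are those of the example):

import numpy as np
from scipy.stats import binom

alpha, theta1, theta2, n = 1/27, 1/3, 1/2, 2

# gamma1*P{X=0} + gamma2*P{X=1} = alpha at both boundary points theta1, theta2
A = np.array([[binom.pmf(0, n, theta1), binom.pmf(1, n, theta1)],
              [binom.pmf(0, n, theta2), binom.pmf(1, n, theta2)]])
gamma1, gamma2 = np.linalg.solve(A, [alpha, alpha])
print(gamma1, gamma2)          # 1/54 and 7/108

def power(theta):
    return gamma1*binom.pmf(0, n, theta) + gamma2*binom.pmf(1, n, theta)

print(round(power(0.40), 4))   # about 0.0378, matching Table 4.1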

4.3 Unbiased Test

Let H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1 . The test is said to be an


unbiased test , if

• Eθ [φ(X)] ≤ α, θ ∈ ΩH0

• Eθ [φ(X)] ≥ α, θ ∈ ΩH1

Fact The test obtained from NPL is an unbiased test.


Example 4.2 Show that UMP level α test is always an un-
biased test
Let φ be the level α UMP test for testing
H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1
Suppose φ? (x) ≡ α . Compare the test φ with test φ?
⇒ EH1 [φ(X)] ≥ EH1 [φ? (X)]

i.e., β ≥ EH1 [φ? (X)] ∀ θ ∈ ΩH1


β ≥ EH1 [α] since φ? ≡ α
β≥α
⇒ the test φ is an unbiased test
Example 4.3 Under H0 : X ∼ p(x) where

p(x) = 1/4  if 0 ≤ x < 1/2
       7/4  if 1/2 ≤ x < 1
       0    otherwise

and under H1 : X ∼ g(x) where

g(x) = θ x^{θ−1}  if 0 < x < 1, θ > 0
       0          otherwise

Considering a single sample observation x on X, reject H0 if x < 1/10 or x > 9/10; otherwise accept H0. Examine whether the test is unbiased.

The test function φ is

φ(x) = 1  if x < 1/10 or x > 9/10
       0  otherwise

α = EH0[φ(X)]
  = PH0{X < 1/10 or X > 9/10}
  = ∫_0^{1/10} p(x) dx + ∫_{9/10}^{1} p(x) dx
  = ∫_0^{1/10} (1/4) dx + ∫_{9/10}^{1} (7/4) dx
  = 1/40 + 7/40 = 0.20

β(θ) = EH1[φ(X)]
     = ∫_0^{1/10} g(x) dx + ∫_{9/10}^{1} g(x) dx
     = ∫_0^{1/10} θ x^{θ−1} dx + ∫_{9/10}^{1} θ x^{θ−1} dx
     = (1/10)^θ + 1 − (9/10)^θ

At θ = 1 this equals 0.20 = α. The power exceeds α for θ < 1 and for θ ≥ 2, but for 1 < θ < 2 it falls below α (for instance β(1.5) ≈ 0.178 < 0.20). Since β(θ) ≥ α fails for some θ in the alternative, the test φ is not an unbiased test against the whole family H1.
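A quick numerical check of the power values quoted above (a minimal Python sketch; the listed θ values are illustrative):

for theta in (0.5, 1.0, 1.5, 2.0, 3.0):
    beta = 0.1**theta + 1 - 0.9**theta   # beta(theta) = 0.1^theta + 1 - 0.9^theta
    print(theta, round(beta, 4))
# 0.5 -> 0.3675, 1.0 -> 0.2000, 1.5 -> 0.1778, 2.0 -> 0.2000, 3.0 -> 0.2720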


Example 4.4 The hypothesis to be tested is H0 : θ ≤ 1/2 vs H1 : θ > 1/2, using a single observation x on X whose density is

pθ(x) = 2x     if 0 < x < θ
        1 + θ  if θ ≤ x < 1
        0      otherwise

The test φ used is φ(x) = 1 if x > c, where c is a constant selected such that the test has size α = 0.1. Examine whether the test is unbiased.

The test φ is

φ(x) = 1  if x > c
       0  otherwise

where c is determined from the size condition; the size is attained at the boundary point θ = 1/2, since for θ ≤ 1/2 < c, Pθ{X > c} = (1 + θ)(1 − c) is increasing in θ. Thus

α = EH0[φ(X)] = ∫_c^{1} (1 + 1/2) dx
0.10 = (3/2)(1 − c) ⇒ c = 14/15

The test φ is

φ(x) = 1  if x > 14/15
       0  otherwise

β(θ) = EH1[φ(X)] = ∫_{14/15}^{1} (1 + θ) dx = (1 + θ)/15 > 0.1 ∀ θ > 1/2

Thus the test φ is an unbiased test


Example 4.5 An observation x on X is available with pdf

pθ(x) = (1/θ) e^{−x/θ},  θ > 0, x > 0

for testing H0 : θ ≤ 1 vs H1 : θ > 1. Examine whether the test which rejects H0 when x > c, and otherwise accepts H0, is unbiased, where c is such that the test has a given size α.

The test function is given by

φ(x) = 1  if x > c
       0  otherwise

The constant c is obtained from the size condition (attained at θ = 1, since Pθ{X > c} = e^{−c/θ} increases with θ):

α = EH0[φ(X)] = PH0{X > c} = ∫_c^{∞} e^{−x} dx = e^{−c} ⇒ c = −loge α

Thus the test function φ is

φ(x) = 1  if x > −loge α
       0  otherwise

The power of the test is

β(θ) = EH1[φ(X)] = PH1{X > −loge α}
     = ∫_{−loge α}^{∞} (1/θ) e^{−x/θ} dx  ∀ θ > 1
     = e^{(loge α)/θ} = α^{1/θ} ≥ α  ∀ θ ≥ 1

Thus the test is unbiased
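A minimal Python sketch of the size and power computation above (α and the θ values are assumed numbers for illustration):

import math

alpha = 0.05
c = -math.log(alpha)
for theta in (1.0, 1.5, 2.0, 4.0):
    power = math.exp(-c / theta)   # P_theta{X > c} = exp(-c/theta) = alpha**(1/theta)
    print(theta, round(power, 4))
# 1.0 -> 0.05, 1.5 -> 0.1357, 2.0 -> 0.2236, 4.0 -> 0.4729; always >= alpha for theta >= 1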

4.4 Uniformly Most Powerful Unbiased Test

Let Uα be the class of all level α unbiased tests, i.e., Uα = {φ* | Eθ[φ*(X)] ≤ α ∀ θ ∈ ΩH0 and Eθ[φ*(X)] ≥ α ∀ θ ∈ ΩH1}. A test φ ∈ Uα is called a Uniformly Most Powerful Unbiased (UMPU) test for testing the hypothesis H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1, if
(i) Eθ[φ(X)] ≤ α ∀ θ ∈ ΩH0
(ii) Eθ[φ(X)] ≥ Eθ[φ*(X)] ∀ θ ∈ ΩH1 and ∀ φ* ∈ Uα
i.e., the UMPU test φ has maximum power among the tests in Uα.

Theorem 4.2 For testing the hypothesis H0 : θ = θ0 vs H1 : θ ≠ θ0 in a one parameter exponential family, there exists a UMPU test given by

φ(t) = 1   if t < c1 or t > c2
       γi  if t = ci, i = 1, 2
       0   otherwise

where c1, c2 and 0 ≤ γ1, γ2 ≤ 1 are determined by
(i) Eθ0[φ(T)] = α
(ii) Eθ0[Tφ(T)] = α Eθ0[T]
Proof Restrict attention to tests based on the sufficient statistic T(X) of the one parameter exponential family, whose density is taken in the form

pθ(x) = c(θ) e^{Q(θ)t} h(x)

where Q(θ) is a monotonic function of θ; without loss of generality take Q(θ) = θ. For any test function φ related to the exponential family, the power function βφ(θ) = Eθ[φ(X)] is differentiable, and differentiation can be carried out under the integral sign.

Let Uα = {φ | Eθ0[φ(X)] = α, Eθ[φ(X)] ≥ α ∀ θ ≠ θ0} be the class of all level α unbiased tests. The problem is to maximize βφ(θ) = Eθ[φ(X)] uniformly in θ ≠ θ0 within the class Uα.

Now φ ∈ Uα ⇒ (iii) βφ(θ0) = α and (iv) βφ′(θ0) = 0.

Consider the class Dα = {φ | φ satisfies (iii) and (iv)}. Then Uα ⊂ Dα. For any θ′ ≠ θ0, maximize Eθ′[φ(X)] in Dα, i.e., maximize βφ(θ′) subject to the conditions (iii) and (iv).

Condition (iv) ⇒

(d/dθ)[∫ φ(x) c(θ) e^{θt} h(x) dx] |θ=θ0 = 0
∫ φ(x)[c′(θ0) e^{θ0 t} h(x) + c(θ0) t e^{θ0 t} h(x)] dx = 0
[c′(θ0)/c(θ0)] Eθ0[φ(X)] + Eθ0[Tφ(X)] = 0

Applying this to the trivial test ψ(t) ≡ α, whose power function is constant and so satisfies (iii) and (iv), gives

[c′(θ0)/c(θ0)] α + α Eθ0[T] = 0 ⇒ Eθ0[T] = −c′(θ0)/c(θ0)

Hence, for any test ψ satisfying (iii) and (iv), with φ(x) = ψ(t),

Eθ0[Tψ(T)] = −[c′(θ0)/c(θ0)] Eθ0[ψ(T)] = Eθ0[T] Eθ0[ψ(T)] = α Eθ0[T]

where Eθ0[ψ(T)] = α. Thus the class Dα becomes

Dα = {ψ | Eθ0[ψ(T)] = α and Eθ0[Tψ(T)] = α Eθ0[T]}

and Uα ⊂ Dα. Now find the test which maximizes Eθ′[ψ(T)] in Dα; since it will turn out not to depend on θ′ and to be unbiased, it will be the required UMPU test. For θ′ ≠ θ0, Eθ′[ψ(T)] is maximized in Dα, by the GNPL, by

ψ(t) = 1  if pθ′(t) > k1 pθ0(t) + k2 t pθ0(t)
       0  otherwise

where k1 and k2 are found so that ψ(t) ∈ Dα. Consider

c(θ′) e^{θ′t} > k1 c(θ0) e^{θ0 t} + k2 t c(θ0) e^{θ0 t}
⇒ [c(θ′)/c(θ0)] e^{(θ′−θ0)t} > k1 + k2 t
⇒ a e^{bt} > k1 + k2 t,  where b = θ′ − θ0 and a = c(θ′)/c(θ0)

One can show that the inequality aebt > k1 + tk2 implies t
must be either outside certain interval as in Figure 4.5 and
Applications of GNPL 131

4.6 ( since the straight line has two points of intersection) or


t must be lie in one sided interval as in Figures 4.7 and 4.8(
since the straight line has only one point of intersection)

y = aebt , b < 0
y
@ y = k1 + k2 t
@
@
@
@

c1 c2 t
t∈
/ (c1 , c2 )
Figure 4.5 Two sided critical regions

y = aebt , b > 0
y
y = k1 + k2 t

c1 c2
t∈/ (c1 , c2 ) t
Figure 4.6 Two sided critical regions

The test function is


 1 if t < c1 or t > c2
ψ(t) =
 0 otherwise
Applications of GNPL 132

y = aebt , b < 0
y y = k1 + k2 t

c2 t > c2 t
Figure 4.7 One sided right tail critical region

y = aebt , b > 0
y = k1 + k2 t

t < c1 c1 t
Figure 4.8 One sided left tail critical region

The test function is



 1 if t < c1
ψ(t) =
 0 otherwise

or

 1 if t > c2
ψ(t) =
 0 otherwise
If there is only one point of intersection, the critical region will be either the left tail or the right tail of the distribution of t. But a single tail test leads to a monotonic power curve by virtue of the NPL, which contradicts the property of unbiasedness: unbiasedness requires that β(θ) has a turning point (a minimum) at θ = θ0. Thus the one sided tests are ruled out, and a two tailed test results:

ψ(t) = 1   if t < c1 or t > c2
       γi  if t = ci, i = 1, 2
       0   otherwise

where c1, c2, γ1 and γ2 are determined by (i) and (ii). Since ψ(t) is unbiased and does not depend on θ′, it is the UMPU test for testing H0 : θ = θ0 vs H1 : θ ≠ θ0.
Example 4.6 Obtain the UMPU level α test for testing the hypothesis H0 : θ = θ0 vs H1 : θ ≠ θ0 based on b(n, θ).

The pmf of b(n, θ) is

pθ(x) = [n!/(x!(n − x)!)] θ^x (1 − θ)^{n−x},  x = 0, 1, 2, · · · , n

It is a one parameter exponential family. The UMPU level α test is given by

φ(x) = 1   if x < c1 or x > c2
       γ1  if x = c1
       γ2  if x = c2
       0   otherwise

where c1, c2, γ1 and γ2 are determined by

EH0[φ(X)] = α    (4.1)
and EH0[Xφ(X)] = α EH0[X]    (4.2)


From equation (4.1),

α = PH0{X < c1 or X > c2} + γ1 PH0{X = c1} + γ2 PH0{X = c2}
  = Σ_{x=0}^{c1−1} C(n, x) θ0^x (1 − θ0)^{n−x} + Σ_{x=c2+1}^{n} C(n, x) θ0^x (1 − θ0)^{n−x}
    + Σ_{i=1}^{2} γi C(n, ci) θ0^{ci} (1 − θ0)^{n−ci}

so that

1 − α = Σ_{x=c1+1}^{c2−1} C(n, x) θ0^x (1 − θ0)^{n−x}
        + Σ_{i=1}^{2} (1 − γi) C(n, ci) θ0^{ci} (1 − θ0)^{n−ci}    (4.3)

From equation (4.2), using EH0[X] = nθ0,

α nθ0 = Σ_{x=0}^{c1−1} x C(n, x) θ0^x (1 − θ0)^{n−x} + Σ_{x=c2+1}^{n} x C(n, x) θ0^x (1 − θ0)^{n−x}
        + γ1 c1 C(n, c1) θ0^{c1} (1 − θ0)^{n−c1} + γ2 c2 C(n, c2) θ0^{c2} (1 − θ0)^{n−c2}

Again consider

x C(n, x) θ^x (1 − θ)^{n−x} = [n(n − 1)!/((x − 1)!(n − x)!)] θ^x (1 − θ)^{n−x}
                            = nθ C(n − 1, x − 1) θ^{x−1} (1 − θ)^{(n−1)−(x−1)}

Dividing throughout by nθ0,

α = Σ_{x=0}^{c1−1} C(n − 1, x − 1) θ0^{x−1} (1 − θ0)^{n−x} + Σ_{x=c2+1}^{n} C(n − 1, x − 1) θ0^{x−1} (1 − θ0)^{n−x}
    + γ1 C(n − 1, c1 − 1) θ0^{c1−1} (1 − θ0)^{(n−1)−(c1−1)} + γ2 C(n − 1, c2 − 1) θ0^{c2−1} (1 − θ0)^{(n−1)−(c2−1)}

equivalently

1 − α = Σ_{x=c1+1}^{c2−1} C(n − 1, x − 1) θ0^{x−1} (1 − θ0)^{n−x}
        + Σ_{i=1}^{2} (1 − γi) C(n − 1, ci − 1) θ0^{ci−1} (1 − θ0)^{(n−1)−(ci−1)}    (4.4)

(Here C(n, x) denotes the binomial coefficient n!/(x!(n − x)!).)

Solving the equations (4.3) and (4.4) for c1, c2, γ1 and γ2, the UMPU level α test can be obtained. If the distribution is symmetric, then equation (4.1) is equivalent to

PH0{X < c1} + γ1 PH0{X = c1} = α/2
PH0{X > c2} + γ2 PH0{X = c2} = α/2

Solving these equations, the UMPU level α test can be easily obtained.
obtained
Example 4.7 Construct the UMPU level 0.05 test for testing the hypotheses H0 : θ = 1/2 vs H1 : θ ≠ 1/2 based on b(8, θ). Also draw the power curve.

As in Example 4.6, the UMPU level 0.05 test is given by

φ(x) = 1   if x < c1 or x > c2
       γi  if x = ci, i = 1, 2
       0   otherwise

where c1, c2, γ1 and γ2 are determined by EH0[φ(X)] = 0.05; since the distribution is symmetric under H0, the two tails can be treated separately. By trial and error choose c1 = 1 and c2 = 7, giving the test function

φ(x) = 1   if x < 1 or x > 7
       γ1  if x = 1
       γ2  if x = 7
       0   otherwise

γ1 and γ2 are obtained by solving the following two equations

PH0{X < 1} + γ1 PH0{X = 1} = 0.025
PH0{X > 7} + γ2 PH0{X = 7} = 0.025

i.e., PH0{X = 0} + γ1 PH0{X = 1} = 0.025
1/2^8 + γ1 (8/2^8) = 0.025 ⇒ γ1 = 0.675

and PH0{X = 8} + γ2 PH0{X = 7} = 0.025
1/2^8 + γ2 (8/2^8) = 0.025 ⇒ γ2 = 0.675

The UMPU level 0.05 test is

φ(x) = 1      if x < 1 or x > 7
       0.675  if x = 1
       0.675  if x = 7
       0      otherwise

The power function of the test φ is

β(θ) = Eθ[φ(X)]
     = Pθ{X = 0} + Pθ{X = 8} + 0.675 Pθ{X = 1} + 0.675 Pθ{X = 7}
     = θ^8 + (1 − θ)^8 + 0.675[8θ(1 − θ)^7 + 8θ^7(1 − θ)],  θ ∈ [0, 1]

Table 4.2 shows the UMPU test power function values. Figure 4.9 is the UMPU test power curve.

Table 4.2 UMPU test power function values

θ     0.1    0.2    0.3    0.4    0.5
β(θ)  0.6887 0.3943 0.1915 0.0832 0.05
θ     0.6    0.7    0.8    0.9    1.0
β(θ)  0.0832 0.1915 0.3943 0.6887 1.00

Figure 4.9 UMPU test power curve
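The randomization constant and the entries of Table 4.2 can be verified with a minimal Python sketch (n = 8, θ0 = 1/2 and α = 0.05 as in the example):

from scipy.stats import binom

n, theta0, alpha = 8, 0.5, 0.05
gamma = (alpha/2 - binom.pmf(0, n, theta0)) / binom.pmf(1, n, theta0)
print(gamma)                                # 0.675

def power(theta):
    return (binom.pmf(0, n, theta) + binom.pmf(8, n, theta)
            + gamma*(binom.pmf(1, n, theta) + binom.pmf(7, n, theta)))

for theta in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(theta, round(power(theta), 4))    # matches Table 4.2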
Example 4.8 Obtain the UMPU level α test for testing the hypothesis H0 : θ = θ0 vs H1 : θ ≠ θ0 based on a normal distribution with mean θ and known variance σ², for a sample of size n.

The joint pdf of the random sample X1, X2, · · · , Xn is

pθ(x) = [1/(√(2π)σ)]^n e^{−(1/2σ²)Σxi² − (n/2σ²)θ² + (θ/σ²) n x̄}
      = c(θ) e^{Q(θ)t} h(x)

where c(θ) = e^{−nθ²/(2σ²)}, Q(θ) = nθ/σ², t = x̄ and h(x) = [1/(√(2π)σ)]^n e^{−(1/2σ²)Σxi²}. Thus pθ(x) is a one parameter exponential family. The UMPU level α test is

φ(t) = 1  if t < c1 or t > c2
       0  otherwise

where c1 and c2 are determined by

EH0[φ(T)] = α    (4.5)
and EH0[Tφ(T)] = α EH0[T]    (4.6)

From equation (4.5),

PH0{T < c1 or T > c2} = α

From equation (4.6), writing p(t) for the density of T = X̄ under H0 (which is symmetric about θ0),

∫_{−∞}^{c1} t p(t) dt + ∫_{c2}^{∞} t p(t) dt = α ∫_{−∞}^{∞} t p(t) dt = α θ0

Subtracting θ0 times (4.5) from this equation gives

∫_{−∞}^{c1} (t − θ0) p(t) dt + ∫_{c2}^{∞} (t − θ0) p(t) dt = α(θ0 − θ0) = 0

Putting u = t − θ0 and using the symmetry of the density of U about 0,

∫_{−∞}^{c1−θ0} u p(θ0 + u) du = −∫_{c2−θ0}^{∞} u p(θ0 + u) du = ∫_{−∞}^{−(c2−θ0)} u p(θ0 + u) du

∴ c1 − θ0 = −(c2 − θ0) ⇒ c2 = 2θ0 − c1. The UMPU level α test is given by

φ(t) = 1  if t < c1 or t > 2θ0 − c1
       0  otherwise

Example 4.9 Draw the power curve for testing the hypothesis H0 : θ = 2 vs H1 : θ ≠ 2 with level α = 0.05, based on N(θ, 1) and a sample of size n = 25.

As in Example 4.8, the UMPU level 0.05 test is given by

φ(x̄) = 1  if x̄ < c1 or x̄ > 2θ0 − c1
       0  otherwise

Here t = x̄, θ0 = 2, n = 25, σ² = 1 and c1 is determined by

Pθ0{X̄ < c1} = 0.025
i.e., Pθ0{(X̄ − θ0)/(σ/√n) < (c1 − θ0)/(σ/√n)} = 0.025
∫_{−∞}^{(c1−2)5} p(z) dz = 0.025

From the normal table ∫_{−∞}^{−1.96} p(z) dz = 0.025, so (c1 − 2)5 = −1.96, i.e., c1 = 1.608 and c2 = 2θ0 − c1 = 2.392. Thus the UMPU level 0.05 test is

φ(x̄) = 1  if x̄ < 1.608 or x̄ > 2.392
       0  otherwise

The power function of the test φ is

β(θ) = Pθ{X̄ < 1.608 or X̄ > 2.392}
     = ∫_{−∞}^{(1.608−θ)5} p(z) dz + ∫_{(2.392−θ)5}^{∞} p(z) dz  ∀ θ ≠ θ0

Table 4.3 shows the UMPU test power function values. Figure 4.10 visualizes the UMPU test power curve.

Table 4.3 UMPU test power function values

θ     1.0    1.2    1.4    1.6    1.8    2.0
β(θ)  0.9988 0.9793 0.8508 0.5160 0.1701 0.05
θ     2.2    2.4    2.6    2.8    3.0    -
β(θ)  0.1701 0.5160 0.8508 0.9793 0.9988 -

Figure 4.10 UMPU test power curve
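The cut off points and the power values in Table 4.3 can be reproduced with a minimal Python sketch (n = 25, θ0 = 2, α = 0.05 as in the example):

from scipy.stats import norm

n, theta0, alpha = 25, 2.0, 0.05
c1 = theta0 - norm.ppf(1 - alpha/2) / n**0.5
c2 = 2*theta0 - c1
print(round(c1, 3), round(c2, 3))          # 1.608 and 2.392

def power(theta):
    se = 1 / n**0.5
    return norm.cdf((c1 - theta)/se) + norm.sf((c2 - theta)/se)

for theta in (1.0, 1.4, 1.8, 2.0, 2.4, 3.0):
    print(theta, round(power(theta), 4))   # matches Table 4.3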

Example 4.10 Obtain the UMPU level α test for testing H0 : σ² = σ0² vs H1 : σ² ≠ σ0² based on N(0, σ²), taking a sample of size n.

The joint pdf of the random sample X1, X2, X3, · · · , Xn is given by

pσ(x) = [1/(√(2π)σ)]^n e^{−(1/2σ²)Σxi²}
      = c(σ) e^{Q(σ)t} h(x)

where c(σ) = [1/(√(2π)σ)]^n, Q(σ) = −1/(2σ²), t = Σxi² and h(x) = 1. It is a one parameter exponential family. The UMPU level α test is given by

φ(t) = 1  if t < c1 or t > c2
       0  otherwise

where c1 and c2 are determined by

EH0[φ(T)] = α    (4.7)
and EH0[Tφ(T)] = α EH0[T]    (4.8)

Let Y = ΣXi²/σ0² ∼ χ² with n degrees of freedom under H0, with density

pn(y) = [1/(2^{n/2} Γ(n/2))] e^{−y/2} y^{n/2 − 1}

Then

y pn(y) = [1/(2^{n/2} Γ(n/2))] e^{−y/2} y^{n/2 + 1 − 1}
n p_{n+2}(y) = n [1/(2^{(n+2)/2} Γ((n+2)/2))] e^{−y/2} y^{(n+2)/2 − 1}
             = [1/(2^{n/2} Γ(n/2))] e^{−y/2} y^{n/2 + 1 − 1}
             = y pn(y)

Expressing the conditions in terms of y (so that c1 and c2 now denote cut off points for Y):

From (4.7): ∫_0^{c1} pn(y) dy + ∫_{c2}^{∞} pn(y) dy = α
⇒ ∫_{c1}^{c2} pn(y) dy = 1 − α    (4.9)

From (4.8): ∫_0^{c1} y pn(y) dy + ∫_{c2}^{∞} y pn(y) dy = α ∫_0^{∞} y pn(y) dy
Since ∫_0^{∞} y pn(y) dy = n and y pn(y) = n p_{n+2}(y),
⇒ ∫_0^{c1} n p_{n+2}(y) dy + ∫_{c2}^{∞} n p_{n+2}(y) dy = αn
∴ ∫_{c1}^{c2} p_{n+2}(y) dy = 1 − α    (4.10)

Solving (4.9) and (4.10) for c1 and c2, the UMPU level α test can
be obtained
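Equations (4.9) and (4.10) do not have a closed form solution, but they can be solved numerically; a minimal Python sketch (n and α are assumed values, and the equal-tail cut offs are used only as a starting guess):

from scipy.stats import chi2
from scipy.optimize import fsolve

n, alpha = 10, 0.05

def equations(c):
    c1, c2 = c
    return [chi2.cdf(c2, n) - chi2.cdf(c1, n) - (1 - alpha),       # (4.9)
            chi2.cdf(c2, n + 2) - chi2.cdf(c1, n + 2) - (1 - alpha)]  # (4.10)

c0 = [chi2.ppf(alpha/2, n), chi2.ppf(1 - alpha/2, n)]   # starting guess
c1, c2 = fsolve(equations, c0)
print(round(c1, 3), round(c2, 3))
# reject H0 when sum(x_i**2)/sigma0**2 < c1 or > c2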

4.5 Locally Most Powerful Unbiased Test

Consider the problem of testing the statistical hypothesis H0 : θ = θ0 vs H1 : θ ≠ θ0. To find the Locally Most Powerful Unbiased (LMPU) test, assume the distribution of the observed random variable X is such that the power function βφ(θ) = Eθ[φ(X)] admits continuous derivatives up to order two. Thus

βφ′(θ) = ∫ φ(x) (∂/∂θ) pθ(x) dx
βφ″(θ) = ∫ φ(x) (∂²/∂θ²) pθ(x) dx

A test φ0 is said to be the LMPU level α test for testing H0 : θ = θ0 vs H1 : θ ≠ θ0 if, among all unbiased tests φ satisfying βφ(θ0) = α and βφ′(θ0) = 0, the test φ0 maximizes the second order derivative at θ = θ0,

i.e., βφ0″(θ0) ≥ βφ″(θ0)

An LMPU test may be found by using the GNPL. The LMPU test φ0 at θ = θ0 is

φ0(x) = 1  if (∂²/∂θ²) pθ(x) > k1 pθ(x) + k2 (∂/∂θ) pθ(x)
        γ  if (∂²/∂θ²) pθ(x) = k1 pθ(x) + k2 (∂/∂θ) pθ(x)
        0  if (∂²/∂θ²) pθ(x) < k1 pθ(x) + k2 (∂/∂θ) pθ(x)

where the derivatives are evaluated at θ = θ0, and k1, k2 and γ are determined by βφ0(θ0) = Eθ0[φ0(X)] = α and βφ0′(θ0) = 0. If pθ(x) > 0, then

(∂/∂θ) log pθ(x) = [1/pθ(x)] (∂/∂θ) pθ(x)
(∂²/∂θ²) log pθ(x) = [1/pθ(x)] (∂²/∂θ²) pθ(x) − [1/pθ²(x)] [(∂/∂θ) pθ(x)]²
[1/pθ(x)] (∂²/∂θ²) pθ(x) = (∂²/∂θ²) log pθ(x) + [(∂/∂θ) log pθ(x)]²

∴ Dividing through by pθ(x), the LMPU test φ0 at θ = θ0 is given by

φ0(x) = 1  if (∂²/∂θ²) log pθ(x) + [(∂/∂θ) log pθ(x)]² > k1 + k2 (∂/∂θ) log pθ(x)
        γ  if (∂²/∂θ²) log pθ(x) + [(∂/∂θ) log pθ(x)]² = k1 + k2 (∂/∂θ) log pθ(x)
        0  if (∂²/∂θ²) log pθ(x) + [(∂/∂θ) log pθ(x)]² < k1 + k2 (∂/∂θ) log pθ(x)

with all derivatives evaluated at θ = θ0.

Problems

Problems

4.1 For one parameter family with parameter θ, derive


UMPU level α test for testing the statistical hypoth-
esis H0 : θ = θ0 vs H1 : θ 6= θ0

4.2 Define unbiasedness of a test. Why is it considered to
be a desirable property of a test? Also show that the MP
test given by the Neyman - Pearson lemma for testing
a simple hypothesis vs a simple alternative is unbiased

4.3 Derive an UMPU level α test for testing the statistical


hypothesis H0 : θ1 ≤ θ ≤ θ2 vs H1 : θ ∈
/ [θ1 , θ2 ] us-
ing a random sample from N (θ, σ 2 ), σ 2 known. What
changes are to be made if θ1 = θ2 ?

4.4 Construct MP level α test, if any for testing the statis-


tical hypothesis H0 : σ = 4 vs H1 : σ 6= 4 on the basis
of 100 observation drawn from N (1, σ 2 ) population

4.5 Stating the necessary results to obtain the UMP level α


for testing H0 : θ ∈
/ (a, b), a < b vs H1 : θ ∈ (a, b), using
a random sample from N (θ, σ 2 ) where σ 2 is known

4.6 What is a UMPU test? Examine whether a UMPU test
exists for testing H0 : θ ∈ (a, b), a < b vs H1 : θ ∉ [a, b]
in the case of the one parameter exponential family.
Obtain the same if it exists

4.7 For testing the hypothesis H0 : θ ≤ θ1 or θ ≥ θ2 vs


H1 : θ1 < θ < θ2 in the one parameter exponential
family, find the UMP level α test

4.8 Derive UMPU level α test for testing H0 : θ = θ0 vs


H1 : θ 6= θ0 on the basis of a random sample of size n
drawn from N (θ, σ 2 ) where σ 2 is known

4.9 Examine whether a UMP test exists for testing the
hypothesis H0 : θ ≤ θ0 or θ ≥ θ1 vs H1 : θ0 < θ < θ1
using a random sample from N(θ, 1). Find the test,
taking α = 5%, if your answer is "yes"

4.10 State the Neyman - Pearson Generalized Lemma. Also


explain in detail any one of its application

4.11 When is a test said to be unbiased? For testing the
hypothesis H0 : θ ≤ 1 vs H1 : θ > 1, the test function
used is

φ(x(n)) = 1  if x(n) > (1 − α)^{1/n}
          0  otherwise

Here X(n) is the largest observation in a random sample
of size n drawn from a uniform distribution over (0, θ).
Is this test unbiased?

4.12 Obtain the UMP test, if one such exists, for testing
H0 : θ ≤ 1/4 vs H1 : θ > 1/4 using a random sample from
the uniform distribution over (0, θ)

4.13 Explain how you would obtain the constants of the


UMPU level α test for testing H0 : θ1 ≤ θ ≤ θ2 vs
H1 : θ < θ1 or θ > θ2 in case of a binomial distribution

4.14 Derive the UMPU level α test for testing the hypoth-
esis equality of variances of two independent normal
distributions

4.15 Let pθ(x) be the pdf of a one parameter exponential family
having the MLR property in T(X) = T. Let H0 : θ = θ0
vs H1 : θ ≠ θ0. If φ is an unbiased test procedure at level
α, then show that Eθ0[Tφ(T)] = α Eθ0[T]

4.16 Define an unbiased test. Show that under certain con-


ditions to be stated a UMP level α test for testing H0
vs H1 and satisfying Eθ [φ(X)] = α for θ ∈ ΩH0 ∩ ΩH1
is UMPU for testing H0 vs H1

4.17 Obtain the LMPU test for testing the statistical hy-
pothesis H0 : θ = 0 vs H1 : θ ≠ 0 using a random
sample from N(θ, 1)

4.18 Which of the following statements are true?
(a) For unbiased tests the probability of rejecting the hy-
pothesis H0 when it is false is bigger than that when
it is true
(b) Unbiasedness guarantees that the chance of rejecting a
true hypothesis is always less than or equal to the chance
of rejecting a false hypothesis
(c) MP level α test is always an unbiased test
(d) MP level α test for testing a simple hypothesis
against a simple alternative is obtained by the NPL
Ans: (a), (b), (c) and (d)

4.19 Let X1, X2, · · · , Xn be iid N(θ, σ²). Let H0 : σ² = σ0² vs
H1 : σ² ≠ σ0². The UMPU level α test rejects H0 iff
(a) Σxi²/σ0² < c1 or Σxi²/σ0² > c2
(b) c1 < Σxi²/σ0² < c2
(c) Σxi²/σ0² > c1
(d) Σxi²/σ0² > c2    Ans: (a)

4.20 Let X1, X2, · · · , Xn be iid following N(θ, 1). Let H0 :
θ = θ0 vs H1 : θ ≠ θ0. The test function is

φ(x) = 1  if |Σxi − nθ0|/√n > zα/2
       0  otherwise

Which of the following statements are true?
(a) φ is the UMP level α test among all unbiased tests
with level α
(b) φ is the UMP level α test
(c) φ has the power β < α
(d) φ has the power β ≥ α    Ans: (a) and (d)

4.21 Let X1, X2, · · · , Xn be iid with N(θ, 1). Let H0 : θ = θ0
vs H1 : θ ≠ θ0. For any level α, 0 < α < 1, which of
the following statements are true?
(a) There is no UMP level α test
(b) There exists a UMP level α test
(c) There exists a UMPU level α test
(d) There exists an unbiased test with level α
Ans: (a), (c) and (d)

4.22 Let X1, X2, · · · , Xn be iid N(θ, 1). The hypothesis
H0 : θ = 0 vs H1 : θ ≠ 0 is rejected at level α iff
(a) |Σxi| > √n zα/2
(b) |Σxi| < √n zα/2
(c) Σxi > √n zα/2
(d) Σxi < √n zα/2    Ans: (a)

4.23 The statistic T(X) = T has a distribution symmetric about θ,
and t ∈ ℜ. Which of the following statements are true?
(a) P{T ≤ θ − t} = P{T ≥ θ + t}
(b) P{T ≤ t} = P{T ≥ t + θ}
(c) P{T ≤ t − θ} = P{T < t}
(d) P{T > θ} = P{T < θ} = 1/2    Ans: (a) and (d)
5. NEYMAN - STRUCTURE AND SIMILAR TESTS

5.1 Introduction

Neyman - Structure tests are an exposition of the ideas of sufficient statistics and similar regions. One of their virtues is that they incorporate some distribution free tests into the tests based on similar regions. For testing the composite hypothesis H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1, the size of the critical region C is defined by

sup_{θ∈ΩH0} α(θ) = sup_{θ∈ΩH0} Eθ[φ(X)]

If α(θ) = α for every θ ∈ ΩH0, the critical region C of size α is said to be similar to the sample space. For any given problem one is interested in determining the entire class of similar regions, which then allows the choice of an optimum critical region among the similar regions.
Example 5.1 A single observation x of a random variable X with pdf

pθ(x) = 2x/θ             if 0 < x ≤ θ
        2(1 − x)/(1 − θ)  if θ < x < 1, 0 < θ < 1

To test H0 : 1/4 ≤ θ ≤ 3/4 the critical function φ suggested is

φ(x) = 1/8  if 1/4 ≤ x ≤ 3/4
       0    otherwise

Compute the size of the test



The hypothesis H0 is composite, so the size of the test is

α = sup_{θ∈ΩH0} Eθ[φ(X)] = (1/8) sup_{θ∈ΩH0} Pθ{1/4 ≤ X ≤ 3/4}

For θ ∈ [1/4, 3/4],

Pθ{1/4 ≤ X ≤ 3/4} = 1 − ∫_0^{1/4} (2x/θ) dx − ∫_{3/4}^{1} [2(1 − x)/(1 − θ)] dx
                  = 1 − 1/(16θ) − 1/[16(1 − θ)]

which is largest when θ(1 − θ) is largest, i.e., at θ = 1/2, giving 1 − 1/8 − 1/8 = 3/4. Hence

α = (1/8)(3/4) = 3/32

5.2 Similar Test

Let the hypothesis be H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1. A test function φ with level α is said to be similar on the boundary if it satisfies

• βφ(θ) is a continuous function in θ
• βφ(θ) = α ∀ θ ∈ ω, where ω = Ω̄H0 ∩ Ω̄H1, Ω̄H0 is the closure of the set ΩH0 and Ω̄H1 is the closure of the set ΩH1

(A point p is said to be an adherent point of a non-empty set E if every neighbourhood of p contains a point of E. A point p is said to be an accumulation point, or limit point, of E if every neighbourhood of p contains at least one point of E other than p. The closure Ē of E is the set of all adherent points of E. Every accumulation point of E is also an adherent point of E, but not conversely.)

Example 5.2 The hypothesis to be tested is H0 : θ ≤ 0 vs H1 : θ > 0 for the normal distribution N(θ, 1) with size α, based on a sample of size n.

The pdf of the normal distribution with mean θ and variance σ² = 1 is a one parameter exponential family of distributions with sufficient statistic T = X̄. The test for the

hypothesis H0 : θ ≤ 0 vs H1 : θ > 0 is the UMP level α test, which is given by

φ(x̄) = 1  if x̄ > c
       0  otherwise

where c is determined by EH0[φ(X̄)] = α. Thus

PH0{X̄ > c} = Pθ0{(X̄ − θ0)/(σ/√n) > (c − θ0)/(σ/√n)}

Here σ = 1 and θ0 = 0, hence

∫_{c√n}^{∞} p(z) dz = α,  where Z = (X̄ − θ0)/(σ/√n) ∼ N(0, 1)    (5.1)

But ∫_{zα}^{∞} p(z) dz = α    (5.2)

Comparing (5.1) and (5.2) gives c√n = zα ⇒ c = zα/√n. The UMP level α test is

φ(x̄) = 1  if x̄ > zα/√n
       0  otherwise

Now ω = {0}, since Ω̄H0 ∩ Ω̄H1 = (−∞, 0] ∩ [0, ∞) = {0}.

β(θ) = Eθ[φ(X̄)] = Pθ{X̄ > zα/√n} = Pθ{(X̄ − θ)/(1/√n) > zα − θ√n}  ∀ θ ≥ 0

On ω = {0} this becomes P{Z > zα}, so

β(0) = ∫_{zα}^{∞} p(z) dz = α

Thus the test φ is similar on ω
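A minimal Python sketch of the power function above (n and α are assumed values; the point is that β(θ) equals α exactly at the boundary point ω = {0} and increases with θ):

from scipy.stats import norm

n, alpha = 16, 0.05
z = norm.ppf(1 - alpha)
for theta in (0.0, 0.1, 0.25, 0.5):
    print(theta, round(norm.sf(z - theta*n**0.5), 4))
# theta = 0 gives exactly alpha, confirming similarity on omega = {0}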


Example 5.3 Two independent observations x1 and x2 of a random variable X have the pdf

pθ(x) = (1/θ) e^{−(x−µ)/θ},  µ < x < ∞, θ > 0

It is decided to reject the hypothesis H0 : θ ≤ 1 in favour of H1 : θ > 1 if |x1 − x2| > loge 100. Is this test similar?

The test function φ for rejecting H0 is

φ(x) = 1  if |x1 − x2| > loge 100
       0  otherwise

Let y = x1 and z = x1 − x2, i.e., x1 = y and x2 = y − z. The Jacobian of the transformation has absolute value |J| = 1.

The joint pdf of X1 and X2 is

pθ(x1, x2) = (1/θ²) e^{−(x1 + x2 − 2µ)/θ},  µ < x1, x2 < ∞

so the joint pdf of Y and Z is

pθ(y, z) = (1/θ²) e^{−(2y − z − 2µ)/θ},  y > µ, y − z > µ

Integrating out y (over y > µ when z < 0 and over y > µ + z when z > 0), the marginal density of Z is

pθ(z) = (1/2θ) e^{z/θ}   if −∞ < z < 0
        (1/2θ) e^{−z/θ}  if 0 < z < ∞

Now the test function is

φ(z) = 1  if |z| > loge 100
       0  otherwise

i.e., φ(z) = 1  if z < −loge 100 or z > loge 100
            0  otherwise

The power function of the test φ is

β(θ) = Eθ[φ(Z)]  ∀ θ ≥ 1
     = ∫_{−∞}^{−loge 100} (1/2θ) e^{z/θ} dz + ∫_{loge 100}^{∞} (1/2θ) e^{−z/θ} dz
     = (1/2) e^{−(loge 100)/θ} + (1/2) e^{−(loge 100)/θ}
     = e^{(1/θ) loge(1/100)} = (1/100)^{1/θ}

Here ω = Ω̄H0 ∩ Ω̄H1 = {1} and β(1) = 1/100 = 0.01 = α, the size of the test function φ. ∴ The test φ is similar on ω
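A quick Monte Carlo check of this power function (a minimal Python sketch; the location µ, the sample size N and the seed are assumed values and do not affect the distribution of |X1 − X2|):

import numpy as np

rng = np.random.default_rng(0)
mu, N = 3.0, 200_000
for theta in (1.0, 2.0):
    x1 = mu + rng.exponential(theta, N)
    x2 = mu + rng.exponential(theta, N)
    est = np.mean(np.abs(x1 - x2) > np.log(100))
    print(theta, round(est, 4), round((1/100)**(1/theta), 4))
# theta = 1 gives about 0.01 (= alpha), confirming the test is similar on omega = {1}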
Theorem 5.1 Suppose φ is a level α unbiased test for testing H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1. If the power function βφ(θ) is a continuous function of θ, then the test function φ is similar on ω.

Proof Suppose βφ(θ) is a continuous function in θ, i.e., for any sequence θn → θ, limn→∞ βφ(θn) = βφ(θ). Also φ is a level α unbiased test, i.e., βφ(θ) ≤ α ∀ θ ∈ ΩH0 and βφ(θ) ≥ α ∀ θ ∈ ΩH1.

Let θ ∈ ω = Ω̄H0 ∩ Ω̄H1. Since θ ∈ Ω̄H0, there is a sequence {θn0, n = 1, 2, 3, · · · } with θn0 ∈ ΩH0 and limn→∞ θn0 = θ, so

limn→∞ βφ(θn0) = βφ(limn→∞ θn0) = βφ(θ) ≤ α    (5.3)

Again, since θ ∈ Ω̄H1, there is a sequence {θn1, n = 1, 2, · · · } with θn1 ∈ ΩH1 and limn→∞ θn1 = θ, so

limn→∞ βφ(θn1) = βφ(limn→∞ θn1) = βφ(θ) ≥ α    (5.4)

From (5.3) and (5.4), βφ(θ) = α for every θ ∈ ω, i.e., φ is a similar test on ω.

5.3 Construction of similar tests

Let ΩH0 and ΩH1 be the subsets of Ω relating to the hypotheses H0 and H1 respectively for the family of probability distributions P = {Pθ, θ ∈ Ω} of X, and let ω = Ω̄H0 ∩ Ω̄H1 be the boundary. Suppose T(X) = T is a sufficient statistic for P, and let P^T = {Pθ^T, θ ∈ ω}. Let 0 ≤ φ(x) ≤ 1 be any critical function with

Eθ[φ(X)] = α ∀ θ ∈ ω    (5.5)

If φ(x) is a non-randomized test procedure with critical region C, then Eθ[φ(X)] = α ⇒ ∫_C dPθ = α ∀ θ ∈ ω. Thus the probability of the critical region C is a constant, free from θ, on the boundary ω; the whole sample space X has the same property, since ∫_X dPθ = 1 ∀ θ ∈ ω. Thus a test function φ satisfying (5.5) is a similar test on ω.

5.4 Neyman - Structure

Let H0 : θ ∈ ΩH0 and H1 : θ ∈ ΩH1 be the hypotheses. Let P^X = {Pθ, θ ∈ ω}, let T(X) be a sufficient statistic for P^X, and let P^T = {Pθ^T, θ ∈ ω}, where ω = Ω̄H0 ∩ Ω̄H1. A test satisfying E[φ(X) | T = t] = α ∀ Pθ^T ∈ P^T is said to have Neyman - Structure with respect to P^T.

Theorem 5.2 Let H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1. Let the family of pdfs {pθ(x), θ ∈ Ω} be such that the power function of every test function is continuous. Let ω be the boundary Ω̄H0 ∩ Ω̄H1. Let D1 = {φ | Eθ[φ(X)] ≤ α ∀ θ ∈ ΩH0 and Eθ[φ(X)] ≥ α ∀ θ ∈ ΩH1} and D2 = {φ | Eθ[φ(X)] = α ∀ θ ∈ ω}. Suppose there exists a UMP test φ0 in the class of tests D2; if φ0 ∈ D1, then φ0 is the UMPU test for testing H0 vs H1.

Proof By Theorem 5.1 (continuity of the power functions), every test in D1 is similar on ω, so D1 ⊆ D2. Since φ0 ∈ D1, φ0 is a level α unbiased test for H0 vs H1. Since φ0 is UMP in the bigger class D2, it is UMP in the smaller class D1 of level α unbiased tests. Thus it is the UMPU level α test.

Theorem 5.3 Let H0 : θ ∈ ΩH0 vs H1 : θ ∈ ΩH1. If a test function φ has Neyman - Structure w.r.t. P^T, then it is similar w.r.t. P^X.

Proof Suppose the test function φ with level α has Neyman - Structure w.r.t. P^T, i.e., E[φ(X) | T = t] = α ∀ Pθ^T ∈ P^T. To prove that φ is similar on ω = Ω̄H0 ∩ Ω̄H1, it must be shown that βφ(θ) = α ∀ θ ∈ ω:

Eθ[E[φ(X) | T]] = Eθ[α] = α ∀ θ ∈ ω

Since E[E[X | Y]] = E[X], this gives Eθ[φ(X)] = α ∀ θ ∈ ω, i.e., βφ(θ) = α ∀ θ ∈ ω ⇒ φ is similar on ω.
Completeness A family of distributions P^T = {Pθ^T, θ ∈ Ω} is said to be complete if Eθ[g(T)] = 0 ∀ θ ∈ Ω ⇒ g(t) = 0 with probability one.

Bounded Completeness Let P^T = {Pθ^T, θ ∈ Ω} be a class of distributions of T(X). This class P^T is said to be boundedly complete if, for every bounded function g(T), i.e., |g(T)| ≤ M, M ∈ ℜ, Eθ[g(T)] = 0 ∀ θ ∈ Ω ⇒ g(t) = 0 with probability one.

Note If P^T = {Pθ^T, θ ∈ Ω} is not boundedly complete, then there exists a function g(T) with |g(T)| ≤ M, M ∈ ℜ, ∀ t such that Eθ[g(T)] = 0 ∀ Pθ^T ∈ P^T and yet Pθ{g(T) ≠ 0} > 0 for some Pθ^T ∈ P^T.

Theorem 5.4 Let X be a random variable with probability distribution P ∈ P, and let T(X) be a sufficient statistic for P. A necessary and sufficient condition for all similar tests to have Neyman - Structure w.r.t. T(X) is that the class P^T of distributions of T(X) is boundedly complete.

Proof Assume P^T is boundedly complete. To show that all similar tests have Neyman - Structure w.r.t. T(X), let φ be a similar test,

i.e., Eθ[φ(X)] = α ∀ θ ∈ ω ⇒ E[φ(X) − α] = 0 ∀ P^T ∈ P^T

Let ψ(t) = E[φ(X) − α | T = t]. Then

E[ψ(T)] = E{E[φ(X) − α | T = t]} = E[φ(X) − α] = α − α = 0 ∀ P^T ∈ P^T

and ψ(t) is bounded, lying between −α and 1 − α. Since P^T is boundedly complete,

ψ(t) = E[φ(X) − α | T = t] = 0, i.e., E[φ(X) | T = t] = α ∀ P^T ∈ P^T

Thus every similar test has Neyman - Structure.

Conversely, let, if possible, P^T be not boundedly complete. Then there exists a function g(T) such that |g(T)| ≤ M ∀ t, E[g(T)] = 0 ∀ P^T ∈ P^T and P{g(T) ≠ 0} > 0 for some P^T ∈ P^T.

Define φ(t) = c g(t)/M + α, where c = min(α, 1 − α). Then φ(t) is a test function, since

φ(t) ≤ c + α = min(α, 1 − α) + α ≤ 1
φ(t) ≥ −c + α = α − min(α, 1 − α) ≥ 0

so that 0 ≤ φ(t) ≤ 1, and

E[φ(T)] = (c/M) E[g(T)] + α = α ∀ P^T ∈ P^T

Thus φ is a similar test. But this test φ does not have Neyman - Structure, since P{g(T) ≠ 0} > 0 for some P^T ∈ P^T implies P{φ(T) ≠ α} > 0 for some P^T ∈ P^T. This contradicts the assumption that all similar tests have Neyman - Structure. To avoid the contradiction, P^T must be boundedly complete.
Problems

5.1 Define similar tests and tests with Neyman-Structure


Also show that if T (X) is sufficient, a necessary and

sufficient condition for all similar tests to have Neyman


- Structure w.r.t. T (X) is that the family of distribu-
tion of T (X) is boundedly complete

5.2 Prove that a test with Neyman-Structure is similar. Is


the converse true?

5.3 Define a similar test and a test with Neyman - Structure


Bring out a relation between the two concepts

5.4 Let x1 and x2 be two independent observations on a
random variable X. The pdf of X is

pθ(x) = (1/θ) e^{−(x−µ)/θ},  µ < x < ∞, θ > 0

It is decided to reject the hypothesis H0 : θ ≤ 1 in
favour of H1 : θ > 1 if |x1 − x2| > loge 100. Find
the power function of the similar test.    Ans: (0.01)^{1/θ}

5.5 Define similar test procedure. Show that α− similar test


procedure can become UMPU under suitable conditions
to be stated

5.6 Define (a) UMP unbiased test (b) UMP similar test
When will they coincide?

5.7 What is an UMP test? Is this related to Neyman -


Structure?

5.8 Show that class of tests with Neyman - Structure coin-


cide with class of tests based on a boundedly complete
sufficient statistics

5.9 Let T (X) be a sufficient statistic which is boundedly


complete and φ is a level α test. Which of the following
statements are true?
(a) Bounded completeness of a statistic T implies com-
pleteness of the statistic
(b) Completeness of a statistic T implies bounded com-
pleteness of the statistic
(c) A test satisfies E[φ(X) | T = t] = α ∀ P T ∈ P T
then the test φ has Neyman - Structure
(d) A test satisfies E[φ(X) | T = t] = α ∀ P T ∈ P T
then the test φ is a similar test Ans: (b), (c) and (d)

5.10 Let θ be the probability of obtaining a head in the


toss of a coin. The coin is tossed three times.We record
Y = 1 , if all the three tosses result in heads and Y = 2
if all the three tosses result in tails. Further Y = 3
otherwise. If the prior density of θ is Beta (α, β) and θ̂i
is the posterior mean of θ given Y = i for i = 1, 2.Which
of the following statements are true?
(a) The posterior density of θ given Y = 3 is a Beta
density
(b) The posterior density of θ given Y = 3 is an Uniform
density
(c) The posterior density of θ given Y = 3 is not a Beta
density
(d) The posterior density of θ given Y = 3 is a Binomial
distribution Ans: (a)

5.11 Let X1, X2, · · · , Xn be independent and identically dis-
tributed random variables having an exponential distri-
bution with mean 1/λ. Let Sn = X1 + X2 + · · · + Xn and
N = inf{n ≥ 1 : Sn > 1}. Which of the following state-
ments is true? The variance of N equals
(a) 1 (b) λ (c) λ² (d) ∞    Ans: (b)

5.12 Let X1 , X2 , · · · , Xn be 20 observations in the interval


[0, 1]. Let x̄ be the mean and the median of these ob-
servations and let s2 = n1 (xi − x̄)2 . Which of the
P

following statement is true?


(a) If the 15 observations are smaller than 0.3 , then x̄
cannot exceed 0.5
(b) s2 will be maximum if 10 of these observations are
1 and the rest are 0.0
(c) if all observations except one are smaller than 0.5
then x̄ cannot be smaller than x̄
(d) s2 ≤ x̄(1 − x̄) Ans: (c)

5.13 The hypothesis to be tested is H0 : θ ≤ 0 vs H1 : θ > 0
for the normal distribution N(θ, 1) with size α based
on a sample of size n. Which of the following statements
are true?
(a) The hypothesis H0 : θ ≤ 0 is rejected iff the critical
region is C = {x̄ | x̄ > zα/√n}
(b) The set ω = {0} is the common boundary of the sets ΩH0 =
{θ | θ ≤ 0} and ΩH1 = {θ | θ > 0}
(c) The test function

φ(x̄) = 1  if x̄ > zα/√n
       0  otherwise

is similar on ω = {0}
(d) The test function φ depends on the sufficient
statistic T = X̄    Ans: (a), (b), (c) and (d)

5.14 Consider a linear model yij = µ + τi + εij, i =
1, 2, · · · , k, j = 1, 2, · · · , n, where µ is unknown, the
τi are independently and identically distributed as
N(0, στ²), and the τi and εij are independent for all i and
j. Note that τi is the treatment effect. Suppose
SSTotal, SSTreatment, SSError are the total, treatment and
error sums of squares respectively. The hypothesis to be
tested is H0 : στ² = 0 vs H1 : στ² > 0. Which of the
following statements is true?
(a) The sum of squares identity is

SSTotal = SSTreatment + SSError

(b) SSError ∼ σ² χ² distribution with n(k − 1) degrees
of freedom
(c) Under H0 the F statistic ∼ F(k−1, n−1) degrees of
freedom
(d) E[SSError] = n(k − 1)(k² + nστ²)    Ans: (a)
NOTATION

X - Sample space

Ω - Parameter space

H - Hypotheses

H0 - Null hypothesis

H1 - Alternative hypothesis

ΩH - Parameter space under H

X - Random variable

x - Random observation

θ - Parameter

ω - Boundary point

X - Vector random variable

x - Vector random observation

P- Family of distributions

Pθ - Distribution with θ

F (x) - Distribution function

pdf - Probability density function

pmf - Probability mass function

pθ (x) -pdf or pmf of x of X with θ

Pθ {X = x} - Probability of x of X with θ

PH {X = x} - Probability of x of X under H

iid - Independent and identically distributed

T (X) - Function in X

t(x) - Function in x

φ - Test function

ψ - Test function

βφ (θ) - Power of the test φ at θ

β(P ) - Power of the test φ under P

EH [φ(X)] - Expected value of φ under H

glb - greatest lower bound

lub - least upper bound

MP - Most Powerful

UMP - Uniformly Most Powerful

UMPU - Uniformly Most Powerful Unbiased

NPL - Neyman - Pearson Lemma

GNPL- Generalized Neyman - Pearson Lemma

LMP- Locally Most Powerful

LMPU - Locally Most Powerful Unbiased

LR - Likelihood Ratio

MLR - Monotone Likelihood Ratio

LPP - Linear Programming Problem

Uα - Class of all level α unbiased tests


BIBLIOGRAPHY

1. Apostal,T.M., Mathematical Analysis, Addison-Wesley, 1960.

2. Cramer,H., Mathematical Methods of Statistics, Princeton University Press, Princeton, N.J., 1946.

3. Durairajan,T.M., Invited lecture programme on Testing of Hypotheses, University


of Madras, 1997.

4. Deshpande,J.V., A.P. Gore and A. Shanubhogue, Statistical Analysis of Non-


normal data, New Age International(P) Ltd., New Delhi, 2003.

5. Fisher,R.A., On the mathematical foundations of theoretical statistics, Phil. Trans. Royal Soc. A, 222, 309 - 368, 1922.

6. Ferguson T.S., Mathematical Statistics, Academic Press, 1967.

7. Hogg,R.V., and A. T. Craig, Introduction to Mathematical Statistics, Macmillan


Publishing Co., Inc., New York, 1970.

8. Lehmann,E.L.,Testing Statistical Hypotheses, John Wiley and Sons., 1959.

9. Lehmann,E.L.,Theory of Point Estimation, John Wiley and Sons., 1983.

10. Lindgren,B.W.,Statistical Theory , Macmillan Publishing Co., Inc. New York,


1976.

11. Nelson, W., Accelerated Testing Statistical Models, Tests Plan and Data Analysis,
John Wiley and Sons, Inc., 2004.

12. Nelson,W., Accelerated life testing step stress models and data analysis, IEEE
Trans. Reliability , 29, 103 - 108, 1980.

13. Neyman,J. and E.S. Pearson, On the problem of the most efficient tests of statis-
tical hypotheses, Phil. Trans.Roy. Soc., A, 231, 289 - 337,1933.

14. Rao,C.R., Linear Statistical Inference and Its Applications, Wiley and Sons, 1984

15. Rohatgi,V.K., Introduction to Probability Theory and Mathematical Statistics,


Wiley and Sons, 1985.

16. Santhakumaran,A.,Decision making tools in real life problem, The Hindu, 27th
Oct. 1997.

17. Santhakumaran,A.,Mathematical Statistics of Probability Models, Atlantic Pub-


lishers and Distributors, New Delhi, 2019.

18. Zacks,S., Theory of Statistical Inference, John Wiley and Sons, New York, 1971.
SUBJECT INDEX

Acceptance region 2 Likelihood ratio 44


Accumulation point 149 Location family 80
Adherent point 149 Locally Most Powerful test 106
Alternative 1 log concave 81
Analysis of variance model 71 LMPU test 141
Binomial pmf 46 Most Powerful test 10
Bounded completeness 154 Monotone class of tests 12
Boundary 149 Monotonic increasing 60
Cauchy standard form 60 Maximum order statistic 69
Chi-Square statistic 69 MLR property 75
Chi- Square distribution 95 Neyman - Pearson Lemma 38
Closure of the set 149 Neyman - Structure 148
Composite 1 Neighbourhood 149
Competitor 1 Non- Negative 40
Completeness 154 Non - decreasing 40
Concave 81 Non- Randomized test 2
Critical region 2 Normal distribution 22
Critical function 2 Negative Binomial distribution 84
Dedekind’s Cut 40 Optimum test 16
Distribution 40 One sided left tail test 132
Exponential family 116 One sided right tail test 132
First order statistics 69 Order statistic 69
Goodness of fit 80 Parameter 1
Hypergeometric distribution 85 Poisson distribution 22
Hypothesis 1 Right continuous 40
Irrational 41 Rational 41
Jacobian transformation 151 Randomized test 2
Karlin - Rubin Theorem 77 Sample space 2

Standard F statistic 73 Type 1 error 4


Solution region 16 17 Type 2 error 4
Simple hypothesis 1 Uniformly Most Powerful test 17
Simplex method 19 UMP one sided test 75
Significance 5 UMP left tail test 104
Size of a test 5 UMP right tail test 103
Similar test 149 UMP two sided test 123
Similar to the sample space 148 UMP test power curve 88
sufficient statistic 69 Unbiased test 124
Test function 2 UMPU test 128
Two sided test 131
Two tail test 104
