Statistics 333
MATHEMATICAL STATISTICS
(Lecture Notes)
© Jan Vrbik
Contents
1 PROBABILITY REVIEW 7
Basic Combinatorics . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Binomial expansion . . . . . . . . . . . . . . . . . . . . . . . . . 7
Multinomial expansion . . . . . . . . . . . . . . . . . . . . . . . 7
Random Experiments (Basic Definitions) . . . . . . . . . . . . . . 7
Sample space . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Set Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Boolean Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Probability of Events . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Probability rules . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Important result . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Probability tree . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Product rule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Conditional probability . . . . . . . . . . . . . . . . . . . . . . . 9
Total-probability formula . . . . . . . . . . . . . . . . . . . . . . 10
Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
Discrete Random Variables . . . . . . . . . . . . . . . . . . . . . . . 10
Bivariate (joint) distribution . . . . . . . . . . . . . . . . . . . 11
Conditional distribution . . . . . . . . . . . . . . . . . . . . . . 11
Independence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Multivariate distribution . . . . . . . . . . . . . . . . . . . . . . 11
Expected Value of a RV . . . . . . . . . . . . . . . . . . . . . . . . . 11
Expected values related to X and Y . . . . . . . . . . . . . . . 12
Moments (univariate) . . . . . . . . . . . . . . . . . . . . . . . . 12
Moments (bivariate or joint) . . . . . . . . . . . . . . . . . . . 12
Variance of aX +bY +c . . . . . . . . . . . . . . . . . . . . . . . 13
Moment generating function . . . . . . . . . . . . . . . . . . . . . . 13
Main results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Probability generating function . . . . . . . . . . . . . . . . . . . . . 13
Conditional expected value . . . . . . . . . . . . . . . . . . . . . . . 14
Common discrete distributions . . . . . . . . . . . . . . . . . . . . . 14
Binomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Geometric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Negative Binomial . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Hypergeometric . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Poisson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Multinomial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
Multivariate Hypergeometric . . . . . . . . . . . . . . . . . . . 15
Continuous Random Variables . . . . . . . . . . . . . . . . . . . . . 16
Univariate probability density function (pdf) . . . . . . . . . 16
Distribution Function . . . . . . . . . . . . . . . . . . . . . . . . 16
Bivariate (multivariate) pdf . . . . . . . . . . . . . . . . . . . . 16
Marginal Distributions . . . . . . . . . . . . . . . . . . . . . . . 16
Conditional Distribution . . . . . . . . . . . . . . . . . . . . . . 17
Mutual Independence . . . . . . . . . . . . . . . . . . . . . . . . 17
Expected value . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Common Continuous Distributions . . . . . . . . . . . . . . . . . . 18
Transforming Random Variables . . . . . . . . . . . . . . . . . . . . 18
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2 Transforming Random Variables 21
Univariate transformation . . . . . . . . . . . . . . . . . . . . . . . . 21
Distribution-Function (F) Technique . . . . . . . . . . . . . . 21
Probability-Density-Function (f) Technique . . . . . . . . . . 23
Bivariate transformation . . . . . . . . . . . . . . . . . . . . . . . . . 24
Distribution-Function Technique . . . . . . . . . . . . . . . . . 24
Pdf (Shortcut) Technique . . . . . . . . . . . . . . . . . . . . . 25
3 Random Sampling 31
Sample mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
Central Limit Theorem . . . . . . . . . . . . . . . . . . . . . . . 31
Sample variance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Sampling from N(μ, σ) . . . . . . . . . . . . . . . . . . . . . . . 33
Sampling without replacement . . . . . . . . . . . . . . . . . . . . . 35
Bivariate samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Order Statistics 37
Univariate pdf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Sample median . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Bivariate pdf . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Special Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
5 Estimating Distribution Parameters 45
A few definitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Cramér-Rao inequality . . . . . . . . . . . . . . . . . . . . . . . . . 47
Sufficiency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
Method of moments . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
One Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
Two Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
Maximum-likelihood technique . . . . . . . . . . . . . . . . . . . . . 53
One Parameter . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
Two-parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
6 Confidence Intervals 57
CI for mean . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
σ unknown . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Large-sample case . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Difference of two means . . . . . . . . . . . . . . . . . . . . . . 58
Proportion(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
Variance(s) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
$\sigma_1^2/\sigma_2^2$ ratio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
7 Testing Hypotheses 61
Tests concerning mean(s) . . . . . . . . . . . . . . . . . . . . . . . . 62
Concerning variance(s) . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Concerning proportion(s) . . . . . . . . . . . . . . . . . . . . . . . . 63
Contingency tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Goodness of fit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
8 Linear Regression and Correlation 65
Simple regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
Maximum likelihood method . . . . . . . . . . . . . . . . . . . 65
Least-squares technique . . . . . . . . . . . . . . . . . . . . . . . 65
Normal equations . . . . . . . . . . . . . . . . . . . . . . . . . . 66
Statistical properties of the estimators . . . . . . . . . . . . . 67
Confidence intervals . . . . . . . . . . . . . . . . . . . . . . . . . 69
Correlation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
Multiple regression . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
Various standard errors . . . . . . . . . . . . . . . . . . . . . . . 73
9 Analysis of Variance 75
One-way ANOVA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
Two-way ANOVA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
No interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
With interaction . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
10 Nonparametric Tests 79
Sign test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Signed-rank test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Rank-sum tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Mann-Whitney . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Kruskal-Wallis . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Run test . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
(Spearman's) rank correlation coefficient . . . . . . . . . . . . . . 83
Chapter 1 PROBABILITY REVIEW
Basic Combinatorics
Number of permutations of n distinct objects: n!
Not all distinct, such as, for example, aaabbc:
$$\frac{6!}{3!\,2!\,1!} \overset{\text{def.}}{=} \binom{6}{3,2,1}$$
or
$$\frac{N!}{n_1!\,n_2!\,n_3!\cdots n_k!} \overset{\text{def.}}{=} \binom{N}{n_1,n_2,n_3,\ldots,n_k}$$
in general, where $N = \sum_{i=1}^{k} n_i$, which is the total word length (multinomial coefficient).
Selecting r out of n objects (without duplication), counting all possible arrangements:
$$n\,(n-1)\,(n-2)\cdots(n-r+1) = \frac{n!}{(n-r)!} \overset{\text{def.}}{=} P^n_r$$
(number of permutations).
Forget their final arrangement:
$$\frac{P^n_r}{r!} = \frac{n!}{(n-r)!\,r!} \overset{\text{def.}}{=} C^n_r$$
(number of combinations). This will also be called the binomial coefficient.
If we can duplicate (any number of times), and count the arrangements: $n^r$
Binomial expansion
$$(x+y)^n = \sum_{i=0}^{n} \binom{n}{i}\, x^{n-i}\, y^{i}$$
Multinomial expansion
$$(x+y+z)^n = \sum_{\substack{i,j,k\ge 0 \\ i+j+k=n}} \binom{n}{i,j,k}\, x^{i}\, y^{j}\, z^{k}$$
$$(x+y+z+w)^n = \sum_{\substack{i,j,k,\ell\ge 0 \\ i+j+k+\ell=n}} \binom{n}{i,j,k,\ell}\, x^{i}\, y^{j}\, z^{k}\, w^{\ell}$$
etc.
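These coefficients and expansions are easy to spot-check numerically; the following minimal sketch (the helper name multinomial is ours, not part of the notes) verifies the aaabbc count and the binomial expansion:

import math

def multinomial(*ns):
    """Multinomial coefficient N!/(n1! n2! ... nk!) with N = sum of the ns."""
    N = sum(ns)
    c = math.factorial(N)
    for n in ns:
        c //= math.factorial(n)
    return c

# the aaabbc example: 6!/(3!2!1!) = 60 distinct arrangements
print(multinomial(3, 2, 1))                     # 60

# binomial expansion check: (x+y)^n versus the sum of binomial terms
x, y, n = 1.7, 0.4, 5
lhs = (x + y) ** n
rhs = sum(math.comb(n, i) * x ** (n - i) * y ** i for i in range(n + 1))
print(abs(lhs - rhs) < 1e-9)                    # True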
Random Experiments (Basic Definitions)
Sample space
is a collection of all possible outcomes of an experiment.
The individual (complete) outcomes are called simple events.
Events
are subsets of the sample space (A, B, C,...).
Set Theory
The old notion of:                              is (are) now called:
Universal set $\Omega$                          Sample space
Elements of $\Omega$ (its individual points)    Simple events (complete outcomes)
Subsets of $\Omega$                             Events
Empty set $\varnothing$                         Null event
We continue to use the word intersection (notation: $A \cap B$, representing
the collection of simple events common to both A and B), union ($A \cup B$, simple
events belonging to either A or B or both), and complement ($\bar{A}$, simple events
not in A). One should be able to visualize these using Venn diagrams, but when
dealing with more than 3 events at a time, one can tackle problems only with the
help of
Boolean Algebra
Both $\cap$ and $\cup$ (individually) are commutative and associative.
Intersection is distributive over union: $A \cap (B \cup C \cup \ldots) = (A \cap B) \cup (A \cap C) \cup \ldots$
Similarly, union is distributive over intersection: $A \cup (B \cap C \cap \ldots) = (A \cup B) \cap (A \cup C) \cap \ldots$
Trivial rules: $A \cap \Omega = A$, $A \cap \varnothing = \varnothing$, $A \cap A = A$, $A \cup \varnothing = A$, $A \cup \Omega = \Omega$,
$A \cup A = A$, $A \cap \bar{A} = \varnothing$, $A \cup \bar{A} = \Omega$, $\bar{\bar{A}} = A$.
Also, when $A \subset B$ (A is a subset of B, meaning that every element of A also
belongs to B), we get: $A \cap B = A$ (the smaller event) and $A \cup B = B$ (the bigger
event).
DeMorgan Laws: $\overline{A \cap B} = \bar{A} \cup \bar{B}$ and $\overline{A \cup B} = \bar{A} \cap \bar{B}$, or in general
$$\overline{A \cap B \cap C \cap \ldots} = \bar{A} \cup \bar{B} \cup \bar{C} \cup \ldots$$
and vice versa (i.e. with $\cap$ and $\cup$ interchanged).
A and B are called (mutually) exclusive or disjoint when $A \cap B = \varnothing$ (no
overlap).
Probability of Events
Simple events can be assigned a probability (relative frequency of their occurrence
in a long run). It's obvious that each of these probabilities must be a non-negative
number. To find the probability of any other event A (not necessarily simple), we
then add the probabilities of the simple events A consists of. This immediately
implies that probabilities must follow a few basic rules:
$$\Pr(A) \ge 0$$
$$\Pr(\varnothing) = 0$$
$$\Pr(\Omega) = 1$$
(the relative frequency of $\Omega$ is obviously 1).
We should mention that $\Pr(A) = 0$ does not necessarily imply that $A = \varnothing$.
Probability rules
$\Pr(A \cup B) = \Pr(A) + \Pr(B)$, but only when $A \cap B = \varnothing$ (disjoint). This implies that
$\Pr(\bar{A}) = 1 - \Pr(A)$ as a special case.
This also implies that $\Pr(A \cap \bar{B}) = \Pr(A) - \Pr(A \cap B)$.
For any A and B (possibly overlapping) we have
$$\Pr(A \cup B) = \Pr(A) + \Pr(B) - \Pr(A \cap B)$$
Can be extended to: $\Pr(A \cup B \cup C) = \Pr(A) + \Pr(B) + \Pr(C) - \Pr(A \cap B) - \Pr(A \cap C) - \Pr(B \cap C) + \Pr(A \cap B \cap C)$.
In general
$$\Pr(A_1 \cup A_2 \cup A_3 \cup \ldots \cup A_k) = \sum_{i=1}^{k}\Pr(A_i) - \sum_{i<j}^{k}\Pr(A_i \cap A_j) + \sum_{i<j<\ell}^{k}\Pr(A_i \cap A_j \cap A_\ell) - \ldots \pm \Pr(A_1 \cap A_2 \cap A_3 \cap \ldots \cap A_k)$$
The formula computes the probability that at least one of the $A_i$ events happens.
The probability of getting exactly one of the $A_i$ events is similarly computed by:
$$\sum_{i=1}^{k}\Pr(A_i) - 2\sum_{i<j}^{k}\Pr(A_i \cap A_j) + 3\sum_{i<j<\ell}^{k}\Pr(A_i \cap A_j \cap A_\ell) - \ldots \pm k\,\Pr(A_1 \cap A_2 \cap A_3 \cap \ldots \cap A_k)$$
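Both formulas can be checked by brute force over a small finite sample space; the events below are arbitrary illustrations, not from the notes (a minimal sketch):

from itertools import combinations
from math import isclose

omega = set(range(1, 13))                      # 12 equally likely outcomes
A = {1, 2, 3, 4}; B = {3, 4, 5, 6}; C = {4, 6, 7, 8}
events = [A, B, C]

def P(E):
    return len(E) / len(omega)                 # probability of an event

def inter(*Es):
    out = set(omega)
    for E in Es:
        out &= E
    return out

# inclusion-exclusion: Pr(at least one of A, B, C)
at_least = sum((-1) ** (r + 1) * sum(P(inter(*c)) for c in combinations(events, r))
               for r in range(1, 4))
# weighted version: Pr(exactly one of A, B, C)
exactly = sum((-1) ** (r + 1) * r * sum(P(inter(*c)) for c in combinations(events, r))
              for r in range(1, 4))

print(isclose(at_least, P(A | B | C)))                                              # True
print(isclose(exactly, P({x for x in omega if sum(x in E for E in events) == 1})))  # True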
Important result
Probability of any (Boolean) expression involving events A, B, C, ... can always be
converted to a linear combination of probabilities of the individual events and their
simple (non-complemented) intersections ($A \cap B$, $A \cap B \cap C$, etc.) only.
Probability tree
is a graphical representation of a two-stage (three-stage) random experiment
(effectively its sample space - each complete path being a simple event).
The individual branch probabilities (usually simple to figure out) are the so-called
conditional probabilities.
Product rule
$$\Pr(A \cap B) = \Pr(A) \cdot \Pr(B|A)$$
$$\Pr(A \cap B \cap C) = \Pr(A) \cdot \Pr(B|A) \cdot \Pr(C|A \cap B)$$
$$\Pr(A \cap B \cap C \cap D) = \Pr(A) \cdot \Pr(B|A) \cdot \Pr(C|A \cap B) \cdot \Pr(D|A \cap B \cap C)$$
$$\vdots$$
Conditional probability
The general definition:
$$\Pr(B|A) \equiv \frac{\Pr(A \cap B)}{\Pr(A)}$$
All basic formulas of probability remain true, conditionally, e.g.: $\Pr(\bar{B}|A) =
1 - \Pr(B|A)$, $\Pr(B \cup C|A) = \Pr(B|A) + \Pr(C|A) - \Pr(B \cap C|A)$, etc.
Total-probability formula
A partition represents chopping the sample space into several smaller events, say
$A_1, A_2, A_3, \ldots, A_k$, so that they
(i) don't overlap (i.e. are all mutually exclusive): $A_i \cap A_j = \varnothing$ for any $1 \le i, j \le k$ ($i \ne j$)
(ii) cover the whole of $\Omega$ (i.e. no gaps): $A_1 \cup A_2 \cup A_3 \cup \ldots \cup A_k = \Omega$.
For any partition, and an unrelated event B, we have
$$\Pr(B) = \Pr(B|A_1)\cdot\Pr(A_1) + \Pr(B|A_2)\cdot\Pr(A_2) + \ldots + \Pr(B|A_k)\cdot\Pr(A_k)$$
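A quick numerical illustration of the total-probability formula (the numbers below are made up for the example, not taken from the notes):

# partition probabilities Pr(A_i) and conditional probabilities Pr(B | A_i)
pr_A = [0.5, 0.3, 0.2]                 # must add up to 1 (a partition)
pr_B_given_A = [0.10, 0.40, 0.25]

pr_B = sum(pa * pb for pa, pb in zip(pr_A, pr_B_given_A))
print(pr_B)                            # 0.5*0.10 + 0.3*0.40 + 0.2*0.25 = 0.22

# the product rule gives each joint probability Pr(A_i and B); dividing by Pr(B)
# yields the reverse conditionals Pr(A_i | B) (Bayes' rule)
posterior = [pa * pb / pr_B for pa, pb in zip(pr_A, pr_B_given_A)]
print(posterior)                       # about [0.227, 0.545, 0.227]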
Independence
of two events is a very natural notion (we should be able to tell from the experi-
ment): when one of these events happens, it does not affect the probability of the
other. Mathematically, this is expressed by either
$$\Pr(B|A) = \Pr(B)$$
or, equivalently, by
$$\Pr(A \cap B) = \Pr(A) \cdot \Pr(B)$$
Similarly, for three events, their mutual independence means
$$\Pr(A \cap B \cap C) = \Pr(A) \cdot \Pr(B) \cdot \Pr(C)$$
etc.
Mutual independence of A, B, C, D, ... implies that any event built out of A, B,
... must be independent of any event built out of C, D, ... [as long as the two sets
are distinct].
Another important result is: to compute the probability of a Boolean expression
(itself an event) involving only mutually independent events, it is sufficient
to know the events' individual probabilities.
Discrete Random Variables
A random variable yields a number, for every possible outcome of a random
experiment.
A table (or a formula, called probability function) summarizing the in-
formation about
1. possible outcomes of the RV (numbers, arranged from the smallest to the
largest)
2. the corresponding probabilities
is called the probability distribution.
Similarly, the distribution function $F_X(k) = \Pr(X \le k)$ computes cumulative
probabilities.
Bivariate (joint) distribution
of two random variables is similarly specified via the corresponding probability
function
$$f(i,j) = \Pr(X = i \cap Y = j)$$
with the range of possible i and j values. One of the two ranges is always marginal
(the limits are constant), the other one is conditional (i.e. both of its limits may
depend on the value of the other random variable).
Based on this, one can always find the corresponding marginal distribution
of X:
$$f_X(i) = \Pr(X = i) = \sum_{j|i} f(i,j)$$
and, similarly, the marginal distribution of Y.
Conditional distribution
of X, given an (observed) value of Y, is defined by
$$f_X(i\,|\,Y = j) \equiv \Pr(X = i\,|\,Y = j) = \frac{\Pr(X = i \cap Y = j)}{\Pr(Y = j)}$$
where i varies over its conditional range of values (given Y = j).
Conditional distribution has all the properties of an ordinary distribution.
Independence
of X and Y means that the outcome of X cannot influence the outcome of Y (and
vice versa) - something we can gather from the experiment.
This implies that $\Pr(X = i \cap Y = j) = \Pr(X = i)\cdot\Pr(Y = j)$ for every possible
combination of i and j.
Multivariate distribution
is a distribution of three or more RVs - conditional distributions can get rather
tricky.
Expected Value of a RV
also called its mean or average, is a number which corresponds (empirically)
to the average value of the random variable when the experiment is repeated,
independently, infinitely many times (i.e. it is the limit of such averages). It is
computed by
$$\mu_X \equiv E(X) \equiv \sum_{i} i \cdot \Pr(X = i)$$
(weighted average), where the summation is over all possible values of i.
In general, $E[g(X)] \ne g(E[X])$.
But, for a linear transformation,
$$E(aX + c) = aE(X) + c$$
Expected values related to X and Y
In general we have
$$E[g(X,Y)] = \sum_{i}\sum_{j} g(i,j)\,\Pr(X = i \cap Y = j)$$
This would normally not equal $g(\mu_X, \mu_Y)$, except:
$$E[aX + bY + c] = aE(X) + bE(Y) + c$$
The previous formula easily extends to any number of variables:
$$E[a_1 X_1 + a_2 X_2 + \ldots + a_k X_k + c] = a_1 E(X_1) + a_2 E(X_2) + \ldots + a_k E(X_k) + c$$
(no independence necessary).
When X and Y are independent, we also have
$$E(X \cdot Y) = E(X) \cdot E(Y)$$
and, in general:
$$E[g_1(X)\cdot g_2(Y)] = E[g_1(X)]\cdot E[g_2(Y)]$$
Moments (univariate)
Simple:
$$E(X^n)$$
Central:
$$E\big[(X - \mu_X)^n\big]$$
Of these, the most important is the variance of X:
$$\mathrm{Var}(X) \equiv E\big[(X - \mu_X)^2\big] = E(X^2) - \mu_X^2$$
Its square root is the standard deviation of X, notation: $\sigma_X = \sqrt{\mathrm{Var}(X)}$ (this
is the Greek letter sigma).
The interval $\mu - \sigma$ to $\mu + \sigma$ should contain the bulk of the distribution -
anywhere from 50 to 90%.
When $Y \equiv aX + c$ (a linear transformation of X), we get
$$\mathrm{Var}(Y) = a^2\,\mathrm{Var}(X)$$
which implies $\sigma_Y = |a|\,\sigma_X$.
Moments (bivariate or joint)
Simple:
$$E(X^n Y^m)$$
Central:
$$E\big[(X - \mu_X)^n (Y - \mu_Y)^m\big]$$
Covariance:
$$\mathrm{Cov}(X,Y) \equiv E\big[(X - \mu_X)(Y - \mu_Y)\big] = E(X \cdot Y) - \mu_X\,\mu_Y$$
It becomes zero when X and Y are independent, but: zero covariance does not
necessarily imply independence.
A related quantity is the correlation coefficient between X and Y:
$$\rho_{xy} = \frac{\mathrm{Cov}(X,Y)}{\sigma_X\,\sigma_Y}$$
(this is the Greek letter rho). The absolute value of this coefficient cannot be
greater than 1.
Variance of aX +bY +c
is equal to
$$a^2\,\mathrm{Var}(X) + b^2\,\mathrm{Var}(Y) + 2ab\,\mathrm{Cov}(X,Y)$$
Independence would make the last term zero.
Extended to a linear combination of any number of random variables:
$$\mathrm{Var}(a_1 X_1 + a_2 X_2 + \ldots + a_k X_k + c) = a_1^2\,\mathrm{Var}(X_1) + a_2^2\,\mathrm{Var}(X_2) + \ldots + a_k^2\,\mathrm{Var}(X_k)$$
$$+\, 2a_1 a_2\,\mathrm{Cov}(X_1,X_2) + 2a_1 a_3\,\mathrm{Cov}(X_1,X_3) + \ldots + 2a_{k-1} a_k\,\mathrm{Cov}(X_{k-1},X_k)$$
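The variance formula for a linear combination is easy to confirm by simulation; a minimal sketch with arbitrarily chosen constants and a deliberately correlated pair:

import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
X = rng.normal(0.0, 2.0, n)
Y = 0.5 * X + rng.normal(0.0, 1.0, n)      # X and Y are correlated on purpose
a, b, c = 3.0, -2.0, 7.0

lhs = np.var(a * X + b * Y + c)
rhs = a**2 * np.var(X) + b**2 * np.var(Y) + 2 * a * b * np.cov(X, Y)[0, 1]
print(lhs, rhs)                            # the two numbers agree closely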
Moment generating function
is defined by
$$M_X(t) \equiv E\big(e^{tX}\big)$$
Common discrete distributions
Binomial
$$f(i) = \binom{n}{i}\,p^i\,q^{n-i} \quad\text{where } 0 \le i \le n$$
Expected value (mean): $np$
Variance: $npq$
Geometric
The number of trials to get the first success, in an independent series of trials.
$$f(i) = p\,q^{i-1} \quad\text{where } i \ge 1$$
The mean is $\dfrac{1}{p}$ and the variance: $\dfrac{1}{p}\Big(\dfrac{1}{p} - 1\Big)$.
Negative Binomial
The number of trials to get the k-th success.
$$f(i) = \binom{i-1}{k-1}\,p^k\,q^{i-k} = \binom{i-1}{i-k}\,p^k\,q^{i-k} \quad\text{where } i \ge k$$
The mean is $\dfrac{k}{p}$ and the variance: $\dfrac{k}{p}\Big(\dfrac{1}{p} - 1\Big)$.
$$F(j) = 1 - \sum_{i=0}^{k-1}\binom{j}{i}\,p^i\,q^{j-i} \quad\text{where } j \ge k$$
Hypergeometric
Suppose there are N objects, K of which have some special property. Of these N
objects, n are randomly selected [sampling without replacement]. X is the
number of special objects found in the sample.
$$f(i) = \frac{\binom{K}{i}\binom{N-K}{n-i}}{\binom{N}{n}} \quad\text{where } \max(0,\, n-N+K) \le i \le \min(n, K)$$
The mean is
$$n\,\frac{K}{N}$$
and the variance:
$$n\,\frac{K}{N}\cdot\frac{N-K}{N}\cdot\frac{N-n}{N-1}$$
Note the similarity (and difference) to the binomial npq formula.
Poisson
Assume that customers arrive at a store randomly, at a constant rate of $\lambda$ per hour.
X is the number of customers who will arrive during the next T hours; the relevant parameter is $\Lambda = \lambda T$.
$$f(i) = \frac{\Lambda^i}{i!}\,e^{-\Lambda} \quad\text{where } i \ge 0$$
Both the mean and the variance of this distribution are equal to $\Lambda$.
The remaining two distributions are of the multivariate type.
Multinomial
is an extension of the binomial distribution, in which each trial can result in 3 (or
more) possible outcomes (not just S and F). The trials are repeated, independently,
n times; this time we need three RVs X, Y and Z, which count the total number
of outcomes of the first, second and third type, respectively.
$$\Pr(X = i \cap Y = j \cap Z = k) = \binom{n}{i,j,k}\,p_x^i\,p_y^j\,p_z^k$$
for any non-negative integer values of i, j, k which add up to n. This formula can
be easily extended to the case of 4 or more possible outcomes.
The marginal distribution of X is obviously binomial (with n and $p \equiv p_x$
being the two parameters).
We also need
$$\mathrm{Cov}(X,Y) = -n\,p_x\,p_y$$
etc.
Multivariate Hypergeometric
is a simple extension of the univariate hypergeometric distribution, to the case of
having three (or more) types of objects. We now assume that the total number of
objects of each type is $K_1$, $K_2$ and $K_3$, where $K_1 + K_2 + K_3 = N$.
$$\Pr(X = i \cap Y = j \cap Z = k) = \frac{\binom{K_1}{i}\binom{K_2}{j}\binom{K_3}{k}}{\binom{N}{n}}$$
where X, Y and Z count the number of objects of Type 1, 2 and 3, respectively, in
the sample. Naturally, $i + j + k = n$. Otherwise, i, j and k can be any non-negative
integers for which the above expression is meaningful (i.e. no negative factorials).
The marginal distribution of X (and Y, and Z) is univariate hypergeometric
(of the old kind) with obvious parameters.
$$\mathrm{Cov}(X,Y) = -n\,\frac{K_1}{N}\cdot\frac{K_2}{N}\cdot\frac{N-n}{N-1}$$
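The stated means and variances of the discrete distributions are easy to spot-check with scipy.stats; a sketch with arbitrary parameter values (not from the notes):

from scipy import stats

n, p = 10, 0.3
print(stats.binom.stats(n, p, moments="mv"))          # (3.0, 2.1) = (np, npq)

N, K, sample = 50, 20, 10                              # hypergeometric
# scipy's convention: hypergeom(M = N, n = K, N = sample size)
print(stats.hypergeom.stats(N, K, sample, moments="mv"))
# mean nK/N = 4, variance n(K/N)((N-K)/N)((N-n)/(N-1)) ≈ 1.959

lam = 2.5
print(stats.poisson.stats(mu=lam, moments="mv"))       # (2.5, 2.5)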
Continuous Random Variables
Any real value from a certain interval can happen. Pr(X = x) is always equal to
zero (we have lost the individual probabilities)! Instead, we use
Univariate probability density function (pdf)
formally defined by
$$f(x) \equiv \lim_{\epsilon\to 0}\frac{\Pr(x \le X < x + \epsilon)}{\epsilon}$$
so that $\Pr(a < X < b) = \int_a^b f(u)\, du$,
which is quite crucial to us now (without it, we cannot compute probabilities).
Bivariate (multivariate) pdf
$$f(x,y) = \lim_{\substack{\epsilon\to 0 \\ \delta\to 0}}\frac{\Pr(x \le X < x + \epsilon \,\cap\, y \le Y < y + \delta)}{\epsilon\,\delta}$$
which implies that the probability of (X,Y)-values falling inside a 2-D region A
is computed by
$$\iint_{A} f(x,y)\, dx\, dy$$
Similarly for three or more variables.
Marginal Distributions
Given a bivariate pdf f(x, y), we can eliminate Y and get the marginal pdf of X
by
$$f_X(x) = \int_{\text{all } y|x} f(x,y)\, dy$$
The integration is over the conditional range of y given x; the result is valid in
the marginal range of x.
Conditional Distribution
is the distribution of X given that Y has been observed to result in a specific value
y. The corresponding conditional pdf of X is computed by
$$f(x\,|\,Y = y) = \frac{f(x,y)}{f_Y(y)}$$
valid in the corresponding conditional range of x values.
Mutual Independence
implies that $f_{XY}(x,y) = f_X(x)\cdot f_Y(y)$, with all the other consequences (same as
in the discrete case), most notably $f(x\,|\,Y = y) = f_X(x)$.
Expected value
of a continuous RV X is computed by
$$E(X) = \int_{\text{all } x} x\, f(x)\, dx$$
Similarly:
$$E[g(X)] = \int_{\text{all } x} g(x)\, f(x)\, dx$$
where g(..) is an arbitrary function.
In the bivariate case:
$$E[g(X,Y)] = \iint_{R} g(x,y)\, f(x,y)\, dx\, dy$$
Simple moments, central moments, variance, covariance, etc. are defined in
exactly the same manner as in the discrete case. Also, all previous formulas for dealing
with linear combinations of RVs (expected value, variance, covariance) still hold,
without change.
Also, the moment generating function is defined in the analogous manner, via:
$$M_X(t) \equiv E(e^{tX}) = \int_{\text{all } x} e^{tx}\, f(x)\, dx$$
with all the previous results still being correct.
Common Continuous Distributions
First, the univariate case (name, notation, range, and pdf f(x)):
Uniform      $U(a,b)$:            $a < x < b$,             $f(x) = \frac{1}{b-a}$
Normal       $N(\mu,\sigma)$:     $-\infty < x < \infty$,  $f(x) = \frac{1}{\sigma\sqrt{2\pi}}\exp\big[-\frac{(x-\mu)^2}{2\sigma^2}\big]$
Exponential  $E(\beta)$:          $x > 0$,                 $f(x) = \frac{1}{\beta}\exp\big[-\frac{x}{\beta}\big]$
Gamma        gamma$(\alpha,\beta)$: $x > 0$,               $f(x) = \frac{x^{\alpha-1}}{\Gamma(\alpha)\,\beta^{\alpha}}\exp\big[-\frac{x}{\beta}\big]$
Beta         beta$(k,m)$:         $0 < x < 1$,             $f(x) = \frac{\Gamma(k+m)}{\Gamma(k)\Gamma(m)}\,x^{k-1}(1-x)^{m-1}$
Chi-square   $\chi^2_m$:          $x > 0$,                 $f(x) = \frac{x^{m/2-1}}{\Gamma(\frac{m}{2})\,2^{m/2}}\exp\big[-\frac{x}{2}\big]$
Student      $t_m$:               $-\infty < x < \infty$,  $f(x) = \frac{\Gamma(\frac{m+1}{2})}{\Gamma(\frac{m}{2})\sqrt{m\pi}}\big(1+\frac{x^2}{m}\big)^{-\frac{m+1}{2}}$
Fisher       $F_{k,m}$:           $x > 0$,                 $f(x) = \frac{\Gamma(\frac{k+m}{2})}{\Gamma(\frac{k}{2})\Gamma(\frac{m}{2})}\big(\frac{k}{m}\big)^{\frac{k}{2}}\frac{x^{\frac{k}{2}-1}}{\big(1+\frac{k}{m}x\big)^{\frac{k+m}{2}}}$
Cauchy       $C(a,b)$:            $-\infty < x < \infty$,  $f(x) = \frac{b}{\pi}\cdot\frac{1}{b^2+(x-a)^2}$
And the corresponding F(x), mean and variance:
Uniform:     $F(x) = \frac{x-a}{b-a}$;                   mean $\frac{a+b}{2}$;     variance $\frac{(b-a)^2}{12}$
Normal:      Tables;                                     mean $\mu$;               variance $\sigma^2$
Exponential: $F(x) = 1-\exp\big[-\frac{x}{\beta}\big]$;  mean $\beta$;             variance $\beta^2$
Gamma:       integer $\alpha$ only;                      mean $\alpha\beta$;       variance $\alpha\beta^2$
Beta:        integer k, m only;                          mean $\frac{k}{k+m}$;     variance $\frac{k\,m}{(k+m+1)(k+m)^2}$
Chi-square:  Tables;                                     mean $m$;                 variance $2m$
Student:     Tables;                                     mean $0$;                 variance $\frac{m}{m-2}$
Fisher:      Tables;                                     mean $\frac{m}{m-2}$;     variance $\frac{2\,m^2\,(k+m-2)}{(m-2)^2\,(m-4)\,k}$
Cauchy:      $F(x) = \frac{1}{2}+\frac{1}{\pi}\arctan\big(\frac{x-a}{b}\big)$;   mean and variance do not exist
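A few of these moments can be confirmed with scipy.stats; a sketch with arbitrarily chosen parameter values:

from scipy import stats

a, b = 3.0, 2.0                                        # gamma(alpha=3, beta=2)
print(stats.gamma.stats(a, scale=b, moments="mv"))     # (6.0, 12.0) = (αβ, αβ²)

k, m = 2.0, 5.0                                        # beta(k, m)
print(stats.beta.stats(k, m, moments="mv"))
# mean k/(k+m) ≈ 0.2857, variance km/((k+m+1)(k+m)²) ≈ 0.0255

dfn, dfd = 4, 10                                       # Fisher F(k=4, m=10)
print(stats.f.stats(dfn, dfd, moments="mv"))
# mean m/(m-2) = 1.25, variance 2m²(k+m-2)/((m-2)²(m-4)k) = 1.5625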
We need only one bivariate example:
Bivariate Normal distribution has, in general, 5 parameters: $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$,
and $\rho$. Its joint pdf can be simplified by introducing $Z_1 \equiv \frac{X-\mu_x}{\sigma_x}$ and $Z_2 \equiv \frac{Y-\mu_y}{\sigma_y}$
(standardized RVs), for which
$$f(z_1,z_2) = \frac{1}{2\pi\sqrt{1-\rho^2}}\,\exp\Big[-\frac{z_1^2 - 2\rho\,z_1 z_2 + z_2^2}{2(1-\rho^2)}\Big]$$
Its marginal distributions are both Normal, and so is the conditional distribution:
$$\mathrm{Distr}(X\,|\,Y = y) \sim N\Big(\mu_x + \rho\,\sigma_x\,\frac{y-\mu_y}{\sigma_y},\ \sigma_x\sqrt{1-\rho^2}\Big)$$
Transforming Random Variables
i.e. if Y = g(X), where X has a given distribution, what is the distribution of Y?
There are two techniques to deal with this: one uses F(x), the other one f(x) - the latter only
for one-to-one transformations.
This can be generalized to: given the joint distribution of X and Y (usually
independent), find the distribution of g(X,Y).
Examples
g:                                                     Distribution:
$-\beta\,\ln U(0,1)$                                   $E(\beta)$
$N(0,1)^2$                                             $\chi^2_1$
$N_1(0,1)^2 + N_2(0,1)^2 + \ldots + N_m(0,1)^2$        $\chi^2_m$
$\overline{C(a,b)}$ (sample mean)                      $C(a,b)$
$\dfrac{E_1(\beta)}{E_1(\beta)+E_2(\beta)}$            $U(0,1)$
$\dfrac{\mathrm{gamma}_1(k,\beta)}{\mathrm{gamma}_1(k,\beta)+\mathrm{gamma}_2(m,\beta)}$    beta$(k,m)$
$\dfrac{N(0,1)}{\sqrt{\chi^2_m/m}}$                    $t_m$
$\dfrac{\chi^2_k/k}{\chi^2_m/m}$                       $F_{k,m}$
Chapter 2 TRANSFORMING
RANDOM VARIABLES
of continuous type only (the less interesting discrete case was dealt with earlier).
The main issue of this chapter is: given the distribution of X, find the distri-
bution of, say, $Y \equiv \frac{1}{1+X}$ (an expression involving X). Since only one old RV
(namely X) appears in the definition of the new RV, we call this a univariate
transformation. Eventually, we must also deal with the so called bivariate trans-
formations of two old RVs (say X and Y), to find the distribution of a new
RV, say $U \equiv \frac{X}{X+Y}$ (or any other expression involving X and Y). Another simple
example of this bivariate type is finding the distribution of $V \equiv X + Y$ (i.e. we
will finally learn how to add two random variables).
Let us first deal with the
Univariate transformation
There are two basic techniques for constructing the new distribution:
Distribution-Function (F) Technique
which works as follows:
When the new random variable Y is defined as g(X), we find its distribution
function $F_Y(y)$ by computing $\Pr(Y < y) = \Pr[g(X) < y]$. This amounts to solving
the $g(X) < y$ inequality for X [usually resulting in an interval of values], and then
integrating f(x) over this interval [or, equivalently, substituting into F(x)].
EXAMPLES:
1. Consider $X \sim U(-\frac{\pi}{2}, \frac{\pi}{2})$ [this corresponds to a spinning wheel with a two-
directional pointer, say a laser beam, where X is the pointer's angle from
a fixed direction when the wheel stops spinning]. We want to know the
distribution of $Y = b\tan(X) + a$ [this represents the location of a dot our
laser beam would leave on a screen placed b units from the wheel's center,
with a scale whose origin is a units off the center]. Note that Y can have any
real value.
Solution: We start by writing down $F_X(x) =$ [in our case] $\frac{x + \pi/2}{\pi} = \frac{x}{\pi} + \frac{1}{2}$ when
$-\frac{\pi}{2} < x < \frac{\pi}{2}$. To get $F_Y(y)$ we need: $\Pr[b\tan(X) + a < y] = \Pr[X <
\arctan(\frac{y-a}{b})] = F_X[\arctan(\frac{y-a}{b})] = \frac{1}{\pi}\arctan(\frac{y-a}{b}) + \frac{1}{2}$, where $-\infty < y < \infty$.
Usually, we can relate better to the corresponding $f_Y(y)$ [which tells us what
is likely and what is not] $= \frac{1}{\pi b}\cdot\frac{1}{1+(\frac{y-a}{b})^2} =$
$$\frac{b}{\pi}\cdot\frac{1}{b^2 + (y-a)^2} \qquad\text{(f)}$$
[any real y]. Graphically, this function looks very similar to the Normal pdf
(also a bell-shaped curve), but in terms of its properties, the new distribu-
tion turns out to be totally different from Normal [as we will see later].
The name of this new distribution is Cauchy [notation: C(a, b)]. Since the
$\int_{-\infty}^{\infty} y\, f_Y(y)\, dy$ integral leads to $\infty - \infty$, the Cauchy distribution does not
have a mean (consequently, its variance is infinite). Yet it possesses a clear
center (at y = a) and width (b). These are now identified with the median
$\tilde{\mu}_Y = a$ [verify by solving $F_Y(\tilde{\mu}) = \frac{1}{2}$] and the so called semi-inter-quartile
range (quartile deviation, for short) $\frac{Q_U - Q_L}{2}$, where $Q_U$ and $Q_L$ are the
upper and lower quartiles [defined by $F(Q_U) = \frac{3}{4}$ and $F(Q_L) = \frac{1}{4}$]. One
can easily verify that, in this case, $Q_L = a - b$ and $Q_U = a + b$ [note that the
inter-quartile range contains exactly 50% of all probability], thus the
quartile deviation equals b. The most typical (standardized) case of the
Cauchy distribution is C(0, 1), whose pdf equals
$$f(y) = \frac{1}{\pi}\cdot\frac{1}{1+y^2}$$
Its rare ($< \frac{1}{2}$%) values start at about $\pm 70$; we need to go beyond $\pm 300{,}000$ to reach
extremely unlikely ($< 10^{-6}$) ones, and only beyond $\pm 300$ billion do they become practically im-
possible ($10^{-12}$). Since the mean does not exist, the central limit theorem
breaks down [it is no longer true that $\bar{X} \approx N(\mu, \frac{\sigma}{\sqrt{n}})$; there is no $\mu$ and $\sigma$ is
infinite]. Yet, $\bar{X}$ must have some well defined distribution. We will discover
what that distribution is in the next section.
2. Let X have its pdf defined by $f(x) = 6x(1-x)$ for $0 < x < 1$. Find the pdf
of $Y = X^3$.
Solution: First we realize that $0 < Y < 1$. Secondly, we find $F_X(x) = 6\int_0^x (x -
x^2)\, dx = 6\big(\frac{x^2}{2} - \frac{x^3}{3}\big) = 3x^2 - 2x^3$. And finally: $F_Y(y) \equiv \Pr(Y < y) =
\Pr(X^3 < y) = \Pr(X < y^{\frac{1}{3}}) = F_X(y^{\frac{1}{3}}) = 3y^{\frac{2}{3}} - 2y$. This easily converts to
$f_Y(y) = 2y^{-\frac{1}{3}} - 2$ where $0 < y < 1$ [zero otherwise]. (Note that when $y \to 0$
this pdf becomes infinite, which is OK).
3. Let $X \sim U(0,1)$. Find and identify the distribution of $Y = -\ln X$ (its range
is obviously $0 < y < \infty$).
Solution: First we need $F_X(x) = x$ when $0 < x < 1$. Then: $F_Y(y) =
\Pr(-\ln X < y) = \Pr(X > e^{-y})$ [note the sign reversal] $= 1 - F_X(e^{-y}) =
1 - e^{-y}$ where $y > 0$ ($\Rightarrow f_Y(y) = e^{-y}$). This can be easily identified as the
exponential distribution with the mean of 1 [note that $Y = -\beta\ln X$ would
result in the exponential distribution with the mean equal to $\beta$].
4. If $Z \sim N(0,1)$, what is the distribution of $Y = Z^2$?
Solution: $F_Y(y) = \Pr(Z^2 < y) = \Pr(-\sqrt{y} < Z < \sqrt{y})$ [right?] $= F_Z(\sqrt{y}) -
F_Z(-\sqrt{y})$. Differentiating with respect to y gives $f_Y(y) = \frac{1}{2}\,y^{-\frac{1}{2}}\,f_Z(\sqrt{y}) + \frac{1}{2}\,y^{-\frac{1}{2}}\,f_Z(-\sqrt{y}) =
\frac{y^{-\frac{1}{2}}\,e^{-\frac{y}{2}}}{\sqrt{2\pi}}$, where $y > 0$. This can be identified as the gamma distribution with
$\alpha = \frac{1}{2}$ and $\beta = 2$ [the normalizing constant is equal to $\Gamma(\frac{1}{2})\cdot 2^{\frac{1}{2}} = \sqrt{2\pi}$,
check].
Due to its importance, this distribution has yet another name: it is called
the chi-square distribution with one degree of freedom, or $\chi^2_1$ for short. It
has the expected value of ($\mu$ =) 1, its variance equals ($\sigma^2$ =) 2, and the
MGF is $M(t) = \frac{1}{\sqrt{1-2t}}$.
General Chi-square distribution
(This is an extension of the previous example). We want to investigate the RV
defined by $U = Z_1^2 + Z_2^2 + Z_3^2 + \ldots + Z_n^2$, where $Z_1, Z_2, Z_3, \ldots, Z_n$ are independent
RVs from the N(0,1) distribution. Its MGF must obviously equal $M(t) =
\frac{1}{(1-2t)^{\frac{n}{2}}}$; we can thus identify its distribution as gamma, with $\alpha = \frac{n}{2}$ and $\beta = 2$
(mean = n, variance = 2n). Due to its importance, it is also called the chi-square
distribution with n (integer) degrees of freedom ($\chi^2_n$ for short).
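A quick simulation confirming that the sum of n squared standard normals behaves like this chi-square distribution (a sketch; the chosen n and tail point are arbitrary):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 5, 200_000
U = (rng.standard_normal((reps, n)) ** 2).sum(axis=1)

print(U.mean(), U.var())                                      # close to n = 5 and 2n = 10
# compare one tail probability with the chi-square cdf
print((U > 11.07).mean(), 1 - stats.chi2.cdf(11.07, df=n))    # both ≈ 0.05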
Probability-Density-Function (f) Technique
is a bit faster and usually somewhat easier (technically) to carry out, but it works
for one-to-one transformations only (e.g. it would not work in our last $Y = Z^2$
example). The procedure consists of three simple steps:
(i) Express X (the old variable) in terms of y, the new variable [getting an
expression which involves only y].
(ii) Substitute the result [we will call it x(y), switching to small letters] for the
argument of $f_X(x)$, getting $f_X[x(y)]$ - a function of y!
(iii) Multiply this by $\Big|\dfrac{dx(y)}{dy}\Big|$.
EXAMPLES (we will redo the first three examples of the previous section):
1. $X \sim U(-\frac{\pi}{2}, \frac{\pi}{2})$ and $Y = b\tan(X) + a$.
Solution: (i) $x = \arctan(\frac{y-a}{b})$, (ii) $\frac{1}{\pi}$, (iii) $\frac{1}{\pi}\cdot\frac{1}{b}\cdot\frac{1}{1+(\frac{y-a}{b})^2} = \frac{b}{\pi}\cdot\frac{1}{b^2+(y-a)^2}$ where
$-\infty < y < \infty$ [check].
2. $f(x) = 6x(1-x)$ for $0 < x < 1$ and $Y = X^3$.
Solution: (i) $x = y^{1/3}$, (ii) $6y^{1/3}(1-y^{1/3})$, (iii) $6y^{1/3}(1-y^{1/3})\cdot\frac{1}{3}\,y^{-2/3} =
2(y^{-1/3} - 1)$ when $0 < y < 1$ [check].
3. $X \sim U(0,1)$ and $Y = -\ln X$.
Solution: (i) $x = e^{-y}$, (ii) 1, (iii) $1\cdot e^{-y} = e^{-y}$ for $y > 0$ [check].
This does appear to be a fairly fast way of obtaining $f_Y(y)$.
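A derived distribution such as the one in Example 2 can be verified numerically; the sketch below compares the empirical distribution of Y = X³ (simulated via beta(2,2), whose pdf is exactly 6x(1−x)) with the implied F(y) = 3y^(2/3) − 2y:

import numpy as np

rng = np.random.default_rng(3)
X = rng.beta(2, 2, 500_000)          # f(x) = 6x(1-x) is the beta(2,2) pdf
Y = X ** 3

for y in (0.1, 0.3, 0.5, 0.8):
    empirical = (Y < y).mean()
    formula = 3 * y ** (2 / 3) - 2 * y
    print(round(empirical, 4), round(formula, 4))   # the columns agree to simulation error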
And now we extend all this to the
Bivariate transformation
Distribution-Function Technique
follows essentially the same pattern as the univariate case:
The new random variable Y is now defined in terms of two old RVs, say $X_1$
and $X_2$, by $Y \equiv g(X_1, X_2)$. We find $F_Y(y) = \Pr(Y < y) = \Pr[g(X_1,X_2) < y]$ by
realizing that the $g(X_1,X_2) < y$ inequality (for $X_1$ and $X_2$; y is considered fixed)
will now result in some 2-D region, and then integrating $f(x_1,x_2)$ over this region.
Thus, the technique is simple in principle, but often quite involved in terms of
technical details.
EXAMPLES:
1. Suppose that $X_1$ and $X_2$ are independent RVs, both from E(1), and $Y = \frac{X_2}{X_1}$.
Solution: $F_Y(y) = \Pr\big(\frac{X_2}{X_1} < y\big) = \Pr(X_2 < yX_1) = \iint_{0<x_2<yx_1} e^{-x_1-x_2}\, dx_1\, dx_2 =
\int_0^\infty e^{-x_1}\Big[\int_0^{yx_1} e^{-x_2}\, dx_2\Big] dx_1 = \int_0^\infty e^{-x_1}(1 - e^{-yx_1})\, dx_1 = \int_0^\infty \big(e^{-x_1} - e^{-x_1(1+y)}\big)\, dx_1 =
1 - \frac{1}{1+y}$, where $y > 0$. This implies that $f_Y(y) = \frac{1}{(1+y)^2}$ when $y > 0$.
(The median of this distribution equals 1, the lower and upper quartiles
are $Q_L = \frac{1}{3}$ and $Q_U = 3$).
2. This time $Z_1$ and $Z_2$ are independent RVs from N(0,1) and $Y = Z_1^2 + Z_2^2$
[here, we know the answer: $\chi^2_2$; let us proceed anyhow].
Solution: $F_Y(y) = \Pr(Z_1^2 + Z_2^2 < y) = \frac{1}{2\pi}\iint_{z_1^2+z_2^2<y} e^{-\frac{z_1^2+z_2^2}{2}}\, dz_1\, dz_2 =
\frac{1}{2\pi}\int_0^{2\pi}\int_0^{\sqrt{y}} e^{-\frac{r^2}{2}}\, r\, dr\, d\theta =$ [substitution: $w = \frac{r^2}{2}$] $\int_0^{y/2} e^{-w}\, dw = 1 - e^{-\frac{y}{2}}$,
where (obviously) $y > 0$.
This is the exponential distribution with $\beta = 2$ [not $\chi^2_2$ as expected - how
come?]. It does not take long to realize that the two distributions are iden-
tical.
3. (Sum of two independent RVs): Assume that $X_1$ and $X_2$ are independent
RVs from a distribution having L and H as its lowest and highest possible
value, respectively. Find the distribution of $X_1 + X_2$ [finally learning how to
add two RVs!].
Solution: $F_Y(y) = \Pr(X_1 + X_2 < y) = \iint_{\substack{x_1+x_2<y \\ L<x_1,x_2<H}} f(x_1)\,f(x_2)\, dx_1\, dx_2 =$
$$\begin{cases} \displaystyle\int_L^{y-L}\int_L^{y-x_1} f(x_1)\,f(x_2)\, dx_2\, dx_1 & \text{when } y < L+H \\[2mm] 1 - \displaystyle\int_{y-H}^{H}\int_{y-x_1}^{H} f(x_1)\,f(x_2)\, dx_2\, dx_1 & \text{when } y > L+H \end{cases}$$
Differentiating this with respect to y (for the first line, this amounts to: substituting $y - L$ for $x_1$
and dropping the $dx_1$ integration - contributing zero in this case - plus:
substituting $y - x_1$ for $x_2$ and dropping $dx_2$; same for the second line, ex-
cept that we have to subtract the second contribution) results in
$$f_Y(y) = \begin{cases} \displaystyle\int_L^{y-L} f(x_1)\,f(y-x_1)\, dx_1 & \text{when } y < L+H \\[2mm] \displaystyle\int_{y-H}^{H} f(x_1)\,f(y-x_1)\, dx_1 & \text{when } y > L+H \end{cases}$$
or, equivalently,
$$f_Y(y) = \int_{\max(L,\,y-H)}^{\min(H,\,y-L)} f(x)\,f(y-x)\, dx$$
where the y-range is obviously $2L < y < 2H$. The right hand side of the
last formula is sometimes referred to as the convolution of two pdfs (in
general, the two f's may be distinct).
Examples:
In the specific case of the uniform U(0,1) distribution, the last formula
yields, for the pdf of $Y \equiv X_1 + X_2$:
$$f_Y(y) = \int_{\max(0,\,y-1)}^{\min(1,\,y)} dx = \begin{cases} \int_0^y dx = y & \text{when } 0 < y < 1 \\[1mm] \int_{y-1}^1 dx = 2 - y & \text{when } 1 < y < 2 \end{cases}$$
[triangular distribution].
Similarly, for the standardized Cauchy distribution $f(x) = \frac{1}{\pi}\cdot\frac{1}{1+x^2}$, we
get: $f_{X_1+X_2}(y) = \int_{-\infty}^{\infty} \frac{1}{\pi}\cdot\frac{1}{1+x^2}\cdot\frac{1}{\pi}\cdot\frac{1}{1+(y-x)^2}\, dx = \frac{2}{\pi}\cdot\frac{1}{4+y^2}$ [where $-\infty < y < \infty$].
The last result can be easily converted to the pdf of $\bar{X} = \frac{X_1+X_2}{2}$ [the sample
mean of the two random values], yielding $f_{\bar{X}}(\bar{x}) = \frac{2}{\pi}\cdot\frac{1}{4+(2\bar{x})^2}\cdot 2 = \frac{1}{\pi}\cdot\frac{1}{1+\bar{x}^2}$.
Thus, the sample mean $\bar{X}$ has the same Cauchy distribution as do the two
individual observations (the result can be extended to any number of obser-
vations). We knew that the Central Limit Theorem [$\bar{X} \approx N(\mu, \frac{\sigma}{\sqrt{n}})$] would
not apply to this case, but the actual distribution of $\bar{X}$ still comes as a big
surprise. This implies that the sample mean of even millions of values (from a
Cauchy distribution) cannot estimate the center of the distribution any bet-
ter than a single observation [one can verify this by actual simulation]. Yet,
one feels that there must be a way of substantially improving the estimate (of
the location of a laser gun hidden behind a screen) when going from a single
observation to a large sample. Yes, there is, if one does not use the sample
mean but something else; later on we discover that the sample median will
do just fine.
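The "actual simulation" mentioned above might look like the following sketch: the spread of Cauchy sample means never shrinks as n grows, while that of sample medians does.

import numpy as np

rng = np.random.default_rng(7)

def iqr(v):
    return np.percentile(v, 75) - np.percentile(v, 25)   # the mean's variance is infinite, so use the IQR

for n in (1, 100, 10_000):
    samples = rng.standard_cauchy((2_000, n))
    means = samples.mean(axis=1)
    medians = np.median(samples, axis=1)
    print(n, round(iqr(means), 3), round(iqr(medians), 3))
# the IQR of the means stays near 2 for every n; that of the medians shrinks like 1/sqrt(n)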
Pdf (Shortcut) Technique
works a bit faster, even though it may appear more complicated, as it requires the
following (several) steps:
1. The procedure can work only for one-to-one (invertible) transformations.
This implies that the new RV $Y \equiv g(X_1, X_2)$ must be accompanied by yet
another, arbitrarily chosen, function of $X_1$ and/or $X_2$ [the original Y will be
called $Y_1$, and the auxiliary one $Y_2$, or vice versa]. We usually choose this
second (auxiliary) function in the simplest possible manner, i.e. we make it
equal to $X_2$ (or $X_1$).
2. Invert the transformation, i.e. solve the two equations $y_1 = g(x_1, x_2)$ and
$y_2 = x_2$ for $x_1$ and $x_2$ (in terms of $y_1$ and $y_2$). Getting a unique solution
guarantees that the transformation is one-to-one.
3. Substitute this solution $x_1(y_1,y_2)$ and $x_2(y_2)$ into the joint pdf of the old
$X_1, X_2$ pair (yielding a function of $y_1$ and $y_2$).
4. Multiply this function by the transformation's Jacobian
$$\begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} \\[2mm] \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} \end{vmatrix}$$
(in absolute value). The result is the joint pdf of $Y_1$ and $Y_2$. At the same time, establish the region
of possible $(Y_1, Y_2)$ values in the $(y_1, y_2)$-plane [this is often the most difficult
part of the procedure].
5. Eliminate $Y_2$ [the phoney, auxiliary RV introduced to help us with the
inverse] by integrating it out (finding the $Y_1$ marginal). Don't forget that
you must integrate over the conditional range of $y_2$ given $y_1$.
EXAMPLES:
1. $X_1, X_2 \sim E(1)$, independent; $Y = \frac{X_1}{X_1+X_2}$ [the time of the first catch relative
to the time needed to catch two fishes].
Solution: $Y_2 = X_2 \Rightarrow x_2 = y_2$, and $x_1 y_1 + x_2 y_1 = x_1 \Rightarrow x_1 = \frac{y_1 y_2}{1-y_1}$. Substitute
into $e^{-x_1-x_2}$, getting $e^{-y_2\big(1+\frac{y_1}{1-y_1}\big)} = e^{-\frac{y_2}{1-y_1}}$; multiply by
$$\begin{vmatrix} \dfrac{y_2}{1-y_1}+\dfrac{y_1 y_2}{(1-y_1)^2} & \dfrac{y_1}{1-y_1} \\[1mm] 0 & 1 \end{vmatrix} = \frac{y_2}{(1-y_1)^2}$$
getting $f(y_1,y_2) = \frac{y_2}{(1-y_1)^2}\,e^{-\frac{y_2}{1-y_1}}$ with $0 < y_1 < 1$ and $y_2 > 0$. Elim-
inate $Y_2$ by $\int_0^\infty \frac{y_2}{(1-y_1)^2}\,e^{-\frac{y_2}{1-y_1}}\, dy_2 = \frac{1}{(1-y_1)^2}\,(1-y_1)^2 \cdot 1 = 1$ when $0 < y_1 < 1$
[recall the $\int_0^\infty x^k e^{-\frac{x}{a}}\, dx = k!\,a^{k+1}$ formula]. The distribution of Y is thus
U(0,1). Note that if we started with $X_1, X_2 \sim E(\beta)$ instead of E(1), the
result would have been the same, since this new $Y = \frac{X_1}{X_1+X_2} \equiv \frac{\tilde{X}_1}{\tilde{X}_1+\tilde{X}_2}$, where
$\tilde{X}_1 \equiv \frac{X_1}{\beta}$ and $\tilde{X}_2 \equiv \frac{X_2}{\beta}$ are from E(1).
2. $X_1, X_2 \sim E(1)$, independent; $Y = \frac{X_2}{X_1}$ (the ratio studied earlier by the distribution-
function technique). Taking $X_1$ itself as the auxiliary variable ($y_1 = x_1$, $y_2 = \frac{x_2}{x_1}$), we get $x_1 = y_1$
and $x_2 = y_1 y_2$; substituting into $e^{-x_1-x_2}$ and multiplying by the Jacobian
$$\begin{vmatrix} 1 & 0 \\ y_2 & y_1 \end{vmatrix} = y_1$$
gives the joint pdf $y_1\,e^{-y_1(1+y_2)}$ for $y_1 > 0$ and $y_2 > 0$. Eliminate $y_1$ by $\int_0^\infty y_1\,e^{-y_1(1+y_2)}\, dy_1 =
\frac{1}{(1+y_2)^2}$, where $y_2 > 0$. Thus, $f_Y(y) = \frac{1}{(1+y)^2}$ with $y > 0$ [check, we have
solved this problem before].
3. In this example we introduce the so called Beta distribution.
Let $X_1$ and $X_2$ be independent RVs from the gamma distribution with pa-
rameters $(k, \beta)$ and $(m, \beta)$ respectively, and let $Y_1 = \frac{X_1}{X_1+X_2}$.
Solution: Using the argument of Example 1, one can show that $\beta$ cancels out,
and we can assume that $\beta = 1$ without affecting the answer. The definition of
$Y_1$ is also the same as in Example 1 $\Rightarrow x_1 = \frac{y_1 y_2}{1-y_1}$, $x_2 = y_2$, and the Jacobian $= \frac{y_2}{(1-y_1)^2}$.
Substituting into $f(x_1,x_2) = \frac{x_1^{k-1}\,x_2^{m-1}\,e^{-x_1-x_2}}{\Gamma(k)\,\Gamma(m)}$ and multiplying by the
Jacobian yields
$$f(y_1,y_2) = \frac{y_1^{k-1}\,y_2^{k-1}\,y_2^{m-1}\,e^{-\frac{y_2}{1-y_1}}}{\Gamma(k)\,\Gamma(m)\,(1-y_1)^{k-1}}\cdot\frac{y_2}{(1-y_1)^2}$$
for $0 < y_1 < 1$ and $y_2 > 0$. Integrating over $y_2$ results in:
$$\frac{y_1^{k-1}}{\Gamma(k)\,\Gamma(m)\,(1-y_1)^{k+1}}\int_0^\infty y_2^{k+m-1}\,e^{-\frac{y_2}{1-y_1}}\, dy_2 = \frac{\Gamma(k+m)}{\Gamma(k)\,\Gamma(m)}\,y_1^{k-1}\,(1-y_1)^{m-1} \qquad\text{(f)}$$
where $0 < y_1 < 1$.
This is the pdf of a new two-parameter (k and m) distribution which is
called beta. Note that, as a by-product, we have effectively proved the follow-
ing formula: $\int_0^1 y^{k-1}(1-y)^{m-1}\, dy = \frac{\Gamma(k)\,\Gamma(m)}{\Gamma(k+m)}$ for any $k, m > 0$. This enables
us to find the distribution's mean:
$$E(Y) = \frac{\Gamma(k+m)}{\Gamma(k)\,\Gamma(m)}\int_0^1 y^{k}(1-y)^{m-1}\, dy = \frac{\Gamma(k+m)}{\Gamma(k)\,\Gamma(m)}\cdot\frac{\Gamma(k+1)\,\Gamma(m)}{\Gamma(k+m+1)} = \frac{k}{k+m} \qquad\text{(mean)}$$
and similarly $E(Y^2) = \frac{\Gamma(k+m)}{\Gamma(k)\,\Gamma(m)}\int_0^1 y^{k+1}(1-y)^{m-1}\, dy = \frac{\Gamma(k+m)}{\Gamma(k)\,\Gamma(m)}\cdot\frac{\Gamma(k+2)\,\Gamma(m)}{\Gamma(k+m+2)} = \frac{(k+1)\,k}{(k+m+1)\,(k+m)}$
$$\Rightarrow \mathrm{Var}(Y) = \frac{(k+1)\,k}{(k+m+1)\,(k+m)} - \Big(\frac{k}{k+m}\Big)^2 = \frac{k\,m}{(k+m+1)\,(k+m)^2} \qquad\text{(variance)}$$
Note that the distribution of $1 - Y \equiv \frac{X_2}{X_1+X_2}$ is also beta (why?), with
parameters m and k [reversed].
We learn how to compute related probabilities in the following set of ex-
amples:
We learn how to compute related probabilities in the following set of Ex-
amples:
(a) Pr(X
1
<
X
2
2
) where X
1
and X
2
have the gamma distribution with param-
eters (4, ) and (3, ) respectively [this corresponds to the probability
that Mr.A catches 4 shes in less than half the time Mr.B takes to catch
3].
28
Solution: Pr(2X
1
< X
2
) = Pr(3X
1
< X
1
+ X
2
) = Pr(
X
1
X
1
+X
2
<
1
3
) =
(4+3)
(4)(3)
1
3
R
0
y
3
(1 y)
2
dy = 60
h
y
4
4
2
y
5
5
+
y
6
6
i1
3
y=0
= 10.01%.
(b) Evaluate Pr(Y < 0.4) where Y has the beta distribution with parameters
(
3
2
, 2) [half-integer values are not unusual, as we learn shortly].
Solution:
(
7
2
)
(
3
2
)(2)
0.4
R
0
y
1
2
(1 y) dy =
5
2
3
2
y
3
2
3
2
y
5
2
5
2
0.4
y=0
= 48.07%.
(c) Evaluate Pr(Y < 0.7) where Y beta(4,
5
2
).
Solution: This equals [it is more convenient to have the half-integer rst]
Pr(1Y > 0.3) =
(
13
2
)
(
5
2
)(4)
1
R
0.3
u
3
2
(1u)
3
du =
11
2
9
2
7
2
5
2
3!
y
5
2
5
2
3
y
7
2
7
2
+ 3
y
9
2
9
2
y
11
2
11
2
1
y=0.3
=
1 0.3522 = 64.78%.
d Pr(Y < 0.5) when Y beta(
3
2
,
1
2
).
Solution:
(2)
(
3
2
)(
1
2
)
0.5
R
0
y
1
2
(1 y)
1
2
dy = 18.17% (Maple).
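The same four probabilities can be double-checked with the beta cdf (incomplete beta function) in scipy.stats; a minimal sketch:

from scipy import stats

print(stats.beta.cdf(1/3, 4, 3))       # example (a): ≈ 0.1001
print(stats.beta.cdf(0.4, 1.5, 2))     # example (b): ≈ 0.4807
print(stats.beta.cdf(0.7, 4, 2.5))     # example (c): ≈ 0.6478
print(stats.beta.cdf(0.5, 1.5, 0.5))   # example (d): ≈ 0.1817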
4. In this example we introduce the so called Student's or t-distribution
[notation: $t_n$, where n is called "degrees of freedom" - the only parameter].
We start with two independent RVs $X_1 \sim N(0,1)$ and $X_2 \sim \chi^2_n$, and introduce
a new RV by $Y_1 = \dfrac{X_1}{\sqrt{X_2/n}}$.
To get its pdf we take $Y_2 \equiv X_2$, solve for $x_2 = y_2$ and $x_1 = y_1\sqrt{\frac{y_2}{n}}$, substitute
into $f(x_1,x_2) = \frac{e^{-\frac{x_1^2}{2}}}{\sqrt{2\pi}}\cdot\frac{x_2^{\frac{n}{2}-1}\,e^{-\frac{x_2}{2}}}{\Gamma(\frac{n}{2})\,2^{\frac{n}{2}}}$ and multiply by
$$\begin{vmatrix} \sqrt{\dfrac{y_2}{n}} & \dfrac{y_1}{2\sqrt{n\,y_2}} \\[1mm] 0 & 1 \end{vmatrix} = \sqrt{\frac{y_2}{n}}$$
to get $f(y_1,y_2) = \frac{e^{-\frac{y_1^2 y_2}{2n}}}{\sqrt{2\pi}}\cdot\frac{y_2^{\frac{n}{2}-1}\,e^{-\frac{y_2}{2}}}{\Gamma(\frac{n}{2})\,2^{\frac{n}{2}}}\cdot\sqrt{\frac{y_2}{n}}$, where $-\infty < y_1 < \infty$ and
$y_2 > 0$. To eliminate $y_2$ we integrate:
$$\frac{1}{\sqrt{2\pi n}\,\Gamma(\frac{n}{2})\,2^{\frac{n}{2}}}\int_0^\infty y_2^{\frac{n-1}{2}}\,e^{-\frac{y_2}{2}\big(1+\frac{y_1^2}{n}\big)}\, dy_2 = \frac{\Gamma(\frac{n+1}{2})\,2^{\frac{n+1}{2}}}{\sqrt{2\pi n}\,\Gamma(\frac{n}{2})\,2^{\frac{n}{2}}}\Big(1+\frac{y_1^2}{n}\Big)^{-\frac{n+1}{2}} = \frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})\,\sqrt{n\pi}}\Big(1+\frac{y_1^2}{n}\Big)^{-\frac{n+1}{2}} \qquad\text{(f)}$$
with $-\infty < y_1 < \infty$. Note that when $n = 1$ this gives $\frac{1}{\pi}\cdot\frac{1}{1+y_1^2}$ (Cauchy);
when $n\to\infty$ the second part of the formula tends to $e^{-\frac{y_1^2}{2}}$, which is, up to the
normalizing constant, the pdf of N(0,1) [implying that $\frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})\,\sqrt{n}} \underset{n\to\infty}{\longrightarrow} \frac{1}{\sqrt{2}}$,
why?].
Due to the symmetry of the distribution [$f(y) = f(-y)$] its mean is zero
(when it exists, i.e. when $n \ge 2$).
To compute its variance:
$$\mathrm{Var}(Y) = E(Y^2) = \frac{\Gamma(\frac{n+1}{2})}{\Gamma(\frac{n}{2})\,\sqrt{n\pi}}\int_{-\infty}^{\infty}\frac{(y^2 + n - n)\, dy}{\big(1+\frac{y^2}{n}\big)^{\frac{n+1}{2}}} = n\,\frac{\frac{n-1}{2}}{\frac{n-2}{2}} - n = \frac{n}{n-2} \qquad\text{(variance)}$$
for $n \ge 3$ (for n = 1 and 2 the variance is infinite).
Note that when $n \ge 30$ the t-distribution can be closely approximated by
N(0,1).
5. And finally, we introduce Fisher's F-distribution
(notation: $F_{n,m}$, where n and m are its two parameters, also referred to as
degrees of freedom), defined by $Y_1 = \dfrac{X_1/n}{X_2/m}$, where $X_1$ and $X_2$ are inde-
pendent, both having the chi-square distribution, with degrees of freedom n
and m, respectively.
First we solve for $x_2 = y_2$ and $x_1 = \frac{n}{m}\,y_1 y_2$ $\Rightarrow$ the Jacobian equals $\frac{n}{m}\,y_2$. Then
we substitute into $\frac{x_1^{\frac{n}{2}-1}\,e^{-\frac{x_1}{2}}}{\Gamma(\frac{n}{2})\,2^{\frac{n}{2}}}\cdot\frac{x_2^{\frac{m}{2}-1}\,e^{-\frac{x_2}{2}}}{\Gamma(\frac{m}{2})\,2^{\frac{m}{2}}}$ and multiply by this Jacobian to get
$$\frac{(\frac{n}{m})^{\frac{n}{2}}}{\Gamma(\frac{n}{2})\,\Gamma(\frac{m}{2})\,2^{\frac{n+m}{2}}}\;y_1^{\frac{n}{2}-1}\,y_2^{\frac{n+m}{2}-1}\,e^{-\frac{y_2}{2}\big(1+\frac{n}{m}y_1\big)}$$
with $y_1 > 0$ and $y_2 > 0$. Integrating over $y_2$ (from 0 to $\infty$) yields the following formula for the corresponding pdf:
$$f(y_1) = \frac{\Gamma(\frac{n+m}{2})}{\Gamma(\frac{n}{2})\,\Gamma(\frac{m}{2})}\,\Big(\frac{n}{m}\Big)^{\frac{n}{2}}\,\frac{y_1^{\frac{n}{2}-1}}{\big(1+\frac{n}{m}y_1\big)^{\frac{n+m}{2}}}$$
for $y_1 > 0$.
We can also find
$$E(Y) = \frac{\Gamma(\frac{n+m}{2})}{\Gamma(\frac{n}{2})\,\Gamma(\frac{m}{2})}\,\Big(\frac{n}{m}\Big)^{\frac{n}{2}}\int_0^\infty \frac{y^{\frac{n}{2}}\, dy}{\big(1+\frac{n}{m}y\big)^{\frac{n+m}{2}}} = \frac{m}{m-2} \qquad\text{(mean)}$$
for $m \ge 3$ (the mean is infinite for m = 1 and 2).
Similarly $E(Y^2) = \frac{(n+2)\,m^2}{(m-2)\,(m-4)\,n}$
$$\Rightarrow \mathrm{Var}(Y) = \frac{(n+2)\,m^2}{(m-2)\,(m-4)\,n} - \frac{m^2}{(m-2)^2} = \frac{m^2}{(m-2)^2}\Big[\frac{(n+2)\,(m-2)}{(m-4)\,n} - 1\Big] = \frac{2\,m^2\,(n+m-2)}{(m-2)^2\,(m-4)\,n} \qquad\text{(variance)}$$
for $m \ge 5$ [infinite for m = 1, 2, 3 and 4].
Note that the distribution of $\frac{1}{Y}$ is obviously $F_{m,n}$ [degrees of freedom reversed],
also that $F_{1,m} \equiv \dfrac{Z^2}{\chi^2_m/m} \equiv t_m^2$, and finally, when both n and m are large (say
$> 30$), then Y is approximately normal $N\Big(1, \sqrt{\frac{2(n+m)}{n\,m}}\Big)$.
The last assertion can be proven by introducing $U = \sqrt{m}\,(Y - 1)$ and getting its
pdf: (i) $y = 1 + \frac{u}{\sqrt{m}}$, (ii) substituting:
$$\frac{\Gamma(\frac{n+m}{2})}{\Gamma(\frac{n}{2})\,\Gamma(\frac{m}{2})}\,\Big(\frac{n}{m}\Big)^{\frac{n}{2}}\,\frac{\big(1+\frac{u}{\sqrt{m}}\big)^{\frac{n}{2}-1}}{\big(1+\frac{n}{m}+\frac{n}{m}\,\frac{u}{\sqrt{m}}\big)^{\frac{n+m}{2}}}\cdot\frac{1}{\sqrt{m}}\ \text{[the Jacobian]} = \frac{\Gamma(\frac{n+m}{2})}{\Gamma(\frac{n}{2})\,\Gamma(\frac{m}{2})\,\sqrt{m}}\cdot\frac{(\frac{n}{m})^{\frac{n}{2}}}{\big(1+\frac{n}{m}\big)^{\frac{n+m}{2}}}\cdot\frac{\big(1+\frac{u}{\sqrt{m}}\big)^{\frac{n}{2}-1}}{\big(1+\frac{n}{n+m}\,\frac{u}{\sqrt{m}}\big)^{\frac{n+m}{2}}}$$
where $-\sqrt{m} < u < \infty$. Now, taking the limit of the last factor (since that is
the only part containing u, the rest being only a normalizing constant)
we get [this is actually easier with the corresponding logarithm, namely
$$\big(\tfrac{n}{2}-1\big)\ln\Big(1+\frac{u}{\sqrt{m}}\Big) - \frac{n+m}{2}\ln\Big(1+\frac{n}{n+m}\,\frac{u}{\sqrt{m}}\Big) = \frac{u}{\sqrt{m}}\Big[\big(\tfrac{n}{2}-1\big) - \tfrac{n}{2}\Big] - \frac{u^2}{2m}\Big[\big(\tfrac{n}{2}-1\big) - \frac{n}{2}\cdot\frac{n}{n+m}\Big] - \ldots \underset{n,m\to\infty}{\longrightarrow} -\frac{1}{1+\frac{m}{n}}\cdot\frac{u^2}{4}$$
] [assuming that the $\frac{m}{n}$ ratio remains finite]. This implies that the limiting pdf is $C\,e^{-\frac{u^2\,n}{4(n+m)}}$,
where C is a normalizing constant (try to establish its value). The limiting
distribution of U is thus, obviously, $N\Big(0, \sqrt{\frac{2(n+m)}{n}}\Big)$, which means that $Y = \frac{U}{\sqrt{m}} + 1$ must be also (approximately) normal
with the mean of 1 and the standard deviation of $\sqrt{\frac{2(n+m)}{n\,m}}$.
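Both closing remarks are easy to confirm numerically; a sketch using scipy.stats (the particular degrees of freedom chosen below are arbitrary):

from scipy import stats

# F(1, m) is the square of t(m): their probabilities must agree at matching points
m, x = 7, 2.3
print(stats.f.cdf(x**2, 1, m), stats.t.cdf(x, m) - stats.t.cdf(-x, m))   # equal

# large n and m: F(n, m) is roughly N(1, sqrt(2(n+m)/(n m)))
n, m = 80, 120
sd = (2 * (n + m) / (n * m)) ** 0.5
print(stats.f.ppf(0.975, n, m), 1 + 1.96 * sd)   # the two upper bounds are reasonably close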
We will see more examples of the F, t and $\chi^2$ distributions in the next chapter,
which discusses the importance of these distributions to Statistics, and the context
in which they usually arise.
Chapter 3 RANDOM SAMPLING
A random independent sample (RIS) of size n from a (specific) distribution
is a collection of n independent RVs $X_1, X_2, \ldots, X_n$, each of them having the same
(aforementioned) distribution. At this point, it is important to visualize these as
true random variables (i.e. before the actual sample is taken, with all their would-
be values), and not just as a collection of numbers (which they become eventually).
The information of a RIS is usually summarized by a handful of statistics (one
is called a statistic), each of them being an expression (a transformation) involving
the individual $X_i$'s. The most important of these is the
Sample mean
defined as the usual (arithmetic) average of the $X_i$'s:
$$\bar{X} \equiv \frac{\sum_{i=1}^{n} X_i}{n}$$
One has to realize that the sample mean, unlike the distribution's mean, is a
random variable, with its own expected value, variance, and distribution. The
obvious question is: how do these relate to the distribution from which we are
sampling?
For the expected value and variance the answer is quite simple:
$$E(\bar{X}) = \frac{\sum_{i=1}^{n} E(X_i)}{n} = \frac{\sum_{i=1}^{n}\mu}{n} = \frac{n\mu}{n} = \mu$$
and
$$\mathrm{Var}(\bar{X}) = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}(X_i) = \frac{n\sigma^2}{n^2} = \frac{\sigma^2}{n}$$
Note that this implies
$$\sigma_{\bar{X}} = \frac{\sigma}{\sqrt{n}}$$
(one of the most important formulas of Statistics).
Central Limit Theorem
The distribution of $\bar{X}$ is a lot trickier. When n = 1, it is clearly the same as
the distribution from which we are sampling. But as soon as we take n = 2, we
have to work out (which is a rather elaborate process) a convolution of two
such distributions (taking care of the $\frac{1}{2}$ factor is quite simple), and end up with a
distribution which usually looks fairly different from the original. This procedure
can then be repeated to get the n = 3, 4, etc. results. By the time we reach
n = 10 (even though most books say 30), we notice something almost mysterious:
the resulting distribution (of $\bar{X}$) will very quickly assume a shape which not only
has nothing to do with the shape of the original distribution, it is the same for all
(large) values of n, and (even more importantly) for practically all distributions
(discrete or continuous) from which we may sample. This of course is the well
known (bell-like) shape of the Normal distribution (mind you, there are other bell-
look-alike distributions).
The proof of this utilizes a few things we have learned about the moment
generating function:
Proof. We already know the mean and standard deviation of the distribution of
$\bar{X}$ are $\mu$ and $\frac{\sigma}{\sqrt{n}}$ respectively; now we want to establish its asymptotic (i.e. large-
n) shape. This is, in a sense, trivial: since $\frac{\sigma}{\sqrt{n}}\underset{n\to\infty}{\longrightarrow} 0$, we get in the $n\to\infty$ limit a
degenerate (single-valued, with zero variance) distribution, with all probability
concentrated at $\mu$.
We can prevent this distribution from shrinking to a zero width by standard-
izing $\bar{X}$ first, i.e. defining a new RV
$$Z \equiv \frac{\bar{X} - \mu}{\frac{\sigma}{\sqrt{n}}}$$
and investigating its asymptotic distribution instead (the new random variable has
the mean of 0 and the standard deviation of 1, thus its shape cannot "disappear"
on us).
We do this by constructing the MGF of Z and finding its $n\to\infty$ limit. Since
$$Z = \frac{\sum_{i=1}^{n}(X_i - \mu)}{\sqrt{n}\,\sigma} = \sum_{i=1}^{n}\frac{X_i - \mu}{\sqrt{n}\,\sigma}$$
its MGF equals that of $Y \equiv \frac{X-\mu}{\sqrt{n}\,\sigma}$, raised to the power of n.
We know that $M_Y(t) = 1 + E(Y)\,t + E(Y^2)\,\frac{t^2}{2} + E(Y^3)\,\frac{t^3}{3!} + \ldots = 1 + \frac{t^2}{2n} +
\frac{\alpha_3\,t^3}{6\,n^{3/2}} + \frac{\alpha_4\,t^4}{24\,n^2} + \ldots$, where $\alpha_3, \alpha_4, \ldots$ is the skewness, kurtosis, ... of the original
distribution. Raising $M_Y(t)$ to the power of n and taking the $n\to\infty$ limit results
in $e^{\frac{t^2}{2}}$, regardless of the values of $\alpha_3$ and $\alpha_4$, ... (since each is divided by a higher-
than-one power of n). This is easily recognized to be the MGF of the standardized
(zero mean, unit variance) Normal distribution.
Note that, to be able to do all this, we had to assume that $\mu$ and $\sigma$ are finite.
There are (unusual) cases of distributions with an infinite variance (and sometimes
also indefinite or infinite mean) for which the central limit theorem breaks down.
A prime example is sampling from the Cauchy distribution: $\bar{X}$ (for any n) has the
same Cauchy distribution as the individual $X_i$'s - it does not get any narrower!
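A short simulation illustrating both statements (a sketch; sample size and seed are arbitrary): means of a very skewed exponential sample quickly become Normal-looking, while Cauchy means never do.

import numpy as np

rng = np.random.default_rng(11)
n, reps = 25, 50_000

exp_means = rng.exponential(1.0, (reps, n)).mean(axis=1)
z = (exp_means - 1.0) / (1.0 / np.sqrt(n))         # standardized sample means
print(np.mean(np.abs(z) < 1.96))                    # close to 0.95, as the Normal limit predicts

cauchy_means = rng.standard_cauchy((reps, n)).mean(axis=1)
print(np.mean(np.abs(cauchy_means) < 1.96))         # only about 0.70, for any n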
Sample variance
This is yet another expression involving the $X_i$'s, intended as (what will later be
called) an estimator of $\sigma^2$. Its definition is
$$s^2 \equiv \frac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}$$
where s, the corresponding square root, is the sample standard deviation (the
sample variance does not have its own symbol).
To find its expected value, we first simplify its numerator:
$$\sum_{i=1}^{n}(X_i - \bar{X})^2 = \sum_{i=1}^{n}\big[(X_i - \mu) - (\bar{X} - \mu)\big]^2 = \sum_{i=1}^{n}(X_i - \mu)^2 - 2\sum_{i=1}^{n}(\bar{X} - \mu)(X_i - \mu) + n(\bar{X} - \mu)^2$$
This implies that
$$E\Big[\sum_{i=1}^{n}(X_i - \bar{X})^2\Big] = \sum_{i=1}^{n}\mathrm{Var}(X_i) - 2\sum_{i=1}^{n}\mathrm{Cov}(\bar{X}, X_i) + n\,\mathrm{Var}(\bar{X}) = n\sigma^2 + n\,\frac{\sigma^2}{n} - 2n\,\frac{\sigma^2}{n} = \sigma^2(n-1)$$
since
$$\mathrm{Cov}(\bar{X}, X_1) = \frac{1}{n}\sum_{i=1}^{n}\mathrm{Cov}(X_i, X_1) = \frac{1}{n}\,\mathrm{Cov}(X_1, X_1) + 0 = \frac{1}{n}\,\mathrm{Var}(X_1) = \frac{\sigma^2}{n}$$
and $\mathrm{Cov}(\bar{X}, X_2)$, $\mathrm{Cov}(\bar{X}, X_3)$, ... must all have the same value.
Finally,
$$E(s^2) = \frac{\sigma^2(n-1)}{n-1} = \sigma^2$$
Thus, $s^2$ is a so called unbiased estimator of the distribution's variance $\sigma^2$
(meaning it has the correct expected value).
Does this imply that $s \equiv \sqrt{\dfrac{\sum_{i=1}^{n}(X_i - \bar{X})^2}{n-1}}$ has the expected value of $\sigma$? The
answer is no, s is (slightly) biased.
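Both claims are easy to check by simulation; a sketch sampling from an exponential distribution with σ² = 1:

import numpy as np

rng = np.random.default_rng(5)
n, reps = 10, 200_000
samples = rng.exponential(1.0, (reps, n))      # true variance σ² = 1

s2 = samples.var(axis=1, ddof=1)               # the (n-1)-denominator sample variance
print(s2.mean())                               # ≈ 1.0  (s² is unbiased)
print(np.sqrt(s2).mean())                      # noticeably below 1 (s is biased)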
Sampling from N(μ, σ)
To be able to say anything more about $s^2$, we need to know the distribution from
which we are sampling. We will thus assume that the distribution is Normal, with
mean $\mu$ and variance $\sigma^2$. This immediately simplifies the distribution of $\bar{X}$, which
must also be Normal (with mean $\mu$ and standard deviation of $\frac{\sigma}{\sqrt{n}}$, as we already
know) for any sample size n (not just large).
Regarding $s^2$, one can show that it is independent of $\bar{X}$, and that the distribu-
tion of $\frac{(n-1)s^2}{\sigma^2}$ is $\chi^2_{n-1}$. The proof of this is fairly complex.
Proof. We introduce a new set of n RVs: $Y_1 = \bar{X}$, $Y_2 = X_2$, $Y_3 = X_3$, ..., $Y_n = X_n$,
and find their joint pdf by
1. solving for
$$x_1 = n y_1 - x_2 - x_3 - \ldots - x_n, \quad x_2 = y_2, \quad x_3 = y_3, \quad \ldots, \quad x_n = y_n$$
2. substituting into
$$\frac{1}{(2\pi)^{\frac{n}{2}}\,\sigma^n}\;e^{-\frac{\sum_{i=1}^{n}(x_i-\mu)^2}{2\sigma^2}}$$
(the joint pdf of the $X_i$'s)
3. and multiplying by the Jacobian, which in this case equals n.
Furthermore, since $\sum_{i=1}^{n}(x_i-\mu)^2 = \sum_{i=1}^{n}(x_i-\bar{x}+\bar{x}-\mu)^2 = \sum_{i=1}^{n}(x_i-\bar{x})^2 +
2(\bar{x}-\mu)\sum_{i=1}^{n}(x_i-\bar{x}) + n(\bar{x}-\mu)^2 = (n-1)s^2 + n(\bar{x}-\mu)^2$, the resulting pdf can be
expressed as follows:
$$\frac{n}{(2\pi)^{\frac{n}{2}}\,\sigma^n}\;e^{-\frac{(n-1)s^2 + n(y_1-\mu)^2}{2\sigma^2}}\quad (dy_1\,dy_2\ldots dy_n)$$
where $s^2$ is now to be seen as a function of the $y_i$'s.
The conditional pdf of $y_2, y_3, \ldots, y_n\,|\,y_1$ thus equals - all we have to do is divide
the previous result by the marginal pdf of $y_1$, i.e. by $\frac{\sqrt{n}}{(2\pi)^{\frac{1}{2}}\,\sigma}\,e^{-\frac{n(y_1-\mu)^2}{2\sigma^2}}$:
$$\frac{\sqrt{n}}{(2\pi)^{\frac{n-1}{2}}\,\sigma^{n-1}}\;e^{-\frac{(n-1)s^2}{2\sigma^2}}\quad (dy_2\ldots dy_n)$$
This implies that
$$\int\cdots\int e^{-\frac{(n-1)s^2}{2\sigma^2}}\; dy_2\ldots dy_n = \frac{(2\pi)^{\frac{n-1}{2}}\,\sigma^{n-1}}{\sqrt{n}}$$
for any $\sigma > 0$ (just changing the name of $\sigma$). The last formula enables us to
compute the corresponding conditional MGF of $\frac{(n-1)s^2}{\sigma^2}$ (given $y_1$) by:
$$\frac{\sqrt{n}}{(2\pi)^{\frac{n-1}{2}}\,\sigma^{n-1}}\int\cdots\int e^{\frac{t(n-1)s^2}{\sigma^2}}\,e^{-\frac{(n-1)s^2}{2\sigma^2}}\; dy_2\ldots dy_n = \frac{\sqrt{n}}{(2\pi)^{\frac{n-1}{2}}\,\sigma^{n-1}}\int\cdots\int e^{-\frac{(1-2t)(n-1)s^2}{2\sigma^2}}\; dy_2\ldots dy_n$$
$$= \frac{\sqrt{n}}{(2\pi)^{\frac{n-1}{2}}\,\sigma^{n-1}}\cdot\frac{(2\pi)^{\frac{n-1}{2}}\,\big(\frac{\sigma}{\sqrt{1-2t}}\big)^{n-1}}{\sqrt{n}} = \frac{1}{(1-2t)^{\frac{n-1}{2}}}$$
(substituting $\frac{\sigma}{\sqrt{1-2t}}$ for $\sigma$). This is the MGF of the $\chi^2_{n-1}$ distribution, regardless of
the value of $y_1$ ($\equiv \bar{X}$). This clearly makes $\frac{(n-1)s^2}{\sigma^2}$ independent of $\bar{X}$.
The important implication of this is that
$$\frac{\bar{X} - \mu}{\frac{s}{\sqrt{n}}}$$
has the $t_{n-1}$ distribution, since
$$\frac{\bar{X} - \mu}{\frac{s}{\sqrt{n}}} \equiv \frac{\dfrac{\bar{X} - \mu}{\frac{\sigma}{\sqrt{n}}}}{\sqrt{\dfrac{s^2(n-1)}{\sigma^2}\Big/(n-1)}} \equiv \frac{Z}{\sqrt{\dfrac{\chi^2_{n-1}}{n-1}}}$$
Sampling without replacement
First, we have to understand the concept of a population. This is a special case of
a distribution with N equally likely values, say $x_1, x_2, \ldots, x_N$, where N is often fairly
large (millions). The $x_i$'s don't have to be integers, they may not be all distinct
(allowing only two possible values results in the hypergeometric distribution), and
they may be dense in one region of the real numbers and sparse in another.
They may thus mimic just about any distribution, including Normal. That's why
sometimes we use the words distribution and population interchangeably.
The mean and variance of this special distribution are simply
$$\mu = \frac{\sum_{i=1}^{N} x_i}{N} \quad\text{and}\quad \sigma^2 = \frac{\sum_{i=1}^{N}(x_i - \mu)^2}{N}$$
To generate a RIS from this distribution, we clearly have to do the so called
sampling with replacement (meaning that each selected $x_i$ value must be
returned to the population before the next draw, and potentially selected again
- only this can guarantee independence). In this case, all our previous formulas
concerning $\bar{X}$ and $s^2$ remain valid.
Sometimes though (and more efficiently), the sampling is done without re-
placement. This means that $X_1, X_2, \ldots, X_n$ are no longer independent (they are
still identically distributed). How does this affect the properties of $\bar{X}$ and $s^2$? Let's
see.
The expected value of $\bar{X}$ remains equal to $\mu$, by essentially the same argument
as before (note that the proof does not require independence). Its variance is now
computed by
$$\mathrm{Var}(\bar{X}) = \frac{1}{n^2}\sum_{i=1}^{n}\mathrm{Var}(X_i) + \frac{1}{n^2}\sum_{i\ne j}\mathrm{Cov}(X_i, X_j) = \frac{n\sigma^2}{n^2} - \frac{n(n-1)\,\sigma^2}{n^2\,(N-1)} = \frac{\sigma^2}{n}\cdot\frac{N-n}{N-1}$$
since all the covariances (when $i \ne j$) have the same value, equal to
$$\mathrm{Cov}(X_1, X_2) = \frac{\sum_{k\ne\ell}(x_k - \mu)(x_\ell - \mu)}{N(N-1)} = \frac{\sum_{k=1}^{N}\sum_{\ell=1}^{N}(x_k - \mu)(x_\ell - \mu) - \sum_{k=1}^{N}(x_k - \mu)^2}{N(N-1)} = -\frac{\sigma^2}{N-1}$$
Note that this variance is smaller (which is good) than what it was in the inde-
pendent case.
We don't need to pursue this topic any further.
Bivariate samples
A random independent sample of size n from a bivariate distribution consists of n
pairs of RVs $(X_1,Y_1), (X_2,Y_2), \ldots, (X_n,Y_n)$, which are independent between (but
not within) pairs - each pair having the same (aforementioned) distribution.
We already know the individual properties of $\bar{X}$, $\bar{Y}$ (and of the two
sample variances). Jointly, $\bar{X}$ and $\bar{Y}$ have a (complicated) bivariate distribution
which, for $n\to\infty$, tends to be bivariate Normal. Accepting this statement (its
proof would be similar to the univariate case), we need to know the five param-
eters which describe this distribution. Four of them are the marginal means and
variances (already known); the last one is the correlation coefficient between $\bar{X}$
and $\bar{Y}$. One can prove that this equals the correlation coefficient of the original
distribution (from which we are sampling).
Proof. First we have
$$\mathrm{Cov}\Big(\sum_{i=1}^{n}X_i,\ \sum_{i=1}^{n}Y_i\Big) = \mathrm{Cov}(X_1,Y_1) + \mathrm{Cov}(X_2,Y_2) + \ldots + \mathrm{Cov}(X_n,Y_n) = n\,\mathrm{Cov}(X,Y)$$
since $\mathrm{Cov}(X_i,Y_j) = 0$ when $i \ne j$. This implies that the covariance between $\bar{X}$
and $\bar{Y}$ equals $\frac{\mathrm{Cov}(X,Y)}{n}$. Finally, the corresponding correlation coefficient is:
$$\rho_{\bar{X}\bar{Y}} = \frac{\frac{\mathrm{Cov}(X,Y)}{n}}{\sqrt{\frac{\sigma_x^2}{n}\cdot\frac{\sigma_y^2}{n}}} = \frac{\mathrm{Cov}(X,Y)}{\sigma_x\,\sigma_y} = \rho_{xy}$$
same as that of a single $(X_i, Y_i)$ pair.
Chapter 4 ORDER STATISTICS
In this section we consider a RIS of size n from any distribution [not just N(μ, σ)],
calling the individual observations $X_1, X_2, \ldots, X_n$ (as we usually do). Based on
these we define a new set of RVs $X_{(1)}, X_{(2)}, \ldots, X_{(n)}$ [your textbook calls them $Y_1$,
$Y_2, \ldots, Y_n$] to be the smallest sample value, the second smallest value, ..., the largest
value, respectively. Even though the original $X_i$'s were independent, $X_{(1)}, X_{(2)}, \ldots,
X_{(n)}$ are strongly correlated. They are called the first, the second, ..., and the last
order statistic, respectively. Note that when n is odd, $X_{(\frac{n+1}{2})}$ is the sample
median $\tilde{X}$.
Univariate pdf
To find the (marginal) pdf of a single order statistic $X_{(i)}$, we proceed as follows:
$$f_{(i)}(x) \equiv \lim_{\Delta\to 0}\frac{\Pr(x \le X_{(i)} < x + \Delta)}{\Delta} = \lim_{\Delta\to 0}\binom{n}{i-1,\,1,\,n-i}\,F(x)^{i-1}\,\frac{F(x+\Delta)-F(x)}{\Delta}\,\big[1 - F(x+\Delta)\big]^{n-i}$$
[$i-1$ of the original observations must be smaller than x, one must be between x
and $x+\Delta$, the rest must be bigger than $x+\Delta$]
$$= \frac{n!}{(i-1)!\,(n-i)!}\,F(x)^{i-1}\,\big[1 - F(x)\big]^{n-i}\,f(x) \qquad\text{(f)}$$
It has the same range as the original distribution.
Using this formula, we can compute the mean and variance of any such order
statistic; to answer a related probability question, instead of integrating $f_{(i)}(x)$
[which would be legitimate but tedious] we use a different, simplified approach.
EXAMPLES:
1. Consider a RIS of size 7 from E( = 23 min.) [seven shermen independently
catching one sh each].
(a) Find Pr(X
(3)
< 15 min.) [the third catch of the group will not take
longer than 15 min.].
Solution: First nd the probability that any one of the original 7 indepen-
dent observations is < 15 min. [using F(x) of the corresponding exponential
distribution]: Pr(X
i
< 15 min.) = 1 e
15
23
= 0.479088 p. Now interpret
the same sampling as a binomial experiment, where a value smaller than 15
min. denes a success, and a value bigger than 15 min. represents a fail-
ure. The question is: what is the probability of getting at least 3 successes
(right)? Using binomial probabilities (and the complement shortcut) we get
1
q
7
+ 7pq
6
+
7
2
p
2
q
5
= 73.77%.
(b) Now, nd the mean and standard deviation of X
(3)
.
Solution: First we have to construct the corresponding pdf. By the above
formula, this equals:
7!
2!4!
(1 e
)
31
(e
)
73
=
105
(1 e
)
2
e
5x
38
[x > 0] where = 23 min. This yields the following mean: 105
R
0
x (1
e
)
2
e
5x
dx
= 105
R
0
u (1 e
u
)
2
e
5u
du = 105
R
0
u (e
5u
2e
6u
+
e
7u
)du = 105 [
1
5
2
2
1
6
2
+
1
7
2
] = 11.72 min. [recall the
R
0
u
k
e
u
a
du =
k! a
k+1
formula]. The second sample moment E(X
2
(3)
) is similarly 105
2
R
0
u
2
(e
5u
2e
6u
+ e
7u
)du = 105
2
2 [
1
5
3
2
1
6
3
+
1
7
3
] = 184.0
X
(3)
=
184 11.72
2
= 6.830 min.
Note that if each of the sherman continued shing (when getting his rst,
second, ... catch), the distribution of the time of the third catch would be
gamma(3,
23
7
), with the mean of 9.86 min. and =
3
23
7
= 5.69 min.
[similar, but shorter than the original answer].
(c) Repeat both (a) and (b) with X_(7).

Solution: The probability question is trivial: Pr(X_(7) < 15 min.) = p^7 = 0.579%. The new pdf is (7/β)·(1 − e^(−x/β))^6 e^(−x/β) [x > 0]; the same kind of integration now yields E(X_(7)) = 59.64 min. and E(X²_(7)) = 4356.2 ⇒ σ_(X_(7)) = √(4356.2 − 59.64²) = 28.28 min.

Note: By a different approach, one can derive the following general formulas (applicable only for sampling from an exponential distribution):

E(X_(i)) = β · Σ_(j=0)^(i−1) 1/(n−j)        Var(X_(i)) = β² · Σ_(j=0)^(i−1) 1/(n−j)²

Verify that they give the same answers as our lengthy integration above.
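These formulas are also easy to check by simulation. The following sketch (not part of the original notes; it assumes the NumPy library is available) draws many samples of size 7 from an exponential distribution with mean 23 and compares the simulated mean and standard deviation of X_(3) with the 11.72 and 6.83 computed above:

    import numpy as np

    rng = np.random.default_rng(1)
    beta, n, i = 23.0, 7, 3                       # mean 23 min., sample size 7, third order statistic
    samples = rng.exponential(beta, size=(100_000, n))
    x3 = np.sort(samples, axis=1)[:, i - 1]       # third smallest value in each sample

    print("simulated mean:", x3.mean())           # should be close to 11.72
    print("simulated std :", x3.std())            # should be close to 6.83

    # the closed-form expressions quoted in the notes
    mean_formula = beta * sum(1 / (n - j) for j in range(i))
    var_formula = beta**2 * sum(1 / (n - j) ** 2 for j in range(i))
    print("formula mean  :", mean_formula)        # 11.72
    print("formula std   :", var_formula ** 0.5)  # 6.83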
2. Consider a RIS of size 5 from U(0, 1). Find the mean and standard deviation of X_(2).

Solution: The corresponding pdf is equal to [5!/(1!·3!)]·x(1−x)^3 [0 < x < 1], which can be readily identified as beta(2, 4) [for this uniform sampling, X_(i) ~ beta(i, n+1−i) in general]. By our former formulas E(X_(2)) = 2/(2+4) = 1/3 and Var(X_(2)) = 2·4/[(2+4)²(2+4+1)] = 2/63 = 0.031746 ⇒ σ_(X_(2)) = 0.1782 (no integration necessary).

Note: These results can be easily extended to sampling from any uniform distribution U(a, b), by utilizing the Y ≡ (b−a)X + a transformation.
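A quick numerical confirmation of the X_(i) ~ beta(i, n+1−i) fact (again a sketch assuming NumPy, not part of the original notes):

    import numpy as np

    rng = np.random.default_rng(2)
    n, i = 5, 2
    x2 = np.sort(rng.uniform(size=(200_000, n)), axis=1)[:, i - 1]   # second smallest of five
    print(x2.mean(), x2.std())        # close to 1/3 = 0.3333 and 0.1782
    # beta(i, n+1-i) mean and standard deviation
    print(i / (n + 1), np.sqrt(i * (n + 1 - i) / ((n + 1) ** 2 * (n + 2))))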
Sample median
is obviously the most important sample statistic; let us have a closer look at it.
For small samples, we treat the sample median as one of the order statistics.
This enables us to get its mean and standard deviation, and to answer a related
probability question (see the previous set of examples).
When n is large (to simplify the issue, we assume that n is odd, i.e. n ≡ 2k+1) we can show that the sample median is approximately Normal, with the mean of μ̃ (the distribution's median) and the standard deviation of

1/(2f(μ̃)·√n)

This is true even for distributions whose mean does not exist (e.g. Cauchy).

Proof: The sample median X̃ ≡ X_(k+1) has the following pdf: [(2k+1)!/(k!·k!)]·F(x)^k·[1 − F(x)]^k·f(x). To explore what happens when k → ∞ (and to avoid getting a degenerate distribution) we introduce a new RV Y ≡ (X̃ − μ̃)·√n [we assume that the standard deviation of X̃ decreases, like that of X̄, with 1/√n; this guess will prove correct!]. We build the pdf of Y in the usual three steps:

1. x = y/√(2k+1) + μ̃

2. [(2k+1)!/(k!·k!)]·F(y/√(2k+1) + μ̃)^k·[1 − F(y/√(2k+1) + μ̃)]^k·f(y/√(2k+1) + μ̃)

3. multiply the last line by 1/√(2k+1).

To take the limit of the resulting pdf we first expand F(y/√(2k+1) + μ̃) as

F(μ̃) + F′(μ̃)·y/√(2k+1) + [F″(μ̃)/2]·y²/(2k+1) + ⋯ = 1/2 + f(μ̃)·y/√(2k+1) + [f′(μ̃)/2]·y²/(2k+1) + ⋯   (F)

and, similarly, 1 − F(y/√(2k+1) + μ̃) ≈ 1/2 − f(μ̃)·y/√(2k+1) − [f′(μ̃)/2]·y²/(2k+1) − ⋯. Multiplying the two results in

F(y/√(2k+1) + μ̃)·[1 − F(y/√(2k+1) + μ̃)] ≈ 1/4 − f(μ̃)²·y²/(2k+1) + ⋯

[the dots imply terms proportional to 1/(2k+1)^(3/2), 1/(2k+1)², ...; these cannot affect the subsequent limit].

Substituting into the above pdf yields:

[(2k+1)! / (2^(2k)·k!·k!·√(2k+1))] · [1 − 4f(μ̃)²·y²/(2k+1) + ⋯]^k · f(y/√(2k+1) + μ̃)

[we extracted 1/4^k from inside the brackets]. Taking the k → ∞ limit of the expression to the right of the first factor [which carries the y-dependence] is trivial: e^(−2f(μ̃)²·y²)·f(μ̃). This is [up to the normalizing constant] the pdf of N(0, 1/(2f(μ̃))) [as a by-product, we derived the so called Wallis formula: (2k+1)!/(2^(2k)·k!·k!·√(2k+1)) → √(2/π) as k → ∞]. Since Y = (X̃ − μ̃)·√n is thus approximately N(0, 1/(2f(μ̃))), the distribution of the sample median must be, approximately, N(μ̃, 1/(2f(μ̃)·√n)).
EXAMPLES:
1. Consider a RIS of size 1001 from the Cauchy distribution with f(x) = 1/[π(1+x²)]. Find Pr(−0.1 < X̃ < 0.1).

Solution: We know that X̃ ≈ N(0, π/(2√1001) = 0.049648). Thus Pr(−0.1/0.049648 < X̃/0.049648 < 0.1/0.049648) = Pr(−2.0142 < Z < 2.0142) = 95.60%.

Note that Pr(−0.1 < X < 0.1) = (1/π)·[arctan(0.1) − arctan(−0.1)] = 6.35% only (and it does not improve with n). So, in this case, the sample median enables us to estimate the center of the Cauchy distribution much more accurately than the sample mean would (but don't generalize this to other distributions).
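The Normal approximation can be seen at work in a small simulation (a sketch assuming NumPy; not part of the original notes):

    import numpy as np

    rng = np.random.default_rng(3)
    n = 1001
    medians = np.median(rng.standard_cauchy(size=(10_000, n)), axis=1)
    print("simulated std of the median  :", medians.std())
    print("theoretical 1/(2 f(0) sqrt(n)):", np.pi / (2 * np.sqrt(n)))   # 0.049648
    print("Pr(-0.1 < median < 0.1)      :", np.mean(np.abs(medians) < 0.1))   # about 95.6 %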
2. Sampling from N(μ, σ), is it better to estimate μ by the sample mean or by the sample median (trying to find the best estimator of a parameter will be the issue of the subsequent chapter)?

Solution: Since X̄ ~ N(μ, σ/√n) and X̃ ≈ N(μ, 1/(2f(μ)√n) = σ√(π/2)/√n) [the corresponding f(μ) is equal to 1/(σ√(2π))], it is obvious that X̃'s standard error is √(π/2) = 1.253 times bigger than that of X̄.

3. Consider a RIS of size 349 from the distribution with pdf f(x) = 2x [0 < x < 1]. Find Pr(X̃ < 0.75).

Solution: The distribution's median μ̃ solves F(μ̃) = μ̃² = 1/2, i.e. μ̃ = 1/√2, and f(μ̃) = 2/√2 = √2. Our probability thus equals Pr[(X̃ − 1/√2)/(1/(2√2·√349)) < (0.75 − 1/√2)/(1/(2√2·√349))] ≈ Pr(Z < 2.26645) = 98.83% [the exact probability, which can be evaluated by computer, is 99.05%].

Subsidiary: Find Pr(X̄ < 0.75).

Solution: First we need E(X) = ∫_0^1 2x·x dx = 2/3 and Var(X) = ∫_0^1 2x·x² dx − (2/3)² = 1/18. We know that X̄ ≈ N(2/3, √(1/18)/√349 = 0.0126168) ⇒ Pr[(X̄ − 2/3)/0.0126168 < (0.75 − 2/3)/0.0126168] = Pr(Z < 6.6049) ≈ 100%.
Bivariate pdf
We now construct the joint distribution of two order statistics X_(i) and X_(j) [i < j]. By our former definition,

f(x, y) ≡ lim_(Δ→0, ε→0) Pr(x ≤ X_(i) < x+Δ ∩ y ≤ X_(j) < y+ε)/(Δ·ε).

To make the event in parentheses happen, exactly i−1 observations must have a value less than x, 1 observation must fall in the [x, x+Δ) interval, j−i−1 observations must be between x+Δ and y, 1 observation must fall in [y, y+ε) and n−j observations must be bigger than y+ε. By our multinomial formula, this equals

[n!/((i−1)!·1!·(j−i−1)!·1!·(n−j)!)] · F(x)^(i−1) · [F(x+Δ) − F(x)] · [F(y) − F(x+Δ)]^(j−i−1) · [F(y+ε) − F(y)] · [1 − F(y+ε)]^(n−j).

Dividing by Δ·ε and taking the two limits yields

n!/[(i−1)!(j−i−1)!(n−j)!] · F(x)^(i−1) f(x) [F(y) − F(x)]^(j−i−1) f(y) [1 − F(y)]^(n−j)

with L < x < y < H, where L and H are the lower and upper limit (respectively) of the original distribution.
Let us discuss two important
Special Cases
of this formula:
1. Consecutive order statistics, i and i+1:

f(x, y) = n!/[(i−1)!(n−i−1)!] · F(x)^(i−1) [1 − F(y)]^(n−i−1) f(x) f(y)

where L < x < y < H [x corresponds to X_(i), y to X_(i+1)].

This reduces to n!/[(i−1)!(n−i−1)!] · x^(i−1)(1−y)^(n−i−1) with 0 < x < y < 1 when the distribution is uniform U(0, 1). Based on this, we can

find the distribution of U ≡ X_(i+1) − X_(i):

Solution: We introduce V ≡ X_(i). Then

(i) y = u + v and x = v,

(ii) the joint pdf of u and v is f(u, v) = n!/[(i−1)!(n−i−1)!] · v^(i−1)(1 − u − v)^(n−i−1) · 1 [Jacobian], where 0 < v < 1 and 0 < u < 1−v, i.e. 0 < v < 1−u and 0 < u < 1,

(iii) the marginal pdf of u is

n!/[(i−1)!(n−i−1)!] ∫_0^(1−u) v^(i−1)(1 − u − v)^(n−i−1) dv = n(1−u)^(n−1)

for 0 < u < 1 [with the help of ∫_0^a v^(i−1)(a−v)^(j−1) dv = a^(i+j−1) ∫_0^a (v/a)^(i−1)(1 − v/a)^(j−1) dv/a = a^(i+j−1) ∫_0^1 y^(i−1)(1−y)^(j−1) dy = a^(i+j−1)·Γ(i)Γ(j)/Γ(i+j)].

The corresponding distribution function is F(u) = 1 − (1−u)^n for 0 < u < 1 (the same, regardless of the i value).

To see what happens to this distribution in the n → ∞ limit, we must first introduce W ≡ U·n (why?). Then, clearly, F_W(w) = Pr(U < w/n) = 1 − (1 − w/n)^n for 0 < w < n. In the n → ∞ limit, this F_W(w) tends to 1 − e^(−w) for w > 0 [the exponential distribution with β = 1]. This is what we have always used for the time interval between two consecutive arrivals (and now we understand why). We note in passing that a similar result holds even when the original distribution is not uniform (the inter-arrival times are still exponential, but the corresponding β values now depend on whether we are in the slack or busy period).

EXAMPLE:

100 students choose, independently and uniformly, to visit the library between noon and 1 p.m. Find Pr(X_(47) − X_(46) > 3 min.) [probability that the time interval between the 46th and 47th arrival is at least 3 minutes].

Solution: Based on the distribution function just derived, this equals Pr[X_(47) − X_(46) > 3/60 hr.] = 1 − F(1/20) = (1 − 1/20)^100 = 0.592%.
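The 0.592% answer can be verified directly (a simulation sketch assuming NumPy, not part of the original notes): generate 100 uniform arrival times, sort them, and look at the gap between the 46th and 47th order statistics.

    import numpy as np

    rng = np.random.default_rng(4)
    n, reps = 100, 100_000
    arrivals = np.sort(rng.uniform(size=(reps, n)), axis=1)
    gap = arrivals[:, 46] - arrivals[:, 45]            # X_(47) - X_(46), in hours
    print("simulated Pr(gap > 3 min):", np.mean(gap > 3 / 60))
    print("exact (1 - 1/20)**100    :", (1 - 1 / 20) ** 100)   # 0.592 %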
2. First and last order statistics, i = 1 and j = n:

f(x, y) = n(n−1)·[F(y) − F(x)]^(n−2)·f(x)·f(y)

where L < x < y < H.

Based on this result, you will be asked (in the assignment) to investigate the distribution of the sample range X_(n) − X_(1).

When the sampling distribution is U(0, 1), the pdf simplifies to f(x, y) = n(n−1)(y − x)^(n−2), where 0 < x < y < 1. For this special case we want to

find the distribution of U ≡ [X_(1) + X_(n)]/2 [the mid-range value]:

Solution: With V ≡ X_(1),

(i) x = v and y = 2u − v,

(ii) f(u, v) = 2n(n−1)(2u − 2v)^(n−2), where 0 < v < 1 and v < u < (v+1)/2 [visualize the region!],

(iii) f(u) = 2^(n−1) n(n−1) ∫_(max(0, 2u−1))^u (u−v)^(n−2) dv = 2^(n−1)·n·u^(n−1) for 0 < u < 1/2, and 2^(n−1)·n·(1−u)^(n−1) for 1/2 < u < 1

⇒ F(u) = 2^(n−1)·u^n for 0 < u < 1/2, and 1 − F(u) = 2^(n−1)·(1−u)^n for 1/2 < u < 1.

Pursuing this further: E(U) = 1/2 [based on the f(1/2 + u) ≡ f(1/2 − u) symmetry] and

Var(U) = ∫_0^1 (u − 1/2)² f(u) du = 2^(n−1)·n ∫_0^(1/2) (u − 1/2)² u^(n−1) du + 2^(n−1)·n ∫_(1/2)^1 (u − 1/2)² (1−u)^(n−1) du = 2^n·n ∫_0^(1/2) (1/2 − u)² u^(n−1) du = 2^n·n·[Γ(3)Γ(n)/Γ(n+3)]·(1/2)^(n+2) = 1/[2(n+2)(n+1)]

⇒ σ_U = 1/√(2(n+2)(n+1)).

These results can be now easily extended to cover the case of a general uniform distribution U(a, b) [note that all it takes is the X_G ≡ (b−a)X + a transformation, applied to each of the X_(i) variables, and consequently to U]. The results are now

E(U_G) = (a+b)/2        σ_(U_G) = (b−a)/√(2(n+2)(n+1))

This means that, as an estimator of (a+b)/2, the mid-range value is a lot better (judged by its standard error) than either X̄ ≈ N((a+b)/2, (b−a)/√(12n)) or X̃ ≈ N((a+b)/2, (b−a)/(2√n)).
EXAMPLE:
Consider a RIS of size 1001 from U(0, 1). Compare
Pr(0.499 < [X_(1) + X_(1001)]/2 < 0.501) = 1 − (1/2)(2·0.499)^1001 − (1/2)(2·0.499)^1001 [using F(u) of the previous example] = 86.52%

Pr(0.499 < X̄ < 0.501) ≈ Pr[(0.499 − 0.5)/(1/√(12·1001)) < Z < (0.501 − 0.5)/(1/√(12·1001))] = Pr(−0.1095993 < Z < 0.1095993) = 8.73%

Pr(0.499 < X̃ < 0.501) ≈ Pr[(0.499 − 0.5)/(1/(2√1001)) < Z < (0.501 − 0.5)/(1/(2√1001))] = Pr(−0.063277 < Z < 0.063277) = 5.05%.

This demonstrates that, for a uniform distribution, the mid-range value is a lot more likely to find the true center than either the sample mean or the sample median.
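The three probabilities can be confirmed by a simulation along the following lines (a sketch assuming NumPy; not part of the original notes):

    import numpy as np

    rng = np.random.default_rng(5)
    u = rng.uniform(size=(10_000, 1001))
    mid = (u.min(axis=1) + u.max(axis=1)) / 2        # mid-range
    mean = u.mean(axis=1)                            # sample mean
    med = np.median(u, axis=1)                       # sample median
    for name, est in [("mid-range", mid), ("mean", mean), ("median", med)]:
        print(name, np.mean(np.abs(est - 0.5) < 0.001))   # roughly 86.5 %, 8.7 %, 5.1 %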
Chapter 5 ESTIMATING
DISTRIBUTION PARAMETERS
Until now we have studied Probability, proceeding as follows: we assumed pa-
rameters of all distributions to be known and, based on this, computed probabilities
of various outcomes (in a random experiment). In this chapter we make the es-
sential transition to Statistics, which is concerned with the exact opposite: the
random experiment is performed (usually many times) and the individual outcomes
recorded; based on these, we want to estimate values of the distribution parameters
(one or more). Until the last two sections, we restrict our attention to the (easier
and most common) case of estimating only one parameter of a distribution.
EXAMPLE: How should we estimate the mean μ of a Normal distribution N(μ, σ), based on a RIS of size n? We would probably take X̄ (the sample mean) to be a reasonable estimator of μ [note that this name applies to the random variable X̄, with all its potential (would-be) values; as soon as the experiment is completed and a particular value of X̄ recorded, this value (i.e. a specific number) is called an estimate of μ].
There are a few related issues we have to sort out:

How do we know that X̄ is a good estimator of μ, i.e. is there some sensible set of criteria which would enable us to judge the quality of individual estimators?

Using these criteria, can we then find the best estimator of a parameter, at least in some restricted sense?

Would it not be better to use, instead of a single number [the so called point estimate, which can never precisely agree with the exact value of the unknown parameter, and is thus in this sense always wrong], an interval of values which may have a good chance of containing the correct answer?
The rest of this chapter tackles the first two issues. We start with
A few definitions
First we allow an estimator of a parameter θ to be any reasonable combination (transformation) of X_1, X_2, ..., X_n [our RIS], say θ̂(X_1, X_2, ..., X_n) [the sample mean (X_1 + X_2 + ⋯ + X_n)/n being a good example]. Note that n (being known) can be used in the expression for θ̂; similarly, we can use values of other parameters if these are known [e.g.: in the case of the hypergeometric distribution, N is usually known and only K needs to be estimated; in the case of the negative binomial distribution k is given and p estimated, etc.]. Also note that some parameters may have only integer values, while others are real; typically, we concentrate on estimating parameters of the latter type.
To narrow down our choices (we are after sensible, good estimators) we first insist that our estimators be unbiased, i.e.

E(θ̂) = θ

(having the exact long-run average), or at least asymptotically unbiased, i.e.

E(θ̂) → θ as n → ∞

(being unbiased in the large-sample limit).

EXAMPLE: When σ² of a Normal distribution is estimated by σ̂² ≡ Σ_(i=1)^n (X_i − X̄)²/n, we already know that E(σ̂²) = [(n−1)/n]·σ². Our estimator is thus asymptotically unbiased only. This bias can be easily removed by defining a new estimator s² ≡ [n/(n−1)]·σ̂² = Σ_(i=1)^n (X_i − X̄)²/(n−1). Since (n−1)s²/σ² ~ χ²_(n−1), we can establish not only that E(s²) = [σ²/(n−1)]·(n−1) = σ² (unbiased), but also that Var(s²) = [σ²/(n−1)]²·2(n−1) = 2σ⁴/(n−1), which we need later.
Supplementary: Does this imply that s is an unbiased estimator of σ? The answer is No, as we can see from

E(√(χ²_(n−1))) = [1/(Γ((n−1)/2)·2^((n−1)/2))] ∫_0^∞ √x · x^((n−3)/2) e^(−x/2) dx = √2·Γ(n/2)/Γ((n−1)/2)

⇒ E(s) = σ·√(2/(n−1))·Γ(n/2)/Γ((n−1)/2) = σ·(1 − 1/(4n) − 7/(32n²) + ⋯). We know how to fix this: use √((n−1)/2)·[Γ((n−1)/2)/Γ(n/2)]·s instead; it is a fully unbiased estimator of σ.
Yet, making an estimator unbiased (or at least asymptotically so) is not enough to make it even acceptable (let alone good). Consider estimating μ of a distribution by taking μ̂ = X_1 (the first observation only), throwing away X_2, X_3, ..., X_n [most of our sample!]. We get a fully unbiased estimator which is evidently unacceptable, since we are wasting nearly all the information contained in our sample. It is thus obvious that being unbiased is only one essential ingredient of a good estimator; the other one is its variance (a square of its standard error). A good estimator should not only be unbiased, but it should also have a variance which is as small as possible. This leads to two new definitions:
Consistent estimator is such that
1. E(θ̂) → θ as n → ∞ [asymptotically unbiased], and

2. Var(θ̂) → 0 as n → ∞.

This implies that we can reach the exact value of θ by indefinitely increasing the sample size. That sounds fairly good, yet it represents what I would call minimal standards (or less), i.e. every decent estimator is consistent; that by itself does not make it particularly good.

Example: μ̂ = (X_2 + X_4 + X_6 + ⋯ + X_n)/(n/2) [n even] is a consistent estimator of μ, since its asymptotic (large n) distribution is N(μ, σ/√(n/2)). Yet, we are wasting one half of our sample, which is unacceptable.
Minimum variance unbiased estimator (MVUE or best estimator from now on) is an unbiased estimator whose variance is smaller than or equal to the variance of any other unbiased estimator [uniformly, i.e. for all values of θ]. (The restriction to unbiased estimators is essential: an arbitrary constant may be totally nonsensical as an estimator (in all but lucky-guess situations), yet no other estimator can compete with its variance, which is identically equal to zero.)
Having such an estimator would of course be ideal, but we run into two difficulties:

1. The variance of an estimator is, in general, a function of the unknown parameter [to see that, go back to the s² example], so we are comparing functions, not values. It may easily happen that two unbiased estimators have variances such that one estimator is better in some range of θ values and worse in another. Neither estimator is then (uniformly) better than its counterpart, and the best estimator may therefore not exist at all.

2. Even when the best estimator exists, how do we know that it does and, more importantly, how do we find it (out of the multitude of all unbiased estimators)?

To partially answer the last issue: luckily, there is a theoretical lower bound on the variance of all unbiased estimators; when an estimator achieves this bound, it is automatically MVUE. The relevant details are summarized in the following
Theorem:
Cramér-Rao inequality
When estimating a parameter θ which does not appear in the limits of the distribution (the so called regular case) by an unbiased estimator θ̂, then

Var(θ̂) ≥ 1/(n·E[(∂ ln f(x|θ)/∂θ)²]) = −1/(n·E[∂² ln f(x|θ)/∂θ²])   (C-R)

where f(x|θ) stands for the old f(x); we are now emphasizing its functional dependence on the parameter θ. As θ is fixed (albeit unknown) and not random in any sense, this is not to be confused with our conditional-pdf notation.
Proof: The joint pdf of X_1, X_2, ..., X_n is ∏_(i=1)^n f(x_i|θ) where L < x_1, x_2, ..., x_n < H. Define a new RV

U ≡ Σ_(i=1)^n U_i ≡ Σ_(i=1)^n ∂ ln f(X_i|θ)/∂θ = (∂/∂θ) ln ∏_(i=1)^n f(X_i|θ) = [(∂/∂θ) ∏_(i=1)^n f(X_i|θ)] / ∏_(i=1)^n f(X_i|θ)

Its expected value is

E(U) = Σ_(i=1)^n E(U_i) = n ∫_L^H [∂ ln f(x|θ)/∂θ]·f(x|θ) dx = n ∫_L^H ∂f(x|θ)/∂θ dx = n·(∂/∂θ) ∫_L^H f(x|θ) dx = n·(∂/∂θ)(1) = 0

and, consequently, Var(U) = n·E[(∂ ln f(X|θ)/∂θ)²]. We also know that

E(θ̂) = ∫_L^H ⋯ ∫_L^H θ̂ ∏_(i=1)^n f(x_i|θ) dx_1 dx_2 ⋯ dx_n = θ

[unbiased]. Differentiating this equation with respect to θ yields E(θ̂·U) = 1 ⇒ Cov(θ̂, U) = 1 [since E(U) = 0]. Since Cov(θ̂, U)² ≤ Var(θ̂)·Var(U), this gives

Var(θ̂) ≥ 1/(n·E[(∂ ln f(X|θ)/∂θ)²]),

which is the C-R bound. Differentiating ∫_L^H f(x|θ) dx = 1 yields ∫_L^H ∂f(x|θ)/∂θ dx = 0; differentiating once more,

∫_L^H [∂² ln f(x|θ)/∂θ² · f(x|θ) + (∂ ln f(x|θ)/∂θ)·∂f(x|θ)/∂θ] dx = 0 ⇒ E[(∂ ln f(X|θ)/∂θ)²] = −E[∂² ln f(X|θ)/∂θ²]

⇒ Var(θ̂) ≥ −1/(n·E[∂² ln f(X|θ)/∂θ²])

[we will use CRV as a shorthand for the last expression]. Note that this proof holds in the case of a discrete distribution as well (each integration needs to be replaced by the corresponding summation).
Based on this C-R bound we define the so called efficiency of an unbiased estimator θ̂ as the ratio of the theoretical variance bound CRV to the actual variance of θ̂, thus:

CRV / Var(θ̂)

usually expressed in percent [we know that its value cannot be bigger than 1, i.e. 100%]. An estimator whose variance is as small as CRV is called efficient [note that, from what we know already, this makes it automatically the MVUE or best estimator of θ]. An estimator which reaches 100% efficiency only in the n → ∞ limit is called asymptotically efficient.

One can also define the relative efficiency of two estimators with respect to one another as Var(θ̂_2)/Var(θ̂_1) [this is the relative efficiency of θ̂_1 compared to θ̂_2; note that the variance ratio is reversed!].
EXAMPLES:
1. How good is X̄ as an estimator of μ of the Normal distribution N(μ, σ)?

Solution: We know that its variance is σ²/n. To compute the C-R bound we do

∂²/∂μ² [−ln(σ√(2π)) − (x−μ)²/(2σ²)] = −1/σ²

Thus CRV equals 1/(n·(1/σ²)) = σ²/n, implying that X̄ is the best (unbiased) estimator of μ.

2. Consider a RIS of size 3 from N(μ, σ). What is the relative efficiency of (X_1 + 2X_2 + X_3)/4 [obviously unbiased] with respect to X̄ (when estimating μ)?

Solution: Var[(X_1 + 2X_2 + X_3)/4] = σ²·(1/16 + 4/16 + 1/16) = (3/8)σ².

Answer: (σ²/3) / ((3/8)σ²) = 8/9 = 88.89%.
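The 88.89% figure can also be seen empirically (a sketch assuming NumPy; not part of the original notes): simulate many samples of size 3 and compare the two variances.

    import numpy as np

    rng = np.random.default_rng(6)
    x = rng.normal(loc=10.0, scale=2.0, size=(500_000, 3))
    weighted = (x[:, 0] + 2 * x[:, 1] + x[:, 2]) / 4     # the alternative estimator
    xbar = x.mean(axis=1)                                # the sample mean
    print(xbar.var() / weighted.var())                   # close to 8/9 = 0.889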
3. Suppose we want to estimate p of a Bernoulli distribution by the experimental proportion of successes, i.e. p̂ = Σ_(i=1)^n X_i / n. The mean of our estimator is np/n = p [unbiased], its variance equals npq/n² = pq/n [since Σ_(i=1)^n X_i has the binomial distribution]. Is this the best we can do?

Solution: Let us compute the corresponding CRV by starting from f(x) = p^x (1−p)^(1−x) [x = 0, 1] and computing

−∂²/∂p² [x ln p + (1−x) ln(1−p)] = x/p² + (1−x)/(1−p)² ⇒ E[X/p² + (1−X)/(1−p)²] = 1/p + 1/(1−p) = 1/(pq) ⇒ CRV = pq/n.

So again, our estimator is the best one can find.
4. Let us find the efficiency of X̄ to estimate the mean β of the exponential distribution, with f(x) = (1/β)e^(−x/β) [x > 0].

Solution: −∂²/∂β² [−ln β − x/β] = −1/β² + 2x/β³ ⇒ E[2X/β³ − 1/β²] = 1/β² ⇒ CRV = β²/n.

We know that E(X̄) = nβ/n = β and Var(X̄) = nβ²/n² = β²/n.

Conclusion: X̄ is the best estimator of β.
5. Similarly, how good is X̄ in estimating λ of the Poisson distribution?

Solution: −∂²/∂λ² [x ln λ − ln(x!) − λ] = x/λ² ⇒ E[X/λ²] = 1/λ ⇒ CRV = λ/n. Since E(X̄) = nλ/n = λ and Var(X̄) = nλ/n² = λ/n, we again have the best estimator.
6. Let us try estimating θ of the uniform distribution U(0, θ). This is not a regular case, so we don't have CRV and the concept of (absolute) efficiency. We propose, and compare the quality of, two estimators, namely 2X̄ and X_(n) [the largest sample value].

To investigate the former one we need E(X_i) = θ/2 and Var(X_i) = θ²/12 ⇒ E(2X̄) = 2nθ/(2n) = θ [unbiased] and Var(2X̄) = 4nθ²/(12n²) = θ²/(3n) [consistent].

As to X_(n), we realize that X_(n)/θ ~ beta(n, 1) ⇒ E(X_(n)/θ) = n/(n+1) and Var(X_(n)/θ) = n/[(n+1)²(n+2)] [X_(n) is consistent, but unbiased only asymptotically] ⇒ [(n+1)/n]·X_(n) is an unbiased estimator of θ, having the variance of θ²/[(n+2)n]. Its relative efficiency with respect to 2X̄ is therefore (n+2)/3, i.e., in the large-sample limit, [(n+1)/n]·X_(n) is infinitely more efficient than 2X̄. But how can we establish whether [(n+1)/n]·X_(n) is the best unbiased estimator, lacking the C-R bound? Obviously, something else is needed for cases (like this) which are not regular. This is the concept of
concept of
Suciency
which, in addition to providing a new criterion for being the best estimator (of a
regular case or not), will also help us nd it (the C-R bound does not do that!).
Denition:
(X
1
, X
2
, ...X
n
) is called a sucient statistic (not an estimator
yet) for estimating i the joint pdf of the sample
n
Q
i=1
f(x
i
|) can be factorized
into a product of a function of and
only, times a function of all the x
i
s (but
no ), thus:
n
Y
i=1
f(x
i
|) g(,
) h(x
1
, x
2
, ...x
n
)
where g(,
) must fully take care of the joint pdfs dependence, including the
ranges limits (L and H). Such
(when it exists) extracts, from the RIS, all the
information relevant for estimating . All we have to do to convert
into the best
possible estimator of is to make it unbiased (by some transformation, which is
usually easy to design).
One can show that, if this transformation is unique, the resulting estimator is
MVUE (best), even if it does not reach the C-R limit (but: it must be ecient at
least asymptotically). To prove uniqueness, one has to show that E
n
u(
)
o
0
(for each value of ) implies u(
) 0, where u(
) is a function of
.
EXAMPLES:
1. Bernoulli distribution: ∏_(i=1)^n f(x_i|p) = p^(x_1+x_2+⋯+x_n) (1−p)^(n−x_1−x_2−⋯−x_n) is a function of p and of a single combination of the x_i's, namely Σ_(i=1)^n x_i. A sufficient statistic for estimating p is thus Σ_(i=1)^n X_i [we know how to make it into an unbiased estimator].

2. Normal distribution:

∏_(i=1)^n f(x_i|μ) = [1/(σ√(2π))^n] exp(−Σ_(i=1)^n x_i²/(2σ²)) × exp(−nμ²/(2σ²) + μ·Σ_(i=1)^n x_i/σ²)

where the first factor (to the left of ×) contains no μ and the second factor is a function of only a single combination of the x_i's, namely their sum. This leads to the same conclusion as in the previous example.

3. Exponential: ∏_(i=1)^n f(x_i|β) = (1/β^n) exp(−Σ_(i=1)^n x_i/β), ditto.
4. Referring to the same exponential distribution: what if the parameter to estimate is λ ≡ 1/β? The joint pdf is now λ^n·exp(−λ·Σ_(i=1)^n x_i), so Σ_(i=1)^n X_i is still the sufficient statistic. Since Σ_(i=1)^n X_i ~ gamma(n, 1/λ),

E[1/Σ_(i=1)^n X_i] = [λ^n/(n−1)!] ∫_0^∞ (1/u)·u^(n−1) e^(−λu) du = (n−2)!·λ^n / [(n−1)!·λ^(n−1)] = λ/(n−1)

⇒ (n−1)/Σ_(i=1)^n X_i is an unbiased estimator of λ. Its variance can be shown (by a similar integration) to be equal to λ²/(n−2), whereas the C-R bound yields λ²/n [verify!]. Thus the efficiency of (n−1)/Σ_(i=1)^n X_i is (n−2)/n, making it only asymptotically efficient [it is still the MVUE and therefore the best unbiased estimator in existence, i.e. 100% efficiency is, in this case, an impossible goal].
5. Gamma(k, β):

∏_(i=1)^n f(x_i|β) = [∏_(i=1)^n x_i^(k−1) / ((k−1)!)^n] · exp(−Σ_(i=1)^n x_i/β) / β^(kn),

which makes Σ_(i=1)^n X_i a sufficient statistic for estimating β [similarly, ∏_(i=1)^n X_i would be a sufficient statistic for estimating k]. Since E(Σ_(i=1)^n X_i) = nkβ, Σ_(i=1)^n X_i/(nk) is the corresponding unbiased estimator. Its variance equals nkβ²/(nk)² = β²/(nk), which agrees with the C-R bound (verify!).
6. We can show that X_(n) is a sufficient statistic for estimating θ of the uniform U(0, θ) distribution.

Proof: Introduce G_(a,b)(x) ≡ 0 for x < a, 1 for a ≤ x ≤ b, and 0 for x > b. The joint pdf of X_1, X_2, ..., X_n can be written as

(1/θ^n) ∏_(i=1)^n G_(0,θ)(x_i) = (1/θ^n)·G_(0,θ)(x_(n))·G_(0,∞)(x_(1))

where the first factor is a function of θ and x_(n) only.

Knowing that E(X_(n)) = nθ/(n+1) [as done earlier], we can easily see that [(n+1)/n]·X_(n) is an unbiased estimator of θ. Now we also know that it is the best estimator we can find for this purpose.
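A short simulation (a sketch assuming NumPy; not part of the original notes) comparing the two unbiased estimators of θ for U(0, θ):

    import numpy as np

    rng = np.random.default_rng(7)
    theta, n = 10.0, 20
    x = rng.uniform(0, theta, size=(200_000, n))
    est_mom = 2 * x.mean(axis=1)                     # 2 * sample mean
    est_max = (n + 1) / n * x.max(axis=1)            # unbiased version of the largest value
    print("means     :", est_mom.mean(), est_max.mean())          # both close to 10
    print("variances :", est_mom.var(), est_max.var())
    print("theory    :", theta**2 / (3 * n), theta**2 / (n * (n + 2)))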
The only difficulty with the approach of this section arises when a sufficient statistic does not exist (try finding it for the Cauchy distribution). In that case, one can resort to using one of the following two techniques for finding an estimator of a parameter (or joint estimators of two or more parameters):

Method of moments

is the simpler of the two; it provides adequate (often best) estimators in most cases, but it can also, on occasion, result in estimators which are pathetically inefficient. It works like this: set each of the following expressions: E(X), Var(X),
E[(X−μ)³], etc. [use as many of these as the number of parameters to be estimated, usually one or two] equal to its empirical equivalent, i.e.

E(X) = Σ_(i=1)^n X_i / n   (≡ X̄)

Var(X) = Σ_(i=1)^n (X_i − X̄)² / n   (≡ S²)

E[(X−μ)³] = Σ_(i=1)^n (X_i − X̄)³ / n

etc., then solve for the unknown parameters. The result yields the corresponding estimators (each a function of X̄, S², etc., depending on the number of parameters). These will be asymptotically (but not necessarily fully) unbiased, and consistent (but not necessarily efficient nor MVUE). The method fails when E(X) does not exist (Cauchy).
EXAMPLES:
One Parameter
1. Exponential E(β) distribution; estimating β.

Solution: E(X) = β = X̄ ⇒ β̂ = X̄.

2. Uniform U(0, θ) distribution; estimating θ.

Solution: E(X) = θ/2 = X̄ ⇒ θ̂ = 2X̄ [a very inefficient estimator].

3. Geometric distribution; estimate p.

Solution: E(X) = 1/p = X̄ ⇒ p̂ = 1/X̄. One can show that E(1/X̄) = p + pq/n + pq(q−p)/n² + ⋯ [biased]. The following adjustment would make it into an unbiased estimator:

p̂ = (1 − 1/n)/(X̄ − 1/n) ≡ (n−1)/(Σ_(i=1)^n X_i − 1).

Its variance is p²q/n + 2p²q²/n² + ⋯, whereas the C-R bound equals p²q/n, so p̂ is only asymptotically efficient.
4. Distribution given by f(x) = (2x/a)·e^(−x²/a) for x > 0; estimate a.

Solution: E(X) = ∫_0^∞ (2x²/a)·e^(−x²/a) dx = ∫_0^∞ √(au)·e^(−u) du [using the u = x²/a substitution] = √a·Γ(3/2) = √(aπ)/2. Setting this equal to X̄ results in â = 4X̄²/π. Since

E[X̄²] = (n/n²)·E[X_1²] + [n(n−1)/n²]·E[X_1X_2] = a/n + [(n−1)/n]·(aπ/4),

we get E[â] = a + (a/n)·(4/π − 1), i.e. â is only asymptotically unbiased.
6. gamma(α, β): estimate β assuming α known.

Solution: E(X) = αβ = X̄ ⇒ β̂ = X̄/α.
Two Parameters
1. For N(μ, σ), estimate both μ and σ.

Solution: E(X) = μ = X̄ and Var(X) = σ² = S² ⇒ μ̂ = X̄ and σ̂ = √(S²) [the latter being unbiased only asymptotically].

2. For U(a, b) estimate both a and b.

Solution: E(X) = (a+b)/2 = X̄ and Var(X) = (b−a)²/12 = S² ⇒ â = X̄ − √(3S²) and b̂ = X̄ + √(3S²) [this would prove to be a very inefficient way of estimating a and b].
3. Binomial, where both n and p need to be estimated.

Solution: E(X) = np = X̄ and Var(X) = npq = S² ⇒ p̂ = 1 − S²/X̄ and n̂ = X̄/(1 − S²/X̄) (rounded to the nearest integer). Both estimators appear biased when explored by computer simulation [generating many RISs using the binomial distribution with specific values of n and p, then computing n̂ and p̂ to see how they perform; in this case p is consistently overestimated and n underestimated].
4. beta(n, m), estimate both n and m.

Solution: E(X) = n/(n+m) = X̄ [⇒ m/(n+m) = 1 − X̄] and Var(X) = nm/[(n+m)²(n+m+1)] = S² ⇒ n̂ = X̄·[X̄(1−X̄)/S² − 1] and m̂ = (1−X̄)·[X̄(1−X̄)/S² − 1].
5. gamma(α, β): estimate both parameters.

Solution: E(X) = αβ = X̄ and Var(X) = αβ² = S² ⇒ β̂ = S²/X̄ and α̂ = X̄²/S².
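As an illustration of the last example, a method-of-moments sketch for gamma data (assuming NumPy; not part of the original notes):

    import numpy as np

    rng = np.random.default_rng(8)
    alpha, beta = 3.0, 2.0
    x = rng.gamma(shape=alpha, scale=beta, size=5000)
    xbar, s2 = x.mean(), x.var()        # note: x.var() uses the /n convention, as in these notes
    beta_hat = s2 / xbar                # beta-hat = S^2 / X-bar
    alpha_hat = xbar**2 / s2            # alpha-hat = X-bar^2 / S^2
    print(alpha_hat, beta_hat)          # roughly 3 and 2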
Maximum-likelihood technique
always performs very well; it guarantees to find the best estimators under the circumstances (even though they may be only asymptotically unbiased); the major difficulty is that the estimators may turn out to be rather complicated functions of the X_i's (to the extent that we may be able to find them only numerically, via computer optimization).

The technique for finding them is rather simple (in principle, not in technical detail): In the joint pdf of X_1, X_2, ..., X_n, i.e. in ∏_(i=1)^n f(x_i|θ_1, θ_2, ...), replace x_i by the actually observed value of X_i and maximize the resulting expression (called the likelihood function) with respect to θ_1, θ_2, ... The corresponding (optimal) θ-values are the actual parameter estimates. Note that it is frequently easier (yet equivalent) to maximize the natural logarithm of the likelihood function instead.
EXAMPLES:
One Parameter
1. Exponential distribution, estimating β.

Solution: We have to maximize −n ln β − Σ_(i=1)^n X_i/β ⇒ −n/β + Σ_(i=1)^n X_i/β² = 0 ⇒ β̂ = Σ_(i=1)^n X_i/n [same as the method of moments].

2. Uniform distribution U(0, θ), estimate θ.

Solution: We have to maximize (1/θ^n)·G_(0,θ)(X_(n))·G_(0,∞)(X_(1)) with respect to θ; this can be done by choosing the smallest possible value for θ while keeping G_(0,θ)(X_(n)) = 1. This is achieved by θ̂ = X_(n) [any smaller value of θ and G_(0,θ)(X_(n)) drops down to 0]. We already know that this estimator has a small bias and also how to fix it.

3. Geometric distribution, estimating p.

Solution: Maximize n ln p + (Σ_(i=1)^n X_i − n)·ln(1−p) ⇒ n/p − (Σ_(i=1)^n X_i − n)/(1−p) = 0 ⇒ p̂ = n/Σ_(i=1)^n X_i [same as the method of moments].
4. The distribution is given by f(x) = (2x/a)·e^(−x²/a) for x > 0, estimate a.

Solution: Maximize n ln 2 − n ln a + ln ∏_(i=1)^n X_i − Σ_(i=1)^n X_i²/a ⇒ −n/a + Σ_(i=1)^n X_i²/a² = 0 ⇒ â = Σ_(i=1)^n X_i²/n (the average of the squared observations). Since E(X_i²) = a [done earlier], â is an unbiased estimator. Based on

−∂²/∂a² [ln(2x) − ln a − x²/a] = −1/a² + 2x²/a³

(whose expected value equals 1/a²), the C-R bound is a²/n. Since Var(â) = Var(X²)/n = [E(X⁴) − a²]/n = (2a² − a²)/n = a²/n, our estimator is 100% efficient.
5. Normal distribution N(μ, σ), assuming that μ is known, and σ² is to be estimated [a rather unusual situation].

Solution: Maximize −(n/2)·ln(2π) − n ln σ − Σ_(i=1)^n (X_i − μ)²/(2σ²) with respect to σ: −n/σ + Σ_(i=1)^n (X_i − μ)²/σ³ = 0 ⇒ σ̂² = Σ_(i=1)^n (X_i − μ)²/n [clearly an unbiased estimator]. To assess its efficiency: the C-R bound can be computed based on

−∂²/∂(σ²)² [−(1/2)·ln(2π) − (1/2)·ln σ² − (x−μ)²/(2σ²)] = −1/(2σ⁴) + (x−μ)²/σ⁶,

whose expected value is 1/(2σ⁴) ⇒ CRV = 2σ⁴/n. Since

Var(σ̂²) = E{[(X−μ)² − σ²]²}/n = {E[(X−μ)⁴] − 2σ²·E[(X−μ)²] + σ⁴}/n = (3σ⁴ − 2σ⁴ + σ⁴)/n = 2σ⁴/n,

our estimator is 100% efficient.
6. gamma(α, β): estimate β (with α known).

Solution: Maximize (α−1)·ln ∏_(i=1)^n X_i − Σ_(i=1)^n X_i/β − nα ln β − n ln Γ(α) ⇒ Σ_(i=1)^n X_i/β² − nα/β = 0 ⇒ β̂ = X̄/α.
7. … â = (∏_(i=1)^n X_i)^(1/n), the geometric mean of the observed values.
8. Cauchy distribution with f(x) = (1/π)·1/[1 + (x−a)²], estimate a [the location of the laser gun, knowing its (unit) distance behind a screen]. Note that the method of moments would not work in this case.

Solution: Maximize −n ln π − Σ_(i=1)^n ln[1 + (X_i − a)²] ⇒ Σ_(i=1)^n (X_i − a)/[1 + (X_i − a)²] = 0. This equation would have to be solved, for a, numerically [i.e. one would need a computer].

Would this give us something substantially better than our (sensible but ad hoc) sample median X̃? Well, we know that the new estimator is asymptotically efficient, i.e. its variance approaches the C-R bound of

1/(n·E[(∂ ln f/∂a)²]) = 1/[(n/π) ∫ 4(x−a)² dx/(1 + (x−a)²)³] = 2/n.

The variance of X̃ was 1/[4n·f(a)²] = π²/(4n), so its relative efficiency is 8/π² = 81.06%. The loss of 19% efficiency seems an acceptable trade-off, since X̃ is so much easier to evaluate and (which is another substantial advantage over the best estimator), it does not require the knowledge of the distance of the laser gun from the screen.
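Since the likelihood equation has no closed-form solution, it is normally handed to a numerical optimizer. A minimal sketch (assuming NumPy and SciPy are available; not part of the original notes):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(9)
    a_true = 2.5
    x = a_true + rng.standard_cauchy(1000)        # RIS from a Cauchy centred at a_true

    def neg_log_likelihood(a):
        # the constant n*ln(pi) is omitted; it does not affect the maximizing a
        return np.sum(np.log1p((x - a) ** 2))

    a_mle = minimize_scalar(neg_log_likelihood,
                            bracket=(np.median(x) - 1, np.median(x) + 1)).x
    print("MLE:", a_mle, " sample median:", np.median(x))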
Two-parameters
1. The distribution is N(μ, σ), estimate both μ and σ.

Solution: Maximize −(n/2)·ln(2π) − n ln σ − Σ_(i=1)^n (X_i − μ)²/(2σ²) by setting both derivatives equal to zero, i.e. Σ_(i=1)^n (X_i − μ)/σ² = 0 and −n/σ + Σ_(i=1)^n (X_i − μ)²/σ³ = 0, and solving for μ̂ = X̄ and σ̂ = √(S²) (same as when using the method of moments).

2. Uniform U(a, b), estimate both limits a and b.

Solution: Maximize [1/(b−a)^n] ∏_(i=1)^n G_(a,b)(X_i) by choosing a and b as close to each other as the G-functions allow (before dropping to zero). Obviously, a cannot be any bigger than X_(1) and b cannot be any smaller than X_(n), so these are the corresponding estimators [both slightly biased, but we know how to fix that]. These estimators are much better than what we got from the method of moments.

3. gamma(α, β), estimate both parameters.

Solution: Maximize (α−1)·ln ∏_(i=1)^n X_i − Σ_(i=1)^n X_i/β − nα ln β − n ln Γ(α) by setting both partial derivatives equal to zero; the β equation gives Σ_(i=1)^n X_i/β² − nα/β = 0 (i.e. β̂ = X̄/α̂), while the α equation involves the digamma function. Solving them jointly can be done only numerically.
4. Binomial distribution, with both n and p to be estimated.

Solution: Maximize

N ln(n!) − ln ∏_(i=1)^N X_i! − ln ∏_(i=1)^N (n − X_i)! + ln p · Σ_(i=1)^N X_i + ln(1−p) · (Nn − Σ_(i=1)^N X_i),

where N is the sample size. Differentiating, we get [∂/∂n:] N·ψ(n+1) − Σ_(i=1)^N ψ(n − X_i + 1) + N ln(1−p) = 0 and [∂/∂p:] Σ_(i=1)^N X_i/p − (Nn − Σ_(i=1)^N X_i)/(1−p) = 0, where ψ denotes the digamma function. One can solve the second equation for p̂ = Σ_(i=1)^N X_i/(Nn), then substitute into the first equation and solve, numerically, for n. This would require the help of a computer, which is frequently the price to pay for high-quality estimators.
Chapter 6 CONFIDENCE
INTERVALS
The last chapter considered the issue of so called point estimates (good, better and best), but one can easily see that, even for the best of these, a statement which claims a parameter, say μ, to be close to 8.3, is not very informative, unless we can specify what close means. This is the purpose of a confidence interval, which requires quoting the estimate together with specific limits, e.g. 8.3 ± 0.1 (or 8.2 to 8.4, using an interval form). The limits are established to meet a certain (usually 95%) level of confidence (not a probability, since the statement does not involve any randomness - we are either 100% right, or 100% wrong!).

The level of confidence (1 − α in general) corresponds to the original, a-priori probability (i.e. before the sample is even taken) of the procedure to get it right (the probability is, as always, in the random sampling). To be able to calculate this probability exactly, we must know what distribution we are sampling from. So, until further notice, we will assume that the distribution is Normal.
CI for mean μ

We first assume that, even though μ is to be estimated (being unknown), we still know the exact (population) value of σ (based on past experience).

We know that

(X̄ − μ)/(σ/√n)   (6.1)

is standardized normal (usually denoted Z). This means that

Pr(−z_(α/2) < (X̄ − μ)/(σ/√n) < z_(α/2)) = 1 − α

where z_(α/2) (the so called critical value) is easily found from tables (such as the last row of Table IV). Note that in general

Pr(Z > z_a) = a

Usually, we need α/2 = 0.025, which corresponds to 95% probability (eventually called confidence).

The random variable of the last statement is clearly X̄ (before a sample is taken, and the value is computed). Assume now that the (random independent) sample has been taken, and X̄ has been computed to have a specific value (8.3 say). The inequality in parentheses is then either true or false - the only trouble is that it contains μ, whose value we don't know! We can thus solve it for μ, i.e.

X̄ − z_(α/2)·σ/√n < μ < X̄ + z_(α/2)·σ/√n

and interpret this as a 100(1−α)% confidence interval for the exact (still unknown) value of μ.
σ unknown

In this case, we have to replace σ by the next best thing, which is of course the sample standard deviation s. We know then that the distribution of

(X̄ − μ)/(s/√n)   (6.2)

changes from N(0, 1) to t_(n−1). This means that we also have to change z_(α/2) to t_(α/2,n−1); the rest remains the same. A 100(1−α)% confidence interval for μ is then constructed by

X̄ − t_(α/2,n−1)·s/√n < μ < X̄ + t_(α/2,n−1)·s/√n
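In practice the t-based interval is computed along these lines (a sketch assuming NumPy and SciPy; the data values are made up for illustration and not part of the original notes):

    import numpy as np
    from scipy import stats

    x = np.array([8.4, 8.1, 8.5, 8.2, 8.3, 8.6, 8.2, 8.4])   # hypothetical sample
    n, xbar, s = len(x), x.mean(), x.std(ddof=1)              # s uses the n-1 divisor
    alpha = 0.05
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)             # t_{alpha/2, n-1}
    half = t_crit * s / np.sqrt(n)
    print(f"95% CI for mu: {xbar - half:.3f} to {xbar + half:.3f}")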
Large-sample case

When n is large (n ≥ 30), there is little difference between z_(α/2) and t_(α/2,n−1), so we would use z_(α/2) in either case.

Furthermore, both (6.1) and (6.2) are approximately Normal even when the population is not (and regardless of what that distribution is). This means we can still construct an approximate confidence interval for μ (using σ if it's known, s when it isn't - z_(α/2) in either case).
Difference of two means

When two populations are Normal, with the same σ but potentially different μ, we already know (assuming the two samples to be independent) that

[X̄_1 − X̄_2 − (μ_1 − μ_2)] / [σ·√(1/n_1 + 1/n_2)]   (6.3)

is standardized normal (Z).

Proof. X̄_1 ~ N(μ_1, σ/√n_1) and X̄_2 ~ N(μ_2, σ/√n_2) implies X̄_1 − X̄_2 ~ N(μ_1 − μ_2, √(σ²/n_1 + σ²/n_2)).

The corresponding confidence interval for μ_1 − μ_2 is thus

X̄_1 − X̄_2 − z_(α/2)·σ·√(1/n_1 + 1/n_2) < μ_1 − μ_2 < X̄_1 − X̄_2 + z_(α/2)·σ·√(1/n_1 + 1/n_2)
When σ is not known, (6.3) changes to

[X̄_1 − X̄_2 − (μ_1 − μ_2)] / [s_p·√(1/n_1 + 1/n_2)]   (6.4)

where

s_p ≡ √{[(n_1 − 1)s_1² + (n_2 − 1)s_2²] / (n_1 + n_2 − 2)}

is called the pooled sample standard deviation.

(6.4) now has the t_(n_1+n_2−2) distribution.
Proof. We need to prove that (n_1 + n_2 − 2)·s_p²/σ² = [(n_1 − 1)s_1² + (n_2 − 1)s_2²]/σ² ~ χ²_(n_1+n_2−2) (automatically independent of X̄_1 − X̄_2). This follows from the fact that (n_1 − 1)s_1²/σ² ~ χ²_(n_1−1) and (n_2 − 1)s_2²/σ² ~ χ²_(n_2−1), and they are independent of each other.
The corresponding confidence interval is now

X̄_1 − X̄_2 − t_(α/2,n_1+n_2−2)·s_p·√(1/n_1 + 1/n_2) < μ_1 − μ_2 < X̄_1 − X̄_2 + t_(α/2,n_1+n_2−2)·s_p·√(1/n_1 + 1/n_2)
When the two σ's are not identical (but both known), we have

[X̄_1 − X̄_2 − (μ_1 − μ_2)] / √(σ_1²/n_1 + σ_2²/n_2) ~ N(0, 1)

and the corresponding confidence interval:

X̄_1 − X̄_2 − z_(α/2)·√(σ_1²/n_1 + σ_2²/n_2) < μ_1 − μ_2 < X̄_1 − X̄_2 + z_(α/2)·√(σ_1²/n_1 + σ_2²/n_2)
When the σ's are unknown (and have to be replaced by s_1 and s_2), we end up with a situation which has no simple distribution, unless both n_1 and n_2 are large. In that case, we (also) don't have to worry about the normality of the populations, and construct an approximate CI by:

X̄_1 − X̄_2 − z_(α/2)·√(s_1²/n_1 + s_2²/n_2) < μ_1 − μ_2 < X̄_1 − X̄_2 + z_(α/2)·√(s_1²/n_1 + s_2²/n_2)
Proportion(s)

Here, we construct a CI for the p parameter of a binomial distribution. This usually corresponds to sampling, from an infinite population with a certain percentage (or proportion) of special cases. We will deal only with the large-n situation.

The X_1, X_2, ..., X_n of our RIS now have values of either 1 (special case) or 0. This means that X̄ equals the sample proportion of special cases, also denoted by p̂. We know that p̂ is, for large n, approximately Normal, with mean p and standard deviation of √(p(1−p)/n). One can actually take it one small step further, and show that

(p̂ − p) / √(p̂(1−p̂)/n) ≈ N(0, 1)

One can thus construct an approximate CI for p by

p̂ − z_(α/2)·√(p̂(1−p̂)/n) < p < p̂ + z_(α/2)·√(p̂(1−p̂)/n)

Similarly, for a difference between two p values (having two independent samples), we get the following approximate CI

p̂_1 − p̂_2 − z_(α/2)·√(p̂_1(1−p̂_1)/n_1 + p̂_2(1−p̂_2)/n_2) < p_1 − p_2 < p̂_1 − p̂_2 + z_(α/2)·√(p̂_1(1−p̂_1)/n_1 + p̂_2(1−p̂_2)/n_2)
Variance(s)

We have to go back to assuming sampling from N(μ, σ). To construct a 100(1−α)% confidence interval for the population variance σ², we just have to recall that

(n−1)s²/σ² ~ χ²_(n−1)

This implies that

Pr(χ²_(1−α/2,n−1) < (n−1)s²/σ² < χ²_(α/2,n−1)) = 1 − α

where χ²_(1−α/2,n−1) and χ²_(α/2,n−1) are two critical values of the χ²_(n−1) distribution (Table V). This time, they are both positive, with no symmetry to help.

The corresponding confidence interval for σ² is then

(n−1)s²/χ²_(α/2,n−1) < σ² < (n−1)s²/χ²_(1−α/2,n−1)

(the bigger critical value first).

To construct a CI for σ, we would just take the square root of these.
σ ratio

A CI for a ratio of two σ's (not a very common thing to do) is based on

(s_1²/σ_1²) / (s_2²/σ_2²) ~ F_(n_1−1,n_2−1)

(assuming independent samples). This readily implies

Pr[F_(1−α/2,n_1−1,n_2−1) < (s_1²/σ_1²)/(s_2²/σ_2²) < F_(α/2,n_1−1,n_2−1)] = 1 − α

⇒ (s_1²/s_2²)·[1/F_(α/2,n_1−1,n_2−1)] < σ_1²/σ_2² < (s_1²/s_2²)·[1/F_(1−α/2,n_1−1,n_2−1)] = (s_1²/s_2²)·F_(α/2,n_2−1,n_1−1)

Note that

α/2 = Pr(F_(n_2−1,n_1−1) > F_(α/2,n_2−1,n_1−1)) = Pr(1/F_(n_1−1,n_2−1) > F_(α/2,n_2−1,n_1−1)) = Pr(F_(n_1−1,n_2−1) < 1/F_(α/2,n_2−1,n_1−1))

implies that

Pr(F_(n_1−1,n_2−1) > 1/F_(α/2,n_2−1,n_1−1) ≡ F_(1−α/2,n_1−1,n_2−1)) = 1 − α/2

The critical values are in Table VI (but only for 90% and 98% confidence levels)!
Chapter 7 TESTING HYPOTHESES
Suppose now that, instead of trying to estimate μ, we would like it to be equal to (or at least reasonably close to) some desired, specific value called μ_0. To test whether it is (the so called null hypothesis, say H_0: μ = 500) or is not (the alternate hypothesis H_A: μ ≠ 500) can be done, in this case, in one of two ways:

1. Construct the corresponding CI for μ, and see whether it contains 500 (if it does, accept H_0, otherwise, reject it). The corresponding α (usually 5%) is called the level of significance.

2. Compute the value of the so called test statistic

(X̄ − μ_0)/(σ/√n)

(it has the Z distribution only when H_0 is true) and see whether it falls in the corresponding acceptance region [−z_(α/2), z_(α/2)] or rejection region (outside this interval).

Clearly, the latter way of performing the test is equivalent to the former. Even though it appears more elaborate (actually, it is a touch easier computationally), it is the standard way to go.

Test statistics are usually constructed with the help of the corresponding likelihood function, something we learned about in the previous chapter.

There are two types of error we can make:

Rejecting H_0 when it's true - this is called Type I error.

Accepting it when it's false - Type II error.

The probability of making a Type I error is obviously equal to α (under our control).

The probability of a Type II error (β) depends on the actual value of μ (and α) - we can compute it and plot it as a function of μ (the OC curve) - when μ approaches (but is not equal to) μ_0, this error clearly reaches 1 − α. Equivalently, they sometimes plot 1 − β (the power function) instead - we like it big (close to 1).

Two notes concerning alternate hypotheses:

When H_A consists of (infinitely) many possibilities (such as our μ ≠ 500 example), it is called composite. This is the usual case.

When H_A considers only one specific possibility (e.g. μ = 400), it is called simple. In practice, this would be very unusual - we will not be too concerned with it here.
To us, a more important distinction is this:

Sometimes (as in our example), the alternate hypothesis has the ≠ sign, indicating that we don't like a deviation from μ_0 either way (e.g. 500 mg is the amount of aspirin we want in one pill - it should not be smaller, it should not be bigger) - this is called a two-sided hypothesis.

Frequently (this is actually even more common), we need to make sure that μ meets the specifications one way (the amount of coke in one bottle is posted as 350 mL, we want to avoid the possibility that μ < 350 - the one-sided alternate hypothesis). In this case, the null hypothesis is sometimes still stated in the old manner of H_0: μ = 350, sometimes they put it as H_0: μ ≥ 350. In any case, the null hypothesis must always have the = sign!

When the alternate hypothesis is one-sided, so is the corresponding rejection region (also called one-tailed), which would now consist of the (−∞, −z_α) interval - note that now a single tail gets the full α. Note that now the correspondence between this test and a CI for μ becomes more complicated (we would normally not use a CI in this case).

In either case (one or two sided), there is yet another alternate (but equivalent) way of performing the test (bypassing the critical region). It works like this:

For a two-sided test, we compute the value of the test statistic (let us call it t), which is then converted into the so called P-value, thus:

P = 2·Pr(Z > |t|)

When this P value is less than α, we reject H_0 (accept otherwise).

For a one-sided test, whenever t is on the H_0 side, we accept H_0 without having to compute anything. When t is on the H_A side, we compute

P = Pr(Z > |t|)

and reject H_0 when P is smaller than α, accept otherwise.
Tests concerning mean(s)

We need to specify the assumptions, null hypothesis, test statistic, and its distribution (under H_0) - the rest is routine.

Assume                                    | H_0        | T                                      | Distribution of T
Normal population, σ known                | μ = μ_0    | (X̄ − μ_0)/(σ/√n)                      | Z
Normal population, σ unknown              | μ = μ_0    | (X̄ − μ_0)/(s/√n)                      | t_(n−1)
Any population, large n, σ unknown        | μ = μ_0    | (X̄ − μ_0)/(s/√n)                      | Z
Two Normal populations, same unknown σ    | μ_1 = μ_2  | (X̄_1 − X̄_2)/(s_p·√(1/n_1 + 1/n_2))   | t_(n_1+n_2−2)
Concerning variance(s)

Assume                    | H_0         | T               | Distribution of T
Normal population         | σ = σ_0     | (n−1)s²/σ_0²    | χ²_(n−1)
Two Normal populations    | σ_1 = σ_2   | s_1²/s_2²       | F_(n_1−1,n_2−1)

Concerning proportion(s)

Assume                       | H_0                    | T                                          | Distribution of T
One population, large n      | p = p_0                | (p̂ − p_0)/√(p_0(1−p_0)/n)                 | Z (approximate)
k populations, large samples | p_1 = p_2 = ... = p_k  | Σ_(i=1)^k n_i(p̂_i − p̂)² / [p̂(1−p̂)]      | χ²_(k−1) (approximate)

where p̂ (without a subscript) denotes the pooled (overall) sample proportion.
Contingency tables

Here, we have two (nominal scale) attributes (e.g. cities and detergent brands - see your textbook), and we want to know whether they are independent (i.e. customers in different cities having the same detergent preferences - H_0) or not (H_A).

Example 1

             Brand A   Brand B   Brand C
Montreal         87        62        12
Toronto         120        96        23
Vancouver        57        49         9

The numbers are called observed frequencies, denoted o_ij (i is the row, j the column label).

First, we have to compute the corresponding expected frequencies (assuming independence) by

e_ij = (Σ_(j=1)^c o_ij)·(Σ_(i=1)^r o_ij) / (Σ_(i=1)^r Σ_(j=1)^c o_ij)

i.e. (row total × column total)/(grand total). To be able to proceed, these must all be bigger than 5.

The test statistic equals

Σ_(ij) (o_ij − e_ij)²/e_ij

and has (under H_0), approximately, the χ²_((r−1)(c−1)) distribution, where r (c) is the number of rows (columns) respectively.
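For the detergent example, the whole computation can be scripted as follows (a sketch assuming NumPy; not part of the original notes; the resulting T is to be compared with the χ² critical value for 4 degrees of freedom):

    import numpy as np

    o = np.array([[87, 62, 12],
                  [120, 96, 23],
                  [57, 49, 9]], dtype=float)          # observed frequencies
    row, col, total = o.sum(axis=1), o.sum(axis=0), o.sum()
    e = np.outer(row, col) / total                    # expected frequencies
    T = ((o - e) ** 2 / e).sum()                      # chi-square test statistic
    df = (o.shape[0] - 1) * (o.shape[1] - 1)          # (r-1)(c-1) = 4
    print(T, df)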
Goodness of fit

This time, we are testing whether a random variable has a specific distribution, say Poisson (we will stick to discrete cases).

First, based on the data, we estimate the value of each unknown parameter (this distribution has only one), based on what we learned in the previous chapter. Then, we compute the expected frequency of each possible outcome (all integers, in this case), by

e_i = n·f(i|λ̂) = n·(λ̂^i/i!)·e^(−λ̂)     i = 0, 1, 2, ...

where n is the total frequency and λ̂ = Σ_i i·o_i / Σ_i o_i is the usual estimator. We have to make sure that none of the expected frequencies is less than 5 (otherwise, we pool outcomes to achieve this).

The test statistic is

T = Σ_i (o_i − e_i)²/e_i

Under H_0 (which now states: the distribution is Poisson, with unspecified λ), T has the χ² distribution with m − 1 − p degrees of freedom, where m is the number of possible outcomes (after pooling), and p is the number of (unspecified) parameters to be estimated based on the original data (in this case, p = 1).
Chapter 8 LINEAR REGRESSION
AND CORRELATION
We will first consider the case of having one independent (regressor) variable, called x, and a dependent (response) variable y. This is called

Simple regression

The model is

y_i = β_0 + β_1·x_i + ε_i   (8.1)

where i = 1, 2, ..., n, making the following assumptions:

1. The values of x are measured exactly, with no random error. This is usually so when we can choose them at will.

2. The ε_i are normally distributed, independent of each other (uncorrelated), having the expected value of 0 and variance equal to σ² (the same for each of them, regardless of the value of x_i). Note that the actual value of σ is usually not known.

The two regression coefficients β_1 and β_0 are called the slope and intercept. Their actual values are also unknown, and need to be estimated using the empirical data at hand.

To find such estimators, we use the

Maximum likelihood method

which is almost always the best tool for this kind of task. It guarantees to yield estimators which are asymptotically unbiased, having the smallest possible variance. It works as follows:

1. We write down the joint probability density function of the y_i's (note that these are random variables).

2. Considering it a function of the parameters (β_0, β_1 and σ in this case) only (i.e. freezing the y_i's at their observed values), we maximize it, using the usual techniques. The values of β_0, β_1 and σ to yield the maximum value of this so called Likelihood function (usually denoted by β̂_0, β̂_1 and σ̂) are the actual estimators (note that they will be functions of x_i and y_i).

Note that instead of maximizing the likelihood function itself, we may choose to maximize its logarithm (which must yield the same β̂_0, β̂_1 and σ̂).
Least-squares technique
In our case, the Likelihood function is:

L = [1/(σ√(2π))^n] ∏_(i=1)^n exp[−(y_i − β_0 − β_1x_i)²/(2σ²)]

and its logarithm:

ln L = −(n/2)·ln(2π) − n ln σ − [1/(2σ²)]·Σ_(i=1)^n (y_i − β_0 − β_1x_i)²

To maximize this expression, we first differentiate it with respect to σ, and make the result equal to zero. This yields:

σ̂_m = √[Σ_(i=1)^n (y_i − β̂_0 − β̂_1x_i)² / n]

where β̂_0 and β̂_1 are the values of β_0 and β_1 which minimize

SS_e ≡ Σ_(i=1)^n (y_i − β_0 − β_1x_i)²

namely the sum of squares of the vertical deviations (or residuals) of the y_i values from the fitted straight line (this gives the technique its name).
To find β̂_0 and β̂_1, we have to differentiate SS_e, separately, with respect to β_0 and β_1, and set each of the two answers to zero. This yields:

Σ_(i=1)^n (y_i − β_0 − β_1x_i) = Σ_(i=1)^n y_i − n·β_0 − β_1·Σ_(i=1)^n x_i = 0

and

Σ_(i=1)^n x_i(y_i − β_0 − β_1x_i) = Σ_(i=1)^n x_iy_i − β_0·Σ_(i=1)^n x_i − β_1·Σ_(i=1)^n x_i² = 0

or equivalently, the following so called

Normal equations

n·β_0 + β_1·Σ_(i=1)^n x_i = Σ_(i=1)^n y_i

β_0·Σ_(i=1)^n x_i + β_1·Σ_(i=1)^n x_i² = Σ_(i=1)^n x_iy_i

They can be solved easily for β_0 and β_1 (at this point we can start calling them β̂_0 and β̂_1):

β̂_1 = [n·Σ x_iy_i − (Σ x_i)(Σ y_i)] / [n·Σ x_i² − (Σ x_i)²] = Σ (x_i − x̄)(y_i − ȳ) / Σ (x_i − x̄)² ≡ S_xy/S_xx

and

β̂_0 = ȳ − β̂_1·x̄   (8.2)

meaning that the regression line passes through the (x̄, ȳ) point, where

x̄ ≡ Σ_(i=1)^n x_i / n   and   ȳ ≡ Σ_(i=1)^n y_i / n

Each of β̂_0 and β̂_1 is clearly a linear combination of normally distributed random variables; their joint distribution is thus of the bivariate normal type.
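The closed-form estimates amount to a few lines of code (a sketch assuming NumPy; the x and y values below are made up for illustration and are not part of the original notes):

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
    y = np.array([2.1, 2.9, 3.8, 5.2, 5.9, 7.1])
    n = len(x)
    Sxx = np.sum((x - x.mean()) ** 2)
    Sxy = np.sum((x - x.mean()) * (y - y.mean()))
    b1 = Sxy / Sxx                 # slope estimate
    b0 = y.mean() - b1 * x.mean()  # intercept estimate
    SSe = np.sum((y - b0 - b1 * x) ** 2)
    MSe = SSe / (n - 2)            # the unbiased estimator of sigma^2 introduced further on
    print(b0, b1, MSe)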
Statistical properties of the estimators
First, we should realize that it is the y_i (not x_i) which are random, due to the ε_i term in (8.1) - both β_0 and β_1 are also fixed, albeit unknown, parameters. Clearly then

E(y_i − ȳ) = β_0 + β_1x_i − (β_0 + β_1x̄) = β_1(x_i − x̄)

which implies

E(β̂_1) = Σ (x_i − x̄)·E(y_i − ȳ) / Σ (x_i − x̄)² = β_1

Similarly, since E(ȳ) = β_0 + β_1x̄, we get

E(β̂_0) = β_0 + β_1x̄ − β_1x̄ = β_0

Both β̂_0 and β̂_1 are thus unbiased estimators of β_0 and β_1, respectively.

To find their respective variances, we first note that

β̂_1 = Σ (x_i − x̄)(y_i − ȳ) / Σ (x_i − x̄)² ≡ Σ (x_i − x̄)·y_i / Σ (x_i − x̄)²

(right?), based on which

Var(β̂_1) = Σ (x_i − x̄)²·Var(y_i) / [Σ (x_i − x̄)²]² = σ²·S_xx/S_xx² = σ²/S_xx

From (8.2) we get Var(β̂_0) = Var(ȳ) − 2x̄·Cov(ȳ, β̂_1) + x̄²·Var(β̂_1), so now we need

Var(ȳ) = Var(ε̄) = σ²/n
and

Cov(ȳ, β̂_1) = Cov(Σ ε_i/n, Σ (x_i − x̄)ε_i/S_xx) = σ²·Σ (x_i − x̄)/(n·S_xx) = 0

(uncorrelated). Putting these together yields:

Var(β̂_0) = σ²·(1/n + x̄²/S_xx)

The covariance between β̂_0 and β̂_1 is thus equal to −x̄·Var(β̂_1), and their correlation coefficient is

−1 / √(1 + S_xx/(n·x̄²))
Both variance formulas contain σ², which, in most situations, must be replaced by its ML estimator

σ̂²_m = Σ_(i=1)^n (y_i − β̂_0 − β̂_1x_i)² / n ≡ SS_e/n

where the numerator defines the so called residual (error) sum of squares. It can be rewritten in the following form (replacing β̂_0 by ȳ − β̂_1x̄):

SS_e = Σ_(i=1)^n (y_i − ȳ + β̂_1x̄ − β̂_1x_i)² = Σ_(i=1)^n [y_i − ȳ + β̂_1(x̄ − x_i)]² = S_yy − 2β̂_1S_xy + β̂_1²S_xx = S_yy − 2(S_xy/S_xx)·S_xy + (S_xy/S_xx)²·S_xx = S_yy − S_xy²/S_xx = S_yy − β̂_1²S_xx

Based on (8.1) and ȳ = β_0 + β_1x̄ + ε̄ (from now on, we have to be very careful to differentiate between β_0 and β̂_0, etc.), we get

E(S_yy) = E{Σ_(i=1)^n [β_1(x_i − x̄) + (ε_i − ε̄)]²} = β_1²S_xx + σ²(n−1)

(the last term was derived in MATH 2F81). Furthermore,

E(β̂_1²) = Var(β̂_1) + [E(β̂_1)]² = σ²/S_xx + β_1²

Combining the two, we get

E(SS_e) = σ²(n−2)

Later on, we will be able to prove that SS_e/σ² has the χ² distribution with n−2 degrees of freedom. It is also independent of each of β̂_0 and β̂_1.
This means that there is a slight bias in the σ̂²_m estimator of σ² (even though the bias disappears in the n → ∞ limit - such estimators are called asymptotically unbiased). We can easily fix this by defining a new, fully unbiased

σ̂² = SS_e/(n−2) ≡ MS_e

(the so called mean square) to be used instead of σ̂²_m from now on.

All of this implies that both

(β̂_0 − β_0) / √[MS_e·(1/n + x̄²/S_xx)]   and   (β̂_1 − β_1) / √(MS_e/S_xx)   (8.3)

have the Student t distribution with n−2 degrees of freedom. This can be used either to construct the so called confidence interval for either β_0 or β_1, or to test any hypothesis concerning β_0 or β_1.
Confidence intervals

Knowing that (8.3) has the t_(n−2) distribution, we must then find two values (called critical) such that the probability of (8.3) falling inside the corresponding interval (between the two values) is 1−α. At the same time, we would like to have the interval as short as possible. This means that we will be choosing the critical values symmetrically around 0; the positive one will equal t_(α/2,n−2), the negative one −t_(α/2,n−2) (the first index now refers to the area of the remaining tail of the distribution) - these critical values are widely tabulated.

The statement that (8.3) falls in the interval between the two critical values of t_(n−2) is equivalent (solve the corresponding equation for β_1) to saying that the value of β_1 is in the following range:

β̂_1 ± t_(α/2,n−2)·√(MS_e/S_xx)

which is our (1−α)·100% confidence interval.

Similarly, we can construct a (1−α) level-of-confidence interval for β_0, thus:

β̂_0 ± t_(α/2,n−2)·√[MS_e·(1/n + x̄²/S_xx)]

Since β̂_0 and β̂_1 are not independent, making a joint statement about the two (with a specific level of confidence) is more complicated (one has to construct a confidence ellipse, to make it correct).
Constructing a (1−α) confidence interval for σ² is a touch more complicated. Since SS_e/σ² has the χ²_(n−2) distribution, we must first find the corresponding two critical values. Unfortunately, the χ² distribution is not symmetric, so for these two we have to take χ²_(α/2,n−2) and χ²_(1−α/2,n−2). Clearly, the probability of a χ²_(n−2) random variable falling between the two values equals 1−α. The resulting interval may not be the shortest of all these, but we are obviously quite close to the right solution; furthermore, the choice of how to divide α between the two tails remains simple and logical.

Solving for σ² yields

(SS_e/χ²_(α/2,n−2),  SS_e/χ²_(1−α/2,n−2))

as the corresponding (1−α)·100% confidence interval.
Correlation
Suppose now that both x and y are random, normally distributed with (bivariate) parameters μ_x, μ_y, σ_x, σ_y and ρ. We know that the conditional distribution of y given x is also (univariate) normal, with the following conditional mean and variance:

μ_y + ρ·(σ_y/σ_x)·(x − μ_x) ≡ β_0 + β_1x   (8.4)

σ_y²·(1 − ρ²)

The usual β̂_0 and β̂_1 estimators are still the best (maximizing the likelihood function), but their statistical properties are now substantially more complicated.

Historical comment: Note that by reversing the rôle of x and y (which is now quite legitimate - the two variables are treated as equals by this model), we get the following regression line:

μ_(x|y) = μ_x + ρ·(σ_x/σ_y)·(y − μ_y)

One can easily see that this line is inconsistent with (8.4) - it is a lot steeper when plotted on the same graph. Ordinary regression thus tends, in this case, to distort the true relationship between x and y, making it either more flat or more steep, depending on which variable is taken to be the independent one.

Thus, for example, if x is the height of fathers and y that of sons, the regression line will have a slope less than 45 degrees, implying a false averaging trend (regression towards the mean, as it was originally called - and the name, even though ultimately incorrect, stuck). The fallacy of this argument was discovered as soon as someone got the bright idea to fit y against x, which would then, still falsely, imply a tendency towards increasing diversity.

One can show that the ML technique would use the usual x̄ and ȳ to estimate μ_x and μ_y, √(S_xx/(n−1)) and √(S_yy/(n−1)) (after unbiasing) to estimate σ_x and σ_y, and

r ≡ S_xy / √(S_xx·S_yy)   (8.5)

as an estimator of ρ (for some strange reason, they like calling the estimator r rather than the usual ρ̂). This relates to the fact that S_xy/(n−1) is an unbiased estimator of Cov(X, Y).
Proof.
E{Σ_(i=1)^n [x_i − μ_x − (x̄ − μ_x)]·[y_i − μ_y − (ȳ − μ_y)]}
= Σ_(i=1)^n [Cov(X,Y) − Cov(X,Y)/n − Cov(X,Y)/n + Cov(X,Y)/n]
= n·Cov(X,Y)·(1 − 1/n) = Cov(X,Y)·(n−1)
Investigating statistical properties of r, β̂_0 and β̂_1 exactly is now short of impossible (mainly because of dividing by S_xx, which is random) - now we have to resort to the large-sample approach, to derive asymptotic formulas only (i.e. expanded in powers of 1/n), something we will take up shortly.

This is also how one can show that arctanh(r) approaches, for large n, the Normal distribution (with the mean of arctanh(ρ) + ρ/(2n) + ⋯ and variance of 1/(n−3) + ⋯) a lot faster than r itself. Utilizing this, we construct an approximate CI for arctanh(ρ):

arctanh(r) − r/(2n) ± z_(α/2)/√(n−3)

and consequently for ρ (take tanh of each limit).
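A sketch of the resulting CI for ρ (assuming NumPy and SciPy; not part of the original notes; the r and n values passed in at the end are made up for illustration):

    import numpy as np
    from scipy import stats

    def rho_ci(r, n, alpha=0.05):
        """Approximate CI for rho via the arctanh (Fisher z) transformation."""
        z_crit = stats.norm.ppf(1 - alpha / 2)
        centre = np.arctanh(r) - r / (2 * n)     # bias correction as quoted in the notes
        half = z_crit / np.sqrt(n - 3)
        return np.tanh(centre - half), np.tanh(centre + half)

    print(rho_ci(r=0.6, n=40))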
Squaring the r estimator yields the so called coefficient of determination
r
2
=
S
yy
S
yy
+
S
2
xy
S
xx
S
yy
= 1
SS
E
S
yy
which tells us how much of the original y variance has been removed by tting the
best straight line.
Multiple regression
This time, we have k independent (regressor) variables x_1, x_2, ..., x_k; still only one dependent (response) variable y. The model is

y_i = β_0 + β_1·x_(1,i) + β_2·x_(2,i) + ⋯ + β_k·x_(k,i) + ε_i

with i = 1, 2, ..., n, where the first index labels the variable, and the second the observation. It is more convenient now to switch to using the following matrix notation

y = Xβ + ε

where y and ε are (column) vectors of length n, β is a (column) vector of length k+1, and X is an n by k+1 matrix of observations (with its first column having all elements equal to 1, the second column being filled by the observed values of x_1, etc.). Note that the exact values of β and ε are, and will always remain, unknown to us (thus, they must not appear in any of our computational formulas).
To minimize the sum of squares of the residuals (a scalar quantity), namely

(y − Xβ)ᵀ(y − Xβ) = yᵀy − yᵀXβ − βᵀXᵀy + βᵀXᵀXβ

(note that the second and third terms are identical - why?), we differentiate it with respect to each element of β. This yields the following vector:

−2Xᵀy + 2XᵀXβ

Making these equal to zero provides the following maximum likelihood (least square) estimators of the regression parameters:

β̂ = (XᵀX)⁻¹Xᵀy = β + (XᵀX)⁻¹Xᵀε

whose variance-covariance matrix is therefore

σ²·(XᵀX)⁻¹XᵀX(XᵀX)⁻¹ = σ²·(XᵀX)⁻¹
The fitted values of y (let us call them ŷ) are computed by

ŷ = Xβ̂ = Xβ + X(XᵀX)⁻¹Xᵀε ≡ Xβ + Hε

where H is clearly symmetric and idempotent (i.e. H² = H). Note that HX = X.

This means that the residuals e_i are computed by

e = y − ŷ = (I − H)ε

(I − H is also idempotent). Furthermore, the covariance (matrix) between the elements of β̂ and those of e is:

E[(β̂ − β)·eᵀ] = E[(XᵀX)⁻¹Xᵀε·εᵀ(I − H)] = σ²·(XᵀX)⁻¹Xᵀ(I − H) = O

which means that the variables are uncorrelated and therefore independent (i.e. each of the regression-coefficient estimators is independent of each of the residuals - slightly counter-intuitive but correct nevertheless).
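In matrix form the whole fit is one line of linear algebra (a sketch assuming NumPy; the data below are randomly generated for illustration and not part of the original notes):

    import numpy as np

    rng = np.random.default_rng(14)
    n, k = 50, 2
    X = np.column_stack([np.ones(n), rng.uniform(size=(n, k))])   # first column of ones
    beta_true = np.array([1.0, 2.0, -3.0])
    y = X @ beta_true + rng.normal(scale=0.5, size=n)

    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)    # (X'X)^{-1} X'y
    e = y - X @ beta_hat                            # residuals
    SSe = e @ e                                     # residual (error) sum of squares
    MSe = SSe / (n - k - 1)                         # unbiased estimate of sigma^2
    print(beta_hat, MSe)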
The sum of squares of the residuals, namely e
T
e, is equal to
T
(I H)
T
(I H) =
T
(I H)
73
Divided by
2
:
T
(I H)
2
Z
T
(I H)Z
where Z are standardized, independent and normal.
We know (from matrix theory) that any symmetric matrix (including our $I-H$) can be written as $R^TDR$, where D is diagonal and R is orthogonal (implying $R^T=R^{-1}$). We can then rewrite the previous expression as
$$Z^TR^TDRZ=\widetilde Z^TD\widetilde Z$$
where $\widetilde Z\equiv RZ$ is still a set of standardized, independent Normal random variables (since its variance-covariance matrix equals I). Its distribution is thus $\chi^2$ if and only if the diagonal elements of D are all equal either to 0 or 1 (the number of degrees being equal to the trace of D).
How can we tell whether this is true for our $I-H$ matrix (when expressed in the $R^TDR$ form) without actually performing the diagonalization (a fairly tricky process)? Well, such a test is not difficult to design, once we notice that $(I-H)^2=R^TDRR^TDR=R^TD^2R$. Clearly, D has the proper form (only 0 or 1 on the main diagonal) if and only if $D^2=D$, which is the same as saying that $(I-H)^2=I-H$ (which we already know is true). This then implies that the sum of squares of the residuals has a $\chi^2$ distribution. Now, how about its degrees of freedom? Well, since the trace of D is the same as the trace of $R^TDR$ (a well known property of the trace), we just have to find the trace of $I-H$, by
$$\operatorname{Tr}[I-H]=\operatorname{Tr}(I_{n\times n})-\operatorname{Tr}(H)=n-\operatorname{Tr}\bigl[X(X^TX)^{-1}X^T\bigr]=n-\operatorname{Tr}\bigl[(X^TX)^{-1}X^TX\bigr]=n-\operatorname{Tr}\bigl[I_{(k+1)\times(k+1)}\bigr]=n-(k+1)$$
i.e. the number of observations minus the number of regression coefficients.
The sum of squares of the residuals is usually denoted $SS_E$ (for error sum of squares, even though it is usually called the residual sum of squares) and computed by
$$(y-X\hat\beta)^T(y-X\hat\beta)=y^Ty-y^TX\hat\beta-\hat\beta^TX^Ty+\hat\beta^TX^TX\hat\beta=y^Ty-y^TX\hat\beta-\hat\beta^TX^Ty+\hat\beta^TX^Ty=y^Ty-y^TX\hat\beta\equiv y^Ty-\hat\beta^TX^Ty$$
We have just proved that $\frac{SS_E}{\sigma^2}$ has the $\chi^2$ distribution with n − (k + 1) degrees of freedom, and is independent of $\hat\beta$. A related definition is that of the residual (error) mean square
$$MS_E\equiv\frac{SS_E}{n-(k+1)}$$
This would clearly be our unbiased estimator of $\sigma^2$.
Various standard errors
We would thus construct a confidence interval for any one of the coefficients, say $\beta_j$, by
$$\hat\beta_j\pm t_{\frac{\alpha}{2},\,n-k-1}\sqrt{C_{jj}\,MS_E}$$
where $C\equiv(X^TX)^{-1}$.
Similarly, to test a hypothesis concerning a single $\beta_j$ (say $H_0:\beta_j=\beta_{j0}$), we would use
$$\frac{\hat\beta_j-\beta_{j0}}{\sqrt{C_{jj}\,MS_E}}$$
as the test statistic.
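As an illustration only (our sketch, not the notes'), the same standard errors in Python; the name coefficient_summary and the simulated data are hypothetical.

```python
import numpy as np
from scipy import stats

def coefficient_summary(X, y, alpha=0.05):
    """t-based CIs b_j +/- t_{alpha/2, n-k-1} sqrt(C_jj MS_E) and the
    test statistics b_j / sqrt(C_jj MS_E) for H0: beta_j = 0."""
    n, p = X.shape                      # p = k + 1
    C = np.linalg.inv(X.T @ X)
    b = C @ X.T @ y
    sse = y @ y - b @ X.T @ y           # SS_E = y^T y - b^T X^T y
    mse = sse / (n - p)                 # residual (error) mean square
    se = np.sqrt(np.diag(C) * mse)      # standard error of each b_j
    t_crit = stats.t.ppf(1 - alpha / 2, n - p)
    ci = np.column_stack([b - t_crit * se, b + t_crit * se])
    return ci, b / se

rng = np.random.default_rng(3)          # hypothetical data
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.7, size=n)
X = np.column_stack([np.ones(n), x1, x2])
print(coefficient_summary(X, y))
```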
Since the variance-covariance matrix of $\hat\beta$ is $\sigma^2(X^TX)^{-1}$, we know that
$$\frac{(\hat\beta-\beta)^TX^TX(\hat\beta-\beta)}{\sigma^2}$$
has the $\chi^2_{k+1}$ distribution. Furthermore, since the $\hat\beta$'s are independent of the residuals,
$$\frac{\dfrac{(\hat\beta-\beta)^TX^TX(\hat\beta-\beta)}{k+1}}{\dfrac{SS_E}{n-k-1}}$$
must have the $F_{k+1,\,n-k-1}$ distribution. This enables us to construct a confidence ellipse (ellipsoid) simultaneously for all parameters or, correspondingly, perform a single test of $H_0:\beta=\beta_0$.
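A hedged sketch (ours, not the notes') of that single F test in Python; joint_f_test and beta0 are hypothetical names, and the data are simulated.

```python
import numpy as np
from scipy import stats

def joint_f_test(X, y, beta0):
    """F = [(b-beta0)^T X^T X (b-beta0)/(k+1)] / [SS_E/(n-k-1)],
    compared with the F_{k+1, n-k-1} distribution."""
    n, p = X.shape                      # p = k + 1
    C = np.linalg.inv(X.T @ X)
    b = C @ X.T @ y
    sse = y @ y - b @ X.T @ y
    diff = b - np.asarray(beta0, float)
    F = (diff @ (X.T @ X) @ diff / p) / (sse / (n - p))
    return F, stats.f.sf(F, p, n - p)   # statistic and P-value

rng = np.random.default_rng(4)          # hypothetical data
n = 40
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 0.5 * x2 + rng.normal(scale=0.7, size=n)
X = np.column_stack([np.ones(n), x1, x2])
print(joint_f_test(X, y, beta0=[1.0, 2.0, -0.5]))
```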
Chapter 9 ANALYSIS OF VARIANCE
One-way ANOVA
Suppose we have k Normal populations, having the same $\sigma$, but potentially different means $\mu_1,\mu_2,\ldots,\mu_k$. We want to test whether all these means are identical (the null hypothesis) or not (alternate).
We do this by first selecting a random independent sample of size n, independently from each of the k populations (note that, to simplify matters, we use the same sample size for all), and estimating each of the $\mu_i$'s by
$$\bar X_{(i)}\equiv\frac{\sum_{j=1}^{n}X_{ij}}{n}$$
(these of course will always be different from each other).
Secondly (to decide whether what we see is what we get), we need a test statistic which meets the following two conditions:
1. It is sensitive to any deviation from the null hypothesis (it should return a small value when $H_0$ holds, a large value otherwise).
2. It has a known distribution under $H_0$ (to decide what is small and what is large).
The RV which meets these objectives is
$$T=\frac{n\,\dfrac{\sum_{i=1}^{k}\bigl(\bar X_{(i)}-\bar{\bar X}\bigr)^2}{k-1}}{\dfrac{\sum_{i=1}^{k}s^2_{(i)}}{k}}\qquad(9.1)$$
where $\bar{\bar X}$ is the grand mean (mean of means) of all the nk observations put together, and $\bar X_{(i)}$ and $s^2_{(i)}$ are the individual sample means and variances, where i = 1, 2, ..., k. Let us recall that $\bar X_{(i)}\sim N(\mu_i,\frac{\sigma}{\sqrt n})$ and $\frac{n-1}{\sigma^2}s^2_{(i)}\sim\chi^2_{n-1}$, for each i.
Note that the numerator of the formula will be small when the population means are identical (the sample means will be close to each other, and to their grand mean), becoming large when they are not. On the other hand, the denominator of the formula (the average of the individual sample variances) merely estimates the common $\sigma^2$, and is totally insensitive to potential differences between the population means.
To figure out the distribution of this test statistic when $H_0$ is true, we notice that $\bar X_{(1)},\bar X_{(2)},\ldots,\bar X_{(k)}$ effectively constitute a RIS of size k from $N(\mu,\frac{\sigma}{\sqrt n})$, and $\frac{\sum_{i=1}^{k}(\bar X_{(i)}-\bar{\bar X})^2}{k-1}$ is thus the corresponding sample variance. This means that
$$\frac{\sum_{i=1}^{k}\bigl(\bar X_{(i)}-\bar{\bar X}\bigr)^2}{\sigma^2/n}$$
has the $\chi^2_{k-1}$ distribution.
Similarly, $\frac{(n-1)\sum_{i=1}^{k}s^2_{(i)}}{\sigma^2}$ is just a sum of k independent $\chi^2_{n-1}$ RVs, whose distribution is $\chi^2_{k(n-1)}$ (the degrees of freedom simply add up). The ratio of $\frac{n}{\sigma^2}\cdot\frac{\sum_{i=1}^{k}(\bar X_{(i)}-\bar{\bar X})^2}{k-1}$ to $\frac{1}{k\sigma^2}\sum_{i=1}^{k}s^2_{(i)}$ (note that $\sigma^2$ cancels out, leading to T) has therefore the distribution of
$$\frac{\chi^2_{k-1}/(k-1)}{\chi^2_{k(n-1)}/\bigl(k(n-1)\bigr)}\equiv F_{k-1,\,k(n-1)}$$
The only thing left to do is to figure out some efficient way to compute T (this used to be important in the pre-computer days, but even current textbooks cannot leave it alone - like those silly tables of the Poisson distribution in the Appendix). It is not difficult to figure out that
$$\sum_{i=1}^{k}\sum_{j=1}^{n}(X_{ij}-\bar{\bar X})^2=n\sum_{i=1}^{k}\bigl(\bar X_{(i)}-\bar{\bar X}\bigr)^2+(n-1)\sum_{i=1}^{k}s^2_{(i)}$$
or $SS_T=SS_B+SS_W$, where the subscripts stand for total, between (or treatment) and within (or error) sum of squares, respectively.
Furthermore, one can show that
$$SS_T=\sum_{i=1}^{k}\sum_{j=1}^{n}X_{ij}^2-\frac{\Bigl(\sum_{i=1}^{k}\sum_{j=1}^{n}X_{ij}\Bigr)^2}{kn}\qquad(9.2)$$
and
$$SS_B=\frac{\sum_{i=1}^{k}\Bigl(\sum_{j=1}^{n}X_{ij}\Bigr)^2}{n}-\frac{\Bigl(\sum_{i=1}^{k}\sum_{j=1}^{n}X_{ij}\Bigr)^2}{kn}$$
which is how these two quantities are efficiently computed (with $SS_W=SS_T-SS_B$).
The whole computation (of T) is then summarized in the following table:

Source    df        SS      MS                       T
Between   k-1       SS_B    MS_B = SS_B / (k-1)      MS_B / MS_W
Within    k(n-1)    SS_W    MS_W = SS_W / (k(n-1))
Total     kn-1      SS_T
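As a sketch of the shortcut formulas above (our illustration, not the notes'; the helper one_way_anova and the simulated data are hypothetical), with scipy's f_oneway used only as a cross-check:

```python
import numpy as np
from scipy import stats

def one_way_anova(X):
    """One-way ANOVA with k rows (populations) of n observations each,
    using the shortcut formulas for SS_T and SS_B; returns T and its P-value."""
    X = np.asarray(X, float)
    k, n = X.shape
    grand = X.sum()
    ss_t = (X ** 2).sum() - grand ** 2 / (k * n)
    ss_b = (X.sum(axis=1) ** 2).sum() / n - grand ** 2 / (k * n)
    ss_w = ss_t - ss_b
    T = (ss_b / (k - 1)) / (ss_w / (k * (n - 1)))   # MS_B / MS_W
    return T, stats.f.sf(T, k - 1, k * (n - 1))

rng = np.random.default_rng(5)                       # hypothetical samples
data = rng.normal(loc=[[10.0], [10.5], [9.8]], scale=1.0, size=(3, 12))
print(one_way_anova(data))
print(stats.f_oneway(*data))                         # should agree (up to rounding)
```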
Two-way ANOVA
In the previous section, the population index (i = 1, 2, ..., k) can be seen as a (nominal scale) variable, which is, in this context, called a factor (e.g. labelling the city from which the observation is taken). In some situations, we may need more than one factor (e.g. a second, racial factor: white, black, Hispanic) - we will only discuss how to deal with two. (For the sake of example, we will take the response variable X to represent a person's salary.)

No interaction
Our design will first:
1. assume that there is no interaction between the factors (meaning that racial biases - if they exist - do not vary from city to city).
2. randomly select only one representative for each cell (one employee of each race from every city).
The former implies that $X_{ij}\sim N(\mu+\alpha_i+\beta_j,\sigma)$, where $\sum_{i=1}^{k}\alpha_i=0$ and $\sum_{j=1}^{m}\beta_j=0$ (k and m being the number of levels of the first and second factor, respectively).
To estimate the individual parameters, we would clearly use
$$\hat\mu=\bar{\bar X}\equiv\frac{\sum_{j=1}^{m}\sum_{i=1}^{k}X_{ij}}{mk}$$
$$\hat\alpha_i=\bar X_{(i)}-\bar{\bar X}\equiv\frac{\sum_{j=1}^{m}X_{ij}}{m}-\bar{\bar X}$$
$$\hat\beta_j=\bar X_{(j)}-\bar{\bar X}\equiv\frac{\sum_{i=1}^{k}X_{ij}}{k}-\bar{\bar X}$$
This time, we can test several null hypotheses at once: one stating that all the $\alpha$'s equal zero (no difference between cities), another claiming the same for the $\beta$'s (no difference between racial groups), and the last one setting them all ($\alpha$'s and $\beta$'s) to zero.
The total sum of squares is computed as before (see 9.2), except that n changes to m. Similarly, one can show that now
$$SS_T=SS_A+SS_B+SS_E\qquad(9.3)$$
where $SS_A$ ($SS_B$) is the sum of squares due to the first (second) factor, computed by
$$SS_A=m\sum_{i=1}^{k}\hat\alpha_i^2=\frac{\sum_{i=1}^{k}\Bigl(\sum_{j=1}^{m}X_{ij}\Bigr)^2}{m}-\frac{\Bigl(\sum_{i=1}^{k}\sum_{j=1}^{m}X_{ij}\Bigr)^2}{km}$$
$$SS_B=k\sum_{j=1}^{m}\hat\beta_j^2=\frac{\sum_{j=1}^{m}\Bigl(\sum_{i=1}^{k}X_{ij}\Bigr)^2}{k}-\frac{\Bigl(\sum_{i=1}^{k}\sum_{j=1}^{m}X_{ij}\Bigr)^2}{km}$$
and $SS_E$ is the error (residual) sum of squares (computed from 9.3).
The summary will now look as follows:

Source     df            SS      MS                             T
Factor A   k-1           SS_A    MS_A = SS_A / (k-1)            MS_A / MS_E
Factor B   m-1           SS_B    MS_B = SS_B / (m-1)            MS_B / MS_E
Error      (k-1)(m-1)    SS_E    MS_E = SS_E / ((k-1)(m-1))
Total      km-1          SS_T
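A minimal Python sketch of these formulas (our illustration, not the notes'; the name two_way_anova and the data are hypothetical):

```python
import numpy as np
from scipy import stats

def two_way_anova(X):
    """Two-way ANOVA without interaction; X[i, j] is the single observation
    in cell (i, j) for factor-A level i and factor-B level j."""
    X = np.asarray(X, float)
    k, m = X.shape
    grand = X.sum()
    ss_t = (X ** 2).sum() - grand ** 2 / (k * m)
    ss_a = (X.sum(axis=1) ** 2).sum() / m - grand ** 2 / (k * m)
    ss_b = (X.sum(axis=0) ** 2).sum() / k - grand ** 2 / (k * m)
    ss_e = ss_t - ss_a - ss_b
    ms_e = ss_e / ((k - 1) * (m - 1))
    Ta = (ss_a / (k - 1)) / ms_e           # tests "all alphas are zero"
    Tb = (ss_b / (m - 1)) / ms_e           # tests "all betas are zero"
    return (Ta, stats.f.sf(Ta, k - 1, (k - 1) * (m - 1))), \
           (Tb, stats.f.sf(Tb, m - 1, (k - 1) * (m - 1)))

rng = np.random.default_rng(6)             # hypothetical k = 4 by m = 3 table
cells = rng.normal(size=(4, 3)) + 0.5 * np.arange(4)[:, None]
print(two_way_anova(cells))
```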
With interaction
Now, we assume a possible interaction between the two factors (the pattern of racial bias may differ between cities), which necessitates selecting more than one (say n) random employees from each cell; the individual observations are now denoted $X_{ij\ell}$, with $\ell=1,2,\ldots,n$. The (theoretical) mean of the $X_{ij\ell}$ distribution will now equal $\mu+\alpha_i+\beta_j+(\alpha\beta)_{ij}$, where $\sum_{i=1}^{k}(\alpha\beta)_{ij}=0$ for each j and $\sum_{j=1}^{m}(\alpha\beta)_{ij}=0$ for each i.
The corresponding estimators are now
$$\hat\mu=\bar{\bar X}\equiv\frac{\sum_{\ell=1}^{n}\sum_{j=1}^{m}\sum_{i=1}^{k}X_{ij\ell}}{nmk}$$
$$\hat\alpha_i=\bar X_{(i)}-\bar{\bar X}\equiv\frac{\sum_{\ell=1}^{n}\sum_{j=1}^{m}X_{ij\ell}}{nm}-\bar{\bar X}$$
$$\hat\beta_j=\bar X_{(j)}-\bar{\bar X}\equiv\frac{\sum_{\ell=1}^{n}\sum_{i=1}^{k}X_{ij\ell}}{nk}-\bar{\bar X}$$
$$\widehat{(\alpha\beta)}_{ij}=\bar X_{(ij)}-\bar{\bar X}-\hat\alpha_i-\hat\beta_j\equiv\frac{\sum_{\ell=1}^{n}X_{ij\ell}}{n}-\bar{\bar X}-\hat\alpha_i-\hat\beta_j$$
For the total sum of squares, we now get
$$SS_T=SS_A+SS_B+SS_{AB}+SS_E$$
where
$$SS_T=\sum_{\ell=1}^{n}\sum_{j=1}^{m}\sum_{i=1}^{k}(X_{ij\ell}-\bar{\bar X})^2$$
$$SS_A=nm\sum_{i=1}^{k}\hat\alpha_i^2\qquad SS_B=nk\sum_{j=1}^{m}\hat\beta_j^2\qquad SS_{AB}=n\sum_{j=1}^{m}\sum_{i=1}^{k}\widehat{(\alpha\beta)}_{ij}^{\,2}$$
In summary:

Source        df            SS       MS                                 T
Factor A      k-1           SS_A     MS_A = SS_A / (k-1)                MS_A / MS_E
Factor B      m-1           SS_B     MS_B = SS_B / (m-1)                MS_B / MS_E
Interaction   (k-1)(m-1)    SS_AB    MS_AB = SS_AB / ((k-1)(m-1))       MS_AB / MS_E
Error         km(n-1)       SS_E     MS_E = SS_E / (km(n-1))
Total         kmn-1         SS_T
Chapter 10 NONPARAMETRIC TESTS
These don't make any assumption about the shape of the distribution from which we sample (they are equally valid for distributions of any shape). As a result, they may not be as powerful (sharp) as tests designed for a specific (usually Normal) distribution.
Sign test
The null hypothesis states that the population median equals a specific number, $H_0:\tilde\mu=\tilde\mu_0$. If we throw in the assumption that the distribution is symmetric, the median is the same as the mean, so we can restate the hypothesis in those terms.
We also assume that the distribution is continuous (or essentially so), so that the probability of any observation being exactly equal to $\tilde\mu_0$ is practically zero (if we do get such a value, we would have to discard it).
The test statistic (say B) is simply the number of observations (out of n) which are bigger than $\tilde\mu_0$ (sometimes these are represented by + signs, thus the name of the test). Its distribution is, under $H_0$, obviously Binomial, where n is the number of trials and $p=\frac12$. The trouble is that, due to B's discreteness, we cannot arbitrarily set the value of $\alpha$, and have to settle for anything reasonably close to, say, 5%. That's why, in this case, we are better off simply stating the corresponding P value.
When n is large, it is permissible to approximate the Binomial distribution by the Normal, which leads to a modified test statistic
$$T=\frac{B-\frac n2}{\sqrt{\frac n4}}=\frac{2B-n}{\sqrt n}$$
with critical values of $\pm z_{\alpha/2}$ (two-sided test), or either $z_\alpha$ or $-z_\alpha$ (one-sided test).
The previous test is often used in the context of so called paired samples (such as taking the blood pressure of individuals before and after taking some medication). In this case, we are concerned with the distribution of the difference in blood pressure, testing whether its population median stayed the same (the null hypothesis) or decreased (alternate). This time, we assign + to an increase and - to a decrease; the rest is the same.
Signed-rank test
A better (more powerful) test is, under the same circumstances, the so called Wilcoxon signed-rank test.
First, we compute the differences between the individual observations and $\tilde\mu_0$ (in the case of a one-sample test), or between the paired observations (paired-sample test). Then, we rank (i.e. assign 1, 2, ..., n to) the absolute values of these differences (discarding zero differences, and assigning the corresponding rank average to any ties). The test statistic equals the sum of the ranks of all positive differences (denoted $T^+$).
The distribution of $T^+$ under $H_0$ (which states that the median difference equals zero) is not one of our common cases; that's why its critical values are tabulated in Table X. We of course are in a good position to compute the corresponding P value ourselves (with the help of Maple). All we need to do is to assign a random sign (with equal probability for + and -) to the first n integers.
It's quite easy to show that the mean and variance of the $T^+$ distribution are $\frac{n(n+1)}{4}$ and $\frac{n(n+1)(2n+1)}{24}$, respectively.
Proof. We can write $T^+=1\cdot X_1+2\cdot X_2+3\cdot X_3+\ldots+n\cdot X_n$, where the $X_i$'s are independent, having the Bernoulli distribution with $p=\frac12$. This implies a mean of $\frac{1+2+3+\ldots+n}{2}=\frac{n(n+1)}{4}$. Similarly, $\operatorname{Var}(T^+)=\frac{1^2+2^2+3^2+\ldots+n^2}{4}=\frac{n(n+1)(2n+1)}{24}$.
To derive formulas for $s_1\equiv\sum_{i=1}^{n}i$ and $s_2\equiv\sum_{i=1}^{n}i^2$, we proceed as follows: $\sum_{i=0}^{n}(1+i)^2=s_2+(n+1)^2$, but it also equals (by expanding) $n+1+2s_1+s_2$. Make these two equal, and solve for $s_1$.
Similarly, $\sum_{i=0}^{n}(1+i)^3=s_3+(n+1)^3=n+1+3s_1+3s_2+s_3$. Since $s_3$ cancels out, and we already know what $s_1$ is, we can solve for $s_2$.
For n ≥ 15, it is quite legitimate to treat the distribution of $T^+$ as approximately Normal.
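The random-sign idea above is easy to carry out by simulation; here is a hedged Python sketch (ours, not the notes' Maple; signed_rank_pvalue and the data are hypothetical), with scipy's wilcoxon as a rough cross-check:

```python
import numpy as np
from scipy import stats

def signed_rank_pvalue(diffs, n_sim=100_000, seed=0):
    """One-sided P-value for T+ (large values) by simulating its null
    distribution: random +/- signs attached to the observed ranks."""
    d = np.asarray(diffs, float)
    d = d[d != 0]                                 # discard zero differences
    ranks = stats.rankdata(np.abs(d))             # average ranks for ties
    t_plus = ranks[d > 0].sum()
    rng = np.random.default_rng(seed)
    signs = rng.integers(0, 2, size=(n_sim, len(d)))   # 0/1 with probability 1/2
    sims = (signs * ranks).sum(axis=1)            # simulated T+ values under H0
    return t_plus, (sims >= t_plus).mean()

d = np.array([1.2, -0.4, 0.8, 2.1, -0.3, 1.5, 0.9, -1.1, 0.6, 1.8])
print(signed_rank_pvalue(d))
print(stats.wilcoxon(d, alternative='greater'))   # P-values should roughly agree
```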
Rank-sum tests
Mann-Whitney
Suppose we have two distributions (of the same - up to a possible shift - shape) and the corresponding independent (no longer paired) samples. We want to test whether the two population means are identical (the null hypothesis) against one of the three possible (>, < or ≠) alternate hypotheses.
We do this by ranking the $n_1+n_2$ observations pooled together (as if a single sample), then computing the sum of the ranks belonging to the first sample, usually denoted $W_1$. The corresponding test statistic is
$$U_1=W_1-\frac{n_1(n_1+1)}{2}$$
Under $H_0$, the distribution of $U_1$ is symmetric (even though uncommon), with the smallest possible value of 0 and the largest equal to
$$\frac{(n_1+n_2)(n_1+n_2+1)}{2}-\frac{n_2(n_2+1)}{2}-\frac{n_1(n_1+1)}{2}=n_1n_2$$
Its critical values are listed in Table XI. To compute them, one has to realize that the distribution of $W_1$ is that of the sum of $n_1$ randomly selected integers out of the first $n_1+n_2$ (again, we may try doing this with our own Maple program).
It is reasonable to use the Normal approximation when both $n_1$ and $n_2$ are bigger than 8. The expected value of $U_1$ is $\frac{n_1n_2}{2}$ (the center of symmetry); its variance is equal to $\frac{n_1n_2(n_1+n_2+1)}{12}$.
Proof. Suppose the numbers are selected randomly, one by one (without replacement). $W_1$ is then equal to $X_1+X_2+\ldots+X_{n_1}$, where $X_i$ is the number selected in the $i^{\text{th}}$ draw. Clearly, $E(X_i)=\frac{n_1+n_2+1}{2}$ for each i. This implies that the expected value of $W_1$ is $n_1\frac{n_1+n_2+1}{2}$, and that of $U_1$ equals $n_1\frac{n_1+n_2+1}{2}-\frac{n_1(n_1+1)}{2}=\frac{n_1n_2}{2}$.
Similarly, $\operatorname{Var}(X_i)=E(X_i^2)-E(X_i)^2=\frac{(N+1)(2N+1)}{6}-\frac{(N+1)^2}{4}=\frac{N^2-1}{12}$, where $N\equiv n_1+n_2$, and
$$\operatorname{Cov}(X_i,X_j)=\frac{N(N+1)^2}{4(N-1)}-\frac{(N+1)(2N+1)}{6(N-1)}-\frac{(N+1)^2}{4}=-\frac{N+1}{12}$$
for any $i\neq j$. This means that the variance of $W_1$ (and also of $U_1$) is
$$n_1\operatorname{Var}(X_i)+n_1(n_1-1)\operatorname{Cov}(X_i,X_j)=\frac{n_1n_2(n_1+n_2+1)}{12}$$
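As an illustration only (our sketch; mann_whitney and the data are hypothetical), the statistic and its Normal approximation in Python, with scipy's mannwhitneyu as a cross-check:

```python
import numpy as np
from scipy import stats

def mann_whitney(x1, x2):
    """U1 = W1 - n1(n1+1)/2 with the Normal approximation (n1, n2 > 8)."""
    x1, x2 = np.asarray(x1, float), np.asarray(x2, float)
    n1, n2 = len(x1), len(x2)
    ranks = stats.rankdata(np.concatenate([x1, x2]))   # pooled ranks
    W1 = ranks[:n1].sum()
    U1 = W1 - n1 * (n1 + 1) / 2
    z = (U1 - n1 * n2 / 2) / np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return U1, z, 2 * stats.norm.sf(abs(z))            # two-sided P-value

rng = np.random.default_rng(7)                          # hypothetical samples
a, b = rng.normal(0.0, 1.0, 12), rng.normal(0.8, 1.0, 15)
print(mann_whitney(a, b))
print(stats.mannwhitneyu(a, b, alternative='two-sided'))  # comparable P-value
```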
Kruskal-Wallis
This is a generalization of the previous test to the case of more than two (say k) same-shape populations, testing whether all the means are identical ($H_0$) or not ($H_A$). It is a non-parametric analog of ANOVA.
Again, we rank all the $N\equiv n_1+n_2+\ldots+n_k$ observations pooled together, then compute the sum of the resulting ranks (say $R_i$) individually for each sample. The test statistic is
$$T=\frac{12}{N(N+1)}\sum_{i=1}^{k}\frac{R_i^2}{n_i}-3(N+1)$$
and has, approximately (for large $n_i$), the $\chi^2_{k-1}$ distribution.
Proof. First we show that T can be written as
$$\frac{12}{N(N+1)}\sum_{i=1}^{k}n_i\left(\frac{R_i}{n_i}-\frac{N+1}{2}\right)^2$$
This follows from:
$$\sum_{i=1}^{k}n_i\left(\frac{R_i}{n_i}-\frac{N+1}{2}\right)^2=\sum_{i=1}^{k}\left[\frac{R_i^2}{n_i}-(N+1)R_i+n_i\frac{(N+1)^2}{4}\right]=\sum_{i=1}^{k}\frac{R_i^2}{n_i}-\frac{N(N+1)^2}{2}+\frac{N(N+1)^2}{4}=\sum_{i=1}^{k}\frac{R_i^2}{n_i}-\frac{N(N+1)^2}{4}$$
It was already shown in the previous section that, for large $n_i$ and N, $S_i\equiv\frac{R_i}{n_i}-\frac{N+1}{2}$ is approximately Normal with zero mean and variance equal to $\frac{(N-n_i)(N+1)}{12n_i}$. Similarly, $\operatorname{Cov}(S_i,S_j)=-\frac{N+1}{12}$. This means that the variance-covariance matrix of the $\sqrt{\frac{12n_i}{N(N+1)}}\,S_i$'s is
$$I-\begin{pmatrix}\sqrt{\frac{n_1}{N}}\\\sqrt{\frac{n_2}{N}}\\\vdots\\\sqrt{\frac{n_k}{N}}\end{pmatrix}\begin{pmatrix}\sqrt{\frac{n_1}{N}}&\sqrt{\frac{n_2}{N}}&\cdots&\sqrt{\frac{n_k}{N}}\end{pmatrix}$$
This matrix is clearly idempotent, which makes the sum of squares of the $\sqrt{\frac{12n_i}{N(N+1)}}\,S_i$'s into a $\chi^2$-type RV. The degrees of freedom are given by the trace of the previous matrix, which is k − 1.
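A hedged Python sketch of the statistic (ours; kruskal_wallis and the data are hypothetical), with scipy's kruskal as a cross-check (scipy also applies a tie correction, so values may differ slightly when ties occur):

```python
import numpy as np
from scipy import stats

def kruskal_wallis(samples):
    """T = 12/(N(N+1)) * sum R_i^2/n_i - 3(N+1), referred to chi^2_{k-1}."""
    sizes = [len(s) for s in samples]
    N = sum(sizes)
    ranks = stats.rankdata(np.concatenate([np.asarray(s, float) for s in samples]))
    T, start = 0.0, 0
    for n_i in sizes:                         # rank sum of each sample
        R_i = ranks[start:start + n_i].sum()
        T += R_i ** 2 / n_i
        start += n_i
    T = 12.0 / (N * (N + 1)) * T - 3 * (N + 1)
    return T, stats.chi2.sf(T, len(sizes) - 1)

rng = np.random.default_rng(8)                # hypothetical samples
groups = [rng.normal(0.0, 1, 10), rng.normal(0.5, 1, 12), rng.normal(1.0, 1, 9)]
print(kruskal_wallis(groups))
print(stats.kruskal(*groups))
```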
Run test
This is to test whether a sequence of observations constitutes a random independent sample or not. We assume that the observations are of the success (S) and failure (F) type - any other sequence can be converted into that form, one way or another.
A series of consecutive successes (or failures) is called a run. Clearly, in a truly random sequence, runs should never be too long (but not consistently too short, either). This also means that, in a random sequence with n successes and m failures, we should not have too many or too few runs in total (the total number of runs will be our test statistic T).
This time it is possible to derive a formula for the corresponding distribution: the sample space consists of all $\binom{n+m}{n}$ (equally likely) arrangements of the S's and F's, out of which
$$\Pr(T=2k)=\frac{2\binom{n-1}{k-1}\binom{m-1}{k-1}}{\binom{n+m}{n}}\qquad\text{and}\qquad\Pr(T=2k+1)=\frac{\binom{n-1}{k}\binom{m-1}{k-1}+\binom{n-1}{k-1}\binom{m-1}{k}}{\binom{n+m}{n}}$$
The corresponding mean is therefore
$$\mu=\sum_{\text{all }k}4k\,\frac{\binom{n-1}{k-1}\binom{m-1}{k-1}}{\binom{n+m}{n}}+\sum_{\text{all }k}(2k+1)\,\frac{\binom{n-1}{k}\binom{m-1}{k-1}+\binom{n-1}{k-1}\binom{m-1}{k}}{\binom{n+m}{n}}=\frac{2nm}{n+m}+1$$
and, similarly, the second moment equals
$$\sum_{\text{all }k}8k^2\,\frac{\binom{n-1}{k-1}\binom{m-1}{k-1}}{\binom{n+m}{n}}+\sum_{\text{all }k}(2k+1)^2\,\frac{\binom{n-1}{k}\binom{m-1}{k-1}+\binom{n-1}{k-1}\binom{m-1}{k}}{\binom{n+m}{n}}=\frac{4nm(n+1)(m+1)+(n+m)^2-10nm-n-m}{(n+m)(n+m-1)}$$
which results in
$$\sigma^2=\frac{4nm(n+1)(m+1)+(n+m)^2-10nm-n-m}{(n+m)(n+m-1)}-\left(\frac{2nm}{n+m}+1\right)^2=\frac{2nm(2nm-n-m)}{(n+m)^2(n+m-1)}$$
For both n and m bigger than 9, the distribution of T is approximately Normal. This is when the formulas for $\mu$ and $\sigma^2$ come in handy.
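A minimal Python sketch of the resulting Normal-approximation test (our illustration; run_test and the sample sequence are hypothetical):

```python
import numpy as np
from scipy import stats

def run_test(seq):
    """Two-sided run test; seq is a sequence of booleans (True = S, False = F)."""
    s = np.asarray(seq, bool)
    n, m = int(s.sum()), int((~s).sum())          # numbers of S's and F's
    T = 1 + int((s[1:] != s[:-1]).sum())          # total number of runs
    mu = 2 * n * m / (n + m) + 1
    var = 2 * n * m * (2 * n * m - n - m) / ((n + m) ** 2 * (n + m - 1))
    z = (T - mu) / np.sqrt(var)
    return T, z, 2 * stats.norm.sf(abs(z))

seq = [True, True, False, True, False, False, True, True, True, False,
       False, True, False, True, True, False, False, False, True, False]
print(run_test(seq))
```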
(Spearman's) rank correlation coefficient
All we have to do is to rank, individually, the X and Y observations (from 1 to n), and compute the regular correlation coefficient between the ranks. This simplifies to
$$r_S=1-\frac{6\sum_{i=1}^{n}d_i^2}{n(n^2-1)}$$
where $d_i$ is the difference between the ranks of the X and Y observations of the $i^{\text{th}}$ pair.
Proof. Let $\widetilde X_i$ and $\widetilde Y_i$ denote the ranks. We know that, individually, their sum is $\frac{n(n+1)}{2}$, and their sum of squares equals $\frac{n(n+1)(2n+1)}{6}$. Furthermore
$$\sum_{i=1}^{n}d_i^2=\sum_{i=1}^{n}(\widetilde X_i-\widetilde Y_i)^2=\frac{n(n+1)(2n+1)}{3}-2\sum_{i=1}^{n}\widetilde X_i\widetilde Y_i$$
This implies
$$r_S=\frac{\sum_{i=1}^{n}\widetilde X_i\widetilde Y_i-\frac{\bigl(\sum_{i=1}^{n}\widetilde X_i\bigr)\bigl(\sum_{i=1}^{n}\widetilde Y_i\bigr)}{n}}{\sqrt{\left[\sum_{i=1}^{n}\widetilde X_i^2-\frac{\bigl(\sum_{i=1}^{n}\widetilde X_i\bigr)^2}{n}\right]\left[\sum_{i=1}^{n}\widetilde Y_i^2-\frac{\bigl(\sum_{i=1}^{n}\widetilde Y_i\bigr)^2}{n}\right]}}=\frac{\frac{n(n+1)(2n+1)}{6}-\frac{\sum_{i=1}^{n}d_i^2}{2}-\frac{n(n+1)^2}{4}}{\frac{n(n+1)(2n+1)}{6}-\frac{n(n+1)^2}{4}}=1-\frac{\frac{\sum_{i=1}^{n}d_i^2}{2}}{\frac{n(n^2-1)}{12}}$$
For relatively small n, we can easily construct the distribution of $r_S$, assuming that X and Y are independent (and design the corresponding test, with this independence as the null hypothesis).
When n is large (bigger than 10), the distribution of $r_S$ is approximately Normal. To be able to utilize this, we need to know the corresponding mean and variance under $H_0$. These turn out to be 0 and $\frac{1}{n-1}$, respectively.
Proof. What we need is
$$E(d_i^2)=\frac{\sum_{k=1}^{n}\sum_{\ell=1}^{n}(k-\ell)^2}{n^2}=\frac{n^2-1}{6}$$
$$E(d_i^4)=\frac{\sum_{k=1}^{n}\sum_{\ell=1}^{n}(k-\ell)^4}{n^2}=\frac{(n^2-1)(2n^2-3)}{30}$$
and
$$E(d_i^2d_j^2)=\frac{\sum_{k=1}^{n}\sum_{\ell=1}^{n}(k-\ell)^2\sum_{K\neq k}\sum_{L\neq\ell}(K-L)^2}{n^2(n-1)^2}$$
$$=\frac{\sum_{k=1}^{n}\sum_{\ell=1}^{n}(k-\ell)^2\sum_{K=1}^{n}\sum_{L=1}^{n}(K-L)^2-2\sum_{k=1}^{n}\sum_{\ell=1}^{n}(k-\ell)^2\sum_{L=1}^{n}(k-L)^2+\sum_{k=1}^{n}\sum_{\ell=1}^{n}(k-\ell)^4}{n^2(n-1)^2}=\frac{(5n^3-7n^2+18)(n+1)}{180}$$
implying: $\operatorname{Var}(d_i^2)=\frac{7n^2-13}{180}(n^2-1)$ and $\operatorname{Cov}(d_i^2,d_j^2)=-\frac{2n^2-5n-13}{180}(n+1)$.
Based on this,
$$E\left(\sum_{i=1}^{n}d_i^2\right)=\frac{n(n^2-1)}{6}$$
and
$$\operatorname{Var}\left(\sum_{i=1}^{n}d_i^2\right)=n\,\frac{7n^2-13}{180}(n^2-1)-n(n-1)\,\frac{2n^2-5n-13}{180}(n+1)=\frac{n^2(n^2-1)^2}{36(n-1)}$$
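To close, a hedged Python sketch of the large-n test based on these results (ours, not the notes'; spearman_test and the data are hypothetical; the simplified formula assumes no ties):

```python
import numpy as np
from scipy import stats

def spearman_test(x, y):
    """r_S = 1 - 6*sum(d_i^2)/(n(n^2-1)) and its large-n Normal test,
    using mean 0 and variance 1/(n-1) under independence (no ties assumed)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    d = stats.rankdata(x) - stats.rankdata(y)
    r_s = 1 - 6 * (d ** 2).sum() / (n * (n ** 2 - 1))
    z = r_s * np.sqrt(n - 1)
    return r_s, z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(9)               # hypothetical data
x = rng.normal(size=25)
y = x + rng.normal(scale=1.5, size=25)
print(spearman_test(x, y))
print(stats.spearmanr(x, y))                 # scipy's version, for comparison
```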