IEOR 6711: Stochastic Models I

Fall 2012, Professor Whitt


Topic for Discussion, Thursday, September 27
Compound Poisson Process

1. Withdrawals from an ATM


Exercise 5.88 in Green Ross. Customers arrive at an automatic teller machine (ATM) in
accordance with a Poisson process with rate 12 per hour. The amount of money withdrawn on
each transaction is a random variable with mean $30 and standard deviation $50. (A negative
withdrawal means that money was deposited.) Suppose that the machine is in use 15 hours
per day. Approximate the probability that the total daily withdrawal is less than $6000.
ANSWER: Let X(t) be the total amount withdrawn in the interval [0, t], where time t is
measured in hours. Assuming that the successive amounts withdrawn are independent and
identically distributed random variables, the stochastic process {X(t) : t ≥ 0} is a compound
Poisson process. Let X(15) denote the daily withdrawal. Its mean and variance can be
calculated as follows using the equations on top of p. 83.

E[X(15)] = 12 × 15 × 30 = 5400 and Var[X(15)] = 12 × 15 × [30² + 50²] = 612,000.



Now using the CLT for the compound Poisson process, with √612,000 ≈ 782 and 600/782 ≈ 0.767, we obtain the approximating probability

\[
P(X(15) \le 6000) = P\left( \frac{X(15) - 5400}{\sqrt{612{,}000}} \le \frac{600}{\sqrt{612{,}000}} \right) = P(N(0,1) \le 0.767) \approx 0.78,
\]

where N(0,1) is a standard normal random variable. We have used a table of the standard normal distribution for the actual numerical value.
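As a quick numerical companion to this calculation, here is a small Python sketch (not part of the original notes; it uses only the standard library, with the normal cdf computed from the error function):

```python
from math import erf, sqrt

# Parameters of the ATM example.
lam, hours = 12.0, 15.0      # arrival rate (per hour), hours in use per day
mean_w, sd_w = 30.0, 50.0    # mean and std. deviation of one withdrawal ($)

n_expected = lam * hours                        # E[N(15)] = 180 transactions
mean_total = n_expected * mean_w                # E[X(15)] = 5400
var_total = n_expected * (mean_w**2 + sd_w**2)  # Var[X(15)] = 612,000

def std_normal_cdf(x: float) -> float:
    """Standard normal cdf, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

z = (6000.0 - mean_total) / sqrt(var_total)     # 600/782 ~ 0.767
print(f"P(X(15) <= 6000) ~ {std_normal_cdf(z):.3f}")  # ~ 0.778
```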

2. Definition of a compound Poisson process


Let N ≡ {N (t) : t ≥ 0} be a Poisson process with rate λ, so that E[N (t)] = λt for t ≥ 0.
Let X₁, X₂, . . . be IID random variables independent of N. Let D(t) be the random sum

\[
D(t) \equiv \sum_{i=1}^{N(t)} X_i, \qquad t \ge 0. \tag{1}
\]

Then D ≡ {D(t) : t ≥ 0} is a compound Poisson process. See Chapter 2 of Ross.
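To make the definition concrete, here is a minimal Python simulation sketch (an illustration, not from the notes; the normal jump distribution with the ATM example's parameters is just a placeholder, and any IID jump law independent of N works). Several sketches below reuse this `compound_poisson` function.

```python
import random

def compound_poisson(t: float, lam: float,
                     jump=lambda: random.gauss(30.0, 50.0)) -> float:
    """Sample D(t) = sum_{i=1}^{N(t)} X_i, with N a rate-lam Poisson process.

    N(t) is generated by summing exponential interarrival times; the jumps
    X_i are IID draws from `jump` (by default, a normal distribution with
    the ATM example's mean and standard deviation, purely for illustration).
    """
    n, clock = 0, random.expovariate(lam)
    while clock <= t:        # count Poisson arrivals in [0, t]
        n += 1
        clock += random.expovariate(lam)
    return sum(jump() for _ in range(n))

# One sample of the daily withdrawal total from Section 1.
print(compound_poisson(15.0, 12.0))
```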

3. Lévy process

Theorem 0.1 The stochastic process D is a Lévy process, i.e., it has stationary independent
increments.

Proof. An increment of D is D(t₂) − D(t₁) for t₂ > t₁ > 0. The independent-increments property is: for all k and all time points 0 = t₀ < t₁ < t₂ < · · · < tₖ, the k increments D(tᵢ₊₁) − D(tᵢ), 0 ≤ i ≤ k − 1, are mutually independent random variables. The stationary-increment property is:
The distribution of D(t2 + h) − D(t1 + h) for t2 > t1 > 0 and h > 0 is independent of h,
and similarly for the joint distribution of k increments. We prove the special case of the
independent-increments property for k = 2; the general case is proved in the same way. It
suffices to show that

\[
P(D(t_1) \le a_1,\ D(t_2) - D(t_1) \le a_2) = P(D(t_1) \le a_1)\, P(D(t_2) - D(t_1) \le a_2) \tag{2}
\]

for all a1 > 0 and a2 > 0 (for all 0 < t1 < t2 ). To do so, just apply the definition of D(t) and
condition on the values of the counting process at the times t₁ and t₂. In particular,

\begin{align*}
\lefteqn{P(D(t_1) \le a_1,\ D(t_2) - D(t_1) \le a_2)} \\
&= P\Big( \sum_{i=1}^{N(t_1)} X_i \le a_1,\ \sum_{i=1}^{N(t_2)} X_i - \sum_{i=1}^{N(t_1)} X_i \le a_2 \Big) \\
&= P\Big( \sum_{i=1}^{N(t_1)} X_i \le a_1,\ \sum_{i=N(t_1)+1}^{N(t_2)} X_i \le a_2 \Big) \qquad \text{(apply (1))} \\
&= \sum_{m_1=0}^{\infty} \sum_{m_2=0}^{\infty} P\Big( \sum_{i=1}^{m_1} X_i \le a_1,\ \sum_{i=m_1+1}^{m_1+m_2} X_i \le a_2 \ \Big|\ N(t_1) = m_1,\ N(t_2) - N(t_1) = m_2 \Big) \\
&\hspace{5cm} \times P(N(t_1) = m_1,\ N(t_2) - N(t_1) = m_2) \qquad \text{(condition)} \\
&= \sum_{m_1=0}^{\infty} \sum_{m_2=0}^{\infty} P\Big( \sum_{i=1}^{m_1} X_i \le a_1,\ \sum_{i=m_1+1}^{m_1+m_2} X_i \le a_2 \Big)\, P(N(t_1) = m_1,\ N(t_2) - N(t_1) = m_2) \\
&= \sum_{m_1=0}^{\infty} \sum_{m_2=0}^{\infty} P\Big( \sum_{i=1}^{m_1} X_i \le a_1 \Big) P\Big( \sum_{i=m_1+1}^{m_1+m_2} X_i \le a_2 \Big) P(N(t_1) = m_1)\, P(N(t_2) - N(t_1) = m_2) \\
&= \sum_{m_1=0}^{\infty} \sum_{m_2=0}^{\infty} P\Big( \sum_{i=1}^{m_1} X_i \le a_1 \Big) P\Big( \sum_{i=1}^{m_2} X_i \le a_2 \Big) P(N(t_1) = m_1)\, P(N(t_2 - t_1) = m_2) \\
&= \sum_{m_1=0}^{\infty} P\Big( \sum_{i=1}^{m_1} X_i \le a_1 \Big) P(N(t_1) = m_1) \sum_{m_2=0}^{\infty} P\Big( \sum_{i=1}^{m_2} X_i \le a_2 \Big) P(N(t_2 - t_1) = m_2) \\
&= P\Big( \sum_{i=1}^{N(t_1)} X_i \le a_1 \Big)\, P\Big( \sum_{i=1}^{N(t_2 - t_1)} X_i \le a_2 \Big) \\
&= P(D(t_1) \le a_1)\, P(D(t_2 - t_1) \le a_2) = P(D(t_1) \le a_1)\, P(D(t_2) - D(t_1) \le a_2), \tag{3}
\end{align*}

using independence (of the Xᵢ among themselves and from N) in the fifth step and stationarity of the Poisson increments in the sixth.
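Both properties can also be checked by simulation. Below is a small Monte Carlo sketch (an illustration under assumed parameters, with exponential jumps, not from the notes): it simulates one path at two times, then verifies that D(t₁) and D(t₂) − D(t₁) have near-zero sample covariance and that the increment's mean matches that of D(t₂ − t₁).

```python
import random

def cpp_path_values(times, lam, jump):
    """Values of one compound Poisson path at the given increasing times."""
    arrivals, clock = [], random.expovariate(lam)
    while clock <= times[-1]:            # Poisson arrivals on [0, max time]
        arrivals.append(clock)
        clock += random.expovariate(lam)
    jumps = [jump() for _ in arrivals]
    return [sum(x for a, x in zip(arrivals, jumps) if a <= t) for t in times]

lam, t1, t2, n = 2.0, 1.0, 3.0, 20000
jump = lambda: random.expovariate(1.0)   # IID exponential(1) jumps, E[X1] = 1
paths = [cpp_path_values([t1, t2], lam, jump) for _ in range(n)]
d1 = [p[0] for p in paths]
inc = [p[1] - p[0] for p in paths]

mean = lambda v: sum(v) / len(v)
cov = mean([a * b for a, b in zip(d1, inc)]) - mean(d1) * mean(inc)
print("cov(D(t1), D(t2)-D(t1)) ~", round(cov, 3))         # ~ 0
print("E[increment] ~", round(mean(inc), 3),
      "vs lam*(t2-t1)*E[X1] =", lam * (t2 - t1))          # ~ 4
```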


We now state and prove the central limit theorem (CLT) for compound Poisson processes.
We use the fact that

\[
E[D(t)] = \lambda t m_1 \quad \text{and} \quad \mathrm{Var}(D(t)) = \lambda t m_2, \tag{4}
\]

where mᵢ ≡ E[X₁ⁱ]. Note in particular that m₂ in (4) is the second moment, not the variance, of X₁.
4. Variance formula
We pause to prove the variance formula in (4) above. To do so, we use the conditional
variance formula; see Exercise 1.22 of Ross. We use the fact that E[E[X|Y ]] = E[X].

Theorem 0.2 Suppose that X is a random variable with finite second moment E[X²]. (Aside: As a consequence E[|X|] < ∞, and for any random variable Y, E[X²|Y] and E[X|Y] are well defined, with E[E[X²|Y]] = E[X²] < ∞ and E[E[|X| | Y]] = E[|X|] < ∞.) Then

\[
\mathrm{Var}(X) = E[\mathrm{Var}(X|Y)] + \mathrm{Var}(E[X|Y]),
\]

where Var(X|Y) is defined to mean

\[
\mathrm{Var}(X|Y) \equiv E[(X - E[X|Y])^2 \,|\, Y] = E[X^2|Y] - (E[X|Y])^2.
\]

Proof. Find expressions for the two terms on the right:

\[
E[\mathrm{Var}(X|Y)] = E[E[X^2|Y]] - E[(E[X|Y])^2] = E[X^2] - E[(E[X|Y])^2]
\]

and

\[
\mathrm{Var}(E[X|Y]) = E[(E[X|Y])^2] - (E[E[X|Y]])^2 = E[(E[X|Y])^2] - (E[X])^2.
\]

Combining these last two expressions gives the result.
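Here is a small Monte Carlo illustration of Theorem 0.2 (a sketch with a toy model chosen for this note, not from the original): Y is Bernoulli(0.3) and, given Y = y, X is normal with mean 2y and standard deviation 1 + y.

```python
import random

p, n = 0.3, 200_000
xs = []
for _ in range(n):
    y = 1 if random.random() < p else 0       # Y ~ Bernoulli(p)
    xs.append(random.gauss(2 * y, 1 + y))     # X | Y=y ~ Normal(2y, (1+y)^2)

mean = lambda v: sum(v) / len(v)
m = mean(xs)
var_x = mean([(x - m) ** 2 for x in xs])

# E[Var(X|Y)] = (1-p)*1 + p*4 and Var(E[X|Y]) = Var(2Y) = 4p(1-p).
formula = (1 - p) * 1 + p * 4 + 4 * p * (1 - p)
print("simulated Var(X):", round(var_x, 3), " formula:", formula)  # both ~ 2.74
```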
Now we consider the implications for a random sum of i.i.d. random variables.

Theorem 0.3 Suppose that N is a nonnegative integer-valued random variable with finite second moment E[N²]. Let {Xₙ : n ≥ 1} be a sequence of i.i.d. random variables with mean µ = E[X₁] and finite variance σ² = Var(X₁). Suppose that N is independent of the sequence {Xₙ : n ≥ 1}. Then

\[
\mathrm{Var}\Big( \sum_{n=1}^{N} X_n \Big) = \sigma^2 E[N] + \mu^2\, \mathrm{Var}(N).
\]

Proof. We apply the conditional variance formula. Let S denote the random sum in question. Then

\[
\mathrm{Var}(S \mid N = n) = n\sigma^2 \quad \text{and} \quad E[S \mid N = n] = n\mu.
\]

Therefore,

\[
\mathrm{Var}(S \mid N) = N\sigma^2 \quad \text{and} \quad E[S \mid N] = N\mu.
\]

Then the conditional variance formula gives

\[
\mathrm{Var}(S) = E[\mathrm{Var}(S \mid N)] + \mathrm{Var}(E[S \mid N]) = E[N]\sigma^2 + \mathrm{Var}(N)\mu^2.
\]

Corollary 0.1 If N is Poisson with mean λ, then

\[
\mathrm{Var}(S) = \lambda\sigma^2 + \lambda\mu^2 = \lambda E[X_1^2].
\]

For the compound Poisson process, N(t) is Poisson with mean λt, so that

\[
\mathrm{Var}(D(t)) = \lambda t \sigma^2 + \lambda t \mu^2 = \lambda t E[X_1^2].
\]
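As a sanity check on this variance formula (a sketch reusing the `compound_poisson` sampler from Section 2, with the ATM example's parameters, so E[X₁²] = 30² + 50² = 3400):

```python
# Simulated Var(D(t)) vs. lambda * t * E[X1^2] for lam = 12, t = 15.
lam, t, n = 12.0, 15.0, 5000
samples = [compound_poisson(t, lam) for _ in range(n)]
m = sum(samples) / n
v = sum((s - m) ** 2 for s in samples) / (n - 1)
print("simulated:", round(v), " theory:", lam * t * 3400.0)  # ~ 612,000
```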

5. Central Limit Theorem (CLT)

Theorem 0.4 If m₂ < ∞, then

\[
\frac{D(t) - \lambda t m_1}{\sqrt{\lambda t m_2}} \Rightarrow N(0,1) \quad \text{as } t \to \infty. \tag{5}
\]

Proof. The simple high-level proof is to apply the standard CLT after observing that D is a process with stationary and independent increments. For any t, we can thus represent D(t) as the sum of n IID increments, each distributed as D(t/n). By the condition m₂ < ∞ and the variance formula in (4), these summands have finite variance and second moment. There is a bit of technical difficulty in writing a full proof, because we want to let t → ∞ in the desired statement. If we divide the interval [0, t] into n subintervals and let n → ∞, leaving t fixed, then we obviously do not get the desired convergence: the sum of the n increments is D(t) for all n, so we do not get larger t.
What we want to do now is give a direct proof using transforms. The cleaner mathematical approach is to use characteristic functions, which involves complex variables, but for simplicity we will use moment generating functions (mgf's). Characteristic functions would ordinarily be preferred, because an mgf is not always well defined (finite). However, we assume that the mgf of X is well defined (finite) for all positive θ; i.e.,

\[
\psi_X(\theta) \equiv E[e^{\theta X}] = \int_{-\infty}^{\infty} e^{\theta x} f_X(x)\, dx < \infty \tag{6}
\]

for all positive θ. A sufficient condition is for X to be a bounded random variable, but that is stronger than needed. The condition would be satisfied if X were normally distributed, for example. However, it would suffice for the mgf to be defined for all θ with |θ| < ε for some ε > 0. The argument below is essentially the same for characteristic functions.
We have acted as if X has a probability density function (pdf) fX (x); that is not strictly
necessary. The general idea is that the convergence in distribution in (5) is equivalent to the
convergence of mgf’s for all θ. (That means the usual pointwise convergence of a sequence of
functions.) We are using a continuity theorem for mgf's: there is convergence in distribution of random variables if and only if there is convergence of the mgf's for all θ. What we want to show, then, is that

\[
\psi_{[D(t) - \lambda t m_1]/\sqrt{\lambda t m_2}}(\theta) \to \psi_{N(0,1)}(\theta) = e^{\theta^2/2} \quad \text{as } t \to \infty \tag{7}
\]

for all θ.
Just as in the proof of the ordinary CLT (in previous lecture notes), we use a Taylor
expansion of the component mgf ψX (θ) about θ = 0. We can write

\[
\psi_X(\theta) = 1 + \theta m_1 + \frac{\theta^2 m_2}{2} + o(\theta^2) \quad \text{as } \theta \downarrow 0. \tag{8}
\]

(Recall that o(θ) is a quantity f(θ) such that f(θ)/θ → 0 as θ → 0. Thus o(θ²) is a quantity f(θ) such that f(θ)/θ² → 0 as θ → 0. The last term in (8) is asymptotically negligible compared to the previous terms in the limit.)
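For instance (an illustrative check, not from the notes), if X is exponential with rate 1, then ψ_X(θ) = 1/(1 − θ) for θ < 1, m₁ = 1 and m₂ = 2, so (8) reads 1 + θ + θ² + o(θ²):

```python
# Compare the exact mgf of an exponential(1) variable with its expansion (8).
theta = 0.01
print(1 / (1 - theta))          # 1.0101010101...
print(1 + theta + theta ** 2)   # 1.0101; the o(theta^2) gap is ~ theta^3
```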
We also exploit the mgf of D(t), which is discussed at length in Ross, but perhaps not
enough. We can use the transform of D(t) to compute the exact distribution numerically
by applying numerical transform inversion. If X ≥ 0, then D(t) ≥ 0 and we can work with
Laplace transforms. Then you can apply the inversion algorithm you are writing.
There is a general form for the mgf of an integer random sum that is important to know
about. Thus, suppose that N is an arbitrary integer-valued random variable and let {Xi : i ≥
1} be a sequence of IID random variables independent of N . (This is our case with N = N (t)
for some t.) The important point is that the mgf of the random sum is the composition of the generating function of N and the mgf of X; i.e., the generating function is evaluated at the mgf of X. Let g̃_N(z) be the generating function of the random variable N; i.e.,

\[
\tilde{g}_N(z) \equiv \sum_{n=0}^{\infty} z^n P(N = n). \tag{9}
\]

Note that the generating function of the Poisson distribution is an exponential function: If N
has a Poisson distribution with mean α, then

\[
\tilde{g}_N(z) \equiv \sum_{n=0}^{\infty} z^n\, \frac{\alpha^n e^{-\alpha}}{n!} = e^{\alpha(z-1)}. \tag{10}
\]

So, in our case,


\[
\tilde{g}_{N(t)}(z) = e^{\lambda t (z-1)}. \tag{11}
\]
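A quick numerical check of (10) (a sketch, truncating the series at 60 terms):

```python
from math import exp, factorial

# Truncated Poisson generating-function series vs. the closed form in (10).
alpha, z = 3.0, 0.7
series = sum(z**n * alpha**n * exp(-alpha) / factorial(n) for n in range(60))
print(series, "vs", exp(alpha * (z - 1)))  # both ~ e^{-0.9} ~ 0.4066
```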

Now we want to compute the mgf of D(t).


\begin{align*}
\psi_{D(t)}(\theta) &\equiv E[e^{\theta D(t)}] \\
&= E\Big[\exp\Big(\theta \sum_{i=1}^{N(t)} X_i\Big)\Big] \\
&= \sum_{m=0}^{\infty} E\Big[\exp\Big(\theta \sum_{i=1}^{m} X_i\Big)\Big] P(N(t) = m) \\
&= \sum_{m=0}^{\infty} E\Big[\exp\Big(\theta \sum_{i=1}^{m} X_i\Big)\Big] \frac{(\lambda t)^m e^{-\lambda t}}{m!} \\
&= \sum_{m=0}^{\infty} E[e^{\theta X_1}]^m\, \frac{(\lambda t)^m e^{-\lambda t}}{m!} \\
&= \sum_{m=0}^{\infty} \psi_X(\theta)^m\, \frac{(\lambda t)^m e^{-\lambda t}}{m!} \\
&= e^{\lambda t (\psi_X(\theta) - 1)}. \tag{12}
\end{align*}

(The sums start at m = 0; the empty sum contributes the term e^{−λt}, which is needed for the final identity.)
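Formula (12) is easy to test numerically. Here is a Monte Carlo sketch (again reusing `compound_poisson` from Section 2, with illustrative parameters) for exponential(1) jumps, for which ψ_X(θ) = 1/(1 − θ) when θ < 1:

```python
import random
from math import exp

lam, t, theta, n = 2.0, 1.5, 0.2, 100_000
jump = lambda: random.expovariate(1.0)
# Monte Carlo estimate of E[exp(theta * D(t))].
mc = sum(exp(theta * compound_poisson(t, lam, jump)) for _ in range(n)) / n
exact = exp(lam * t * (1.0 / (1.0 - theta) - 1.0))   # (12) with psi_X = 1/(1-theta)
print("Monte Carlo:", round(mc, 3), " formula:", round(exact, 3))  # ~ 2.117
```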
Now we combine all the results above to carry out the desired proof. Note that we have derived the mgf of D(t), but we want to work with the mgf of the scaled random variable [D(t) − λtm₁]/√(λtm₂). We thus need to calculate the mgf of the scaled random variable, but that is easy. It just requires careful accounting.

\begin{align*}
\psi_{[D(t) - \lambda t m_1]/\sqrt{\lambda t m_2}}(\theta)
&= E\big[e^{\theta D(t)/\sqrt{\lambda t m_2}}\big]\, e^{-\theta \lambda t m_1/\sqrt{\lambda t m_2}} \\
&= e^{\lambda t (\psi_X(\theta/\sqrt{\lambda t m_2}) - 1)}\, e^{-\theta \lambda t m_1/\sqrt{\lambda t m_2}} \\
&= e^{\lambda t \left( \psi_X(\theta/\sqrt{\lambda t m_2}) - 1 - \theta m_1/\sqrt{\lambda t m_2} \right)}. \tag{13}
\end{align*}
Note that we obtain a simple exponential function as a final expression. It suffices to do asymptotics for the exponent of this exponential function. We now insert the Taylor expansion in (8) for ψ_X(θ/√(λtm₂)) as t → ∞. In other words, we replace the argument θ by θ/√(λtm₂) and let t → ∞. Note that, as t → ∞, the argument of the mgf is going to 0, so the Taylor expansion applies. Considering first the ψ_X term in the exponent, we have

\begin{align*}
\psi_X(\theta/\sqrt{\lambda t m_2}) &= 1 + (\theta/\sqrt{\lambda t m_2})\, m_1 + (\theta/\sqrt{\lambda t m_2})^2 (m_2/2) + o\big((\theta/\sqrt{\lambda t m_2})^2\big) \\
&= 1 + (\theta/\sqrt{\lambda t m_2})\, m_1 + \theta^2/(2\lambda t) + o(1/t) \quad \text{as } t \to \infty. \tag{14}
\end{align*}

Now note that the first two terms in (14) exactly cancel the last two terms in the final exponent of (13), and we are left with

\[
\psi_{[D(t) - \lambda t m_1]/\sqrt{\lambda t m_2}}(\theta) = e^{(\theta^2/2) + o(1)} \to e^{\theta^2/2} \quad \text{as } t \to \infty. \tag{15}
\]
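Finally, the convergence in (5) can be seen in simulation (a sketch reusing `compound_poisson`, with exponential(1) jumps so that m₁ = 1 and m₂ = 2): for large t, the standardized D(t) should be approximately standard normal.

```python
import random
from math import sqrt

lam, t, n = 2.0, 100.0, 5000
jump = lambda: random.expovariate(1.0)
m1, m2 = 1.0, 2.0
zs = [(compound_poisson(t, lam, jump) - lam * t * m1) / sqrt(lam * t * m2)
      for _ in range(n)]
# Compare an empirical probability with the standard normal value Phi(1) ~ 0.841.
print("P(Z <= 1) ~", sum(z <= 1.0 for z in zs) / n)
```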
