
Solution to Problem Sheet for Conditional

Expectation and Random Walk

1. (a) Since {X > 5} is a subset of {X > 3},

   P(X > 5 | X > 3) = P(X > 5, X > 3) / P(X > 3) = P(X > 5) / P(X > 3) = (1 - F_X(5)) / (1 - F_X(3)).

   (b) Using the convolution formula for the continuous case:

   f_Z(z) = integral from -inf to inf of f_X(z - y) f_Y(y) dy,

   where Z = X + Y.
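The convolution formula in (b) can be sanity-checked numerically. Below is a small sketch (not part of the sheet) taking X, Y ~ Uniform(0, 1), whose sum has the triangular density f_Z(z) = z on [0, 1] and 2 - z on [1, 2]:

```python
# Numeric check of f_Z(z) = integral f_X(z - y) f_Y(y) dy
# for X, Y ~ Uniform(0, 1); the answer is the triangular density.
def f_uniform(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_Z(z, n=100_000):
    # Midpoint-rule approximation of the convolution integral;
    # f_Y vanishes outside [0, 1], so we only integrate there.
    dy = 1.0 / n
    return sum(f_uniform(z - (k + 0.5) * dy) * dy for k in range(n))

print(round(f_Z(0.5), 3))   # triangular density at z = 0.5 -> 0.5
print(round(f_Z(1.5), 3))   # -> 0.5
```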

2. • Let Y = 1 be the event that the first TA marked the assignment, and Y = 2 the event that the second TA marked it. For the mean, we have

   E[X | Y = 1] = 0.75,  E[X | Y = 2] = 0.70,

   thus

   E[X] = E[E[X | Y]] = 0.40 E[X | Y = 1] + 0.60 E[X | Y = 2] = 0.40 * 0.75 + 0.60 * 0.70 = 0.72.

   • For the variance, we have

   var[X | Y = 1] = 0.1^2,  var[X | Y = 2] = 0.05^2,

   hence

   var[X] = E[var[X | Y]] + var[E[X | Y]]
          = 0.40 var[X | Y = 1] + 0.60 var[X | Y = 2] + 0.40 E[X | Y = 1]^2 + 0.60 E[X | Y = 2]^2 - E[X]^2
          = 0.004 + 0.0015 + 0.225 + 0.294 - 0.5184 = 0.0061.
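The two computations above can be reproduced in a few lines (a check of the arithmetic, using the mixture weights and conditional moments from the problem):

```python
# TA marking example: P(Y=1)=0.40, P(Y=2)=0.60,
# conditional means (0.75, 0.70), conditional sds (0.1, 0.05).
w = [0.40, 0.60]           # mixture weights P(Y = y)
m = [0.75, 0.70]           # conditional means E[X | Y = y]
v = [0.10**2, 0.05**2]     # conditional variances var[X | Y = y]

EX = sum(wi * mi for wi, mi in zip(w, m))
varX = (sum(wi * vi for wi, vi in zip(w, v))                 # E[var[X|Y]]
        + sum(wi * mi**2 for wi, mi in zip(w, m)) - EX**2)   # var[E[X|Y]]

print(round(EX, 2))        # 0.72
print(round(varX, 4))      # 0.0061
```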

3. (a)
   P(X = i | Y = 3) = P(i white balls selected when choosing 3 balls from 3 white and 6 red)
                    = C(3, i) C(6, 3 - i) / C(9, 3),   i = 0, 1, 2, 3.

   (b) By the same reasoning as in (a), if Y = 1, then X has the same distribution as the number of white balls chosen when 5 balls are chosen from 3 white and 6 red. Hence,

   E[X | Y = 1] = 5 * (3/9) = 5/3.
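Both parts can be verified with `math.comb`:

```python
from math import comb

# Hypergeometric pmf from part (a): i white balls when drawing
# `draws` balls from 3 white and 6 red.
def pmf(i, draws, white=3, red=6):
    return comb(white, i) * comb(red, draws - i) / comb(white + red, draws)

print(round(sum(pmf(i, 3) for i in range(4)), 10))      # 1.0 (pmf sums to 1)
print(round(sum(i * pmf(i, 5) for i in range(4)), 4))   # 1.6667 (= 5/3)
```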

4. By conditioning on N,

   E[Y | N = n] = E[X_1 X_2 ... X_N | N = n]
                = E[X_1 X_2 ... X_n | N = n]   (substitution)
                = E[X_1 X_2 ... X_n]           (independence of N and the X_k's)
                = E[X_1] E[X_2] ... E[X_n]     (since the X_k's are independent)
                = u^n                          (since they have the same mean u).

   So E[Y | N = n] = u^n and E[Y | N] = u^N, and then, using double-averaging, E[Y] = E[E[Y | N]] = E[u^N] = G_N(u) by definition. [Think of it as G_N(s) = E(s^N) evaluated at s = u.]
   G_N(u) will always exist provided |u| <= 1, because the interval of convergence of a probability generating function is at least [-1, 1].
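As an illustration of E[Y] = G_N(u), here is a Monte-Carlo check with an assumed concrete setup (not from the sheet): N ~ Poisson(lam) and X_i i.i.d. Bernoulli(u), so the mean of each X_i is u and G_N(t) = exp(lam*(t - 1)):

```python
import math
import random

# Monte-Carlo sanity check of E[X_1 ... X_N] = G_N(u)
# with N ~ Poisson(lam) and X_i ~ Bernoulli(u) (assumed setup).
random.seed(0)
lam, u, trials = 2.0, 0.7, 100_000

def sample_poisson(lam):
    # Knuth's multiplication method (fine for small lam)
    limit, k, prod = math.exp(-lam), 0, random.random()
    while prod > limit:
        k += 1
        prod *= random.random()
    return k

total = 0.0
for _ in range(trials):
    y = 1.0
    for _ in range(sample_poisson(lam)):
        if random.random() >= u:   # an X_i equal to 0 kills the product
            y = 0.0
    total += y

estimate = total / trials
print(round(estimate, 2), round(math.exp(lam * (u - 1)), 3))
```

The empirical mean of Y should be close to G_N(u) = exp(2*(0.7 - 1)) ≈ 0.549.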

5. (a) X_n has a binomial distribution, so its probability generating function is

   A_n(s) = (ps + (1 - p))^n.

   (b) Z = X_1 + X_2 + X_3 is the sum of three independent random variables, so its PGF is the product of the respective PGFs of the X_i's:

   B_Z(s) = A_1(s) A_2(s) A_3(s) = (ps + (1 - p))^6,

   since Z is binomial with 1 + 2 + 3 = 6 trials.

   (c) We use the property

   E[s^{X_N}] = E[E[s^{X_N} | N]],

   where E[s^{X_N} | N = n] = A_n(s).

   (d) We have

   E[s^{X_N + X_{N+1}}] = E[E[s^{X_N + X_{N+1}} | N]],

   where E[s^{X_N + X_{N+1}} | N = n] = A_{2n+1}(s). The result follows.
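Part (b) can be checked numerically, assuming (as part (d) suggests) that X_n is Binomial(n, p), so Z is Binomial(6, p):

```python
from math import comb

# Check that A_1(s) A_2(s) A_3(s) equals the PGF of Bin(6, p)
# computed directly from its pmf, for one arbitrary p and s.
p, s = 0.3, 0.7

def A(n, t):
    return (p * t + 1 - p) ** n

product = A(1, s) * A(2, s) * A(3, s)
direct = sum(comb(6, k) * p ** k * (1 - p) ** (6 - k) * s ** k for k in range(7))
print(round(abs(product - direct), 12))   # 0.0
```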
6. Let X ~ Poisson(u); then P(X = n) = u^n e^{-u} / n! for n = 0, 1, ....

   G_X(s) = E[s^X] = sum_{n>=0} s^n P(X = n) = sum_{n>=0} (su)^n e^{-u} / n! = e^{su} e^{-u} = e^{u(s-1)},

   and the series converges for |su| < inf, i.e. for all values of s.
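A partial sum of the series converges quickly to the closed form e^{u(s-1)}; for instance with u = 3 and s = 0.5:

```python
from math import exp, factorial

# Partial-sum check of G_X(s) = sum (s*u)^n e^{-u}/n! = exp(u*(s - 1)).
u, s = 3.0, 0.5
partial = sum((s * u) ** n * exp(-u) / factorial(n) for n in range(60))
print(round(partial, 6))            # 0.22313
print(round(exp(u * (s - 1)), 6))   # 0.22313
```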

7. (a)
   sum_{k>=0} P(Y = k) = p + sum_{k>=1} p^{k-1} (1 - p)^2 = p + (1 - p)^2 / (1 - p) = p + 1 - p = 1.

   (b)
   G_Y(s) = E[s^Y]
          = sum_{k>=0} s^k P(Y = k)
          = p + sum_{k>=1} p^{k-1} (1 - p)^2 s^k
          = p + ((1 - p)^2 / p) sum_{k>=1} (ps)^k
          = p + ((1 - p)^2 / p) * ps / (1 - ps)
          = p + (1 - p)^2 s / (1 - ps),   for |ps| < 1.

   (c)
   G'_Y(s) = [(1 - ps)(1 - p)^2 - (1 - p)^2 s (-p)] / (1 - ps)^2 = (1 - p)^2 / (1 - ps)^2,

   and

   E[Y] = G'_Y(1) = (1 - p)^2 / (1 - p)^2 = 1.

   So E[Y] = 1, independent of the value of p.
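Both the closed form in (b) and the conclusion E[Y] = 1 in (c) can be checked numerically for an arbitrary p:

```python
# Compare the closed form of G_Y against its defining series, then
# estimate G'_Y(1) by a central difference; it should equal 1.
p = 0.3

def G(t):
    return p + (1 - p) ** 2 * t / (1 - p * t)

series = p + sum(p ** (k - 1) * (1 - p) ** 2 * 0.8 ** k for k in range(1, 200))
print(round(abs(G(0.8) - series), 10))             # 0.0 (closed form matches)

h = 1e-6
print(round((G(1 + h) - G(1 - h)) / (2 * h), 4))   # 1.0 (= E[Y])
```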

8. (a) P(the walk ever passes through state 15) = 1.
   (b) The expected number of steps required to first visit state 15 = 50.
   (c) P(the walk ever passes through state -15) = 0.00009275.
   (d) P(return to the origin) = 0.7.
   (e) The expected number of returns to 0 = 7/3.
   (f) p = 101/201 or p = 100/201.
   (g) P(hit state 15 before hitting state 0) = 2/3. (Think of it as a gambler's ruin problem.)

9. (a) We want P(first passage through r ever occurs) = Λ_r(1).

   (b) E[N] = Λ'_r(1).

10. To verify that the generating function of {u_n} (return to zero at trial n) is given by

    U(s) = (1 - 4pqs^2)^{-1/2},

    we use the fact that u_{2n} = C(2n, n) (pq)^n (and u_{2n+1} = 0), together with the binomial series expansion of (1 - 4pqs^2)^{-1/2}.
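The key identity behind this expansion is that the coefficient of x^n in (1 - x)^{-1/2} equals C(2n, n) / 4^n; substituting x = 4pqs^2 then gives u_{2n} = C(2n, n) (pq)^n. The identity can be checked term by term:

```python
from math import comb

# Coefficient of x^n in (1 - x)^(-1/2) via the generalized binomial
# theorem, compared with C(2n, n) / 4^n.
def genbinom(alpha, n):
    # generalized binomial coefficient alpha-choose-n
    out = 1.0
    for k in range(n):
        out *= (alpha - k) / (k + 1)
    return out

for n in range(6):
    lhs = genbinom(-0.5, n) * (-1) ** n   # coeff of x^n in (1 - x)^(-1/2)
    rhs = comb(2 * n, n) / 4 ** n
    print(n, round(lhs, 10), round(rhs, 10))
```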
11. (a) 15 / (q - p) = 75.
    (b) P(Mary's total worth reaches $3000 at some point) = (1 - (q/p)^15) / (1 - (q/p)^30) = 0.002278.
    (c) P(her total worth reaches $3000 at some point) = 1.
    (d) P(λ_15 < inf) = [P(λ_1 < inf)]^15 = (0.4/0.6)^15 ≈ 0.002284, where λ_k is the first-passage time k steps upward (with no barrier at 0).

    Now I will explain the difference between (c) and (d). For (c), each time Mary goes bankrupt before reaching $3000, Mary receives an instant infusion of cash from her aunt and her worth becomes $1500 again. The game then starts over from the beginning. So we can look at the whole procedure as a geometric distribution with probability p = 0.002278 of reaching $3000 (state 30) on each try. Therefore, Mary's worth will reach $3000 (state 30) at some try (waiting for a success in a geometric distribution is a recurrent event).
    For (d), if Mary goes bankrupt before hitting $3000 (state 30), then Mary's worth becomes $0. Moreover, she can continue to move to the left in the walk (arbitrarily far into debt) if she continues to lose. By contrast, in (c), hitting $0 is a kind of bouncing barrier that kicks her back up into state 15.
12. (a) P(it will be John who goes bankrupt) = (1 - (q/p)^10) / (1 - (q/p)^20) ≈ 0.999791002.
    (b) P(it will be John who goes bankrupt) = (1 - (q/p)^10) / (1 - (q/p)^40) ≈ 0.999790959.
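The gambler's-ruin formulas quoted in problems 11 and 12 can be wrapped in one small helper. The win probabilities below are taken from the answers above (p = 0.4 for Mary in 11(b); q/p = 3/7, i.e. p = 0.7, is inferred from the quoted values in problem 12):

```python
# Probability of reaching N before 0, starting from k, when each
# step wins with probability p (standard gambler's-ruin formula).
def p_reach_N(k, N, p):
    q = 1 - p
    if abs(p - q) < 1e-12:
        return k / N          # symmetric-walk case
    r = q / p
    return (1 - r ** k) / (1 - r ** N)

print(round(p_reach_N(15, 30, 0.4), 6))   # 11(b): 0.002278
print(round(p_reach_N(10, 20, 0.7), 9))   # 12(a): 0.999791002
print(round(p_reach_N(10, 40, 0.7), 9))   # 12(b): 0.999790959
```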
