Introduction to Stochastic Processes
Parthanil Roy, Indian Statistical Institute
Lecture 11: 08/11/2021
Parthanil Roy Intro to Stoch Proc Lecture 11: 08/11/2021 1 / 20
Example: Simple random walk on Z_6
One can define simple random walk on Z_n (for any n ≥ 2) in a similar fashion.
An on-off process
Suppose you have d (≥ 2) many lamps, all of which are switched off at the beginning. Our on-off process goes as follows:
(i) Choose a lamp at random and switch it on.
(ii) Choose another lamp at random (independently of the previous choice and the configuration of the lamps).
(iii) Change the configuration of the chosen lamp (from on to off or vice versa).
(iv) Repeat Steps (ii) and (iii).
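The steps above can be simulated directly (a minimal sketch in Python; the function name and parameters are illustrative, not from the lecture). Note that the very first change, applied to the all-off configuration, necessarily switches a lamp on, so a single uniform-flip loop covers Steps (i)–(iv):

```python
import random

def on_off_process(d, n_changes, seed=None):
    """Simulate the on-off process for d lamps through n_changes changes.

    Returns the list of configuration vectors X0, X1, ..., Xn, each a
    tuple of d entries in {0, 1} (0 = off, 1 = on).
    """
    rng = random.Random(seed)
    config = [0] * d                 # all lamps start switched off
    history = [tuple(config)]
    for _ in range(n_changes):
        k = rng.randrange(d)         # lamp chosen uniformly, independently of the past
        config[k] ^= 1               # flip its configuration: on -> off or off -> on
        history.append(tuple(config))
    return history

path = on_off_process(d=4, n_changes=10, seed=0)
print(path[0])   # (0, 0, 0, 0)
```

Each run produces a path of the process; exactly one coordinate changes at every step.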
An on-off process
For each lamp, we denote its configuration by 0 (= off) or 1 (= on).
For each n ≥ 0, let Xn denote the vector of configurations of the lamps after n changes.
Clearly X0 ≡ 0 := (0, 0, . . . , 0). Also
P(X1 = e1) = P(X1 = e2) = · · · = P(X1 = ed) = 1/d,
where e1, e2, . . . , ed are the standard basis vectors with ek having kth component 1 and the other components 0.
Simple random walk on Z_2^d
Let Xn be the vector of configurations of the lamps after n changes (n ≥ 0).
Simple random walk on Z_2^3 (transition diagram)
Then {Xm}m≥0 is a Markov chain on the d-dimensional discrete hypercube S = Z_2^d with initial distribution a = δ0 and transition matrix P = (pij)i,j∈Z_2^d given by
pij = 1/d if j − i ∈ {e1, e2, . . . , ed},
      0   otherwise.
This is called the simple random walk on Z_2^d.
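For a small d, the transition matrix P can be written out and checked against this formula (a sketch; the lexicographic ordering of the states is my own choice, not from the lecture):

```python
from itertools import product

def hypercube_walk_matrix(d):
    """Transition matrix of simple random walk on Z_2^d.

    States are the 2^d vectors in {0,1}^d, ordered lexicographically.
    p[i][j] = 1/d when the two states differ in exactly one coordinate
    (i.e. j - i is a standard basis vector mod 2), else 0.
    """
    states = list(product([0, 1], repeat=d))
    P = [[0.0] * len(states) for _ in states]
    for a, s in enumerate(states):
        for b, t in enumerate(states):
            if sum(x != y for x, y in zip(s, t)) == 1:
                P[a][b] = 1.0 / d
    return states, P

states, P = hypercube_walk_matrix(3)
# Every row sums to 1, so P is a stochastic matrix.
print(all(abs(sum(row) - 1.0) < 1e-12 for row in P))  # True
```

From the all-off state the walk moves to each of e1, . . . , ed with probability 1/d, matching the distribution of X1 on the previous slide.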
Random walks on groups
The last few examples fall under the general category called random walks on groups.
All of these groups are abelian: Z^d, Z_n, Z_2^d.
An example of a random walk on a non-abelian group comes from card shuffling. If a pack has n cards, then card shuffling gives rise to a random walk on the symmetric group Sn.
Gambler's ruin problem
Fix a, b ∈ N.
Two gamblers A and B have ₹a and ₹b, respectively.
They decide to play the following game.
They keep on tossing a fair coin independently.
Whenever a head appears, A gives ₹1 to B.
Whenever a tail appears, B gives ₹1 to A.
Question: What is the probability that A is ruined?
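Before developing the theory, one can estimate this probability by simulation (a sketch; for a fair coin the classical answer is b/(a + b), which the estimate should approach):

```python
import random

def ruin_probability(a, b, n_trials=20000, seed=0):
    """Estimate P(A is ruined) by simulating the fair-coin game n_trials times."""
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_trials):
        x = a                                     # A's current fortune
        while 0 < x < a + b:
            x += -1 if rng.random() < 0.5 else 1  # head: A pays Rs 1; tail: A receives Rs 1
        if x == 0:                                # A has been ruined
            ruined += 1
    return ruined / n_trials

print(ruin_probability(3, 2))   # close to b/(a+b) = 2/5
```

The total money in play is a + b, so the game ends exactly when A's fortune hits 0 (A ruined) or a + b (B ruined).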
Gambler's ruin problem
Suppose Xm = the amount of money (in ₹) A has after m (∈ N ∪ {0}) tosses.
Example 10: {Xm }m≥0 is a Markov chain on S = {0, 1, 2, . . . , a + b} with
initial distribution δa and transition probability matrix P = (pij )i,j∈S given by
pij = 0.5 if i ∈ {1, 2, . . . , a + b − 1} and |j − i| = 1,
      1   if i ∈ {0, a + b} and j = i,
      0   otherwise.
Transition diagram when a = 3 and b = 2
Simple random walk with absorbing barriers
The Markov chain {Xm }m≥0 on S = {0, 1, 2, . . . , a + b} with initial
distribution δa and transition probability matrix P = (pij )i,j∈S given by
pij = 0.5 if i ∈ {1, 2, . . . , a + b − 1} and |j − i| = 1,
      1   if i ∈ {0, a + b} and j = i,
      0   otherwise.
is also called the simple random walk starting from a with absorbing barriers 0
and a + b.
Transition diagram when a = 3 and b = 2
Simple random walk with absorbing barriers
Fix a, b ∈ N. Let {Xm }m≥0 be the simple random walk starting from a with
absorbing barriers 0 and a + b.
Transition diagram when a = 3 and b = 2
Exercise: Show that with probability 1, the Markov chain gets absorbed
either at 0 or at a + b. More precisely, show that P(τ < ∞) = 1, where
τ := inf{m ≥ 1 : Xm ∈ {0, a + b}}.
(Hint: Bound τ by a random variable that is finite with probability 1.)
Simple random walk with absorbing barriers
Fix a, b ∈ N. Let {Xm }m≥0 be the simple random walk starting from a with
absorbing barriers 0 and a + b.
Transition diagram when a = 3 and b = 2
Fact: P(τ < ∞) = 1, where τ := inf{m ≥ 1 : Xm ∈ {0, a + b}}.
Note that
Xτ = X1 if τ = 1,
     X2 if τ = 2,
     . . .
is the final value of the Markov chain.
Simple random walk with absorbing barriers
Fix a, b ∈ N. Let {Xm }m≥0 be the simple random walk starting from a with
absorbing barriers 0 and a + b.
Transition diagram when a = 3 and b = 2
Fact: P(τ < ∞) = 1, where τ := inf{m ≥ 1 : Xm ∈ {0, a + b}}.
Note: Xτ is the final value of the Markov chain. Clearly
P(Xτ ∈ {0, a + b}) = 1.
Question: P(Xτ = 0) = P(A is ruined) = ?
This question can be answered using Markov chain theory.
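One standard route (first-step analysis, anticipating later theory rather than quoting this lecture): writing hi = P(Xτ = 0 | X0 = i), conditioning on the first toss gives hi = 0.5 hi−1 + 0.5 hi+1 for 0 < i < a + b, with h0 = 1 and ha+b = 0. A sketch of solving this tridiagonal system numerically:

```python
def ruin_probabilities(a, b):
    """Solve h_i = 0.5*h_{i-1} + 0.5*h_{i+1}, h_0 = 1, h_{a+b} = 0,
    where h_i = P(X_tau = 0 | X_0 = i), by the Thomas algorithm."""
    N = a + b
    n = N - 1
    # Interior unknowns h_1..h_{N-1}: -0.5*h_{i-1} + h_i - 0.5*h_{i+1} = 0,
    # with the known boundary values h_0 = 1, h_N = 0 moved to the right side.
    lower = [-0.5] * n
    diag = [1.0] * n
    upper = [-0.5] * n
    rhs = [0.0] * n
    rhs[0] = 0.5                  # contribution of 0.5 * h_0
    for i in range(1, n):         # forward elimination
        w = lower[i] / diag[i - 1]
        diag[i] -= w * upper[i - 1]
        rhs[i] -= w * rhs[i - 1]
    h = [0.0] * n                 # back substitution
    h[-1] = rhs[-1] / diag[-1]
    for i in range(n - 2, -1, -1):
        h[i] = (rhs[i] - upper[i] * h[i + 1]) / diag[i]
    return [1.0] + h + [0.0]      # h_0, h_1, ..., h_N

h = ruin_probabilities(3, 2)
print(h[3])   # starting from a = 3: approximately b/(a+b) = 0.4
```

The answer agrees with the closed form h_i = (a + b − i)/(a + b) for the fair-coin walk.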
Simple random walk with reflecting barriers
Example 11: The Markov chain {Xm }m≥0 on S = {0, 1, 2, . . . , a + b} with
initial distribution δa and transition probability matrix P = (pij )i,j∈S given by
pij = 0.5 if i ∈ {1, 2, . . . , a + b − 1} and |j − i| = 1,
      1   if i = 0 and j = 1,
      1   if i = a + b and j = a + b − 1,
      0   otherwise.
is also called the simple random walk starting from a with reflecting barriers 0 and a + b.
Transition diagram when a = 2 and b = 3
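The four cases above translate directly into a matrix (a minimal sketch; the function name is my own):

```python
def reflecting_walk_matrix(a, b):
    """Transition matrix of SRW on {0, ..., a+b} with reflecting barriers 0 and a+b."""
    N = a + b
    P = [[0.0] * (N + 1) for _ in range(N + 1)]
    P[0][1] = 1.0              # reflected off the barrier at 0
    P[N][N - 1] = 1.0          # reflected off the barrier at a+b
    for i in range(1, N):
        P[i][i - 1] = 0.5      # interior states step left...
        P[i][i + 1] = 0.5      # ...or right with equal probability
    return P

P = reflecting_walk_matrix(2, 3)
print(all(abs(sum(row) - 1.0) < 1e-12 for row in P))  # True
```

Replacing the two reflecting rows by P[0][0] = 1 and P[N][N] = 1 would recover the absorbing-barrier chain of the previous slides.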
Nearest neighbour RW with reflecting/absorbing barriers
In both examples (RW with reflecting/absorbing barriers), the coin does not need to be unbiased.
In this case, we will have a nearest neighbour random walk with reflecting/absorbing barriers.
The transition probabilities (and hence the transition diagrams) will change
accordingly.
In the case of a nearest neighbour random walk with absorbing barriers, we
can still ask the same question: What is the chance that A is ruined?
This question can also be answered using Markov chain theory.
Two-step transition probabilities
Suppose {Xm }m≥0 is a Markov chain on a countable state space S with initial
distribution a and transition matrix P .
Fix two states i, j ∈ S .
Since i ∈ S = ⋃n≥0 Range(Xn), we get n ≥ 0 such that i ∈ Range(Xn), which is the same as saying P(Xn = i) > 0.
Goal: To compute P(Xn+2 = j | Xn = i).
As we will see, P(Xn+2 = j | Xn = i) does not depend on the choice of n (as long as it is well-defined).
This is called the two-step transition probability from i to j .
Two-step transition probabilities
Fix i, j ∈ S . Get n ≥ 0, such that P(Xn = i) > 0. Then
P(Xn+2 = j | Xn = i) = P(Xn+2 = j, Xn = i) / P(Xn = i).
The denominator = P(Xn = i)
= Σi0∈S · · · Σin−1∈S P(X0 = i0, . . . , Xn−1 = in−1, Xn = i)
= Σi0∈S · · · Σin−1∈S ai0 pi0 i1 pi1 i2 . . . pin−2 in−1 pin−1 i.
On the other hand, the numerator = P(Xn+2 = j, Xn = i)
= Σi0∈S · · · Σin−1∈S Σk∈S P(X0 = i0, . . . , Xn−1 = in−1, Xn = i, Xn+1 = k, Xn+2 = j)
= Σi0∈S · · · Σin−1∈S Σk∈S ai0 pi0 i1 pi1 i2 . . . pin−2 in−1 pin−1 i pik pkj.
Two-step transition probabilities
Fix i, j ∈ S . Get n ≥ 0, such that P(Xn = i) > 0. Then
P(Xn+2 = j | Xn = i)
= P(Xn+2 = j, Xn = i) / P(Xn = i)
= [Σi0∈S · · · Σin−1∈S Σk∈S ai0 pi0 i1 pi1 i2 . . . pin−2 in−1 pin−1 i pik pkj] / [Σi0∈S · · · Σin−1∈S ai0 pi0 i1 pi1 i2 . . . pin−2 in−1 pin−1 i]
= [Σi0∈S · · · Σin−1∈S ai0 pi0 i1 pi1 i2 . . . pin−2 in−1 pin−1 i (Σk∈S pik pkj)] / [Σi0∈S · · · Σin−1∈S ai0 pi0 i1 pi1 i2 . . . pin−2 in−1 pin−1 i]
= Σk∈S pik pkj =: pij^(2),
which does not depend on the choice of n as long as P(Xn = i) > 0.
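The last line says that the two-step probability pij^(2) is exactly the (i, j) entry of the matrix product P·P. A quick check on the absorbing-barrier chain with a = b = 1 (a small example of my own):

```python
def mat_mul(A, B):
    """Plain matrix product: C[i][j] = sum_k A[i][k] * B[k][j]."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Absorbing-barrier walk on S = {0, 1, 2} (a = b = 1).
P = [[1.0, 0.0, 0.0],
     [0.5, 0.0, 0.5],
     [0.0, 0.0, 1.0]]

P2 = mat_mul(P, P)
# Entry (1, 0): p_{10}^(2) = sum_k p_{1k} p_{k0} = 0.5*1 + 0*0 + 0.5*0
print(P2[1][0])   # 0.5
```

From the middle state the chain is absorbed at either barrier within two steps with probability 0.5 each, which is what the matrix square reports.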
N-step transition probabilities
Fix i, j ∈ S . Get n ≥ 0, such that P(Xn = i) > 0. Then
pij^(2) := P(Xn+2 = j | Xn = i) = Σk∈S pik pkj.
Note that
P^2 = (pij^(2))i,j∈S.
In other words, P^2 is the two-step transition probability matrix of {Xm}m≥0.
Claim: For all N ≥ 0, P^N is the N-step transition probability matrix. In other words, for all N ≥ 0 and for all i, j ∈ S,
P(Xn+N = j | Xn = i) = pij^(N),
where P^N = (pij^(N))i,j∈S for all N ≥ 0.
N-step transition probabilities
Claim: For all N ≥ 0, P^N is the N-step transition probability matrix. In other words, for all N ≥ 0 and for all i, j ∈ S,
P(Xn+N = j | Xn = i) = Σj1∈S · · · ΣjN−1∈S pij1 pj1 j2 . . . pjN−2 jN−1 pjN−1 j =: pij^(N).
This means that P^N = (pij^(N))i,j∈S for all N ≥ 0.
Remarks:
For N = 0, P^N = P^0 := I and hence the above claim simply says
P(Xn = j | Xn = i) = 1 if i = j,
                     0 if i ≠ j.
For N = 1, the claim holds by definition of P.
We just proved the claim when N = 2.
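The claim can also be sanity-checked numerically: computing P^N by repeated multiplication and verifying the semigroup identity P^(M+N) = P^M · P^N (a sketch on a toy two-state chain of my own; any stochastic matrix would do):

```python
def mat_mul(A, B):
    """Plain matrix product: C[i][j] = sum_k A[i][k] * B[k][j]."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, N):
    """P^N by repeated multiplication; P^0 is the identity, matching the N = 0 case."""
    n = len(P)
    R = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # identity
    for _ in range(N):
        R = mat_mul(R, P)
    return R

# A two-state chain as a toy example (my own numbers, not from the lecture).
P = [[0.9, 0.1],
     [0.4, 0.6]]

P5 = mat_pow(P, 5)
P2P3 = mat_mul(mat_pow(P, 2), mat_pow(P, 3))
print(all(abs(P5[i][j] - P2P3[i][j]) < 1e-12
          for i in range(2) for j in range(2)))  # True
```

The identity P^(M+N) = P^M · P^N is the matrix form of the Chapman–Kolmogorov equations for the N-step transition probabilities.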
Proof of claim
Fix i, j ∈ S . Get n ≥ 0, such that P(Xn = i) > 0. Fix N ≥ 3. Then
P(Xn+N = j | Xn = i) = P(Xn+N = j, Xn = i) / P(Xn = i).
Numerator =
Σi0∈S · · · Σin−1∈S Σj1∈S · · · ΣjN−1∈S ai0 pi0 i1 . . . pin−2 in−1 pin−1 i pij1 pj1 j2 . . . pjN−2 jN−1 pjN−1 j.
On the other hand, denominator =
Σi0∈S · · · Σin−1∈S ai0 pi0 i1 . . . pin−2 in−1 pin−1 i.
Therefore
P(Xn+N = j | Xn = i) = Σj1∈S · · · ΣjN−1∈S pij1 pj1 j2 . . . pjN−2 jN−1 pjN−1 j =: pij^(N).
This means that P^N = (pij^(N))i,j∈S for all N ≥ 0.