Unit - III Joint Probability Distribution (Full Notes)
Also, if X and Y are two continuous random variables, we define the joint probability function for the random variables X and Y, also called the joint density function, by f(x, y), where
(i) f(x, y) ≥ 0      (ii) ∬ f(x, y) dx dy = 1.
Suppose X and Y take values from the sets {x1, x2, x3, ..., xm} and {y1, y2, y3, ..., yn} respectively; then P(X = xi, Y = yj) = f(xi, yj) is represented in the following two-way table.

X \ Y     y1           y2           --    yn           Total
x1        f(x1, y1)    f(x1, y2)    --    f(x1, yn)    f1(x1)
x2        f(x2, y1)    f(x2, y2)    --    f(x2, yn)    f1(x2)
--        --           --           --    --           --
xm        f(xm, y1)    f(xm, y2)    --    f(xm, yn)    f1(xm)
Total     f2(y1)       f2(y2)       --    f2(yn)       1

Here ∑i f1(xi) = 1 and ∑j f2(yj) = 1, which can be written jointly as ∑i ∑j f(xi, yj) = 1.
The mean value of the distribution of a variate X is commonly known as its expectation and is denoted by E(X). If f(x) is the probability function of the variate X then
E(X) = ∑ xi f(xi)        (discrete distribution)
E(X) = ∫ x f(x) dx       (continuous distribution)
If X and Y are two discrete random variables having the joint probability function f(x, y) then the expectations of X and Y are defined as
Mean of X:  μx = E(X) = ∑x ∑y x f(x, y) = ∑i xi f1(xi)
Mean of Y:  μy = E(Y) = ∑x ∑y y f(x, y) = ∑j yj f2(yj)
and E(XY) = ∑i ∑j xi yj Jij.
If X and Y are two continuous random variables having the joint probability function f(x, y) then the expectations of X and Y are defined as
μx = E(X) = ∬ x f(x, y) dx dy
μy = E(Y) = ∬ y f(x, y) dx dy
Variance: V(X) = σx² = ∬ (x − μx)² f(x, y) dx dy
V(Y) = σy² = ∬ (y − μy)² f(x, y) dx dy
Cov(X, Y) = ∬ (x − μx)(y − μy) f(x, y) dx dy = E(XY) − E(X) E(Y)
Correlation coefficient: ρ(X, Y) = Cov(X, Y) / (σx σy)
Note 1: P(a ≤ x ≤ b, c ≤ y ≤ d) = ∫_{x=a}^{b} ∫_{y=c}^{d} f(x, y) dy dx
Note 2: Marginal density function of X is f1(x) = ∫ f(x, y) dy
Note 3: Marginal density function of Y is f2(y) = ∫ f(x, y) dx
Properties of Expectation:
(i) E(kX) = k E(X)
(ii) E(X + k) = E(X) + k
(iii) E(X + Y) = E(X) + E(Y)
(iv) E(XY) = E(X) E(Y), provided X and Y are independent
Properties of Variance:
(i) V(X) = E(X²) − [E(X)]²
(ii) V(kX) = k² V(X)
(iii) V(X + k) = V(X)
(iv) V(X ± Y) = V(X) + V(Y), provided X and Y are independent
Problem:
Find E(X), E(X²) and σ² for the probability function p(X) defined by the following data
xi       1    2    3    ---------    n
p(xi)    k    2k   3k   ---------    nk
Solution:
Since the probabilities must sum to 1,
k + 2k + 3k + ......... + nk = 1
k(1 + 2 + 3 + ......... + n) = 1
k · n(n + 1)/2 = 1, so k = 2/[n(n + 1)]
E(X) = ∑ xi p(xi) = k(1² + 2² + 3² + .......... + n²) = [2/(n(n + 1))] · n(n + 1)(2n + 1)/6 = (2n + 1)/3
E(X²) = ∑ xi² p(xi) = k(1³ + 2³ + 3³ + .......... + n³) = [2/(n(n + 1))] · n²(n + 1)²/4 = n(n + 1)/2
σ² = E(X²) − [E(X)]² = n(n + 1)/2 − [(2n + 1)/3]²
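As a sanity check, the closed forms k = 2/(n(n + 1)), E(X) = (2n + 1)/3 and E(X²) = n(n + 1)/2 can be verified numerically for any particular n. The sketch below (plain Python with exact Fraction arithmetic; the function name `moments` is mine) does this for n = 10.

```python
from fractions import Fraction

def moments(n):
    # p(x_i) = i*k with k = 2/(n(n+1)), so the probabilities sum to 1
    k = Fraction(2, n * (n + 1))
    p = {i: i * k for i in range(1, n + 1)}
    ex  = sum(x * px for x, px in p.items())       # E(X)
    ex2 = sum(x * x * px for x, px in p.items())   # E(X^2)
    return ex, ex2, ex2 - ex ** 2                  # E(X), E(X^2), sigma^2

n = 10
ex, ex2, var = moments(n)
assert ex == Fraction(2 * n + 1, 3)     # (2n+1)/3
assert ex2 == Fraction(n * (n + 1), 2)  # n(n+1)/2
print(ex, ex2, var)                     # 7 55 6
```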
Problem:
The joint probability distribution of two random variables X and Y is given by the following table (the entries are the joint probabilities Jij used in the solution below).

X \ Y     -4     2      7
1         1/8    1/4    1/8
5         1/4    1/8    1/8

Find (i) E(X) (ii) E(Y) (iii) E(XY) (iv) σx and σy (v) Cov(X, Y) (vi) ρ(X, Y)
Solution:
Summing rows and columns of the table gives the marginal distributions.
Distribution of X:
xi        1      5
f(xi)     1/2    1/2
Distribution of Y:
yj        -4     2      7
g(yj)     3/8    3/8    1/4
(i) μx = E(X) = ∑ xi f(xi) = (1)(1/2) + (5)(1/2) = 3. Thus μx = E(X) = 3.
(ii) μy = E(Y) = ∑ yj g(yj) = (−4)(3/8) + (2)(3/8) + (7)(1/4) = 1. Thus μy = E(Y) = 1.
(iii) E(XY) = ∑∑ xi yj Jij
= (1)(−4)(1/8) + (1)(2)(1/4) + (1)(7)(1/8) + (5)(−4)(1/4) + (5)(2)(1/8) + (5)(7)(1/8)
= −1/2 + 1/2 + 7/8 − 5 + 5/4 + 35/8 = 3/2
(iv) σx² = E(X²) − μx²; E(X²) = (1)(1/2) + (25)(1/2) = 13. Hence σx² = 13 − 9 = 4. Thus σx = 2.
σy² = E(Y²) − μy²; E(Y²) = (16)(3/8) + (4)(3/8) + (49)(1/4) = 79/4. Hence σy² = 79/4 − 1 = 75/4. Thus σy = √(75/4) ≈ 4.33.
(v) Cov(X, Y) = E(XY) − μx μy = 3/2 − (3)(1) = −3/2
(vi) ρ(X, Y) = Cov(X, Y)/(σx σy) = (−3/2)/[(2)(√75/2)] = −3/(2√75) ≈ −0.1732
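The arithmetic above can be cross-checked by brute force from the joint table; the snippet below (plain Python, exact fractions, variable names mine) recomputes every quantity.

```python
from fractions import Fraction as F
from math import sqrt

# Joint probabilities J[(x, y)] read off the two-way table
J = {(1, -4): F(1, 8), (1, 2): F(1, 4), (1, 7): F(1, 8),
     (5, -4): F(1, 4), (5, 2): F(1, 8), (5, 7): F(1, 8)}
assert sum(J.values()) == 1

ex  = sum(x * p for (x, y), p in J.items())          # E(X)
ey  = sum(y * p for (x, y), p in J.items())          # E(Y)
exy = sum(x * y * p for (x, y), p in J.items())      # E(XY)
vx  = sum(x * x * p for (x, y), p in J.items()) - ex ** 2
vy  = sum(y * y * p for (x, y), p in J.items()) - ey ** 2
cov = exy - ex * ey
rho = float(cov) / (sqrt(vx) * sqrt(vy))
print(ex, ey, exy, cov)   # 3 1 3/2 -3/2
print(round(rho, 4))      # -0.1732
```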
Problem:
The joint probability distribution table for two random variables X and Y is as follows (the entries are the joint probabilities used in the solution below).

X \ Y     -2     -1     4      5
1         0.1    0.2    0      0.3
2         0.2    0.1    0.1    0

Determine the marginal probability distributions of X and Y. Also compute (a) the expectations of X and Y (b) the standard deviations of X and Y (c) the covariance of X and Y (d) the correlation of X and Y.
Solution:
Distribution of X:
xi        1      2
f(xi)     0.6    0.4
Distribution of Y:
yj        -2     -1     4      5
g(yj)     0.3    0.3    0.1    0.3
(i) μx = E(X) = (1)(0.6) + (2)(0.4) = 1.4. Thus μx = E(X) = 1.4.
(ii) μy = E(Y) = (−2)(0.3) + (−1)(0.3) + (4)(0.1) + (5)(0.3) = 1. Thus μy = E(Y) = 1.
(iii) σx² = E(X²) − μx² = [(1)(0.6) + (4)(0.4)] − (1.4)² = 2.2 − 1.96 = 0.24. Thus σx ≈ 0.4899.
σy² = E(Y²) − μy² = [(4)(0.3) + (1)(0.3) + (16)(0.1) + (25)(0.3)] − 1 = 10.6 − 1 = 9.6. Thus σy ≈ 3.0984.
(iv) E(XY) = ∑∑ xi yj Jij
= (1)(−2)(0.1) + (1)(−1)(0.2) + (1)(4)(0) + (1)(5)(0.3)
+ (2)(−2)(0.2) + (2)(−1)(0.1) + (2)(4)(0.1) + (2)(5)(0)
= 0.9
Cov(X, Y) = E(XY) − μx μy = 0.9 − (1.4)(1) = −0.5
(v) ρ(X, Y) = Cov(X, Y)/(σx σy) = −0.5/[(0.4899)(3.0984)] ≈ −0.329
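Because σy must be computed as √(E(Y²) − μy²), a quick numeric check is useful; the sketch below (plain Python, with the joint probabilities read off the E(XY) expansion above) confirms σy = √9.6 ≈ 3.0984 and ρ ≈ −0.329.

```python
from math import sqrt

# Joint table entries inferred from the E(XY) expansion in the solution
J = {(1, -2): .1, (1, -1): .2, (1, 4): 0., (1, 5): .3,
     (2, -2): .2, (2, -1): .1, (2, 4): .1, (2, 5): 0.}
ex  = sum(x * p for (x, y), p in J.items())          # 1.4
ey  = sum(y * p for (x, y), p in J.items())          # 1.0
exy = sum(x * y * p for (x, y), p in J.items())      # 0.9
sx  = sqrt(sum(x * x * p for (x, y), p in J.items()) - ex ** 2)
sy  = sqrt(sum(y * y * p for (x, y), p in J.items()) - ey ** 2)
cov = exy - ex * ey                                  # -0.5
rho = cov / (sx * sy)
print(round(sx, 4), round(sy, 4), round(cov, 4), round(rho, 4))
```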
Problem:
The joint probability distribution of two random variables x and y is given below (the entries are the joint probabilities f(xi, yj) used in the solution).

x \ y     1      3       9
2         1/8    1/24    1/12
4         1/4    1/4     0
6         1/8    1/24    1/12

Find the marginal distributions of x and y, the expectations of x, y and xy, and Cov(x, y).
Solution:
(i) Marginal distributions:
xi        2      4      6
f(xi)     1/4    1/2    1/4
yj        1      3      9
g(yj)     1/2    1/3    1/6
(ii) Expectation of x:
E(x) = μx = ∑ xi f(xi) = (2)(1/4) + (4)(1/2) + (6)(1/4) = 4
Expectation of y:
E(y) = μy = ∑ yj g(yj) = (1)(1/2) + (3)(1/3) + (9)(1/6) = 3
E(xy) = ∑∑ xi yj f(xi, yj) = (2)(1)(1/8) + (2)(3)(1/24) + (2)(9)(1/12)
+ (4)(1)(1/4) + (4)(3)(1/4) + (4)(9)(0) + (6)(1)(1/8) + (6)(3)(1/24) + (6)(9)(1/12) = 12
Cov(x, y) = E(xy) − μx μy = 12 − (4)(3) = 0
Problem:
The joint probability distribution of two random variables x and y is given below
x \ y     2      3      4
Solution:
xi 1 2
yj 2 3 4
(ii) Expectation of x
Expectation of y
Problem:
The joint probability distribution of two discrete random variables x and y is given by the
table. Determine the marginal distributions of x and y. Also find whether x and y are
independent
x \ y     1      3      6
Problem:
X \ Y     -3     2      4
Problem.
Find the constant k so that
P(x, y) = { k(x + 1)e^(−y),   0 < x < 1, y > 0
          { 0,                otherwise
is a joint probability function. Are x and y independent?
Solution:
For P(x, y) to be a joint probability function we need ∬ P(x, y) dx dy = 1.
∫_{y=0}^{∞} ∫_{x=0}^{1} k(x + 1)e^(−y) dx dy = k [∫_{x=0}^{1} (x + 1) dx] [∫_{y=0}^{∞} e^(−y) dy] = k · (3/2) · 1 = 1
Hence k = 2/3.
Marginal density of x:
P1(x) = ∫_{0}^{∞} P(x, y) dy = (2/3)(x + 1) ∫_{0}^{∞} e^(−y) dy = (2/3)(x + 1) [−e^(−y)]_{0}^{∞} = (2/3)(x + 1),   0 < x < 1
Marginal density of y:
P2(y) = ∫_{0}^{1} P(x, y) dx = (2/3) e^(−y) ∫_{0}^{1} (x + 1) dx = (2/3) e^(−y) [x²/2 + x]_{0}^{1} = (2/3)(3/2) e^(−y) = e^(−y),   y > 0
Since P1(x) · P2(y) = (2/3)(x + 1)e^(−y) = P(x, y), the variables x and y are independent.
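The value k = 2/3 can also be checked numerically; the sketch below (plain Python, midpoint rule, with y truncated at 40 where e^(−y) is negligible; the grid size is an arbitrary choice of mine) approximates the double integral and confirms that the density factorises into its two marginals.

```python
from math import exp

k, n = 2.0 / 3.0, 600
hx, hy = 1.0 / n, 40.0 / n
# Midpoint-rule approximation of the double integral of k(x+1)e^{-y}
total = sum(k * ((i + 0.5) * hx + 1) * exp(-(j + 0.5) * hy) * hx * hy
            for i in range(n) for j in range(n))
print(round(total, 3))  # 1.0

# Independence: the joint density equals the product of its marginals
P  = lambda x, y: (2.0 / 3.0) * (x + 1) * exp(-y)
P1 = lambda x: (2.0 / 3.0) * (x + 1)   # marginal of x
P2 = lambda y: exp(-y)                 # marginal of y
assert abs(P(0.3, 1.2) - P1(0.3) * P2(1.2)) < 1e-12
```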
Problem:
Find the marginal density function of x and y. Verify that x and y are not independent.
Problem:
The joint probability function of two continuous random variables X and Y is given by
f(x, y) = { c(2x + y),   0 ≤ x ≤ 2, 0 ≤ y ≤ 3
          { 0,           otherwise
Find (i) the constant c (ii) P(x < 2, y < 1) (iii) P(x ≥ 1, y ≤ 2)
(iv) the marginal probability functions of x and y.
Problem:
f(x, y) = { 4xy,   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
          { 0,     otherwise
Problem:
If X and Y are continuous random variables having the joint probability density function
f(x, y) = { c(x² + y²),   0 ≤ x ≤ 1, 0 ≤ y ≤ 1
          { 0,            otherwise
find (i) the constant c (ii) P(x < 1/2, y > 1/2) (iii) P(1/4 < x < 3/4) (iv) P(y < 1/2).
Problem:
The joint density function of two continuous random variables X and Y is given by
f(x, y) = { xy/96,   0 < x < 4, 1 < y < 5
          { 0,       otherwise
Problem:
Verify that
f(x, y) = { e^(−(x+y)),   x ≥ 0, y ≥ 0
          { 0,            otherwise
is the density function of a joint probability distribution. Then evaluate the following: (i) P(1/2 < x < 2, 0 < y < 4) (ii) P(x < 1)
(iii) P(x > y) (iv) P(x + y ≤ 1)
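For the last density, the separable form e^(−(x+y)) = e^(−x) e^(−y) makes rectangle probabilities products of one-dimensional integrals; the sketch below (plain Python midpoint rule; the truncation at 40 and grid size are arbitrary choices of mine) checks, for example, P(x < 1) = 1 − e^(−1) ≈ 0.6321.

```python
from math import exp

def integral(f, a, b, n=4000):
    # composite midpoint rule on [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda t: exp(-t)
total    = integral(f, 0, 40) ** 2                  # whole quadrant, ~ 1
p_x_lt_1 = integral(f, 0, 1) * integral(f, 0, 40)   # P(x < 1) = 1 - e^{-1}
print(round(total, 3), round(p_x_lt_1, 4))          # 1.0 0.6321
# By the symmetry of e^{-(x+y)} in x and y, P(x > y) = 1/2.
```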
Problem:
A fair coin is tossed thrice. The random variables X and Y are defined as follows: X = 0 or 1 according as a head or a tail occurs on the first toss; Y = the number of heads.
(i) Determine the probability distributions of X and Y.
(ii) Obtain the joint distribution of X and Y.
(iii) Obtain the expectations of X, Y and XY. Also find the standard deviations of X and Y.
(iv) Compute the covariance and correlation of X and Y.
Solution:
The sample space S and the associated values of the random variables X and Y are given by the following table.

Outcome   HHH   HHT   HTH   HTT   THH   THT   TTH   TTT
X         0     0     0     0     1     1     1     1
Y         3     2     2     1     2     1     1     0

(i) P(X = 0) = 4/8 = 1/2,  P(X = 1) = 4/8 = 1/2
P(Y = 0) = 1/8,  P(Y = 1) = 3/8,  P(Y = 2) = 3/8,  P(Y = 3) = 1/8
Hence the distributions are
xi        0      1
f(xi)     1/2    1/2
yj        0      1      2      3
f(yj)     1/8    3/8    3/8    1/8
(ii) The joint distribution of X and Y is found by computing
J11 = P(X = 0, Y = 0) = 0 (X = 0 means the first toss is a head, so Y = 0, i.e. no heads at all, is impossible)
J12 = P(X = 0, Y = 1) = 1/8
J13 = P(X = 0, Y = 2) = 2/8 = 1/4
J14 = P(X = 0, Y = 3) = 1/8
J21 = P(X = 1, Y = 0) = 1/8
J22 = P(X = 1, Y = 1) = 2/8 = 1/4
J23 = P(X = 1, Y = 2) = 1/8
J24 = P(X = 1, Y = 3) = 0

X \ Y     0      1      2      3      Sum
0         0      1/8    1/4    1/8    1/2
1         1/8    1/4    1/8    0      1/2
Sum       1/8    3/8    3/8    1/8    1

(iii) μx = E(X) = ∑ xi f(xi) = (0)(1/2) + (1)(1/2) = 1/2
μy = E(Y) = ∑ yj f(yj) = (0)(1/8) + (1)(3/8) + (2)(3/8) + (3)(1/8) = 3/2
E(XY) = ∑∑ xi yj Jij = (1)(1)(1/4) + (1)(2)(1/8) = 1/2   (all terms with xi = 0 or Jij = 0 vanish)
σx² = E(X²) − μx² = [(0)(1/2) + (1)(1/2)] − 1/4 = 1/4, so σx = 1/2
σy² = E(Y²) − μy² = [(0)(1/8) + (1)(3/8) + (4)(3/8) + (9)(1/8)] − 9/4 = 3 − 9/4 = 3/4, so σy = √3/2
(iv) Cov(X, Y) = E(XY) − μx μy = 1/2 − (1/2)(3/2) = −1/4
ρ(X, Y) = Cov(X, Y)/(σx σy) = (−1/4)/[(1/2)(√3/2)] = −1/√3 ≈ −0.577
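The whole solution can be reproduced by enumerating the eight equally likely outcomes; the sketch below (plain Python, exact fractions, variable names mine) rebuilds the joint distribution and the moments.

```python
from fractions import Fraction
from itertools import product
from math import sqrt

# X = 0 for a head on the first toss, 1 for a tail; Y = number of heads
J = {}
for toss in product('HT', repeat=3):        # 8 equally likely outcomes
    key = (0 if toss[0] == 'H' else 1, toss.count('H'))
    J[key] = J.get(key, Fraction(0)) + Fraction(1, 8)

ex  = sum(x * p for (x, y), p in J.items())
ey  = sum(y * p for (x, y), p in J.items())
exy = sum(x * y * p for (x, y), p in J.items())
cov = exy - ex * ey
vx  = sum(x * x * p for (x, y), p in J.items()) - ex ** 2
vy  = sum(y * y * p for (x, y), p in J.items()) - ey ** 2
rho = float(cov) / (sqrt(vx) * sqrt(vy))
print(ex, ey, exy, cov)    # 1/2 3/2 1/2 -1/4
print(round(rho, 4))       # -0.5774
```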
Markov Chains:
Probability Vector:
A vector v = (v1, v2, v3, ..., vn) is called a probability vector if each of its components is non-negative and their sum is equal to unity.
Ex: u = (1, 0), v = (1/2, 1/2), w = (1/4, 1/4, 1/2) are probability vectors.
Note:
If v is not a probability vector but each of the vi (i = 1 to n) is non-negative, then λv is a probability vector when λ = 1/(∑_{i=1}^{n} vi).
Ex: If v = (1, 2, 3) then λ = 1/6 and λv = (1/6, 2/6, 3/6) is a probability vector.
Stochastic Matrix:
A square matrix P = [pij] having every row in the form of a probability vector is called a stochastic matrix.
Ex: (i) The identity matrix I of any order, e.g.
I2 = [ 1  0 ]        I3 = [ 1  0  0 ]
     [ 0  1 ]             [ 0  1  0 ]
                          [ 0  0  1 ]
(ii) [ 1/2  1/2 ]    (iii) [ 1/2  1/2   0  ]
     [  0    1  ]          [  0   1/2  1/2 ]
                           [ 1/2   0   1/2 ]
Regular Stochastic Matrix:
A stochastic matrix P is said to be a regular stochastic matrix if all the entries of some power P^n are positive.
Ex: A = [  0    1  ]
        [ 1/2  1/2 ]
A² = [  0    1  ] [  0    1  ] = [ 1/2  1/2 ]
     [ 1/2  1/2 ] [ 1/2  1/2 ]   [ 1/4  3/4 ]
Since all the entries of A² are positive, A is a regular stochastic matrix (here n = 2).
Properties of a Regular stochastic Matrix:
The following properties are associated with a regular stochastic matrix 𝑃 of order 𝑛.
1. (a) 𝑃 has a unique fixed point 𝑥 = {𝑥1 , 𝑥2 , 𝑥3 , 𝑥4 … … … . 𝑥𝑛 } such that 𝑥𝑃 = 𝑥
(b) P has a unique fixed probability vector 𝑣 = {𝑣1 , 𝑣2 , 𝑣3 , 𝑣4 … … … . 𝑣𝑛 } such that 𝑣𝑃 = 𝑣
2. P², P³, P⁴, ...... approach the matrix V whose rows are each the fixed probability vector v.
1. Show that if v = (v1, v2) is a probability vector and A is a 2×2 stochastic matrix, then vA is also a probability vector.
Solution: By data, a1 + a2 = 1, b1 + b2 = 1, v1 + v2 = 1.
vA = (v1, v2) [ a1  a2 ] = (v1 a1 + v2 b1,  v1 a2 + v2 b2)
              [ b1  b2 ]
The components are non-negative and their sum is v1(a1 + a2) + v2(b1 + b2) = v1 + v2 = 1, so vA is a probability vector.
2. Find the unique fixed probability vector of the regular stochastic matrix A = [ 3/4  1/4 ]
                                                                                 [ 1/2  1/2 ]
Solution: We have to find the unique fixed probability vector v = (x, y)
such that x + y = 1 and vA = v.
vA = v ⇒ (x, y) [ 3/4  1/4 ] = (x, y)
                [ 1/2  1/2 ]
(3x/4 + y/2,  x/4 + y/2) = (x, y)
3x + 2y = 4x ....... (1)      x + 2y = 4y ....... (2)
Using x + y = 1 ⇒ y = 1 − x.
Equation (1): 3x + 2(1 − x) = 4x
3x + 2 − 2x = 4x
4x − x = 2
3x = 2, so x = 2/3
y = 1 − x = 1 − 2/3 = 1/3
Thus the unique fixed probability vector is v = (x, y) = (2/3, 1/3).
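Property 2 above suggests a way to find the fixed vector without solving equations: repeatedly multiplying any starting distribution by A converges to v. The sketch below (plain Python, exact fractions; `step` is my helper for the row-vector product vA) illustrates this for the matrix of this problem.

```python
from fractions import Fraction as F

P = [[F(3, 4), F(1, 4)],
     [F(1, 2), F(1, 2)]]

def step(v, P):
    # row-vector times matrix: (vP)_j = sum_i v_i * P[i][j]
    return [sum(v[i] * P[i][j] for i in range(len(v))) for j in range(len(P))]

v = [F(1), F(0)]              # any starting probability vector works
for _ in range(60):
    v = step(v, P)
print([round(float(x), 4) for x in v])   # [0.6667, 0.3333] -> v = (2/3, 1/3)
```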
3. Find the unique fixed probability vector of the regular stochastic matrix
A = [  0    1    0  ]
    [ 1/6  1/2  1/3 ]
    [  0   2/3  1/3 ]
Solution: We have to find the unique fixed probability vector v = (x, y, z)
such that x + y + z = 1 and vA = v.
vA = v ⇒ (x, y, z) A = (x, y, z)
(y/6,  x + y/2 + 2z/3,  y/3 + z/3) = (x, y, z)
y = 6x ....... (1)      6x + 3y + 4z = 6y ....... (2)      y + z = 3z ....... (3)
Using x + y + z = 1 ⇒ z = 1 − x − y = 1 − x − 6x ⇒ z = 1 − 7x
From (3): y = 2z ⇒ 6x = 2(1 − 7x) ⇒ 6x = 2 − 14x ⇒ 20x = 2 ⇒ x = 1/10
Hence y = 6x = 6/10 = 3/5 and z = 1 − 7x = 3/10.
(Check with (2): 6(1/10) + 3(6/10) + 4(3/10) = 36/10 = 6y ✓)
Thus the unique fixed probability vector is v = (x, y, z) = (1/10, 3/5, 3/10).
4. Find the unique fixed probability vector of the regular stochastic matrix
A = [  0   1/2  1/4  1/4 ]
    [ 1/2   0   1/4  1/4 ]
    [ 1/2  1/2   0    0  ]
    [ 1/2  1/2   0    0  ]
Solution: We have to find the unique fixed probability vector v = (a, b, c, d)
such that a + b + c + d = 1 and vA = v.
vA = v ⇒ (a, b, c, d) A = (a, b, c, d)
(b/2 + c/2 + d/2,  a/2 + c/2 + d/2,  a/4 + b/4,  a/4 + b/4) = (a, b, c, d)
b + c + d = 2a .... (1)      a + c + d = 2b .... (2)      a + b = 4c .... (3)      a + b = 4d .... (4)
Subtracting (2) from (1): b − a = 2a − 2b ⇒ 3b = 3a ⇒ a = b.
Since a + b + c + d = 1, equation (1) gives 1 − a = 2a ⇒ a = 1/3, and hence b = 1/3.
Equation (3): a + b = 4c ⇒ 4c = 1/3 + 1/3 ⇒ c = 1/6
Equation (4): a + b = 4d ⇒ 4d = 1/3 + 1/3 ⇒ d = 1/6
Thus the unique fixed probability vector is v = (a, b, c, d) = (1/3, 1/3, 1/6, 1/6).
5. Show that P = [  0    1    0  ]
                 [  0    0    1  ]
                 [ 1/2  1/2   0  ]
is a regular stochastic matrix. Also find the associated unique fixed probability vector.
Solution: A stochastic matrix P is a regular stochastic matrix if all the entries of some power P^n are positive.
P² = P·P = [  0    0    1  ]
           [ 1/2  1/2   0  ]
           [  0   1/2  1/2 ]
P³ = P²·P = [ 1/2  1/2   0  ]
            [  0   1/2  1/2 ]
            [ 1/4  1/4  1/2 ]
P⁴ = P³·P = [  0   1/2  1/2 ]
            [ 1/4  1/4  1/2 ]
            [ 1/4  1/2  1/4 ]
We observe that all the entries of P⁴ are positive; hence P is a regular stochastic matrix.
We have to find the unique fixed probability vector v = (a, b, c)
such that a + b + c = 1 and vP = v.
vP = v ⇒ (a, b, c) P = (a, b, c)
(c/2,  a + c/2,  b) = (a, b, c)
c = 2a .... (1)      2a + c = 2b .... (2)      b = c .... (3)
From (1) and (3): b = c = 2a.
Using a + b + c = 1 ⇒ a + 2a + 2a = 1 ⇒ a = 1/5
c = 2a = 2/5 and b = c = 2/5
Thus the unique fixed probability vector is v = (a, b, c) = (1/5, 2/5, 2/5).
Markov Chain:
Definition:
A stochastic process in which the probability distribution of the next state depends only on the present state is called a Markov process. If the state space is discrete (finite or countably infinite), the process is called a discrete state process or chain, and the Markov process is then known as a Markov chain.
Further, if the state space is continuous, the process is called a continuous state process.
The transition probabilities pij of a Markov chain satisfy pij ≥ 0 and ∑j pij = 1 for every i. These two properties are precisely the requirements of a stochastic matrix; hence we conclude that the transition matrix of a Markov chain is a stochastic matrix.
Conclusion: The first row of the matrix reflects the fact that the person never commutes by train on two consecutive days: having travelled by train, he is sure to go by bus the next day. The second row reflects the fact that if the person went by bus on a particular day, he is equally likely to go by bus again or by train the next day; thus both probabilities are equal to 1/2.
2. Three boys A, B, C are throwing a ball to each other. A always throws the ball to B and B always throws the ball to C; C is just as likely to throw the ball to B as to A. Find the transition matrix of the chain.
Solution:
State space = {A, B, C} and the transition probability matrix (t.p.m.) is as follows:
          A     B     C
    A [   0     1     0  ]
P = B [   0     0     1  ]
    C [  1/2   1/2    0  ]
3. A student's study habits are as follows. If he studies one night, he is 30% sure to study the next night. On the other hand, if he does not study one night, he is 60% sure not to study the next night as well. Find the transition matrix for the chain of his study habits.
Solution:
State space = {a1 = studying, a2 = not studying} and the transition probability matrix is as follows:
          a1     a2
    a1 [ 0.3    0.7 ]
P = a2 [ 0.4    0.6 ]
State Classification:
Absorbing State:
A state i is called an absorbing state if the transition probabilities pij are such that
pij = { 1 for j = i
      { 0 otherwise
In other words, a state i is absorbing if the i-th row of the transition matrix P has 1 on the main diagonal and zeros everywhere else.
Transient State:
A state i is said to be a transient state if, when the system is in this state at some step, there is a chance (i.e. a non-zero probability) that it will not return to that state in a subsequent step.
Recurrent State:
A state i is said to be a recurrent state if, starting from state i, the system eventually returns to the same state. (Here it is implicit that the probability of return is one.)
Example:
Consider the two-state chain for which the transition matrix is P = [ 0  1 ]
                                                                    [ 1  0 ]
We find that P² = [ 1  0 ],  P³ = [ 0  1 ] = P.
                  [ 0  1 ]        [ 1  0 ]
Since P³ = P, the system returns to each state after two steps; therefore both states of the chain are recurrent.
The entry 𝑝𝑖𝑗 in the transition probability matrix 𝑃 of the Markov chain is the probability
that the system changes from the state 𝑎𝑖 to 𝑎𝑗 in a single step that is 𝑎𝑖 → 𝑎𝑗
The probability that the system changes from the state 𝑎𝑖 to the state 𝑎𝑗 in exactly 𝑛 steps is
denoted by 𝑝𝑖𝑗 (𝑛)
The matrix formed by the probabilities 𝑝𝑖𝑗 (𝑛) is called the 𝑛 – step transition matrix denoted
by 𝑃(𝑛) .
𝑖. 𝑒. [𝑃(𝑛) ] = [𝑝𝑖𝑗 (𝑛) ] is obviously a stochastic matrix.
It can be proved that the 𝑛 – step transition matrix is equal to the 𝑛𝑡ℎ power of 𝑃.
That is 𝑃(𝑛) = 𝑃𝑛 .
Let P be the transition probability matrix of the Markov chain and let p = (pi) = (p1, p2, p3, ..., pm) be the probability distribution at some arbitrary time. Then pP, pP², pP³, ..., pP^n are respectively the probability distributions of the system after one step, two steps, ..., n steps.
Let p(0) = [p1(0), p2(0), ..., pm(0)] denote the initial probability distribution at the start of the process, and let p(n) = [p1(n), p2(n), ..., pm(n)] denote the probability distribution at the end of n steps.
We have p(1) = p(0) P,
p(2) = p(1) P = p(0) P²,
p(3) = p(2) P = p(0) P³, ........, p(n) = p(n−1) P = p(0) P^n.
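These relations are easy to exercise numerically; the sketch below (plain Python, exact fractions, helper name `step` is mine) iterates p(n) = p(n−1) P for the two-state commuter chain discussed in the example that follows, starting from p(0) = (1/2, 1/2).

```python
from fractions import Fraction as F

P = [[F(0), F(1)],
     [F(1, 2), F(1, 2)]]
p = [F(1, 2), F(1, 2)]        # p(0)

def step(p, P):
    # one application of the chain: p(n) = p(n-1) P
    return [sum(p[i] * P[i][j] for i in range(len(p))) for j in range(len(P))]

for n in range(1, 4):
    p = step(p, P)
    print(n, p)   # p(1) = (1/4, 3/4), p(2) = (3/8, 5/8), p(3) = (5/16, 11/16)
```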
Example-1. A person commutes to his office every day either by train or by bus. Suppose he never goes by train on two consecutive days, but if he goes by bus one day, then the next day he is just as likely to go by bus again as he is to travel by train.
Solution:
The state space of the system is {train (t), bus (b)}.
The stochastic process is a Markov chain since the outcome on any day depends only on what happened the previous day. The transition probability matrix is as follows:
          t     b
    t [   0     1  ]   = [ p_tt  p_tb ]
P = b [  1/2   1/2 ]     [ p_bt  p_bb ]
We shall find P² and P³:
P² = [  0    1  ] [  0    1  ] = [ 1/2  1/2 ] = [ p_tt(2)  p_tb(2) ]
     [ 1/2  1/2 ] [ 1/2  1/2 ]   [ 1/4  3/4 ]   [ p_bt(2)  p_bb(2) ]
P³ = P·P² = [  0    1  ] [ 1/2  1/2 ] = [ 1/4  3/4 ] = [ p_tt(3)  p_tb(3) ]
            [ 1/2  1/2 ] [ 1/4  3/4 ]   [ 3/8  5/8 ]   [ p_bt(3)  p_bb(3) ]
Here p_tb(2) = 1/2 means that the probability that the system changes from state t to state b in exactly two steps is 1/2.
Next let us create an initial probability distribution for the state of the process.
Suppose the person tosses a fair coin and decides that he will go by bus if a head turns up.
Therefore p(b) = 1/2 and p(t) = 1/2, i.e. p(0) = (1/2, 1/2).
Problem – 1: A student's study habits are as follows. If he studies one night, he is 70% sure not to study the next night. On the other hand, if he does not study one night, he is 60% sure to study the next night. In the long run, how often does he study?
Solution: State space = {A = studying, B = not studying}
The transition probability matrix (t.p.m.) for the problem is as follows:
          A     B
    A [ 0.3   0.7 ]
P = B [ 0.6   0.4 ]
To know the student's study habits in the long run, we have to find the steady state probability vector v = (x, y)
such that x + y = 1 and vP = v.
vP = v ⇒ (0.3x + 0.6y, 0.7x + 0.4y) = (x, y)
0.3x + 0.6y = x ⇒ 0.6y = 0.7x ⇒ 6y = 7x
Using x + y = 1: 6(1 − x) = 7x ⇒ 13x = 6 ⇒ x = 6/13 and y = 7/13.
Thus in the long run the student studies 6/13 ≈ 46% of the nights.
Problem – 2: A man smokes either filter or non-filter cigarettes. If he smokes filter cigarettes one week, he is 80% sure to smoke filter cigarettes again the next week; if he smokes non-filter cigarettes one week, he is 70% sure to smoke non-filter cigarettes again the next week. In the long run, how often does he smoke filter cigarettes?
Solution:
State space = {A = he smokes filter cigarettes, B = he smokes non-filter cigarettes}
The transition probability matrix (t.p.m.) for the problem is as follows:
          A     B
    A [ 0.8   0.2 ]
P = B [ 0.3   0.7 ]
To find how often he smokes filter cigarettes in the long run, we have to find the steady state probability vector v = (x, y)
such that x + y = 1 and vP = v.
vP = v ⇒ (0.8x + 0.3y, 0.2x + 0.7y) = (x, y)
0.8x + 0.3y = x ⇒ 0.3y = 0.2x ⇒ 3y = 2x
Using x + y = 1: 3(1 − x) = 2x ⇒ 5x = 3 ⇒ x = 3/5 and y = 2/5.
Thus in the long run he smokes filter cigarettes 3/5 = 60% of the time.
Problem – 3: Every evening a man visits one of two clubs, A or B. He never visits club A on two consecutive evenings; if he visits club B one evening, he is equally likely to visit club A or club B the next evening.
(i) Find the transition matrix. (ii) Show that it is regular and find how often, in the long run, he visits each club.
Solution:
State space = {A = club A, B = club B}
(i) The transition probability matrix (t.p.m.) for the problem is as follows:
          A     B
    A [   0     1  ]
P = B [  1/2   1/2 ]
(ii) P² = [  0    1  ] [  0    1  ] = [ 1/2  1/2 ]
          [ 1/2  1/2 ] [ 1/2  1/2 ]   [ 1/4  3/4 ]
Since all the entries of P² are positive, P is a regular stochastic matrix.
To find how often he visits the clubs in the long run, we have to find the unique fixed probability vector v = (x, y)
such that x + y = 1 and vP = v.
vP = v ⇒ (x, y) [  0    1  ] = (x, y)
                [ 1/2  1/2 ]
(y/2,  x + y/2) = (x, y)
y = 2x ....... (1)      2x + y = 2y ....... (2)
Using x + y = 1 ⇒ y = 1 − x:
2x = 1 − x
2x + x = 1, so x = 1/3
y = 1 − x = 1 − 1/3 = 2/3
Thus the unique fixed probability vector for the problem is v = (x, y) = (1/3, 2/3): in the long run he visits club A one third of the evenings and club B two thirds.
Problem – 4: Prove that the Markov chain whose t.p.m. is P = [  0   2/3  1/3 ]
                                                             [ 1/2   0   1/2 ]
                                                             [ 1/2  1/2   0  ]
is irreducible. Find the corresponding stationary probability vector.
Solution: A Markov chain is irreducible if its t.p.m. is regular. [We need to show that P is a regular stochastic matrix: a stochastic matrix P is regular if all the entries of some power P^n are positive.]
P = (1/6) [ 0  4  2 ]
          [ 3  0  3 ]
          [ 3  3  0 ]
P² = (1/36) [ 0  4  2 ] [ 0  4  2 ] = (1/36) [ 18   6  12 ]
            [ 3  0  3 ] [ 3  0  3 ]          [  9  21   6 ]
            [ 3  3  0 ] [ 3  3  0 ]          [  9  12  15 ]
Since all the entries in P² are positive, the t.p.m. P is regular, and the Markov chain having t.p.m. P is irreducible.
We have to find the unique fixed probability vector v = (x, y, z)
such that x + y + z = 1 and vP = v.
vP = v ⇒ (1/6)(3y + 3z,  4x + 3z,  2x + 3y) = (x, y, z)
3y + 3z = 6x ....... (1)      4x + 3z = 6y ....... (2)      2x + 3y = 6z ....... (3)
Using x + y + z = 1 ⇒ y + z = 1 − x, equation (1) gives
3(1 − x) = 6x ⇒ 6x + 3x = 3 ⇒ x = 1/3
Equation (2): 4/3 = 6y − 3z ....... (4)
Equation (3): 2/3 = 6z − 3y; multiplying by 2, 4/3 = 12z − 6y ....... (5)
Adding (4) and (5): 4/3 + 4/3 = 9z ⇒ z = 8/27
y = 1 − x − z = 1 − 1/3 − 8/27 = 10/27
Thus the unique fixed probability vector is v = (x, y, z) = (1/3, 10/27, 8/27) = (9/27, 10/27, 8/27).
Problem-5: Three boys A, B, C are throwing a ball to each other. A always throws the ball to B and B always throws the ball to C; C is just as likely to throw the ball to B as to A. If C was the first person to throw the ball, find the probabilities that after three throws (i) A has the ball, (ii) B has the ball, (iii) C has the ball.
Solution: State space = {A, B, C} and the transition probability matrix (t.p.m.) is as follows:
          A     B     C
    A [   0     1     0  ]
P = B [   0     0     1  ]
    C [  1/2   1/2    0  ]
Initially C has the ball, so the associated initial probability vector is p(0) = (0, 0, 1).
Since the probabilities are desired after three throws, we have to find p(3) = p(0) P³.
P² = P·P = [  0    0    1  ]
           [ 1/2  1/2   0  ]
           [  0   1/2  1/2 ]
P³ = P²·P = [ 1/2  1/2   0  ]
            [  0   1/2  1/2 ]
            [ 1/4  1/4  1/2 ]
p(3) = p(0) P³ = (0, 0, 1) P³ = (1/4, 1/4, 1/2)
After 3 throws, the probability that the ball is with A = 1/4, with B = 1/4, and with C = 1/2.
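As a check on the computation of p(3), the chain can be stepped directly; the sketch below (plain Python, exact fractions, helper name `step` is mine) reproduces (1/4, 1/4, 1/2).

```python
from fractions import Fraction as F

P = [[F(0), F(1), F(0)],
     [F(0), F(0), F(1)],
     [F(1, 2), F(1, 2), F(0)]]
p = [F(0), F(0), F(1)]        # p(0): C has the ball

def step(p, P):
    # p(n) = p(n-1) P
    return [sum(p[i] * P[i][j] for i in range(len(p))) for j in range(len(P))]

for _ in range(3):            # three throws
    p = step(p, P)
print(p)                      # probabilities for A, B, C after three throws
```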