
Stochastic Processes

Definition 0.1. A family of random variables that are functions of, say, time is known as a
stochastic process.

That is, a stochastic process is defined as a family of random variables {X(t), t ∈ T}. The parameter t
usually represents time, so X(t) represents the value assumed by the random variable at time t.

Definition 0.2. T is called the index set or parameter space and is a subset of (−∞, ∞).

Definition 0.3. If the index set is discrete, e.g., T = {0, 1, 2, . . .}, then we have a discrete-(time)
parameter stochastic process. If T is continuous, e.g., T = [0, ∞), we call the process a continuous-
(time) parameter stochastic process.

Definition 0.4. The values assumed by the random variable X(t) are called states. The set S of possible
values that the random variable X(t) takes for each t ∈ T is known as the state space of the stochastic
process, and the set of possible values for the parameter t (i.e., the set T) is known as the parameter space.

Further, the state space of a stochastic process may be discrete or continuous.

Definition 0.5. If the state space is discrete, the process is referred to as a chain and the states are
usually identified with the set of natural numbers or a subset of it.

• An example of a discrete state space is the number of customers at a service facility.

• An example of a continuous state space is the length of time a customer has been waiting
for service.

Definition 0.6. If, for t1 < t2 < . . . < tn < t, for all n and all x1, x2, . . . , xn,

P{a ≤ X(t) ≤ b | X(t1) = x1, . . . , X(tn) = xn} = P{a ≤ X(t) ≤ b | X(tn) = xn},

then the process {X(t), t ∈ T} is a Markov process.

Definition 0.7. A discrete parameter discrete state Markov process is called Markov chain.

Example: Consider the experiment of throwing a fair die repeatedly. If Yn denotes the number
of 6s in the first n throws, then {Yn, n ≥ 1} is a Markov chain.
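
For illustration, a minimal simulation sketch of this example in Python with NumPy (the code and its names are illustrative additions, not part of the original notes):

import numpy as np

rng = np.random.default_rng(0)

# Simulate n throws of a fair die; Y_k counts the 6s in the first k throws.
n = 20
throws = rng.integers(1, 7, size=n)   # faces 1..6 (upper bound exclusive)
Y = np.cumsum(throws == 6)

# Given Y_n, the next value is Y_n + 1 with probability 1/6 and Y_n with
# probability 5/6, regardless of the earlier history: the Markov property.
print(throws)
print(Y)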

A random process in which the occurrence of a future state depends on the immediately pre-
ceding state, and only on it, is known as a Markov chain.

A state is a condition or location of an object in the system at a particular time.

Assumptions made for Markov chains:

• Finite number of states.

• States are mutually exclusive.

• States are collectively exhaustive.

• The probability of moving from one state to another is constant over time.

Transition probabilities:

The transition probabilities pij(n) are the basic entities in the study of Markov chains. The transition
probability of moving from state i to state j at the nth step is pij(n) = P{Xn = j | Xn−1 = i}.

Stationary transition probabilities pij are called one-step transition probabilities, as they represent
the probability of a transition from state i to state j at two successive time points, i.e., in one step.

The transition probabilities pij satisfy the following properties:

(i) pij ≥ 0.

(ii) ∑_{j=1}^{n} pij = 1, for all i.

One step transition probabilities may be written in the matrix form as follows:

 
    P = [ p11  p12  ...  p1n
          p21  p22  ...  p2n
          ...  ...  ...  ... ]

This matrix is called the transition probability matrix (TPM) of the Markov chain.
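
In code, a TPM can be stored as a square array and checked against properties (i) and (ii); a minimal sketch in Python with NumPy (the helper name is_stochastic is illustrative):

import numpy as np

# A TPM is a square array with nonnegative entries whose rows each sum to 1
# (properties (i) and (ii) above).
def is_stochastic(P, tol=1e-12):
    P = np.asarray(P, dtype=float)
    return bool((P >= 0).all()) and np.allclose(P.sum(axis=1), 1.0, atol=tol)

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])   # the brand-switching TPM of Example 1 below
print(is_stochastic(P))      # True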

Examples:

1) In a certain market, only two brands of cold drinks, A and B, are sold. Given that a man last
purchased brand A, there is an 80% chance that he will buy the same brand on his next purchase,
while if he last purchased brand B, there is a 90% chance that his next purchase will be brand B.
Using this information,

(i) Develop TPM.

(ii) Draw state transition diagram.

Solution:

(i) The TPM is

            A    B
    P = A [ 0.8  0.2 ]
        B [ 0.1  0.9 ]

(ii) State transition diagram: states A and B, with self-loop probabilities 0.8 at A and 0.9 at B, and transitions A → B with probability 0.2 and B → A with probability 0.1. [Diagram omitted.]

2) Consider a Markov chain on states {1, 2, 3} specified by a transition diagram (omitted here; its transition probabilities appear in the TPM found in part (5)).

(1) Find P{X4 = 3|X3 = 2}
(2) Find P{X3 = 1|X2 = 1}
(3) If we know P{X0 = 1} = 1/3, find P{X0 = 1, X1 = 2}.
(4) If we know P{X0 = 1} = 1/3, find P{X0 = 1, X1 = 2, X2 = 3}.
(5) Find TPM.
Solution: (1) P{X4 = 3 | X3 = 2} = p23 = 2/3.
(2) P{X3 = 1 | X2 = 1} = p11 = 1/4.
(3) P{X0 = 1, X1 = 2} = P(X0 = 1) P(X1 = 2 | X0 = 1) = (1/3) p12 = (1/3)(1/2) = 1/6.
(4) P{X0 = 1, X1 = 2, X2 = 3} = P(X0 = 1) P(X1 = 2 | X0 = 1) P(X2 = 3 | X1 = 2)
= (1/3) p12 p23 = (1/3)(1/2)(2/3) = 1/9.
(5) The TPM is

            1    2    3
    P = 1 [ 1/4  1/2  1/4 ]
        2 [ 1/3   0   2/3 ]
        3 [ 1/2   0   1/2 ]
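
A quick numerical check of these answers, as a Python/NumPy sketch (row/column indices 0, 1, 2 stand for states 1, 2, 3):

import numpy as np

# TPM from part (5); states 1, 2, 3 map to indices 0, 1, 2.
P = np.array([[1/4, 1/2, 1/4],
              [1/3, 0.0, 2/3],
              [1/2, 0.0, 1/2]])

print(P[1, 2])                    # (1) p23 = 2/3
print(P[0, 0])                    # (2) p11 = 1/4
print((1/3) * P[0, 1])            # (3) 1/6
print((1/3) * P[0, 1] * P[1, 2])  # (4) 1/9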

3) A factory has 2 machines and one repair crew. Assume that the probability of any one machine
breaking down on a given day is α, and that if the repair crew is working on a machine, the
probability that it completes the repair in a day is β. For simplicity, ignore the possibility of a
repair completion or breakdown taking place except at the end of the day. Let Xn be the number of
machines in operation at the end of the nth day. Assume the behavior of Xn can be modeled as a
Markov chain.

Solution: Probability of a machine breaking down = α.
Probability that the crew completes a repair in a day = β.
Probability that the crew does not complete a repair in a day = 1 − β.
Probability that a machine does not break down = 1 − α.
The state space of Xn is {0, 1, 2}.

The TPM is

            0         1                  2
    P = 0 [ 1−β       β                  0      ]
        1 [ α(1−β)    (1−α)(1−β) + αβ    β(1−α) ]
        2 [ α²        2α(1−α)            (1−α)² ]
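
A sketch of this TPM as a function of α and β, in Python with NumPy (the values α = 0.1 and β = 0.6 are illustrative, not from the problem):

import numpy as np

# TPM for the two-machine repair model: rows/columns are the number of
# machines in operation (0, 1, 2).
def machine_tpm(alpha, beta):
    return np.array([
        [1 - beta,           beta,                                    0.0],
        [alpha * (1 - beta), (1 - alpha) * (1 - beta) + alpha * beta, beta * (1 - alpha)],
        [alpha ** 2,         2 * alpha * (1 - alpha),                 (1 - alpha) ** 2],
    ])

P = machine_tpm(0.1, 0.6)   # illustrative parameter values
print(P.sum(axis=1))        # each row sums to 1, as required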

4) An urn initially contains 5 black balls and 5 white balls. The following experiment is repeated
indefinitely: a ball is drawn from the urn; if the ball is white, it is put back in the urn, otherwise
it is left out. Let Xn be the number of black balls remaining in the urn after n draws from the urn.
Find the transition probability matrix.

Solution: S = {0, 1, 2, 3, 4, 5}. The TPM is

            0     1     2     3     4     5
        0 [ 1     0     0     0     0     0    ]
        1 [ 1/6   5/6   0     0     0     0    ]
    P = 2 [ 0     2/7   5/7   0     0     0    ]
        3 [ 0     0     3/8   5/8   0     0    ]
        4 [ 0     0     0     4/9   5/9   0    ]
        5 [ 0     0     0     0     5/10  5/10 ]
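
The same TPM can be built directly from the draw probabilities; a minimal Python/NumPy sketch:

import numpy as np

# State i = number of black balls left.  The 5 white balls are always
# returned, so a state-i urn holds i + 5 balls: a black ball is drawn
# (and removed) with probability i/(i+5), a white one with 5/(i+5).
P = np.zeros((6, 6))
P[0, 0] = 1.0                      # no black balls left: state 0 is absorbing
for i in range(1, 6):
    P[i, i - 1] = i / (i + 5)      # draw black, leave it out
    P[i, i] = 5 / (i + 5)          # draw white, put it back
print(P)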

Higher Order Probabilities

Chapman-Kolmogorov equation

We have so far considered unit-step transition probabilities, the probability of Xn+1 given Xn.

The one-step transition probability from state i to state j is pij^(1) = P{Xn+1 = j | Xn = i}.

The 2-step transition probability is pij^(2) = P{Xn+2 = j | Xn = i}.

The m-step transition probability is pij^(m) = P{Xn+m = j | Xn = i}, i, j ∈ S, n ≥ 0.
In matrix form we denote

• P^(1) = [pij].

• P^(2) = P^(1) P^(1).

• P^(m+1) = P^(m) P^(1).

• P^(m+n) = P^(m) P^(n), the Chapman-Kolmogorov equation (see the numerical sketch after this list).
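
A quick numerical illustration of the Chapman-Kolmogorov relation P^(m+n) = P^(m) P^(n), as a Python/NumPy sketch (the matrix used is the brand-switching TPM from Example 1, chosen only for illustration):

import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])                 # any one-step TPM will do

P2 = np.linalg.matrix_power(P, 2)          # P^(2)
P5 = np.linalg.matrix_power(P, 5)          # P^(5)
P7 = np.linalg.matrix_power(P, 7)          # P^(7)
print(np.allclose(P7, P2 @ P5))            # True: P^(2+5) = P^(2) P^(5)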

In order to compute unconditional probabilities, we need to define the initial state probability distribu-
tion. A Markov chain is fully specified once the transition probability matrix and the initial state
distribution have been defined.

The initial state distribution is the probability distribution of the state at the initial time 0, i.e., the
distribution of X0, given by P(X0 = i) = αi, ∀ i ∈ S.

Now we compute unconditional probabilities. The probability of state j at a particular time n
can be computed as

p(Xn = j) = ∑_{i∈S} p{Xn = j | X0 = i} p{X0 = i} = ∑_{i∈S} pij^(n) αi.

The probability of a chain realization can be computed as follows:

p{X0 = i0, X1 = i1, . . . , Xn = in} = p{X0 = i0} p{X1 = i1 | X0 = i0} p{X2 = i2 | X1 = i1}
· · · p{Xn = in | Xn−1 = in−1}.
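
Both computations are direct matrix operations; a minimal Python/NumPy sketch (the initial distribution (0.5, 0.5), the TPM, and the sample path are illustrative):

import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])
alpha = np.array([0.5, 0.5])       # illustrative initial distribution

# Unconditional distribution of X_n: alpha (as a row vector) times P^n.
n = 3
print(alpha @ np.linalg.matrix_power(P, n))

# Probability of one realization i0 -> i1 -> ... -> in of the chain.
def path_prob(path, P, alpha):
    p = alpha[path[0]]
    for i, j in zip(path, path[1:]):
        p *= P[i, j]
    return p

print(path_prob([0, 1, 1], P, alpha))   # alpha_0 * p01 * p11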

Examples

1) Let {Xn, n = 0, 1, 2, 3, . . .} be a Markov chain with state space {0, 1, 2} and initial probability
vector p(0) = (1/4, 1/2, 1/4). The one-step transition probability matrix P is

            0     1     2
    P = 0 [ 1/4   3/4   0   ]
        1 [ 1/3   1/3   1/3 ]
        2 [ 0     1/4   3/4 ]

Find (i) p(X0 = 0, X1 = 1, X2 = 1), (ii) p(X2 = 1), (iii) p(X7 = 0 | X5 = 0).

Solution: Given that p(X0 = 0) = 1/4, p(X0 = 1) = 1/2 and p(X0 = 2) = 1/4.

(i)

p(X0 = 0, X1 = 1, X2 = 1) = p(X0 = 0) p(X1 = 1 | X0 = 0) p(X2 = 1 | X1 = 1)

= (1/4)(3/4)(1/3) = 1/16.

(ii)

                  0      1      2
    P² = P·P = 0 [ 5/16   7/16   1/4   ]
               1 [ 7/36   4/9    13/36 ]
               2 [ 1/12   13/48  31/48 ]

p(X2 = 1) = ∑_{i∈S} p(X0 = i) p(X2 = 1 | X0 = i)

= p(X0 = 0) p(X2 = 1 | X0 = 0) + p(X0 = 1) p(X2 = 1 | X0 = 1) + p(X0 = 2) p(X2 = 1 | X0 = 2)

= (1/4) p01^(2) + (1/2) p11^(2) + (1/4) p21^(2)

= (1/4)(7/16) + (1/2)(4/9) + (1/4)(13/48) = 0.3993.

(iii) p(X7 = 0 | X5 = 0) = p00^(2) = 5/16.
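
A numerical check of (i)-(iii) in Python with NumPy:

import numpy as np

P = np.array([[1/4, 3/4, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/4, 3/4]])
alpha = np.array([1/4, 1/2, 1/4])

P2 = np.linalg.matrix_power(P, 2)
print(alpha[0] * P[0, 1] * P[1, 1])   # (i)   1/16
print((alpha @ P2)[1])                # (ii)  0.3993...
print(P2[0, 0])                       # (iii) 5/16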


2) Let {Xn, n = 0, 1, 2, 3, . . .} be a Markov chain with state space {0, 1, 2} and initial probability
distribution p(X0 = i) = 1/3, i = 0, 1, 2. The one-step transition probability matrix P is

            0     1     2
    P = 0 [ 3/4   1/4   0   ]
        1 [ 1/4   1/2   1/4 ]
        2 [ 0     3/4   1/4 ]

Evaluate the following: (i) p(X1 = 1 | X0 = 2), (ii) p(X2 = 2 | X1 = 1),
(iii) p(X2 = 2, X1 = 1 | X0 = 2), (iv) p(X2 = 2, X1 = 1, X0 = 2), (v) p(X3 = 1, X2 = 2, X1 = 1, X0 = 2).
Solution: (i) p(X1 = 1 | X0 = 2) = p21 = 3/4.
(ii) p(X2 = 2 | X1 = 1) = p12 = 1/4.
(iii) p(X2 = 2, X1 = 1 | X0 = 2) = P(X2 = 2, X1 = 1, X0 = 2) / P(X0 = 2)
= P(X0 = 2) P(X1 = 1 | X0 = 2) P(X2 = 2 | X1 = 1) / P(X0 = 2)
= p21 p12 = (3/4)(1/4) = 3/16.
(iv) p(X2 = 2, X1 = 1, X0 = 2) = P(X0 = 2) P(X1 = 1 | X0 = 2) P(X2 = 2 | X1 = 1) = (1/3) p21 p12
= (1/3)(3/4)(1/4) = 1/16.
(v) p(X3 = 1, X2 = 2, X1 = 1, X0 = 2) = (1/3) p21 p12 p21 = (1/3)(3/4)(1/4)(3/4) = 3/64.
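
A numerical check of (i)-(v) in Python with NumPy:

import numpy as np

P = np.array([[3/4, 1/4, 0.0],
              [1/4, 1/2, 1/4],
              [0.0, 3/4, 1/4]])

print(P[2, 1])                             # (i)   3/4
print(P[1, 2])                             # (ii)  1/4
print(P[2, 1] * P[1, 2])                   # (iii) 3/16
print((1/3) * P[2, 1] * P[1, 2])           # (iv)  1/16
print((1/3) * P[2, 1] * P[1, 2] * P[2, 1]) # (v)   3/64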
