STOCHASTIC PROCESSES
Second Edition

J. MEDHI

Copyright © 1982, 1984 New Age International (P) Ltd., Publishers
Reprint 2002
New Age International (P) Limited, Publishers
4835/24, Ansari Road, Daryaganj, New Delhi - 110002
Offices at: Bangalore, Chennai, Guwahati, Hyderabad, Kolkata, Lucknow and Mumbai
This book or any part thereof may not be reproduced in any form without the written permission of the publisher.
ISBN: 81-224-0549-5
Published by K.K. Gupta for New Age International (P) Ltd., 4835/24, Ansari Road, Daryaganj, New Delhi-110 002 and printed in India at S.P. Printers, Noida.

Contents

Preface to the Second Edition
Preface to the First Edition

Chapter 1: Probability Distributions
  1.1 Generating Functions
  1.2 Laplace Transforms
  1.3 Laplace (Stieltjes) Transform of a Probability Distribution or of a Random Variable

Chapter 2: Stochastic Processes: Some Notions
  2.1 Introduction
  2.2 Specification of Stochastic Processes
  2.3 Stationary Processes
  2.4 Martingales
  Exercises. References.

Chapter 3: Markov Chains
  3.1 Definition and Examples
  3.2 Higher Transition Probabilities
  3.3 Generalisation of Independent Bernoulli Trials: Sequence of Chain-Dependent Trials
  3.4 Classification of States and Chains
  3.5 Determination of Higher Transition Probabilities
  3.6 Stability of a Markov System
  3.7 Graph Theoretic Approach
  3.8 Markov Chain with Denumerable Number of States
  3.9 Reducible Chains
  3.10 Statistical Inference
  3.11 Markov Chains with Continuous State Space
  3.12 Non-Homogeneous Chains
  Exercises. References.

Chapter 4: Markov Processes with Discrete State Space: Poisson Process and its Extensions
  4.1 Poisson Process
  4.2 Poisson Process and Related Distributions
  4.3 Generalisations of Poisson Process
  4.4 Birth and Death Process
  4.5 Markov Processes with Discrete State Space (Continuous Time Markov Chains)
  4.6 Randomization (Uniformization)
  4.7 Erlang Process
  Exercises. References.

Chapter 5: Markov Processes with Continuous State Space
  5.1 Introduction: Brownian Motion
  5.2 Wiener Process
  5.3 Differential Equations for a Wiener Process
  5.4 Kolmogorov Equations
  5.5 First Passage Time Distribution for Wiener Process
  5.6 Ornstein-Uhlenbeck Process
  Exercises. References.

Chapter 6: Renewal Processes and Theory
  6.1 Renewal Process
  6.2 Renewal Processes in Continuous Time
  6.3 Renewal Equation
  6.4 Stopping Time: Wald's Equation
  6.5 Renewal Theorems
  6.6 Delayed and Equilibrium Renewal Processes
  6.7 Residual and Excess Lifetimes
  6.8 Renewal Reward (Cumulative Renewal) Process
  6.9 Alternating (or Two-Stage) Renewal Process
  6.10 Regenerative Stochastic Processes: Existence of Limits
  6.11 Regenerative Inventory System
  6.12 Generalisation of the Classical Renewal Theory
  Exercises. References.

Chapter 7: Markov Renewal and Semi-Markov Processes
  7.1 Introduction
  7.2 Definitions and Preliminary Results
  7.4 Limiting Behaviour
  7.5 First Passage Time
  Exercises. References.

Chapter 8: Stationary Processes and Time Series
  8.1 Introduction
  8.2 Models of Time Series
  8.3 Time and Frequency Domain: Power Spectrum
  8.4 Statistical Analysis of Time Series: Some Observations
  Exercises. References.

Chapter 9: Branching Processes
  9.1 Introduction
  9.2 Properties of Generating Functions of Branching Processes
  9.3 Probability of Extinction
  9.4 Distribution of the Total Number of Progeny
  9.5 Conditional Limit Laws
  9.6 Generalisations of the Classical Galton-Watson Process
  9.7 Continuous-Time Markov Branching Process
  9.8 Age-Dependent Branching Process: Bellman-Harris Process
  9.9 General Branching Processes
  9.10 Some Observations
  Exercises. References.

Chapter 10: Stochastic Processes in Queueing and Reliability
  10.1 Queueing Systems: General Concepts
  10.2 The Queueing Model M/M/1: Steady State Behaviour
  10.3 Transient Behaviour of M/M/1 Model
  10.4 Birth and Death Processes in Queueing Theory: Multichannel Models
  10.5 Non-Birth and Death Queueing Processes: Bulk Queues
  10.6 Network of Markovian Queueing Systems
  10.7 Non-Markovian Queueing Models
  10.8 The Model GI/M/1
  10.9 The Model M/G(a, b)/1
  10.10 Some General Observations
  10.11 Reliability

APPENDIX A: Some Basic Mathematical Results
  A.1 Important Properties of Laplace Transforms
  A.2 Difference Equations
  A.3 Differential-Difference Equations
  A.4 Matrix Analysis
  Exercises. References.

APPENDIX B: Answers to Exercises
APPENDIX C: Abbreviations; Glossary of Commonly Used Notations
APPENDIX D: Tables
  Table D.1 Table of Laplace Transforms
  Table D.2 Important Probability Distributions and Their Parameters, Moments, Moment Generating Functions
  Table D.3 Performance Measures [steady state] of Some Queueing Models

AUTHOR INDEX
SUBJECT INDEX

CHAPTER 1

Probability Distributions

1.1 GENERATING FUNCTIONS

1.1.1 Introduction

In dealing with integral-valued random variables, it is often of great convenience to apply the powerful tool of generating functions. Many stochastic processes that we come across involve non-negative integral-valued random variables, and quite often we can use generating functions in their study. The principal advantage of their use is that a single function (the generating function) may be used to represent a whole set of individual items.

Definition. Let $a_0, a_1, a_2, \ldots$ be a sequence of real numbers. Using a variable $s$, we may define a function
$$A(s) = a_0 + a_1 s + a_2 s^2 + \cdots = \sum_{k=0}^{\infty} a_k s^k. \tag{1.1}$$
If this power series converges in some interval $-s_0 < s < s_0$, then $A(s)$ is called the generating function (or the s-transform) of the sequence $a_0, a_1, a_2, \ldots$. The variable $s$ itself has no particular significance. Here we assume $s$ to be real, but a generating function with a complex variable $z$ is also sometimes used; it is then called a z-transform.

The power series (1.1) can be differentiated term by term any number of times, i.e. $\frac{d}{ds}A(s) = \sum_{k \ge 1} k a_k s^{k-1}$ holds within its radius of convergence. Differentiating (1.1) $k$ times, putting $s = 0$ and dividing by $k!$, we get $a_k$, i.e.
$$a_k = \frac{1}{k!}\left[\frac{d^k}{ds^k} A(s)\right]_{s=0}. \tag{1.2}$$

1.1.2 Probability Generating Function: Mean and Variance

Suppose that $X$ is a random variable that assumes non-negative integral values $0, 1, 2, \ldots$, and that
$$\Pr\{X = k\} = p_k, \quad k = 0, 1, 2, \ldots, \quad \sum_k p_k = 1. \tag{1.3}$$
If we take $a_k$ to be the probability $p_k$, $k = 0, 1, 2, \ldots$, then the corresponding generating function $P(s) = \sum_k p_k s^k$ of the sequence of probabilities $\{p_k\}$ is known as the probability generating function (p.g.f.) of the random variable $X$. It is sometimes also called the s-transform (or geometric transform) of the r.v. $X$. We have $P(1) = 1$; the series $P(s)$ converges for at least $-1 \le s \le 1$. Note that $P(s) = E(s^X)$, and, within the radius of convergence,
$$P'(1) = \sum_k k p_k = E(X) \tag{1.6}$$
and $P''(1) = \sum_k k(k-1) p_k = E\{X(X-1)\}$, so that
$$\mathrm{var}(X) = P''(1) + P'(1) - [P'(1)]^2. \tag{1.8}$$
If $\sum_k k p_k$ diverges, then we say that $E(X) = P'(1) = \infty$, and if $\sum_k k(k-1) p_k$ diverges, then $P''(1) = \infty$ and $\mathrm{var}(X) = \infty$. The relation (1.6) gives the mean and (1.8) the variance of $X$ in terms of the p.g.f.
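As a quick numerical aside (not part of the original text), the relations (1.6) and (1.8) can be checked directly for any finite distribution; the short Python sketch below uses an arbitrary distribution on {0, 1, 2, 3} and compares the mean and variance obtained from the p.g.f. derivatives with the values computed directly.

```python
# Minimal sketch (illustration only; the distribution is arbitrary, not from the book).
p = [0.1, 0.4, 0.3, 0.2]                      # p_k = Pr{X = k}, sums to 1

def P(s):    # P(s)   = sum_k p_k s^k
    return sum(pk * s ** k for k, pk in enumerate(p))

def dP(s):   # P'(s)  = sum_k k p_k s^(k-1)
    return sum(k * pk * s ** (k - 1) for k, pk in enumerate(p) if k >= 1)

def d2P(s):  # P''(s) = sum_k k(k-1) p_k s^(k-2)
    return sum(k * (k - 1) * pk * s ** (k - 2) for k, pk in enumerate(p) if k >= 2)

mean_direct = sum(k * pk for k, pk in enumerate(p))
var_direct = sum(k * k * pk for k, pk in enumerate(p)) - mean_direct ** 2

print(P(1.0))                                           # 1.0, since P(1) = 1
print(dP(1.0), mean_direct)                             # (1.6): P'(1) = E(X)
print(d2P(1.0) + dP(1.0) - dP(1.0) ** 2, var_direct)    # (1.8)
```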
In fact, moments, cumulants, etc. can be expressed in terms of generating functions. More generally, the $k$th factorial moment of $X$ is given by
$$E[X(X-1)\cdots(X-k+1)] = P^{(k)}(1), \quad k = 1, 2, \ldots$$

Example 1(b). Binomial distribution: Let $X$ be a binomial variate with p.m.f.
$$p_k = \Pr\{X = k\} = \binom{n}{k} p^k q^{n-k}, \quad k = 0, 1, 2, \ldots, n, \quad q = 1 - p.$$
The p.g.f. of $X$ is
$$P(s) = (q + sp)^n,$$
with $E(X) = np$ and $\mathrm{var}(X) = npq$. The binomial r.v. has parameters $n$ and $p$ and is the sum of $n$ independent Bernoulli r.v.'s, each with parameter $p$.

Example 1(c). Poisson distribution: Let $X$ be a Poisson variate with p.m.f.
$$p_k = \Pr\{X = k\} = \frac{e^{-\lambda}\lambda^k}{k!}, \quad k = 0, 1, 2, \ldots$$
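As an illustration (not part of the original text), the binomial p.g.f. above can also be handled symbolically; the SymPy sketch below recovers the probabilities from $P(s) = (q + ps)^n$ via relation (1.2), and the mean and variance via (1.6) and (1.8). The parameter values $n = 5$, $p = 3/10$ are arbitrary.

```python
import sympy as sp

s = sp.symbols('s')
n, p = 5, sp.Rational(3, 10)          # arbitrary example parameters
q = 1 - p
P = (q + p * s) ** n                  # p.g.f. of the binomial distribution

# p_k = P^(k)(0)/k!  -- relation (1.2); matches C(n,k) p^k q^(n-k)
probs = [sp.diff(P, s, k).subs(s, 0) / sp.factorial(k) for k in range(n + 1)]
print(probs)

mean = sp.diff(P, s).subs(s, 1)                        # (1.6): n p
var = sp.diff(P, s, 2).subs(s, 1) + mean - mean ** 2   # (1.8): n p q
print(sp.simplify(mean), sp.simplify(var))
```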

…is known as the negative binomial distribution with index $r$ and mean $rq/p$. In particular, when $r$ is a positive integer, it is known as the Pascal distribution. (See Exercise 1.9 for the distribution of the sum of two independent geometric variables.)

Example 1(i). If $X, Y$ are independent r.v.'s with generating functions $A(s)$, $B(s)$ respectively, then the generating function $D(s)$ of $W = X - Y$ is given by
$$D(s) = E\{s^W\} = E\{s^X s^{-Y}\} = E\{s^X\}\,E\{(1/s)^Y\},$$
because of the independence of $X, Y$; thus $\Pr\{X - Y = k\}$ is the coefficient of $s^k$, $k = 0, \pm 1, \pm 2, \ldots$, in the expansion of $D(s) = A(s) B(1/s)$.

1.1.4 Sum of a Random Number of Discrete Random Variables

In Section 1.1.3 we considered the sum $S_n = X_1 + \cdots + X_n$ of a fixed number of mutually independent random variables. Sometimes we come across situations where we have to consider the sum of a random number $N$ of random variables. For example, if $X_i$ denotes the number of persons involved in the $i$th accident (in a day in a certain city) and if the number $N$ of accidents happening in a day is a random variable, then the sum $S_N = X_1 + \cdots + X_N$ denotes the total number of persons involved in accidents in a day. We have the following theorem for the sum $S_N$.

Theorem 1.3. Let $X_1, X_2, \ldots$ be identically and independently distributed random variables with
$$\Pr\{X_i = k\} = p_k \quad \text{and p.g.f. } P(s) = \sum_k p_k s^k, \quad i = 1, 2, \ldots \tag{1.11}$$
Let
$$S_N = X_1 + \cdots + X_N, \tag{1.12}$$
where $N$ is a random variable independent of the $X_i$'s. Let the distribution of $N$ be given by $\Pr\{N = n\} = g_n$ and the p.g.f. of $N$ be
$$G(s) = \sum_n g_n s^n. \tag{1.13}$$
Then the p.g.f. $H(s)$ of $S_N$ is given by the compound function $G(P(s))$, i.e.
$$H(s) = \sum_j \Pr\{S_N = j\} s^j = G(P(s)). \tag{1.14}$$

Proof: Since $N$ is a random variable that can assume the values $0, 1, 2, 3, \ldots$, the event $S_N = j$ can happen in the following mutually exclusive ways: $N = n$ and $S_n = X_1 + \cdots + X_n = j$, for $n = 1, 2, 3, \ldots$. To meet the situation $N = 0$, let us define $X_0 = 0$, so that $X_0 + X_1 + \cdots + X_N = S_N$, $N \ge 0$. We have
$$h_j = \Pr\{S_N = j\} = \sum_n \Pr\{N = n \text{ and } S_n = j\}.$$
Since $N$ is independent of the $X_i$'s and therefore of $S_n$,
$$h_j = \sum_n \Pr\{N = n\}\Pr\{S_n = j\} \tag{1.15}$$
$$= \sum_{n \ge 0} g_n \Pr\{S_n = j\}. \tag{1.16}$$
(The inclusion of the value 0 for $n$ can be justified since $\Pr\{S_0 = 0\} = 1$ and $\Pr\{S_0 > 0\} = 0$.)

The p.g.f. of the sum $S_N = X_1 + \cdots + X_N$ is thus given by
$$H(s) = \sum_j h_j s^j = \sum_j \Pr\{S_N = j\} s^j \tag{1.17}$$
$$= \sum_j \left[\sum_n g_n \Pr\{S_n = j\}\right] s^j \tag{1.18}$$
$$= \sum_n g_n \left[\sum_j \Pr\{S_n = j\} s^j\right] = \sum_n g_n [P(s)]^n = G(P(s)),$$
since the expression $[P(s)]^n$ is the p.g.f. of the sum $S_n$ of a fixed number $n$ of i.i.d. random variables (Theorem 1.2).

As $P(s)$ and $G(s)$ are p.g.f.'s, so also is $G(P(s))$; $G(P(s)) = 1$ for $s = 1$. Since $G(P(s))$ is a compound function, the corresponding distribution is known as a compound distribution. It is also known as a random sum distribution.

Mean and Variance of $S_N$

We have
$$E(S_N) = H'(s)\big|_{s=1} = G'(P(s))P'(s)\big|_{s=1} = G'(P(1))P'(1) = P'(1)G'(1),$$
so that
$$E\{S_N\} = E\{X\}E\{N\}. \tag{1.19}$$
Again,
$$H''(s) = P''(s)G'(P(s)) + G''(P(s))[P'(s)]^2.$$
Thus
$$E(S_N^2) - E(S_N) = H''(1) = [E(X^2) - E(X)]E(N) + [E(N^2) - E(N)][E(X)]^2.$$
Using (1.19) we get
$$E(S_N^2) = E(X^2)E(N) - E(N)[E(X)]^2 + E(N^2)[E(X)]^2 = E(N)\,\mathrm{var}(X) + E(N^2)[E(X)]^2.$$
Finally,
$$\mathrm{var}(S_N) = E(S_N^2) - [E(S_N)]^2 = E(N)\,\mathrm{var}(X) + [E(X)]^2\,\mathrm{var}(N). \tag{1.20}$$

Note: The result (1.20) can also be obtained by using the relation
$$\mathrm{var}(X) = E[\mathrm{var}(X \mid Y)] + \mathrm{var}(E(X \mid Y)) \tag{a}$$
and taking $X = S_N$ and $Y = N$ (see Sec. 1.1.5 for conditional expectation). We have $E(S_N \mid N = n) = n E(X_1)$ with probability $\Pr\{N = n\} = g_n$, so that
$$E[E(S_N \mid N)] = \sum_n n E(X_1) g_n = E(X_1)E(N) \quad \text{and} \quad E\{[E(S_N \mid N)]^2\} = \sum_n n^2 [E(X_1)]^2 g_n = [E(X_1)]^2 E(N^2).$$
Thus
$$\mathrm{var}(E(S_N \mid N)) = [E(X_1)]^2\,\mathrm{var}(N). \tag{b}$$
Again $\mathrm{var}(S_N \mid N = n) = n\,\mathrm{var}(X_1)$ with probability $g_n$, so that
$$E[\mathrm{var}(S_N \mid N)] = \sum_n n\,\mathrm{var}(X_1) g_n = [\mathrm{var}(X_1)]E(N). \tag{c}$$
Using (b) and (c) in (a) we get (1.20).
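The formulas (1.19) and (1.20) are easy to check by simulation. The following Python sketch (an illustration, not part of the original text) takes $N$ to be Poisson with mean 2.0 and the $X_i$ to be geometric on $\{0, 1, 2, \ldots\}$; the parameter values are arbitrary.

```python
import math
import random

random.seed(1)
lam, p = 2.0, 0.4                        # arbitrary example parameters

def sample_N():
    # Poisson(lam) by inversion of the c.d.f.
    u, k = random.random(), 0
    prob = math.exp(-lam)
    cdf = prob
    while u > cdf:
        k += 1
        prob *= lam / k
        cdf += prob
    return k

def sample_X():
    # geometric on {0, 1, 2, ...}: Pr{X = k} = p (1 - p)^k
    k = 0
    while random.random() > p:
        k += 1
    return k

n_runs = 200_000
samples = [sum(sample_X() for _ in range(sample_N())) for _ in range(n_runs)]
m = sum(samples) / n_runs
v = sum(x * x for x in samples) / n_runs - m * m

EX, varX = (1 - p) / p, (1 - p) / p ** 2     # mean and variance of each X_i
EN, varN = lam, lam                          # mean and variance of N
print(m, EX * EN)                            # (1.19): E(S_N) = E(X) E(N)
print(v, EN * varX + EX ** 2 * varN)         # (1.20)
```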
Example 1(j). Compound Poisson distribution: If $N$ has a Poisson distribution with mean $\lambda$, then $G(s) = \sum_n g_n s^n = \exp\{\lambda(s - 1)\}$ and hence the sum $X_1 + \cdots + X_N$ has p.g.f.
$$H(s) = G(P(s)) = \sum_n g_n [P(s)]^n = \exp\{\lambda[P(s) - 1]\}.$$
A distribution having a generating function of the form $\exp\{\lambda[P(s) - 1]\}$, where $P(s)$ is itself a generating function, is called a compound Poisson distribution. The mean is given by
$$E\{S_N\} = E\{X\}E\{N\} = \lambda E(X_1).$$
Here $g_n = e^{-\lambda}\lambda^n/n!$. Similarly, taking $g_n = p q^{n-1}$, $n = 1, 2, \ldots$ (with $q = 1 - p$), one gets the compound geometric distribution, with $H(s) = pP(s)/[1 - qP(s)]$.

1.1.5 Generating Function of Bivariate Distribution

Bivariate distribution: Suppose that $X, Y$ is a pair of integral-valued random variables with joint probability distribution given by
$$\Pr\{X = j, Y = k\} = p_{jk}, \quad j, k = 0, 1, 2, \ldots, \quad \sum_{j,k} p_{jk} = 1. \tag{1.21}$$
The marginal distributions are given by
$$\Pr\{X = j\} = \sum_k p_{jk} = f_j, \quad j = 0, 1, 2, \ldots \tag{1.22}$$
$$\Pr\{Y = k\} = \sum_j p_{jk} = g_k, \quad k = 0, 1, 2, \ldots \tag{1.23}$$
If $j$ is any point at which $f_j \neq 0$ (i.e. $f_j > 0$), then the conditional distribution of $Y$ given $X = j$ is given by
$$\Pr\{Y = k \mid X = j\} = \frac{p_{jk}}{f_j}. \tag{1.24}$$

Conditional Expectation

The conditional expectation $E\{Y \mid X = j\}$ is given by
$$E\{Y \mid X = j\} = \sum_k k \Pr\{Y = k \mid X = j\} = \frac{\sum_k k\,p_{jk}}{f_j}, \quad j = 0, 1, 2, \ldots \tag{1.25}$$
For $X$ given but unspecified, we note that $E\{Y \mid X\}$ is a random variable that assumes the value $\left[\sum_k k\,p_{jk}\right]/f_j$ with $\Pr\{X = j\} = f_j > 0$. Hence the expectation of the random variable $E\{Y \mid X\}$ is given by
$$E(E\{Y \mid X\}) = \sum_j E\{Y \mid X = j\}\Pr\{X = j\} = \sum_j \frac{\sum_k k\,p_{jk}}{f_j}\,f_j = \sum_k k\left\{\sum_j p_{jk}\right\} = \sum_k k \Pr\{Y = k\} = E(Y). \tag{1.26}$$
In the same way, we can prove that
$$E(E\{Y^2 \mid X\}) = E[Y^2]; \tag{1.27}$$
more generally, for any function $\phi(Y)$ whose expectation exists,
$$E(E\{\phi(Y) \mid X\}) = E[\phi(Y)]. \tag{1.28}$$

Note (1). The results (1.26)-(1.28), which are given here for discrete random variables $X, Y$, will also hold, mutatis mutandis, for continuous random variables.

Note (2). The result (1.26) holds for all r.v.'s provided $E(Y)$ exists. Enis (Biometrika (1973) 432) cites an example where $E[E(Y \mid X)] = 0$ but $E(Y)$ does not exist, and consequently (1.26) does not hold.

Bivariate Probability Generating Function

Definition. The probability generating function (bivariate p.g.f.) of a pair of random variables $X, Y$ with joint distribution given by (1.21) is defined by
$$P(s_1, s_2) = \sum_{j,k} p_{jk}\,s_1^j s_2^k, \tag{1.29}$$
$s_1, s_2$ being dummy positive real variables chosen so as to make the double series convergent. We have the following results.

Theorem 1.4. (a) The p.g.f. $A(s)$ of the marginal distribution of $X$ is given by $A(s) = P(s, 1)$. (b) The p.g.f. $B(s)$ of $Y$ is given by $B(s) = P(1, s)$. (c) The p.g.f. of $(X + Y)$ is given by $P(s, s)$.

Proof: Since the convergent double series (1.29) consists of positive real terms, the change of order of summation is justified.
(a) We have from (1.29)
$$P(s, 1) = \sum_{j,k} p_{jk} s^j = \sum_j \left(\sum_k p_{jk}\right)s^j = \sum_j f_j s^j = \sum_j \Pr\{X = j\} s^j, \text{ from (1.22)},$$
which is $A(s)$, the p.g.f. of $X$.
(b) It can be proved in the same way.
(c) We have
$$P(s, s) = \sum_{j,k} p_{jk} s^{j+k} = \sum_m \left(\sum_{j+k=m} p_{jk}\right)s^m,$$
as can be easily verified. Now
$$\Pr\{X + Y = m\} = \sum_j \Pr\{X = j, Y = m - j\} = \sum_j p_{j,m-j}.$$
Hence
$$P(s, s) = \sum_m \Pr\{X + Y = m\}\,s^m,$$
which shows that $P(s, s)$ is the p.g.f. of $X + Y$.

Remarks: (1) If $X, Y$ are independent,
$$p_{jk} = \Pr\{X = j, Y = k\} = \Pr\{X = j\}\Pr\{Y = k\},$$
and then
$$P(s_1, s_2) = A(s_1)B(s_2) = P(s_1, 1)P(1, s_2);$$
conversely, from this relation follows the independence of $X$ and $Y$.
(2) The probabilities $p_{jk}$ can be (uniquely) determined from $P(s_1, s_2)$ as follows:
$$p_{jk} = \frac{1}{j!\,k!}\left[\frac{\partial^{\,j+k}}{\partial s_1^j\,\partial s_2^k} P(s_1, s_2)\right]_{s_1 = s_2 = 0}.$$
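As a numerical aside (not part of the original text), Theorem 1.4 can be checked directly on a small joint distribution; in the sketch below the joint probabilities $p_{jk}$ are chosen arbitrarily.

```python
# Sketch: P(s,1) is the p.g.f. of X and P(s,s) is the p.g.f. of X + Y.
p = {(0, 0): 0.10, (0, 1): 0.05, (0, 2): 0.05,
     (1, 0): 0.10, (1, 1): 0.20, (1, 2): 0.10,
     (2, 0): 0.05, (2, 1): 0.15, (2, 2): 0.20}   # arbitrary joint pmf, sums to 1

def bivariate_pgf(s1, s2):
    return sum(pjk * s1 ** j * s2 ** k for (j, k), pjk in p.items())

s = 0.6   # any point in (0, 1]

# (a) P(s, 1) equals the p.g.f. of the marginal distribution of X
marginal_X = {j: sum(p[(j, k)] for k in range(3)) for j in range(3)}
print(bivariate_pgf(s, 1.0), sum(fj * s ** j for j, fj in marginal_X.items()))

# (c) P(s, s) equals the p.g.f. of X + Y
dist_sum = {}
for (j, k), pjk in p.items():
    dist_sum[j + k] = dist_sum.get(j + k, 0.0) + pjk
print(bivariate_pgf(s, s), sum(pm * s ** m for m, pm in dist_sum.items()))
```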
Example 1(k). Consider a series of Bernoulli trials with probability of success $p$. Suppose that $X$ denotes the number of failures preceding the first success and $Y$ the number of failures following the first success and preceding the second success. The sum $(X + Y)$ gives the number of failures preceding the second success. The joint distribution of $X, Y$ is given by
$$p_{jk} = \Pr\{X = j, Y = k\} = q^{j+k} p^2, \quad j, k = 0, 1, 2, \ldots,$$
and the bivariate generating function is given by
$$P(s_1, s_2) = \sum_{j,k} p_{jk} s_1^j s_2^k = \sum_{j,k} q^{j+k} p^2 s_1^j s_2^k = \frac{p^2}{(1 - s_1 q)(1 - s_2 q)}.$$
The p.g.f. of $X$ is given by
$$A(s) = P(s, 1) = \frac{p}{1 - qs}.$$
The p.g.f. of $X + Y$ is given by
$$P(s, s) = \left(\frac{p}{1 - qs}\right)^2$$
(see Example 1(h)).

1.2 LAPLACE TRANSFORMS

1.2.1 Introduction

The Laplace transform is a generalization of the generating function. Laplace transforms serve as very powerful tools in many situations. They provide an effective means for the solution of many problems arising in our study. For example, the transforms are very effective for solving linear differential equations: the Laplace transformation reduces a linear differential equation to an algebraic equation. In the study of some probability distributions, the method can be used to great advantage, for it happens quite often that it is easier to find the Laplace transform of a probability distribution than the distribution itself.

Definition. Let $f(t)$ be a function of a positive real variable $t$. Then the Laplace transform (L.T.) of $f(t)$ is defined by
$$\bar{f}(s) = \int_0^\infty e^{-st} f(t)\,dt \tag{2.1}$$
for the range of values of $s$ for which the integral exists. We shall write $\bar{f}(s) = L\{f(t)\}$ to denote the Laplace transform of $f(t)$.

Example 2(a).
(i) If $f(t) = c$ (a constant), then $\bar{f}(s) = \int_0^\infty e^{-st} c\,dt = c/s$ ($s > 0$).
(ii) If $f(t) = t$, then $\bar{f}(s) = \int_0^\infty t\,e^{-st}\,dt = 1/s^2$ ($s > 0$).
(iii) If $f(t) = t^n$, then $\bar{f}(s) = \int_0^\infty t^n e^{-st}\,dt = \Gamma(n+1)/s^{n+1}$ ($s > 0$). Though $f(t)$ ($0 > n > -1$) is infinite at $t = 0$, the result holds for $n > -1$. In particular, if $f(t) = t^{-1/2}$, then $\bar{f}(s) = \Gamma(1/2)/s^{1/2} = \sqrt{\pi/s}$.
(iv) Let $f(t) = e^{at}$; then $\bar{f}(s) = \int_0^\infty e^{-st} e^{at}\,dt = 1/(s - a)$ ($s > a$).
(v) Let $f(t) = \sin t$; then $\bar{f}(s) = \int_0^\infty e^{-st}\sin t\,dt = 1/(s^2 + 1)$ ($s > 0$).
(vi) Let $f(t) = e^{-t} t^a$; then
$$\bar{f}(s) = \int_0^\infty e^{-st} e^{-t} t^a\,dt = \int_0^\infty e^{-t(s+1)} t^a\,dt = \frac{\Gamma(a+1)}{(s+1)^{a+1}} \quad (a > -1).$$

Example 2(b). The Dirac delta (or impulse) function located at $a$ is defined as
$$\delta(t - a) = 1, \quad t = a; \qquad = 0, \quad t \neq a.$$
The L.T. of $\delta(t - a)$ equals $\int_0^\infty e^{-st}\delta(t - a)\,dt = e^{-as}$.
The unit step function (at $a$) is defined as
$$u_a(t) = 1, \quad t \ge a; \qquad = 0, \quad t < a.$$
The L.T. of $u_a(t)$ is $e^{-as}/s$.

1.2.2 Some Important Properties of Laplace Transforms

See Appendix A.1.

1.2.3 Inverse Laplace Transform

Definition. If $\bar{f}(s)$ is the L.T. of $f(t)$, i.e. $L\{f(t)\} = \bar{f}(s)$, then $f(t)$ is called the inverse Laplace transform of $\bar{f}(s)$. For example, the inverse Laplace transform of $\bar{f}(s) = 1/s^2$ is $f(t) = t$. There is an inversion formula which gives $f(t)$ in terms of $\bar{f}(s)$; if $\bar{f}(s)$ exists, then $f(t)$ can be uniquely determined subject to certain conditions satisfied by $f(t)$. In particular, if two continuous functions have the same transform, they are identical. Extensive tables of Laplace transforms are available, and reference may be made to them, whenever necessary, to obtain either the L.T. of a function $f(t)$ or the inverse L.T. of a function $\bar{f}(s)$. Techniques for numerical inversion of Laplace transforms are discussed in Bellman et al. (1966).
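For readers who want to experiment, several entries of Example 2(a) can be reproduced with a computer algebra system; the following SymPy sketch (an aside, not part of the original text) computes a few of the transforms.

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

print(sp.laplace_transform(sp.Integer(1), t, s, noconds=True))   # 1/s
print(sp.laplace_transform(t, t, s, noconds=True))               # 1/s**2
print(sp.laplace_transform(sp.sin(t), t, s, noconds=True))       # 1/(s**2 + 1)
print(sp.laplace_transform(sp.exp(-3 * t), t, s, noconds=True))  # 1/(s + 3)
```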
1.3 LAPLACE (STIELTJES) TRANSFORM OF A PROBABILITY DISTRIBUTION OR OF A RANDOM VARIABLE

1.3.1 Definition

Let $X$ be a non-negative random variable with distribution function
$$F(x) = \Pr\{X \le x\}.$$
The Laplace (Laplace-Stieltjes) transform $F^*(s)$ of this distribution is defined, for $s \ge 0$, by
$$F^*(s) = \int_0^\infty e^{-sx}\,dF(x). \tag{3.1}$$
We shall say that (3.1) also gives the "Laplace (Laplace-Stieltjes) transform of the random variable $X$". We have
$$F^*(s) = E\{\exp(-sX)\} \tag{3.1a}$$
and
$$F^*(0) = 1. \tag{3.2}$$
Suppose that $X$ is a continuous variate having density $f(x) = F'(x)$. Then, from (3.1),
$$F^*(s) = \int_0^\infty e^{-sx} f(x)\,dx \tag{3.3}$$
(this is the "ordinary" Laplace transform $L\{f(x)\} = \bar{f}(s)$ of the density function $f(x)$). By a Laplace transform of a r.v. $X$ we shall mean the L.S.T. of the distribution function $F(\cdot)$ of $X$; this is equal to the ordinary L.T. of the density function of $X$ when this exists. We have $F^*(s) = \bar{f}(s)$.

In case $X$ is an integral-valued random variable with distribution $p_k = \Pr\{X = k\}$, $k = 0, 1, 2, \ldots$, and p.g.f. $P(s) = \sum_k p_k s^k$, we can stretch the language and define the L.T. of $X$ by
$$F^*(s) = E\{\exp(-sX)\} = P\{\exp(-s)\}. \tag{3.4}$$
Thus, in the case of a discrete random variable assuming non-negative values $0, 1, 2, 3, \ldots$, the L.T. of the variable differs from its p.g.f. only by a change of variable: $\exp(-s)$ in the former replaces $s$ in the latter. Thus there is a close analogy between the properties of Laplace transforms and those of generating functions or s-transforms.

Note. When it exists, the function
$$M(s) = E\{e^{sX}\}$$
is called the moment generating function of the r.v. $X$ (it is the generating function of the sequence $\mu_n/n!$, where $\mu_n = E\{X^n\}$ is the $n$th moment of $X$). The function
$$\phi(t) = E\{e^{itX}\},$$
defined for every r.v. $X$, is called the characteristic function of $X$. When $X$ is non-negative and integral-valued, its p.g.f. is considered; so is its L.T. when $X$ is non-negative and continuous, as these are easier to handle. The characteristic function, defined for all r.v.'s, is a more universal tool.

1.3.2 The Laplace Transform of the Distribution Function in Terms of that of the Density Function

Let $X$ be a continuous (and non-negative) r.v. having density function $f(x)$ and distribution function
$$\Pr\{X \le x\} = F(x) = \int_0^x f(u)\,du.$$
The (ordinary) Laplace transform of the distribution function $F(x)$ is
$$L\{F(x)\} = \int_0^\infty e^{-sx} F(x)\,dx = \bar{F}(s).$$
We have, from A.6 (Appendix),
$$\bar{F}(s) = L\{F(x)\} = \int_0^\infty e^{-sx}\left\{\int_0^x f(u)\,du\right\}dx = \frac{1}{s}L\{f(x)\} = \bar{f}(s)/s.$$
The relation can also be obtained by integrating (3.1) by parts. Thus we get
$$F^*(s) = \bar{f}(s) = s\bar{F}(s). \tag{3.5}$$

1.3.3 Mean and Variance in Terms of (Derivatives of) the L.T.

We note here that differentiation under the integral sign is valid for the L.T. given by (3.1), since the integrand is bounded and continuous. Differentiating (3.1) with respect to $s$, we get
$$\frac{d}{ds}F^*(s) = -\int_0^\infty x\,e^{-sx}\,dF(x), \qquad \frac{d^2}{ds^2}F^*(s) = (-1)^2\int_0^\infty x^2 e^{-sx}\,dF(x),$$
and, in general, for $n = 1, 2, \ldots$,
$$\frac{d^n}{ds^n}F^*(s) = (-1)^n\int_0^\infty x^n e^{-sx}\,dF(x). \tag{3.6}$$
The differentiation under the integral sign is valid since the new integrands are continuous and bounded. We can use the above relation to find $E(X^n)$, the $n$th moment of $X$, when it exists; we have
$$(-1)^n\left[\frac{d^n}{ds^n}F^*(s)\right]_{s=0} = \int_0^\infty x^n\,dF(x) = E(X^n), \quad n = 1, 2, \ldots,$$
when the l.h.s. exists, i.e. the r.v. $X$ possesses a finite $n$th moment iff $\lim_{s\to 0}\frac{d^n}{ds^n}F^*(s)$ exists. We have
$$E(X) = -\left[\frac{d}{ds}F^*(s)\right]_{s=0}, \tag{3.7}$$
$$E(X^2) = \left[\frac{d^2}{ds^2}F^*(s)\right]_{s=0}, \tag{3.8}$$
and
$$\mathrm{var}(X) = \left[\frac{d^2}{ds^2}F^*(s)\right]_{s=0} - \left\{\left[\frac{d}{ds}F^*(s)\right]_{s=0}\right\}^2. \tag{3.9}$$

1.3.4 Some Important Distributions

1.3.4.1 A special kind of discrete distribution: Suppose that $X$ is a random variable whose whole mass is concentrated at one single point, say the point $a$. This implies that the variable is 'almost always' equal to $a$, i.e. $\Pr\{X = a\} = 1$ and $\Pr\{X \neq a\} = 0$. Its distribution function is given by
$$F(x) = \Pr\{X \le x\} = 0 \text{ for } x < a, \quad = 1 \text{ for } x \ge a,$$
and its L.T. is $F^*(s) = \int e^{-sx}\,dF(x) = e^{-as}$ (Example 3(a)).

The negative exponential distribution with parameter $\lambda > 0$ has density $f(x) = \lambda e^{-\lambda x}$, $x \ge 0$, distribution function
$$F(x) = \Pr\{X \le x\} = 0 \text{ for } x < 0, \quad = 1 - e^{-\lambda x}, \quad 0 \le x,$$
and L.T. $F^*(s) = \lambda/(s + \lambda)$. The distribution is known also simply as the exponential distribution.
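As an illustration of (3.7)-(3.9) (not part of the original text), the SymPy sketch below differentiates the exponential transform $F^*(s) = \lambda/(s + \lambda)$ and recovers the mean $1/\lambda$ and variance $1/\lambda^2$.

```python
import sympy as sp

s, lam = sp.symbols('s lam', positive=True)
Fstar = lam / (s + lam)                     # L.T. of the exponential distribution

EX = -sp.diff(Fstar, s).subs(s, 0)          # (3.7): 1/lam
EX2 = sp.diff(Fstar, s, 2).subs(s, 0)       # (3.8): 2/lam**2
varX = sp.simplify(EX2 - EX ** 2)           # (3.9): 1/lam**2
print(EX, EX2, varX)
```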
Properties of the exponential distribution

(a) Non-ageing (or memoryless or Markov) property: For a r.v. $X$ having exponential distribution with parameter $\lambda$, we have
$$\Pr\{X \ge x + y \mid X \ge x\} = \frac{e^{-\lambda(x+y)}}{e^{-\lambda x}} = e^{-\lambda y} = \Pr\{X \ge y\}.$$
This implies that if the duration $X$ of a certain activity has exponential distribution with parameter $\lambda$, and if the activity is observed a length of time $x$ after its commencement, then the remaining duration of the activity is independent of $x$ and is also distributed as an exponential r.v. with the same parameter $\lambda$. Again, if $X$ is a non-negative continuous r.v. with the non-ageing property, then it can be shown that $X$ must be exponential. This non-ageing or memoryless property characterizes the exponential distribution among all distributions of continuous non-negative r.v.'s.

(b) Minimum of two independent exponential distributions: Suppose that $X_1, X_2$ have independent exponential distributions with parameters $\lambda_1, \lambda_2$ respectively. Let
$$Z = \min(X_1, X_2).$$
We have
$$\Pr\{Z \ge x\} = \Pr\{X_1 \ge x\}\Pr\{X_2 \ge x\} = e^{-\lambda_1 x}e^{-\lambda_2 x} = e^{-(\lambda_1 + \lambda_2)x},$$
so that $Z$ is exponential with parameter $(\lambda_1 + \lambda_2)$. This implies that if the durations $X_1$ and $X_2$ of two activities $A_1$ and $A_2$ have independent exponential distributions with parameters $\lambda_1, \lambda_2$ respectively, and these activities are observed when neither has been completed, then the duration of the interval $Z$ up to the first completion of one of the activities also has exponential distribution with parameter $\lambda_1 + \lambda_2$. The probability that the activity $A_1$ will be completed earlier than the activity $A_2$ is given by
$$\Pr\{X_1 < X_2\} = \int_0^\infty \Pr\{X_2 > x\}\,\lambda_1 e^{-\lambda_1 x}\,dx = \int_0^\infty e^{-\lambda_2 x}\lambda_1 e^{-\lambda_1 x}\,dx = \frac{\lambda_1}{\lambda_1 + \lambda_2}.$$
Similar characterizations hold: (i) a non-negative continuous r.v. $X$ is exponential iff
$$E(X \mid X > y) = y + E(X) \text{ for all } y, \quad \text{or} \quad E(X - y \mid X > y) = E(X) \text{ for all } y;$$
(ii) two independent continuous variables $X_1, X_2$ are exponential iff $Z = \min(X_1, X_2)$ and $W = X_1 - X_2$ are independent.

1.3.4.4 Uniform (rectangular) distribution: Let $X$ have the uniform distribution on $(0, 1)$. The density function of $X$ is
$$f(x) = 1, \quad 0 \le x \le 1; \qquad = 0, \text{ otherwise}.$$
The L.T. $F^*(s)$ of the r.v. $X$ is given by
$$F^*(s) = \int_0^\infty e^{-sx} f(x)\,dx = \int_0^1 e^{-sx}\,dx = \frac{1 - e^{-s}}{s}.$$
The mean of $X$ is $\tfrac{1}{2}$ and the variance is $1/12$.

1.3.4.5 Gamma distribution: Let $X$ have a two-parameter gamma distribution with parameters $\lambda, k$ ($\lambda > 0$ is the scale parameter and $k > 0$ is the shape parameter). The density function of the r.v. $X$ is
$$f_{\lambda,k}(x) = \frac{\lambda^k x^{k-1} e^{-\lambda x}}{\Gamma(k)}, \quad x \ge 0; \qquad = 0, \quad x < 0. \tag{3.15}$$
The L.T. $F^*(s)$ is given by
$$F^*(s) = \int_0^\infty e^{-sx} f_{\lambda,k}(x)\,dx = \frac{\lambda^k}{\Gamma(k)}\int_0^\infty e^{-(s+\lambda)x} x^{k-1}\,dx = \left(\frac{\lambda}{\lambda + s}\right)^k \tag{3.16}$$
(putting $(s + \lambda)x = u$). We have
$$\frac{d}{ds}F^*(s) = -\frac{k\lambda^k}{(\lambda + s)^{k+1}}, \qquad \frac{d^2}{ds^2}F^*(s) = \frac{k(k+1)\lambda^k}{(\lambda + s)^{k+2}}.$$
Hence from (3.7) and (3.8)
$$E(X) = \frac{k}{\lambda} \tag{3.17}$$
and
$$\mathrm{var}(X) = \frac{k(k+1)}{\lambda^2} - \frac{k^2}{\lambda^2} = \frac{k}{\lambda^2}. \tag{3.18}$$
The coefficient of variation $= \mathrm{s.d.}(X)/E(X) = 1/\sqrt{k}$; it is less (greater) than 1 according as $k$ is greater (less) than 1. Further,
$$E(X^r) = \frac{k(k+1)\cdots(k+r-1)}{\lambda^r}.$$
When $\lambda = 1$, we get the single-parameter gamma variate with density
$$\frac{x^{k-1}e^{-x}}{\Gamma(k)}, \quad x \ge 0,$$
and its Laplace transform is $1/(s+1)^k$. When $k = 1$, the gamma distribution becomes the negative exponential distribution with density $\lambda e^{-\lambda x}$ and Laplace transform $\lambda/(s + \lambda)$. Another special case is $\lambda = 1/2$ and $k = n/2$, where $n$ is a positive integer. The density then becomes (for $x > 0$)
$$\frac{x^{n/2 - 1} e^{-x/2}}{2^{n/2}\,\Gamma(n/2)},$$
and the random variable is then said to have a chi-square ($\chi^2$) distribution with $n$ degrees of freedom. The L.T. of the density is $(1 + 2s)^{-n/2}$.

[Fig. 1.2 The gamma density $f_{\lambda,k}(x)$ for some values of $\lambda, k$.]

The gamma distribution has considerable scope for practical application; many empirical distributions can be represented roughly by a suitable choice of the parameters $\lambda, k$. Burgin (Opl. Res. Qrly. 26 (1975) 507-525) discusses its application in inventory control.
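Property (b) above lends itself to a quick simulation check. The following Python sketch (illustration only, with arbitrary rates) verifies that the minimum of two independent exponentials has mean $1/(\lambda_1 + \lambda_2)$ and that $\Pr\{X_1 < X_2\} = \lambda_1/(\lambda_1 + \lambda_2)$.

```python
import random

random.seed(7)
lam1, lam2, n = 1.5, 0.5, 200_000   # arbitrary example rates
wins = 0
total_min = 0.0
for _ in range(n):
    x1 = random.expovariate(lam1)
    x2 = random.expovariate(lam2)
    total_min += min(x1, x2)
    wins += x1 < x2

print(total_min / n, 1.0 / (lam1 + lam2))   # mean of min ~ 1/(lam1 + lam2) = 0.5
print(wins / n, lam1 / (lam1 + lam2))       # ~ 0.75
```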
1.3.4.6 Erlang distribution: Writing $k\lambda$ for $\lambda$, we have a gamma distribution with parameters $k\lambda, k$ having density function
$$f_{k\lambda,k}(x) = \frac{(k\lambda)^k x^{k-1} e^{-k\lambda x}}{\Gamma(k)}, \quad x > 0. \tag{3.19}$$
When $k$ is a positive integer, the variable having this for its density function is said to have an Erlang (or Erlang-$k$) distribution and is denoted by $E_k$. In fact, when $k$ is a positive integer, the gamma distribution with density $f_{k\lambda,k}(x)$ is known as the Erlang-$k$ distribution. The Erlang-$k$ density is shown in Fig. 1.6.

The L.T. of the Erlang-$k$ distribution is
$$\left(\frac{k\lambda}{k\lambda + s}\right)^k. \tag{3.20}$$
We have
$$E(E_k) = k/(k\lambda) = 1/\lambda; \qquad \mathrm{var}(E_k) = k/(k\lambda)^2 = 1/(k\lambda^2). \tag{3.21}$$
The coefficient of variation equals $1/\sqrt{k}$, which is $< 1$ for all $k > 1$. Further,
$$m_r = E(E_k^r) = \frac{k(k+1)\cdots(k+r-1)}{(k\lambda)^r}, \quad r \ge 2.$$

1.3.4.7 Hyper-exponential distribution: A mixture of $k$ ($\ge 2$) independent exponential distributions, having p.d.f.
$$f(x) = \sum_{i=1}^k a_i \lambda_i e^{-\lambda_i x}, \quad x \ge 0, \quad a_i > 0, \quad \sum_{i=1}^k a_i = 1,$$
is called a $k$-stage hyper-exponential distribution. We have, for its L.T.,
$$F^*(s) = \sum_{i=1}^k a_i \frac{\lambda_i}{\lambda_i + s},$$
with
$$E(X) = \text{mean} = \sum_{i=1}^k \frac{a_i}{\lambda_i} \qquad \text{and} \qquad \sigma_X^2 = \text{variance} = 2\sum_{i=1}^k \frac{a_i}{\lambda_i^2} - \left(\sum_{i=1}^k \frac{a_i}{\lambda_i}\right)^2.$$
The coefficient of variation $\sigma_X/E(X)$ is always greater than 1. The distribution corresponds to a mixed-exponential distribution with $k$ stages in parallel: stage $i$ is traversed with probability $a_i$, the time taken having exponential distribution with mean $1/\lambda_i$. It is also known as the mixed-exponential distribution. A hyper-exponential distribution with $k$ stages is denoted by $H_k$.

1.3.4.8 Hypo-exponential distribution: A sum of $k$ ($\ge 2$) independent exponential distributions, $Z = \sum_{i=1}^k X_i$, where the $X_i$ are exponential with parameters $\lambda_i$ ($\lambda_i \neq \lambda_j$, $i \neq j$), is called a $k$-stage hypo-exponential distribution. While the $k$-stage hyper-exponential corresponds to $k$ stages in parallel, the $k$-stage hypo-exponential corresponds to $k$ stages in series, the time taken to traverse stage $i$ being exponential with mean $1/\lambda_i$. See Fig. 1.3. For $k = 2$, the p.d.f. is given by
$$f(x) = a_1\lambda_1 e^{-\lambda_1 x} + a_2\lambda_2 e^{-\lambda_2 x}, \quad \text{where } a_1 = \frac{\lambda_2}{\lambda_2 - \lambda_1}, \quad a_2 = \frac{\lambda_1}{\lambda_1 - \lambda_2}.$$

[Fig. 1.3 Schematic representation of (a) hyper-exponential and (b) hypo-exponential distributions, each of k stages.]

The p.d.f. of the $k$-stage hypo-exponential $Z$ is given by
$$f(x) = \sum_{i=1}^k a_i \lambda_i e^{-\lambda_i x}, \quad \text{where} \quad a_i = \prod_{j \neq i} \frac{\lambda_j}{\lambda_j - \lambda_i}.$$
The L.T. of the distribution is
$$F^*(s) = \prod_{i=1}^k \frac{\lambda_i}{\lambda_i + s} = \sum_{i=1}^k a_i \frac{\lambda_i}{\lambda_i + s}.$$
The coefficient of variation $\sigma_Z/E(Z)$ is always less than 1. Note that $\sum_i a_i = 1$, since $f(x)$ is a p.d.f. Though the form of the p.d.f. appears to resemble that of a hyper-exponential distribution, it is different; the coefficients $a_i$ are of a particular form in the case of the hypo-exponential distribution.

Note: The prefix hyper (hypo) can be linked to the fact that the coefficient of variation (c.v.) is greater (less) than unity, the c.v. of the exponential distribution.
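The contrast between the Erlang and hyper-exponential coefficients of variation can be seen numerically; the short sketch below (not part of the original text; the parameter values are arbitrary) computes both.

```python
from math import sqrt

# Erlang-k with rate k*lam per stage (overall mean 1/lam), as in (3.21)
lam, k = 1.0, 4
erlang_mean = 1.0 / lam
erlang_var = 1.0 / (k * lam ** 2)
print(sqrt(erlang_var) / erlang_mean)        # 0.5 = 1/sqrt(4) < 1

# H2: exponential(mu1) with probability a1, exponential(mu2) with probability a2
a1, a2, mu1, mu2 = 0.3, 0.7, 5.0, 0.5
h_mean = a1 / mu1 + a2 / mu2
h_ex2 = 2 * a1 / mu1 ** 2 + 2 * a2 / mu2 ** 2   # E(X^2) of the mixture
h_var = h_ex2 - h_mean ** 2
print(sqrt(h_var) / h_mean)                  # > 1
```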
1.3.4.9 Coxian distribution: Exponential, Erlang, hyper- and hypo-exponential distributions all have L.T.'s that are rational functions of $s$; the poles lie on the negative real axis of the complex $s$-plane. A family of distributions is obtained by a generalisation of an idea (due to Erlang) by Cox (1955). The family of distributions has as L.T. a rational function of $s$ with the degree of the polynomial in the numerator less than or equal to that of the denominator. This family has been increasingly considered in the literature as a more general service-time distribution (in the queueing context) and repair-time distribution (in the reliability context). Extensive references to Coxian distributions appear also in the computer science and telecommunication literature.

Consider a service facility with $m$ phases of service channels (nodes); the system is a subnetwork of $m$ nodes. The service-time distribution at node (phase) $i$, $i = 1, 2, \ldots, m$, is exponential with rate $\mu_i$, the service time at any node being independent of the service times at the other nodes. The job (or customer) needing service can be at only one of the $m$ stages at a given time, and no other job can be admitted for service until the job receiving service at one of the nodes (phases) has completed its service and departs from the system. A job enters from the left and moves to the right. After receiving service at node $i$, the job may leave the system with probability $b_i$ or move for further service to the next node $(i + 1)$ with probability $a_i$, where $a_i + b_i = 1$, $i = 1, \ldots, m$; we can include $i = 0$ such that $a_0 = 0$ indicates that the job does not require any service from any of the nodes and departs from the system without receiving any service, whereas $a_0 = 1$ indicates that it needs service at least from the first node. After receiving service at the last node $m$, if it reaches that node, the job departs from the system, so that $b_m = 1$.

[Fig. 1.4 Schematic representation of a Coxian distribution with m phases.]

The distribution is denoted by $K_m$ (or $C_m$, as a tribute to Cox). The probability that a job receives (requires) service at nodes $1, 2, \ldots, k$ ($k \le m$) and departs from the system after service completion equals $A_k b_k$, where
$$A_k = a_0 a_1 \cdots a_{k-1}.$$
We have
$$b_0 + \sum_{k=1}^m A_k b_k = 1.$$
Let $y$ be the total service time of a job. The L.T. of the r.v. $y$ is given by
$$F^*(s) = b_0 + \sum_{k=1}^m A_k b_k \prod_{i=1}^k \frac{\mu_i}{s + \mu_i}.$$
The last term can also be written as
$$\left[\prod_{i=0}^{m-1}(1 - b_i)\right]\prod_{i=1}^m \frac{\mu_i}{s + \mu_i}.$$
In general we take $a_0 = 1$, and then $A_1 = 1$: the service is received at least at the first node. We have
$$F^*(0) = 1, \qquad E(y) = \sum_{k=1}^m A_k b_k \sum_{i=1}^k \frac{1}{\mu_i},$$
and the coefficient of variation $\sigma_y/E(y)$ is not less than $1/\sqrt{m}$; that is, it ranges from $1/\sqrt{m}$ to $\infty$, depending on the values of the parameters involved.

Particular case, $m = 2$ (two-phase facility): $K_2$-distribution. Suppose that a job needs service at node 1 and then leaves the system with probability $q$, or (after receiving service at node 1) goes to node 2 with probability $p$ ($= 1 - q$) for further service before departing from the system. The service times at nodes 1 and 2 are independent exponential variables with parameters $\mu_1$ and $\mu_2$ respectively. The total time that the job requires is a random variable $X$ which has the $K_2$ distribution.

[Fig. 1.5 Schematic representation of the two-phase Coxian distribution (K_2).]

The configuration here is like a 2-stage parallel exponential system, as described below:
(A) the job receives service at node 1 only, with probability $q$, and then departs from the system; or
(B) the job receives service at both the facilities sequentially, with probability $p$, and then departs from the system.
The r.v. $X$ can be written as
$$X = X_1 \text{ with probability } q; \qquad X = X_1 + X_2 \text{ with probability } p,$$
where $X_1, X_2$ are independent exponential variables with parameters $\mu_1$ and $\mu_2$ respectively. Thus we can at once write down the L.T. $F^*(s)$ as
$$F^*(s) = q\,\frac{\mu_1}{s + \mu_1} + p\,\frac{\mu_1}{s + \mu_1}\cdot\frac{\mu_2}{s + \mu_2}.$$
On simplification, we get
$$F^*(s) = \frac{\mu_1(qs + \mu_2)}{(s + \mu_1)(s + \mu_2)}.$$
For $q = 1$, $K_2$ reduces to an exponential distribution, and for $p = 1$, $\mu_1 = \mu_2$, $K_2$ reduces to the Erlang-2 distribution. The hyper-exponential ($H_2$) is also a particular case of the $K_2$-distribution, as can be seen below. Consider an $H_2$ distribution having L.T.
$$G^*(s) = \alpha\,\frac{\nu_1}{s + \nu_1} + (1 - \alpha)\,\frac{\nu_2}{s + \nu_2} \quad (\nu_1 > 0,\ \nu_2 > 0,\ 0 < \alpha < 1),$$
and let
$$\mu_1 = \max(\nu_1, \nu_2), \quad \mu_2 = \min(\nu_1, \nu_2), \quad q = \frac{\alpha\nu_1 + (1 - \alpha)\nu_2}{\mu_1};$$
then it can be seen that $G^*(s) = F^*(s)$ (for the $K_2$ distribution). Thus an $H_2$ distribution can be realised as a $K_2$ distribution.
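The simplification of the $K_2$ transform carried out above is easily verified symbolically; the following SymPy sketch (an aside, not from the original text) checks that the mixture form and the simplified form agree.

```python
import sympy as sp

s, mu1, mu2, p = sp.symbols('s mu1 mu2 p', positive=True)
q = 1 - p
lhs = q * mu1 / (s + mu1) + p * mu1 * mu2 / ((s + mu1) * (s + mu2))
rhs = mu1 * (q * s + mu2) / ((s + mu1) * (s + mu2))
print(sp.simplify(lhs - rhs))   # 0, so the two expressions for F*(s) coincide
```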
A necessary condition for a $K_2$ distribution to be an $H_2$ distribution is that $\mu_2 \le q\mu_1$; it follows since $\min(\nu_1, \nu_2) \le \alpha\nu_1 + (1 - \alpha)\nu_2$.

The first three moments of the $K_2$ distribution can be obtained by taking derivatives of $F^*(s)$, or alternatively (and simply) by considering $X$ in terms of $X_1$ and $X_1 + X_2$ and taking moments of $X_1$ and $X_1 + X_2$. Using $E(X_i) = 1/\mu_i$, $E(X_i^2) = 2/\mu_i^2$, $E(X_i^3) = 6/\mu_i^3$, we get
$$E(X) = \frac{1}{\mu_1} + \frac{p}{\mu_2},$$
$$E(X^2) = \frac{2}{\mu_1^2} + 2p\left(\frac{1}{\mu_1\mu_2} + \frac{1}{\mu_2^2}\right),$$
$$E(X^3) = \frac{6}{\mu_1^3} + 6p\left(\frac{1}{\mu_1^2\mu_2} + \frac{1}{\mu_1\mu_2^2} + \frac{1}{\mu_2^3}\right).$$
From the above, the squared coefficient of variation is
$$V^2 = \frac{\mathrm{var}(X)}{[E(X)]^2} = \frac{\mu_2^2 + p(2 - p)\mu_1^2}{(\mu_2 + p\mu_1)^2},$$
and $V^2 \ge \tfrac{1}{2}$ is equivalent to
$$(\mu_2 - p\mu_1)^2 \ge 4p(p - 1)\mu_1^2,$$
which is obvious, since the r.h.s. of the inequality is a negative quantity, $p$ being less than 1. Thus the squared coefficient of variation of the $K_2$ distribution lies between $\tfrac{1}{2}$ and $\infty$. Further, $V^2 = \tfrac{1}{2}$ when $p = 1$, $\mu_1 = \mu_2$ (the case of the $E_2$ distribution). Note that $V^2 \ge 1$ when $\mu_2 \le q\mu_1$.
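The bound $V^2 \ge \tfrac{1}{2}$ can also be confirmed numerically; the sketch below (illustration only) evaluates the squared coefficient of variation over an arbitrary parameter grid.

```python
# V^2 = (mu2^2 + p(2-p) mu1^2) / (mu2 + p mu1)^2 stays >= 1/2 over the grid.
vmin = 2.0
for p in [i / 20 for i in range(1, 21)]:
    for mu1 in [0.25, 0.5, 1.0, 2.0, 4.0]:
        for mu2 in [0.25, 0.5, 1.0, 2.0, 4.0]:
            v2 = (mu2 ** 2 + p * (2 - p) * mu1 ** 2) / (mu2 + p * mu1) ** 2
            vmin = min(vmin, v2)
print(vmin)   # 0.5, attained at p = 1 with mu1 = mu2 (the Erlang-2 case)
```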

A simple version of the CLT for equal components is as follows. If $X_1, \ldots, X_n, \ldots$ are i.i.d. random variables with $E(X_i) = \mu$, $\mathrm{var}(X_i) = \sigma^2$ (both finite) and $S_n = X_1 + \cdots + X_n$, then, as $n \to \infty$,
$$\Pr\left\{\frac{S_n - n\mu}{\sigma\sqrt{n}} \le x\right\} \to \Phi(x),$$
where $\Phi(\cdot)$ is the d.f. of $N(0, 1)$.

1.3.5 Three Important Theorems

We state, without proof, three theorems concerning the L.T. of probability distributions.

Theorem 1.5. Uniqueness Theorem: Distinct probability distributions on $[0, \infty)$ have distinct Laplace transforms.

This has the important implication that a probability distribution is recognizable by its transform. If the L.T. of a random variable $X$ is known, then by identifying it with the form of the L.T. of a known distribution, one can conclude that $X$ has the corresponding distribution; if this form does not resemble the L.T. of any standard distribution, one can proceed to find the inverse of the L.T. (for which numerical methods have also been developed) to get the distribution. Even without finding the inverse, i.e. the form of the distribution, one can compute the moments of the distribution. Because of this theorem, perhaps, the L.T. has the role it now plays in the study of probability distributions.

Theorem 1.6. Continuity Theorem: Let $\{X_n\}$, $n = 1, 2, \ldots$, be a sequence of random variables with distribution functions $\{F_n\}$ and L.T.'s $\{F_n^*(s)\}$. If, as $n \to \infty$, $F_n$ tends to a distribution function $F$ having transform $F^*(s)$, then, as $n \to \infty$, $F_n^*(s) \to F^*(s)$ for $s > 0$, and conversely.

This theorem can be used to obtain the limit distribution of a sequence of random variables.

Theorem 1.7. Convolution Theorem: The Laplace transform of the convolution of two independent random variables $X, Y$ is the product of their transforms.

The integral
$$\int_0^t f(t - v)\,g(v)\,dv \tag{3.22a}$$
(denoted by $f * g$) is called the convolution of the two functions $f$ and $g$. The convolution $U$ of $F$ and $G$ is given by
$$U(t) = \int_0^t G(t - v)\,dF(v), \tag{3.22b}$$
and the transform of $U$ is the product of the transforms of $F$ and $G$:
$$U^*(s) = F^*(s)\,G^*(s). \tag{3.22c}$$
In the case of discrete random variables the result is essentially the same as that stated in (1.10) for the generating function of the convolution of two random variables; in the case of continuous random variables we now have the analogous result. The above result is equivalent to the assertion that if $X, Y$ are two independent random variables, then
$$E\{e^{-s(X+Y)}\} = E\{\exp(-sX)\exp(-sY)\} = E\{\exp(-sX)\}\,E\{\exp(-sY)\}. \tag{3.23}$$
(Note that the converse is not true.)

The result can also be stated as:

Theorem 1.8. The L.T. of the sum of two independent variables is the product of the L.T.'s of the variables.

We can use the result to find the distribution of the sum of two or more independent random variables, whether the variables are continuous or discrete. The method of generating functions is applicable only when the variables are non-negative integral-valued. The following result immediately follows:

Theorem 1.9. The sum $S_n = X_1 + \cdots + X_n$ of $n$ (a fixed number) identically and independently distributed random variables $X_i$ has L.T. equal to $[F^*(s)]^n$, $F^*(s)$ being the L.T. of $X_i$.

As an application we consider the following result:

Theorem 1.10. The sum of $k$ identical and independent negative exponential distributions with parameter $\lambda$ follows the gamma distribution with parameters $\lambda, k$.

Let $X_i$, $i = 1, 2, \ldots, k$, have densities
$$f_{X_i}(x) = \lambda\exp(-\lambda x), \quad x \ge 0, \text{ for all } i.$$
From Example 3(c), the L.T. of $X_i$ is
$$F_{X_i}^*(s) = \lambda/(s + \lambda),$$
and so the L.T. of $S = X_1 + X_2 + \cdots + X_k$ is $[\lambda/(s + \lambda)]^k$. But this is the L.T. of the gamma distribution with density $f_{\lambda,k}(x)$ given in (3.15). Hence the theorem.
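Theorem 1.10 can be illustrated by simulation; the Python sketch below (not part of the original text; $\lambda = 2$, $k = 3$ are arbitrary choices) checks that the sum of $k$ i.i.d. exponential variables has mean $k/\lambda$ and variance $k/\lambda^2$, as the gamma distribution requires.

```python
import random

random.seed(3)
lam, k, n = 2.0, 3, 100_000
sums = [sum(random.expovariate(lam) for _ in range(k)) for _ in range(n)]
m = sum(sums) / n
v = sum(x * x for x in sums) / n - m * m
print(m, k / lam)        # ~ 1.5
print(v, k / lam ** 2)   # ~ 0.75
```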
We can easily obtain the mean and variance of the gamma distribution as follows:
$$E(S) = E\left(\sum X_i\right) = \sum E(X_i) = k/\lambda, \qquad \mathrm{var}(S) = \sum \mathrm{var}(X_i) = k/\lambda^2$$
(see (3.17) and (3.18)).

Limiting form of the Erlang-$k$ distribution: As $k \to \infty$, the Erlang-$k$ distribution tends to a discrete distribution concentrated at $1/\lambda$. Let $\{E_k\}$ be a sequence of r.v.'s, $E_k$ having the Erlang-$k$ distribution with density
$$f_{k\lambda,k}(x) = \frac{(k\lambda)^k x^{k-1}\exp(-k\lambda x)}{\Gamma(k)}, \quad k = 1, 2, 3, \ldots$$
Its L.T. is given by
$$F_k^*(s) = \left(\frac{k\lambda}{k\lambda + s}\right)^k = \left(1 + \frac{s}{k\lambda}\right)^{-k}.$$
As $k \to \infty$,
$$F_k^*(s) \to \exp(-s/\lambda). \tag{3.24}$$
But $\exp(-s/\lambda)$ is the L.T. of the discrete distribution concentrated at $a = 1/\lambda$ (Example 3(a)). Therefore, from the continuity theorem it follows that, as $k \to \infty$, the distribution of $E_k$ tends to that of a variable whose whole mass is concentrated at $1/\lambda$. This can also be seen from the fact that, as $k \to \infty$, the mode $(k - 1)/k\lambda$ of $E_k$ moves to the right towards $1/\lambda$ and that, as $\mathrm{var}(E_k) = 1/(k\lambda^2) \to 0$, the whole mass of $E_k$ tends to concentrate at the single point $1/\lambda$. We thus have the exponential distribution for $k = 1$ and the degenerate distribution (deterministic case) for $k \to \infty$. A suitable value of $k$ ($1 < k < \infty$) may thus be used to represent a distribution whose coefficient of variation ($= 1/\sqrt{k}$) lies between 0 and 1.
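The convergence in (3.24) is also easy to see numerically; the following sketch (illustration only, with arbitrary $\lambda$ and $s$) evaluates $(1 + s/k\lambda)^{-k}$ for increasing $k$.

```python
from math import exp

lam, s = 1.0, 0.7
for k in (1, 5, 50, 500, 5000):
    print(k, (1 + s / (k * lam)) ** (-k))   # approaches exp(-s/lam)
print('limit', exp(-s / lam))
```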
