PRP Module 2

The document covers the concepts of random variables, including discrete and continuous random variables, their probability mass functions (PMF), probability density functions (PDF), and cumulative distribution functions (CDF). It explains the calculation of mean and variance for both types of random variables, providing examples and exercises for better understanding. The document is part of a lecture series on Probability and Random Processes at Jaypee Institute of Information Technology.

Probability and Random Processes

(15B11MA301)
Lecture-6
(Course content covered: One dimensional discrete random variable)

Department of Mathematics
Jaypee Institute of Information Technology, Noida
Random Variable
• A random variable X is a function that assigns a real number to each
outcome of the sample space S, i.e., X: S→R is a mapping from the
sample space S to the set of real numbers R.
• The sample space S is the domain of the random variable and the set of
all values taken on by the random variable is the range of the random
variable X.
• A random variable is usually denoted by an uppercase letter such as X
and the corresponding lowercase letter, such as x, denotes a possible
value of the random variable X.
• If only one characteristic is considered corresponding to the sample
space of a random experiment, then the random variable is called a
one-dimensional random variable.

Random Variable (Contd.)
• More than one characteristic may be considered corresponding to the
sample space of a random experiment. For example, suppose that S
consists of a large group of students of a college, and let the experiment
consist of choosing a student at random. Let X denote the weight of the
student and Y denote the student's height. Then (X, Y) is a two-dimensional
random variable. The same idea may be extended further.
Example
Consider the "number of heads obtained" in an experiment of tossing
three unbiased coins simultaneously. The random variable X can take
the values 0, 1, 2 and 3, i.e.,
Domain of X = S = {TTT, TTH, THT, HTT, HHT, HTH, THH, HHH},
and
Range of X = {0, 1, 2, 3}.
Discrete Random Variable
• A random variable which can assume only a finite or countably infinite
set of values is called a discrete random variable.
• The main characteristic of a discrete random variable is that the set of
possible values in the range can all be listed; the list may be finite or
countably infinite.
Examples
• Number of heads obtained when 10 coins are tossed.
• Number of phone calls received per day at a telephone booth.
• Number of sixes obtained when a pair of dice is thrown.
• Number of spade cards when 4 cards are chosen from a well-shuffled
pack of 52 playing cards.
• The number of odd numbers selected out of the set of positive
integers.
Continuous Random Variable
• A random variable X which can take any value in an interval of real
numbers is said to be a continuous random variable.
• The range of X can take infinitely many real values within one or
more intervals of real numbers.
• The set of all possible values in the range cannot be listed in case
of a continuous variable as the list is uncountably infinite.
Examples
• The duration of a phone call received.
• The time to failure of a machine.
• The temperature gained by an electric motor after one hour of
operation.
• The amount of rainfall in a day.
Probability Mass Function
Let a discrete random variable X take the values x1, x2, x3, . . ., xn.
Then the probability function, or probability mass function (PMF), of
X is given by

f(xi) = P(X = xi) = pi ; for i = 1, 2, 3, . . ., n,

i.e.,

X : x1 x2 x3 x4 . . . xn
P(X = xi) : p1 p2 p3 p4 . . . pn

such that
(i) P(X = xi) ≥ 0 for all i;
(ii) Σ_{xi} P(X = xi) = p1 + p2 + p3 + p4 + . . . + pn = 1.
Probability Distribution
If pi represents the probability corresponding to X = xi, for i = 1,
2, 3, …, n, then the collection of pairs (xi, pi) is called the
probability distribution of the discrete random variable X.

Example. If a pair of fair coins is tossed and the random variable X is
the 'number of heads', then its probability distribution is:

X = xi : 0 1 2
P(X = xi) = pi : 1/4 2/4 1/4
Example. Let X represents the number of heads when three fair
coins are tossed. Find
(a) the probability distribution of the number of heads,
(b) P(0< X <3), (c) P(X >1), (d) P(X ≤ 2).
Sol.: S = {TTT, TTH,THT, HTT,HHT,HTH,THH,HHH}
(a) The probability distribution is as follows:
X (Number of Heads): 0 1 2 3
Probability: 1/8 3/8 3/8 1/8

(b) P(0< X <3) = P(X =1) + P(X =2) =6/8=0.75,

(c) P(X >1) = P(X =2) + P(X =3) =4/8=0.50,

(d) P(X ≤ 2) = P(X =0) + P(X =1) + P(X =2)=7/8


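The three-coin distribution above can be checked by brute-force enumeration; a short Python sketch (not part of the original slides, variable names are illustrative):

```python
from fractions import Fraction
from itertools import product

# enumerate the 8 equally likely outcomes of three fair coins
outcomes = list(product("HT", repeat=3))
heads = [s.count("H") for s in outcomes]
pmf = {x: Fraction(heads.count(x), 8) for x in range(4)}   # 1/8, 3/8, 3/8, 1/8

p_b = pmf[1] + pmf[2]              # P(0 < X < 3) = 6/8
p_c = pmf[2] + pmf[3]              # P(X > 1)     = 4/8
p_d = pmf[0] + pmf[1] + pmf[2]     # P(X <= 2)    = 7/8
```

Exact fractions avoid any floating-point rounding when comparing with the slide's values.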
Cumulative Distribution Function
• The cumulative distribution function (CDF) of a discrete random variable
X is denoted by F(x) and defined as follows:

F(x) = P(X ≤ x) = Σ_{xi ≤ x} f(xi).

• Properties of the cumulative distribution function (CDF)

(i) 0 ≤ F(x) ≤ 1,
(ii) F(−∞) = 0,
(iii) F(∞) = 1,
(iv) if x1 < x2, then F(x1) ≤ F(x2),
(v) P(X = xi) = F(xi) − F(xi−1).
Example. The probability mass function of a random variable is as
follows:

X : 1 2 3 4 5 6
P : k 2k/3 3k 1/3 k/3 1/6

Determine the following:
(a) the value of k (b) F(4) (c) F(6) (d) P(X = 3)

Solution:
(a) Since k + (2k/3) + 3k + (1/3) + (k/3) + (1/6) = 1, so k = 1/10 = 0.1,
(b) F(4) = P(X ≤ 4) = P(1) + P(2) + P(3) + P(4) = 0.8,
(c) F(6) = P(X ≤ 6) = P(1) + P(2) + P(3) + P(4) + P(5) + P(6) = 1,
(d) P(X = 3) = 3k = 0.3, or P(X = 3) = F(3) − F(2) = 0.3.
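The arithmetic in this example can be verified with exact rational arithmetic; a minimal sketch (not from the slides):

```python
from fractions import Fraction as Fr

# PMF from the example: the coefficients of k sum to k(1 + 2/3 + 3 + 1/3) = 5k
# and the constants to 1/3 + 1/6 = 1/2, so 5k + 1/2 = 1 gives k = 1/10
k = Fr(1, 10)
pmf = {1: k, 2: 2 * k / 3, 3: 3 * k, 4: Fr(1, 3), 5: k / 3, 6: Fr(1, 6)}

total = sum(pmf.values())                        # must equal 1 for a valid PMF
F4 = sum(p for x, p in pmf.items() if x <= 4)    # F(4) = P(X <= 4) = 0.8
F6 = sum(p for x, p in pmf.items() if x <= 6)    # F(6) = 1
p3 = pmf[3]                                      # P(X = 3) = 3k = 0.3
```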
References/Further Reading
1. Veerarajan, T., Probability, Statistics and Random Processes, 3rd Ed., Tata McGraw-Hill, 2008.
2. Ghahramani, S., Fundamentals of Probability with Stochastic Processes, Pearson, 2005.
3. Papoulis, A. and Pillai, S.U., Probability, Random Variables and Stochastic Processes, Tata McGraw-Hill, 2002.
4. Miller, S. and Childers, D., Probability and Random Processes, Academic Press, 2012.
5. Johnson, R.A., Miller and Freund's Probability and Statistics for Engineers, Pearson, 2002.
6. Spiegel, M.R., Statistics, Schaum's Outline Series, McGraw-Hill.
7. Walpole, R.E., Myers, R.H., Myers, S.L. and Ye, K., Probability and Statistics for Engineers and Scientists, 7th Ed., Pearson, 2002.
8. https://nptel.ac.in/courses/117/105/117105085/
Probability and Random Processes
(15B11MA301)
Lecture-7

Department of Mathematics
Jaypee Institute of Information Technology
Noida, India
Continuous Random Variable

If X is an RV which can take all possible values (i.e., an uncountably
infinite number of values) in an interval, then X is called a continuous RV.

e.g. the lifetime of a random light bulb;
the weight of a random watermelon grown in a certain field.
Probability Density Function
If X is a continuous RV such that

P(x ≤ X ≤ x + dx) = f(x) dx,

then f(x) is called the probability density function (pdf) of the RV X.

The curve y = f(x) is called the probability (density) curve of the RV X.

The expression f(x) dx is called a probability differential.

The probability density function satisfies the following conditions:

(i) f(x) ≥ 0, −∞ < x < ∞ (or x ∈ R_X),

(ii) ∫_{−∞}^{∞} f(x) dx = 1 (or over R_X),

(iii) for any a, b with −∞ < a ≤ b < ∞,

P(a ≤ X ≤ b) = ∫_a^b f(x) dx.
Remarks:
1) When X is a continuous RV,

P(X = a) = P(a ≤ X ≤ a) = ∫_a^a f(x) dx = 0.

This means the probability that a continuous RV assumes a specific
value is zero (unlike a discrete RV). Hence, we have

P(a ≤ X ≤ b) = P(a < X ≤ b) = P(a ≤ X < b) = P(a < X < b).

2) The probability of X lying in an interval does not depend on whether
the end points of the interval are included.

3) The value of the pdf f(x) at a point does not itself represent a
probability. The probabilistic significance of f(x) is that its integral
over any subset I of real numbers gives the probability that X lies in I.
Example
• The diameter of an electric cable (X) is assumed to be a continuous
random variable with probability density function
f(x) = 6x(1 − x), 0 < x < 1.
(i) Check that the above is a valid PDF.
(ii) Determine a number b such that P(X < b) = P(X > b).
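The solution is not reproduced as text on the slide. As a sketch, both parts can be checked numerically; b = 1/2 follows because 6x(1 − x) is symmetric about x = 1/2 (and solving 3b² − 2b³ = 1/2 gives the same value):

```python
def f(x):
    # PDF from the example: f(x) = 6x(1 - x) on (0, 1)
    return 6 * x * (1 - x)

def integrate(g, a, b, n=100000):
    # simple midpoint rule, accurate enough for a smooth polynomial
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, 0.0, 1.0)     # should be 1 (valid PDF)
b = 0.5                            # candidate median
left = integrate(f, 0.0, b)        # P(X < b)
right = integrate(f, b, 1.0)       # P(X > b)
```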
Exercise
• Experience has shown that while walking in a certain
park, the time X in minutes, between seeing two
people smoking has a density function of the form

f(x) = λ x e^{−x}, x > 0; 0, otherwise.

• (a) Calculate the value of λ.
• (b) What is the probability that Jeff, who has just seen a
person smoking, will see another person smoking in the
next 2 to 5 minutes?
• (c) in at least 7 minutes?
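The slide leaves the exercise unsolved. A numerical sketch (assuming the density f(x) = λxe^{−x} as reconstructed above): normalization forces λ·∫₀^∞ xe^{−x} dx = λ·1 = 1, so λ = 1, and the two probabilities are integrals of xe^{−x}:

```python
import math

def integrate(g, a, b, n=100000):
    # midpoint rule over a finite window; e^{-x} is negligible past x = 50
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# (a) lambda must make the density integrate to 1
lam = 1.0 / integrate(lambda x: x * math.exp(-x), 0.0, 50.0)

f = lambda x: lam * x * math.exp(-x)
p_2_to_5 = integrate(f, 2.0, 5.0)          # (b) equals 3e^{-2} - 6e^{-5}
p_at_least_7 = integrate(f, 7.0, 50.0)     # (c) equals 8e^{-7}
```

The closed forms follow from ∫ xe^{−x} dx = −(x + 1)e^{−x}.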
Cumulative Distribution Function (cdf)

If X is an RV, discrete or continuous, then P(X ≤ x) is called the
cumulative distribution function of X, or distribution function of X,
and is denoted by F(x).

If X is discrete, F(x) = Σ_{j : xj ≤ x} pj.

If X is continuous, F(x) = P(−∞ < X ≤ x) = ∫_{−∞}^{x} f(t) dt.
Properties of Cumulative Distribution Function (cdf)
[The list of CDF properties and the two worked examples on these slides appear as images in the original and are not recoverable as text.]
Example. Suppose that the error in the reaction temperature, in °C, for a
controlled laboratory experiment is a continuous random variable X
having the probability density function

f(x) = x²/3 if −1 < x < 2; 0 otherwise.

(a) Verify that f(x) is a density function.

(b) Find P(0 < X ≤ 1).

Solution (a) (i) f(x) ≥ 0 for all x.

(ii) ∫_{−1}^{2} (x²/3) dx = [x³/9]_{−1}^{2} = 8/9 + 1/9 = 1.

(b) P(0 < X ≤ 1) = ∫_0^1 (x²/3) dx = [x³/9]_0^1 = 1/9.
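Both integrals above can be confirmed numerically; an illustrative check (not from the slides):

```python
def f(x):
    # PDF from the example: x^2/3 on (-1, 2), 0 elsewhere
    return x * x / 3 if -1 < x < 2 else 0.0

def integrate(g, a, b, n=100000):
    # midpoint rule; fine for a piecewise polynomial
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

total = integrate(f, -1.0, 2.0)   # should be 1
p = integrate(f, 0.0, 1.0)        # P(0 < X <= 1) = 1/9
```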
Example. A continuous random variable X has the probability density
function

f(x) = x²/3 if −1 < x < 2; 0 otherwise.

(a) Find the CDF of X.

(b) Find P(0 < X ≤ 1) by using the CDF.

Solution (a) For −1 ≤ x < 2,
F(x) = ∫_{−1}^{x} (t²/3) dt = (x³ + 1)/9.

So F(x) = 0 for x < −1, (x³ + 1)/9 for −1 ≤ x < 2, and 1 for x ≥ 2.

(b) P(0 < X ≤ 1) = F(1) − F(0) = 2/9 − 1/9 = 1/9.
Example
[This worked example, the formula used, and its solution appear as images in the original slides.]
Thank You
Probability and Random Processes
(15B11MA301)
Lecture-8
Department of Mathematics
Jaypee Institute of Information Technology
Noida, India

Mean and Variance of discrete random variable

Mean: If X is a discrete random variable, then the mean of X, denoted
by mX or E[X] (also read as the expectation of X), is defined by

E[X] = Σ_{xj} xj f(xj) = Σ_{xj} xj P(X = xj).

Variance: It is defined as follows:

E[(X − mX)²] = Σ_{xj} (xj − mX)² f(xj) = Σ_{xj} (xj − mX)² P(X = xj).

It is denoted by σX², Var[X] or E[(X − mX)²] (read as the expectation
of (X − mX) squared).
Mean and Variance of continuous random variable

Mean: If X is a continuous random variable, then the mean of X,
denoted by mX or E[X] (also read as the expectation of X), is defined by

E[X] = ∫_{−∞}^{∞} x f(x) dx.

Variance: It is defined as follows:

E[(X − mX)²] = ∫_{−∞}^{∞} (x − mX)² f(x) dx.

It is denoted by σX², Var[X] or E[(X − mX)²] (read as the expectation
of (X − mX) squared).
Properties of Variance
If X is a random variable (discrete or continuous), then

1. Var[X] = E[(X − mX)²] = E[X²] − (E[X])², provided E[X²] exists.

2. Var[aX + b] = a² Var[X].
Mean and Variance of discrete random variable
Example: Let X be the total of the two dice in the experiment of
tossing two balanced dice. Find the mean and variance of X.

The sample space S consists of the 36 equally likely pairs (i, j),
i, j = 1, …, 6, with X(i, j) = i + j, so the range of X is
{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}.

X : 2 3 4 5 6 7 8 9 10 11 12
f(x) : 1/36 2/36 3/36 4/36 5/36 6/36 5/36 4/36 3/36 2/36 1/36

Solution:
Mean: E[X] = Σ_{xj} xj f(xj) = 7.

Variance: Var(X) = E[X²] − (E[X])² = 1974/36 − 49 = 35/6 ≈ 5.83.
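The table, mean and variance can be reproduced by enumerating the 36 outcomes; a quick Python sketch (not part of the original slides):

```python
from fractions import Fraction as Fr
from itertools import product

# enumerate all 36 equally likely outcomes of two balanced dice
totals = [a + b for a, b in product(range(1, 7), repeat=2)]
pmf = {x: Fr(totals.count(x), 36) for x in set(totals)}

mean = sum(x * p for x, p in pmf.items())        # E[X] = 7
ex2 = sum(x * x * p for x, p in pmf.items())     # E[X^2] = 1974/36
var = ex2 - mean ** 2                            # 35/6
```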
Thank You
Probability and Random Processes
(15B11MA301)
Lecture- 9

Department of Mathematics
Jaypee Institute of Information Technology
Noida, India
Contents:
• Two dimensional RVs

• Marginal Distributions

• Conditional probability distribution

• Examples
Two dimensional random variable

Let S be the sample space associated with the random
experiment E. Let X = X(s) and Y = Y(s) be two functions, each
assigning a real number to each outcome s ∈ S of the random
experiment; then (X, Y) is called a two-dimensional random
variable.

If the possible values of (X, Y) are finite or countably infinite,


then (X, Y) is called a two-dimensional discrete random variable.
If (X, Y) can assume all values in a specified region R in the
xy-plane, then (X, Y) is called a two-dimensional continuous
random variable.

JOINT PROBABILITY MASS FUNCTION (PMF)

Let (X, Y) be a two-dimensional discrete random variable defined
on the probability space (S, A, P). The joint probability mass
function of (X, Y), denoted by f(x, y), is defined as
f(xi, yj) = P[X = xi, Y = yj] for all (xi, yj) in the range of (X, Y).

Mathematically, the pmf f(x, y) of bivariate discrete random variables
is a real-valued function that satisfies the following properties:

1. f(x, y) ∈ [0, 1] for all (x, y),
2. Σ_{(x, y)} f(x, y) = 1,

where (x, y) ranges over the range space of (X, Y).
BIVARIATE JOINT PROBABILITY DENSITY FUNCTION (PDF)

Mathematically, the pdf f(x, y) of a bivariate continuous random
variable is a real-valued function that satisfies the following properties:

1. f(x, y) ≥ 0 for all (x, y),
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1.
Joint Cumulative Distribution Function

Let (X, Y) be a two-dimensional random variable defined on the
probability space (S, A, P). The joint cumulative distribution
function of (X, Y), denoted by F(x, y), is defined as P[X ≤ x, Y ≤ y]
for all (x, y).

Relation between the joint CDF and the joint PDF:

1. F(x, y) = ∫_{−∞}^{y} ∫_{−∞}^{x} f(u, v) du dv,

2. f(x, y) = ∂²F(x, y)/∂x∂y.
Marginal probability distribution

fX(x) = P[X = x] is called the marginal probability function of X,
and
fY(y) = P[Y = y] is called the marginal probability function of Y.

Note: Let y1, y2, y3, … denote the possible values of Y. Then

fX(x) = P[X = x] = P[{X = x, Y = y1} ∪ {X = x, Y = y2} ∪ …]
= P[X = x, Y = y1] + P[X = x, Y = y2] + … = Σ_y f(x, y).

Thus the marginal probability function of X, fX(x), is obtained from the joint probability
function of X and Y by summing f(x, y) over the possible values of Y.
Similarly,

fY(y) = P[Y = y] = P[{X = x1, Y = y} ∪ {X = x2, Y = y} ∪ …]
= P[X = x1, Y = y] + P[X = x2, Y = y] + … = Σ_x f(x, y).

Thus the marginal probability function of Y, fY(y), is obtained from the joint probability
function of X and Y by summing f(x, y) over the possible values of X.
Marginal probability distribution for Continuous
Random Variables
Definition: Let (X, Y) denote a two-dimensional continuous random variable with
joint probability density function f(x, y). Then
the marginal density of X is

fX(x) = ∫_{−∞}^{∞} f(x, y) dy,

and the marginal density of Y is

fY(y) = ∫_{−∞}^{∞} f(x, y) dx.
Conditional probability distribution

f(x|y) = P[X = x | Y = y] is called the conditional probability
function of X given Y = y,

and

f(y|x) = P[Y = y | X = x] is called the conditional probability
function of Y given X = x.

Note:

P[X = x | Y = y] = P[X = x and Y = y] / P[Y = y],

and

P[Y = y | X = x] = P[X = x and Y = y] / P[X = x].
Conditional probability distribution for Continuous
Random Variables

Definition: Let (X, Y) denote a two-dimensional continuous random variable
with joint probability density function f(x, y) and marginal densities fX(x), fY(y).
Then the conditional density of Y given X = x is

f(y | x) = f(x, y) / fX(x),

and the conditional density of X given Y = y is

f(x | y) = f(x, y) / fY(y).
Example: Given the following probability distribution (the joint
table and the marginal computations appear as images in the original slides):
(i) Find the marginal distributions of X and Y.
(ii) Find the conditional distribution of X given Y = 2.

Solution (ii): The conditional distribution of X given Y = 2 is

X : −1 0 1
f(x | Y = 2) : 2/5 1/5 2/5
Probability and Random Processes
(15B11MA301)
Lecture- 10

Department of Mathematics
Jaypee Institute of Information Technology
Noida, India
Contents:
• Independent Random Variables
• Conditional Means
• Conditional Variances
• Covariance
• Examples

Independent Random Variables

Let (X, Y) denote a two-dimensional continuous random variable with
joint probability density function f(x, y) and marginal densities fX(x),
fY(y). Then X and Y are said to be independent if

f(x, y) = fX(x) · fY(y),

equivalently, f(y | x) = fY(y).
[A worked example on the marginal density of Y and the conditional distribution of Y given X = x appears as images in the original slides.]
Example: If X, Y have the joint PDF
f(x, y) = x + y for 0 < x < 1, 0 < y < 1; 0 otherwise,
check whether X and Y are independent or not.

Solution: The marginal density function of X is given by
fX(x) = ∫_0^1 (x + y) dy = x + 1/2, 0 < x < 1.

The marginal density function of Y is, by symmetry,
fY(y) = y + 1/2, 0 < y < 1.

Now,
fX(x) · fY(y) = (x + 1/2)(y + 1/2) ≠ x + y = f(x, y).

Ans: X and Y are not independent.
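The independence check can also be done numerically at a sample point; a small sketch (the point (0.3, 0.7) is an arbitrary choice, not from the slides):

```python
def joint(x, y):
    # joint PDF from the example: x + y on the unit square
    return x + y if 0 < x < 1 and 0 < y < 1 else 0.0

def marginal(v, n=20000):
    # integrate out the second argument with a midpoint rule;
    # since joint(x, y) is symmetric, this serves for both marginals
    h = 1.0 / n
    return sum(joint(v, (j + 0.5) * h) for j in range(n)) * h

x, y = 0.3, 0.7
fx = marginal(x)               # x + 1/2 = 0.8
fy = marginal(y)               # y + 1/2 = 1.2
independent = abs(fx * fy - joint(x, y)) < 1e-9   # 0.96 != 1.0
```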
Example
[The joint distribution given in this example, the quantity to evaluate, and the solution appear as images in the original slides.]
        X = 2   X = 4
Y = 1   0.10    0.15
Y = 3   0.20    0.30
Y = 5   0.10    0.15
Expected values of two-dimensional random variables
If (X, Y) is a two-dimensional random variable, then the mean or expectation
of a function g(X, Y) is defined as follows.
• Case 1: when X, Y are discrete random variables, then
E[g(X, Y)] = Σ_x Σ_y g(x, y) f(x, y).
• Case 2: when X, Y are continuous random variables, then
E[g(X, Y)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) f(x, y) dx dy.

If X and Y are independent, then E(XY) = E(X) · E(Y).
Conditional Expected Values

Conditional Variance: If (X, Y) is a two-dimensional random variable,
then the conditional variance of Y given X = x is

Var(Y | X = x) = E[Y² | X = x] − (E[Y | X = x])².

Note: If X and Y are independent random variables, then the conditional
distributions coincide with the marginals, so E[Y | X = x] = E[Y] and
Var(Y | X = x) = Var(Y).

Covariance

Cov(X, Y) = E[(X − E[X])(Y − E[Y])] = E[XY] − E[X] · E[Y].
Example 1: The joint distribution of X and Y is given by
f(x, y) = (x + y)/21, x = 1, 2, 3, y = 1, 2.
Find the marginal distributions of X and Y. Find the means of X and Y also.

Solution: The marginal distribution of X is
fX(x) = Σ_{y=1}^{2} (x + y)/21 = (2x + 3)/21, x = 1, 2, 3,
and the marginal distribution of Y is
fY(y) = Σ_{x=1}^{3} (x + y)/21 = (6 + 3y)/21, y = 1, 2.

Hence E[X] = (1·5 + 2·7 + 3·9)/21 = 46/21 and E[Y] = (1·9 + 2·12)/21 = 33/21 = 11/7.
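The marginals and means in Example 1 can be verified exactly with rational arithmetic; an illustrative sketch (not from the slides):

```python
from fractions import Fraction as Fr

# joint PMF from Example 1: f(x, y) = (x + y)/21 for x in {1,2,3}, y in {1,2}
joint = {(x, y): Fr(x + y, 21) for x in (1, 2, 3) for y in (1, 2)}

fx = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (1, 2, 3)}
fy = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (1, 2)}

mean_x = sum(x * p for x, p in fx.items())   # 46/21
mean_y = sum(y * p for y, p in fy.items())   # 11/7
```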
Example 2: The joint PDF of (X, Y) is given by
f(x, y) = 24xy for 0 < x, 0 < y, x + y < 1; 0 otherwise.
Find the conditional mean and variance of Y given X.

Solution: The marginal density of X is
fX(x) = ∫_0^{1−x} 24xy dy = 12x(1 − x)², 0 < x < 1,
so the conditional density of Y given X = x is
f(y | x) = 24xy / [12x(1 − x)²] = 2y/(1 − x)², 0 < y < 1 − x.

Hence
E[Y | X = x] = ∫_0^{1−x} y · 2y/(1 − x)² dy = 2(1 − x)/3,
E[Y² | X = x] = ∫_0^{1−x} y² · 2y/(1 − x)² dy = (1 − x)²/2,
Var(Y | X = x) = (1 − x)²/2 − 4(1 − x)²/9 = (1 − x)²/18.
Probability and Random Processes
(15B11MA301)
Lecture-11

Department of Mathematics
Jaypee Institute of Information Technology, Noida
Contents of the Lecture:
 Moments
 Related Results
 Examples

Moments
• If X is a random variable, discrete or continuous, the r-th moment about the origin, denoted by μ′_r, is defined as
μ′_r = E[X^r], r = 1, 2, 3, …
• The moments about the mean, or central moments, denoted by μ_r, are defined as
μ_r = E[(X − X̄)^r], r = 1, 2, 3, …, where X̄ = E[X].

• If X is a discrete random variable which can assume the values x1, …, xn with respective
probabilities p(x1), p(x2), …, p(xn), then

μ′_r = E[X^r] = Σ_{i=1}^{n} x_i^r p(x_i),
and μ_r = E[(X − X̄)^r] = Σ_{i=1}^{n} (x_i − X̄)^r p(x_i).

• If X is a continuous random variable with PDF f(x), then

μ′_r = E[X^r] = ∫_{−∞}^{∞} x^r f(x) dx, r = 1, 2, 3, …

μ_r = E[(X − X̄)^r] = ∫_{−∞}^{∞} (x − X̄)^r f(x) dx, r = 1, 2, 3, …
Relationship between moments about the origin and moments about the mean

• If X is a random variable, discrete or continuous, then the first moment about the origin is
μ′_1 = E[X] = X̄.
• The moments about the mean are defined as μ_r = E[(X − X̄)^r], r = 1, 2, 3, ….
Therefore, μ_1 = E[X − X̄] = μ′_1 − μ′_1 = 0.

The first moment about the mean is always zero.

• Similarly, μ_2 = E[(X − X̄)²] = μ′_2 − (μ′_1)², i.e.
Var(X) = E[X²] − (E[X])².

• μ_3 = E[(X − X̄)³] = μ′_3 − 3μ′_2 μ′_1 + 2(μ′_1)³.

• μ_4 = E[(X − X̄)⁴] = μ′_4 − 4μ′_3 μ′_1 + 6μ′_2 (μ′_1)² − 3(μ′_1)⁴, and so on.
Properties of Moments
1. If X is a random variable, then E[aX + b] = a E[X] + b.
Proof: By definition, E[aX + b] = Σ (ax + b) p(x)
= a Σ x p(x) + b Σ p(x)
= a E[X] + b, since Σ p(x) = 1.
Therefore, E[aX + b] = a E[X] + b.
Remark: E[X ± Y] = E[X] ± E[Y].

2. If X is a random variable, then Var(aX + b) = a² Var(X).

Proof: Let Y = aX + b; then E(Y) = a E(X) + b.
Therefore, Y − E(Y) = (aX + b) − [a E(X) + b] = a[X − E(X)],
(Y − E(Y))² = a² (X − E(X))²,
E[(Y − E(Y))²] = a² E[(X − E(X))²],
i.e. Var(Y) = a² Var(X).
Hence, Var(aX + b) = a² Var(X) (the constant b contributes nothing, as Var(b) = 0).
Properties of Moments (Continued)
3. If X and Y are independent random variables, then
Var[aX ± bY] = a² Var[X] + b² Var[Y].
4. If X and Y are independent random variables, then
E[XY] = E[X] E[Y].
5. If X and Y are random variables such that Y ≤ X, then E[Y] ≤ E[X].
Proof: Given Y ≤ X, so X − Y ≥ 0,
hence E[X − Y] ≥ 0
⇒ E[X] − E[Y] ≥ 0
⇒ E[X] ≥ E[Y].
Example 1: Find the first four moments about the origin for a random variable X having the density
function f(x) = 4x(9 − x²)/81, 0 ≤ x ≤ 3.

Solution: By the definition of moments,
μ′_r = ∫_0^3 x^r f(x) dx = (4/81) ∫_0^3 (9x^{r+1} − x^{r+3}) dx = 8 · 3^r / [(r + 2)(r + 4)],
which gives
μ′_1 = 8/5, μ′_2 = 3, μ′_3 = 216/35, μ′_4 = 27/2.
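The four moments can be cross-checked numerically against the closed form 8·3^r/((r + 2)(r + 4)); a sketch (not from the slides):

```python
def moment(r, n=100000):
    # midpoint rule for mu'_r = \int_0^3 x^r * 4x(9 - x^2)/81 dx
    h = 3.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += x ** r * 4 * x * (9 - x * x) / 81
    return total * h

m1, m2, m3, m4 = (moment(r) for r in (1, 2, 3, 4))   # 8/5, 3, 216/35, 27/2
```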
Example 2: If a random variable X has the probability density function given by
f(x) = (x + 1)/2 for −1 < x < 1; 0 otherwise,
find the mean and variance.

Solution:
Mean = E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{−1}^{1} x (x + 1)/2 dx = 1/3.

E(X²) = ∫_{−1}^{1} x² (x + 1)/2 dx = 1/3.

Var(X) = E(X²) − (E(X))² = 1/3 − 1/9 = 2/9.
Example 3: The monthly demand for Fossil watches is known to have the following probability distribution:

Demand (x)       : 1    2    3    4    5    6    7    8
Probability p(x) : 0.08 0.12 0.19 0.24 0.16 0.10 0.07 0.04

Find the expected demand for watches and the variance.

Solution:
Mean = E(X) = Σ_{x=1}^{8} x p(x)
= 1(0.08) + 2(0.12) + 3(0.19) + … + 8(0.04) = 4.06.

E(X²) = Σ_{x=1}^{8} x² p(x)
= 1²(0.08) + 2²(0.12) + 3²(0.19) + … + 8²(0.04) = 19.7.

Var(X) = E(X²) − (E(X))²
= 19.7 − (4.06)² ≈ 3.22.
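The sums in Example 3 are easy to mistype by hand; a short check (not from the slides):

```python
pmf = {1: 0.08, 2: 0.12, 3: 0.19, 4: 0.24, 5: 0.16, 6: 0.10, 7: 0.07, 8: 0.04}

mean = sum(x * p for x, p in pmf.items())        # E[X]   = 4.06
ex2 = sum(x * x * p for x, p in pmf.items())     # E[X^2] = 19.7
var = ex2 - mean ** 2                            # about 3.2164
```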
Example 4: X and Y are independent random variables with means 2, 3 and variances 1, 2 respectively.
Find the mean and variance of the random variable Z = 2X − 5Y.

Solution: Given E(X) = 2, E(Y) = 3, Var(X) = 1, Var(Y) = 2.

Now, Z = 2X − 5Y,

E(Z) = E(2X − 5Y) = 2E(X) − 5E(Y)
= 2(2) − 5(3) = −11.

If X and Y are independent random variables, then
Var(aX ± bY) = a² Var(X) + b² Var(Y),
so
Var(2X − 5Y) = 2² Var(X) + 5² Var(Y)
= 4 + 50 = 54.
Example 5: The cumulative distribution function (CDF) of a random variable is F(x) = 1 − (1 + x)e^{−x},
x > 0. Find the probability density function of X, and its mean and variance.

Solution: Given F(x) = 1 − (1 + x)e^{−x}, x > 0.

(i) PDF: f(x) = dF(x)/dx = d/dx [1 − (1 + x)e^{−x}] = x e^{−x}.

(ii) Mean = E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_0^∞ x · x e^{−x} dx = 2.

(iii) Variance = E(X²) − (E(X))²
= ∫_0^∞ x² · x e^{−x} dx − 4 = 6 − 4 = 2.
Thank you

Probability and Random Processes
(15B11MA301)
Lecture-12

Department of Mathematics
Jaypee Institute of Information Technology, Noida
Contents of the Lecture:
 Moment generating Function (MGF)
 Properties of MGF
 Solved Examples
 References

Moment Generating Function
• The moment generating function of a random variable X, denoted by M_X(t), is
defined as
M_X(t) = E[e^{tX}], where t is a real variable.

• If X is a discrete random variable with PMF p(x), then

M_X(t) = E[e^{tX}] = Σ_x e^{tx} p(x).

• If X is a continuous random variable with PDF f(x), then

M_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx.
Properties of Moment Generating Function
The coefficient of t^r/r! in M_X(t) is μ′_r, r = 1, 2, 3, …, and μ′_r = E[X^r] gives the moments about the origin.
Proof: We know that M_X(t) = E[e^{tX}]
= E[1 + tX/1! + (tX)²/2! + …]
= 1 + (t/1!) E[X] + (t²/2!) E[X²] + … + (t^r/r!) E[X^r] + …
= 1 + (t/1!) μ′_1 + (t²/2!) μ′_2 + … + (t^r/r!) μ′_r + …
Hence, M_X(t) = Σ_{r=0}^{∞} (t^r/r!) μ′_r, ……(1)
which gives the MGF in terms of the moments.

The moments μ′_r can also be obtained as follows:
differentiating equation (1) with respect to t, r times, and putting t = 0 provides the moments
μ′_r = [d^r M_X(t)/dt^r]_{t=0}, r = 1, 2, 3, … ……(2)
M_{aX}(t) = M_X(at), a being a constant.
Proof: By definition, M_{aX}(t) = E[e^{taX}] = E[e^{(at)X}] = M_X(at).

If Y = aX + b, then M_Y(t) = e^{bt} M_X(at).

Proof: We know that
M_Y(t) = E[e^{tY}]
= E[e^{t(aX+b)}]
= e^{bt} E[e^{(ta)X}] = e^{bt} M_X(at).
The moment generating function of the sum of n independent random variables is equal to the
product of their respective moment generating functions, i.e.

M_{X1+X2+⋯+Xn}(t) = M_{X1}(t) · M_{X2}(t) ⋯ M_{Xn}(t).

Proof: Using the definition of the MGF, we have

M_{X1+X2+⋯+Xn}(t) = E[e^{t(X1+X2+⋯+Xn)}]

= E[e^{tX1}] E[e^{tX2}] E[e^{tX3}] ⋯ E[e^{tXn}] (since the variables are independent).

Therefore,

M_{X1+X2+⋯+Xn}(t) = M_{X1}(t) · M_{X2}(t) ⋯ M_{Xn}(t).
The MGF of a distribution (RV) is unique, if it exists:
two RVs X1 and X2 with pdfs f1 and f2 are identically distributed iff their MGFs are the same.

Effect of Change of Origin and Scale on the MGF

Let the random variable X be transformed to a new variable U by changing both the origin and
the scale in X as U = (X − a)/h, where a and h are constants.

Then the MGF of U (about the origin) is given by

M_U(t) = E[e^{tU}]
= E[e^{t(X−a)/h}]
= e^{−at/h} E[e^{(t/h)X}]
= e^{−at/h} M_X(t/h).
Example 1: If a random variable X has the MGF M_X(t) = 3/(3 − t), obtain the standard deviation of X.

Solution: M_X(t) = 3/(3 − t) = (1 − t/3)^{−1} = 1 + t/3 + t²/9 + ……

E(X) = coefficient of t/1! = 1/3,

E(X²) = coefficient of t²/2! = 2/9,

Var(X) = E(X²) − (E(X))²
= 2/9 − 1/9 = 1/9.

Standard deviation = σ_X = 1/3.
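Property (2) above, μ′_r = M^{(r)}(0), can be checked on this MGF with finite differences; an illustrative sketch (not from the slides):

```python
# central finite differences on M(t) = 3/(3 - t) recover the first two moments
M = lambda t: 3.0 / (3.0 - t)

h = 1e-4
m1 = (M(h) - M(-h)) / (2 * h)                # M'(0)  ~ E[X]   = 1/3
m2 = (M(h) - 2 * M(0.0) + M(-h)) / h ** 2    # M''(0) ~ E[X^2] = 2/9
var = m2 - m1 ** 2                           # 1/9
sd = var ** 0.5                              # 1/3
```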
Example 2: Find the MGF of the random variable X whose probability function is
P(X = x) = 1/2^x, x = 1, 2, 3, …; hence, find its mean.

Solution: M_X(t) = E[e^{tX}] = Σ_{x=1}^{∞} e^{tx} P(X = x)

= Σ_{x=1}^{∞} e^{tx}/2^x = Σ_{x=1}^{∞} (e^t/2)^x.

Summing the geometric series (for e^t < 2), we get

M_X(t) = (e^t/2) / (1 − e^t/2) = e^t/(2 − e^t).

Mean = μ′_1 = [d M_X(t)/dt]_{t=0}
= [d/dt (e^t/(2 − e^t))]_{t=0} = [2e^t/(2 − e^t)²]_{t=0}, so Mean = 2.
Example 3: A random variable X has the PDF given by f(x) = 2e^{−2x}, x ≥ 0, and 0 if x < 0.

Find (i) the MGF and

(ii) the first four moments of X about the origin.

Solution: (i) M_X(t) = E[e^{tX}] = ∫_{−∞}^{∞} e^{tx} f(x) dx

= ∫_0^∞ 2 e^{tx} e^{−2x} dx = [e^{−(2−t)x}·2/(−(2 − t))]_0^∞.

Therefore, M_X(t) = 2/(2 − t), for t < 2.

(ii) We know that M_X(t) = Σ_{r=0}^{∞} (t^r/r!) μ′_r = 2/(2 − t) = (1 − t/2)^{−1}, so

1 + (t/1!) μ′_1 + (t²/2!) μ′_2 + … + (t^r/r!) μ′_r + … = (1 − t/2)^{−1}
= 1 + t/2 + t²/2² + t³/2³ + t⁴/2⁴ + …
= 1 + (1/2)(t/1!) + (2!/4)(t²/2!) + (3!/8)(t³/3!) + (4!/16)(t⁴/4!) + ….

On equating the coefficients of t/1!, t²/2!, and so on, we have
μ′_1 = 1/2, μ′_2 = 1/2, μ′_3 = 3/4, μ′_4 = 3/2.
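For this density the pattern is μ′_r = r!/2^r, which matches the four values above; a numerical check (not from the slides):

```python
import math

def moment(r, n=100000, upper=20.0):
    # midpoint rule for mu'_r = \int_0^inf x^r * 2 e^{-2x} dx (tail beyond 20 is negligible)
    h = upper / n
    return sum(((i + 0.5) * h) ** r * 2 * math.exp(-2 * (i + 0.5) * h) for i in range(n)) * h

moments = [moment(r) for r in (1, 2, 3, 4)]
expected = [math.factorial(r) / 2 ** r for r in (1, 2, 3, 4)]   # 1/2, 1/2, 3/4, 3/2
```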
References
1. A. M. Mood, F. A. Graybill and D. C. Boes, Introduction to the Theory of Statistics, 3rd Indian Ed., McGraw-Hill, 1973.
2. R. V. Hogg and A. T. Craig, Introduction to Mathematical Statistics, Macmillan, 1995.
3. V. K. Rohatgi, An Introduction to Probability Theory and Mathematical Statistics, Wiley Eastern, 1984.
4. S. M. Ross, A First Course in Probability, 6th Ed., Pearson Education Asia, 2002.
5. S. Palaniammal, Probability and Random Processes, PHI Learning Private Limited, 2012.
6. T. Veerarajan, Probability, Statistics and Random Processes, 3rd Ed., Tata McGraw-Hill, 2008.
7. R. E. Walpole, R. H. Myers, S. L. Myers, and K. Ye, Probability & Statistics for Engineers & Scientists, 9th Ed., Pearson Education Limited, 2016.
8. I. Miller and M. Miller, John E. Freund's Mathematical Statistics with Applications, 8th Ed., Pearson Education Limited, 2014.
Probability and Random Processes
(15B11MA301)

Lecture-13

Department of Mathematics
Jaypee Institute of Information Technology,
Noida
Contents of the Lecture:
 Characteristic Function
 Properties
 Solved Examples

Limitation of the MGF
The MGF may not exist for some distributions, as the integral
∫_{−∞}^{∞} e^{tx} f(x) dx or the series Σ_x e^{tx} p(x) may fail to
converge absolutely for real values of t.

e.g. for the continuous distribution given by f(x) = c/(1 + x²)^m, m ≥ 1, the MGF
does not exist, since the integral ∫_{−∞}^{∞} e^{tx} c/(1 + x²)^m dx does not converge
absolutely for finite positive values of m.

Also, for the discrete distribution
p(x) = 6/(π² x²), x = 1, 2, …; 0, else,
the series Σ_x (6/π²) e^{tx}/x² is not convergent for t > 0 (D'Alembert ratio test),
so the MGF does not exist.
• So, a more serviceable function than the MGF is needed.
It is known as the characteristic function.
Characteristic Function

The characteristic function of a random variable X is defined by
φ_X(ω) = E[e^{iωX}]
= Σ_x e^{iωx} p(x), if X is discrete,
= ∫_{−∞}^{∞} e^{iωx} f(x) dx, if X is continuous.

Since |e^{iωx}| = |cos ωx + i sin ωx| = (cos² ωx + sin² ωx)^{1/2} = 1, we have

|φ_X(ω)| = |E[e^{iωX}]| ≤ ∫_{−∞}^{∞} |e^{iωx}| f(x) dx = ∫_{−∞}^{∞} f(x) dx = 1.

Hence the characteristic function always exists, even
when the moment-generating function may not exist.
Properties of Characteristic Function
1. μ′_n = E[X^n] is the coefficient of (iω)^n/n! in the
expansion of φ(ω) in a series of ascending powers of iω:

φ_X(ω) = E[e^{iωX}] = Σ_x e^{iωx} f(x)
= Σ_x [1 + iωx + (iωx)²/2! + (iωx)³/3! + …] f(x)
= Σ_x f(x) + iω Σ_x x f(x) + ((iω)²/2!) Σ_x x² f(x) + ….
2. μ′_n = (1/i^n) [d^n φ(ω)/dω^n]_{ω=0}.

3. If the characteristic function of an RV X
is φ_x(ω) and Y = aX + b, then φ_y(ω) = e^{ibω} φ_x(aω).

4. If X and Y are independent RVs, then

φ_{x+y}(ω) = φ_x(ω) · φ_y(ω).

5. If the characteristic function of a continuous RV X with
density function f(x) is φ(ω), then (inversion formula)

f(x) = (1/2π) ∫_{−∞}^{∞} φ(ω) e^{−iωx} dω.
Ex. Find the characteristic function of a random variable X with
parameter λ when its pdf is given by f(x) = (λ/2) e^{−λ|x|}, −∞ < x < ∞.

φ_x(ω) = ∫_{−∞}^{∞} e^{iωx} f(x) dx = (λ/2) ∫_{−∞}^{∞} e^{iωx} e^{−λ|x|} dx

= (λ/2) [∫_{−∞}^{0} e^{(λ + iω)x} dx + ∫_0^{∞} e^{−(λ − iω)x} dx]

= (λ/2) [1/(λ + iω) + 1/(λ − iω)] = λ²/(λ² + ω²).
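The closed form λ²/(λ² + ω²) can be confirmed by numerically integrating e^{iωx} against the density; a sketch (λ = 1.5 and ω = 0.7 are arbitrary sample values, not from the slides):

```python
import cmath, math

lam = 1.5          # sample value of the parameter (illustrative)
omega = 0.7        # sample frequency (illustrative)

def cf_numeric(w, n=100000, upper=40.0):
    # midpoint rule for \int e^{iwx} (lam/2) e^{-lam|x|} dx over [-upper, upper];
    # the tail beyond |x| = 40 is negligible for lam = 1.5
    h = 2 * upper / n
    total = 0.0 + 0.0j
    for k in range(n):
        x = -upper + (k + 0.5) * h
        total += cmath.exp(1j * w * x) * (lam / 2) * math.exp(-lam * abs(x))
    return total * h

phi = cf_numeric(omega)
closed = lam ** 2 / (lam ** 2 + omega ** 2)   # lam^2/(lam^2 + w^2)
```

The imaginary part vanishes because the density is symmetric about zero.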
Ex. The characteristic function of a random variable X is
given by
φ_x(ω) = 1 − |ω| for |ω| ≤ 1; 0 for |ω| > 1.
Find the pdf of X.

Sol:
By the inversion formula, the pdf of X is

f(x) = (1/2π) ∫_{−∞}^{∞} φ(ω) e^{−iωx} dω

= (1/2π) ∫_{−1}^{1} (1 − |ω|) e^{−iωx} dω

= (1/2π) [∫_{−1}^{0} (1 + ω) e^{−iωx} dω + ∫_{0}^{1} (1 − ω) e^{−iωx} dω]

= (1/(2πx²)) (2 − e^{ix} − e^{−ix}) = (1 − cos x)/(π x²).
Joint Characteristic Function

If (X, Y) is a two-dimensional RV, then E[e^{i(ω1 X + ω2 Y)}] is called
the joint characteristic function of (X, Y) and is denoted by φ_xy(ω1, ω2):

φ_xy(ω1, ω2) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} e^{i(ω1 x + ω2 y)} f(x, y) dx dy

= Σ_i Σ_j e^{i(ω1 xi + ω2 yj)} p(xi, yj).
Properties
(i) φ_xy(0, 0) = 1.

(ii) E[X^m Y^n] = (1/i^{m+n}) [∂^{m+n} φ_xy(ω1, ω2)/∂ω1^m ∂ω2^n]_{ω1=0, ω2=0}.

(iii) φ_x(ω) = φ_xy(ω, 0) and φ_y(ω) = φ_xy(0, ω).

(iv) If X and Y are independent, then
φ_xy(ω1, ω2) = φ_x(ω1) · φ_y(ω2),
and conversely.
Example
Two RVs X and Y have the joint characteristic
function φ_xy(ω1, ω2) = e^{−2ω1² − 8ω2²}. Show that
X and Y are both zero-mean RVs and also that
they are uncorrelated.

E(X) = (1/i) [∂/∂ω1 e^{−2ω1² − 8ω2²}]_{ω1=0, ω2=0}
= (1/i) [−4ω1 e^{−2ω1² − 8ω2²}]_{ω1=0, ω2=0} = 0,

E(Y) = (1/i) [−16ω2 e^{−2ω1² − 8ω2²}]_{ω1=0, ω2=0} = 0.

E(XY) = (1/i²) [∂²/∂ω1 ∂ω2 e^{−2ω1² − 8ω2²}]_{ω1=0, ω2=0}
= −[∂/∂ω1 (−16ω2 e^{−2ω1² − 8ω2²})]_{ω1=0, ω2=0}
= −[64 ω1 ω2 e^{−2ω1² − 8ω2²}]_{ω1=0, ω2=0} = 0.

Hence C_xy = E(XY) − E(X) · E(Y) = 0, so X and Y are uncorrelated.
Thank you
