
Probability and Statistics

MA20205

Bibhas Adhikari

Autumn 2022-23, IIT Kharagpur

Lecture 6
August 30, 2022

Key parameters for analyzing random variables
Observation

(d/dt) M(t) = (d/dt) E(e^{tX}) = E((d/dt) e^{tX}) = E(X e^{tX})    (1)

(d^2/dt^2) M(t) = E(X^2 e^{tX})    (2)

Similarly,

(d^n/dt^n) M(t) = E(X^n e^{tX})    (3)

Setting t = 0,

(d^n/dt^n) M(t) |_{t=0} = E(X^n e^{tX}) |_{t=0} = E(X^n)

Theorem If M(t) = a_0 + a_1 t + a_2 t^2 + ... + a_n t^n + ... is the Taylor
expansion of M(t), then E(X^n) = n! a_n for all n.
Proof is obvious: by Taylor's theorem a_n = M^{(n)}(0)/n!, and M^{(n)}(0) = E(X^n)
by the observation above.
Problems...
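As a quick check of the observation and the theorem (not part of the lecture), the following sketch uses the Exponential(lam) distribution, whose MGF M(t) = lam/(lam - t) and moments E(X^n) = n!/lam^n are standard; sympy is assumed to be available.

import sympy as sp

t, lam = sp.symbols('t lam', positive=True)
M = lam / (lam - t)   # MGF of Exponential(lam); valid for t < lam

for n in range(1, 5):
    moment_from_mgf = sp.diff(M, t, n).subs(t, 0)        # d^n/dt^n M(t) at t = 0
    known_moment = sp.factorial(n) / lam**n              # E(X^n) = n!/lam^n for Exponential(lam)
    taylor_coeff = sp.series(M, t, 0, n + 1).removeO().coeff(t, n)   # a_n in the Taylor expansion
    assert sp.simplify(moment_from_mgf - known_moment) == 0
    assert sp.simplify(sp.factorial(n) * taylor_coeff - known_moment) == 0

print("MGF derivatives and Taylor coefficients recover the moments")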
Key parameters for analyzing random variables
Theorem. Let M_X(t) denote the mgf of a random variable X. Suppose
a, b ∈ R, with b ≠ 0 for part (c). Then
(a) M_{X+a}(t) = e^{at} M_X(t)

    M_{X+a}(t) = E(e^{t(X+a)}) = E(e^{tX} e^{ta}) = e^{ta} E(e^{tX}) = e^{ta} M_X(t)

(b) M_{bX}(t) = M_X(bt)

    M_{bX}(t) = E(e^{t(bX)}) = E(e^{(tb)X}) = M_X(tb)

(c) M_{(X+a)/b}(t) = e^{(a/b)t} M_X(t/b)

Proof of (c) follows from applying (a) and (b):
M_{(X+a)/b}(t) = M_{(X/b)+(a/b)}(t) = e^{(a/b)t} M_{X/b}(t) = e^{(a/b)t} M_X(t/b).
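The three rules are easy to check numerically. Here is a minimal Monte Carlo sketch (my own example, using standard normal samples, whose MGF exists for all t); numpy is assumed.

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)       # samples of X ~ N(0, 1)
a, b, t = 0.7, 1.5, 0.3                  # arbitrary illustrative values

def mgf(sample, t):
    # Monte Carlo estimate of E(e^{t X})
    return np.mean(np.exp(t * sample))

print(mgf(x + a, t), np.exp(a * t) * mgf(x, t))                 # (a) M_{X+a}(t) = e^{at} M_X(t)
print(mgf(b * x, t), mgf(x, b * t))                             # (b) M_{bX}(t) = M_X(bt)
print(mgf((x + a) / b, t), np.exp(a * t / b) * mgf(x, t / b))   # (c) M_{(X+a)/b}(t) = e^{(a/b)t} M_X(t/b)

Each pair of printed numbers should agree up to Monte Carlo error.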
Key parameters for analyzing random variables

Skewness: measures the asymmetry of the distribution

γ = E( ((X − µ)/σ)^3 )
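A hedged numerical illustration (my own example, not from the slides): the Exponential(1) distribution is right-skewed with theoretical skewness γ = 2, which the sample version of the formula recovers.

import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.0, size=1_000_000)   # Exponential(1), theoretical skewness = 2
z = (x - x.mean()) / x.std()                     # standardize: (X - mu) / sigma
print(np.mean(z**3))                             # sample skewness, close to 2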
Key parameters for analyzing random variables
Kurtosis: measures how heavy-tailed the distribution is

κ = E( ((X − µ)/σ)^4 )

Positive and negative (excess) kurtosis are defined relative to the kurtosis of a
Gaussian random variable, which equals 3.
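Another small sketch (assumed example): comparing the fourth standardized moment of a Gaussian sample (κ ≈ 3) with a heavier-tailed Laplace sample (κ ≈ 6) shows how positive excess kurtosis signals heavy tails.

import numpy as np

rng = np.random.default_rng(2)
samples = {"gaussian": rng.standard_normal(1_000_000),   # kurtosis 3 (excess 0)
           "laplace": rng.laplace(size=1_000_000)}       # kurtosis 6 (excess 3), heavier tails
for name, x in samples.items():
    z = (x - x.mean()) / x.std()
    print(name, np.mean(z**4))   # fourth standardized moment kappa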
Special discrete distributions

Bernoulli random variable Let X be a random variable with range space
R_X = {0, 1}. Then X is said to have a Bernoulli distribution if the pmf of X is

f(0) = 1 − p  and  f(1) = p

for some 0 < p < 1.

Then we denote X ∼ Bernoulli(p).
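A minimal sampling sketch (my example): empirical frequencies of Bernoulli(p) draws approach f(1) = p and f(0) = 1 − p.

import numpy as np

p = 0.3                                      # arbitrary choice for illustration
rng = np.random.default_rng(3)
x = rng.binomial(n=1, p=p, size=1_000_000)   # Bernoulli(p) drawn as Binomial(1, p)
print((x == 1).mean(), (x == 0).mean())      # ≈ p and ≈ 1 - p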
Special discrete distributions

Observation Bernoulli(p) models phenomena such as a coin toss, a true/false or
a yes/no outcome.
Properties of Ber(p):
(a) E(X) = p
(b) E(X^2) = p
(c) Var(X) = p(1 − p)
(d) M(t) = (1 − p) + p e^t
Application In social network modelling, the existence of a link can be
modelled as a Bernoulli random variable.
Question For what value of p does Var(X) attain its maximum value?
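The properties, and the variance question, can be checked symbolically; the sketch below (my own, sympy-based) derives Var(X) = p(1 − p) from the two-point pmf and locates its critical point.

import sympy as sp

p, t = sp.symbols('p t', positive=True)
EX = 0 * (1 - p) + 1 * p            # E(X) = p
EX2 = 0**2 * (1 - p) + 1**2 * p     # E(X^2) = p
var = EX2 - EX**2                   # Var(X) = p - p^2 = p(1 - p)
M = (1 - p) + p * sp.exp(t)         # M(t) = E(e^{tX}) from the two-point pmf

print(sp.solve(sp.diff(var, p), p))   # [1/2]: Var(X) is maximized at p = 1/2
print(sp.diff(M, t).subs(t, 0))       # p, recovering E(X) from the MGF
print(sp.diff(M, t, 2).subs(t, 0))    # p, recovering E(X^2)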
Special discrete distributions

Binomial random variable A random variable X is said to have a binomial
distribution if the pmf of X is

f(k) = (n choose k) p^k (1 − p)^{n−k},   k = 0, 1, 2, ..., n

where 0 < p < 1 and n is the number of trials.

Then we write X ∼ Binomial(n, p)
The pmf of Binomial(n, p) looks like: [figure in the slides, not reproduced here]
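A short sketch (my own example, with n and p chosen arbitrarily) evaluates the pmf formula directly and confirms that it sums to 1.

from math import comb

n, p = 10, 0.4                      # illustrative choices, not from the slides
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
for k, f in enumerate(pmf):
    print(k, round(f, 4))           # f(k) for k = 0, 1, ..., n
print(sum(pmf))                     # ≈ 1.0, as a pmf must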
Special discrete distributions
Observation Binomial(n, p) models X = the number of heads in n independent
coin tosses, where the probability of a head is p and of a tail is 1 − p.

Properties of Bin(n, p)
(a) E(X) = np
(b) E(X^2) = np(np + (1 − p))
(c) Var(X) = np(1 − p)
(d) M(t) = [(1 − p) + p e^t]^n
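A direct check (assumed example) of the moment and MGF formulas against the pmf, for arbitrary n, p, t:

from math import comb, exp

n, p, t = 12, 0.3, 0.25             # arbitrary illustrative values
pmf = [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]
mean = sum(k * f for k, f in enumerate(pmf))
second = sum(k**2 * f for k, f in enumerate(pmf))
mgf = sum(exp(t * k) * f for k, f in enumerate(pmf))

print(mean, n * p)                              # (a) E(X) = np
print(second, n * p * (n * p + (1 - p)))        # (b) E(X^2) = np(np + (1 - p))
print(second - mean**2, n * p * (1 - p))        # (c) Var(X) = np(1 - p)
print(mgf, ((1 - p) + p * exp(t))**n)           # (d) M(t) = [(1 - p) + p e^t]^n

Each pair of printed values should agree to floating-point precision.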
