WINSEM2024-25 MAT1011 ETH AP2024254000664 2025-01-21 Reference-Material-I

The document discusses the concepts of random variables and mathematical expectation, defining random variables as functions that assign real numbers to outcomes of random experiments. It distinguishes between discrete and continuous random variables, explains probability mass functions and probability density functions, and provides examples of calculating probabilities and distributions. Additionally, it introduces basic calculus concepts necessary for understanding problems involving continuous random variables.

2. RANDOM VARIABLE AND MATHEMATICAL EXPECTATION

2.0 Introduction:
It is a common notion that if an experiment is conducted under identical conditions, the values obtained would be the same. Observations are taken on a factor or characteristic under study which can assume different values; this factor or characteristic is termed a variable.

These observations vary even though the experiment is conducted under identical conditions. Hence we have a set of outcomes (sample points) of a random experiment. A rule that assigns a real number to each outcome (sample point) is called a random variable.
From the above discussion it is clear that there is a value for each outcome, which the variable takes with a certain probability. Hence a list of values of a random variable together with their corresponding probabilities of occurrence is termed a probability distribution.
By convention, the term probability distribution is used to denote either the probability mass function of a discrete variable or the probability density function of a continuous variable.

The formal definition of a random variable and certain operations on random variables are given in this chapter before the details of probability distributions.

2.1 Random variable:


A variable whose value is a number determined by the outcome of a random experiment is called a random variable.
We can also say that a random variable is a function defined over the sample space of an experiment that generally assumes different values, each with a definite probability. A random variable is usually denoted by a capital letter such as X, Y, Z, ..., whereas the values it takes are denoted by the corresponding small letters x, y, z, ...

Suppose that two coins are tossed so that the sample space is S = {HH, HT, TH, TT}

Suppose X represents the number of heads which can come up. With each sample point we can associate a value of X as shown in the table below:

Sample point HH HT TH TT

X 2 1 1 0

Thus the random variable X takes the values 0, 1, 2 for this random experiment.

The above example takes only a finite number of values, and with each value of the random variable we can associate a probability as shown in the table.

Usually, for each value xi of the random variable, the corresponding probability is denoted by p(xi) or simply pi.

X        x1 = 0    x2 = 1    x3 = 2
p(xi)    1/4       2/4       1/4

Observe that the sum of the probabilities of all values of the random variable is equal to one,
i.e. p(x1) + p(x2) + p(x3) = 1/4 + 2/4 + 1/4 = 1.

Thus the probability distribution of a random variable provides a probability for each possible value, and these probabilities must sum to 1.

Similarly, if 3 coins are tossed and X denotes the number of heads, then X takes the values 0, 1, 2, 3 and the sum of the respective probabilities is ∑p(xi) = 1.

If two dice are rolled, the sample space S consists of 36 sample points. Let X denote the sum of the numbers on the two dice. Then X is a function defined on S by the rule X(i, j) = i + j. Thus X is a random variable which can take the values 2, 3, 4, ..., 12; that is, the range of X is {2, 3, 4, ..., 12}.
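As an illustration (not part of the original text), this distribution can be tabulated by enumerating all 36 equally likely sample points; the short sketch below assumes only the Python standard library.

# A minimal sketch: enumerate the 36 sample points for two dice and
# tabulate the distribution of X = i + j with exact fractions.
from fractions import Fraction
from collections import Counter

counts = Counter(i + j for i in range(1, 7) for j in range(1, 7))
distribution = {x: Fraction(c, 36) for x, c in sorted(counts.items())}

for x, p in distribution.items():
    print(x, p)                        # e.g. 2 -> 1/36, 7 -> 1/6, 12 -> 1/36
print(sum(distribution.values()))      # 1, as the probabilities of X must sum to one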

2.1.1 Discrete random variable:

If a random variable takes only a finite or a countable number of values, it is called a discrete random variable.

For example, when 3 coins are tossed, the number of heads obtained is a random variable X which assumes the values 0, 1, 2, 3, a countable set. Such a variable is a discrete random variable.

2.1.2 Continuous random variable:

A random variable X which can take any value within a certain interval is called a continuous random variable.

Note that the probability of X taking any single value x is zero, i.e. P(X = x) = 0. Thus a continuous random variable takes values only between two given limits.

For example, the height of students in a particular class lies between 4 feet and 6 feet. We write this as X = {x | 4 ≤ x ≤ 6}.

If the maximum life of electric bulbs is 2000 hours, the continuous random variable is X = {x | 0 ≤ x ≤ 2000}.

2.2 Probability mass function
Let X be a discrete random variable which assumes the values x1, x2, ..., xn. With each of these values we associate a number pi = P(X = xi), i = 1, 2, ..., n, called the probability of xi, satisfying the following conditions:

(i) pi ≥ 0 for all i, i.e. the pi's are all non-negative

(ii) ∑pi = p1 + p2 + ... + pn = 1, i.e. the total probability is one.

This function pi, or p(xi), is called the probability mass function of the discrete random variable X.

The set of all possible ordered pairs (x, p(x)) is called the probability distribution of the random variable X.

Note:
The concept of a probability distribution is similar to that of a frequency distribution. Just as a frequency distribution tells us how the total frequency is distributed among the different values (or classes) of the variable, a probability distribution tells us how the total probability of 1 is distributed among the various values which the random variable can take. It is usually represented in the tabular form given below:

X           x1       x2       x3       ....    xn
P(X = x)    P(x1)    P(x2)    P(x3)    ....    P(xn)
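As a quick numerical aid (an addition, not from the original text), the two defining conditions of a probability mass function can be checked mechanically; the helper name below is illustrative.

# A small sketch using only the Python standard library: check that the
# tabulated probabilities are non-negative and sum to one.
from fractions import Fraction

def is_pmf(probs):
    """Return True if all probabilities are non-negative and they sum to 1."""
    return all(p >= 0 for p in probs) and sum(probs) == 1

# the two-coin example above: P(X = 0), P(X = 1), P(X = 2)
p = [Fraction(1, 4), Fraction(2, 4), Fraction(1, 4)]
print(is_pmf(p))   # True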

2.2.1 Discrete probability distribution:

If a random variable is discrete, its distribution is also discrete. For a discrete random variable X, the distribution function (or cumulative distribution function) is given by F(x) = P(X ≤ x), −∞ < x < ∞.
Thus for a discrete distribution function there are a countable number of points x1, x2, ... with probabilities pi such that F(x) = ∑ p(xi), the sum being taken over all xi ≤ x.

Note:

For a discrete distribution function, F(xj) − F(xj−1) = p(xj)

2.2.2 Probability density function (p.d.f.):

A function f is said to be the probability density function of a continuous random variable X if it satisfies the following properties:

(i) f(x) ≥ 0, −∞ < x < ∞

(ii) ∫_{−∞}^{∞} f(x) dx = 1

Remark:

In the case of a discrete random variable, the probability at a point, i.e. P(X = a), need not be zero for a fixed 'a'. However, in the case of a continuous random variable the probability at a point is always zero,

i.e. P(X = a) = ∫_a^a f(x) dx = 0

Hence P(a ≤ X ≤ b) = P(a < X < b) = P(a ≤ X < b) = P(a < X ≤ b)

The probability that X lies in the interval (a, b) is given by P(a < X < b) = ∫_a^b f(x) dx

Distribution function for a continuous random variable:

If X is a continuous random variable with p.d.f. f(x), then the distribution function is given by

(i) F(x) = ∫_{−∞}^{x} f(x) dx = P(X ≤ x), −∞ < x < ∞

(ii) F(b) − F(a) = ∫_a^b f(x) dx = P(a ≤ X ≤ b)

2.3 Properties of distribution function:

Let X be a discrete or continuous random variable. Then

(i) F(x) is a non-decreasing function of x

(ii) 0 ≤ F(x) ≤ 1, −∞ < x < ∞

(iii) F(−∞) = lim_{x→−∞} F(x) = 0

(iv) F(∞) = lim_{x→∞} F(x) = 1

(v) If F(x) is the cumulative distribution function of a continuous random variable X with p.d.f. f(x), then F′(x) = f(x)

Example 1:

A random variable has the following probability distribution:

Values of X    0    1    2    3    4    5     6     7     8
P(x)           a    3a   5a   7a   9a   11a   13a   15a   17a

(1) Determine the value of a.

(2) Find (i) P(x < 3) (ii) P(x ≤ 3) (iii) P(x > 7) (iv) P(2 ≤ x ≤ 5) (v) P(2 < x < 5).

(3) Find the cumulative distribution function of x.

Solution:

(1) Since pi is the probability mass function of the discrete random variable X, we have ∑pi = 1

∴ a + 3a + 5a + 7a + 9a + 11a + 13a + 15a + 17a = 1

81a = 1, i.e. a = 1/81


(2)

(i) P(x < 3) = P(x = 0) + P(x = 1) + P(x = 2) = a + 3a + 5a = 9a = 9 × (1/81) = 1/9

(ii) P(x ≤ 3) = P(x = 0) + P(x = 1) + P(x = 2) + P(x = 3) = a + 3a + 5a + 7a = 16a = 16/81

(iii) P(x > 7) = P(x = 8) = 17a = 17/81

(iv) P(2 ≤ x ≤ 5) = P(x = 2) + P(x = 3) + P(x = 4) + P(x = 5) = 5a + 7a + 9a + 11a = 32a = 32/81

(v) P(2 < x < 5) = P(x = 3) + P(x = 4) = 7a + 9a = 16a = 16/81
(3) The distribution function is as follows:

X = x              0      1      2      3      4      5      6      7      8
F(x) = P(X ≤ x)    a      4a     9a     16a    25a    36a    49a    64a    81a
(or) F(x)          1/81   4/81   9/81   16/81  25/81  36/81  49/81  64/81  81/81 = 1
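A quick computational check of Example 1 (an added illustration, not part of the text) can be done with exact fractions; the running sums reproduce the cumulative distribution.

# A sketch with the Python standard library: a = 1/81, the p.m.f. of X,
# and the c.d.f. F(x) = P(X <= x) built by running sums.
from fractions import Fraction
from itertools import accumulate

a = Fraction(1, 81)
pmf = [a * k for k in (1, 3, 5, 7, 9, 11, 13, 15, 17)]   # P(X = 0), ..., P(X = 8)
cdf = list(accumulate(pmf))

print(sum(pmf))                # 1
print(sum(pmf[:3]))            # P(x < 3)  = 1/9
print(sum(pmf[:4]))            # P(x <= 3) = 16/81
print([str(F) for F in cdf])   # cumulative probabilities (shown in lowest terms)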

Example 2:
Find the probability distribution of the number of sixes in throwing two dice once.
Solution:
When two dice are thrown, the total number of sample points is 36.
Let X denote the number of sixes obtained in throwing two dice once. Then X is a random variable which can take the values 0, 1, 2.
Let A denote the success of getting a six in throwing a die and Ā denote not getting a six.
Then the probability of getting a six is P(A) = 1/6 and the probability of not getting a six is P(Ā) = 5/6.

No sixes:
∴ P(x = 0) = P(Ā and Ā) = P(Ā) · P(Ā) = (5/6)(5/6) = 25/36

P(x = 1) = P[(A and Ā) or (Ā and A)] = P(A)·P(Ā) + P(Ā)·P(A) = (1/6)(5/6) + (5/6)(1/6) = 5/36 + 5/36 = 10/36 = 5/18

P(x = 2) = P(A and A) = P(A)·P(A) = (1/6)(1/6) = 1/36

Hence the probability distribution of X is given by

X = x       0       1       2
P(X = x)    25/36   10/36   1/36

Example 3:

An urn contains 6 red and 4 white balls. Three balls are drawn at random. Obtain the
probability distribution of the number of white balls drawn.

Solution:

The total number of balls in the urn is 10.

Let X denote the number of white balls drawn. If three balls are drawn, the random variable takes the values X = 0, 1, 2, 3.

The probabilities of the possible combinations of white and red balls are

P(no white, 3 red)  = (4C0 · 6C3)/10C3 = (1 × 20)/120 = 5/30
P(1 white, 2 red)   = (4C1 · 6C2)/10C3 = (4 × 15)/120 = 15/30
P(2 white, 1 red)   = (4C2 · 6C1)/10C3 = (6 × 6)/120  = 9/30
P(3 white, no red)  = (4C3 · 6C0)/10C3 = (4 × 1)/120  = 1/30

Hence the probability distribution of X is given by

X = x       0      1       2      3
P(X = x)    5/30   15/30   9/30   1/30
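The same distribution can be reproduced with a few lines of code (an added illustration; the function name is hypothetical), using binomial coefficients from the standard library.

# A sketch of Example 3: probability of drawing x white balls when 3 balls
# are taken from an urn with 4 white and 6 red balls.
from math import comb
from fractions import Fraction

def p_white(x, white=4, red=6, drawn=3):
    return Fraction(comb(white, x) * comb(red, drawn - x),
                    comb(white + red, drawn))

for x in range(4):
    print(x, p_white(x))   # 1/6, 1/2, 3/10, 1/30 (i.e. 5/30, 15/30, 9/30, 1/30)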

2.4 An introduction to elementary calculus:


Before moving on to problems on continuous random variables, we need some fundamental ideas about differentiation and integration, which are part of calculus in higher-level mathematics.

Hence we introduce some simple techniques and formulae for solving those problems in statistics which involve calculus methods.

2.4.1 Differentiation:

1. A functional value is an exact value. For some function f(x), when x = a we obtain the functional value f(a) = k.
2. A limiting value is an approximate value that approaches the exact value k as closely as we please. Suppose the exact value is 4; then the limiting value may be 4.000000001 or 3.999999994. Here the functional value and the limiting value are more or less the same. Hence on many occasions we use limiting values for critical problems.

The limiting value of f(x) when x approaches a number a is written as lim_{x→a} f(x).

3. The special limit lim_{h→0} [f(x + h) − f(x)]/h, when it exists, is called the derivative of the function f with respect to x and is denoted by f′(x). If y is a function of x, we call it the differential coefficient of y with respect to x, denoted dy/dx.

4. Some rules on differentiation:

(i) The derivative of a constant function is zero: (c)′ = 0, where c is a constant.
(ii) If u is a function of x, k is a constant and a dash denotes differentiation, then [ku]′ = k[u]′
(iii) (u ± v)′ = u′ ± v′
(iv) (uv)′ = u′v + uv′
(v) (u/v)′ = (u′v − uv′)/v^2

5. Important formulae:

(i) (x^n)′ = n x^(n−1)
(ii) (e^x)′ = e^x
(iii) (log x)′ = 1/x

Example 4:
Evaluate the following limits:

(i) lim_{x→2} (x^2 + 5x)/(x + 2)   (ii) lim_{x→1} (x^2 − 1)/(x − 1)

Solution:

(i) lim_{x→2} (x^2 + 5x)/(x + 2) = ((2)^2 + 5(2))/(2 + 2) = (4 + 10)/4 = 14/4 = 7/2

(ii) lim_{x→1} (x^2 − 1)/(x − 1) = (1^2 − 1)/(1 − 1) = 0/0.

This is an indeterminate form. Therefore first factorise and simplify, and then apply the limit to get the limiting value:

(x^2 − 1)/(x − 1) = (x − 1)(x + 1)/(x − 1) = x + 1

∴ lim_{x→1} (x^2 − 1)/(x − 1) = lim_{x→1} (x + 1) = 1 + 1 = 2

Example 5:
Find the derivative of each of the following with respect to x:

(i) x^12 + 7   (ii) x^4 + 4x^2 − 5   (iii) (x^3)(e^x)   (iv) (x^2 + 1)/(x − 5)

Solution:
(i) Let y = x^12 + 7
∴ dy/dx = 12x^(12−1) + 0 = 12x^11

(ii) Let y = x^4 + 4x^2 − 5
y′ = 4x^3 + 4(2x) − 0 = 4x^3 + 8x

(iii) Let y = x^3 e^x. Using (uv)′ = u′v + uv′,
y′ = [x^3]′(e^x) + (x^3)[e^x]′ = 3x^2 e^x + x^3 e^x = e^x (3x^2 + x^3)

(iv) y = (x^2 + 1)/(x − 5). This is of the type (u/v)′ = (u′v − uv′)/v^2.
∴ dy/dx = ([x^2 + 1]′(x − 5) − (x^2 + 1)[x − 5]′)/(x − 5)^2
        = ([2x](x − 5) − (x^2 + 1)[1])/(x − 5)^2
        = (2x^2 − 10x − x^2 − 1)/(x − 5)^2
        = (x^2 − 10x − 1)/(x − 5)^2
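These derivatives can be verified with a computer algebra system; the sketch below assumes the sympy library is available and is only a cross-check, not part of the original text.

# Checking the derivatives of Example 5 symbolically with sympy.
import sympy as sp

x = sp.symbols('x')
print(sp.diff(x**12 + 7, x))                         # 12*x**11
print(sp.diff(x**4 + 4*x**2 - 5, x))                 # 4*x**3 + 8*x
print(sp.diff(x**3 * sp.exp(x), x))                  # x**3*exp(x) + 3*x**2*exp(x)
print(sp.simplify(sp.diff((x**2 + 1)/(x - 5), x)))   # equivalent to (x**2 - 10*x - 1)/(x - 5)**2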

2.4.2 Integration:
Integration is the reverse process of differentiation. Suppose the derivative of x^3 is 3x^2; then the integral of 3x^2 with respect to x is x^3. In symbols:

d/dx (x^4) = 4x^3  ⇒  ∫ 4x^3 dx = x^4

Similarly,

d/dx (x^8) = 8x^7  ⇒  ∫ 8x^7 dx = x^8
d/dx (e^x) = e^x  ⇒  ∫ e^x dx = e^x

Note:

While differentiating a constant term we get zero. In the reverse process, integration, we cannot recover the constant unless its value is known. That is why we add an arbitrary constant C to each integral.

Therefore in the above examples we usually write ∫ e^x dx = e^x + c and ∫ 8x^7 dx = x^8 + c.

These integrals are called indefinite integrals.


Rules and formulae on integration:

(i) ∫ k dx = kx
(ii) ∫ x^n dx = x^(n+1)/(n + 1)
(iii) ∫ e^x dx = e^x
(iv) ∫ (1/x) dx = log x
(v) ∫ (u ± v) dx = ∫ u dx ± ∫ v dx
Example 6:
Integrate the following with respect to x:

(i) ∫ x^6 dx = x^(6+1)/(6 + 1) = x^7/7 + c

(ii) ∫ x^(−5) dx = x^(−5+1)/(−5 + 1) = x^(−4)/(−4) = −1/(4x^4) + c

(iii) ∫ (1/x) dx = log x + c

(iv) ∫ √x dx = ∫ x^(1/2) dx = x^(1/2 + 1)/(1/2 + 1) = x^(3/2)/(3/2) = (2/3) x^(3/2) + c

(v) ∫ (e^x + x^4 + 1/x^3 + 10) dx = e^x + x^5/5 − 1/(2x^2) + 10x + c

The integrals discussed above are indefinite integrals. For definite integrals we have limits on both sides, i.e. a lower limit and an upper limit.

The integral ∫ f(x) dx is an indefinite integral. Integrating the same function within the given limits a and b is known as the definite integral,

i.e. ∫_a^b f(x) dx = k (a constant value),

where a is known as the lower limit and b as the upper limit of the definite integral.

To find the value of a definite integral we use the formula: if ∫ f(x) dx = F(x), then ∫_a^b f(x) dx = F(b) − F(a).

An important note to teachers and students:

As far as statistics problems are concerned, the differentiation and integration methods are restricted to simple algebraic functions only.

Example 7:
Evaluate the following definite integrals:

(i) ∫_0^4 3x^2 dx   (ii) ∫_1^3 x^3 dx   (iii) ∫_2^5 x dx

Solution:

(i) ∫_0^4 3x^2 dx = [3x^3/3]_0^4 = [x^3]_0^4 = 4^3 − 0^3 = 64

(ii) ∫_1^3 x^3 dx = [x^4/4]_1^3 = (1/4) [x^4]_1^3 = (1/4) [3^4 − 1^4] = (1/4) [81 − 1] = (1/4)(80) = 20

(iii) ∫_2^5 x dx = [x^2/2]_2^5 = (1/2) [5^2 − 2^2] = (1/2) [25 − 4] = 21/2
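The three definite integrals can be confirmed in one line each; the sketch below assumes sympy is available and simply applies F(b) − F(a) symbolically.

# Verifying the definite integrals of Example 7.
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(3*x**2, (x, 0, 4)))   # 64
print(sp.integrate(x**3, (x, 1, 3)))     # 20
print(sp.integrate(x, (x, 2, 5)))        # 21/2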

Example 8:
Examine whether f(x) = 5x^4, 0 < x < 1, can be the p.d.f. of a continuous random variable x.
Solution:

For a probability density function we must show that ∫_{−∞}^{∞} f(x) dx = 1, that is, that ∫_0^1 5x^4 dx = 1.

∫_0^1 5x^4 dx = 5 [x^5/5]_0^1 = [x^5]_0^1 = 1^5 − 0 = 1

∴ f(x) is a p.d.f.
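A numerical cross-check (an added sketch, assuming scipy is installed) integrates f(x) = 5x^4 over (0, 1) and spot-checks non-negativity.

# Example 8 as a numerical check: the total probability should be 1.
from scipy.integrate import quad

total, _ = quad(lambda x: 5 * x**4, 0, 1)
print(round(total, 6))                                           # 1.0
print(all(5 * x**4 >= 0 for x in (0.0, 0.25, 0.5, 0.75, 1.0)))   # True (non-negativity spot check)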
Example 9:
A continuous random variable x follows the rule f(x) = Ax^2, 0 < x < 1. Determine A.
Solution:

Since f(x) is a p.d.f., ∫_{−∞}^{∞} f(x) dx = 1

Therefore ∫_0^1 Ax^2 dx = 1
A [x^3/3]_0^1 = 1
(A/3) [x^3]_0^1 = 1
(A/3)(1) = 1
A = 3

Example 10:
Let f(x) = c(1 − x)x^2, 0 < x < 1, be a probability density function of a random variable x. Find the constant c.
Solution:
f(x) = c(1 − x)x^2, 0 < x < 1

Since f(x) is a p.d.f., ∫_{−∞}^{∞} f(x) dx = 1

∴ ∫_0^1 c(x^2 − x^3) dx = 1
c [x^3/3 − x^4/4]_0^1 = 1
c [(1^3/3 − 1^4/4) − (0 − 0)] = 1
c [1/3 − 1/4] = 1
c [(4 − 3)/12] = 1
c [1/12] = 1
c = 12

Example 11:
A random variable x has the density function

f(x) = 1/4, −2 < x < 2
     = 0, elsewhere

Obtain (i) P(−1 < x < 2) (ii) P(x > 1).

Solution:

(i) P(−1 < x < 2) = ∫_{−1}^{2} f(x) dx = ∫_{−1}^{2} (1/4) dx = (1/4) [x]_{−1}^{2} = (1/4) [2 − (−1)] = 3/4

(ii) Here the upper limit of the p.d.f. is 2, so

P(x > 1) = ∫_1^2 (1/4) dx = (1/4) [x]_1^2 = (1/4) [2 − 1] = 1/4

2.5 Mathematical Expectation:


Expectation is a very basic concept and is employed widely in decision theory,
management science, system analysis, theory of games and many other fields. Some of these
applications will be discussed in the chapter on Decision Theory.
The expected value, or mathematical expectation, of a random variable X is the weighted average of the values that X can assume, with the probabilities of its various values as weights.
Thus the expected value of a random variable is obtained by considering the various values that the variable can take, multiplying them by their corresponding probabilities, and summing these products. The expectation of X is denoted by E(X).
2.5.1 Expectation of a discrete random variable:

Let X be a discrete random variable which can assume any of the values x1, x2, x3, ..., xn with respective probabilities p1, p2, p3, ..., pn. Then the mathematical expectation of X is given by

E(X) = x1p1 + x2p2 + x3p3 + ... + xnpn = ∑_{i=1}^{n} xi pi, where ∑_{i=1}^{n} pi = 1
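As an added illustration (the function name below is only for this sketch), the defining sum translates directly into code with exact arithmetic.

# E(X) = sum of x_i * p_i for a discrete random variable given as
# (value, probability) pairs; standard library only.
from fractions import Fraction

def expectation(dist):
    """dist: iterable of (x_i, p_i) pairs whose probabilities sum to 1."""
    return sum(x * p for x, p in dist)

# the fair-die distribution used later in Example 12
die = [(k, Fraction(1, 6)) for k in range(1, 7)]
print(expectation(die))   # 7/2, i.e. 3.5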
Note:

Mathematical expectation of a random variable is also known as its arithmetic mean.


We shall give some useful theorems on expectation without proof.

2.5.2 Theorems on Expectation:

1. For two random variables X and Y, if E(X) and E(Y) exist, then E(X + Y) = E(X) + E(Y). This is known as the addition theorem on expectation.

2. For two independent random variables X and Y, E(XY) = E(X)·E(Y), provided all the expectations exist. This is known as the multiplication theorem on expectation.

3. The expectation of a constant is the constant itself, i.e. E(c) = c

4. E(cX) = cE(X)

5. E(aX + b) = aE(X) + b

6. The variance of a constant is zero, i.e. Var(c) = 0

7. Var(X + c) = Var(X)

Note: This theorem shows that variance is independent of a change of origin.

8. Var(aX) = a^2 Var(X)

Note: This theorem shows that a change of scale affects the variance.

9. Var(aX + b) = a^2 Var(X)

10. Var(b − aX) = a^2 Var(X)
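A small numerical check of theorems 5, 9 and 10 (an added sketch; the distribution and helper names are illustrative) can be run with exact fractions.

# Verify E(aX + b) = aE(X) + b and Var(aX + b) = a^2 Var(X) on the
# two-coin distribution P(X = 0, 1, 2) = 1/4, 1/2, 1/4.
from fractions import Fraction

dist = [(0, Fraction(1, 4)), (1, Fraction(1, 2)), (2, Fraction(1, 4))]

def E(d):
    return sum(x * p for x, p in d)

def Var(d):
    return sum(x**2 * p for x, p in d) - E(d)**2

a, b = 5, 2
shifted = [(a * x + b, p) for x, p in dist]
print(E(shifted) == a * E(dist) + b)      # True
print(Var(shifted) == a**2 * Var(dist))   # True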

Definition:

Let f(x) be a function of the random variable X. Then the expectation of f(x) is given by

E(f(x)) = ∑ f(x) P(X = x), where P(X = x) is the probability function of x.

Particular cases:

1. If we take f(x) = X^r, then E(X^r) = ∑ x^r p(x) is defined as the r-th moment about the origin, or the r-th raw moment, of the probability distribution. It is denoted by μ′r.

Thus μ′r = E(X^r), so μ′1 = E(X) and μ′2 = E(X^2).
Hence mean = X̄ = μ′1 = E(X)

Variance = ∑x^2/N − (∑x/N)^2 = E(X^2) − (E(X))^2 = μ′2 − (μ′1)^2

Variance is denoted by μ2.

2. If we take f(x) = (X − X̄)^r, then E(X − X̄)^r = ∑(X − X̄)^r p(x), which is μr, the r-th moment about the mean or r-th central moment. In particular, if r = 2, we get

μ2 = E(X − X̄)^2 = ∑(X − X̄)^2 p(x) = E[X − E(X)]^2

These two formulae give the variance of a probability distribution in terms of expectations.
Example 12:
Find the expected value of X, where X represents the outcome when a die is thrown.
Solution:
Here each of the outcomes (i.e., the numbers) 1, 2, 3, 4, 5 and 6 occurs with probability 1/6. Thus the probability distribution of X will be

x      1    2    3    4    5    6
P(x)   1/6  1/6  1/6  1/6  1/6  1/6

Thus the expected value of X is

E(X) = ∑ xi pi = x1p1 + x2p2 + x3p3 + x4p4 + x5p5 + x6p6
     = 1 × (1/6) + 2 × (1/6) + 3 × (1/6) + 4 × (1/6) + 5 × (1/6) + 6 × (1/6)
     = 7/2
E(X) = 3.5

Remark:
In games of chance, the expected value of the game is defined as the value of the game to the player.

The game is said to be favourable to the player if the expected value of the game is positive, and unfavourable if the expected value of the game is negative. The game is called a fair game if the expected value of the game is zero.

Example 13:

A player throws a fair die. If a prime number occurs he wins that number of rupees but
if a non-prime number occurs he loses that number of rupees. Find the expected gain of the
player and conclude.

Solution:

Here each of the six outcomes in throwing a die has been assigned a certain amount of loss or gain. To find the expected gain of the player, these assigned gains (a loss is considered as negative gain) are denoted by X.

These can be written as follows:

Outcome on the die                      1     2     3     4     5     6
Associated gain to the outcome (xi)    −1     2     3    −4     5    −6
P(xi)                                  1/6   1/6   1/6   1/6   1/6   1/6

Note that 2, 3 and 5 are prime numbers. Now the expected gain is

E(x) = ∑_{i=1}^{6} xi pi
     = (−1)(1/6) + (2)(1/6) + (3)(1/6) + (−4)(1/6) + (5)(1/6) + (−6)(1/6)
     = −1/6

Since the expected value of the game is negative, the game is unfavourable to the player.

Example 14:

An urn contains 7 white and 3 red balls. Two balls are drawn together at random from
the urn. Find the expected number of white balls drawn.

Solution:

From the urn containing 7 white and 3 red balls, two balls can be drawn in 10C2 ways.
Let X denote the number of white balls drawn, X can take the values 0, 1 and 2.

The probability distribution of X is obtained as follows:

P(0) = Probability that neither of the two balls is white
     = Probability that both balls drawn are red
     = 3C2/10C2 = 3/45 = 1/15

P(1) = Probability of getting 1 white and 1 red ball
     = (7C1 × 3C1)/10C2 = 21/45 = 7/15

P(2) = Probability of getting two white balls
     = 7C2/10C2 = 21/45 = 7/15

Hence the expected number of white balls drawn is

E(X) = ∑ xi pi = 0 × (1/15) + 1 × (7/15) + 2 × (7/15) = 21/15 = 1.4

Example 15:

A dealer in television sets estimates from his past experience the probabilities of his
selling television sets in a day is given below. Find the expected number of sales in a day.

Number of TV sold in a day 0 1 2 3 4 5 6


Probability 0.02 0.10 0.21 0.32 0.20 0.09 0.06

Solution:

We observe that the number of television sets sold in a day is a random variable which can assume the values 0, 1, 2, 3, 4, 5, 6 with the respective probabilities given in the table.

Now the expectation of X is E(X) = ∑ xi pi
= (0)(0.02) + (1)(0.10) + (2)(0.21) + (3)(0.32) + (4)(0.20) + (5)(0.09) + (6)(0.06)

E(X) = 3.09

The expected number of sales per day is about 3.

Example 16:

Let x be a discrete random variable with the following probability distribution

X –3 6 9
P (X = x) 1/6 1/2 1/3

Find the mean and variance.

Solution:
E(x) = ∑ xi pi = (−3)(1/6) + (6)(1/2) + (9)(1/3) = 11/2

E(x^2) = ∑ xi^2 pi = (−3)^2 (1/6) + (6)^2 (1/2) + (9)^2 (1/3) = 93/2

Var(X) = E(X^2) − [E(X)]^2
       = 93/2 − (11/2)^2
       = 93/2 − 121/4
       = (186 − 121)/4
       = 65/4
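A quick numerical confirmation of Example 16 (added here as an illustration, standard library only):

# Mean and variance of the distribution P(X = -3, 6, 9) = 1/6, 1/2, 1/3.
from fractions import Fraction

dist = [(-3, Fraction(1, 6)), (6, Fraction(1, 2)), (9, Fraction(1, 3))]
mean = sum(x * p for x, p in dist)
var = sum(x**2 * p for x, p in dist) - mean**2
print(mean, var)   # 11/2 65/4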

2.5.3 Expectation of a continuous random variable:

Let X be a continuous random variable with probability density function f(x). Then the mathematical expectation of X is defined as

E(X) = ∫_{−∞}^{∞} x f(x) dx, provided the integral exists.

Remark:
If g(x) is a function of a random variable and E[g(x)] exists, then

E[g(x)] = ∫_{−∞}^{∞} g(x) f(x) dx

Example 17:
Let X be a continuous random variable with p.d.f. given by f(x) = 4x^3, 0 < x < 1. Find the expected value of X.
Solution:

We know that E(X) = ∫_{−∞}^{∞} x f(x) dx.

In this problem E(X) = ∫_0^1 x(4x^3) dx
= 4 ∫_0^1 x^4 dx
= 4 [x^5/5]_0^1
= (4/5) [x^5]_0^1
= (4/5) [1^5 − 0^5]
= 4/5

Example 18:

Let x be a continuous random variable with p.d.f. given by f(x) = 3x^2, 0 < x < 1. Find the mean and variance.

Solution:

E(x) = ∫_{−∞}^{∞} x f(x) dx = ∫_0^1 x(3x^2) dx = 3 ∫_0^1 x^3 dx = 3 [x^4/4]_0^1 = (3/4) [1 − 0] = 3/4

E(x^2) = ∫_{−∞}^{∞} x^2 f(x) dx = ∫_0^1 x^2(3x^2) dx = 3 ∫_0^1 x^4 dx = 3 [x^5/5]_0^1 = (3/5) [1 − 0] = 3/5

Variance = E(x^2) − [E(x)]^2

Var(x) = 3/5 − (3/4)^2 = 3/5 − 9/16 = (48 − 45)/80 = 3/80
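The continuous case can also be checked by numerical integration; the sketch below assumes scipy is available and reproduces Example 18.

# Mean and variance of the density f(x) = 3x^2 on (0, 1).
from scipy.integrate import quad

f = lambda x: 3 * x**2
mean, _ = quad(lambda x: x * f(x), 0, 1)      # E(X)   = 3/4
ex2, _  = quad(lambda x: x**2 * f(x), 0, 1)   # E(X^2) = 3/5
var = ex2 - mean**2                           # 3/5 - 9/16 = 3/80
print(round(mean, 4), round(var, 4))          # 0.75 0.0375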

2.6 Moment generating function (M.G.F) (concepts only):


To find out the moments, the moment generating function is a good device. The
moment generating function is a special form of mathematical expectation and is very useful
in deriving the moments of a probability distribution.

Definition:
If X is a random variable, then the expected value of e^{tX} is known as the moment generating function, provided the expected value exists for every value of t in an interval −h < t < h, where h is some positive real value.

The moment generating function is denoted by M_X(t).

For a discrete random variable,

M_X(t) = E(e^{tX}) = ∑ e^{tx} p(x) = ∑ [1 + tx + (tx)^2/2! + (tx)^3/3! + ...] p(x)

so that

M_X(t) = 1 + t μ′1 + (t^2/2!) μ′2 + (t^3/3!) μ′3 + ... = ∑_{r=0}^{∞} (t^r/r!) μ′r

In the above expansion the r-th raw moment is the coefficient of t^r/r!. To find the moments, differentiate the moment generating function with respect to t once, twice, thrice, ... and put t = 0 in the first, second, third, ... derivatives to obtain the first, second, third, ... moments.

From the resulting expressions we get the raw moments about the origin. The central moments are obtained by using the relationship between raw moments and central moments.
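As an added sketch (assuming sympy), the m.g.f. device can be applied to a fair die: build M_X(t) = E(e^{tX}), differentiate, and set t = 0 to read off the raw moments.

# Raw moments of a fair die from its moment generating function.
import sympy as sp

t = sp.symbols('t')
M = sum(sp.Rational(1, 6) * sp.exp(t * k) for k in range(1, 7))   # M_X(t)

mu1 = sp.diff(M, t, 1).subs(t, 0)   # first raw moment,  E(X)   = 7/2
mu2 = sp.diff(M, t, 2).subs(t, 0)   # second raw moment, E(X^2) = 91/6
print(mu1, mu2, sp.simplify(mu2 - mu1**2))   # 7/2 91/6 35/12 (the last is the variance)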
2.7 Characteristic function:
The moment generating function does not exist for every distribution. Hence another function, which always exists for all distributions, is used; it is known as the characteristic function.
It is the expected value of e^{itx}, where i = √(−1) and t is real. The characteristic function of a random variable X is denoted by φx(t).
For a discrete variable X having the probability function p(x), the characteristic function is φx(t) = ∑ e^{itx} p(x).
For a continuous variable X having density function f(x), with a < x < b, the characteristic function is φx(t) = ∫_a^b e^{itx} f(x) dx.

Exercise - 2
I. Choose the best answer:

1. ∑_{i=1}^{n} p(xi) is equal to
(a) 0 (b) 1 (c) −1 (d) ∞

2. If F(x) is a distribution function, then F(−∞) is
(a) −1 (b) 0 (c) 1 (d) −∞

3. From the given random variable table, the value of a is

X = x   0   1    2
pi      a   2a   a

(a) 1 (b) 1/2 (c) 4 (d) 1/4

4. E(2x + 3) is
(a) E(2x) (b) 2E(x) + 3 (c) E(3) (d) 2x + 3

5. Var(x + 8) is
(a) var(8) (b) var(x) (c) 8 var(x) (d) 0

6. Var(5x + 2) is
(a) 25 var(x) (b) 5 var(x) (c) 2 var(x) (d) 25

7. The variance of the random variable X is
(a) E(x^2) − [E(x)]^2 (b) [E(x)]^2 − E(x^2) (c) E(x^2) (d) [E(x)]^2

8. The variance of the random variable x is 1/16; its standard deviation is
(a) 1/256 (b) 1/32 (c) 1/64 (d) 1/4

9. A random variable X has E(x) = 2 and E(x^2) = 8; its variance is
(a) 4 (b) 6 (c) 8 (d) 2

10. If f(x) is the p.d.f. of the continuous random variable x, then E(x^2) is
(a) ∫_{−∞}^{∞} f(x) dx (b) ∫_{−∞}^{∞} x f(x) dx (c) ∫_{−∞}^{∞} x^2 f(x) dx (d) ∫_{−∞}^{∞} f(x^2) dx

II. Fill in the blanks:


11. If F(x) is a distribution function, then F(+∞) is equal to ________
12. If F(x) is a cumulative distribution function of a continuous random variable x with p.d.f
f(x) then F′(x) = __________
13. f(x) is the probability density function of a continuous random variable X. Then ∫_{−∞}^{∞} f(x) dx is equal to ________
14. Mathematical expectation of a random variable X is also known as _____________
15. Variance of a constant is _____________
16. Var (12x) is _____________
17. Var (4x + 7) is _________
18. If x is a discrete random variable with the probabilities pi , then the expected value of x2
is ________
19. If f(x) is the p.d.f of the continuous random variable X, then the expectation of X is given
by __________
20. The moment generating function for the discrete random variable is given by ________

III. Answer the following:
21. Define random variable.
22. Define discrete random variable
23. Define continuous random variable
24. What is probability mass function?
25. What is discrete probability distribution?
26. Define probability density function.
27. Write the properties of distribution function.
28. Define mathematical expectation for discrete random variable.
29. Define the expectation of a continuous random variable.
30. State the moment generating function.
31. State the characteristic function for a discrete random variable.
32. State the characteristic function for the continuous random variable.
33. Write short note on moment generating function.
34. Write a short note on characteristic function.
35. Find the probability distribution of X when 3 coins are tossed, where X is defined as the number of heads obtained.
36. Two dice are thrown simultaneously and getting three is termed as success. Obtain the
probability distribution of the number of threes.
37. Three cards are drawn at random successively, with replacement, from a well shuffled
pack of 52 cards. Getting a card of diamond is termed as success. Obtain the probability
distribution of the number of success.
38. A random variable X has the following probability distribution

Value of x 0 1 2 3 4
P (X = x) 3a 4a 6a 7a 8a
(a) determine the value of a (b) Find p ( 1 < x < 4 )
(c) P(1 ≤ x ≤ 4) (d) Find P (x >2)
(e) Find the distribution function of x
39. A random variable X has the following probability function.

Values of X, x   0   1   2    3    4    5      6      7
P(X)             0   k   2k   2k   3k   k^2    2k^2   7k^2 + k
(i) Find k (ii) Find p(0 < x < 5) (iii) Find p(x ≤ 6)

40. Verify whether the following are probability density functions:
(i) f(x) = 6x^5, 0 < x < 1
(ii) f(x) = 2x/9, 0 < x < 3

41. A continuous random variable x follows the probability law f(x) = Ax^3, 0 < x < 1. Determine A.

42. A random variable X has the density function f(x) = 3x^2, 0 < x < 1. Find the probability that x lies between 0.2 and 0.5.

43. A random variable X has the following probability distribution

X = x   5     2     1
P(x)    1/4   1/2   1/4

Find the expected value of x.

44. A random variable X has the following distribution

x      −1    0     1     2
P(x)   1/3   1/6   1/6   1/3

Find E(x), E(x^2) and Var(x).

45. A random variable X has E(x) = 1/2 and E(x^2) = 1/2. Find its variance and standard deviation.

46. A continuous distribution has probability density function f(x) = (3/4) x(2 − x), 0 < x < 2. Find the expected value of x.

47. The probability density function of a continuous random variable X is given by f(x) = x/2 for 0 < x < 2. Find its mean and variance.

Answers:
I.
1. (b) 2. (b) 3. (d) 4. (b) 5. (b)
6. (a) 7. (a) 8. (d) 9. (a) 10. (c)

II.
11. 1   12. f(x)   13. 1   14. Mean   15. zero
16. 144 var(x)   17. 16 var(x)   18. ∑ xi^2 pi   19. ∫_{−∞}^{∞} x f(x) dx
20. ∑_{r=0}^{∞} (t^r/r!) μ′r

III.
35.
X = x    0     1     2     3
P(xi)    1/8   3/8   3/8   1/8

36.
X = x       0       1       2
P(X = x)    25/36   10/36   1/36

37.
X = x    0       1       2      3
P(xi)    27/64   27/64   9/64   1/64

38. (i) a = 1/28  (ii) 13/28  (iii) 25/28  (iv) 15/28
(v)
x       0      1      2       3       4
F(x)    3/28   7/28   13/28   20/28   28/28 = 1

39. (i) k = 1/10  (ii) 4/5  (iii) 83/100

40. (i) p.d.f  (ii) p.d.f
41. A = 4
42. P(0.2 < x < 0.5) = 0.117
43. 2.5
44. E(x) = 1/2, var(x) = 19/12
45. 1/4, 1/2
46. E(x) = 1
47. E(x) = 4/3, var(x) = 2/9
