VISVESVARAYA TECHNOLOGICAL UNIVERSITY, BELAGAVI
MATHEMATICS HANDBOOK
∑x²y = a∑x² + b∑x³ + c∑x⁴
To fit the curve y = a·xᵇ, take logarithms and solve the normal equations of Y = A + bX for A & b:
∑Y = nA + b∑X
∑XY = A∑X + b∑X²
Where X = log₁₀x, Y = log₁₀y & A = log₁₀a
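As a numerical sketch (not part of the handbook's worked material), the transformed normal equations can be solved directly; the data below are made up so that y = 2·x^1.5 exactly, and the fit should recover a = 2, b = 1.5:

```python
import math

# Hypothetical data generated from y = 2 * x^1.5
xs = [1.0, 2.0, 4.0, 8.0]
ys = [2.0 * x ** 1.5 for x in xs]

# Transform: X = log10(x), Y = log10(y), A = log10(a)
X = [math.log10(x) for x in xs]
Y = [math.log10(y) for y in ys]
n = len(X)

# Normal equations: sum(Y) = n*A + b*sum(X); sum(XY) = A*sum(X) + b*sum(X^2)
sx, sy = sum(X), sum(Y)
sxx = sum(v * v for v in X)
sxy = sum(u * v for u, v in zip(X, Y))

b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
A = (sy - b * sx) / n
a = 10 ** A  # back-transform A = log10(a)
```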
Mean:
The mean of the set of n values x₁, x₂, x₃, ⋯, xₙ is x̄ = (∑xᵢ)/n = (x₁ + x₂ + x₃ + ⋯ + xₙ)/n
Standard Deviation:
The standard deviation σ of the set of n values x₁, x₂, x₃, ⋯, xₙ is given by
σ² = ∑(xᵢ − x̄)²/n = ∑xᵢ²/n − (x̄)²
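Both forms of the variance formula give the same result; a small sketch with made-up data:

```python
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # illustrative values
n = len(data)
mean = sum(data) / n

# sigma^2 = sum((x_i - mean)^2)/n  =  sum(x_i^2)/n - mean^2
var_deviation_form = sum((x - mean) ** 2 for x in data) / n
var_shortcut_form = sum(x * x for x in data) / n - mean ** 2
sigma = var_deviation_form ** 0.5
```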
For the frequency distribution, if 𝑥1 , 𝑥2 , 𝑥3 , ⋯ ⋯ , 𝑥𝑛 be the mid values of the class-intervals
having frequencies 𝑓1 , 𝑓2 , 𝑓3 , ⋯ ⋯ , 𝑓𝑛 respectively,
the mean is x̄ = (∑fᵢxᵢ)/(∑fᵢ) = (f₁x₁ + f₂x₂ + f₃x₃ + ⋯ + fₙxₙ)/(f₁ + f₂ + f₃ + ⋯ + fₙ)
the standard deviation is given by σ² = ∑fᵢ(xᵢ − x̄)²/∑fᵢ
Coefficient of Correlation 𝒓 between 𝒙 & 𝒚 is
r = ∑XY / √(∑X² ∑Y²)
Where 𝑋 = 𝑥 − 𝑥̅ & 𝑌 = 𝑦 − 𝑦̅
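A short sketch of the deviation-form computation, using made-up paired data:

```python
xs = [1, 2, 3, 4, 5]        # hypothetical paired observations
ys = [2, 4, 5, 4, 5]
n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

X = [x - xbar for x in xs]  # deviations from the mean of x
Y = [y - ybar for y in ys]  # deviations from the mean of y
sxy = sum(u * v for u, v in zip(X, Y))
sxx = sum(u * u for u in X)
syy = sum(v * v for v in Y)

r = sxy / (sxx * syy) ** 0.5
```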
Line of Regression of 𝑦 on 𝑥 is
y − ȳ = r·(σ_y/σ_x)·(x − x̄)
Line of Regression of 𝑥 on 𝑦 is
x − x̄ = r·(σ_x/σ_y)·(y − ȳ)
Regression coefficient of y on x is b_yx = r·(σ_y/σ_x)
Regression coefficient of x on y is b_xy = r·(σ_x/σ_y)
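In deviation form the coefficients reduce to b_yx = ∑XY/∑X² and b_xy = ∑XY/∑Y², and their product equals r²; a sketch with made-up data:

```python
xs = [1, 2, 3, 4, 5]        # hypothetical paired observations
ys = [2, 4, 5, 4, 5]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
X = [x - xbar for x in xs]
Y = [y - ybar for y in ys]
sxy = sum(u * v for u, v in zip(X, Y))
sxx = sum(u * u for u in X)
syy = sum(v * v for v in Y)

byx = sxy / sxx             # equals r * (sigma_y / sigma_x)
bxy = sxy / syy             # equals r * (sigma_x / sigma_y)
r_squared = sxy ** 2 / (sxx * syy)
```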
The angle between two regression lines 𝜃 is given by
tan θ = [σ_x·σ_y/(σ_x² + σ_y²)] · [(1 − r²)/r]
The standard error of estimate of x is given by S_x = σ_x·√(1 − r²)
The standard error of estimate of y is given by S_y = σ_y·√(1 − r²)
Rank Correlation between x & y:
ρ = 1 − 6∑d²/[n(n² − 1)], where d = x − y is the difference of the ranks
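A sketch with hypothetical ranks given by two judges:

```python
x_ranks = [1, 2, 3, 4, 5]   # ranks from judge 1 (made up)
y_ranks = [2, 1, 4, 3, 5]   # ranks from judge 2 (made up)
n = len(x_ranks)

# rho = 1 - 6*sum(d^2) / (n*(n^2 - 1)), with d = difference of ranks
d_squared = sum((x - y) ** 2 for x, y in zip(x_ranks, y_ranks))
rho = 1 - 6 * d_squared / (n * (n * n - 1))
```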
Probability Distributions
Sample space S is the set of all possible outcomes.
The probability P is a real valued function whose domain is S and range is the interval
[0,1] satisfying the following axioms:
(i) For any event E, 𝑃(𝐸) ≥ 0
(ii) 𝑃 (𝑆) = 1
(iii) If E and F are mutually exclusive events, then 𝑃(𝐸 ∪ 𝐹) = 𝑃(𝐸) + 𝑃(𝐹).
If E and F are equally likely to occur, then 𝑃(𝐸) = 𝑃(𝐹).
If E and F are any two events then 𝑃(𝐸 ∪ 𝐹) = 𝑃(𝐸) + 𝑃(𝐹) − 𝑃(𝐸 ∩ 𝐹).
If E and F are mutually exclusive events then 𝑃(𝐸 ∩ 𝐹) = 0.
For any event E of the sample space S, we have 𝑃(𝐸 ′ ) = 1 − 𝑃(𝐸)
Two events E and F are said to be independent events if 𝑃 (𝐸 ⁄𝐹 ) = 𝑃(𝐸)
If E and F are two events, P(E ∩ F) = P(E) ∙ P(F⁄E) = P(F) ∙ P(E⁄F)
Two events E and F are said to be independent iff 𝑃(𝐸 ∩ 𝐹) = 𝑃(𝐸) ∙ 𝑃(𝐹)
Bayes' Theorem: An event A can occur only in combination with one of the mutually exclusive and exhaustive events B₁, B₂, ⋯, Bₙ. If P(Bᵢ) and P(A⁄Bᵢ) are given, then
P(Bᵢ⁄A) = P(Bᵢ)P(A⁄Bᵢ) / ∑ P(Bᵢ)P(A⁄Bᵢ)
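A numerical sketch of the theorem; the boxes, priors, and defect rates below are all made up:

```python
# Three boxes B1, B2, B3 chosen with the given priors; likelihoods are
# P(A|Bi), the chance of drawing a defective item from each box.
priors = [0.25, 0.50, 0.25]        # P(Bi), sums to 1
likelihoods = [0.02, 0.03, 0.05]   # P(A|Bi)

# Denominator of Bayes' theorem: total probability P(A)
total = sum(p * l for p, l in zip(priors, likelihoods))
posteriors = [p * l / total for p, l in zip(priors, likelihoods)]
```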
Discrete Probability Distribution
For each value xᵢ of a discrete random variable X, we assign a real number P(xᵢ) such that
(i) P(xᵢ) ≥ 0 for all values of i   and   (ii) ∑ᵢ₌₁ⁿ P(xᵢ) = 1
then,
X       x₁       x₂       x₃       …    xₙ
P(X)    P(x₁)    P(x₂)    P(x₃)    …    P(xₙ)
is called a discrete probability distribution of X.
The cumulative distribution function F(x) is defined by F(x) = P(X ≤ x) = ∑_{xᵢ ≤ x} P(xᵢ)
The mathematical expectation is 𝐸(𝑋) = ∑𝑛𝑖=1 𝑥𝑖 𝑃(𝑥𝑖 ) and 𝐸(𝑋 2 ) = ∑𝑛𝑖=1 𝑥𝑖2 𝑃(𝑥𝑖 )
Mean = 𝐸(𝑋), Variance = 𝐸(𝑋 2 ) − [𝐸(𝑋)]2 , Standard deviation = √𝑉𝑎𝑟𝑖𝑎𝑛𝑐𝑒
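A sketch computing the mean and variance of a made-up discrete distribution:

```python
xs = [0, 1, 2, 3]              # values of the variable (made up)
ps = [0.1, 0.2, 0.3, 0.4]      # P(x_i)

# Conditions (i) and (ii): nonnegative and summing to 1
assert all(p >= 0 for p in ps) and abs(sum(ps) - 1) < 1e-9

EX = sum(x * p for x, p in zip(xs, ps))          # mean E(X)
EX2 = sum(x * x * p for x, p in zip(xs, ps))     # E(X^2)
variance = EX2 - EX ** 2
sd = variance ** 0.5
```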
Binomial Distribution
A discrete random variable X is said to follow the binomial distribution if P(x) = nCₓ pˣ qⁿ⁻ˣ, x = 0, 1, 2, ⋯, n, where p is the probability of success and q = 1 − p is the probability of failure.
xᵢ       0      1             2              3              ⋯    r             ⋯    n
P(xᵢ)    qⁿ     nC₁ p qⁿ⁻¹    nC₂ p² qⁿ⁻²    nC₃ p³ qⁿ⁻³    ⋯    nCᵣ pʳ qⁿ⁻ʳ    ⋯    pⁿ
Mean = 𝜇 = 𝑛𝑝, Variance 𝑉 = 𝜎 2 = 𝑛𝑝𝑞 , Standard deviation = 𝜎 = √𝑛𝑝𝑞
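A sketch (with made-up n and p) checking the pmf sums to 1 and that its mean and variance match np and npq:

```python
from math import comb

n, p = 10, 0.5                 # hypothetical parameters
q = 1 - p
# P(x) = nCx * p^x * q^(n-x) for x = 0..n
pmf = [comb(n, x) * p ** x * q ** (n - x) for x in range(n + 1)]

mean = sum(x * P for x, P in enumerate(pmf))
variance = sum(x * x * P for x, P in enumerate(pmf)) - mean ** 2
```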
Poisson Distribution
A probability distribution which satisfies the probability density function P(x) = e⁻ᵐ mˣ / x! is called the Poisson distribution.
Mean = μ = m = Variance, where m = np is finite
∑_{x=0}^∞ mˣ/x! = ∑_{x=1}^∞ m^{x−1}/(x−1)! = ∑_{x=2}^∞ m^{x−2}/(x−2)! = eᵐ
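A numerical sketch (m chosen arbitrarily) confirming that the Poisson pmf sums to 1 and has mean = variance = m, truncating the infinite sums where the terms are negligible:

```python
import math

m = 3.0                                        # hypothetical mean m = np

def poisson_pmf(x):
    # P(x) = e^(-m) * m^x / x!
    return math.exp(-m) * m ** x / math.factorial(x)

# Terms beyond x = 60 are negligible for m = 3
probs = [poisson_pmf(x) for x in range(60)]
total = sum(probs)
mean = sum(x * px for x, px in enumerate(probs))
variance = sum(x * x * px for x, px in enumerate(probs)) - mean ** 2
```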
Continuous Probability Distribution
If a random variable takes any real value in the specified interval, then it is called
Continuous Random Variable.
A function f(x) is the probability density function of the continuous random variable X if
(i) f(x) ≥ 0   (ii) ∫_{−∞}^{∞} f(x) dx = 1
The mathematical expectation of the variable is
E(X) = ∫_{−∞}^{∞} x f(x) dx,   E(X²) = ∫_{−∞}^{∞} x² f(x) dx
If f(x) is the probability density function of the continuous random variable x, then the
cumulative distribution function F(t) = P(X ≤ t) = ∫_{−∞}^{t} f(x) dx. Then F′(t) = f(t)
P(a ≤ X ≤ b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a < X < b) = ∫_a^b f(x) dx
Normal distribution
The continuous probability distribution having the probability density function
f(x) = [1/(σ√(2π))] e^{−(x−μ)²/(2σ²)} is called the normal distribution.
f(x) ≥ 0,   ∫_{−∞}^{∞} f(x) dx = 1,   Mean = μ,   Variance = σ²
A normal distribution with μ = 0 and σ = 1 is called the standard normal distribution.
z = (X − μ)/σ is called the standard normal variate.
Standard normal curve is symmetric about the line 𝑧 = 0.
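The standard normal CDF can be expressed with the error function, Φ(z) = ½(1 + erf(z/√2)); a sketch checking the symmetric coverage of μ ± 1.96σ and μ ± 2.58σ used in large-sample tests:

```python
import math

def Phi(z):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# After standardizing with z = (X - mu)/sigma, coverage of the
# symmetric intervals is the same for every normal variate.
inside_196 = Phi(1.96) - Phi(-1.96)   # about 95% of the members
inside_258 = Phi(2.58) - Phi(-2.58)   # about 99% of the members
```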
Exponential distribution
The continuous probability distribution having the probability density function
f(x) = αe^{−αx} for 0 ≤ x < ∞, and f(x) = 0 elsewhere,
is called the exponential distribution.
f(x) ≥ 0,   ∫_{−∞}^{∞} f(x) dx = 1
Mean = 1/α,   Standard deviation = 1/α
∫_{−∞}^{k} f(x) dx = ∫_0^k f(x) dx   (∵ f(x) = 0 for x < 0)
∫_k^∞ f(x) dx = 1 − ∫_0^k f(x) dx
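The tail integral has the closed form ∫_k^∞ αe^{−αx} dx = e^{−αk}; a sketch (with an arbitrary rate α) checking it against a crude numerical integral of the density:

```python
import math

alpha = 0.5                 # hypothetical rate; mean = 1/alpha = 2
k = 2.0

# Closed form for P(X > k)
tail_exact = math.exp(-alpha * k)

# Numerical check: 1 - integral of f from 0 to k, by a midpoint Riemann sum
steps = 20000
dx = k / steps
head = sum(alpha * math.exp(-alpha * (i + 0.5) * dx) * dx for i in range(steps))
tail_numeric = 1 - head
```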
Joint Probability distribution
Let 𝑋 = {𝑥1 , 𝑥2 , … , 𝑥𝑚 } and 𝑌 = {𝑦1 , 𝑦2 , … , 𝑦𝑛 } be two discrete random variables. Then
P(xᵢ, yⱼ) = Jᵢⱼ is called the joint probability function of X and Y if it satisfies the conditions:
(i) Jᵢⱼ ≥ 0   (ii) ∑ᵢ₌₁ᵐ ∑ⱼ₌₁ⁿ Jᵢⱼ = 1
Set of values of this joint probability function 𝐽𝑖𝑗 is called joint probability distribution of
X and Y.
X\Y 𝑦1 𝑦2 … 𝑦𝑛 𝑆𝑢𝑚
𝑥1 𝐽11 𝐽12 … 𝐽1𝑛 𝑓(𝑥1 )
𝑥2 𝐽21 𝐽22 … 𝐽2𝑛 𝑓(𝑥2 )
… … … … … …
𝑥𝑚 𝐽𝑚1 𝐽𝑚2 … 𝐽𝑚𝑛 𝑓(𝑥𝑚 )
𝑆𝑢𝑚 𝑔(𝑦1 ) 𝑔(𝑦2 ) … 𝑔(𝑦𝑛 ) 𝑇𝑜𝑡𝑎𝑙 = 1
Marginal probability distribution of X
𝑥1 𝑥2 … 𝑥𝑚
𝑓(𝑥1 ) 𝑓(𝑥2 ) … 𝑓(𝑥𝑚 )
Where 𝑓(𝑥1 ) + 𝑓(𝑥2 ) + ⋯ + 𝑓(𝑥𝑚 ) = 1
Marginal probability distribution of Y
𝑦1 𝑦2 … 𝑦𝑛
𝑔(𝑦1 ) 𝑔(𝑦2 ) … 𝑔(𝑦𝑛 )
Where 𝑔(𝑦1 ) + 𝑔(𝑦2 ) + ⋯ + 𝑔(𝑦𝑛 ) = 1
The discrete random variables X and Y are said to be independent random variables if
𝑓(𝑥𝑖 )𝑔(𝑦𝑗 ) = 𝐽𝑖𝑗 .
Important results:
Expectations:
E(X) = ∑ᵢ₌₁ᵐ xᵢ f(xᵢ)   E(Y) = ∑ⱼ₌₁ⁿ yⱼ g(yⱼ)   E(XY) = ∑ᵢ₌₁ᵐ ∑ⱼ₌₁ⁿ xᵢ yⱼ Jᵢⱼ
Covariance:
𝐶𝑜𝑣(𝑋, 𝑌) = 𝐸(𝑋𝑌) − 𝐸(𝑋)𝐸(𝑌)
Variance:
𝑉𝑎𝑟(𝑋) = 𝐸(𝑋 2 ) − [𝐸(𝑋)]2 𝑉𝑎𝑟(𝑌) = 𝐸(𝑌 2 ) − [𝐸(𝑌)]2
Standard deviation:
𝜎𝑥 = √𝑉𝑎𝑟(𝑋) 𝜎𝑦 = √𝑉𝑎𝑟(𝑌)
Correlation of X and Y:
ρ(X, Y) = Cov(X, Y)/(σ_x σ_y)
If X and Y are independent then 𝐸(𝑋𝑌) = 𝐸(𝑋)𝐸(𝑌).
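A sketch working through a hypothetical 2×2 joint table: marginals, expectations, covariance, and correlation, with an independence check at the end:

```python
# J[i][j] = P(X = xs[i], Y = ys[j]); entries are made up
J = [[0.1, 0.2],
     [0.3, 0.4]]
xs, ys = [0, 1], [0, 1]

f = [sum(row) for row in J]           # marginal distribution of X
g = [sum(col) for col in zip(*J)]     # marginal distribution of Y

EX = sum(x * fx for x, fx in zip(xs, f))
EY = sum(y * gy for y, gy in zip(ys, g))
EXY = sum(xs[i] * ys[j] * J[i][j] for i in range(2) for j in range(2))

cov = EXY - EX * EY
sigma_x = (sum(x * x * fx for x, fx in zip(xs, f)) - EX ** 2) ** 0.5
sigma_y = (sum(y * y * gy for y, gy in zip(ys, g)) - EY ** 2) ** 0.5
rho = cov / (sigma_x * sigma_y)

# Independence would require J[i][j] == f[i]*g[j] for every cell
independent = all(abs(J[i][j] - f[i] * g[j]) < 1e-12
                  for i in range(2) for j in range(2))
```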
Sampling
Sampling distribution and standard error:
The number of units in the sample is called the sample size, denoted by n. If n ≥ 30, the sample is called large; otherwise, it is called small.
Test of significance for large samples
Binomial distribution tends to normal for large 𝑛. For a normal distribution, only 5% of
the members lie outside 𝜇 ± 1.96𝜎 and only 1% of the members lie outside 𝜇 ± 2.58𝜎.
Comparison of large samples
Standard error:
SE(x̄₁ − x̄₂) = √(s₁²/n₁ + s₂²/n₂),   if s₁, s₂ are known
SE(x̄₁ − x̄₂) = √(σ₁²/n₁ + σ₂²/n₂),   if σ₁, σ₂ are known
SE(x̄₁ − x̄₂) = σ·√(1/n₁ + 1/n₂),   if the common σ is known

SE(p₁ − p₂) = √(P₁Q₁/n₁ + P₂Q₂/n₂),   if P₁, P₂ are known
SE(p₁ − p₂) = √(PQ·(1/n₁ + 1/n₂)),   if only p₁, p₂ are known
where
P = (n₁p₁ + n₂p₂)/(n₁ + n₂) and Q = 1 − P
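A sketch of the proportions case, with made-up sample sizes and proportions; the pooled P feeds the standard error, and the resulting z is compared with 1.96:

```python
# Hypothetical large samples with only sample proportions known
n1, n2 = 400, 600
p1, p2 = 0.55, 0.50

# Pool the proportions: P = (n1*p1 + n2*p2)/(n1 + n2), Q = 1 - P
P = (n1 * p1 + n2 * p2) / (n1 + n2)
Q = 1 - P
se = (P * Q * (1 / n1 + 1 / n2)) ** 0.5
z = (p1 - p2) / se
# Here |z| < 1.96, so the difference is not significant at the 5% level
```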
Test of significance - t test
For a small sample of size n drawn from a normal population with mean μ and standard deviation σ, if x̄ and σ_s are the sample mean and sample standard deviation, the statistic t is defined as
t = [(x̄ − μ)/σ]·√n   or   t = [(x̄ − μ)/σ_s]·√(n − 1)
For two independent samples 𝑥1 , 𝑥2 , ⋯ ⋯ , 𝑥𝑛1 and 𝑦1 , 𝑦2 , ⋯ ⋯ , 𝑦𝑛2 with means
𝑥̅ 𝑎𝑛𝑑 𝑦̅ and standard deviations 𝜎𝑥 𝑎𝑛𝑑 𝜎𝑦 from a normal population with the
same variance,
t = (x̄ − ȳ) / [σ_s·√(1/n₁ + 1/n₂)]
and
σ_s² = [1/(n₁ + n₂ − 2)] · [(n₁ − 1)σ_x² + (n₂ − 1)σ_y²]
= [1/(n₁ + n₂ − 2)] · [∑ᵢ(xᵢ − x̄)² + ∑ⱼ(yⱼ − ȳ)²]
For two samples of the same size with paired data, t is defined by
t = d̄ / (σ/√n)
Where
σ² = [1/(n − 1)] ∑ᵢ₌₁ⁿ (dᵢ − d̄)²,   dᵢ = xᵢ − yᵢ,   and d̄ = (∑dᵢ)/n
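A sketch of the paired t computation with made-up before/after scores:

```python
# Hypothetical paired observations for n = 5 subjects
x = [50, 52, 53, 60, 65]
y = [48, 53, 50, 58, 62]
n = len(x)

d = [xi - yi for xi, yi in zip(x, y)]
dbar = sum(d) / n
sigma2 = sum((di - dbar) ** 2 for di in d) / (n - 1)  # variance of d, divisor n-1
t = dbar / (sigma2 ** 0.5 / n ** 0.5)
# Compare |t| with the tabulated t value for n - 1 = 4 degrees of freedom
```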
CHI-SQUARE (𝜒 2 ) TEST
The magnitude of discrepancy between observation and theory is given by the quantity 𝜒 2
χ² = ∑ (Oᵢ − Eᵢ)²/Eᵢ
Where Oᵢ − observed (tabulated) frequency
Eᵢ − expected (theoretical) frequency
with n − 1 degrees of freedom
Critical value:
Level of significance 𝛼 = 0.05 𝑜𝑟 0.01 (Always upper tailed)
Degrees of freedom γ = n − c, where c = 1 in general, c = 2 for the Poisson distribution, and c = 3 for the normal distribution.
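A goodness-of-fit sketch: 60 made-up rolls of a die, testing H₀ that the die is fair (so c = 1 and the degrees of freedom are 6 − 1 = 5):

```python
# Hypothetical observed counts for faces 1..6 over 60 rolls
observed = [16, 15, 4, 6, 14, 5]
expected = [sum(observed) / 6] * 6       # 10 per face under H0

chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
dof = 6 - 1
# chi2 = 15.4 here, which exceeds the 5% critical value for 5 d.f.,
# so H0 (a fair die) would be rejected for this made-up data
```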
F-Distribution
For two independent random samples x₁, x₂, ⋯, x_{n₁} and y₁, y₂, ⋯, y_{n₂} drawn from normal populations with the same variance σ², the ratio F is defined as
F = s₁²/s₂²,   where s₁² > s₂²
where s₁² = ∑(x − x̄)²/(n₁ − 1),   s₂² = ∑(y − ȳ)²/(n₂ − 1)
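A sketch with made-up samples; the larger unbiased variance goes in the numerator so that F ≥ 1:

```python
# Hypothetical independent samples
x = [28, 30, 32, 33, 31, 29, 34]
y = [29, 30, 30, 24, 27]

def unbiased_var(sample):
    # s^2 = sum((v - mean)^2) / (n - 1)
    m = sum(sample) / len(sample)
    return sum((v - m) ** 2 for v in sample) / (len(sample) - 1)

s1_sq, s2_sq = unbiased_var(x), unbiased_var(y)
F = max(s1_sq, s2_sq) / min(s1_sq, s2_sq)
# Degrees of freedom: (numerator sample size - 1, denominator sample size - 1)
```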
The ANOVA Technique
ANOVA table for one-way classification:
Source of variation   Sum of squares   Degrees of freedom   Mean squares          F-Ratio
Between samples       SSC              c − 1                MSC = SSC/(c − 1)     F = MSC/MSE
Within samples        SSE              N − c                MSE = SSE/(N − c)
Total                 SST              N − 1                -                     -
Expansion of abbreviations:
SSC – Sum of squares between samples (Columns)
SSE – Sum of squares within sample (Rows)
SST – Total sum of squares of variations
MSC – Mean squares of variations between samples (Columns)
MSE - Mean squares of variations within samples (Rows)
Notations:
T − Total sum of all the observations.
𝑁 − Number of observations.
𝑐 − Number of columns.
SSC = (ΣX₁)²/n₁ + (ΣX₂)²/n₂ + (ΣX₃)²/n₃ + ⋯ + (ΣXₖ)²/nₖ − T²/N
SST = ΣX₁² + ΣX₂² + ΣX₃² + ⋯ + ΣXₖ² − T²/N
𝑆𝑆𝐸 = 𝑆𝑆𝑇 − 𝑆𝑆𝐶
Working rule:
(i) Assume H₀: μ₁ = μ₂ = ⋯ = μₖ (all the population means are equal).
(ii) Construct the ANOVA table for one-way classification.
(iii) Under H₀, F = MSC/MSE if MSC > MSE, or F = MSE/MSC if MSE > MSC.
(iv) If calculated value < tabulated value, accept 𝐻0 . Reject otherwise.
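The one-way working rule can be sketched numerically; the three samples below are made up so the sums come out in round numbers:

```python
# Hypothetical samples (columns), one per treatment
samples = [
    [8, 10, 12],
    [6, 8, 10],
    [14, 16, 18],
]

all_obs = [v for s in samples for v in s]
T = sum(all_obs)                 # grand total
N = len(all_obs)                 # number of observations
c = len(samples)                 # number of columns
CF = T * T / N                   # correction factor T^2/N

SSC = sum(sum(s) ** 2 / len(s) for s in samples) - CF
SST = sum(v * v for v in all_obs) - CF
SSE = SST - SSC

MSC = SSC / (c - 1)
MSE = SSE / (N - c)
F = MSC / MSE                    # MSC > MSE for this data
# Compare F with the tabulated F value for (c - 1, N - c) degrees of freedom
```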
ANOVA for two-way classification
In a two-way classification, the data are classified according to two different criteria or factors.
Expansion of abbreviations:
SSC – Sum of squares between columns        CF – Correction Factor
SSR – Sum of squares between rows           MSC – Mean squares of variations between columns
SST – Total sum of squares of variations    MSR – Mean squares of variations between rows
SSE – Sum of squares due to errors          MSE – Mean squares of variations due to errors (residual)
Notation:
T₁, T₂, T₃, T₄ − Column totals      T − Grand total
T₅, T₆, T₇ − Row totals             N − Total number of observations
ANOVA table for two-way classification:
Source of variation   Sum of squares   Degrees of freedom   Mean squares                   F-Ratio
Between columns       SSC              c − 1                MSC = SSC/(c − 1)              F_C = MSC/MSE
Between rows          SSR              r − 1                MSR = SSR/(r − 1)              F_R = MSR/MSE
Residual              SSE              (c − 1)(r − 1)       MSE = SSE/[(c − 1)(r − 1)]     -
F_C = MSC/MSE if MSC > MSE; reciprocate otherwise.
F_R = MSR/MSE if MSR > MSE; reciprocate otherwise.
How to find SSC, SSR, SSE and SST from the following table?
         C₁    C₂    C₃    C₄    Total
R₁       a₁    b₁    c₁    d₁    T₅
R₂       a₂    b₂    c₂    d₂    T₆
R₃       a₃    b₃    c₃    d₃    T₇
Total    T₁    T₂    T₃    T₄    T
CF = T²/N
SSC = T₁²/3 + T₂²/3 + T₃²/3 + T₄²/3 − CF
SSR = T₅²/4 + T₆²/4 + T₇²/4 − CF
𝑆𝑆𝑇 = Σ𝑎𝑖2 + Σ𝑏𝑖2 + Σ𝑐𝑖2 + Σ𝑑𝑖2 − 𝐶𝐹
SSE = SST − SSC − SSR
Working rule:
(i) Assume 𝐻0 :There is no significant difference between rows and between columns.
(ii) Construct ANOVA table for two-way classification.
(iii) Under H₀, F_C = MSC/MSE if MSC > MSE, and F_R = MSR/MSE if MSR > MSE.
(iv) If calculated value < tabulated value, accept 𝐻0 . Otherwise reject.
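The two-way working rule can be sketched for a hypothetical 3 × 4 table (all values made up); the shortcut sums of squares are checked against the direct deviation form of SST:

```python
# Hypothetical table: 3 rows, 4 columns
table = [
    [12, 14, 16, 18],
    [10, 13, 14, 16],
    [8, 10, 12, 15],
]
r, c = len(table), len(table[0])
N = r * c
T = sum(sum(row) for row in table)
CF = T * T / N                                  # correction factor

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]

SSC = sum(t * t / r for t in col_totals) - CF   # each column has r entries
SSR = sum(t * t / c for t in row_totals) - CF   # each row has c entries
SST = sum(v * v for row in table for v in row) - CF
SSE = SST - SSC - SSR

MSC, MSR = SSC / (c - 1), SSR / (r - 1)
MSE = SSE / ((c - 1) * (r - 1))
FC, FR = MSC / MSE, MSR / MSE
```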