Understanding Algorithms and Their Analysis

INTRODUCTION

An Algorithm is a finite set of steps to complete a task. It is a blueprint, or the logic, of a program: a set of rules to obtain the expected output from the given input.

Input → Algorithm → Output

Algorithm vs. Program

Given a problem to solve, the design phase produces an algorithm and the implementation
phase then produces a program that expresses the designed algorithm.

Characteristics of an algorithm:

Every algorithm must satisfy the following characteristics.

1. Input: Zero or more quantities are externally supplied.
2. Output: At least one quantity is produced.
3. Definiteness: Each step is clear, unambiguous, and precisely defined.
4. Finiteness: The algorithm must terminate after a finite time, or a finite number of steps.
5. Correctness: A correct set of output values must be produced from each set of inputs.
6. Effectiveness: The logic of the algorithm must be appropriate; one must be able to perform each step without applying any intelligence.
7. Efficiency: An algorithm is efficient if it uses as little running time and memory space as possible.
8. Feasibility: An algorithm must be simple, generic, and practical, so that it can be executed on the available resources; it must not rely on any future technology.
9. Independence: It must be independent of any programming language, i.e. it should specify only what the inputs and outputs are and how the outputs are derived.

Dr. A. K. Panda

Different Ways to Express an Algorithm:

1. Natural language
2. Flow charts
3. Pseudocode: A code that uses all the constructs of a programming language, but
doesn't actually run anywhere.
4. Actual programming languages.

Design of Algorithm

Algorithm design refers to a method or process of solving a problem. There are two ways
to design an algorithm. They are:

1. Top-down approach (Iterative algorithm): In the iterative algorithm, the function


repeatedly runs until the condition is met or it fails.
2. Bottom-up approach (Recursive algorithm): In the recursive approach, the function
calls itself until the condition is met.

For example, two algorithms to find the factorial of a given number n are given below:

1. Iterative:
Fact(n)
{
    fact = 1;
    for i = 1 to n
        fact = fact * i;
    return fact;
}

Here the factorial is calculated as 1 × 2 × 3 × . . . × n.

2. Recursive:
Fact(n)
{
    if n = 0 return 1;
    else return n * Fact(n - 1);
}

Here the factorial is calculated as n × (n − 1) × . . . × 1.
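The two pseudocode versions above can be transcribed directly into runnable Python (the accumulator starts at 1):

```python
def fact_iter(n):
    # Iterative: multiply 1 * 2 * ... * n in a loop.
    fact = 1
    for i in range(1, n + 1):
        fact = fact * i
    return fact

def fact_rec(n):
    # Recursive: n * Fact(n - 1), with base case n = 0.
    if n == 0:
        return 1
    return n * fact_rec(n - 1)
```

Both return the same value; the iterative version uses constant extra space, while the recursive one uses a call stack of depth n.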


Analysis of Algorithm:

Analysis is the process of estimating the efficiency of an algorithm. An algorithm is efficient if it uses less memory and has a shorter running time.

Problem → Algorithm 1, Algorithm 2, Algorithm 3, …, Algorithm n

A problem can be solved in multiple ways. So we can say that we may have many
algorithms to solve a particular problem. Performance analysis (Efficiency) helps us to
select the best algorithm from many algorithms to solve a particular problem.

There are two basic parameters which are used to find the efficiency of an algorithm. They
are:

 The amount of memory used (Space Complexity)


 The amount of compute time consumed on any CPU (Time Complexity)

Time complexity of an algorithm can be calculated by using two methods:

1. Priori Analysis: Analysis of an algorithm before its execution is called priori analysis.
2. Posteriori Analysis: Analysis of an algorithm after its execution is called posteriori analysis.

While analyzing the time complexity of an algorithm, we are usually concerned with priori
analysis.


Difference between a priori analysis and posteriori analysis:

Priori Analysis | Posteriori Analysis
1. It is an absolute analysis. | It is a relative analysis.
2. It is done before execution of the algorithm. | It is done after execution of the algorithm.
3. It gives an approximate answer. | It gives an exact answer.
4. It is independent of the language, compiler and hardware. | It depends on the language, compiler and type of hardware.
5. It uses asymptotic notations. | It does not use asymptotic notations.
6. Time complexity is the same for every system. | Time complexity differs from system to system.

Space Complexity & Time Complexity:

Space Complexity:

It measures the total amount of memory or storage space an algorithm needs to complete.
It includes both auxiliary space and space used by the input. Auxiliary space is the
temporary space or extra space used by an algorithm.

S(P) = C + S_P(I)

where C is a constant representing the fixed space requirement, and S_P(I) is the variable space requirement, which depends on the instance characteristic I.

While analyzing the space complexity of an algorithm, we are usually concerned with only the variable space requirement.

Example:

Algorithm abcd(a, b, c, d)
    return a + b + b * c + (a + b - d) / (a + b) - d;

C = 4 (one word each for a, b, c, d); S_P(I) = 0; S(abcd) = 4 + 0 = 4.


Example:

Algorithm sum(n) // sum of n numbers
    int i, sum = 0;
    for (i = n; i >= 1; i--)
        sum = sum + i;
    return sum;

C = 1; S_P(I) = 2 (one word each for i and sum); S(sum) = 1 + 2 = 3.

Time Complexity: It is the amount of time taken to run an algorithm.

Time complexity is defined in terms of how many elementary steps an algorithm takes as a function of the length of the input. It is not a measurement of the wall-clock time of a particular run, because factors such as the programming language, operating system, and processing power would then come into play. Time complexity is a type of computational complexity that describes the time required to execute an algorithm.

Step Count Method:

This method is used to calculate Time and space complexity.

Procedure:

 There is no count for "{" and "}".
 Each basic statement like assignment and return has a count of 1.
 If a basic statement is iterated, multiply its count by the number of times the loop runs.
 A loop statement that iterates n times has a count of (n + 1): the loop test is true n times, and one additional check is performed for the loop exit (the false condition), hence the extra 1 in the count.
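As an illustration of these rules (an instrumented sketch with hypothetical helper names, not part of the notes), the array-summation algorithm can carry an explicit step counter, which reproduces the 2n + 3 total:

```python
def sum_with_steps(a):
    """Sum an array while counting steps by the rules above."""
    n = len(a)
    steps = 0
    total = 0
    steps += 1              # assignment: sum = 0
    i = 0
    while i < n:
        steps += 1          # loop test, true n times
        total += a[i]       # body assignment, runs n times
        steps += 1
        i += 1
    steps += 1              # final loop test (false case, loop exit)
    steps += 1              # return statement
    return total, steps     # steps = (n + 1) + n + 1 + 1 = 2n + 3
```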


Example:

1. Sum of elements in an array

Statements | SC (Time Complexity) | SC (Space Complexity)
Algorithm Sum(a, n) | 0 |
{ | 0 |
sum = 0 | 1 | 1 word for sum
for i = 1 to n do | n + 1 | 1 word each for i and n
sum = sum + a[i]; | n | n words for the array a[]
return sum; | 1 |
} | 0 |
Total: | 2n + 3 | (n + 3) words

2. Adding two matrices of order m × n

Statements | Step Count (Time Complexity)
Algorithm Add(a, b, c, m, n) | 0
{ | 0
for i = 1 to m do | m + 1
for j = 1 to n do | m(n + 1)
c[i, j] = a[i, j] + b[i, j] | mn
} | 0
Total: | 2mn + 2m + 1

3. To find the nth number in the Fibonacci series

Statements | Step Count (Time Complexity)
Algorithm Fibonacci(n) | 0
{ | 0
if n ≤ 1 then | 1
write(n) | 0
else | 0
f2 = 0; | 1
f1 = 1; | 1
for i = 2 to n do | n
{ | 0
f = f1 + f2; | n − 1
f2 = f1; | n − 1
f1 = f; | n − 1
} | 0
write(f) | 1
} | 0
Total: | 4n + 1


Rate of Growth:

Rate of growth is defined as the rate at which the running time of the algorithm is increased
when the input size is increased. We will ignore the lower order terms, since the lower
order terms are relatively insignificant for large input.

For example

Let us assume that you go to a shop to buy a car and a bicycle. If a friend sees you there and asks what you are buying, you would generally say "a car". This is because the cost of the car is so large compared to the cost of the bicycle that the bicycle's cost can be ignored.

𝑇𝑜𝑡𝑎𝑙 𝑐𝑜𝑠𝑡 = 𝑐𝑜𝑠𝑡 𝑜𝑓 𝑐𝑎𝑟 + 𝑐𝑜𝑠𝑡 𝑜𝑓 𝑏𝑖𝑐𝑦𝑐𝑙𝑒

𝑇𝑜𝑡𝑎𝑙 𝑐𝑜𝑠𝑡 ≈ 𝑐𝑜𝑠𝑡 𝑜𝑓 𝑐𝑎𝑟

If the time complexity of algorithm A is 100n + 1 and the time complexity of algorithm B is n² + n + 1, then:

Input Size | Run Time of Algorithm A | Run Time of Algorithm B
10 | 1,001 | 111
100 | 10,001 | 10,101
1000 | 100,001 | 1,001,001
10000 | 1,000,001 | 100,010,001

100n + 1 ≈ n
n² + n + 1 ≈ n²

Hence, the growth rate of algorithm A is linear and that of algorithm B is quadratic.
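The comparison above can be reproduced with a short script (a sketch; the two formulas are taken from the text):

```python
def runtime_a(n):
    return 100 * n + 1          # algorithm A: linear

def runtime_b(n):
    return n * n + n + 1        # algorithm B: quadratic

# A has the larger constant factor, but B overtakes it as n grows.
table = [(n, runtime_a(n), runtime_b(n)) for n in (10, 100, 1000, 10000)]
```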

Time Complexity | Name | Example
1 | Constant | Adding an element to the front of a linked list
log n | Logarithmic | Finding an element in a sorted array
n | Linear | Finding an element in an unsorted array


n log n | Linear Logarithmic | Sorting n items by 'Divide and Conquer'
n² | Quadratic | Shortest path between 2 nodes in a graph
n³ | Cubic | Matrix multiplication
2ⁿ | Exponential | The Towers of Hanoi problem

The execution time for six of the typical functions is given below:

n | log₂ n | n·log₂ n | n² | n³ | 2ⁿ
1 | 0 | 0 | 1 | 1 | 2
2 | 1 | 2 | 4 | 8 | 4
4 | 2 | 8 | 16 | 64 | 16
8 | 3 | 24 | 64 | 512 | 256
16 | 4 | 64 | 256 | 4096 | 65,536
32 | 5 | 160 | 1024 | 32,768 | 4,294,967,296


The relationship between the different rates of growth, arranged in decreasing order, is:

n! > 4ⁿ > 2ⁿ > n² > n log n ≈ log(n!) > n > 2^(log n) > log² n > √(log n) > log log n

(n log n and log(n!) are approximately the same.)


Best, Worst, and Average Case Complexity:

In analyzing algorithms, we consider three types of time complexity:

1. Best-case complexity: This represents the minimum time required for an


algorithm to complete when given the optimal input. It denotes an algorithm
operating at its peak efficiency under ideal circumstances.
2. Worst-case complexity: This denotes the maximum time an algorithm will take to
finish for any given input. It represents the scenario where the algorithm
encounters the most unfavourable input.
3. Average-case complexity: This estimates the typical running time of an algorithm
when averaged over all possible inputs. It provides a more realistic evaluation of an
algorithm's performance.

Asymptotic Notations:

Asymptotic notations are a way to express the time and space complexity of an algorithm; they describe its running time as a function of input size. If we have more than one algorithm for the same problem, the algorithm with the lesser complexity should be selected. To represent these complexities, asymptotic notations are used.
There are five asymptotic notations:
 Big Oh − 𝑂(𝑛)
 Big Theta − 𝛳(𝑛)
 Big Omega − 𝛺(𝑛)
 Little oh − 𝑜(𝑛)
 Little omega − 𝜔(𝑛)

Big-oh (O) notation:

It is the formal method of expressing the upper bound of an algorithm’s running time. It is
the measure of the longest amount of time it could possibly take for the algorithm to
complete.

Let 𝑓(𝑛) and 𝑔(𝑛) be asymptotically non-negative functions.

We say that f(n) is in O(g(n)) if there exist a positive integer n₀ and a real positive constant c > 0 such that for all integers n ≥ n₀,

0 ≤ f(n) ≤ c·g(n)

O(g(n)) = {f(n): there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀}.
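A claimed Big-O bound comes with a witness pair (c, n₀). A small checker (our own sketch, with hypothetical names) can falsify a bad witness over a finite range; passing the check is evidence, not a proof:

```python
def check_big_o_witness(f, g, c, n0, n_max=1000):
    """Check 0 <= f(n) <= c*g(n) for every integer n0 <= n <= n_max."""
    return all(0 <= f(n) <= c * g(n) for n in range(n0, n_max + 1))
```

For instance, f(n) = 3n + 5 with witness c = 8, n₀ = 1 passes for g(n) = n, while n² = O(n) fails for any fixed c once the range is long enough.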


Guiding principles

Following are some principles that are normally followed while using big-oh (O) notations:

1. The coefficients of higher order terms should be ignored.
2. The lower order terms should be ignored.
3. The base of a logarithm may be changed from one constant to another, since this changes the value only by a constant factor.

For example, if T₁(n) = O(f(n)) and T₂(n) = O(g(n)), then
a) T₁(n) + T₂(n) = max[O(f(n)), O(g(n))]
b) T₁(n) × T₂(n) = O(f(n) × g(n))
c) O(f(n)) × C = O(f(n)) for any constant C.

Example 1:

𝑓(𝑛) = 13

𝑓(𝑛) ≤ 13 × 1

Here, 𝑐 = 13 and 𝑛0 = 0 and 𝑔(𝑛) = 1

Hence, 𝑓(𝑛) = 𝑂(1)

Example 2:

f(n) = 3n + 5

3n + 5 ≤ 3n + 5n = 8n for n ≥ 1

Here, c = 8 and n₀ = 1

Hence, f(n) = O(n)

Example 3:

𝑓(𝑛) = 3𝑛2 + 5

3𝑛2 + 5 ≤ 3𝑛2 + 5𝑛

3𝑛2 + 5 ≤ 3𝑛2 + 5𝑛2

3𝑛2 + 5 ≤ 8𝑛2

Here, 𝑐 = 8 and 𝑛0 = 1

Hence, 𝑓(𝑛) = 𝑂(𝑛2 )

Example 4:

𝑓(𝑛) = 7𝑛2 + 5𝑛

7𝑛2 + 5𝑛 ≤ 7𝑛2 + 5𝑛2

7𝑛2 + 5𝑛 ≤ 12𝑛2

Here, 𝑐 = 12 and 𝑛0 = 1

Hence, 𝑓(𝑛) = 𝑂(𝑛2 )

Example 5:

f(n) = 2ⁿ + 6n² + 3n

2ⁿ + 6n² + 3n ≤ 2ⁿ + 6n² + 3n² ≤ 2ⁿ + 6·2ⁿ + 3·2ⁿ = 10·2ⁿ    (using n² ≤ 2ⁿ, which holds for n ≥ 4)

Here, c = 10 and n₀ = 4

Hence, f(n) = O(2ⁿ)

Example 6:

Prove that f(n) = n! = O(nⁿ)

Proof:

f(n) = n! = n × (n − 1) × (n − 2) × … × 2 × 1

Each of the n factors is at most n, so

n! ≤ n × n × … × n = nⁿ

Hence, with c = 1 and n₀ = 1, f(n) = O(nⁿ)

Big Omega (𝛀) Notation:

Big Omega (Ω) is the method used for expressing the lower bound of an algorithm’s
running time. It is the measure of the smallest amount of time it could possibly take for the
algorithm to complete.
Let f(n) and g(n) be asymptotically non-negative functions.

We say that f(n) is Ω(g(n)) if there exist a positive integer n₀ and a positive constant c such that for all integers n ≥ n₀,

0 ≤ c·g(n) ≤ f(n)

Ω(g(n)) = {f(n): there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀}.


Example 1:

𝑓(𝑛) = 13
𝑓(𝑛) ≥ 12 × 1
where 𝑐 = 12 and 𝑛0 = 0
Hence, 𝑓(𝑛) = Ω(1)

Example 2:
𝑓(𝑛) = 3𝑛 + 5
3𝑛 + 5 > 3𝑛
where 𝑐 = 3 and 𝑛0 = 1
Hence, f(n) = Ω(n)

Example 3:
𝑓(𝑛) = 3𝑛2 + 5
3𝑛2 + 5 > 3𝑛2
where 𝑐 = 3 and 𝑛0 = 1
Hence, 𝑓(𝑛) = Ω(𝑛2 )

Example 4:
𝑓(𝑛) = 7𝑛2 + 5𝑛
7𝑛2 + 5𝑛 > 7𝑛2
where 𝑐 = 7 and 𝑛0 = 1
Hence, 𝑓(𝑛) = Ω(𝑛2 )

Example 5:
𝑓(𝑛) = 2𝑛 + 6𝑛2 + 3𝑛
2𝑛 + 6𝑛2 + 3𝑛 > 2𝑛
where 𝑐 = 1 and 𝑛0 = 1
Hence, 𝑓(𝑛) = Ω(2𝑛 )

Big Theta Notation:

It is the method of expressing the tight bound of an algorithm’s running time.

Let 𝑓 (𝑛) and 𝑔(𝑛) be asymptotically non-negative functions.

We say 𝑡ℎ𝑎𝑡 𝑓 (𝑛) is 𝜃( 𝑔 ( 𝑛 )) if there exists a positive integer 𝑛0 and positive constants
𝑐1 and 𝑐2 such that for all integers 𝑛 ≥ 𝑛0 ,

0 ≤ 𝑐1 𝑔(𝑛) ≤ 𝑓(𝑛) ≤ 𝑐2 𝑔(𝑛)


θ(g(n)) = {f(n): there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀}.

Example 1:
𝑓(𝑛) = 123
122 × 1 ≤ 𝑓(𝑛) ≤ 123 × 1
Here, 𝑐1 = 122, 𝑐2 = 123 and 𝑛0 = 0
Hence, 𝑓(𝑛) = 𝜃(1)

Example 2:
𝑓(𝑛) = 3𝑛 + 5
3𝑛 < 3𝑛 + 5 ≤ 4𝑛
Here, 𝑐1 = 3, 𝑐2 = 4 and 𝑛0 = 5
Hence, 𝑓(𝑛) = 𝜃(𝑛)

Example 3:
𝑓(𝑛) = 3𝑛2 + 5
3𝑛2 < 3𝑛2 + 5 ≤ 4𝑛2
Here, 𝑐1 = 3, 𝑐2 = 4 and 𝑛0 = 5
Hence, 𝑓(𝑛) = 𝜃(𝑛2 )

Example 4:
𝑓(𝑛) = 7𝑛2 + 5𝑛
7𝑛2 < 7𝑛2 + 5𝑛 for all 𝑛, 𝑐1 = 7
Also, 7𝑛2 + 5𝑛 ≤ 8𝑛2 for 𝑛 ≥ 𝑛0 = 5, 𝑐2 = 8
Hence, 𝑓(𝑛) = 𝜃(𝑛2 )

Example 5:
f(n) = 2ⁿ + 6n² + 3n
2ⁿ < 2ⁿ + 6n² + 3n ≤ 2ⁿ + 6n² + 3n² ≤ 2ⁿ + 6·2ⁿ + 3·2ⁿ = 10·2ⁿ    (using n² ≤ 2ⁿ for n ≥ 4)
Here, c₁ = 1, c₂ = 10 and n₀ = 4
Hence, f(n) = θ(2ⁿ)


Example 7:

Prove that ∑_{i=1}^{n} log(i) = θ(n log n)

Proof:

f(n) = ∑_{i=1}^{n} log(i) = log 1 + log 2 + ⋯ + log n = 0 + log 2 + ⋯ + log n

= log(2 × 3 × 4 × … × n) = log n! = θ(n log n)    (since log n! = θ(n log n))

Example 8:

Prove that (1/2)n(n − 1) = θ(n²)

Proof:

(1/2)n(n − 1) = (1/2)n² − (1/2)n ≤ (1/2)n² for all n ≥ 0 ………(1)

(1/2)n(n − 1) = (1/2)n² − (1/2)n ≥ (1/2)n² − (1/2)n·(n/2) for all n ≥ 2

⇒ (1/2)n(n − 1) ≥ (1/2)n² − (1/4)n² for all n ≥ 2

⇒ (1/2)n(n − 1) ≥ (1/4)n² for all n ≥ 2 ………(2)

From (1) and (2): (1/4)n² ≤ (1/2)n(n − 1) ≤ (1/2)n² for all n ≥ 2

Here c₁ = 1/4, c₂ = 1/2 and n₀ = 2

∴ (1/2)n(n − 1) = θ(n²)
Example 9:

Prove that (1/2)n² − 3n = θ(n²)

Proof:

We need to find positive constants c₁ and c₂ such that

0 ≤ c₁n² ≤ (1/2)n² − 3n ≤ c₂n²

Dividing by n² we get

0 ≤ c₁ ≤ 1/2 − 3/n ≤ c₂

c₁ ≤ 1/2 − 3/n holds for n ≥ 10 and c₁ = 1/5

1/2 − 3/n ≤ c₂ holds for n ≥ 10 and c₂ = 1

Thus if c₁ = 1/5, c₂ = 1 and n₀ = 10,

0 ≤ c₁n² ≤ (1/2)n² − 3n ≤ c₂n² for all n ≥ n₀

∴ (1/2)n² − 3n = θ(n²)
2
Little-oh (o) Notation

The asymptotic upper bound provided by Big-oh(O) notation may or may not be
asymptotically tight.

For Example, 2𝑛2 = 𝑂(𝑛2 ) is asymptotically tight but 2𝑛 = 𝑂(𝑛2 ) is not.

We use Little-Oh(o) notation to denote an upper bound that is not asymptotically tight.

𝑜(𝑔(𝑛)) = {𝑓(𝑛): for any positive constant 𝑐 > 0 , there exists a constant 𝑛0 > 0 such
that 0 ≤ 𝑓(𝑛) < 𝑐𝑔(𝑛) for all 𝑛 ≥ 𝑛0 }.

For example, 2𝑛 = 𝑜(𝑛2 ) but 2𝑛2 ≠ 𝑜(𝑛2 )

The main difference between Big-oh (O) notation and Little-oh (o) Notation is that in
𝑓(𝑛) = 𝑂(𝑔(𝑛)), the bound 0 ≤ 𝑓(𝑛) ≤ 𝑐𝑔(𝑛) holds for some constant 𝑐 > 0, but in
𝑓(𝑛) = 𝑜(𝑔(𝑛)), the bound 0 ≤ 𝑓(𝑛) < 𝑐𝑔(𝑛) holds for all constant 𝑐 > 0.

Little omega (𝝎) Notation:

The asymptotic lower bound provided by Big Omega (Ω) notation may or may not be asymptotically tight. We use Little omega (ω) notation to denote a lower bound that is not asymptotically tight.

𝜔( 𝑔 ( 𝑛 )) = {𝑓(𝑛): for any positive constant 𝑐 > 0 there exists a constant 𝑛0 > 0, such
that 0 ≤ 𝑐𝑔(𝑛) < 𝑓(𝑛) for all 𝑛 ≥ 𝑛0 }.


For example, n²/2 = ω(n), but n²/2 ≠ ω(n²).

Example (for constants a > 0):

an² + bn + c = ω(1)

an² + bn + c = ω(n)

an² + bn + c = θ(n²)

an² + bn + c = Ω(1)

an² + bn + c = Ω(n)

an² + bn + c = O(n²)

an² + bn + c = O(n⁶)

an² + bn + c = O(n⁵⁰)

an² + bn + c = o(n¹⁹)

Also,

an² + bn + c ≠ o(n²)

an² + bn + c ≠ ω(n¹⁹)

an² + bn + c ≠ O(n)

Some more incorrect bounds are as follows:

7𝑛 + 5 ≠ 𝑂(1)

2𝑛 + 3 ≠ 𝑂(1)

3𝑛2 + 16𝑛 + 2 ≠ 𝑂(𝑛)

5𝑛3 + 𝑛2 + 3𝑛 + 2 ≠ 𝑂(𝑛2 )

7𝑛 + 5 ≠ Ω(𝑛2 )

2𝑛 + 3 ≠ Ω(𝑛3 )

10𝑛2 + 7 ≠ Ω(𝑛4 )


7𝑛 + 5 ≠ 𝜃(𝑛2 )

2𝑛2 + 3 ≠ 𝜃(𝑛3 )

Some more loose bounds are as follows:

2𝑛 + 3 = 𝑂(𝑛2 )

4𝑛2 + 5𝑛 + 6 = 𝑂(𝑛4 )

5𝑛2 + 3 = Ω(1)

2𝑛3 + 3𝑛2 + 2 = Ω(𝑛2 )

Some correct bounds are as follows:

2𝑛 + 8 = 𝑂(𝑛)

2𝑛 + 8 = 𝑂(𝑛2 )

2𝑛 + 8 = 𝜃(𝑛)

2𝑛 + 8 = Ω(𝑛)

2𝑛 + 8 = 𝑜(𝑛2 )

2𝑛 + 8 ≠ 𝑜(𝑛)

2𝑛 + 8 ≠ 𝜔(𝑛)

4𝑛2 + 3𝑛 + 9 = 𝑂(𝑛2 )

4𝑛2 + 3𝑛 + 9 = Ω(𝑛2 )

4𝑛2 + 3𝑛 + 9 = 𝜃(𝑛2 )

4𝑛2 + 3𝑛 + 9 = 𝑜(𝑛3 )

4𝑛2 + 3𝑛 + 9 ≠ 𝑜(𝑛2 )

4𝑛2 + 3𝑛 + 9 ≠ 𝜔(𝑛2 )

Asymptotic comparison of functions:

 f(n) = O(g(n)) ≈ f ≤ g
 f(n) = θ(g(n)) ≈ f = g
 f(n) = Ω(g(n)) ≈ f ≥ g
 f(n) = o(g(n)) ≈ f < g
 f(n) = ω(g(n)) ≈ f > g

N.B.:

 𝑓(𝑛) is asymptotically smaller than 𝑔(𝑛) if 𝑓(𝑛) = 𝑜(𝑔(𝑛)).


 𝑓(𝑛) is asymptotically larger than 𝑔(𝑛) if 𝑓(𝑛) = 𝜔(𝑔(𝑛)).

Properties of Asymptotic Notations

Reflexivity

 𝑓(𝑛) = 𝑂(𝑓(𝑛))
 𝑓(𝑛) = 𝜃(𝑓(𝑛))
 𝑓(𝑛) = Ω(𝑓(𝑛))

Symmetry

𝑓(𝑛) = 𝜃(𝑔(𝑛)) if and only if 𝑔(𝑛) = 𝜃(𝑓(𝑛))

Transitivity

 𝑓(𝑛) = 𝑂(𝑔(𝑛)) and 𝑔(𝑛) = 𝑂(ℎ(𝑛)) ⇒ 𝑓(𝑛) = 𝑂(ℎ(𝑛))


 𝑓(𝑛) = 𝜃(𝑔(𝑛)) and 𝑔(𝑛) = 𝜃(ℎ(𝑛)) ⇒ 𝑓(𝑛) = 𝜃(ℎ(𝑛))
 𝑓(𝑛) = Ω(𝑔(𝑛)) and 𝑔(𝑛) = Ω(ℎ(𝑛)) ⇒ 𝑓(𝑛) = Ω(ℎ(𝑛))
 𝑓(𝑛) = 𝑜(𝑔(𝑛)) and 𝑔(𝑛) = 𝑜(ℎ(𝑛)) ⇒ 𝑓(𝑛) = 𝑜(ℎ(𝑛))
 𝑓(𝑛) = 𝜔(𝑔(𝑛)) and 𝑔(𝑛) = 𝜔(ℎ(𝑛)) ⇒ 𝑓(𝑛) = 𝜔(ℎ(𝑛))

Transpose Symmetry

𝑓(𝑛) = 𝑂(𝑔(𝑛)) if and only if 𝑔(𝑛) = Ω(𝑓(𝑛))

𝑓(𝑛) = 𝑜(𝑔(𝑛)) if and only if 𝑔(𝑛) = 𝜔(𝑓(𝑛))


Comparing the growth rate of two functions using limits

Rules:
1. f(n) = O(g(n)) if lim_{n→∞} f(n)/g(n) = c < ∞, where c ∈ ℝ (can be zero)
2. f(n) = Ω(g(n)) if lim_{n→∞} f(n)/g(n) > 0 (can be ∞)
3. f(n) = θ(g(n)) if lim_{n→∞} f(n)/g(n) = c, c ∈ ℝ⁺
4. f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0
5. f(n) = ω(g(n)) if lim_{n→∞} f(n)/g(n) = ∞

Example:

Compare the growth rate of two functions 𝑓(𝑛) = 𝑛2 and 𝑔(𝑛) = 2𝑛 using limits.

Solution:

lim_{n→∞} f(n)/g(n) = lim_{n→∞} n²/2ⁿ = lim_{n→∞} 2n/(2ⁿ ln 2) = lim_{n→∞} 2/(2ⁿ (ln 2)²) = 0

Since the limit is equal to 0, the growth rate of g(n) = 2ⁿ is greater than the growth rate of f(n) = n².
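The limit rule can also be illustrated numerically: the ratio n²/2ⁿ collapses toward 0 quickly (a numeric illustration of rule 4, not a proof):

```python
def ratio(n):
    # f(n)/g(n) for f(n) = n^2 and g(n) = 2^n
    return n * n / 2 ** n
```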

Example:

Compare the growth rate of two functions 𝑓(𝑛) = 4𝑛3 + 2𝑛 + 4 and 𝑔(𝑛) = 2𝑛3 − 100𝑛
using limits.

Solution:

lim_{n→∞} f(n)/g(n) = lim_{n→∞} (4n³ + 2n + 4)/(2n³ − 100n)

Dividing numerator and denominator by n³:

= lim_{n→∞} (4 + 2/n² + 4/n³)/(2 − 100/n²) = (4 + 0 + 0)/(2 − 0) = 2

Since the limit is equal to 2, f(n) = 4n³ + 2n + 4 grows at 2 times the rate of g(n) = 2n³ − 100n for sufficiently large n.


Comparing the growth rate of two functions using logarithms:

Rules:

1. log(ab) = log a + log b
2. log(a/b) = log a − log b
3. log aᵇ = b log a
4. a^(log_c b) = b^(log_c a)
5. If aᵇ = n, then b = log_a n

Example:

Compare the growth rate of two functions 𝑓(𝑛) = 𝑛2 and 𝑔(𝑛) = 𝑛3using logarithms.

Solution:

𝑓(𝑛) = 𝑛2 and 𝑔(𝑛) = 𝑛3

Taking logarithms of both functions:

log f(n) = log n² = 2 log n and log g(n) = log n³ = 3 log n

2 log n < 3 log n

Hence, the growth rate of 𝑔(𝑛) = 𝑛3 is higher than the growth rate of 𝑓(𝑛) = 𝑛2

Example:

Compare the growth rate of two functions 𝑓(𝑛) = 𝑛2 log 𝑛 and 𝑔(𝑛) = 𝑛 (log 𝑛)10using
logarithms.

Solution:

𝑓(𝑛) = 𝑛2 log 𝑛 and 𝑔(𝑛) = 𝑛 (log 𝑛)10

Taking logarithms of both functions:

log f(n) = log(n² log n) = log n² + log log n = 2 log n + log log n

log g(n) = log(n (log n)¹⁰) = log n + 10 log log n

For large n, 2 log n + log log n > log n + 10 log log n, hence n² log n > n (log n)¹⁰


Hence, the growth rate of 𝑓(𝑛) is higher than the growth rate of 𝑔(𝑛)

Example:

Compare the growth rates of the two functions f(n) = 3n^(√n) and g(n) = 2^(√n log n) using logarithms.

Solution:

f(n) = 3n^(√n) and g(n) = 2^(√n log n)

g(n) = 2^(√n log n) = (2^(log n))^(√n)    (Rule 3)

= n^(√n)    (Rule 4, since 2^(log n) = n)

Hence, f(n) = 3n^(√n) > n^(√n) = g(n), i.e. 3n^(√n) > 2^(√n log n)

Example:

Show that log x = o(x)

Proof:

lim_{x→∞} (log x)/x = lim_{x→∞} (1/x)/1 = 0    (by L'Hôpital's rule)

∴ log x = o(x)

Example:

Arrange the following list of functions in ascending order of growth rate.

log 𝑛, 𝑛!, 𝑛2 / log 𝑛, 𝑛. 2𝑛 , (log 𝑛)log 𝑛 , 3𝑛

Solution:

log n, n²/log n, (log n)^(log n), n·2ⁿ, 3ⁿ, n!

(Note that (log n)^(log n) = n^(log log n), which eventually exceeds n²/log n.)
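Such orderings can be sanity-checked numerically by comparing the natural logarithm of each function at one large n (using `math.lgamma` for ln n!); this is only suggestive, since a single sample cannot prove an asymptotic claim:

```python
import math

def ln_values(n):
    """Natural log of each candidate function, so huge values stay comparable."""
    ln_n = math.log(n)
    ln_ln_n = math.log(ln_n)
    return {
        "log n": ln_ln_n,                      # ln(ln n)
        "n^2 / log n": 2 * ln_n - ln_ln_n,
        "(log n)^(log n)": ln_n * ln_ln_n,
        "n * 2^n": ln_n + n * math.log(2),
        "3^n": n * math.log(3),
        "n!": math.lgamma(n + 1),              # ln(n!)
    }

vals = ln_values(10**6)
ascending = sorted(vals, key=vals.get)
```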

Example:

Arrange the following list of functions in ascending order of growth rate:

2^(√log n), 2ⁿ, n^(4/3), n(log n)³, n^(log n), 2^(2ⁿ), 2^(n²)

Solution:

If f(n) = O(g(n)), then g(n) follows f(n).

n(log n)³ = O(n^(4/3))

2ⁿ = O(2^(n²))

2^(n²) = O(2^(2ⁿ))

2^(√log n) = O(2^(log n)) ⇒ 2^(√log n) = O(n)

n^(4/3) = O(n^(log n)) since 4/3 = O(log n)

n^(log n) = 2^(log(n^(log n))) = 2^(log n · log n) = 2^((log n)²) = O(2ⁿ)

Therefore the correct order is: 2^(√log n), n(log n)³, n^(4/3), n^(log n), 2ⁿ, 2^(n²), 2^(2ⁿ)


RECURRENCES

A recurrence relation or simply recurrence for a sequence a0, a1, … is an equation that
relates an to some of the terms a0, a1, …, an-1.

Initial conditions or, base condition for the sequence a0, a1, … are explicitly given values for
a finite number of the terms of the sequence.

For example, to compute factorial of a number recursively we use the following algorithm:

factorial(n)
{
    if (n == 1)
        return 1;
    else
        return n * factorial(n - 1);
}

The time complexity of the above recursive algorithm is

T(n) = T(n − 1) + 1
     = [T(n − 2) + 1] + 1 = T(n − 2) + 2
     = [T(n − 3) + 1] + 2 = T(n − 3) + 3
     = …
     = T(1) + (n − 1) = n    (taking T(1) = 1)

Different techniques to solve recurrence relations:


1. Iterative Method
2. Substitution method
3. The recursion tree method
4. Master’s theorem

Iterative Method:

This method is also called the repeated substitution method. In this method, we keep substituting the smaller terms again and again until we reach the base condition.

Example:

Solve the following recurrence equation:

T(n) = 1,                n = 1
T(n) = 2T(n − 1) + 1,    n ≥ 2


Solution:

⇒ 𝑇(𝑛) = 1 + 2 ∙ 𝑇(𝑛 − 1)

⇒ 𝑇(𝑛) = 1 + 2(1 + 2𝑇(𝑛 − 2))

⇒T(𝑛) = 1 + 2 + 4 ∙ 𝑇(𝑛 − 2)

⇒T(𝑛) = 1 + 2 + 4 ∙ (1 + 2𝑇(𝑛 − 3))

⇒ 𝑇(𝑛) = 1 + 2 + 4 + 8 ∙ 𝑇(𝑛 − 3)

⇒ 𝑇(𝑛) = 1 + 2 + 22 + 23 ∙ 𝑇(𝑛 − 3)

………………………………………

… … … … … … … … … … … … … … ….

⇒ 𝑇(𝑛) = 1 + 2 + 22 + 23 + ⋯ + 2𝑛−1 ∙ 𝑇(1)

⇒ 𝑇(𝑛) = 1 + 2 + 22 + 23 + ⋯ + 2𝑛−1 (Since, 𝑇(1) = 1)

⇒ T(n) = (2ⁿ − 1)/(2 − 1)    (geometric sum formula)

⇒ T(n) = 2ⁿ − 1
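The closed form can be checked against the recurrence directly (a small sketch; the base case T(1) = 1 is taken from the recurrence above):

```python
def T(n):
    # T(1) = 1; T(n) = 2*T(n - 1) + 1 for n >= 2
    if n == 1:
        return 1
    return 2 * T(n - 1) + 1
```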

Note: The repeated substitution method may not always be successful.

Example:

Solve the following recurrence relation:

T(n) = 0,               n = 1
T(n) = 2T(n/2) + 1,     n ≥ 2

Solution:
𝑛
𝑇(𝑛) = 2𝑇 ( ) + 1
2

= 1 + 2[1 + 2𝑇(𝑛⁄4)]

= 1 + 2 + 4𝑇(𝑛⁄4)

= 1 + 2 + 4 + 8 𝑇(𝑛⁄8)

… … … … … … … … … … ..


… … … … … … … … … … ….
= 1 + 2 + 4 + ⋯ + 2^(k−1) + 2^k · T(n/2^k)    (where n/2^k = 1, i.e. n = 2^k)

= 1 + 2 + 4 + ⋯ + 2^(k−1) + 2^k · T(1)

= 1 + 2 + 4 + ⋯ + 2^(k−1) + 0

= (2^k − 1)/(2 − 1) = 2^k − 1 = n − 1    (geometric sum formula)

Example:

Solve the following recurrence relation to compute the time complexity of Insertion Sort:

T(n) = 0,                   n = 1
T(n) = T(n − 1) + n − 1,    n ≥ 2

Solution:

𝑇(𝑛) = (𝑛 − 1) + 𝑇(𝑛 − 1)

= (𝑛 − 1) + (𝑛 − 2) + 𝑇(𝑛 − 2)

= (𝑛 − 1) + (𝑛 − 2) + (𝑛 − 3) + 𝑇(𝑛 − 3)

……………………………

……………………………

= (𝑛 − 1) + (𝑛 − 2) + (𝑛 − 3) + ⋯ + 1 + 𝑇(1)

= (𝑛 − 1) + (𝑛 − 2) + (𝑛 − 3) + ⋯ + 1 + 0 [Since, 𝑇(1) = 0]
= n(n − 1)/2    (arithmetic sum formula)

= (n² − n)/2
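Evaluating the recurrence T(1) = 0, T(n) = T(n − 1) + (n − 1) directly confirms the closed form (n² − n)/2 (a quick sketch, unrolled as a loop to avoid deep recursion):

```python
def T(n):
    # T(1) = 0; T(n) = T(n - 1) + (n - 1), i.e. the sum 1 + 2 + ... + (n - 1)
    total = 0
    for k in range(2, n + 1):
        total += k - 1
    return total
```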

Example:

T(n) = 1,               n = 1
T(n) = 3T(n/4) + n,     n ≥ 2


Solution:

T(n) = n + 3T(n/4)

= n + 3(n/4 + 3T(n/16)) = n + 3(n/4) + 9T(n/16)

= n + 3(n/4) + 9(n/16) + 27T(n/64)

= (3/4)⁰n + (3/4)¹n + (3/4)²n + (3/4)³n + ⋯

≤ [(3/4)⁰ + (3/4)¹ + (3/4)² + (3/4)³ + ⋯] n = n · 1/(1 − 3/4) = 4n

The substitution Method:

It involves guessing the form of the solution and then using mathematical induction to find the constants and show that the guess is correct.

1. Guess the solution


2. Prove it correct by mathematical induction.

Example:

Solve the recurrence equation:

T(n) = 1,                n = 1
T(n) = 2T(n − 1) + 1,    n ≥ 2

Solution:

Suppose we guess that the solution to be exponential.

Guess: 𝑇(𝑛) = 𝐴 2𝑛 + 𝐵

Induction Proof:

Basis of Induction:

LHS: 𝑇(1) = 1(From the initial condition)

RHS: 𝐴21 + 𝐵 = 2𝐴 + 𝐵

we need 2𝐴 + 𝐵 = 1 … … … … … … . . . (1)

Induction Hypothesis:

Assume that 𝑇(𝑘) = 𝐴 2𝑘 + 𝐵 for 𝑘 ≥ 1


Induction Step:

To prove the solution is also correct for 𝑛 = 𝑘 + 1:

i.e. to show T(k + 1) = A·2^(k+1) + B

LHS: 𝑇(𝑘 + 1) = 2𝑇(𝑘) + 1 (𝑓𝑟𝑜𝑚 𝑡ℎ𝑒 𝑟𝑒𝑐𝑢𝑟𝑟𝑒𝑛𝑐𝑒 𝑒𝑞𝑢𝑎𝑡𝑖𝑜𝑛)

= 2[𝐴 2𝑘 + 𝐵] + 1 (By Inductive ℎ𝑦𝑝𝑜𝑡ℎ𝑒𝑠𝑖𝑠)

= 𝐴 2𝑘+1 + (2𝐵 + 1)

= 𝐴 2𝑘+1 + 𝐵 … … … RHS

we need 2𝐵 + 1 = 𝐵 … … … … … … . . . (2)

2𝐴 + 𝐵 = 1
Solving (1) and (2) {
2𝐵 + 1 = 𝐵

We get 𝐵 = −1 and 𝐴 = 1

Hence, 𝑇(𝑛) = 𝐴 2𝑛 + 𝐵 = 2𝑛 − 1

Example:

Solve the following recurrence relation:

T(n) = 0,                   n = 1
T(n) = T(n − 1) + n − 1,    n ≥ 2

Solution:

Suppose we guess that the solution as 𝑂(𝑛2 )

Guess: 𝑇(𝑛) = 𝐴𝑛2 + 𝐵𝑛 + 𝐶

Basis of Induction:

LHS: 𝑇(1) = 0(From the initial condition)

RHS: 𝐴𝑛2 + 𝐵𝑛 + 𝐶 = 𝐴 + 𝐵 + 𝐶

So we need 𝐴 + 𝐵 + 𝐶 = 0 … … … … … … … … … . . (1).

Induction Hypothesis:

Assume that 𝑇(𝑘 − 1) = 𝐴 (𝑘 − 1)2 + 𝐵(𝑘 − 1) + 𝐶 for 𝑘 ≥ 1


Induction Step:

To prove the solution is also correct for 𝑛 = 𝑘

𝑖. 𝑒.To Show 𝑇(𝑘) = 𝐴 𝑘 2 + 𝐵𝑘 + 𝐶

LHS.: 𝑇(𝑘) = 𝑇(𝑘 − 1) + (𝑘 − 1) (from the recurrence equation)

= 𝐴 (𝑘 − 1)2 + 𝐵(𝑘 − 1) + 𝐶 + (𝑘 − 1)

= 𝐴 𝑘 2 − 2𝐴𝑘 + 𝐴 + 𝐵𝑘 − 𝐵 + 𝐶 + 𝑘 − 1

= 𝐴 𝑘 2 + (−2𝐴 + 𝐵 + 1)𝑘 + (𝐴 − 𝐵 + 𝐶 − 1)

= 𝐴 𝑘 2 + 𝐵𝑘 + 𝐶

So we need −2𝐴 + 𝐵 + 1 = 𝐵 … … … … … … … … (2)

And 𝐴 − 𝐵 + 𝐶 − 1 = 𝐶 … … … … … … … … … … … (3)

𝐴+𝐵+𝐶 =0
Solving (1), (2) and (3) { −2𝐴 + 𝐵 + 1 = 𝐵
𝐴−𝐵+𝐶−1=𝐶

A = 1/2, B = −1/2, C = 0.

Therefore, T(n) = n²/2 − n/2

Example:

Show that 𝑇(𝑛) = 𝑂(𝑛 lg 𝑛) is the solution of the following recurrence relation by
substitution method.

T(n) = 0,               n = 1
T(n) = 2T(n/2) + n,     n ≥ 2

Proof:

We have to prove that 𝑇(𝑛) ≤ 𝑐𝑛 lg 𝑛 for an appropriate choice of the constant 𝑐 > 0.

Basis of Induction:

The guess gives no useful base case at n = 1, because c·1·lg 1 = 0; so we take n = 2 as the base case.

Now for 𝑛 = 2


LHS: T(2) = 2T(2/2) + 2 = 2T(1) + 2 = 2 × 0 + 2 = 2    (from the initial condition)

RHS: 𝑐 × 2 lg 2 = 2𝑐

Hence, 𝑇(2) ≤ 𝑐 × 2 lg 2 for 𝑐 ≥ 1

Induction Hypothesis:

Assume that 𝑇(𝑛) ≤ 𝑐𝑛 lg 𝑛 for 2 ≤ 𝑛 < 𝑘

Induction Step:

To prove the solution is also correct for 𝑛 = 𝑘

LHS: T(k) = 2T(k/2) + k    (from the recurrence equation)

≤ 2c(k/2) lg(k/2) + k    (by the inductive hypothesis)

= ck lg(k/2) + k

= ck lg k − ck lg 2 + k

= ck lg k − ck + k

= ck lg k − k(c − 1)

≤ ck lg k for all c ≥ 1

Therefore, 𝑇(𝑛) = 𝑂(𝑛 lg 𝑛)

Example:

Show that 𝑇(𝑛) = 𝑂(log 𝑛) is the solution of the following recurrence relation:
𝑛
𝑇(𝑛) = 𝑇 (⌊ ⌋) + 1
2
Solution:

We have to prove that 𝑇(𝑛) ≤ 𝑐 log 𝑛 for an appropriate choice of the constant 𝑐 > 0.

Inductive hypothesis:

Assume that 𝑇(𝑛) ≤ 𝑐 lg 𝑛 for 2 ≤ 𝑛 < 𝑘


Inductive Step:

To prove the solution is correct for 𝑛 = 𝑘


LHS: T(k) = T(⌊k/2⌋) + 1    (by the recurrence relation)

≤ T(k/2) + 1 ≤ c log(k/2) + 1    (by the inductive hypothesis)

= c log k − c log 2 + 1 = c log k − c + 1 ≤ c log k for c ≥ 1

Therefore, 𝑇(𝑛) = 𝑂(log 𝑛)

Example:

Show that 𝑇(𝑛) = 𝑂(𝑛 log 𝑛) is the solution of the following recurrence relation:
𝑛
𝑇(𝑛) = 2𝑇 (⌊ ⌋ + 16) + 𝑛
2
Solution:

We have to prove that 𝑇(𝑛) ≤ cn log 𝑛 for an appropriate choice of the constant 𝑐 > 0.

Inductive hypothesis:

Assume that 𝑇(𝑛) ≤ 𝑐𝑛 lg 𝑛 for 2 ≤ 𝑛 < 𝑘

Inductive Step:

To prove the solution is correct for 𝑛 = 𝑘


LHS: T(k) = 2T(⌊k/2⌋ + 16) + k    (by the recurrence relation)

≤ 2[c(⌊k/2⌋ + 16) log(⌊k/2⌋ + 16)] + k    (by the inductive hypothesis)

≤ 2[c(k/2 + 16) log(k/2 + 16)] + k = c(k + 32) log((k + 32)/2) + k ≤ ck log k

for c ≥ 2 and sufficiently large k, since the ck term saved by the division (ck log 2 = ck) absorbs the surplus 32c log((k + 32)/2) + k.

Therefore, T(n) = O(n log n)

Recursion Tree Method:

It is a pictorial representation of a given recurrence relation, which shows how the recurrence is divided until the boundary conditions are reached. It is useful when the divide-and-conquer strategy is used.


Basic Steps to solve recurrence relation using recursion tree method:

1. Draw a recursive tree for given recurrence relation. In general, we consider the
second term in recurrence as root of the tree.
2. Calculate cost of each level
3. Determine the total number of levels in the recursion tree
4. Add cost of all the levels of the recursion tree and simplify the expression so
obtained in terms of asymptotic notation.

Example:

Solve the recurrence 𝑇(𝑛) = 2𝑇 (𝑛/2 ) + 𝑛 using recursion tree method.

Solution:

Step 1: First, make a recursion tree of the given recurrence, where n is the root.

The given recurrence relation shows-

 A problem of size n will get divided into 2 sub-problems of size n/2.


 Then, each sub-problem of size n/2 will get divided into 2 sub-problems of size n/4
and so on.
 At the bottom most layer, the size of sub-problems will reduce to 1.

Step 2: Determine cost of each level-

 Cost of level-0 = n
 Cost of level-1 = 𝑛/2 + 𝑛/2 = 𝑛


 Cost of level-2 = 𝑛/4 + 𝑛/4 + 𝑛/4 + 𝑛/4 = 𝑛 and so on.

Step 3: Determine total number of levels in the recursion tree-

Suppose at level-ℎ (last level), size of sub-problem becomes 1.

Then-
n/2^h = 1

⇒ n = 2^h

Taking log on both sides, we get h log 2 = log n

⇒ h = log₂ n

∴ Total number of levels in the recursion tree = log₂ n + 1

Step 4: Add costs of all the levels of the recursion tree and simplify the expression so obtained in terms of asymptotic notation:

T(n) = n + n + ⋯ + n = (log₂ n + 1)·n = θ(n log₂ n)
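The recurrence does not state a base case; assuming T(1) = 1 (one unit of work per leaf), evaluating it for powers of two matches the level sum n(log₂ n + 1) exactly (a sketch under that assumption):

```python
def T(n):
    # Assumed base case T(1) = 1; T(n) = 2*T(n/2) + n for n a power of two.
    if n == 1:
        return 1
    return 2 * T(n // 2) + n
```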

Example

T(n) = T(n/4) + T(n/2) + n²

Solution:

The cost at level 0 is n²; at level 1 it is (n/4)² + (n/2)² = (5/16)n²; at level 2 it is (5/16)²n², and so on.

T(n) = n² + (5/16)n² + (5/16)²n² + (5/16)³n² + ⋯ = n²(1 + 5/16 + (5/16)² + ⋯)

By the geometric series, 1 + x + x² + ⋯ + xⁿ = (1 − x^(n+1))/(1 − x) for x ≠ 1, and

1 + x + x² + ⋯ = 1/(1 − x) for |x| < 1

1 + 5/16 + (5/16)² + ⋯ = 1/(1 − 5/16) = 1/(11/16) = 16/11

Hence, T(n) = (16/11)n² = θ(n²)

Example

T(n) = T(n/3) + T(2n/3) + n

Solution:

The longest path in the tree is the rightmost one (repeatedly taking the 2n/3 branch).

Let the height of the tree be h. Then

n/(3/2)^h = 1 ⇒ n = (3/2)^h ⇒ log n = h log(3/2) ⇒ h = log n / log(3/2) = log_{3/2} n

∴ T(n) = n + n + n + ⋯ = n·log_{3/2} n = O(n log n)

Example
𝑛
𝑇(𝑛) = 3𝑇 ( ) + 𝑐𝑛2
4


Solution:

Let the height of the tree be h.

n/4^h = 1 ⇒ n = 4^h ⇒ log n = h log 4 ⇒ h = log₄ n

Total number of levels in the recursion tree = log₄ n + 1

Cost of level-0 = cn²

Cost of level-1 = c(n/4)² + c(n/4)² + c(n/4)² = (3/16)cn²

Cost of level-2 = c(n/16)² × 9 = (3/16)²cn², and so on.

T(n) = cn² + (3/16)cn² + (3/16)²cn² + ⋯

By the geometric series, 1 + x + x² + ⋯ = 1/(1 − x) for |x| < 1:

1 + 3/16 + (3/16)² + ⋯ = 1/(1 − 3/16) = 1/(13/16) = 16/13

Hence, T(n) = (16/13)cn² = θ(n²)


The Master Theorem Method:

The master theorem is the most useful method for solving recurrences of the form:

T(n) = aT(n/b) + f(n)

where a ≥ 1, b > 1, f(n) is asymptotically positive, and n/b stands for either ⌊n/b⌋ or ⌈n/b⌉.

The solution is obtained from whichever of the following three rules applies:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

The condition af(n/b) ≤ cf(n) is called the regularity condition.
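When the driving function is a plain polynomial, f(n) = n^k, the three rules reduce to comparing a with b^k, and the regularity condition of rule 3 holds automatically. The helper below is a hypothetical sketch (its name and output strings are illustrative, not part of the notes):

```python
import math

def master_theorem(a, b, k):
    """Classify T(n) = a*T(n/b) + n**k (polynomial driving functions only).

    For f(n) = n**k the regularity condition of rule 3 holds
    automatically, so comparing a with b**k decides the case.
    """
    crit = math.log(a, b)              # critical exponent log_b(a)
    if a > b ** k:                     # rule 1: n^log_b(a) dominates
        return f"Theta(n^{crit:g})"
    if a == b ** k:                    # rule 2: f matches n^log_b(a)
        return f"Theta(n^{crit:g} lg n)"
    return f"Theta(n^{k:g})"           # rule 3: f dominates

print(master_theorem(9, 3, 1))   # T(n) = 9T(n/3) + n
print(master_theorem(4, 2, 2))   # T(n) = 4T(n/2) + n^2
print(master_theorem(4, 2, 3))   # T(n) = 4T(n/2) + n^3
```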

Example:

Find the solution of the following recurrence by using the Master theorem:

T(n) = 9T(n/3) + n

Solution:

Here, a = 9, b = 3, f(n) = n

log_b a = log₃ 9 = 2

n^(log_b a) = n²

By rule 1, f(n) = n = O(n^(2−1)) = O(n^(log₃ 9 − ε)) where ε = 1

Hence, T(n) = Θ(n^(log_b a)) = Θ(n^(log₃ 9)) = Θ(n²)

Example:

T(n) = 4T(n/2) + n

Solution:

Here, a = 4, b = 2, f(n) = n

log_b a = log₂ 4 = 2

n^(log_b a) = n²


By rule 1, f(n) = n = O(n^(2−1)) = O(n^(log₂ 4 − ε)) where ε = 1

Hence, T(n) = Θ(n^(log_b a)) = Θ(n²)

Example:

T(n) = T(2n/3) + 1

Solution:

Here, a = 1, b = 3/2, f(n) = 1

n^(log_b a) = n^(log_(3/2) 1) = n⁰ = 1

By rule 2, f(n) = 1 = Θ(1)

Hence, T(n) = Θ(1 · lg n) = Θ(lg n)

Example:

T(n) = 4T(n/2) + n²

Solution:

Here, a = 4, b = 2, f(n) = n²

n^(log_b a) = n^(log₂ 4) = n²

By rule 2, f(n) = n² = Θ(n²)

Hence, T(n) = Θ(n² lg n)

Example:

T(n) = 4T(n/2) + n³

Solution:

Here, a = 4, b = 2, f(n) = n³

n^(log_b a) = n^(log₂ 4) = n²

f(n) = n³ = Ω(n³) = Ω(n^(2+1)) = Ω(n^(log_b a + ε)) where ε = 1

We also have to check the regularity condition:

af(n/b) = 4f(n/2) = 4(n/2)³ = n³/2 ≤ cn³ where c = 1/2.


Hence, by rule 3, T(n) = Θ(f(n)) = Θ(n³)

Example:

T(n) = 2T(n/2) + n²

Solution:

Here, a = 2, b = 2, f(n) = n²

n^(log_b a) = n^(log₂ 2) = n¹

f(n) = n² = Ω(n²) = Ω(n^(1+1)) = Ω(n^(log_b a + ε)) where ε = 1

And af(n/b) = 2f(n/2) = 2(n²/4) = n²/2 ≤ cn² where c = 1/2.

Hence, by rule 3, T(n) = Θ(f(n)) = Θ(n²)

Example:

T(n) = 3T(n/4) + n log n

Solution:

Here, a = 3, b = 4, f(n) = n log n

n^(log_b a) = n^(log₄ 3) ≈ n^0.793

f(n) = n log n = Ω(n) = Ω(n^(log₄ 3 + 0.2)) = Ω(n^(log_b a + ε)) where ε = 0.2

Now, for the regularity condition of rule 3:

af(n/b) = 3f(n/4) = 3(n/4) log(n/4) ≤ (3/4) n log n = cf(n) for c = 3/4

Therefore, the solution is T(n) = Θ(f(n)) = Θ(n log n)

Example:

T(n) = T(√n) + log n

Solution:

T(n) = T(n^(1/2)) + log n


Let n = 2^m ⇒ m = log n

T(2^m) = T(2^(m/2)) + log 2^m

T(2^m) = T(2^(m/2)) + m log 2

T(2^m) = T(2^(m/2)) + m

Let T(2^m) = S(m)

S(m) = S(m/2) + m

a = 1, b = 2, f(m) = m

log_b a = log₂ 1 = 0

f(m) = m = Ω(m) = Ω(m^(0+1)) = Ω(m^(log_b a + ε)) where ε = 1

And af(m/b) = 1 · f(m/2) = m/2 ≤ cm where c = 1/2

By rule 3, S(m) = Θ(f(m)) = Θ(m)

∴ T(n) = Θ(log n) (since m = log n)

Example:

T(n) = 4T(√n) + log² n

Solution:

Let n = 2^m ⇒ m = log n

T(2^m) = 4T(2^(m/2)) + m²

Let T(2^m) = S(m)

S(m) = 4S(m/2) + m²

a = 4, b = 2, f(m) = m²

log_b a = log₂ 4 = 2

f(m) = m² = Θ(m²) = Θ(m^(log_b a))


By rule 2, S(m) = Θ(m^(log_b a) log m) = Θ(m² log m)

∴ T(n) = Θ(log² n · log log n) (since m = log n)

Fourth condition of the Master theorem

If f(n), the non-recursive cost, is not a pure polynomial but a polynomial multiplied by a polylogarithmic factor, then the 4th condition of the master theorem applies:

If f(n) = Θ(n^(log_b a) log^k n) for some k ≥ 0, then T(n) = Θ(n^(log_b a) log^(k+1) n)

Example:

T(n) = 2T(n/2) + n log n

Solution:

Here, a = 2, b = 2, f(n) = n log n

n^(log_b a) = n^(log₂ 2) = n¹ = n

f(n) = Θ(n log¹ n), so k = 1

T(n) = Θ(n^(log_b a) log^(k+1) n) = Θ(n log^(1+1) n) = Θ(n log² n)

Example:

T(n) = T(√n) + Θ(log log n)

Solution:

Let n = 2^m ⇒ m = log n

T(2^m) = T(2^(m/2)) + Θ(log log 2^m)

T(2^m) = T(2^(m/2)) + Θ(log(m log 2))

T(2^m) = T(2^(m/2)) + Θ(log m)

Let T(2^m) = S(m)

S(m) = S(m/2) + Θ(log m)

Here, a = 1, b = 2, f(m) = Θ(log m)

m^(log_b a) = m^(log₂ 1) = m⁰ = 1


f(m) = Θ(log m) = Θ(m^(log_b a) log^k m) with k = 1

S(m) = Θ(m^(log_b a) log^(k+1) m) = Θ(log^(1+1) m) = Θ(log² m) = Θ(log²(log n))

Advanced Master Theorem:

The advanced version of the Master Theorem provides a more general form of the theorem
that can handle recurrence relations that are more complex than the basic form. The
advanced version of the Master Theorem can handle recurrences with multiple terms and
more complex functions.

If the recurrence relation is of the form:

T(n) = aT(n/b) + f(n), where a ≥ 1, b > 1,

and f(n) = Θ(n^k log^p n) with k ≥ 0 and p a real number, then:

1. If a > b^k, then T(n) = Θ(n^(log_b a))
2. If a = b^k, then
   a) if p > −1, then T(n) = Θ(n^(log_b a) log^(p+1) n)
   b) if p = −1, then T(n) = Θ(n^(log_b a) log log n)
   c) if p < −1, then T(n) = Θ(n^(log_b a))
3. If a < b^k, then
   a) if p ≥ 0, then T(n) = Θ(n^k log^p n)
   b) if p < 0, then T(n) = O(n^k)
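These cases can likewise be sketched as a small classifier. This is a hypothetical helper for checking answers by hand (its name and output strings are illustrative only):

```python
import math

def advanced_master(a, b, k, p):
    """Classify T(n) = a*T(n/b) + Theta(n**k * log(n)**p) by the cases above."""
    log_ba = math.log(a, b)
    if a > b ** k:                                   # case 1
        return f"Theta(n^{log_ba:g})"
    if a == b ** k:                                  # case 2
        if p > -1:
            return f"Theta(n^{log_ba:g} log^{p + 1:g} n)"
        if p == -1:
            return f"Theta(n^{log_ba:g} log log n)"
        return f"Theta(n^{log_ba:g})"
    if p >= 0:                                       # case 3(a)
        return f"Theta(n^{k:g} log^{p:g} n)"
    return f"O(n^{k:g})"                             # case 3(b)

print(advanced_master(8, 2, 2, 0))    # T(n) = 8T(n/2) + n^2
print(advanced_master(2, 2, 1, 2))    # T(n) = 2T(n/2) + n log^2 n
print(advanced_master(16, 4, 2, -2))  # T(n) = 16T(n/4) + n^2/log^2 n
print(advanced_master(2, 2, 3, -1))   # T(n) = 2T(n/2) + n^3/log n
```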

Example:

Solve the recurrence relation T(n) = 8T(n/2) + n²

Solution:

In this problem, a = 8, b = 2, f(n) = n² = Θ(n² log⁰ n) = Θ(n^k log^p n), where k = 2, p = 0

a > b^k, i.e. 8 > 2² [Case 1]

Therefore, T(n) = Θ(n^(log_b a)) = Θ(n^(log₂ 8)) = Θ(n³)

Example:

Solve the recurrence relation

T(n) = 2T(n/2) + n log² n


Solution:

In this problem, a = 2, b = 2, f(n) = n log² n = Θ(n^k log^p n), where k = 1, p = 2

a = b^k, i.e. 2 = 2¹, and p > −1 [Case 2(a)]

Therefore, T(n) = Θ(n^(log_b a) log^(p+1) n) = Θ(n^(log₂ 2) log^(2+1) n) = Θ(n log³ n)

Example:

Solve the recurrence relation

T(n) = 2T(n/2) + n/log n

Solution:

In this problem, a = 2, b = 2, f(n) = n/log n = Θ(n log⁻¹ n) = Θ(n^k log^p n), where k = 1, p = −1

a = b^k, i.e. 2 = 2¹, and p = −1 [Case 2(b)]

Therefore, T(n) = Θ(n^(log_b a) log log n) = Θ(n^(log₂ 2) log log n) = Θ(n log log n)

Example:

Solve the recurrence relation

T(n) = 16T(n/4) + n²/log² n

Solution:

In this problem, a = 16, b = 4, f(n) = n²/log² n = Θ(n² log⁻² n) = Θ(n^k log^p n), where k = 2, p = −2

a = b^k, i.e. 16 = 4², and p < −1 [Case 2(c)]

Therefore, T(n) = Θ(n^(log_b a)) = Θ(n^(log₄ 16)) = Θ(n²)

Example:

Solve the recurrence relation

T(n) = 2T(n/2) + n²


Solution:

In this problem, a = 2, b = 2, f(n) = n² = Θ(n² log⁰ n) = Θ(n^k log^p n), where k = 2, p = 0

a < b^k, i.e. 2 < 2², and p ≥ 0 [Case 3(a)]

Therefore, T(n) = Θ(n^k log^p n) = Θ(n² log⁰ n) = Θ(n²)

Example:

Solve the recurrence relation

T(n) = 2T(n/2) + n³/log n

Solution:

In this problem, a = 2, b = 2, f(n) = n³/log n = Θ(n³ log⁻¹ n) = Θ(n^k log^p n), where k = 3, p = −1

a < b^k, i.e. 2 < 2³, and p < 0 [Case 3(b)]

Therefore, T(n) = O(n^k) = O(n³)

Limitations of the Master Theorem:

The master theorem cannot be used if:

 T(n) is not monotone, e.g. T(n) = sin n
 f(n) is not a polynomial, e.g. f(n) = 2^n
 a is not a constant, e.g. a = 2^n
 a < 1


Problem Set

1. T(n) = 3T(n/2) + n²
   T(n) = Θ(n²) (Rule-3)

2. T(n) = 4T(n/2) + n²
   T(n) = Θ(n² log n) (Rule-2)

3. T(n) = T(n/2) + 2^n
   T(n) = Θ(2^n) (Rule-3)

4. T(n) = 2^n T(n/2) + n^n
   Master theorem is not applicable because a = 2^n is not a constant.

5. T(n) = 16T(n/4) + n
   T(n) = Θ(n²) (Rule-1)

6. T(n) = 2T(n/4) + n^0.51
   T(n) = Θ(n^0.51) (Rule-3)

7. T(n) = 0.5T(n/2) + 1/n
   Master theorem is not applicable because a < 1.

8. T(n) = 16T(n/4) + n!
   T(n) = Θ(n!) (Rule-3)

9. T(n) = √2 T(n/2) + log n
   T(n) = Θ(√n) (Rule-1)

10. T(n) = 3T(n/2) + n
    T(n) = Θ(n^(log₂ 3)) (Rule-1)

11. T(n) = 3T(n/3) + √n
    T(n) = Θ(n) (Rule-1)

12. T(n) = 4T(n/2) + cn
    T(n) = Θ(n²) (Rule-1)

13. T(n) = 3T(n/4) + n log n
    T(n) = Θ(n log n) (Rule-3)


14. T(n) = 3T(n/3) + n/2
    T(n) = Θ(n log n) (Rule-2)

15. T(n) = 6T(n/3) + n² log n
    T(n) = Θ(n² log n) (Rule-3)

16. T(n) = 4T(n/2) + n/log n
    T(n) = Θ(n²) (Rule-1)

17. T(n) = 64T(n/8) − n² log n
    Master theorem is not applicable because f(n) is not positive.

18. T(n) = 7T(n/3) + n²
    T(n) = Θ(n²) (Rule-3)

19. T(n) = 4T(n/2) + log n
    T(n) = Θ(n²) (Rule-1)

ALGORITHM DESIGN TECHNIQUES

An algorithmic strategy / technique / paradigm is a general approach by which many problems can be solved algorithmically.

Following are some popular design techniques:

a) Brute Force
b) Divide and Conquer
c) Dynamic Programming
d) Greedy Technique
e) Branch & Bound
f) Backtracking

BRUTE FORCE APPROACH

The brute force approach is a problem-solving approach in which the solution is based directly on the problem's definition. It is a top-down approach. It is considered the easiest approach to adopt and is very useful when the problem domain is not very complex.


Example:

 Computing the factorial of a number
 Multiplication of matrices
 Searching and sorting

Let's consider an example: compute x^n (where x > 0 and n is a non-negative integer).

Based on the definition of exponentiation, x^n = x × x × … × x (n factors).

Hence, solving the exponentiation problem by brute force requires (n − 1) repeated multiplications.

But solving the same problem with a divide-and-conquer recursion reduces the complexity to O(log n), because

x^n = (x^(n/2))² for even n, and x^n = x × (x^((n−1)/2))² for odd n,

so each recursive step halves the exponent.
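The halving idea above can be sketched in Python. Note that the half-size sub-problem is computed once and then squared; that is exactly what yields the O(log n) multiplication count:

```python
def power(x, n):
    """Compute x**n using O(log n) multiplications (repeated squaring)."""
    if n == 0:
        return 1                      # x^0 = 1
    half = power(x, n // 2)           # one sub-problem of half the size
    if n % 2 == 0:
        return half * half            # x^n = (x^(n/2))^2 for even n
    return half * half * x            # odd n needs one extra factor of x

print(power(2, 10))   # 1024
```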

Divide and Conquer Method:

Divide and conquer is a design strategy well known for breaking down efficiency barriers.

The divide and conquer strategy is as follows: divide the problem instance into two or more smaller instances of the same problem, solve the smaller instances recursively, and assemble the solutions to form a solution of the original instance. The recursion stops when an instance is reached that is too small to divide.

A divide and conquer algorithm consists of two parts:

1. Divide: Divide the problem into a number of sub-problems. The sub-problems are solved recursively.
2. Conquer: The solution to the original problem is then formed from the solutions to the sub-problems (patching together the answers).

Merge Sort

The merge sort algorithm uses the divide and conquer technique. The algorithm comprises three steps:

1. Divide the n-element list into two sub-lists of n/2 elements each, so that each sub-list holds half of the elements in the list.
2. Recursively sort the sub-lists using merge sort.
3. Merge the sorted sub-lists to generate the sorted list.


Algorithm

MERGE-SORT (A, p, r)
    if p < r then
        q ← ⌊(p + r)/2⌋         // split the current array into 2 parts
        MERGE-SORT (A, p, q)     // sort the 1st part of the array
        MERGE-SORT (A, q+1, r)   // sort the 2nd part of the array
        MERGE (A, p, q, r)

MERGE (A, p, q, r)
    n₁ ← q − p + 1
    n₂ ← r − q
    create arrays L[1 .. n₁ + 1] and R[1 .. n₂ + 1]
    for i ← 1 to n₁ do
        L[i] ← A[p + i − 1]
    for j ← 1 to n₂ do
        R[j] ← A[q + j]
    L[n₁ + 1] ← ∞                // sentinels
    R[n₂ + 1] ← ∞
    i ← 1
    j ← 1
    for k ← p to r do
        if L[i] ≤ R[j] then
            A[k] ← L[i]
            i ← i + 1
        else
            A[k] ← R[j]
            j ← j + 1
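The pseudocode above can be rendered in Python. This sketch returns a new sorted list and merges without the ∞ sentinels (it drains whichever half runs out first), so it is equivalent in effect but not a line-by-line transcription:

```python
def merge_sort(a):
    """Sort a list by recursively splitting and merging (divide and conquer)."""
    if len(a) <= 1:
        return a                       # a list of 0 or 1 elements is sorted
    mid = len(a) // 2
    left = merge_sort(a[:mid])         # sort the 1st half
    right = merge_sort(a[mid:])        # sort the 2nd half
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])            # drain whichever half remains
    merged.extend(right[j:])
    return merged

print(merge_sort([10, 80, 30, 90, 40, 50, 70]))   # [10, 30, 40, 50, 70, 80, 90]
```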


Analysis of Merge Sort:

Let 𝑇(𝑛) represents the total time taken by merge sort algorithm to sort an array of size n.

When we have 𝑛 > 1 elements, we break down the running time as follows:

1. Divide: The divide step just computes middle of the array, which takes constant
time. Thus, it is equal to 𝜃(1).
2. Conquer: Time taken by the algorithm to recursively sort the two halves of the array,
each of size 𝑛/2 is 2𝑇(𝑛/2).
3. Combine: Time taken to merge the two sorted halves is 𝜃(𝑛).

The running time of a recursive procedure can be expressed as a recurrence relation:

T(n) = { cost of solving the trivial problem, if n = 1
       { (no. of pieces) × T(n / reduction factor) + divide cost + combine cost, if n > 1

Hence, the recurrence relation of merge sort is:

T(n) = { Θ(1), if n = 1
       { 2T(n/2) + Θ(n), if n > 1


Solution:

T(n) = 2T(n/2) + n

     = n + 2[n/2 + 2T(n/4)]

     = n + n + 4T(n/4) = 2n + 4T(n/4)

     = 3n + 8T(n/8) = 3n + 2³T(n/2³)

     …

     = kn + 2^k T(n/2^k)   (where n/2^k = 1 ⇒ n = 2^k ⇒ log n = k log 2 ⇒ k = log n)

     = nk + 2^k T(1) = n log n + 2^(log n)   (since T(1) = 1)

     = n log n + n = Θ(n log n)
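The closed form n log n + n (with T(1) = 1) can be checked numerically for powers of two; a small sketch:

```python
import math

def T(n):
    """Evaluate the merge-sort recurrence T(n) = 2T(n/2) + n, T(1) = 1."""
    if n == 1:
        return 1
    return 2 * T(n // 2) + n

# For n a power of two, the derivation gives T(n) = n*log2(n) + n exactly.
for n in (2, 8, 64, 1024):
    assert T(n) == n * int(math.log2(n)) + n
print("closed form verified")
```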

Space Complexity

The space complexity of merge sort is 𝑂(𝑛), which means that it requires an auxiliary array
to temporarily store the merged array. The auxiliary array must be the same size as the
main input array.

Quick Sort
Quick sort is one of the fastest sorting algorithms based on the divide and conquer approach. The algorithm takes an array of values, chooses one of the values as the 'pivot' element, and rearranges the array so that elements less than the pivot are on its left side and elements greater than the pivot are on its right side.

Working Procedure:

1. Choose a value in the array to be the pivot element.
2. Order the rest of the array so that values lower than the pivot element are on the left and higher values are on the right.
3. Swap the pivot element with the first element of the higher values so that the pivot element lands in between the lower and higher values.
4. Apply the same operations (recursively) to the sub-arrays on the left and right sides of the pivot element.


Algorithm:

Quick_sort (A, p, r)
    if p < r then
        q ← Partition (A, p, r)
        Quick_sort (A, p, q − 1)
        Quick_sort (A, q + 1, r)

Partition (A, p, r)
    x ← A[r]
    i ← p − 1
    for j ← p to r − 1 do
        if A[j] ≤ x then
            i ← i + 1
            exchange A[i] ↔ A[j]
    exchange A[i + 1] ↔ A[r]
    return i + 1
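A direct Python rendering of the two procedures. Python lists are 0-indexed, so the final resting index of a pivot is one less than in the 1-indexed pseudocode:

```python
def partition(a, p, r):
    """Lomuto partition: put pivot a[r] into its final place, return its index."""
    x = a[r]                          # pivot
    i = p - 1
    for j in range(p, r):             # j = p .. r-1
        if a[j] <= x:
            i += 1
            a[i], a[j] = a[j], a[i]   # exchange A[i] <-> A[j]
    a[i + 1], a[r] = a[r], a[i + 1]   # exchange A[i+1] <-> A[r]
    return i + 1

def quick_sort(a, p, r):
    if p < r:
        q = partition(a, p, r)
        quick_sort(a, p, q - 1)
        quick_sort(a, q + 1, r)

data = [10, 80, 30, 90, 40, 50, 70]
quick_sort(data, 0, len(data) - 1)
print(data)   # [10, 30, 40, 50, 70, 80, 90]
```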

Example:

10 80 30 90 40 50 70
Index: 1 2 3 4 5 6 7

p = 1, r = 7, pivot x = a[r] = a[7] = 70, i = p − 1 = 0

Now run the loop for j = 1 to r − 1 = 6:


Pass 1:
j = 1 and i = 0

Test condition: a[j] ≤ x, i.e. 10 ≤ 70? Yes
Action: i = i + 1; Swap(a[i], a[j]) = Swap(10, 10)
Variables: i = 0 + 1 = 1, j = 1

10 80 30 90 40 50 70
Index: 1 2 3 4 5 6 7

Pass 2:
j = 2 and i = 1

Test condition: a[j] ≤ x, i.e. 80 ≤ 70? No
Action: none
Variables: i = 1, j = 2

10 80 30 90 40 50 70
Index: 1 2 3 4 5 6 7

Pass 3:
j = 3 and i = 1

Test condition: a[j] ≤ x, i.e. 30 ≤ 70? Yes
Action: i = i + 1; Swap(a[i], a[j]) = Swap(80, 30)
Variables: i = 1 + 1 = 2, j = 3

10 30 80 90 40 50 70
Index: 1 2 3 4 5 6 7

Pass 4:
j = 4 and i = 2

Test condition: a[j] ≤ x, i.e. 90 ≤ 70? No
Action: none
Variables: i = 2, j = 4

10 30 80 90 40 50 70
Index: 1 2 3 4 5 6 7


Pass 5:
j = 5 and i = 2

Test condition: a[j] ≤ x, i.e. 40 ≤ 70? Yes
Action: i = i + 1; Swap(a[i], a[j]) = Swap(80, 40)
Variables: i = 2 + 1 = 3, j = 5

10 30 40 90 80 50 70
Index: 1 2 3 4 5 6 7

Pass 6:
j = 6 and i = 3

Test condition: a[j] ≤ x, i.e. 50 ≤ 70? Yes
Action: i = i + 1; Swap(a[i], a[j]) = Swap(90, 50)
Variables: i = 3 + 1 = 4, j = 6

10 30 40 50 80 90 70
Index: 1 2 3 4 5 6 7

After pass 6, j becomes 7, so we come out of the loop.

Now we have to swap (a[i + 1], a[r]), i.e. Swap(80, 70):

10 30 40 50 70 90 80
Index: 1 2 3 4 5 6 7

Now, the element 70 is brought to its appropriate position by the partition function. Now
the same procedure will be applied to the left part and right part of the element 70.

Thus, return 𝑞 = 𝑖 + 1 = 5
Now call 𝑄𝑢𝑖𝑐𝑘_𝑠𝑜𝑟𝑡(𝐴, 𝑝, 𝑞 − 1) and 𝑄𝑢𝑖𝑐𝑘_𝑠𝑜𝑟𝑡(𝐴, 𝑞 + 1, 𝑟),
i.e, 𝑄𝑢𝑖𝑐𝑘_𝑠𝑜𝑟𝑡(𝐴, 1, 4) and 𝑄𝑢𝑖𝑐𝑘_𝑠𝑜𝑟𝑡(𝐴, 6, 7).

Analysis of Quick Sort

Best Case:
The best-case scenario for quick sort occurs when the pivot chosen at each step divides the array into roughly two equal halves. Suppose the partition procedure produces two regions of


size n/2 each. The partition algorithm performs n comparisons (possibly n − 1 or n + 1, depending on the implementation).

The running time of quick sort can then be expressed as a recurrence relation:

T(n) = T(n/2) + T(n/2) + n = 2T(n/2) + n

By the Master theorem, the solution of the recurrence 2T(n/2) + n is Θ(n log n).

Therefore the best-case time complexity of quick sort is Θ(n log n).

Worst Case:
The worst-case scenario for quick sort occurs when the array is already sorted and the pivot is always chosen as the smallest or largest element. In this case the pivot consistently produces highly unbalanced partitions at each step.

Let T(n) represent the total time taken by the quick sort algorithm to sort an array of size n. Since part 1 contains n − 1 elements, part 2 contains 1 element, and the partition procedure requires n comparisons, then

T(n) = T(n − 1) + T(1) + n = T(n − 1) + n (∵ T(1) = 0)

Solution:

T(n) = T(n − 1) + n

     = n + T(n − 1)

     = n + (n − 1) + T(n − 2)

     = n + (n − 1) + (n − 2) + T(n − 3)

     = n + (n − 1) + (n − 2) + ⋯ + 3 + 2 + T(1)

     = n + (n − 1) + (n − 2) + ⋯ + 3 + 2 + 0   (∵ T(1) = 0)

     = (n + (n − 1) + (n − 2) + ⋯ + 3 + 2 + 1) − 1

     = n(n + 1)/2 − 1 = O(n²)
Therefore Worst Case Time Complexity of Quick Sort is 𝑂(𝑛2 ).
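The quadratic closed form can be sanity-checked by unrolling the worst-case recurrence directly:

```python
def worst_case(n):
    """Unroll T(n) = T(n-1) + n with T(1) = 0 iteratively."""
    total = 0
    for m in range(2, n + 1):   # each level adds its partition cost m
        total += m
    return total

# The derivation above gives T(n) = n(n+1)/2 - 1.
for n in (1, 5, 100):
    assert worst_case(n) == n * (n + 1) // 2 - 1
print("closed form n(n+1)/2 - 1 verified")
```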

Average Case:


Suppose the size of the array is n and in the first pass the pivot partitions the array into two parts of sizes k − 1 and n − k.

Then the running time is given by the recurrence:

T(n) = n + T(k − 1) + T(n − k), with T(0) = 0 and T(1) = 0

Assume that the pivot element is a random element of the array to be partitioned. That is, for k = 1, 2, …, n, the probability that the pivot element is the k-th largest element of the array is 1/n.

Since all values of k are equally likely, we average over all k:

T(n) = (1/n) Σ_{k=1}^{n} [n + T(k − 1) + T(n − k)],  T(0) = 0, T(1) = 0 … (1)

     = n + (1/n) Σ_{k=1}^{n} T(k − 1) + (1/n) Σ_{k=1}^{n} T(n − k)
𝑛 𝑛
By substituting i = k − 1 in the first sum (and i = n − k in the second), we get:

T(n) = n + (1/n) Σ_{i=0}^{n−1} T(i) + (1/n) Σ_{i=0}^{n−1} T(n − i − 1)

Since Σ_{i=0}^{n−1} T(i) = Σ_{i=0}^{n−1} T(n − i − 1), T(n) can be written as

T(n) = n + (2/n) Σ_{i=0}^{n−1} T(i)

⟹ nT(n) = n² + 2 Σ_{i=0}^{n−1} T(i) … (2)

Substituting n by n − 1, we get

(n − 1)T(n − 1) = (n − 1)² + 2 Σ_{i=0}^{n−2} T(i) … (3)

Subtracting equation (3) from (2), we get

nT(n) − (n − 1)T(n − 1) = n² + 2 Σ_{i=0}^{n−1} T(i) − (n − 1)² − 2 Σ_{i=0}^{n−2} T(i)


⇒ nT(n) − (n − 1)T(n − 1) = n² − (n − 1)² + 2T(n − 1)

Rearranging and simplifying the above equation, we get

nT(n) = (n + 1)T(n − 1) + 2n − 1

Dividing both sides by n(n + 1), we get

T(n)/(n + 1) = T(n − 1)/n + (2n − 1)/(n(n + 1))

⇒ T(n)/(n + 1) ≈ T(n − 1)/n + 2/n

Let S(n) = T(n)/(n + 1); then the recurrence relation becomes:

S(n) = S(n − 1) + 2/n,  S(1) = 0

Now, by applying the repeated substitution method,

S(n) = 2/n + S(n − 1)

     = 2/n + 2/(n − 1) + S(n − 2)

     = ⋯ = 2/n + 2/(n − 1) + 2/(n − 2) + ⋯ + 2/3 + 2/2 + S(1)

     ≈ 2 Σ_{i=1}^{n} 1/i ≈ 2 ln n   [Harmonic series]

So, S(n) ≈ 2 ln n, i.e. T(n)/(n + 1) ≈ 2 ln n

⇒ T(n) ≈ 2(n + 1) ln n ≈ 1.39 n log₂ n

The expected case for quick sort is fairly close to the best case (only 39% more
comparisons) and nothing like the worst case.

In most (not all) tests, quick sort turns out to be a bit faster than merge sort.


Quick sort performs about 39% more comparisons than merge sort, but much less movement (copying) of array elements.
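A rough experiment (an illustration, not part of the notes) that counts the comparisons made by the Lomuto-partition quick sort on random inputs. For moderate n the average sits between n lg n and the asymptotic ≈ 1.39 n lg n, well away from the quadratic worst case:

```python
import random

def count_comparisons(a):
    """Quick sort `a` in place, returning the number of element comparisons."""
    comparisons = 0
    def sort(p, r):
        nonlocal comparisons
        if p < r:
            x, i = a[r], p - 1          # Lomuto partition with pivot a[r]
            for j in range(p, r):
                comparisons += 1
                if a[j] <= x:
                    i += 1
                    a[i], a[j] = a[j], a[i]
            a[i + 1], a[r] = a[r], a[i + 1]
            q = i + 1
            sort(p, q - 1)
            sort(q + 1, r)
    sort(0, len(a) - 1)
    return comparisons

random.seed(1)
n = 4096                                # lg n = 12
trials = [count_comparisons(random.sample(range(10 * n), n)) for _ in range(20)]
avg = sum(trials) / len(trials)
print(f"average comparisons / (n lg n) = {avg / (n * 12):.2f}")
```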

Space Complexity:

In quick sort, the space complexity is calculated on the basis of space used by the recursion
stack. In the worst case, the space complexity is 𝑂(𝑛) because in worst case, n recursive
calls are made. And, the average space complexity of a quick sort algorithm is 𝑂(log 𝑛).
