
Design and Analysis of Algorithm

Department of Computer Science
Faculty of Computer Sciences and Information Technology
Bayero University, Kano

Csc2202 by S.H. Muhammad 1


Lecture 2



Previous Lecture
"Analysis of algorithms" is an investigation of an algorithm's
efficiency with respect to two resources:
• running time
• memory space


Previous Lecture
The sequence of steps one typically goes through in designing
and analyzing an algorithm:
• Understanding the problem
• Ascertaining the capabilities of the computational device
• Choosing between exact and approximate problem solving
• Algorithm design techniques
• Methods of specifying an algorithm
• Proving an algorithm's correctness
• Analyzing an algorithm
• Coding an algorithm


Today
General framework for analyzing algorithm efficiency:
• Measuring an input's size
• Units for measuring running time
• Orders of growth
• Worst-case, best-case, and average-case efficiencies


Analysis of algorithms
Issues:
• correctness
• time efficiency
• space efficiency
• optimality

Approaches:
• theoretical analysis
• empirical analysis



Efficiency

The time and space requirements of an algorithm are called the
computational complexity of the algorithm.
The greater the amount of time and space required, the more
complex the algorithm is.
There are two kinds of efficiency: time efficiency and space
efficiency.
Time efficiency, also called time complexity, indicates how fast
the algorithm in question runs.
Space efficiency, also called space complexity, refers to the
amount of memory units required by the algorithm in addition
to the space needed for its input and output.


Efficiency
In the early days of electronic computing, both resources,
time and space, were at a premium.
Technological innovations have improved the computer's
speed and memory size by many orders of magnitude.


Efficiency
Now the amount of extra space required by an algorithm is
typically not of as much concern.
The time issue, however, has not diminished to the same
extent, so we will spend more time analyzing the running
time of algorithms.
We are also interested in the efficiency of algorithms as a
function of input size.


Efficiency
Assume we have a small input; an algorithm or program
running on that input will take a small amount of time.
If the input becomes 10 times larger, the time taken by the
program may also increase; it may become 10, 20, or 100
times larger.
It is this behavior of the running time as the input size
grows, i.e. the running time as a function of input size, that
is of interest to us.


Why consider running time?
Normally we are concerned with the time complexity rather
than the space complexity of an algorithm.
The reasons are, firstly, that it has become easier and
cheaper to obtain space, and secondly, that techniques for
achieving space efficiency by spending more time are
available.
In what follows, we use "complexity" to mean time
complexity unless otherwise indicated.


Running Time



Measuring the Running Time of an Algorithm
How should we measure the running time of an algorithm?
There are two ways:

1. Empirical or practical approach
2. Theoretical analysis of time efficiency


Empirical (Practical) Analysis of Time Efficiency
Experimental study:
• Write a program that implements the algorithm.
• Run the program on data sets of different input sizes, some
smaller and some larger.
• Clock the time the program takes. "Clocking" does not mean
sitting next to a stopwatch; you can use a system utility such
as Java's System.currentTimeMillis() to time the program,
and from that figure out how good your algorithm is.


Timing Program Example
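A timing program of the kind this slide illustrates can be sketched in Python (the slides mention Java's System.currentTimeMillis(); time.perf_counter is Python's closest analogue, and the reverse-sorted workload and use of the built-in sorted() are illustrative choices):

```python
import time

def time_algorithm(func, make_input, sizes):
    """Clock func on inputs of increasing size and report the timings.

    A sketch of the empirical approach: run the same algorithm on
    several input sizes and record the elapsed wall-clock time.
    """
    results = []
    for n in sizes:
        data = make_input(n)
        start = time.perf_counter()           # analogous to System.currentTimeMillis()
        func(data)
        elapsed = time.perf_counter() - start
        results.append((n, elapsed))
        print(f"n = {n:>7}: {elapsed * 1000:.3f} ms")
    return results

time_algorithm(sorted, lambda n: list(range(n, 0, -1)), [1_000, 10_000, 100_000])
```

Plotting the returned (n, elapsed) pairs gives exactly the kind of scatter plot discussed on the following slides.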



Empirical (Practical) Analysis of Time Efficiency

In general, we are interested in determining the dependency
of the running time on the size of the input. To determine
this we can perform several experiments on many different
test inputs of various sizes.
We can then visualize the results of such experiments by
plotting the performance of each run of the algorithm as a
point with x-coordinate equal to the input size n and
y-coordinate equal to the running time t.
The result of the experimental study in the next slide shows
dots with coordinates (n, t), indicating that on an input of
size n the running time of the algorithm is t milliseconds (ms).
Algorithm A was executed on a fast computer and algorithm
B on a slower computer.


Result of Experimental Study of the Running Time of an Algorithm


Limitations of Experimental Studies
It is necessary to implement and test the algorithm in order
to determine the running time.
Implementing it is a huge overhead; you have to spend a
considerable amount of time.
Experiments can be done only on a limited set of inputs and
may not indicate the running time on other inputs not
included in the experiment.
In order to compare two algorithms, you have to use exactly
the same platform for the comparison, where "platform"
means both the hardware (same computer) and the software
environment.


Limitations of Experimental Studies (cont.)
The results depend on the quality of the program
implementing the algorithm and on the compiler used in
generating the machine code.
There is also the difficulty of clocking the actual running
time of the program.
Therefore, since we are after a measure of an algorithm's
efficiency, we would like a metric that does not depend on
these extraneous factors.
One possible approach is to count the number of times each
of the algorithm's operations is executed.


Theoretical Analysis of Time Efficiency
We need a general methodology for analyzing the running
time of algorithms. The approach:
• Uses a high-level description of the algorithm instead of testing one
of its implementations.
• Pseudocode is the high-level description of an algorithm, and this is
how we will specify all our algorithms for the purposes of this course.
• Takes into account all possible inputs.
• Allows one to evaluate the efficiency of any algorithm in a way that
is independent of the hardware and software environment.
• This method is called theoretical analysis of time efficiency.
• It aims at associating with each algorithm a function f(n) that
characterizes the running time of the algorithm in terms of the
input size n.


Theoretical Analysis of Time Efficiency
Some of the functions that will be encountered include n and n².
For example, we will write statements of the type
"Algorithm A runs in time proportional to n", meaning that
if we were to perform experiments, we would find that the
actual running time of algorithm A on any input of size n
never exceeds cn, where c is a constant that depends on
the hardware and software environment.
Given two algorithms A and B, where A runs in time
proportional to n and B runs in time proportional to n², we
will prefer A to B, since the function n grows at a smaller
rate than the function n².


Theoretical Analysis of Time Efficiency
Complexity theory is an approximation theory.
We are not interested in the exact time required by an
algorithm to solve the problem.
The constant of the fastest-growing term is insignificant.
Rather, we are interested in the order of growth, i.e.:
• How much faster will the algorithm run on a computer that is
twice as fast?
• How much longer does it take to solve a problem of double the
input size?

The unit of time complexity, strictly speaking, should be the
number of execution steps, although we often do not use any
unit.


Factors Affecting Running Time



Pseudocode

A mixture of natural language and high-level programming
concepts that describes the main ideas behind a generic
implementation of a data structure or algorithm.
An example of pseudocode is given in the next slide.
The algorithm takes an array A, which stores integers, and
tries to find the maximum element in this array:
Algorithm arrayMax(A, n). This example is not a program,
because the syntax is wrong for any programming language,
but it is pseudocode: a mixture of natural language and some
high-level programming concepts.


Pseudocode


Pseudocode
It is more structured than usual prose but less formal than a
programming language.

Expressions:
• Use standard mathematical symbols to describe numeric and
boolean expressions.
• Use ← for assignment.
• Use = for the equality relationship.

Method declaration:
Algorithm name(argument1, argument2)

Definitions
Problem: a specification of what are valid inputs and what
are acceptable outputs for each valid input.
Input: the data that needs to be supplied.
Output: the result that needs to be generated.
Input instance: a value x is an input instance for problem p
if x is a valid input as per the specification.


Pseudocode


Primitive Operations
First we identify the primitive operations in our pseudocode.
What is a primitive operation? It is a low-level operation that
is independent of the programming language and can be
identified in pseudocode, e.g.:
• Data movement (assignment)
• Control (branch, subroutine call, return)
• Arithmetic and logical operations (e.g. addition, comparison)


Algorithmic Statements
The instruction to be performed at each step can be:

• Arithmetic and logical operations
  – A = B + C    executed in one step, with 2 operations

• Jumps and conditional jumps
  – if A > B then goto    executed in one step, with one operation

• Array operations (one-dimensional)
  – A[i] = B, B = C[i]    each executed in one step, with 2 operations


More Algorithmic Statements
A = B + C * D – F    3 steps are involved

A[i] = B[j] + C[j]    4 steps per instruction

for i = 1 to n
    C[i] = A[i] + B[i]

▪ The unit of time complexity, strictly speaking, should be the
number of execution steps, although we often do not use any
unit.


Units of Measuring Running Time

1. Measure running time in milliseconds, seconds, etc.
• Depends on computer speed.
• Depends on the quality of the program implementing the
algorithm and of the compiler used in generating the
machine code.
• Also, there is the difficulty of clocking the actual running
time of the program.
2. Count the number of times each operation is executed.
• Depends on the implementation!
• Difficult and unnecessary.


Units of Measuring Running Time (cont.)
3. The best way is to identify the most important operation of
the algorithm, called the basic operation: the operation
contributing the most to the total running time. We then
compute the number of times the basic operation is
executed.


Counting the Number of Operations


Counting the Number of Operations

for i = 1 to n
    C[i] = A[i] + B[i]

Csc2202 by S.H. Muhammad 35
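As an illustrative sketch (in Python, which the slides do not use), the loop above can be instrumented to count its operations explicitly; charging one addition and one assignment per iteration is an assumption consistent with the earlier slide on algorithmic statements:

```python
def add_arrays_with_count(A, B):
    """Compute C[i] = A[i] + B[i] for all i, counting operations:
    one addition and one assignment per iteration, so 2n in total."""
    n = len(A)
    C = [0] * n
    ops = 0
    for i in range(n):
        C[i] = A[i] + B[i]
        ops += 2
    return C, ops

print(add_arrays_with_count([1, 2, 3], [10, 20, 30]))  # ([11, 22, 33], 6)
```

The count grows linearly with n, which is exactly the kind of dependency the analysis framework is after.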


Basic or Primitive Operation of an Algorithm

Basic operation: the operation that contributes the most towards
the running time of the algorithm.
How do we identify the basic or primitive operation of an
algorithm? The basic operation is usually selected to be the
operation that:
• is needed to solve the problem;
• is the most time consuming;
• is the most frequently used.
For example, most sorting algorithms work by comparing
elements (keys) of a list being sorted with each other; for such
algorithms, the basic operation is a key comparison.


Basic or Primitive Operation of an Algorithm
As another example, algorithms for mathematical problems
typically involve some or all of the four arithmetic
operations: addition, subtraction, multiplication, and
division.
So, we will measure the running time of an algorithm by
counting the number of times the algorithm's basic operation
is executed on inputs of size n.
We will also find out how to compute such a count for
nonrecursive and recursive algorithms.
The efficiency analysis framework ignores multiplicative
constants and concentrates on the count's order of growth to
within a constant multiple for large-size inputs.


Input Size and Basic Operation Examples

Problem: searching for a key in a list of n items
  Input size measure: number of the list's items, i.e. n
  Basic operation: key comparison

Problem: multiplication of two matrices
  Input size measure: matrix dimensions or total number of elements
  Basic operation: multiplication of two numbers

Problem: checking primality of a given integer n
  Input size measure: size of n = number of digits (in binary representation)
  Basic operation: division

Problem: typical graph problem
  Input size measure: number of vertices and/or edges
  Basic operation: visiting a vertex or traversing an edge


Theoretical Analysis of Time Efficiency
Time efficiency is analyzed by determining the number of
repetitions of the basic operation as a function of input size:

T(n) ≈ c_op · C(n)

where T(n) is the running time, c_op is the execution time
(cost) of the basic operation, C(n) is the number of times the
basic operation is executed, and n is the input size.

Note: different basic operations may cost differently!
Theoretical analysis of time efficiency
The count C(n) does not contain any information about
operations that are not basic, and, in fact, the count itself is
often computed only approximately.



Examples of Basic Operation

for i = 1 to n
    C[i] = A[i] + B[i]


Operation Count Examples

for (i = 0; i < n; i++)
    cout << A[i] << endl;

Number of output operations = n
Order of Growth
The rate at which the running time increases as a function
of the input size is called the rate of growth or order of growth.
For large values of n, it is the function's order of growth that
counts.
For example, in the cases below, we can approximate each
expression by its highest-order term:

100n⁴ + 3n² + 60n + 5000 ≈ n⁴
1000n² + n log n + 2ⁿ ≈ 2ⁿ

Csc2202 by S.H. Muhammad 43
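A quick numerical check (a Python sketch; the polynomial is the slide's first example) shows why the highest-order term dominates:

```python
def f(n):
    """The polynomial from the slide: 100n^4 + 3n^2 + 60n + 5000."""
    return 100 * n**4 + 3 * n**2 + 60 * n + 5000

# The ratio f(n) / n^4 approaches the leading constant 100 as n grows,
# so for large n the n^4 term alone determines the order of growth.
for n in [10, 1_000, 100_000]:
    print(n, f(n) / n**4)
```

The lower-order terms contribute less and less, which is why the framework ignores them along with the multiplicative constant.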


Types of Formulas for a Basic Operation's Count

Exact formula:
e.g. C(n) = n(n − 1)/2

Formula indicating order of growth with a specific
multiplicative constant:
e.g. C(n) ≈ 0.5n²

Formula indicating order of growth with an unknown
multiplicative constant:
e.g. C(n) ≈ cn²


Commonly Used Rates of Growth


Order of growth in decreasing order



Values (some approximate) of several functions



Types of Analysis
Algorithm efficiency depends on the input size n, and for
some algorithms efficiency also depends on the TYPE of input.
There are three different types of analysis of algorithms:

Worst case: C_worst(n) – maximum over inputs of size n
Best case: C_best(n) – minimum over inputs of size n
Average case: C_avg(n) – "average" over inputs of size n


Example

Let us consider a sequential search algorithm. This is a
straightforward algorithm that searches for a given item
(some search key K) in a list of n elements by checking
successive elements of the list until either a match with the
search key is found or the list is exhausted.

Csc2202 by S.H. Muhammad 49
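A minimal Python sketch of sequential search (the slides specify algorithms in pseudocode; the comparison counter is added here for illustration, since the key comparison is the basic operation the next slides count):

```python
def sequential_search(A, K):
    """Return (index of K in A, number of key comparisons),
    or (-1, comparisons) if K is not present."""
    comparisons = 0
    for i, item in enumerate(A):
        comparisons += 1          # one key comparison per element examined
        if item == K:
            return i, comparisons
    return -1, comparisons

print(sequential_search([7, 3, 9, 3], 9))  # (2, 3): found after 3 comparisons
print(sequential_search([7, 3, 9, 3], 5))  # (-1, 4): worst case, n comparisons
```

The unsuccessful search illustrates the worst case discussed next: every one of the n elements is compared against K.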


Worst-Case Efficiency

The worst-case efficiency of an algorithm is its efficiency for
the worst-case input of size n, which is an input (or inputs) of
size n for which the algorithm runs the longest among all
possible inputs of that size.
In the worst case, when there are no matching elements or
the first matching element happens to be the last one on the
list, the algorithm makes the largest number of key
comparisons among all possible inputs of size n:
C_worst(n) = n.
The worst-case analysis provides very important information
about an algorithm's efficiency by bounding its running time
from above.


Worst-Case Efficiency

The worst case guarantees that for any instance of size n, the
running time will not exceed C_worst(n), its running time on
the worst-case inputs.
It is also called the upper-bound efficiency.


Best-Case Efficiency

The best-case efficiency of an algorithm is its efficiency for
the best-case input of size n, which is an input (or inputs) of
size n for which the algorithm runs the fastest among all
possible inputs of that size.
Example: for sequential search, best-case inputs are lists of
size n with their first elements equal to the search key;
accordingly, C_best(n) = 1.
The analysis of the best-case efficiency is not nearly as
important as that of the worst-case efficiency.
But it is not completely useless. For example, there is a
sorting algorithm (insertion sort) for which the best-case
inputs are already-sorted arrays, on which the algorithm
works very fast.
Best-Case Efficiency

Thus, such an algorithm might well be the method of choice
for applications dealing with almost-sorted arrays.
And, of course, if the best-case efficiency of an algorithm is
unsatisfactory, we can immediately discard it without
further analysis.
It is also called the lower-bound efficiency.


Average-Case Efficiency

The average case yields information about an algorithm's
behaviour on a "typical" or "random" input.
To analyze the algorithm's average-case efficiency, we must
make some assumptions about possible inputs of size n.
The investigation of average-case efficiency is considerably
more difficult than the investigation of worst-case and
best-case efficiency.
How do we find the average-case efficiency?


Average Case of Sequential Search

Two assumptions:
• The probability of a successful search is p (0 ≤ p ≤ 1).
• The search key can be at any index with equal probability
(uniform distribution).

C_avg(n) = expected # of comparisons
         = (expected # of comparisons for a successful search)
         + (expected # of comparisons when K is not in the list)

Note: the average-case efficiency cannot be obtained by taking
the average of the worst-case and the best-case efficiencies.



Average case calculation
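Under the two assumptions on the previous slide, the standard result is C_avg(n) = p(n + 1)/2 + n(1 − p). A small Monte Carlo simulation (a Python sketch; the helper name and trial count are illustrative) agrees with the formula:

```python
import random

def avg_comparisons(n, p, trials=100_000):
    """Estimate sequential search's average number of key comparisons
    on lists of size n, where the key is present with probability p and,
    if present, is equally likely to be at any of the n positions."""
    total = 0
    for _ in range(trials):
        if random.random() < p:
            total += random.randint(1, n)  # success at position i costs i comparisons
        else:
            total += n                     # unsuccessful search checks all n elements
    return total / trials

n, p = 10, 0.5
print(avg_comparisons(n, p))          # close to the formula value below
print(p * (n + 1) / 2 + n * (1 - p))  # C_avg(n) = p(n+1)/2 + n(1-p) = 7.75
```

With p = 1 the formula reduces to (n + 1)/2, matching the summary on the next slides; with p = 0 it reduces to n, the worst case.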



Example: Sequential Search

Worst case: n key comparisons
Best case: 1 comparison
Average case: (n + 1)/2 comparisons, assuming K is in A
Observation

The complexity of an algorithm normally depends on the size
of the input.
In addition, the number of operations may depend on the
particular input.
Solution:
For different sets of input data, we analyse the performance
of an algorithm in the worst case or in the average case.
For different algorithms, we focus on the growth rate of the
time taken by the algorithms as the input size increases.


Summary of Analysis Framework
Time and space efficiencies are functions of input size
Time efficiency is # of times basic operation is executed
Space efficiency is # of extra memory units consumed
Efficiencies of some algorithms depend on type of input:
requiring worst, best, average case analysis
Focus is on order of growth of running time (or extra
memory units) as input size goes to infinity



Reminder
Read the textbooks to get more explanation.
Practice all the questions at the end of each chapter to
master the contents of the chapter.


Thank You



Design and Analysis of Algorithm
(CSC2302)

Department of Computer Science
Faculty of Computer Sciences and Information Technology
Bayero University, Kano


Lecture 3
Asymptotic Notations and Basic Efficiency Classes



QUESTION FEARLESSLY

Never, ever, be afraid to ask questions. There is no
such thing as a silly question, and you will often find
that several other people have been pondering the
same thing. This in itself gives you confidence.


"Dumb Questions are the Best"

Dr John Sargeant


Roles of Algorithms in Computing

Algorithms are taking over the world:

• Cooking wirelessly
• Self-driving cars
• Robotics... have you seen the movie called Chappie? :)
• Mobile health through e-monitoring
• Drones
• Big data

These lists are far from exhaustive.


Self-Driving Cars



Robotics


Apple Watch



This lecture is going to be really mathematical ☺


Previous Class: Growth-Rate Functions
O(1) – constant time; the time is independent of n, e.g.
array look-up
O(log n) – logarithmic time; usually the log is base 2,
e.g. binary search
O(n) – linear time, e.g. linear search
O(n log n) – e.g. efficient sorting algorithms
O(n²) – quadratic time, e.g. selection sort
O(nᵏ) – polynomial (where k is some constant)
O(2ⁿ) – exponential time, very slow!

Order of growth of some common functions:
O(1) < O(log n) < O(n) < O(n log n) < O(n²) < O(n³) < O(2ⁿ)

Csc2202 by S.H. Muhammad 74
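A small sketch (in Python, chosen here for illustration) tabulating these classes as n doubles makes the ordering concrete: log n increases by only 1, n log n slightly more than doubles, n² quadruples, and 2ⁿ squares.

```python
import math

# Compare common growth-rate functions as n doubles.
for n in [16, 32, 64]:
    print(n, math.log2(n), n * math.log2(n), n**2, 2**n)
```

Already at n = 64, 2ⁿ is about 1.8 × 10¹⁹, which is why exponential-time algorithms are impractical except for tiny inputs.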


Previous Class
Note on constant time:
we write O(1) to indicate something that takes a
constant amount of time.
• E.g. finding the minimum element of an ordered array
takes O(1) time, because the min is either at the beginning
or the end of the array.
• Important: constants can be huge, so in practice O(1)
is not necessarily efficient; all it tells us is that the
algorithm will run at the same speed no matter the size of
the input we give it.


Exact Analysis is Hard!



Even Harder Exact Analysis



Introduction
Step counts let us compare the time complexity of two programs
that compute the same function, and also predict the growth in
run time as the instance characteristics change.
Determining the exact step count is difficult, and not necessary
either, because the values are not exact quantities.
We need only comparative statements like c₁n² ≤ t_P(n) ≤ c₂n².
For example, consider two programs with complexities
c₁n² + c₂n and c₃n respectively.
For small values of n, which is faster depends on the values of
c₁, c₂, and c₃.


Introduction
But there will also be an n beyond which the complexity c₃n
is better than c₁n² + c₂n.
This value of n is called the break-even point. If this point is
zero, c₃n is always faster.

Csc2202 by S.H. Muhammad 79
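The break-even point can be found numerically; this Python sketch and its coefficient values are illustrative:

```python
def break_even(c1, c2, c3):
    """Smallest integer n >= 1 at which c3*n beats c1*n^2 + c2*n,
    i.e. the break-even point of the two complexities above."""
    n = 1
    while c3 * n >= c1 * n * n + c2 * n:
        n += 1
    return n

print(break_even(1, 2, 100))  # 99: for n >= 99, n^2 + 2n exceeds 100n
```

Solving c₃n < c₁n² + c₂n algebraically gives n > (c₃ − c₂)/c₁, which for these coefficients is n > 98, matching the loop's answer.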


Why ignore constants?
Implementation issues (hardware, code optimizations) can
speed up an algorithm by constant factors.
• We want to understand how effective an algorithm is independently
of these factors.
Simplification of analysis:
• It is much easier to analyze if we focus only on n² rather than
worrying about 3.7n² or 3.9n².
Asymptotic efficiency: we are looking at the efficiency of
the algorithm for large n.


Asymptotic Analysis

We focus on the infinite set of large n, ignoring small
values of n.
Usually, an algorithm that is asymptotically more
efficient will be the best choice for all but very small
inputs.


Purpose of Asymptotic Notation
Asymptotic notation is a standard means for describing
families of functions that share similar asymptotic behavior.
Asymptotic notation allows us to ignore small input sizes,
constant factors, lower-order terms in polynomials, and so
forth. It is used:
• to estimate the largest input that can reasonably be given to
the program;
• to compare the efficiency of different algorithms;
• to choose an algorithm for an application.


Purpose of Asymptotic Notation
The efficiency analysis framework concentrates on the order
of growth of an algorithm's basic operation count as the
principal indicator of the algorithm's efficiency.
To compare and rank such orders of growth, computer
scientists use three notations: O (big oh), Ω (big omega), and
Θ (big theta).


Asymptotic Order of Growth
• A way of comparing functions that ignores constant factors and
small input sizes (why?).
• Three notations are used to compare orders of growth of an
algorithm's basic operation count:

• O(g(n)): class of functions f(n) that grow no faster than g(n)

• Θ(g(n)): class of functions f(n) that grow at the same rate as g(n)

• Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)


O (big oh) Notation

[Figure: t(n) lies below c × g(n) for all n ≥ n₀; the region n < n₀
doesn't matter. Illustrates t(n) ∈ O(g(n)).]


Asymptotic Analysis of an Algorithm
The asymptotic analysis of an algorithm determines the
running time in big-Oh notation.
To perform the asymptotic analysis:
– We find the worst-case number of primitive operations
executed as a function of the input size.
– We express this function with big-Oh notation.
Example:
– We determine that algorithm arrayMax executes at most
6n – 1 primitive operations.
– We say that algorithm arrayMax "runs in O(n) time".
Since constant factors and lower-order terms are eventually
dropped anyhow, we can disregard them when counting
primitive operations.

Csc2202 by S.H. Muhammad 86
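A Python sketch of the arrayMax algorithm referenced above (the slides give it only in pseudocode, and the exact primitive-operation count such as 6n − 1 depends on the counting convention):

```python
def array_max(A):
    """Return the maximum element of a non-empty array A.

    One pass over the array: the number of primitive operations
    (comparisons, assignments, loop bookkeeping) is proportional
    to n = len(A), so the algorithm runs in O(n) time.
    """
    current_max = A[0]
    for i in range(1, len(A)):
        if A[i] > current_max:   # one comparison per remaining element
            current_max = A[i]
    return current_max

print(array_max([3, 1, 4, 1, 5, 9, 2, 6]))  # 9
```

Whether each iteration costs 4, 6, or 7 primitive operations, the total stays within a constant factor of n, which is all that O(n) asserts.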


O-Notation (contd.)

Definition: a function t(n) is said to be in O(g(n)),
denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some
positive constant multiple of g(n) for all sufficiently large n.
The definition is illustrated in the previous figure.

Intuitively: the set of all functions whose rate of growth is the
same as or lower than that of g(n).

If we can find positive constants c and n₀ such that
t(n) ≤ c × g(n) for all n ≥ n₀,
then g(n) is an asymptotic upper bound for t(n).
Set Notation Comment

O(f(n)) is a set of functions.
However, we will use one-way equalities like
n = O(n²).
This really means that the function n belongs to the
set of functions O(n²).
Incorrect notation: O(n²) = n.
Analogy:
• "A dog is an animal" but not "an animal is a dog".


Big-O Visualization

O(g(n)) is the set of functions with a smaller or the same
order of growth as g(n).


Big O Examples (cont.)
10n is in O(n²)
5n + 20 is in O(n)


Big O Examples
3n³ = O(n³)
3n³ + 8 = O(n³)
8n² + 10n log(n) + 100n + 10²⁰ = O(n²)
3 log(n) + 2n^(1/2) = O(n^(1/2))
2¹⁰⁰ = O(1)
O-Notation (contd.)



Big-O Comparisons

Function A            vs.   Function B
n³ + 2n²                    100n² + 1000
n^0.1                       log n
n + 100n^0.1                2n + 10 log n
5n⁵                         n!
n⁻¹⁵ · 2^(n/100)            1000n¹⁵
8^(2 log n)                 3n⁷ + 7n


Big-O Winners (i.e. losers)

Function A            vs.   Function B        Winner
n³ + 2n²                    100n² + 1000      O(n²)
n^0.1                       log n             O(log n)
n + 100n^0.1                2n + 10 log n     O(n) TIE
5n⁵                         n!                O(n⁵)
n⁻¹⁵ · 2^(n/100)            1000n¹⁵           O(n¹⁵)
8^(2 log n)                 3n⁷ + 7n          O(n⁶)  why???


Big-Oh Rules

If f(n) is a polynomial of degree d, then f(n) is O(nᵈ),
i.e.:
• drop lower-order terms;
• drop constant factors.
Use the smallest possible class of functions:
• say "2n is O(n)" instead of "2n is O(n²)".
Use the simplest expression of the class:
• say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)".


No Uniqueness
There is no unique set of values for n₀ and c in proving
asymptotic bounds.

Prove that 100n + 5 = O(n²):

• 100n + 5 ≤ 100n + n = 101n ≤ 101n² for all n ≥ 5,
  so n₀ = 5 and c = 101 is a solution.

• 100n + 5 ≤ 100n + 5n = 105n ≤ 105n² for all n ≥ 1,
  so n₀ = 1 and c = 105 is also a solution.

We must find SOME constants c and n₀ that satisfy the
asymptotic notation relation.

Csc2202 by S.H. Muhammad 96
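The claimed constants can be sanity-checked numerically (a Python sketch; checking a finite range of n is evidence, not a proof, which is why the algebraic argument above is still needed):

```python
def bound_holds(c, n0, n_max=10_000):
    """Check t(n) = 100n + 5 <= c * n^2 for all n0 <= n <= n_max."""
    return all(100 * n + 5 <= c * n * n for n in range(n0, n_max + 1))

print(bound_holds(101, 5))  # True: the first (c, n0) pair from the slide works
print(bound_holds(105, 1))  # True: so does the second
print(bound_holds(1, 1))    # False: c = 1 fails for small n (e.g. n = 1)
```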


Role of Big-Oh in Asymptotic Algorithm Analysis

The asymptotic analysis of an algorithm determines the
running time in big-Oh notation.
To perform the asymptotic analysis:
• We find the worst-case number of primitive operations
executed as a function of the input size.
• We express this function with big-Oh notation.
Example:
• We determine that algorithm arrayMax executes at most 7n − 1
primitive operations.
• We say that algorithm arrayMax "runs in O(n) time".
Since constant factors and lower-order terms are
eventually dropped anyhow, we can disregard them
when counting primitive operations.


Big-O Usage
Order notation is not symmetric:
• we can say 2n² + 4n = O(n²)
• … but never O(n²) = 2n² + 4n.

Order expressions on the left can produce unusual-looking,
but true, statements:
O(n²) = O(n³)


Ω (big omega) Notation

[Figure: t(n) lies above c × g(n) for all n ≥ n₀; the region n < n₀
doesn't matter. Illustrates t(n) ∈ Ω(g(n)).]


Ω-Notation (contd.)

Definition: a function t(n) is said to be in Ω(g(n)),
denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some
positive constant multiple of g(n) for all sufficiently
large n.
Intuitively: the set of all functions whose rate of growth is the
same as or higher than that of g(n).

If we can find positive constants c and n₀ such that
t(n) ≥ c × g(n) for all n ≥ n₀,
then g(n) is an asymptotic lower bound for t(n).


Ω-Notation (contd.)



Θ (big theta) Notation

[Figure: t(n) is sandwiched between c₂ × g(n) and c₁ × g(n) for all
n ≥ n₀; the region n < n₀ doesn't matter. Illustrates t(n) ∈ Θ(g(n)).]


Θ-Notation (contd.)

Definition: a function t(n) is said to be in Θ(g(n)),
denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above
and below by some positive constant multiples of g(n)
for all sufficiently large n.
Intuitively: the set of all functions that have the same rate of
growth as g(n).

If we can find positive constants c₁, c₂, and n₀ such that
c₂ × g(n) ≤ t(n) ≤ c₁ × g(n) for all n ≥ n₀,
then g(n) is an asymptotically tight bound for t(n).


Θ-Notation (contd.)



Quick Questions

3n² − 100n + 6 = O(n²)
3n² − 100n + 6 = O(n³)
3n² − 100n + 6 ≠ O(n)

3n² − 100n + 6 = Ω(n²)
3n² − 100n + 6 ≠ Ω(n³)
3n² − 100n + 6 = Ω(n)

3n² − 100n + 6 = Θ(n²)?
3n² − 100n + 6 = Θ(n³)?
3n² − 100n + 6 = Θ(n)?


O, Θ, and Ω

(≥) Ω(g(n)): functions that grow at least as fast as g(n)
(=) Θ(g(n)): functions that grow at the same rate as g(n)
(≤) O(g(n)): functions that grow no faster than g(n)


Theorem

If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then
t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).
• Analogous assertions are true for the Ω and Θ notations.
• Implication: if sorting makes no more than n²
comparisons and then binary search makes no more
than log₂ n comparisons, then the overall efficiency is
O(max{n², log₂ n}) = O(n²).


Some Properties of Asymptotic Order of Growth

f(n) ∈ O(f(n))

f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))

If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n))
(note the similarity with a ≤ b)

If f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)), then
f₁(n) + f₂(n) ∈ O(max{g₁(n), g₂(n)})

Also, Σ₁≤i≤n Θ(f(i)) = Θ(Σ₁≤i≤n f(i))

Exercise: can you prove these properties?


Note !!!

For asymptotic analysis we generally concentrate on the
upper bound (O), because knowing the lower bound (Ω) of
an algorithm is of no practical importance, and we use
Θ notation when the upper bound (O) and the lower bound (Ω)
are the same.



Using Limits for Comparing Orders of Growth

The definitions of Big Oh, Omega, and Theta are rarely used
for comparing the orders of growth of two specific functions.
A much more convenient method for doing so is based on
computing the limit of the ratio of the two functions in question.
Three principal cases may arise: see next slide



Establishing order of growth using limits

lim T(n)/g(n) as n→∞ =
  0      ⇒ order of growth of T(n) < order of growth of g(n)
  c > 0  ⇒ order of growth of T(n) = order of growth of g(n)
  ∞      ⇒ order of growth of T(n) > order of growth of g(n)

1. The first two cases (0 and c) mean T(n) є O(g(n))

2. The last two cases (c and ∞) mean T(n) є Ω(g(n))

3. The second case (c) means T(n) є Θ(g(n))



Establishing order of growth using limits

The limit-based approach is often more convenient than the


one based on the definitions because it can take advantage
of the powerful calculus techniques developed for
computing limits, such as L’Hospital’s rule and Stirling’s
formula

Examples:
• 10n vs. n2

• n(n+1)/2 vs. n2
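Worked out with limits (standard computations, added here for clarity; not on the original slide):

```latex
\lim_{n\to\infty}\frac{10n}{n^2}=\lim_{n\to\infty}\frac{10}{n}=0
\;\Rightarrow\; 10n \in O(n^2)
\qquad
\lim_{n\to\infty}\frac{n(n+1)/2}{n^2}
=\lim_{n\to\infty}\frac{1}{2}\Bigl(1+\frac{1}{n}\Bigr)=\frac{1}{2}>0
\;\Rightarrow\; \frac{n(n+1)}{2}\in\Theta(n^2)
```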



L’Hôpital’s rule and Stirling’s formula
L’Hôpital’s rule:

lim t(n)/g(n) = lim t′(n)/g′(n)
n→∞             n→∞

Example: log n vs. n


Stirling’s formula: n! ≈ √(2πn) (n/e)^n
Example: 2^n vs. n!



Example 1

Compare the orders of growth of (1/2)n(n−1) and n^2.

Ans: the limit of their ratio is 1/2 > 0, so the two functions
have the same order of growth: (1/2)n(n−1) є Θ(n^2).


Remember Properties of Logarithms ?



Example 2

Compare the orders of growth of log2 n and n.
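A worked solution using L’Hôpital’s rule (added here for clarity):

```latex
\lim_{n\to\infty}\frac{\log_2 n}{n}
=\lim_{n\to\infty}\frac{(\log_2 n)'}{(n)'}
=\lim_{n\to\infty}\frac{1/(n\ln 2)}{1}=0
\;\Rightarrow\; \log_2 n \in O(n)
```

so log2 n grows slower than n.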



Example 3

Compare the orders of growth of n! and 2^n.

Ans = ∞
2^n grows very fast, but n! grows still faster.
We can write symbolically that n! ∈ Ω(2^n).
Basic asymptotic efficiency classes
1 constant

log n logarithmic

n linear

n log n n-log-n

n2 quadratic

n3 cubic

2n exponential

n! factorial



Let's Practice Classifying Functions

Which ones are more alike? [function pairs shown on the original slides]


Summary of How to Establish Order of Growth of Basic Operation
Count

Method 1: Using Limits


Method 2: Using Theorem
Method 3: Using definitions of O-, Ω-, and Θ-notations.



Things to Remember

Asymptotic analysis studies how the values of
functions compare as their arguments grow
without bound.

It ignores constants and the behavior of the
function for small arguments.

This is acceptable because all algorithms are fast for small
inputs, and the growth of the running time matters more
than constant factors.



Questions
Give an analysis of the running time for the following loops,
using the Big-Oh notation:

Problem 1
sum = 0;
for (i = 0; i < n; i++)
    sum++;



First C.A. Test
Read Chapters 1 and 2 of the book Data Structures and
Algorithms Made Easy.
Read Chapters 1 and 2 of Fundamentals of the Analysis of
Algorithm Efficiency, with the exercises.
The test will come from these chapters, and from Chapter 1
of the MIT textbook.


Some Mathematics You need To Review
Before Next Class



summation formulas
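The formula list itself did not survive extraction of the slide; the standard summation formulas used later in the course are:

```latex
\sum_{i=l}^{u} 1 = u - l + 1, \qquad
\sum_{i=1}^{n} i = \frac{n(n+1)}{2} \in \Theta(n^2), \qquad
\sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6} \in \Theta(n^3), \qquad
\sum_{i=0}^{n} a^i = \frac{a^{n+1}-1}{a-1} \;\; (a \neq 1)
```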



Sum Manipulation Rules
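The rule list itself was an image on the slide; the standard manipulation rules are:

```latex
\sum_{i=l}^{u} c\,a_i = c \sum_{i=l}^{u} a_i, \qquad
\sum_{i=l}^{u} (a_i \pm b_i) = \sum_{i=l}^{u} a_i \pm \sum_{i=l}^{u} b_i, \qquad
\sum_{i=l}^{u} a_i = \sum_{i=l}^{m} a_i + \sum_{i=m+1}^{u} a_i \;\; (l \le m < u)
```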



Logarithms and Exponents

• properties of logarithms:
logb(xy) = logb x + logb y
logb(x/y) = logb x - logb y
logb x^a = a logb x
logb a = logx a / logx b
• properties of exponentials:
a^(b+c) = a^b a^c
a^(bc) = (a^b)^c
a^b / a^c = a^(b-c)
b = a^(loga b)
b^c = a^(c·loga b)



Mathematical Induction

A powerful, rigorous technique for proving that a statement


S(n) is true for every natural number n, no matter how large.
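For instance, a short induction proof that 1 + 2 + … + n = n(n+1)/2, added here as an illustration:

```latex
\text{Claim } S(n):\ \sum_{i=1}^{n} i = \frac{n(n+1)}{2}. \\
\text{Basis: } S(1) \text{ holds, since } 1 = \tfrac{1\cdot 2}{2}. \\
\text{Inductive step: assume } S(k) \text{ holds. Then} \\
\sum_{i=1}^{k+1} i = \frac{k(k+1)}{2} + (k+1) = \frac{(k+1)(k+2)}{2}, \\
\text{so } S(k+1) \text{ holds, and } S(n) \text{ is true for all } n \ge 1.
```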



Lecture 4



Design and Analysis of Algorithm
(CSC2302)

Department of Computer Science

Faculty of Computer Sciences and Information


technology
Bayero University, Kano



Lecture 4
Analysis of Recursive and Non-Recursive Algorithms



QUESTION FEARLESSLY

Never, ever, be afraid to ask questions. There is no
such thing as a silly question, and you will often find
that several other people have been pondering the
same thing. This in itself gives you confidence.

"Dumb Questions are the Best"
Dr John Sargeant



Summary of How to Establish Order of Growth of Basic Operation
Count

Method 1: Using Limits


Method 2: Using Theorem
Method 3: Using definitions of O-, Ω-, and Θ-notations.



Recap of previous Class



Today

Analysis of Recursive and Non-Recursive Algorithms



Puzzle
A peasant finds himself on a riverbank with a wolf, a goat,
and a head of cabbage. He needs to transport all three to the
other side of the river in his boat. However, the boat has
room for only the peasant himself and one other item (either
the wolf, the goat, or the cabbage). In his absence, the wolf
would eat the goat, and the goat would eat the cabbage.
Solve this problem for the peasant or prove it has no
solution. (Note: The peasant is a vegetarian but does not like
cabbage and hence can eat neither the goat nor the cabbage
to help him solve the problem. And it goes without saying
that the wolf is a protected species).



Puzzle
There are four people who want to cross a rickety bridge;
they all begin on the same side. You have 17 minutes to get
them all across to the other side. It is night, and they have
one flashlight. A maximum of two people can cross the
bridge at one time. Any party that crosses, either one or two
people, must have the flashlight with them. The flashlight
must be walked back and forth; it cannot be thrown, for
example. Person 1 takes 1 minute to cross the bridge, person
2 takes 2 minutes, person 3 takes 5 minutes, and person 4
takes 10 minutes. A pair must walk together at the rate of
the slower person’s pace.



Program Statements
The number of steps assigned to a program statement
depends on the kind of statement:
• Comments ………………………… 0 steps
• Assignment statement …………… 1 step
• Condition statement ……………… 1 step
• Loop condition for "n" times …… n+1 steps
• Body of loop ……………………… n times



Types of Algorithm
There are two kinds of algorithms:
• Recursive algorithms
• Non-recursive algorithms

A function that calls itself is called a recursive function.

A recursive algorithm is an algorithm that has a recursive
call in it.
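As a minimal sketch (not from the slides; the function names are chosen here for illustration), the factorial function in both styles:

```python
def factorial_recursive(n):
    # A recursive algorithm: the function calls itself on a smaller input.
    if n == 0:        # base case: 0! = 1
        return 1
    return factorial_recursive(n - 1) * n

def factorial_iterative(n):
    # A non-recursive algorithm: the same computation done with a loop.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result
```

Both compute the same values; the recursive version mirrors the mathematical definition, while the iterative one avoids function-call overhead.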



Non-recursive Algorithms



Time efficiency of nonrecursive algorithms
General Plan for Analysis

1. Decide on parameter n indicating input size

2. Identify algorithm’s basic operation

3. Determine worst, average, and best cases for input of size n

4. Set up a sum for the number of times the basic operation is


executed

5. Simplify the sum using standard formulas and rules (see

Appendix A) or at least establish its order of growth



Why do we need summation formulas?

For computing the running times of iterative constructs (loops)



Recall



Analysis of Nonrecursive Algorithms

ALGORITHM MaxElement(A[0..n-1])
//Determines the largest element
maxval <- A[0]
for i <- 1 to n-1 do
    if A[i] > maxval
        maxval <- A[i]
return maxval

Input size: n
Basic operation: the comparison A[i] > maxval (or the assignment <-)
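A direct Python translation of the pseudocode (a sketch for illustration):

```python
def max_element(a):
    # Determines the largest element of a[0..n-1].
    maxval = a[0]
    for i in range(1, len(a)):
        if a[i] > maxval:      # basic operation: the comparison
            maxval = a[i]
    return maxval
```

The comparison executes exactly n-1 times regardless of the input, so C(n) = n-1 ∈ Θ(n).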



Analysis of Nonrecursive (contd.)

ALGORITHM UniqueElements(A[0..n-1])
//Determines whether all elements are distinct
for i <- 0 to n-2 do
    for j <- i+1 to n-1 do
        if A[i] = A[j]
            return false
return true

Input size: n
Basic operation: the comparison A[i] = A[j]
Does C(n) depend on the type of input?
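In Python, the same algorithm reads as follows (a sketch for illustration):

```python
def unique_elements(a):
    # Determines whether all elements of a[0..n-1] are distinct.
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:   # basic operation: the comparison
                return False
    return True
```

The number of comparisons does depend on the input: the worst case (all elements distinct, or the only duplicate pair is the last one checked) makes the full n(n-1)/2 comparisons.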



Analysis of Nonrecursive (contd.)



Guidelines for Asymptotic Analysis of Non-recursive algorithms :
simplified



Loops:
The running time of a loop is, at most, the running time of
the statements inside the loop (including tests) multiplied by
the number of iterations.

// executes n times
for (i=1; i<=n; i++)
{
    m = m + 2; // constant time, c
}
Total time = a constant c × n = cn = O(n).



Nested loops
Analyze from the inside out. The total running time is the
product of the sizes of all the loops.

// outer loop executed n times
for (i=1; i<=n; i++)
{
    // inner loop executed n times
    for (j=1; j<=n; j++)
    {
        k = k+1; // constant time
    }
}
Total time = c × n × n = cn^2 = O(n^2).



Consecutive statements
Add the time complexities of each statement.

x = x + 1; // constant time

// executed n times
for (i=1; i<=n; i++) {
    m = m + 2; // constant time
}

// outer loop executed n times
for (i=1; i<=n; i++)
{
    // inner loop executed n times
    for (j=1; j<=n; j++)
    {
        k = k+1; // constant time
    }
}
Total time = c1 + c2·n + c3·n^2 = O(n^2).
If-then-else statements
Worst-case running time: the test, plus either the then part or the else part
(whichever is larger).

if (length() != otherStack.length()) {    // test: constant
    return false;                         // then part: constant
} else {
    // else part: (constant + constant) * n
    for (int n = 0; n < length(); n++) {
        // another if: constant + constant (no else part)
        if (!list[n].equals(otherStack.list[n]))
            return false;                 // constant
    }
}
Total time = c0 + c1 + (c2 + c3) * n = O(n).



Logarithmic complexity
An algorithm is O(log n) if it takes constant time to cut the problem size by a
fraction (usually by 1/2). As an example, consider the following program:

for (i=1; i<=n;) {
    i = i*2;
}

If we observe carefully, the value of i doubles every time.
Initially i = 1, in the next step i = 2, and in subsequent steps i = 4, 8, and so on.

Assume the loop executes k times. That means at the k-th step 2^k = n
and we come out of the loop.

So, if we take the logarithm of both sides:

log 2^k = log n
k log 2 = log n
k = log n // if we assume base 2

So the total time = O(log n).
Logarithmic complexity (contd.)
Similarly, for the case below the worst-case rate of growth
is also O(log n).
That means the same discussion holds for a decreasing
sequence as well:

for (i=n; i>=1;) {
    i = i/2;
}


Recursive Algorithm



Plan for Analysis of Recursive Algorithms

Decide on a parameter indicating an input’s size.

Identify the algorithm’s basic operation.

Check whether the number of times the basic op. is executed


may vary on different inputs of the same size. (If it may, the
worst, average, and best cases must be investigated
separately.)

Set up a recurrence relation with an appropriate initial


condition expressing the number of times the basic op. is
executed.

Solve the recurrence (or, at the very least, establish its


solution’s order of growth) by backward substitutions or
another method.
Example 1: Recursive evaluation of n!

Definition: n ! = 1 ∗ 2 ∗ … ∗(n-1) ∗ n for n ≥ 1 and 0! = 1

Recursive definition of n!: F(n) = F(n-1) ∗ n for n ≥ 1 and


F(0) = 1

Size: n
Basic operation: multiplication
Recurrence relation: M(n) = M(n-1) + 1

M(0) = 0
Analysis of Recursive (contd.)

Recurrence relation
M(n) = M(n-1) + 1
M(0) = 0

By backward substitution:

M(n) = M(n-1) + 1 = M(n-2) + 1 + 1 = …
     = M(n-n) + 1 + … + 1  (n 1's)
     = 0 + n = n

We could compute n! nonrecursively, which saves the
function-call overhead.



Solving the recurrence for M(n)

M(n) = M(n-1) + 1, M(0) = 0


M(n) = M(n-1) + 1
= (M(n-2) + 1) + 1 = M(n-2) + 2
= (M(n-3) + 1) + 2 = M(n-3) + 3

= M(n-i) + i
= M(0) + n
=n
The method is called backward substitution.



Example 2: The Tower of Hanoi Puzzle

Recurrence for number of moves:


M(n) = 2M(n-1) + 1



Solving recurrence for number of moves

M(n) = 2M(n-1) + 1, M(1) = 1


M(n) = 2M(n-1) + 1
= 2(2M(n-2) + 1) + 1 = 2^2*M(n-2) + 2^1 + 2^0
= 2^2*(2M(n-3) + 1) + 2^1 + 2^0
= 2^3*M(n-3) + 2^2 + 2^1 + 2^0
=…
= 2^(n-1)*M(1) + 2^(n-2) + … + 2^1 + 2^0
= 2^(n-1) + 2^(n-2) + … + 2^1 + 2^0
= 2^n -1
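The closed form can be checked against a direct implementation of the recurrence (a sketch added here for illustration):

```python
def hanoi_moves(n):
    # Number of moves for n disks: M(n) = 2*M(n-1) + 1, M(1) = 1.
    if n == 1:
        return 1
    return 2 * hanoi_moves(n - 1) + 1
```

For every n this agrees with the closed form 2^n - 1 derived above.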



Tree of calls for the Tower of Hanoi Puzzle
n

n-1 n-1

n-2 n-2 n-2 n-2


... ... ...
2 2 2 2

1 1 1 1 1 1 1 1



Example 3: Counting #bits

The number of additions A(n) made in computing the number
of binary digits of n satisfies:

A(n) = A(⌊n/2⌋) + 1 for n > 1, A(1) = 0

Solving for n = 2^k (using the Smoothness Rule):

A(2^k) = A(2^(k-1)) + 1, A(2^0) = 0
       = (A(2^(k-2)) + 1) + 1 = A(2^(k-2)) + 2
       = A(2^(k-i)) + i
       = A(2^(k-k)) + k = k = log2 n
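A sketch of the recurrence in code (added here for illustration); the function counts the additions made while computing the number of binary digits of n:

```python
def bit_count_additions(n):
    # A(n) = A(n // 2) + 1 for n > 1, with A(1) = 0.
    if n == 1:
        return 0
    return bit_count_additions(n // 2) + 1
```

For n = 2^k this returns exactly k = log2 n; in general it returns ⌊log2 n⌋.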
Smoothness Rule

Let f(n) be a nonnegative function defined on the set of natural
numbers. f(n) is called smooth if it is eventually nondecreasing and
f(2n) ∈ Θ(f(n)).
• Functions that do not grow too fast, including log n, n, n log n, and
n^α where α ≥ 0, are smooth.
Smoothness rule
Let T(n) be an eventually nondecreasing function and f(n) be a
smooth function. If
T(n) ∈ Θ (f(n)) for values of n that are powers of b,
where b>=2, then
T(n) ∈ Θ (f(n)) for any n.



Fibonacci numbers
The Fibonacci numbers:
0, 1, 1, 2, 3, 5, 8, 13, 21, …

The Fibonacci recurrence:


F(n) = F(n-1) + F(n-2)
F(0) = 0
F(1) = 1

General 2nd order linear homogeneous recurrence with


constant coefficients:
aX(n) + bX(n-1) + cX(n-2) = 0



Solving aX(n) + bX(n-1) + cX(n-2) = 0

Set up the characteristic equation (quadratic)


ar2 + br + c = 0

Solve to obtain roots r1 and r2

General solution to the recurrence


if r1 and r2 are two distinct real roots: X(n) = α·r1^n + β·r2^n
if r1 = r2 = r are two equal real roots: X(n) = α·r^n + β·n·r^n

Particular solution can be found by using initial conditions



Application to the Fibonacci numbers

F(n) = F(n-1) + F(n-2) or F(n) - F(n-1) - F(n-2) = 0

Characteristic equation: r^2 - r - 1 = 0

Roots of the characteristic equation: r1,2 = (1 ± √5) / 2

General solution to the recurrence: F(n) = α·r1^n + β·r2^n

Particular solution for F(0) = 0, F(1) = 1:

α + β = 0
α·r1 + β·r2 = 1



Computing Fibonacci numbers
1. Definition-based recursive algorithm

2. Nonrecursive definition-based algorithm

3. Explicit formula algorithm

4. Logarithmic algorithm based on the formula

   [F(n-1)  F(n)  ]    [0  1]^n
   [F(n)    F(n+1)]  = [1  1]

for n ≥ 1, assuming an efficient way of computing matrix powers.
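A sketch of algorithm 4 (an illustration added here, not the textbook's code): compute the matrix power by repeated squaring, which takes O(log n) matrix multiplications:

```python
def fib(n):
    # Uses [[F(n-1), F(n)], [F(n), F(n+1)]] = [[0, 1], [1, 1]]^n.
    def mul(A, B):
        # 2x2 matrix multiplication.
        return [
            [A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]],
        ]
    result = [[1, 0], [0, 1]]   # identity matrix
    base = [[0, 1], [1, 1]]
    while n > 0:                # repeated squaring
        if n & 1:
            result = mul(result, base)
        base = mul(base, base)
        n >>= 1
    return result[0][1]         # F(n)
```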



Important Recurrence Types

Decrease-by-one recurrences
• A decrease-by-one algorithm solves a problem by exploiting a relationship
between a given instance of size n and a smaller size n – 1.
• Example: n!
• The recurrence equation for investigating the time efficiency of such
algorithms typically has the form
T(n) = T(n-1) + f(n)
Decrease-by-a-constant-factor recurrences
• A decrease-by-a-constant-factor algorithm solves a problem by dividing its
given instance of size n into several smaller instances of size n/b, solving
each of them recursively, and then, if necessary, combining the solutions to
the smaller instances into a solution to the given instance.
• Example: binary search.
• The recurrence equation for investigating the time efficiency of such
algorithms typically has the form
T(n) = aT(n/b) + f (n)



Decrease-by-one Recurrences

One (constant-time) operation reduces the problem size by one:
T(n) = T(n-1) + c,  T(1) = d
Solution: T(n) = (n-1)c + d (linear)

A pass through the input reduces the problem size by one:
T(n) = T(n-1) + cn,  T(1) = d
Solution: T(n) = [n(n+1)/2 - 1]c + d (quadratic)



Decrease-by-a-constant-factor recurrences – The Master
Theorem

T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^k), k ≥ 0

1. a < b^k : T(n) ∈ Θ(n^k)
2. a = b^k : T(n) ∈ Θ(n^k log n)
3. a > b^k : T(n) ∈ Θ(n^(log_b a))

Examples:
1. T(n) = T(n/2) + 1    →  Θ(log n)
2. T(n) = 2T(n/2) + n   →  Θ(n log n)
3. T(n) = 3T(n/2) + n   →  Θ(n^(log_2 3))
4. T(n) = T(n/2) + n    →  Θ(n)
