CHAPTER TWO: COMPLEXITY ANALYSIS
CH-2 Contents
1. Introduction
2. Computational and asymptotic complexity
3. Big-O, Ω, Θ, little-o and little-ω notations
4. Common complexity classes
5. Best, average and worst case complexity
6. Amortized complexity
Introduction to Data Structures and Algorithms Analysis

A program is written in order to solve a problem. A solution consists of two things:
- a way to organize the data, and
- a sequence of steps to solve the problem.

The way data are organized in a computer's memory is called a data structure, and the clearly defined sequence of computational steps the computer follows to solve a problem is called an algorithm. Therefore, a program is nothing but data structures plus algorithms.
Algorithm Analysis - Concepts

Algorithm analysis refers to the process of determining the amount of computing time and storage space required by different algorithms; in other words, it is the process of predicting the resource requirements of algorithms in a given environment.

To solve a problem there are many possible algorithms, so one has to be able to choose the best algorithm for the problem at hand using some scientific method. To classify some data structures and algorithms as good, we need precise ways of analyzing them in terms of their resource requirements. The main resources are:
- running time,
- memory usage, and
- communication bandwidth.
Running time is usually treated as the most important resource, since computational time is the most precious resource in most problem domains.

There are two approaches to measuring the efficiency of algorithms:
- Theoretical: determining mathematically the quantity of resources (execution time, memory space, etc.) needed by each algorithm.
- Empirical: programming competing algorithms and trying them on different problem instances.
Theoretical: we can analyze an algorithm according to the number of operations required, rather than according to an absolute amount of time involved. This shows how an algorithm's efficiency changes with the size of the input.
Empirical: it is difficult to use actual clock time as a consistent measure of an algorithm's efficiency, because clock time can vary based on many things, for example:
- specific processor speed,
- current processor load,
- specific data for a particular run of the program (input size and input properties), and
- operating environment.
Important terms

Computational complexity
- A measure of the amount of time, storage, or other resources needed to execute an algorithm, used to compare the efficiency of algorithms.

Asymptotic complexity
- Asymptotic time complexity: the limiting behavior of the execution time of an algorithm as the size of the problem goes to infinity. This is usually denoted in big-O notation.
- Asymptotic space complexity: the limiting behavior of the memory use of an algorithm as the size of the problem goes to infinity. This is usually denoted in big-O notation.
Theoretical Approach - Complexity Analysis

Complexity analysis is the systematic study of the cost of computation, measured either in time units, in operations performed, or in the amount of storage space required. The goal is to have a meaningful measure that permits comparison of algorithms independent of the operating platform.

There are two things to consider:
- Time complexity: determine the approximate number of operations required to solve a problem of size n.
- Space complexity: determine the approximate memory required to solve a problem of size n.
Complexity analysis involves two distinct phases:
1. Algorithm analysis: analysis of the algorithm or data structure to produce a function T(n) that describes the algorithm in terms of the operations performed, in order to measure its complexity.
2. Order of magnitude analysis: analysis of the function T(n) to determine the general complexity category to which it belongs.
Algorithm Analysis Rules:

OPTION 1: Exact count
There is no generally accepted set of rules for algorithm analysis; however, an exact count of operations is commonly used. We assume an arbitrary time unit.

1. Execution of each of the following operations takes 1 time unit:
- assignment operation,
- single input/output operation,
- single Boolean operation,
- single arithmetic operation,
- function return.
2. The running time of a selection statement (if, switch) is the time for the condition evaluation plus the maximum of the running times of the individual clauses in the selection.
3. Loops: the running time of a loop is the running time of the statements inside the loop multiplied by the number of iterations.
4. For nested loops, analyze inside out. Always assume that the loop executes the maximum number of iterations possible.
5. The total running time of a statement inside a group of nested loops is the running time of the statement multiplied by the product of the sizes of all the loops.
6. The running time of a function call is 1 for setup, plus the time for any parameter calculations, plus the time required for the execution of the function body.
Complexity Analysis: Loops

We start by considering how to count operations in for-loops. We use integer division throughout. First of all, we should know the number of iterations of the loop; say it is x. Then:
- the loop condition is executed x + 1 times,
- each of the statements in the loop body is executed x times, and
- the loop-index update statement is executed x times.
Complexity Analysis: Loops (with <)

In the following for-loops (counting up and counting down):

for (int i = k; i < n; i += m) {
    statement1;
    statement2;
}

for (int i = n; i > k; i -= m) {
    statement1;
    statement2;
}

The number of iterations is (n - k) / m.
- The initialization statement, i = k, is executed one time.
- The condition, i < n, is executed (n - k) / m + 1 times.
- The update statement, i = i + m, is executed (n - k) / m times.
- Each of statement1 and statement2 is executed (n - k) / m times.
Complexity Analysis: Loops (with <=)

In the following for-loops (counting up and counting down):

for (int i = k; i <= n; i += m) {
    statement1;
    statement2;
}

for (int i = n; i >= k; i -= m) {
    statement1;
    statement2;
}

The number of iterations is (n - k) / m + 1.
- The initialization statement, i = k, is executed one time.
- The condition, i <= n, is executed (n - k) / m + 2 times.
- The update statement, i = i + m, is executed (n - k) / m + 1 times.
- Each of statement1 and statement2 is executed (n - k) / m + 1 times.
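These counts can be sanity-checked empirically. A minimal sketch (the values of k, n, and m are arbitrary example choices, picked so that m divides n - k exactly and the integer-division formula is exact):

#include <iostream>
using namespace std;

int main() {
    int k = 2, n = 17, m = 3;                // m divides n - k exactly

    int lt = 0;
    for (int i = k; i < n; i += m) lt++;     // i = 2, 5, 8, 11, 14 -> 5 iterations

    int le = 0;
    for (int i = k; i <= n; i += m) le++;    // i = 2, 5, 8, 11, 14, 17 -> 6 iterations

    cout << lt << " == " << (n - k) / m << endl;        // prints 5 == 5
    cout << le << " == " << (n - k) / m + 1 << endl;    // prints 6 == 6
    return 0;
}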
Complexity Analysis: Loops With Logarithmic Iterations

In the following for-loops (with <):

for (int i = k; i < n; i *= m) {
    statement1;
    statement2;
}

for (int i = n; i > k; i /= m) {
    statement1;
    statement2;
}

The number of iterations is $\log_m(n / k)$.

In the following for-loops (with <=):

for (int i = k; i <= n; i *= m) {
    statement1;
    statement2;
}

for (int i = n; i >= k; i /= m) {
    statement1;
    statement2;
}

The number of iterations is $\log_m(n / k) + 1$.
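As before, the counts can be checked empirically; a minimal sketch, assuming example values where n / k is an exact power of m:

#include <iostream>
using namespace std;

int main() {
    int k = 1, n = 16, m = 2;                // n / k is an exact power of m

    int lt = 0;
    for (int i = k; i < n; i *= m) lt++;     // i = 1, 2, 4, 8 -> 4 iterations

    int le = 0;
    for (int i = k; i <= n; i *= m) le++;    // i = 1, 2, 4, 8, 16 -> 5 iterations

    cout << lt << endl;    // 4 = log_2(16 / 1)
    cout << le << endl;    // 5 = log_2(16 / 1) + 1
    return 0;
}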
Example 1
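A sketch of code consistent with the line-by-line count below (the function name, prompt text, and the global variable n are assumptions; the line comments follow the numbering used in the analysis):

#include <iostream>
using namespace std;

int n;                               // assumed global, read on "line 5" below

int count()
{
    int k = 0;                       // line 3: 1 unit (assignment)
    cout << "Enter an integer: ";    // line 4: 1 unit (output)
    cin >> n;                        // line 5: 1 unit (input)
    for (int i = 0; i < n; i++)      // line 6: 1 assignment + (n + 1) tests + n increments
        k = k + 1;                   // line 7: n loops of 2 units (assignment + addition)
    return k;                        // line 8: 1 unit (return)
}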
Time Units to Compute
- Line 3: 1 for the assignment statement: int k = 0
- Line 4: 1 for the output statement.
- Line 5: 1 for the input statement.
- Line 6: in the for loop: 1 assignment, n + 1 tests, and n increments.
- Line 7: n loops of 2 units each, for an assignment and an addition.
- Line 8: 1 for the return statement.

T(n) = 1 + 1 + 1 + (1 + n + 1 + n) + 2n + 1 = 4n + 6 = O(n)
Example 2
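Again, a sketch consistent with the count below (the function name and signature are assumptions):

int total(int n)
{
    int sum = 0;                     // line 3: 1 unit (assignment)
    for (int i = 1; i <= n; i++)     // line 4: 1 assignment + (n + 1) tests + n increments
        sum = sum + i;               // line 5: n loops of 2 units (assignment + addition)
    return sum;                      // line 6: 1 unit (return)
}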
Time Units to Compute
- Line 3: 1 for the assignment statement: int sum = 0
- Line 4: in the for loop: 1 assignment, n + 1 tests, and n increments.
- Line 5: n loops of 2 units each, for an assignment and an addition.
- Line 6: 1 for the return statement.

T(n) = 1 + (1 + n + 1 + n) + 2n + 1 = 4n + 4 = O(n)
Example 3
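A sketch consistent with the count below (the statement bodies are assumptions; what matters is that lines 5 and 6 each cost an assignment plus a multiplication, and line 7 an assignment plus an addition):

void calculate(int n)
{
    double x = 2.5, y = 3.0, z;      // line 2: 2 units (two assignments)
    for (int i = 1; i <= n; i++)     // line 3: 1 assignment + (n + 1) tests + n increments
    {
        x = x * 2;                   // line 5: n loops of 2 units (assignment + multiplication)
        y = y * x;                   // line 6: n loops of 2 units (assignment + multiplication)
        z = x + y;                   // line 7: n loops of 2 units (assignment + addition)
    }
}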
Time Units to Compute
- Line 2: 2 for the assignment statements: x = 2.5, y = 3.0
- Line 3: in the for loop: 1 assignment, n + 1 tests, and n increments.
- Line 5: n loops of 2 units each, for an assignment and a multiplication.
- Line 6: n loops of 2 units each, for an assignment and a multiplication.
- Line 7: n loops of 2 units each, for an assignment and an addition.

T(n) = 2 + (1 + n + 1 + n) + 2n + 2n + 2n = 8n + 4 = O(n)
Exercise 1

Exercise 2
OPTION 2: Formal Approach to Algorithm Analysis

Exact counting can be simplified by using a formal approach in which we ignore initializations, loop control, and bookkeeping.
For Loops: Formally

In general, a for loop translates to a summation. The index and bounds of the summation are the same as the index and bounds of the for loop:

for (int i = 1; i <= N; i++) {
    sum = sum + i;
}

translates to $\sum_{i=1}^{N} 1 = N$.

Suppose we count the number of additions that are done. There is 1 addition per iteration of the loop, hence N additions in total.
Nested Loops: Formally

Nested for loops translate into multiple summations, one for each for loop:

for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= M; j++) {
        sum = sum + i + j;
    }
}

translates to $\sum_{i=1}^{N} \sum_{j=1}^{M} 2 = \sum_{i=1}^{N} 2M = 2MN$.

Again, count the number of additions (2 per iteration of the inner loop). The outer summation is for the outer for loop.
Consecutive Statements: Formally

Add the running times of the separate blocks of your code:

for (int i = 1; i <= N; i++) {
    sum = sum + i;
}
for (int i = 1; i <= N; i++) {
    for (int j = 1; j <= N; j++) {
        sum = sum + i + j;
    }
}

translates to $\sum_{i=1}^{N} 1 + \sum_{i=1}^{N} \sum_{j=1}^{N} 2 = N + 2N^2$.
Conditionals: Formally

If (test) s1 else s2: compute the maximum of the running times for s1 and s2:

if (test == 1) {
    for (int i = 1; i <= N; i++) {
        sum = sum + i;
    }
}
else {
    for (int i = 1; i <= N; i++) {
        for (int j = 1; j <= N; j++) {
            sum = sum + i + j;
        }
    }
}

translates to $\max\left( \sum_{i=1}^{N} 1,\ \sum_{i=1}^{N} \sum_{j=1}^{N} 2 \right) = \max(N, 2N^2) = 2N^2$.
Example 4

Suppose we have hardware capable of executing 10^6 instructions per second. How long would it take to execute an algorithm whose complexity function is T(n) = 2n^2 on an input of size n = 10^8?
The total number of operations to be performed would be T(10^8):
T(10^8) = 2 * (10^8)^2 = 2 * 10^16

The required number of seconds is T(10^8) / 10^6, so:
Running time = 2 * 10^16 / 10^6 = 2 * 10^10 seconds
Days? 2 * 10^10 / 86,400 ≈ 231,481 days
Years? ≈ 634 years
Important mathematics series formulas
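The series most often needed when converting loops to summations (a representative selection; the slide's original list is assumed):

$$\sum_{i=1}^{n} 1 = n, \qquad \sum_{i=1}^{n} i = \frac{n(n+1)}{2}, \qquad \sum_{i=1}^{n} i^2 = \frac{n(n+1)(2n+1)}{6}, \qquad \sum_{i=0}^{n} a^i = \frac{a^{n+1} - 1}{a - 1} \quad (a \neq 1)$$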
Exercises

Determine the run time equation and complexity of each of the following code segments.
Exercise 3

for (i = 0; i < n; i++)
{
    for (j = 0; j < n; j++)
    {
        sum = sum + i + j;
    }
}
Exercise 4

for (int i = 1; i <= n; i++)
{
    for (int j = 1; j <= i; j++)
    {
        sum++;
    }
}

What is the value of sum if n = 20?
Exercise 5

int k = 0;
for (int i = 0; i < n; i++)
{
    for (int j = i; j < n; j++)
    {
        k++;
    }
}

What is the value of k when n is equal to 20?
Exercise 6

int k = 0;
for (int i = 1; i < n; i *= 2)
{
    for (int j = 1; j < n; j++)
    {
        k++;
    }
}

What is the value of k when n is equal to 20?
Exercise 7

int x = 0;
for (int i = 1; i < n; i = i + 5)
    x++;

What is the value of x when n = 25?

int x = 0;
for (int k = n; k >= n / 3; k = k - 5)
    x++;

What is the value of x when n = 25?
Exercise 8

int x = 0;
for (int i = 1; i < n; i = i + 5)
    for (int k = n; k >= n / 3; k = k - 5)
        x++;

What is the value of x when n = 25?

int x = 0;
for (int i = 1; i < n; i = i + 5)
    for (int j = 0; j < i; j++)
        for (int k = n; k >= n / 2; k = k - 3)
            x++;
Order of Magnitude Analysis - Measures of Times

In order to determine the running time of an algorithm, it is possible to define three functions Tbest(n), Tavg(n) and Tworst(n) as the best, average and worst case running times of the algorithm, respectively.
1. Average case (Tavg): the amount of time the algorithm takes on an "average" set of inputs.
2. Worst case (Tworst): the amount of time the algorithm takes on the worst possible set of inputs.
3. Best case (Tbest): the amount of time the algorithm takes on the best possible set of inputs.

We are interested in the worst-case time, since it provides a bound for all inputs; this is called the "Big-Oh" estimate.
Asymptotic Analysis

Asymptotic analysis is concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound.

There are five notations used to describe a running time function:
- Big-Oh notation (O) (≤)
- Big-Omega notation (Ω) (≥)
- Theta notation (Θ) (=)
- Little-o notation (o) (<)
- Little-omega notation (ω) (>)
The Big-Oh Notation

Big-Oh notation is a way of comparing algorithms and is used for computing the complexity of algorithms, i.e., the amount of time that it takes for a computer program to run. It is only concerned with what happens for very large values of n; therefore, only the largest term in the expression (function) is needed.

For example, if the number of operations in an algorithm is n^2 - n, the n term is insignificant compared to n^2 for large values of n, and hence is ignored. Of course, for small values of n it may be important; however, Big-Oh is mainly concerned with large values of n.
Formal definition: f(n) = O(g(n)) if there exist c, k ∈ ℝ⁺ such that for all n ≥ k, f(n) ≤ c·g(n).

By the definition above, demonstrating that a function f is big-O of a function g requires that we find specific constants c and k for which the inequality holds (and show that the inequality does, in fact, hold). Big-O expresses an upper bound on the growth rate of a function, for sufficiently large values of n.

Example: f(n) = 3n^2 + 5n - 3 = O(n^2). For instance, k = 1 and c = 8 work, since 3n^2 + 5n - 3 ≤ 3n^2 + 5n^2 = 8n^2 for all n ≥ 1.
Standard Method to Prove Big-Oh

1. Choose k = 1.
2. Assuming n > 1, find/derive a c such that f(n)/g(n) ≤ c. This shows that n > 1 implies f(n) ≤ c·g(n).

Keep in mind:
- n > 1 implies 1 < n, n < n^2, n^2 < n^3, ...
- Increase the numerator to simplify the fraction.
Example 1: Proving Big-Oh

Show that f(n) = n^2 + 2n + 1 is O(n^2).
- Choose k = 1.
- Assuming n > 1, then f(n)/g(n) = (n^2 + 2n + 1)/n^2 < (n^2 + 2n^2 + n^2)/n^2 = 4n^2/n^2 = 4
- Choose c = 4. Note that 2n < 2n^2 and 1 < n^2.

Thus, n^2 + 2n + 1 is O(n^2), because n^2 + 2n + 1 ≤ 4n^2 whenever n > 1.
Example 2: Proving Big-Oh

Show that f(n) = 4n + 6 is O(n).
- Choose k = 1.
- Assuming n > 1, then f(n)/g(n) = (4n + 6)/n < (4n + 6n)/n = 10n/n = 10
- Choose c = 10. Note that 6 < 6n.

Thus, 4n + 6 is O(n), because 4n + 6 ≤ 10n whenever n > 1.
Exercise 1: Proving Big-Oh

Show that f(n) = (n + 1)^3 is O(n^3), where (n + 1)^3 = n^3 + 3n^2 + 3n + 1.
???
Standard Method to Prove Not Big-Oh

1. Assume n > 1.
2. Show f(n)/g(n) > h(n) for some increasing, unbounded function h(n).
3. Then n > h⁻¹(c) implies h(n) > c, which implies f(n) > c·g(n).

So choosing n > 1, n > k, and n > h⁻¹(c) implies n > k and f(n) > c·g(n).
Example 3: Proving Not Big-Oh

Show that f(n) = n^2 - 2n + 1 is not O(n).
- Assuming n > 1, then f(n)/g(n) = (n^2 - 2n + 1)/n > (n^2 - 2n)/n = n - 2
- n > c + 2 implies n - 2 > c and f(n) > cn.

So choosing n > 1, n > k, and n > c + 2 implies n > k and f(n) > cn.
Decrease the numerator to simplify the fraction.
Example 4: Proving Not Big-Oh

Show that f(n) = n^2/2 is not O(n).
- Assuming n > 1, then f(n)/g(n) = (n^2/2)/n = n/2
- n > 2c implies n/2 > c and f(n) > cn.

So choosing n > 1, n > k, and n > 2c implies n > k and f(n) > cn.
Exercise 2: Proving Not Big-Oh

Show that f(n) = (n + 1)^3 is not O(n^2), where (n + 1)^3 = n^3 + 3n^2 + 3n + 1.
???
The following facts can be used for Big-Oh problems:
- 1 ≤ n for all n ≥ 1
- n ≤ n^2 for all n ≥ 1
- 2^n ≤ n! for all n ≥ 4
- log_2 n ≤ n for all n ≥ 2
- n ≤ n·log_2 n for all n ≥ 2
Big-Oh Rules

If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.:
1. drop lower-order terms, and
2. drop constant factors.

Use the smallest possible class of functions: say "2n is O(n)" rather than "2n is O(n^2)".
Use the simplest expression of the class: say "3n + 5 is O(n)" rather than "3n + 5 is O(3n)".
Big-O Theorems

For all the following theorems, assume that f(n) is a non-negative function of n and that K is an arbitrary constant.

Theorem 1: K is O(1).

Theorem 2: A polynomial is O(the term containing the highest power of n).
- f(n) = 7n^4 + 3n^2 + 5n + 1000 is O(7n^4)

Theorem 3: K·f(n) is O(f(n)), i.e., constant coefficients can be dropped.
- g(n) = 7n^4 is O(n^4)
Theorem 4 (transitivity): If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).
- f(n) = 6n^2 + 3n - 8 is O(6n^2) = O(g(n))
- g(n) = 6n^2 is O(n^3) = O(h(n))
- h(n) = n^3
- Therefore f(n) = O(h(n))

Theorem 5: For any base b, log_b(n) is O(log(n)); all logarithms grow at the same rate.
- log_b n is O(log_d n) where b, d > 1
Theorem 6: Each of the following functions is strictly big-O of its successors:
- K [constant]
- log_b(n) [always log base 2 if no base is shown]
- n
- n·log_b(n)
- n^2
- n to higher powers
- 2^n
- 3^n
- larger constants to the n-th power
- n! [n factorial]
- n^n

Example: f(n) = 3n·log_b(n) + 4·log_b(n) + 2 is O(n·log_b(n)) and O(n^2) and O(2^n).
Theorem 7: In general, f(n) is big-O of the dominant term of f(n), where the dominant term may usually be determined from Theorem 6.
- f(n) = 7n^2 + 3n·log(n) + 5n + 1000 is O(n^2)
- g(n) = 7n^4 + 3^n + 1,000,000 is O(3^n)
- h(n) = 7n(n + log(n)) is O(n^2)
Properties of the Big-O

- Higher powers grow faster: n^r is O(n^s) if 0 ≤ r ≤ s.
- The fastest growing term dominates a sum: if f(n) is O(g(n)), then f(n) + g(n) is O(g(n)). E.g., 5n^4 + 6n^3 is O(n^4).
- Exponential functions grow faster than powers: e.g., n^20 is O(1.05^n).
- Logarithms grow more slowly than powers: e.g., log_2 n is O(n^0.5).
Big-Omega Notation (Ω)

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound.

Formal definition: A function f(n) is Ω(g(n)) if there exist constants c, k ∈ ℝ⁺ such that f(n) ≥ c·g(n) for all n ≥ k.

f(n) = Ω(g(n)) means that f(n) is greater than or equal to some constant multiple of g(n) for all values of n greater than or equal to some k.
Example: If f(n) = n^2, then f(n) = Ω(n).
In simple terms, f(n) = Ω(g(n)) means that the growth rate of f(n) is greater than or equal to that of g(n).
Theta Notation (Θ)

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c1 and c2 such that f(n) can be sandwiched between c1·g(n) and c2·g(n) for sufficiently large values of n.

Formal definition: A function f(n) is Θ(g(n)) if it is both O(g(n)) and Ω(g(n)). In other words, there exist constants c1, c2, and k > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ k.

If f(n) = Θ(g(n)), then g(n) is an asymptotically tight bound for f(n). In simple terms, f(n) = Θ(g(n)) means that f(n) and g(n) have the same rate of growth.
Examples:
1. If f(n) = 2n + 1, then f(n) = Θ(n).
2. If f(n) = 2n^2, then f(n) = O(n^4), f(n) = O(n^3), and f(n) = O(n^2) are all technically correct, but the last expression is the best and tightest one. Since 2n^2 and n^2 have the same growth rate, we can write f(n) = Θ(n^2).
Little-o Notation

Big-Oh notation may or may not be asymptotically tight; for example, 2n^2 = O(n^2) is tight, but 2n^2 = O(n^3) is not.

f(n) = o(g(n)) means that for all c > 0 there exists some k > 0 such that f(n) < c·g(n) for all n ≥ k. Informally, f(n) = o(g(n)) means f(n) becomes insignificant relative to g(n) as n approaches infinity.

Example: f(n) = 3n + 4 is o(n^2).

In simple terms, f(n) has a smaller growth rate than g(n). For g(n) = 2n^2: g(n) = o(n^3) and g(n) = O(n^2), but g(n) is not o(n^2).
Little-Omega Notation (ω)

Little-omega (ω) notation is to big-omega (Ω) notation as little-o notation is to Big-Oh notation. We use ω-notation to denote a lower bound that is not asymptotically tight.

Formal definition: f(n) = ω(g(n)) if for every constant c > 0 there exists a constant k > 0 such that 0 ≤ c·g(n) < f(n) for all n ≥ k.

Example: 2n^2 = ω(n), but 2n^2 is not ω(n^2).
Relational Properties of the Asymptotic Notations

Transitivity:
- if f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n));
- if f(n) = O(g(n)) and g(n) = O(h(n)), then f(n) = O(h(n));
- if f(n) = Ω(g(n)) and g(n) = Ω(h(n)), then f(n) = Ω(h(n));
- if f(n) = o(g(n)) and g(n) = o(h(n)), then f(n) = o(h(n)); and
- if f(n) = ω(g(n)) and g(n) = ω(h(n)), then f(n) = ω(h(n)).

Symmetry:
- f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)).
Transpose symmetry:
- f(n) = O(g(n)) if and only if g(n) = Ω(f(n));
- f(n) = o(g(n)) if and only if g(n) = ω(f(n)).

Reflexivity:
- f(n) = Θ(f(n)), f(n) = O(f(n)), f(n) = Ω(f(n)).
Common complexity classes

Algorithms can be classified by their time and space complexities, and in this respect several classes of such algorithms can be distinguished. For example, an algorithm is called constant if its execution time remains the same for any number of elements, and quadratic if its execution time is O(n^2).

For each of these classes, the number of operations is shown below, along with the real time needed to execute them on a machine able to perform one million operations per second, i.e., one operation per microsecond (µsec).
Class                Complexity    n = 10               n = 10^2                n = 10^3
Constant             O(1)          1       1 µsec       1        1 µsec         1        1 µsec
Logarithmic          O(log n)      3.32    3 µsec       6.64     7 µsec         9.97     10 µsec
Linear               O(n)          10      10 µsec      10^2     100 µsec       10^3     1 msec
Linear logarithmic   O(n log n)    33.2    33 µsec      664      664 µsec       9,970    10 msec
Quadratic            O(n^2)        10^2    100 µsec     10^4     10 msec        10^6     1 sec
Cubic                O(n^3)        10^3    1 msec       10^6     1 sec          10^9     16.7 min
Exponential          O(2^n)        1,024   1 msec       10^30    3.17*10^16 yr  10^301   -
Class                Complexity    n = 10^4                n = 10^5               n = 10^6
Constant             O(1)          1          1 µsec       1          1 µsec      1           1 µsec
Logarithmic          O(log n)      13.3       13 µsec      16.6       17 µsec     19.93       20 µsec
Linear               O(n)          10^4       10 msec      10^5       0.1 sec     10^6        1 sec
Linear logarithmic   O(n log n)    133*10^3   133 msec     166*10^4   1.66 sec    199.3*10^5  20 sec
Quadratic            O(n^2)        10^8       1.7 min      10^10      2.8 hr      10^12       11.6 days
Cubic                O(n^3)        10^12      11.6 days    10^15      31.7 yr     10^18       31,710 yr
Exponential          O(2^n)        10^3010    -            10^30103   -           10^301030   -
Types of analysis

The term "analysis of algorithms" is used to describe approaches to the study of the performance of algorithms. In this course we will perform the following types of analysis:
- the worst-case runtime,
- the best-case runtime,
- the average-case runtime, and
- the amortized runtime.
- The worst-case runtime complexity of an algorithm is the function defined by the maximum number of steps taken on any instance of size n.
- The best-case runtime complexity of an algorithm is the function defined by the minimum number of steps taken on any instance of size n.
- The average-case runtime complexity of an algorithm is the function defined by the average number of steps taken over all instances of size n.
- The amortized runtime complexity of an algorithm is the function defined by a sequence of operations applied to an input of size n, averaged over time.
Example

Let us consider an algorithm that performs sequential search in an array of size n.
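A sketch of the algorithm (the standard version; the exact code from the original slide is assumed):

int sequentialSearch(int list[], int n, int key)
{
    for (int i = 0; i < n; i++)      // examines up to n elements
        if (list[i] == key)
            return i;                // found: only 1 comparison if key is at index 0
    return -1;                       // not found: all n elements examined
}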
- Its worst-case runtime complexity is O(n).
- Its best-case runtime complexity is O(1).
- Its average-case runtime complexity is O(n/2) = O(n).
Amortized Time Complexity

Worst-case analysis of runtime complexity is often too pessimistic, and average-case analysis may be difficult because:
1. it is not clear what "average data" is,
2. uniformly random data is usually not average, and
3. probabilistic arguments can be difficult.

Amortized complexity analysis is a different way to estimate running times. The main idea is to spread out the cost of operations, charging more than necessary when they are cheap, thereby "saving up" for the occasions when they become expensive.
Three Methods of Amortized Analysis

Aggregate analysis:
- Amortized cost = (total cost of n operations) / n.

Accounting method:
- Assign each type of operation a (possibly different) amortized cost.
- Overcharge some operations, store the overcharge as credit on specific objects, then use the credit to compensate for some later operations.

Potential method:
- Same idea as the accounting method, but store the credit as "potential energy" of the data structure as a whole.
Consider a dynamic array stack. In this model, push() doubles the array size if there is not enough space. Since copying an array cannot be performed in constant time, an individual push cannot always be done in constant time either; nevertheless, we will show that push() takes amortized constant time. A minimal sketch of such a push appears below.
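This sketch assumes a simple int stack; the struct and member names are assumptions, not the original slide's code:

#include <cstring>   // for std::memcpy

struct DynArrayStack {
    int *data;
    int size;        // number of elements currently stored
    int capacity;    // allocated length of the array

    DynArrayStack() : data(new int[1]), size(0), capacity(1) {}

    void push(int x) {
        if (size == capacity) {                            // no space left: double the array
            int *bigger = new int[2 * capacity];
            std::memcpy(bigger, data, size * sizeof(int)); // 'size' copy operations happen here
            delete[] data;
            data = bigger;
            capacity *= 2;
        }
        data[size++] = x;                                  // the constant-time part of every push
    }
};

Each push costs one unit for storing the element, plus size copy operations whenever the array happens to be full.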
Let us count the number of copying operations needed to do a sequence of pushes.
85
push() copy old array size new array size
1 0 1 -
2 1 1 2
3 2 2 4
4 0 4 -
5 4 4 8
6 0 8 -
7 0 8 -
8 0 8 -
9 8 8 16
08/13/2024
We see that 3 pushes require 2 + 1 = 3 copies.
We see that 5 pushes require 4 + 2 + 1 = 7 copies.
We see that 9 pushes require 8 + 4 + 2 + 1 = 15 copies.
In general, 2^n + 1 pushes require 2^n + 2^(n-1) + ... + 2 + 1 = 2^(n+1) - 1 copies.
Asymptotically speaking, the number of copies is about the same as the number of pushes, so we say that push() runs in amortized constant time.
Questions?
Thank You