L2- Complexity

An algorithm is a defined set of instructions for solving a problem, taking inputs and producing outputs, which can be expressed in various forms including natural language or pseudocode. The document discusses the efficiency of algorithms, comparing them through experimental studies and theoretical analysis, emphasizing the importance of running time and asymptotic notation. It also covers concepts such as Big-Oh notation, primitive operations, and the RAM model, providing insights into algorithmic complexity and performance evaluation.


What is an algorithm?

A clearly specified set of simple instructions to be followed to solve a problem
✓ Takes a set of values as input and
✓ produces a value, or set of values, as output
May be specified
✓ In English, Hindi, or any other language
✓ As a computer program
✓ As pseudocode

Input → Algorithm → Output


Some Vocabulary with an example
Problem: Sorting of given keys

Input: A sequence of n keys a1, . . . , an.

Output: The permutation (reordering) of the input sequence such that a1 ≤ a2 ≤ · · · ≤ an−1 ≤ an.

Instance: An instance of sorting might be an array of names, like {Mike, Bob, Sally, Jill, Jan}, or a list of numbers like {154, 245, 568, 324, 654, 324}.

Algorithm: An algorithm is a procedure that takes any of the possible input instances and transforms it to the desired output.
Which algorithm is better?

Who is the topper in BU-CSET? (i.e., find the maximum of n values)

Algorithm 1:
    Sort A into decreasing order
    Output A[1]

Algorithm 2:
    int i;
    int m = A[1];
    for (i = 2; i <= n; i++)
        if (A[i] > m)
            m = A[i];
    return m;

Which is better? Who's the champion?
"Better" = more efficient
✓ Time
✓ Space
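The two approaches above can be sketched directly. A minimal Python sketch (the slides use C-like pseudocode; Python is used here only for brevity):

```python
# Two ways to find the maximum (the "topper") of a list, per the slide.
def max_by_sorting(a):
    # Algorithm 1: sort into decreasing order, output the first element.
    # The sort alone costs O(n log n).
    return sorted(a, reverse=True)[0]

def max_by_scan(a):
    # Algorithm 2: a single linear scan, O(n) -- the more efficient choice.
    m = a[0]
    for x in a[1:]:
        if x > m:
            m = x
    return m

marks = [72, 91, 85, 60, 88]
print(max_by_sorting(marks), max_by_scan(marks))  # prints: 91 91
```

Both return the same answer; they differ in time (O(n log n) vs. O(n)) and, for the sorting version, extra space for the sorted copy.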
Experimental Studies

✓ Write a program implementing the algorithm

✓ Run the program with inputs of varying size and composition

✓ Use a method like System.currentTimeMillis() or clock functions to get an accurate measure of the actual running time

✓ Plot the results

[Plot: running time (ms), 0–9000, versus input size, 0–100]
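An experimental study along these lines can be sketched as follows (a Python sketch; time.perf_counter() plays the role of System.currentTimeMillis() from the slide, and the input sizes are arbitrary choices for illustration):

```python
import time

# Algorithm under test: a simple linear scan for the maximum.
def scan_max(a):
    m = a[0]
    for x in a[1:]:
        if x > m:
            m = x
    return m

# Run with inputs of varying size, timing each run.
results = []
for n in (1_000, 10_000, 100_000):
    data = list(range(n))
    start = time.perf_counter()
    scan_max(data)
    elapsed_ms = (time.perf_counter() - start) * 1000
    results.append((n, elapsed_ms))  # (input size, time in ms) pairs to plot

for n, ms in results:
    print(f"n={n:>7}  time={ms:.3f} ms")
```

The (n, time) pairs collected in `results` are exactly what would be plotted as in the figure above.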
Limitations of Experiments

✓Results may not be indicative of the running time as all inputs may not be
included in the experiment.

✓In order to compare two algorithms, the same hardware and software
environments must be used

✓It is necessary to implement the algorithm, and the efficiency of the implementation varies
Asymptotic notation

O(n)

o(n)

Ω(n)

ω(n)

Θ(n)
Edmund Landau
• 1877–1938
• Inventor of the asymptotic notation

Donald E. Knuth
• 1938–
• Turing Award, 1974
• Father of the analysis of algorithms
• Popularized the asymptotic notation
Theoretical Analysis

✓Uses a high-level description of the algorithm instead of an implementation

✓Characterizes running time as a function of the input size, n.

✓Takes into account all possible inputs

✓Allows us to evaluate the speed of an algorithm independent of the hardware/software environment
Pseudocode
➢High-level description of an algorithm

➢More structured than English prose

➢Less detailed than a program

➢Preferred notation for describing algorithms

➢Hides program design issues

Example: find max element of an array
Algorithm arrayMax(A, n)
    Input: array A of n integers
    Output: maximum element of A
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
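The arrayMax pseudocode above translates line by line into a running program (a Python rendering, chosen for brevity):

```python
# Direct Python rendering of the arrayMax pseudocode.
def array_max(A):
    current_max = A[0]                 # currentMax <- A[0]
    for i in range(1, len(A)):         # for i <- 1 to n-1 do
        if A[i] > current_max:         # if A[i] > currentMax then
            current_max = A[i]         #     currentMax <- A[i]
    return current_max

print(array_max([3, 7, 2, 9, 4]))  # prints: 9
```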
Pseudocode Details

• Control flow
    • if … then … [else …]
    • while … do …
    • repeat … until …
    • for … do …
    • Indentation replaces braces

• Expressions
    • ← Assignment (like = in C / Java)
    • = Equality testing (like == in C / Java)
    • n² Superscripts and other mathematical formatting allowed

• Method declaration
    Algorithm method (arg [, arg…])
        Input …
        Output …
Primitive Operations

• Basic computations performed by an algorithm
• Identifiable in pseudocode
• Largely independent from the programming language
• Exact definition not important (we will see why later)
• Assumed to take a constant amount of time in the RAM model

Examples:
• Assigning a value to a variable
• Indexing into an array
• Calling a method
• Returning from a method
The Random Access Machine (RAM) Model

➢A CPU

➢A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character

✓ Memory cells are numbered 0, 1, 2, … and accessing any cell in memory takes unit time.
Counting Primitive Operations

➢By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.

Algorithm arrayMax(A, n)                          # operations
    currentMax ← A[0]                             2
    for (i = 1; i < n; i++)                       2n
        (i = 1 once, i < n n times, i++ (n − 1) times)
        if A[i] > currentMax then                 2(n − 1)
            currentMax ← A[i]                     2(n − 1)
    return currentMax                             1
                                        Total     6n − 1
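The 6n − 1 total can be checked empirically by instrumenting the algorithm. A sketch (Python, counting the same operations the slide counts; the worst case is an ascending input, where the assignment inside the if runs every iteration):

```python
# arrayMax with an explicit primitive-operation counter.
def array_max_counted(A):
    n = len(A)
    ops = 0
    current_max = A[0]; ops += 2           # index + assign
    i = 1; ops += 1                        # i = 1, once
    while i < n:
        ops += 1                           # i < n test
        ops += 2                           # index A[i] + compare
        if A[i] > current_max:
            current_max = A[i]; ops += 2   # index + assign (worst case: every time)
        i += 1; ops += 1                   # i++
    ops += 1                               # final failing i < n test
    ops += 1                               # return
    return current_max, ops

print(array_max_counted([1, 2, 3, 4, 5]))  # prints: (5, 29), and 6*5 - 1 = 29
```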
Estimating Running Time

➢Algorithm arrayMax executes 6n − 1 primitive operations in the worst case.

➢Define:
➢a = Time taken by the fastest primitive operation
➢b = Time taken by the slowest primitive operation

➢Let T(n) be the worst-case time of arrayMax. Then

a(6n − 1) ≤ T(n) ≤ b(6n − 1)

➢Hence, the running time T(n) is bounded by two linear functions


Running Time

➢What are the best, average, and worst case running times of a problem?

➢Average case time is often difficult to determine – why?

➢We focus on the worst case running time.
➢Easier to analyze, and the safest bound to rely on
➢Crucial to applications such as games, finance and robotics

[Plot: best, average, and worst case running time (0–120) versus input size (1000–4000)]
The Growth Rate of the Six Popular Functions

   n   log n      n   n log n         n²              n³              2ⁿ
   4       2      4         8         16              64              16
   8       3      8        24         64             512             256
  16       4     16        64        256           4,096          65,536
  32       5     32       160      1,024          32,768   4,294,967,296
  64       6     64       384      4,096         262,144     1.84 × 10¹⁹
 128       7    128       896     16,384       2,097,152     3.40 × 10³⁸
 256       8    256     2,048     65,536      16,777,216     1.15 × 10⁷⁷
 512       9    512     4,608    262,144     134,217,728    1.34 × 10¹⁵⁴
1024      10  1,024    10,240  1,048,576   1,073,741,824    1.79 × 10³⁰⁸
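The rows of this table can be regenerated mechanically (a Python sketch using exact integer arithmetic for the smaller rows):

```python
import math

# One row of the growth-rate table: (n, log n, n, n log n, n^2, n^3, 2^n).
def growth_row(n):
    lg = int(math.log2(n))  # n is a power of two in the table, so this is exact
    return (n, lg, n, n * lg, n**2, n**3, 2**n)

for n in (4, 8, 16, 32, 64):
    print(growth_row(n))
```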


Practical Complexity

[Plot (log–log scale): the six growth functions for input sizes 1 to 65536]
Asymptotic Dominance in Action

Implications of Dominance (time to run, at 10⁹ instructions/second):

     n    |    n     n log n      n²         n³         n⁴          n¹⁰            2ⁿ
  1,000   |  1 μs    10 μs       1 ms       1 s       17 min   3.2 × 10¹³ yr   3.2 × 10²⁸³ yr
 10,000   | 10 μs   130 μs     100 ms     17 min    116 days       ???             ???
   10⁶    |  1 ms    20 ms      17 min   32 years   3 × 10⁷ yr    ??????          ??????
Faster Computer vs. Better Algorithm

Algorithmic improvement is more useful than hardware improvement.
E.g. improving from 2ⁿ to n³
Big-Oh Notation
Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n0 such that
f(n) ≤ c·g(n) for n ≥ n0

For function g(n), we define O(g(n)), big-O of g(n), as the set:
O(g(n)) = {f(n) : ∃ positive constants c and n0, such that 0 ≤ f(n) ≤ c·g(n) ∀ n ≥ n0}

Example
O(n²) = {2n² + 10n + 2000, 125n² − 233n − 250, 10n + 25, 150, …}
Big-Oh Notation

Example: 2n + 10 is O(n)
– 2n + 10 ≤ cn
– (c − 2)n ≥ 10
– n ≥ 10/(c − 2)
– Pick c = 3 and n0 = 10

[Plot (log–log): n, 2n + 10, and 3n for n from 1 to 1,000]

To simplify the running time estimation, for a function f(n), we ignore the constants and lower order terms.
Example: 10n³ + 4n² − 4n + 5 is O(n³).
Big-Oh Example

Example: the function n² is not O(n)
n² ≤ cn
n ≤ c

The above inequality cannot be satisfied, since c must be a constant; hence n² is O(n²) but not O(n).

[Plot (log–log): n, 10n, 100n, and n² for n from 1 to 1,000]
More Big-Oh Examples

◼ 7n − 2
7n − 2 is O(n)
need c > 0 and n0 ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n0
this is true for c = 7 and n0 = 1

◼ 3n³ + 20n² + 5
3n³ + 20n² + 5 is O(n³)
need c > 0 and n0 ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n0
this is true for c = 4 and n0 = 21

◼ 3 log n + 5
3 log n + 5 is O(log n)
need c > 0 and n0 ≥ 1 such that 3 log n + 5 ≤ c·log n for n ≥ n0
this is true for c = 8 and n0 = 2
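The (c, n0) witnesses picked above can be spot-checked numerically over a finite range (a Python sketch; log base 2 is assumed, matching the n0 = 2 claim):

```python
import math

# Check f(n) <= c * g(n) for every n in [n0, upto).
def big_oh_holds(f, g, c, n0, upto=50_000):
    return all(f(n) <= c * g(n) for n in range(n0, upto))

# 2n + 10 <= 3n for n >= 10 (from the earlier slide)
print(big_oh_holds(lambda n: 2*n + 10, lambda n: n, 3, 10))
# 7n - 2 <= 7n for n >= 1
print(big_oh_holds(lambda n: 7*n - 2, lambda n: n, 7, 1))
# 3n^3 + 20n^2 + 5 <= 4n^3 for n >= 21
print(big_oh_holds(lambda n: 3*n**3 + 20*n**2 + 5, lambda n: n**3, 4, 21))
# 3 log n + 5 <= 8 log n for n >= 2
print(big_oh_holds(lambda n: 3*math.log2(n) + 5, lambda n: math.log2(n), 8, 2))
# Counterexample: n^2 is not O(n) -- no constant c works for all n
print(big_oh_holds(lambda n: n*n, lambda n: n, 100, 1))
```

The first four print True; the last prints False, since n² ≤ 100n fails as soon as n > 100.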
Big-Oh Rules

✓If f(n) is a polynomial of degree d, then f(n) is O(nᵈ), i.e.,
➢Drop lower-order terms
➢Drop constant factors

✓Use the smallest possible class of functions


➢Say “2n is O(n)” instead of “2n is O(n2)”

✓Use the simplest expression of the class


➢Say “3n + 5 is O(n)” instead of “3n + 5 is O(3n)”
Relatives of Big-Oh

O(n)

o(n)

Ω(n)

ω(n)

Θ(n)
Relatives of Big-Oh ((n) )
big-Omega
f(n) is (g(n)) if there is a constant c > 0 and an integer constant n0  1 such
that f(n)  c•g(n) for n  n0

For function g(n), we define (g(n)),


big-Omega of n, as the set:
(g(n)) = {f(n) :  positive constants c and n0,
such that 0  cg(n)  f(n) n  n0}
Relatives of Big-Oh ((n))
big-Theta
◼ f(n) is (g(n)) if there are constants c’ > 0 and c’’ > 0 and an integer
constant n0  1 such that c’•g(n)  f(n)  c’’•g(n) for n  n0

If fO(g) and gO(f), then we say


“g and f are of the same order” or
“f is (exactly) order g” and
write f(g).

For function g(n), we define (g(n)),


big-Theta of n, as the set:

(g(n)) = {f(n) :  positive constants c1, c2, and n0,


such that n  n0,we have 0  c1g(n)  f(n)  c2g(n)}
Relatives of Big-Oh (little-o – o(n))
Example
O(n²) = {2n² + 10n + 2000, 125n² − 233n − 250, 10n + 25, 150, …}

What is the difference between

n² being the asymptotic upper bound of 2n² + 10n + 2000

and

n² being the asymptotic upper bound of 10n + 25?

n² is an asymptotically tight upper bound of 2n² + 10n + 2000, but not of 10n + 25.
Relatives of Big-Oh (little-o – o(n))
For a given function g(n),
o(g(n)) = {f(n): ∀ c > 0, ∃ n0 > 0 such that ∀ n ≥ n0, we have 0 ≤ f(n) < c·g(n)}.
∀ c > 0, ∃ n0 > 0 such that 0 ≤ f(n) < c·g(n) ∀ n ≥ n0
⟺ ∀ c > 0, ∃ n0 > 0 such that 0 ≤ f(n)/g(n) < c ∀ n ≥ n0
⟺ lim_{n→∞} f(n)/g(n) = 0

Alternatively
o(g(n)) = {f(n): lim_{n→∞} f(n)/g(n) = 0}

Example
o(n²) = {10n + 25, 150, …}
Relatives of Big-Oh ((n))

For a given function g(n),


(g(n)) = {f(n):  c > 0,  n0 > 0 such that
 n  n0, we have 0  cg(n) < f(n)}.

g(n) is a lower bound for f(n) that is not asymptotically tight.

Alternatively
w(g(n)) = {f(n): lim [f(n) / g(n)] = }
→
n→

Example
w(n2) = {8n3 + 25, 150, …..}
Example Uses of the Relatives of Big-Oh

◼ 3 log n + log log n is Ω(log n)
f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n0 ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n0
let c = 3 and n0 = 2
◼ 3 log n + log log n is Θ(log n)
◼ f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)

◼ 12n² + 6n is o(n³)
◼ f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)

◼ 12n² + 6n is ω(n)
◼ f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)
Some more Examples

◼ 3n + 2 is Ω(n)
As 3n + 2 ≥ 3n for n ≥ 1; though the inequality holds for n ≥ 0, for Ω we need n0 > 0

◼ 3n + 2 is Θ(n)
◼ As 3n + 2 ≥ 3n for all n ≥ 2 and 3n + 2 ≤ 4n for all n ≥ 2, so c1 = 3, c2 = 4 and n0 = 2

◼ 3n + 2 is o(n²)
Useful Facts about Big-O

➢Big-O, as a relation, is transitive:
f ∈ O(g) ∧ g ∈ O(h) → f ∈ O(h)

➢Sums of functions:
If f ∈ O(g) and h ∈ O(g), then f + h ∈ O(g).

➢∀ c > 0, O(cf) = O(f)

➢f1 ∈ O(g1) ∧ f2 ∈ O(g2) → f1·f2 ∈ O(g1·g2)
f1 + f2 ∈ O(g1 + g2)
Subset relations between order-of-growth sets

[Venn diagram: O(g) and Ω(g) overlap in Θ(g), which contains g itself; o(g) ⊂ O(g) and ω(g) ⊂ Ω(g)]

Why o(x) ≠ O(x) − Θ(x)

A function that is O(x), but neither o(x) nor Ω(x):

[Figure omitted]
Other Order-of-Growth Relations

✓Ω(g) = {f | g ∈ O(f)}
"The functions that are at least order g."

✓o(g) = {f | ∀ c > 0 ∃ k ∀ x > k : 0 ≤ f(x) < c·g(x)}
"The functions that are strictly lower order than g." o(g) ⊆ O(g) − Ω(g).

✓ω(g) = {f | ∀ c > 0 ∃ k ∀ x > k : 0 ≤ c·g(x) < f(x)}
"The functions that are strictly higher order than g." ω(g) ⊆ Ω(g) − O(g).
int i, j;
j = 1;
for (i = 2; i <= n; i++)    // O(1) per operation (i = 2, i <= n, i++); in fact n − 1 iterations
    if (A[i] > A[j])
        j = i;
return j;

Adding everything together yields an upper bound on the worst-case time complexity:

Complexity is O(1) + O(1) + O(n)·{O(1) + O(1) + O(1)} + O(1) = O(n)


Practice exercises – Find sum of 'n' integers

Devise an algorithm that finds the sum of all the integers in a list.

procedure sum(a1, a2, …, an: integers)
    s := 0                    {sum of elems so far}
    for i := 1 to n           {go thru all elements}
        s := s + ai           {add current item}
    {at this point s is the sum of all items}
    return s
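The sum procedure above renders directly into running code; one addition per element gives O(n) complexity (a Python sketch):

```python
# Direct rendering of the sum procedure; O(n) additions.
def list_sum(a):
    s = 0               # sum of elements so far
    for x in a:         # go through all elements
        s += x          # add current item
    return s            # s is now the sum of all items

print(list_sum([1, 2, 3, 4, 5]))  # prints: 15
```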
Practice exercises

What is the complexity of the following algorithm?

Algorithm Awesome(A: list [1..n])
    var int i, j, k, sum
    for i from 1 to 100
        for j from 1 to 1000
            for k from 1 to 50
                sum = sum + i + j + k
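Note that every loop bound in Awesome is a fixed constant, independent of n, so the body executes exactly 100 · 1000 · 50 = 5,000,000 times: the algorithm is O(1). A Python check of the iteration count:

```python
# Count the body executions of Awesome's triple loop; none of the bounds
# depend on n, so the count is the same constant for every input.
def awesome_iterations():
    count = 0
    for i in range(1, 101):
        for j in range(1, 1001):
            for k in range(1, 51):
                count += 1
    return count

print(awesome_iterations())  # prints: 5000000, regardless of n
```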
Practice exercises

Work out the computational complexity of the following piece of code, assuming that n = 2^m:

for (int i = n; i > 0; i--) {
    for (int j = 1; j < n; j *= 2) {
        for (int k = 0; k < j; k++) {
            for (int m = 0; m < 10000; m++)
                sum = sum + m;
        }
    }
}
Practice exercises
A()
{
    int i = 1, s = 1;
    while (s <= n)
    {
        i++;
        s = s + i;
        printf("hi");
    }
}
A()
{
    int i;
    for (i = 1; i*i <= n; i++)
        printf("hi");
}
A()
{
    int i, j, k, n;
    for (i = 1; i < n; i++)
    {
        for (j = 1; j < i*i; j++)
        {
            for (k = 1; k < n/2; k++)
            {
                printf("hi");
            }
        }
    }
}
A()
{
    for (i = 1; i < n; i++)
        for (j = 1; j < n; j = j + i)
            printf("Hello");
}
A()
{
    int n = pow(2, pow(2, k));    /* n = 2^(2^k); note C's ^ is XOR, not power */
    for (i = 1; i <= n; i++)
    {
        j = 2;
        while (j <= n)
        {
            j = j * j;            /* j squares on every pass */
            printf("Hi");
        }
    }
}
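In the last exercise j squares on every pass (2, 4, 16, 256, …), i.e. j = 2^(2^t) after t passes, so with n = 2^(2^k) the while loop runs k + 1 = Θ(log log n) times; combined with the outer for loop the total is O(n log log n). A Python check of the inner-loop pass count:

```python
# Count passes of the squaring while-loop for a given n.
def while_passes(n):
    j, count = 2, 0
    while j <= n:
        j = j * j        # j = 2^(2^t) after t passes
        count += 1
    return count

for k in (1, 2, 3, 4):
    n = 2 ** (2 ** k)
    print(n, while_passes(n))  # passes = k + 1, i.e. about log2(log2(n))
```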
Thank You!!
