Unit-1_part_2
Analysis of Algorithms
Algorithm
• An algorithm is a finite set of steps to solve a problem. It presents the solution
to a problem as a well-defined sequence of steps or instructions.
Qualities of a good algorithm
• Inputs and outputs should be defined precisely.
• Each step in the algorithm should be clear and unambiguous.
• The algorithm should be the most effective among the different ways to solve the problem.
• An algorithm shouldn't contain computer code. Instead, it should be written in such a
way that it can be implemented in any programming language.
Features of an Algorithm:
• An algorithm takes some inputs, executes a finite number of steps, and gives
an output. So every step involved in the algorithm must be executable.
• It must generate some result.
Efficiency of Algorithms
• Consider two algorithms run on two different systems:
  Algorithm A1: 50 n lg n instructions, run on a P.C. (10^6 ins/sec)
  Algorithm A2: 2 n² instructions, run on a supercomputer (10^8 ins/sec)
• For n = 10^6:
  A1 takes 50 · 10^6 · lg(10^6) / 10^6 ≈ 1,000 sec (about 16.7 min)
  A2 takes 2 · (10^6)² / 10^8 = 20,000 sec (about 5.6 hours)
• The asymptotically better algorithm wins even on far slower hardware.
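A quick sanity check of these numbers, as a sketch in Python (the instruction counts and machine speeds are the ones assumed above):

```python
import math

n = 10**6

# A1: 50 * n * lg(n) instructions on a P.C. at 10^6 instructions/sec
a1_seconds = (50 * n * math.log2(n)) / 10**6   # roughly 1,000 seconds

# A2: 2 * n^2 instructions on a supercomputer at 10^8 instructions/sec
a2_seconds = (2 * n**2) / 10**8                # 20,000 seconds

print(a1_seconds, a2_seconds)
```

Even with a hundred-fold slower machine, the O(n log n) algorithm finishes in minutes while the O(n²) one needs hours.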
Algorithm vs Program
• An algorithm is independent of any programming language; a program is an
implementation of an algorithm in a specific programming language.
Pseudocode
• Pseudocode is an English-like representation of the
code required for an algorithm.
Sequence, Selection, Loop
• A sequence is a series of statements that do not alter the
execution path within an algorithm.
• Statements such as assign and add are sequence statements.
• A call to another algorithm is also considered a sequence
statement.
• Selection statements evaluate one or more alternatives. The path
followed depends on the result of the evaluation.
• The typical selection statement is the two-way selection:
  if (condition) action 1 else action 2
• The statements belonging to a loop are identified by indentation.
• A loop iterates a block of code.
Linear Loops
1 i = 1
2 loop (i <= 1000)
   1 application code
   2 i = i + 1

1 i = 1
2 loop (i <= n)
   1 application code
   2 i = i + 2

For both loops the running time is proportional to the loop limit n
(the second loop's body executes n/2 times, still linear in n).
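The iteration counts above can be verified with a small Python sketch (the function names are mine; `count` stands in for the application code):

```python
def loop_step_1(n):
    """Counts body executions of: i = 1; loop (i <= n): i = i + 1."""
    count, i = 0, 1
    while i <= n:
        count += 1   # stands in for the application code
        i += 1
    return count

def loop_step_2(n):
    """Counts body executions of: i = 1; loop (i <= n): i = i + 2."""
    count, i = 0, 1
    while i <= n:
        count += 1
        i += 2
    return count
```

Here `loop_step_1(1000)` returns 1000 and `loop_step_2(1000)` returns 500; both counts grow linearly with n.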
Logarithmic Loops

Multiply loop:
1 i = 1
2 loop (i < n)
   1 application code
   2 i = i * 2

Divide loop:
1 i = n
2 loop (i >= 1)
   1 application code
   2 i = i / 2

In both cases the body executes about log2 n times.
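A Python sketch of the two logarithmic loops, counting body executions (function names are mine; the divide loop uses integer halving):

```python
def multiply_loop(n):
    """i = 1; loop (i < n): i = i * 2; body runs ceil(log2 n) times."""
    count, i = 0, 1
    while i < n:
        count += 1
        i *= 2
    return count

def divide_loop(n):
    """i = n; loop (i >= 1): i = i / 2; body runs floor(log2 n) + 1 times."""
    count, i = 0, n
    while i >= 1:
        count += 1
        i //= 2
    return count
```

For n = 1024, `multiply_loop` executes its body 10 times and `divide_loop` 11 times, both logarithmic in n.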
Linear Logarithmic
1 i = 1
2 loop (i <= n)
   1 j = 1
   2 loop (j <= n)
      1 application code
      2 j = j * 2
   3 i = i + 1

f(n) = n log n
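The nested loop above can be sketched and its iteration count checked (a minimal sketch; the function name is mine):

```python
def linear_log(n):
    """Outer loop runs n times; inner loop doubles j, so it runs ~log2 n times."""
    count = 0
    i = 1
    while i <= n:
        j = 1
        while j <= n:
            count += 1      # application code
            j *= 2
        i += 1
    return count            # n * (floor(log2 n) + 1), i.e. O(n log n)
```

For n = 8 the inner loop body runs 4 times per outer iteration, giving 32 total iterations.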
Dependent Quadratic
1 i = 1
2 loop (i <= n)
   1 j = 1
   2 loop (j <= i)
      1 application code
      2 j = j + 1
   3 i = i + 1

For n = 10, the number of iterations of the inner loop body is
1 + 2 + 3 + 4 + … + 10 = 55, i.e. n(n + 1)/2.
On average the inner loop runs (n + 1)/2 times per outer iteration.
Thus the total number of iterations is n(n + 1)/2.
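The dependent quadratic loop, sketched in Python with an explicit iteration counter (the function name is mine):

```python
def dependent_quadratic(n):
    """Inner loop bound depends on the outer counter i."""
    count = 0
    i = 1
    while i <= n:
        j = 1
        while j <= i:
            count += 1      # application code
            j += 1
        i += 1
    return count            # n(n + 1) / 2
```

Here `dependent_quadratic(10)` returns 55, matching the 1 + 2 + … + 10 sum above.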
Quadratic
1 i = 1
2 loop (i <= n)
   1 j = 1
   2 loop (j <= n)
      1 application code
      2 j = j + 1
   3 i = i + 1

f(n) = n²
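The quadratic loop as a Python sketch (function name is mine); unlike the dependent version, the inner bound is always n:

```python
def quadratic(n):
    """Inner loop always runs the full n times, independent of i."""
    count = 0
    i = 1
    while i <= n:
        j = 1
        while j <= n:
            count += 1      # application code
            j += 1
        i += 1
    return count            # n * n
```

So `quadratic(10)` returns 100, i.e. n² body executions.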
Algorithm Analysis
Analysis vs. Design
Time Complexity
Real Time:
To analyze the real time complexity of a program we need to determine two
numbers for each statement in it:
• the time taken by a single execution of the statement, and
• the number of times the statement is executed (its frequency count).
• The product of these two numbers is the total time taken by the statement.
The first number depends upon the machine and compiler used; hence real-time
complexity is machine dependent.
Frequency count
To make analysis machine independent it is assumed that every
instruction takes the same constant amount of time for execution.
Hence determining the time complexity of a program becomes a
matter of summing the frequency counts of all its statements.
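As an illustration (a sketch; the statement labels are mine), the frequency counts for a simple counting loop can be tallied explicitly:

```python
def frequency_counts(n):
    """Tally how many times each statement of a simple loop executes."""
    freq = {"i = 1": 0, "test i <= n": 0, "body": 0, "i = i + 1": 0}
    i = 1
    freq["i = 1"] += 1
    while True:
        freq["test i <= n"] += 1      # the test runs n + 1 times
        if not (i <= n):
            break
        freq["body"] += 1             # the body runs n times
        i += 1
        freq["i = i + 1"] += 1
    return freq, sum(freq.values())   # total = 1 + (n+1) + n + n = 3n + 2
```

For n = 5 the total frequency count is 17 = 3·5 + 2, which is O(n).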
Asymptotic Notation
In mathematics, asymptotic analysis, also known as asymptotics, is a method of describing the
limiting behavior of a function. In computing, asymptotic analysis of an algorithm refers to defining
the mathematical bound of its run-time performance based on the input size. For example,
the running time of one operation may be computed as f(n) = n, and that of another operation
as g(n) = n². This means the first operation's running time will increase linearly with the
increase in n, while the second operation's running time will increase quadratically (as the
square of n). The running times of both operations will be nearly the same if n is
small in value.
Usually, the analysis of an algorithm is done based on three cases:
Best Case
Average Case
Worst Case
Big-Oh
f(n) = O(g(n)) if there exist positive constants c and n0 such that
f(n) <= c·g(n) for all n >= n0 (an asymptotic upper bound).
Omega
f(n) = Ω(g(n)) if there exist positive constants c and n0 such that
f(n) >= c·g(n) for all n >= n0 (an asymptotic lower bound).
Theta
f(n) = Θ(g(n)) if f(n) is both O(g(n)) and Ω(g(n)) (an asymptotically tight bound).
Relations Between Θ, O, Ω
f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Asymptotic Notation (cont.)
constant: O(1)
logarithmic: O(log n)
linear: O(n)
quadratic: O(n²)
polynomial: O(n^k), k ≥ 1
exponential: O(a^n), a > 1
Categories of algorithm efficiency

Efficiency          Big O
Constant            O(1)
Logarithmic         O(log n)
Linear              O(n)
Linear logarithmic  O(n log n)
Quadratic           O(n²)
Polynomial          O(n^k)
Exponential         O(c^n)
Factorial           O(n!)
Asymptotic Analysis of The
Running Time
• Use the Big-Oh notation to express the number of
primitive operations executed as a function of the input
size.
• Comparing the asymptotic running time
  - an algorithm that runs in O(n) time is better than one that
    runs in O(n²) time
  - similarly, O(log n) is better than O(n)
  - hierarchy of functions: log n << n << n² << n³ << 2^n
Amortized Analysis vs Asymptotic Analysis
• 1. Asymptotic Analysis
• Asymptotic analysis is used to analyze the growth rate of an algorithm's
running time or space requirements as the input size n approaches
infinity. It helps classify algorithms into Big-O, Big-Ω, or Big-Θ notations.
• Focus: Worst-case or average-case performance for a given input size.
• Goal: To estimate performance in terms of n for large inputs.
• Example:
• Suppose you want to find the maximum element in an array of size n.
• Time Complexity: O(n) (linear time), since we iterate through the array once.
• Asymptotic Behavior: As n grows, the runtime grows linearly.
Asymptotic analysis focuses on how the algorithm scales with input size, irrespective of specific
operations or varying costs over time.
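The max-finding example above, sketched in Python:

```python
def find_max(arr):
    """Single pass over the array: the loop body runs len(arr) - 1 times, O(n)."""
    maximum = arr[0]
    for x in arr[1:]:
        if x > maximum:
            maximum = x
    return maximum
```

Whatever the contents of the array, the running time is proportional to its length.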
• 2. Amortized Analysis
• Amortized analysis considers the average cost of operations over a sequence of operations, rather than
analyzing each operation individually. This is particularly useful when some operations are expensive, but the
expensive operations are infrequent and balanced out by many cheaper ones.
• Focus: Average cost of operations over the long run.
• Goal: To provide a realistic view of performance for algorithms with non-uniform costs.
• Example:
• Suppose you're working with a dynamic array (like Python's list), which doubles in size when full. Here's how
it works:
• Append an element to the array.
• If the array is full, allocate a new array (double the size), copy all elements, and then append the new
element.
• Operation Cost:
– Regular append: O(1).
– Resize (when full): O(n) because all elements are copied.
• Amortized Analysis for n Operations:
• Suppose the array starts empty and has size 1. Every time it doubles, the total number of
elements is copied, but this doubling happens rarely.
• Over n operations:
– Most operations are cheap O(1).
– Only a few resizing operations are costly O(n).
• Total Cost:
• The cost of all the resizing is: 1+2+4+⋯+n, which sums up to O(n).
• Dividing this O(n) cost over n operations, the amortized cost per operation is O(1).
• Amortized analysis gives the average cost per operation, accounting for occasional expensive
operations, while asymptotic analysis doesn't consider this balance—it would simply state O(n)
for the resizing step.
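The doubling dynamic array described above can be sketched directly (a toy implementation; the `copies` counter is mine, added to make the amortized bound visible):

```python
class DynamicArray:
    """Toy dynamic array that doubles its capacity when full."""

    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0                      # elements copied during resizes

    def append(self, value):
        if self.size == self.capacity:       # rare O(n) resize
            self.capacity *= 2
            new_data = [None] * self.capacity
            for k in range(self.size):
                new_data[k] = self.data[k]
            self.copies += self.size
            self.data = new_data
        self.data[self.size] = value         # the common O(1) path
        self.size += 1

arr = DynamicArray()
n = 1024
for v in range(n):
    arr.append(v)
# total copies: 1 + 2 + 4 + ... + 512 = n - 1, so amortized O(1) per append
```

After 1024 appends only 1023 element copies have occurred in total, i.e. less than one copy per append on average.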
Key Differences
Real Life Example
• Asymptotic Analysis: Imagine you're timing how long it takes
to drive to a destination. If there’s a traffic jam, you record the
maximum time it takes for the worst-case scenario.