Module 1: Algorithm Analysis

The document describes the modules covered in the Design and Analysis of Algorithms course. The 6 modules are: 1) Introduction and Performance Analysis, 2) Brute Force and Greedy Methods, 3) Dynamic Programming, 4) Branch-and-Bound and Backtracking Methodologies, 5) Graph Algorithms, and 6) Class P and NP. The outcomes of the course are to analyze algorithms, design algorithms using different techniques, apply suitable algorithms to problems, and identify complexity classes of problems.

18CS2010

Design and Analysis of Algorithms


Module 1: Introduction and Performance Analysis
Introduction: Performance analysis, Asymptotic Notations & Basic Efficiency Classes. Asymptotic analysis of complexity bounds –
best, average and worst-case behaviour, Time and space trade-offs, Analysis of recursive algorithms through recurrence relations:
Substitution method, Recursion tree method and Master's theorem.
Module 2: Brute Force and Greedy Methods
Brute-Force: Searching and String Matching, Greedy: General Method, Huffman Codes, Knapsack Problem, Task Scheduling
Problem, Optimal Merge Pattern.
Module 3: Dynamic Programming
Matrix Chain Multiplication, Longest Common Subsequences – Warshall's Transitive Closure and Floyd's All-Pairs Shortest Path
Algorithm – 0/1 Knapsack – Optimal Binary Search Tree – Travelling Salesman Problem.
Module 4: Branch-and-Bound and Backtracking Methodologies
Branch-and-Bound: Knapsack, Travelling Salesman Problem; Backtracking: Knapsack, Sum of Subsets, 8-Queens Problem, Bin-packing.
Module 5: Graph Algorithms
Traversal algorithms: Depth First Search (DFS) and Breadth First Search (BFS); Prim’s and Kruskal’s Minimum Spanning Tree,
Topological sorting, Network Flow Algorithm.
Module 6: Class P and NP
Tractable and Intractable Problems: Computability of Algorithms, Computability classes – P, NP, NP-Complete and NP-hard. Cook's
theorem, Approximation algorithms.
Outcomes
The student will be able to
1. Analyze a given algorithm and express its complexity in asymptotic notation
2. Design algorithms using greedy and dynamic programming techniques
3. Propose solutions using backtracking and branch-and-bound techniques
4. Apply suitable algorithmic technique to solve a problem
5. Solve problems using fundamental graph algorithms
6. Identify the problems belonging to the classes P, NP-Complete or NP-Hard
Analysis of Algorithms

[Figure: Input → Algorithm → Output]

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.
Running Time
• Most algorithms transform input objects into output objects.
• The running time of an algorithm typically grows with the input size.
• Average-case time is often difficult to determine.
• We focus on the worst-case running time:
  • Easier to analyze
  • Crucial to applications such as games, finance and robotics

[Figure: best-case, average-case and worst-case running time versus input size 1000–4000]
Experimental Studies
• Write a program implementing the algorithm
• Run the program with inputs of varying size and composition
• Get an accurate measure of the actual running time
• Plot the results

[Figure: measured time (ms) versus input size 0–100]
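The four steps above can be sketched in a few lines; a minimal Python example, assuming a quadratic-time bubble sort as the algorithm under test (the algorithm and input sizes are illustrative, not from the slides):

```python
import time

def bubble_sort(a):
    """Quadratic-time sort used here as the algorithm under test."""
    a = list(a)
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

def measure(sizes):
    """Run the program on inputs of varying size and record wall-clock time."""
    results = []
    for n in sizes:
        data = list(range(n, 0, -1))          # worst-case (reversed) input
        start = time.perf_counter()
        bubble_sort(data)
        elapsed = time.perf_counter() - start
        results.append((n, elapsed))
    return results

for n, t in measure([100, 200, 400]):
    print(f"n={n:4d}  time={t * 1000:.2f} ms")
```

Plotting the (n, time) pairs then gives a curve like the one the slide describes.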
Limitations of Experiments
• It is necessary to implement the algorithm, which may be
difficult
• Results may not be indicative of the running time on other
inputs not included in the experiment.
• In order to compare two algorithms, the same hardware and
software environments must be used
Theoretical Analysis
• Uses a high-level description of the algorithm instead of an
implementation
• Characterizes running time as a function of the input size, n.
• Takes into account all possible inputs
• Allows us to evaluate the speed of an algorithm independent
of the hardware/software environment
Pseudocode
• High-level description of an algorithm
• More structured than English prose
• Less detailed than a program
• Preferred notation for describing algorithms
• Hides program design issues

Example: find the max element of an array

Algorithm arrayMax(A, n)
  Input: array A of n integers
  Output: maximum element of A

  currentMax ← A[0]
  for i ← 1 to n − 1 do
    if A[i] > currentMax then
      currentMax ← A[i]
  return currentMax
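The pseudocode translates almost line-for-line into a real language; one possible Python rendering (the function name is illustrative):

```python
def array_max(a):
    """Return the maximum element of a non-empty array, as in arrayMax(A, n)."""
    current_max = a[0]               # currentMax <- A[0]
    for i in range(1, len(a)):       # for i <- 1 to n-1 do
        if a[i] > current_max:       #   if A[i] > currentMax then
            current_max = a[i]       #     currentMax <- A[i]
    return current_max               # return currentMax

print(array_max([3, 7, 2, 9, 4]))   # 9
```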
Pseudocode Details
• Control flow
  • if … then … [else …]
  • while … do …
  • repeat … until …
  • for … do …
  • Indentation replaces braces
• Method declaration
  Algorithm method (arg [, arg…])
    Input …
    Output …
• Method call
  var.method (arg [, arg…])
• Return value
  return expression
• Expressions
  • ← Assignment
  • = Equality testing
  • n^2 Superscripts and other mathematical formatting allowed
Primitive Operations
• Basic computations performed by an algorithm
• Identifiable in pseudocode
• Largely independent of the programming language
• Exact definition not important
• Examples:
  • Evaluating an expression
  • Assigning a value to a variable
  • Indexing into an array
  • Calling a method
  • Returning from a method
Running Time Calculations
General Rules
1. FOR loop
• The number of iterations times the time of the inside statements.
2. Nested loops
• The product of the iteration counts of all the loops times the time of the statements inside the innermost loop.
3. Consecutive Statements
• The sum of running time of each segment.
4. If/Else
• The testing time plus the larger running time of the cases.
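The four rules can be checked on a toy function whose step count we can predict exactly; a small Python sketch (the function and counts are illustrative, not from the slides):

```python
def rule_demo(n):
    """Count executed steps to illustrate the four composition rules."""
    steps = 0
    for i in range(n):            # rule 1: a FOR loop runs n times
        steps += 1
    for i in range(n):            # rule 2: nested loops multiply -> n * n
        for j in range(n):
            steps += 1
    # rule 3: the two loops above run consecutively, so their costs add: n + n^2
    if n % 2 == 0:                # rule 4: the test plus the larger branch
        steps += 1
    return steps

print(rule_demo(4))   # 4 + 16 + 1 = 21
```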
Counting Primitive Operations
• By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size

Algorithm arrayMax(A, n)          # operations
  currentMax ← A[0]                 2
  for i ← 1 to n − 1 do             2n
    if A[i] > currentMax then       2(n − 1)
      currentMax ← A[i]             2(n − 1)
    { increment counter i }         2(n − 1)
  return currentMax                 1
                          Total     8n − 2
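The per-line counts can be checked empirically by instrumenting the loop; a small sketch that tallies just the element comparisons, which run once per loop iteration (the counter variable is an illustrative addition):

```python
def array_max_counted(a):
    """arrayMax with a counter for the A[i] > currentMax comparisons."""
    comparisons = 0
    current_max = a[0]
    for i in range(1, len(a)):
        comparisons += 1             # one element comparison per iteration
        if a[i] > current_max:
            current_max = a[i]
    return current_max, comparisons

m, c = array_max_counted(list(range(10)))
print(m, c)   # the loop body runs n - 1 = 9 times
```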
Estimating Running Time
• Algorithm arrayMax executes 8n − 2 primitive operations in the worst case. Define:
  a = time taken by the fastest primitive operation
  b = time taken by the slowest primitive operation
• Let T(n) be the worst-case time of arrayMax. Then
  a(8n − 2) ≤ T(n) ≤ b(8n − 2)
• Hence, the running time T(n) is bounded by two linear functions
Growth Rate of Running Time
• Changing the hardware/software environment
  • Affects T(n) by a constant factor, but
  • Does not alter the growth rate of T(n)
• The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax
Seven Important Functions
• Seven functions that often appear in algorithm analysis:
  • Constant ≈ 1
  • Logarithmic ≈ log n
  • Linear ≈ n
  • N-Log-N ≈ n log n
  • Quadratic ≈ n^2
  • Cubic ≈ n^3
  • Exponential ≈ 2^n
• In a log-log chart, the slope of the line corresponds to the growth rate of the function

[Figure: log-log plot of T(n) versus n showing the linear, quadratic and cubic curves]
Some Numbers

n    log2 n   n log2 n   n^2    n^3     2^n
1    0        0          1      1       2
2    1        2          4      8       4
4    2        8          16     64      16
8    3        24         64     512     256
16   4        64         256    4096    65536
32   5        160        1024   32768   4294967296
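The rows can be regenerated programmatically; a short Python sketch:

```python
# Recompute the table for n = 1, 2, 4, ..., 32.
rows = []
for n in (2 ** k for k in range(6)):
    lg = n.bit_length() - 1          # log2(n), exact for powers of two
    rows.append((n, lg, n * lg, n ** 2, n ** 3, 2 ** n))
    print(rows[-1])
```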
Different Growth Functions

[Figure: comparison of the growth functions above]
Space / Time Complexity
• The space complexity of an algorithm is the amount of memory it needs to run to completion.
• The time complexity of an algorithm is the amount of computer time it needs to run to completion.
Asymptotic Notation
• An asymptote of a curve is a line that the curve approaches ever more closely. In other words, the curve and its asymptote get infinitely close, but they never meet.
Big-Oh Notation
• The function f(n) = O(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≤ c·g(n) for all n ≥ n0.
• Denotes an asymptotic upper bound.
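For instance, f(n) = 3n + 10 is O(n) with the witness pair c = 4, n0 = 10, since 3n + 10 ≤ 4n whenever n ≥ 10. A finite spot-check of such a witness (an illustrative helper, not a proof, since it only tests a bounded range of n):

```python
def is_bounded(f, g, c, n0, upto=10_000):
    """Spot-check f(n) <= c*g(n) for all n0 <= n < upto."""
    return all(f(n) <= c * g(n) for n in range(n0, upto))

print(is_bounded(lambda n: 3 * n + 10, lambda n: n, c=4, n0=10))  # True
```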
Big-Omega Notation
• The function f(n) = Ω(g(n)) if and only if there exist positive constants c and n0 such that f(n) ≥ c·g(n) for all n ≥ n0.
• Denotes an asymptotic lower bound.


Big-Theta Notation
• The function f(n) = Θ(g(n)) if and only if there exist positive constants c1, c2 and n0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
• Denotes an asymptotic tight bound.
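A Θ bound sandwiches f between two multiples of g. For example, f(n) = 3n^2 + 2n is Θ(n^2) with witness constants c1 = 3, c2 = 4, n0 = 2, since 3n^2 ≤ 3n^2 + 2n ≤ 4n^2 for n ≥ 2. A finite spot-check of these constants (illustrative, not a proof):

```python
def is_theta_witness(f, g, c1, c2, n0, upto=5_000):
    """Spot-check c1*g(n) <= f(n) <= c2*g(n) for all n0 <= n < upto."""
    return all(c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, upto))

f = lambda n: 3 * n * n + 2 * n
g = lambda n: n * n
print(is_theta_witness(f, g, c1=3, c2=4, n0=2))  # True
```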


Asymptotic Notations

Asymptotic comparison operator       Numeric comparison operator
Our algorithm is o(something)        A number is < something
Our algorithm is O(something)        A number is ≤ something
Our algorithm is Θ(something)        A number is = something
Our algorithm is Ω(something)        A number is ≥ something
Our algorithm is ω(something)        A number is > something
