Algorithm Short Notes

The document provides a comprehensive overview of algorithm analysis, focusing on asymptotic notations such as O, Ω, and Θ, which are used to describe the performance and resource consumption of algorithms. It discusses various methodologies for analyzing algorithms, including a priori and a posteriori analysis, and introduces the Master Theorem for solving recurrence relations. Additionally, it covers space complexity and the divide and conquer approach, highlighting examples and time complexities for different algorithmic scenarios.

Module 1: Analysis of Algorithm

Aim: The goal of analysis of algorithms is to compare algorithms, mainly in terms of running time but also in terms of other factors like memory and developer effort.

Need for Analysis (Why to analyze || What to analyze || How to analyze)
1. To determine resource consumption <resources such as space + time + cost + registers>. Resources may differ from domain to domain.
2. Performance comparison, to find out the efficient solution.

Methodology of algorithm analysis depends on:
● Language
● Operating system
● Hardware (CPU, processor, memory, input/output)

Types of analysis
1. A posteriori analysis (platform dependent): It gives exact values in real units.
2. A priori analysis (platform independent): It allows us to compare the relative efficiency of two algorithms in a platform-independent way. It does not give real values in units.

Asymptotic Notations

Θ-Notation [pronounced "big-theta"]
Let f(n) and g(n) be two positive functions.
f(n) = Θ(g(n)) if and only if
f(n) ≤ c1 · g(n) and f(n) ≥ c2 · g(n) ∀ n ≥ n0,
such that there exist three positive constants c1 > 0, c2 > 0 and n0 ≥ 1.

O-Notation [pronounced "big-oh"]
Let f(n) and g(n) be two positive functions.
f(n) = O(g(n)) if and only if
f(n) ≤ c · g(n) ∀ n ≥ n0,
such that there exist two positive constants c > 0, n0 ≥ 1.

Ω-Notation [pronounced "big-omega"]
Ω notation provides an asymptotic lower bound for a given function g(n), denoted by Ω(g(n)). f(n) = Ω(g(n)) if and only if f(n) ≥ c · g(n) ∀ n ≥ n0, such that there exist two positive constants c > 0, n0 ≥ 1.

Analogy between real numbers & asymptotic notation
Let a, b be two real numbers & f, g two positive functions.
● If f(n) is O(g(n)) : a ≤ b (f grows no faster than some multiple of g)
● If f(n) is Ω(g(n)) : a ≥ b (f grows at least as fast as some multiple of g)
● If f(n) is Θ(g(n)) : a = b (f grows at the same rate as g)
● If f(n) is o(g(n)) : a < b (f grows slower than any multiple of g)
● If f(n) is ω(g(n)) : a > b (f grows faster than any multiple of g)

Rate of growth of functions
[Highest] n! —> 4^n —> 2^n —> n² —> n log n —> log(n!) —> n —> 2^(log n) —> log² n —> log log n —> 1 [Lowest]

Trichotomy property
For any two real numbers (a, b) there must be a relation between them (a > b, a < b, or a = b).
Asymptotic notation does not satisfy the trichotomy property.
Ex: f(n) = n, g(n) = n^|sin(n)|, n > 0
∴ These two functions cannot be compared asymptotically (g oscillates between 1 and n).
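A quick worked instance of the Θ-definition above (an added illustrative example, not from the original notes), exhibiting explicit constants:

% Show 3n^2 + 5n = \Theta(n^2) by finding c_1, c_2, n_0.
% Upper bound: 3n^2 + 5n \le 3n^2 + 5n^2 = 8n^2 for all n \ge 1, so c_1 = 8.
% Lower bound: 3n^2 + 5n \ge 3n^2 for all n \ge 1, so c_2 = 3.
\[
3n^2 + 5n = \Theta(n^2)
\quad\text{with } c_1 = 8,\; c_2 = 3,\; n_0 = 1.
\]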


Examples

1. Loop
for (i = 1; i <= n; i++) {
    x = y + z;
}
T(n) = O(n)

2. Nested loop
for (i = 1; i <= n; i++) {
    for (j = 1; j <= n; j++) {
        k = k + 1;
    }
}
T(n) = O(n²)

3. Logarithmic loop
for (i = 1; i <= n; i *= 2) {
    k = k + 1;
}
T(n) = O(log n)

for (int i = 1; i <= n; i++) {
    for (int j = 1; j <= n; j *= 2) {
        printf("GFG");
    }
}
T(n) = O(n log n)

4. Linear recursion
void fun(int n) {
    if (n > 0) {
        fun(n - 1);
    }
}
T(n) = T(n-1) + C
T(n) = O(n)

5. Recursion with a logarithmic loop
void fun(int n) {
    if (n > 1) {
        fun(n / 2);                        // Recursive call first
        for (int i = 1; i <= n; i *= 2) {  // O(log n) loop
            printf("Hello\n");
        }
    }
}
T(n) = T(n/2) + O(log n)
T(n) = O(log² n)

6. Infinite time
c = 0;
while (1)
    c += 1;

7. Mutually exclusive loops
1. For i ← 1 to n: C = C + 1;
2. For j ← 1 to m: K = K * 2;
Time = O(n + m) = O(max(n, m))

8. Nested loop analysis
for (i = 1; i <= n; ++i)                // executes n times
    for (j = 1; j <= n; ++j)            // executes n times
        for (k = n/2; k <= n; k += n/2) // executes 2 times (k = n/2, n)
            C = C + H;
Time = O(n²)

for (i = 1; i < n; i = i + a)
Time: O(n/a) = O(n) for constant a

9. Loops with a multiplicative increment
for (i = 1; i <= n; i = i*2): this loop's complexity is O(log₂ n).
for (i = 1; i <= n; i = i*3): this loop's complexity is O(log₃ n).
for (i = 1; i <= n; i = i*a): in general, the time complexity is O(logₐ n).

k = 1, i = 1;
while (k <= n) {
    i++;
    k = k + i;
}
Time complexity: T(n) = O(√n), since k grows as 1 + 2 + ... + i = i(i+1)/2, so the loop stops when i ≈ √(2n).

for (i = n; i >= 2; i = sqrt(i))
Time complexity: O(log log n)

10.
for (i = 2; i <= n; i = i*i) {        // executes log log n times
    for (j = 1; j <= n; j++) {
        for (k = 1; k <= n; k += j) { // executes n/j times, j = 1, 2, ..., n
            x = y + z;
        }
    }
}
The two inner loops cost Σⱼ n/j = O(n log n) per outer iteration, so
T(n) = O(n log n · log log n)
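A quick empirical check of the √n behaviour in example 9 (an illustrative program added here, not part of the original notes):

#include <stdio.h>
#include <math.h>

/* Counts iterations of the k = k + i loop and compares with sqrt(2n). */
int main(void) {
    long n = 1000000;
    long k = 1, i = 1, iters = 0;
    while (k <= n) {
        i++;
        k = k + i;
        iters++;
    }
    /* Expect iters close to sqrt(2n), about 1414 for n = 10^6. */
    printf("iterations = %ld, sqrt(2n) = %.0f\n", iters, sqrt(2.0 * n));
    return 0;
}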


Master Theorem

Let f(n) be a positive function and T(n) be defined by the recurrence relation:
T(n) = aT(n/b) + f(n)
where a ≥ 1 and b > 1 are two constants.
Case 1: If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
Case 2: If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
Case 3: If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Master theorem for subtract-and-conquer recurrences:
Let T(n) be a function defined on positive n:
T(n) = aT(n − b) + f(n), if n > 1
T(n) = C, if n ≤ 1
for some constants C, a > 0, b > 0, and f(n) = O(n^d).
1. T(n) = O(n^d), if a < 1
2. T(n) = O(n^(d+1)), if a = 1
3. T(n) = O(n^d · a^(n/b)), if a > 1

Common Recurrence Relations

Recurrence relation                           | Time complexity
T(n) = C, n = 2;  T(n) = 2T(√n) + C, n > 2    | O(log n)
T(n) = C, n = 1;  T(n) = T(n−1) + C, n > 1    | O(n)
T(n) = C, n = 1;  T(n) = T(n−1) + n + C       | O(n²)
T(n) = C, n = 1;  T(n) = T(n−1) · n + C       | O(nⁿ)
T(n) = C, n = 1;  T(n) = 2T(n/2) + C, n > 1   | O(n)
T(n) = C, n = 1;  T(n) = 2T(n/2) + n, n > 1   | O(n log n)
T(n) = C, n = 1;  T(n) = T(n/2) + C, n > 1    | O(log n)
T(n) = 1, n = 2;  T(n) = T(√n) + C, n > 2     | Θ(log log n)
T(n) = C, n = 1;  T(n) = T(n/2) + 2ⁿ, n > 1   | O(2ⁿ)

Analysis
1. f(n) = n!
n! ≤ c · nⁿ for n ≥ 2
∴ n! = O(nⁿ) with c = 1, n0 = 2.
Using Stirling's approximation: n! ≈ √(2πn) · nⁿ · e⁻ⁿ
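Worked applications of the three Master Theorem cases above (added illustrative examples, with a and b read off each recurrence):

% Case 1: T(n) = 8T(n/2) + n^2.  n^{\log_2 8} = n^3 and f(n) = n^2 = O(n^{3-\varepsilon}).
T(n) = 8T(n/2) + n^2 \;\Rightarrow\; T(n) = \Theta(n^3)

% Case 2: T(n) = 2T(n/2) + n.  n^{\log_2 2} = n = \Theta(f(n))  (merge sort).
T(n) = 2T(n/2) + n \;\Rightarrow\; T(n) = \Theta(n \log n)

% Case 3: T(n) = 2T(n/2) + n^2.  f(n) = n^2 = \Omega(n^{1+\varepsilon}),
% and regularity holds: 2 f(n/2) = n^2/2 \le c\,n^2 with c = 1/2 < 1.
T(n) = 2T(n/2) + n^2 \;\Rightarrow\; T(n) = \Theta(n^2)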


Properties of Asymptotic Notation

Property             | Big-Oh (O) | Big-Omega (Ω) | Theta (Θ) | Small-oh (o) | Small-omega (ω)
Reflexive            | ✓          | ✓             | ✓         | ×            | ×
Symmetric            | ×          | ×             | ✓         | ×            | ×
Transitive           | ✓          | ✓             | ✓         | ✓            | ✓
Transpose symmetric  | ✓          | ✓             | ×         | ✓            | ✓

1. Reflexive
f(n) = O(f(n))
f(n) = Ω(f(n))
f(n) = Θ(f(n))

2. Symmetric
f(n) = Θ(g(n)) iff g(n) = Θ(f(n))

3. Transitive
f(n) = Θ(g(n)) & g(n) = O(h(n)) ⟹ f(n) = O(h(n))
Note: Ω and Θ also satisfy transitivity.

4. Transpose symmetric
f(n) = O(g(n)) iff g(n) = Ω(f(n))

Best case(n) ≤ Average case(n) ≤ Worst case(n)

Space Complexity

Space required by an algorithm to solve an instance of the problem, excluding the space allocated to hold the input.
Space complexity: C + S(n)
C    - constant space
S(n) - additional space that depends on input size n

Space Complexity vs Auxiliary Space
Space complexity = total space used, including the input
Auxiliary space  = extra space used, excluding the input
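As an illustration of the auxiliary-space distinction (example code added here, not from the original notes): a recursive sum keeps O(n) frames on the call stack, while the iterative version needs only O(1) extra space.

#include <stdio.h>

/* Recursive: auxiliary space O(n) due to n stacked activation records. */
long sumRec(const int a[], int n) {
    if (n == 0) return 0;
    return a[n - 1] + sumRec(a, n - 1);
}

/* Iterative: auxiliary space O(1); only a few scalar variables. */
long sumIter(const int a[], int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

int main(void) {
    int a[] = {1, 2, 3, 4, 5};
    printf("%ld %ld\n", sumRec(a, 5), sumIter(a, 5));
    return 0;
}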


Space Complexity (Memory)

Example:
Algo sum(A, n) {
    int i, sum = 0;
    for (i = 0; i < n; i++)
        sum = sum + A[i];
    return sum;
}
Time complexity: O(n)
Space complexity: O(1) auxiliary (the array A is input)

Algo swapNum(int a, int b) {
    int temp = a;
    a = b;
    b = temp;
}
Time complexity: O(1)
Auxiliary space: O(1)


Module 2: Divide and Conquer (DAC)

Note: In DAC, the divide and conquer steps are mandatory but the combine step is optional.

Generalized form: T(n) = aT(n/b) + g(n)
g(n) positive, a > 0, b > 0

I. Symmetric form
T(n) = aT(n/b) + g(n)
a : number of sub-problems
b : size shrink factor (each sub-problem has size n/b)
g(n) : cost to divide and combine
e.g. Merge sort: T(n) = 2T(n/2) + O(n)

II. Asymmetric form 1
T(n) = T(αn) + T((1−α)n) + g(n), provided that 0 < α < 1
e.g. T(n) = T(n/3) + T(2n/3) + g(n)

III. Asymmetric form 2
T(n) = T(n/2) + T(n/4) + g(n)
e.g. Quick sort with asymmetric partitioning

Algorithm DAC(A, l, n)
    if (Small(l, n))
        return S(A, l, n);
    else
        m ← Divide(l, n)
        S1 ← DAC(A, l, m)
        S2 ← DAC(A, m+1, n)
        Combine(S1, S2);

Time Complexity for a DAC Problem
T(n) = F(n), if n is small
T(n) = 2T(n/2) + g(n), if n is large (for two equal sub-problems)
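A concrete instantiation of this template (an illustrative sketch added here: Small() is a one-element range, Divide() is the midpoint, and Combine() takes the larger of the two half results):

#include <stdio.h>

/* DAC maximum: T(n) = 2T(n/2) + O(1) = O(n). */
int dacMax(const int A[], int l, int r) {
    if (l == r)                    /* Small(): one element  */
        return A[l];
    int m = (l + r) / 2;           /* Divide(): midpoint    */
    int s1 = dacMax(A, l, m);      /* solve left half       */
    int s2 = dacMax(A, m + 1, r);  /* solve right half      */
    return s1 > s2 ? s1 : s2;      /* Combine(): larger one */
}

int main(void) {
    int A[] = {3, 9, 2, 7, 5};
    printf("%d\n", dacMax(A, 0, 4));  /* prints 9 */
    return 0;
}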


Divide and conquer problems

1. Finding minimum and maximum
T(n) = 2T(n/2) + 2, n > 2
Time complexity using DAC: T(n) = O(n)
Space complexity using DAC: T(n) = O(log n) (recursion stack)

2. Power of an element
Recurrence relation:
T(n) = 1, if n = 1
T(n) = T(n/2) + C, if n > 1
Time complexity: O(log n)
Space complexity: O(log n)

3. Binary Search Algorithm
Note: requires that the list of elements is already sorted.
T(n) = c, n = 1
T(n) = a + T(n/2), n > 1
Time complexity: T(n) = O(log n)
Space complexity: T(n) = O(1) (iterative version)

4. Merge Sort Algorithm
Comparison-based sorting; stable but out-of-place.
Recurrence relation: T(n) = c, if n = 1
T(n) = T(n/2) + T(n/2) + cn, if n > 1
Time complexity = O(n log n) = Ω(n log n) = Θ(n log n)
Space complexity: O(n + log n) = O(n)

5. Quick Sort Algorithm
Best case / average case:
T(n) = 1, if n = 1
T(n) = 2T(n/2) + n + C, if n > 1
Time complexity: O(n log n)
Worst case: T(n) = T(n−1) + n + C, if n > 1
Time complexity: O(n²)
Note: Quick sort hits its worst case when the elements are already sorted.

6. Matrix Multiplication
I. Using DAC: T(n) = 8T(n/2) + O(n²), for n > 1
T(n) = O(n³)
II. Strassen's matrix multiplication:
T(n) = 7T(n/2) + a·n², for n > 1
Time complexity: O(n^2.81) (by Strassen)
Time complexity: O(n^2.37) (by Coppersmith and Winograd)
Space complexity: O(n²)

7. Selection procedure (find the kth smallest, given an array of elements and an integer k)
Time complexity: O(n²) (worst case)
Space complexity: O(n)

8. Counting the number of inversions (an inversion in an array is a pair (i, j) such that i < j and arr[i] > arr[j])
Time complexity: O(n log n)
Space complexity: O(n) (due to merges)

9. Closest pair of points (find the minimum Euclidean distance between any two points in a 2D plane)
Recurrence relation: T(n) = 2T(n/2) + O(n)
Time complexity = O(n log n)
Sorted copies (x-sorted, y-sorted): O(n)
Auxiliary space (recursion stack): O(log n)
Space complexity: O(n)

10. Convex hull (find the smallest convex polygon that encloses a set of points in a 2D plane)
T(n) = 2T(n/2) + O(n)
Time complexity = O(n log n)
Space complexity: O(log n)

Note: In the GATE exam, if merge sort is given, always consider it out-of-place.
• If the array size is very large, merge sort is preferable.
• If the array size is very small, then prefer insertion sort.
• Merge sort is a stable sorting technique.
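A sketch of the O(log n) power computation from problem 2 above (illustrative; assumes an integer exponent n ≥ 1):

#include <stdio.h>

/* power(x, n) with T(n) = T(n/2) + C = O(log n). */
long power(long x, long n) {
    if (n == 1) return x;           /* base case */
    long half = power(x, n / 2);    /* one recursive call on n/2 */
    if (n % 2 == 0)
        return half * half;         /* x^n = (x^(n/2))^2 */
    else
        return half * half * x;     /* odd n: one extra factor of x */
}

int main(void) {
    printf("%ld\n", power(3, 10));  /* prints 59049 */
    return 0;
}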


11. Long Integer Multiplication (LIM)
The int data type can store values only up to 32767 (16-bit); the long int data type takes 4 B/8 B (8-10 digit numbers), not more than that.
Solution: we can store a long integer in an array of digits and multiply digit-wise.
T(n) = 4T(n/2) + b·n, if n > 1
Time complexity: O(n²)
Space complexity: O(log n)

Karatsuba optimization:
T(n) = 3T(n/2) + b·n, if n > 1
Time complexity: O(n^1.58)  [n^(log₂ 3)]
Karatsuba is better, but still not fast enough.

Toom-Cook optimization:
Toom-Cook is a generalization of Karatsuba's algorithm that splits the input numbers into three (or more) parts.

Toom-3 (3-way split)
I. Naive 3-way split: T(n) = 9T(n/3) + b·n
   Time complexity: O(n²)
II. Karatsuba-style 3-way split: T(n) = 8T(n/3) + b·n
   Time complexity: O(n^(log₃ 8)) ≈ O(n^1.89)
III. In general T(n) = xT(n/3) + b·n; Toom-3 achieves x = 5:
   T(n) = 5T(n/3) + b·n
   Time complexity: O(n^(log₃ 5)) ≈ O(n^1.465)

Generalized recurrences for a k-way split:
1. Naive DAC:        T(n) = k²T(n/k) + b·n
2. Karatsuba-style:  T(n) = (k² − 1)T(n/k) + b·n
3. Toom-Cook:        T(n) = (2k − 1)T(n/k) + b·n

Toom-4 exists but is less practical due to overhead.
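A compact illustration of Karatsuba's 3-multiplication identity on machine integers (an added sketch only: production code works on digit arrays, as the notes describe, since machine integers overflow quickly):

#include <stdio.h>

/* Karatsuba: x*y with 3 recursive multiplications instead of 4. */
long kara(long x, long y) {
    if (x < 10 || y < 10)            /* base case: single digit */
        return x * y;
    long big = x > y ? x : y, digits = 0;
    for (long t = big; t > 0; t /= 10) digits++;
    long p = 1;
    for (long i = 0; i < digits / 2; i++) p *= 10;   /* p = 10^m */

    long x1 = x / p, x0 = x % p;     /* x = x1*p + x0 */
    long y1 = y / p, y0 = y % p;     /* y = y1*p + y0 */

    long z2 = kara(x1, y1);                      /* high parts  */
    long z0 = kara(x0, y0);                      /* low parts   */
    long z1 = kara(x1 + x0, y1 + y0) - z2 - z0;  /* cross terms */

    return z2 * p * p + z1 * p + z0;
}

int main(void) {
    printf("%ld\n", kara(1234, 5678));   /* prints 7006652 */
    return 0;
}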


Module 3: Greedy Algorithm

The greedy technique is a method that makes the locally optimal choice at each step with the hope of finding a global optimum, without reconsidering previous choices.

Applications of the greedy technique

1. Job sequencing with deadlines
Schedule jobs to maximize profit before their deadlines (one job per time slot).
Sort jobs by profit (descending).
Find the maximum deadline in the given array of n deadlines and take a slot array of that size.
Schedule each job in the latest available slot before its deadline.
Time complexity
Best case: O(n log n)
Worst case: O(n²)

2. Optimal merge pattern (Huffman coding is one of its applications)
Merge n sorted files with minimum total cost (record movements).
Always merge the two smallest files. Repeat until one file remains.
Huffman coding is a direct application of the optimal merge pattern.
Steps to solve a problem:
Create a min-heap (priority queue) of all characters based on their frequencies.
Repeat until the heap contains only one node:
(i) Extract the two nodes with the smallest frequencies.
(ii) Create a new internal node with:
    Frequency = sum of the two nodes.
    Left child = node with smaller frequency.
    Right child = node with larger frequency.
(iii) Insert this new node back into the heap.
The remaining node is the root of the Huffman tree.
Time complexity: O(n log n)
Space complexity: O(n)

3. Fractional knapsack
Maximize profit within a given weight capacity; fractional items are allowed.
for (i = 1; i <= n; i++)
    a[i] = Profit(i) / weight(i)
Arrange array a in descending order of ratio, then take objects one by one from a and put them in the knapsack until the knapsack becomes full.
Time complexity: O(n log n)

4. Activity selection problem
You are given n activities, each with a start time and a finish time. The goal is to select the maximum number of activities that can be performed by a single person, under the constraint that the person can work on only one activity at a time (i.e., no overlapping activities).
Sort the activities by their finish time (in ascending order).
Select the first activity in the sorted list and include it in the final solution.
Iterate through the remaining activities in the sorted list:
● For each activity, check if its start time is greater than or equal to the finish time of the last selected activity.
● If the condition holds, select the activity and update the last selected finish time.
Time complexity
If activities are not sorted by finish time:
● Sorting takes O(n log n)
● Selecting activities takes O(n)
● Total time = O(n log n)
If activities are sorted by finish time:
● Only the selection loop runs → O(n)
● Total time = O(n)

5. Minimum cost spanning tree

I. Kruskal's minimum spanning tree algorithm
It builds the minimum spanning tree by always choosing the next lightest edge that doesn't form a cycle.
Sort all edges of the graph in non-decreasing order of their weights.
Initialize an empty set for the MST.
For each edge in the sorted list:
● If the edge does not form a cycle with the MST formed so far, include it in the MST; otherwise discard the edge.
Repeat until the MST includes V − 1 edges (where V is the number of vertices).
Time complexity: O(E log E) = O(E log V)
Note: Works well with sparse graphs (fewer edges). May produce a forest if the graph is not connected.

II. Prim's minimum spanning tree algorithm
It builds the MST by growing it one vertex at a time, always choosing the minimum-weight edge that connects a vertex inside the MST to one outside.
Start with an arbitrary vertex; initialize an MST set (vertices included in the MST) and a priority queue (or min-heap) of edge weights.
While the MST set does not include all vertices:
● Select the minimum-weight edge that connects a vertex in the MST to a vertex outside.
● Add the selected edge and vertex to the MST.
More efficient for dense graphs.
Time complexity:
Adjacency matrix + linear search = O(V²)
Adjacency list + binary heap = O(E log V)
Adjacency list + Fibonacci heap = O(E + V log V)

6. Single source shortest path algorithms

I. Dijkstra's algorithm
Using min-heap & adjacency list = O((E + V) log V)
Using adjacency matrix & min-heap = O(V² + E log V)
Using adjacency list & unsorted array = O(V²)
Using adjacency list & sorted doubly linked list = O(EV)

II. Bellman-Ford algorithm
It finds the shortest path from the source to every vertex, provided the graph doesn't contain a negative weight cycle.
If the graph contains a negative weight cycle, it does not compute the shortest paths from the source to all other vertices, but it will report that a "negative weight cycle exists".
Input: a weighted, directed graph G = (V, E) with edge weights w(u, v), and a source vertex s.
Output: shortest path distances from source s to all other vertices, or detection of a negative-weight cycle.
Time complexity: O(EV)
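Kruskal's cycle test above is typically implemented with a disjoint-set (union-find) structure; a minimal sketch added for illustration (path compression only):

#include <stdio.h>

/* Disjoint-set for Kruskal's cycle check: an edge (u, v) is safe to
   add iff its endpoints are in different sets. */
int parent[100];                 /* assume vertices 0..n-1, n <= 100 */

void makeSets(int n) {
    for (int i = 0; i < n; i++) parent[i] = i;
}

int find(int x) {
    if (parent[x] != x)
        parent[x] = find(parent[x]);   /* path compression */
    return parent[x];
}

/* Returns 1 if the edge (u, v) was added (no cycle), 0 if it would form one. */
int unionEdge(int u, int v) {
    int ru = find(u), rv = find(v);
    if (ru == rv) return 0;            /* same component: edge forms a cycle */
    parent[ru] = rv;
    return 1;
}

int main(void) {
    makeSets(4);
    printf("%d", unionEdge(0, 1));   /* 1: added */
    printf("%d", unionEdge(1, 2));   /* 1: added */
    printf("%d\n", unionEdge(0, 2)); /* 0: would close a cycle */
    return 0;
}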


Module 4: Dynamic Programming

Use cases of the tabulation and memoization methods
● If the original problem requires all subproblems to be solved, then tabulation is usually more efficient than memoization.
● Tabulation avoids the overhead of recursion and can use a preallocated array, leading to better performance in both time and space in some cases.
● If only some subproblems are needed to solve the original problem, then memoization is preferable, because it solves only the required subproblems (solved lazily, i.e., on demand).

1. Longest common subsequence (LCS)
Given two strings, find the length of their longest subsequence that appears in both. A subsequence must preserve order but need not be contiguous.
Input: x = <ABCD>, y = <BDC>
Output: 2 <BC>
Let i & j denote indices into x & y, L(i, j) denote the LCS of prefixes x[1..i] & y[1..j], and n & m be the lengths of x & y respectively.
L(i, j) = 1 + L(i−1, j−1), if x[i] = y[j]
L(i, j) = max( L(i−1, j), L(i, j−1) ), if x[i] ≠ y[j]
L(0, j) = 0
L(i, 0) = 0
Time complexity: O(n · m)
Space complexity: O(n · m)

2. 0/1 Knapsack problem
Input: N items, each item with weight W[i] and profit P[i], and a knapsack with maximum capacity M.
Objective: total weight ≤ M and total profit maximized. Each item is either included (1) or excluded (0).
Recurrence relation
KS(M, N) = 0, if M = 0 or N = 0
KS(M, N) = KS(M, N−1), if W[N] > M
KS(M, N) = max( KS(M − W[N], N−1) + P[N], KS(M, N−1) ), otherwise
Time complexity: O(M · N) (we compute and store results in a 2D table of size M × N)

3. Travelling salesman problem
Given a set of cities and the distances between every pair of cities, the goal is to find the shortest possible tour that visits each city exactly once and returns to the starting city.
This is equivalent to finding the minimum cost Hamiltonian cycle.
A cost/distance function C(i, j) represents the cost to travel from city i to city j.
Let TSP(A, R) be the minimum cost of visiting all cities in the set R, starting from city A.
TSP(A, R) = C(A, S), if R = ∅
TSP(A, R) = min over K ∈ R of ( C(A, K) + TSP(K, R − {K}) ), otherwise
where A is the current city, R the set of unvisited cities, S the starting city, and C(A, K) the cost from city A to city K.
Time complexity: without dynamic programming O(n!); with dynamic programming O(2ⁿ · n²)
Space complexity: O(2ⁿ · n)

4. Matrix chain multiplication
Given a sequence of matrices, find the most efficient order of multiplying these matrices together, i.e., the order that minimizes the number of scalar multiplications.
Let MCM(i, j) denote the minimum number of scalar multiplications required to multiply matrices Ai to Aj.
MCM(i, j) = 0, if i = j
MCM(i, j) = min over k ∈ [i, j) of ( MCM(i, k) + MCM(k+1, j) + P(i−1) · P(k) · P(j) ), if i < j
The cost of multiplying the two resulting matrices is P(i−1) · P(k) · P(j).
The total number of ways to parenthesize a matrix chain of n matrices:
T(n) = Σ from i = 1 to n−1 of T(i) · T(n−i)
The number of parenthesizations of a given chain is a Catalan number, Cₙ = (1/(n+1)) · C(2n, n); a chain of n matrices has C(n−1) parenthesizations.
Time complexity:
● Without DP: exponential (the number of parenthesizations grows with the Catalan numbers)
● With DP: O(n³)
Space complexity:
● Without DP: O(n)
● With DP: O(n²)
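A tabulated sketch of the MCM recurrence above (illustrative; the dimension array P is made up, with matrix Ai of size P[i−1] × P[i]):

#include <stdio.h>
#include <limits.h>

/* Bottom-up MCM: O(n^3) time, O(n^2) space. */
int main(void) {
    int P[] = {10, 20, 30, 40};   /* A1: 10x20, A2: 20x30, A3: 30x40 */
    int n = 3;                    /* number of matrices */
    int m[4][4] = {0};            /* m[i][j] = MCM(i, j), 1-based */

    for (int len = 2; len <= n; len++) {          /* chain length */
        for (int i = 1; i + len - 1 <= n; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k < j; k++) {         /* split point */
                int cost = m[i][k] + m[k+1][j] + P[i-1] * P[k] * P[j];
                if (cost < m[i][j]) m[i][j] = cost;
            }
        }
    }
    printf("minimum scalar multiplications = %d\n", m[1][n]); /* 18000 */
    return 0;
}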


5. Sum of subsets problem
Given a set of numbers W[1..N] and a value M, determine if there exists a subset whose sum is exactly M.
Recursive relation:
SoS(M, N, S) =
return(S), if M = 0
return(−1), if N = 0
SoS(M, N−1, S), if W[N] > M
min( SoS(M − W[N], N−1, S ∪ {W[N]}), SoS(M, N−1, S) ), otherwise
Time complexity by brute force: O(2^N)
Time complexity with DP: O(M × N), where M is the target sum and N is the number of elements
Space complexity: without optimization O(M × N); with space optimization O(M)

6. Floyd-Warshall: all pairs shortest paths
Used to find the shortest distances between every pair of vertices in a weighted graph. Works with positive and negative edge weights (but no negative weight cycles allowed).
Recurrence relation
A⁰(i, j) = C(i, j)
Aᵏ(i, j) = min( Aᵏ⁻¹(i, j), Aᵏ⁻¹(i, k) + Aᵏ⁻¹(k, j) )
where C(i, j) is the initial weight of the edge from i to j, and Aᵏ(i, j) is the shortest path from i to j using only vertices {1, 2, …, k} as intermediate nodes.
Time complexity: O(n³)
Space complexity: O(n²)

7. Optimal binary search tree
Given a sorted array of keys[0..n−1] and their frequencies:
- p[i]: frequency of successful searches for keys[i]
- q[i]: frequency of unsuccessful searches between keys
Goal: construct a binary search tree that minimizes the total expected cost of searches.
Recurrence relation
cost(i, j) =
0, if j < i
min over k ∈ [i..j] of: cost(i, k−1) + cost(k+1, j) + w(i, j), otherwise
where w(i, j) = sum of p[i..j] + sum of q[i−1..j]
Time complexity: O(n³)
Space complexity: O(n²)

8. Multistage graph
A multistage graph is a directed acyclic graph (DAG) in which the set of vertices is partitioned into stages (e.g., S1, S2, ..., Sk) such that every edge connects a vertex in stage i to a vertex in stage i+1.
The goal is to find the shortest path from the source vertex in stage S1 to the destination vertex in stage Sk.
MSG(si, vj) =
0, if si is the final stage and vj is the destination
min over all K in si+1 where (vj, K) ∈ E of: cost(vj, K) + MSG(si+1, K), otherwise
Time complexity:
● Without dynamic programming: O(2^n) (due to exponential path combinations)
● With dynamic programming: O(V + E) (each vertex and edge is processed only once)
Space complexity:
● Without dynamic programming: O(V²)
● With dynamic programming: O(V²)
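A compact sketch of the Floyd-Warshall recurrence from problem 6 above, updating the distance matrix in place (the 4-vertex graph is made up for illustration):

#include <stdio.h>

#define V 4
#define INF 99999   /* stand-in for "no edge" in this small demo */

/* dist[i][j] is improved by allowing each vertex k in turn as an
   intermediate node -> O(V^3) time, O(V^2) space. */
void floydWarshall(int dist[V][V]) {
    for (int k = 0; k < V; k++)
        for (int i = 0; i < V; i++)
            for (int j = 0; j < V; j++)
                if (dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main(void) {
    int dist[V][V] = {
        {0,   5,   INF, 10},
        {INF, 0,   3,   INF},
        {INF, INF, 0,   1},
        {INF, INF, INF, 0}
    };
    floydWarshall(dist);
    printf("shortest 0 -> 3 = %d\n", dist[0][3]);  /* prints 9 (0->1->2->3) */
    return 0;
}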




Module 5: Graph Traversal Techniques

Traversal means visiting all nodes of the tree/graph in a specified order and processing the information only once.

DFS in an undirected graph:
a. Connected graph
Structure of a node during traversal:
E-node: the node currently being explored
Live node: a node which is not yet fully explored
Dead node: a node which is fully explored
Times associated with a node during traversal:
Discovery time: the time at which the node is visited for the first time.
Finishing time: the time at which the node becomes dead.
b. Disconnected/disjoint graph: DFS produces a depth-first forest.

DFS in a directed graph: DFS carried out on a directed graph classifies the edges as follows.
1. Tree edge: part of the DFS spanning tree or forest.
2. Forward edge: leads from a node to a non-child descendant in the spanning tree.
3. Back edge: leads from a node to one of its ancestors.
4. Cross edge: leads to a node which is neither an ancestor nor a descendant.

BFS: level-by-level order traversal.
[The original notes illustrate FIFO and LIFO BFS on an example graph (figure not reproduced): FIFO BFS (BFS spanning tree) visits A B C D E F G H; LIFO BFS visits A C G H E D F B.]

DFS in a directed acyclic graph
Topological sort: a linear order of the vertices (representing activities) that maintains precedence.
Topological sort() {
    1. DFS(v).
    2. Arrange all the nodes of the traversal in decreasing order of finishing time.
}
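A minimal sketch of DFS-based topological sort (illustrative; uses an adjacency matrix and a made-up 4-vertex DAG):

#include <stdio.h>

#define V 4

/* Push each vertex when it finishes, then read the stack top-down,
   which is exactly decreasing finishing time. */
int adj[V][V] = {        /* edges: 0->1, 0->2, 1->3, 2->3 */
    {0, 1, 1, 0},
    {0, 0, 0, 1},
    {0, 0, 0, 1},
    {0, 0, 0, 0}
};
int visited[V], order[V], top = 0;

void dfs(int u) {
    visited[u] = 1;
    for (int v = 0; v < V; v++)
        if (adj[u][v] && !visited[v])
            dfs(v);
    order[top++] = u;    /* u is dead (finished) now */
}

int main(void) {
    for (int u = 0; u < V; u++)
        if (!visited[u]) dfs(u);
    for (int i = top - 1; i >= 0; i--)   /* decreasing finish time */
        printf("%d ", order[i]);
    printf("\n");                        /* prints: 0 2 1 3 */
    return 0;
}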


Bi-connected graph: a graph with no articulation points.
Bi-connected component: a maximal subgraph that is bi-connected.

Applications of DFS & BFS

The time complexity of DFS and BFS depends upon the representation of the graph:
(i) Adjacency matrix: O(V²)
(ii) Adjacency list: O(V + E)
Both DFS and BFS can be used to detect the presence of a cycle in the graph.
Both DFS and BFS can be used to know whether the given graph is connected or not.
Both DFS and BFS can be used to know whether two vertices u and v are connected or not.
DFS is used to determine connected components, strongly connected components, biconnected components, and articulation points.

Connected component (undirected graph): a maximal set of vertices such that there is a path between any pair of vertices in that set.
Strongly connected component (directed graph): a Strongly Connected Component (SCC) of a directed graph is a maximal set of vertices such that for every pair of vertices u and v in the set, there is a path from u to v and a path from v to u.

Properties of strongly connected components
1. Every directed graph is a DAG of its strongly connected components (contracting each SCC to a single node yields a DAG).
2. Let C and C′ be distinct strongly connected components in a directed graph G = (V, E). Let u, v ∈ C and u′, v′ ∈ C′. Suppose that there is a path u → u′ in G; then there cannot be a path v′ → v in G.
3. If C and C′ are strongly connected components of G, and there is an edge from a node in C to a node in C′, then the highest post number in C is bigger than the highest post number in C′.

Articulation point (cut vertex): a vertex whose removal increases the number of connected components in a graph.


Searching and Sorting

Classification of sorting algorithms

Classification                     | Explanation
Internal vs External               | Internal: all data fits into main memory (RAM). External: used when data is too large to fit into memory; uses external storage.
Comparison vs Non-comparison based | Comparison: sorting is done using comparisons between elements. Non-comparison: uses digit-based or counting approaches (radix sort, counting sort).
Recursive vs Iterative             | Recursive: the function calls itself to divide and conquer (e.g., merge sort, quick sort).
In-place vs Not-in-place           | In-place: extra space required is generally O(1), or at most O(log n) for the recursion stack. Merge sort → O(n) space, so not in place.
Stable vs Unstable                 | Stable: the relative order of equal elements is maintained.

1. Comparison based sorting algorithms

Algorithm      | Best       | Average    | Worst      | Stable | In place
Quick sort     | Ω(n log n) | Θ(n log n) | O(n²)      | No     | Yes
Merge sort     | Ω(n log n) | Θ(n log n) | O(n log n) | Yes    | No
Insertion sort | Ω(n)       | Θ(n²)      | O(n²)      | Yes    | Yes
Selection sort | Ω(n²)      | Θ(n²)      | O(n²)      | No     | Yes
Bubble sort    | Ω(n)       | Θ(n²)      | O(n²)      | Yes    | Yes
Heap sort      | Ω(n log n) | Θ(n log n) | O(n log n) | No     | Yes

Selection sort takes the least number of swaps overall, i.e. at most (n − 1) swaps, no matter how unsorted the input is.
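Illustrating two entries of the table above, the stability and Ω(n) best case of insertion sort (an added example sketch):

#include <stdio.h>

/* Insertion sort: stable and in place. The while loop does no work on
   already-sorted input, which is where the Ω(n) best case comes from. */
void insertionSort(int a[], int n) {
    for (int i = 1; i < n; i++) {
        int key = a[i], j = i - 1;
        while (j >= 0 && a[j] > key) {  /* strict > keeps equal keys stable */
            a[j + 1] = a[j];
            j--;
        }
        a[j + 1] = key;
    }
}

int main(void) {
    int a[] = {5, 2, 4, 2, 1};
    insertionSort(a, 5);
    for (int i = 0; i < 5; i++) printf("%d ", a[i]);  /* 1 2 2 4 5 */
    printf("\n");
    return 0;
}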


2. Non-comparison based sorting algorithms

Algorithm     | Best         | Average      | Worst        | Stable                                        | In place
Radix sort    | Ω(d · (n+k)) | Θ(d · (n+k)) | O(d · (n+k)) | Yes                                           | No
Counting sort | Ω(n + k)     | Θ(n + k)     | O(n + k)     | Yes                                           | No
Bucket sort   | Ω(n + k)     | Θ(n + k)     | O(n²)        | Yes (if a stable sort is used inside buckets) | No
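A minimal counting sort sketch showing where the O(n + k) bound and the stability come from (illustrative; assumes keys in 0..9 for the demo):

#include <stdio.h>

#define K 10   /* keys are assumed to lie in 0..K-1 */

/* Counting sort: O(n + k) time; stable because the backward pass
   places equal keys in their original relative order. */
void countingSort(const int a[], int out[], int n) {
    int count[K] = {0};
    for (int i = 0; i < n; i++) count[a[i]]++;          /* histogram   */
    for (int v = 1; v < K; v++) count[v] += count[v-1]; /* prefix sums */
    for (int i = n - 1; i >= 0; i--)                    /* stable fill */
        out[--count[a[i]]] = a[i];
}

int main(void) {
    int a[] = {4, 1, 3, 4, 2, 1}, out[6];
    countingSort(a, out, 6);
    for (int i = 0; i < 6; i++) printf("%d ", out[i]);  /* 1 1 2 3 4 4 */
    printf("\n");
    return 0;
}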
