
Name of Student

Roll No

DOP DOS Marks/Grade Signature

Experiment No.: 5

Aim: Implement a program for Minimum Cost Spanning Tree(s) using Kruskal's and Prim's
algorithms with the Greedy approach.
Objectives: To implement and analyze the complexity of Minimum Cost Spanning Tree(s) using
Kruskal's and Prim's algorithms.

Outcomes: Students will be able to compute the best-case and worst-case complexity of Minimum
Cost Spanning Tree(s) using Kruskal's and Prim's algorithms.

Hardware / Software Required: Turbo C/ Notepad/ Eclipse/ IDEs/ code editors, and online
compilers.

Theory:
Greedy Approach: A greedy algorithm is an algorithmic paradigm that follows the problem-solving
heuristic of making the locally optimal choice at each stage, with the hope of finding a
global optimum. The principle of optimality states that "in an optimal sequence of decisions or
choices, each subsequence must also be optimal".

Problem Definition:
In a weighted graph, a minimum spanning tree is a spanning tree whose total weight is less than
or equal to that of every other spanning tree of the same graph. In real-world situations, this
weight can be measured as distance, congestion, traffic load or any other value assigned to the edges.

Minimum Spanning Tree Algorithms:


1) Kruskal’s Algorithm:

Kruskal's algorithm finds the minimum cost spanning tree using the greedy approach. The
algorithm treats the graph as a forest in which every node is an individual tree. A tree connects
to another if and only if the connecting edge has the least cost among all available options and
does not violate the MST properties.

Algorithm Steps:

● Sort the graph edges with respect to their weights.
● Start adding edges to the MST from the edge with the smallest weight up to the edge with the
largest weight.
● Only add edges which do not form a cycle, i.e., edges which connect only disconnected
components (see the sketch below).
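
These bullets map directly onto code: sort the edge list by weight, then use a disjoint-set
(union-find) check so that an edge is kept only when its endpoints lie in different components.
A complete, commented program is given in the Program section later in this write-up; the
fragment below is only a minimal sketch (with a simplified union-find and a hypothetical edge
list) showing how the steps correspond to code.

def kruskal_sketch(n, edges):
    # edges: list of (u, v, weight) tuples; n: number of vertices (hypothetical input)
    parent = list(range(n))                 # every vertex starts as its own tree

    def find(u):                            # walk up to the component's root
        while parent[u] != u:
            u = parent[u]
        return u

    mst, total = [], 0
    for u, v, w in sorted(edges, key=lambda e: e[2]):   # step 1: sort by weight
        ru, rv = find(u), find(v)
        if ru != rv:                        # step 3: skip edges that would form a cycle
            parent[ru] = rv                 # merge the two components
            mst.append((u, v, w))
            total += w
    return mst, total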

Step 1 - Remove all loops and parallel edges

Remove all loops and parallel edges from the given graph.
In case of parallel edges, keep the one with the least associated cost and remove all others.

Step 2 - Arrange all edges in increasing order of weight

The next step is to create a set of edges with their weights, and arrange them in ascending
order of weight (cost).

Step 3 - Add the edge which has the least weight

Now we start adding edges to the tree, beginning with the one which has the least weight.
Throughout, we keep checking that the spanning-tree properties remain intact. If adding an
edge would violate the spanning-tree property, we do not include that edge.
The least cost is 2 and the edges involved are (B,D) and (D,T). We add them. Adding them does
not violate the spanning-tree properties, so we continue to our next edge selection. The next
cost is 3, and the associated edges are (A,C) and (C,D). We add them as well.

Next cost in the table is 4, and we observe that adding it will create a circuit in the graph.

We ignore it. In the process we shall ignore/avoid all edges that create a circuit.

We observe that edges with cost 5 and 6 also create circuits. We ignore them and move on.
Now we are left with only one node to be added. Between the two least-cost edges available, 7
and 8, we add the edge with cost 7.
By adding edge (S,A) we have included all the nodes of the graph, and we now have a minimum
cost spanning tree.

Cost of the MST = (S,A) + (C,A) + (D,C) + (D,B) + (D,T) = 7 + 3 + 3 + 2 + 2 = 17

2) Prim’s Algorithm:

Prim's Algorithm also uses the greedy approach to find the minimum spanning tree. In Prim's
Algorithm we grow the spanning tree from a starting vertex. Unlike Kruskal's, which adds an
edge at each step, Prim's adds a vertex to the growing spanning tree.
Algorithm Steps:

● Maintain two disjoint sets of vertices: one containing the vertices that are in the growing
spanning tree and the other containing the vertices that are not.
● Select the cheapest vertex that is connected to the growing spanning tree but is not yet in
it, and add it to the tree. This can be done using a priority queue: insert the vertices that
are connected to the growing spanning tree into the priority queue.
● Check for cycles. To do that, mark the nodes which have already been selected and insert
only the unmarked nodes into the priority queue.
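
The steps above can be sketched in Python using the standard heapq module as the priority
queue. This sketch is not part of the original program; the adjacency-list format (a dict mapping
each vertex to a list of (neighbour, weight) pairs) and the function name prim are assumptions
made for illustration.

import heapq

def prim(adj, start):
    # adj: dict mapping each vertex to a list of (neighbour, weight) pairs (assumed format)
    visited = {start}                            # vertices already in the growing tree
    heap = [(w, start, v) for v, w in adj[start]]
    heapq.heapify(heap)                          # priority queue of candidate edges
    mst, total = [], 0
    while heap and len(visited) < len(adj):
        w, u, v = heapq.heappop(heap)            # cheapest edge leaving the tree
        if v in visited:                         # cycle check: skip already-marked nodes
            continue
        visited.add(v)                           # mark the newly added vertex
        mst.append((u, v, w))
        total += w
        for nxt, nw in adj[v]:                   # push fresh frontier edges
            if nxt not in visited:
                heapq.heappush(heap, (nw, v, nxt))
    return mst, total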
Step 1: Remove all loops

Any edge that starts and ends at the same vertex is a loop.

Loops are marked in the image given below.

Step 2: Remove all parallel edges between two vertices except the one with the least weight

In this graph, vertices A and C are connected by two parallel edges having weights 10 and 12
respectively. So, we will remove the edge of weight 12 and keep the edge of weight 10.

We are now ready to find the minimum spanning tree.

Step 3: Create a table

As our graph has 4 vertices, the table will have 4 rows and 4 columns.

Put 0 in every cell whose row and column names are the same.

Find the edges that directly connect two vertices and fill the table with the weight of each
such edge. If no direct edge exists, fill the cell with infinity.
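
The original figure containing the table is not reproduced here. For the edges named in this
example (A-B = 5, A-C = 10 after removing the parallel edge, B-C = 4 and C-D = 5), and assuming
no other direct edges exist, the filled-in table would look like this:

        A     B     C     D
  A     0     5    10     ∞
  B     5     0     4     ∞
  C    10     4     0     5
  D     ∞     ∞     5     0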

Finding MST

Start from vertex A and find the smallest value in the A-row.

Note: we will not consider 0, as it corresponds to the same vertex.

5 is the smallest unmarked value in the A-row. So, we mark the edge connecting vertices A
and B and tick 5 in the AB and BA cells.

As we connected vertices A and B in the previous step, we now find the smallest value in
the A-row and B-row.
4 is the smallest unmarked value in the A-row and B-row. So, we mark the edge
connecting vertices B and C and tick 4 in the BC and CB cells.

As edges A-B and B-C were connected in the previous steps, we now find the smallest
value in the A-row, B-row and C-row.
5 is the smallest unmarked value in the A-row, B-row and C-row. So, we mark the
edge connecting vertices C and D and tick 5 in the CD and DC cells.

Result

Following is the required Minimum Spanning Tree for the given graph.
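
As a cross-check, running the Prim sketch given earlier on this example graph (again assuming
that only the edges A-B = 5, A-C = 10, B-C = 4 and C-D = 5 exist) should reproduce the same
tree: edges A-B, B-C and C-D with total weight 5 + 4 + 5 = 14.

# Example graph from the table-based walkthrough above (assumed edge set).
adj = {
    'A': [('B', 5), ('C', 10)],
    'B': [('A', 5), ('C', 4)],
    'C': [('A', 10), ('B', 4), ('D', 5)],
    'D': [('C', 5)],
}
mst, total = prim(adj, 'A')
print(mst)    # expected: [('A', 'B', 5), ('B', 'C', 4), ('C', 'D', 5)]
print(total)  # expected: 14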

Analysis:

● If an adjacency list is used to represent the graph, then using breadth first search, all the
vertices can be traversed in O(V + E) time.
● We traverse all the vertices of the graph using breadth first search and use a min-heap to
store the vertices not yet included in the MST.
● To get the minimum weight edge, we use the min-heap as a priority queue.
● Min-heap operations like extracting the minimum element and decreasing a key value take
O(log V) time.

So, the overall time complexity
= O(E + V) x O(log V)
= O((E + V) log V)
= O(E log V), since E ≥ V - 1 in a connected graph.

This time complexity can be improved to O(E + V log V) using a Fibonacci heap.

The time complexity of Prim's Algorithm is O((V + E) log V) because each vertex is inserted into
the priority queue only once and each insertion takes logarithmic time.

Program:-

# Disjoint-set (union-find) structure used by Kruskal's algorithm to
# detect whether adding an edge would create a cycle.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))   # every vertex starts as its own root
        self.rank = [0] * n            # rank (approximate tree height) per root

    def find(self, u):
        # Find the representative (root) of u, with path compression.
        if self.parent[u] != u:
            self.parent[u] = self.find(self.parent[u])
        return self.parent[u]

    def union(self, u, v):
        # Merge the sets containing u and v, using union by rank.
        root_u = self.find(u)
        root_v = self.find(v)
        if root_u != root_v:
            if self.rank[root_u] > self.rank[root_v]:
                self.parent[root_v] = root_u
            elif self.rank[root_u] < self.rank[root_v]:
                self.parent[root_u] = root_v
            else:
                self.parent[root_v] = root_u
                self.rank[root_u] += 1


def kruskal(n, edges):
    # edges is a list of (u, v, weight) tuples; n is the number of vertices.
    edges.sort(key=lambda x: x[2])     # Step 1: sort edges by weight
    ds = DisjointSet(n)
    mst_weight = 0
    mst_edges = []

    for u, v, weight in edges:
        # Steps 2-3: take the cheapest remaining edge; keep it only if its
        # endpoints are in different components (no cycle is formed).
        if ds.find(u) != ds.find(v):
            ds.union(u, v)
            mst_edges.append((u, v, weight))
            mst_weight += weight

    return mst_edges, mst_weight


# Sample graph: 4 vertices (0..3) and 5 weighted edges.
edges = [(0, 1, 10), (0, 2, 6), (0, 3, 5), (1, 3, 15), (2, 3, 4)]
n = 4
mst_edges, total_weight = kruskal(n, edges)
print("Kruskal's MST Edges:", mst_edges)
print("Total Weight of MST:", total_weight)

Output:-
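
For the sample edge list hard-coded in the program above, the expected output is:

Kruskal's MST Edges: [(2, 3, 4), (0, 3, 5), (0, 1, 10)]
Total Weight of MST: 19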

Conclusion: Thus, the time complexity of Kruskal's Algorithm is O(E log V) and that of Prim's
Algorithm is O((V + E) log V).
